
What Ukraine's Drones Taught Me About Building AI Systems

Extracting software architecture lessons from autonomous drone systems, OODA loops, and distributed state management.

Vario aka Mnehmos

A Note on Context

The most sophisticated AI deployment in the world right now isn't in Silicon Valley. It's on a 600-mile front line in Ukraine. I'm about to extract software architecture lessons from weapons systems—systems designed to kill people. I don't want to sanitize their origin, nor do I want to celebrate it. This is an attempt to learn from necessity.

The Problem: Manual Drones Were Failing

By mid-2024, the situation for Ukrainian drone operators had changed dramatically. Russian electronic warfare (EW) had improved to the point where manual FPV drone hit rates dropped to 30-50% under jamming. For new pilots, it was as low as 10%.

The solution wasn't better pilots or stronger signals. It was rethinking the latency of the decision loop.

The OODA Framework

OBSERVE
Raw data collection: drones, ELINT, sensors, and open-source intelligence.

ORIENT
Contextual significance: fusing data into a common operating picture (DELTA).

DECIDE
Operational choices: AI-refined targeting and human command validation.

ACT
Kinetic execution: TFL-1 terminal guidance or precision strikes.

DELTA: The OS of the Battlefield

DELTA is Ukraine’s Situational Awareness and Battlefield Management System. It's a cloud-native platform that aggregates data from thousands of sensors—drones, satellites, chatbots, and signal intelligence—into a single interface.

  • Enemy targets: processed daily by the DELTA system
  • AI detections: 12,000+ unit identifications weekly by Avengers AI
  • Daily objects: classified from UAV video streams via Vezha
  • Module cost: $50 entry point for TFL-1 terminal guidance

01 Observe & Orient

Vezha and Avengers AI process the aggregate stream from thousands of drones, automatically identifying more than 12,000 enemy units per week.

02 Decide & Act

The TFL-1 terminal guidance module ($50-$100) handles the "last mile." Once the operator selects a target, the drone locks on and strikes autonomously, even if the radio link is severed.
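
That "last mile" handoff generalizes to any agent system: the final, latency-critical step has to run locally and survive a severed link. Here is a minimal sketch of the pattern (all names are invented; TFL-1's actual firmware isn't public):

// Illustrative only: a sketch of the link-loss handoff, not real firmware.
class GuidanceController {
  constructor() {
    this.mode = 'operator'; // the operator steers while the link is up
    this.lock = null;
  }

  // Operator confirms a target over the radio link.
  confirmLock(target) {
    this.lock = target;
  }

  // Called by the radio layer the moment the link drops.
  onLinkLost() {
    // Reflex: a confirmed lock means we finish the approach locally.
    if (this.lock) this.mode = 'terminal';
  }

  // Runs every frame, link or no link.
  step(visionFrame) {
    if (this.mode === 'terminal' && visionFrame) {
      this.lock = visionFrame; // re-acquire from onboard vision only
    }
    steerToward(this.lock); // placeholder for the actual control loop
  }
}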

DELTA: OODA at National Scale

According to CSIS analysis and Ukrainian government statements, DELTA aggregates data from drones, satellites, ground sensors, radar, and even civilian informants via chatbots, all into a single interface that runs on standard laptops, tablets, and phones.

What matters architecturally: DELTA is built as an ecosystem of modules, each handling a different phase of the OODA loop. (A code sketch of this decomposition follows the four lists below.)

OBSERVE

  • Drone video feeds via Vezha
  • Satellite imagery integration
  • Ground sensor networks
  • Civilian intelligence chatbots

ORIENT

  • Avengers AI analyzes video
  • Identifies 70% of enemy units
  • Detection time: ~2.2s per unit
  • Automatic feed prioritization

DECIDE

  • Deltamonitor live map
  • Target Hub prioritizes strikes
  • Direct digital mission assignment
  • "Uber for Artillery" routing

ACT

  • Mission Control coordination
  • AI terminal guidance (TFL-1)
  • Automated result feedback
  • Rapid re-looping
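
One way to read that modular decomposition as code: each phase is a swappable component, and the loop itself is dumb plumbing. This is my interpretation of the architecture, not DELTA's implementation; every name below is invented.

async function oodaLoop({ observe, orient, decide, act }) {
  for (;;) {
    const frames = await observe.collect();  // drone feeds, sensors, chatbots
    const picture = orient.fuse(frames);     // common operating picture
    const mission = decide.plan(picture);    // AI-refined, human-validated
    if (mission) await act.execute(mission); // strike, then feed results back
  }
}

// Any module can be swapped or scaled without touching the others:
// oodaLoop({ observe: vezha, orient: avengers, decide: targetHub, act: missionControl });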

In December 2024, Ukrainian forces reportedly conducted their first fully unmanned operation near Lyptsi—reconnaissance, target acquisition, and strike executed by machines with human oversight but not direct control.

The Nervous System Insight

Here's what changed my thinking about AI architecture: safety features need to operate at nervous system speed.

When you touch something hot, you don't think "that's hot, I should move my hand." Your hand is already moving. The reflex happens before conscious processing.

LAYER 1: REFLEXES (milliseconds)
├── Hardcoded limits
├── Immediate fallbacks
├── Circuit breakers
└── No AI involved, just rules

LAYER 2: REACTIONS (seconds)
├── Pattern matching (machine vision)
├── Known-good responses
└── Lightweight inference

LAYER 3: REASONING (seconds to minutes)
├── Full situational analysis
├── Complex planning
└── Human decision points

Most AI systems put everything in Layer 3. That's like routing "hand on stove" through your prefrontal cortex. You'll figure out the right answer eventually—with burns.
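
Sketched as code, the dispatch order looks something like this (the rules and names are invented for illustration):

// Layer 1: reflexes. Hardcoded rules, synchronous, no model in the loop.
function reflex(input) {
  if (input.kind === 'overheat') return { action: 'shutdown' };
  return null;
}

// Layer 2: reactions. Cheap lookups against known-good responses.
const playbook = { low_battery: 'return_to_base' };
function reaction(input) {
  const action = playbook[input.kind];
  return action ? { action } : null;
}

// Layer 3: reasoning. Slow, expensive, and deliberately last in line.
async function reasoning(input) {
  return { action: await callModel(input) }; // callModel: your LLM client
}

// Each layer runs only if every faster layer declined.
async function handle(input) {
  return reflex(input) ?? reaction(input) ?? (await reasoning(input));
}

The ordering is the point: the expensive layer is unreachable for anything a cheaper layer already handles.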

I learned this the hard way. Early versions of my RPG game engine let the AI agent manage character hit points directly. Prompt instructions said "never reduce HP below zero." Worked fine in testing. In production, a complex sequence blew right past it.

The fix wasn't better prompting. It was moving the constraint to code:

// This runs AFTER the agent acts, BEFORE state commits
if (character.hp < 0) character.hp = 0;
if (character.hp === 0 && !character.deathSavesStarted) {
  character.deathSavesStarted = true;
  emit('death_save_required', character.id);
}

The agent can't reason its way around this. It's not a suggestion. It's a reflex.
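
For completeness, here's the shape of where that reflex lives in the engine: a commit wrapper that every agent mutation passes through. The function names are mine, not a real API.

// Nothing the agent does reaches storage without passing the reflexes.
function commit(character, mutate) {
  mutate(character);            // whatever the agent decided to do
  enforceInvariants(character); // the snippet above, run unconditionally
  persist(character);           // only clamped, valid state is ever saved
}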

The Database Is the Intelligence

Another insight: the agent isn't smart. The database is smart. The agent is just hands.

Ukraine's "mother drone" system illustrates this. A carrier drone delivers AI-guided FPVs 300km behind enemy lines. The AI doesn't "know" anything substantial. It queries onboard models and sensor data. Kill the process, spin up a new one, feed it the same data—same output.

DATABASE (THE ACTUAL INTELLIGENCE)
Sensor feeds • Drone telemetry • Target coordinates
Unit positions • Historical patterns

STATELESS PROCESSES (JUST AUTOMATION)
  • Avengers: video analysis
  • Target Hub: strike planning
  • Vezha: stream routing

Lose a situational awareness center? The others keep running. Lose a drone operator's connection? The AI completes the strike. The system degrades gracefully because state lives in the database, not in any single process.
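
The same property is easy to demonstrate in miniature. A worker like this one holds nothing between runs; the names and the key-value store are invented for illustration:

// All input comes from the store; all output goes back to it. The
// process itself is disposable automation.
async function analyzeFeed(store, feedId) {
  const frames = await store.read(`feed:${feedId}`);
  const detections = frames.filter(looksLikeTarget); // placeholder classifier
  await store.write(`detections:${feedId}`, detections);
}

// Kill this mid-run and start a replacement: nothing is lost, because
// no state ever lived in process memory.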

How This Shapes My Architecture

I build MCP (Model Context Protocol) servers—the tools that let AI agents interact with the world. Every tool I build now follows these wartime lessons.

Separation of Concerns

Split by OODA Phase

  • OBSERVE: Read-only tools (get_context, search_index)
  • ORIENT: Analysis tools (validate_schema, check_line_of_sight)
  • DECIDE: The LLM reasons over observed + oriented data
  • ACT: Write tools with built-in reflexes (emit_event, execute_action). A sketch of the resulting layout follows this list.
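
Concretely, the tool table ends up grouped by phase, with reflexes baked into the ACT handlers. A rough sketch with invented handler names (not any SDK's literal API):

const tools = [
  // OBSERVE: read-only, safe for the agent to call speculatively
  { name: 'get_context', phase: 'observe', readOnly: true, handler: getContext },
  { name: 'search_index', phase: 'observe', readOnly: true, handler: searchIndex },

  // ORIENT: analysis over observed data, still no side effects
  { name: 'validate_schema', phase: 'orient', readOnly: true, handler: validateSchema },

  // DECIDE has no tool: the LLM reasons over what OBSERVE and ORIENT returned

  // ACT: writes, guarded by reflexes that run before any state commits
  {
    name: 'execute_action',
    phase: 'act',
    readOnly: false,
    handler: async (args) => {
      enforceInvariants(args); // Layer 1: code, not prompt
      return executeAction(args);
    },
  },
];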

Latency & Autonomy

What centralizes vs. what stays local? (A sketch follows the two lists below.)

Centralize
  • Shared context
  • Historical decisions
  • Schema contracts
  • Coordination state

Keep Local
  • Reflexes & invariants
  • Latency-critical ops
  • Offline fallbacks
  • "Terminal guidance" logic

Extracting engineering principles from the cutting edge of distributed systems.