Network & Interconnect
Instead of isolated "AI satellites," the Orbital AI Factory operates as a distributed, laser-linked supercomputer in low Earth orbit, designed for high-bandwidth, synchronised AI workloads.
Inside the Node
Optical Compute Fabric
Each AI pod connects into a high-speed optical backplane that links GPU/ASIC racks across the node. The goal is to make a cluster of pods behave like a single, tightly-coupled AI supercomputer rather than isolated boxes.
Throughput
Aggregate internal fabric targets are in the tens of terabits per second per node, so large models and training shuffles can move freely between pods without saturating links.
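To make the throughput target concrete, here is a back-of-envelope sketch of how long a full model reshuffle would take across such a fabric. Both figures (fabric bandwidth and model size) are illustrative assumptions, not published specifications.

```python
# Back-of-envelope check: can the intra-node fabric keep up with a
# training shuffle?  Both figures below are illustrative assumptions,
# not Orbital AI Factory specs.

FABRIC_TBPS = 40          # assumed aggregate fabric bandwidth per node
MODEL_SIZE_GB = 800       # assumed model + optimizer state to reshuffle

fabric_bytes_per_s = FABRIC_TBPS * 1e12 / 8
shuffle_seconds = MODEL_SIZE_GB * 1e9 / fabric_bytes_per_s

print(f"Full reshuffle at {FABRIC_TBPS} Tbps: {shuffle_seconds:.2f} s")
```

At these assumed numbers a full reshuffle completes in a fraction of a second, which is the scale needed to keep synchronised training steps from stalling on data movement.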
Latency & Determinism
Short, fixed-length paths on the spine and simple topologies (ring / star / fat-tree) keep latency predictable for synchronised AI workloads.
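The determinism claim can be sketched with a toy latency model: a fixed physical path length plus a fixed per-hop switch delay gives a latency that depends only on hop count. The path length, switch delay, and pod counts below are assumptions for illustration.

```python
# Toy latency model for a fixed-length optical spine.  Path length,
# per-hop switch delay, and pod count are assumptions for the sketch.

C_GUIDED_M_PER_S = 2e8    # light in fiber/waveguide, roughly 2/3 c
SPINE_PATH_M = 30.0       # assumed fixed path length per hop
SWITCH_DELAY_NS = 100.0   # assumed per-hop switching delay

def one_way_latency_ns(hops: int) -> float:
    """Propagation over fixed-length paths plus per-hop switch delay."""
    prop_ns = SPINE_PATH_M / C_GUIDED_M_PER_S * 1e9
    return hops * (prop_ns + SWITCH_DELAY_NS)

# Star: any pod reaches any other in 2 hops (pod -> spine -> pod).
# Ring of N pods: worst case is N // 2 hops.
star_worst_ns = one_way_latency_ns(2)
ring8_worst_ns = one_way_latency_ns(8 // 2)
print(star_worst_ns, ring8_worst_ns)
```

Because every term is fixed, worst-case latency is known at design time, which is what lets synchronised collectives (all-reduce, barrier steps) be scheduled tightly.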
Between Nodes (The Orbital Data Center Network)
Free-Space Optical Links (FSO)
Instead of relying on traditional RF alone, Orbital AI Factory nodes use laser crosslinks to talk to each other and to relay nodes. These free-space optical links provide fiber-class bandwidth in vacuum with very narrow beams and low interference.
Link Budget & Capacity
Each node carries multiple steerable optical terminals, with individual links in the hundreds of Gbps to multi-Tbps range. With several links active in parallel, aggregate node-to-node bandwidth scales into the tens of Tbps, enough to keep large, synchronised AI clusters fed.
Topology
Nodes form a resilient mesh across multiple orbital planes and inclinations. Traffic can be routed:
- pod → local node
- node → neighboring nodes via FSO
- node → ground via optical or RF gateway sats
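The three routing tiers above can be sketched as shortest-path selection over a weighted graph. The graph below, its node names, and its link costs (in milliseconds) are all hypothetical; the point is only that traffic picks the cheapest path through pods, FSO neighbors, and gateway satellites.

```python
import heapq

# Minimal routing sketch over the three tiers described above.
# Node names and link costs (ms) are hypothetical.

LINKS = {
    "pod": [("node-A", 0.1)],
    "node-A": [("node-B", 4.0), ("node-C", 5.0)],
    "node-B": [("gateway", 6.0)],
    "node-C": [("gateway", 3.0)],
    "gateway": [("ground", 2.0)],
}

def shortest_path(src: str, dst: str):
    """Dijkstra over the directed link table above."""
    queue = [(0.0, src, [src])]
    seen = set()
    while queue:
        cost, here, path = heapq.heappop(queue)
        if here == dst:
            return cost, path
        if here in seen:
            continue
        seen.add(here)
        for nxt, weight in LINKS.get(here, []):
            heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return None

cost_ms, route = shortest_path("pod", "ground")
print(cost_ms, route)
```

In a real mesh the costs would be updated continuously as links drop and geometry changes, and the router would prefer surviving FSO paths before falling back to RF.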
RF as a Control / Backup Plane
Conventional radio links remain for command, housekeeping, and as a lower-rate fallback. High-volume AI data and model sync ride on the optical links.
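The split between planes can be expressed as a small selection rule: control traffic always rides RF, bulk data prefers optical, and everything degrades to RF when the optical path is down. The traffic classes and function below are assumptions for the sketch.

```python
# Sketch of plane selection: control traffic stays on RF, bulk AI data
# prefers the optical links, and RF is the degraded-rate fallback.
# Traffic class names are assumptions for illustration.

def select_plane(traffic_class: str, optical_up: bool) -> str:
    if traffic_class in ("command", "telemetry"):
        return "rf"            # control/housekeeping always on RF
    if optical_up:
        return "optical"       # bulk data and model sync on lasers
    return "rf"                # lower-rate fallback when lasers drop

print(select_plane("model-sync", optical_up=True))
print(select_plane("model-sync", optical_up=False))
print(select_plane("command", optical_up=True))
```

Keeping command and telemetry on RF means the control plane survives even when a node loses optical pointing, which is what makes the laser mesh safe to depend on for the data plane.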
Why It Matters
The result is not a handful of isolated "AI satellites" but a single distributed machine: laser crosslinks between nodes and optical fabrics within them let the whole constellation behave like one tightly-coupled supercomputer in low Earth orbit, purpose-built for high-bandwidth, synchronised AI workloads.