In March, Elon Musk laid out a bold proposal: place data centers in orbit. Pointing to SpaceX's launch capabilities and its merger with xAI, he highlighted a simple advantage of space for AI computing, abundant sunlight, and sketched visions of satellite swarms handling heavy AI workloads. Musk suggested orbital AI could become economically attractive within a few years. Many experts are far more cautious: while the idea has intuitive appeal, turning it into a practical, cost-effective reality faces significant engineering, logistical, and economic hurdles.
Why companies are looking up
AI is driving a rapid rise in electricity demand. The International Energy Agency projects global data-center power use could nearly double by the end of the decade. In response, cloud and AI firms are pursuing dedicated gas turbines, nuclear projects, and other energy strategies. For some startups and established companies, orbit looks like an alternative source of plentiful, renewable power.
Several organizations are already testing concepts. Starcloud launched a spacecraft carrying an Nvidia H100 and ran a version of Google’s Gemini from orbit; it plans a follow-on craft with far more solar generating capacity, though still only a few kilowatts. Google’s Project Suncatcher, a proposed cluster of dozens of satellites built in partnership with Planet, aims to demonstrate tightly integrated orbital compute; two prototype satellites are planned for early 2027. Planet’s leadership says the timing feels right, while acknowledging uncertainty about when the economics will make sense.
The hard practical problems
Power scale: AI accelerators consume enormous power. By comparison, the International Space Station’s solar arrays — roughly half a football field each — produce about 100 kilowatts on average. A facility delivering tens or hundreds of megawatts would require structures or constellations vastly larger than the ISS. That makes scaling from demo systems to data-center-level capacity a core technical and cost challenge.
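The gap between demonstrated and required power can be made concrete with a quick back-of-envelope calculation using the figures above; the 100 MW target here is an illustrative assumption, not a figure from any announced project:

```python
# Back-of-envelope scaling: the ISS's arrays average ~100 kW, while a
# large AI facility may need tens to hundreds of megawatts. The 100 MW
# target below is an illustrative assumption.
iss_avg_power_kw = 100            # average ISS solar output (from text)
target_power_mw = 100             # hypothetical orbital facility target

iss_equivalents = target_power_mw * 1_000 / iss_avg_power_kw
print(f"A {target_power_mw} MW facility needs ~{iss_equivalents:.0f}x "
      f"the ISS's average solar output")
# → A 100 MW facility needs ~1000x the ISS's average solar output
```

Even generous assumptions about lighter, more efficient arrays leave a multiple-orders-of-magnitude gap between today's demonstrations and data-center-scale power.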
Thermal management: In vacuum there’s no air to convect heat away. Satellites must route waste heat to radiators that emit it as infrared. Any high-power orbital compute node needs not only large solar arrays but equally substantial radiator area, or the workload must be distributed across many smaller craft with their own radiators.
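Radiator sizing follows from the Stefan-Boltzmann law, which governs how much heat a surface can radiate. The sketch below is a rough estimate only; the emissivity, radiator temperature, and single-sided-radiation assumption are illustrative choices, not figures from the article:

```python
# Rough radiator sizing from the Stefan-Boltzmann law: P = eps * sigma * A * T^4.
# Emissivity and temperature below are assumed values for illustration.
SIGMA = 5.670e-8         # Stefan-Boltzmann constant, W/(m^2 K^4)
emissivity = 0.9         # assumed high-emissivity radiator coating
radiator_temp_k = 300    # assumed radiator temperature (~27 C)
waste_heat_w = 1e6       # 1 MW of waste heat to reject

flux_w_per_m2 = emissivity * SIGMA * radiator_temp_k**4  # radiated flux
area_m2 = waste_heat_w / flux_w_per_m2
print(f"~{area_m2:,.0f} m^2 of radiator per MW of waste heat")
```

At these assumed parameters the answer lands in the low thousands of square meters per megawatt, which is why radiator area, not just solar array area, dominates high-power spacecraft designs.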
Communications and latency: Spreading compute across multiple small satellites eases power and cooling requirements but creates heavy inter-satellite data movement needs. That likely means laser links for high-throughput transfers. Even at light speed, distances and relative motion introduce latency that can penalize tightly coupled applications. Proposals such as Project Suncatcher favor compact clusters to keep latencies low; Musk has described both huge constellations and very large single-platform designs, including concepts with solar arrays on the order of hundreds of meters.
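The latency penalty from physical separation is easy to quantify: even in vacuum, light covers only about 300 meters per microsecond. The separations below are illustrative, not taken from any specific constellation design:

```python
# Light-speed propagation delay between satellites at various
# (illustrative) separations.
C = 299_792_458  # speed of light in vacuum, m/s

for sep_km in (1, 10, 100, 1000):
    one_way_us = sep_km * 1_000 / C * 1e6
    print(f"{sep_km:>5} km separation: {one_way_us:8.1f} microseconds one-way")
```

A kilometer of separation costs only a few microseconds, but hundreds or thousands of kilometers push delays toward milliseconds, which is punishing for tightly synchronized training workloads and explains the preference for compact clusters.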
Launch costs: Sending mass to orbit remains expensive. Current prices are roughly $1,000 per kilogram in many cases. Google’s internal work indicates launch costs would need to drop by a factor of at least five — to the low hundreds of dollars per kilogram — before space-based data centers become economically feasible at scale. Companies like SpaceX hope heavy-lift rockets such as Starship will bring those costs down; several startups explicitly tie their plans to such reductions.
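The cost sensitivity can be illustrated with the article's own price points; the satellite mass used here is a hypothetical value chosen for illustration:

```python
# Launch-cost sensitivity using the article's figures: ~$1,000/kg today
# versus a ~5x reduction to the low hundreds of dollars per kilogram.
sat_mass_kg = 2_000  # hypothetical mass of one compute satellite

for price_per_kg in (1_000, 200):
    cost = sat_mass_kg * price_per_kg
    print(f"${price_per_kg}/kg -> ${cost:,} to launch a {sat_mass_kg} kg satellite")
# → $1000/kg -> $2,000,000 to launch a 2000 kg satellite
# → $200/kg -> $400,000 to launch a 2000 kg satellite
```

Multiplied across the hundreds or thousands of satellites a data-center-scale constellation would require, that fivefold difference separates an unaffordable project from a potentially viable one.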
Operations and maintenance: Terrestrial data centers rely on frequent hands-on servicing, upgrades, and inspections; a typical large facility can cover multiple football fields, consume megawatts of power, and require regular technician access. Orbital data centers would force different operational models: highly modular, pretested hardware, extensive remote management, and either robotic servicing, replaceable satellite modules, or infrequent, high-cost crewed missions. Some tasks can be handled through software and remote updates, but customers may still want the ability to swap hardware or gain physical access quickly.
Economic and timeline skepticism
Experts highlight many conditional steps that must align. Brandon Lucia, a satellite-computing researcher, calls optimistic timelines a stretch: the idea looks attractive on paper but faces difficult practical hurdles. Raul Martynek, CEO of a large data-center operator, notes the cascade of technical advances and cost reductions that would be required, and doubts that all would materialize within a few years. For many companies running extensive ground-based fleets, orbital competition is not an immediate worry.
What would need to change
To transition from prototypes to full-scale orbital data centers, several breakthroughs are necessary: dramatically larger and lighter solar and radiator structures or novel power architectures; efficient inter-satellite laser networking and cluster designs that keep latency acceptable; launch costs that fall substantially; robust remote-operation and robotic-servicing ecosystems; and successful deployment of heavy-lift vehicles. Each of these is independently challenging; together they make the path to economically viable space data centers long and uncertain.
Bottom line
Putting data centers into orbit is technically possible in principle: tests show chips can run in space, and sunlight is abundant. But converting early demonstrations into cost-effective, large-scale replacements for terrestrial facilities will require major advances across power collection, thermal engineering, communications, launch economics, and operations. Whether those pieces come together fast enough to match the timelines some proponents suggest remains an open question.