Standing before a crowd in March, Elon Musk outlined a vision that literally reaches beyond Earth: put data centers into orbit. Musk, whose SpaceX recently merged with xAI, argued space offers a key advantage for powering AI workloads — sunlight is plentiful. He sketched a future of vast numbers of data-crunching satellites circling the planet, and suggested the economics could flip in favor of orbital AI within a few years.
Not everyone agrees. Experts say the idea’s basic appeal — abundant solar energy — masks many hard engineering, logistical and economic obstacles. “That timeline is an optimistic interpretation,” says Brandon Lucia, an electrical and computer engineering professor at Carnegie Mellon who specializes in satellite computers. The concept is attractive on a napkin, but making it practical is another matter.
A growing power crunch on Earth helps explain why companies are exploring space. AI is driving a surge in electricity consumption: global data-center power use could roughly double to nearly 1,000 terawatt-hours by the end of the decade, according to the International Energy Agency. Firms are reacting by building dedicated gas turbines, investing in nuclear, and looking for any place with plentiful, cheap energy. For some startups and big companies, that place may be orbit.
Starcloud, a company aiming to build orbital data centers, warns that terrestrial power constraints could leave chips idle unless new supply is found. It launched its first spacecraft last fall carrying an Nvidia H100 chip and demonstrated running a version of Google’s Gemini from orbit. The firm plans a second craft with 100 times the power generation of the first, though that is still only about 8 kilowatts.
Google is also studying the idea with Project Suncatcher, a proposed cluster of 81 satellites to be built with Planet; two prototype satellites are slated for launch in early 2027. Planet’s CEO says the time is right to pursue orbital data centers, though he acknowledges the cost-efficiency timeline is uncertain.
Scaling from prototypes to data-center-scale facilities is the big challenge. The chips used for AI demand enormous amounts of power. To put that in perspective, the International Space Station’s solar arrays, each about half the size of a football field, generate roughly 100 kilowatts on average, about what a single large car engine produces. A 100-megawatt data center in space would need a thousand times that output; even crediting orbits with near-continuous sunlight, that implies structures hundreds of times larger than the ISS.
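The scaling claim can be sanity-checked with back-of-envelope arithmetic. In the sketch below, the 100-kilowatt average comes from the article; the total array area and the ISS sunlit fraction are rough assumptions for illustration, not mission data:

```python
# Back-of-envelope scaling from the ISS figures cited above.
ISS_AVG_POWER_KW = 100      # average output of the station's arrays (cited above)
ISS_ARRAY_AREA_M2 = 2500    # rough total array area -- an assumption
TARGET_POWER_MW = 100       # a mid-size terrestrial AI data center

scale = TARGET_POWER_MW * 1000 / ISS_AVG_POWER_KW   # 1000x the ISS's output
naive_area_m2 = ISS_ARRAY_AREA_M2 * scale

# The ISS spends a large fraction of each orbit in Earth's shadow; a
# dawn-dusk sun-synchronous orbit sees near-continuous sunlight, so the
# same average power needs less array. The sunlit fraction is an assumption.
ISS_SUNLIT_FRACTION = 0.6
adjusted_area_m2 = naive_area_m2 * ISS_SUNLIT_FRACTION

print(f"scale factor: {scale:.0f}x the ISS arrays")
print(f"area: ~{naive_area_m2 / 1e6:.1f} km^2 naive, "
      f"~{adjusted_area_m2 / 1e6:.1f} km^2 with continuous sun")
```

Even with generous assumptions, the answer lands at square kilometers of solar array, which is why "hundreds of times the ISS" is the right order of magnitude.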
Power isn’t the only obstacle. Electronics generate heat that must be removed, and space is a vacuum. There’s no air to convect heat away, so satellites rely on radiators: plumbing heat to large panels that emit it as infrared. That means any AI satellite would need not only massive solar arrays but similarly large radiators, or the system would have to be distributed across many smaller satellites in a constellation.
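Radiator size follows from the Stefan-Boltzmann law: a panel at absolute temperature T sheds heat at εσT⁴ watts per square meter per radiating side. A minimal sketch, with the emissivity and operating temperature as assumed values, and ignoring sunlight absorbed by the panels:

```python
SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W/(m^2 * K^4)
EMISSIVITY = 0.9       # high-emissivity radiator coating -- an assumption
T_RADIATOR_K = 300.0   # radiator surface temperature (~27 C) -- an assumption

def radiator_area_m2(heat_watts: float, two_sided: bool = True) -> float:
    """Panel area needed to reject `heat_watts` by infrared radiation alone."""
    flux = EMISSIVITY * SIGMA * T_RADIATOR_K ** 4  # W radiated per m^2 per side
    return heat_watts / (flux * (2 if two_sided else 1))

# Essentially every watt the chips draw becomes heat that must be rejected.
print(f"~{radiator_area_m2(100e6):,.0f} m^2 of radiator to shed 100 MW")
```

Because rejected power scales with T⁴, running the radiators hotter shrinks them quickly, but the chips then run hotter too, which is its own engineering constraint.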
Smaller-satellite constellations can spread power and cooling needs, but they introduce heavy demands for moving data between machines. That likely requires laser links to transfer large volumes of data between satellites. Even at light speed, distances in orbit create latency that can slow computing, particularly for tightly coupled workloads. Project Suncatcher proposes extremely tight clusters to minimize latency; Musk has talked about huge constellations and unveiled an “AI Sat Mini” design with solar arrays around 180 meters across. He’s also suggested deploying satellites over the poles in enormous numbers.
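The latency point is easy to quantify: free-space propagation delay is simply distance divided by the speed of light. The spacings below are illustrative assumptions, not figures from Suncatcher or SpaceX:

```python
C_M_PER_S = 299_792_458  # speed of light in vacuum

def one_way_latency_us(distance_km: float) -> float:
    """Ideal free-space propagation delay for an inter-satellite laser link."""
    return distance_km * 1000 / C_M_PER_S * 1e6

# Illustrative spacings. Even at light speed, a widely spaced constellation
# pays milliseconds per hop -- an eternity for tightly coupled AI traffic
# compared with the microsecond-scale links inside a terrestrial data center.
for km in (1, 100, 1000):
    print(f"{km:>5} km: {one_way_latency_us(km):8.1f} microseconds one way")
```

This is why proposals like Suncatcher cluster satellites as tightly as orbital mechanics allows: every extra kilometer of spacing is paid for on every exchange between machines.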
Launch costs are another gating factor. Today, launching payloads to orbit can cost roughly $1,000 per kilogram. Google’s analysis suggests launch costs must fall by at least a factor of five — to about $200 per kilogram — before space-based data centers become economically sensible at scale. SpaceX is betting that its Starship — still under development — will dramatically lower launch prices and make ambitious orbital projects feasible. Some investors and startup founders explicitly tie their plans to Starship’s success.
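The stakes of that factor of five can be sketched with the per-kilogram figures above plus one assumed parameter, the launch mass per kilowatt of delivered capacity; the 10 kg/kW value below is purely illustrative:

```python
DOLLARS_PER_KG_TODAY = 1_000   # rough current price cited above
DOLLARS_PER_KG_TARGET = 200    # Google's ~5x-lower threshold, cited above
KG_PER_KW = 10                 # assumed launch mass per kW -- illustrative only
TARGET_POWER_KW = 100_000      # the 100-MW benchmark used earlier

mass_kg = TARGET_POWER_KW * KG_PER_KW
for rate in (DOLLARS_PER_KG_TODAY, DOLLARS_PER_KG_TARGET):
    print(f"${rate:,}/kg: launch bill ~${mass_kg * rate / 1e9:.1f}B "
          f"for {mass_kg / 1000:,.0f} tonnes")
```

Under these assumptions, launch alone runs to a billion dollars at today's prices, before a single chip, array, or radiator is paid for, which is why the business cases lean so heavily on Starship-class cost reductions.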
Even if you can build and power orbital data centers, operating and maintaining them poses more questions. Terrestrial data centers are active workplaces where technicians install, upgrade and repair hardware constantly. A large facility like DataBank’s IAD1 in Ashburn, Virginia, spans 144,000 square feet and consumes around 13 megawatts — about 130 times the ISS’s power. Staff and vendors visit daily to perform upgrades and maintenance. Customers also expect physical access to their equipment at times.
Space data centers would require different operational models: more remote management, heavily tested and modular hardware sent up ready to run, and possibly robotic servicing or replacement. Some functions can be done in software, and chips can be tested extensively on the ground, but customers may still want hands-on access or rapid hardware turnover to stay competitive.
Industry veterans remain unconvinced that space will yield a near-term threat to Earth-based data centers. “There’s a lot of ifs and a lot of advancements that would have to occur, and I find it kind of hard to believe that all that could happen in two or three years,” says Raul Martynek, CEO of DataBank. For companies running large terrestrial fleets, the prospect isn’t keeping them up at night.
In short, orbital data centers are feasible in principle: energy is abundant in orbit, and companies are already testing chips there. But turning prototypes into full-scale, cost-effective alternatives to terrestrial data centers will require breakthroughs in solar-array and radiator scale, inter-satellite communications, launch costs, and remote operations, along with successful deployment of heavy-lift rockets like Starship. Whether those pieces fall into place quickly enough for Musk’s timeline remains an open question.