Musk’s space data centers and the risk of a new dependence on the US

SpaceX has agreed to acquire the AI company xAI, and the deal has become a catalyst for a broader discussion of Elon Musk’s larger plan: an integrated setup in which launches, connectivity, and computing sit under a single management and investment umbrella, with the stated endgame of moving part of the computing infrastructure into space.

Experts emphasize that fully fledged orbital data centers remain a decades-long project. At the same time, waiting until the technology is clearly cost-effective could entrench a dependence on US orbital infrastructure, if a single player builds and locks in that infrastructure before anyone else.

Why xAI appears in the story

The purchase of xAI is seen not only as a matter of revenue and asset consolidation, but also as a step toward vertical integration. In this logic, SpaceX provides access to orbit and transport capacity, Starlink and other satellite systems can provide communications channels, and xAI drives demand for compute for training and running models.

This setup looks like an attempt to assemble the entire value chain, from rockets and orbital platforms to artificial intelligence services. Critics describe such a model as a stack of potential infrastructure bottlenecks: control over multiple layers at once makes both competition and regulation harder.

Why move computing off Earth

A data center is a physical facility with servers and storage systems, which can be compared to a factory floor for data. It requires a lot of energy, takes up space, and needs cooling, while demand for AI compute increases the strain on power grids and hardware supply chains.

The idea of an orbital data center rests on two promises. First, solar energy in orbit is available almost constantly, which looks attractive against the backdrop of rising electricity prices. Second, space is associated with cold, and in public debate this often sounds like a “natural” bonus for cooling computing hardware.
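
The “almost constantly” part of the first promise is easy to quantify. A ground array produces power for only a fraction of the year (night, weather, seasons), while a suitable orbit can keep panels in sunlight nearly all the time. A toy comparison, with capacity factors chosen as illustrative assumptions rather than measured values:

```python
# Rough comparison of annual energy yield per kW of installed solar capacity.
# The capacity factors below are illustrative assumptions:
# ~20% for a mid-latitude ground array, ~95% for a near-continuously lit orbit.
HOURS_PER_YEAR = 8760

def annual_kwh_per_kw(capacity_factor: float) -> float:
    """Energy delivered in one year by 1 kW of installed capacity."""
    return capacity_factor * HOURS_PER_YEAR

ground = annual_kwh_per_kw(0.20)  # assumed ground-based array
orbit = annual_kwh_per_kw(0.95)   # assumed sun-synchronous-style orbit

print(f"ground: {ground:.0f} kWh/yr, orbit: {orbit:.0f} kWh/yr, "
      f"ratio: {orbit / ground:.2f}x")
```

Under these assumptions, the same panel delivers several times more energy per year in orbit, which is the entire basis of the first promise. The second promise, as the next sections show, is the one that does not survive contact with engineering.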

Timelines without wishful thinking: two horizons

Elon Musk has previously spoken of space data centers appearing within two to three years. Jermaine Gutierrez, a researcher at the European Space Policy Institute, responded to such estimates with irony that doubles as sound expectation management: “When it comes to Elon Musk, I always mentally add an extra zero to any of his forecasts.”

An ESPI report estimates that an orbital data center competitive on compute capacity is at least 20 years away, even if systematic work begins right now. The nearer horizon looks different: orbital cloud services and edge computing, that is, processing data closer to the signal source on stations or platforms, could appear in roughly 5 years given progress in launch costs and thermal engineering.

Why orbit isn’t ready yet

The key engineering barriers boil down to a few critical problem areas, each of which brings with it complex economics and new risks:

  • Cooling and heat rejection. Space is cold, but rejecting heat there is harder because, unlike on Earth, there is no air or water to carry it away. As Gutierrez puts it, there is no liquid to dissipate heat; radiators are all that remain, and you run straight into the Stefan–Boltzmann law. He noted that this leads to massive thermal-control infrastructure that may end up larger than the computing hardware itself.
  • Launch costs and Starship’s readiness. The project’s economics hinge on full reusability and a high launch cadence, so that the launch price approaches the cost of propellant. Starship has not yet completed an orbital flight, and the timeline for bringing it to operational maturity remains uncertain.
  • Repair and wear. Components in orbit often have a service life of around five years due to radiation damage, and maintenance will require complex logistics.
  • Service robots. The necessary level of robotics has not yet been achieved, but discussions often point to a possible tie-in with robotics efforts associated with Tesla.
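
The scale of the first problem can be sketched directly from the Stefan–Boltzmann law: an ideal radiator sheds power P = εσAT⁴, so the required area grows linearly with heat load and falls only with the fourth power of temperature. A minimal sketch, where the emissivity, temperature, and heat load are all illustrative assumptions:

```python
# Sketch of the radiator-sizing problem Gutierrez alludes to.
# Idealized: ignores absorbed sunlight, Earth infrared, and two-sided
# radiation; all numbers are illustrative assumptions.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area_m2(heat_w: float, temp_k: float,
                     emissivity: float = 0.9) -> float:
    """Minimum one-sided radiator area needed to reject heat_w at temp_k."""
    return heat_w / (emissivity * SIGMA * temp_k ** 4)

# Rejecting 1 MW of server heat at 300 K, a temperature electronics tolerate:
area = radiator_area_m2(1e6, 300.0)
print(f"{area:.0f} m^2")  # on the order of a few thousand square metres
```

Because of the T⁴ term, hotter radiators shrink quickly, but server electronics cap how hot the coolant loop can run, which is exactly the bind the first bullet describes: for megawatt-class compute, the radiators can dwarf the servers.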

Even if some tasks are handled through modular replacement of units, the question remains who will be able to service such infrastructure—and on what terms—and who will own orbital capacity at critical moments.

Energy as the currency of AI costs

A separate thread is the argument about energy. OpenAI CEO Sam Altman has put forward the thesis that the cost of AI compute will, over time, converge toward the cost of energy. In this sense, control over cheap generation becomes not only an engineering issue, but also a lever of market power.
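
Altman’s thesis can be made concrete with a toy cost decomposition: an accelerator-hour costs amortized hardware plus electricity. With the illustrative numbers below (every figure is an assumption, not a measurement), hardware still dominates today; the claim is that as hardware commoditizes and amortizes away, the energy term becomes the price floor:

```python
# Toy decomposition of the cost of one accelerator-hour.
# Every number here is a labeled assumption for illustration only.
def cost_per_hour(hw_price: float, lifetime_h: float,
                  power_kw: float, kwh_price: float) -> tuple:
    """Return (amortized hardware cost, electricity cost) per hour, USD."""
    return hw_price / lifetime_h, power_kw * kwh_price

hw, energy = cost_per_hour(
    hw_price=30_000.0,    # assumed accelerator price, USD
    lifetime_h=5 * 8760,  # assumed 5-year useful life
    power_kw=1.0,         # assumed draw including cooling overhead
    kwh_price=0.08,       # assumed industrial electricity price, USD/kWh
)
share = energy / (hw + energy)
print(f"hardware ${hw:.2f}/h, energy ${energy:.2f}/h, "
      f"energy share {share:.0%}")
```

If the hardware term keeps falling while demand keeps the fleet running flat out, whoever controls the cheapest generation controls the marginal cost of compute, which is the lever of market power the article describes.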

In orbit, solar energy really does look almost constant after deployment, and effectively “free” if you count only operating costs. Gutierrez states the geopolitical stake plainly: if dominance in space-based solar power infrastructure ends up in US hands, that concentration is itself the risk.

Monopolization and power across the entire chain

Experts’ concerns are tied not to a single technology, but to the accumulation of control at multiple layers at once. Cheap orbital energy combined with control over launches, communications channels, computing platforms, and service distribution could lead to the global software supply chain becoming dependent on a single player and a single jurisdiction.

Sentient co-founder Himanshu Tyagi suggests looking more broadly than the satellites and models themselves. The real risk is not in sci‑fi scenarios about an out-of-control superintelligence, but in who ultimately holds the keys. He adds that the accumulation of power across the stack, including compute, deployment, distribution, capital, and governance, resembles an oligopoly that is hard to regulate, hard to compete with, and hard to audit.

From Earth to orbit: where gigawatts of compute are headed

When thinking about the future of data centers, it is important to understand that on Earth they have already become the foundation for many industries far removed from space ambitions. Modern compute capacity supports not only AI and cloud services, but also a whole range of digital platforms, each of which has its own infrastructure requirements:

  • The financial sector uses data centers to process millions of real-time transactions. Banking apps, stock exchange platforms, and cryptocurrency exchanges depend on instant server response: any latency in processing a payment or updating quotes can result in financial losses for thousands of users. That is why the largest banks invest in distributed networks of data centers around the world.
  • Cloud gaming is another area where compute capacity and stability are critical. Services like GeForce Now, Xbox Cloud Gaming, or PlayStation Now stream games to users’ devices in real time, and server load can spike sharply at the time of major releases or during large-scale online events. For players, this means that graphics quality and minimal lag directly depend on how powerful and well located the data center they are connected to is.
  • Online gambling and sports betting platforms process enormous volumes of data every day: odds update in real time, match streams run without delay, and payment systems must clear transactions instantly. This matters most in dynamic sports such as football or cricket, where a match can turn in a single over and thousands of fans follow updates simultaneously; whether a bet lands a second before the over starts, or the stream freezes at the most tense moment, comes down to the resilience of the server infrastructure.

On the days of major tournaments such as the Indian Premier League, the load on these apps grows so much that without powerful distributed data centers, neither instant odds delivery nor the processing of millions of simultaneous transactions would be possible.

Security and sovereignty as separate threads

Supporters of orbital computing separately emphasize the topic of cybersecurity. Javier Izquierdo, strategic director of the satellite operator Hispasat, noted that in space it is harder to hack, and an architecture with data processing in orbit can reduce the number of vulnerable links in the transmission chain.

But sovereignty hinges not only on cryptography and routing; it also hinges on law and control of infrastructure. For Europe, an additional irritant is the US CLOUD Act, under which US authorities can compel US companies to hand over data regardless of where it is stored, and that risk carries over to orbital services if they are controlled by the same players.

The player map—and Europe’s gap

The United States remains the key venue for the private space economy, and SpaceX in this picture acts as both a launch provider and a potential gatekeeper of orbital logistics. China is also moving toward space-based computing: its Three-Body Computing Constellation is frequently cited, with satellites that carry AI payloads and are developing edge-computing capabilities.

In Europe, there is interest in orbital data centers, but there is no single plan, and the cloud market already shows dependence on AWS and Google. Gutierrez said the continent lacks a “brand it and go for it” approach, and he called the Ariane 4 era the last clear moment of European leadership vision. Companies like Thales are conducting research; however, without pan-European coordination and major customers, such work risks remaining a collection of fragmented technical experiments.