Nvidia Launches Space-Grade AI Chips as Orbital Data Centers Race Heats Up
Chip giant Nvidia is getting serious about space. At the GTC conference, the company presented a new generation of computing modules developed specifically for use in orbit. This positions Nvidia as a central technology supplier for an emerging industry: orbital AI data centers designed to process data directly in space.
Nvidia’s New Space Hardware
At the center of the announcement is the Nvidia Space-1 Vera Rubin Module, a computing module developed specifically for space. Compared to the previous H100 GPU, the new Rubin chip delivers up to 25 times more AI computing performance for space-based inferencing, according to Nvidia. The module is designed for SWaP-constrained environments, meaning applications with strict limits on size, weight, and power consumption.
The portfolio is complemented by two additional platforms that are already available today:
- Nvidia IGX Thor: An industrial-grade platform for mission-critical edge environments, with support for functional safety, secure boot, and autonomous operation.
- Nvidia Jetson Orin: An ultra-compact, energy-efficient module for AI inferencing directly on board satellites, optimized for real-time processing of image, navigation, and sensor data.
For processing large volumes of data on the ground, Nvidia relies on the RTX PRO 6000 Blackwell Server Edition GPU, which the company says enables processing up to 100 times faster than conventional CPU-based systems.
“Space computing, the final frontier, has arrived. As we deploy satellite constellations and explore deeper into space, intelligence must live wherever data is generated,” said Jensen Huang, founder and CEO of Nvidia.
These Partners Are on Board
Nvidia has secured a number of partners from the space industry for its orbital push. Six companies are already using Nvidia’s platforms for their missions:
- Aetherflux: The company is developing solar-powered computing and energy infrastructure in orbit and relies on the Vera Rubin Module for autonomous operations.
- Axiom Space: The commercial space station operator is integrating Nvidia’s platforms into its infrastructure.
- Kepler Communications: The company is building a data network for real-time connectivity in space and uses Jetson Orin to intelligently manage data streams.
- Planet Labs PBC: The Earth observation specialist processes global satellite imagery daily and aims to use Nvidia’s CorrDiff AI models to go from raw data to actionable insights in near real time.
- Sophia Space: The company offers modular, passively cooled computing platforms for satellite operators and relies on Jetson Orin for AI capabilities within strict SWaP constraints.
- Starcloud: The company is building purpose-built orbital data centers and aims to enable training and inferencing workloads directly in space for the first time, together with Nvidia.
Who Else Is Working on AI in Space
Nvidia is not alone. The race for orbital AI infrastructure has several prominent participants.
Google: Project Suncatcher
Google is pursuing similar goals under the name “Project Suncatcher”. The company has already tested Tensor Processing Units (TPUs) with a particle accelerator to simulate the radiation conditions in low Earth orbit. The launch of two prototype satellites equipped with TPUs is planned, with initial in-orbit tests scheduled for 2027. In the long term, Google is even considering constellations of 81 satellites in clusters with gigawatt-scale computing power. Google has also secured Planet Labs as a partner for initial deployments.
SpaceX and Elon Musk: One Million Satellites
Elon Musk’s plans are the most ambitious of all. SpaceX has filed an application with the US Federal Communications Commission (FCC) for a constellation of up to one million satellites that would collectively function as an orbital AI data center. Connected via high-speed lasers and powered by solar energy, this network is intended to provide global computing power. Musk stated this could be “the cheapest way to generate AI computing power in two to three years.” For the chips, he is relying on technology from his company Tesla, though concrete timelines remain unclear. In the wake of the recent merger of SpaceX with his AI startup xAI, the project has gained additional strategic momentum.
Jeff Bezos and Blue Origin
Amazon founder Jeff Bezos is also exploring the idea. In 2025, he stated that he could envision data centers in space within the next 10 to 20 years, arguing on environmental grounds: resource-intensive industries such as data centers should be relocated to space in the long term in order to relieve the burden on Earth.
Who Is Skeptical
Not all players in the tech industry share the enthusiasm for orbital data centers. The criticism comes from prominent quarters:
- Sam Altman (OpenAI): The CEO of OpenAI is among the critics of the idea.
- Matt Garman (AWS): The head of Amazon Web Services is skeptical of the concept.
- Jim Chanos: The well-known short seller has publicly criticized the plans.
- Gartner analysts: The market research firm has classified orbital data centers as unrealistic.
Among other things, critics have described the idea as “ridiculous,” “AI snake oil,” and “peak insanity.”
The Technical Hurdles: Why This Is So Difficult
The skepticism has concrete technical reasons. Orbital data centers face challenges for which fully proven solutions do not yet exist.
The Cooling Problem
In the vacuum of space, there is neither air nor water available for heat dissipation. The only way to shed waste heat is thermal radiation from large radiator surfaces. Such radiators would need to be enormous; they are heavy, expensive to launch, and sensitive to direct solar radiation. To date, there is no practically proven solution capable of handling the waste heat of entire AI clusters at scale.
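The scale of the radiator problem can be estimated with the Stefan-Boltzmann law. The sketch below is a back-of-the-envelope illustration with assumed values (emissivity, radiator temperature, cluster power), not figures from Nvidia or any operator:

```python
# Back-of-the-envelope radiator sizing via the Stefan-Boltzmann law.
# All inputs are illustrative assumptions, not vendor figures.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(waste_heat_w: float,
                     radiator_temp_k: float = 300.0,
                     sink_temp_k: float = 4.0,
                     emissivity: float = 0.9) -> float:
    """Radiator area needed to reject `waste_heat_w` watts to deep space.

    Ignores solar and Earth-albedo heat loads and assumes a uniform
    radiator temperature; real designs need substantial margins.
    """
    flux = emissivity * SIGMA * (radiator_temp_k**4 - sink_temp_k**4)
    return waste_heat_w / flux

# A hypothetical 1 MW AI cluster (very roughly 1,000 high-end GPUs):
area = radiator_area_m2(1_000_000)
print(f"{area:,.0f} m^2 of radiator area")  # on the order of a few thousand m^2
```

Even under these optimistic assumptions, a single megawatt of compute needs thousands of square meters of radiator, which is where the mass and launch-cost penalties come from.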
Maintenance and Upgrades
Every hardware replacement in orbit requires a rocket launch. This makes maintenance and technological advancement extremely expensive and logistically demanding. While terrestrial data centers are regularly upgraded with new hardware, orbital systems are largely on their own once launched.
Radiation and Reliability
Hardware in orbit is exposed to cosmic radiation, which can damage chips and cause errors. Components must be hardened or shielded accordingly, which increases costs and weight.
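One standard software-level defense against radiation-induced bit flips is triple modular redundancy (TMR): run a computation three times and take a majority vote, so a single fault is outvoted. The following is a generic sketch of that technique, not a description of how Nvidia’s space modules actually handle faults:

```python
# Triple modular redundancy (TMR): run a computation three times and
# take a majority vote, so one radiation-induced fault is outvoted.
# A generic sketch of the technique, not any vendor's implementation.

from collections import Counter
from typing import Callable, TypeVar

T = TypeVar("T")

def tmr(compute: Callable[[], T]) -> T:
    """Return the majority result of three independent runs.

    Raises RuntimeError if all three runs disagree (double fault).
    """
    results = [compute() for _ in range(3)]
    value, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("TMR voter: no majority, uncorrectable fault")
    return value

# Simulate a run in which one of the three copies suffers a bit flip:
outputs = iter([42, 42 ^ 0b1000, 42])  # second result has bit 3 flipped
print(tmr(lambda: next(outputs)))  # prints 42: the faulty copy is outvoted
```

In practice, radiation tolerance combines hardware measures (shielding, ECC memory, hardened processes) with redundancy schemes like this one, and each layer adds mass, cost, or performance overhead.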
Economic Viability
Launch costs, development effort, and the complexity of operations currently make orbital data centers many times more expensive than comparable capacities on Earth. Even optimistic scenarios assume it will take decades before the technology could be economically competitive.
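To get a rough sense of the cost gap, the sketch below multiplies an assumed hardware mass by an assumed launch price per kilogram. Every number here is an illustrative placeholder; actual launch prices and hardware masses vary widely and are not taken from the article:

```python
# Rough launch-cost arithmetic for orbital compute hardware.
# All numbers are illustrative placeholders, not quoted market prices.

def launch_cost_usd(hardware_mass_kg: float,
                    radiator_mass_kg: float,
                    price_per_kg_usd: float) -> float:
    """Launch cost for compute hardware plus its thermal radiators."""
    return (hardware_mass_kg + radiator_mass_kg) * price_per_kg_usd

# Hypothetical example: 10 t of servers, 20 t of radiators,
# at an assumed $1,500 per kg to low Earth orbit:
cost = launch_cost_usd(10_000, 20_000, 1_500)
print(f"${cost:,.0f}")  # $45,000,000 for launch alone
```

Note that this covers only the launch; development, rad-hardening, and operations come on top, which is why even optimistic scenarios put economic parity decades away.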
Conclusion: A Marathon, Not a Sprint
Nvidia’s entry into the space computing market is a significant signal: the most important chip supplier to the AI industry is betting on orbital infrastructure as a growth sector. With the Space-1 Vera Rubin Module, IGX Thor, and Jetson Orin, Nvidia offers a tiered platform for a range of requirements in space.
Yet the industry is still in its infancy. Initial proof-of-concepts are technically feasible, but widespread infrastructure remains a distant prospect. In the coming years, hybrid approaches are likely to emerge first, in which terrestrial data centers are supplemented by orbitally supported nodes. Whether the promise of orbital AI data centers can be fulfilled will be decided over the next few decades.

