The worst real estate on Earth is better than the best real estate on Mars or Luna.
[1] https://www.amazon.com/City-Mars-settle-thought-through/dp/1...
Very true.
Here's a recent HN link to a chilling documentary about one of the most isolated settlements in the world: https://news.ycombinator.com/item?id=46040459
Follow the rationale:
1. Nation states ultimately control three key infrastructure pieces required to run data centers: (a) land (protected by sovereign armed forces), (b) internet / internet infra, (c) electricity. If crypto ever became a legitimate threat, nation states could simply seize any one, or all, of these three and basically negate any use of crypto.
2. So, if you have data centers that no longer rely on power derived from a nation state, land controlled by a nation state, or connectivity provided by a nation state's cabling infra, then you can always access your currency and assets.
Microsoft was talking about submarine data centers powered by tidal forces in the early 2000s.
There have been talks of data centers on Sealand-like nation states.
Geothermal ...
Exotic data center builds will always be hyped. They'll always be within the realm of feasibility when cost is no object, but probably outside that of practicality or need.
Next it'll be fusion-powered data centers.
https://cfs.energy/news-and-media/commonwealth-fusion-system...
Fiscal rules are sort of man made.
What did the Royal Navy do? There is no mention of the UK using force against Sealand in either the Wikipedia page or this BBC article about Sealand. (Though obviously the Royal Navy could retake Sealand if they wanted.)
Microsoft did something similar with their submarine data center pilots. This gets more press because AI.
On the SEU issue I’ll add in that even in LEO you can still get SEUs - the ISS is in LEO and gets SEUs on occasion. There’s also the South Atlantic Anomaly where spacecraft in LEO see a higher number of SEUs.
The section of the article that talks about them isn’t great. At least for FPGAs, the state of the art is to run 2-3 copies of the logic, and detect output discrepancies before they can create side effects.
I guess you could build a GPU that way, but it'd have a third of the parallelism of a normal one for the same die size and power budget. The article says it'd be a loss of 2-3 orders of magnitude.
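To make the 2-of-3 idea concrete, here is a minimal sketch of the majority-voting step in triple modular redundancy (TMR), the FPGA mitigation described above. The function name and error handling are my own illustration, not from any real rad-hard toolchain.

```python
# Minimal sketch of triple modular redundancy (TMR): run three copies of
# the logic and majority-vote the outputs so a single-event upset in one
# copy cannot propagate into a side effect.
from collections import Counter

def tmr_vote(a, b, c):
    """Return the majority value of three redundant outputs.

    If all three disagree there is no majority, so we flag an
    unrecoverable discrepancy instead of guessing.
    """
    counts = Counter([a, b, c])
    value, n = counts.most_common(1)[0]
    if n < 2:
        raise RuntimeError("all three copies disagree; no majority")
    return value

# One copy suffered a bit flip (0x2A -> 0x2B): the voter masks the fault.
print(tmr_vote(0x2A, 0x2B, 0x2A))  # -> 42
```

In hardware the voter itself is tiny combinational logic replicated per output bit; the cost is the 3x duplication of the main logic, which is exactly the parallelism hit discussed above.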
It’s still a terrible idea, of course.
In other words, a) background temperature (to the extent it's even meaningful) is much warmer than Earth's surface and b) cooling is much, much more difficult than on Earth.
Fun fact though: make your radiator hotter and you can dump just as much, if not more, energy than you would typically shed via convective cooling. At 1400°C (just below the melting point of steel) you can radiate about 450 kW of heat per square meter; all you need is a really fancy heat pump!
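The ~450 kW/m² figure checks out against the Stefan-Boltzmann law, assuming an ideal black-body radiator (emissivity 1) and ignoring any incoming flux:

```python
# Stefan-Boltzmann check of the ~450 kW/m^2 claim above (ideal
# black-body radiator, emissivity = 1, no incoming flux).
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiated_flux(temp_c, emissivity=1.0):
    """Radiated power per square meter at a given surface temperature."""
    t_kelvin = temp_c + 273.15
    return emissivity * SIGMA * t_kelvin ** 4

print(round(radiated_flux(1400) / 1000))  # kW/m^2 -> 444
```

Because the flux scales as T⁴, running the radiator hot is enormously effective; the hard part is the heat pump that lifts chip-temperature heat up to 1400°C.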
There's no atmosphere to help with heat loss through convection, and there's nowhere to shed heat through conduction; all you have is radiation. It is a serious engineering challenge for spacecraft to get rid of the little heat they generate and to avoid being overheated by the sun.
- Earth temperatures are variable, and radiation only works at night
- The required radiator area is much smaller for the space installation
- The engineering is simple: CPU -> cooler -> liquid -> pipe -> radiator. We're assuming no constraint on capex so we can omit heat pumps
Even optimistically, capex goes up by a lot to reduce opex, which means you need a really really long breakeven time, which means a long time where nothing breaks. How many months of reduced electricity costs is wiped out if you have to send a tech to orbit?
Oh, and don't forget the radiation slowly destroying all your transistors. Does that count as opex? Can you break even before your customers start complaining about corruption?
It’s a little worrying so many don’t know that.
>vacuum is a fucking terrible heat convector
Yes, we're talking about radiation, not convection.
And a kilowatt from one square meter is awful. You can do far more than that with access to an atmosphere, never mind water.
For example, the JWST uses a RAD750 ( https://en.wikipedia.org/wiki/RAD750 ) which is based on a PowerPC 750 running at 110 MHz to 200 MHz.
Its successor is the RAD5500 ( https://en.wikipedia.org/wiki/RAD5500 )... which runs at between 66 MHz and 462 MHz.
> The RAD5545 processor employs four RAD5500 cores, achieving performance characteristics of up to 5.6 giga-operations per second (GOPS) and over 3.7 GFLOPS. Power consumption is 20 watts with all peripherals operating.
That's kind of neat... but not exactly data center performance.
Back to the older RAD750...
> The RAD750 system has a price that is comparable to the RAD6000, the latter of which as of 2002 was listed at US$200,000 (equivalent to $349,639 in 2024).
That isn't exactly great price-performance. Well, unless you're constrained by "it costs millions to replace it."
So... I'm not really sure what devices they'd be putting up there.
The "data centers in space" pitch is much more "space launch is a hot technology, AI and data centers are a hot technology... put the two together and it's to the moon!" (Or at least that's what we tell the investors before we try to spend all their money.)
Best case scenario: custom ASICs for specialised workloads, either for edge computing of orbital workloads or for military stuff. That would be with the ability to replace/upgrade components, rather than a sealed sat-like environment.
It's similar to the hype for Starlink-type sats for internet connectivity rather than a proper fiber buildout that would solve most of the issues at lower cost. After seeing the deployments in Ukraine and the Sahel over the last couple of years, it's mostly a mil tool.
[1] https://www.theregister.com/2024/01/24/updated_hpe_spaceborn...
One of these projects is bonkers IMO: "China has an underwater data center; the US will build them in space"
https://www.forbes.com/sites/suwannagauntlett/2025/10/20/chi...
Hitting something in orbit just requires you to be in the way at the right time.
Basically an intercept is a lot easier.
You want to push things out of orbit, not turn a massive structure into a supersonic shard field for 20 years.
But these buffoons only see the blinky shiny and completely miss the point of the stories. They have a child's view of SF, the way that men in their teens and 20s thought they were supposed to be like Tyler Durden.
If you want to avoid national laws and have great cooling, then submerse your datacenter in the ocean instead.
https://news.microsoft.com/source/features/sustainability/pr...
So obviously we're not going to send some SREs into space to babysit the machines. Have everything fail in place? Have robots do it? What about the regular supply missions to keep replacing all the failing hardware (there are only so many spare HDDs you can keep on hand)?
The whole thing is farcical.
See also: Any on-prem horror show that budgeted for capex, rent, cooling, network and power, but not maintenance.
Shut up! This is the chance for one of us to go into space! I don't care if all I'm doing is swapping 1U pizza boxes in the cold hard vacuum of space, I'm down!
The next generation Starlink (V3) will have 250 square meters of solar panels per satellite, and they are planning on launching about 10,000 of them, so now you're at 2.5 million m^2 of panels or 100 times ISS.
All those satellites have their own radiators to manage heat. True, they lose some heat by beaming it to the ground, but data center satellites would just need proportionally larger radiators.
And, of course, all those satellites have CPUs and memory chips; they are already hardened to resist space radiation (or else they wouldn't function).
Almost every single objection to data centers in space has already been overcome at a smaller scale with Starlink. The only one that might apply is cost: if it's cheaper to build data centers on Earth, then space doesn't make sense (and it won't happen). But prices are always coming down in space, and prices on Earth keep going up (because of environmental restrictions).
So the only problem left to be solved is that space datacenters would be millions of times more expensive per unit of compute than a ground based datacenter. And cost millions of times more to maintain.
Also remember that data centers last for about 5 years; after that the GPUs are obsolete. That’s no different from the lifetime of a Starlink satellite.
If launch costs keep dropping and environmental costs keep rising, space based data centers will make sense.
Plus, environmental costs of data centers keep rising.
Did you not read the article? It had many objections that make it clear datacenters in space are unworkable...
It needs to be scaled up, but there is no obstacle to that (at least none that the article mentions).
The only valid objection is cost, but space prices keep dropping and earth prices keep rising.
It does sound to me like other concepts that Google has explored and shelved, like building data centers out of shipping container sized units and building data centers underwater.
[1] https://services.google.com/fh/files/misc/suncatcher_paper.p...
> Cooling would be achieved through a thermal system of heat pipes and radiators while operating at nominal temperatures
Which is kind of similar to writing a paper about building a bridge over the Pacific and saying "The bridge would be strong enough by being built out of steel". Like you can say it, but that doesn't magically make it true.
https://www.tomshardware.com/desktops/servers/microsoft-shel...
But the real reason they won't work is because they're investor scams that were never serious in the first place.
Then again, there's lots of space in space. Perhaps it's possible to isolate racks/aisles into their own individual satellites, each with massive radiant heat-shedding panels? It's an interesting problem that would be fun to try to solve, but ultimately I agree with OP when we come back around to "But, why?" Research for the sake of research is a valid answer, but "for prod"? I don't see it.
If humans are going to expand beyond the Earth, we'll certainly need to get much better at building and maintaining things in space, but we don't need to put data centers in space just to support people stuck on the ground.
> After laughing at "the vacuum of space for cooling" I closed the page because there was nothing serious there. Basic high school physics student would be laughing at that sentence.
>I mean, when you tell people that within 10 years it could be the case that most new data centers are being built in space, that sounds wacky to a lot of people, but not to YC. (8:00)
Reminds me of the hyperloop. Well yes, things in a vacuum tube go fast. Now, do enough things go fast for it to make any sense?
You're worried about rates when we can't even get the ball rolling on safety for human occupancy, maintenance, workability.
I swear, nothing on Earth more dangerous than someone with dollar signs in their eyes.
I’m under the impression you need to radiate through matter (air, water, physical materials, etc).
Is my understanding of the theory just wrong?
The main way that heat dissipates from space stations and satellites is through thermal radiation: https://en.wikipedia.org/wiki/Thermal_radiation.
I mean, you totally can radiate excess heat energy on Earth, but your comment implies that the parent's idea of radiating off excess "energy", specifically HEAT energy, in space is possible, which it isn't.
You can radiate excess energy for sure, but you'd first have to convert it away from heat energy into light or radio waves or similar.
I don't think we even have that tech at this point in time, nor do we have any concept of how this could be done in theory.
That's technically correct, I guess; at some temperature threshold it becomes possible to bleed off some fraction of the energy while the material is exceedingly hot.
Space stations need enormous radiator panels to dissipate the heat from the onboard computers and the body heat of a few humans. Cooling an entire data center would require utterly colossal radiator panels.
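A rough sizing exercise shows how "utterly colossal" the panels get. This sketch assumes idealized double-sided panels radiating to deep space and ignores solar and Earth-shine heat input; the data center power and panel temperature are illustrative assumptions:

```python
# Rough radiator sizing for an orbital data center, assuming ideal
# double-sided panels radiating to deep space (no incoming flux).
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(heat_watts, panel_temp_k, sides=2, emissivity=0.9):
    """Panel area needed to reject a given heat load by radiation alone."""
    flux = emissivity * SIGMA * panel_temp_k ** 4  # W/m^2 per side
    return heat_watts / (flux * sides)

# An assumed modest 10 MW data center with 350 K (~77 C) radiators:
area = radiator_area_m2(10e6, 350.0)
print(round(area), "m^2")
```

That comes out to roughly a football field's worth of panel for just 10 MW, before accounting for sunlight falling on the panels, pumps, and plumbing; hyperscale sites draw an order of magnitude more.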
So, it makes sense to always start there.
- More junk whizzing around Earth.
- Inaccessibility for maintenance.
- Power costs.
- Susceptibility to solar storms and cosmic rays.
Risky/untried things aren't dumb because they're hard, they're dumb when they're more expensive/harder than cheaper/easier alternatives that already exist that do the same thing.
Of course it’s stupid and it’s never going to work. The same is true for Carbon Capture and Storage, blue hydrogen, etc. It’s nonsense from the start, but that didn’t stop governments around the world from spending billions on it.
It works like this: companies spend a few million on PR to market a sci-fi project that’s barely plausible. Governments that really want to preserve the status quo but are pressured to “do something” can just announce that they’re sinking billions into it and voila! They’re green, they’re going to save the world.
It’s just a scam to get public money really.
Latency becomes high but you send large batches of work.
Probably not at all economical compared to anywhere on Earth but the physics work better than orbit where you need giant heat sinks.
https://pmc.ncbi.nlm.nih.gov/articles/PMC9646997/ ("Thermophysical properties of the regolith on the lunar far side revealed by the in situ temperature probing of the Chang’E-4 mission" (2022))
https://www.engineeringtoolbox.com/thermal-conductivity-d_42...
(Imagine, for entertainment purposes, what would happen if you wrapped a running server rack in a giant ball of rock-wool insulation, 50 meters in radius).
The only way to dissipate large amounts of heat on the moon is with sky-facing radiators.
It’s another huge problem for orbit though. Shielding would add a ton of mass and destroy the economics.
That said, anything has to be better than almost literally nothing, so I'm still holding out for datacenters on the moon.
I presume Earth's gravity largely keeps the exosphere it has around it. With some modest fractional % lost year by year. There is a colossal vast volume out there! But given that there's so little matter up in space, what if any temperature rise would we expect from say a constant 1TW of heat being added?
https://www.nasa.gov/wp-content/uploads/2015/03/135642main_b...
[1] https://en.wikipedia.org/wiki/Alexander_and_the_Terrible,_Ho...
I’m not arguing it’ll be easy or will ultimately work, but articles like this are unhelpful because they don’t address the fundamental insight being proposed.
Starlink satellites would be pointless for doing computation because they are spread across the Earth, resulting in horrible latency. AI companies spend lots of money on super fast interconnects within a datacenter.
Starlink with GPU might have some advantage for running edge GPU. But most Starlink customers are close to ground station and it makes a lot more sense to have GPUs there. It is a lot easier to manage them than launching new satellites which could take years.
But, 1) literally the smartest people and AI in the world will be working on this and 2) man I want to see us get to a type 2 civilisation bad.
The layout of this blog post is also very interesting: it presents a bunch of very hard items to solve, and funnily enough the last one has recently been solved with Starlink. So we can approach this problem; it requires great engineering, but it’s possible. Maybe it’s as complicated as CERN’s LHC, but we have one of those.
Next up then is the strong why? When you’re in space, if you set the cost of electricity to zero, the equation gets massively skewed.
Thermal is the biggest challenge but if you have unlimited electricity, lots of stuff becomes possible. Fluorinert cooling, piezoelectric pumps and dual/multi stage cooling loops with step ups. We can put liquid cooling with piezos on phones now, so that technology is moving in the right direction.
For a thought experiment: if launch costs were $0/kg, would this be possible? If the answer's yes, then at some point above $0/kg it becomes uneconomical; the challenge is then to beat that number.
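That thought experiment can be turned into a one-line formula: the highest launch price per kg at which the project still breaks even. All inputs below are assumed placeholders, not real figures:

```python
# Sketch of the launch-cost thought experiment: at what $/kg does an
# orbital data center break even? All inputs are illustrative assumptions.
def breakeven_launch_cost_per_kg(lifetime_power_savings_usd,
                                 extra_orbital_capex_usd,
                                 launched_mass_kg):
    """Highest $/kg at which savings still cover launch + extra capex."""
    budget = lifetime_power_savings_usd - extra_orbital_capex_usd
    return max(budget, 0.0) / launched_mass_kg

# e.g. an assumed $200M of power saved over the hardware's life, $50M of
# extra space-rated capex, and 1,000 tonnes launched:
print(breakeven_launch_cost_per_kg(200e6, 50e6, 1_000_000))  # -> 150.0
```

If the extra space-rated capex alone exceeds the lifetime power savings, the breakeven launch cost is zero and no cheap rocket can save the business case.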
Any active cooling solution you can think of actually makes the problem worse (unless it's "eject hot mass").
There are dozens of companies solving each problem outlined here; if we never attempt the 'hard' thing we will never progress. The author could have easily taken a tone of 'these are all the things that are hard that we will need to solve first' but actively chose to take the 'catastrophically bad idea' angle.
From a more positive angle, I'm a big fan of Northwood Space and they're tackling the 'Communications' problem outlined in this article pretty well.
It's the opposite of engineering, where you understand a problem space and then try to determine the optimal solution given the constraints. This starts with an assumption that the solution is correct, and then tries to engineer fixes to gaps in the solution, without ever reevaluating the solution choice.
> Unlike traditional parabolic dish antennas, our phased array antenna can connect with multiple satellites simultaneously.
If that's how they plan to reach more than 1 Gbps, then that's not 100 Gbps per satellite; that's 100 Gbps for a collection of satellites.
Starlink is about 100 Mbps. That's 1000x less than 100 Gbps.
But for a more nuanced and optimistic take, this one is good and highlights all the same issues and more https://www.peraspera.us/realities-of-space-based-compute/
(TLDR: the actual use cases for datacentres in space rely on the exact opposite assumption from visions of space clouds for LLMs: most of space is far away and has data transmission latency and throughput issues so you want to do a certain amount of processing for your space data collection and infrastructure and autonomous systems on the edge)
Nobody is proposing data centers at the South Pole. This isn’t because it’s difficult. It is difficult, but that’s not the reason it’s not being looked at. Nobody’s doing it because it’s pointless. It’s a massive hassle for very little gain. It’s never going to be worth the cost no matter what problems get solved.
Data centers in space are like that. It’s not that it’s difficult. It’s that the downsides are fundamentally much worse than the advantages, because the advantages aren’t very significant. Ok, you get somewhat more consistent solar power and you can reach a wider ground area by radio or laser. And in exchange for that, you get to deal with cooling in a near perfect insulator, a significantly increased radiation environment, and difficult-to-impossible maintenance. Those challenges can be overcome, sure, but why?
This whole thing makes no sense. Maybe there’s something we just aren’t seeing, or maybe this is what happens when people are able to accumulate far too much money and nobody is willing to tell them they’re being stupid.
Latency-wise it seems okay for LLM training to put them higher than Starlink, to make them last longer and avoid decelerating because of the atmosphere. And for inference, well, if the infra can be amortized over decades then it might make the inference price cheap enough to endure the additional latency.
Concerning communication, SpaceX I think already has inter-Starlink laser comms, at least a prototype.
Similarly, making stuff with a great life expectancy is much more expensive than optimizing it for cost and operational requirements and keeping it somewhere you can replace individual components as and when they fail. It's also much easier to maximize life expectancy somewhere bombarded by considerably less radiation.
If anything, I'd expect large-scale Mars datacenters before large-scale space datacenters, if we can find viable resources there.
There are plenty of data centers in urban centers; most major internet exchanges have their core in a skyscraper in a significant downtown, and there will almost always be several floors of colospace surrounding that, and typically in neighboring buildings as well. But when that is too expensive, it's almost always the case that there are satellite DCs in the surrounding suburbs. Running fiber out to the warehouse district isn't too expensive, especially compared to putting things in orbit; and terrestrial power delivery has got to be a lot less expensive and more reliable too.
According to a quick search, Starlink has one 100 Gbps space laser on equipped satellites; that's peanuts compared to terrestrial equipment.
Underwater [0] is the obvious choice for both space and cooling. Seal the thing and chuck it next to an internet backbone cable.
> More than half the world’s population lives within 120 miles of the coast. By putting datacenters underwater near coastal cities, data would have a short distance to travel
> Among the components crated up and sent to Redmond are a handful of failed servers and related cables. The researchers think this hardware will help them understand why the servers in the underwater datacenter are eight times more reliable than those on land.
[0] https://news.microsoft.com/source/features/sustainability/pr...
The obsolete stuff can be deorbited or recycled in space.
You still have to build the GPUs, etc for the datacenter whether it’s on Earth or in orbit. But to put it in space you also need massive new cooling solution, radiation shielding, orbital boosting, data transmission bandwidth, and you have to launch all of that.
And then, there are zero benefits to putting a datacenter in space over building it on Earth. So why would you want to add all that extra expense?
As an armchair layman, this claim intuitively doesn't feel very correct.
Of course AI is far from a trustworthy source, but just using it here to get a rough idea of what it thinks about the issue:
"Ground sites average only a few kWh/m²/day compared to ~32.7 kWh/m²/day of continuous, top-of-atmosphere sunlight." .. "continuous exposure (depending on orbit), no weather, and the ability to use high-efficiency cells — all make space solar far denser in delivered energy per m² of panel."
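The ~32.7 kWh/m²/day figure in that quote is just the solar constant times 24 hours, which is easy to verify:

```python
# Checking the quoted top-of-atmosphere figure: solar constant at 1 AU
# times 24 hours of continuous illumination.
SOLAR_CONSTANT = 1361  # W/m^2 outside the atmosphere at 1 AU

space_kwh_per_m2_day = SOLAR_CONSTANT * 24 / 1000
print(round(space_kwh_per_m2_day, 1))  # -> 32.7

# A good terrestrial site averages very roughly 4-6 kWh/m^2/day after
# night, weather, and atmospheric losses, so a ~5-8x ratio is plausible.
```

Note that "continuous exposure" only holds for orbits that avoid Earth's shadow (e.g. sun-synchronous dawn-dusk orbits); most LEO satellites spend a substantial fraction of each orbit in eclipse.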