Alternative Biochemistry: Gas Giant

It’s time, once again, to play my favorite game: Alternative Biochemistry!

For more information, you can read the previous posts in this series. But here’s a quick summary of what we’re doing:

In short, the game works like this: you design a plausible alternative biochemistry for a given extraterrestrial environment, one that accords with chemistry and physics, and then discuss the implications of that biochemistry for the lifeforms made from it.

Last time, we looked at an iceworld, a rogue planet at frigid temperatures of 20 degrees above absolute zero, with life based on water ice crystals floating in oceans of Hydrogen. This time, we’re looking at something we have a bit more experience with: Big Gas Giants like Jupiter.

What could live here?

The Environment

Gas Giants are large. Like really, really large, and their internal structure is both complex and poorly understood. In broad terms, their composition is mostly Hydrogen, about 75%, with about 20% Helium, and the remaining 5% is heavier elements, things like Oxygen and Nitrogen and Carbon and Neon and Iron and Aluminium and Sulfur, which astronomers call “metals”.

The outermost layer is the atmosphere, where Hydrogen and Helium exist as gases, along with clouds made from water vapor or ammonia or sometimes more complicated carbon- or sulfur-based compounds. The atmosphere circulates very aggressively, with extremely high winds and big storms. In a smaller gas giant like Saturn, the atmosphere is poor in Helium, which is slowly settling toward the core under gravity.

As depth increases, both temperature and pressure increase. Past a certain point there is a phase transition in which Hydrogen becomes metallic. In this phase, the hydrogen molecules are pushed so close together that the electrons become “delocalized”, shared between large numbers of adjacent molecules as the hydrogen arranges itself into an ordered crystal.

Metallic Hydrogen only exists at very high pressures, and while we’re fairly confident that it does exist as a phase of matter (based on analysis of Jupiter and Saturn and their respective magnetic fields) it has not been studied directly on Earth because the pressures required are so difficult to produce and basically impossible to sustain.

The amount of metallic hydrogen grows quickly with the planet’s mass. Neptune and Uranus, with masses around 15-17x Earth’s, have none at all. Saturn, at 95 times Earth’s mass, contains only a thin layer of metallic Hydrogen, deep inside the planet and surrounding the core. Jupiter, at 318 times the mass of Earth, is composed mostly of metallic Hydrogen. As a planet’s mass rises, its gravity grows much faster than its radius does: the column of gas needed to reach the transition pressure becomes thinner, because each layer of gas weighs more, and that thinner layer makes up an ever smaller fraction of the planet’s radius.

Jupiter is mostly metallic hydrogen.

The inner portion of the metallic hydrogen layer is in a liquid state, with atoms not locked into a crystal lattice but still conductive, and underneath this is a core made from icy and rocky materials several times the mass of Earth.

The Biochemistry

It would be easy enough to describe a life form based on a carbon or sulfur biochemistry, derived from the material in the clouds, which floats, glides, or flies through the atmosphere of Jupiter, gaining energy where it can from sunlight or wind or internal heat. Such a biosphere could be interesting, but it would have only marginal impact on the planet as a whole: Jupiter’s atmosphere is extremely poor in “metals” like Carbon and Sulfur, and the lack of a solid surface makes it somewhat implausible that such a biosphere could really thrive. While this is absolutely reasonable feedstock for a science fiction story, here at Gamma Factor we sometimes like to push through to the heart of things.

Image of Jupiter’s clouds by the Juno spacecraft. What’s under there?

Here on Earth, biology has profoundly shaped geology and climate, making and remaking the surface, oceans, and atmosphere of our world. This Gas Giant biochemistry will do the same for a Jovian planet: We’re going to look at the phase transition between gas/liquid and metallic Hydrogen.

I mentioned above that the properties of metallic Hydrogen are not well known; naturally, the behavior of the material around the phase change is even less so. We are engaging in speculative chemistry, not real chemistry.

There is some depth within the planet where a mixture of hydrogen and helium is compressed into a liquid state, and some further depth at which the planet is composed of metallic Hydrogen. Between the two is the phase boundary.

The mechanics of the phase boundary are going to be extremely complicated. Hydrogen will be forming crystals around nucleation points, whether those are metallic hydrogen nanocrystals or fragments of ice or dust, first small and then larger; at the same time, helium diffusing into the solid and slight changes in pressure or temperature may cause melting, boiling, or freezing.

It is these nanocrystals which are, in my opinion, most interesting. I suspect that under these conditions Hydrogen may have a tendency to form Superatoms. A superatom is a collection of atoms (typically under 100) where the atoms’ outer shell electrons combine to orbit the grouping, which behaves chemically like a single atom. It’s not fusion (the nuclei remain separate and are bound together chemically), just an interesting bit of chemistry.

In the case of metallic Hydrogen, where each atom has only one electron, this is not likely to work in exactly the same way. But it’s easy to imagine how a nanocrystal of a certain size might have a few electrons to spare or share, and could thus interact chemically with other clusters whose electron counts complement its own, provided the substrate remains just on the edge of stability.

In the normal course of events, these clusters would eventually all combine, forming one normal-sized crystal of metallic hydrogen, then thousands and millions more as the metal congeals out of the liquid. But by controlling the amount of helium in the surrounding fluid, and by judiciously using (by chance, of course, over millions of years of phase changes) other elements like carbon, oxygen, nitrogen, iron, and sulfur to stabilize these superatomic compounds, these fleeting interactions could plausibly be stabilized, persisting for longer periods and over greater and greater ranges of temperature, and ultimately forming the macromolecules out of which life is made.

Theoretical Hydrogen Phase Diagram. Source.

I have already speculated far beyond the realm of human knowledge, so I won’t go into too much detail about the superchemical structures (i.e., chemistry done with superatoms) that might come about. It’s likely the case that bonding between these superatoms will tend to be more covalent than ionic, which is a good starting place for biochemistry.

What I can say is that at 500 GPa (5 million atmospheres) of pressure and beyond a lot of energy can be stored as volume changes: A phase transition where the volume declines without increasing pressure can be an excellent means of storing energy, releasing 500 MJ per Liter of volume change. Subcooling Hydrogen (taking it to a pressure where it would normally change into its metallic form, thus falling in volume, but inhibiting it from doing so) could be the means of energy storage for these life forms.
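
To put a rough number on that claim, here’s a minimal back-of-the-envelope sketch. The 500 GPa figure is the transition pressure from above; the one-liter volume change is an arbitrary illustration, and the whole calculation assumes the collapse happens at constant pressure:

    # Energy released by a volume collapse at constant pressure: W = P * dV.
    PRESSURE_PA = 500e9   # ~500 GPa, the assumed transition pressure
    DELTA_V_M3 = 1e-3     # one liter of volume change, in cubic meters

    energy_joules = PRESSURE_PA * DELTA_V_M3
    print(f"{energy_joules / 1e6:.0f} MJ per liter of collapse")  # -> 500 MJ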

The source of energy, ultimately, will be the planet itself: Jupiter radiates nearly twice as much energy into space as it receives from the Sun, energy that ultimately comes from the very slow contraction of the planet as it cools. In practice it’s likely that convection currents, and the heating they cause, will be the energy source for these life forms, possibly transmitted to them by the magnetic fields those currents create. It’s also possible that deep inside Jupiter there’s an equivalent of volcanic activity which could power life just as black smokers do on Earth. Jupiter is a massive, roiling world whose insides are still basically unknown to us.

Global Characteristics

Superatoms being larger than the regular variety, I would expect these Jovian lifeforms to be larger than comparably complex Terran forms but, in the high gravity, flatter and more layered, like mats of algae covering the planet. The stabilized superatomic hydrogen compounds of which they’re made, both the living and the once-living, may coat the planet in layers, the way limestone on Earth is built from the shells of long-dead marine organisms.

But more interesting to me is its effect on the entire planet. These lifeforms will be made, ultimately, from Hydrogen, and will need to very carefully modulate their helium composition in order to maintain their stability. This will likely lead to helium being pushed out into the atmosphere, while Hydrogen is sequestered as organic matter. At the end of this process, you may have a gas giant whose atmosphere is composed primarily of Helium.

We now have telescopes powerful enough to start analyzing the atmospheric composition of exoplanets. And should we find one whose atmosphere is highly enriched in helium, at the expense of Hydrogen which really should be there, this may be a reason why.

JWST found water vapor in the atmosphere of a planet 1,000 light years away

It’s Not Hydrogen

Hydrogen is in the air again.

Hydrogen is in the air again, but hopefully not like this.

The basic appeal of Hydrogen is, in some respects, the same as that of biofuels, and it is often proposed by some of the same people. As I wrote:

It’s a tidy solution in some ways because nothing has to change except where the fuel is coming from, which is what many people want, especially the ones who are in charge of everything.

That tidiness hides a lot of issues. And while the drawbacks of biofuels mean that they only really make sense in special cases, the drawbacks of hydrogen mean that there’s virtually no context where it makes sense as a means of energy storage.

It runs on water, man!

There are plenty of drawbacks to talk about, but first I want to cover the benefits that draw people to Hydrogen: it has the highest energy density of any chemical fuel on a mass basis (142 MJ/kg when the weight of the oxygen is excluded), and it burns clean, producing only water (which neither pollutes the air nor contributes to climate change) as a combustion product.

Unfortunately, the benefits end there.

Hydrogen has several critical drawbacks that, taken together, mean that it’s worth using only in extremely niche applications. It’s not a source of energy, only a storage mechanism. But it’s hard to store, requiring high pressure, low temperature, or both, and even then requires very large volumes. It’s a light gas that seeps through and weakens solid metal. It’s extremely explosive in air.

Hydrogen Doesn’t Generate Energy, Only Stores It

Strictly speaking, this is not quite true: certain ways of generating Hydrogen do yield net energy. But when you look into how they work, it only highlights all the ways that Hydrogen is not a viable part of our energy system.

There are three general processes for generating Hydrogen, two extracting it from fossil fuels and one extracting it from water.

The first is called Gray Hydrogen. To produce Gray Hydrogen, you start with a fossil fuel (typically natural gas, made mostly of methane with some other hydrocarbons) and react it with steam over a catalyst, producing carbon monoxide and Hydrogen (CH4 + H2O → CO + 3 H2). The carbon monoxide is then reacted with more steam to yield carbon dioxide and still more Hydrogen (CO + H2O → CO2 + H2; as bad as CO2 is, CO is worse), and the hydrogen is used for energy.

Meet the new energy system, same as the old energy system

In effect, this process has the same emissions as using natural gas and produces less net energy. It requires more equipment, and produces Hydrogen, an inferior fuel. I see no reason why this process would be used for energy purposes (hydrogen can be used as an industrial process gas, which is a different case not relevant to this post).

The next is Blue Hydrogen. Blue Hydrogen is the same process as Gray Hydrogen, but with carbon capture and sequestration at the refinery’s point source. In effect, it’s the same idea as “Clean Coal”. I have serious doubts about whether hiding gases underground is a permanent solution, and some studies have estimated that Blue Hydrogen actually has more emissions than Gray. I don’t agree with all of the assumptions in that study, but the emissions will tend to be more similar to fossil fuels than different. Blue Hydrogen adds yet another process on top of Gray Hydrogen, and will cost that much more. It’s not that emissions-free fuels aren’t worth paying more for, so much as that I think there are better options, and I don’t believe this process is truly emissions-free.

Finally, there’s Green Hydrogen. The Green Hydrogen process uses electricity to split water into its component elements, Hydrogen and Oxygen, through electrolysis. If that electricity comes from a green source (Solar, Wind, Nuclear, Hydro, or equivalent), the process can be truly emissions-free.

Of the three options, Green Hydrogen is the least bad: It does, at a minimum, achieve the intended goal of cutting greenhouse gases out of energy production. But it’s important to remember that it does not generate energy. All it does is store it and move it around.

First, this means that Hydrogen alone can never be a solution to decarbonization. That energy has to come from somewhere, and that source needs to be or become carbon-free. But it also means that the efficiency really matters. Best case, between electrolysis to produce hydrogen and a fuel cell to extract energy, you’re going to lose about 50% of the energy you put in. The electrical grid loses about 5%. Between greater losses and greater difficulty of transportation, it will in almost every case be better to get your energy from the grid if it’s possible to do so, circumventing Hydrogen entirely with transmission wires.
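
As a sketch of that arithmetic, with assumed (and fairly generous) efficiencies for each step; none of these figures describes any particular real system:

    # Round trip comparison: green hydrogen vs. direct grid delivery.
    electrolysis_eff = 0.75   # assumed electricity -> hydrogen efficiency
    fuel_cell_eff = 0.60      # assumed hydrogen -> electricity (optimistic)
    grid_loss = 0.05          # typical transmission and distribution loss

    via_hydrogen = electrolysis_eff * fuel_cell_eff   # ~0.45
    via_grid = 1.0 - grid_loss                        # 0.95

    print(f"Via hydrogen: {via_hydrogen:.0%} of the energy delivered")
    print(f"Via the grid: {via_grid:.0%} of the energy delivered")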

Energy Storage That’s Hard to Store

Conceptually, the best way to understand Hydrogen is as a non-fossil version of natural gas, but with properties that make it less useful as a fuel.

Hydrogen is most often stored as a gas pressurized to 200 atmospheres. That way, if anything happens, it can burn and explode at the same time.

Like natural gas, there are two main ways to store hydrogen: As a compressed gas around 200 atmospheres and as a liquid. Each has its own drawbacks.

Starting with liquid: Hydrogen’s boiling point is 21 K. That’s -252 C, -422 F. By comparison, natural gas liquefies around 111 K/-162 C/-260 F. I know both of these just sound like extremely cold temperatures, but there’s actually a big difference. The gap between 21 K and 111 K is 90 K, almost as large as the gap between the freezing and boiling points of water. And 21 K is proportionally far closer to absolute zero, which means it takes much more energy to cool down to and stay at, and that the rate of heat leakage and boiloff will be much higher.

And even then, the density of liquid hydrogen is abysmal, just 7% that of water. Despite its high energy density per unit mass, this means its energy density per unit volume is middling, and substantially worse than other carbon-neutral fuels like bioethanol.

Unfortunately, compressed hydrogen is actually even worse on this measure. What you gain in non-cryogenic temperatures, you lose in very high pressures and even lower densities: compressed hydrogen is about 4.5x less dense than liquid hydrogen, with correspondingly lower volumetric energy density.

Despite its very high energy density on a mass basis, Hydrogen’s low density means it still comes out behind.
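
The comparison is easy to reproduce: volumetric energy density is just energy per mass times mass per volume. The densities and heating values below are rounded reference numbers (hydrogen is given its lower heating value), so treat the outputs as approximations:

    # Volumetric energy density = (MJ per kg) x (kg per liter).
    fuels = {
        # name: (MJ per kg, kg per liter), rounded reference values
        "liquid hydrogen (21 K)": (120.0, 0.071),
        "compressed hydrogen (200 atm)": (120.0, 0.016),
        "ethanol": (26.8, 0.79),
        "gasoline": (44.0, 0.75),
    }

    for name, (mj_per_kg, kg_per_l) in fuels.items():
        print(f"{name}: {mj_per_kg * kg_per_l:.1f} MJ/L")
    # liquid H2 ~8.5 MJ/L, compressed H2 ~1.9, ethanol ~21, gasoline ~33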

For natural gas, it’s not a question of whether the liquid or compressed gas is better so much as when each is appropriate. Liquified Natural Gas is typically used when the gas is being transported (eg on a boat or truck), and sometimes for storage. Compressed Natural Gas is used in pipelines and for long term storage in most cases.

Note that natural gas has not caught on as a fuel for vehicles, especially for personal cars. The reason is that both flammable cryogenic liquids and flammable high pressure gases are dangerous and should not be handled by the general public. If regular people were to fill their cars with either on a regular basis, you would see very frequent accidents involving fires and explosions.

A Disaster Waiting to Happen

Hydrogen’s flammability is rated as “4”, which is the highest level.

Hydrogen is a very flammable gas.

Hydrogen is flammable enough that even engineers and physicists avoid it when they can. Hydrogen gas can support combustion at concentrations in air as low as 4%, and has one of the lowest ignition energies of any known substance: Even a small static charge can be enough. Hydrogen will diffuse very quickly through the air. Liquid hydrogen is cold enough that liquid oxygen will collect on adjacent surfaces, with potentially catastrophic results. A Hydrogen flame is invisible, which means you may not know there is one until it’s too late. The gas will diffuse through and weaken Iron (the most important industrial metal when alloyed to make Steel), potentially causing catastrophic failure.

Hydrogen is much more dangerous than gasoline or natural gas. Handling it safely requires a great deal of training, care, and attention to detail, and the more it is used the more accidents there will be.

I have worked with Hydrogen, and my recommendation to you on safety is that you should not work with Hydrogen if you can avoid it, and my recommendation to policymakers is that they should not make policy that will result in people having to work with it.

What Are We Really Trying to Do?

The upshot of all of this is that Hydrogen has a diverse set of drawbacks that make it a poor choice for almost any application. For grid-scale power storage in the context of high usage of variable renewables, there are solutions like flow batteries (along with various other battery types that have poor energy density but lower cost), compressed air, pumped hydro, or a global electric grid. For ground transportation, Hydrogen is no better than batteries on volumetric energy density, while requiring inconveniently-shaped cylindrical tanks and posing a large explosion hazard. For air travel, where mass and volume are both important, bioethanol is a better choice. And wherever possible, you want to plug directly into the electric grid to get power from the source.

Bioethanol is carbon-free in the same way that Green Hydrogen is: If farm equipment is replaced with electrical equipment powered by green energy, then the biofuels produced are carbon free.

I don’t have enough information to speak about costs in the abstract, but my expectation is that batteries (of a kind optimized for the application) will be the cheapest option, with bioethanol available at a higher price for specific cases, like aviation, where higher mass and volumetric energy density are needed.

Hydrogen is not compatible with existing infrastructure, which means there’s no short-term argument for it, and it is not desirable as a long-term solution. Its use should and will remain limited to its applications as an industrial process gas.

The Twilight of SLS

In late 2009, the national mood in the United States turned sour.

President Obama had won his historic victory in a landslide just a year earlier. But it had been a long year. The Financial Crisis of 2007-8 had metastasized into the Great Recession of 2008-9. Unemployment was high and rising, wages were falling, and millions of people had lost their homes to foreclosure.

The government had stepped in to save the banks, but the stimulus bill had disappeared into the economy as unemployment kept rising. Healthcare negotiations were contentious and stalled. Across the country, the Tea Party was getting attention and support, demanding tax cuts and balanced budgets. People were suffering, and they were mad, and a lot of the blame fell (incorrectly, we can now say) on the Keynesian deficit spending Democrats had passed to try to rescue the economy.

So budget cuts were in the air. All of this would eventually end in the Republican wave of 2010 and the Budget Control Act of 2011, which cut $900 billion out of the federal budget over a decade. But the machete of austerity came for NASA even sooner than that.

Bush’s Legacy

From 2004 to 2009, NASA had been developing hardware for the Constellation Program, George W. Bush’s initiative to return to the Moon and eventually continue on to Mars. At the same time, it had brought the Space Shuttle back into flight after the Columbia disaster, operating it through its planned end in 2011, and supporting construction and operations on the International Space Station.

Constellation wasn’t going well. Bush had announced an ambitious program with an even more ambitious schedule. It would feature two rockets, one of which would be the largest ever launched. Congress was not so enthusiastic: The program was underfunded, while also being over budget, behind schedule, and facing serious technical issues.

Why would they do this

President Obama empaneled a committee, which concluded that Constellation needed either more money or less ambition. He chose the latter path, and dropped it from his budget request the following year.

It was a bad time for the US Crewed Spaceflight program. After the Columbia disaster, the Space Shuttle was widely viewed as a disappointment: Too expensive, of limited utility for exploration, dangerous for crew, limited to a flight rate of only a few per year, and inexorably winding down towards its final flight. Falcon 9 had not yet flown. Orion, Starliner, and Dragon were years from completion. Ares I was cancelled, permanently, with no replacement on the horizon. Obama’s “New Direction” amounted to nothing at all, an assortment of ideas that nobody expected to actually happen. The United States was paying Russia to launch astronauts to the Space Station aboard the ancient Soyuz craft because we had no crewed launcher of our own post-shuttle.

The problem wasn’t just that poor planning or unforeseeable events had left us in a tough spot. The history of the US Space program over the 40 years since 1969 felt, at that time, like a story of decline: One President would start something, then the next would cancel it, usually before it came to fruition. Nixon killed Apollo, Shuttle was a disappointment, Return to the Moon was dead, and the Space Station had an expiration date. What were we even really trying to do? Crewed space travel is expensive, and the scientific returns are limited. Perhaps it was a luxury we would no longer support.

Rising like a Phoenix

This is when SLS was born. Congress, not the President, has the power of the purse, and what Obama had shown in cancelling Constellation was that crewed space travel was not important to him. From this point on Congress, and not the President, would be running the show.

Congress is many things: It’s idiosyncratic, it’s parochial, it’s slow to change, it loves the military-industrial complex, and its members like to save jobs in their states and districts. SLS accomplishes all of those things by keeping the production line for Shuttle alive. It was less important at the time that there were no concretely planned missions for this rocket. Build it and, perhaps, they will come.

Image by ULA. I know this post is not very favorable towards SLS, but I still love a big rocket.

And for what it’s worth, if you’re lowering your expectations and just trying to build a big rocket as easily as possible, SLS is not a bad design. It reuses as much shuttle hardware as possible, meaning that it’s reliable, well-characterized, and carries low program risk. With high congressional support and low congressional turnover rates, it was politically well optimized to actually be completed. It is in some senses the minimal viable product for superheavy lift. In this context, the high cost of development and use is a strength and not a liability, because it means that as many congresspeople as possible get a slice of the pie for their constituents. Does pork by any other name not taste as sweet?

From the standpoint of the early 2010s, SLS was the only reasonable path to superheavy lift, even if there were a lot of open questions in the program design. No reasonable person would have expected SpaceX to be as successful in that time as they have been, to the point where their own superheavy launch vehicle now stands, possibly ready for an orbital attempt.

And this is where it gets interesting.

The Tortoise and the Hare

SLS is a business-as-usual rocket, built mostly from existing hardware by existing contractors under the existing paradigm. Government contracting is important, and the way things are done under Business As Usual is wasteful in a lot of ways. SLS cannot challenge that paradigm, because it draws its support from serving the existing powers. SLS’s greatest strength is that it’s shuttle-derived, because you know what you’re getting; but that’s also its greatest weakness, because it’s expensive and can’t launch very frequently.

Image from SpaceFlightNow

SLS also exists now. As I write this it’s sitting on the pad. Because it’s built from legacy hardware, we have every reason to think it will work as advertised: Its very first mission will launch an uncrewed Orion capsule around the Moon and back.

Starship is a whole different beast. Elon Musk makes a lot of claims, only some of which are really believable. $2 million per launch? Launching every two hours? I’ll believe it when I see it.

What I do believe is that the vertical landings Falcon 9 has perfected will be performed here also, with improved launch site operations that allow for even lower costs; that the first stage will be rapidly reusable; that the second stage will be reusable eventually; and that SpaceX will be able to actually leverage reusability for a meaningful reduction in costs.

NASA seems to believe it too: A Starship-derived vehicle was chosen by NASA for the Artemis III crewed Moon landing, which is scheduled for sometime in the 2025-2026 timeframe.

Head to Head?

As I write this, SLS is a real launch vehicle that’s ready to go, and Starship is not. But that won’t be true forever, as SpaceX really has completed a lot of the hardest work on Starship (the engines, the tanks, and the vertical landing tests, for example), and is working hard on the rest. There will, I hope, come a day when they are both in service at the same time, and thus will be competing with each other in at least some sense.

In the arctic winter, dawn and dusk are just a few hours apart. So too may dawn and dusk be for SLS.

So you have SLS, at about $2 billion per launch, and Starship at, let’s say, $100-200 million per launch with a little more payload. SLS has demonstrated reliability through heritage, and Starship through volume. Which would you choose for your mission or constellation? In just about any case I have to imagine you would prefer to save $2 billion. My cost overrun is your budget cut, after all.
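
To make the comparison concrete, here’s the cost-per-kilogram arithmetic with assumed round numbers: SLS Block 1 is rated for roughly 95 tonnes to LEO, and Starship’s price and payload here are taken from the hypothetical range above, not official figures.

    # Illustrative $/kg to LEO; all prices and payloads are assumptions.
    sls_cost, sls_payload_kg = 2_000e6, 95_000           # ~$2B, ~95 t
    starship_cost, starship_payload_kg = 150e6, 100_000  # assumed midpoint

    print(f"SLS:      ${sls_cost / sls_payload_kg:,.0f} per kg")            # ~$21,000
    print(f"Starship: ${starship_cost / starship_payload_kg:,.0f} per kg")  # ~$1,500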

What advantages might SLS have over Starship? Payload volume on Starship is limited by the vehicle’s diameter to about 8m across, whereas SLS can accommodate larger fairings with relative ease. Because SLS has a nonreusable upper stage, it can launch payloads directly into their desired orbit, whereas Starship likely needs to stop in LEO and let a transfer stage do that boosting. These are real advantages, but not huge ones relative to the difference in cost.

And, of course, SLS exists now. But even this is undermined by the choice of Starship for HLS: SLS is a moon rocket, but without Starship it cannot actually land people on the Moon.

So my view is that the lifetime of SLS is short, likely measured in years, before it’s rendered obsolete by the newest generation of heavy and superheavy vehicles: without being designed for meaningful reusability, SLS simply cannot match their cost advantages.

We are very lucky to live in an era with such rapid progress in launch vehicle technology.

I Have No Choice but to Hate Obamacare

In December 2019, I made a huge mistake: I turned 26.

I had left my job earlier that year, and consequently lost my healthcare. The healthcare I had at that job wasn’t that good (I had a $1400 deductible and a pretty limited network), but it was affordable. The COBRA premium was way higher, and since I had also just lost my income it would’ve been out of reach.

I went back on one of my parents’ healthcare for the last few months of that year, but time moves on, and I was on my own for 2020.

A big medical bill could easily be ruinous (hundreds, thousands, tens of thousands, more? Anything can cost anything.), and New Jersey, where I was living, had just passed an individual mandate: I would be fined $2,000 or more for not having health insurance.

So I logged on to healthcare.gov, where I found I had dozens of options, each worse than the last.

Based on my estimated income, I qualified for a small subsidy of about $40 a month. That meant I could pay about $250 a month for a catastrophic plan with an $8000 deductible (prior to 2018, such a plan would not have been considered insurance at all), $300 for a bronze plan ($3000 deductible, 50% coinsurance, $8000 out-of-pocket max), or something approaching good coverage for $650 a month or more.

I went with the bronze plan. It was with a company that specialized in the Obamacare individual market, advertised on the subway about their customer service, and promised free video visits. Given the high deductible common to every Obamacare plan, free video visits seemed like the only way I would ever actually get to see a doctor. I don’t really know how to pick the best plan from such a large variety of terrible choices.

I had that plan for nine months. In that time I paid almost $3000 in premiums and received about $30 in benefits (my annual flu shot). I also had a covid test, which was $125 and not covered (even though it was legally required to be) because it was out of network and I didn’t have the bandwidth to fight for reimbursement after the initial rejection. And it turned out that, though free, video visits were hard to book promptly (what good is talking to a doctor two weeks after you get sick?).

In September of 2020, I got healthcare through a new job, and I could finally cancel my useless-but-expensive Obamacare coverage, which I did at the first possible opportunity.

Obama liked to say that if you like your plan you can keep it, but he never mentioned that if you hate your plan it will haunt you for the rest of your life.

The cancellation didn’t take (clerical error? Website glitch?), and I got a bill for the month of October. I cancelled the plan again, and after spending some time on the phone with customer service also had the bill voided.

Then in December 2020 I was automatically re-enrolled in that plan for 2021 by Healthcare.gov.

This time, I cancelled through healthcare.gov rather than through the insurance company, and after spending several hours on the phone I was assured that this wouldn’t happen again.

Except it did, in early January, when I got an email telling me that my payment was late. So I called up the insurance company, again, and cancelled the plan, again. I half-seriously told people that I looked forward to being automatically re-enrolled for 2022 in this plan, which I had at this point cancelled four times.

Unfortunately, my silly little joke came true. In November 2021, I got an email saying that I was being automatically re-enrolled in the plan I had not been enrolled in for over a year.

Is this something I’m going to have to do every year for the rest of my life?

God knows what happens to you if you don’t pay your premium. Do they send it to collections? Destroy your credit? Harass you forever? Get the government involved?

I called up the insurance company, yet again, to cancel the plan, yet again. Delete my account from your system, do not ever reenroll me in this plan, I don’t want to hear a damn thing from you ever again. And they did eventually do that, but they said I needed to talk to the federal exchange because that’s where the automatic reenrollment is coming from.

So I called up healthcare.gov, and after spending an hour waiting for help I was finally told that New Jersey had opened up their own exchange and I needed to talk to them. But they couldn’t help me because I hadn’t activated my online account. I had never created an account with the New Jersey exchange, and there was an activation code buried deep in my email alongside 10,000 spam emails that I ignore.

After spending even more time on the phone, I was eventually able to unwind the auto-reenrollment. It turns out that when Chris Christie handed over the governor’s mansion to a Democrat, that Democrat (Phil Murphy) decided to open a state-based exchange instead of relying on the federal one. They ported over the data from the federal exchange just before I cancelled my auto-reenrollment there, and apparently never thought to check back with updates, thus automatically re-enrolling me against my will in this plan for the third time.

It had been an ordeal, having to cancel this plan five times over 15 months, but it was over. I had spoken to everyone involved, and finally gotten them on the same page, to delete me from the system, to not re-enroll me ever again. It was over.

Or was it!

January rolled around and I got a tax form from the exchange. This form, the 1095-A, detailed the premiums I had paid and the subsidies I had received for my Obamacare plan for all 12 months of 2021.

Of course I had paid no such premiums, and received no such subsidies. I had cancelled this plan three times before it could even have started.

The problem is that the form wasn’t just wrong, it was dangerously wrong.

The way Obamacare works is that when you do your taxes, you report your income and, depending on what it is and what subsidies you received, settle up with the government through the IRS. I am not currently eligible for Obamacare subsidies, based both on income and on the fact that I have healthcare from another source, which means I would be legally required to repay $1700 in subsidies that I had not received, for a plan I had gone to great lengths to not have.

The IRS gets a copy, so if I didn’t pay I would be very likely to get in trouble for underpayment of taxes. And even if I did want to just give up and pay it—allow the insurance company to steal almost $2000 out of my pocket through fraud or incompetence—I could get in trouble for submitting incorrect tax forms.

So I had to try to get it corrected.

So far, I’ve made five phone calls, including two conference calls with the New Jersey exchange and the insurance company, totaling 6 hours trying to get it fixed. I’ve had to use some of my limited time off begging people from the exchange and the insurance company to help me. So far they’ve revised the form twice, both times wrong. At the time of writing, I am waiting for the third revision, which hopefully will be correct. They promised me it would be done and done correctly by yesterday evening. I have heard nothing.

I would like to be able to tell the full story here. But 18 months after first cancelling my plan, it would be foolish to believe that this story will end any time soon. Despite my best efforts, I can’t help but think that I will find myself automatically re-enrolled when 2023 comes around and dealing with this whole thing again.

In summary: I had a healthcare plan through Obamacare that I hated, which was low quality and unaffordable, and provided virtually no benefits to me. I cancelled the plan as soon as I had another option. In the past 18 months, I have had to cancel it four times in total. The insurance company seems to have wrongly received $1700 on my behalf, which I will be legally obligated to repay if I can’t get the tax forms fixed, and getting the exchange to correct its records has been a nightmare.

And I’m not the only one.

Obamacare plans are expensive and bad, by design. That’s how the law works. Even the gold plans don’t really offer such good coverage, and the subsidies are way worse than what you would get from your employer (let alone Medicare or Medicaid) unless you are very low income.

I have had particularly bad luck in cancelling the plan because of the unfortunate timing of the changeover from the federal to the state-based exchange in New Jersey. But I would have had substantial difficulty cancelling anyway. How was I supposed to know that calling the health insurance company and cancelling the plan would not actually cancel the plan? And why did both exchanges so zealously re-enroll me in a plan that I had already cancelled multiple times?

Obamacare enrollment rises every year. But how many of those enrollments are ghost plans like mine which people don’t have time to fight to cancel or don’t even know are still active?

The tax form, apparently, is a very common error. People at the insurance company, the NJ exchange, and the Federal exchange have all confirmed that they get a lot of calls about this. I spoke to a relative who also recently cancelled an Obamacare plan and they’re in the same situation.

This matters both because it’s creating a tax liability for lots of people, one they may not know about but can absolutely get in trouble for, and because the insurance companies appear to be getting lots of money from the government that they’re not supposed to get. These are my tax dollars and yours, going to insurance companies in exchange for services they have not provided, and nobody seems to know or care.

I’m no conservative. I have never voted for a Republican, and I never will. I cast my first ballot for Barack Obama in 2012, and have gone as far as to leave my ballot blank for offices contested only by Republicans.

But this system does not work, and nobody seems to care. What am I supposed to do?

Delta, Revisited

On July 10, 2021, I published a post titled “Delta Variant: Modelling the Threat”. In that post, I attempted to predict how Delta would go based on the level of immunity in the population to that point. Here were my conclusions:

I expect that about 5% of the US population will be infected with the Delta variant before cases level off, with a pretty low death rate. […] The US is largely protected by our relatively high vaccination rate, our high prior infection rate, the negative correlation between the two, and the strong performance of both artificial and natural immunity against Delta. […] The total population that will be infected is small and not very vulnerable, and this means that our surge in cases will likewise be fairly small.

There’s one numerical prediction (about 5% of the US population will be infected before cases level off) and one nearly-numerical prediction (the death rate will be “pretty low”). I will address these later to discuss what was right or wrong and why. But the post is suffused with a very clear attitude towards Delta: That it will be no big deal, that it’ll be fine.

Delta took a massive toll

This was wrong. Delta plagued the country for six months, causing two waves (the second was just cresting when Omicron hit), and at least 200,000 deaths.

As far as the numbers go: Refined estimates based on better data suggest that the R0 of Delta is about 5.1, so the numbers I was working off of were pretty close.

Estimating cases is harder.

The difficulty is that Delta ultimately exhibited a significant ability to evade prior immunity derived from vaccination or infection with other variants. Nothing like Omicron, but it means that my estimate that a vaccinated or infected individual had 95% immunity was incorrect. A better estimate is something like 75%, but fully vaccinated people very rarely face severe disease.

At the same time, Delta is more lethal to the unvaccinated and uninfected than previous strains.

As an approximation, I’ll assume the two cancel out, and use the same approximation as from the previous post: Infections are 200 times the number of deaths.

These are the key dates for Delta in the United States.

Between July 10th and September 16th, 2021, the United States recorded 70,417 deaths. Multiplying by 200 suggests there were about 14.1 million cases, which is 4.3% of the population of the United States. Compared to 5%, as I estimated at the top of the post (elsewhere I said 4%), I think this is a pretty good estimate of how many cases there would be before things leveled off and began to decline.

Unfortunately, this is just about the only thing I got right.

What should have been obvious is that cases rise, and then fall. Once the rise ends, the virus does not simply disappear. People still die, and those lives lost are just as much a tragedy. And in fact between September 16th and November 21st another 97,786 deaths were recorded, suggesting that about 5.9% of the population was infected.

It was likewise not surprising that when fall came cases rose again. This half-wave recorded 34,332 deaths (2.1% infected) before Omicron came and overwhelmed it. Between these three half-waves, about 12.3% of the United States was infected (mostly but not entirely the unvaccinated) and 202,535 deaths were recorded. By any measure, Delta was a catastrophe, small only in comparison to the even greater catastrophe of Covid-19 taken as a whole.
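
For transparency, here is the arithmetic behind those percentages, using this post’s own assumptions (200 infections per recorded death and a US population of roughly 330 million):

    # Infections estimated from recorded deaths, per the post's assumptions.
    INFECTIONS_PER_DEATH = 200
    US_POPULATION = 330e6

    deaths_by_wave = {
        "Jul 10 - Sep 16": 70_417,
        "Sep 16 - Nov 21": 97_786,
        "Nov 21 - Omicron": 34_332,
    }

    total_deaths = 0
    for wave, deaths in deaths_by_wave.items():
        share = deaths * INFECTIONS_PER_DEATH / US_POPULATION
        print(f"{wave}: {share:.1%} of the US infected")   # 4.3%, 5.9%, 2.1%
        total_deaths += deaths

    total_share = total_deaths * INFECTIONS_PER_DEATH / US_POPULATION
    print(f"Total: {total_deaths:,} deaths, {total_share:.1%} infected")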

More wrong than any particular aspect of my analysis was my outlook. At the time of posting, Covid felt like it was over. Washington DC, where I live, was reporting just 11 cases per day on average. I wanted it to stay over. I wanted to not have to care.

Unfortunately, the virus remains real whether you care about it or not. People still occupy hospital beds, and still die. And even if unvaccinated, they are people’s parents, children, siblings, friends, and acquaintances. I will try to remember the mistakes I made here, both the specific ones and the general ones, in the future.

Going Global

There is some great news on the horizon: after decades of exponentially declining prices, renewable energy is increasingly the cheapest way to generate electricity.

Renewable energy is getting much cheaper!

There’s some bad news too, though: the kind of renewable energy seeing the biggest decline in price is what’s called “Variable Renewable Energy”. This means it doesn’t produce electricity constantly or on demand, but instead produces it whenever conditions are right. That’s fine if it’s only a small part of the grid and other sources (like hydro or gas peakers) can pick up the slack. However, the goal here is, and must be, total decarbonization of the electricity sector.

I have been, and remain, a strong supporter of nuclear power. It is a safe, cheap, and carbon-free way to generate reliable power on demand. The technology has been frozen in the 1970s by disinvestment, so investment in modernizing it is low-hanging fruit where small effort produces big gains. Modern reactor designs will be smaller, cheaper, safer, and even more reliable. The waste storage issue has been solved in a permanent way in most places; in the United States it has been held up by politics.

Power generation relative to installed capacity by month for various sources of carbon-free energy. Data from the US Energy Information Administration (EIA).

Having said that, a fully nuclear-powered world creates a great risk of nuclear weapons proliferation, and frankly more to the point it is simply not the path that we have chosen. While I remain an enthusiastic supporter of nuclear power, and absolutely support both maintaining existing capacity and expanding with new capacity, solar and wind have a huge and probably determinative role to play in our carbon-free, solarpunk future.

This means we need to talk about storage. The sun, of course, does not shine at night. Nights are longer in winter than in summer. Clouds and other kinds of weather block the Sun. Wind power varies with the weather, although the tall wind turbines we build now experience less variation than you would expect based on surface-level wind.

It’s always nighttime somewhere, but it’s never nighttime everywhere.

There are a number of proposed ways of dealing with grid-scale carbon-free power storage, but it is not a solved problem. Currently, virtually all of our grid energy storage takes the form of pumped Hydro. Hydroelectric energy currently provides about 7% of our total electricity and is more-or-less tapped out. Hydroelectric Dams are also concentrated in certain parts of the country and cause other environmental damage. They can do some of the work, but not all of it.

The other technologies, from compressed air to batteries to flywheels, are in their infancy at best, and all face substantial hurdles (technical, economic, and political) before they can be deployed at scale. We have no choice but to stop emitting greenhouse gases, and these are solvable problems, so they will be solved (or else!). But I want to take this chance to argue for an idea that has as much if not more potential and a similar or higher level of technological maturity: The Global Electric Grid.

Consider, schematically, a world powered completely by solar power. You’ve got lots of energy in midday, some in the morning and evening, and none at night. Sometimes it’ll be cloudy and you have less than expected. On the opposite side of the world, they have the same pattern at opposite times. If you could connect those places with a power cable, there would be no need for storage. Wind power has a similar issue (there will be more or less wind in some places and at some times, but globally the amount of wind should be pretty constant).
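
Here’s a toy model of the idea: three identical solar farms spaced eight time zones apart, each producing a clipped-sine day/night output. Every number is purely illustrative.

    # Toy model: three solar farms spaced 8 time zones (120 degrees) apart.
    import math

    def solar_output(hour, offset):
        """Relative output of one farm; zero during its local night."""
        return max(0.0, math.sin(math.pi * ((hour - offset) % 24) / 12.0))

    for hour in range(24):
        combined = sum(solar_output(hour, o) for o in (0, 8, 16))
        print(f"{hour:02d}:00  combined output = {combined:.2f}")
    # The combined output stays between ~0.87 and 1.0 of one farm's peak,
    # even though each individual farm is dark for much of the day.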

The best places in the world for solar power are, from West to East, the American Southwest, Northern Chile/Western Bolivia, the Sahara, Southwestern Africa, and Australia.

What this means is that we could switch the grid over to an arbitrarily large amount of renewables without installing any storage capacity if we could build an electrical grid big enough. It would also give you the ability to generate energy in the world’s best spots and transmit it globally, potentially generating cheaper energy for everyone.

In short, the world stands to benefit from this system if it’s found to be cheaper to transfer electrical energy around it in this way than it is to build up the global storage capacity.

Whichever country builds this system stands to benefit from it immensely. Putting yourself at the center of the global grid system gives you the power to ensure that the generation systems in your country, rather than in another country, have the best access to it, ensuring the most efficient transfer of energy and giving you the benefits of being an energy exporter. And likewise, putting your demand sites in optimal locations relative to the grid means that they’ll benefit from cheaper energy.

This is an antipode map, which shows what’s on the other side of the world, combined with a population density map which shows how many people live there. The United States is in a great position to power South, East, and Southeast Asia (about half the world’s population) at night.

On top of this, whoever runs the grid will profit from doing so, likely as a near-monopoly distributor of energy to the whole world. Other countries will likely have influence (read: the ability to extract rent) to the extent that the transmission lines go through their territory, especially if they are particularly well situated; for example, the Strait of Gibraltar and the Sinai Peninsula will likely be key waypoints for transmitting the extravagant solar energy of North Africa to Europe and Asia respectively.

Overall, the economics and geopolitics of the system combine those of fuel oil and telecommunications: producers benefit on a per-unit basis, while those who run the system pay a large upfront cost in return for a low marginal cost to transfer power (provided they’re operating at or below max capacity and there are no disruptions).

The system is not without drawbacks. High-Voltage Direct Current (HVDC) transfer losses are sometimes quoted as 3% per 1,000 km. At that rate, you lose about half your power sending it halfway around the globe (roughly 20,000 km, the distance to the antipode). I do not view this as a dealbreaker: other means of evening out variability will also have losses. Batteries, for example, are actually fairly expensive, and you’ll be able to generate more energy from your panels and wind turbines by siting them in the best possible locations.
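
Compounding that quoted 3% per 1,000 km gives a feel for the scale of the losses; the 3% figure is itself a rough quote, so treat these as order-of-magnitude numbers:

    # Delivered power under an assumed 3% HVDC loss per 1,000 km.
    LOSS_PER_1000_KM = 0.03

    def delivered(distance_km):
        return (1 - LOSS_PER_1000_KM) ** (distance_km / 1000)

    for km in (1_000, 5_000, 10_000, 20_000):
        print(f"{km:>6,} km: {delivered(km):.0%} delivered")
    # 1,000 km: 97%; 5,000 km: 86%; 10,000 km: 74%; 20,000 km (antipode): 54%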

Having said that, it is desirable to minimize these losses to the greatest extent possible. Further increasing the voltage (UHVDC?) is one way. Reducing the resistance is another. The system can be built without them, but room-temperature superconductors would be of tremendous benefit here if they were to be discovered.

Here’s a list of what I see as the potential beneficiaries of such a project:

  • Everyone who stands to lose from climate change (i.e. everyone), to the extent that this system enables a faster and more complete transition to emissions-free energy sources
  • Whichever country or countries builds the system, creating jobs and a durable national revenue stream
  • Countries whose territory is strategically located at key points along the cables’ routes and are able to extract rents
  • Countries with lots of sunlight they can export as energy
  • Countries that need a lot of energy and don’t want to buy a ton of batteries
  • Countries that benefit from the reduced conflict that global energy interdependence will engender

Finally, I wanted to drop an extremely schematic map of what the system might look like. Note that it would in practice be much more complex than this; locations are not precise; many important things have been omitted.

Someone Wrote a Bad Fusion Article. Again.

Someone wrote a bad fusion article. Again.

This time it was Umair Irfan, writing for Vox. As I write this, the article is getting top billing on Vox’s (possibly algorithmic?) homepage. And it might be unfair to single him out, because his article is better than most. But this is just the latest example out of many, part of a renewed wave of fusion hype that is being published uncritically in almost every outlet that covers it.

A much worse offender than Vox is the Financial Times, which has a tendency to uncritically publish free advertising for one fusion company or another. These articles, each published in the last three months, are particularly egregious: all of them exclusively cite people with personal investments in the fusion projects being covered. None of them asks an independent expert whether the “breakthrough” being covered actually has a reasonable chance of succeeding.

As in all things there are better and worse examples, but fusion hype rests mostly on a basic error which I will discuss and dismantle here.

The Motte and the Bailey

If the Motte and Bailey is a bad structure for an argument, Mott’s and Bailey’s would make a pretty disgusting cocktail.

The Motte and Bailey is a kind of fallacious argument, well-described on Wikipedia as:

[An argument] where an arguer conflates two positions that share similarities, one modest and easy to defend (the “motte”) and one much more controversial (the “bailey”). The arguer advances the controversial position, but when challenged, they insist that they are only advancing the more modest position.

To boil it down to a simple example, I could say: “A lot of people dislike Nickelback. They are the worst band of all time.” The former claim is observably true, but the latter, totalizing claim, though related, is not supported by it. It’s easy to see why this is wrong when you’re talking about popular music that everyone has experience with, but for more complex or niche topics this kind of rhetorical structure can effectively hide important errors.

Unsurprisingly, a Financial Times article provided me with a perfectly distilled example. Note that in the quote below, “to do this” refers to fusion in general.

Nobody has ever built a commercially viable fusion reactor, but the Financial Times has perfected the science of fusing motte and bailey into a commercially viable number of retweets.

There are a variety of ways the motte can be stated. In the example above, it’s something like “net positive energy is much closer than 20 years away”. Other times it’s the more general claim that progress is being made towards commercially viable fusion. One FT article claimed that the belief that fusion was not achievable on Earth was only recently dispelled, despite the fact that fusion was first demonstrated in a lab in 1934.

There’s a spectrum of optimism for the bailey as well, but the core claim is that fusion is a realistic option for grid-scale power generation in the next couple decades. The FT quote above, and some others, conflate Q>1 (net positive energy) with this.

Net Negative Energy

They’re far apart now, get it?

Over the decades, the main criterion that fusion engineers have used to grade themselves has been the energy gain factor, Q, defined as the amount of energy generated from fusion divided by the amount of heat energy it takes to get the fusion plasma up to temperature. When Q is larger than 1, the reactor can be used to generate energy.

Or can it?

In fact, Q>1 is a necessary but not sufficient criterion for a useful power-generating reactor. Q is a figure of merit, not a measurement of system efficiency, and it disregards a whole host of key loss factors. Looking at Magnetic Confinement Fusion, which is the focus of most of the current research (the other kind of fusion is called “Inertial Confinement” and uses lasers instead of magnets), here are some examples:

  • How efficiently can you heat your plasma up to fusion temperatures?
  • How much energy does containment require?
  • How much energy is lost to X-rays and other penetrating radiation?
  • How much energy is carried away, unused, by your cooling system?
  • How exactly are you going to convert the energy generated into electricity?
  • How efficient will that conversion process be?
  • How much energy does it take to produce the fuel which will be burned in the reactor?

It is beyond the scope of this post to estimate a realistic range for each of these parameters individually. But in general terms, Q = 1 understates the true threshold for net power production by a factor of somewhere between 10 and 100 (and since there is always room for more losses, 100 is not an upper bound; it depends how good your design is).
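
To make that concrete, here’s a toy power balance. Every efficiency below is an assumption chosen for illustration, not a measurement of any real design:

    # Why plasma Q > 1 is not net electricity: a toy power balance.
    Q_plasma = 5.0         # fusion power out / heating power into the plasma
    heating_eff = 0.5      # assumed wall-plug -> plasma heating efficiency
    capture_eff = 0.8      # assumed fraction of fusion power caught as heat
    conversion_eff = 0.4   # assumed heat -> electricity efficiency
    aux_power = 0.5        # assumed magnets/cryo/pumps, per unit heating

    plasma_heating = 1.0   # normalize to one unit of heat into the plasma
    electricity_in = plasma_heating / heating_eff + aux_power              # 2.5
    electricity_out = Q_plasma * plasma_heating * capture_eff * conversion_eff  # 1.6

    print(f"Plasma Q:      {Q_plasma:.1f}")
    print(f"Engineering Q: {electricity_out / electricity_in:.2f}")       # ~0.64
    # A reactor with plasma Q = 5 that still consumes more electricity
    # than it produces, under these assumed efficiencies.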

Some of these fusion claims just don’t pass the smell test.

The highest Q ever measured in a controlled fusion reaction is 0.7, and that was in an Inertial Confinement reactor considered unsuitable for power production. ITER (the international project building a giant reactor in France) claims that it will achieve Q>5 when it begins full operation in the mid-2030s. To the best of my knowledge, none of the commercial ventures has published an operating Q value for their fusion designs.

Beyond that, many people involved in the fusion space are interested in aneutronic fusion reactions. This is a class of reactions that produces fewer neutrons, and thus less nuclear waste, than the standard reference reaction. It’s a sensible choice. Unfortunately, these reactions are themselves 10-100 times (or more) harder to sustain than that reference reaction.

It’s not enough, unfortunately, to produce a marginal amount of excess energy. In order to be useful, these reactors need to be literal powerhouses, pumping out gigawatts of electricity to satisfy the world’s growing demand. That will be determined by economics first, not by the measured Q of the reactor, but it likely requires the ability to build reactors with a Q orders of magnitude higher still than the bare breakeven value. This is not something we’re close to doing.

It’s the Economics, Stupid

Fundamentally, power generators work the same way whether they’re based on oil, coal, gas, fission, or fusion: Fuel undergoes a reaction which releases energy as heat, and that heat is converted into electricity. Which one of these is preferable depends on the relative cost of using each one. (Solar, Wind, and Hydro work differently).

In the past, coal was the primary fuel used in electric generators. More recently, natural gas has started to eclipse it as its cost has fallen along with improvements to gas generators. More recently still, both of these fuels have come to be regarded as non-viable because the greenhouse gases they emit cause so much harm.

It is in some ways a fool’s errand to try to discuss the economics of a technology that doesn’t exist (“Assume an economical fusion generator. What are its economics?”). But there are situations where a 1:1 comparison can be made based on the basic engineering properties of the system.

The simplest point of comparison is to nuclear fission. Both use small amounts of special, high-energy fuel. Neither emits carbon dioxide as a necessary part of its operation.

People wouldn’t invest a ton of money into these fusion companies if the economics didn’t work, right? Anyway, here’s a picture of Theranos founder Elizabeth Holmes photoshopped onto Thanos. Whatever happened to her?

The difference? Fission reactions are comparatively simple to make happen. Basically, when you get enough fission fuel together, the reaction starts automatically. And despite its reputation, the means of controlling this reaction are relatively simple, safe, and well-understood. By contrast, as I discussed above, a fusion reaction is extraordinarily difficult to sustain. It requires advanced technology, high precision, and careful analysis, and even then (at least so far) requires very large reactors to generate fairly small amounts of power.

Fusion proponents say this is a benefit, because it increases the safety of the reactor: if anything goes wrong, it will shut down automatically. But I believe it is in fact a drawback when your means of generating energy is so difficult to sustain that you can barely keep it running.

I will discuss the question of radiation later, but suffice it to say that the claimed benefits there are essentially nonexistent, and the comparison may actually favor fission in relative terms.

So, in my view, fusion will tend to be a worse option than fission. And given that fission reactors exist now, and have suffered from underinvestment since the 70s (lots of low-hanging fruit to pick in terms of improvements!), it’s hard to see how fusion reactors would come to dominate the energy industry based on cost.

Fusion Reactors Generate Radiation. Lots of It.

Concern about radiation is actually two related concerns: First that a catastrophic failure could lead to the release of radiation to the environment, and second that nuclear waste disposal is an unsolved problem.

I want to mention that I think both of these concerns are overstated with regards to fission. No nuclear power reactor in the United States has ever released enough radiation to be harmful (yes, that includes Three Mile Island). The Fukushima meltdown, caused by a tsunami that itself killed nearly 20,000 people, killed 0. Only at Chernobyl, where a bad reactor design was combined with catastrophic mismanagement, was real harm done.

Nuclear waste is similarly overstated as an issue. It is well known that you can store waste by finding a stable, secure location and putting the waste there. Various countries do this in various ways, but the key thing is just to pick a spot and commit to it. The US has not done this.

The risk of the former is greatly reduced with fusion, from very low to nearly zero. But the latter is just as significant for fusion as for fission. And while I feel that neither is a great barrier, if fusion advocates are going to argue that fusion generates less nuclear waste than fission, I feel the need to explain why this is wrong.

The key thing to understand about nuclear waste is that only a tiny minority of it is fission products in the reactor core. Though intensely radioactive, these are a small amount of material which decays quickly into less harmful (though still radioactive) isotopes. By far the greater volume of material is coolant and structural material which becomes quite radioactive as a result of exposure to neutrons from the reaction. So the question becomes: how many neutrons does each reaction generate, and how much energy do those neutrons carry?

Starting with Fission: Each fission event produces 169 MeV of usable energy and 2.5 neutrons on average, with a mean energy of 2 MeV per neutron. So in total, the reaction releases 0.015 neutrons per MeV and 3% of its total energy in neutrons.

Now, let’s assume that the current crop of fusion companies is successful beyond anyone’s wildest dreams, and builds a reactor many thousands of times better and thousands of times cheaper than anyone has seen before. How much radiation can we expect?

There are three main fusion reactions of interest: Deuterium-Tritium, Deuterium-Deuterium, and Deuterium-Helium-3. Deuterium-Tritium is what I called the “reference reaction” above. The table below shows the neutron production of each.
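
Since the table reads naturally as arithmetic, here’s a minimal sketch of the same comparison, built from standard textbook per-event values and mirroring the fission calculation above; the D-He3 line counts only the primary, aneutronic branch:

```python
# Neutron production per unit of energy, fission vs. fusion.
# Per-event energies and neutron counts are standard textbook values;
# D-D averages its two roughly equally likely branches
# (T + p at 4.03 MeV, and He-3 + n at 3.27 MeV).

reactions = {
    # name: (energy per event, MeV; neutrons per event; neutron energy, MeV)
    "Fission (U-235)":     (169.0, 2.5, 2.0),
    "D-T":                 (17.6,  1.0, 14.1),
    "D-D (both branches)": (3.65,  0.5, 2.45),
    "D-He3":               (18.3,  0.0, 0.0),  # primary branch only; D-D
                                               # side reactions still occur
}

for name, (e_total, n, e_n) in reactions.items():
    neutrons_per_mev = n / e_total
    energy_fraction = n * e_n / e_total
    print(f"{name:22s} {neutrons_per_mev:6.3f} n/MeV   "
          f"{energy_fraction:5.1%} of energy in neutrons")
# D-T comes out around 0.057 n/MeV with ~80% of its energy in neutrons,
# versus ~0.015 n/MeV and ~3% for fission.
```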

Note that Helium-3 is an extremely rare isotope, so rare that it can’t be gathered on Earth in the necessary quantities. There have been proposals to mine the Moon or Saturn to get it. As much as I would like to see that, I do not think it is viable, for a number of reasons including the cost and the fact that it’s barely more common there than here. The other way to get it is to use a breeder reactor which fuses deuterium to produce Helium-3. As you can see from the D-D numbers, this more or less negates any potential benefit of the reaction. Note that breeder reactors typically cannot double as energy-generating reactors.

In addition to a reactor thousands of times better than what we’ve got, even marginal improvements over nuclear fission may require a mining colony on Saturn.

If You’re a Journalist and You Made it This Far

Fusion proponents are making extraordinary claims, and if you’re going to write an article about them you should ask them basic questions about what they’re doing and how and why. You might be surprised to find they don’t have particularly good answers.

Government Contracting is the Most Important Thing We Never Talk About

Government contracting is the most important thing we never talk about.

Everyone hates “Cost Plus” contracts, where contractors are paid for their expenses (labor, materials, facilities, subcontractors, etc) plus either a percentage or a flat fee to make the contract profitable for them. In general, this should be avoided wherever possible, because it incentivizes contractors to charge loosely related costs to your contract and because it gives no—or even negative—incentive to work efficiently.

One thing that’s often mentioned is that cost-plus contracts impose particularly onerous accounting procedures. I think this specifically is a red herring. All companies—especially government contractors—are more or less legally required to do fairly detailed accounting for tax and compliance reasons, and they do it on top of that for business planning. In reality, the actual accounting is a small portion of program costs. I think what’s really happening here is that engineers don’t like accountants or accounting, and they complain about it a lot. NASA was definitely overusing cost-plus contracts and has moved towards fixed-cost contracts for a lot of things. This is a positive development.

Fixed-cost contracts aren’t a panacea, though. It’s a good way to do contracting when you (in this case meaning both NASA and the contractors submitting bids) have a good idea of what the project will end up costing. What this means is that it’s good for things that are either relatively simple or have been done before. For things that are novel and complex, fixed-cost contracting just isn’t as good. Consider the contractor’s point of view, assuming they’re trying to make money: if you don’t know how much it will cost to fill an order, it’s generally a bad idea to bid a fixed price for that order.

Now there are exceptions. Elon Musk is running what amounts to a parallel space program through SpaceX, so bidding on eg HLS is extra revenue at low marginal cost (they’re building Starship anyway, after all). And don’t get me wrong, I think Superheavy/Starship is one of the most exciting things happening in the space industry right now.  

The point is that we’re lucky in the space industry that people bid on stuff because it’s cool and not necessarily because it’s the most profitable, in a way they don’t for eg subway construction projects.

But there will be cases where NASA wants to do big, new stuff that doesn’t align with what Musk is doing. JWST is an example. What do you do then?

Unfortunately your options are somewhat limited. If you go out-of-house, you’re probably going to end up with something like a cost-plus contract. You can still do a fixed-cost contract, but if you’re putting all the risk on contractors like that for something big and new then the fixed price you’re offering is going to have to be pretty high.

An example I want to highlight is commercial crew. Even though the contract terms are in my opinion pretty generous, Boeing may still lose money on Starliner. You can say Boeing is just incompetent if you want to (I don’t really agree), but they’re not in the business of losing money on contracts, and this will affect their calculations about whether they want to bid on future contracts. And competition is really important for keeping prices down and schedules short. The fewer bids you get, the less power you have to set price and schedule.

In these circumstances you might get some value out of a modified cost-plus contract where, for example, the nominal value is $100 million and costs above that amount are reimbursed at 50%. This creates more of an incentive for cost control than cost-plus, without pushing all the risk onto the contractor. There’s still a big incentive to attribute loosely-related costs, though. A sketch of these payout structures is below.
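
A minimal sketch of the three payout structures, using the $100 million example above; the 10% cost-plus fee is an assumed value for illustration:

```python
# Contractor payouts under three contract structures. The $100M nominal
# value and 50% overrun sharing come from the example above; the
# cost-plus fee is an assumption for illustration.

NOMINAL = 100.0   # contract value, $M
FEE = 0.10        # assumed cost-plus fee: 10% of incurred costs
SHARE = 0.50      # fraction of overruns reimbursed in the hybrid contract

def fixed_price(cost):
    return NOMINAL

def cost_plus(cost):
    return cost * (1 + FEE)

def hybrid(cost):
    return NOMINAL + SHARE * max(0.0, cost - NOMINAL)

for cost in (80.0, 100.0, 140.0):
    for contract in (fixed_price, cost_plus, hybrid):
        payment = contract(cost)
        print(f"cost ${cost:.0f}M  {contract.__name__:12s} "
              f"payment ${payment:.0f}M  profit ${payment - cost:+.0f}M")
# Under cost-plus, a $40M overrun *increases* profit; under the hybrid,
# the contractor eats half of it; under fixed price, all of it.
```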

The other thing you can do is bring it in-house. This has a greater potential upside, because there’s no conflict of incentives, but it’s also harder. What it amounts to is staffing up on a permanent basis to build one—or ideally several—in-house contractors to do stuff.

This is something that happened a lot more in earlier eras. These days government is not set up to do this sort of thing.  Which is not to say it couldn’t or shouldn’t be! It could and it should. But we would have to make real changes to make it happen.  For example:

  • Higher salaries, competitive with other employers and with more leeway for raises
  • Insulation from government shutdowns
  • Fewer rules about what government employees can do in their free time
  • Much less prescriptive administration by Congress
  • More internal flexibility, for both management and unions to change what doesn’t work
  • An end to the practice of designing programs such that they amount to an earmark for someone or other
  • Longer-term budgeting

This has a bigger possible upside than modified contracting processes but can also fail entirely, if you end up with an unproductive organization that sandbags future budgets.

I want to mention that “doing it in-house” doesn’t mean you do literally everything yourself, all the way down the supply chain. But it does mean you do a lot of the work yourself and subcontract out jobs small and well-defined enough to be bid fixed-price among competitors.

Either way, doing new stuff is hard, expensive, and takes a long time. But it should be no harder, no more expensive, and take no longer than it needs to.

This is something that everyone who’s interested in space should care about. It means that we can do more stuff on the same budget. I would go so far as to say that the quality of program management—more than any single technical choice within reason—is the most important thing that determines program costs.

If I had to boil all this down to a set of recommendations, it would be this:

  1. Use fixed-cost contracts whenever possible, and allow bidders a lot of leeway in how they do the thing as long as it gets done.
  2. Whether it’s being done in-house or out-of-house, it’s critical to have active, ongoing, knowledgeable oversight over a project from start to finish.
  3. Cost control needs to be an important goal for every project, imposed from above and actively pursued. My cost overrun is your budget cut.
  4. Schedule is important.  Everything should be done as quickly as possible (But no quicker).  This is how you get costs down.
  5. NASA should have more internal leeway to figure out how it wants to achieve the goals Congress and the President set for it.

It’s worth asking why these things aren’t being done. I think the reason is that Congress (and sometimes NASA) is less interested in achieving as much as possible in space than in distributing money to favored contractors and to people who work in their states or districts.

It’s hard to fault them for this, since saving and creating jobs for your constituents (or campaign donors) has tangible political benefits for individual legislators, while an accelerated timeline on a crewed mission, a few more space probes, and more scientific discoveries don’t. You don’t have to like it (I don’t), but the reality is that Congress is not particularly good at the kind of planning, oversight, and administration under which a public agency flourishes. And typically, for best results, you have to operate within the world as it exists.

My proposal for better outcomes would be to give NASA more independence.  Keep the administrator and perhaps a few other top officials as political appointees, ideally selected for a term like the Fed Chair or FBI Director, chosen by the President and confirmed by the Senate. Congress will still handle the budget, of course, ideally in very general terms. It wouldn’t be the worst thing to create some designated funding stream (for example, a Space Carbon Tax, Space Traffic Control/Launch/licensing fees, etc). Budgeting should be done on as long a time horizon as possible, for example every four years.  The administrator should have the ability to allocate funds within NASA on different projects. NASA should be separated from the rest of the Federal government, with most of its byzantine rules being inapplicable. People talk about how regulation can stifle innovation, but the most regulated part of the economy is actually the government itself.

To be clear, I don’t think Congress will give up this power voluntarily. But I do think that NASA is small enough and different enough that, if the president were to prioritize this kind of reform, they could get it done.

I want to finish up by describing something that’s happening now that is more or less the opposite of everything I’ve recommended here.

In late October, NASA administrator Bill Nelson announced that he wanted to hand off SLS to an independent contractor, with NASA as an anchor tenant buying one rocket per year through 2050, and that the contractor would be tasked with cutting prices in half.

People have speculated that this would work something like USA (United Space Alliance, what an acronym!), the Rockwell-Lockheed consortium that managed the Space Shuttle program for roughly the last 15 years of its existence. A structure like this is virtually guaranteed to cause big cost overruns, just as it did with the Shuttle. The purpose appears to be outsourcing oversight to this contractor and hoping for the best, rather than doing it in-house.

But the contractor has no particular incentive to keep costs down, and if (like USA) they’re a consortium of the main contractors building the rocket, they actually have a substantial incentive to bring costs up to the maximum NASA is willing to pay. And if NASA is designating SLS as a key piece of infrastructure and committing to buy one rocket per year for the next 25 years, that might be a very high price indeed.

Next Gen Chemical Fuels (OzAc, Part One)

Launching stuff into space is hard.

Fuel, fuel, more fuel, and a little bit of engines, structure, and payload.

Leaps and bounds are being made that are making it more reliable and less costly to launch stuff into space, but at a fundamental level it’s still hard.

In order to get into a minimal orbit, you need to get your payload moving at almost 8 km/s and raise its altitude by about 200 km. Assuming you launch from an equatorial location, that means increasing the kinetic + potential energy of the payload by 32 MJ/kg.
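
That figure is easy to check; a quick sketch, ignoring the head start from Earth’s rotation and treating gravity as constant over the climb:

```python
# Back-of-envelope check on the energy needed to reach a minimal orbit:
# kinetic energy at ~7.8 km/s plus potential energy for a ~200 km climb.
# Ignores the ~0.46 km/s head start from an equatorial launch, which is
# why it lands slightly above the ~32 MJ/kg quoted above.

g = 9.81          # m/s^2, treated as constant over 200 km
v_orbit = 7800.0  # m/s, roughly low-Earth orbital velocity
altitude = 200e3  # m

kinetic = 0.5 * v_orbit**2   # J/kg
potential = g * altitude     # J/kg

print(f"KE {kinetic/1e6:.1f} + PE {potential/1e6:.1f} "
      f"= {(kinetic + potential)/1e6:.1f} MJ/kg")
# -> roughly 30.4 + 2.0 = 32.4 MJ/kg
```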

Anyway, it’s been discussed in lots of places that the high mass ratios needed to get to orbit require either 2 (or even 3) stages or extremely lightweight rockets, and in either case your payload fractions are tiny.

The spacelaunch industry is converging on three primary liquid rocket fuels: H2/LOX, Kerolox, and Methlox.

Basic properties of common fuels

There’s still a lot of cost and efficiency gains that could be made with these fuels beyond where we are now. But I think everyone would agree, at least in principle, that denser fuels with higher exhaust velocities would be a boon to our spacelaunch capabilities. That’s what I want to look into here.

I have expanded on the above table to show some figures of merit for each fuel that are related to density and specific impulse.

Various metrics useful for comparing different fuels

Volume ratio here is defined as the volume of fuel required to lift one kilogram of dry mass into orbit, with an assumed ΔV of 9300 m/s. Put another way, for a mass ratio Rm, fuel density ρ, and volume ratio Rv, Rv = (Rm − 1)/ρ, where the rocket equation gives Rm = exp(ΔV/Vex). Volume ratio is particularly useful because it combines exhaust velocity and density, with extra weight on exhaust velocity because the mass ratio is exponential in it. Having said that, it still in my opinion underweights Vex somewhat. For that reason I have also included a composite metric equal to Vex/Rv, which has little physical meaning but I think better captures the importance of each. For Rm and Rv a lower number is better; for the composite metric a higher number is better. Note that specific energy is calculated at the actual mixture ratio, not the stoichiometric value.

Though there are 9 different metrics in the table, they are not all independent. There are three independent pieces of data here: Vex, density, and specific energy (one could also choose a different set of three and derive the rest, if inclined). As far as actual rocket performance goes, only Vex and density matter. Measured performance isn’t available for new fuels, though, and specific energy is easy to calculate. Given that, it’s convenient that the energy efficiency (the fraction of the fuel’s specific energy that ends up as exhaust kinetic energy, Vex²/2 divided by specific energy) falls in a fairly narrow range for all three, averaging about 65%. That Methlox (by far the least technologically mature of the three) is also the lowest suggests that further gains will be possible there.
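
Here’s a minimal sketch of the Rm, Rv, and composite calculations; the exhaust velocities and bulk densities are round-number estimates plugged in for illustration, not the table’s figures:

```python
import math

# Figures of merit defined above, computed from ballpark estimates of
# vacuum exhaust velocity and bulk propellant density (illustrative
# values, not vendor data).

DELTA_V = 9300.0  # m/s, assumed mission ΔV

fuels = {
    # name: (exhaust velocity, m/s; bulk density, kg/m^3)
    "H2/LOX":  (4400.0,  320.0),
    "Kerolox": (3300.0, 1030.0),
    "Methlox": (3550.0,  830.0),
}

for name, (vex, rho) in fuels.items():
    rm = math.exp(DELTA_V / vex)   # rocket-equation mass ratio
    rv = (rm - 1.0) / rho          # m^3 of propellant per kg of dry mass
    composite = vex / rv           # the composite metric, Vex/Rv
    print(f"{name:8s} Rm {rm:5.1f}   Rv {rv:.4f} m^3/kg   "
          f"composite {composite/1e3:4.0f}k")
```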

Based on all this, the most important number for finding a better fuel combination is the specific energy. In all three cases above, the high specific energy of the fuel is driven by the high energies of formation of Carbon Dioxide and Water: 8.9 and 15.9 MJ/kg respectively. So in order to find a fuel with a higher exhaust velocity, and ideally also a higher density, you will want to find possible combustion products with a greater specific energy of formation. In many cases this points you towards fluorine compounds. However, I will be dismissing these out of hand as impractical, due to the extreme reactivity and toxicity of fluorine and its combustion products.

Some possibilities are below.

Note that even the products that are not classified as toxic to humans or pollutants can be hazardous to humans or bad for the environment under some circumstances.

Unfortunately, all of them are refractory oxides: incompatible with normal liquid rocket technology, and producing solid exhaust products. The most energetic are toxic and environmentally harmful to boot. Beryllium and Lithium are also rare enough that using them at mass scale for spacelaunch is likely untenable.

Anyway, it turns out that while there are compounds that meet the energy criterion (if you could somehow run an engine on Beryllium/Oxygen, you might get an exhaust velocity of almost 6 km/s), none of them are usable in practice.

So I’ll cut to the chase: this is the wrong end to push on. The best answer I can come up with is to use fuel and oxidizer combinations that start off with high energy of formation and release additional energy through their decomposition.

The best combination I can come up with is Acetylene-Ozone. Assuming an efficiency comparable to H2/LOX or Kerolox, it can likely achieve an exhaust velocity of nearly 4.5 km/s, at a density of 1140 kg/m3. It’s a fuel that’s got both high Isp and high density, and it’s not a deep cryogen.
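
As a sanity check on that number, here’s the thermochemistry; the heats of formation are standard textbook values (gaseous products assumed), and the efficiency is an assumption pegged to the range seen for the fuels above:

```python
import math

# Specific energy of Acetylene-Ozone from standard heats of formation
# (kJ/mol, gaseous products), then exhaust velocity at an assumed energy
# efficiency comparable to the fuels above.

dHf = {"C2H2": 227.4, "O3": 142.7, "CO2": -393.5, "H2O": -241.8}

# C2H2 + (5/3) O3 -> 2 CO2 + H2O
release_kj = (dHf["C2H2"] + (5/3) * dHf["O3"]) - (2 * dHf["CO2"] + dHf["H2O"])
mass_kg = (26.0 + (5/3) * 48.0) / 1000.0          # per mole of acetylene
specific_energy = release_kj * 1000.0 / mass_kg   # J/kg

print(f"specific energy: {specific_energy/1e6:.1f} MJ/kg")  # ~14.1 MJ/kg
for eta in (0.65, 0.70):
    vex = math.sqrt(2 * eta * specific_energy)
    print(f"eta {eta:.2f}: Vex {vex:.0f} m/s")
# ~4300-4400 m/s, in the neighborhood of the ~4.5 km/s figure above.
```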

These metrics speak for themselves. All of these numbers are calculated, obviously.

Clearly this is an idea with issues: Acetylene and ozone are both unstable chemicals that are hard to synthesize in their pure form. This is a real issue that requires substantial development. I will address this more in a later post.

But have a look at that number on the bottom right: The composite metric, which I believe is the one that best combines exhaust velocity and density into a single metric, is over 3 times higher than either Hydrogen or Kerosene.
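
Running Acetylene-Ozone through the same figures of merit as before (same caveats: calculated, illustrative values):

```python
import math

# Figures of merit for Acetylene-Ozone, using the ~4.5 km/s exhaust
# velocity and 1140 kg/m^3 density estimated above (calculated values,
# not measurements).

DELTA_V = 9300.0
vex, rho = 4500.0, 1140.0

rm = math.exp(DELTA_V / vex)
rv = (rm - 1.0) / rho
composite = vex / rv

print(f"OzAc: Rm {rm:.1f}, Rv {rv:.4f} m^3/kg, composite {composite/1e3:.0f}k")
# ~740k, versus roughly 190k for H2/LOX and 220k for Kerolox under the
# same ballpark assumptions: better than a 3x improvement.
```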

In the next post, I will discuss at greater length the challenges of implementing these fuels, try to better estimate the exhaust velocity, and discuss partial implementations that can provide partial benefits.

Decarbonize Air Travel Now, With Biofuels

I don’t think biofuels make sense in general, but I think they’re probably the best way to decarbonize air travel and we should get going on that.

Hydrocarbons like gasoline (and jet fuel, which is refined kerosene) carry a lot of energy in a small volume and a small mass, and that energy comes free (in a physical sense: it was stored away long before we got here). Batteries don’t—and for various reasons probably can’t—match that. And in a lot of cases that’s mostly fine. Note that electric cars can now have range comparable to gas cars, but are much heavier for it.

One way to keep using hydrocarbon fuels while negating the emissions is to use biofuels. Plants grow by taking carbon dioxide out of the air, so if you get energy from hydrocarbons made from plants there will be no net emissions from burning them. It’s a tidy solution in some ways because nothing has to change except where the fuel is coming from, which is what many people want, especially the ones who are in charge of everything.

That tidiness hides a lot of issues. Biofuels today typically take the form of ethanol made from corn. Corn is, of course, an agricultural product, and agriculture consumes a lot of fuel. Indeed, bioethanol production consumes a worryingly large fraction of the fuel energy it yields (most analyses put the ratio around 3:1 produced:consumed, which is really pretty bad). It also requires machinery, labor, water, processing, land, and so on. What this adds up to is that biofuels are expensive and can be harmful to the environment in other ways.

Let’s just touch on land for a second. Most people are surprised to hear that photosynthesis is not an efficient process. Commercial solar panels can be 25% efficient at turning sunlight into electricity. Photosynthesis is about 1% efficient at turning sunlight into plant matter, at best, and only the useful bits of the plant can be turned into fuel. What this means is that biofuels at large scale would use up an incredibly (possibly unsustainably) large amount of land.
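
To get a feel for the scale, here’s a toy estimate for aviation alone; every input below is an assumption picked for illustration (the demand figure especially), not a measurement:

```python
# Illustrative land-use estimate for biofuel-powered aviation. All the
# numbers here are assumptions chosen for round-ness: annual US jet
# fuel energy demand, average insolation, and the fraction of plant
# matter that actually becomes fuel.

JET_FUEL_ENERGY = 3.5e18    # J/yr, assumed US jet fuel demand
INSOLATION = 200.0          # W/m^2, rough year-round average
PHOTOSYNTHESIS_EFF = 0.01   # ~1%, as above
FUEL_FRACTION = 0.2         # assumed usable fraction of the plant

SECONDS_PER_YEAR = 3.156e7

energy_per_m2 = (INSOLATION * SECONDS_PER_YEAR
                 * PHOTOSYNTHESIS_EFF * FUEL_FRACTION)  # J/m^2/yr
area_m2 = JET_FUEL_ENERGY / energy_per_m2

print(f"{area_m2 / 1e6 / 1e3:.0f} thousand km^2")
# On the order of 300,000 km^2 under these assumptions, and that is
# just for planes.
```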

Despite their small share of our energy supply, biofuels use a lot of land.

It has also been alleged that biofuel standards drive up the price of food. This makes sense in some ways, but I’m not sure it’s actually true. If it is, that would obviously be a bad thing.

As far as airplanes go, I’ve been surprised and impressed to see that there are some battery-electric aircraft capable of flying short routes, but the demands of longer-distance air travel are too severe; batteries likely won’t ever cut it. Some are suggesting we should give up on air travel altogether. And I love high speed rail as much as (realistically, much more than) the next person, but it won’t replace the speed or point-to-point capability of air travel.

So we need a carbon-free fuel to keep planes in the sky. Some suggest hydrogen. But given the extreme danger and difficulty associated with handling and storing hydrogen, I don’t think it makes sense to use on an aircraft.

Photo unrelated.

I personally don’t have any other ideas for this besides biofuels, and I think air travel is a good example of an application where they make sense.

First of all, the price of a plane ticket is mostly not fuel. Fuel costs are roughly 16.4% of the ticket price, less than 1/6; the rest is other stuff. So even if fuel prices were to double, the price of a ticket would only rise by about 16%, all else equal.

Beyond that, it doesn’t actually matter whether you’re getting net fuel out of the process. We can and should electrify agricultural equipment; biofuels are there primarily to store energy and release it in flight, not to be a primary source of it. Air travel is responsible for 3% of total US emissions and 9% of transportation emissions. That’s a lot, and it’s critically important to fix it, but it’s still far less than cars. What this means is that, while biofuels to replace gasoline would rapidly get out of hand, it’s not crazy to use them for planes. In fact, you could probably get most of the way there with the ethanol we’re currently blending into gasoline.

The two best candidates for biofuels are ethanol and methane. They have slightly different costs and benefits, but both are workable substitutes for the status quo fuel, without the emissions. Biokerosene is a somewhat more difficult possibility, but has the highest volumetric energy density and requires the smallest change from current fuels.

The way I would structure this policy is for the federal government to have development grants for mass-scale biofuels, and also to guarantee the purchase of those fuels at a certain above-market price, to be resold at a lower price and used in biofuel airplanes. Going forward you would lower the subsidy while adding regulations that mandate increasingly carbon-free air travel. I think that in a pretty short timeframe you could decarbonize the industry.

And it’s worth mentioning that this works for space travel too. SpaceX uses methane as a fuel in its rockets, and biomethane would be identical to fossil methane, and thus trivially easy to substitute. Kerosene, another common rocket fuel, has a biofuel pathway as well. Given the rapid growth of the spacelaunch industry, decarbonization is a good idea both because climate change is bad in general and because it will help demonstrate the industry’s commitment to sustainability.