Bulletin of the Atomic Scientists | Daniel Jassby | 19 April 2017
Fusion reactors have long been touted as the “perfect” energy source. Proponents claim that when useful commercial fusion reactors are developed, they would produce vast amounts of energy with little radioactive waste and little or no plutonium byproduct that could be used for nuclear weapons. These pro-fusion advocates also say that fusion reactors would be incapable of generating the dangerous runaway chain reactions that lead to a meltdown—all of which are drawbacks of the fission reactors in today’s nuclear power plants.
And, like fission, a fusion-powered nuclear reactor would have the enormous benefit of producing energy without emitting any carbon to warm up our planet’s atmosphere.
But there is a hitch: While it is relatively straightforward to split an atom to produce energy (which is what happens in fission), it is a “grand scientific challenge” to fuse two hydrogen nuclei together to create helium isotopes (as occurs in fusion). Our sun does this constantly, burning ordinary hydrogen at enormous densities and temperatures. But to replicate that process of fusion here on Earth—where we don’t have the intense pressure created by the gravity of the sun’s core—we would need a temperature of at least 100 million degrees Celsius, or about six times hotter than the sun’s core. In experiments to date, the energy input required to produce the temperatures and pressures that enable significant fusion reactions in hydrogen isotopes has far exceeded the fusion energy generated.
But through the use of promising fusion technologies such as magnetic confinement and laser-based inertial confinement, humanity is moving closer to getting around that problem and achieving the breakthrough moment when the amount of energy coming out of a fusion reactor sustainably exceeds the amount going in, producing net energy. Collaborative, multinational physics projects in this area include the International Thermonuclear Experimental Reactor (ITER) joint fusion experiment in France, which broke ground for its first support structures in 2010, with the first experiments on its fusion machine, or tokamak, expected to begin in 2025.
As we move closer to our goal, however, it is time to ask: Is fusion really a “perfect” energy source? After having worked on nuclear fusion experiments for 25 years at the Princeton Plasma Physics Lab, I began to look at the fusion enterprise more dispassionately in my retirement. I concluded that a fusion reactor would be far from perfect, and in some ways close to the opposite.
Scaling down the sun. As noted above, fusion reactions in the sun burn ordinary hydrogen at enormous density and temperature sustained by an effectively infinite confinement time, and the reaction products are benign helium isotopes. Artificial (terrestrial) fusion schemes, on the other hand, are restricted to much lower particle densities and much more fleeting energy confinement, and are therefore compelled to use the heavier neutron-rich isotopes of hydrogen known as deuterium and tritium—which are 24 orders of magnitude more reactive than ordinary hydrogen. (Think of the numeral one with 24 zeroes after it.) This gargantuan advantage in fusion reactivity allows human-made fusion assemblies to be workable with a billion times lower particle density and a trillion times poorer energy confinement than the levels that the sun enjoys. Proponents nonetheless claim that, once developed, fusion reactors will constitute a “perfect” energy source, sharing none of the significant drawbacks of the much-maligned fission reactors.
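Those orders of magnitude hang together in a simple way. As a back-of-envelope sketch (not a rigorous comparison: it assumes a Lawson-type breakeven figure of merit proportional to density times confinement time times reactivity, and ignores temperature dependences), the 24-order-of-magnitude reactivity advantage more than covers the combined density and confinement deficits:

```python
# Back-of-envelope bookkeeping of the figures quoted above (illustrative only).
# Assume a Lawson-type breakeven figure of merit proportional to
# n * tau * <sigma-v>: density, energy confinement time, and fusion reactivity.

reactivity_gain     = 1e24   # D-T vs. ordinary hydrogen, per the text
density_penalty     = 1e-9   # "a billion times lower particle density"
confinement_penalty = 1e-12  # "a trillion times poorer energy confinement"

net_factor = reactivity_gain * density_penalty * confinement_penalty
print(f"net change in n*tau*<sigma-v>: {net_factor:.0e}")   # ~1e+03

# The 10^24 reactivity advantage outweighs the combined 10^21 deficit in
# density and confinement, which is why terrestrial D-T fusion is workable
# at all (temperature dependences are ignored in this sketch).
```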
But unlike what happens in solar fusion—which uses ordinary hydrogen—Earth-bound fusion reactors that burn neutron-rich isotopes have byproducts that are anything but harmless: Energetic neutron streams comprise 80 percent of the fusion energy output of deuterium-tritium reactions and about 35 percent of that of deuterium-deuterium reactions.
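Those percentages can be checked with a rough sketch using textbook reaction energies (values in MeV; the roughly 50-50 branch ratio assumed for deuterium-deuterium fusion is an approximation, not a figure from the text):

```python
# Energy fractions carried by neutrons, from standard reaction energies (MeV).

# D + T -> He-4 (3.5 MeV) + n (14.1 MeV)
dt_neutron_fraction = 14.1 / (14.1 + 3.5)
print(f"D-T energy carried by neutrons: {dt_neutron_fraction:.0%}")   # ~80%

# D + D proceeds through two branches of roughly equal probability:
#   D + D -> T (1.01 MeV)    + p (3.02 MeV)   (no neutron)
#   D + D -> He-3 (0.82 MeV) + n (2.45 MeV)
dd_neutron_fraction = 2.45 / ((1.01 + 3.02) + (0.82 + 2.45))
print(f"D-D energy carried by neutrons: {dd_neutron_fraction:.0%}")   # ~34%
```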
Now, an energy source consisting of 80 percent energetic neutron streams may be the perfect neutron source, but it’s truly bizarre that it would ever be hailed as the ideal electrical energy source. In fact, these neutron streams lead directly to four regrettable problems with nuclear energy: radiation damage to structures; radioactive waste; the need for biological shielding; and the potential for the production of weapons-grade plutonium 239—thus adding to the threat of nuclear weapons proliferation, not lessening it, as fusion proponents would have it.
In addition, if fusion reactors are indeed feasible—as assumed here—they would share some of the other serious problems that plague fission reactors, including tritium release, daunting coolant demands, and high operating costs. There will also be additional drawbacks that are unique to fusion devices: the use of fuel (tritium) that is not found in nature and must be replenished by the reactor itself; and unavoidable on-site power drains that drastically reduce the electric power available for sale.
All of these problems are endemic to any type of magnetic confinement fusion or inertial confinement fusion reactor that is fueled with deuterium-tritium or deuterium alone. (As the name suggests, in magnetic confinement fusion, magnetic and electrical fields are used to control the hot fusion fuel—a material that takes an unwieldy and difficult-to-handle form, known as a plasma. In inertial confinement, laser beams or ion beams are used to squeeze and heat the plasma.) The most well-known example of magnetic confinement fusion is the doughnut-shaped tokamak under construction at the ITER site; inertial confinement fusion is exemplified by the laser-induced microexplosions taking place at the US-based National Ignition Facility.
Tritium fuel cannot be fully replenished. The deuterium-tritium reaction is favored by fusion developers because its reactivity is 20 times higher than a deuterium-deuterium fueled reaction, and the former reaction is strongest at one-third the temperature required for deuterium-only fusion. In fact, an approximately equal mixture of deuterium and tritium may be the only feasible fusion fuel for the foreseeable future. While deuterium is readily available in ordinary water, tritium scarcely exists in nature, because this isotope is radioactive with a half-life of only 12.3 years. The main source of tritium is fission nuclear reactors.
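To see why a 12.3-year half-life rules out any natural stockpile, note that an inventory of tritium loses roughly 5.5 percent of itself every year to decay alone. A minimal sketch, using only the half-life quoted above:

```python
# Tritium decay: fraction of an initial inventory remaining after a given time,
# using only the 12.3-year half-life quoted above (illustrative sketch).

HALF_LIFE_YEARS = 12.3

def fraction_remaining(years: float) -> float:
    """Fraction of an initial tritium inventory left after `years` of decay."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

for t in (1, 5, 12.3, 25):
    print(f"after {t:>4} years: {fraction_remaining(t):.1%} remains")
# after    1 years: 94.5% remains
# after    5 years: 75.4% remains
# after 12.3 years: 50.0% remains
# after   25 years: 24.4% remains
```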
If adopted, deuterium-tritium based fusion would be the only source of electrical power that does not exploit a naturally occurring fuel or convert a natural energy supply such as solar radiation, wind, falling water, or geothermal. Uniquely, the tritium component of fusion fuel must be generated in the fusion reactor itself.
The tritium consumed in fusion can theoretically be fully regenerated in order to sustain the nuclear reactions. To accomplish this goal, a lithium-containing “blanket” must be placed around the reacting medium—an extremely hot, fully ionized gas called a plasma. The neutrons produced by the fusion reaction will irradiate the lithium, “breeding” tritium.
But there is a major difficulty: The lithium blanket can only partly surround the reactor, because of the gaps required for vacuum pumping, beam and fuel injection in magnetic confinement fusion reactors, and for driver beams and removal of target debris in inertial confinement reactors. Nevertheless, the most comprehensive analyses indicate that there can be up to a 15 percent surplus in regenerating tritium. But in practice, any surplus will be needed to accommodate the incomplete extraction and processing of the tritium bred in the blanket.
Replacing the burned-up tritium in a fusion reactor, however, addresses only a minor part of the all-important issue of replenishing the tritium fuel supply. Less than 10 percent of the injected fuel will actually be burned in a magnetic confinement fusion device before it escapes the reacting region. The vast majority of injected tritium must therefore be scavenged from the surfaces and interiors of the reactor’s myriad sub-systems and re-injected 10 to 20 times before it is completely burned. If only 1 percent of the unburned tritium is not recovered and re-injected, even the largest surplus in the lithium-blanket regeneration process cannot make up for the lost tritium. By way of comparison, in the two magnetic confinement fusion facilities where tritium has been used (Princeton’s Tokamak Fusion Test Reactor, and the Joint European Torus), approximately 10 percent of the injected tritium was never recovered.
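To see how tightly these numbers squeeze the fuel cycle, consider a simple tritium balance normalized to one unit of tritium burned. This is a sketch: the 5 percent burn-up fraction is an illustrative assumption (the text says only that it is below 10 percent), while the 15 percent breeding surplus and 1 percent loss rate are the figures quoted above.

```python
# Steady-state tritium bookkeeping, normalized to 1 unit of tritium burned.
# Illustrative assumption: 5% of injected tritium is burned per pass.

burn_fraction    = 0.05   # fraction of injected tritium burned before escaping
loss_fraction    = 0.01   # fraction of unburned tritium never recovered
breeding_surplus = 0.15   # best-case surplus from the lithium blanket

injected = 1.0 / burn_fraction        # 20 units injected per unit burned
unburned = injected - 1.0             # 19 units to scavenge and re-inject
lost     = loss_fraction * unburned   # 0.19 units lost per unit burned

bred = 1.0 + breeding_surplus         # 1.15 units bred per unit burned, at best
print(f"lost per unit burned: {lost:.2f}")
print(f"bred per unit burned: {bred:.2f}")
print(f"net inventory change: {bred - 1.0 - lost:+.2f}")   # negative: supply shrinks
```

With these numbers the tritium inventory shrinks by about 0.04 units for every unit burned, even before the blanket-extraction losses mentioned earlier are counted; at a 10 percent burn-up fraction the margin turns positive but amounts to only a few percent.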
To make up for the inevitable shortfalls in recovering unburned tritium for use as fuel in a fusion reactor, fission reactors must continue to be used to produce sufficient supplies of tritium—a situation which implies a perpetual dependence on fission reactors, with all their safety and nuclear proliferation problems. Because external tritium production is enormously expensive, it is likely instead that only fusion reactors fueled solely with deuterium can ever be practical from the viewpoint of fuel supply. This circumstance aggravates the problem of nuclear proliferation discussed later.
Huge parasitic power consumption. In addition to the problems of fueling, fusion reactors face another problem: they consume a good chunk of the very power that they produce, or what those in the electrical generating industry call “parasitic power drain,” on a scale unknown to any other source of electrical power. Fusion reactors must accommodate two classes of parasitic power drain: First, a host of essential auxiliary systems external to the reactor must be maintained continuously even when the fusion plasma is dormant (that is, during planned or unplanned outages). Some 75 to 100 MWe (megawatts electric) are consumed continuously by liquid-helium refrigerators; water pumping; vacuum pumping; heating, ventilating and air conditioning for numerous buildings; tritium processing; and so forth, as exemplified by the facilities for the ITER fusion project in France. When the fusion output is interrupted for any reason, this power must be purchased from the regional grid at retail prices.
The second category of parasitic drain is the power needed to control the fusion plasma in magnetic confinement fusion systems (and to ignite fuel capsules in pulsed inertial confinement fusion systems). Magnetic confinement fusion plasmas require injection of significant power in atomic beams or electromagnetic energy to stabilize the fusion burn, while additional power is consumed by magnetic coils helping to control location and stability of the reacting plasma. The total electric power drain for this purpose amounts to at least 6 percent of the fusion power generated, and the electric power required to pump the blanket coolant is typically 2 percent of fusion power. The gross electric power output can be 40 percent of the fusion power, so the circulating power amounts to about 20 percent of the electric power output.
In inertial confinement fusion and hybrid inertial/magnetic confinement fusion reactors, after each fusion pulse, electric current must charge energy storage systems such as capacitor banks that power the laser or ion beams or imploding liners. The demands on circulating power are at least comparable with those for magnetic confinement fusion.
The power drains described above are derived from the reactor’s electrical power output, and determine lower bounds to reactor size. If the fusion power is 300 megawatts, the entire electric output of 120 MWe barely supplies on-site needs. As the fusion power is raised, the on-site consumption becomes an increasingly smaller proportion of the electric output, dropping to one-half when the fusion power is 830 megawatts. To have any chance of economic operation that must repay capital and operational costs, the fusion power must be raised to thousands of megawatts so that the total parasitic power drain is relatively small.
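Putting the figures in this section together makes the scale argument concrete. The sketch below simply re-uses the numbers quoted above: roughly 40 percent conversion of fusion power into gross electricity, about 8 percent of fusion power recirculated for plasma control and coolant pumping, and the upper 100 MWe estimate for the always-on auxiliary drain.

```python
# Rough power balance for a deuterium-tritium fusion plant, using only the
# approximate figures quoted in the text (all values illustrative).

FIXED_DRAIN_MWE   = 100    # always-on auxiliaries: refrigerators, pumps, HVAC, ...
CONTROL_FRACTION  = 0.08   # plasma control (~6%) plus blanket coolant pumping (~2%)
ELECTRIC_FRACTION = 0.40   # gross electric output as a fraction of fusion power

def net_electric(fusion_mw: float) -> float:
    """Net electric power (MWe) left for sale at a given fusion power (MW)."""
    gross = ELECTRIC_FRACTION * fusion_mw
    drains = FIXED_DRAIN_MWE + CONTROL_FRACTION * fusion_mw
    return gross - drains

for p in (300, 830, 2000, 3000):
    gross = ELECTRIC_FRACTION * p
    net = net_electric(p)
    print(f"fusion {p:>4} MW: gross {gross:5.0f} MWe, "
          f"net {net:6.0f} MWe ({net / gross:.0%} of gross)")
# fusion  300 MW: gross   120 MWe, net     -4 MWe (-3% of gross)
# fusion  830 MW: gross   332 MWe, net    166 MWe (50% of gross)
# fusion 2000 MW: gross   800 MWe, net    540 MWe (68% of gross)
# fusion 3000 MW: gross  1200 MWe, net    860 MWe (72% of gross)
```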
In a nutshell, below a certain size (about 1,000 MWe) parasitic power drain makes it uneconomic to run a fusion power plant.
The problems of parasitic power drain and fuel replenishment by themselves are significant. But fusion reactors have other serious problems that also afflict today’s fission reactors, including neutron radiation damage and radioactive waste, potential tritium release, the burden on coolant resources, outsized operating costs, and the increased risks of nuclear weapons proliferation.
Radiation damage and radioactive waste. To produce usable heat, the neutron streams carrying 80 percent of the energy from deuterium-tritium fusion must be decelerated and cooled by the reactor structure, its surrounding lithium-containing blanket, and the coolant. The neutron radiation damage in the solid vessel wall is expected to be worse than in fission reactors because of the higher neutron energies. Fusion neutrons knock atoms out of their usual lattice positions, causing swelling and fracturing of the structure. Also, neutron-induced reactions generate large amounts of interstitial helium and hydrogen, forming gas pockets that lead to additional swelling, embrittlement, and fatigue. These phenomena put the integrity of the reaction vessel in peril.
In reactors with deuterium-only fueling (which is much more difficult to ignite than a deuterium-tritium mix), the neutron reaction product has five times lower energy and the neutron streams are substantially less damaging to structures. But the deleterious effects will still be ruinous on a longer time scale.
The problem of neutron-degraded structures may be alleviated in fusion reactor concepts where the fusion fuel capsule is enclosed in a 1-meter thick liquid lithium sphere or cylinder. But the fuel assemblies themselves will be transformed into tons of radioactive waste to be removed annually from each reactor. Molten lithium also presents a fire and explosion hazard, introducing a drawback common to liquid-metal cooled fission reactors.
Bombardment by fusion neutrons knocks atoms out of their structural positions while making them radioactive and weakening the structure, which must be replaced periodically. This results in huge masses of highly radioactive material that must eventually be transported offsite for burial. Many non-structural components inside the reaction vessel and in the blanket will also become highly radioactive by neutron activation. While the radioactivity level per kilogram of waste would be much smaller than for fission-reactor wastes, the volume and mass of wastes would be many times larger. What’s more, some of the radiation damage and production of radioactive waste is incurred to no end, because a proportion of the fusion power is generated solely to offset the irreducible on-site power drains.
Materials scientists are attempting to develop low-activation structural alloys that would allow discarded reactor materials to qualify as low-level radioactive waste that could be disposed of by shallow land burial. Even if such alloys do become available on a commercial scale, very few municipalities or counties are likely to accept landfills for low-level radioactive waste. There are only one or two repositories for such waste in every nation, which means that radioactive waste from fusion reactors would have to be transported across the country at great expense and safeguarded from diversion.
To reduce the radiation exposure of plant workers, biological shielding is needed even when the reactor is not operating. In the intensely radioactive environment, remote handling equipment and robots will be required for all maintenance work on reactor components as well as for their replacement because of radiation damage, particle erosion or melting. These constraints will cause prolonged downtimes even for minor repairs.
Nuclear weapons proliferation. The open or clandestine production of plutonium 239 is possible in a fusion reactor simply by placing natural or depleted uranium oxide at any location where neutrons of any energy are flying about. The ocean of slowing-down neutrons that results from scattering of the streaming fusion neutrons on the reaction vessel permeates every nook and cranny of the reactor interior, including appendages to the reaction vessel. Slower neutrons will be readily soaked up by uranium 238, whose cross section for neutron absorption increases with decreasing neutron energy.
In view of the dubious prospects for tritium replenishment, fusion reactors may have to be powered by the two deuterium-deuterium reactions that have substantially the same probability, one of which produces neutrons and helium 3, while the other produces protons and tritium. Because tritium breeding is not required, all the fusion neutrons are available for any use—including the production of plutonium 239 from uranium 238.
It is extremely challenging to approach energy breakeven with deuterium-deuterium reactions because their total reactivity is 20 times smaller than that of deuterium-tritium, even at much higher temperatures. But a deuterium-fueled “test reactor” with 50 megawatts of heating power and producing only 5 megawatts of deuterium-deuterium fusion power could yield about 3 kilograms of plutonium 239 in one year by absorbing just 10 percent of the neutron output in uranium 238. Most of the tritium from the second deuterium-deuterium reaction could be recovered and burned and the deuterium-tritium neutrons will produce still more plutonium 239, for a total of perhaps 5 kilograms. In effect, the reactor transforms electrical input power into “free-agent” neutrons and tritium, so that a fusion reactor fueled with deuterium-only can be a singularly dangerous tool for nuclear proliferation.
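The arithmetic behind a number of that size is straightforward. In the sketch below, the reaction energies are textbook values and the 55 percent duty factor is an assumption introduced here to stand in for realistic, less-than-continuous operation; the 5 megawatts of fusion power and the 10 percent neutron capture fraction are the figures quoted above.

```python
# Order-of-magnitude estimate of plutonium 239 production in a small
# deuterium-fueled device, from the figures quoted above (illustrative).

MEV_TO_J         = 1.602e-13
SECONDS_PER_YEAR = 3.156e7
AVOGADRO         = 6.022e23

fusion_power_w   = 5e6    # 5 MW of D-D fusion power
capture_fraction = 0.10   # 10% of neutrons absorbed in uranium 238
duty_factor      = 0.55   # assumed fraction of the year spent operating

# Each pair of D-D reactions (one of each branch) releases about 7.3 MeV
# and yields one neutron.
neutrons_per_second = fusion_power_w / (7.3 * MEV_TO_J)

pu_atoms_per_year = (neutrons_per_second * capture_fraction
                     * duty_factor * SECONDS_PER_YEAR)
pu_kg_per_year = pu_atoms_per_year / AVOGADRO * 239 / 1000
print(f"neutron output: {neutrons_per_second:.1e} per second")   # ~4.3e18
print(f"plutonium 239: ~{pu_kg_per_year:.1f} kg per year")       # roughly 3 kg
```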
A reactor fueled with deuterium-tritium or deuterium-only will have an inventory of many kilograms of tritium, providing opportunities for diversion for use in nuclear weapons. Just as for fission reactors, International Atomic Energy Agency safeguards would be needed to prevent plutonium production or tritium diversion.
Additional disadvantages shared with fission reactors. Tritium will be dispersed on the surfaces of the reaction vessel, particle injectors, pumping ducts, and other appendages. Corrosion in the heat exchange system or a breach in the reactor vacuum ducts could result in the release of radioactive tritium into the atmosphere or local water resources. Tritium exchanges with hydrogen to produce tritiated water, which is biologically hazardous. Most fission reactors contain trivial amounts of tritium (less than 1 gram) compared with the kilograms in putative fusion reactors. But the release of even tiny amounts of radioactive tritium from fission reactors into groundwater causes public consternation.
Thwarting tritium permeation through certain classes of solids remains an unsolved problem. For some years, the National Nuclear Security Administration—a branch of the US Energy Department—has been producing tritium in at least one Tennessee Valley Authority-owned fission power reactor by absorbing neutrons in lithium-containing substitute control rods. There has been significant and apparently irreducible leakage of tritium from the rods into the reactor cooling water that’s released to the environment, to the extent that the annual tritium production has been drastically curtailed.
In addition, there are the problems of coolant demands and poor water efficiency. A fusion reactor is a thermal power plant that would place immense demands on water resources for the secondary cooling loop that generates steam as well as for removing heat from other reactor subsystems such as cryogenic refrigerators and pumps. Worse, the several hundred megawatts or more of thermal power that must be generated solely to satisfy the two classes of parasitic electric power drain places additional demand on water resources for cooling that is not faced by any other type of thermoelectric power plant. In fact, a fusion reactor would have the lowest water efficiency of any type of thermal power plant, whether fossil or nuclear. With drought conditions intensifying in sundry regions of the world, many countries could not physically sustain large fusion reactors.
Numerous alternative coolants for the primary heat-removal loop have been studied for both fission and fusion reactors, and one-meter thick liquid lithium walls may be essential for inertial confinement fusion systems to withstand the impulse loading. However, water has been used almost exclusively in commercial fission reactors for the last 60 years, including all of those presently under construction worldwide. This circumstance indicates that implementing any substitute for water coolant such as helium or liquid metal will be impractical in magnetic confinement fusion systems.
And all of the above means that any fusion reactor will face outsized operating costs.
Fusion reactor operation will require personnel whose expertise has previously been required only for work in fission plants—such as security experts for monitoring safeguard issues and specialty workers to dispose of radioactive waste. Additional skilled personnel will be required to operate a fusion reactor’s more complex subsystems including cryogenics, tritium processing, plasma heating equipment, and elaborate diagnostics. Fission reactors in the United States typically require at least 500 permanent employees over four weekly shifts, and fusion reactors will require closer to 1,000. In contrast, only a handful of people are required to operate hydroelectric plants, natural-gas burning plants, wind turbines, solar power plants, and other power sources.
Another intractable operating expense is the 75 to 100 megawatts of parasitic electric power consumed continuously by on-site supporting facilities that must be purchased from the regional grid when the fusion source is not operating.
Multiple recurring expenses include the replacement of radiation-damaged and plasma-eroded components in magnetic confinement fusion, and the fabrication of millions of fuel capsules for each inertial confinement fusion reactor annually. And any type of nuclear plant must allocate funding for end-of-life decommissioning as well as the periodic disposal of radioactive wastes.
It is inconceivable that the total operating costs of a fusion reactor will be less than those of a fission reactor. In places where the operating costs alone of fission reactors are not competitive with the cost of electricity produced by non-nuclear power, and have already resulted in the shutdown of nuclear power plants, a viable fusion reactor would therefore need a capital cost close to zero (or heavy subsidies).
To sum up, fusion reactors face some unique problems: a lack of a natural fuel supply (tritium), and large, irreducible on-site power drains that must be offset. Because 80 percent of the energy in any reactor fueled by deuterium and tritium appears in the form of neutron streams, it is inescapable that such reactors share many of the drawbacks of fission reactors—including the production of large masses of radioactive waste and serious radiation damage to reactor components. These problems are endemic to any type of fusion reactor fueled with deuterium-tritium, so abandoning tokamaks for some other confinement concept can provide no relief.
If reactors can be made to operate using only deuterium fuel, then the tritium replenishment issue vanishes and neutron radiation damage is alleviated. But the other drawbacks remain—and reactors requiring only deuterium fueling will have greatly enhanced nuclear weapons proliferation potential.
These impediments—together with the colossal capital outlay and several additional disadvantages shared with fission reactors—will make fusion reactors more demanding to construct and operate, and less likely to reach economic practicality, than any other type of electrical energy generator.
The harsh realities of fusion belie the claims of its proponents of “unlimited, clean, safe and cheap energy.” Terrestrial fusion energy is not the ideal energy source extolled by its boosters, but to the contrary: It’s something to be shunned.