Fusion Research: Time to Set a New Path

Issues in Science and Technology, Summer 2015, by Robert L. Hirsch

The inherent limitations of the tokamak design for fusion power will prevent it from becoming commercially viable, but the lessons from this effort can inform future research.

Burning wood was an important source of energy for early humankind, because it had no competition, no cost concerns, and manageable environmental issues. Over time, new energy sources came into being with demonstrated superiority on key measures of value, such as cost, safety, and convenience. Beginning in the 1950s, fusion energy aspired to play a role, and at least in principle, it has several potential advantages over other sources of electricity.

Fusion is the merging of two atomic nuclei to form a larger nucleus or nuclei, during which energy is released. This is how the sun produces its energy. We know how to produce fusion reactions in the laboratory at small scale. However, a potentially viable fusion reactor would involve heating fusion fuels to very high temperatures (on the order of hundreds of millions of degrees) to form a gaseous plasma of electrons and ions and holding that plasma away from material walls for long enough that more power is produced than is required to do the heating. An intense magnetic field can provide the required isolation, because there is no physical material that can withstand the high temperatures of a fusion plasma. Magnetic plasma containment is the basis of one approach to fusion power and is the focus of the following considerations. A key challenge in making fusion a viable electric power source is that heating and confining the plasma require a large energy input, so the reactor must produce substantially more energy than it consumes.
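This requirement is often expressed through the plasma gain factor Q, the ratio of fusion power produced to heating power injected into the plasma. The following is a minimal sketch of the power balance, using ITER's stated goal of 500 MW of fusion power from 50 MW of injected heating power (Q = 10); the conversion efficiencies are illustrative assumptions, not design values.

    # Illustrative plasma power balance. Only the 500 MW / 50 MW goal is taken
    # from ITER's stated target; the efficiencies are assumed for illustration.
    p_fusion = 500.0      # MW of fusion power produced in the plasma
    p_heating = 50.0      # MW of heating power injected into the plasma
    q_plasma = p_fusion / p_heating               # plasma gain factor, Q = 10

    eta_thermal = 0.35    # assumed efficiency of converting fusion heat to electricity
    eta_heating = 0.40    # assumed wall-plug efficiency of the plasma heating systems
    gross_electric = p_fusion * eta_thermal       # ~175 MW of electricity generated
    heating_draw = p_heating / eta_heating        # ~125 MW drawn to run the heaters
    net_margin = gross_electric - heating_draw    # ~50 MW before other plant loads
    print(q_plasma, net_margin)

Even at Q = 10, the margin left over after powering the heating systems is modest under these assumed efficiencies, which is why a commercial reactor must produce far more energy than it consumes.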

Fusion is appealing as an energy source because fusion fuels are varied and plentiful. The least difficult fuels to manage from a physics standpoint are the hydrogen isotopes deuterium and tritium. Among the potentially attractive features of a deuterium-tritium (DT) fusion power plant are fuel abundance and invulnerability to the type of runaway reaction that can occur in a nuclear fission accident. The challenge is to find a way to sustain a fusion reaction that is economical, reliable, safe, and environmentally attractive.

The quest to make fusion power a viable generation option has turned out to be extraordinarily difficult. A great deal has been learned over more than 60 years of research, and a variety of approaches to fusion power have been and are being explored. However, decades ago the world fusion community decided that the most promising magnetic approach was the tokamak plasma confinement concept in which superconducting magnets are used to hold hot fusion plasma in a toroidal (donut) configuration.

Since a power-producing tokamak was understood to be very complex and expensive, a number of countries decided to develop a prototype together. It is called ITER and was initially supported by the United States, the Soviet Union, the European Union, and Japan. Later China, South Korea, and India joined the project, and the 500 MW ITER was formally launched in 2007 to be built in France. ITER is a 30-meter-tall device that will weigh over 20,000 tons and include roughly a million parts. The project has already encountered significant cost overruns and delays, and completion is now planned for 2027, about a decade later than the original target.

As this analysis will show, tokamak fusion power will almost certainly be a commercial failure, which is a tragedy in light of the time, funds, and effort so far expended. However, this particular failure does not mean that fusion power is a dead end. Research is under way on other technological approaches, which can benefit from the lessons learned from the tokamak experience. First we must understand where the tokamak approach went off the tracks.

Market realities

Electric utilities will almost certainly be the eventual adopters of fusion power systems aimed at producing electric power, so it is essential to view fusion technologies from their perspective. In 1994, sensing progress toward a potentially viable fusion power system, the Electric Power Research Institute (EPRI), the research arm of the U.S. utility industry, convened a panel of utility technologists to develop “Criteria for Practical Fusion Power Systems.” Noting that “Fusion power’s potential benefits to humanity and the environment are immense,” the report observed that “as the technology is developed and refined, a vision of fusion power plant buyer requirements is essential to providing a marketable product.” EPRI identified three major interrelated criteria for fusion power success:

Economics: “To compensate for the higher economic risk associated with new technologies, fusion plants must have lower life-cycle costs than competing proven technologies available at the time of (fusion) commercialization.”

Regulatory Simplicity: “Important directions and considerations include: Avoidance of any need for separating the plant from population centers …. Minimal need for engineered safety features …. Minimal waste generation …. Minimal occupational exposure to radiation in plant operation, maintenance, and waste handling activities.”

Public Acceptance: “A positive public perception can best be achieved by maximizing fusion power’s environmental attractiveness, economy of power production, and safety.”

Because the advent of fusion power was not imminent in 1994, EPRI noted, “It is not practical to assign values to these criteria for two reasons. First, because the world of tomorrow will be different—social, regulatory, and energy issues will pose moving targets. Second, there are potential tradeoffs among many of the factors.”

Fusion is sometimes promoted as an alternative to light water nuclear fission plants, so I use them as a reference point in assessing how well tokamak designs meet the EPRI criteria. This makes sense because the U.S. Nuclear Regulatory Commission (NRC), which is responsible for licensing and oversight of fission facilities, declared in 2009 that it has jurisdiction over fusion plants.

It is important to note that nuclear fission power's acceptance in today's world is mixed, a situation that may or may not change in the future. Because acceptance of fission power is uneven, a conceptual fusion power system will clearly have to be more attractive than fission if it is to meet the EPRI criteria at some future date. A close look at the inherent characteristics of tokamak fusion reveals how poorly it compares with current fission reactors and with the EPRI criteria.

Economics

Both fission and DT fusion power plants are capital-intensive with low fuel costs, so I begin by considering reactor core capital costs, neglecting balance-of-plant considerations for the time being. For a rough estimate, I use the general rule of thumb that the relative mass of materials in systems of similar capability is a rough proxy for their relative cost.

In 1994, technologists at the Lawrence Livermore National Laboratory (LLNL) compared the ITER core, as it then existed, with the core of the Westinghouse AP-600 advanced nuclear reactor, a plant of comparable power. Considering the cores of the two systems was and is a reasonable basis for comparison, since the nuclear core is the heat source for a fission power plant, and ITER is the prototype of the heat source for a tokamak power plant. LLNL calculated that the mass of the ITER tokamak was over 60 times that of the comparable fission reactor core. Although the ultimate cost ratio will not be exactly the same, there can be no doubt that the tokamak core will be dramatically more expensive than the fission core, and therefore that tokamak power plant costs would be dramatically higher than fission power costs. In fact, the situation is worse when balance-of-plant costs are considered, because ITER has vacuum, plasma heating, and cryogenic systems that the AP-600 does not.
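As a back-of-the-envelope illustration of how the mass rule of thumb converts the LLNL comparison into a cost comparison, consider the sketch below; the 60-to-1 mass ratio is the figure cited above, and the fission core cost is a placeholder unit, not an actual AP-600 figure.

    # Rule-of-thumb cost proxy: for systems of similar capability, relative mass
    # of materials approximates relative cost (see discussion above).
    mass_ratio = 60.0            # ITER core mass / AP-600 core mass (LLNL, 1994)
    fission_core_cost = 1.0      # placeholder unit of cost, not an actual figure
    tokamak_core_cost = fission_core_cost * mass_ratio
    print(tokamak_core_cost)     # ~60 cost units, before balance-of-plant additions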

The likelihood that a tokamak would be prohibitively expensive is supported by the experience of ITER thus far. The current estimate for the cost of the project is over $50 billion, about five times early estimates, and the project is still more than 10 years from expected completion. No one will be shocked if the actual cost is much higher. So on a cost basis, a utility faced with a choice between a fission plant and a tokamak would clearly prefer the fission plant.

Because the ITER central organization does not control the costs incurred by the seven ITER partners, the actual cost of ITER is extremely difficult to determine. Each partner is committed to delivering certain pieces of hardware but is under no obligation to publish its costs or to convert them to dollars. Suffice it to say that ITER costs have escalated dramatically in spite of various scope reductions.

The situation looks even worse when one considers the likely operation and maintenance (O&M) costs for a tokamak. The device is inherently large and complex, so that any disassembly and reassembly will be difficult and expensive. On top of that, virtually all reactor components will quickly become radioactive due to neutron activation and widespread tritium contamination, which will exist in abundance, since tritium tends to readily diffuse through most materials, particularly when they are hot. This means that most O&M will have to be conducted remotely, adding significantly to cost. The bottom line is that tokamak economics are inescapably very negative.

Regulation

The NRC will regulate fusion power plants. The NRC has public safety as its primary concern and must take into consideration even remote accident possibilities. The NRC requires all plants it oversees to be prepared for “A postulated accident that a nuclear facility must be designed and built to withstand without loss to the systems, structures, and components necessary to ensure public health and safety.”

Once potential accident scenarios have been identified, regulators require that proposed facilities provide safety in depth to ensure that there is no reasonable chance that even obscure failures will harm the public. Regulatory actions typically involve adding features to proposed designs to minimize and contain potential accidents within facility boundaries, often at considerable cost.

In the case of fission reactors, safety features are legion. Externally, the most noticeable safety feature is the massive building surrounding the reactor vessel, aimed at providing a layer of protection that can contain hazards created by internal system failures. According to the NRC, the nuclear reactor building is “a gas-tight shell or other enclosure around a nuclear reactor to confine fission products that otherwise might be released to the atmosphere in the event of an accident. Such enclosures are usually dome-shaped and made of steel-reinforced concrete.”

The NRC is not alone in its caution. The electric utilities themselves are keenly interested in preventing accidents because of the potentially serious human and economic costs.

The safety risks of a tokamak reactor have similarities and differences with fission reactors. Tokamak reactors will be far from risk-free. DT fusion reactions emit copious quantities of very energetic neutrons, which will damage materials near the plasma region and induce significant levels of radioactivity in adjacent structural materials. Accordingly, a tokamak power system will very quickly become highly radioactive and contaminated with tritium.

The levels of induced radioactivity will be influenced by the choice of reactor structural materials. Decades ago, 316 stainless steel (SS) was proposed but later abandoned in favor of materials in which induced radioactivity would be reduced. Of greatest current interest is reduced-activation ferritic/martensitic (RAFM) steel. Also mentioned are vanadium (V) and silicon carbide (SiC), both of which would require extensive materials development programs to establish their viability for fusion applications. Although induced radioactivity would be reduced with RAFM, V, or SiC, it would not be eliminated. Moreover, their use would significantly increase plant costs, because these materials are more expensive than SS and there is far less industrial experience with them.

No matter what materials of construction are chosen, there will be large amounts of induced radioactivity and neutron-induced damage, particularly close to the plasma. Over time, radiation damage will render some system components structurally brittle, requiring replacement. Because of the complex geometry of a tokamak fusion reactor, major component replacement will be very time-consuming, and the attendant long reactor downtimes will increase power costs.

Finally, it should be noted that tritium will be present throughout the core structure and the surrounding regions of a tokamak reactor at levels of concern for human safety, because tritium readily diffuses through most materials, particularly at the high temperatures at which a tokamak reactor will operate.

Tokamak plasmas are not benign. As the European Fusion Network acknowledged, “Tokamaks operate within a limited parameter range. Outside this range sudden losses of energy confinement can occur. These events, known as disruptions, cause major thermal and mechanical stresses to the structure and walls.” Disruptions have been identified as a major problem to the design and operation of future tokamak reactors.

As reported at the 2011 Sherwood Conference, in the case of ITER, “…local thermal loads during plasma disruptions significantly (10 times!) exceed the melting threshold of divertor (waste dump) targets and FW (first wall) panels. A reliable Disruption Mitigations System (DMS) must be developed and installed in ITER prior to the full scale operation….” According to a 2013 ITER Newsline, “ITER, the world’s first reactor-scale fusion machine, will have a plasma volume more than 10 times that of the next largest (existing) tokamak, JET.”

Further, according to Columbia University researchers in 2011, “Disruptions are one of the most troublesome problems facing tokamaks today. In a large-scale experiment such as ITER, disruptions could cause catastrophic destruction to the vacuum vessel and plasma-facing components. There are two primary types of disruptions…which have different effects on the tokamak and need to be addressed individually.”

Although various mitigation options are under consideration, none can realistically be expected to be 100 percent foolproof. Accordingly, tokamak disruptions will clearly be of concern to both regulators and potential utility operators.

Another potential problem is the reliability of the magnets that contain the plasma. It is well known that superconducting (S/C) magnets can accidentally quench, which means suddenly “go normal” with a large release of stored energy. During a quench, a large S/C magnet can be damaged by high voltage, high temperature, and sudden large forces. Although magnets are designed to withstand an occasional accidental quench, repeated quenches can shorten their useful lives.

Small S/C magnets are widely used in magnetic resonance imaging machines, nuclear magnetic resonance equipment, and mass spectrometers. These systems are routinely stable and well behaved. Larger S/C magnets are used in particle accelerators, where difficulties have occurred and quenches are considered a "fairly routine event," according to a 2008 article in Fermilab's Symmetry: Dimensions of Particle Physics. For example, a September 2008 incident in the Large Hadron Collider quenched about 100 bending magnets and led to the venting and loss of roughly six tons of liquid helium coolant. The escaping vapor expanded with explosive force, damaging over 50 superconducting magnets and their mountings.

At the Fermilab particle accelerator, the Symmetry article reports, “a quench generates as much force as an exploding stick of dynamite. A magnet usually withstands this force and is operational again in a few hours after cooling back down. If repair is required, it takes valuable time to warm up, fix, and then cool down the magnet—days or weeks in which no particle beams can be circulated, and no science can be done.”

Events like these in accelerators are often caused by particle beams striking chamber walls, creating sudden, localized heating. Disruptions in tokamaks might provide similar triggers, but they are not the only events that can initiate quenching. To date, quenches have occurred on at least 17 occasions in tokamak experiments constructed with S/C magnets, due to a number of factors including fast current variations, vacuum loss, subsystem failures, operator errors, and mechanical failure. Some failures can be avoided relatively easily, whereas others can require costly magnet and magnet casing replacements. With a structurally robust core containment vessel, such failures would not endanger the public.

The ITER cryogenic system will be the largest concentrated cryogenic system in the world. ITER designers are mindful of quench potential, and in 2007 the ITER organization commented as follows:

Despite 23,000 tons of steel, the ITER machine won’t be a rigid, unmoving block. As the magnets are cooled down progressively, or as they are powered up according to ITER’s plasma scenarios, the machine will “breathe” and move. Quenches may occur as the result of mechanical movements that generate heat in one part of the magnet. Variations in magnetic flux or radiation coming from the plasma can also cause quenches, as well as issues in the magnet cryogenic coolant system.

During a quench, temperature, voltage, and mechanical stresses increase—not only on the coil itself, but also in the magnet feeders and the magnet structures. A quench that begins in one part of a superconducting coil can propagate, causing other areas to lose their superconductivity. As this phenomenon builds, it is essential to discharge the huge energy accumulated in the magnet to the exterior of the Tokamak Building. Magnet quenches aren’t expected often during the lifetime of ITER, but it is necessary to plan for them. “Quenches aren’t an accident, failure or defect—they are part of the life of a superconducting magnet and the latter must be designed to withstand them…”

Restarting a superconducting tokamak will be time-consuming. In the case of the Chinese Experimental Advanced Superconducting Tokamak (EAST), it took about 18 days to cool all coils from room temperature to 4.5 kelvin after a quench that occurred in December 2006. ITER and subsequent tokamak power reactors are much larger and will certainly take much longer to restart.

If a quench in ITER were to cause all of its magnets to go normal, the magnetic energy released would be over 40 gigajoules, the equivalent of roughly ten tons of TNT. How fast that energy is released depends on a number of factors, and regulators will require design features to minimize external damage.
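To put that figure in context, the conversion below is a simple check using the standard convention that one ton of TNT corresponds to 4.184 gigajoules; the 40 GJ stored-energy figure is the one cited above.

    # Stored magnetic energy expressed in TNT equivalent, using the standard
    # convention of 1 ton TNT = 4.184 GJ. The 40 GJ figure is cited above.
    stored_energy_gj = 40.0
    gj_per_ton_tnt = 4.184
    tnt_tons = stored_energy_gj / gj_per_ton_tnt
    print(round(tnt_tons, 1))   # ~9.6, i.e., roughly ten tons of TNT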

Finally, and surprisingly, there is a potential fire hazard associated with an ultralow-temperature helium release. According to a 2008 University of Pittsburgh safety manual: "The cryogenic gases are not flammable; however, the extreme cold that exists during and immediately after a quench may cause air to condense and create liquefied oxygen on surfaces. Any liquid dripping from cold surfaces should be presumed to be enriched oxygen and treated as a potential fire hazard." Although the chances of an associated fire are likely small, they are not zero, so regulators will require related safeguards. On the basis of decades of experience with S/C magnets, the problem of quenching is not likely ever to be completely eliminated, so regulators will plan and regulate on the expectation that quenches will occur.

Because of the potential for significant explosive events in a tokamak power reactor based on an ITER-like core, regulators are virtually certain to require a major containment building to control the extremes of such events. Since the containment building for a tokamak reactor would likely be many times larger than that for a fission reactor of comparable power level, such a building will be extremely expensive. Without a detailed design that would pass regulatory scrutiny, the cost of that tokamak reactor building cannot be easily estimated.

When imagining the hazards that regulators will anticipate, it is worth considering some of the guidance for nuclear fission reactors. Hazards that must be considered include, but are not limited to, loss-of-coolant accidents; failures in steam system piping; breaks in lines connected to the reactor coolant pressure boundary; internal missiles, fires, and flooding; human-origin hazards such as aircraft crashes, explosions of combustible fluid containers, and terrorist attacks; impacts of external missiles; and natural hazards such as earthquakes, hurricanes, floods, tornados, and blizzards.

Of particular concern will be an aircraft collision with a tokamak fusion power plant. According to a 2014 report by the Congressional Research Service, "Nuclear power plant vulnerability to deliberate aircraft crashes has been a continuing issue. After much consideration, NRC published final rules on June 12, 2009, to require all new nuclear power plants to incorporate design features that would ensure that, in the event of a crash by a large commercial aircraft, the reactor core would remain cooled or the reactor containment would remain intact, and radioactive releases would not occur from spent fuel storage pools." In light of the sensitivities to plasma disruptions and S/C magnet quenches already noted, it is difficult to envision a tokamak fusion power plant not being significantly damaged by an aircraft collision. In fact, an aircraft smaller than a commercial airliner may well be sufficient to set off a series of events in which many of the S/C magnets would go normal, releasing stored energy, tritium, and induced radioactivity. The containment already described would have to be made dramatically stronger, at major cost, to have even a reasonable probability of meeting NRC standards.

It is beyond the scope of this analysis to estimate the cost of the regulator-required building(s) needed to contain the most extreme but conceivable accidents, because a complete system redesign would be required to minimize their size. Although a tokamak reactor containment structure would presumably have to withstand a smaller maximum energy release than a fission reactor containment, it is reasonable to assume that such a building will be very expensive because of its huge size. Related costs do not seem to have been factored into ITER planning, because a containment building has not thus far been required.

An essential element of ITER and tokamak power reactors is the divertor, a device at the bottom and/or top of the plasma chamber that collects waste particles and impurities while the reactor is operating. Divertors have been used in tokamak experiments for a long time but have not operated for extended periods with hot DT plasmas in which there is significant fusion energy production.

When DT fusion reactions occur, energetic helium nuclei are produced, which sooner or later will strike the divertor plate, where their energy is recovered and where the resulting helium gas can be readily pumped out of the system. Since the flux of plasma striking a divertor will be very energetic, divertors will operate at very high temperatures, so tungsten has been the usual material of choice.

Recent research at the University of Wisconsin indicates that no solid material, including tungsten, can operate under expected ITER conditions for a reasonable period of steady-state operation. The problem is that energetic helium nuclei become buried in the divertor material, causing surface morphology changes, including the formation of blisters. These surface changes have been found to produce material losses greatly exceeding previous estimates, resulting in an unacceptable amount of tungsten dust, which can quench the fusion plasma or act as a mobile source of radioactive contamination. These recent results may not hinder ITER operation, because ITER is not expected to operate for long periods of time. However, the problem would definitely hinder a tokamak power reactor, where long-term operation is essential. Some researchers have proposed using a liquid metal instead of a solid, but its viability has yet to be established.

Another challenge is that many in the U.S. government have been troubled by the continuing escalation of ITER costs and its lengthening schedule. Recently, the Energy and Water Development Subcommittee of the Senate Appropriations Committee recommended that the United States withdraw from the ITER project. The recommendation did not survive the full appropriations process, but it does not bode well for future ITER funding.

For over 50 years, the public and governments have been told very positive things about fusion power. Fusion is indeed the fundamental source of energy in the universe, powering the sun and the stars. Fusion has been heralded as the ultimate solution to humankind's energy needs because of its essentially infinite fuel supply and its inherent cleanliness and safety.

Tokamak fusion, as envisioned by ITER and according to the foregoing, will not be close to being economic and has inherent safety and radioactivity problems. As ITER tokamak realities become more widely known, it is conceivable that the public will feel that it has been lied to by scientists and governments. Accordingly, a public backlash could result. Although understandable, it would be unfortunate, because there are other approaches to fusion power that may hold great hope for the future.

Lessons for future fusion research

The difficulties associated with the ITER-like tokamak approach to fusion power are significant, many would say overwhelming. Although pursuing this ultimately dead-end approach consumed significant resources, tokamak research and development experience can provide important lessons for researchers in their quest for other, more attractive approaches to fusion power. Development of a full list of lessons is beyond the scope of this analysis, but a few conclusions can be drawn.

First, the EPRI Criteria for Practical Fusion Power Systems should be mandatory reading, and a subject of periodic discussion, for all fusion research personnel and managers. There is no question that a successful fusion power concept must be economically viable, preferably superior to competing electric power options, e.g., renewables, nuclear, natural gas, and coal. Managerially, that requires a strong, continuing engineering design function that analyzes evolving physics concepts and challenges those whose reactor embodiments show potentially significant weaknesses.

Second, the inherently large size required in the tokamak approach is a significant disadvantage because of the time and resources required to attain important milestones. Concepts that are inherently small can progress more rapidly and at lower cost.

Third, plasma configurations that easily or inherently disrupt are not desirable.

Fourth, concepts that involve magnetic fields should avoid magnet systems that can easily quench. S/C magnet quenching is hazardous, disruptive, expensive, and time-consuming. If S/C magnets are to be used, configurations that are inherently more stable should be favored.

Fifth, although the preceding did not delve deeply into the multitude of materials issues in ITER/tokamak power, the use of existing industrial materials is always a positive. The fewer new technologies that must accompany the introduction of a basically new technology, the better.

I am reminded of the history of fission nuclear power. A number of interesting and exotic concepts were developed and pursued, many extensively. It took the pragmatic Admiral Hyman Rickover to recognize the many inherent challenges associated with emerging nuclear technology. He chose reactor configurations that were in many ways the least sophisticated. He succeeded for the Navy application, and his concepts won over almost all others for commercial electric power application. A fusion concept that initially simply boils water may not sound very exotic, but it may well facilitate the introduction of a new fusion technology. As the saying goes: “The best can be the enemy of the good.”

Finally, the concerns of likely regulators and potential utilities must be seriously considered relatively early in the development of any fusion concept. The longer those concerns are delayed, the more serious the potential upset.

Robert L. Hirsch (RLHirsch@comcast.net) is senior energy advisor at Management Information Systems, Inc., in Washington, DC, and a consultant in energy, technology, and management. He headed the federal fusion program from 1972 to 1976.