Plutonium Production Reactors
The first nuclear reactor, CP-1, went critical on 2 December 1942 in a squash court under Stagg Field at the University of Chicago. Construction of CP-1 had begun less than a month before criticality was achieved; the reactor used lumped uranium-metal fuel elements moderated by high-purity graphite. Within two years the United States scaled up reactor technology from this essentially zero-power test bed to the 3.5 MW (thermal) X-10 reactor built at Oak Ridge, Tennessee, and then again to the 250 MW (thermal) production reactors at Hanford. The Hanford reactors supplied the plutonium for the Trinity test and for the weapon dropped on Nagasaki. Clearly, reactor technology does not stress the capabilities of a reasonably well-industrialized state at the end of the twentieth century.
Some problems did arise with the scale-up to hundreds of megawatts: radiation damage distorted the crystal structure of the graphite lattice, causing some deformation, and the buildup of a neutron-absorbing xenon isotope (Xe-135) poisoned the fission reaction. The latter problem was curable because of the foresight of the DuPont engineers, who had built the reactor with many additional fuel channels; when these were loaded, the added reactivity was enough to offset the neutron absorption by the xenon fission product.
Finally, the problem of spontaneous neutron emission by the Pu-240 produced in reactor plutonium became apparent as soon as the first samples of Hanford output were supplied to Los Alamos. The high risk of nuclear pre-initiation associated with Pu-240 forced the abandonment of a gun-assembled plutonium weapon and led directly to the adoption of an implosion design.
Since each fission produces on average only slightly more than two neutrons, the neutron "economy" must be managed carefully to leave enough neutrons to irradiate useful quantities of U-238; this requires good instrumentation and an understanding of reactor physics. Note, however, that during the Manhattan Project the United States was able to scale an operating 250-watt reactor up to a 250-megawatt production reactor. Although the instrumentation of the day was far less sophisticated than that in use today, the scientists working the problem were exceptional. A typical production reactor produces about 0.8 atoms of plutonium for each nucleus of U-235 that fissions.
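The 0.8-atoms-per-fission figure can be turned into a back-of-envelope production rate. The sketch below is an illustration, not an operational model: it assumes the standard textbook value of roughly 200 MeV released per fission and treats all product atoms as Pu-239.

```python
# Back-of-envelope estimate of plutonium output from reactor thermal power.
# Assumptions: ~200 MeV released per fission (textbook value) and the
# 0.8 Pu atoms per U-235 fission conversion ratio quoted above.

MEV_PER_FISSION = 200.0        # approximate energy release per fission
JOULES_PER_MEV = 1.602e-13
PU_ATOMS_PER_FISSION = 0.8     # conversion ratio cited in the text
AVOGADRO = 6.022e23
PU239_MOLAR_MASS_G = 239.0
SECONDS_PER_DAY = 86400

def plutonium_grams_per_day(thermal_watts: float) -> float:
    """Rough grams of plutonium produced per day at a given thermal power."""
    fissions_per_second = thermal_watts / (MEV_PER_FISSION * JOULES_PER_MEV)
    pu_atoms_per_day = fissions_per_second * PU_ATOMS_PER_FISSION * SECONDS_PER_DAY
    return pu_atoms_per_day / AVOGADRO * PU239_MOLAR_MASS_G

# A 250 MW(t) Hanford-class reactor works out to roughly 0.2 kg per day.
print(f"{plutonium_grams_per_day(250e6):.0f} g/day")  # → 214 g/day
```

The result is consistent with the widely quoted rule of thumb of a bit under one gram of plutonium per megawatt-day of thermal operation.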
A typical form of production-reactor fuel is natural uranium metal encased in a simple steel or aluminum cladding. Because uranium metal is not as dimensionally stable under irradiation as the uranium oxide used in high-burnup fuel, reactors fueled with uranium metal must be confined to very low-burnup operation, which is not economical for electricity production. This operational restriction results in plutonium with only a small admixture of the undesirable isotope Pu-240. Thus, a reactor using metallic fuel is almost certainly intended to produce weapons-grade plutonium, and operation of such a reactor is a strong indicator that proliferation is occurring.
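The link between burnup and weapons usability comes down to the Pu-240 fraction. The sketch below classifies material using the isotopic-grade bands commonly cited in open U.S. DOE literature; the threshold values are an assumption drawn from that literature, not from the text above.

```python
# Classify plutonium by Pu-240 isotopic fraction, using the grade bands
# commonly cited in open DOE literature (assumed thresholds, in percent):
#   < 3%  super-grade, < 7%  weapons-grade, 7-19% fuel-grade, >= 19% reactor-grade.

def plutonium_grade(pu240_percent: float) -> str:
    """Return the conventional grade label for a given Pu-240 percentage."""
    if pu240_percent < 3.0:
        return "super-grade"
    if pu240_percent < 7.0:
        return "weapons-grade"
    if pu240_percent < 19.0:
        return "fuel-grade"
    return "reactor-grade"

# Low-burnup metal fuel yields material well inside the weapons-grade band,
# while typical high-burnup power-reactor discharge falls in reactor-grade.
print(plutonium_grade(6.0))   # → weapons-grade
print(plutonium_grade(24.0))  # → reactor-grade
```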
A heavy water reactor (HWR) would be based on a low-pressure, low-temperature application of nuclear fission technology specifically designed to produce plutonium [or tritium]. The reactor vessel and cooling-system configuration, with primary and secondary cooling loops, would be similar to that used in commercial light water reactor power technology. The HWR would use heavy water as both reactor coolant and moderator: circulated through the core for cooling and moderation, the heavy water also passes through heat exchangers external to the reactor tank, where the heat is in turn carried away by the secondary cooling system. The heavy water in the tank surrounding the fuel represents the bulk moderator.