The challenge for any nuclear data evaluation project is to periodically release a revised, fully consistent and complete library – with all the necessary data and covariances – and to ensure that it is robust and reliable for a variety of applications. Within such evaluation efforts, benchmarking activities are the final crucial step in validating the proposed library. The major data evaluations – JEFF, ENDF/B, JENDL, CENDL, ROSFOND/BROND and TENDL – all aim to provide such a library, and each therefore requires a coherent and efficient benchmarking process. In the past, this has been achieved through ad hoc efforts by many participants, typically using many different benchmarking suites. Such a process is prone to error and misunderstanding, and considerable time can be wasted tracking down, for example, typographical or modelling errors that a more systematic approach would avoid.
The last two decades have seen the rise of tremendous new resources to address this issue: the international handbooks for criticality safety, reactor physics, fuel burnup, and radiation shielding. These peer-reviewed references comprise our best understanding of the benchmark experiments by which we validate the nuclear data that we use for predictions of neutron reactivity, criticality safety, radiation shielding and other aspects of particle transport and, more recently, for predictions of isotopic transmutation for waste disposal and dosimetry. This subgroup is working to (1) provide a methodology for assembling Quality Assured (QA) versions of these inputs for MCNP and other transport codes; (2) provide an initial repository of the major collections of such inputs and begin the QA process for them; and (3) provide the tools necessary to extract standardised information from such validation tests and present it in a harmonised way.
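To illustrate what the extraction tools of item (3) might look like, the following is a minimal sketch in Python of a utility that pulls k-effective and its Monte Carlo uncertainty out of an MCNP output file and reports the calculated-to-experimental (C/E) ratio against the handbook benchmark value. The regular expression targets the combined k-effective summary line printed by MCNP, whose exact wording varies between code versions, so the pattern, the file name and the benchmark values shown here are illustrative assumptions to be adapted, not part of any agreed subgroup tool.

```python
import math
import re
from dataclasses import dataclass
from pathlib import Path

# Assumed pattern for the combined keff summary line in an MCNP output
# file; the exact wording differs between MCNP versions, so adjust it
# to the outputs actually collected in the repository.
KEFF_LINE = re.compile(
    r"final estimated combined .*?keff = (?P<keff>\d\.\d+)"
    r" with an estimated standard deviation of (?P<sd>\d\.\d+)"
)


@dataclass
class CaseResult:
    case: str          # benchmark case identifier, e.g. a handbook label
    keff_calc: float   # calculated keff
    sd_calc: float     # Monte Carlo standard deviation of the calculation
    keff_bench: float  # evaluated benchmark keff from the handbook
    sd_bench: float    # benchmark uncertainty from the handbook

    @property
    def c_over_e(self) -> float:
        """Calculated-to-experimental ratio."""
        return self.keff_calc / self.keff_bench

    @property
    def sd_c_over_e(self) -> float:
        """1-sigma uncertainty of C/E, assuming uncorrelated sources."""
        return self.c_over_e * math.sqrt(
            (self.sd_calc / self.keff_calc) ** 2
            + (self.sd_bench / self.keff_bench) ** 2
        )


def parse_keff(output_file: Path) -> tuple[float, float]:
    """Extract (keff, standard deviation) from an MCNP output file."""
    text = output_file.read_text(errors="replace")
    match = KEFF_LINE.search(text)
    if match is None:
        raise ValueError(f"no combined keff line found in {output_file}")
    return float(match.group("keff")), float(match.group("sd"))


if __name__ == "__main__":
    # Hypothetical output file and benchmark values, for illustration only.
    keff, sd = parse_keff(Path("pu-met-fast-001.out"))
    result = CaseResult("PU-MET-FAST-001", keff, sd, 1.0000, 0.0020)
    print(f"{result.case}: C/E = {result.c_over_e:.5f} +/- {result.sd_c_over_e:.5f}")
```

Applied across a whole suite of handbook cases, records of this kind – one entry per benchmark, carrying C/E and its combined uncertainty in a common schema – are the sort of standardised, harmonised summary that item (3) envisages, allowing candidate libraries to be compared case by case.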