Computer Programs

NAME, COMPUTER, PROBLEM, SOLUTION, RESTRICTIONS, CPU, AUXILIARIES, STATUS, REFERENCES, REQUIREMENTS, LANGUAGE, OPERATING SYSTEM, AUTHOR, MATERIAL, CATEGORIES



Program name | Package id | Status | Status date
---|---|---|---
SCALE 6.2.4 | CCC-0834/06 | Arrived | 24-NOV-2020

Machines used:

Package ID | Orig. computer | Test computer
---|---|---
CCC-0834/06 | Many Computers |


3. DESCRIPTION OF PROGRAM OR FUNCTION

The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

SCALE 6.2 provides many new capabilities and significant improvements of existing features.

New capabilities include:

• Evaluated Nuclear Data File (ENDF)/B-VII.1 libraries (for CE and MG neutronics) with enhanced group structures,

• Neutron covariance data based on ENDF/B-VII.1 and supplemented with ORNL data,

• Covariance data for fission product yields and decay constants,

• Stochastic uncertainty and correlation quantification for any SCALE sequence with Sampler,

• Parallel calculations with KENO,

• Problem-dependent temperature corrections for CE calculations,

• CE shielding and criticality accident alarm system analysis with MAVRIC,

• CE depletion with TRITON (T5-DEPL/T6-DEPL),

• CE sensitivity/uncertainty analysis with TSUNAMI-3D,

• Simplified and efficient LWR lattice physics with Polaris,

• Large scale detailed spent fuel characterization with ORIGAMI and ORIGAMI Automator,

• Advanced fission source convergence acceleration capabilities with Sourcerer,

• Nuclear data library generation with AMPX, and

• Integrated user interface with Fulcrum.

Enhanced capabilities include:

• Accurate and efficient CE Monte Carlo methods for eigenvalue and fixed source calculations,

• Improved MG resonance self-shielding methodologies and data,

• Resonance self-shielding with modernized and efficient XSProc integrated into most sequences,

• Accelerated calculations with TRITON/NEWT (generally 4x faster than SCALE 6.1),

• Spent fuel characterization with 1470 new reactor-specific libraries for ORIGEN,

• Modernization of ORIGEN (Chebyshev Rational Approximation Method [CRAM] solver, API for high-performance depletion, new keyword input format),

• Extension of the maximum mixture number from the previous limit of 2147 to approximately 2 billion,

• Nuclear data formats enabling the use of more than 999 energy groups,

• Updated standard composition library to provide more accurate use of natural abundances.

SCALE website: http://scale.ornl.gov

SCALE 6.2.4 provides enhanced performance and resolves issues reported in the following areas.

Please see https://www.ornl.gov/content/scale-v624 for more information:

• SCALE 6.2.4 addresses an issue in which the Standard Composition Library (STDCOMP) and related functions did not include an input check that certain elements must have user-defined isotopics. The elements listed in Table 7.2.2 have natural abundances that define a default isotopic distribution.

• SCALE 6.2.4 addresses an issue in which DBRC data were not correctly loaded on Windows; with the DBR=2 option there was no indication that the data had not been loaded, and the calculation proceeded without warning.

• An issue has been fixed in SCALE 6.2.4 where including a thermal moderator nuclide (e.g., H-1) at multiple temperatures in the input was not handled correctly. This issue was introduced in 6.2.2 and affects both TRITON and CSAS calculations which utilize KENO as the transport solver.

• Polaris was updated to robustly handle simultaneous density and soluble boron changes in the moderator and coolant.

• An issue has been fixed in SCALE 6.2.4 where ORIGEN omitted Am-241 from the results when calculating (alpha,n) sources for 10 million years of decay.

• An issue has been fixed in SCALE 6.2.4 where ORIGAMI rejected some valid power histories, such as those that arise when modeling a fine-grained power history with no intermittent decay.

• An issue has been fixed in SCALE 6.2.4 where the TRITON swap capability did not function correctly in many scenarios.

• An issue was fixed in SCALE 6.2.4 where MAVRIC could not compute CE responses for nu-fission.

Criticality Safety

SCALE provides a suite of computational tools for criticality safety analysis that is primarily based on the KENO Monte Carlo code for eigenvalue neutronics calculations. Two variants of KENO provide identical solution capabilities with different geometry packages. KENO V.a uses a simple and efficient geometry package sufficient for modeling many systems of interest to criticality safety and reactor physics analysts. KENO-VI uses the SCALE Generalized Geometry Package, which provides a quadratic-based geometry system with much greater flexibility in problem modeling but with slower runtimes. Both versions of KENO perform eigenvalue calculations for neutron transport primarily to calculate multiplication factors (keff) and flux distributions of fissile systems in both CE and MG modes. They are typically accessed through the integrated SCALE sequences. KENO’s grid geometry capability extends region-based features for accumulating data for source or biasing parameter specifications, as well as for tallying results from a calculation for visualization or communication of data into or out of a calculation. Criticality safety analysts may also be interested in the sensitivity and uncertainty analysis techniques that can be applied for code and data validation as described elsewhere in this document.
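The generation-based eigenvalue iteration that KENO performs can be illustrated with a deliberately simplified sketch. This is not SCALE code; the model and all names below are invented for illustration. It treats an infinite homogeneous medium in which every history ends in absorption and an absorption is a fission with probability sigma_f/(sigma_f + sigma_c), so the analytic answer is k_inf = nu * sigma_f / (sigma_f + sigma_c).

```python
import random

def k_inf_monte_carlo(sigma_f, sigma_c, nu, n_start=5000,
                      generations=60, skip=10, seed=12345):
    """Toy analog Monte Carlo eigenvalue iteration for an infinite
    homogeneous medium: every history ends in absorption, and an
    absorption is a fission (yielding nu neutrons on average) with
    probability sigma_f / (sigma_f + sigma_c)."""
    rng = random.Random(seed)
    p_fission = sigma_f / (sigma_f + sigma_c)
    k_per_generation = []
    for _ in range(generations):
        births = 0
        for _ in range(n_start):
            if rng.random() < p_fission:
                # sample an integer number of fission neutrons whose
                # expectation equals the (non-integer) nu
                births += int(nu) + (rng.random() < nu - int(nu))
        k_per_generation.append(births / n_start)
    active = k_per_generation[skip:]  # discard inactive generations
    return sum(active) / len(active)
```

A real criticality code additionally tracks geometry, energy, and a fission-site bank; the skip/active split here mirrors the inactive and active generations of a KENO calculation.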

Radiation Shielding

The Monaco with Automated Variance Reduction using Importance Calculations (MAVRIC) fixed-source radiation transport sequence is designed to apply the MG and CE fixed-source Monte Carlo code, Monaco, to solve problems too challenging for standard, unbiased Monte Carlo methods. The intention of the sequence is to calculate fluxes and dose rates with low uncertainties in reasonable times, even for deep penetration problems. MAVRIC is based on the Consistent Adjoint Driven Importance Sampling (CADIS) methodology, which uses an importance map and a biased source that are derived to work together. MAVRIC generates problem-dependent cross section data and then automatically performs a coarse-mesh 3D discrete ordinates transport calculation using Denovo to determine the adjoint flux as a function of position and energy, applying this information to optimize the shielding calculation in Monaco. In the Forward-Weighted CADIS (FW-CADIS) methodology, an additional Denovo calculation is performed to further optimize the Monaco model to obtain uniform uncertainties for multiple tally locations. Several utility modules are also provided for data introspection and conversion.
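The core of the CADIS recipe, a biased source and consistent statistical weights derived from an adjoint (importance) function, can be sketched for a discretized source as follows. This is an illustrative sketch with invented names, not MAVRIC's implementation.

```python
def cadis_biasing(source, importance):
    """Given a discretized source q_i and an adjoint (importance)
    function phi_i, return the CADIS biased source
    q_hat_i = q_i * phi_i / R, with R = sum_j q_j * phi_j, and the
    statistical weights w_i = q_i / q_hat_i = R / phi_i that keep
    the Monte Carlo estimator unbiased."""
    R = sum(q * p for q, p in zip(source, importance))
    biased = [q * p / R for q, p in zip(source, importance)]
    weights = [R / p for p in importance]
    return biased, weights
```

Particles born in important regions are sampled more often but carry proportionally lower weight, so the product of biased probability and weight recovers the original source strength in every cell.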

Activation, Depletion, and Decay

The Oak Ridge Isotope Generation (ORIGEN) code calculates time-dependent concentrations, activities, and radiation source terms for a large number of isotopes simultaneously generated or depleted by neutron transmutation, fission, and radioactive decay. Provisions are made to include continuous nuclide feed rates and continuous chemical removal rates that can be described with rate constants for application to reprocessing or other systems that involve nuclide removal or feed. ORIGEN includes the ability to use MG cross sections processed from standard ENDF/B evaluations. Within SCALE, transport codes can be used to model user-defined systems, and the COUPLE code can be applied to calculate problem-dependent neutron-spectrum-weighted cross sections representative of conditions within any given reactor or fuel assembly and then convert these cross sections into a library to be used by ORIGEN. Time-dependent cross section libraries can be produced to reflect fuel composition variations during irradiation. An alternative sequence for depletion/decay calculations is ORIGEN-ARP, which interpolates pregenerated ORIGEN cross section libraries versus enrichment, burnup, and moderator density. ORIGEN Assembly Isotopics (ORIGAMI) computes detailed isotopic compositions for LWR assemblies containing UO2 fuel by using the ORIGEN code with pregenerated ORIGEN libraries for a specified assembly power distribution. The assembly may be represented by a single lumped model with only an axial power distribution or by a square array of fuel pins with variable pin powers, as well as an axial distribution. Multiple cycles with varying burn times and down times may be used. ORIGAMI produces files containing SCALE and MCNP composition input for material in the burnup distribution, files containing decay heat for use in thermal analysis, and energy-dependent radioactive source for use in shielding calculations. 
A series of 1470 pregenerated burnup libraries for use in ORIGEN and ORIGAMI are provided with SCALE for 61 fuel assemblies for commercial and research reactors.
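The transmutation/decay problem ORIGEN solves is a linear ODE system dN/dt = A N, where A collects decay constants and transmutation rates. ORIGEN itself uses the far more robust CRAM solver; purely as an illustration with invented names, a tiny decay chain can be solved with a naive scaled Taylor-series matrix exponential and checked against the analytic Bateman solution.

```python
import math

def expm(A, t, terms=40):
    """exp(A*t) for a small dense matrix via scaling-and-squaring
    with a truncated Taylor series; adequate only for tiny
    illustrative systems (ORIGEN uses the CRAM solver instead)."""
    n = len(A)
    norm = max(sum(abs(x) for x in row) for row in A) * abs(t)
    s = max(0, int(math.ceil(math.log2(norm)))) if norm > 0 else 0
    h = t / (2 ** s)
    E = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in E]
    for k in range(1, terms):
        # term <- term @ (A*h) / k, accumulating the Taylor series
        term = [[sum(term[i][m] * A[m][j] for m in range(n)) * h / k
                 for j in range(n)] for i in range(n)]
        E = [[E[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    for _ in range(s):  # square back up to the full time step
        E = [[sum(E[i][m] * E[m][j] for m in range(n)) for j in range(n)]
             for i in range(n)]
    return E

def decay_chain(lams, n0, t):
    """Concentrations at time t for a linear chain 1 -> 2 -> ... with
    decay constants lams, solving dN/dt = A N."""
    k = len(lams)
    A = [[0.0] * k for _ in range(k)]
    for i, lam in enumerate(lams):
        A[i][i] = -lam
        if i + 1 < k:
            A[i + 1][i] = lam
    E = expm(A, t)
    return [sum(E[i][j] * n0[j] for j in range(k)) for i in range(k)]
```

COUPLE's role in SCALE is, in effect, to build a problem-dependent A matrix from spectrum-weighted cross sections before a solver of this kind is applied.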

Reactor Physics

The Transport Rigor Implemented with Time-dependent Operation for Neutronic depletion (TRITON) control module provides flexible capabilities to meet the challenges of modern reactor designs by providing 1D pin-cell depletion capabilities using XSDRNPM, 2D lattice physics capabilities using the NEWT flexible mesh discrete ordinates code, or 3D Monte Carlo depletion using KENO V.a or KENO-VI, including CE treatment with problem-dependent temperature corrections. For MG analysis, TRITON implements XSProc to process material input and provide a temperature- and resonance-corrected cross section library. TRITON allows users to input Dancoff factors to account for nonuniform lattices. In all cases, ORIGEN is implemented for depletion and decay calculations. Additionally, TRITON can produce assembly-averaged few-group cross sections for use in core simulators. There are few limitations to the types of systems that can be modeled with TRITON, but the input complexity and long runtimes can be burdensome on users for detailed analysis. Polaris is an optimized tool that produces assembly-averaged few-group cross sections for light water reactor (LWR) analysis with core simulators. Polaris provides simplified input; only a few lines are required to describe the entire model. Polaris uses an MG self-shielding method called the Embedded Self-Shielding Method (ESSM) and a Method of Characteristics (MoC) transport solver. The ESSM approach computes MG self-shielded cross sections using Bondarenko interpolation. The background cross section used in the interpolation is determined by a series of 2D MoC fixed-source calculations, similar to the subgroup method, that does not require explicit celldata input. Additionally, heterogeneous lattices are explicitly treated without the need to externally compute Dancoff factors. Like TRITON, Polaris implements ORIGEN for depletion and decay calculations.
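The Bondarenko interpolation step named above can be sketched as follows: a tabulated self-shielding factor f(sigma0) is interpolated in log(sigma0) and multiplied onto the infinite-dilution cross section. This is an illustrative sketch with an invented table and function name, not Polaris code.

```python
import math

def bondarenko_interp(sigma0_grid, f_grid, sigma0):
    """Interpolate a tabulated Bondarenko self-shielding factor
    f(sigma0) linearly in log(sigma0); the shielded cross section is
    then sigma_eff = f * sigma_infinite_dilution."""
    x = math.log(sigma0)
    xs = [math.log(s) for s in sigma0_grid]
    if x <= xs[0]:
        return f_grid[0]          # fully shielded limit
    if x >= xs[-1]:
        return f_grid[-1]         # infinite-dilution limit
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            w = (x - xs[i]) / (xs[i + 1] - xs[i])
            return f_grid[i] * (1 - w) + f_grid[i + 1] * w
```

In ESSM the background cross section fed to this interpolation is itself obtained from 2D MoC fixed-source calculations rather than from user-supplied celldata.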

Sensitivity and Uncertainty Analysis

SCALE provides a suite of computational tools for sensitivity and uncertainty analysis to (1) identify important processes in safety analysis and design, (2) provide a quantifiable basis for neutronics validation for criticality safety and reactor physics analysis based on similarity assessment, and (3) quantify the effects of uncertainties in nuclear data and physical parameters for safety analysis.

The TSUNAMI-1D, TSUNAMI-2D, and TSUNAMI-3D analysis sequences compute the sensitivity of keff and reaction rates to energy-dependent cross section data for each reaction of each nuclide in a system model. The 1D transport calculations are performed with XSDRNPM, the 2D transport calculations are performed using NEWT, and the 3D calculations are performed with KENO V.a or KENO-VI. The Monte Carlo capabilities of TSUNAMI-3D provide for S/U analysis from either CE or MG neutron transport, whereas the deterministic capabilities of TSUNAMI-1D and TSUNAMI-2D operate only in MG mode. The Sensitivity Analysis Module for SCALE (SAMS) is applied within each analysis sequence to provide the requested S/U data. Whether performing a CE or MG calculation, energy-dependent sensitivity data are stored in group form in a sensitivity data file (SDF) for subsequent analysis. These sequences use the energy-dependent cross section covariance data to compute the uncertainty in the response value.
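Folding sensitivities with covariance data in this way is the familiar "sandwich rule", var(R) = S C S^T. A minimal sketch with invented names (not SAMS code):

```python
def response_uncertainty(S, C):
    """Sandwich rule: given a row vector S of relative sensitivities
    (per group/reaction) and a relative covariance matrix C, return
    the relative standard deviation of the response,
    sqrt(S C S^T)."""
    n = len(S)
    var = 0.0
    for i in range(n):
        for j in range(n):
            var += S[i] * C[i][j] * S[j]
    return var ** 0.5
```

In TSUNAMI the S vector spans every group of every reaction of every nuclide, and C is read from the SCALE covariance libraries.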

TSAR (Tool for Sensitivity Analysis of Reactivity) provides sensitivity coefficients for reactivity differences, and TSUNAMI-IP (TSUNAMI Indices and Parameters) and TSURFER (Tool for Sensitivity and Uncertainty Analysis of Response Functions Using Experimental Results) provide code and data validation capabilities based on sensitivity and uncertainty data.

Sampler is a super-sequence that performs general uncertainty analysis by stochastically sampling uncertain parameters that can be applied to any type of SCALE calculation, propagating uncertainties throughout a computational sequence. Sampler treats uncertainties from two sources: (1) nuclear data and (2) input parameters. Sampler generates the uncertainty in any result generated by any computational sequence through stochastic means by repeating numerous passes through the computational sequence, each with a randomly perturbed sample of the requested uncertain quantities.
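The stochastic sampling loop described above can be sketched generically (invented names, not Sampler's implementation): perturb the uncertain inputs, rerun the model, and take statistics over the result population.

```python
import random
import statistics

def sample_uncertainty(model, nominal, rel_sigma, n=500, seed=7):
    """Sampler-style uncertainty propagation: perturb each input by
    an independent normal factor with the given relative standard
    deviation, rerun the model, and report the mean and standard
    deviation of the result population."""
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        perturbed = [v * rng.gauss(1.0, s)
                     for v, s in zip(nominal, rel_sigma)]
        results.append(model(perturbed))
    return statistics.mean(results), statistics.stdev(results)
```

Because the model is treated as a black box, the same loop propagates uncertainty through any computational sequence, which is precisely what makes Sampler applicable to all of SCALE.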

Nuclear Data

The cross section data provided with SCALE include comprehensive CE neutron and coupled neutron-gamma data based on ENDF/B-VII.0 and ENDF/B-VII.1. These data have been generated with the AMPX codes. The MG data are provided in several energy-group structures optimized for different application areas, including criticality safety, lattice physics, and shielding analysis. The comprehensive ORIGEN data libraries are based on ENDF/B-VII.1 and recent JEFF evaluations, and they include nuclear decay data, neutron reaction cross sections, neutron-induced fission product yields, delayed gamma ray emission data and neutron emission data for over 2,200 nuclides. The photon yield data libraries are based on the most recent ENSDF nuclear structure evaluations. The libraries used by ORIGEN can be coupled directly with detailed and problem-dependent physics calculations to obtain self-shielded, problem-dependent cross sections based on the most recent evaluations. There are no limitations with regard to compositions or energy spectra. SCALE also contains a comprehensive library of neutron cross section covariance data for neutron interactions, fission product yields, and decay data for use in sensitivity and uncertainty analysis with the TSUNAMI codes as well as Sampler.

Material Specification and Cross Section Processing

Cross (X) Section Processing (XSProc) provides material input and MG cross section preparation for most SCALE sequences. XSProc allows users to specify problem materials using easily remembered and easily recognizable keywords associated with mixtures, elements, nuclides, and fissile solutions provided in the SCALE Standard Composition Library. For MG calculations, XSProc provides cross section temperature correction and resonance self-shielding, as well as energy group collapse and spatial homogenization for systems that can be represented in celldata input as infinite media, finite 1D systems, or repeating structures of 1D systems such as uniform arrays of fuel units. Improved resonance self-shielding treatment for nonuniform lattices can be achieved through the use of the Monte Carlo Dancoff (MCDancoff) code, which generates Dancoff factors for generalized 3D geometries for subsequent use in XSProc. Cross sections are generated on a microscopic and/or macroscopic basis as needed. Although XSProc is most often used as part of an integrated sequence, it can be run without subsequent calculations to generate problem-dependent MG data for use in other tools.

Graphical User Interfaces

Fulcrum is a cross platform graphical user interface designed to create, edit, validate and visualize SCALE input, output, and data files. Historically, SCALE has provided several special purpose graphical user interfaces which operate only on specific platforms and are loosely integrated with SCALE's computational and data components. Fulcrum is intended to provide a single user interface that directly integrates with SCALE’s internal resources to provide a consistent experience between Fulcrum and SCALE’s command line interface. Fulcrum provides input editing and navigation, interactive geometry visualization for KENO V.a, KENO-VI, and NEWT, job execution, overlay of mesh results within a geometry view, and plotting of data from most SCALE file formats. An error checking parser interactively identifies poorly constructed input with spelling errors or data entry omissions for all SCALE sequences. The Hierarchical Input Validation Engine (HIVE) will identify allowed data ranges and interdependencies in the input and report inconsistencies to the user. Fulcrum will interactively process standard composition data to produce a mixing table, list expanded input aliases for review, provide an internal listing of input as is required for Sampler material and geometry perturbation analysis, and launch the SCALE sample problems. The layout of panels in Fulcrum is highly configurable to accommodate the preferences of many users.

ORIGAMI Automator, a graphical user interface integrated with Fulcrum, facilitates the quantification of isotopics as a function of time for a large set of fuel assemblies, such as the complete inventory of a spent fuel pool. This tool was developed to support the NRC in severe accident analyses, but it can be adapted to many other uses. Additional user interfaces include the KENO3D interactive visualization program for Windows for solid-body rendering of KENO geometry models, as well as the ExSITE and VIBE interfaces for sensitivity and uncertainty analysis. Several codes provide HTML-formatted output in addition to the standard text output, providing convenient navigation of the computed results through most common Web browsers with interactive color-coded output and integrated data visualization tools.


4. METHODS

The SCALE system consists of easy-to-use analytical sequences, which are automated through control modules to perform the necessary data processing and manipulation of well-established computer codes, referred to as functional modules. Computations with SCALE are typically characterized by the type of analysis to be performed (e.g., criticality, shielding, or lattice physics) and the geometric complexity of the system being analyzed. The user then prepares a single set of input in terms of easily visualized engineering parameters specified in a simplified, free-form format. The analytical sequence is defined by this single input specification. The SCALE control modules use this information to derive additional parameters and prepare input for each of the functional modules necessary to achieve the desired results, especially with the SCALE radiation transport codes that employ discrete ordinates, Monte Carlo, or hybrid methods.
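The control-module pattern described above, a single user input expanded into derived parameters and threaded through a series of functional modules, can be sketched abstractly (invented names, not SCALE's actual driver):

```python
def run_sequence(sequence, user_input, modules):
    """Toy sketch of a control-module driver: the single user input
    is carried as a parameter state, and each functional module in
    the sequence consumes and extends that state in order."""
    state = dict(user_input)          # engineering parameters
    for name in sequence:             # e.g. data processing, transport
        state = modules[name](state)  # module output feeds the next step
    return state
```

The essential point is that the user writes one input; the control module, not the user, prepares the inputs for each functional module in turn.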


6. TYPICAL RUNNING TIME

Running the full set of sample problems takes approximately 12 to 24 hours, depending on the speed of the machine. Running times for individual cases are extremely problem dependent, varying with the sequence used and the cross-section library selected. They range from less than one minute for a simple 1D criticality or depletion/decay problem to several hours for a complex 3D shielding, sensitivity/uncertainty, or 3D Monte Carlo depletion case.


8. RELATED OR AUXILIARY PROGRAMS

DATA LIBRARIES

SCALE Standard Composition Library

56-group cross sections based on ENDF/B-VII.1

238-group cross sections based on ENDF/B-VII.0

252-group cross sections based on ENDF/B-VII.1

27n, 19g coupled cross sections based on ENDF/B-VII.0

28n, 19g coupled cross sections based on ENDF/B-VII.1

200n, 47g coupled cross sections based on ENDF/B-VII.0

200n, 47g coupled cross sections based on ENDF/B-VII.1

ENDF/B-VII.0 continuous energy cross sections for KENO, Monaco and CENTRM

ENDF/B-VII.1 continuous energy cross sections for KENO, Monaco and CENTRM

56- and 252-group neutron cross section covariance data derived from ENDF/B-VII.1 and supplemented with additional data sources

44-group SCALE 6.0/6.1 neutron cross section covariance data

Fission product yield covariance data derived primarily from ENDF/B-VII.1

Decay data covariance library derived primarily from ENDF/B-VII.1

Albedos and weighting functions for use by KENO

Various cross-section, decay, and yield libraries for ORIGEN

ORIGEN-ARP basic cross-section libraries: (list below generated with ENDF/B-VII.1)

Siemens 14x14, 18x18

Westinghouse CE 14x14, 16x16

Westinghouse 14x14, 15x15, 17x17, 17x17 OFA (Optimized Fuel Assembly)

GE 7x7, 8x8, 9x9, 10x10, 8×8-4, 9×9-7, 7×7-0, 8×8-1, 8×8-2, 9×9-2, 10×10-8

ABB 8x8-1

ATRIUM-9 (9x9), ATRIUM-10 (10x10)

SVEA-64 (8x8), SVEA-100 (10x10)

VVER-440 flat enrichment (1.6% – 3.6%)

VVER-440 profiled enrichment, average 3.82%

VVER-440 profiled enrichment, average 4.25%

VVER-440 profiled enrichment, average 4.38%

VVER-1000

CANDU 19-, 28-, and 37-element bundles

AGR (Advanced Gas Cooled Reactor)

Magnox

BWR 7×7, 8×8-1, 8×8-2, 9×9-2, 9×9-9, 10×10-9, 10×10-8, SVEA-64, SVEA-96, and SVEA-100

Mixed oxide (MOX) BWR fuel: 8x8-1, 9x9-0, 9x9-9, 10x10-8, 10x10-9, 7x7-0, 8x8-2, 9x9-2

PWR 14×14, 15×15, 16×16, 17×17, 18×18

Mixed oxide (MOX) PWR fuel: 14x14, 15x15, 16x16, 17x17, 17x17-OFA, 18x18

RBMK


CCC-0834/06, included references:

- W. A. Wieselquist, R. A. Lefebvre, M. A. Jessee, Editors: SCALE Code System, ORNL/TM-2005/39, Version 6.2.4 (April 2020)

- README_SCALE6.2: Getting Started with SCALE 6.2.4 (April 2020)

- D. Wiarda, M. E. Dunn, N. M. Greene, M. L. Williams, C. Celik, L. M. Petrie: AMPX-6: A Modular Code System for Processing ENDF/B, ORNL/TM-2016/43 (April 2016)


11. HARDWARE REQUIREMENTS

SCALE 6.2.4 includes binary executable files for Linux, Mac OS X, and Windows 7+.

Minimum requirements: 2 GB RAM per CPU, 80 GB of disk space + additional space to store output results

Recommended requirements: 4 GB RAM per CPU, 80 GB of disk space + 100 GB of scratch space + additional space to store output results

Production requirement for large models: 64 GB RAM, 80 GB of disk space + 500 GB of scratch space + additional space to store output results


Package ID | Computer language
---|---
CCC-0834/06 | C-LANGUAGE, C++, FORTRAN-90, FORTRAN-95


13. SOFTWARE REQUIREMENTS

Executables included in the SCALE 6.2.4 distributions:

Linux executables created using the GNU GCC-4.8.3

Darwin (MacOS X) executables were created using the GNU GCC-4.8.3

Windows executables were created using the INTEL IFORT, ICC and ICPC 15.0

All executables included are 64-bit architecture.

This version was tested on the following systems.

AMD Opteron running RedHat Enterprise Linux

Intel Xeon running Windows 7 with Intel Compilers.

Intel Mac OS X

Compilers used: GNU GCC-4.8.3 and Intel v15 (ifort, icc, and icpc).

If the user chooses to compile executables, the user will be required to install several GNU General Public License (GPL) open-source third party libraries (TPLs). Many of these libraries support options that are not part of the SCALE 6.2.4 release (parallelism, parallel I/O, high end visualization support, etc.). The following GPL TPLs are required to build SCALE 6.2.4. The software is not included in the installation package, but is available for download:

LAPACK/BLAS http://www.netlib.org/lapack/

HDF5 https://www.hdfgroup.org/

SILO https://wci.llnl.gov/simulation/computer-codes/silo/downloads

ZLIB http://www.zlib.net/


CCC-0834/06

Linux, Mac, and Windows executables

source

data libraries

sample problems


- B. Spectrum Calculations, Generation of Group Constants and Cell Problems
- C. Static Design Studies
- D. Depletion, Fuel Management, Cost Analysis, and Power Plant Economics
- G. Radiological Safety, Hazard and Accident Analysis
- H. Heat Transfer and Fluid Flow
- J. Gamma Heating and Shield Design
- K. Reactor Systems Analysis

Keywords: Monte Carlo method, burnup, calculations, complex geometry, continuous energy, criticality, cross sections, depletion, discrete ordinates, dose, gamma ray source, isotope inventory, multigroup, neutron, plotting, reactor physics, sensitivity analysis, spent fuel characterization, uncertainty analysis, validation.