
ASIM is an agent-based modeling framework that simulates the structure, dynamics, and emergent behaviors of large-scale, Internet-like complex networks. By incorporating factors such as traffic, geography, economics, policies, and security threats, it enables more accurate analysis and prediction of real-world network phenomena.
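
To give a flavor of how simple per-agent rules can produce emergent, Internet-like structure, the sketch below grows a toy graph by preferential attachment, where new nodes are more likely to link to already well-connected nodes. It is not ASIM and omits the traffic, geographic, economic, policy, and security factors ASIM models; the sizes and rules are invented purely for illustration.

```cpp
// Hedged sketch (not ASIM): a toy preferential-attachment model that grows an
// Internet-like graph in which well-connected nodes attract new links, one of
// the simplest mechanisms behind heavy-tailed degree distributions in real
// network topologies. All sizes and rules here are illustrative only.
#include <cstdio>
#include <random>
#include <vector>

int main() {
  const int n_nodes = 1000;            // final number of nodes (illustrative)
  std::mt19937 rng(7);

  // Endpoint list: each edge contributes both endpoints, so sampling a
  // uniform entry picks an existing node with probability proportional to
  // its degree (classic preferential attachment).
  std::vector<int> endpoints = {0, 1};   // start from a single edge 0-1
  std::vector<int> degree(n_nodes, 0);
  degree[0] = degree[1] = 1;

  for (int v = 2; v < n_nodes; ++v) {
    std::uniform_int_distribution<size_t> pick(0, endpoints.size() - 1);
    int target = endpoints[pick(rng)];   // degree-biased choice of neighbor
    endpoints.push_back(v);
    endpoints.push_back(target);
    degree[v] += 1;
    degree[target] += 1;
  }

  // Report the most-connected "hub" node that the process produces.
  int max_deg = 0, hub = 0;
  for (int v = 0; v < n_nodes; ++v)
    if (degree[v] > max_deg) { max_deg = degree[v]; hub = v; }
  std::printf("hub node %d has degree %d of %d nodes\n", hub, max_deg, n_nodes);
  return 0;
}
```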

Berkeley UPC is a portable, high-performance implementation of the Unified Parallel C (UPC) language for large-scale parallel machines, using GASNet for efficient communication across shared- and distributed-memory systems.

Berkeley Lab Checkpoint/Restart (BLCR) for Linux is a hybrid kernel/user-level checkpoint/restart implementation designed to robustly checkpoint and restart parallel applications, particularly MPI codes, without requiring modifications to application code, while remaining compatible with SciDAC Scalable Systems Software.

The Berkeley Container Library (BCL) is a set of generic, cross-platform, high-performance data structures for irregular applications, including queues, hash tables, Bloom filters, and more. BCL is written in C++ using an internal DSL called the BCL Core, which provides one-sided communication primitives such as remote get and remote put. The BCL Core has backends for MPI, OpenSHMEM, GASNet-EX, and UPC++, allowing BCL data structures to be used natively in programs written in any of these programming environments.
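
The minimal sketch below shows the style of program BCL targets: a hash map whose buckets are distributed across all ranks, accessed with one-sided operations. The names follow published BCL examples (BCL::init/finalize, BCL::rank, BCL::HashMap with insert/find), but exact headers and signatures may differ between releases, so treat this as illustrative rather than a drop-in program.

```cpp
// A minimal BCL-style sketch, assuming a hash-map API along the lines of the
// published BCL examples. Exact signatures may differ between BCL releases;
// consult the library headers before compiling.
#include <cstdio>
#include <string>

#include <bcl/bcl.hpp>
#include <bcl/containers/HashMap.hpp>

int main(int argc, char** argv) {
  BCL::init();   // starts whichever backend BCL was built with
                 // (MPI, OpenSHMEM, GASNet-EX, or UPC++)

  // A hash map whose buckets are distributed across all ranks.
  BCL::HashMap<std::string, int> counts(1024);

  // Each rank writes its own entry with a one-sided remote put; the rank that
  // owns the destination bucket does not post a matching receive.
  counts.insert("rank-" + std::to_string(BCL::rank()), (int) BCL::rank());

  BCL::barrier();

  // A lookup is a one-sided remote get from whichever rank owns the bucket.
  // (Iterator-style return value assumed here; check the BCL headers.)
  if (BCL::rank() == 0) {
    auto probe = counts.find("rank-0");
    std::printf("lookup %s\n", probe != counts.end() ? "succeeded" : "failed");
  }

  BCL::finalize();
  return 0;
}
```

Because all communication is expressed through these one-sided primitives, the same data-structure code can run unchanged over any of the BCL Core backends.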

AQT is a collaborative facility that designs, fabricates, and operates superconducting quantum processors, enabling DOE scientists to co-develop and implement quantum algorithms for solving challenging problems in optimization, materials science, and high-energy physics on current noisy intermediate-scale quantum (NISQ) hardware.

CORVETTE (Correctness Verification and Testing of Parallel Programs) develops advanced tools for correctness verification and bug detection in hybrid and large-scale parallel programs, enabling precise, scalable identification of concurrency errors and non-determinism across diverse programming models and architectures.

The Dynamic Exascale Global Address Space Programming Environments (DEGAS) project developed next-generation programming models, runtime systems, and tools for exascale computing, advancing scalable, resilient, and energy-efficient Partitioned Global Address Space (PGAS) environments with enhanced programmability, performance portability, and interoperability for diverse scientific applications.

The FastOS and Tessellation projects pioneered new operating system architectures and resource management strategies for manycore and exascale systems, enabling flexible, partitioned, and energy-aware environments that support high-performance and client computing on future large-scale and heterogeneous hardware.

The Intel Parallel Computing Center: Big Data Support on HPC Systems project aims to redesign and optimize data analytics frameworks—particularly Apache Spark—for high-performance computing (HPC) environments, addressing architectural mismatches between data centers and supercomputers to enable scalable, efficient big data analytics on systems with up to tens of thousands of cores.

PyFloT is a precision tuning tool that helps identify opportunities to safely lower floating-point precision in performance-critical code regions, reducing execution time while maintaining correctness in scientific applications.
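
PyFloT itself instruments real applications; the sketch below only illustrates, with invented kernel names and an invented tolerance, the accept-or-reject test that precision tuning relies on: rerun a candidate region in lower precision and keep the change only if the result stays within an application-level error bound.

```cpp
// Hedged sketch (not PyFloT itself): the kind of check a precision-tuning
// workflow performs -- run a candidate kernel in lower precision and accept
// the change only if the result stays within an application tolerance.
#include <cmath>
#include <cstdio>
#include <vector>

// Reference kernel in double precision.
double dot_fp64(const std::vector<double>& x, const std::vector<double>& y) {
  double s = 0.0;
  for (size_t i = 0; i < x.size(); ++i) s += x[i] * y[i];
  return s;
}

// Candidate: the same kernel with operands and accumulator demoted to float.
float dot_fp32(const std::vector<double>& x, const std::vector<double>& y) {
  float s = 0.0f;
  for (size_t i = 0; i < x.size(); ++i) s += (float) x[i] * (float) y[i];
  return s;
}

int main() {
  const size_t n = 1 << 20;
  std::vector<double> x(n, 1.0 / 3.0), y(n, 3.0);

  double ref  = dot_fp64(x, y);
  double cand = dot_fp32(x, y);
  double rel_err = std::fabs(cand - ref) / std::fabs(ref);

  // The tolerance is application-specific; 1e-6 here is just an example.
  const double tol = 1e-6;
  std::printf("rel_err = %.3e -> %s fp32 for this kernel\n",
              rel_err, rel_err <= tol ? "accept" : "reject");
  return 0;
}
```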

The ExaBiome project developed scalable, high-performance tools for metagenome assembly and protein analysis that leverage exascale computing to enable rapid, comprehensive analysis of massive and complex microbial community datasets, accelerating discoveries in environmental, agricultural, and medical biotechnology.

SIMCoV is a large-scale computational model that simulates the cell-by-cell spread of respiratory viral infections in the lungs, capturing detailed interactions between lung cells and the immune response to better understand infection dynamics and outcomes.
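
For intuition about what cell-by-cell simulation means, the toy below runs a deliberately simplified agent-based infection model on a small 2-D grid. The states, probabilities, and grid size are invented for illustration only; SIMCoV's actual model is far richer, resolving lung cells and immune responses at scale and in parallel.

```cpp
// Hedged sketch: a toy 2-D caricature of an agent-based infection model.
// Everything here (states, probabilities, grid size) is invented purely for
// illustration and is not SIMCoV's model or parameterization.
#include <cstdio>
#include <random>
#include <vector>

enum class Cell { Healthy, Infected, Dead };

int main() {
  const int N = 64, steps = 50;
  const double p_spread  = 0.25;  // chance an infected cell infects a neighbor
  const double p_cleared = 0.10;  // chance the immune response clears a cell
  std::mt19937 rng(42);
  std::uniform_real_distribution<double> coin(0.0, 1.0);

  std::vector<Cell> grid(N * N, Cell::Healthy);
  grid[(N / 2) * N + N / 2] = Cell::Infected;   // seed one infection

  const int di[4] = {1, -1, 0, 0}, dj[4] = {0, 0, 1, -1};
  for (int t = 0; t < steps; ++t) {
    std::vector<Cell> next = grid;
    for (int i = 0; i < N; ++i) {
      for (int j = 0; j < N; ++j) {
        if (grid[i * N + j] != Cell::Infected) continue;
        // Infected cells may be cleared (a stand-in for the immune response)...
        if (coin(rng) < p_cleared) next[i * N + j] = Cell::Dead;
        // ...and may pass the infection to healthy 4-neighbors.
        for (int k = 0; k < 4; ++k) {
          int ni = i + di[k], nj = j + dj[k];
          if (ni < 0 || nj < 0 || ni >= N || nj >= N) continue;
          if (grid[ni * N + nj] == Cell::Healthy && coin(rng) < p_spread)
            next[ni * N + nj] = Cell::Infected;
        }
      }
    }
    grid = next;
    int infected = 0;
    for (Cell c : grid) infected += (c == Cell::Infected);
    std::printf("step %2d: %d infected cells\n", t, infected);
  }
  return 0;
}
```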

symPACK is a high-performance software tool that solves large systems of equations involving sparse, symmetric matrices, which are common in scientific and engineering problems, and it can leverage graphics processors (GPUs) for even faster results.
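
At the heart of such a solver is a factorization followed by triangular solves; for a symmetric positive-definite matrix this is a Cholesky factorization A = L L^T. The toy below carries out that computation on a tiny dense matrix purely for illustration. It is not symPACK's API; symPACK applies the same mathematics to large sparse matrices distributed over many processors, optionally using GPUs.

```cpp
// Hedged sketch (not symPACK's API): a tiny dense Cholesky factorization and
// triangular solve of A x = b for a symmetric positive-definite matrix A.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
  const int n = 3;
  // A symmetric positive-definite matrix (row-major) and right-hand side.
  std::vector<double> A = {4, 2, 2,
                           2, 5, 3,
                           2, 3, 6};
  std::vector<double> b = {8, 10, 11};

  // Factor A = L * L^T, overwriting the lower triangle of A with L.
  for (int k = 0; k < n; ++k) {
    A[k * n + k] = std::sqrt(A[k * n + k]);
    for (int i = k + 1; i < n; ++i) A[i * n + k] /= A[k * n + k];
    for (int j = k + 1; j < n; ++j)
      for (int i = j; i < n; ++i)
        A[i * n + j] -= A[i * n + k] * A[j * n + k];
  }

  // Forward solve L y = b, then backward solve L^T x = y (in place in b).
  for (int i = 0; i < n; ++i) {
    for (int j = 0; j < i; ++j) b[i] -= A[i * n + j] * b[j];
    b[i] /= A[i * n + i];
  }
  for (int i = n - 1; i >= 0; --i) {
    for (int j = i + 1; j < n; ++j) b[i] -= A[j * n + i] * b[j];
    b[i] /= A[i * n + i];
  }

  // For this system the exact solution is x = [1, 1, 1].
  std::printf("x = [%g, %g, %g]\n", b[0], b[1], b[2]);
  return 0;
}
```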

The Pagoda Project sought to advance high-performance computing by developing state-of-the-art software and infrastructure based on the Partitioned Global Address Space (PGAS) model, in collaboration with partners across industry, government, and academia.