The US Department of Energy explained this week how it intends to put its supercomputing power to work by simulating the fundamental building blocks of the universe.
The electrons, protons, and neutrons that make up atoms, of which all matter is composed, are fairly well understood. However, the more fundamental particles beneath them – the quarks that make up protons and neutrons, the bosons that bind them, and leptons such as the electron itself – remain mysterious and are the subject of ongoing scientific investigation.
A $13 million grant from the Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program aims to expand our understanding of the extraordinarily tiny constituents of atomic particles.
As far as scientists can tell, quarks – and the gluons that hold them together – cannot be broken down any further. They are literally the fundamental building blocks of all matter. Of course, scientists once thought the same about atoms, so who knows where this might lead.
The initiative will involve multiple DoE facilities — including Jefferson, Argonne, Brookhaven, Oak Ridge, Lawrence Berkeley, and Los Alamos National Labs, which will collaborate with MIT and William & Mary — to advance supercomputing methods used to simulate the behavior of quarks and gluons in protons.
The program seeks to answer some big questions about the nature of matter in the universe. "What is the origin of mass in matter? What is the origin of spin in matter?" Robert Edwards, deputy group leader of the Center for Theoretical and Computational Physics at Jefferson Lab, told The Register.
Today, physicists use supercomputers to generate a “snapshot” of the environment inside a proton, and use math to add quarks and gluons to the mix to see how they interact. These simulations are repeated thousands of times and then averaged to predict the behavior of these elementary particles in the real world.
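The repeat-and-average approach described above is a Monte Carlo method. As a rough illustration only – the function names and numbers here are hypothetical stand-ins, not anything from the actual lattice QCD codes – the averaging step looks like this:

```python
import random

def measure_observable(rng):
    # Stand-in for evaluating one "snapshot" of the proton's interior:
    # a noisy measurement of some observable. Real codes compute this
    # from quark and gluon field configurations; here it is just a
    # known value plus Gaussian noise (both values are arbitrary).
    true_value = 0.5
    return true_value + rng.gauss(0, 0.1)

def estimate(n_samples, seed=42):
    # Repeat the measurement thousands of times and average the
    # results, as the article describes.
    rng = random.Random(seed)
    samples = [measure_observable(rng) for _ in range(n_samples)]
    return sum(samples) / n_samples

estimate_10k = estimate(10_000)
```

With 10,000 samples the average lands very close to the underlying value, which is the point of the technique: individual snapshots are noisy, but the mean converges.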
This project, led by the Thomas Jefferson National Accelerator Facility, includes four phases that aim to streamline and accelerate these simulations.
The first two phases will optimize the software used to model quantum chromodynamics – the theory governing how quarks and gluons interact – in order to break the calculations down into smaller chunks and better exploit the even greater degrees of parallelism available on new-generation supercomputers.
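Breaking a lattice calculation into chunks typically means domain decomposition: each node owns a region of the lattice and only exchanges a thin "halo" of boundary values with its neighbors. A toy 1D sketch – the three-point update rule and periodic boundary are illustrative assumptions, not the actual QCD stencil – shows that the chunked result matches the global one as long as the halos are exchanged correctly:

```python
def stencil(left, centre, right):
    # Hypothetical nearest-neighbour update rule (three-point average).
    return (left + centre + right) / 3

def update_chunk(chunk, halo_left, halo_right):
    # Each "node" updates only its own chunk, padded with one halo
    # value from each neighbouring node.
    padded = [halo_left] + chunk + [halo_right]
    return [stencil(padded[i - 1], padded[i], padded[i + 1])
            for i in range(1, len(padded) - 1)]

def update_lattice(values, n_chunks):
    # Split the lattice across n_chunks "nodes" with periodic boundaries;
    # only the halo values cross chunk boundaries.
    size = len(values) // n_chunks
    out = []
    for c in range(n_chunks):
        lo, hi = c * size, (c + 1) * size
        halo_left = values[lo - 1] if lo > 0 else values[-1]
        halo_right = values[hi] if hi < len(values) else values[0]
        out.extend(update_chunk(values[lo:hi], halo_left, halo_right))
    return out

lattice = [float(i % 4) for i in range(16)]
```

Splitting into more chunks increases parallelism but also increases the fraction of values that must be communicated, which is exactly the tension the next phases of the project target.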
One of the challenges Edwards and his team are currently facing is how to take advantage of the increasing floating-point capabilities of GPUs without running into connectivity bottlenecks when scaling them.
“A lot of our efforts have tried to find algorithms that avoid communication and reduce the amount of communication that has to go out of nodes,” he said.
The team is also looking to apply machine learning principles to parameterize the probability distributions at the heart of these simulations. According to Edwards, this has the potential to significantly speed up simulation times and also helps eliminate many bottlenecks around node-to-node communication.
“If we could scale it, it’s like the holy grail for us,” he said.
In addition to using existing models, the third phase of the project will involve the development of new methods to model the interaction of quarks and gluons in a computer-generated universe. The final phase will take the information gathered from these efforts and use it to begin scaling systems for deployment on next-generation supercomputers.
According to Edwards, the results of this research also have practical applications for adjacent research, such as Jefferson Lab's Continuous Electron Beam Accelerator Facility or Brookhaven Lab's Relativistic Heavy Ion Collider – two of the instruments used to study quarks and gluons.
“Many of the issues we are trying to solve now, such as code frameworks and methodology, will impact the [electron-ion collider],” he explained.
The DoE’s interest in optimizing its models to take advantage of larger, more powerful supercomputers comes as the agency receives a $1.5 billion check from the Biden administration to improve its computing capabilities. ®