Categories: Research

Parallel GPGPU application takes aim at tumors

To reconstruct precise images of a tumor, a proton CT scan must pinpoint the exact locations where each individual proton enters and exits the body. A new generation of particle detectors uses tiny scintillating fibers (photo) in combination with ultrasensitive silicon photomultipliers to track a proton's path. Photo: Reidar Hahn

Protons, specifically proton beams, are increasingly being used to treat cancer more precisely. To plan for proton treatment, X-ray computed tomography (X-ray CT) is typically used to produce an image of the tumor site, a process that involves bombarding the target with photons, measuring how much of the beam is attenuated along many paths through the body, and then using projection methods to establish the 3D shape of the target.
A new imaging method, which employs protons instead of photons, promises to deliver more accurate images while subjecting the patient to a lower dose of radiation. Proton computed tomography (pCT) uses billions of protons and multiple computationally intensive processing steps to reconstruct a 3D image of the tumor site. Achieving the required accuracy on a single computer would take far too long, and it is not clinically feasible to ask a patient to remain still for an extended imaging session, or to wait a day for the images to be produced.
A group of computer scientists at Argonne and at Northern Illinois University has been working to accelerate pCT imaging using parallel and heterogeneous high performance computing techniques. The team so far has developed the code and tested it on GPU clusters at NIU and at Argonne with astounding results — producing highly accurate 3D reconstructions from proton CT data in less than ten minutes.
Image Credit: Fermilab
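
The reconstruction problem maps naturally onto GPUs: each measured proton, with its entry and exit positions and an energy-loss-derived path length, can be handled by its own thread. The sketch below is a minimal, illustrative CUDA example of that idea, not the NIU/Argonne code: it assumes straight-line proton paths through a 2D slice, a fixed-step ray march, and synthetic input data, whereas real pCT reconstruction relies on curved most-likely-path estimates and iterative solvers.

    // pct_backproject.cu -- illustrative sketch only; not the NIU/Argonne pCT code.
    // Assumes straight-line proton paths through a 2D slice. Real pCT reconstruction
    // uses most-likely-path estimates and iterative solvers, which are not shown here.
    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>

    struct ProtonEvent {
        float x_in, y_in;    // entry position (cm) from the front tracker
        float x_out, y_out;  // exit position (cm) from the rear tracker
        float wepl;          // water-equivalent path length derived from measured energy loss
    };

    // One thread per proton: march along the straight line between entry and exit,
    // depositing the proton's WEPL into every pixel it crosses (simple back-projection).
    __global__ void backproject(const ProtonEvent* events, int n_events,
                                float* image, int nx, int ny, float pixel_cm)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n_events) return;
        ProtonEvent e = events[i];

        const int steps = 256;  // fixed-step ray marching, roughly one step per pixel here
        float dx = (e.x_out - e.x_in) / steps;
        float dy = (e.y_out - e.y_in) / steps;
        float deposit = e.wepl / steps;

        for (int s = 0; s < steps; ++s) {
            float x = e.x_in + (s + 0.5f) * dx;
            float y = e.y_in + (s + 0.5f) * dy;
            int px = static_cast<int>(x / pixel_cm);
            int py = static_cast<int>(y / pixel_cm);
            if (px >= 0 && px < nx && py >= 0 && py < ny)
                atomicAdd(&image[py * nx + px], deposit);  // many protons may hit the same pixel
        }
    }

    int main()
    {
        const int nx = 256, ny = 256, n_events = 1 << 20;
        std::vector<ProtonEvent> h_events(n_events);
        for (int i = 0; i < n_events; ++i)  // synthetic horizontal tracks, for illustration only
            h_events[i] = { 0.0f, (i % ny) * 0.1f, nx * 0.1f, (i % ny) * 0.1f, 20.0f };

        ProtonEvent* d_events; float* d_image;
        cudaMalloc(&d_events, n_events * sizeof(ProtonEvent));
        cudaMalloc(&d_image, nx * ny * sizeof(float));
        cudaMemset(d_image, 0, nx * ny * sizeof(float));
        cudaMemcpy(d_events, h_events.data(), n_events * sizeof(ProtonEvent), cudaMemcpyHostToDevice);

        backproject<<<(n_events + 255) / 256, 256>>>(d_events, n_events, d_image, nx, ny, 0.1f);
        cudaDeviceSynchronize();

        std::vector<float> h_image(nx * ny);
        cudaMemcpy(h_image.data(), d_image, nx * ny * sizeof(float), cudaMemcpyDeviceToHost);
        printf("center pixel accumulated WEPL: %f\n", h_image[(ny / 2) * nx + nx / 2]);

        cudaFree(d_events); cudaFree(d_image);
        return 0;
    }

The one-thread-per-proton pattern with atomic accumulation is the kind of structure that lets billions of proton histories be spread across the GPUs of a cluster and processed in minutes rather than hours.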

Categories: Argonne Leadership Computing Facility, Research

Cracking the source of crackle in supersonic jet noise

ASCR Leadership Computing Challenge projects, which make up roughly 30% of the time awarded on ALCF supercomputers each year, support "high-risk, high-payoff" simulations of interest to the DOE. Stanford's Parviz Moin used his 60-million-hour award to make a new and potentially industry-changing discovery about the source of supersonic jet noise. No ear protection needed.
Moin and his team ran large eddy simulations on Intrepid to determine the source of crackle in hot supersonic jet engines. Crackle is a major source of engine noise that causes hearing damage and impacts fuel efficiency. This particularly irritating phenomenon is associated with shock-like "N-shaped" acoustic waveforms consisting of sudden strong compressions followed by more gradual expansions. Because crackle occurs in the direction of peak jet noise, eliminating it could help meet the U.S. Navy's near-term goal of reducing peak jet noise by 3 dB.
One way to make jets less noisy is to modify the shapes of the engine exhaust nozzles using pointed cutouts, called chevrons. Past ALCF allocations to Moin enabled a comprehensive study of the physical mechanisms by which chevrons affect the jet mixing and shock-associated noise. His current allocation was used to complete the simulations capturing crackle events and to develop new methods to identify and save such events for further study. Furthermore, with the source of the crackle noise now identified, new nozzle designs can be simulated.
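
The article does not describe how crackle events are identified in the simulation data, but the signature it mentions, a sudden strong compression followed by a gradual expansion, suggests a simple thresholding illustration. The hypothetical CUDA sketch below scans a far-field pressure trace in parallel and records samples whose forward-difference pressure derivative exceeds a chosen slope threshold; the threshold, sampling rate, and test signal are all invented for the demo and are not taken from Moin's work.

    // crackle_flag.cu -- hypothetical illustration; the team's actual detection criteria
    // are not described in the article. Flags samples of a far-field pressure trace where
    // a compression is steep enough to look like the front of an "N-shaped" crackle wave.
    #include <cstdio>
    #include <cmath>
    #include <vector>
    #include <cuda_runtime.h>

    __global__ void flag_steep_compressions(const float* p, int n, float dt,
                                            float slope_threshold,
                                            int* event_idx, int* n_found, int max_events)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n - 1) return;
        float dpdt = (p[i + 1] - p[i]) / dt;   // forward-difference pressure derivative
        if (dpdt > slope_threshold) {          // sudden strong compression
            int slot = atomicAdd(n_found, 1);  // reserve a slot in the output list
            if (slot < max_events) event_idx[slot] = i;  // save the event for later study
        }
    }

    int main()
    {
        const int n = 1 << 16;
        const float dt = 1e-5f;  // 100 kHz sampling; value chosen only for the demo
        std::vector<float> h_p(n);
        for (int i = 0; i < n; ++i)  // smooth background tone plus one sharp jump near i = 30000
            h_p[i] = 100.0f * sinf(2.0f * 3.14159265f * 50.0f * i * dt);
        for (int i = 30000; i < 30005; ++i) h_p[i] += 400.0f * (i - 29999);

        const int max_events = 1024;
        float* d_p; int* d_idx; int* d_count;
        cudaMalloc(&d_p, n * sizeof(float));
        cudaMalloc(&d_idx, max_events * sizeof(int));
        cudaMalloc(&d_count, sizeof(int));
        cudaMemcpy(d_p, h_p.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemset(d_count, 0, sizeof(int));

        flag_steep_compressions<<<(n + 255) / 256, 256>>>(d_p, n, dt, 1.0e6f, d_idx, d_count, max_events);
        cudaDeviceSynchronize();

        int h_count = 0;
        cudaMemcpy(&h_count, d_count, sizeof(int), cudaMemcpyDeviceToHost);
        printf("flagged %d candidate crackle samples\n", h_count);

        cudaFree(d_p); cudaFree(d_idx); cudaFree(d_count);
        return 0;
    }

In the crackle literature, detection more often relies on statistics such as the skewness of the pressure time derivative rather than a fixed threshold; the sketch only shows the parallel event-flagging pattern.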

Categories: Argonne Leadership Computing Facility, Technology

Renewed urgency in the race to exascale

At a special two-day symposium last month, Argonne invited back many of the visionaries who contributed to the lab's 30 years of advances in parallel computing and computational science. Most if not all of these individuals have made careers in high performance computing, directing programs and conducting research in government agencies, national laboratories, universities and industry. Strong DOE investment in research and development of HPC capabilities has kept the U.S. the leader in high-impact scientific research for decades, but the DOE's flagship scientific supercomputing centers are now oversubscribed by factors of three or more. Exascale is the next and necessary milestone in the race for computing power that will enable future breakthroughs in energy, medicine, and engineering, and the U.S. now faces fierce competition from around the world.
Argonne’s Rick Stevens has been a leader in DOE planning for exascale since 2007. He recently testified to the urgent need for sustained government investment in exascale at a Congressional Subcommittee on Energy hearing, “America’s Next Generation Supercomputer: The Exascale Challenge.” The May 22 hearing was related to a bill proposed by Illinois Rep. Randy Hultgren to improve the HPC research program of the DOE and make a renewed push for exascale research in the U.S. A full transcript of his testimony can be found here.