HEP-CCE: Promoting Computational Excellence
HEP-CCE Coordinators: Salman Habib (Argonne), Kerstin Kleese van Dam (Brookhaven), Liz Sexton-Kennedy (Fermilab), Peter Nugent (Lawrence Berkeley), Richard Dubois (SLAC)
The HEP-CCE is a cross-cutting initiative to promote excellence in high-performance computing (HPC), including data-intensive applications, scientific simulations, and data movement and storage. Enhancing connections with DOE’s Advanced Scientific Computing Research (ASCR) program is an important part of the Center’s activities. This includes promoting future-looking R&D initiatives in exascale architectures and systems, intelligent networking, and new data management and data analysis tools. Although the HEP-CCE is not a service-oriented entity, limited resources are available to support collaborative computing efforts for the HEP community, including a common GitHub repository for open source codes, a website for aggregating useful information, and expertise from within and outside HEP for solving computational problems via the Expert Forum. The HEP-CCE also sponsors topical workshops and student training programs.
Find out about our organization, projects, opportunities, and how you can become an HEP-CCE partner.
The OLCF will present an Introduction to Summit webinar from 1:00 PM until 4:30 PM (Eastern Time) on Friday, June 1. In this webinar, we will cover the basic topics new users will need to get up and running on Summit. We will give a broad overview of available features and the details necessary to submit and run jobs. For more information, please see the event page.
One challenge of using leadership computing resources is reading and writing data in scalable ways that do not lead to bottlenecks or performance penalties on the shared filesystems. This workshop aims to bring together leading I/O experts in the HEP field with experts from DOE ASCR facilities to discuss how to move forward in the coming years to make HEP software more friendly to millions of parallel threads accessing files on shared disks.
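The contention problem the workshop targets can be illustrated with a minimal sketch (hypothetical code, not from any HEP experiment): rather than having many writers compete for a single shared file offset or per-file locks, each writer computes a disjoint byte range and uses positional I/O, which is the same decomposition idea that MPI-IO collective writes and parallel HDF5 build on.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

RECORD_SIZE = 64   # fixed-size record per writer, in bytes (illustrative)
NUM_WRITERS = 8    # stand-in for many parallel ranks or threads

def write_record(fd, rank):
    # Each writer owns a disjoint byte range, so no file locking is needed;
    # os.pwrite is positional and never moves a shared file offset.
    payload = f"rank {rank:03d}".encode().ljust(RECORD_SIZE, b".")
    os.pwrite(fd, payload, rank * RECORD_SIZE)

with tempfile.TemporaryFile() as f:
    fd = f.fileno()
    with ThreadPoolExecutor(max_workers=NUM_WRITERS) as pool:
        futures = [pool.submit(write_record, fd, r) for r in range(NUM_WRITERS)]
        for fut in futures:
            fut.result()  # surface any I/O errors from the workers
    # Read the whole file back to check that every record landed intact.
    data = os.pread(fd, NUM_WRITERS * RECORD_SIZE, 0)
```

A production HEP workflow on a parallel filesystem would express this through MPI-IO or parallel HDF5 rather than raw `pwrite`; the sketch only demonstrates the non-contending access pattern those libraries aggregate behind the scenes.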
From February 26 to March 2, 2018, Brookhaven National Laboratory (BNL) hosted a hackathon targeting the Intel Xeon Phi (formerly code-named Knights Landing, or KNL) processors, bringing scientists together to optimize their application code performance on KNL-based supercomputers. Of the five teams who participated, two came from the high energy physics community:
- APES (Accelerator Particle Energy Simulator): A code for tracking particle-device and particle-particle interactions that has the potential to be used as the design platform for future particle accelerators. Team members all came from BNL.
- ART: A code for simulating the formation of structures in the universe, particularly galaxy clusters. Team members came from Yale University and the University of Miami.
Each team was paired with a mentor with a similar scientific background. They also had access to four floating mentors from Intel who brought expertise in OpenMP, Intel hardware architectures, compilers, and performance profiling tools. The teams worked with their mentors for five days in a hands-on setting. By the end of the week, all teams had achieved significant performance improvements for their codes, with ART and APES achieving >2X and >5X speedups, respectively.
Held at the ALCF from May 15–17, this intensive three-day workshop is aimed at experienced HPC users who intend to apply for a major allocation award.
Carnegie Mellon University (CMU) and Georgia Tech (GT) will host a three-day conference on “Machine Learning in Science and Engineering” on CMU’s campus in Pittsburgh on June 6-8, 2018. The purpose of the conference is to bring together researchers across disciplines to present the latest ideas on applications of ML methods in their fields, as well as to provide a forum for work on the development of new algorithms designed for challenges in science and engineering. More information can be found on the conference website.
Together with Deirdre Shoemaker from Georgia Tech, I am co-chairing the Physics Track of the conference. We invite you all to submit abstracts for 15+5 minute contributed talks through your respective collaborations, which have received information about abstract contributions to our ML track.
If you have any questions, please contact firstname.lastname@example.org