Machine learning, and particularly “deep learning”, is an area of rapid development in statistics and computer science. HEP has used machine learning techniques for some time but is only now beginning to embrace these newer methods. The HEP-CCE is helping the HEP community exploit them at the most extreme scales by
- Developing tools and code for prototype use cases focussing on large-scale data, such as large images from cosmology simulations and raw detector data (calorimeter energy deposits, tracking-detector hits, etc.). We also focus on cutting-edge techniques such as generative adversarial networks, unsupervised approaches, and graph neural networks (GraphNNs) to bring the machine learning and HEP communities closer together.
- Scaling popular machine learning frameworks across multiple nodes on HPC machines. Current activities include studying the scaling characteristics of TensorFlow and Caffe at ANL, and working with Intel to deploy the latest versions of Caffe, TensorFlow, and Theano optimised for KNL (Knights Landing) at NERSC.
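The core idea behind scaling a framework like TensorFlow across HPC nodes is synchronous data-parallel training: each worker computes gradients on its own shard of the data, the gradients are averaged (an allreduce on a real machine), and every worker applies the identical update. A minimal sketch of that pattern, with the workers simulated in a single process on a toy linear-regression problem (all names and numbers here are illustrative, not from any HEP-CCE code):

```python
import numpy as np

# Sketch of synchronous data-parallel SGD. On a real HPC machine each
# "worker" would be an MPI rank and the averaging step an allreduce.

rng = np.random.default_rng(0)

# Toy problem: recover w_true from y = X @ w_true + noise.
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(1024, 2))
y = X @ w_true + 0.01 * rng.normal(size=1024)

n_workers = 4
shards = np.array_split(np.arange(len(X)), n_workers)  # data parallelism

w = np.zeros(2)
lr = 0.1
for step in range(200):
    # Each worker computes the gradient on its own shard only.
    grads = []
    for idx in shards:
        Xi, yi = X[idx], y[idx]
        grads.append(2.0 * Xi.T @ (Xi @ w - yi) / len(idx))
    # "Allreduce": average the per-worker gradients so every worker
    # applies exactly the same update and the models stay in sync.
    g = np.mean(grads, axis=0)
    w -= lr * g

print(w)  # converges close to w_true
```

Because the averaged gradient equals the full-batch gradient, the synchronous scheme reproduces single-node training exactly while splitting the per-step compute across nodes; the scaling studies mentioned above measure how well that holds up as the allreduce cost grows with node count.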
Figure: t-SNE representation of an unsupervised convolutional autoencoder trained on Daya Bay experiment data at NERSC (http://arxiv.org/abs/1601.07621).
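An autoencoder learns, without labels, an encoder and decoder that minimise the reconstruction error, so that the low-dimensional codes capture the structure of the data; those codes are what a t-SNE visualisation then embeds. The following is an illustrative sketch only, using a tiny linear autoencoder trained by gradient descent on synthetic data, standing in for the much larger convolutional model used on Daya Bay data:

```python
import numpy as np

# Illustrative linear autoencoder: learn weights We (encoder) and
# Wd (decoder) minimising ||X @ We @ Wd - X||^2, with no labels.

rng = np.random.default_rng(1)

# Synthetic data lying near a 2-D subspace of a 10-D space.
basis = rng.normal(size=(2, 10))
codes_true = rng.normal(size=(500, 2))
X = codes_true @ basis + 0.05 * rng.normal(size=(500, 10))

d, k = 10, 2
We = 0.1 * rng.normal(size=(d, k))  # encoder: 10-D -> 2-D
Wd = 0.1 * rng.normal(size=(k, d))  # decoder: 2-D -> 10-D
lr = 0.01
for step in range(2000):
    Z = X @ We           # encode to 2-D codes
    R = Z @ Wd - X       # reconstruction residual
    # Gradient descent on the reconstruction error.
    gWd = Z.T @ R / len(X)
    gWe = X.T @ (R @ Wd.T) / len(X)
    Wd -= lr * gWd
    We -= lr * gWe

mse = np.mean((X @ We @ Wd - X) ** 2)
print(mse)  # small: the 2-D codes capture most of the variance
```

In the real workflow the learned codes `Z` would be passed to t-SNE to produce the 2-D map shown in the figure; here the point is only that useful compressed representations can be learned with no labels at all.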