Automating Active Learning Workflows for Computational Design of Li-ion Cathodes

Hi! I’m Alex Tai, and I’m a rising junior at Northwestern studying Materials Science and Engineering with a minor in Computer Science. This summer at Argonne, I’ve been working with Dr. Noah Paulson and Dr. Joshua Gabriel in the Applied Materials Division on the simulation of solid-state cathode materials using a range of computational techniques.

Background 

Koerver, R. et al. Energy Environ. Sci. 2018, 11 (8), 2142–2158

The charge/discharge cycle of a solid-state battery involves lithium entering and leaving the crystal structure of the cathode material. The removal of lithium from the structure is called delithiation. As cathode materials are delithiated, they undergo a volume change and concomitant structural collapse. This destabilizes the interface between the cathode and the electrolyte, compromising the performance of the battery. The search for a cathode material that maintains stability under delithiation spans a vast composition and configuration space. The problem is intractable with a purely experimental approach, and even computational simulations can become impractically time-intensive. Thus, we employ a machine learning workflow that attempts to minimize computational cost while still accurately calculating materials properties.

The Workflow 

There are two main materials simulation methods involved: one is more accurate but more expensive, while the other is cheaper but less accurate. The specifics can vary, but in our case the more expensive method is density functional theory (DFT) and the less expensive method is machine learning force fields (MLFFs).

DFT uses functionals of the electron density to find an approximate solution of Schrödinger’s equation. Depending on the level of theory and the complexity of the system, calculations can sometimes take multiple weeks to converge. MLFFs are trained on the DFT data and use neural networks to represent a potential energy surface (PES). These neural network potentials (NNPs) are orders of magnitude faster than DFT and excel at interpolating between the points of the PES they were trained on. The goal is to use MLFFs to explore more configurations and compositions, with quantified uncertainties on their predictions, in far less time than DFT would take. The active learning workflow incrementally improves the accuracy of the MLFF whenever the uncertainty of its predictions is too high.

The determination of whether DFT calculations are necessary is informed by uncertainty quantification (UQ). When the MLFF predictions have high uncertainty for a structure, that structure needs to be evaluated with the more physically rigorous, first-principles DFT approach. The workflow is shown in the figure below.
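In code, the loop in the figure boils down to something like the sketch below. Every function here is a stand-in stub for a real workflow step, and the threshold value is illustrative:

```python
# Sketch of the active learning loop from the figure. All functions are
# placeholder stubs for the real workflow steps; values are illustrative.
THRESHOLD = 0.05  # eV/atom of ensemble disagreement (real cutoff still TBD)

def run_md(potentials):
    """Stub: MLFF-driven molecular dynamics that proposes new structures."""
    return ["structure-A", "structure-B"]

def uncertainty(potentials, structure):
    """Stub: width of the ensemble's confidence interval for this structure."""
    return 0.10

def run_dft(structure):
    """Stub: expensive first-principles evaluation (the ground truth)."""
    return {"structure": structure, "energy": -3.42}

def retrain(training_set):
    """Stub: refit the NNP ensemble on the enlarged training set."""
    return ["nnp-1", "nnp-2", "nnp-3"]

potentials, training_set = retrain([]), []
for generation in range(3):  # a few active learning generations
    for s in run_md(potentials):
        if uncertainty(potentials, s) > THRESHOLD:
            training_set.append(run_dft(s))  # only pay for DFT when needed
    potentials = retrain(training_set)
```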

Every place you see an arrow in the diagram, a human researcher has to log onto the supercomputer (these computations are too intensive for personal devices), perhaps look at the output, move some files around, and submit the next calculation in the workflow. That might not sound particularly hard, but it becomes time-consuming (not to mention very annoying) if you have several of these workflows running at once. Another consideration is that a human can’t always move the calculation onto the next step as soon as it finishes; we might be off work, or we might be busy with something else and simply forget for a few days. Implementing a program to automate this “babysitting” of the workflow saves researchers time and effort and makes the exploration of the space more efficient.

Tools for automation 

Colmena is an open-source Python library built for automating simulation workflows on high-performance computers. The main components of a Colmena application are the Thinker and the Doer. The Thinker does what the researcher would typically do: submit calculations, read results, and even decide what calculations to perform next based on those results. The Doer is made up of the functions that actually run the calculations. Colmena also lets us manage the computational resources used by different functions. The challenge is to structure the Colmena app so that all of the parts of the workflow communicate properly, which requires adapting the existing pieces for compatibility.

https://github.com/exalearn/colmena

The Doers in our application include a molecular dynamics simulation to generate structures, a function to evaluate the uncertainty of a structure, a function to evaluate a structure with DFT, and a function to retrain NNPs. The Thinker should read the result of the uncertainty evaluation to decide whether a structure needs to be evaluated with DFT and added to the NN training set. It automatically creates files and submits calculations accordingly.  
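To give a flavor of how the pieces fit together, here is a minimal sketch of a Thinker/Doer pair following the pattern in Colmena’s documentation. The run_md task is a trivial stand-in, and our actual application has more tasks and more decision logic:

```python
# Minimal Colmena sketch: one Doer task and a Thinker that submits it once.
from parsl.config import Config
from parsl.executors import HighThroughputExecutor
from colmena.queue.python import PipeQueues
from colmena.task_server.parsl import ParslTaskServer
from colmena.thinker import BaseThinker, agent

def run_md(structure: str) -> str:
    """Doer: stand-in for the MD simulation that generates structures."""
    return structure + " (relaxed)"

class Steering(BaseThinker):
    """Thinker: submits calculations and reacts to their results."""

    @agent
    def steer(self):
        self.queues.send_inputs("LiNiO2", method="run_md")  # submit a task
        result = self.queues.get_result()                   # block until done
        self.logger.info(f"Received: {result.value}")
        self.done.set()                                     # stop all agents

if __name__ == "__main__":
    queues = PipeQueues()  # carries tasks and results between Thinker and Doer
    doer = ParslTaskServer([run_md], queues,
                           Config(executors=[HighThroughputExecutor()]))
    thinker = Steering(queues)
    try:
        doer.start()
        thinker.start()
        thinker.join()
    finally:
        queues.send_kill_signal()  # tell the task server to exit
    doer.join()
```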

How is uncertainty quantified? 

The key to the efficiency of the workflow is uncertainty quantification. In our model, several NNPs make up an ensemble of predictors. These potentials differ from each other because they are trained on different subsets of the data. A structure is fed into all of the potentials, and each potential returns a prediction of the energy and forces. From these predictions we construct a 95% confidence interval using Student’s t-distribution, and the width of that confidence interval is taken as the uncertainty. If the uncertainty is high, the models disagree, so at least one of them must be wrong, and we should find the ground truth with DFT. The threshold for what counts as “high uncertainty” is something we are still considering, but it should become clearer once more structures are evaluated.
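Concretely, the uncertainty calculation looks something like the sketch below (here the half-width of the interval is used; the energies and the threshold are made-up numbers for illustration):

```python
import numpy as np
from scipy import stats

def ensemble_uncertainty(predictions: np.ndarray, confidence: float = 0.95) -> float:
    """Half-width of the Student's t confidence interval for the ensemble mean."""
    n = len(predictions)
    sem = predictions.std(ddof=1) / np.sqrt(n)  # standard error of the mean
    t_crit = stats.t.ppf(0.5 + confidence / 2, df=n - 1)
    return t_crit * sem

# Energies (eV/atom) predicted by a hypothetical 4-member NNP ensemble
energies = np.array([-3.421, -3.418, -3.430, -3.395])
if ensemble_uncertainty(energies) > 0.01:  # threshold still to be determined
    print("Models disagree: evaluate this structure with DFT")
```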

Training neural networks 

I am also exploring the training step of the neural networks. NNs are trained by defining a loss function, which serves as a metric for how well the model predicts a dataset. The loss function defined in the DeePMD-kit software package that we use to train the network has the form: 
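In simplified form (omitting the virial term that DeePMD-kit also supports; this is my paraphrase, not a verbatim copy of the published loss):

$$ L = p_E\,|\Delta E|^2 \;+\; \frac{p_F}{3N}\sum_{i=1}^{N}\bigl|\Delta \mathbf{F}_i\bigr|^2 $$

where $\Delta E$ and $\Delta \mathbf{F}_i$ are the deviations of the predicted energy and the force on atom $i$ from the DFT ground truth, and $N$ is the number of atoms.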

Han Wang, Linfeng Zhang, Jiequn Han, and Weinan E. “DeePMD-kit: A deep learning package for many-body potential energy representation and molecular dynamics.” Computer Physics Communications 228 (2018): 178–184.

E denotes energy and F forces, and the p values are tunable parameters, also called prefactors. The total loss is essentially a weighted sum of the squared errors in the model’s energy and force predictions relative to the ground truth. Depending on how the prefactors are set, the model will prioritize energies or forces differently, and we are exploring the best combination of energy and force prefactors.
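In DeePMD-kit, these prefactors are set in the loss section of the training input file and are ramped from a “start” to a “limit” value as the learning rate decays. The values below are illustrative, not our tuned settings:

```python
# "loss" section of a DeePMD-kit input.json (illustrative values only).
# Each prefactor ramps from its "start" to its "limit" value as the
# learning rate decays over the course of training.
loss_section = {
    "start_pref_e": 0.02,    # small energy weight early in training...
    "limit_pref_e": 1.0,     # ...growing by the end of training
    "start_pref_f": 1000.0,  # large force weight early in training...
    "limit_pref_f": 1.0,     # ...shrinking by the end of training
}
```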

It is common in machine learning to plot the loss and error over the course of training. While DeePMD-kit produces a file with evaluations of the loss and error on the training and testing datasets at regular intervals during training, it does not generate the actual plots. I wrote a Python script to parse that output file and generate plots from it (a sketch is below). An example plot is shown on the left. On the right is the correlation between the model predictions and the ground truth for forces on some testing configurations. The predictions are accurate despite the model having never “seen” the testing data. Thus, it can generalize to predict the properties of configurations within the space spanned by the dataset.
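The script boils down to something like this. The file DeePMD-kit writes is lcurve.out; the exact column names vary between versions, so they are read from the header rather than hard-coded:

```python
import matplotlib.pyplot as plt
import numpy as np

# Read the column names from the commented header line of lcurve.out,
# e.g. "# step rmse_val rmse_trn rmse_e_val rmse_e_trn rmse_f_val rmse_f_trn lr"
with open("lcurve.out") as f:
    header = f.readline().lstrip("#").split()
data = np.loadtxt("lcurve.out")  # skips the "#" header line automatically
columns = dict(zip(header, data.T))

# Plot every RMSE column against the training step on log-log axes
for name in header[1:]:
    if name.startswith("rmse"):
        plt.loglog(columns["step"], columns[name], label=name)
plt.xlabel("Training step")
plt.ylabel("RMSE")
plt.legend()
plt.savefig("lcurve.png", dpi=300)
```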

Next Steps 

We will continue to work on writing and testing the Colmena application. I am also now working on exploring the composition space (the automated workflow, as it is now, only explores configuration space) by substituting Co, Mn, and other dopants for Ni in the structure using the pymatgen Python library.
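As a first pass, the substitution itself is only a few lines of pymatgen. The input file name and dopant choices below are illustrative:

```python
from pymatgen.core import Structure

base = Structure.from_file("LiNiO2.cif")  # hypothetical supercell input

# Full substitution: map every Ni site to Co (LiNiO2 -> LiCoO2)
licoo2 = base.copy()
licoo2.replace_species({"Ni": "Co"})

# Dilute doping: swap one Ni site at a time for Mn to enumerate configurations
doped = []
for i in base.indices_from_symbol("Ni"):
    s = base.copy()
    s[i] = "Mn"
    doped.append(s)

print(f"Generated {1 + len(doped)} substituted structures")
```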