Optimizing Advanced Photon Source Electrospinning Experimentation

Hi, my name is Jacob Wat and I am a rising junior at Northwestern University. I’m majoring in mechanical engineering with a minor in computer science. This summer, I’ve been working with Erik Dahl, a chemical engineer who is conducting experiments on electrospinning and roll-to-roll manufacturing. As a mechanical engineer, my project deals with the design of a new testing apparatus to increase efficiency and usability.

Electrospinning is a method for creating nanofibers from a polymer solution loaded into a syringe, as seen in Figure 1. As the syringe slowly pumps out the solution, a high voltage draws out nanofibers, which deposit on a grounded collector. The collector can vary with the application: it can be circular or rectangular, a flat surface or wires, etc. At Argonne, we use a rectangular wire collector, which is useful for characterizing nanofibers; one way we do this is by taking X-rays of the material at the Advanced Photon Source (APS) to determine different attributes of the material.

Figure 1. Basic electrospinning setup.

The setup I’m designing for is used in the APS electrospinning experiments. In Figure 2, you can see that the current implementation has a removable wire collector; it must be replaced, and the nanomaterial (circled) cleaned off, after every iteration. All in all, a tedious and inefficient process. To speed up experimentation, we are designing an apparatus that can run continuously after being initialized in tension.

Figure 2. Picture of wire collector with spun nanofibers present.

The most important feature of the new electrospinning setup is continuous lateral movement of the wires, which lets users rotate the spools to a fresh section of wire remotely. Accordingly, no user oversight is needed: you can start the experiment, set the motor to slowly advance the wires so used sections are replaced, and leave it to run on its own while it continuously collects data. Many other features had to be redesigned to accommodate this setup. For example, one key requirement of electrospinning is grounding the collectors (in this case, the wires). To accomplish this, we used grounding bars with notches cut into them to control the spacing between the wires, solving grounding and wire spacing with a single design element. Another important consideration is maintaining tension in the wires. This is achieved with a spring/shock-absorber-style part that pushes against one of the grounding bars so that each wire always remains in tension. However, each wire has to be tensioned individually before the experiment can begin.

For future development, a prototype should be built and tested at the APS, because there are very specific space requirements that have not yet been addressed. Once it can replace the current implementation, the next step will be to improve the user experience. This could include a method for tensioning all of the wires at once instead of individually, or even cleaning sections of wire so they can be reused.

 

References:

Urbanek, Olga. “Electrospinning.” Laboratory of Polymers & Biomaterials, http://polybiolab.ippt.pan.pl/18-few-words-about/17-electrospinning.

Automatic Synthesis Parameter Extraction with Natural Language Processing

Hi there! My name is Peiwen Ren and I am a rising junior studying materials science and the Integrated Science Program at Northwestern. This summer, I am working with Jakob Elias and Thien Duong at the Energy and Global Security Institute. My project focuses on using natural language processing (NLP) to automatically extract material synthesis parameters from the scientific literature.

In recent years, many of the breakthroughs in deep learning and artificial intelligence have been in natural language processing. From speech-recognition apps like Siri and Google Assistant to question-answering search engines, NLP has seen unprecedented growth in popularity in the AI community, and it is finding more and more applications in fields like materials science, chemistry, and biology. A big challenge in materials science, as in chemistry, is that a huge amount of synthesis knowledge is locked away in the literature. In many cases, after a material is made and its synthesis method is recorded in a paper, the paper simply goes unnoticed. As a result, potential breakthroughs in materials discovery may be delayed by a lack of knowledge of prior work, or duplicate studies are conducted only for the researchers to realize the work has been done before. The advantage of NLP, if properly developed, is that it can take in a huge amount of text and output summary information without researchers having to comb through the literature themselves.

The end goal of this project is to pick out the relevant synthesis parameters of a material mentioned in a given abstract or full document text. The advantage of this approach is that a researcher who wants to study a known compound or synthesize a new material can simply give the name of the compound of interest to the program, which will automatically query the existing scientific literature corpus and download all the papers mentioning the target compound. The extracted texts are then fed into an NLP model to pick out the relevant synthesis parameters. For example, given the input sentence “Compound A was made with compound B by heating in the furnace at 300 degrees Celsius”, the NLP model would output a tagged sequence of the same length as the input sentence, where each entry corresponds to a pre-defined tag. The tagged output in this case would be “Compound A <TARGET>, was <O>, made <O>, with <O>, compound B <MATERIAL>, by <O>, heating <OPERATION>, in <O>, the furnace <SYNTHESIS-APPARATUS>, at <O>, 300 <NUMBER>, degree <O>, celsius <CONDITION-UNIT>, . <O>”. This process of tagging a sequence of words is called named entity recognition (NER).

Here is a list explaining the tags mentioned above:

  • <TARGET>: the target material(s) being synthesized
  • <O>: null tag or omit tag
  • <MATERIAL>: the material(s) used to make the target material(s)
  • <OPERATION>: the action performed on material(s)
  • <SYNTHESIS-APPARATUS>: equipment used for synthesis
  • <NUMBER>: numerical values
  • <CONDITION-UNIT>: the unit corresponding to the preceding numerical value

The following figure is a visualization of the tagging.

Figure 1: Visualization of named entity recognition given an input sentence.
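The tagging shown in Figure 1 can be represented programmatically as token–tag pairs of equal length. A minimal sketch (the tokenization and pairing below are illustrative, not the project's actual preprocessing code):

```python
# Sketch: an NER-tagged sentence as parallel token and tag sequences.
# The tag set follows the post; the word-level tokenization is illustrative.
tokens = ["Compound", "A", "was", "made", "with", "compound", "B",
          "by", "heating", "in", "the", "furnace", "at",
          "300", "degree", "celsius", "."]
tags = ["TARGET", "TARGET", "O", "O", "O", "MATERIAL", "MATERIAL",
        "O", "OPERATION", "O", "O", "SYNTHESIS-APPARATUS", "O",
        "NUMBER", "O", "CONDITION-UNIT", "O"]

def tag_sentence(tokens, tags):
    """Pair each token with its tag; equal lengths are the NER property."""
    assert len(tokens) == len(tags)
    return list(zip(tokens, tags))

tagged = tag_sentence(tokens, tags)
```

Note that multi-word entities ("Compound A") simply receive the same tag on each of their tokens.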

To realize this NER task, I trained a sequence-to-sequence (seq2seq) neural network using the pytorch-transformers package from HuggingFace. A seq2seq model takes in a sequence and outputs another sequence; the input and output may differ in length, although in our tagging task they do not. A common use of seq2seq models is language translation, where, for example, a sequence of English words is put through a model and a sequence of French words is produced, or vice versa.

Figure 2: A seq2seq model used for language translation. (Pic. taken from Jay Alammar‘s blog post)

Since training a seq2seq model is a supervised learning problem (the model needs a labeled training set), a hand-annotated dataset containing 235 synthesis recipes was taken from the GitHub page of Professor Olivetti’s group at MIT. The dataset comes in a nested JSON format, which was flattened into a CSV file using Python’s pandas package. Here is a preview of the training set.

Figure 3: The training set for the NER task

As seen in the figure above, each row represents a word, with “sentence #” giving the index of the sentence it belongs to and “Tag” giving the pre-defined tag mentioned before. The data was split into training and validation/test sets in a 90% : 10% ratio. The model was built on a pre-trained BERT model from Google Research and trained on a single Nvidia Tesla T4 GPU. After 5 epochs (1 epoch being one pass through the entire training set), the model’s prediction accuracy on the validation/test set was 73.8%, and its F1 score was 19.8%. A similar model developed by the Olivetti group achieved 86% accuracy and an 81% F1 score. [1]
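The flattening from nested JSON to the per-word CSV described above can be sketched with pandas; the nested schema here is a hypothetical stand-in, since the actual dataset's structure differs:

```python
import pandas as pd

# Hypothetical nested recipes, standing in for the real dataset's schema.
recipes = [
    {"sentence": 1, "words": [
        {"word": "Compound", "tag": "TARGET"},
        {"word": "was", "tag": "O"},
    ]},
    {"sentence": 2, "words": [
        {"word": "heating", "tag": "OPERATION"},
    ]},
]

# json_normalize expands each nested "words" list into one row per word,
# carrying the sentence index alongside every row.
df = pd.json_normalize(recipes, record_path="words", meta="sentence")
df = df.rename(columns={"word": "Word", "tag": "Tag", "sentence": "sentence #"})
df.to_csv("training_set.csv", index=False)
```

The resulting frame has one row per word with its tag and sentence index, matching the layout in Figure 3.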

There are two possible reasons for the low accuracy and F1 score:

  1. The Olivetti group’s model was specifically pre-trained on materials science literature, whereas BERT was pre-trained on Wikipedia entries and a book corpus with little focus on materials science topics.
  2. The BERT NLP model predicts many null tags (“O”) as meaningful named-entity tags. For example, “is” may be predicted as “TARGET” instead of “O”, leading to a large number of false positives in the predicted labels.
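The gap between accuracy and F1 follows from how each is computed: accuracy counts the dominant null tag, while F1 is usually computed only over the entity tags, so false positives hurt it disproportionately. A toy sketch of that evaluation (the tag sequences and label subset here are illustrative, not the real validation data):

```python
from sklearn.metrics import accuracy_score, f1_score

# Toy true/predicted tag sequences, flattened across sentences.
true_tags = ["O", "O", "TARGET", "O", "OPERATION", "O", "NUMBER"]
pred_tags = ["O", "TARGET", "TARGET", "O", "O", "O", "NUMBER"]

# Token-level accuracy counts every position, including the null "O" tag.
accuracy = accuracy_score(true_tags, pred_tags)

# Micro-averaged F1 restricted to the entity labels, ignoring "O";
# a false-positive "TARGET" on "was"-like words drags this down sharply.
entity_labels = ["TARGET", "OPERATION", "NUMBER"]  # hypothetical subset
f1 = f1_score(true_tags, pred_tags, labels=entity_labels, average="micro")
```

Here accuracy is 5/7 while the entity-only F1 is 2/3, illustrating how the two metrics can diverge.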

The following two figures show a visualization of a sentence prediction from the BERT NLP model.

Figure 4: True labels for a sentence in the validation set.
Figure 5: Predicted labels for a sentence in the validation set.

In the figures above, we can see that the NLP model successfully assigns correct tags to most words, except that it misclassifies “magnetic” as an APPARATUS-DESCRIPTOR rather than a SYNTHESIS-APPARATUS, and it fails to assign tags to the last three words, “tumbling (60 rpm)”.

The next steps for this project are to pre-train a model focused specifically on materials science literature and to reduce the false-positive rate of the current NLP model, in order to raise the test accuracy and F1 score, respectively.

The Jupyter Notebook and training data used for this project will be updated at this GitHub repo. The code used for training the BERT model is modified from Tobias Sterbak’s Named Entity Recognition with BERT post. This project is inspired by work from Professor Olivetti’s group at MIT and from Professor Ceder’s and Dr. Jain’s groups at UC Berkeley. [2]

References:

  1. Kim, E., et al. (2017). “Machine-learned and codified synthesis parameters of oxide materials.” Sci Data 4: 170127.
  2. L. Weston, V. Tshitoyan, J. Dagdelen, O. Kononova, K. A. Persson, G. Ceder and A. Jain (2019). “Named Entity Recognition and Normalization Applied to Large-Scale Information Extraction from the Materials Science Literature.” Preprint.

3D Printing of Food Waste

Hello, my name is Johnathan Frank, and I am a rising junior studying chemical engineering at Northwestern. This summer, I am researching under Meltem Urgun-Demirtas and Patty Campbell in the Applied Materials Division at Argonne. My work focuses on exploring food waste materials for 3D printing purposes.

The importance of this research is multi-faceted. Plastic usage and waste constitute a growing problem for the world: most of the 300 million tons of plastic produced annually come from unsustainable petrochemical resources, and many plastics pose environmental concerns after use because they cannot biodegrade in receiving environments (natural or engineered). Plastics thus demand attention to remedy the sustainability issues they create. Food waste disposal is another area that could benefit from new ideas; current disposal methods are not very economical, and upcycling the 1.3 billion tonnes of food wasted globally each year would provide a more profitable alternative and help promote a circular economy. Exploring food waste materials as plastic alternatives is a step toward a potential solution to both of these issues.

Previous research in this project focused primarily on investigating alternative food waste options and the processing needed to form biofilms; I started this summer as the focus shifted more strongly toward 3D printing. Additive manufacturing represents an opportunity for more efficient, sustainable production, so it is an important aspect of the research. Furthermore, the printing setup I use is a home 3D printer modified to allow the cold extrusion of pastes, and the chemicals and raw materials I use are safe and easy to obtain, so there is potential for home printing of similar biocomposites.

This summer, I have largely focused on the printing qualities (such as evenness of print color and texture, strength of the dried prints, and print shrinkage) of chitosan-based biocomposites. Chitosan, derived from chitin from sources such as crustacean shells, fungi, and insect exoskeletons, functions as the binder for the composites, with various powders as fillers. The fillers I have primarily used are microcrystalline cellulose, carrot, and eggshell, as representative food wastes that could be used for 3D-printed biocomposites. Printing the pastes for these composites presents a couple of challenges that I have worked to address.

Printer and paste extruder

First, an inherent challenge in using chitosan as the binder is the need to dissolve it in acid. As the part sets, water evaporates and the print shrinks, so features lose their integrity, which poses a particular risk for complex prints. In tandem with that concern, parts need to dry evenly and quickly enough during printing that successive layers are extruded onto a stable base. Pastes with high water content shrink a lot, but they are easily printed because they are thinner; with thicker pastes, the extruder has more difficulty pumping the paste through the printer's tubing and can jam. I worked with a number of ratios of chitosan, filler powder, and acid to find an optimal paste composition.
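To track this effect quantitatively, shrinkage can be expressed as the percent change in a feature's dimension between the fresh and dried print. A small sketch (the measurements below are hypothetical, not data from these prints):

```python
def linear_shrinkage(printed_mm, dried_mm):
    """Percent linear shrinkage of a print feature after drying."""
    return 100.0 * (printed_mm - dried_mm) / printed_mm

# Hypothetical before/after measurements of one print dimension (mm):
s = linear_shrinkage(40.0, 33.0)  # 17.5% linear shrinkage
```

Comparing this figure across paste compositions gives a simple way to rank water content against printability.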

Freshly extruded print
Dried print (notice shrinkage)

Second, the printer and its default settings are not optimized for printing pastes, and even the extruder that gives the printer the ability to print paste materials is designed for silicone, not the biocomposite pastes I am using. Pastes with an uneven consistency or sticky texture plug the extruder or clump on the printer nozzle (unlike silicone, which is homogeneous and slippery), so print speed, infill density, and material flow all have to be tuned by trial and error to produce good print quality. Airflow in the printer is another parameter that has proven very important: it partially dries the part while it is printing, ensuring that each successive layer is deposited onto a solid base. Bubbles introduced into the paste while filling the extruder syringe pose two difficulties: they create gaps in the print surface when they are ‘extruded’ instead of paste, and as they leave the tubing they decrease the pressure in the syringe, necessitating a gradual increase in applied extruder pressure over time.

Carrot paste printing
Completed carrot print

As this project develops, it will be necessary to address more aspects of 3D printing these biocomposites, and further fine-tune compositions and parameters to produce consistently successful prints. Also, the mechanical properties and biodegradability of the dried prints will need to be tested. Eventually, this research could open possibilities for more environmentally friendly plastic alternatives and profitable waste disposal, as well as bring us toward a more sustainable circular economy. I would like to thank Meltem Urgun-Demirtas for mentoring me this summer, as well as Dr. Jennifer Dunn and those involved with NAISE for giving me the opportunity to research at Argonne over the summer.

Electrospinning of PVDF/HFP for Paper Conservation Applications

Hello! My name is Kathleen Dewan and I am a rising junior at Northwestern studying materials science and engineering. This summer, I am working with Yuepeng Zhang, whose group specializes in the synthesis of nanofibers using electrospinning. In my project, I am aiming to use electrospinning for the conservation of paper artifacts.

The preservation of cultural heritage is vital to the perpetuation of unique identities and communities within society, as it serves to maintain the histories and records of these cultures for many generations into the future. Paper is one of the oldest substrates used by man for record-keeping, as it was first invented in China around 100 B.C., and a great number of historically pertinent documents are paper-based. There is currently a great emphasis placed on the preservation of these artifacts from chemical degradation, as cellulose-based paper is subject to acidification and oxidation over extended periods of time. However, there are far fewer techniques that exist to protect paper from mechanical wear, which can include tearing, water damage, degradation from UV radiation, and exposure to contaminants.

To protect paper artifacts from this kind of damage, the ideal conservation mechanism would be clear, strong in tension, hydrophobic, UV-resistant, and able to provide a barrier against common airborne particulates. Given these requirements, we looked toward electrospinning a blend of polyvinylidene fluoride with hexafluoropropylene (PVDF/HFP) to deposit a near-invisible membrane of nanofibers directly onto a paper substrate. Electrospinning produces nanofibers by using a high voltage to draw the charged polymer solution toward a grounded collector, where the fibers form a nanofiber mesh. This mesh structure is ideal for our application: it provides a barrier against potential contaminants while remaining porous enough to maintain the ambient atmospheric conditions the paper needs to stay chemically stable. We chose PVDF/HFP for our solution because it is colorless, so the legibility of the paper is maintained, and because it is highly hydrophobic and UV-resistant, so it can protect the paper from water and UV damage.

Below is an image of the experimental setup used for electrospinning.

Figure 1: Electrospinning experimental setup

The first step in creating an appropriate membrane is optimizing the morphology of the fibers produced by electrospinning. Because electrospinning involves many parameters, including spinning time, voltage, solution injection rate, and solution concentration, this required controlling and varying many variables to get the desired results. Overall, the trends I observed are that longer spinning times produce thicker membranes, higher voltages give better overall coverage, and higher injection rates produce more fibers. For our application, we need the fibers to be numerous and provide adequate coverage while the membrane remains thin enough not to affect the legibility of the document. It was sometimes challenging to adjust the parameters to achieve these results, which I discuss further below.

We first deposited the fibers onto aluminum foil to get an initial estimate of the required parameters. We then deposited the fibers onto a paper substrate, where we found that certain adjustments were needed to account for the reduced conductivity of the paper and to maintain the legibility of the text. From these experiments, we found that the ideal spinning time is around 3 minutes: the fibers are just visible enough to identify an appropriate area for an SEM sample, while the membrane is still thin enough to preserve the legibility of the text.

Figures 2, 3, and 4: Visual determinations of membrane thickness

We then analyzed the fibers using SEM imaging to examine their microstructure. SEM imaging is necessary to observe the actual morphology of the fibers: even if the membrane looks thin enough to the naked eye, we need to ensure the fibers provide even coverage of the substrate. Below are some examples of how altering certain parameters changes the morphology of the fibers.

The samples below were spun on an aluminum substrate at a voltage of 16 kV and an injection rate of 0.2 mL/hr, with varying spinning times. These images showed us that the 0.2 mL/hr injection rate was too low, as the coverage is insufficient even at 3 minutes.

Figure 5: SEM analysis of samples with varying spinning times

The samples below were spun on a paper substrate for 3 minutes at 0.5 mL/hr, with varying voltages. After increasing the injection rate to 0.5 mL/hr, we began to see more effective coverage at 16 kV and 20 kV, but beads of solution are still present on the fibers. These resulted from instability during the electrospinning process, likely caused by the solution concentration, and thus viscosity, being too low, as well as by the decrease in conductivity introduced by the paper substrate [1].

Figure 6: SEM analysis of samples with varying voltages

The most recent experiment I performed used the same spinning time, voltage, and injection rate, but with double the concentration of PVDF/HFP in the solution. Because the higher concentration increases the solution viscosity, we hypothesize that bead formation will decrease. However, further SEM imaging is needed to determine whether these conditions are optimal.

Once the fibers have been sufficiently optimized, I can perform an ImageJ analysis of the SEM images to determine the sizes of the spaces between the fibers in the mesh. I can then compare this information to the average diameters of typical airborne contaminants, found in the table below [2]. This will allow me to make a qualitative estimate of whether the membranes will be effective in protecting the paper from these contaminants.
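That planned comparison could be sketched as follows; the pore diameters and contaminant sizes below are hypothetical placeholders for illustration, not measured data or the values in Table 1:

```python
# Hypothetical pore diameters from an ImageJ analysis of SEM images (um).
pore_diameters_um = [1.2, 0.8, 2.5, 1.6]

# Rough, illustrative contaminant diameters (um); real values vary widely.
contaminants_um = {"pollen": 25.0, "dust": 5.0,
                   "bacteria": 2.0, "tobacco smoke": 0.5}

# A contaminant larger than the largest pore cannot pass through the mesh;
# smaller particles may still penetrate, so this is only a first estimate.
max_pore = max(pore_diameters_um)
blocked = {name: diameter > max_pore
           for name, diameter in contaminants_um.items()}
```

This kind of screening only gives a qualitative upper bound, which matches the qualitative estimate described above.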

Table 1: Size of common pollutants or contaminants and aerodynamic diameter of common gases.

In the coming weeks, I would also like to perform tests on paper both with and without the PVDF/HFP membranes. I would first like to perform tensile tests to compare the tensile strength and elongation of each sample, as well as contact-angle tests to compare their hydrophobicity. If I have enough time, I can also perform porosity testing on the membrane-coated paper sample to determine which gases the membrane allows to pass through; this information, in conjunction with Table 1, will help me determine whether the appropriate ambient atmospheric conditions can be maintained with the membrane.

 

References

1: H Fong, I Chun, D.H Reneker, Beaded nanofibers formed during electrospinning, Polymer, Volume 40, Issue 16, 1999, Pages 4585-4592, ISSN 0032-3861, https://doi.org/10.1016/S0032-3861(99)00068-3.

2: Qinglian Li, Sancai Xi, Xiwen Zhang, Conservation of paper relics by electrospun PVDF fiber membranes, Journal of Cultural Heritage, Volume 15, Issue 4, 2014, Pages 359-364, ISSN 1296-2074, https://doi.org/10.1016/j.culher.2013.09.003.

Improving Dimensional Accuracy of Parts Created with Binder Jet Printing

Hello! My name is Zachary Martin, and I am an undergraduate Materials Science & Engineering student at Northwestern. This summer I am working with Dileep Singh in the field of additive manufacturing (AM), and my project focuses on minimizing/controlling dimensional distortions created during the sintering process of binder jet printing, a promising powder printing technique.

The binder jet printing process fabricates entire complex parts by repeatedly infiltrating layers of loose powder with a liquid binder, which holds the targeted region of powder together until postprocessing, forming a “green part” that is later densified and bonded through sintering.

What sets binder jet printing apart from the many other AM technologies is that this first step of creating a “green part” involves only small temperature variations. Other techniques, including the heavily researched Selective Laser Melting (SLM), form final parts in the printing bed through the use of large inputs of energy to rapidly melt and bond each layer of powder together, which introduces a large temperature gradient across the surface of the part. These gradients lead to the development of internal stresses within the component, lowering the performance of the final part.

Binder jetting avoids these problems by creating a green part at a relatively constant temperature, before heating this entire component simultaneously during sintering to ensure the development of temperature gradients within a sample is minimized. Specifically, the sintering process of binder jet printing uses high temperatures to first remove the binder material, and then densify the powder into a final part. By densifying the part simultaneously, sintering promotes the creation of three-dimensional bonding, eliminating issues of anisotropy and yielding parts with bonding structures closer to those found in traditional manufacturing feedstock.

While sintering can significantly increase the mechanical performance of final parts, sintering conditions must be carefully controlled to promote densification while minimizing undesired creep and uneven shrinkage at high temperatures. The process requires parts to be held at high temperatures for an extended period, which allows notable creep to develop in final parts; my project works to reduce this. Additionally, temperature gradients within the furnace can cause unequal rates of shrinkage across a part, leading to dimensional warpage. These dimensional distortions can be seen in final samples, demonstrated by the angled outside edges and disrupted channels in Figure 1 below.

Figure 1: Dimensional changes to channeled green part before (left) and after (right) sintering

In previous studies, warpage of the final part has been counteracted by infiltrating parts with ceramics or other metals. One group reduced distortions by introducing ceramic nanoparticles to fill voids present in the steel powder structure, which greatly suppressed creep during sintering. Another group introduced additional metals with a lower melting temperature than the powder feedstock, with a similar reduction in creep. These solutions, however, alter the performance and properties of the final parts, as the structure is no longer purely steel.

My research focuses on altering the temperature and time conditions of the sintering process to quantify their effects on the overall warpage of a final part. Additionally, by changing sample orientation during sintering, I can identify trends that minimize the effect of gravity and temperature gradients on a part. To systematically quantify warpage, I use a combination of ASTM dimensional guidelines and distortion-measurement methods from previous additive manufacturing papers, which allows samples to be compared empirically with respect to warpage in each dimension. The guidelines I use are visualized below in Figure 2.

Figure 2: Classification of warpage and dimensions measured in a final part
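The per-dimension comparison can be sketched as percent deviation of each measured dimension from its nominal (designed) value; the dimension names and numbers below are hypothetical, and this is a simplification rather than the ASTM procedure itself:

```python
# Hypothetical nominal (designed) and post-sinter measured dimensions (mm).
nominal_mm = {"length": 50.0, "width": 20.0, "height": 10.0}
measured_mm = {"length": 48.6, "width": 19.5, "height": 9.6}

# Percent deviation per dimension: a simple, comparable warpage figure.
deviation_pct = {
    dim: 100.0 * abs(measured_mm[dim] - nominal_mm[dim]) / nominal_mm[dim]
    for dim in nominal_mm
}
```

Reporting one such figure per dimension lets different sintering conditions and orientations be ranked against each other.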

Going forward, the group will move on to the creation of channeled parts from a ceramic material, which must be made with binder jet printing since the high temperatures required to bond the powder cannot be achieved with SLM or other laser-based techniques. The results from my research will reveal optimal processing conditions for minimizing creep and temperature gradients during sintering, and will provide a basis for quantifying the results of the final ceramic part.


References

[1] S. Allen, E. Sachs, “Three-Dimensional Printing of Metal Parts for Tooling and Other Applications”, METALS AND MATERIALS, vol. 6, no. 6, pp. 589-594, 2000.

[2] L Grant, M. Alameen, J. Carazzone, C. Higgs, Z. Cordero, “Mitigating Distortion During Sintering of Binder Jet Printed Ceramics”, Solid Freeform Fabrication Symposium, 2018.

Predicting and Responding to Microclimatic Changes in an Electronics Enclosure

Hello! My name is Richard Yeh, and I am a rising senior majoring in Electrical Engineering and the Integrated Science Program at Northwestern. This summer, I am working with Pete Beckman and Rajesh Sankaran on improving the resiliency of the Array of Things (AoT) nodes. Deployed AoT nodes stay outdoors throughout the year, experiencing the full force of nature through night and day and rain or sun. Despite this, the sensors have to be reliable and resilient to maximize uptime and minimize maintenance, especially as the project scales and more nodes are deployed over a larger area. Examining nodes brought back from deployment over the years, we have observed that the electronics inside are prone to failure, which is expected given the harsh environment they are exposed to. My work here is to develop a method to predict and anticipate weather events that could negatively impact the performance of the nodes, so that preventive action can be taken.

The first step was to ensure that accurate data is being collected, to better understand the environment inside and surrounding the nodes. This involves identifying historically problematic sensors and fixing the pipeline through which their data is sent. Many of the sensors in the AoT nodes sit on a communication bus using the I2C protocol, which can interface with dozens of sensors through just two wires. Previously, it was noted that many of the sensors on this bus often report erroneous values, publishing inaccurate results that were being made public. Additionally, by the nature of the I2C protocol, when one sensor on the bus malfunctions, the entire bus can go down, rendering the other sensors nonfunctional. My first project was to resolve this issue by updating the firmware on the sensor boards to check the I2C bus and react accordingly. The changes allow a scan to be run on the bus to detect and identify failing sensors and “disable” them, preventing data from those specific sensors from being published even when requested.
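The node firmware is not written in Python, but the scan-and-disable logic can be sketched language-agnostically; the sensor addresses and the probe behavior below are mocked for illustration and are not the real firmware interface:

```python
# Hypothetical I2C addresses for sensors on the bus (mocked, not AoT's).
KNOWN_SENSORS = {0x40: "temperature", 0x44: "humidity", 0x60: "pressure"}

def probe(address):
    """Stand-in for an I2C probe; a real bus would ACK a healthy device.
    Here we pretend the humidity sensor at 0x44 is failing."""
    return address != 0x44

def scan_bus(addresses, probe_fn):
    """Return the set of addresses to disable (no ACK on the bus)."""
    return {addr for addr in addresses if not probe_fn(addr)}

disabled = scan_bus(KNOWN_SENSORS, probe)

def publish(address, value, disabled):
    """Drop readings from disabled sensors instead of publishing them."""
    if address in disabled:
        return None
    return value
```

Isolating the failing address this way keeps one bad sensor from poisoning the published data, mirroring the firmware change described above.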

Having updated the firmware on several deployed nodes, the next step was to determine the types of failures that can occur. For electronics in enclosures exposed to a variety of external climatic conditions, one major concern is humidity build-up and condensation, which is problematic for the longevity of the electronics because the presence of water leads to significantly higher rates of corrosion.

Figure 1: Sensor Board from Previously Deployed Node. White discolorations indicate corrosion

In studies applying simulated temperature cycling to a typical electronics enclosure, water content has been observed to accumulate in the enclosure over each cycle, increasing the absolute humidity over time [1]. Additionally, the problem is compounded by possible contamination of the sensor boards from the manufacturing process: ionic residues on the boards can lead to leakage current and corrosion at lower humidity levels as the salts absorb moisture and form conduction paths [2].

To get an idea of when and why corrosion happens on the boards, it is important to see what the climate profile looks like inside the nodes. By using temperature and humidity measurements from sensors inside of the enclosure housing the electronics, the internal microclimate can be observed and analyzed:

Figure 2: Plot of Internal and External Temperature, Relative Humidity, and Absolute Humidity Over 7 Days for 1 Node

It is suspected that condensation may be occurring where there are sudden drops in internal absolute humidity, indicating a loss of water content. However, continued analysis over a longer time range and a larger set of nodes is still underway to correctly identify the cause of the sensor readings and rule out other factors such as rain events.
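The absolute-humidity traces in Figure 2 can be derived from the temperature and relative-humidity readings. A sketch using the common Magnus approximation for saturation vapor pressure (a standard meteorological formula; I am assuming, not confirming, that the analysis pipeline uses something equivalent):

```python
import math

def absolute_humidity(temp_c, rh_pct):
    """Absolute humidity in g/m^3 from temperature (C) and relative humidity (%)."""
    # Saturation vapor pressure in hPa via the Magnus approximation.
    svp = 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))
    # Scale by RH to get actual vapor pressure, then convert to mass
    # density of water vapor using the ideal gas law (2.1674 = Mw/R in
    # the units used here).
    return (svp * rh_pct * 2.1674) / (273.15 + temp_c)

ah = absolute_humidity(20.0, 50.0)  # roughly 8.6 g/m^3
```

Tracking this quantity rather than relative humidity makes losses of water content (such as suspected condensation events) visible even as the temperature swings.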

Eventually, the goal is to extend this kind of real-time analysis and detection to other potentially damaging cases, such as conditions that promote corrosion or extreme temperatures. Once a node senses that the environment is approaching a threshold that would result in one of these damaging scenarios, it can employ emergency self-protecting procedures, such as generating heat with the onboard CPU in cold temperatures, or shutting down parts of the board at risk from condensation. All of this helps keep the nodes alive longer, collecting and publishing accurate data.

 

References

[1] H. Conseil, V. C. Gudla, M. S. Jellesen, and R. Ambat, “Humidity Build-Up in a Typical Electronic Enclosure Exposed to Cycling Conditions and Effect on Corrosion Reliability,” IEEE Transactions on Components, Packaging and Manufacturing Technology, vol. 6, no. 9, pp. 1379–1388, 2016.

[2] V. Verdingovas, M. S. Jellesen, and R. Ambat, “Impact of NaCl contamination and climatic conditions on the reliability of printed circuit board assemblies,” IEEE Trans. Device Mater. Rel., vol. 14, no. 1, pp. 42–51, Mar. 2014.

Computer Vision for the Optimization of Laser Powder Bed Fusion Analysis

Hello there! My name is Lyon Zhang, and I am a rising junior studying Computer Science at Northwestern. This summer I am working with Jakob Elias on a variety of projects with the broad goal of using networking and artificial intelligence techniques to automate the visual analysis of various manufacturing methods. The most extensive and ambitious of these is Laser Powder Bed Fusion, a 3D printing technique that uses a thin layer of metallic powder.

As a short summary, Laser Powder Bed Fusion (LPBF) is an additive manufacturing technique that uses a laser beam to melt a thin layer of metallic powder through to the base below. Like other 3D printing techniques, this method is tremendously useful because it facilitates the automated production of geometrically complex and minuscule parts. However, the high energy of the laser, the scattered nature of the powder bed, and dynamic heating and cooling patterns result in a chaotic process that easily forms defects in defect-sensitive components. For a more detailed overview of LPBF, see Erkin Oto’s post below.

The current method of analyzing defects (specifically, keyhole porosity, as explained by Erkin) is simple manual inspection of X-ray images. Once deformities have been spotted, a researcher must personally sync the X-ray frame up with its corresponding infrared data. Like any analytical process that involves human judgment, this method is time-consuming and somewhat prone to error, even for the best researchers.
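The syncing step amounts to aligning two cameras with different frame rates against a common clock. A minimal sketch of that arithmetic, with illustrative parameter names (the real capture rates and trigger offsets come from the experiment metadata, not these defaults):

```python
def xray_to_ir_frame(xray_frame, xray_fps, ir_fps, xray_t0=0.0, ir_t0=0.0):
    """Map an X-ray frame index to the nearest IR frame index, assuming both
    cameras are timestamped against a common trigger.

    `xray_t0` / `ir_t0` are each camera's start time relative to the trigger.
    """
    t = xray_t0 + xray_frame / xray_fps   # wall-clock time of the X-ray frame
    return round((t - ir_t0) * ir_fps)    # nearest IR frame at that instant
```

For instance, with a (hypothetical) 50 kHz X-ray camera and 10 kHz IR camera started together, X-ray frame 100 corresponds to IR frame 20.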

Thus, the first step was to create a tool that could assist with immediate research needs by providing fast, locally stored data, synced images with informational charts, and dynamic control over the area of interest:

Figure 1: Demo interface with fast data pulling and pixel-precise control

One flaw with the above interface is that the infrared values do not inherently correspond to accurate temperatures, so temperatures must be computed manually. The interface uses a pre-set scale that is not necessarily accurate but is still useful for visualization. For precise research on LPBF, however, exact temperature data is needed to gauge the physical properties of the materials used.

Once again, calibrating exact temperature values is currently done by visually identifying the melt pool in X-ray images, combined with knowledge of the material’s melting point. The method is thus entirely subject to the researcher’s intuition:

Figure 2: X-ray video of powder bed during fusion. The chaotic heating, cooling, and pressure differentials cause distortion and volatile powder particles.

The disorderly nature of the LPBF process creates a consistency problem: one researcher may disagree with another about the correct location of the melt pool in any given experiment, and even the same researcher’s intuition varies from day to day. Any automation of this visual identification problem would therefore immediately provide consistency from experiment to experiment.

The automated visual identification of these image sets takes advantage of the different textures and brightness levels of each region, combined with assumptions about the location of the melt pool relative to those regions. An experimentally discovered sequence of brightness thresholding, Gaussian blurring, median blurring, brightening, and Canny edge detection culminates in semi-accurate region detection for individual images:

Figure 3: Process of region detection for individual images.

This process is quick (approx. 1.5 seconds for all ~100 images) and accurate at first glance. However, putting the processed images together in sequence reveals that the detected bottom of the melt pool is actually quite chaotic. Fortunately, there is a relatively simple solution that takes advantage of the high image count: generating a smoothed path using mean-squared-error regression. With ~100 images contributing, this estimate (with a researcher-inputted offset) is almost guaranteed to accurately emulate the true path of the laser.

Figure 4: (Top) True detected location of the melt pool bottom (red) and smoothed estimate of the laser path (blue). (Bottom) True pixel values of the detected bottom location and the least-mean-squared-error line that returns the rate of movement.
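The smoothing step is an ordinary least-squares line fit over the per-frame detections. A minimal sketch, where the `offset` parameter stands in for the researcher-inputted offset:

```python
import numpy as np

def smooth_bottom_path(frames, bottoms, offset=0.0):
    """Fit a least-squares line through the per-frame detected melt pool
    bottoms. A degree-1 polyfit is exactly the line minimizing mean squared
    error; with ~100 frames contributing, the fitted line tracks the true
    laser path far better than any single noisy detection."""
    slope, intercept = np.polyfit(frames, bottoms, 1)
    return slope * np.asarray(frames) + intercept + offset
```

The fit also yields the slope directly, which is the laser's rate of movement in pixels per frame (the quantity shown in the bottom panel of Figure 4).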

From there, it’s a relatively simple process to match the points demarcating the melt pool on the X-ray image to the corresponding points on the IR images, using the known geometry of the images:

Figure 5: Melt pool bounds on X-ray image (right) and corresponding area on the IR images (left).
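Assuming the two fields of view are related by a uniform scale and an offset (the “known geometry”), the point mapping might look like the sketch below; `scale` and `ir_origin` are per-experiment calibration values, not fixed constants:

```python
def xray_to_ir_point(x, y, scale, ir_origin):
    """Map a pixel (x, y) in the X-ray image to the corresponding IR pixel.

    `scale` is the ratio of IR to X-ray pixel size; `ir_origin` is where the
    X-ray field of view's corner lands in the IR image. Both are assumptions
    standing in for the experiment's actual calibration."""
    ox, oy = ir_origin
    return ox + x * scale, oy + y * scale
```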

While already useful in providing consistency, speed, and accuracy in melt pool detection for LPBF, the process still contains steps that require manual input and should be automated in the future. For example, the existing sequence of image processing techniques used to detect the melt pool was developed iteratively, simply by entering successive combinations into the processing script. Many experiments are conducted with the same X-ray camera settings and thus should use the same image processing techniques. If a researcher labeled the correct laser positions on just a few image sets, it would be trivial for a machine learning model to discover the best combination for use on many subsequent experiments.

Another crucial issue is that although many experiments are run with the same camera settings, not all are. Thus, given different image sets, the optimal image processing parameters might need modifications on a non-trivial timescale. Another potential avenue for future development would be to create a classifier that could determine the image type based on image features, and then select the correct set of processing parameters as determined using the method above.

These two further developments alone could turn this project into a useful tool for the visual analysis of all LPBF experiments, not limited by trivialities such as researcher bias and X-ray imaging settings. Once integrated with the other LPBF analysis research performed this summer, this computer vision project has the potential to form the basis for a powerful tool in LPBF defect detection and control.

References:

  1. Zhao, C., et al. “Real-time monitoring of laser powder bed fusion process using high-speed X-ray imaging and diffraction.” Scientific Reports, vol. 7, no. 3602, 15 June 2017.
  2. Wu, J., Yin, Z., and Xiong, Y. “The Fast Multilevel Fuzzy Edge Detection of Blurry Images.” IEEE Signal Processing Letters, vol. 14, no. 5, May 2007.

In Situ Analysis of Laser Powder Bed Fusion Using Simultaneous High-speed Infra-red and X-ray Imaging

Hello! My name is Erkin Oto, and I am a master's student in Mechanical Engineering at Northwestern. I am working with Aaron Greco and Benjamin Gould on the In Situ Analysis of Laser Powder Bed Fusion Using Simultaneous High-speed Infra-red and X-ray Imaging project.

The laser powder bed fusion process is a type of additive manufacturing (AM) that selectively melts or binds particles in thin layers of powder material to build 3D metal parts. However, the consistency of parts manufactured by this technique is still a problem, and overcoming it requires understanding multiple complex physical phenomena that occur simultaneously during the process.

Because the components are fabricated layer by layer, LPBF allows for the manufacture of geometrically complex parts that are not possible with traditional manufacturing techniques. It can also make complex parts with significantly less wasted material than traditional manufacturing processes. However, the use of a high-power laser beam leads to high temperatures, rapid heating/cooling, and significant temperature gradients, resulting in highly dynamic physical phenomena that can form defects in the parts.

If the build parameters, including laser spot size, power, scan speed, and scan path, are not controlled, the microstructure of the final product can contain unwanted porosity, cracks, residual stress, or an unwanted grain structure. Advanced in situ techniques, particularly those that can correlate material thermal conditions (i.e., heating and cooling rates, and thermal gradients) to build parameters, are therefore essential to solving the unknowns of the process.

Although high-speed X-ray imaging provides great insight into the important sub-surface features that determine the quality of the LPBF process, it is unable to directly convey the quantitative thermal information that is necessary to fully understand the LPBF process.

Also, every industrial machine has some sort of IR camera attached to it. Therefore, if behaviors and defects seen in X-ray can be linked with IR videos, then the IR camera can be used within a control system to prevent defect formation.

The current project I am working on combines high-speed infra-red (IR) imaging and hard X-ray imaging at the Advanced Photon Source (APS) at Argonne to provide an analysis that correlates IR and X-ray images, in order to understand and quantify the dynamic phenomena involved in LPBF. My work consists of observing the formation of multiple points of subsurface porosity, commonly referred to as keyhole pores. Specifically, I am focusing on understanding keyhole porosity formation at the end of each track, referred to as “end-track keyhole porosity.” This phenomenon is of particular interest because, under the same laser and material conditions, end-track porosity is not always observed. I am trying to shed light on the phenomena causing the seemingly random formation of these end-track keyhole pores.

Figure 1: End track keyhole porosity formation

My research group determined that there are large differences in the temperature history of the probed pixels when experiments with and without end-track keyhole porosities are compared: a larger maximum cooling rate and a higher temperature after solidification were observed in the porosity-forming builds.

Therefore, I start by selecting a region of interest, or the exact pixel, on the IR images at the instant the end-track keyhole porosity is observed in the X-ray image. This requires syncing the IR and X-ray images. After finding the right pixel, I look at its cooling rate and compare it with the cooling rate of a pixel taken from an experiment done under the same conditions that did not form an end-track porosity. The cooling rate comparison for keyhole porosity formation is given below.

Figure 2: Cooling rate comparison for two builds, one forming an end-of-track porosity (left: porosity formed; right: no porosity)
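The cooling-rate comparison itself reduces to differentiating a pixel's temperature history over time. A small sketch, assuming a fixed IR frame period (the sign convention here makes cooling positive):

```python
import numpy as np

def cooling_rate(temps, frame_dt):
    """Per-sample cooling rate (degrees per second) of a probed pixel's
    temperature history. `frame_dt` is the IR camera's frame period in
    seconds; np.gradient uses central differences in the interior and
    one-sided differences at the ends."""
    rate = np.gradient(np.asarray(temps, dtype=float), frame_dt)
    return -rate  # negate so that positive values mean cooling

def max_cooling_rate(temps, frame_dt):
    """The maximum cooling rate -- one of the thermal signatures compared
    between builds with and without end-track porosity."""
    return float(np.max(cooling_rate(temps, frame_dt)))
```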

The main advantage of this study is that combining X-ray and IR imaging makes it possible to identify the thermal signatures that cause the formation of defects. This could be further developed into a control system for commercial printers that identifies defects in situ by tracking the thermal signatures of the build. Detecting a defect early could cut the cost of additive manufacturing significantly, since companies would no longer spend time and money finishing a part only to find it is unusable.

 

References:

  1. Parab, N., et al. “Ultrafast X-ray imaging of laser–metal additive manufacturing processes.” Journal of Synchrotron Radiation, vol. 25, 2018, pp. 1467–1477.
  2. Cunningham, R., et al. “Keyhole threshold and morphology in laser melting revealed by ultrahigh-speed x-ray imaging.” Science, vol. 363, no. 6429, 22 Feb. 2019, pp. 849–852.
  3. Zhao, C., et al. “Real-time monitoring of laser powder bed fusion process using high-speed X-ray imaging and diffraction.” Scientific Reports, vol. 7, no. 3602, 15 June 2017.

Using Time Series Techniques to Understand the Correlation between Light, Thermal Radiation, and Reported Temperature Error

Hello! My name is Kevin Mendoza Tudares, and I am a rising sophomore at Northwestern University studying Computer Science. This summer, I am working with Pete Beckman and Rajesh Sankaran on developing a process to clean and organize preexisting and incoming data from the Array of Things (AoT) nodes as well as use time series techniques on this data to quantify the correlation between direct exposure to sunlight and the resulting error in the reported environmental temperature (and humidity) by the node.

Having an up-to-date server and database is critical when working with live, time-series data, and the research team is currently transitioning its database to PostgreSQL extended with TimescaleDB in order to efficiently manage the incoming data from the nodes as time-series data. Part of my work this summer is therefore writing scripts that create mappings and upload data representing the system of nodes and sensors, in the form of .csv files, into the appropriate relational tables in the new database. These scripts will also transfer other preexisting node and sensor data, along with large amounts of measurement data, from the previous database system into the new one. This first part of my work matters for the second, as I will use this same data to find correlations between reported solar exposure and error in reported temperature.
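A simplified sketch of the cleaning step is below. The table and column names are assumptions for illustration, not the team's actual schema; the COPY statement is the kind of bulk-load command PostgreSQL/TimescaleDB would ingest after the table is made a hypertable:

```python
import csv
import io

# Hypothetical bulk-load statement; the real table layout is the team's.
COPY_SQL = "COPY measurements (ts, node_id, sensor, value) FROM STDIN WITH CSV"

def clean_rows(raw_csv):
    """Drop malformed rows (wrong column count, or a non-numeric value
    column) before upload, returning only the rows safe to COPY in."""
    good = []
    for row in csv.reader(io.StringIO(raw_csv)):
        if len(row) != 4:       # expect exactly ts, node_id, sensor, value
            continue
        try:
            float(row[3])       # value column must be numeric
        except ValueError:
            continue
        good.append(row)
    return good
```

Validating rows client-side like this keeps a single bad line from aborting an entire COPY, which matters when transferring large historical batches.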

The second task I will be working on involves knowledge of thermal radiation and how it affects the performance of outdoor temperature instruments, such as those used at climatology stations, which are usually found in white plastic housings or enclosures called Stevenson screens. These enclosures protect the instruments from precipitation and from direct or reflected sunlight while still allowing air to circulate through them, permitting more accurate, undisturbed measurements of the surrounding environment. AoT nodes are built in a similar fashion for the same benefits, as seen in the figures below.

Figure 1: An AoT node

Figure 2: Exterior of a Stevenson screen

Along with the protection this design offers, one issue for the AoT node enclosure is solar gain: the increase in thermal energy, or heat, in a space or object as it absorbs solar radiation. While the node casing protects the temperature sensors from direct incident radiation, as none is transmitted directly through the material and most is reflected, there is still thermal reradiation from the protective material. This is because “despite being coloured white the external surfaces may be free to absorb some short-wave radiation, and some of this may reradiate internally” into the node as long-wave radiation and onto the temperature sensors (Burton 160). The infrared radiation causing the error need not come from the sun directly; it could also come from glare off the glass of a nearby building or the hood of a passing vehicle, but the error most often occurs in the daytime when the sun shines directly on the nodes.

A related issue is the size of the nodes: previous research found that overheating of the air temperature inside smaller Stevenson screens was detected more frequently than in much larger ones, and these findings plausibly apply to the small pole-mounted nodes (Buisan et al. 4415). Excessive solar gain can lead to overheating within a space, and with less space this form of passive heating is more effective because the heat cannot disperse. A final issue with the internal temperature of the nodes is the lack of active ventilation. Studies have found that non-aspirated Stevenson screens (those without ducts) reported significantly warmer internal temperatures than aspirated ones under low-wind conditions (Hoover and Yao 2699). Without aspiration ducts, which all nodes lack, cooling the nodes to ambient temperature depends entirely on wind conditions that circulate air through the node.

Thus, with knowledge of the node issues that could produce errors in ambient temperature data, my task is to find, understand, and quantify the described trend. This process will involve the time-series data I previously cleaned and uploaded: querying a node’s visible and infrared light measurements at times when the calculated temperature error reaches a certain magnitude, and using these associated values to build a model. The model can then be applied, given the light measurements, to estimate a more accurate ambient temperature around the node by accounting for this error at other times.
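One simple form such a model could take is a least-squares fit of temperature error against the light readings. The sketch below is an illustrative stand-in, not necessarily the model the team will use:

```python
import numpy as np

def fit_error_model(visible, infrared, temp_error):
    """Least-squares fit of temperature error against visible and infrared
    light readings plus a constant term. Returns the three coefficients."""
    X = np.column_stack([visible, infrared, np.ones(len(visible))])
    coeffs, *_ = np.linalg.lstsq(X, temp_error, rcond=None)
    return coeffs

def corrected_temperature(reported_temp, visible, infrared, coeffs):
    """Subtract the modeled solar-gain error from the reported reading."""
    a, b, c = coeffs
    return reported_temp - (a * visible + b * infrared + c)
```

Fitting on samples where a trusted reference gives the true error, then applying the correction at all other times, is the workflow the paragraph above describes.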

My work on this project is important because working with accurate data and readings is essential for all other data analysis and machine learning tasks that must be done by the team to identify and predict phenomena in our environment. For this to be done, we must have faith in the data and any trends we see, and I am contributing to help understand these trends and account for them. Special thanks to Pete Beckman, Rajesh Sankaran, and Jennifer Dunn for mentoring me this summer.

 

References:

Buisan, Samuel T., et al. “Impact of Two Different Sized Stevenson Screens on Air Temperature Measurements.” International Journal of Climatology, vol. 35, no. 14, 2015, pp. 4408–4416., doi:10.1002/joc.4287.

Burton, Bernard. “Stevenson Screen Temperatures – an Investigation.” Weather, vol. 69, no. 6, 27 June 2014, pp. 156–160., doi:10.1002/wea.2166.

Hoover, J., and L. Yao. “Aspirated and Non-Aspirated Automatic Weather Station Stevenson Screen Intercomparison.” International Journal of Climatology, vol. 38, no. 6, 9 Mar. 2018, pp. 2686–2700., doi:10.1002/joc.5453.

Powder Ejection in Binder Jetting Additive Manufacturing

Hi there! My name is Sean Wang, and I’m a rising sophomore at Northwestern studying Materials Science and Engineering. I worked with Tao Sun, Niranjan Parab, and Cang Zhao in the X-Ray Science Division on data analysis of powder ejection in binder jetting additive manufacturing.

Additive manufacturing (AM), commonly referred to as 3D printing, is the process of creating parts from CAD models by fusing thin layers of material together. This is typically done using heat, either through Fused Deposition Modeling (where filament is melted into a thin string and layered to create a part) or Powder Bed Fusion (where a laser scans a powder bed and melts powder into layers). These techniques use high amounts of heat, which can lead to residual stress and defects.

An alternative AM technique known as Binder Jetting uses no heat during the process. The process is like printing with ink; a print head moves across the powder bed and drops binder, bonding the powder together to form solid layers. The solid part is later treated with heat, if needed, and finished. Because the binder jetting process does not melt powder, materials that are difficult to melt (such as ceramics) or are sensitive to heat can be utilized.

However, one of the problems that limits the use of binder jetting is powder ejection when the binder drops onto the powder bed. The ejected powder can land on different areas of the bed and lead to powder bed depletion. Understanding how different types of powdered materials eject from the bed will help shape manufacturing parameters and improve the quality of parts created. The Advanced Photon Source lets us take high speed x-ray images that we can analyze for insights into the binder jetting process.

Figure 1: Short x-ray image sequence of SS 316 30µm binder jetting.

During my time at Argonne, I looked at 4 types of powder particles: Stainless Steel 316 (30µm and 9µm), Al2O3 (32µm), and Silicon (9µm). I imported 10 x-ray image sequences into ImageJ to threshold them and filter out spurious features. By isolating the powder particles within a set frame, we can use ImageJ’s built-in particle tracker to count the number of airborne particles and calculate the area of the frame they occupy.

Figure 2: Processed image sequence of SS 316 30µm ready for particle tracking.

 

Figure 3: ImageJ tracking function highlights each particle. The software gives information about the count of particles and their combined area in each frame.
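As a rough stand-in for what ImageJ's particle tracker computes per frame, connected-component labeling recovers the particle count and combined area from a thresholded frame. A minimal sketch, assuming the frame is already binarized (particles = 1, background = 0):

```python
import numpy as np
from scipy import ndimage

def count_particles(binary_frame):
    """Count airborne particles and their combined area in one thresholded
    frame. Each 4-connected group of foreground pixels is one particle."""
    labeled, n = ndimage.label(binary_frame)   # connected-component labeling
    area = int(np.count_nonzero(labeled))      # total foreground pixels
    return n, area
```

Running this over every frame of a sequence yields the count-over-time and area-over-time curves shown in the data graphs below.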

Data and Analysis

Figure 4: Data graphs showing count and area over time.

Several insights can be gleaned from the data provided by ImageJ.

  • SS 316 has the most particle ejection of the powders tested. SS 316 30µm is most likely to cause significant powder bed depletion due to the large volume of ejected particles.
  • Al2O3 has less ejection than SS 316 because of mechanical interlocking, where the particles grasp onto each other and resist ejection.
  • Si 9µm had little ejection, but the ejected powder clumped together in large chunks, which can lead to significant powder bed depletion in localized areas.
  • Area data for Si 9µm is not available because ImageJ was unable to process the images and track particles; the particle count was done manually.

Future work to further the development of binder jetting could incorporate a machine learning algorithm to automatically process images and track particles. In my processed images, some particles also seemed to disappear for a few frames or merge with other particles when their paths crossed (a limitation of the two-dimensional nature of the image sequences). The particle tracking feature in ImageJ cannot account for these occurrences, lowering the accuracy of the data and requiring adjustments for each individual sequence. Automating these processes would let researchers test and analyze a large variety of image sequences and gain insight into this developing process.

Figure 5: Processed image that suffers from disappearing and cross-path particles.

 

Additive manufacturing has the potential to revolutionize the way we design and create by reducing wasted time, energy, and resources from traditional manufacturing processes. I enjoyed assisting the X-Ray Science Division with their research into in-situ characterization of multiple additive manufacturing processes using high-speed x-ray techniques, and am extremely thankful to Dr. Jennifer Dunn, Tao Sun, Niranjan Parab, and Cang Zhao for the opportunity to work at Argonne National Laboratory this summer.