Predicting and Responding to Microclimatic Changes in an Electronics Enclosure

Hello! My name is Richard Yeh, and I am a rising senior at Northwestern majoring in Electrical Engineering and in the Integrated Science Program. This summer, I am working with Pete Beckman and Rajesh Sankaran on improving the resiliency of the Array of Things (AoT) nodes. Deployed AoT nodes sit outdoors year-round, exposed to the full force of nature through night and day, rain or sun. Despite this, the sensors have to be reliable and resilient to maximize uptime and minimize maintenance, especially as the project scales and more nodes are deployed over a wider area. Inspection of nodes brought back from deployment over the years shows that the electronics inside are prone to failure, which is expected given the harsh environment they are exposed to. My work is to develop a method to predict and anticipate weather events that could degrade node performance so that preventive action can be taken.

The first step was to ensure that accurate data is being collected so we can better understand the environment inside and around the nodes. This involves identifying historically problematic sensors and fixing the pipeline through which their data is sent. Many of the sensors used by the AoT nodes sit on a communication bus using the I2C protocol, which can interface with dozens of sensors through just two wires. Previously, it was noted that many of the sensors on this bus often report erroneous values, publishing inaccurate results that were being made public. Additionally, because of the way the I2C protocol works, when one sensor on the bus malfunctions, there is a chance for the entire bus to go down, rendering the other sensors nonfunctional. My first project was to resolve this issue by updating the firmware on the sensor boards to run a health check on the I2C bus and react accordingly. The changes allow a scan to be run on the bus to detect and identify failing sensors and “disable” them, preventing data from those specific sensors from being published even when it is requested.
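The actual check lives in the sensor-board firmware, but the idea is easy to illustrate from the Linux side of a node. The sketch below is a minimal, hypothetical example (the addresses, sensor names, and the smbus2 dependency are my assumptions, not the AoT implementation): it probes every legal I2C address, compares the responses to an expected list, and marks unresponsive devices as disabled so their readings are no longer published.

```python
from smbus2 import SMBus

# Hypothetical list of sensor addresses expected on bus 1 (not the real AoT map).
EXPECTED_SENSORS = {0x40: "HTU21D", 0x48: "TMP112", 0x77: "BMP180"}

def scan_bus(bus_id=1):
    """Probe every valid 7-bit I2C address and return the set that responds."""
    alive = set()
    with SMBus(bus_id) as bus:
        for addr in range(0x03, 0x78):
            try:
                bus.read_byte(addr)      # a simple read; no ACK raises OSError
                alive.add(addr)
            except OSError:
                pass                     # no device (or a hung device) at this address
    return alive

def find_disabled_sensors(bus_id=1):
    """Return expected sensors that failed the scan and should be 'disabled'."""
    alive = scan_bus(bus_id)
    return {addr: name for addr, name in EXPECTED_SENSORS.items() if addr not in alive}

if __name__ == "__main__":
    for addr, name in find_disabled_sensors().items():
        print(f"disabling {name} at 0x{addr:02x}: not responding on the bus")
```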

Having updated the firmware on several deployed nodes, the next step was to determine the types of failures that can occur. For electronics in enclosures exposed to a variety of external climatic conditions, one major concern is humidity build-up and condensation, which is problematic for the longevity of the electronics because the presence of water leads to significantly higher rates of corrosion.

Figure 1: Sensor Board from Previously Deployed Node. White discolorations indicate corrosion

In studies that exposed a typical electronic enclosure to simulated temperature cycling, water content accumulated in the enclosure with each cycle, increasing the absolute humidity over time [1]. The problem is compounded by possible contamination of the sensor boards during the manufacturing process: ionic residues on the boards can lead to leakage currents and corrosion at lower humidity levels as the salts begin to absorb moisture and form conduction paths [2].

To get an idea of when and why corrosion happens on the boards, it is important to see what the climate profile inside the nodes looks like. Using temperature and humidity measurements from sensors inside the enclosure housing the electronics, the internal microclimate can be observed and analyzed:

Figure 2: Plot of Internal and External Temperature, Relative Humidity, and Absolute Humidity Over 7 Days for 1 Node
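The plot in Figure 2 uses absolute humidity, which is not reported directly by the sensors but can be derived from temperature and relative humidity. Below is a minimal sketch of that conversion using the standard Magnus approximation; the constants are the usual textbook values, not anything specific to the AoT pipeline.

```python
import numpy as np

def absolute_humidity(temp_c, rel_humidity_pct):
    """Approximate absolute humidity in g/m^3 from temperature (°C) and RH (%).

    Uses the Magnus formula for saturation vapor pressure (in hPa) and the
    ideal gas law for water vapor (gas constant R_v = 461.5 J/(kg·K)).
    """
    e_sat = 6.112 * np.exp(17.62 * temp_c / (243.12 + temp_c))   # hPa
    e = (rel_humidity_pct / 100.0) * e_sat                        # hPa
    temp_k = temp_c + 273.15
    return 100.0 * e / (461.5 * temp_k) * 1000.0                  # g/m^3

# Example: 25 °C at 60% RH is roughly 13.8 g/m^3 of water vapor.
print(round(absolute_humidity(25.0, 60.0), 1))
```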

It is suspected that condensation may be occurring where there are sudden drops in internal absolute humidity, which indicate a loss of water content. However, continued analysis over longer time spans and a larger set of nodes is still being done to correctly attribute these sensor readings, since other factors such as rain events could also explain them.

Eventually, the goal is to extend this kind of real-time analysis and detection to other potentially damaging cases, such as conditions that promote corrosion or extreme temperatures. Once a node senses that the environment is approaching a threshold that would result in one of these damaging scenarios, it can employ emergency self-protecting procedures, such as generating heat with the onboard CPU in cold temperatures, or shutting down parts of the board that could be at risk from condensation. All of this helps keep the nodes alive longer, collecting and publishing accurate data.
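As an illustration of what such a trigger could look like, the sketch below flags a condensation risk whenever the board temperature drops toward the dew point of the air inside the enclosure. This is only a hedged example of the idea: the dew-point formula is the standard Magnus inversion, while the 2 °C safety margin and the function names are made-up placeholders rather than deployed AoT logic.

```python
import math

def dew_point(temp_c, rel_humidity_pct):
    """Dew point (°C) from temperature and relative humidity via the Magnus formula."""
    gamma = math.log(rel_humidity_pct / 100.0) + 17.62 * temp_c / (243.12 + temp_c)
    return 243.12 * gamma / (17.62 - gamma)

def condensation_risk(board_temp_c, air_temp_c, rel_humidity_pct, margin_c=2.0):
    """True if the board is within `margin_c` of the enclosure air's dew point."""
    return board_temp_c <= dew_point(air_temp_c, rel_humidity_pct) + margin_c

# Example: humid enclosure air (28 °C, 85% RH) and a board cooling to 26 °C.
if condensation_risk(26.0, 28.0, 85.0):
    print("condensation risk: consider powering down exposed subsystems")
```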

 

References

[1] H. Conseil, V. C. Gudla, M. S. Jellesen, and R. Ambat, “Humidity Build-Up in a Typical Electronic Enclosure Exposed to Cycling Conditions and Effect on Corrosion Reliability,” IEEE Transactions on Components, Packaging and Manufacturing Technology, vol. 6, no. 9, pp. 1379-1388, 2016.

[2] V. Verdingovas, M. S. Jellesen, and R. Ambat, “Impact of NaCl contamination and climatic conditions on the reliability of printed circuit board assemblies,” IEEE Trans. Device Mater. Rel., vol. 14, no. 1, pp. 42–51, Mar. 2014.

Computer Vision for the Optimization of Laser Powder Bed Fusion Analysis

Hello there! My name is Lyon Zhang, and I am a rising junior studying Computer Science at Northwestern. This summer I am working with Jakob Elias on a variety of projects with a broad goal of using networking and artificial intelligence techniques to automate visual analysis of varying manufacturing methods. The most extensive and ambitious of these is Laser Powder Bed Fusion, a 3D printing technique using a thin layer of metallic powder.

As a short summary, Laser Powder Bed Fusion (LPBF) is an additive manufacturing technique that uses a laser beam to melt a thin layer of metallic powder through to the base below. Like other 3D printing techniques, this method is tremendously useful because it facilitates the automated production of geometrically complex and minuscule parts. However, the high energy of the laser, the scattered nature of the powder bed, and the dynamic heating and cooling patterns make for a chaotic process that readily forms defects in defect-sensitive components. For a more detailed overview of LPBF, see Erkin Oto’s post below.

The current method of analyzing defects (specifically, keyhole porosity, as explained by Erkin) is simple manual inspection of X-ray images. Once deformities have been spotted, a researcher must personally sync the X-ray frame up with its corresponding infrared data. Like any analytical process that involves human judgment, this method is time consuming and somewhat error prone, even for the best researchers.

Thus, the first step was to create a tool that could assist with immediate research needs, by providing fast, locally stored data, synced images with informational charts, and dynamic control over the area of interest:

Figure 1: Demo interface with fast data pulling and pixel-precise control

One flaw with the above interface is that the infrared pixel values do not inherently correspond to accurate temperatures, so temperatures must be computed separately. In the interface, we use a pre-set scale that is not necessarily accurate but is still useful for visualization. However, for precise research on LPBF, exact temperature data is needed to gauge the physical properties of the materials used.

Once again, calibrating exact temperature values is currently done by visually identifying the melt pool in X-ray images and combining that with knowledge of the material’s melting point. This method is thus entirely subject to the researcher’s intuition:

Figure 2: X-ray video of powder bed during fusion. The chaotic heating, cooling, and pressure differentials cause distortion and volatile powder particles.

The disorderly nature of the LPBF process lends itself to a consistency problem: one researcher may have different opinions from another on the correct location of the melt pool in any given experiment, and even the same researcher’s intuition varies from day to day. To this end, any automation of this visual identification problem would immediately provide the benefit of consistency from experiment to experiment.

The automated visual identification of these image sets takes advantage of the different textures and brightness levels of each region, and incorporates these with assumptions about the location of the melt pool relative to said regions. An experimentally discovered sequence of brightness thresholding, Gaussian blurring, median blurring, brightening, and Canny edge detection culminates in semi-accurate region detection for individual images:

Figure 3: Process of region detection for individual images.
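The exact parameters of this sequence are tuned per experiment, but its shape can be sketched with OpenCV. Everything below (kernel sizes, thresholds, file name) is a placeholder assumption rather than the values actually used; the sketch just shows the order of operations: threshold, Gaussian blur, median blur, brighten, then Canny edge detection.

```python
import cv2

def detect_region_edges(path):
    """Run the illustrative threshold -> blur -> brighten -> Canny sequence on one frame."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    # 1. Brightness thresholding to separate bright regions from background.
    _, thresh = cv2.threshold(img, 120, 255, cv2.THRESH_BINARY)

    # 2. Gaussian blur followed by median blur to suppress speckle from flying powder.
    smoothed = cv2.GaussianBlur(thresh, (5, 5), 0)
    smoothed = cv2.medianBlur(smoothed, 5)

    # 3. Brighten (linear gain + offset) so faint boundaries survive edge detection.
    brightened = cv2.convertScaleAbs(smoothed, alpha=1.3, beta=20)

    # 4. Canny edge detection to outline the candidate melt-pool region.
    return cv2.Canny(brightened, 50, 150)

edges = detect_region_edges("xray_frame_0001.png")  # placeholder file name
cv2.imwrite("edges_0001.png", edges)
```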

This process is quick (approx. 1.5 seconds for all ~100 images) and accurate at first glance. However, putting the processed images together in sequence reveals that the detected bottom of the melt pool is actually quite noisy. Fortunately, this has a relatively simple solution that takes advantage of the high image count: a smoothed path is generated using least-squares regression. With ~100 images contributing, this estimate (with a researcher-supplied offset) is almost guaranteed to accurately emulate the true path of the laser.

Figure 4: (Top) True detected location of melt pool bottom (red) and smoothed estimate of laser path (blue). True pixel values of detected bottom location and the least mean squared error line that returns rate of movement (bottom).
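Since the laser traverses the field of view at a roughly constant speed, a straight-line fit is a reasonable smoothing model. Here is a minimal sketch with NumPy (the sample detections and the pixel offset are illustrative assumptions): fit the per-frame detected bottom positions against frame index with least squares, then use the fitted line, plus the researcher-supplied offset, as the smoothed path.

```python
import numpy as np

# Hypothetical per-frame detections: y-pixel of the melt pool bottom in each frame.
detected_bottom_px = np.array([212, 215, 211, 218, 214, 220, 217, 223, 219, 225], float)
frames = np.arange(len(detected_bottom_px))

# A degree-1 polynomial fit minimizes squared error; the slope is the rate of
# movement in pixels per frame (convertible to a scan speed if the pixel size
# and frame rate are known).
slope, intercept = np.polyfit(frames, detected_bottom_px, deg=1)
smoothed_path = slope * frames + intercept

# Researcher-supplied offset (placeholder value) aligning the fit with the true laser path.
laser_path = smoothed_path + 4.0
print(f"rate of movement: {slope:.2f} px/frame")
```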

From there, it’s a relatively simple process to match the points demarcating the melt pool on the X-ray image to the corresponding points on the IR images, using the known geometry of the images:

Figure 5: Melt pool bounds on X-ray image (right) and corresponding area on the IR images (left).

While already useful in providing consistency, speed, and accuracy in melt pool detection for LPBF, the process still contains steps that require manual input and should be automated in the future. For example, the existing sequence of image processing techniques used to detect the melt pool was developed iteratively, simply by entering successive combinations into the processing script. Many experiments are conducted with the same X-ray camera settings and thus should use the same image processing techniques. If a researcher labeled the correct laser positions on just a few image sets, it would be straightforward for a machine learning model to discover the best combination for use on many subsequent experiments.

Another crucial issue is that although many experiments are run with the same camera settings, not all are. Thus, given different image sets, the optimal image processing parameters might need modifications on a non-trivial timescale. Another potential avenue for future development would be to create a classifier that could determine the image type based on image features, and then select the correct set of processing parameters as determined using the method above.

These two further developments alone could turn this project into a useful tool for the visual analysis of all LPBF experiments, unencumbered by trivialities such as researcher bias and X-ray imaging settings. Once integrated with the other LPBF analysis research performed this summer, this computer vision project has the potential to form the basis for a powerful tool in LPBF defect detection and control.


In Situ Analysis of Laser Powder Bed Fusion Using Simultaneous High-speed Infra-red and X-ray Imaging

Hello! My name is Erkin Oto, and I am a master’s student in Mechanical Engineering at Northwestern. I am working with Aaron Greco and Benjamin Gould on the In Situ Analysis of Laser Powder Bed Fusion Using Simultaneous High-speed Infra-red and X-ray Imaging project.

The laser powder bed fusion process is a type of additive manufacturing (AM) that selectively melts or binds particles in thin layers of powder material to build 3D metal parts. However, the consistency of parts manufactured by this technique is still a problem, and overcoming it requires understanding multiple complex physical phenomena that occur simultaneously during the process.

Because the components are fabricated layer by layer, LPBF allows for the manufacture of geometrically complex parts that are not possible to make with traditional manufacturing techniques. It can also make complex parts with significantly less wasted material than traditional manufacturing processes. However, the use of a high-power laser beam leads to high temperatures, rapid heating and cooling, and significant temperature gradients, resulting in highly dynamic physical phenomena that can form defects in the parts.

If the build parameters, including laser spot size, power, scan speed, and scan path, are not controlled, the microstructure of the final product can contain unwanted porosity, cracks, residual stress, or an undesirable grain structure. Advanced in situ techniques, particularly those that can correlate the material’s thermal conditions (i.e., heating and cooling rates and thermal gradients) to the build parameters, are therefore essential to resolve the unknowns of the process.

Although high-speed X-ray imaging provides great insight into the important sub-surface features that determine the quality of the LPBF process, it is unable to directly convey the quantitative thermal information that is necessary to fully understand the LPBF process.

Also, every industrial machine has some sort of IR camera attached to it. Therefore, if behaviors and defects seen in X-ray can be linked with IR videos, then the IR camera can be used within a control system to prevent defect formation.

The current project I am working on combines high-speed infra-red (IR) imaging and hard X-ray imaging at the Advanced Photon Source (APS) at Argonne to provide an analysis that correlates IR and X-ray images, in order to understand and quantify the dynamic phenomena involved in LPBF. My work consists of observing the formation of multiple points of subsurface porosity, commonly referred to as keyhole pores. Specifically, I am focusing on understanding the keyhole porosity that forms at the end of each track, referred to as “end-of-track keyhole porosity.” This phenomenon is especially interesting because, under the same laser and material conditions, the end-of-track porosity is not always observed. I am trying to shed light on the phenomena that cause the seemingly random formation of these end-of-track keyhole pores.

Figure 1: End track keyhole porosity formation

My research group determined that there are large differences in the temperature history of the probed pixels when experiments with and without end-of-track keyhole porosities are compared. The two cases also differed in their maximum cooling rate and in the temperature reached after solidification.

Therefore, I start by selecting a region of interest, or the exact pixel, on the IR images at the instant the end-of-track keyhole porosity is observed in the X-ray image. This requires syncing the IR and X-ray images. After finding the right pixel, I look at its cooling rate and compare it with the cooling rate of a pixel taken from an experiment run under the same conditions that did not form end-of-track porosity. The cooling rate comparison for keyhole porosity formation is given below.

Figure 2: Cooling rate comparison for two builds, one forming an end-of-track porosity (left: porosity formed; right: no porosity)
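The cooling-rate comparison itself is a simple numerical derivative of one pixel’s temperature history. A rough sketch of that step is shown below; the synthetic temperature series, frame rate, and variable names are assumptions, while the real analysis works on the calibrated IR frames.

```python
import numpy as np

# Hypothetical temperature history of one IR pixel after the laser passes (°C),
# sampled at the IR camera's frame rate.
frame_rate_hz = 100_000.0
temps_c = np.array([1900.0, 1620.0, 1400.0, 1230.0, 1100.0, 1005.0, 930.0, 870.0])
times_s = np.arange(len(temps_c)) / frame_rate_hz

# Cooling rate is the (negative) time derivative of temperature.
cooling_rate = -np.gradient(temps_c, times_s)          # °C per second

print(f"maximum cooling rate: {cooling_rate.max():.3e} °C/s")
# Comparing this curve for a build that formed an end-of-track pore against one
# that did not is the core of the analysis described above.
```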

The main advantage of this study is that combining X-ray and IR imaging makes it possible to identify the thermal signatures that cause defect formation. This could be developed further by adding a control system to commercial printers that identifies defects in situ by tracking the thermal signatures of the build. Catching a defect early in the process could cut the cost of additive manufacturing significantly, since companies would no longer spend time and money finishing a part only to find that it is unusable.

 


Using Time Series Techniques to Understand the Correlation between Light, Thermal Radiation, and Reported Temperature Error

Hello! My name is Kevin Mendoza Tudares, and I am a rising sophomore at Northwestern University studying Computer Science. This summer, I am working with Pete Beckman and Rajesh Sankaran on developing a process to clean and organize preexisting and incoming data from the Array of Things (AoT) nodes, as well as using time-series techniques on this data to quantify the correlation between direct exposure to sunlight and the resulting error in the environmental temperature (and humidity) reported by a node.

Having an up-to-date server and database is critical when working with live, time-series data, and at the moment the research team is transitioning its database to PostgreSQL extended with TimescaleDB in order to efficiently manage the incoming node data as time-series data. Part of my work this summer is therefore writing scripts that create mappings and upload the data describing the system of nodes and sensors, provided as .csv files, into the appropriate relational tables in the new database. These scripts also transfer other preexisting node and sensor data, along with large amounts of measurement data, from the previous database system into the new one. This first part of my work matters for the second portion, since I will be working with this same data to find correlations between reported solar exposure and error in reported temperature.
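As a rough illustration of that loading step (the table layout, file name, and connection details are placeholder assumptions, not the team’s actual schema), the snippet below creates a TimescaleDB hypertable for sensor measurements and bulk-loads a .csv into it with PostgreSQL’s COPY:

```python
import psycopg2

conn = psycopg2.connect("dbname=aot user=aot_loader")  # placeholder connection string
cur = conn.cursor()

# A simple measurements table partitioned by time via TimescaleDB's create_hypertable().
cur.execute("""
    CREATE TABLE IF NOT EXISTS measurements (
        ts        TIMESTAMPTZ NOT NULL,
        node_id   TEXT        NOT NULL,
        sensor    TEXT        NOT NULL,
        parameter TEXT        NOT NULL,
        value     DOUBLE PRECISION
    );
    SELECT create_hypertable('measurements', 'ts', if_not_exists => TRUE);
""")

# Bulk-load one exported .csv (placeholder file name) using COPY for speed.
with open("measurements_2019_07.csv") as f:
    cur.copy_expert(
        "COPY measurements (ts, node_id, sensor, parameter, value) FROM STDIN WITH CSV HEADER",
        f,
    )

conn.commit()
cur.close()
conn.close()
```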

The second task I will be working on involves knowledge of thermal radiation and how it affects the performance of outdoor temperature instruments, such as those used at climatological stations and usually found in white plastic housings or enclosures called Stevenson screens. These enclosures protect the instruments from precipitation and direct or reflected sunlight while still allowing air to circulate through them, permitting more accurate, undisturbed measurements of the surrounding environment. AoT nodes are built in a similar fashion for the same benefits, as seen in the figures below.

Figure 1: An AoT Node

Figure 2: Exterior of a Stevenson screen

Along with the protection this design provides, one of the issues for the AoT node enclosure is solar gain: the increase in thermal energy, or heat, in a space or object as it absorbs solar radiation. While the node casing protects the temperature sensors from direct incident radiation, since none of it is transmitted directly through the material and most of it is reflected, there is still thermal reradiation from the protective material. This is because “despite being coloured white the external surfaces may be free to absorb some short-wave radiation, and some of this may reradiate internally” into the node as long-wave radiation and onto the temperature sensors (Burton 160). The infrared radiation causing the error does not need to come from the sun directly; it could also come from glare off the glass of a nearby building or the hood of a passing vehicle, but the error is most likely to occur in the daytime when the sun is shining directly on the nodes.

Another issue that goes hand in hand with thermal reradiation is the size of the nodes: previous research found that overheating of the air temperature inside smaller Stevenson screens was detected more frequently than in much larger screens, and these findings can be applied to the small node enclosures mounted on poles (Buisan et al. 4415). Excessive solar gain can lead to overheating within a space, and with less space this form of passive heating is much more effective because the heat cannot disperse. Finally, one last issue with the internal temperature of the nodes is the lack of active ventilation. Studies have found that Stevenson screens that are non-aspirated (no ducts) report significantly warmer internal temperatures than ones with aspiration ducts under low wind conditions (Hoover and Yao 2699). Without aspiration ducts, which all nodes lack, cooling the nodes to ambient temperature depends entirely on wind conditions that circulate air through the node.

Thus, with knowledge of the node issues that can produce errors in ambient temperature data, my task is to find, understand, and quantify the described trend. This process involves the time-series data I previously cleaned and uploaded: querying the visible and infrared light measurements from a node at times when the calculated temperature error reaches a certain magnitude, and using these associated values to build a model. The model can then be applied at other times, given the light measurements, to estimate a more accurate ambient temperature around the node by accounting for this error.
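A first pass at such a model could be an ordinary linear regression of the temperature error on the node’s visible- and infrared-light readings. The sketch below is purely illustrative: the sample values, the reference temperature, and the choice of a linear model are my assumptions, not the project’s final approach.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical aligned time-series arrays pulled from the database:
# visible light, IR light, node-reported temperature, and a trusted reference temperature.
visible = np.array([120.0, 340.0, 560.0, 910.0, 1500.0, 2200.0])
infrared = np.array([80.0, 150.0, 260.0, 480.0, 700.0, 950.0])
node_temp_c = np.array([21.3, 22.0, 23.1, 24.9, 27.0, 29.4])
reference_temp_c = np.array([21.2, 21.6, 22.3, 23.4, 24.6, 25.9])

# The quantity to explain: how far the node's reading drifts from the reference.
error_c = node_temp_c - reference_temp_c

X = np.column_stack([visible, infrared])
model = LinearRegression().fit(X, error_c)

# Corrected temperature = reported temperature minus the light-predicted error.
corrected_c = node_temp_c - model.predict(X)
print(model.coef_, model.intercept_)
```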

My work on this project is important because working with accurate data and readings is essential for all other data analysis and machine learning tasks that must be done by the team to identify and predict phenomena in our environment. For this to be done, we must have faith in the data and any trends we see, and I am contributing to help understand these trends and account for them. Special thanks to Pete Beckman, Rajesh Sankaran, and Jennifer Dunn for mentoring me this summer.

 

References:

Buisan, Samuel T., et al. “Impact of Two Different Sized Stevenson Screens on Air Temperature Measurements.” International Journal of Climatology, vol. 35, no. 14, 2015, pp. 4408–4416., doi:10.1002/joc.4287.

Burton, Bernard. “Stevenson Screen Temperatures – an Investigation.” Weather, vol. 69, no. 6, 27 June 2014, pp. 156–160., doi:10.1002/wea.2166.

Hoover, J., and L. Yao. “Aspirated and Non-Aspirated Automatic Weather Station Stevenson Screen Intercomparison.” International Journal of Climatology, vol. 38, no. 6, 9 Mar. 2018, pp. 2686–2700., doi:10.1002/joc.5453.

Powder Ejection in Binder Jetting Additive Manufacturing

Hi there! My name is Sean Wang, and I’m a rising sophomore at Northwestern studying Materials Science and Engineering. I worked with Tao Sun, Niranjan Parab, and Cang Zhao in the X-Ray Science Division on data analysis of powder ejection in binder jetting additive manufacturing.

Additive manufacturing (AM), commonly referred to as 3D printing, is the process of creating parts from CAD models by fusing thin layers of material together. This is typically done using heat, either through Fused Deposition Modeling, in which filament is melted into a thin string and layered to create a part, or through Powder Bed Fusion, where a laser scans a powder bed and melts powder into layers. These techniques use large amounts of heat, which can lead to residual stress and create defects.

An alternative AM technique known as Binder Jetting uses no heat during the process. The process is like printing with ink; a print head moves across the powder bed and drops binder, bonding the powder together to form solid layers. The solid part is later treated with heat, if needed, and finished. Because the binder jetting process does not melt powder, materials that are difficult to melt (such as ceramics) or are sensitive to heat can be utilized.

However, one of the problems that limits the use of binder jetting is powder ejection when the binder drops onto the powder bed. The ejected powder can land on different areas of the bed and lead to powder bed depletion. Understanding how different types of powdered materials eject from the bed will help shape manufacturing parameters and improve the quality of parts created. The Advanced Photon Source lets us take high speed x-ray images that we can analyze for insights into the binder jetting process.

Figure 1: Short x-ray image sequence of SS 316 30µm binder jetting.

During my time at Argonne, I looked at four types of powder particles: Stainless Steel 316 (30 µm and 9 µm), Al2O3 (32 µm), and Silicon (9 µm). I imported ten x-ray image sequences into the ImageJ software to threshold them and filter out spurious features. By isolating the powder particles within a set frame, we can use ImageJ’s built-in particle tracker to count the number of airborne particles and calculate the area of the frame they occupy.

Figure 2: Processed image sequence of SS 316 30µm ready for particle tracking.

 

Figure 3: ImageJ tracking function highlights each particle. The software gives information about the count of particles and their combined area in each frame.
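For readers who prefer a scripted version of this step, the sketch below does roughly what the ImageJ threshold-and-analyze-particles workflow does, using scikit-image instead. It is only an illustrative approximation; the file name, threshold choice, and minimum particle size are assumptions, not the settings actually used in the analysis.

```python
import numpy as np
from skimage import io, filters, measure, morphology

# Load one frame of the x-ray sequence as grayscale (placeholder file name).
frame = io.imread("ss316_30um_frame_0042.tif", as_gray=True)

# Threshold: ejected particles contrast with the background; Otsu picks a
# global cutoff automatically (dark-particle convention assumed here).
mask = frame < filters.threshold_otsu(frame)

# Remove spurious few-pixel specks, analogous to ImageJ's size filter.
mask = morphology.remove_small_objects(mask, min_size=10)

# Label connected components and measure them, like "Analyze Particles".
labels = measure.label(mask)
regions = measure.regionprops(labels)

count = len(regions)
total_area_px = sum(r.area for r in regions)
print(f"airborne particles: {count}, combined area: {total_area_px} px")
```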

Data and Analysis

Figure 4: Data graphs showing count and area over time.

Several insights can be learned from the data provided by ImageJ.

  • SS 316 has the most particle ejection out of the different powders. SS 316 30µm is most likely to cause significant powder bed depletion due to the large volume of particles.
  • Al2O3 has less ejection than SS 316 because of mechanical interlocking, where the particles grasp onto each other and prevent ejection.
  • Si 9µm had little ejection, but the ejected powder clumped together in large chunks when ejected. This can lead to significant powder bed depletion in localized areas.
  • Area data for Si 9µm is not available because ImageJ was unable to process the images and track particles; the particle count was done manually.

Some future work to further the development of binder jetting would be to incorporate a machine learning algorithm that automatically processes images and tracks particles. Also, in my processed images some particles seemed to disappear for a few frames or to merge with others when their paths crossed (a limitation of the two-dimensional nature of the image sequences). The particle tracking feature in ImageJ cannot account for these occurrences, which lowers the accuracy of the data and requires adjustments for each individual sequence. Automating these processes would let researchers test and analyze a large variety of image sequences and gain insight into this developing process.

Figure 7: Processed image that suffers from disappearing and cross-path particles.

 

Additive manufacturing has the potential to revolutionize the way we design and create by reducing wasted time, energy, and resources from traditional manufacturing processes. I enjoyed assisting the X-Ray Science Division with their research into in-situ characterization of multiple additive manufacturing processes using high-speed x-ray techniques, and am extremely thankful to Dr. Jennifer Dunn, Tao Sun, Niranjan Parab, and Cang Zhao for the opportunity to work at Argonne National Laboratory this summer.

Flame Spray Pyrolysis: A Novel Powder Production Technology

Hello, my name is Vrishank, and I’m a materials science and engineering major working on Flame Spray Pyrolysis this summer under Joe Libera, Nikola Ferrier, and Jakob Elias, along with Ignacio Gonzales, who worked on the coding side. Flame Spray Pyrolysis (FSP) is a method of producing nanopowders by burning specific precursor solutions in a continuous flame. While the method holds promise for scalable, continuous production of nanoparticles, the production conditions can be optimized to fine-tune the final products.

 

The search for sustainable and scalable nanopowder production is of the utmost importance in the face of the global energy crisis. The high surface-area-to-volume ratio of nanopowders is the key to optimizing industrial processes in which FSP products such as LLZO and silica from TEOS find use as catalysts and electrolytes. One advantage of FSP is that it allows fine-tuning of nanoparticle morphologies, and it will allow Argonne to benchmark industry procedures for different compounds and properties.

 

This summer I worked closely with Joe Libera: understanding the process, trimming data using the in-lab data viewer, and analyzing the optical emission spectroscopy (OES) and Scanning Mobility Particle Sizer (SMPS) data. The main obstacle we encountered was how little literature exists on the correlations between the optical emission spectrum and the product, so we decided the best route was to develop analytical tools to help deconstruct the FSP OES data.

-Vrishank Menon

 

 

I’m Ignacio Gonzales, a rising junior majoring in Mechanical Engineering and in Manufacturing and Design Engineering. Over the summer I worked on analysing data for the Flame Spray Pyrolysis (FSP) project under Jakob Elias. FSP is a new method for producing nanomaterials that Joe Libera has been developing at Argonne National Laboratory. The benefits of this method are that it enables continuous production of nanomaterials (as opposed to bulk production) and that it costs less than current production methods. Currently the FSP system can produce nanomaterials, but these materials are raw; their shape, size, and agglomerate structure are not controlled. The project consists of optimizing the FSP conditions in order to produce a final product with full control over the outcome.

 

I was assigned to the computational side of this project along with David MacCumber. Joe Libera and his team had been conducting various tests with the FSP using various concentrations of both LLZO and TEOS. These trials produced a lot of data to work with, including Optical Emission Spectroscopy (OES) and Scanning Mobility Particle Sizer (SMPS) data. I mainly worked on deconstructing the OES data to obtain specific features such as peak positions, heights, widths, and areas, as well as equations for each broadband component. To achieve this, I worked closely with Vrishank Menon, an intern on the experimental side; he acted as a bridge between the experimental and computational sides of the project and helped us understand the relevance of the data. Additionally, I used various toolkits and packages in Python, such as RamPy and SciPy, to perform a more accurate analysis. In the future, these features will be used to build a neural net and a machine learning platform in which, using additional data such as APS X-ray analysis of the nanomaterials, we can predict the properties of the nanomaterial produced.
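As a flavor of that feature extraction, the sketch below pulls peak positions, heights, and widths out of a synthetic emission spectrum with SciPy. The spectrum, prominence threshold, and baseline handling are all placeholder assumptions; the real analysis also uses RamPy for baseline work and operates on the measured OES data.

```python
import numpy as np
from scipy.signal import find_peaks, peak_widths

# Synthetic stand-in for one OES spectrum: two Gaussian emission lines on a flat baseline.
wavelength_nm = np.linspace(300.0, 800.0, 2000)
spectrum = (
    1.0 * np.exp(-((wavelength_nm - 589.0) ** 2) / (2 * 2.0 ** 2))    # e.g. a Na-like line
    + 0.6 * np.exp(-((wavelength_nm - 670.0) ** 2) / (2 * 3.0 ** 2))  # e.g. a Li-like line
    + 0.02
)

# Locate peaks with a minimum prominence so small noise bumps are ignored.
peaks, props = find_peaks(spectrum, prominence=0.1)

# Full width at half maximum for each detected peak, in samples, converted to nm.
widths_samples, _, _, _ = peak_widths(spectrum, peaks, rel_height=0.5)
nm_per_sample = wavelength_nm[1] - wavelength_nm[0]

for p, w in zip(peaks, widths_samples):
    print(f"peak at {wavelength_nm[p]:.1f} nm, height {spectrum[p]:.2f}, FWHM {w * nm_per_sample:.2f} nm")
```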

-Ignacio Gonzales

Bioprocessing for Additive Manufacturing

Hello, my name is Patricia Lohman and I am a rising sophomore studying materials science and engineering at Northwestern University. I work under Meltem Urgun-Demirtas and Patty Campbell in a bioprocess and reactive separations group. This summer I was tasked with bioprocessing for additive manufacturing, or making 3D-printable pastes out of food waste.

For much of the summer I performed literature searches to compile a list of procedures for making biofilms out of different kinds of food waste. This covered food waste with three different base materials: vegetable and fruit waste (peels, shells, and seeds) made of cellulose, egg shells made of calcium carbonate, and shrimp shells from which chitosan could be extracted. After the search, I worked in the lab recreating the biopolymers found in the studies. I started with fruit and vegetable waste; the process involved digesting dried plant waste in dilute acid and casting the resulting mixture. In particular, spinach waste produced a flexible film.

Figure 1: Spinach waste biofilm

The egg shell biomaterial began with dried, finely ground egg shell powder that was mixed with a binder solution until a clay-like paste was produced. The clay could be molded easily and held its shape.

Figure 2: Egg shell paste

The above egg shell paste used egg shell powder finer than 90 µm and, as a binder, sucrose water in a 1:1 ratio.

Shrimp shells do not contain chitosan directly. Once demineralized and deproteinized, the shells yield chitin. Chitin then undergoes a deacetylation reaction with concentrated NaOH at high temperature, which removes the acetyl group and converts chitin into chitosan.

Figure 3: Chitin to chitosan deacetylation

The effectiveness of the reaction is key in determining the crystallinity, hydrophilicity, degradation, and mechanical properties of chitosan biomaterials. The target was to produce chitosan with a degree of deacetylation of 60% or greater. In the lab I began working with pure purchased chitin, performing the deacetylation under different conditions and measuring the degree of deacetylation. I plan to vary the NaOH concentration and temperature, and to conduct the reaction in an inert atmosphere, to reach that degree of deacetylation. After isolation, chitosan can be added to a number of organic solvents to form an extrudable paste.

Once the biopolymers were replicated, I planned on manipulating process parameters to achieve a consistency of paste that could be extruded by the Discov3ry extruder attachment, made especially for pastes, with an Ultimaker 3D printer.

Making bioplastics out of waste material is not only a novel idea, it’s an essential one. Food and plastic waste are glaring problems with vast detrimental consequences for the planet. Finding alternatives to the materials we use every day is a good first step toward tackling the issue. This project does a great job of addressing waste issues while providing exciting advances for additive manufacturing. I am very grateful that I was able to work on an impactful project and am excited to see where it goes. A special thank you to my PIs and Dr. Jennifer Dunn for all their help this summer.

An Automated Workflow to Pre-process Images for Machine Learning Models

Hello! My name is Sarah O’Brien and I’m a rising Junior and Computer Science major at Northwestern University. I’m working with Maria Chan at Argonne and Eric Schwenker (PhD candidate in Materials Science and Engineering) at Northwestern on developing an automated process to create large training sets for image-based machine learning models.

Having large datasets with labeled classes or features is vital for training deep neural networks – and large datasets, particularly in the field of materials science, are often not publicly available. As such, without an automated workflow, organizing and labeling large sets of images is extremely time consuming and needs to be done manually each time we have a new problem to solve with machine learning.

This summer, we developed an image pre-processing workflow to produce training and test data for our machine learning model. We first obtain many figures and captions from the scientific literature and use string parsing of the caption to decide whether a figure is “compound,” i.e., made up of multiple subfigures. If it is, we extract each individual sub-figure with a figure separation tool developed by researchers at Indiana University2. Below is an example of the compound figure separation process.

TOP: Original Compound Figure from literature1; BOTTOM: Output of Figure Separation tool
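The caption check is essentially a pattern-matching heuristic. The version below is a hypothetical simplification (the regular expression and the label threshold are my guesses at what such a parser might look for, not the workflow’s actual rules): it looks for multiple subfigure labels such as “(a)”, “(b)” in the caption text.

```python
import re

# Subfigure labels typically appear as "(a)", "a)", "(B)" etc. followed by a space.
SUBFIG_LABEL = re.compile(r"\(?\b([a-hA-H])\)\s")

def looks_compound(caption: str, min_labels: int = 2) -> bool:
    """Heuristic: a caption naming two or more distinct subfigure letters is 'compound'."""
    labels = {m.group(1).lower() for m in SUBFIG_LABEL.finditer(caption)}
    return len(labels) >= min_labels

print(looks_compound("Fig. 2. (a) TEM image of the sample; (b) corresponding diffraction pattern."))  # True
print(looks_compound("Fig. 3. High-resolution STEM image of the interface."))                          # False
```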

Once we have a set of many separated figures, we decide whether each is useful for our training set. For example, to create a set of microscopy images, we trained a Convolutional Neural Network as a binary microscopy/non-microscopy image classifier. We trained this classifier on 960 hand-labeled images (480 microscopy and 480 non-microscopy) using two approaches: transfer learning and training from scratch. Both methods yielded classifiers with about 94% accuracy under ten-fold cross validation, and we are working on making the classifier even more accurate by fine-tuning the models. Together with the figure separator, this trained classifier now gives us access to a large number of individual microscopy images for our future work on training a deep learning model.
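For concreteness, the transfer-learning variant of such a classifier can be sketched in a few lines of Keras. The base network, input size, optimizer, and directory layout below are assumptions chosen for illustration; they are not necessarily the architecture or settings our project used.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Pretrained ImageNet backbone with its classification head removed, frozen for transfer learning.
base = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg"
)
base.trainable = False

# Small binary head: microscopy vs. non-microscopy.
model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1.0, input_shape=(224, 224, 3)),  # MobileNetV2 expects [-1, 1]
    base,
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical directory of hand-labeled images: labeled/microscopy/*, labeled/other/*.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "labeled", image_size=(224, 224), batch_size=32, label_mode="binary"
)
model.fit(train_ds, epochs=5)
```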

My work on this project is important because a working image pre-processing workflow is essential for training a machine learning model. The data pre-processing stage of a machine learning task is often quite time consuming, so completing it automatically will let us extract a large amount of information from a collection of images in a relatively short time, and therefore enable automated interpretation and understanding of materials through microscopy.

I’m excited for the future of this project and to see this workflow’s output in action building deep learning tools that will advance scientific collaboration. Special thanks to Dr. Maria Chan, Dr. Jennifer Dunn, and Eric Schwenker for mentoring me this summer.

 

References:

1: https://media.springernature.com/lw785/springer-static/image/art%3A10.1186%2F2047-9158-1-16/MediaObjects/40035_2012_Article_15_Fig2_HTML.jpg

2: S. Tsutsui, D. Crandall, “A Data Driven Approach for Compound Figure Separation Using Convolutional Neural Networks”, ICDAR, 2017.

 

Keyhole Porosity Formation in Metal Additive Manufacturing

My name is Jacob Hechter, and I am a rising junior at Northwestern University working on a degree in Materials Science and Engineering. This summer I’ve been working with Argonne’s Materials for Harsh Conditions group, under Dr. Aaron Greco and Dr. Benjamin Gould, on their Metal Additive Manufacturing (MAM) project. Colloquially known as metal 3D printing, MAM is the process of continuously adding material to a part during manufacture until the part has the desired final shape. This is in contrast to more traditional methods of manufacture, such as milling or grinding, which can be referred to as subtractive manufacturing.

This project focused on a type of MAM often referred to as Selective Laser Melting (SLM) or Powder Bed Fusion (PBF). This process uses a Computer-Aided Design (CAD) document as the source of the design, with the CAD document sectioned into a series of layers. The MAM machine deposits a layer of powder on top of a substrate and scans a laser across this layer of powder to fuse it in the shape described by the bottommost layer of the CAD drawing. This process is repeated until the last layer in the CAD drawing has been completed, leaving a part that has literally been built from the ground up.
MAM has several advantages over more traditional methods of manufacture. It allows for the construction of parts with much greater complexity than traditional manufacturing methods, permitting internal voids and similar features without the need to make multiple pieces that must be welded together. It can make complex parts with significantly less wasted material than traditional methods. It also requires significantly less infrastructure, since it does not need an entire assembly line that must be retooled every time an adjustment is made to a design or a new part is needed. However, MAM has some quite significant disadvantages as well. During the MAM process, the material of the part undergoes complex thermal cycling, being rapidly heated and cooled by repeated scans within the space of seconds. This results in unexpected microstructures and the formation of several characteristic defects which can ruin a part. As a consequence, every single part made via MAM must be individually validated before it can be used in almost any application, making MAM-produced parts significantly more expensive.

Figure 1: Example of X-ray Transmission Video
Figure 2: Example of Top View IR Video

The overall focus of this project is to record in-situ X-ray transmission and IR videos of the MAM process, in an attempt to better understand its behavior and provide tools that can be used to avoid defect formation. The X-ray transmission analysis produces videos with very high spatial and temporal resolution, allowing us to record data at hundreds of thousands of frames per second with pixels less than 2 microns wide. These videos give a fairly good picture of what is happening physically to the sample during the MAM process; an example is shown in Figure 1. However, the only reason we have these X-ray videos is our use of the Advanced Photon Source, and they can only be obtained with relatively thin samples, so it is highly impractical to suggest X-ray video as a source of diagnostic feedback for MAM. On the other hand, pretty much every industrial machine has some sort of IR camera attached to it. If behaviors and defects seen in X-ray can be linked to patterns in the IR videos (example in Figure 2), then it may be possible to use the IR cameras as a diagnostic tool, giving MAM machines feedback during the process to avoid defect formation and reducing the need for exhaustive validation.

Figure 3: Example of keyhole porosity formation

My research has focused on a specific type of defect called keyhole porosity. This occurs when bubbles of gas get trapped underneath the surface of a part during the MAM process, resulting in relatively spherical pores beneath the surface. This is distinct from other types of porosity, which can form from incomplete melting of the powder or improper adhesion between two layers of material. An example of keyhole porosity after a print is shown in Figure 3. To compare the severity of keyhole porosity formation, I processed the area under the surface of the sample with ImageJ and measured the area fraction displaying keyhole porosity. Two examples of this process are shown in Figures 4 and 5.

Figure 4: Measurement of Area Percent Porosity, no porosity
Figure 5: Measurement of Area Percent Porosity, high porosity
Figure 6: X-ray Transmission Image with Vapor Depression example
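The area-fraction measurement shown in Figures 4 and 5 amounts to thresholding the pores in the subsurface region and dividing by that region’s total area. Below is a rough scripted stand-in for the ImageJ step; the file name, crop coordinates, and threshold choice are placeholder assumptions rather than the settings actually used.

```python
import numpy as np
from skimage import io, filters

# Post-build X-ray image (placeholder file name), cropped to the subsurface region of interest.
img = io.imread("single_scan_postbuild.tif", as_gray=True)
subsurface = img[150:400, :]          # hypothetical rows just below the sample surface

# Pores transmit X-rays differently from solid metal, so they contrast with the
# bulk; Otsu thresholding separates the two populations automatically.
pores = subsurface > filters.threshold_otsu(subsurface)

area_fraction = pores.sum() / pores.size
print(f"keyhole porosity area fraction: {100 * area_fraction:.2f}%")
```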

A large majority of my time on this project was spent demonstrating that behaviors observed by others studying this issue are repeatable. In other studies, it was found that the primary physical characteristic correlated with keyhole porosity formation is the geometry of the vapor depression, the column of vapor that penetrates into the bulk of the part during the MAM process. An example is shown in Figure 6. When the width of the vapor depression is kept constant, its depth becomes the primary driving factor for keyhole formation. Physically, the surrounding liquid metal closes around the bottom section of the vapor depression, creating a bubble of vapor that is often trapped beneath the surface when the surrounding material solidifies. In the case of Ti-6Al-4V, the relation is shown in Figure 7: below about 250 micrometers of depression depth there is little to no porosity formation, while above 250-300 microns serious porosity formation starts to occur, increasing fairly strongly with the vapor depression depth until it reaches 5-8% porosity in the 450-550 micron range.

Figure 7: Comparison of porosity and vapor depression depth
Figure 8: Simultaneous X-ray and IR video of single scan image. The top is X-ray Transmission and the bottom is IR Video. The scale bar is in Celsius.
Figure 9: Diagram of 2 line scan
Figure 10: Video of 2 line scan. The top is X-ray Transmission and the bottom is IR Video. The scale bar is in Celsius.

All of the previous data was obtained from single-scan samples, in which a sample was scanned once with the laser used to simulate the MAM process. However, we also performed multiple tests in which samples were scanned multiple times, with a slight offset between scan lines referred to as the hatch spacing. This work was done to study the effect of thermal history on porosity formation. The process of scanning in one direction, offsetting slightly, and then scanning back in the other direction is pictured in Figure 9, and Figure 10 is a video showing this behavior using X-ray transmission on the top and IR on the bottom. This is a better approximation of the actual MAM process: constructing a part with MAM requires hundreds, if not thousands, of scans, and it seemed pertinent to see how these behaviors change from scan to scan. The results show a clear difference between the first and second scan: the second scan displays a deeper vapor depression and, consequently, an increased amount of keyhole formation. The data are shown in Figure 11.

Figure 11: Comparison of the porosity and vapor depression depth for the first and second scans in 2 line scan samples.

As can be seen, only 1 of the 6 samples displays an increase in porosity after the first scan, but 4 of the 6 samples show an increase in porosity after the second scan (Figure 11). Also, all but one of the second scans have a greater vapor depression depth than the first scan, indicating that this increase in porosity formation is due to an increase in vapor depression depth (Figure 11). There is a statistically significant increase in vapor depression depth, with a mean increase of 107.6 micrometers and a standard deviation of 16.4 micrometers, giving a 95% confidence interval of 74.9 to 140.3 microns.

Unfortunately, I have not yet been able to turn this information into anything useful for detecting keyhole formation with IR. I have made several attempts at potential low-hanging fruit, comparing profiles of temperature along the scan line, as well as the spot size seen by the IR camera, to the vapor depression width and depth, but have found nothing of note so far. There are still other relatively simple analyses to try, as well as much more sophisticated methods that could be used to look for such a correlation. This will be one of the aims of future work.

Advancements in Desalination Technologies: Applications of Electrodeionization

Hello! My name is Caroline Kaden and I am a rising senior studying chemical engineering. This summer I worked in the Advanced Materials Division of Argonne National Laboratory, more specifically within the Water-Energy Nexus with Dr. YuPo Lin. Water plays a crucial role in energy and fuel production, from use in power plant cooling towers, to fracking, to serving as a renewable energy source itself through hydroelectric power. Similarly, energy is needed to produce usable water from various sources: pumping, desalinating, and distributing water all require energy. Factors such as climate change and regional variation in population, geography, weather, and natural disasters all contribute to the importance of the Water-Energy Nexus because they can shift water and energy demands greatly and unexpectedly over short periods of time.

One of the most water-consumptive and least efficient processes is thermoelectric cooling, so focusing on its optimization could greatly decrease water and energy use. More specifically, minimizing the energy used to desalinate water, and making more water usable for cooling towers, would make a large difference in the Water-Energy Nexus because cooling towers account for almost 50% of interdependent water withdrawal within the US. The inefficiencies of cooling towers are threefold: high mineral, contaminant, and salt content promotes scaling and therefore decreases functionality; the blowdown water can be very difficult to treat or dispose of because of its high salinity and contaminant content; and even with heavy monitoring, the withdrawal of make-up water is very large and increases the impact within the nexus.

However, electrodeionization (EDI) technology can help solve these issues. EDI is a far more energy-efficient and economical pretreatment than previous water treatment solutions, which means blowdown frequency can be reduced. Sea water, brackish water, produced water, and treated municipal effluents are all possible candidates for makeup water if treated sufficiently and economically, reducing the amount of freshwater needed for cooling tower makeup. These solutions are largely beneficial, as developing and optimizing water reuse technologies can reduce cooling tower water consumption by up to 40%.

This summer my research focused specifically on removing silica from water. Silica is especially difficult to remove because it is almost always present in both its reactive and unreactive forms, and it is nearly impossible to control which form is present; additionally, its solubility is affected by time, pH, and temperature. The experiments I ran involved building an EDI stack with resin wafers inside to promote ionic transport. I then pumped a silica solution through the system as a batch operation, regularly taking the conductivity and pH of both the feed and the concentrate to track the concentration. The setup is shown below; the EDI stack is in the back left.

I found that silica does not move through the stack as easily as salt, as not all of the silica originally put into the system is accounted for in the feed or the concentrate at the end of the experiment. I hypothesize that the silica is being adsorbed onto the resin beads. Because of this, the next steps include swapping out components, such as using different resins and/or membranes to better promote silica transfer to the concentrate stream; changing operating conditions such as flow rate and applied voltage; and running a continuous feed of silica solution to test for a steady-state point of separation.

Overall, this summer’s work was very rewarding and interesting, as it combined my background in chemical engineering with my interest in sustainability, and I look forward to seeing where EDI and separation technology lead!