If you didn’t catch the previous parts of this series on what electronic tests are and how they relate to sustainability, you can find the first part here! (No really, you’ll be very confused if you read this article otherwise 😁).
But as a quick refresher: improved technology is key to sustainability, from energy-efficient LEDs to environmentally friendly batteries! Improving technology takes a lot of testing, though. These tests look for ‘failure modes’ (the events that lead to electronics breaking) by accelerating both burn-in and lifetime wear under different (usually harsh) conditions. For example, you might run a burn-in test for an integrated circuit chip (computer chip) at 130°C instead of 25°C, even though most computers spend their lives at room temperature, not 130°C.
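How much does a hotter test actually help? One common way engineers estimate this is the Arrhenius model, which turns a temperature difference into an ‘acceleration factor’. Here’s a minimal sketch in Python; the activation energy is just an illustrative value, since the real number depends on the specific failure mode being tested.

```python
import math

# Boltzmann constant in electron-volts per kelvin
BOLTZMANN_EV = 8.617e-5

def acceleration_factor(t_use_c, t_stress_c, activation_energy_ev=0.7):
    """Estimate how much faster degradation happens at the stress
    temperature compared to the normal use temperature (Arrhenius model).
    The 0.7 eV default is a placeholder, not a real spec value."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((activation_energy_ev / BOLTZMANN_EV)
                    * (1 / t_use_k - 1 / t_stress_k))

# e.g. testing a chip at 130 °C instead of its usual 25 °C
print(acceleration_factor(25, 130))
```

Under these (made-up) assumptions, the 130°C test ages the chip on the order of a thousand times faster than sitting at room temperature. That’s the whole point of harsh conditions.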
Industry now uses a variety of tests to check the durability and longevity of its products, such as the Highly Accelerated Stress Test (HAST), the Highly Accelerated Life Test (HALT), the High Temperature Operating Life (HTOL) test, the Accelerated Lifetime Test, etc. All of these follow the same principle: the worse the conditions, the faster the degradation. The tricky part is balancing speed against accuracy, making sure the harsh conditions don’t become so unrealistic that they stop telling you anything about real life. For example, solar panels need to absorb sunlight, and for higher efficiency they need to reflect very little of it. To achieve that, solar panels are treated with special antireflective coatings. The coating has to keep performing under intense heat, while also standing up to cracking, atomic defects, rain, snow, and… bird poo!! How do you go about including all of that in your testing standard???

This is why standards organisations exist, like the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the Deutsches Institut für Normung (DIN). These organisations have hundreds of committees of experts with decades of industry experience who decide the specific test requirements for any electronic part (from batteries to supercapacitors to hard drives and beyond).

Keep in mind that these standards aren’t regulations. Manufacturers just use them to demonstrate their products’ quality. But tests are costly, so manufacturers often use statistical or simulation-based tools to predict test outcomes early. These tools are called ‘prognostics’. Many companies are now developing prognostics to estimate the health of electric vehicle batteries after they’ve been in use. This is especially useful when products reach the end of their first lifecycle, to predict whether (and how efficiently) they could enter a second one (e.g. reusing batteries, or parts of bulky electrical equipment such as washing machines or fridges).
Prognostics are often based on statistical models. These models find relationships in data and extrapolate them between variables, fit special statistical distributions like the Weibull distribution (commonly used to describe failure times), and can even use machine learning techniques to classify and forecast data. Still, not all tests and electronics will be ‘fixable’ with statistical prognostics alone. For example, some types of electronic products don’t show ANY failure modes for a long time, well beyond 100 hours (e.g. electrical grid components like insulated transformers and power cables, which often maintain their performance for YEARS). A 100-hour burn-in test would catch defective units for many kinds of electronics, but it means some important sustainable technologies would need far longer burn-in tests.
This pretty much eliminates any statistical model where you collect some data and then extrapolate a fitted curve forward to predict results.
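For electronics that do produce failure data within the test window, here’s roughly what that curve-fitting looks like. It’s a minimal sketch with invented failure times, using SciPy’s Weibull fit. The catch, as described above: if nothing fails during the test, there’s nothing to fit.

```python
from scipy import stats

# Invented failure times (in hours) from an imaginary accelerated test
failure_hours = [120, 340, 560, 610, 800, 950, 1100, 1400]

# Fit a two-parameter Weibull distribution (location fixed at zero)
shape, loc, scale = stats.weibull_min.fit(failure_hours, floc=0)

# Extrapolate forward: estimated fraction of units failed by 2000 hours
print(stats.weibull_min.cdf(2000, shape, loc=loc, scale=scale))
```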

There’s also another issue with using statistical prognostics to develop sustainable electronics faster: some electronics won’t have quantifiable data you can feed into statistical models. This happens for two main reasons: quantifiable test standards haven’t been developed yet, or the electronic product is too complicated to measure directly.
For instance, test standards for optical coatings in fibre-optic networks (i.e. high-speed Internet) have only been around for about 45 years. Coatings are tested to make sure they resist scratches and don’t easily come off the things they’re applied to (like mirrors and lenses). These tests take hundreds to thousands of hours, and some parts are quantifiable (ex. 1000 hours at 85°C and 85% humidity). But other parts are just ‘visual’ checks: rub a cheesecloth against a coating 50 times and see if you have scratches. It’s hard to gather data about the effect of cheesecloth rubs, though. So statistical models aren’t easy to build here.
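To be fair, the damp-heat half of that test can be turned into numbers. Peck’s model is one common way reliability engineers convert a test like 1000 hours at 85°C/85% humidity into an acceleration factor; it comes from semiconductor reliability work, so treat this as an illustration rather than what coating standards actually prescribe. The exponent and activation energy below are placeholder values. There’s no equivalent formula for the cheesecloth check.

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def peck_acceleration(rh_use, rh_stress, t_use_c, t_stress_c,
                      n=2.7, activation_energy_ev=0.8):
    """Peck's model: humidity term times Arrhenius temperature term.
    The exponent n and activation energy are illustrative placeholders."""
    humidity_term = (rh_stress / rh_use) ** n
    temp_term = math.exp((activation_energy_ev / BOLTZMANN_EV)
                         * (1 / (t_use_c + 273.15) - 1 / (t_stress_c + 273.15)))
    return humidity_term * temp_term

# e.g. 85 °C / 85% RH test conditions vs. 25 °C / 50% RH in normal use
print(peck_acceleration(50, 85, 25, 85))
```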
In contrast, integrated circuit chips (computer chips) have had testing standards since the 1970s. But each chip has billions of transistors (electronic switches that control whether current passes), capacitors (components that store electric charge), and other parts. How do you test whether one of them is broken? No magical engineering solutions here, unfortunately. Instead, engineers create what are called ‘test patterns’: different input-output pairs to run on the chip. If you feed in an input and get the expected output, the chip is working. For example, you could write data to the chip and then read it back. If the data you read isn’t what you tried to write, you know the chip didn’t save the data properly.
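Here’s a toy version of that write-then-read-back idea, just to make it concrete. The ‘chip’ is a Python list standing in for eight memory cells, with one cell deliberately broken so you can see how a failure gets caught. Real test patterns are generated by specialised tools, not written by hand like this.

```python
class FaultyMemory:
    """A pretend memory with 8 cells, where cell 5 is stuck at 0."""
    def __init__(self):
        self.cells = [0] * 8

    def write(self, address, bit):
        # Cell 5 ignores whatever you write to it (a simulated defect)
        self.cells[address] = 0 if address == 5 else bit

    def read(self, address):
        return self.cells[address]

def run_test_pattern(memory, pattern):
    """Write a pattern of bits, read it back, and report mismatched addresses."""
    for address, bit in enumerate(pattern):
        memory.write(address, bit)               # write the pattern in
    return [address for address, bit in enumerate(pattern)
            if memory.read(address) != bit]      # read back and compare

print(run_test_pattern(FaultyMemory(), [1, 1, 1, 1, 1, 1, 1, 1]))  # prints [5]
```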
BUT ‘data saved’ isn’t really a physical variable we can measure the way we measure things in other electronics. And even if we tried to measure variables like temperature or current, which of the billions of components in a chip do you take readings from?? This is why statistical models are also hard to build here.
In both cases, though, there is still an option for taking measurements. Instead of collecting data from the optical coatings or computer chips themselves, you collect data from the machines that make them. Then you use data from the factory production environment to optimise the production process and MAYBE make some predictions about how well individual batches of electronics will work. There are companies working on this very complicated approach. But especially with newer sustainable technology (ex: new types of energy storage or energy generation electronics), there’s a lot of potential for new growth!
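As a heavily simplified sketch of what that could look like: imagine each batch comes with a few production readings plus the fraction of its units that later passed testing, and you fit a simple model to predict the pass rate of future batches from the production data alone. The readings, column meanings, and numbers below are completely made up for illustration.

```python
from sklearn.linear_model import LinearRegression

# Each row: [average oven temperature (°C), average coating thickness (nm)]
# for one production batch (invented values)
process_readings = [[230, 410], [228, 395], [235, 420], [231, 405], [240, 430]]
batch_pass_rate  = [0.97, 0.95, 0.91, 0.96, 0.85]

# Fit a simple linear model relating production readings to pass rate
model = LinearRegression().fit(process_readings, batch_pass_rate)

# Predict the pass rate of a new batch from its production readings alone
print(model.predict([[233, 415]]))
```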
Certainly, there’s a lot of innovation to be had in creating more sustainable technology for the future… but better testing technology is the prerequisite to unlocking that innovation!
If you have any questions about this article, feel free to email Voltx’s cofounders: Alishba Imran or Shagun Maheshwari!
Thank you to: 🙏
- Dr. Jeff Jones from the IEC. I wouldn’t have understood the connections between different electronic products without you!
- Dr. Darayus Patel from the Nanyang Technological University. I’m grateful for all your enthusiastic support in breaking down semiconductor fabrication with me!
- Dr. Stefaan Vandendriessche from Edmund Optics. I couldn’t have imagined the issues with testing optical coatings without your tip!
- Dr. Robert Herrick from Intel. I’m amazed by all your selfless support in answering my endless questions about the optoelectronics industry!
Written by Madhav Malhotra, a 17-year-old developer, designer, and entrepreneur-in-training. To find out more about the author, please visit https://www.madhavmalhotra.com/