Purdue researchers create ‘self-aware’ algorithm to ward off hacking attempts

WEST LAFAYETTE, Ind. – It looks like a scene from a spy thriller. An attacker breaches the computer defenses of a nuclear power plant and feeds it false but realistic data, making its computer systems and personnel believe that operations are normal. The attacker then disrupts key machines in the plant, causing them to malfunction or break down. By the time the system operators realize they have been duped, it is too late, and the results are catastrophic.

The scenario is not fictitious; it happened in 2010, when the Stuxnet virus was used to damage nuclear centrifuges in Iran. And as ransomware and other cyberattacks spread around the world, system operators are increasingly worried about these sophisticated “false data injection” attacks. In the wrong hands, the computer models and data analytics – powered by artificial intelligence – that keep today’s power grids, manufacturing facilities and power plants running smoothly could be turned against themselves.

Purdue researchers have developed new self-awareness and self-healing technology to protect industrial control systems against internal and external threats. The project is led by Hany Abdel-Khalik (center), with Yeni Li, a postdoctoral associate in nuclear engineering (right), leading the anomaly detection work, and Arvind Sundaram, a third-year doctoral student in nuclear engineering, implementing the covert cognizance algorithms. (Photo by Purdue University / Vincent Walter)

Hany Abdel-Khalik of Purdue University offers a powerful answer: make the computer models that run these cyber-physical systems both self-aware and self-healing. Using the background noise in these systems’ data streams, Abdel-Khalik and his students embed invisible, ever-changing, single-use signals that turn passive components into active observers. Even if an attacker is armed with a perfect copy of a system’s model, any attempt to introduce tampered data will be immediately detected and rejected by the system itself, with no human response required.

“We call it covert cognizance,” said Abdel-Khalik, an associate professor of nuclear engineering and researcher at Purdue’s Center for Education and Research in Information Assurance and Security (CERIAS). “Imagine having a bunch of bees hovering around you. Once you move a little bit, the whole network of bees responds, so it has that butterfly effect. Here, if someone sticks a finger in the data, the whole system will know that there was an intrusion, and it will be able to correct the modified data.”

Trust through self-awareness

Abdel-Khalik will be the first to say that he is a nuclear engineer, not a computer scientist. But today, critical infrastructure systems in energy, water and manufacturing all use advanced computational techniques, including machine learning, predictive analytics and artificial intelligence. Operators use these models to monitor readings from their machinery and verify that they are within normal ranges. By studying how reactor systems perform and how they respond to equipment failures and other disturbances, Abdel-Khalik became familiar with the “digital twins” these facilities employ: duplicate simulations of the data-monitoring models that help system operators determine when genuine errors are occurring.
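The digital-twin check described above boils down to comparing live sensor readings against what a simulation of the same process predicts, and flagging large disagreements. The sketch below illustrates that idea only; the plant model, parameters, and thresholds are invented for illustration and are not taken from the Purdue work.

```python
import numpy as np

def twin_predict(state, control, dt=1.0):
    """Hypothetical one-step digital-twin model: a first-order response of
    coolant temperature to pump power (illustrative plant physics only)."""
    temp = state[0]
    gain, time_const = 0.8, 50.0              # made-up plant parameters
    return np.array([temp + dt * (gain * control - temp) / time_const])

def anomaly(observed, predicted, noise_std=0.05, threshold=3.0):
    """Flag readings that depart from the twin's prediction by more than
    `threshold` standard deviations of expected sensor noise."""
    return bool(np.any(np.abs(observed - predicted) > threshold * noise_std))

# Compare a live reading against the twin's expectation for the same step.
rng = np.random.default_rng(1)
state = np.array([300.0])                     # current coolant temperature (K)
predicted = twin_predict(state, control=250.0)
observed = predicted + rng.normal(0.0, 0.05, size=1)   # normal operation
print(anomaly(observed, predicted))           # False: reading matches the twin
```

The weakness the article goes on to describe is that an attacker who owns the same twin can fabricate readings that pass exactly this kind of residual check.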

But gradually he became interested in deliberate rather than accidental failures, especially in what could happen when a malicious attacker has a digital twin of their own to work with. This is not a far-fetched scenario, since the simulators used to control nuclear reactors and other critical infrastructure can be readily acquired. There is also the constant risk that someone inside a system, with access to the control model and its digital twin, could attempt a stealthy attack.

“Traditionally, your defense is only as good as your knowledge of the model. If they know your model well enough, then your defense can be breached,” said Yeni Li, a recent graduate of the group, whose doctoral research focused on detecting such attacks using model-based methods.

Abdel-Khalik said: “Any system that currently relies on reviewing monitoring data to make decisions is vulnerable to these types of attacks. If you have access to the data and can change the information, then whoever is making the decision will be basing it on false data.”

To thwart this strategy, Abdel-Khalik and Arvind Sundaram, a third-year doctoral student in nuclear engineering, found a way to hide signals in the system’s unobservable “noise space”. Control models juggle thousands of different data variables, but only a fraction of them are actually used in the core calculations that affect the model’s outputs and predictions. By slightly perturbing these non-essential variables, their algorithm embeds a signal that lets individual components of a system verify the authenticity of incoming data and react accordingly.
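A minimal sketch of that general idea follows: a keyed, per-step watermark is added only to variables that do not drive the model’s outputs, and a receiving component checks for its presence. The split of variables, the key schedule, and the correlation test are all assumptions made for illustration; the published covert cognizance algorithms are considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical split of a 10-variable data packet: the first 3 variables
# drive the model's outputs; the last 7 are "noise space" and do not.
N_VARS, N_ACTIVE = 10, 3

def watermark(secret, step):
    """One-time pseudo-random pattern for this time step, reproducible only
    by components that share `secret` (illustrative key schedule)."""
    gen = np.random.default_rng(abs(hash((secret, step))) % (2**32))
    return 1e-3 * gen.standard_normal(N_VARS - N_ACTIVE)

def embed(data, secret, step):
    """Sender: hide this step's watermark in the non-essential variables.
    Outputs computed from the first N_ACTIVE variables are unaffected."""
    stamped = data.copy()
    stamped[N_ACTIVE:] += watermark(secret, step)
    return stamped

def verify(data, secret, step, threshold=0.5):
    """Receiver: correlate the noise-space components against the expected
    watermark; genuine data carries it, fabricated data does not."""
    w = watermark(secret, step)
    score = np.dot(data[N_ACTIVE:], w) / (np.linalg.norm(w) ** 2)
    return score > threshold

data = rng.normal(0, 1e-4, N_VARS)            # benign sensor packet
data[:N_ACTIVE] = [300.0, 15.2, 0.97]         # the values operators actually use
genuine = embed(data, secret="shared", step=42)
forged = data.copy()                          # attacker's perfect-looking copy
print(verify(genuine, "shared", 42))          # True: watermark present
print(verify(forged, "shared", 42))           # False: tampered data rejected
```

The point of confining the perturbation to the noise space is that the watermark never changes what the control model computes, so it is invisible to anyone watching the outputs.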

“When you have components that are loosely coupled to each other, the system isn’t really aware of the other components or even of itself,” Sundaram said. “It only responds to its inputs. When you make it self-aware, you build an anomaly detection model within the system itself. If something is wrong, it should not only detect it, but also operate in a way that doesn’t honor the malicious inputs it has received.”

For added security, these signals are generated by random noise from system hardware, such as fluctuations in temperature or power consumption. An attacker with a digital twin of an installation’s model would not be able to anticipate or recreate these ever-changing data signatures, and even someone with internal access would not be able to crack the code.
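Because the signals are seeded by physical noise that only the running system measures, even an insider cannot precompute them. A minimal sketch of that idea, using a keyed hash over a sampled noise reading as a one-time authentication tag; the sampling function, payload format, and tag scheme are assumptions for illustration, not the team’s implementation.

```python
import hashlib
import hmac
import os
import struct

def sample_hardware_noise():
    """Stand-in for reading thermal or power-draw fluctuations; here we
    simply draw from the OS entropy pool for illustration."""
    return os.urandom(16)

def one_time_tag(payload: bytes, noise: bytes) -> bytes:
    """Derive a single-use authentication tag from the sampled noise, so
    the tag changes every cycle and cannot be predicted in advance."""
    return hmac.new(noise, payload, hashlib.sha256).digest()

# The sending component tags its measurement with this cycle's noise sample;
# a receiver that observes the same noise sample can verify the tag.
noise = sample_hardware_noise()
payload = struct.pack("!d", 300.15)            # e.g., a temperature reading
tag = one_time_tag(payload, noise)
print(hmac.compare_digest(tag, one_time_tag(payload, noise)))  # True
```

An attacker replaying or fabricating a packet would have to guess the noise sample for that exact cycle, which is what makes the signature ever-changing.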

“Whenever you develop a security solution, you can trust it, but you still have to give somebody the keys,” Abdel-Khalik said. “If that person turns on you, then all bets are off. Here we are saying that the added perturbations are based on the noise of the system itself. So there is no way I could know what that noise is, even as an insider. It is automatically recorded and added to the signal.”

Although the papers published by team members have so far focused on applying their paradigm to nuclear reactors, the researchers see potential for applications across industries – any system that uses a control loop and sensors, Sundaram said. The same methods could also serve purposes beyond cybersecurity, such as self-healing anomaly detection that could prevent costly shutdowns, and a new form of cryptography that would allow critical systems to share data securely with external researchers.

Cyber becomes physical

As nuclear engineers, Abdel-Khalik and Sundaram have drawn on the expertise and resources of CERIAS to find entry points into the worlds of cybersecurity and computer science. Abdel-Khalik credits Elisa Bertino, the Samuel D. Conte Professor of Computer Science and research director of CERIAS, with the original spark that led to the covert cognizance algorithm, and thanks the center for exposing him to new partnerships and opportunities.

Founded in 1998, CERIAS is one of the oldest and largest research centers in the world dedicated to cybersecurity. Its mission, explains managing director Joel Rasmus, has always been interdisciplinary, and today the center works with researchers from 18 departments and eight Purdue colleges. Abdel-Khalik’s research is a perfect example of this diverse network.

“When most people think of cybersecurity, they think only of computer science,” Rasmus said. “Here is a nuclear engineering faculty member doing incredible cybersecurity and cyber-physical security work. We have been able to connect him with computer scientists at Purdue who understand this problem, but who don’t know anything about nuclear engineering or the power grid, so they can work together.”

Abdel-Khalik and Sundaram have begun to explore the commercial possibilities of covert cognizance through a startup. That startup, Covert Defenses LLC, recently engaged with Entanglement Inc., a high-tech startup, to develop a go-to-market strategy.

In parallel, the team is working to develop a software toolkit that can be integrated into the cyber-physical testbeds at CERIAS and Pacific Northwest National Laboratory, where sensors and actuators coupled with software provide large-scale simulation of industrial systems.

“We can provide additional applications for the technologies he is developing, since this is an idea that can help almost any cyber-physical domain, such as advanced manufacturing or transportation,” Rasmus said. “We want to make sure that the research we do actually helps move the world forward, that it helps solve real-world problems.”

Cybersecurity is a critical topic as part of Purdue’s Next Moves, ongoing strategic initiatives that will advance the university’s competitive advantage. Purdue’s cybersecurity research and education initiatives are centered on CERIAS, which has 135 affiliated faculty members.

About Purdue University

Purdue University is a leading public research institution that develops practical solutions to today’s most difficult challenges. Ranked in each of the past four years as one of the 10 Most Innovative Universities in the United States by U.S. News & World Report, Purdue delivers world-changing research and extraordinary discoveries. Committed to hands-on and online, real-world learning, Purdue offers a transformative education to all. Committed to affordability and accessibility, Purdue has frozen tuition and most fees at 2012-13 levels, allowing more students than ever to graduate debt-free. See how Purdue never stops in the persistent pursuit of the next giant leap at https://purdue.edu/.

Media contact: Kayla Wiles, 765-494-2432, [email protected]

Writer: Rob Mitchum

Source: Hany Abdel-Khalik, [email protected]

