Computer scientists at the Department of Energy's Pacific Northwest National Laboratory have developed a method that substantially improves detection of a common internet attack compared with current approaches. The key to the new method is keeping close watch on the ever-shifting traffic patterns of the internet. PNNL scientist Omer Subasi presented the results on August 2 at the IEEE International Conference on Cyber Security and Resilience, where the work was recognized as the best research paper at the meeting.
The scientists modified the playbook most commonly used to detect denial-of-service attacks, in which perpetrators try to bring a website to its knees by flooding it with requests. Attackers may hold a website hostage for ransom, or simply aim to disrupt users or businesses. Most systems rely on a raw number known as a threshold to try to identify such attacks: if the number of visitors trying to reach a site exceeds that threshold, an attack is considered likely and defensive measures kick in. But relying on a threshold can leave systems vulnerable.
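As a rough illustration of the threshold approach the article describes, the sketch below flags an attack whenever request volume in a sliding window exceeds a fixed cutoff. The window length and cutoff are hypothetical values chosen for illustration, not figures from PNNL's work.

```python
from collections import deque

THRESHOLD = 10_000      # hypothetical requests-per-window cutoff
WINDOW_SECONDS = 60     # hypothetical sliding-window length

def is_attack(request_timestamps: deque, now: float) -> bool:
    """Flag an attack whenever the request count within the last
    WINDOW_SECONDS exceeds a fixed threshold."""
    # Drop requests that have aged out of the window.
    while request_timestamps and request_timestamps[0] < now - WINDOW_SECONDS:
        request_timestamps.popleft()
    return len(request_timestamps) > THRESHOLD
```

A detector like this sees only volume: a benign traffic spike above the cutoff trips it, while a real attack just under the cutoff slips past, which is exactly the weakness the researchers set out to fix.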
“A threshold really doesn’t give any insight or information about what is truly going on in your system,” said Subasi. “A straightforward threshold can easily miss actual attacks, with severe repercussions, and the defender may not even be aware that it is happening.”
A threshold can also generate false alarms, each with significant consequences of its own. A false positive can force defenders to take a site offline and halt legitimate traffic, effectively accomplishing what a real denial-of-service (DOS) attack sets out to do.
“It is not sufficient to just identify high-volume traffic. You have to understand that traffic, which is changing all the time,” Subasi said. “Your network needs to be able to distinguish between an attack and a benign event that causes a sudden surge in traffic, like the Super Bowl. The behaviors are quite similar.”
According to Kevin Barker, the lead investigator, “You don’t want to throttle the network yourself when there isn’t an attack underway.”
To improve detection accuracy, the PNNL team sidestepped the idea of thresholds entirely. Instead, the group focused on entropy, a measure of disorder in a system.
Ordinarily, the internet exhibits high disorder everywhere. During a denial-of-service attack, however, two measures of entropy move in opposite directions. At the destination address, entropy is low, because an unusually large number of clicks are converging on a single location. But the sources of those clicks, whether humans, zombies, or bots, are scattered across many locations: high entropy. The mismatch can signal an attack.
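A minimal sketch of that entropy mismatch, assuming Shannon entropy is computed over the source and destination addresses of a traffic sample. The toy addresses and the gap cutoff below are illustrative, not PNNL's actual detection logic.

```python
import math
from collections import Counter

def shannon_entropy(items):
    """Shannon entropy (in bits) of the empirical distribution of items."""
    counts = Counter(items)
    total = len(items)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy traffic sample: many scattered sources converging on one destination.
sources = [f"10.0.{i % 250}.{i % 199}" for i in range(5000)]   # spread out -> high entropy
destinations = ["203.0.113.7"] * 5000                          # concentrated -> low entropy

src_h = shannon_entropy(sources)
dst_h = shannon_entropy(destinations)

# A wide gap between source and destination entropy is the
# mismatch the article describes as a possible attack signal.
if src_h - dst_h > 5.0:  # illustrative cutoff, not from the paper
    print(f"possible DoS: source entropy {src_h:.2f} bits vs destination {dst_h:.2f} bits")
```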
In PNNL's evaluation, ten standard algorithms correctly identified, on average, 52 percent of DOS attacks; the best of them identified 62 percent. The PNNL formula identified 99 percent of such attacks.
Eliminating thresholds is not the only source of the improvement. To boost accuracy further, the PNNL researchers added a twist: instead of examining entropy levels only at a single point in time, they also monitored how entropy trended over time. In addition, Subasi investigated several ways of computing entropy. Many algorithms that detect denial-of-service attacks rely on the Shannon entropy formula. Subasi instead chose a formula known as Tsallis entropy for some of the underlying mathematics.
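For reference, Shannon entropy is H = -Σ pᵢ ln pᵢ, while Tsallis entropy is S_q = (1 - Σ pᵢ^q)/(q - 1), which recovers Shannon entropy in the limit q → 1. The sketch below computes both for a concentrated (attack-like) distribution and a uniform (benign-like) one; the choice q = 2 is an assumption for illustration, since the article does not state the value used in the PNNL work.

```python
import math

def shannon(probs):
    """Shannon entropy: H = -sum(p * ln p)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def tsallis(probs, q=2.0):
    """Tsallis entropy: S_q = (1 - sum(p**q)) / (q - 1).
    Recovers Shannon entropy in the limit q -> 1."""
    if q == 1.0:
        return shannon(probs)
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

# A concentrated distribution (attack-like) vs a spread-out one (benign-like).
concentrated = [0.97] + [0.01] * 3
uniform = [0.25] * 4

for name, dist in [("concentrated", concentrated), ("uniform", uniform)]:
    print(f"{name}: Shannon={shannon(dist):.3f}, Tsallis(q=2)={tsallis(dist):.3f}")
```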
Subasi found the Tsallis formula to be hundreds of times more sensitive than the Shannon formula at weeding out false alarms and distinguishing legitimate flash events, such as heavy traffic to a website during the World Cup, from an attack.
The PNNL approach is automated; it does not require close human supervision to distinguish legitimate traffic from an attack. The researchers describe their program as “lightweight,” meaning it needs neither significant computing power nor large network resources to do its job. That distinguishes it, they say, from solutions based on machine learning and artificial intelligence, which also avoid thresholds but require large amounts of training data.
Now, the PNNL team is exploring how the rollout of 5G networking and the booming internet of things landscape will affect denial-of-service attacks.