A key challenge facing modern security teams is the explosion of new potential threats, both cyber and physical, and the speed with which new exploits are weaponized. Additionally, in our globalized world, threats can emerge from innumerable sources and manifest as a diverse range of hazards.
Because of this, security teams need to use automation technology and machine learning efficiently to identify threats as, or even before, they emerge if they want to mitigate risks and prevent attacks.
Artificial Intelligence in the Cyber Security Arms Race
Today, AI and machine learning play active roles on both sides of the cybersecurity struggle, enabling attackers and defenders alike to operate at unprecedented speed and scale.
When assessing the role of machine learning in corporate security, you first need to understand how it is already being used adversarially. For example, machine learning algorithms are being used to run massive spear-phishing campaigns. Attackers harvest data through hacks and open-source intelligence (OSINT) and then deploy ‘intelligent’ social engineering strategies with relatively high success rates. Much of this can be automated, which ultimately allows previously unseen attack volumes to be deployed with very little effort.
Another key example, one growing in popularity as the technology becomes both more effective and harder to detect, is the deepfake attack, which uses AI to mimic a person’s voice and appearance in audio and video files. This is a relatively new vector for spreading disinformation and can be harnessed to devastating effect. For example, there are serious fears about the influence deepfakes may have on significant political events such as the 2020 US Presidential Election.
These are just two of the more obvious strategies currently being deployed at scale by threat actors. AI-supported cyberattacks, though, have the potential to go much further. IBM’s DeepLocker, for example, is a proof of concept for an entirely new class of malware in which an AI model conceals a malicious ‘payload’, launching it only when specific criteria are met - for example, when facial recognition identifies the target.
Managing Data Volumes
One of the primary uses of AI for security professionals is managing data volumes. In fact, in Capgemini’s 2019 cybersecurity report, 61% of organizations acknowledged that they would not be able to identify critical threats without AI because of the sheer quantity of data that must be analyzed.
“Machine learning can be used as a ‘first pass’, to bring the probable relevant posts up to the top and push the irrelevant ones to the bottom. The relevant posts for any organization are typically less than 0.1% of the total mass of incoming messages, so efficient culling is necessary for the timely retrieval of the relevant ones.” - Thomas Bevan, Head Data Scientist at Signal.
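The ‘first pass’ Bevan describes can be sketched in a few lines: score each incoming post for probable relevance, rank the stream, and cull everything below a threshold. This is only an illustration of the culling idea - the term weights, threshold, and sample posts are invented for the example, and a production system would use a trained classifier rather than a hand-built word list.

```python
# Minimal sketch of an ML-style "first pass" over an incoming stream:
# score each post, rank by score, and cull the probable noise.
# The weights and threshold below are invented for illustration only.

TERM_WEIGHTS = {"bomb": 0.9, "kill": 0.8, "protest": 0.5, "weather": 0.0}

def relevance_score(post):
    """Crude bag-of-words score standing in for a model's probability."""
    words = post.lower().split()
    return min(1.0, sum(TERM_WEIGHTS.get(w, 0.0) for w in words))

def first_pass(posts, threshold=0.4):
    """Rank posts by relevance and keep only those above the cull line."""
    ranked = sorted(posts, key=relevance_score, reverse=True)
    return [p for p in ranked if relevance_score(p) >= threshold]

posts = [
    "lovely weather today",
    "someone should bomb that place",
    "protest planned downtown friday",
]
print(first_pass(posts))
# → ['someone should bomb that place', 'protest planned downtown friday']
```

Even this toy version shows why culling matters: if the relevant fraction is under 0.1% of incoming messages, ranking and discarding early is what makes timely human review possible.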
Without the assistance of advanced automation software and AI, it becomes impossible to make timely decisions or to detect anomalous activity. As a result, organizations that don’t employ AI and automation for intelligence gathering often miss critical threats or only discover them when it’s too late.
Signal OSINT and Machine Learning
The Signal OSINT platform uses machine learning and automation techniques to improve data collection and aggregation. The platform allows you to create targeted searches using Boolean logic, but it is our machine learning capabilities that allow us to go beyond Boolean keyword searches.
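A Boolean keyword search of the kind described here is straightforward to express in code. The sketch below (the query terms and sample posts are invented, and Signal's query builder supports far richer expressions) shows an AND/NOT filter over a batch of posts:

```python
# Illustrative Boolean keyword filter: keep posts containing both
# "kill" AND "boss" but NOT "game". Query and posts are invented.
import re

def matches(post, all_of, none_of):
    """True if the post contains every required word and no excluded word."""
    words = set(re.findall(r"[a-z']+", post.lower()))
    return all_of <= words and not (none_of & words)

posts = [
    "i'm gonna kill the boss",
    "new boss fight in the game, easy kill",
    "quarterly report due friday",
]
hits = [p for p in posts if matches(p, {"kill", "boss"}, {"game"})]
print(hits)
# → ["i'm gonna kill the boss"]
```

The limitation is visible immediately: the filter only ever sees the literal words in the query, which is exactly the gap machine learning is used to close.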
“By recognising patterns in speech and relations between commonly used words, one can find examples of relevant posts even without keywords. While phrases like ‘I’m gonna kill the boss’ can be picked up by keywords easily, keyword searches alone struggle with more idiomatic speech like, ‘I’m gonna put the boss six feet under’, and will incorrectly flag posts like ‘Check out the new glory kill animation on the final boss’. Machine learning, given the right training data, can be taught to handle these sorts of examples,” says Thomas Bevan.
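Bevan's point - that a model given the right training data can generalize past literal keywords - can be demonstrated with even a tiny classifier. The sketch below trains a naive Bayes model on a handful of invented labeled phrases; it then flags the idiomatic ‘six feet under’ threat that a ‘kill’-keyword search would miss, while passing over the video-game post. This is a toy stand-in, not Signal's actual model, and all training examples are invented.

```python
# Tiny multinomial naive Bayes classifier, Laplace-smoothed, trained on
# invented labeled phrases. It learns that words like "six feet under"
# indicate a threat, without any explicit keyword rule for them.
import math
import re
from collections import Counter

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

TRAIN = [
    ("i will kill him tomorrow", "threat"),
    ("gonna put him six feet under", "threat"),
    ("she deserves to be six feet under", "threat"),
    ("great boss fight, clean kill animation", "benign"),
    ("the final boss in this game is hard", "benign"),
    ("meeting with the boss at noon", "benign"),
]

counts = {"threat": Counter(), "benign": Counter()}
docs = Counter()
for text, label in TRAIN:
    counts[label].update(tokens(text))
    docs[label] += 1
vocab = set(counts["threat"]) | set(counts["benign"])

def log_prob(text, label):
    """log P(label) + sum of log P(word | label) with add-one smoothing."""
    total = sum(counts[label].values())
    lp = math.log(docs[label] / len(TRAIN))
    for w in tokens(text):
        lp += math.log((counts[label][w] + 1) / (total + len(vocab)))
    return lp

def classify(text):
    return max(("threat", "benign"), key=lambda lb: log_prob(text, lb))

print(classify("i'm gonna put the boss six feet under"))            # → threat
print(classify("check out the new kill animation on the final boss"))  # → benign
```

Both test phrases contain the word 'boss' and neither training set contains the exact sentence; the model separates them from the surrounding vocabulary it learned, which is the behavior keyword rules alone cannot deliver.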
Signal continuously scans the surface, deep, and dark web and offers customizable SMS and email alerts, so security teams can get real-time notifications from a wide array of data sources such as Reddit, 4Chan, and 8Kun. Additionally, Signal allows teams to monitor and gather data from dark web sources they would otherwise be unable to access, whether for security reasons or because of captive portals.
Finally, the software allows users to analyze data across languages and translate posts for further human analysis. Additional capabilities, such as our emotional analysis tool Spotlight, can help indicate the threat level based on language indicators.
Complementing AI with Human Intelligence
To stay ahead of this rapidly evolving threat landscape, security professionals should take a layered approach: pair the strategic advantage of machine learning - parsing vast quantities of new data - with human intelligence to compensate for the current limitations of AI.
Machines have been at the forefront of security for decades. Their role, though, is evolving as they take over the heavy lifting, allowing analysts and security professionals to analyze hyper-relevant data efficiently.