AI in Security & Cyber
The InfoSec Consulting Series #17
By Jay Pope
Just when it seems that the volume of digital information in existence couldn't get any larger, it does. So, too, do the variety of potential uses and abuses to which it can be put, and the bodies of legislation that attempt to control them. Cyber-crime already costs the global economy more than £335 billion each year, a figure that has more than doubled in five years, and other estimates suggest the true cost is significantly higher still.

Many businesses and governments seem to underestimate the number and cost of the attacks they suffer. In a recent poll of senior executives, for example, almost all respondents reported attacks, but only 25% reported receiving attempted phishing exploits, despite phishing emails being an almost daily event in many business inboxes. This discrepancy could be because IT departments already provide a strong defence against these well-known attack vectors; however, many breaches are discovered only months after the event, or never.

It would be hard for the resources of any profession to grow at the same rate as digital information and the networks that contain and transfer it, and that, of course, is the key problem. Extrapolating current trends, hiring and training additional cyber-security personnel will never keep pace with the growing volume of threats. The only alternative is to hire better-trained and better-equipped professionals. A sea change could come from the evolution of new strategies based upon machine learning and AI in security.
New Tools
Machine learning has been deployed in cyber security for some time, but only in very limited ways, for example to improve the recognition and filtering of “spam” email and malware. However, these early systems consist largely of fixed, manually entered rules, and rest on the assumption that the wording of emails or the behaviour of scripts will fit specific, definable patterns, or “signatures”. Rule-based approaches require constant manual adjustment, and any fixed set of data points is easily evaded by more imaginative cyber criminals and interlopers.
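To make that limitation concrete, here is a minimal sketch of a signature-based filter in Python; the rules and example messages are invented for illustration, not drawn from any real product:

```python
import re

# Illustrative, manually maintained "signatures": every new evasion
# technique forces a human to write and deploy another rule.
SPAM_SIGNATURES = [
    re.compile(r"claim your prize", re.IGNORECASE),
    re.compile(r"verify your account", re.IGNORECASE),
    re.compile(r"urgent wire transfer", re.IGNORECASE),
]

def is_suspicious(email_body: str) -> bool:
    """Flag an email if it matches any fixed signature."""
    return any(sig.search(email_body) for sig in SPAM_SIGNATURES)

print(is_suspicious("Please verify your account today"))   # True
print(is_suspicious("Please conf1rm y0ur acc0unt today"))  # False: a trivial rewording evades the rules
```

A fixed rule set like this only ever catches what its authors anticipated, which is precisely why it demands constant manual adjustment.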
However, advanced machine learning systems can go further, seeking patterns in the wider context beyond just the email or script itself. Dates of attacks, originating IPs, volumes of traffic, movements of funds, and access requests to certain types of data can all be flagged as deviations from the norm. Big Data, the IoT, and integrated enterprise management systems are all making it more feasible to connect machine learning algorithms to vast reservoirs of pertinent data, and IT managers should be capturing and aggregating data today that might be useful to machine learning and AI systems tomorrow. Consequently, instead of learning slowly from failures after deployment, machine learning tools can be primed on reservoirs of data already stored, making them effective against new incidents almost out of the box.
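As a hedged illustration of this kind of context-based anomaly detection, the sketch below primes an off-the-shelf outlier detector (scikit-learn's IsolationForest) on a stored reservoir of synthetic access records; the feature names and values are assumptions made purely for the example:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Historical "reservoir" of access records: hour of access, requests
# per hour, and megabytes transferred. Normal activity clusters
# around office hours and modest volumes.
normal = np.column_stack([
    rng.normal(13, 2.5, 5000),   # hour of access
    rng.normal(40, 10, 5000),    # requests per hour
    rng.normal(5, 2, 5000),      # MB transferred
])

# Prime the detector on stored data before deployment, so it is
# useful almost "out of the box" rather than learning only from
# failures in production.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

# A 3 a.m. burst of requests pulling unusually large volumes of data.
suspect = np.array([[3.0, 400.0, 250.0]])
print(detector.predict(suspect))  # [-1] means flagged as an outlier
```

The same pattern extends to originating IPs, fund movements and other contextual signals, provided the relevant data has been captured and aggregated in advance.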
A GCHQ spokesman has stated that 95% of the serious cyber-attacks against UK targets are already picked up through analysis of bulk data. Such tools can identify suspicious trends that human staff would rarely notice, and they deliver another significant advantage: they relieve skilled personnel of laborious manual rule-writing, easing the burden of finding and paying for additional staff. Considering that industries such as healthcare, insurance, and financial trading have applied machine learning to decision-making processes for some time, it is perhaps surprising that cyber security is playing catch-up in this field. This has largely been due to limited funding; as threats continue their worrying upward trend and new legislation begins to bite, this road-block is likely to disappear.
Artificial Intelligence
There is no cast-iron distinction between machine learning and AI, but “AI” is often reserved for those systems capable of the most autonomous evolution. In practice, this means the system can not only detect attacks following old, known patterns but can also discern potential threats previously entirely unknown to the system’s designers and operators. Some AI security tools being brought to market claim success rates of more than 99.9% against a range of modified malicious code, without any human support. The longer-term objective is for AI systems to neutralise viruses as automatically as human antibodies do. However, we are not quite there yet.
One flaw in almost all countermeasures is their potential for abuse. The backdoors now known to have been placed in many network devices by governmental intelligence agencies are a case in point, with many known to, and exploited by, foreign governments and criminal gangs. Likewise, there is nothing in principle to prevent criminals, terrorists or hostile governments from using their own machine learning or AI tools to attack systems, or from targeting AI security defences themselves: feeding them misleading or camouflaging data, or triggering false alarms to generate alert fatigue.
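To make the poisoning risk concrete, here is a deliberately simplified sketch of how mislabelled training data fed to a learning pipeline can blunt its detections; the synthetic features, sample sizes and classifier are all assumptions for illustration, and real attacks are far subtler:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic traffic features: benign activity clusters low,
# malicious activity clusters high.
benign = rng.normal(0.0, 1.0, (500, 2))
malicious = rng.normal(4.0, 1.0, (500, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

clean_model = LogisticRegression().fit(X, y)

# The attacker "poisons" training by submitting malicious-looking
# samples that the pipeline ingests with benign labels.
poison = rng.normal(4.0, 1.0, (600, 2))
X_bad = np.vstack([X, poison])
y_bad = np.concatenate([y, np.zeros(600, dtype=int)])

poisoned_model = LogisticRegression().fit(X_bad, y_bad)

# The same genuinely malicious test traffic is now flagged far less
# often (exact counts vary with the random seed).
test_malicious = rng.normal(4.0, 1.0, (200, 2))
print("clean model detections:   ", clean_model.predict(test_malicious).sum())
print("poisoned model detections:", poisoned_model.predict(test_malicious).sum())
```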
We have already seen attempts at AI-driven social engineering attacks. Some even predict that AI-enabled cyber-attacks will cause an explosion in network penetrations and new species of intelligently evolving computer viruses. Few existing cyber security experts are yet trained to anticipate or mitigate attacks against intelligent systems themselves, so AI security expertise is likely to become a specific vocation and focus area for InfoSec professionals.
Integrated Solutions
In the meantime, hybrid systems are one way to go. Cyber security tools can be adaptive without having to operate wholly without human guidance and oversight (MIT’s CSAIL project is an example of this approach). There are also practical and economic arguments for continually updating the way in which AI systems function, rather than abandoning them to their current programming until they are reviewed in retrospect and superseded by a later version.
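A minimal sketch of that analyst-in-the-loop idea, assuming a recent scikit-learn and invented features, with a simple rule standing in for the human verdict:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Initial model trained on historical labelled events
# (the two features are invented, e.g. request rate and payload size).
X_hist = rng.normal(0, 1, (1000, 2))
y_hist = (X_hist.sum(axis=1) > 1.5).astype(int)

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_hist, y_hist, classes=[0, 1])

# Rather than freezing the model until a wholesale review, fold the
# analyst's verdict on each escalated event back in as it arrives.
for _ in range(100):
    event = rng.normal(0, 1, (1, 2))
    analyst_label = int(event.sum() > 1.5)   # stand-in for a human decision
    model.partial_fit(event, [analyst_label])
```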
Nor do AI decision trees have to make black-and-white choices between “allow” and “deny”. They can be permitted intermediate precautions, including sandboxing solutions, triggering other engines to supply additional intelligence, or escalation to human experts. They can then branch into expert systems, recommending the safest options to the security professionals to whom they are deferring.
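As an illustrative sketch of such a graded policy, with threshold values and action names that are assumptions for the example rather than any standard:

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    ENRICH = "enrich"        # trigger other engines for more intelligence
    SANDBOX = "sandbox"      # contain and observe in isolation
    ESCALATE = "escalate"    # defer to a human expert with recommendations
    DENY = "deny"

def respond(threat_score: float) -> Action:
    """Map a model's threat score (0.0 to 1.0) to a graded response,
    rather than a binary allow/deny decision."""
    if threat_score < 0.2:
        return Action.ALLOW
    if threat_score < 0.5:
        return Action.ENRICH
    if threat_score < 0.8:
        return Action.SANDBOX
    if threat_score < 0.95:
        return Action.ESCALATE
    return Action.DENY

print(respond(0.65))  # Action.SANDBOX
```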
In effect, AI security components should be looked upon as members of the security team, working alongside human experts and overseers. Although AI in the foreseeable future may still have a few weaknesses that human experts do not, these systems can avoid a host of human errors through their capacity to perform consistently, tirelessly and at lightning speed. Together with their human colleagues, they may form an unbeatable team.
Does Your Organisation Need Top Cyber Security Consultants?
We are a team of experts with extensive knowledge and experience of helping organisations improve business performance. Our highly qualified consultancy team can deliver cyber security capability at all levels of your organisation and is on hand to help ensure your projects deliver solutions that are appropriately aligned to your cyber security risk position and meet technical, business and ethical due diligence requirements. Schedule a call above to learn more about how we can help.