Human vs AI In Pen Testing

The InfoSec Consulting Series #35

By Jay Pope

 

In the 19th century, English textile workers destroyed machinery in protest at automation. The Luddites worried that their skills and experience would become redundant as machines progressively took over their roles. In this century, with the increased use of automation and robotics in industry, we can already see the demise of traditional manual labour. Jobs are becoming smarter, making better use of human skills and intuition. The same trend can be seen in a relatively new career, penetration testing, where AI pen testing systems are taking over some of the tasks previously carried out by human testers. Is it Luddite to see this as a threat to our jobs? Can AI make Pen testers redundant in the way manual labourers were, or can it make our jobs more satisfying, giving testers the time to make full use of their experience?

In this article we look at the traditional sequence of stages for Pen testing and how this fits with human skills. We then consider AI and machine learning (ML) and their comparative benefits and drawbacks. Finally, we take a brief look at some of the AI Pen Testing Systems available.

 

Lifecycle & Human Skills

The traditional Pen testing lifecycle consists of 5 stages:

  • Planning. Also known as reconnaissance, where the tester gathers intelligence and agrees testing goals;
  • Scanning. The tester uses tools to probe the system and assess how it responds to intrusion attempts (a minimal sketch of this stage follows the list);
  • Gaining access. The tester uses attacks, typically from a web application, to identify vulnerabilities in the system;
  • Maintaining access. The tester mounts a sustained, full-scale attack that mimics an advanced persistent threat (APT), bypassing anti-virus and network defences and utilising social engineering and intrusion techniques;
  • Web Application Firewall (WAF) configuration. Test results are analysed and used to reconfigure the system’s defence mechanisms. The lifecycle then starts again.
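To make the scanning stage concrete, here is a minimal sketch of a TCP connect scan. It is illustrative only: the target address and port list are hypothetical placeholders, and in practice a tester would reach for a mature tool such as Nmap rather than a hand-rolled script.

```python
# Minimal sketch of the scanning stage: a TCP connect scan against a host
# we are authorised to test. The target and port list are illustrative
# placeholders, not real values.
import socket

target = "203.0.113.10"                     # hypothetical in-scope host
ports_of_interest = [21, 22, 25, 80, 443, 3306, 8080]

open_ports = []
for port in ports_of_interest:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)                   # keep each probe quick
        if s.connect_ex((target, port)) == 0:   # 0 means the TCP handshake succeeded
            open_ports.append(port)

print(f"Open ports on {target}: {open_ports}")
```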

Soft skills are valuable while gaining knowledge about the system. Discussions with analysts and developers can pinpoint areas where the system may be vulnerable, or where there is sensitive or valuable information. Agreeing goals with the test manager gives the tester insight into the requirements and the opportunity to apply skills and experience. What sets a human apart is their ability to think outside the box. The application and its operating environment may be novel, perhaps a design not previously seen, but the tester will be able to assess where to focus their attention during each iteration of the lifecycle.

There is also plenty of scope for human error. Data and requirements can be misinterpreted and there may be errors in recording results. Above all, there is a limited amount of time available; it’s rarely possible to carry out every iteration with boundless energy and rigour.

AI & Machine Learning Strengths & Weaknesses

AI is a computer science discipline concerned with building machines that can mimic human thought processes. ML is a subset of AI whose purpose is to enable computer systems to carry out tasks without being explicitly programmed. ML is therefore good at:

  • Learning a pattern and a successful response;
  • Uncovering patterns that are not obvious, perhaps because of data volume or complexity (see the short sketch after this list);
  • Carrying out tasks very quickly.
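To make this concrete, here is a minimal, hedged sketch of that kind of pattern-learning: an unsupervised anomaly detector is trained on examples of normal network connections and then flags an unusual one. The feature set and numbers are invented purely for illustration.

```python
# Sketch only: learn what "normal" network connections look like, then flag
# an event that does not fit the learned pattern. Features and values are
# invented for illustration.
from sklearn.ensemble import IsolationForest
import numpy as np

# Each row is one connection: [bytes sent, bytes received, duration in seconds]
normal_traffic = np.random.default_rng(0).normal(
    loc=[5_000, 20_000, 2.0], scale=[1_000, 4_000, 0.5], size=(500, 3)
)

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# A huge, long-lived transfer with almost nothing coming back does not match
# the learned pattern, so the model labels it -1 ("anomalous").
suspicious = np.array([[900_000, 50, 600.0]])
print(model.predict(suspicious))
```

In a real engagement the model would have to be trained on the organisation's own traffic, which is precisely the data-hunger limitation discussed below.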

 

ML also has limitations in these areas:

  • Each ML model is a one-trick pony. It can only function based upon what it has already encountered. It can be trained to find a pattern in, say, unusual network events, but that will be the extent of its repertoire;
  • It requires data to learn: each model needs extensive training with data relevant to the business, the network and the application;
  • It is not accountable. Whereas a tester, or the test manager, will put their signature to the final test report, ML does not have the ability to “sign off” a system as being secure. If a vulnerability is discovered after the system has gone live, there is no-one to take responsibility.

AI can take Pen testing a stage further than ML. AI tools, once they have been trained on datasets, are able to:

  • Predict future threat profiles;
  • Recommend test payloads to mimic those threats;
  • Predict exploitation scenarios and model their effects.

 

AI Pen Testing Systems

The term “AI” is often used carelessly when describing software tools. Here we identify a small number of the testing systems available that include some level of genuine AI.

Deep Exploit learns how to exploit a system by carrying out attacks using learned methods and by brute force. It first carries out a wide-ranging attack, targeting all open ports with traditional attack methods. It can then focus its approach, targeting a specific port number and application with its arsenal of exploits and payloads. It learns by assessing feedback from successful attacks.
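Deep Exploit is reported to use reinforcement learning internally; the toy loop below is not its implementation, just a sketch of the same feedback idea: keep a running success score for each (port, exploit) pair and gradually prefer the combinations that have worked before. Every name and success rate here is invented.

```python
# Toy feedback loop, invented for illustration: an epsilon-greedy strategy
# that learns which (port, exploit) pairs succeed most often.
import random

exploits = ["sqli_login_bypass", "default_creds", "path_traversal"]
ports = [80, 443, 8080]
scores = {(p, e): 0.0 for p in ports for e in exploits}
attempts = {(p, e): 0 for p in ports for e in exploits}

def try_exploit(port, exploit):
    """Placeholder for a real attack attempt; returns True on success."""
    return random.random() < 0.1        # stand-in success rate for the sketch

for step in range(200):
    if random.random() < 0.2:                          # explore a random pair
        choice = random.choice(list(scores))
    else:                                              # otherwise use the best so far
        choice = max(scores, key=lambda k: scores[k])
    success = try_exploit(*choice)
    attempts[choice] += 1
    # Running average of observed success is the learning signal.
    scores[choice] += (float(success) - scores[choice]) / attempts[choice]

best = max(scores, key=lambda k: scores[k])
print("Most promising target so far:", best, round(scores[best], 2))
```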

Pentoma assesses servers and applications to find security risks such as:

  • SQL injection;
  • File inclusion;
  • Unvalidated redirects and forwards;
  • Cross-site scripting (XSS).

It uses ML and AI (to an extent) to evolve and grow its assessments and techniques.
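Pentoma's internals are not public, but the simplest form of checks like these can be sketched as follows: send known test payloads to a parameter on an application you are authorised to test and look for tell-tale signs in the response. The URL, parameter name and response markers below are hypothetical.

```python
# Hedged sketch, not Pentoma's logic: probe a single parameter with known
# test payloads and look for naive indicators in the response.
import requests

target = "https://staging.example.com/search"     # assumed in-scope test host
payloads = {
    "SQL injection": "' OR '1'='1",
    "XSS": "<script>alert(1)</script>",
}

for name, payload in payloads.items():
    response = requests.get(target, params={"q": payload}, timeout=5)
    reflected = payload in response.text            # payload echoed back unencoded
    sql_error = "SQL syntax" in response.text or "sqlite3" in response.text
    if reflected or sql_error:
        print(f"Possible {name} issue at {target}")
```

A real scanner layers many more payloads, encodings and detection heuristics on top of this simple pattern.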

Wallarm uses nodes deployed in the cloud network to provide dynamic protection against the most common application vulnerabilities (the OWASP Top 10), including injection, broken authentication, sensitive data exposure and XML external entities. It can discover network assets, scan for vulnerabilities and monitor for abnormal patterns. It learns application vulnerabilities using automated threat verification: having blocked a malicious request, it mimics it to test the behaviour of the application.
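The sketch below is not Wallarm's API; it is just an illustration, with hypothetical hostnames, of that “block, then verify” idea: a request the firewall has blocked is replayed against a non-production copy of the application to check whether the payload would really have been exploitable.

```python
# Hypothetical illustration of automated threat verification: replay a
# blocked request against a staging copy and see how the application behaves.
import requests

blocked_request = {"path": "/login", "params": {"user": "admin' --"}}
staging_base = "https://staging.example.com"      # assumed non-production copy

replay = requests.get(
    staging_base + blocked_request["path"],
    params=blocked_request["params"],
    timeout=5,
)

# If the staging application misbehaves (server error, or an auth-bypass
# marker in the page), the blocked payload was a real threat rather than a
# false positive; otherwise the blocking rule may need tuning.
if replay.status_code >= 500 or "Welcome, admin" in replay.text:
    print("Payload appears exploitable: record a confirmed vulnerability")
else:
    print("Likely a false positive: consider tuning the rule")
```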

Hackers Using AI

The use of AI is not confined to application development and operations; hackers are using AI to assist their activities. Indeed, AI can itself be hacked: the algorithm at the heart of the AI process can be manipulated during its learning phase and after deployment. Security specialist Darktrace reports that AI-driven malware is being used to mimic the behaviour of a human attacker, increasing the stealth and scalability of attacks. By extending malware such as TrickBot, hackers can give it contextual awareness: an AI-based attack can autonomously assess the target and determine how to avoid detection, which makes it much harder to track the criminal behind it.
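To illustrate how a learning phase can be manipulated, here is a toy example (an invented scenario, not anything reported by Darktrace): if an attacker can slip their own traffic into the data used to train an anomaly detector, the resulting model rates that traffic as far less suspicious than a model trained on clean data would.

```python
# Toy data-poisoning illustration, invented for this article: compare a
# detector trained on clean traffic with one whose training set the attacker
# has salted with their own connections.
from sklearn.ensemble import IsolationForest
import numpy as np

rng = np.random.default_rng(1)
# Clean traffic: [bytes sent, bytes received, duration in seconds]
normal = rng.normal(loc=[5_000, 20_000, 2.0], scale=[1_000, 4_000, 0.5], size=(500, 3))
# The attacker's large, slow, exfiltration-style connections
attack = rng.normal(loc=[900_000, 50, 600.0], scale=[50_000, 10, 30.0], size=(100, 3))

clean_model = IsolationForest(random_state=0).fit(normal)
poisoned_model = IsolationForest(random_state=0).fit(np.vstack([normal, attack]))

probe = np.array([[900_000, 50, 600.0]])          # a fresh attack connection
# Higher decision_function values mean "looks more normal"; the poisoned
# model scores the attacker's traffic as markedly less anomalous.
print("clean model score:   ", clean_model.decision_function(probe))
print("poisoned model score:", poisoned_model.decision_function(probe))
```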

 

Human Pen Testers vs AI

It’s unlikely that we will adopt the Luddites’ methodology of attacking our machines. While ML can learn from data, it’s not a substitute for a human Pen tester. AI has the potential to disrupt the industry further, but it still has some way to go. Realistically, by taking on the routine tasks, ML should make our careers as Pen testers more enjoyable, giving us time to communicate and to think outside the box.

 

Does Your Organisation Need Top Cyber Security Consultants?

We are a team of experts with extensive knowledge and experience of helping organisations improve business performance. Our highly qualified consultancy team can deliver cyber security capability at all levels of your organisation and are on hand to help ensure your projects deliver solutions that are appropriately aligned to your cyber security risk position, and meet technical, business and ethics due diligence requirements. Schedule a call above to learn more about how we can help.