Can AI solve cybersecurity issues?

I was very struck by a recent article by Martin Giles, published on Medium. In it he looks at the risks, as well as the apparent benefits, of using AI and machine learning in the cybersecurity industry. It’s an interesting conundrum: on the one hand it seems perfectly logical that AI should play a role in protecting against hacker attacks, yet on the other there are real pitfalls. So what do we need to be mindful of?

As Martin Giles recounts, he met many companies at a cybersecurity conference that were “boasting about how they are using machine learning and artificial intelligence to help make the world a safer place.” However, as he also points out, others are less convinced. Indeed, he spoke to the head of security firm Forcepoint, who said: “What’s happening is a little concerning, and in some cases even dangerous.” Of course, what we want to know is why he thinks that.

The risks with AI and cybersecurity

There is a huge demand for algorithms that can combat cyber attacks. But there is also a shortage of skilled cybersecurity workers at all levels. Using AI and machine learning helps to plug this skills gap, and many firms believe it is a better approach than developing new software.

Giles also reveals that a significant number of firms are launching new AI products for this sector because there is an audience that has “bought into the AI hype.” He goes on to say, “And there’s a danger that they will overlook ways in which the machine-learning algorithms could create a false sense of security.”

And then there are the actions of hackers to consider. What can they do to security systems that use AI? According to Giles, an AI algorithm might miss some attacks because “companies use training information that hasn’t been thoroughly scrubbed of anomalous data points.” There is also the danger that “hackers who get access to a security firm’s systems could corrupt data by switching labels so that some malware examples are tagged as clean code.”
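To make that label-switching risk a little more concrete, here is a minimal sketch (my illustration, not from Giles’ article) using scikit-learn and a synthetic dataset as a stand-in for real malware features. It simply shows the mechanism: an attacker who can re-tag a slice of the malware samples as “clean” before training changes what the model learns.

```python
# Minimal, illustrative sketch of label-flipping poisoning.
# The dataset and model are stand-ins, not anything from the article.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic "clean code (0) vs malware (1)" feature vectors.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    """Train on the given training labels and score on held-out test data."""
    clf = RandomForestClassifier(random_state=0).fit(X_train, labels)
    return accuracy_score(y_test, clf.predict(X_test))

# Baseline: clean training labels.
print("clean labels:   ", train_and_score(y_train))

# Poisoned: re-tag 20% of the malware samples (label 1) as clean (label 0),
# mimicking an attacker who has switched labels in the training data.
poisoned = y_train.copy()
malware_idx = np.where(poisoned == 1)[0]
flipped = rng.choice(malware_idx, size=int(0.2 * len(malware_idx)), replace=False)
poisoned[flipped] = 0
print("20% labels flipped:", train_and_score(poisoned))
```

The exact numbers will vary, but the point is the direction of travel: the poisoned model is trained to treat some malware as clean, which is precisely the false sense of security Giles warns about.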

There is also the issue of relying on a single master algorithm, which can quite easily be compromised without giving any sign that something untoward has happened. The only way to combat this is to use a series of diverse algorithms. And there are other issues, as explained in this MIT Technology Review article.
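As a rough illustration of the “diverse algorithms” idea, here is a small sketch (again my own, assuming scikit-learn and synthetic data): several different model types vote on each sample, and the cases where they disagree can be flagged for human review rather than silently trusted to one model.

```python
# Illustrative sketch: hedge against a single compromised model by having
# several diverse classifiers vote and surfacing their disagreements.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Three deliberately different model families trained on the same data.
models = [
    LogisticRegression(max_iter=1000).fit(X_train, y_train),
    RandomForestClassifier(random_state=1).fit(X_train, y_train),
    GaussianNB().fit(X_train, y_train),
]

votes = np.array([m.predict(X_test) for m in models])   # shape: (3, n_samples)
majority = (votes.sum(axis=0) >= 2).astype(int)          # simple majority vote
disagreement = votes.min(axis=0) != votes.max(axis=0)    # not all models agree

print("samples where the models disagree:", int(disagreement.sum()))
print("majority-vote predictions (first 10):", majority[:10])
```

This is not a complete defence, of course, but it captures the intuition: if one model has been subverted, the others act as a cross-check and the disagreement itself becomes a signal worth investigating.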

None of this means that we shouldn’t be using AI for security purposes at all, just that we need to monitor and minimise more carefully the risks that come with using it. The challenge is to find, or train, people with the skills needed to use AI in this increasingly challenging sector of the cyber sphere.
