Unless you’ve had your head in the sand over the past few years, you’ll have heard about the unprecedented — and largely unexpected — advancement in Artificial Intelligence (AI). Perhaps the most public example of this was in 2016, when Google’s DeepMind used an AI called AlphaGo to beat Lee Sedol, one of the world’s top Go players. But that’s far from the only instance of AI breaking new ground.
Today, it plays a role in voice recognition software — Siri, Alexa, Cortana and Google Assistant. It’s helping retailers predict what we want to buy. It’s even organising our email accounts by sorting the messages we want to see from those we don’t.
Meanwhile, in the world of business, machine learning – a branch of AI focused on algorithms that can learn from, and make predictions based on, data – is pushing the boundaries of what computers can do. As a result, we’re seeing solutions such as Robotic Process Automation (RPA) and big data analytics driving efficiencies and boosting profits.
Overall, AI is doing a fantastic job at transforming the world for the better.
The dangers inherent in AI
But what about the other side of the coin? What negative impact could AI have? It’s clear that AI – like any technology – could be used for corrupt means. Adversarial AI (where inputs are carefully crafted to trick AI systems into misclassifying data) has already been demonstrated. It could, for example, cause an AI vision system to perceive a red traffic light as a green one – which could have disastrous ramifications for an autonomous vehicle.
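To make the idea concrete, here is a minimal sketch of how such an attack works, in the spirit of the well-known Fast Gradient Sign Method (FGSM). The toy linear classifier, its weights, and the perturbation budget below are all hypothetical values chosen purely for illustration – real attacks target deep networks, but the principle is the same: nudge each input feature a small amount in the direction that most changes the model’s decision.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical toy classifier: outputs class 1 ("red light") when
# sigmoid(w . x + b) > 0.5, otherwise class 0 ("green light").
w = [2.0, -1.0, 0.5]
b = -0.2

def predict(x):
    logit = sum(wi * xi for wi, xi in zip(w, x)) + b
    return int(sigmoid(logit) > 0.5)

def sign(v):
    return 1.0 if v > 0 else -1.0

# An input the model correctly classifies as class 1
x = [1.0, 0.5, 0.3]

# FGSM-style perturbation: for a linear model the gradient of the logit
# with respect to x is simply w, so stepping each feature by -eps * sign(w)
# pushes the score for class 1 down as fast as possible per unit change.
eps = 0.5  # perturbation budget per feature
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(predict(x))      # 1 (original input classified as "red light")
print(predict(x_adv))  # 0 (small, structured change flips the label)
```

The key point is that the change to each input value is small and bounded, yet because it is chosen with knowledge of the model, it reliably flips the classification – exactly the property that makes adversarial inputs dangerous in safety-critical vision systems.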
The Adversarial AI scenario is an example of AI being hacked. But let’s take it further: what if AI itself is doing the hacking? That’s not a worst-case scenario – it’s a likelihood.
Cyber criminals are all but certain to get their hands on AI tools, since many are already widely available as open-source software – OpenAI’s releases and ONNX are two that immediately come to mind.
This highlights the need to ensure that AI systems – particularly those used in mission-critical settings – are resilient to such attacks.