When AI… goes bad

Chris Bunch 9th March 2018
Malicious AI

Couldn’t resist the trash TV title, sorry… (hoping to sell the rights to Channel 5 in the UK).

I spotted a report last week on the topic of ‘Malicious AI’ and thought it worth sharing a summary for those too time-poor to read it in full. It’s 101 pages, albeit a pretty easy-going read, as a good chunk of that is supporting reference links.

 

So, Skynet* is a real threat?

*That’s a Terminator reference if you aren’t clued up on action films from the 1980s.

Well, kind of – yes. The report’s authors – and there are some very smart folks in here, from Oxford University and the EFF amongst others – are trying to draw attention to the fact that we’re not really having much debate as an industry around the potential negative impacts of AI technologies on society.

We talk a lot about the potential benefits, e.g. computer vision algorithms analysing MRI scans quickly and with incredible accuracy, but perhaps not about how a military drone designed to target you as an individual could hunt you down using face recognition algorithms, in the style of an overly muscular former governor of California.

If you think this doesn’t apply to you, consider this: we all use AI in one form or another every day, whether in spam filters or search engines. I’m also certain you’ve heard of “computers beating humans” at complex games like Go.

 

Come on then, scare me!

Alright, I’ll give it a shot…

As AI makes further significant advances (and the pace is accelerating all the time), the cost of mounting a malicious attack drops dramatically, because AI can perform many (or all) of the tasks that previously required human effort.

Spear phishing attacks – targeted phishing attacks, with a far higher likelihood of success – are already popular today. How much more effective could they be with AI run over large data sets to refine targeting, learning all the time about what people respond to? That targeting might be used to compromise individual or company security, but it could equally be used to deliver propaganda automatically, shaping and controlling how people view the world. You can bet certain nation states whose leaders favour topless horse riding are investigating this.

Or what about the ability to generate synthetic images, text, video and audio? These could be used to impersonate others online, or to sway public opinion by distributing AI-generated content through social media channels.

We’ve all seen and heard of “fake news” recently – how about if those reports were supported by realistic fabricated video and audio? Fancy seeing videos of state leaders appearing to make inflammatory comments…which were never actually uttered? If that doesn’t strike a chord, how about seeing yourself edited into an adult video for blackmail or just good old-fashioned defamation? The video output from AI is currently a little crude, but I believe that will change within 5 years. Audio and image work is already pretty advanced – see the pictures on page 15 of the report showing how facial image generation has improved in just the past 3 years.
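(For the technically curious: the technique behind those rapidly improving synthetic faces is the generative adversarial network, or GAN. Below is a minimal toy sketch in PyTorch – my own illustration, not anything from the report – in which a tiny “generator” network learns to forge samples from a simple number distribution by trying to fool a “discriminator”. Real face-generation systems are vastly bigger, but the adversarial training loop is the same core idea.)

```python
# Toy GAN: a generator learns to forge samples from N(4, 1.25).
# Purely illustrative -- real image GANs use deep convolutional nets,
# but the adversarial training loop below is the same idea.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = 4 + 1.25 * torch.randn(64, 1)   # samples of "real" data
    fake = generator(torch.randn(64, 8))   # the generator's forgeries

    # Discriminator learns to tell real (label 1) from fake (label 0).
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator learns to make the discriminator say "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

with torch.no_grad():
    samples = generator(torch.randn(1000, 8))
print(f"forged samples: mean {samples.mean().item():.2f}, "
      f"std {samples.std().item():.2f}  (target: 4.00, 1.25)")
```

Run it and the forged samples’ statistics converge towards the real distribution – swap numbers for pixels and scale everything up, and you have synthetic faces.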

These types of attack can also be used for fraud, e.g. with perfectly synthesised voices being used to impersonate your boss or your partner. Most people are not capable of mimicking others’ voices realistically, but AI can. What if you could combine voice and video and create a perfect sting?

How about hacking itself, or at least the hunt for system-level vulnerabilities? Well, this can be automated as well – e.g. by learning from historical patterns of code vulnerabilities to speed up both the discovery of new vulnerabilities and the creation of code to exploit them.
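To make that a bit more concrete, here’s a toy sketch of the underlying idea – my own illustration, not the report’s method: train a simple classifier on code snippets labelled as historically vulnerable or safe, then use it to score unseen code. The snippets and labels are invented for the example; real systems learn from large corpora of vulnerability fixes.

```python
# Toy sketch: flag code lines that resemble historically vulnerable patterns.
# The snippets and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    'strcpy(buf, user_input);',            # unbounded copy (overflow risk)
    'gets(line);',                         # unbounded read
    'query = "SELECT ... id=" + user_id',  # string-built SQL (injection)
    'system(user_cmd);',                   # shell command injection
    'strncpy(buf, src, sizeof(buf));',     # bounded copy
    'cursor.execute(sql, (user_id,))',     # parameterised query
    'subprocess.run(["ls", path])',        # no shell expansion
    'snprintf(buf, sizeof(buf), "%s", s);',
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = historically vulnerable pattern

# Character n-grams pick up tell-tale tokens like 'strcpy(' or '" +'.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(snippets, labels)

for line in ['strcat(buf, argv[1]);', 'cursor.execute(sql, params)']:
    risk = model.predict_proba([line])[0][1]
    print(f"{line!r}: vulnerability score {risk:.2f}")
```

With eight training examples this is a parlour trick, but it illustrates the pattern-learning approach the report is pointing at.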

Back in the physical world, AI can be, and will be, used to defend us from attacks. Great. The downside is that these same defence systems can potentially be misused and abused via ‘data poisoning’: a deliberate attempt to make AI misbehave by introducing training data that causes the system to make mistakes. A worst-case scenario might be an attack on a system used to direct autonomous weapons – which could lead to a friendly fire incident or even civilian targeting…
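Here’s a minimal sketch of what data poisoning looks like in practice – again my own illustration, using scikit-learn on synthetic data, not an example from the report. An attacker who can flip even a fraction of the training labels quietly degrades the resulting model:

```python
# Toy data-poisoning demo: flipping a slice of training labels degrades
# a simple classifier. Data is synthetic and purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"clean accuracy:    {clean.score(X_te, y_te):.2f}")

# The attacker poisons the training set by flipping 30% of its labels.
rng = np.random.default_rng(0)
y_poisoned = y_tr.copy()
idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
print(f"poisoned accuracy: {poisoned.score(X_te, y_te):.2f}")
```

Real attacks are subtler – targeted flips or carefully crafted inputs rather than random noise – but the principle is the same: corrupt what the system learns from, and you corrupt what it does.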

If that doesn’t terrify you, what about the compromising and repurposing of commercial systems for terrorist purposes? Using drones or autonomous vehicles to deliver explosives and cause crashes… How safe is that cleaning robot in your office?

A lovely vision of the world, huh?

 

So, how do we prevent this dystopian future?

There’s a host of ideas proposed at length in the report, but I’d boil them down to the following:

  • Governments and policymakers working with tech researchers to ensure that appropriate safeguards and laws are put in place, including aggressive use of red teaming. Requesting close collaboration with technical experts isn’t a new concept by any stretch, but implemented properly it helps ensure that policies are informed by the technical realities of the technologies at hand.
  • Researchers being proactive in reaching out to policymakers and practitioners, and recommending safer design patterns.
  • “Responsible disclosure” of vulnerabilities, e.g. government agencies not holding on to exploits for their own benefit (we’re looking at you, NSA).
  • Security tools: what tools should be developed and distributed to help define standard tests for common security problems in AI systems? The community needs to work on this in a responsible way.
  • Above all, ethics and social responsibility from the technical community in general are highlighted as the overall theme of what we’ll need if we are to prevent many of these challenges from coming to fruition.

 

Good lord, I’m scared of malicious AI

 

Well, many people are already, of course – thinking that AI will “take their jobs”. Which it might. The better news is that the positive benefits new and complex technologies bring to society are always much harder to foresee. It was the same with the Spinning Jenny as it is today with AI: vast new swathes of jobs will be created, and tedious tasks will be completed by machines rather than humans.

Whilst today AI systems perform effectively on only a relatively small portion of the tasks that humans are capable of, they will eventually exceed human performance at pretty much everything. The report’s authors note this full transition is likely to take more than 50 years – a huge timeframe in technology – so no need to panic just yet.

AI will have a huge impact on both technology and society. This report highlights that we need to govern and control its development, and its ongoing effects on us.

AI is going to be awesome, let’s not screw it up.

 

Chris Bunch, Head of Cloudreach Europe
@cloudychrisb