Machine learning, robotic process automation, neural networks, and cognitive automation are just some of the buzzwords that fall under the umbrella of artificial intelligence (AI).
While most associate AI with nefarious science fiction computers, such as HAL from 2001: A Space Odyssey, or Arnold Schwarzenegger’s Terminator, it might surprise some to learn that AI is already widely used within a variety of industries.
Most large corporations already employ AI in one way or another. And even though actual AI is far less malevolent than the AI depicted in Hollywood movies, artificial intelligence still poses a genuine threat – perhaps not to humankind as a whole, but certainly to individuals.
It should be made abundantly clear that this is not the fault of the technology itself – rather, the fault lies with those who use AI for fraudulent or malicious purposes: human hackers and thieves.
However, the threat of AI being used in malicious ways is a very real one, and experts ranging from Elon Musk to Stephen Hawking have urged caution when it comes to developing artificial intelligence.
Now, a group of 26 experts from a wide array of different organizations and institutions has banded together to release an extensive report on AI. Titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation”, the report came about following a workshop at Oxford University in 2017.
[Image: Atlas, Boston Dynamics' latest humanoid robot]
The report delves into the issue of artificial intelligence as a whole, and identifies three key areas in which AI could cause considerable harm in the coming years: digital security, physical security, and political security.
The main reasoning behind the release of this compendium is to highlight the troubling outlook of increased use of AI. Since artificial intelligence systems can surpass human capabilities, some of the authors argue that it is entirely possible that the next few years will see physical target identification, hacking, surveillance, and persuasion carried out at a scale and sophistication far beyond what humans are capable of.
The report even goes so far as to suggest that artificial intelligence can arguably be viewed as one of the most potentially destructive forces on the planet. If terrorists, rogue states, or even ordinary criminals get their hands on this technology, it would allow them to launch massive attacks on civilians.
One of the main dangers of AI is also how easily such technology can scale. Scaling drives down the cost per attack, giving criminals, terrorists, and similar actors a rational and fiscally sound reason to adopt the technology.
There is a veritable plethora of cyberattacks that can scam ordinary people out of their money, blackmail them into releasing funds or information, or merely gather sensitive data. So-called spear-phishing, which gave hackers access to the email account of Hillary Clinton's campaign chairman John Podesta, is becoming more common each year.
Moreover, increasingly common methods such as speech synthesis, advanced automated hacking, data poisoning, and deepfakes also utilize artificial intelligence to trick or coerce people into giving the hackers what they desire.
Furthermore, this problem will only be exacerbated by the fact that artificial intelligence is becoming ever more common in our society. Our cars, drones, mobile phones, smartwatches, tablets, computers, and even homes are all gaining new AI-powered capabilities.
As our lives become more connected, they also become more vulnerable to hacking. Even our social media feeds are dictated by algorithms and artificial intelligence – something the 2016 US election cast into the spotlight, following the discussion of fake news and Cambridge Analytica's abuse of personal information.
Returning to the report produced by the 26 experts, the first of the three main areas that can be negatively affected by AI is digital security.
Digital security risks
Digital security relates to how artificial intelligence can hijack online users, their information, and their habits, and ultimately use this to perform fraudulent activity. Since artificial intelligence is nearly infinitely scalable, it would be possible to automate acts of criminal cyber-offense, or even use machine learning to prioritize targets for cyberattacks.
If programmed to, AI can imitate human behaviour on the internet, making it harder for developers to detect that a seemingly human user is, in fact, merely a system.
Furthermore, this allows for potential armies of autonomous, AI-controlled users to flood websites, overwhelming them and thereby preventing real human users from accessing them.
Artificial intelligence systems can also scrape the Internet for personal information and use it to craft specially designed links, emails, or even websites that mimic a target's legitimate contacts. All in all, this muddies the waters as to whom users can trust on the Internet, and more sophisticated AI capable of long, convincing dialogues can trick even alert users.
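To make this concrete, the same machine-learning techniques can be sketched from the defender's side. The following minimal Python example – a toy illustration using the scikit-learn library, with an invented four-message dataset – shows how a simple text classifier can learn to flag phishing-style emails; it is a sketch of the general technique, not a production filter:

```python
# Toy phishing-email classifier: TF-IDF features + logistic regression.
# The four training messages below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Urgent: verify your account password at this link immediately",  # phishing
    "Your mailbox is full, click here to reset your credentials",     # phishing
    "Minutes from Tuesday's project meeting are attached",            # legitimate
    "Lunch on Thursday? The usual place works for me",                # legitimate
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# Convert each email into TF-IDF word weights, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# Score a new, unseen message.
suspect = ["Please confirm your password by clicking this link"]
print(model.predict_proba(suspect))  # [[P(legitimate), P(phishing)]]
```

The same pipeline, inverted – generating rather than detecting such messages, and personalizing them with scraped data – is precisely the kind of automated spear-phishing the report warns about.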
It is therefore evident that digital security is an area of crucial importance. That being said, the Internet is AI's home turf, and therefore perhaps the arena where users are most aware of suspicious links and most prepared to encounter malware or other malicious programs.
While AI is definitely smarter than traditional computer viruses, internet users have long learned to be on their guard when browsing, or merely using, the Internet.
Physical security risks
An area not as readily associated with artificial intelligence is that of physical security. Physical security relates to the well-being of humans in the real world, which can potentially be threatened by hacked AI, or by artificial intelligence controlled by a party with harmful intentions.
The Malicious Use of Artificial Intelligence report outlines a few key areas in which artificial intelligence can potentially be used in a harmful way.
The first of these is the possible terrorist repurposing of commercial AI systems. This would allow terrorists to use commercial drones to spy on or even attack citizens. Moreover, taking control of autonomous vehicles would enable terrorists to steer them into large crowds of people, or even to use them as delivery systems for explosives.
The risk of having drones and cars that can “turn on their creators” due to the actions of a single hacker is a truly chilling prospect.
AI also presents the risk of attacks on a vastly increased scale. Again, this relates to the extreme scalability of AI systems: if one AI system is successfully hacked, all similar systems may be susceptible to the same exploit.
Furthermore, the rise of AI gives high-skill attack capabilities to low-skill individuals. This drastically increases the potential mayhem that an ordinary person could wreak.
The advent of AI also allows attackers to be further removed from their attacks, both spatially and temporally. An attacker can orchestrate an attack from the other side of the world, or program it before the attack takes place and have plenty of time to disappear, as the autonomous AI system will carry out the attack as instructed.
AI and large distributed networks of drones or autonomous robots also allow for so-called “swarming attacks”, which are rapid, coordinated attacks that can be conducted on a massive scale.
Political security risks
Political security is also an important aspect to safeguard from AI exploits. States can potentially employ AI to suppress, monitor, or even crack down on dissidents. Smart, large-scale automated surveillance platforms allow nations to use image and audio processing to identify dissidents – or even those merely deemed likely to become dissidents.
AI can also be used to produce convincing propaganda, using deepfakes to attribute statements to people who never made them.
Individuals may also be targeted by automated, hyper-personalized, AI-driven disinformation campaigns designed to affect voting behaviour. AI-based analysis of social networks can also reveal which influencers sway which parts of society, so that those influencers can then be approached – in other words, AI can be used as a sort of twisted segmentation tool, as the sketch below illustrates.
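To give a benign, concrete sense of what such network analysis involves, this short Python sketch uses the networkx library to rank a handful of invented accounts by PageRank – a standard measure of influence in a graph of who follows whom (the accounts and edges here are made up for illustration):

```python
# Ranking accounts in a toy social graph by influence using PageRank.
# Account names and "follows" relationships are invented for this example.
import networkx as nx

# A directed edge A -> B means "account A follows account B".
follows = [
    ("alice", "carol"), ("bob", "carol"), ("dave", "carol"),
    ("carol", "erin"), ("alice", "erin"), ("frank", "alice"),
]
graph = nx.DiGraph(follows)

# PageRank scores an account higher when many (and themselves influential)
# accounts follow it - a crude proxy for who sways whom.
scores = nx.pagerank(graph)
for account, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{account}: {score:.3f}")
```

Run against a real platform's follower graph, the same handful of lines would surface the most influential nodes in any community – which is exactly why the report flags this capability as a political-security risk.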
AI systems can also manipulate the availability of information in order to affect people's behaviour. All of these ways in which AI can infringe on people's political security are truly frightening.
However, there are slivers of hope that AI will be used responsibly. The paper undoubtedly shines more light on the potential dangers of AI, and it has pushed governments around the world to press on with strategies that allow their respective countries to be at the forefront of this innovative technology.
The Trump Administration recently announced that it is setting up a new AI task force dedicated to US artificial intelligence efforts, during a White House event hosting executives from major US technology companies, including Google, Amazon, and Facebook.
The task force will monitor and combat fraudulent use of artificial intelligence, in the hope of preventing any infringement on digital, physical, or political security.
The UK, one of the European leaders in AI, announced a £1 billion deal to drive AI research and development in the country.
Matt Hancock, Secretary of State for Digital, Culture, Media and Sport, said: "The UK must be at the forefront of emerging technologies, pushing boundaries and harnessing innovation to change people's lives for the better."
In a press release, the European Union (EU) presented a series of measures to put AI at the service of Europeans and boost Europe's competitiveness in the technology. The EU intends to increase investment in AI research and innovation by at least €20 billion ($23.5 billion) between now and the end of 2020.
Whilst this is no guarantee of increased security, it certainly bodes well for the future of AI.