
Is artificial intelligence a threat?

Some of the biggest names in technology – internet pioneer Sir Tim Berners-Lee and SpaceX founder Elon Musk – are concerned artificial intelligence (AI) could develop past the point of human control. But how real is this threat?


What is AI?

Often built on machine learning, AI is when a computer system exhibits “cognitive” functions, such as learning from experience and problem-solving, that we normally associate with the human mind. It’s an industry attracting billions of dollars of investment.

An AI system learns from the data it’s given, and it keeps learning from the data it collects. This isn’t future tech either – we’re already using it. Your Netflix recommendations don’t just consider what you’ve recently watched. The algorithm also weighs up how other people who saw the same show rated it, when they stopped watching it, whether they restarted it and what they viewed next. As a result, you get more tailored recommendations (for example, not every Disney movie just because you recently watched Cinderella).
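For the curious, here is a toy Python sketch of the co-occurrence idea behind recommendations like these. The viewers, shows and scoring are invented for illustration – real services use far richer data and models.

    from collections import defaultdict

    # Toy viewing histories: which shows each (hypothetical) viewer finished.
    histories = {
        "ana":   {"Cinderella", "Moana", "Chef's Table"},
        "ben":   {"Cinderella", "Planet Earth", "Chef's Table"},
        "chloe": {"Moana", "Planet Earth"},
    }

    def recommend(user, histories):
        """Score unseen shows by how often they appear alongside the
        user's shows in other viewers' histories."""
        seen = histories[user]
        scores = defaultdict(int)
        for other, other_seen in histories.items():
            if other == user:
                continue
            overlap = len(seen & other_seen)   # shared-taste signal
            for show in other_seen - seen:
                scores[show] += overlap        # weight by similarity
        return sorted(scores, key=scores.get, reverse=True)

    print(recommend("ana", histories))  # ['Planet Earth']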

Ever asked Siri or Google Now for restaurant recommendations? Both use AI algorithms that combine your current location, the food you’ve previously liked and other people’s reviews to help you find your new favourite restaurant – not just the nearest Italian because you had carbonara last week.
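Again purely as an illustration, a recommender of this kind might blend several signals into a single score. The restaurants, weights and formula below are made up – a rough sketch of the idea, not how Siri or Google actually do it.

    # Hypothetical restaurant records: (name, cuisine, distance_km, avg_review)
    restaurants = [
        ("Luigi's",      "italian",  0.4, 3.2),
        ("Sakura House", "japanese", 1.1, 4.6),
        ("Taco Verde",   "mexican",  0.8, 4.4),
    ]

    # Cuisines the user has eaten recently, so the recommender can favour variety.
    recently_eaten = {"italian"}

    def score(name, cuisine, distance_km, avg_review):
        closeness = 1 / (1 + distance_km)            # nearer is better
        novelty = 0.5 if cuisine in recently_eaten else 1.0
        return avg_review * closeness * novelty      # blend the three signals

    ranked = sorted(restaurants, key=lambda r: score(*r), reverse=True)
    print([name for name, *_ in ranked])  # ['Taco Verde', 'Sakura House', "Luigi's"]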

Machines can be engineered to work through complex reasoning faster than any human, as demonstrated by Google’s AlphaZero. It was given only the basic rules of chess and, after playing games against itself for four hours, had taught itself strategies that took humans hundreds of years to develop.
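Self-play learning can be illustrated at a much smaller scale. The Python sketch below teaches itself the simple game of Nim by playing against itself – nothing like AlphaZero’s neural networks and search, just the same basic loop of playing, scoring the result and improving. Every name and number here is illustrative.

    import random
    from collections import defaultdict

    # The game: a single pile of stones; players alternate taking 1-3 stones;
    # whoever takes the last stone wins.
    value = defaultdict(float)   # estimated win chance for the player to move

    def best_move(pile, explore=0.1):
        moves = [m for m in (1, 2, 3) if m <= pile]
        if random.random() < explore:
            return random.choice(moves)     # occasionally try something new
        # A move is good if it leaves the opponent in a low-value position.
        return min(moves, key=lambda m: value[pile - m])

    def self_play_game():
        pile, visited = 10, []
        while pile > 0:
            visited.append(pile)
            pile -= best_move(pile)
        # The player who emptied the pile won; walking back through the game,
        # positions alternate between the winner's and the loser's turns.
        for i, p in enumerate(reversed(visited)):
            outcome = 1.0 if i % 2 == 0 else 0.0
            value[p] += 0.1 * (outcome - value[p])   # nudge estimate toward result

    for _ in range(5000):
        self_play_game()

    # Positions that are multiples of 4 should now look like losses for the
    # player to move - the classic Nim strategy, found purely by self-play.
    print({p: round(value[p], 2) for p in range(1, 11)})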

What is the threat?

AI could be a threat to us on many levels, including compromising our privacy, taking our jobs through automation, and baking society’s biases into machines. But the scariest threat is “AI takeover” – the hypothetical scenario in which artificially intelligent machines achieve dominance over humans and take over the world. Movies such as 2001: A Space Odyssey and the Terminator series have given us a glimpse of this alarming future.

With the Internet of Things (IoT), we’re handing more information about – and control over – our daily lives to connected devices, so the threat of AI takeover isn’t just science fiction. But near-future AI lacks the complex thought processes, such as intuition, that we associate with human intelligence. One possible takeover scenario is a machine doing what it thinks is best, with unintended catastrophic effects: faced with a worldwide virus epidemic, an AI could conclude that the best solution is to extinguish human life.

How likely is harm?

AI can be used for a lot more than creating a great chess player. It’s being used to map coral reef deterioration by analysing photos taken using drones. Automating the process saves time and money, enabling researchers to dedicate more time to saving the reefs.

It also has the potential to save lives – Google has used machine learning to detect abnormal cells in mammograms with accuracy equal to or better than a pathologist’s. In the next few decades we could potentially see a single AI do the work of a lab full of scientists.

If governments plan for and adopt AI early, it could improve our lives through smarter city design and better infrastructure and transport. When self-driving cars hit the streets, their AI will need to weigh up factors such as road markings, the movement of other traffic, hazards and congestion. Then there are moral dilemmas: if a pedestrian steps into the car’s path, does it hit them, or take evasive action and potentially crash, injuring its occupants?

So what about that AI takeover? We can’t say what’s next for AI, but it has greater potential to improve human lives than to harm them. AI development isn’t pushing ahead unchecked – organisations such as OpenAI and the Machine Intelligence Research Institute research and guide international AI advancement so that it works for our benefit, not our detriment.


By Erin Bennett
Technical Writer