The Risks and Limitations of Artificial Intelligence
20 August 2018

Machine learning in 2018 is creating amazing tools, but they can be hard to explain, costly to train, and often enigmatic even to their creators.

While big tech companies like Google confidently pronounce that we live in an "AI-first age," with machine learning breaking new ground in areas like speech and image recognition, those at the front lines of AI research are quick to point out that there’s still a lot of work to be done.

Just because we have digital assistants that sound like people doesn’t mean we’re much closer to creating true artificial intelligence.


The Problem with AI

A few of the main problems include:

  1. The need for vast amounts of data to power deep learning systems.
  2. Our inability to create AI that is good at more than one task.
  3. The lack of insight we have into how these systems work in the first place.

(1) First You Get the Data, Then You Get The AI

Artificial intelligence requires large amounts of data to learn about the world, but we often overlook how much data is involved.

AI systems don’t just require more information than humans to understand concepts or recognize features; they require hundreds of thousands of times more. If you look at the application domains where deep learning is successful, you’ll see they’re domains where we can acquire a lot of data. Tech giants such as Google and Facebook have access to mountains of data, making it much easier for them to create useful tools.
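
The scale effect is easy to see even in a toy setting. The sketch below is illustrative only (plain NumPy, synthetic data, all names and parameters chosen for the example, not taken from any real system): it fits the simplest possible classifier, a nearest-centroid rule, on growing amounts of training data, and accuracy climbs as the data grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n_per_class, dims=50):
    """Two noisy Gaussian clusters; a stand-in for real labelled data."""
    X0 = rng.normal(loc=-0.5, scale=2.0, size=(n_per_class, dims))
    X1 = rng.normal(loc=+0.5, scale=2.0, size=(n_per_class, dims))
    X = np.vstack([X0, X1])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

def nearest_centroid_accuracy(n_train):
    """Fit class centroids on n_train samples per class, test on 1000."""
    X_tr, y_tr = make_data(n_train)
    X_te, y_te = make_data(1000)
    c0 = X_tr[y_tr == 0].mean(axis=0)
    c1 = X_tr[y_tr == 1].mean(axis=0)
    pred = (np.linalg.norm(X_te - c1, axis=1)
            < np.linalg.norm(X_te - c0, axis=1)).astype(int)
    return float((pred == y_te).mean())

for n in (5, 50, 5000):
    print(f"{n:5d} training samples per class -> "
          f"accuracy {nearest_centroid_accuracy(n):.3f}")
```

If even a two-parameter-per-class model benefits this much from more data, it is easy to see why deep networks with millions of parameters have a far larger appetite.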

At this moment in the evolution of AI, high volumes of data are essential, although new, less data-hungry approaches are beginning to emerge.

Hundreds of startups are working on their own machine learning models. They might be revolutionary, but without the data to make them work, we’ll never know.

Big tech companies (Google, Facebook, Amazon, Microsoft, etc.) have massive datasets, and so can afford to run inefficient machine learning systems and improve them over time.

Smaller startups might have good ideas, but they won’t be able to follow through without data.

(2) Getting the Required Data Raises Many Ethical Issues

The data acquisition problem gets significantly more difficult in domains where it’s challenging to get your hands on data ethically. For example, healthcare uses AI for machine vision tasks like recognizing tumors in X-ray scans, yet digitized medical data can be sparse.

As machine learning researcher Neil Lawrence points out, the tricky bit here is that it’s "generally considered unethical to force people to become sick to acquire data." (That’s what makes deals like the one struck between Google and the National Health Service in the UK so significant.) The problem, says Lawrence, is not really about finding ways to distribute data, but about making our deep learning systems more efficient and able to work with less data. And, just like Watt’s improvements to the steam engine, that might take another 60 years.

(3) AI Needs to Be Able to Multitask to Evolve Further

There’s another major problem with deep learning: all our current systems are, essentially, idiot savants.

Once they’ve been trained, they can be incredibly efficient at tasks like recognizing cats or playing Atari games. But there is no neural network, and no method today, that can be trained to identify objects in images, play video games, and listen to music. (Neural networks are the building blocks of deep learning systems.)

(4) The Challenge of Teaching an AI Multiple Tasks

When Google’s DeepMind announced in 2015 that it had built a system that could beat 49 Atari games, it was certainly a massive achievement; but each time it beat a game, the system needed to be retrained to beat the next one.

To get to artificial general intelligence we need AI technologies that can learn multiple tasks.
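
The reason retraining is needed is that learning a new task overwrites the weights that encoded the old one, a failure often called catastrophic forgetting. The sketch below is a deliberately tiny illustration, not the DeepMind system: plain logistic regression stands in for a deep network, and the two "tasks" are synthetic labelling rules invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(w, X, y, epochs=300, lr=0.1):
    """Full-batch logistic-regression training (stand-in for a deep net)."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0).astype(int) == y).mean())

# Two "games" that demand opposite behaviour on the same inputs
X = rng.normal(size=(200, 5))
y_task_a = (X[:, 0] > 0).astype(int)   # task A: respond to +feature 0
y_task_b = (X[:, 0] < 0).astype(int)   # task B: the opposite rule

w = train(np.zeros(5), X, y_task_a)
acc_a_before = accuracy(w, X, y_task_a)

w = train(w, X, y_task_b)              # retrain the same weights on B...
acc_a_after = accuracy(w, X, y_task_a) # ...and task A is forgotten

print(f"task A accuracy after learning A: {acc_a_before:.2f}")
print(f"task A accuracy after learning B: {acc_a_after:.2f}")
```

The same weight-sharing that makes the model learn task B efficiently is what destroys its competence at task A, which is exactly why the Atari system had to be retrained per game.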

The Solution May Be Progressive Neural Networks

A possible solution is progressive neural networks: connecting separate deep learning systems together so that they can exchange information.

Progressive neural networks were able to adapt to games of Pong that varied in small ways (in one version the colors were inverted; in another the controls were flipped) much faster than a traditional neural net, which had to learn each game from scratch.

Progressive neural networks are a promising new learning method, and in more recent experiments the approach has even been applied to a robotic arm, speeding up its learning process from a matter of weeks to a single day.
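
The core mechanism can be sketched in a few lines. This is a minimal NumPy illustration of the lateral-connection idea only, with made-up layer sizes and no training loop; real progressive networks use multiple layers per column and learned adapter weights.

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(x, 0.0)

class Column:
    """One task-specific network 'column' (a single hidden layer here)."""
    def __init__(self, in_dim, hidden, out_dim, n_lateral=0):
        self.W1 = rng.normal(size=(in_dim, hidden)) * 0.1
        self.W2 = rng.normal(size=(hidden, out_dim)) * 0.1
        # lateral adapters: read hidden features from earlier, frozen columns
        self.U = [rng.normal(size=(hidden, hidden)) * 0.1
                  for _ in range(n_lateral)]

    def forward(self, x, lateral_hiddens=()):
        h = x @ self.W1
        # progressive nets add in transformed activations from frozen columns
        for U, h_prev in zip(self.U, lateral_hiddens):
            h = h + h_prev @ U
        h = relu(h)
        return h @ self.W2, h

# Column 1 is trained on task A, then frozen (weights never change again)
col_a = Column(in_dim=4, hidden=8, out_dim=2)

# Column 2 learns task B; it can reuse col_a's features via lateral links
col_b = Column(in_dim=4, hidden=8, out_dim=2, n_lateral=1)

x = rng.normal(size=(3, 4))            # a batch of 3 inputs
_, h_a = col_a.forward(x)              # frozen column's hidden features
out_b, _ = col_b.forward(x, [h_a])     # new column sees them laterally
```

Because the old column is frozen, task A is never forgotten; because the new column reads the old one's activations, task B can start from features already learned rather than from scratch.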

(1) Progressive Neural Networks Have Important Limitations

There are significant limitations to progressive neural networks. You can’t simply keep adding new tasks: if you keep chaining systems together, sooner or later you end up with a model that is too large to be tractable.

Creating a human-level intelligence that can write a poem, solve differential equations, and design a chair is not possible with progressive neural networks today.

(2) We Don’t Really Understand How Neural Networks Come to Their Decisions

Another major challenge is understanding how artificial intelligence reaches its conclusions.

Although we know how neural networks are put together and what information goes into them, we don’t understand why they reach the decisions they reach. This remains unexplained.

The Risks and Limitations of Artificial Intelligence in Business

Artificial intelligence involves giving machines and programs the ability to think like a human. Businesses are increasingly looking for ways to put this technology to work to improve their productivity, profitability and business results.

However, while there are many business benefits to artificial intelligence, there are also many barriers and disadvantages to keep in mind.

(1) The Limitations of Artificial Intelligence

One of the main limitations of AI is cost. Creating smart technologies can be expensive, due to their complex nature and the need for repair and ongoing maintenance.

Software programs need regular upgrading to adapt to the changing business environment and, in case of breakdown, present a risk of losing code or important data. Restoring this is often time-consuming and costly.

Other AI limitations relate to:

  • implementation times, which are often lengthy
  • integration challenges and lack of understanding of the state-of-the-art systems
  • usability and interoperability with other systems and platforms

(2) The Risks and Ethical Issues of Artificial Intelligence 

If you're deciding whether to take on AI-driven technology, you should also consider:

  • Customer privacy
  • Potential lack of transparency
  • Technological complexity
  • Loss of control over your business decisions and strategy
  • AI and ethical concerns

With the rapid development of AI, a number of ethical issues have cropped up. These include:

  • The potential of automation technology to give rise to job losses
  • The need to redeploy or retrain employees to keep them in jobs
  • Fair distribution of wealth created by machines
  • The effect of machine interaction on human behaviour and attention
  • The need to eliminate bias in AI that is created by humans
  • The security of AI systems (e.g. autonomous weapons) that can potentially cause damage
  • The need to mitigate against unintended consequences, as smart machines are thought to learn and develop independently

Despite All These Issues, AI Has the Potential to Positively Move Our Civilization Forward

While these risks can't be ignored, it is worth keeping in mind that advances in AI can, for the most part, create better business and better lives for everyone. If implemented responsibly, artificial intelligence has significant potential to move our civilization forward.

