
Neural Networks

Introduction

The term Artificial Intelligence (AI) has been around for a long time. It was coined by John McCarthy in the 1950s and he is considered one of the founding fathers of AI alongside Marvin Minsky.
Key techniques such as machine learning, neural networks and, later, deep learning were widely researched and deployed during the 70s, 80s and 90s. However, each of those periods also saw what we call “AI Winters”, when initiatives didn’t attract enough investment to succeed.

In the 2000s, big companies like Google, Facebook and Baidu arrived on the scene and, realising the potential of these techniques, started putting big money into research again. Whilst AI is flying right now, what’s to say that we won’t have another “Winter” up ahead?

How do we know that AI has really taken off this time?

AI Is Only Just Starting

The big difference between the AI applications of the 2000s and those of the past is commercialisation. Whilst the Turing Test was revolutionary in 1950 and machines winning at checkers, chess and Jeopardy were impressive developments, they didn’t lead to any real-world, practical application of AI in a commercial sense.

There are four key elements behind why AI has become so powerful in the last decade.

  1. Computing Power

Looking back to the 1950s, the early computers didn’t have sufficient power to create truly autonomous, functioning systems. Some of the hardware and infrastructure might have existed, but it was incredibly costly and there were no real use cases that could prove the value of any investment.

Imagine trying to run Alexa on a dial-up internet connection. It simply would not have been possible. Today, people will abandon a webpage if it hasn’t loaded within 3 seconds. That is the extent to which computing power has changed consumer expectations. Only in the last few decades has computer processing power been enough to support an AI system.

IBM is now working towards developing even more powerful quantum computing platforms that will take AI to the next level (although full adoption is a little way in the future).

  2. Big Data

With the hyperconnected digital world, we are creating more data now than ever before, and the volume is still growing. In fact, market intelligence firm International Data Corporation (IDC) forecasts that by 2025 we will have 10 times more data than we did in 2016.

Big Data underpins most of what we are doing with AI. For example, social networks have unleashed a wealth of data about human behaviour that never previously existed. Amazon have collected enough shopping data from their consumers to accurately predict what they are likely to purchase next. Netflix can tell their subscribers what they want to watch before they even know themselves.

The fact is, the more of our lives that become digitalised, the more data becomes available, and the potential for AI applications can only increase.

  3. Models and Algorithms

For Big Data to be effective, we need good algorithms. These are the sets of instructions that tell an AI system what to do. In some of the earlier applications of AI, these algorithms were very prescriptive and told machines what to do on a step-by-step basis. Machine learning techniques have now become so sophisticated that computers can effectively build their own models from data, with little human intervention, to an incredibly high degree of accuracy.

Data and computational power have led to a rise in deep learning and neural networks built on refined algorithms. Models can now be trained that were simply not possible 20 years ago, when there wasn’t the volume, speed and accuracy of information needed to create commercial value.
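To make the shift from hand-written rules to learned models concrete, here is a minimal sketch of both approaches side by side. Everything in it is illustrative rather than part of this tutorial: the toy “did the visitor buy?” data, the hand-coded rule and the single logistic neuron trained with gradient descent are assumptions introduced only to show the idea.

```python
# A minimal sketch (assumptions: NumPy is available; the toy data, the
# hand-coded rule and the single logistic neuron are invented purely for
# illustration and are not part of this tutorial's later code).
import numpy as np

# Toy data: [hours on site, items in basket] -> did the visitor buy? (1 = yes)
X = np.array([[0.2, 0], [0.5, 1], [1.0, 1], [1.5, 3], [2.0, 2], [3.0, 4]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1], dtype=float)

# The old, prescriptive approach: a human writes the decision rule step by step.
def hand_coded_rule(hours, items):
    return 1 if hours > 1.2 and items >= 2 else 0

# The learned approach: a single logistic neuron finds its own weights from data.
w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(2000):                      # plain gradient descent on the log-loss
    p = 1 / (1 + np.exp(-(X @ w + b)))     # predicted purchase probabilities
    w -= lr * (X.T @ (p - y)) / len(y)     # gradient with respect to the weights
    b -= lr * np.mean(p - y)               # gradient with respect to the bias

new_visit = np.array([1.8, 3.0])
learned = int(1 / (1 + np.exp(-(new_visit @ w + b))) > 0.5)
print("hand-coded rule :", hand_coded_rule(1.8, 3))  # rule a person wrote
print("learned model   :", learned)                  # rule the model found itself
```

The point of the sketch is that the second approach never sees the rule: it derives its own decision boundary from the examples, which is exactly what wasn’t feasible at scale 20 years ago.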

  4. Democratisation of AI

Data Science and AI expertise is still hard to find and, for that reason, it can be expensive. Data Scientists have been in a position where they could virtually name their own salaries given their unique skillsets.

A growing number of tools are putting AI capabilities into the hands of non-technical users. Large enterprises including Google, Microsoft and IBM are releasing cloud-based tools that allow almost anyone to create their own machine learning models. These tend to use pre-built algorithms that can be applied to various situations without the need for technical support.

Companies like DataRobot allow users to upload data and quickly try different algorithms to see which obtains the best results.
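As a rough illustration of that “upload data, try several algorithms, keep the best” workflow: the sketch below is not DataRobot’s actual interface or API, just a hedged example using scikit-learn’s off-the-shelf models and one of its bundled sample datasets standing in for a user’s uploaded data.

```python
# A hedged sketch of the "try several pre-built algorithms and keep the best"
# idea. This is not DataRobot's API; it uses scikit-learn's off-the-shelf
# models and a bundled sample dataset to stand in for uploaded business data.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)   # stand-in for a user's uploaded data

candidates = {
    "logistic regression":  LogisticRegression(max_iter=1000),
    "decision tree":        DecisionTreeClassifier(max_depth=4),
    "small neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
}

# Score every pre-built algorithm with 5-fold cross-validation and keep the best.
scores = {name: cross_val_score(make_pipeline(StandardScaler(), model), X, y, cv=5).mean()
          for name, model in candidates.items()}

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
print("best model:", max(scores, key=scores.get))
```

Tools in this space essentially wrap a loop like this behind a point-and-click interface, which is what makes them accessible to non-specialists.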

BI platforms such as Tableau, Qlik and Sisense amongst countless others can analyse data in an instant without needing any specialist resource.

As AI becomes commonplace in business, all these attempts to democratise access to it will speed up adoption across numerous functions.

AI Deployments Will Continue To Accelerate

Whilst what we have seen from AI so far has greatly impacted everyday life as well as our jobs and the ways many industries work, we are only at the start. To date, what we have is “narrow AI.”

These are applications that function within programmed rules, but the goal for researchers and developers is to achieve artificial general intelligence (AGI). This is where machines can truly imitate human behaviour in a conscious way, such as understanding the environment around them. A robot may be able to walk and perform activities against set rules, but it is not yet capable of designing those rules for itself.

Narrow forms of AI are still in the early stages of adoption and have already started diagnosing disease, solving legal cases and educating pupils. With the data available, potential computing power and democratisation of systems, applications will become mainstream in the forthcoming years.

Following that, we may be able to set our sights on AGI but we are some way off that kind of singularity just yet.