
History of AI


We tend to see artificial intelligence (AI) as a brand-new development, but the history books tell us otherwise. No longer the domain of science fiction, robotics and artificial intelligence are becoming important business drivers. This article looks at how we got to where we are today.

The timeline below, from the University of Queensland, gives a brief overview of how AI has progressed over the years into a standard part of university offerings.

Whilst this timeline provides us with a great overview, we are going to start back in 1921, when the term “robot” was first used.

The rise of robotics and AI

The term robot was first used by the Czech writer Karel Capek almost 100 years ago, in 1921, although he credited his brother Josef Capek as the inventor of the word. It comes from the word robota, which is associated with labor or work in Czech and other Slavic languages, and gives us an insight into its intention.

In 1939, a humanoid robot named Elektro was presented at the World's Fair, smoking cigarettes and blowing up balloons for the audience. This was a couple of years before Isaac Asimov formulated his “three laws of robotics,” which most of us will be familiar with from movies like “I, Robot.” The three laws of robotics state:

  1. A robot may not injure a human being or, through inaction, allow a human being to be harmed.

  2. A robot must obey orders given to it by human beings, except where such orders would conflict with the first law.

  3. A robot must protect its own existence, as long as such protection does not conflict with the first or second law.

These laws still stand firm today and in their own way are built into artificially intelligent devices.

Moving into the 1940s and 1950s, the foundations of neural networks in machines started to be developed.
In many papers, this period is considered the true start of AI, as computer science began to be used to solve real-world problems, moving away from pure theory and fantasy.

During the Second World War, the British computer scientist Alan Turing worked to crack the “Enigma” code, which was used by German forces to send secure messages. Turing and his team did this using the Bombe machine, laying down the foundations for the application of machine learning: using data to imitate human tasks.

Turing was amongst the first to consider that a machine could converse with a human without the human knowing it was a machine. Many know this as the “imitation game,” another piece of AI history that has since been made into a popular movie.

The standard was set for AI, and in the 50s and 60s research into the domain began to boom. In 1951, Marvin Minsky built the first neurocomputer, and a program running on the Ferranti Mark 1 machine successfully used an algorithm to master checkers. At around the same time, John McCarthy, who is often dubbed the father of AI, developed the LISP programming language, which has become very important in machine learning.

In the 1960s, there was more exploration around robotics as GM installed the Unimate robot to lift and stack hot pieces of metal. Frank Rosenblatt also constructed the Mark I Perceptron, a computer that was able to learn skills by trial and error. By 1968, a mobile robot known as “Shakey” had been introduced, controlled by a computer the size of a room.

The AI Winters

Despite all this progress, during the 1970s AI hit a period known as the AI Winters, a term coined as an analogy to the nuclear winter. Scientists were finding it very difficult to create truly intelligent machines, as there simply wasn't enough data to do so. This led to a slide in government funding as confidence began to dwindle.

Research slowed until the 1990s, apart from a few notable projects such as SCARA, a robotic arm invented for assembly lines in 1979, and research by Doug Lenat and his team in the 80s, which looked to codify what we call human common sense. Also, in 1988, the first version of a conversational chatbot was launched, and we saw a service robot in hospitals for the first time. Some of these developments created a spark and a renewed interest in the potential of AI going into the 1990s.

New Opportunities for AI

During the 90s and going into the new millennium, companies started showing a new interest in AI, and it had a second coming of sorts. The Japanese government had earlier announced plans to develop a new generation of computers to advance machine learning. In 1997, IBM's Deep Blue computer famously defeated the world chess champion Garry Kasparov, propelling AI into the limelight.

Improvements in computer hardware meant companies had more data and, therefore, greater opportunity to develop machine learning propositions. In 1999, although we were a long way off seeing the likes of Pokemon Go, augmented reality began to take shape as a term and a framework. These ideas started to drive a wave of developments in the early 2000s as the Big 4 (Google, Amazon, Facebook and Apple) gained a major share of the AI market.

In 2005, autonomous cars took a huge leap, with a vehicle driving over 130 miles without intervention to win the DARPA Grand Challenge. IBM introduced its Watson AI system in 2006, which later defeated Jeopardy champions in the US. Google launched Street View in 2010, and in 2011 we met Siri for the first time. It wasn't until 2015 that Alexa finally hit the marketplace; then drone deliveries started, Google Lens became a reality, smart homes were the in thing, and people started having their own virtual reality platforms at home.

In fact, for all the history of AI, the last ten years have been huge, creating vast amounts of data and allowing us to change life as we know it. It is difficult to imagine life without AI in the modern digital world. This is quite an amazing feat when we consider the tumultuous times of the last century.