Artificial intelligence (AI) is the new weapon that is starting to define how large companies compete with each other. The Big Four, or GAFA (Google, Amazon, Facebook, Apple), have had the advantage of large budgets, deep talent pools and vast amounts of data, allowing them to develop AI solutions faster than anyone else. This creates a problem for smaller businesses, which cannot compete with many of these offerings and risk being drowned by the GAFA tidal wave. Small to mid-sized businesses therefore need to find their own niches to remain competitive.

Rather than focussing on the trials and tribulations of those small businesses, this article looks at GAFA and how they are progressing their AI offerings.


AI radiates through almost everything that Google does. In fact, they have stated that they are moving towards an “AI first” development structure. It has been around 12 years since Google introduced us to their Android operating system, an open-source platform for mobile devices. In April 2019, they announced they would be doing the same with AI, using TensorFlow, their open-source platform for machine learning. Essentially, anyone who can connect to the internet has access to one of the most powerful AI platforms ever created.

Google has virtually become an AI company, far removed from simply a search engine, and they are starting to introduce that to the rest of the world. Although there are other companies with platforms like TensorFlow, they don’t have the same development, research and funding power that Google has.

Whilst, in theory, providing these complex machine and deep learning tools to everyone will allow them to compete with Google, the objective is that it accelerates computer science and that the community gives back. This is how Google see the future, and they have set themselves up as a transformative AI company.
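To give a sense of how accessible TensorFlow makes this, here is a minimal sketch of building and training a tiny classifier with its Keras API. The layer sizes, feature count and random stand-in data are illustrative choices, not anything from Google’s own systems.

```python
# A minimal TensorFlow/Keras sketch: define, train and query a tiny
# classifier. All numbers here (4 features, 3 classes) are invented
# purely for illustration.
import numpy as np
import tensorflow as tf

# Tiny model: 4 input features -> 3 output classes.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Random stand-in data, just to show the training call.
x = np.random.rand(32, 4).astype("float32")
y = np.random.randint(0, 3, size=(32,))
model.fit(x, y, epochs=1, verbose=0)

# One probability per class for each example: shape (32, 3).
probs = model.predict(x, verbose=0)
print(probs.shape)
```

A few lines like these are essentially the whole on-ramp; the same API scales from this toy to the deep networks discussed throughout the article.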

Beyond TensorFlow, there are a vast number of ways that Google already deploy AI. Below is a summary of some of the most well-known applications.

  • Google Search

Most of us know that the Google Search algorithms are powered by AI, or more specifically machine learning and natural language processing. However, this has come a long way over the years, especially with the prominence of voice search and deep learning techniques that allow the models to learn on their own.

  • Adwords

Google Adwords now uses AI to enable Smart Bidding. Instead of employing a digital marketing team to manually arrange auction bids, machine learning algorithms automate the process to improve conversions whilst reducing the cost of labour. It is able to use a wider range of contextual signals within the machine learning models that may not have been picked up otherwise.

  • Maps

Consumers are now able to use Google Maps just like a satellite navigation system. It has effectively replaced the need for an old-fashioned GPS unit (such as a TomTom), meaning one less device in the car. The integrated AI models can flag driver alerts and link to places of interest. It also detects traffic delays and route problems for drivers. The predictive nature of Google Maps means you can navigate without any commands.

  • YouTube

At YouTube, owned by Google, one of the core uses of machine learning is ensuring that brands don’t have their ads placed next to what might be deemed inappropriate content. Users are also fed recommended videos via a form of predictive AI.

  • Photos

Although not one of the most used features, Photos will recommend which images you should share and which friends you should share them with.

  • Gmail

The email platform from Google has really stepped up its AI game in the last year. Users now have access to smart replies, and the platform will offer up predictive sentences as you write, based on the data within your inbox. As it gathers more data, it learns your writing style and is able to accurately compose an email for the user.

  • Calendar

Within Calendar is Smart Scheduling, which can suggest meetings and appointments based on the regular habits of the user.

  • Drive

The AI used within Google Drive and Documents is able to predict which files you are looking for and claims to reduce the time spent finding them by up to 50%. It will display the files it thinks you need at the top of the screen each time. In Google Sheets, it can now even automatically generate formulas, whilst Documents uses natural language processing for better functionality.

  • Assistant

Google Assistant, the voice-activated agent, gets answers almost instantly straight from the web, much as if you were doing a search yourself. Google Assistant is also great at remembering previous conversations and syncing with Maps and other Google products. The assistant is the power behind the Google Home IoT device, which has laid down its marker in the voice command device market. Google Home has been learning at an accelerated rate thanks to the massive amount of data that the company hold.

  • Allo

This is Google’s venture into the messaging app world. It goes beyond some of the other apps through deep learning neural networks, which Google say will be powerful enough to create its own emojis rather than the standard ones that come with other platforms. Everything Google does has an AI-first thought process.


Amazon has not become the retail giant that it is today by accident. AI is shaping everything they do from the warehouse right through to the Alexa smart speakers.

  • Product recommendations

Virtually from the start, Amazon has used machine learning algorithms to recommend products to consumers. The predictive AI will present users with products they are most likely to want based on either their purchasing behaviour or the actions of other consumers just like them.

The correlations for the recommendations have gone through a number of changes over the years to account for fast-paced and dynamic markets. For example, if a new product line comes to market, it can tell almost instantly which consumers are going to be interested in it. In the earlier days of the machine learning algorithms, this may have taken some time to work itself out.

Reports have suggested that as much as 35% of Amazon’s retail revenue comes from products purchased via its recommendation engine. To put that into context, AI is able to sell products that a consumer would not otherwise have made a conscious decision to buy. Add personalised emails and excellent on-site search into the mix, and Amazon is driving much of its retail sales using data alone.
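Amazon’s engine is famously based on item-to-item collaborative filtering: scoring products by how often the same customers buy them together. The sketch below shows that idea at toy scale; the products, customers and purchase history are all made up, and the real system works over vastly larger, weighted data.

```python
# Toy item-to-item collaborative filtering: recommend the product
# whose buyers overlap most with the buyers of the item just viewed,
# using cosine similarity over binary purchase vectors.
import math

purchases = {             # customer -> set of products bought (invented)
    "ann":  {"kettle", "toaster", "mug"},
    "ben":  {"kettle", "mug"},
    "cara": {"toaster", "blender"},
    "dev":  {"kettle", "toaster"},
}

def item_vector(item):
    """Binary vector over customers: 1 if that customer bought the item."""
    return [1 if item in bought else 0 for bought in purchases.values()]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(item, catalogue):
    """Return the catalogue item most similar to the one just viewed."""
    scores = {other: cosine(item_vector(item), item_vector(other))
              for other in catalogue if other != item}
    return max(scores, key=scores.get)

catalogue = {"kettle", "toaster", "mug", "blender"}
print(recommend("kettle", catalogue))  # -> "mug"
```

Here the kettle’s buyers overlap most with the mug’s buyers, so a kettle viewer is shown the mug. The item-centred framing is the key design choice: similarities between products can be precomputed offline, so recommendations stay fast even with hundreds of millions of customers.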

  • Voice control devices

Unless you have been hibernating for the last few years, you will have noticed the amazing rate at which Alexa has come to the market. Millions of consumers have now bought Alexa devices, and there are almost 50,000 skills available, meaning they can help with a wide variety of tasks from simple voice commands.

We should not underestimate the amount of data in the Alexa engine and the immense processing power that sends answers back almost instantly. There are now integrations with many other devices, capitalising on Alexa’s dominance of the market. Amazon is also creating customisable skills for third parties, e.g. Marriott Hotels, as a type of concierge system for guests.

The future of Alexa is in letting consumers create their own skills via the Blueprints platform, which doesn’t require any development knowledge.

  • Amazon Web Services (AWS)

AWS is the massive cloud-based service and storage facility provided by Amazon. It has become one of the market leaders in this sector and is a huge contributor towards the trillion-dollar valuation of the company.

The aim of AWS is to give users the same capabilities that Amazon has in creating its machine learning models for recommendations, Alexa, DeepLens or Amazon-Go (see below). AWS is tailored for different levels of users, from beginners all the way to those with data science or equivalent degrees.

Usage of AWS grew by 250% during 2018, and the service is set to remain the cloud platform of choice for many businesses.

  • The Warehouse

Albeit not consumer-facing, one of the most important areas for AI is the Amazon warehouse. The facilities are filled with robots that spring into action as soon as somebody places an order. They are completely autonomous and deliver items to a human, who checks them and puts them on a conveyor belt. With the volume of orders received, every second counts in the warehouse, which is why AI is vital for improving these efficiencies.

As well as this, Amazon use machine learning to predict what customers are likely to order and place stock in the right spot of the warehouse, improving processing speed. Using computer vision, Amazon have also optimised the scanning of goods in the warehouse with a new system that saves workers huge amounts of time.

Delivering items to the customer on time is imperative to the success of the Amazon offering so every second saved is another happy buyer.

  • Amazon-Go

Trials of the Amazon-Go stores showed customers walking in, picking up groceries and walking back out again. Sensors and scanners can tell exactly what they have picked up and charge them via their smartphone as they enter and leave the store. There is no need for cash or any human interaction whilst in the store.

The system is far more complex than the warehouse: in busy stores, cameras can have a blocked view, or lighting can change and impact the algorithms. This is why there are only a few stores for now, whilst the experts attempt to build models that can account for these glitches.

The number of sensors needed right now is also costly, which is another reason why they haven’t reached a commercial release. Automated stores are most likely the way forward, and if so, Amazon will be the pioneers.

  • DeepLens

AWS DeepLens is a wireless-enabled video camera and development platform integrated with the AWS Cloud. It lets you use the latest AI tools and technology to develop computer vision applications based on a deep learning model.

Everything about the hardware is fully customisable for developers. For example, it is possible to create deep learning networks capable of recognising the objects in a room or even the faces of people in a room.


The bad press of the last 24 months, with events like the Cambridge Analytica scandal, has led to people questioning how Facebook use AI and data, but the fact of the matter is that they have been using both to great effect for a long time now.

With a dedicated research lab and billions of users, Facebook has an infrastructure designed for growth in AI.

  • Recommendations

A bit like Amazon, recommendations are high on the list of AI deployments at Facebook. As well as suggesting new friends, the items you see in your news feed are all there because algorithms have told them to appear. Every time you do something on Facebook, it learns from your behaviour so it can serve exactly the right content next time around.

Businesses can use Facebook Ads to target the right users and you will see campaigns based on your activity and behaviours. One of the most impressive (some may say intrusive) aspects is re-targeting of products. If you’ve ever been shopping on a site and suddenly see the same item or similar items appear on Facebook, that is AI driven re-targeting. I promise Mark Zuckerberg isn’t listening to every conversation but deep learning models are getting very strong at tailoring these ads.

  • Content

With so much press around internet trolls and offensive posts, Facebook has algorithms that can flag problem content. A recent innovation was a script capable of spotting if teenagers were showing signs of depression or suicidal thoughts. This was built for the public good after some high-profile incidents that could potentially have been prevented.

Other use cases include detecting terrorist content and racial abuse.

  • Language

Facebook acquired a natural language processing startup based in London. With the technology, they are able to decipher what their users are saying and analyse the context and meaning as well as the words themselves. The use case for this is tackling fake news and hate speech, but it will also be used to better understand the behaviour and needs of their users.

Some analytics companies are using language to recognise personality traits. Many say these attributes are a better indicator of trustworthiness and integrity when it comes to things like credit. Trials are in place that use social data, as opposed to traditional financial backgrounds, for making loan or mortgage decisions.

The AI platform for understanding context is known as DeepText. Facebook say it has “human-like accuracy” in understanding the context of language through the deep analysis of intents and entities.
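DeepText itself is proprietary and neural, but what “intents and entities” means can be shown with a deliberately crude keyword sketch: classify the overall goal of a message (the intent) and pull out the specific things it mentions (the entities). The intent labels, keyword lists and city names below are all invented for illustration.

```python
# Toy intent/entity extraction. Real systems like DeepText use deep
# learning over raw text; this keyword version only illustrates the
# shape of the output such systems produce.
INTENT_KEYWORDS = {                       # invented intent labels
    "request_ride": {"ride", "taxi", "lift"},
    "sell_item":    {"sell", "selling", "sale"},
}
KNOWN_CITIES = {"london", "paris", "madrid"}   # invented entity list

def parse_message(text):
    """Return a crude intent label and any recognised entities."""
    words = set(text.lower().replace(",", "").split())
    intent = next((name for name, kws in INTENT_KEYWORDS.items()
                   if words & kws), "unknown")
    entities = sorted(words & KNOWN_CITIES)
    return {"intent": intent, "entities": entities}

print(parse_message("I need a ride to London"))
# -> {'intent': 'request_ride', 'entities': ['london']}
```

In a production system, this structured output is what lets a platform route a post about needing a lift to a ride service, or flag a listing as a sale, without any explicit tagging by the user.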

  • Image Recognition

Millions of images are posted on Facebook. Identifying faces and automatically tagging them is one of the major advances machine learning algorithms have brought to the site over the years. The models were trained using billions of photos from Instagram once Facebook had acquired it, allowing them to build accurate models incredibly quickly.

One of the most important use cases is that the algorithms can identify images and describe them to the visually impaired. This works by simply taking a photo and allowing the deep learning networks to quickly identify and describe it to users.

Facebook even believe they have an algorithm which can work out the mood of a person based on their stature or pose in an image.

  • Chatbots

Facebook Messenger is probably the most used platform for conversational chatbot AI. Businesses are now able to let their customers serve themselves or purchase products using the technology.

Rich amounts of data feed the Messenger platform, allowing it to conduct all kinds of activities. Facebook have even given it the capability to negotiate with humans, and it created its own ways to bluff and lie in conversations.


Many say that Apple is a bit of a follower when it comes to AI and machine learning, with the other big companies doing much more to lead innovation. In fact, even though Siri was the first voice assistant to enter the market, Amazon and Google quickly took over with their own products, as they had a lot more data available to make things happen faster.

The Apple model has typically relied on hardware rather than on constantly developing their own AI and machine learning models like the other companies in this article. However, in April 2018, Apple hired John Giannandrea from Google. He was one of the core reasons why Google integrated AI into just about every product and a major part of their success. His appointment showed the direction Apple desired to go in.

With that, the latest iPhone models come with heaps of computational power and AI based algorithms. The objective of this was slicker camera effects and the ability to create amazing augmented reality (AR) experiences for the users. A little like Google and Amazon, customisation is key and non-developers are able to run their own algorithms using the Apple hardware.

With non-developers having this sort of scope, the App Store is set to be livened up with new experiences for socialising and getting things done. The machine learning algorithms have image recognition to better understand photos, as an example. The new computer chip in the iPhone is dedicated to running neural network software that starts to understand the concepts of speech as well as images, more so than ever before.

The new technology installed by Apple allows developers to run neural networks up to 10 times faster than on the iPhone X. To put this into some context, the basketball app HomeCourt has been able to improve drastically. The app analyses video and images to provide on-court analytics on shots, misses and dribbles. This used to take a couple of seconds to process, but the analysis can now be completed in real time, such is the computational power.

AI has become a key area of focus at Apple as they seek to keep up with Google, Amazon and Facebook. At the start of 2019, they hired a number of staff into AI roles, from developers to industry-specific professionals. However, they don’t talk it up as loudly as the competition.

In 2019, Apple announced that Siri is set to have a far more natural voice, that personalised music will be available via the HomePod speaker system, and that Siri will be able to read incoming messages to your AirPods. The Core ML platform is available to iOS developers for developing their own applications, as we’ve already said, and is continually improving.

Apple have also hired former Google man Ian Goodfellow as Director of Machine Learning, further exemplifying their intent. However, it seems that although they have the resources and knowledge, the culture of the company remains rooted in hardware, and it will be hard to pull away from that completely. Some have suggested there are roadblocks to using machine learning given Apple’s commitment to privacy, and that this could be causing more problems with development than we realise.

Although Apple are, and for the foreseeable future will be, a huge power in the technology field, their AI strategy still seems somewhat lacking against their core competition. They need to think about releasing something big soon, or risk falling short while Google, Amazon and Facebook power ahead.