Artificial Intelligence: Or how I learned to stop ranting and love the buzzword

Category: Blog

By Janice


Our guest author, Antero Duarte of Wallscope, takes a wry and somewhat insightful look into the world of all things AI. Take it away Antero… or should that be Siri, HAL, or even Holly…?

Unless you live under a rock, you have been constantly bombarded on all fronts with Artificial Intelligence (even in bins?) lately.

Artificial Intelligence has been around for a long time and it has been called about a gazillion different names throughout its history.

The term itself has also been used to describe about a bajillion different things.

While some of the uses of the term align with the definition, some have been a bastardisation of the concept for marketing purposes and can hurt the development of the technology. Or can they?

Defining AI the way only humans can
How do you get nothing done? Ask a group of AI students to agree on a definition of AI — Based on a true story

Let me take you back to the notes from when I took an Artificial Intelligence module at university. An artificially intelligent system was defined as one that exhibits at least one of the following four characteristics:

  • Thinking Rationally - Can a system “reason” in a way that is rational/logical? (if this then that — Logic, Inference)
  • Thinking Humanly - Can a system mimic the ways in which humans think? (e.g. Introspectively)
  • Acting Rationally - Can a system act in a way that is based on the logical analysis of its environment? (if temperature > 21 then turn heating off)
  • Acting Humanly - Can a system act in a human way? (usually to try to convince humans that it is a human — Turing Test)
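
The "Acting Rationally" bullet is essentially a rule engine: sense the environment, apply a logical rule. A minimal sketch of the thermostat example (the function name and return values are my own, just for illustration):

```python
# A minimal "acting rationally" agent: it observes one fact about its
# environment (the temperature) and applies a fixed logical rule.
# The 21-degree threshold comes from the example above; everything
# else is illustrative.

def thermostat_action(temperature_c: float) -> str:
    """If temperature > 21, turn the heating off; otherwise leave it on."""
    if temperature_c > 21:
        return "heating off"
    return "heating on"

print(thermostat_action(23.5))  # -> heating off
print(thermostat_action(18.0))  # -> heating on
```

Trivial, yes, but under the broad definition above, this is already "an AI" of sorts.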

Is that a broad definition? Yes it is. Is it useful? Probably not?!

Definitions like this are never strictly wrong, but that same breadth makes them so vague that they are not very meaningful. Nonetheless, when I refer to an AI, I will mean a system that possesses at least one of those characteristics.

This is also the definition that no one came up with during a two-hour practical when asked to define AI.

A brief history of AI
(This section is pretty much just a summary of the Wikipedia page on the History of AI, so go there for the full picture)

With such a broad definition, it’s no wonder that AI is everywhere. We can basically stretch the definition of AI to fit any system that replaces human behaviour or intervention in any way.
Since the 1950s, people have been developing systems and calling them artificial intelligence (the idea goes further back, but that’s when computer AI started).

When it started, it was modelled on the human brain and how signals are passed around it. This was the first attempt at a system that thinks humanly. But the problem is that, as far as we know, there’s more to thinking than just neurons firing. How do we define thinking? Is it based on consciousness? If so, are animals intelligent? Which ones? So many questions… Yes, this is the birth of the philosophy of AI, and the Turing Test marks it.

People are experimenting with computers, and start developing Game AI (which would be used as a measure of the progress of AI throughout history). Game AI is important because it marks the realisation that a system can act humanly but still think rationally.

It’s 1956 and we have the Dartmouth conference. If AI had a birth certificate, this would be the time and place on it. This is where the name was picked, the mission was defined and the biggest players joined.

After that we have the first period (1956–1974) of heavy development, where a lot of the algorithms still used today were created, there were major breakthroughs in fields like Natural Language Processing, and everyone was pouring money into AI research. This is where systems in areas other than games were making the jump from acting rationally to acting humanly.

Then it slowed down (1974). Then it picked up again (1980). Then it slowed down again (1987).

The predictions were too optimistic, which meant that most of them didn’t come true. As Marvin Minsky, one of the founders of AI, put it: “So the question is why didn’t we get HAL in 2001?”

There isn’t one answer to that so much as a combination of several (speculative) factors: limited computing power, the end of funding, profit-driven research that focused on short-term gains…
Excuses. We want HAL! (Actually, do we? It would kill us all… I’ll save that for another article.)

We also want hoverboards. Also, the world didn’t end in 2012. It’s like human-made predictions never come true; someone should get a machine to predict these things. Anyway…

And then…

because I’m stuck in the 90s. send help

Big Data and Deep Learning changed everything. Suddenly we are able to throw a lot of data at a machine and it will use these magic black boxes that allow it to act humanly. Interestingly, they are also the closest we have got to thinking humanly.
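To make the "throw data at a machine" idea less magical, here is a toy sketch of mine (nothing from the article itself): a single artificial neuron that learns the logical OR function from examples, using the classic perceptron learning rule. Deep learning stacks millions of these units, but the principle is the same.

```python
import random

# One artificial neuron learning OR by trial and error.
# All names and the learning rate are illustrative.

random.seed(1)  # fixed seed so the run is reproducible

# Training data: ((input1, input2), expected output) for logical OR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [random.uniform(-1, 1) for _ in range(2)]  # random starting weights
b = random.uniform(-1, 1)                      # random starting bias

def predict(x):
    """Fire (output 1) if the weighted sum of inputs clears the bias."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge the weights toward each mistake.
for _ in range(50):
    for x, target in data:
        error = target - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

print([predict(x) for x, _ in data])  # -> [0, 1, 1, 1]
```

Nobody told the neuron what OR means; it just adjusted numbers until its outputs matched the data. That, scaled up enormously, is the black box.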

What now?

That is where we are. We live in a world where more and more machines are making more and more decisions based on big data.

That data is abundant, since corporations have been collecting it for decades now. It is also biased, because it reflects the biases that exist in the real world, and we have yet to find ways of preventing these models from learning the biases in the data we train them on.

This should be the subject of the next article that I won’t write. People who know way more about this problem are writing about it. I recommend Invisible Women by Caroline Criado-Perez and Racist in the Machine: The Disturbing Implications of Algorithmic Bias by Megan Garcia.

These techniques are being used everywhere. They work most of the time, and they are cheaper and easier to build, as long as you’ve got the data. Open data is also abundant, so sometimes you don’t even need to own the data to get decent results.

These techniques are also so widely used because they are popular. They become popular by being easy to talk about. Artificial Intelligence is a concept th