Is Optimus Prime A.I.?

by Terence Tan

We all know that Optimus Prime from Transformers is a robot. However, is he an example of artificial intelligence (AI)? The word ‘artificial’ implies that the intelligence is built, presumably by humans.

Were the Transformer robots created by humans? No. Therefore, Optimus Prime is a completely fictional but naturally intelligent sentient being in the body of a machine.

Is C-3PO from Star Wars an example of AI? Yes. It (or is it he?) was built by a little boy called Anakin Skywalker. Incidentally, Anakin would have made a really great mechatronics engineer but instead he decided to become an intergalactic light sabre-wielding villain.

It’s now 2011, but where are our robots? Where is the AI that we hoped would work, talk and play with us?

We want robots that are intelligent enough to play with us. Consider what happens when humans play games. Firstly, we reason – “If I do this, then he’ll do that, so I should…” Secondly, against human opponents, we have to understand human behaviour. We look for signs in their facial or body language to gauge whether they are happy or sad, scared or relieved, aggressive or passive, and so on. Thirdly, we have to have a goal in mind and use our reasoning and understanding to win the game. Fourthly, whether we win or lose, we learn. We learn from our mistakes and we do better the next time. If a machine could do all that, then surely it is artificially intelligent!
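The first item on that checklist – “If I do this, then he’ll do that” – is something we can already write down in code. Game-playing programs do it with a technique called game-tree search. Here is a minimal sketch using the simple stick game Nim; the game, the function names and every number below are my own illustration, not anything a real system uses verbatim:

```python
# Minimal 'if I do this, then he'll do that' reasoning (minimax search).
# Game: Nim -- players alternately take 1-3 sticks; whoever takes the last stick wins.

def best_move(sticks, maximizing=True):
    """Return (score, move): score is +1 if the player to move can force a win, -1 otherwise."""
    if sticks == 0:
        # The previous player took the last stick and won, so the player to move has lost.
        return (-1 if maximizing else 1), None
    best = None
    for take in (1, 2, 3):
        if take > sticks:
            break
        # Imagine making this move, then let the opponent reason the same way.
        score, _ = best_move(sticks - take, not maximizing)
        if best is None or (maximizing and score > best[0]) or (not maximizing and score < best[0]):
            best = (score, take)
    return best

score, move = best_move(5)
print(score, move)
```

With five sticks on the table, the program works out that taking one stick (leaving four, a losing position for the opponent) forces a win – exactly the “if I do this, then he’ll do that” chain of thought, just carried out exhaustively.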

With this in mind, past AI researchers worked to create machines that could play games with humans, and not just play, but win. So what game best demonstrates artificial intelligence?

“No machine can beat a man in a game of chess. I don’t mean men who are amateur chess players and dabble for fun. I mean a real champion grandmaster. Chess requires daring, creativity and, most of all, human ingenuity – attributes that no mechanical calculator can ever hope to achieve.”

Those were the thoughts of many as world chess champion Garry Kasparov sat down to play against IBM’s Deep Blue. The battle was tough. The world was watching. In the end, the machine won the 1997 rematch 3½-2½. The world hailed the coming of a new age – the age of the machine, the age of AI.

However, was Deep Blue truly artificially intelligent? Indeed, it could reason: “If I move my pawn here, my opponent will move his queen there…” But could it understand human behaviour? There were no eyes or ears on Deep Blue. All it could decipher about human behaviour was what pieces were on the chessboard.

Did it have a goal? Yes, and it achieved it, to Garry Kasparov’s dismay. Last on our four-item AI checklist: could Deep Blue learn? And what is learning, anyway?

The first time you play a game, whether it’s Tetris or a card game or sports, you generally ‘suck’. You are not familiar with the rules, you don’t know the techniques and you don’t know how to get the high scores. As you play the game more, you steadily increase your knowledge and experience. You start to know the strategic points of the game and what to do to win. As the young people say, you are going from ‘noob’ to ‘pro’. That is learning.
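That ‘noob’-to-‘pro’ progression can be sketched in a surprisingly small program: keep a score for each strategy, nudge it towards the result of every game, and favour whatever has worked best so far. Everything below – the strategy names, the win rates, the step size – is invented for illustration:

```python
import random

random.seed(0)  # fixed seed so the toy experiment is repeatable

# Toy learner: track an estimated win rate for each strategy.
win_rate = {"aggressive": 0.5, "defensive": 0.5}

def play(strategy):
    # Pretend 'aggressive' wins 70% of games and 'defensive' 40% (made-up numbers).
    return random.random() < (0.7 if strategy == "aggressive" else 0.4)

for game in range(200):
    # Mostly exploit the best-known strategy, but occasionally explore the other.
    if random.random() > 0.1:
        strategy = max(win_rate, key=win_rate.get)
    else:
        strategy = random.choice(list(win_rate))
    won = play(strategy)
    # The 'learning' step: nudge the estimate towards the observed result.
    win_rate[strategy] += 0.05 * (won - win_rate[strategy])

print(max(win_rate, key=win_rate.get))
```

After a couple of hundred games, the learner has discovered for itself which strategy pays off – it has gone from ‘noob’ to (a very modest) ‘pro’ without being told the answer.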

Deep Blue did not learn chess by progressing from ‘noob’ to ‘pro’. It had extra help. Deep Blue’s memory contained 4,000 chess openings and 700,000 grandmaster games, and it could search up to 20 moves in advance – far more than any human can compute. In its time, Deep Blue was the 259th most powerful supercomputer in the world. That was the computational power required to beat the best human players.
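A quick back-of-envelope calculation shows why that much hardware was needed. A chess position offers roughly 35 legal moves on average (a commonly quoted figure, assumed here), so the number of positions to examine grows explosively with every extra half-move of lookahead:

```python
# Why deep lookahead in chess demands supercomputer-class hardware.
# ~35 legal moves per position is a commonly quoted average (an assumption here).
BRANCHING = 35

for depth in (2, 6, 10):
    positions = BRANCHING ** depth
    print(f"{depth} half-moves ahead: about {positions:,} positions")
```

Two half-moves ahead is about a thousand positions; six is nearly two billion. Deep searches are only feasible with enormous raw speed plus clever pruning of hopeless branches.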

While computers have come to dominate many games such as checkers, chess and poker, the strongest computers, surprisingly enough, still lose against a good human Go player.

Let’s put aside board games. What about quizzes? In 2011, IBM brought a machine named Watson to play Jeopardy, a popular American quiz show. Jeopardy goes like this: contestants are shown a clue, e.g. “This DC Comics superhero swims at high speeds and communicates telepathically with sea creatures.” The contestant who hits the buzzer first gets to respond, and the response must be phrased as a question. So the correct response in this case would be: “Who is Aquaman?”

To play Jeopardy, the machine needs to understand both the structure (syntax) and the meaning (semantics) of language. Watson won against the human contestants, but only by having access to 200 million pages of data, including the full text of Wikipedia. Even so, it could not answer every question correctly, which shows that computers are still some way from completely understanding the world we live in.

If computers can’t yet understand the world, at least they can understand movie lovers. Netflix, an on-demand Internet video streaming company, offered a US$1 million prize to anyone who could develop a better algorithm for predicting a person’s movie preferences. For example, the algorithm might look at me and my favourites – Transformers, Star Wars, DC Comics – and conclude that I would love to watch The Avengers (with a probability of, say, 88.5%).
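Netflix’s real algorithms are far more sophisticated, but the core idea – predict a rating from viewers with similar taste – fits in a few lines. All the viewer names, film choices and scores below are invented for illustration:

```python
# Toy 'people like you also liked...' predictor (invented data).
# Each viewer rates films from 1 to 5; we predict a missing rating by
# taking a similarity-weighted average over the other viewers.

ratings = {
    "me":    {"Transformers": 5, "Star Wars": 5, "Batman": 4},
    "alice": {"Transformers": 5, "Star Wars": 4, "Batman": 5, "Avengers": 5},
    "bob":   {"Transformers": 1, "Star Wars": 2, "Avengers": 2},
}

def similarity(a, b):
    """Crude taste similarity: fraction of shared films rated within 1 point."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    close = sum(1 for film in shared if abs(ratings[a][film] - ratings[b][film]) <= 1)
    return close / len(shared)

def predict(viewer, film):
    """Similarity-weighted average of other viewers' ratings for the film."""
    scores = [(similarity(viewer, other), seen[film])
              for other, seen in ratings.items()
              if other != viewer and film in seen]
    total = sum(weight for weight, _ in scores)
    return sum(weight * rating for weight, rating in scores) / total if total else None

print(predict("me", "Avengers"))
```

Because “alice” rates films much as I do and “bob” does not, the prediction for Avengers leans entirely on alice’s opinion. The prize-winning algorithms did something similar in spirit, just over millions of viewers and with far better mathematics.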

Now, what would happen if Netflix could increase that accuracy by 1%? Netflix made US$2.1 billion in revenue last year, so if better recommendations lifted revenue by just 1%, that would be worth US$21 million a year, and possibly much more.

The race to develop systems that reason, understand human behaviour, achieve goals and learn is only just beginning. Google, the search engine company, does not want to be the world’s number one search engine company; it already is, and that is almost beside the point. What Google really wants is to create artificial intelligence.

Have you ever stopped to consider that the machine you are looking at every day could actually be observing you, thinking and learning about you, every day? Let me leave you with that thought. May you have a nice day.

Terence Tan is a senior lecturer in the Department of Electrical and Computer Engineering of Curtin Sarawak’s School of Engineering and Science. He won the 2008 Excellence and Innovation in Teaching Award from Curtin University, Perth, Western Australia, and due to his experience and expertise, is often invited to speak to students on learning, leadership and technology. His current PhD research is on ‘Learning and Cooperating Multi Agent Systems’, which is essentially AI. In addition, he is a facilitator for the John Curtin Leadership Academy that equips students for community service, leadership and entrepreneurship. For any comments on the article, he can be contacted at