The Child Or The Driver – Forcing AI To Make A Decision


An autonomous car is driving down a narrow road lined with trees when suddenly a child runs into its path. Should the car swerve knowing that by doing so it will collide with a tree and result in the death of its driver and its own destruction, or should it continue forward knowing that this will result in the death of the child?

The software behind the ‘brains’ of these vehicles is by no means intelligent enough to make this decision. In fact, Tesla’s self-driving system can only differentiate between road signs, traffic lights, objects and obstructions (objects that have entered the planned path). Even so, these systems are still a form of Artificial Intelligence.

Over the years Hollywood has painted a rather fantastical picture of AI as a system of superior intellect with the potential to outsmart the entire human race and achieve world domination. In reality, the probability of creating such a system is vanishingly small.

The human brain is an incredibly complex machine that develops and improves over a significant period of time. When asked to make a decision, it calls on memories of previous experiences and the emotions linked to those moments. Is it possible for AI to replicate a process that, in essence, makes us human?

There are many different techniques and methodologies available to AI practitioners for giving systems the ability to ‘learn’ and to make decisions based on that evolving knowledge. One of our most recent recruits carried out an internship with UWE’s Artificial Intelligence Group, where he experienced first-hand the complexity of developing Machine Learning algorithms, a popular and well-known method of enabling systems to learn ‘like a human’.

Machine Learning provides systems with the ability to learn and improve from experience without being explicitly programmed, often through access to data that they can use as a learning aid. As part of our efforts to improve safety on the rail network, Zircon has been experimenting with Machine Learning and the potential benefits it could bring to the industry. For the purposes of this article, however, a clearer example of Machine Learning in action is a program created by YouTuber SethBling to play the game Super Mario World.
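To make that definition concrete, here is a minimal sketch in Python of the core idea: a program that is never given an explicit rule, only labelled examples, and adjusts itself until its behaviour matches the data. Everything in it (the perceptron model, the data, the parameters) is invented for illustration and has nothing to do with Zircon’s rail experiments or SethBling’s program.

```python
# A minimal sketch of "learning from data": a perceptron that is never
# told the rule it must implement (logical OR), only shown labelled examples.

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    # Weighted sum of the inputs, thresholded to a 0/1 answer.
    activation = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if activation > 0 else 0

# The "experience" in the definition above: each mistake nudges the
# weights so the same mistake becomes less likely next time.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        bias += learning_rate * error
        weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]

print([predict(x) for x, _ in examples])  # -> [0, 1, 1, 1]
```

The rule mapping inputs to outputs is never written by the programmer; it emerges from repeated exposure to the examples, which is the essence of the definition above.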

The program, aptly named MarI/O, started out with absolutely no knowledge of the game. It had no idea that pressing the right button would allow Mario to move on through the level, or that running into enemies would result in Mario dying. Yet by using a Machine Learning technique known as Neuroevolution, MarI/O was able to complete a level in just 24 hours of learning time.
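SethBling’s real implementation uses the NEAT algorithm (written in Lua), which evolves the structure of the network as well as its weights. The toy Python sketch below is only meant to capture the basic loop of Neuroevolution: keep a population of networks, score each with a fitness function (for MarI/O, roughly the distance travelled through the level), and refill the population with mutated copies of the best performers. The fitness scenarios here are invented stand-ins, not anything from MarI/O itself.

```python
import random

def act(weights, inputs):
    # A single-neuron "network": weighted sum, thresholded to a button press.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > 0 else 0

def fitness(weights):
    # Stand-in for "how far did Mario get": reward pressing the button
    # exactly when the (made-up) sensor pattern says the path is clear.
    scenarios = [((1, 0), 1), ((0, 1), 0), ((1, 1), 0), ((0, 0), 0)]
    return sum(1 for inputs, best in scenarios if act(weights, inputs) == best)

# Start from a population of random networks.
population = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(20)]

for generation in range(50):
    # Selection: keep the fittest half ...
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # ... mutation: refill the population with noisy copies of the survivors.
    population = survivors + [
        [w + random.gauss(0, 0.2) for w in random.choice(survivors)]
        for _ in range(10)
    ]

print(max(fitness(p) for p in population))  # 4 = every scenario handled correctly
```

No gradient descent or hand-written strategy is involved; behaviour improves simply because better-scoring networks leave more descendants.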

[Embedded video: SethBling’s MarI/O learning Super Mario World]

As you can see from the clip, Neuroevolution attempts to simulate the way a human brain makes new connections. But so what if a machine can simulate how we learn? As mentioned earlier in the article, there is more to decision making than logic based on knowledge alone. Just because logic dictates that a specific response is valid does not mean it is the best outcome.

This was the case for Microsoft’s AI-powered bot Tay, which was designed to mimic the speech patterns of a teenage girl in order to respond to messages on certain social media sites. After just 16 hours of operation, Microsoft shut the service down when Tay began to post offensive messages through its Twitter account. The reason for this sudden dark turn was that the AI ‘brain’ behind Tay was learning to adjust its responses based on how other users were interacting with it. What the system was unable to do was recognise that the statements it was making were offensive, something that even the most insensitive of human beings would have recognised.
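As a heavily simplified illustration of that failure mode (Microsoft never published Tay’s actual architecture, so this sketch is pure invention), consider a bot that simply echoes whatever phrasing its users feed it most often. Nothing in its learning loop ever asks whether the output is acceptable.

```python
from collections import Counter

# Illustrative parrot-bot, not Tay: it repeats whatever it has heard most,
# with no notion of whether that phrasing is acceptable.
seen = Counter()

def learn(message):
    seen[message] += 1  # every user interaction reshapes future replies

def reply():
    # Optimises only for "sound like the users"; there is no filter here
    # asking whether the most common phrase is offensive.
    return seen.most_common(1)[0][0] if seen else "Hello!"

learn("hello there")
learn("hello there")
learn("something offensive")
print(reply())  # -> "hello there", purely because it was said most often
```

Feed the loop enough hostile input and the most common phrase, and therefore the bot’s output, becomes hostile too, which is essentially what happened to Tay.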

Going back to the example from the introduction: in the majority of cases, a human being’s gut reaction would be to try to avoid the child, but this is a decision fuelled by emotion and an ingrained sense of right and wrong. How could an artificially intelligent brain possibly hope to make the same decision without these inputs?

“Dr Calvin: A robot’s brain is a difference engine, it must have calculated-

Det Spooner: It did. I was the *logical* choice. It calculated I had a forty-five percent chance of survival. Sarah only had an eleven percent chance. That was somebody’s baby. Eleven percent is more than enough. A human being would have known that. But robots, nothing here [points at heart], they’re just lights and clockwork. But you go ahead and trust them if you wanna.”

I, Robot (2004)