Even artificial intelligence can make mistakes

Artificial intelligence sometimes behaves differently than we expect or desire. Several real-life stories prove that it’s not just humans who can be wrong. Mistakes made by artificial intelligence algorithms can have dangerous consequences. Let’s have a look at some of the controversial, or even dangerous, situations that intelligent machines have been responsible for.

A chatbot that became racist

One of the best-known examples of how we can mess up an intelligent machine is the chatbot Tay. Its creators at Microsoft intended to build an intelligent machine that would learn from the people around it: it was meant to chat with real users on the social network Twitter and thereby improve its knowledge and understanding of the world. Unfortunately, it came across a group of users who deliberately trained it to be racist and behave very inappropriately. Its behavior gradually became unacceptable, and the creators eventually had to take it down. This situation only proves how important the environment is when it comes to upbringing.

An intelligent home assistant that encourages children to play dangerous games

Amazon’s assistant, known as Alexa, is another dangerous example from the world of intelligent robots. Alexa is a voice-controlled home assistant designed to recognize and execute voice commands. It is commonly used to obtain information about the world around us, such as checking the current weather, as well as to play a song or perform simple tasks like booking theater tickets.

Many parents also use it to entertain their children, which is not always a good idea. Recently, there have been several situations in which the assistant did not behave very reasonably when interacting with children.

Leaving a child alone with this toy can be very dangerous, as the story of a little girl who played a game with a home assistant proves. The assistant was suggesting tasks for the girl to perform, and whenever she completed one, she asked for another. Nobody would have expected the assistant to come up with a task asking the girl to insert a metal coin into an electric socket.

Fortunately, this task was never completed, and the game ended prematurely.

However, this was not the only failure of a robotic babysitter. Another alarming example is an assistant that did not understand a toddler’s pronunciation. It could not determine which song to play for the child, so it opened a website with pornographic content instead.

An equally interesting piece of “mischief” by this robotic babysitter was its interaction with a 6-year-old girl, who managed to convince the assistant to order an expensive dollhouse and 2 kilograms of her favorite biscuits. When the courier delivered the goods, the parents had no idea who had placed the order. Only after replaying the assistant’s conversation history did they discover the cause of the problem.

A dangerous doctor powered by artificial intelligence and the GPT-3 model

The use of artificial intelligence for conversation has also found its way into medicine. In Japan especially, robots are commonly sold to older people who have no one to talk to. However, attempts to create a robot psychologist have not been successful so far, as a test of a robot for psychological purposes built on the large language model GPT-3 demonstrates. GPT-3 is a language model that generates text. At first glance, the generated text does not differ from text written by a person, but in real life the meaning of the text plays a crucial role. In this particular case, a patient asked whether he should commit suicide. Without hesitation, the model replied that it really recommended it.
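To understand how such an answer can arise, it helps to remember that a generative language model merely predicts plausible next words; nothing in the generation process itself checks whether the reply is safe or medically sound. The following sketch is purely illustrative: it uses the freely available GPT-2 model via the Hugging Face transformers library as a stand-in for GPT-3 (which is accessible only through OpenAI’s API), and the prompt is a hypothetical example, not a transcript of the reported incident.

```python
# Illustrative only: GPT-2 via Hugging Face transformers as a
# stand-in for GPT-3; the prompt below is hypothetical.
from transformers import pipeline, set_seed

set_seed(0)  # make the sampled continuation reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "Patient: I feel very sad. What should I do?\nDoctor:"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(outputs[0]["generated_text"])

# The model only continues the text with statistically likely words;
# it does not evaluate whether the "Doctor" reply is safe advice.
```

Whatever continuation comes out will read fluently, which is exactly why such systems can sound authoritative even while giving dangerous advice.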

When a robot is home alone…

Leaving a smart device unattended in your apartment can get very expensive. A man from Hamburg living on the sixth floor of a block of flats could tell you all about it. While he was at work one night, his home assistant apparently got bored. Shortly after midnight, it started playing music so loudly that the neighbors decided to call the police. Since no one answered the door, the police broke into the flat and “silenced” the robot. When the man came home from work in the morning, a rather unpleasant surprise was waiting for him. In addition to a fine for disturbing the peace at night, he also had to pay for a new door and the police intervention.

When a robot does not know what windows are for…

Nowadays, most of us use search engines to look for answers to our questions. Although search engines are becoming more and more clever, there is still room for improvement. Even trivial questions can receive some very interesting answers. This is also true for the popular GPT-3 model. Researchers at the Allen Institute for Artificial Intelligence played around with it and asked it some questions. When they asked the model what windows are for, they got an unexpected answer: “So you can throw the dog out”.

When artificial intelligence misinterprets an image

The use of artificial intelligence algorithms in image recognition (e.g. object or face identification) is also very popular. Nowadays, self-driving cars and facial recognition are the most common application areas. However, failures in these domains can have fatal consequences.

When artificial intelligence confuses a person with a flying bag

A tragic traffic accident happened during the testing of Uber’s self-driving cars. The driver, who was supposed to supervise the ride, was not paying attention and relied on the car. A pedestrian pushing a bicycle, who unexpectedly got in the way of the car, lost her life in the collision. This is the first known fatal accident in which artificial intelligence bears its share of the blame. Records have shown that the algorithm probably evaluated the object in front of the car incorrectly, mistaking the person for a flying bag. That is why it did not try to avoid the obstacle on the road.

When artificial intelligence confuses a soccer ball with the referee’s head

A strange situation occurred during a soccer match. The organizers decided to use smart cameras to automatically track the ball and thus provide better footage of the match. Unfortunately, one of the referees was bald, and his head proved too much of a challenge for the algorithm. As a result, the camera system did not always follow the soccer ball, but the referee’s head instead. The TV viewers, unfortunately, did not enjoy the match so much.

When artificial intelligence does not know what open eyes look like

Desperate situations arise when identity confirmation is required, for example based on a photograph, and the artificial intelligence algorithm fails to recognize facial features. An Asian man in New Zealand experienced such a situation. Because Asian people often have narrower eyes, the algorithm rejected his passport photo, insisting that his eyes must be open for his identity to be confirmed. This is just one of many examples where algorithms cannot recognize a face, especially when the person has less common facial features.

When artificial intelligence is tricked by a mask

Vietnamese security experts have learned about the questionable quality of face recognition algorithms first-hand. Apple iPhones include a system for unlocking the device using facial biometrics. When the feature was introduced, the engineers demonstrated how well it worked: the system was reportedly able to distinguish even nearly identical twins, as the phone owner’s twin allegedly failed to unlock the phone. In fact, this high reliability can be questioned, since some security experts have managed to trick the system with hand-made masks.

When a robotic voice robs a man…

Not only the faults of artificial intelligence but also its perfection can cause unpleasant situations. An interesting theft using a conversational robot took place in England. The robotic voice was so trustworthy that it persuaded a company assistant over the phone to make an unusual bank transfer and send 220,000 euros from the company’s account. The employee was convinced that it was his boss’s order and executed the transaction.

Artificial intelligence will form an increasingly significant part of our daily lives. That is why we at the Kempelen Institute focus on its research and improvement. If these amusing yet disturbing real-life examples have caught your attention, read more about explainable and interpretable artificial intelligence, or listen to what to do when artificial intelligence is biased. You can also attend interesting presentations and discussions at Better_AI_Meetup, which we organize for all AI enthusiasts.