Trends: Artificial intelligence – trust yes, consideration no

A lack of trust in the methods of artificial intelligence has long been considered an obstacle in the relationship between humans and machines. However, research shows that the real problem is rather that we tend to exploit AI.

Let’s start with the good news: fundamentally, the relationship between humans and machines is one of trust. Whether we are researching companies or the symptoms of an illness, when in doubt we often trust the search engine, or Dr. Google, before consulting other sources. Trust is good, control is better? That was once the case. Powerful algorithms, machine learning, AI methods, autonomous driving and chatbots were futuristic pipe dreams only a few years ago; today they are everyday technologies that either already work or hold real promise. Realism and objectivity, rather than fear of technology, dominate the discussion. The debate is no longer about whether certain things are possible, but about how we as a society want to deal with them.

As early as 1942, science fiction author Isaac Asimov formulated rules for how robots should behave and how they should be programmed. Known as the Laws of Robotics, they state that robots must not harm humans and must obey them without fail. But who establishes the rules that govern the behavior of humans toward machines and technology? Clearly there is a need, and not just since e-scooters started having to be fished out of rivers en masse.

Even developers of chatbots regularly complain that their AIs and protocols are overwhelmed by abusive language from users. Fully aware that they are interacting with a bot, some users vent their frustrations on it. Microsoft’s bot Tay, fed by a destructive Twitter bubble, turned into an unpalatable racist within hours. Equipping a machine with AI apparently does not make people treat it with more consideration. That, at least, is what research findings from Ludwig Maximilian University in Munich (LMU) suggest.

In a study, the scientists used behavioral game theory to investigate how test subjects behave toward autonomous vehicles without occupants. The result is sobering: people treat machines inconsiderately. Humans do trust machines in principle, says Jurgis Karpus, who led the study, but they shamelessly exploit the trained “good nature” of AI: “In traffic, a human would give way to a human driver, but not to a self-driving car,” says Karpus. This pattern was so pronounced that the researchers call it “algorithm exploitation.” The resulting “reluctance to cooperate with machines” is a challenge for future interaction between humans and AI, according to Karpus.
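The incentive behind this exploitation is easy to see in the game of chicken, one of the classic setups in behavioral game theory. Here is a minimal sketch in Python, with illustrative payoffs that are not taken from the LMU study: if one player is known to always yield, as a safety-first self-driving car effectively is, the other player’s best response is to never yield.

```python
# Game of chicken with illustrative payoffs (assumed for this sketch,
# not taken from the LMU study). Each entry maps (my action, your
# action) to (my payoff, your payoff).
PAYOFFS = {
    ("yield", "yield"): (2, 2),   # both give way: slight delay for both
    ("yield", "drive"): (1, 3),   # I give way, you pass first
    ("drive", "yield"): (3, 1),   # I pass first, you give way
    ("drive", "drive"): (0, 0),   # neither gives way: collision
}

def best_response(opponent_action: str) -> str:
    """Return the action that maximizes my payoff against an
    opponent whose action is known in advance."""
    return max(("yield", "drive"),
               key=lambda action: PAYOFFS[(action, opponent_action)][0])

# A human driver's move is uncertain, so yielding often pays off.
# A self-driving car programmed to always avoid risk is predictable,
# and the rational (if inconsiderate) reply is to cut it off:
print(best_response("yield"))  # -> "drive"
```

The predictability that makes the machine safe is exactly what makes it exploitable.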

This challenge also applies to one’s own company: anyone who already relies on learning machines, chatbots or complex technologies in their business, or plans to take this step in the future, must keep one thing in mind: artificial intelligence and smart machines only work if they are accepted by both customers and employees. Otherwise, a chatbot degenerates from a source of information into a virtual punching bag, and expensive machines may not be treated with the necessary care out of frustration with the technology. The new technologies must therefore be introduced in an understandable and transparent way, as what they can be: enormous facilitators of most people’s everyday work.
