The Ethical Development of Artificial Intelligence

Artificial intelligence is widely regarded as a technological revolution. Some scientists believe it will improve people's quality of life, while others argue that it is a technology that must be controlled and monitored. Either way, it is a fact that developments in artificial intelligence are shaping our society faster than we can imagine, and this transformation brings many ethical issues with it.

Research has shown that the ethical values embedded in the data an artificial intelligence is trained on are transferred to the system itself.

As artificial intelligence spreads into every area of our lives, we are asking machines to make increasingly important decisions. There is already an international convention governing the use of autonomous drones, for example: if a drone could fire a missile and kill someone, a human must be part of the decision-making process before the missile is launched. So far, some of the critical control problems of AI have been addressed by rules and regulations like these.

The problem is that AIs are increasingly forced to make split-second decisions. In high-frequency trading, for example, more than 90 per cent of all financial transactions are now driven by algorithms, so there is no opportunity to hand the decision over to a human.

The same goes for autonomous cars. If a child suddenly appears in the road, the AI has to react immediately; there is no time to return control to a human driver. This creates interesting ethical challenges in AI and its control.

Moral Standard

Every society differs from the others in its beliefs and the moral values it holds; indeed, even individuals living in the same society may differ in their moral values. How, then, should artificial intelligence be guided when it comes to morality? To answer this question, scientists have spent the last few years collecting data from around the world in an effort to reach a moral standard.

The Moral Machine, an online platform created by researchers at the Massachusetts Institute of Technology (MIT), is a game in which players make a series of on-screen decisions, and it tells us a great deal about our similarities and differences.

The Moral Machine

The Moral Machine is an adaptation of a thought experiment known as the Trolley Dilemma. Briefly: a runaway railway carriage is speeding along the tracks. Ahead, five workers are repairing the rails. You see what is happening and realize that they will all die. By pulling the switch next to you, however, you can divert the carriage onto a track where only one person will die. What would you do in such a situation?

Over time, other variations of the Trolley Dilemma emerged, and our choices and their ethics have been discussed at length. The Moral Machine takes up this same issue but adds a technological twist, because its main purpose is to find out how driverless cars should be equipped to make similar decisions. By collecting responses from players in more than two hundred countries, The Moral Machine shows what people would want a driverless car to do when faced with a difficult choice.

Driverless cars, despite their remaining flaws, will be part of our lives in the near future. Perhaps they will protect us from accidents caused by distracted and angry drivers. But vehicles equipped with artificial intelligence will also have to make decisions of their own, so the system must be trained to react appropriately in situations it has never seen before, because a self-driving car may have to make a decision that risks causing a death.

This is exactly what the creators of The Moral Machine wanted to understand. So they developed a game that presents people with accident scenarios and asks them to choose who lives and who dies. Millions of people around the world have given their answers in this more detailed version of the Trolley Dilemma.

Surprising Results

The results so far are striking. Respondents most prefer to save a woman pushing a stroller, followed by a girl, a boy, and a pregnant woman; then doctors, athletes, and executives; then the elderly; then criminals; then dogs; and lastly cats. So when asked to choose between hitting a dog or a cat, most people prefer to save the dog.
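A ranking like this can be derived from many individual either/or answers. The sketch below is a toy illustration of the idea, not the actual MIT analysis: the character names and sample responses are invented, and each character's "save rate" is simply the share of dilemmas in which players spared it.

```python
# Toy aggregation of Moral Machine-style pairwise choices into a ranking.
# The characters and responses below are hypothetical sample data.
from collections import defaultdict

def save_rates(choices):
    """choices: list of (spared, sacrificed) pairs, one per dilemma.
    Returns each character's share of its dilemmas in which it was spared."""
    spared = defaultdict(int)
    appeared = defaultdict(int)
    for winner, loser in choices:
        spared[winner] += 1
        appeared[winner] += 1
        appeared[loser] += 1
    return {c: spared[c] / appeared[c] for c in appeared}

# Hypothetical responses: (who the player saved, who died).
sample = [
    ("stroller", "cat"), ("stroller", "dog"), ("girl", "cat"),
    ("dog", "cat"), ("girl", "dog"), ("stroller", "girl"),
]
ranking = sorted(save_rates(sample).items(), key=lambda kv: -kv[1])
print(ranking)  # most-spared characters first
```

With these invented answers the toy ranking comes out stroller, girl, dog, cat, mirroring the article's observation that most people spare the dog over the cat.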

The Moral Machine also collected data about who responded and where they lived, and some of the differences were significant. The researchers identified three clusters of countries: a western cluster of North America and several European countries; an eastern cluster of Asian countries; and a southern cluster of mostly Central and South American countries. Cultural differences significantly influenced the decisions made.

For example, the western cluster placed more emphasis on the safety of passengers inside the vehicle than on pedestrian safety. The eastern cluster, which tended to respect age, was less keen on preferentially saving the young. The southern cluster, on the other hand, showed a stronger preference for saving women and the physically fit. In the scenarios involving jaywalkers, respondents in wealthy countries preferred to save the person who obeyed the law, whereas respondents in less economically developed countries were far more tolerant of jaywalkers.
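Grouping countries into clusters like these amounts to clustering each country's average-answer profile. The sketch below illustrates the general idea with a plain k-means pass; the countries, the two feature dimensions, and every number are invented for illustration and bear no relation to the real survey data.

```python
# Minimal k-means sketch of clustering countries by answer profiles.
# All feature vectors below are invented illustration data.
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=20):
    """Plain k-means over a dict name -> feature vector; returns name -> cluster id."""
    names = list(points)
    centroids = [points[names[i]] for i in range(k)]  # first k profiles as seeds
    assign = {}
    for _ in range(iters):
        # Assign each country to its nearest centroid.
        assign = {n: min(range(k), key=lambda c: dist(points[n], centroids[c]))
                  for n in names}
        # Move each centroid to the mean of its members.
        for c in range(k):
            members = [points[n] for n in names if assign[n] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

# Hypothetical profiles: (favor passengers over pedestrians, favor the young).
countries = {
    "USA": [1.0, 0.0], "Germany": [0.9, 0.1],      # "western"-like
    "Japan": [0.0, 0.0], "China": [0.1, 0.1],      # "eastern"-like
    "Brazil": [0.5, 1.0], "Colombia": [0.5, 0.9],  # "southern"-like
}
print(kmeans(countries, k=3))
```

On this toy data the three invented groups separate into three clusters; the published analysis used its own clustering method, which this sketch does not attempt to reproduce.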

Results for Turkey

Looking at the results, Turkey is most similar to Argentina, while the most different results come from China.
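"Most similar" here can be read as a simple distance comparison between country profiles. The sketch below shows the idea; the profile vectors are invented so that the toy output mirrors the reported result, and they are not the real survey numbers.

```python
# Toy similarity ranking between country answer profiles.
# Profile vectors are invented illustration data.
import math

def most_similar(target, profiles):
    """Return the other countries sorted from most to least similar to `target`."""
    tv = profiles[target]
    others = [c for c in profiles if c != target]
    return sorted(others, key=lambda c: math.dist(tv, profiles[c]))

profiles = {
    "Turkey":    [0.55, 0.80, 0.40],
    "Argentina": [0.50, 0.85, 0.45],
    "France":    [0.70, 0.40, 0.60],
    "China":     [0.10, 0.20, 0.90],
}
order = most_similar("Turkey", profiles)
print(order[0], order[-1])  # nearest profile first, farthest last
```

With these made-up vectors the nearest profile to Turkey is Argentina and the farthest is China, matching the finding described above.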

If people in every country start from the truths they are taught, why do we put so much effort into imposing our own truths on others and making the world a narrow place for those who are different from us? Or is trying to make others like us itself just one of the many things we are taught and accept without questioning?

The fact is that a computer program cannot perfectly simulate real life. Anyone who has faced a sudden, dangerous situation knows there is no time to think about what to do; some decisions are made subconsciously. Driverless cars, however, will be superior to us in many ways, and when they encounter these dilemmas, perhaps they will find a third option that saves everyone. Regardless, this survey is a very good demonstration that a driverless car may have to adjust its decision-making depending on where in the world it is. Most importantly, it confronts us with the truth that our cultural differences shape our sense of morality.

Sources:

Jay Ingram, The Science of Why, Volume 5: Answers to Questions About the Ordinary, the Odd, and the Outlandish, Simon & Schuster, Illustrated edition (November 10, 2020)
https://dergipark.org.tr/en/download/article-file/1530133 (accessed May 23, 2022)