Artificial intelligence increases risk for soldiers

(Courtesy of Open Clip Art Vectors)

Artificial intelligence is defined as intelligence exhibited by machines or software as they interact with their environment.

This rapidly growing field is already making a large impact across many industries and will inevitably influence the lives of millions.

Just this past summer, more than a thousand scientists and artificial intelligence researchers signed an open letter presented at the International Joint Conference on Artificial Intelligence. The letter publicly calls for a ban on offensive autonomous weapons in order to avoid a global artificial intelligence arms race.

Many people make the mistake of equating autonomous weapons with remotely piloted aircraft, or drones. The key difference is that drones require a person to fly the craft and make targeting decisions.

Unlike drones, autonomous weapons analyze their surroundings and select and engage targets on their own.

Once human control is removed, how do we know what decisions the weapon will make in a hostile situation?

This is the classic trolley problem: which is ethically more acceptable, for the autonomous weapon to save the lives of five soldiers or the life of one?

An even greater ethical concern is how these weapons would change human behavior toward warfare.

By decreasing the number of soldiers on the field, these autonomous weapons will also be “lowering the threshold for going to battle,” as the petition letter stated.

Elon Musk, the CEO of Tesla Motors, is worried about exactly this behavior and is one of the thousands of signatories calling for a worldwide ban on offensive autonomous weapons.

Musk emphasizes that this could set off a revolution in weaponry “comparable to gunpowder and nuclear arms.”

A second distinction is that autonomous weapons are not like nuclear weapons: they cost far less to produce, their raw materials are abundant and they can be mass produced. That is exactly what makes them so concerning.

It will therefore only be a matter of time before this technology falls into the wrong hands and is used to target specific ethnic groups or to destabilize nations.

While AI weapons could be used to carry out assassinations, the same technology could also be used to create new tools that keep soldiers safe.

Many of those who signed the letter argue that, rather than being used for killing, AI should be directed toward its great potential to benefit humanity.

Daniel Victor of The New York Times writes, “Proponents have predicted applications in fighting disease, mitigating poverty and carrying out rescues.”

From an ethical perspective, these applications all improve living conditions and have the potential to save lives.

If these capabilities are publicized, researchers and citizens may soon feel a moral obligation to build AI systems for the right causes, especially since some of these robots require less sophisticated technology than what is found in most smartphones.

In the words of Stephen Hawking, “While the development of artificial intelligence could be the biggest event in human history, unfortunately, it might also be the last.”
