Killer robots are not science fiction. A push to ban them is growing.

It may have seemed like an obscure United Nations conclave, but a meeting this week in Geneva was closely watched by experts in artificial intelligence, military strategy, disarmament and humanitarian law.

The reason for the interest? Killer robots – drones, weapons and bombs that decide on their own, with artificial brains, whether to attack and kill – and what should be done, if anything, to regulate or ban them.

Once the domain of science fiction films like the “Terminator” and “RoboCop” series, killer robots, more technically known as lethal autonomous weapons systems, have been developed and tested at an accelerated pace with little oversight. Some prototypes have even been used in real conflicts.

The evolution of these machines is seen as a potentially seismic event in warfare, similar to the invention of gunpowder and nuclear bombs.

This year, for the first time, a majority of the 125 countries that belong to an agreement called the Convention on Certain Conventional Weapons, or CCW, said they wanted restrictions on killer robots. But they have encountered opposition from members developing these weapons, notably the United States and Russia.

The group’s conference ended on Friday with only a vague statement about considering possible measures acceptable to all. The Campaign to Stop Killer Robots, a disarmament group, said the outcome fell far short of what it had sought.

The CCW, sometimes known as the Inhumane Weapons Convention, is a framework of rules that prohibit or restrict weapons considered to cause unnecessary, unjustifiable and indiscriminate suffering, such as incendiary weapons, blinding lasers and booby traps that do not distinguish between combatants and civilians. The convention has no provisions governing killer robots.

Opinions differ on an exact definition, but killer robots are widely viewed as weapons that make decisions with little or no human involvement. Rapid advances in robotics, artificial intelligence and image recognition are making such weaponry possible.

The drones that the United States has used extensively in Afghanistan, Iraq, and elsewhere are not considered robots because they are operated remotely by people, who choose targets and decide whether or not to fire.

For war planners, the weapons hold the promise of keeping soldiers out of harm’s way and making decisions faster than a human would, by giving more battlefield responsibility to autonomous systems such as unmanned drones and unmanned tanks that independently decide when to strike.

Critics argue that it is morally repugnant to delegate lethal decision-making to machines, regardless of their technological sophistication. How does a machine differentiate an adult from a child, a fighter with a bazooka from a civilian with a broom, a hostile combatant from a wounded or surrendering soldier?

“Fundamentally, autonomous weapon systems raise ethical concerns for society about the substitution of human decisions about life and death with sensors, software and machine processes,” Peter Maurer, the president of the International Committee of the Red Cross and an outspoken opponent of killer robots, told the Geneva conference.

Ahead of the conference, Human Rights Watch and the International Human Rights Clinic at Harvard Law School called for steps towards a legally binding agreement that requires human oversight at all times.

“Robots lack the compassion, empathy, mercy and judgment necessary to treat humans humanely, and they cannot understand the inherent value of human life,” the groups argued in a briefing paper supporting their recommendations.

Others said autonomous weapons, rather than reducing the risk of war, could do the opposite – providing antagonists with means of inflicting damage that minimize the risk to their own soldiers.

“Mass-produced killer robots could lower the threshold of war by removing humans from the chain of destruction and unleashing machines that could engage a human target without any humans at the controls,” said Phil Twyford, New Zealand’s minister for disarmament.

The conference was widely viewed by disarmament experts as the best opportunity to date to find ways to regulate, or even ban, the use of killer robots under the CCW.

It was the culmination of years of discussions by a group of experts who had been invited to identify challenges and possible approaches to reducing threats from killer robots. But the experts could not even agree on basic questions.

Some, like Russia, insist that any decision on limits must be unanimous – in effect giving opponents of restrictions a veto.

The United States argues that existing international laws are sufficient and that a ban on autonomous weapons technology would be premature. The chief US delegate to the conference, Joshua Dorosin, proposed a non-binding “code of conduct” for the use of killer robots – an idea that disarmament advocates have dismissed as a delaying tactic.

The US military has invested heavily in artificial intelligence, working with the largest defense contractors, including Lockheed Martin, Boeing, Raytheon and Northrop Grumman. The work has included projects to develop long-range missiles that detect moving targets based on radio frequencies, swarming drones capable of identifying and attacking a target, and automated missile-defense systems, according to research by opponents of the weapons systems.

The complexity and varied uses of artificial intelligence make it harder to regulate than nuclear weapons or landmines, said Maaike Verbruggen, an expert on emerging military security technologies at the Center for Security, Diplomacy and Strategy in Brussels. She said the lack of transparency about what different countries are building has created “fear and concern” among militaries that feel they must keep up.

“It’s very difficult to get a sense of what another country is doing,” said Ms. Verbruggen, who is studying for a doctorate on the subject. “There is a lot of uncertainty, and it drives military innovation.”

Franz-Stefan Gady, a researcher at the International Institute for Strategic Studies, said that “the arms race for autonomous weapons systems is already underway and will not be called off any time soon.”

Yet even as the technology becomes more advanced, there has been a reluctance to use autonomous weapons in combat because of fears of mistakes, Mr. Gady said.

“Can military commanders trust the judgment of autonomous weapons systems? Here the answer at the moment is clearly ‘no’ and will remain so for the foreseeable future,” he said.

The autonomous weapons debate has spilled over into Silicon Valley. In 2018, Google said it would not renew a contract with the Pentagon after thousands of its employees signed a letter protesting the company’s work on a program that uses artificial intelligence to interpret images that could be used to choose drone targets. The company also created new ethical guidelines prohibiting the use of its technology for weapons and surveillance.

Others believe the United States is not going far enough to compete with its rivals.

In October, former Air Force software chief Nicolas Chaillan told the Financial Times he had stepped down because of what he saw as weak technological progress within the US military, particularly in the use of artificial intelligence. He said policymakers were being held back by ethical questions while countries like China pressed ahead.

There aren’t many verified examples on the battlefield, but critics point to a few incidents that show the potential of the technology.

In March, United Nations investigators said a “lethal autonomous weapons system” had been used by government-backed forces in Libya against militia fighters. A drone called Kargu-2, made by a Turkish defense contractor, tracked and attacked the fighters as they fled a rocket attack, according to the report, which did not say whether humans were controlling the drones.

In the 2020 Nagorno-Karabakh war, Azerbaijan fought Armenia with attack drones and missiles that hover in the air until they detect a signal from an assigned target.

Many disarmament advocates said the conference outcome had hardened what they described as a determination to push for a new treaty in the coming years, such as those that ban landmines and cluster munitions.

Daan Kayser, an autonomous weapons expert at PAX, a peace advocacy group based in the Netherlands, said the conference’s failure to agree even to negotiate on killer robots was “a really clear signal that the CCW is not up to the task.”

Noel Sharkey, an artificial intelligence expert and chairman of the International Committee for Robot Arms Control, said the meeting demonstrated that a new treaty was preferable to continued CCW deliberations.

“There was a sense of urgency in the room,” he said, that “if there is no movement, we are not ready to stay on this treadmill.”

John Ismay contributed reporting.
