Can AI be used responsibly? Lloyd Austin seems to think so





Here’s what you need to remember: Austin has made it clear that the Pentagon will continue to adhere to what he called the defining principles of “responsible AI.”

While laying out the details of a new set of core principles for “responsible AI,” Defense Secretary Lloyd Austin expressed deep concern that China is pursuing a very different, and deeply worrying, approach to AI.

Speaking at the National Security Commission on Artificial Intelligence’s Global Emerging Technology Summit, Austin warned that China hopes to dominate global AI by 2030. Austin also made clear that Chinese leaders intend to develop and apply AI in a far more aggressive, and arguably unethical, manner.

“Beijing is already talking about using AI for a range of missions, from surveillance to cyberattacks to autonomous weapons. In AI, as in many other areas, we understand that China is our pacing challenge,” Austin said at the event, according to a Pentagon transcript.

Perhaps the most significant and immediate concern is Austin’s reference to autonomous AI weapons, given that China most likely does not adhere to the basic ethical guidelines of U.S. defense policy. For example, despite rapid technological advances that increasingly allow platforms to find, track, and destroy enemy targets without human intervention, the Pentagon firmly maintains its existing doctrine that any decision to use lethal force must be made by a human. Yet the technical capability of an AI-enabled system is such that sensors can find targets autonomously, send otherwise disparate pools of data to a central database, and make instant determinations about the specifics of a target. Extending this cycle, armed platforms have an evolving ability to take this maturing technology to the next level and fire on or destroy a target without human intervention.

U.S. weapons developers are likely quite concerned that Chinese military and political leaders do not view AI capability through any ethical parameters, a scenario that dramatically increases the risk to U.S. forces and other American assets.

Nonetheless, Austin made it clear that the Pentagon would continue to adhere to what he called the defining principles of “responsible AI.”

“Our development, deployment and use of AI must always be responsible, fair, traceable, reliable and governable,” Austin said. “We will use AI for clearly defined purposes. We will not tolerate unintended AI bias. We will watch out for unintended consequences.”

Austin also added that developers of AI-driven weapons will keep a close eye on the technology as it evolves, matures, and is applied.

“We will immediately adjust, improve or even turn off any AI systems that are not behaving the way we want them to,” Austin said.

When it comes to non-lethal AI applications, however, discussions are ongoing about potential uses for purely defensive purposes, such as interceptor missiles or drone defenses.

Kris Osborn is the defense editor for The National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert in the Office of the Assistant Secretary of the Army for Acquisition, Logistics and Technology. Osborn has also worked as an anchor and on-air military specialist for national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also holds an MA in Comparative Literature from Columbia University.

This article is being republished due to reader interest.

Image: Reuters



