US releases global military AI principles

08:00 AM November 13, 2023

The US State Department issued its “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” on February 16, 2023. It urges nations developing military AI projects to follow its guidelines for ethical and responsible deployment. On November 1, these guidelines became more relevant after 28 nations pledged to mitigate military AI risks.

In just a year, ChatGPT has made the world feel generative AI’s transformative impact. Consequently, armed forces have been exploring its battlefield applications. These military AI principles could help nations minimize the loss of innocent lives when they use artificial intelligence systems.

This article will discuss the military AI principles in the United States’ recent declaration. Later, I will cover how countries have been using artificial intelligence in armed conflicts.

What are the US military AI principles?

The US Department of State, also known as the State Department, handles the country’s foreign policy. It released the military AI principles in response to the growing number of nations developing such capabilities.

It says military AI must be “ethical, responsible, and enhance international security.” Consequently, countries must follow these 10 guidelines:

  1. States must ensure their military groups follow these principles for the responsible development, deployment, and use of AI capabilities.
  2. Countries must ensure their military AI capabilities follow international law, especially international humanitarian law. Also, States must consider how to use these technologies to improve their implementation of international humanitarian law. 
  3. Nations must make sure senior officials oversee the deployment and development of armed forces’ AI capabilities. 
  4. Countries must minimize unintended bias in these technological capabilities. 
  5. Relevant personnel must exercise appropriate care in the development, deployment, and use of military artificial intelligence.
  6. Moreover, their methodologies, data sources, design procedures, and documentation must be transparent to and auditable by relevant defense personnel.
  7. The personnel using military AI must have proper training to understand its capabilities and limitations so they can mitigate its risks.
  8. Countries must ensure military artificial intelligence systems have well-defined purposes. Moreover, their design must fulfill those intended functions.
  9. States must verify the safety, security, and effectiveness of their armed AI technology.
  10. Countries must have appropriate safeguards to manage the failure risks of such systems.

How is military AI progressing?

The Israel Defense Forces (IDF) have been using artificial intelligence in their military operations. One such system, called Fire Factory, coordinates air raids.

The AI system “calculates appropriate munition loads, prioritizes targets, assigns them to aircraft and drones, and proposes a schedule.” In other words, it works out how much ammunition each air strike requires and how to deliver it.

Fire Factory chooses which locations to bomb first, assigns them to aircraft, and schedules their operations. Moreover, news reports have described Israel applying this AI on a real battlefield.

HT said military officials allegedly suggested employing it in periodic conflicts in the Gaza Strip. In 2021, the IDF reportedly called that year’s Gaza conflict the world’s first “AI war,” during which the AI system identified rocket launchpads and deployed drone swarms.

The United States Air Force has also been developing military AI. It collaborated with the global security and aerospace company Lockheed Martin to produce an AI-powered jet.

The aircraft is the VISTA X-62A, or Variable In-flight Simulation Test Aircraft. The Defense Advanced Research Projects Agency (DARPA), the US Air Force Test Center, and the Air Force Research Laboratory (AFRL) support VISTA. An AI agent flew it on a 17-hour test flight in December 2022.

More importantly, it has Calspan’s VISTA Simulation System (VSS), Lockheed Martin’s Model Following Algorithm (MFA), and System for Autonomous Control of the Simulation (SACS). VSS enables the AI aircraft to mimic existing vehicles.

For example, it can perform like the F-16 fixed-wing fighter jet or the MQ-20 drone. As a result, the US Air Force can test new AI systems on the flight characteristics of existing aircraft without risking those airframes.

Conclusion

The United States announced military AI principles and urged the world to follow them. It believes these guidelines will ensure the ethical and responsible usage of combat artificial intelligence. 

More countries will adopt this technology as their neighbors reap its benefits. In response, citizens should learn more about how AI works.

Learn more about the United States’ latest declaration on the State Department webpage. Also, check out more digital tips and trends at Inquirer Tech.
