US releases global military AI principles
The US State Department issued its “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy” on February 16, 2023. It urges nations developing military AI projects to follow its steps for ethical and responsible deployment. On November 1, 2023, these guidelines became more relevant after 28 nations pledged to mitigate military AI risks.
ChatGPT made the world feel generative AI’s transformative impact within a single year. Consequently, armed forces have been exploring its battlefield applications. These military AI principles could help nations minimize the loss of innocent lives when they deploy artificial intelligence systems.
This article will discuss the military AI principles inside the United States’ recent declaration. Later, I will cover how countries have been using artificial intelligence in armed conflicts.
What are the US military AI principles?
The US Department of State, or State Department, handles the country’s foreign policy. It released the military AI principles in response to the growing number of nations developing such capabilities.
It says military AI must be “ethical, responsible, and enhance international security.” To that end, countries must follow these 10 guidelines:
- States must ensure their military groups follow these principles for the responsible development, deployment, and use of AI capabilities.
- Countries must ensure their military AI capabilities follow international law, especially international humanitarian law. Also, States must consider how to use these technologies to improve their implementation of international humanitarian law.
- Nations must make sure senior officials oversee the deployment and development of armed forces’ AI capabilities.
- Countries must minimize unintended bias in these technological capabilities.
- Relevant personnel must exercise appropriate care in the development, deployment, and use of military AI capabilities.
- Moreover, the methodologies, data sources, design procedures, and documentation of these capabilities must be transparent to and auditable by relevant defense personnel.
- Personnel using military AI must have proper training to understand its capabilities and limitations so they can mitigate its risks.
- Countries must ensure military artificial intelligence systems have well-defined purposes. Moreover, their design must fulfill those intended functions.
- States must verify the safety, security, and effectiveness of their armed AI technology.
- Countries must have appropriate safeguards to manage the failure risks of such systems.
You may also like: Chinese military laser allegedly fires nonstop
How is military AI progressing?
The Israel Defense Forces (IDF) have been using artificial intelligence in their military operations through Fire Factory, a system that coordinates air raids.
The AI system “calculates appropriate munition loads, prioritizes targets, assigns them to aircraft and drones, and proposes a schedule.” In other words, it calculates how much ammunition an air strike requires.
Fire Factory chooses which locations to strike first, assigns them to aircraft, and schedules the operation. Moreover, news reports described how Israel applied its AI on a real battlefield.
HT said military officials allegedly suggested employing it in periodic conflicts in the Gaza Strip. In 2021, the IDF reportedly called that conflict the world’s first “AI war,” during which the AI system identified rocket launchpads and deployed drone swarms.
The United States Air Force has also been developing military AI. It collaborated with the global security and aerospace company Lockheed Martin to produce an AI-powered jet.
You may also like: Philippines spends most time online in Asia Pacific
The aircraft is the X-62A VISTA, or Variable In-flight Simulator Test Aircraft. The Defense Advanced Research Projects Agency (DARPA), the US Air Force Test Center, and the Air Force Research Laboratory (AFRL) support VISTA. An AI agent flew it for more than 17 hours of test flights in December 2022.
More importantly, it carries Calspan’s VISTA Simulation System (VSS), Lockheed Martin’s Model Following Algorithm (MFA), and its System for Autonomous Control of the Simulation (SACS). VSS enables the aircraft to mimic existing vehicles.
For example, it could perform like the F-16 fixed-wing fighter jet or the MQ-20 combat drone. As a result, the US Air Force can test new AI systems on existing aircraft designs without risking them.
Conclusion
The United States announced military AI principles and urged the world to follow them. It believes these guidelines will ensure the ethical and responsible usage of combat artificial intelligence.
More countries will adopt this technology as their neighbors reap its benefits. In response, citizens should learn more about how AI works.
Learn more about the United States’ latest declaration on the State Department webpage. Also, check out more digital tips and trends at Inquirer Tech.