Conscious Ethical Warplanes
Deshna Naruka - University of Florida


Autonomy Capability Team 3, Air Force Research Laboratory

I've learned that you will often receive questions for which you have no answer. During a particularly charged Q&A session, in the midst of an advocacy campaign as an International Humanitarian Law (IHL) advocate with the American Red Cross, I was asked what the rules of warfare are regarding the use of Artificial Intelligence (AI). How can one distinguish between a civilian and a combatant, and trust autonomous weapons to do the same?

As autonomous air warfare technology becomes a reality, with aircraft such as the X-62A Variable In-flight Simulator Test Aircraft flown and tested over the past couple of years (Eddins, 2024), the ability of AI systems to adhere to the principles of distinction and proportionality is crucial to upholding international humanitarian and ethical standards. Through my research with AI models such as LEABRA (Local, Error-driven and Associative, Biologically Realistic Algorithm), a neural network learning algorithm that mirrors aspects of human thinking, I am compelled to develop an understanding of how such technology can use pattern recognition to mimic functions of the human brain. Models such as LEABRA help AI process and interpret complex environments in order to make decisions, in theory much as a human would. LEABRA draws on invariant object recognition, a concept from psychology and neuroscience in which an object's representation moves up through a visual hierarchy and becomes increasingly distinct, regardless of variations in its location, size, or viewing angle. Through invariant object recognition, AI models can mimic this aspect of human perception.

Tools such as convolutional neural networks (CNNs) help AI models recognize patterns and features in data in a way that resembles hierarchical human perception. This enables models like LEABRA to engage in many stages of human information processing; however, there are limitations. Because models like LEABRA are incapable of phenomenal consciousness, they are not self-aware and have no experience of the world itself, so they cannot perfectly replicate human information processing. What are the implications for the ethical employment of AI when humans and machines perceive the world differently?
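To make the idea of hierarchical, location-invariant perception concrete, here is a minimal sketch in Python (PyTorch). It is not LEABRA or any fielded system; the layer sizes and the class name TinyHierarchy are illustrative assumptions. Early convolutional layers respond to simple local patterns, deeper layers combine them into more abstract parts, and pooling discards exact position so the final representation is largely insensitive to where an object appears.

```python
# Illustrative sketch only (not LEABRA, not any AFRL system): a tiny CNN whose
# stacked layers show hierarchical feature extraction, the idea that invariant
# object recognition builds on. All sizes and names are assumptions.
import torch
import torch.nn as nn

class TinyHierarchy(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Early layers respond to simple, local patterns (edges, textures).
        self.low_level = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # pooling discards exact position, aiding invariance
        )
        # Deeper layers combine those patterns into larger, more abstract parts.
        self.mid_level = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Global pooling makes the final representation largely independent of
        # where the object sits in the image before classification.
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.mid_level(self.low_level(x)))

# Usage: a shifted copy of the same image should yield similar class scores,
# a toy analogue of recognizing an object regardless of its location.
model = TinyHierarchy()
image = torch.randn(1, 3, 64, 64)
shifted = torch.roll(image, shifts=8, dims=-1)
print(model(image), model(shifted))
```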

My journey as an IHL advocate with the American Red Cross has given me a profound awareness of the moral and ethical importance of this topic, especially with regard to the means of warfare. Efforts have been made to address the connection between these two fields of ethics and technology, with military use of AI expected to be "in accord with States' obligations under international humanitarian law, including its fundamental principles" (US Department of State, 2023). Joining my research on LEABRA with my advocacy for IHL reflects a broader vision for the future of AI: one that balances technological innovation with humanitarian responsibility.

One IHL principle, proportionality, forbids attacks on military objectives when the expected harm to civilians and loss of civilian life is excessive in relation to the anticipated military advantage gained. In a past discussion with fellow IHL advocates, I spoke about how vague the guidelines for this principle can be. Essentially, it is a judgment call: the final outcome lies with the one holding the trigger. When it comes to autonomous weapons and AI, making that choice seems even more ambiguous. The ability of an autonomous weapon to perceive the fairest and most proportional choice will be significant in a future conflict. AI has a powerful potential to mitigate human suffering through enhanced precision and decision-making in conflict zones.

However, this potential can only be realized by continuing to design AI systems that are aligned with human thinking and understanding, and with respect for the rules of warfare.

Amid rapid technological development, my dual focus on advanced AI models and humanitarian law furthers my understanding of a balanced approach to AI usage and ethical principles in times of conflict. Only then can we ensure that AI technologies serve not to control, but to enhance the well-being of humanity and further technological advancement for our societies.


References:

Eddins, J. M. (2024, May 20). The United States Air Force's focus on AI research and development. Airman Magazine. https://www.airmanmagazine.af.mil/Features/Display/Article/3776930/the-united-states-air-forces-focus-on-ai-research-and-development/

US Department of State Bureau of Arms Control, Verification and Compliance. (2023). Political declaration on responsible military use of artificial intelligence and autonomy.

Distribution Statement A: Approved for public release, distribution unlimited (Case #: AFRL-2024-3375)

