In an era where rapid technological evolution is ceaselessly redefining the paradigms of warfare, the appointment of a Chief AI Ethics Officer for military AI has quickly emerged as a crucial factor in a nation’s defense capabilities. The profound implications of this integration demand an incisive and multifaceted examination. AI has recently sparked broad debate over ethical boundaries around inclusivity and bias, but the consequences of unregulated AI in combat extend further, to jeopardized lives and unresolved questions of responsibility. The complex tapestry of AI ethics in the military has drawn collective innovation and research from army commanders, scientists, technologists, philosophers, and others around the globe.
One of the most experienced professionals paving a new path toward the ethical use of AI is Dr. David Barnes. As the Chief AI Ethics Officer of the U.S. Army Artificial Intelligence Integration Center (AI2C) and Senior AI Ethics Advisor to the Defense Advanced Research Projects Agency (DARPA), Dr. Barnes has spearheaded AI research and leadership initiatives that inform decision-makers about the critical ethical considerations of AI in military operations. Recently named one of Forbes’ top 15 AI ethics leaders, Dr. Barnes advises governments and business leaders on how best to adopt AI ethics practices in their projects.
In an exclusive interview with Identity Review, Dr. David Barnes discusses how AI has been interwoven into the fabric of modern armed forces, the implications of unregulated technology and weaponry, and how best to balance AI’s rapidly growing societal relevance with ethical boundaries.
These views reflect those of Dr. Barnes personally and not those of the U.S. Army or the AI2C. Answers may be edited for clarity.
The Army designs, develops, and uses AI technology just like any other sector, abiding by national and international law. The center is located in Pittsburgh, a technology hub – “the Silicon Valley of the East,” as some have called it – with a long history of emerging technologies.
We work with Carnegie Mellon University, the National Robotics Engineering Center, and the University of Pittsburgh to discuss how they leverage technology in fields such as health care.
The center also works with startups that are addressing responsible AI issues, building MVPs with them and testing some of the boundaries of current problems. We do this to identify gaps that we can then analyze and address across the Army before they become larger programs of record. It allows us, in a way, to sandbox different elements of a strategy that could be scaled up to the Army at large. The “Chief” in the title means making those critical decisions in strategic planning for AI ethics.
My background is somewhat unique: I have operational experience, but I’m also a professor of philosophy and have an undergraduate degree in aerospace engineering.
But it goes further back than that – I’m a sci-fi and tech geek at heart and have been my whole life, and so my academic and research interests seem to coincide with the operational needs.
[The U.S. Army] recognizes the need for addressing ethical considerations, legal considerations, and societal implications of this technology – and in that sense it is a natural fit for me.
My research includes issues that have already been recognized but perhaps aren’t fully addressed, such as biases in data being translated into the decisions that algorithms make. So there are issues with the technology itself, and it’s a matter of bringing different elements together and recognizing that you need someone with an appropriate level of experience, education, and training in this field.
What I think is most important is how you bring different voices into the conversation. People come from different backgrounds and bring different elements – you have the pure technologists, the AI architects, the data scientists and engineers, and the philosophers – and they all have a particular view.
We can apply lessons learned into how we conduct our research and development, and how we set the conditions up for use in accordance with our values.
And what you’re beginning to see are programs and institutions offering advanced degrees in AI ethics, responsible AI, or ethics certification.
My thinking, and the way the Army is thinking, is about how we best incorporate AI ethics and responsible AI mindsets and processes into work that is tech-focused.
AI ethics needs to be baked into the R&D we are already conducting, not bolted on afterward. No one wants a do-over on inventing these new technologies.
We need to be able to attract, train, educate and retain expertise in artificial intelligence in every organization. Many don’t even realize how pervasive AI is in their workflows already.
But in addition, we need what I call “AI for every person,” “AI for the Army,” and “AI for every soldier.” What I mean by that is: what does every individual – from the youngest private all the way up to the most senior leaders – need to know about artificial intelligence? Education on the promised benefits, yes, but also on the limitations and risks, so that they can make better-informed decisions relative to their position.
And that’s a key element because so much of the debate is rooted in a true lack of education broadly about artificial intelligence.
Part of it is our own fault: we all grew up with AI in pop culture, with vivid imaginations informed by science fiction.
And so we have developed, not just in the US but globally, certain fears about artificial intelligence. Some of these are irrational – we may not even understand why we have these deeply rooted concerns – but others are clearly rational.
Yet because we still have these concerns, we need to do more to address them, whether through policy and regulation or by reworking and tuning our testing and evaluation processes.
You can’t plan for every single scenario, but you can have a good foundation and that really does start with education at all levels.
Firstly, any weapon system has to follow the law of armed conflict. Some are worried that we have a new race, an arms race if you will. I think that is a terrible way of describing it.
We do see this sort of race toward who can adopt the technology first – even President Putin said several years ago that whoever controls this could control the world. And so there’s this sense that, because there’s a race, we need to pull back all the constraints that might stifle innovation.
For instance, the way it’s being played out in conflict areas like Ukraine is interesting. We are harnessing the ability to bring in various features of a number of different artificial intelligence systems from multiple nations, and have them working synergistically through trial and error. These AI systems are very narrow, but because of the need, we are finding creative ways of leveraging these innovations.
What’s important to note is that the AI involved in combat isn’t the super-sophisticated, killer-robot kind. The features embedded in drones are still operated by a human, meaning we are still the authority, even for a so-called “suicide drone.”
The technology is nowhere near as sophisticated and capable as the movies suggest at this point.
It’s going to be much more mundane, such as geomapping and speeding up communication and, therefore, the decision-making process. The main advantage of AI now, as was philosophized many years ago, is to see ourselves and see the enemy in a way we really have not been able to before.
Think of the debate over fully autonomous vehicles: if a Tesla causes an accident, who is responsible?
In a way it’s similar with AI, but I don’t think it’s the wicked problem that some people claim it to be.
And part of the reason is that when we develop new technology, historically, there’s been a phase (sometimes short, sometimes longer) where the new technology is wrestling with a couple of different competing features. One is the societal acceptance of that technology, and the other is the response from the insurance industry in determining liability.
And I think those things are interwoven.
Part of the concern is who is responsible when a system fails, and we know these systems are going to fail much like humans.
Using this mentality, many associate that responsibility with some sort of punishment. So the dilemma becomes: how do you punish a machine?
I think you have to take a step or two back in terms of responsibility. You can hold the machine responsible for an incident, but you need to unpack whether it was the proximate cause, or whether the fault traces back to decisions made throughout the machine’s life cycle.
We also have to recognize that with technology there is a long tail of unintended consequences. These systems are going to act in ways we don’t anticipate, regardless of the rigorous safety testing, validation, and verification we have done.
In the military, the commander is ultimately responsible. If I’m deployed and I decide to use one of these systems and something goes wrong, I’m responsible for it in the same way I’m responsible for the actions of any one of the individuals under my command.
However, regulation hasn’t caught up with what we see in current technology and what we expect to see in the future.
So it’s a problem that will eventually be solved, but it may take some time for companies to put a toe in the water, because they’re unsure of their culpability and hesitant to take on responsibility in the absence of structured legislation.
A company can’t ignore the potential of the technology it is developing. From the companies’ perspective, the counter-argument is that they can build it responsibly but can’t necessarily control its responsible use. Yet there is going to be a democratization, a proliferation, of the technology, and it is going to be used in ways you can’t anticipate. We must create systems, much like we’ve done with compliance in other areas, that provide guardrails for how it is designed and built.
We are the limiting factor. How AI is used and constructed is a very human decision. We can’t just isolate technology from us.
As our interview with Dr. David Barnes draws to a close, the complexity and weight of responsibility in integrating AI into military operations become profoundly evident. It is imperative to recognize that the intersection of AI and military practice is not merely an amalgamation of hardware and algorithms, but a fabric woven with threads of ethics, accountability, legality, and societal values. The insight Dr. Barnes provides illuminates the necessity of a balanced, informed, and cautious approach to harnessing AI for defense purposes. Education, collaboration, and ethical consideration must act as the compass guiding this voyage into uncharted waters.
The narratives of science fiction and speculative imagination must be tempered with realistic assessment and anchored in the values society holds dear. Will AI serve as an extension of human ingenuity and values? Questions like this demand not only technical expertise but also philosophical reflection, ethical governance, and a collective commitment to safeguarding the principles that define humanity.
ABOUT IDENTITY REVIEW
Identity Review is a digital think tank dedicated to working with governments, financial institutions and technology leaders on advancing digital transformation, with a focus on privacy, identity, and security. Want to learn more about Identity Review’s work in digital transformation? Please message us at team@identityreview.com. Find us on Twitter.