Embedding AI Ethics: Making AI Safer for Humanity

Defining ethical boundaries for artificial intelligence (AI) is a nuanced and complex endeavor. Practitioners and users of AI generally agree that “ethical AI” is not a nice-to-have; it is a high priority, and developers of AI systems are responsible for ensuring that those systems behave ethically. When developers abdicate this responsibility, the adverse outcomes range from biased predictions to malicious applications, some of which have recently been flagged as potential threats to human rights.

In September 2021, United Nations High Commissioner for Human Rights Michelle Bachelet called on member states to impose a moratorium on the sale and use of artificial intelligence systems until the “negative, even catastrophic” risks they pose can be addressed. Her report warned that faulty and discriminatory behavior by AI systems could lead to decisions that put already marginalized groups at risk.

The reality is that a lot of technology can (and does) pose a threat to human rights; inaccurate facial recognition software is one example. And although the United Nations High Commissioner might call for a moratorium on the sale and use of facial recognition software on the general population, the chances of such restrictions actually taking effect are slim. The same can be said for AI systems more generally: the economic benefits that private firms and nation states stand to gain from their development, sale, and use are simply too high to expect sudden and significant restrictions.

Creating ethical AI that is available as an alternative to its unethical counterpart is a moral imperative for researchers and software engineers. For example, if an AI-based loan approval system that exhibits racial bias is the only product on the market, it is likely to be used because of the significant economic benefit to the lenders deploying it (if not in the US, then in other countries with less regulation and oversight). If lenders could instead select software that performed just as well without exhibiting racial bias, it could open the door to regulation in countries that might otherwise shy away from it.

It isn’t clear that the risks AI systems pose to society exceed their current and future societal benefits. Admittedly, improper use of AI systems has led to poor outcomes, such as racially biased classification by facial recognition systems and inaccurate automated drone targeting that has caused civilian deaths. However, humanity has also reaped incredible societal rewards from the careful application of AI systems, such as better diagnosis of cancers, the design of molecular peptides for use in drugs, and massive reductions in the cost of everyday consumer products when AI is implemented within logistics systems.

Assessing cost-benefit tradeoffs for more extreme events isn’t straightforward either. AI-driven tail risk (the cost associated with very low probability, exceptionally deleterious events) is surely nonzero. The philosopher and AI researcher Nick Bostrom has outlined an unlikely, but plausible, chain of events that could lead to the enslavement of humanity by a superintelligent AI. But there are also tail benefits that have yet to be realized. For example, AI could design entirely new classes of medical treatments that cure chronic diseases. (Recent advances in AI protein-folding technology suggest that this is not as unlikely as previously believed.)

It’s nearly impossible to determine the probability of either type of tail event with reasonable precision. Rather than banning the creation and sale of AI systems, which would preclude the good outcomes along with the bad, we should seek to both make AI intrinsically safer and establish methods to ensure societal resilience to bad outcomes it will cause. It is possible to make progress toward these goals in at least three ways:
 

  • We must ensure that we can understand the reasons for an AI’s actions. One of the chief concerns humans have about novel applications of AI is that we can’t always understand why an AI behaves the way it does. Explainable AI (XAI) technologies can help by giving people insight into an AI’s decision-making process, which in turn promotes trust. The Defense Advanced Research Projects Agency (DARPA) recently embarked on a multimillion-dollar research program designed to modernize XAI capabilities and increase human understanding and interpretability of AI systems. It is imperative that U.S. government funding for XAI initiatives continue after the conclusion of this program.
     
  • It must be easier for AI developers, and for AI itself, to explicitly reason about uncertainty. This enables AI to make better choices and helps experts quantify what could go wrong. The recently developed field of universal probabilistic programming empowers scientists and policy experts to encode their knowledge and assess risk when developing AI, including in the context of so-called “black-box” algorithms (those with opaque internal workings); a brief sketch of this approach follows the list. Investors in AI should insist that developers leverage universal probabilistic programming technologies when building algorithms, and research should continue on embedding first-class reasoning about uncertainty and risk into algorithms.
     
  • More broadly, the U.S. must take a global leadership position in the creation of ethical AI. Not only should we spur R&D investment from our funding agencies, which will make ethical AI an option, if not a leading solution, in the growing AI marketplace, but we should also continue to make ethical AI a social norm, not an anomaly. Ethical AI can benefit from the same argument the President has made for sustainable energy: investing early can deliver both economic and social benefits as demand grows.
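
As a concrete illustration of the second point, the following is a minimal sketch of how a universal probabilistic programming language lets developers encode expert knowledge and quantify risk. It uses NumPyro, one of several such tools, and purely hypothetical audit figures for a loan approval system; the scenario, model, and numbers are illustrative assumptions, not a description of any deployed system.

    import jax.numpy as jnp
    from jax import random
    import numpyro
    import numpyro.distributions as dist
    from numpyro.infer import MCMC, NUTS

    def audit_model(errors_observed=None, n_decisions=1000):
        # Expert prior: the system's rate of harmful errors is believed to be low,
        # but we are uncertain how low (Beta(2, 50) has a mean of roughly 4%).
        error_rate = numpyro.sample("error_rate", dist.Beta(2.0, 50.0))
        # Audit data: how many of n_decisions were later judged to be harmful.
        numpyro.sample("errors", dist.Binomial(n_decisions, error_rate),
                       obs=errors_observed)

    # Condition on a hypothetical audit: 37 harmful decisions out of 1,000 reviewed.
    mcmc = MCMC(NUTS(audit_model), num_warmup=500, num_samples=1000)
    mcmc.run(random.PRNGKey(0), errors_observed=jnp.array(37))
    samples = mcmc.get_samples()["error_rate"]
    print(f"posterior mean error rate: {samples.mean():.3f}")
    print(f"95% credible interval: [{jnp.percentile(samples, 2.5):.3f}, "
          f"{jnp.percentile(samples, 97.5):.3f}]")

Because the uncertainty is explicit, a risk officer or regulator can act on the full posterior distribution (for example, by requiring that its 97.5th percentile stay below an acceptable threshold) rather than on a single point estimate.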

 

As often happens during the rise of a transformative innovation, there are concerns about how AI is used or misused. And although it can be a force for good and help humans overcome some of our greatest obstacles, AI technology has its own set of flaws. Industry-wide AI ethics should be enforced, and the risks of violating human rights addressed, in a way that doesn’t push back against the inevitable progression of AI adoption, in the end making the use of AI safer for all of humanity.

Dr. David Rushing Dewhurst is a research scientist at Charles River Analytics. He designs probabilistic AI for national security applications, including financial resilience, information operations, and cybersecurity. He leads Charles River’s efforts on multiple DARPA programs.

©2022 AnOriginal. Photo provided courtesy of AnOriginal.com.


Charles River Analytics brings foundational research to life, creating human-centered intelligent systems at the edge of what’s possible, through deep partnerships with our customers. 