Researchers at the University of California, Berkeley’s Center for Long-Term Cybersecurity have released a new set of recommendations to help governments at all levels evaluate the potential risks and harms associated with new artificial intelligence (AI) technologies.
The report, released earlier this month, is the result of a comparative analysis of AI risk and impact assessments from five jurisdictions around the world: Canada, New Zealand, Germany, the European Union, and San Francisco, Calif. The report compares how these different assessment models approach key questions related to the safety of AI systems, including what impacts they could have on human rights and the environment, and how the resulting range of risks should be managed.
The report’s author – Louis Au Yeung, a recent Master of Public Policy graduate from the Goldman School of Public Policy at the University of California, Berkeley – focuses on “AI risk and impact assessments,” which are formalized, structured assessments used to characterize the risks arising from the use of AI systems and to identify proportionate risk mitigation measures.
“These assessments may be used by both public and private entities hoping to develop and deploy trustworthy AI systems, and are broadly considered a promising tool for AI governance and accountability,” Au Yeung wrote. “Ensuring that AI systems are safe and trustworthy is critical to increasing people’s confidence and harnessing the potential benefits of these technologies…. Risk and impact assessments provide a structured approach for assessing the risks of specific AI systems, differentiating them based on their riskiness, and adopting mitigation measures that are proportionate to the risks.”
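To make the idea of a tiered, proportionate assessment concrete, here is a minimal, hypothetical sketch in Python. The tier names, the example system, and the mitigation lists are illustrative assumptions for this article, not categories drawn from the report or from any of the surveyed frameworks:

```python
# Hypothetical sketch of a tiered AI risk assessment record.
# Tier names and mitigation lists are illustrative assumptions,
# not taken from the report or any surveyed framework.
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4  # e.g., systems a framework would ban outright


# Mitigation measures scale with the assessed tier ("proportionate" measures).
BASELINE_MITIGATIONS = {
    RiskTier.MINIMAL: ["documentation"],
    RiskTier.LIMITED: ["documentation", "notify affected people"],
    RiskTier.HIGH: [
        "documentation",
        "notify affected people",
        "human oversight",
        "external review",
        "bias testing",
        "ongoing monitoring",
    ],
    RiskTier.UNACCEPTABLE: ["do not deploy"],
}


@dataclass
class AssessmentRecord:
    system_name: str
    tier: RiskTier
    mitigations: list = field(default_factory=list)


def assess(system_name: str, tier: RiskTier) -> AssessmentRecord:
    """Attach the mitigation set proportionate to the assessed tier."""
    return AssessmentRecord(system_name, tier, list(BASELINE_MITIGATIONS[tier]))


# Example: a hypothetical high-risk system gets the full mitigation set.
record = assess("resume-screening model", RiskTier.HIGH)
print(record.tier.name, "->", record.mitigations)
```

The point of the sketch is the shape of the process the report describes: first differentiate systems by riskiness, then attach mitigation measures that scale with that assessment rather than applying a uniform checklist to every system.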
In a press release, the Center for Long-Term Cybersecurity says the paper specifically focuses on ongoing efforts at the National Institute of Standards and Technology (NIST) to develop an AI risk management framework. Congress has tasked the agency with developing a voluntary framework that organizations can use to promote trustworthy AI development and use.
Recommendations in the report include:
- Urging governments to treat the risk mitigation measures emphasized across all of the surveyed frameworks as an essential starting point. These measures include human oversight, external review and engagement, documentation, testing for and mitigating bias, notifying people affected by an AI system of its use, and regular monitoring and evaluation.
- Stressing that governments need to account for impacts on inclusiveness and sustainability in order to protect the wider interests of society and ensure that marginalized communities are not left behind.
- Including individuals and communities affected by the use of AI systems in the process of designing risk and impact assessments to help co-construct the criteria featured in the framework.
- Banning the use of specific AI systems that present unacceptable risks, to ensure that fundamental values and safety are not compromised.
- Engaging in periodic risk and impact reassessments to ensure that continuously learning AI systems still meet the required standards after they have undergone notable changes.
- Tying risk and impact assessments to procurement and purchase decisions to incentivize the use of voluntary frameworks.
“The widespread use of AI risk and impact assessments will help to ensure we can gauge the risks of AI systems as they are developed and deployed in society, and that we are informed enough to take appropriate steps to mitigate potential harms,” Au Yeung wrote. “In turn, this will help promote public confidence in AI and enable us to enjoy the potential benefits of AI systems.”