Researchers at the Massachusetts Institute of Technology (MIT) have created a database called the AI Risk Repository that catalogs more than 700 risks related to artificial intelligence (AI), with the aim of helping people better understand and manage the risks AI may bring.
The repository provides a comprehensive, transparent list of risks that auditors, policymakers, and scientists can use to continuously monitor AI models after they are deployed.
These risks are divided into two main classification systems: Causal Taxonomy and Domain Taxonomy.
Features of the AI Risk Repository
- Dynamic updates: The repository is a “living” database, meaning it will be updated over time to cover emerging risks and new research findings.
- Broad applicability: The repository is designed to give policymakers, auditors, academic researchers, and industry practitioners a common reference framework for understanding and managing AI-related risks.
The AI Risk Repository Consists of 3 Main Parts:
AI Risk Database
Captures over 700 risks extracted from 43 existing frameworks, each with supporting citations and page numbers.
Causal Taxonomy
- Entity: Identifies whether the risk is caused by humans, AI systems, or other entities.
- Intent: Distinguishes whether the risk is the result of intentional or unintentional behavior.
- Timing: Captures when the risk occurs, i.e. before deployment, after deployment, or in other situations.
Domain Taxonomy
- The Domain Taxonomy classifies risks by their nature or impact into the following seven main domains (a brief code sketch of both taxonomies follows this list):
- Discrimination & Toxicity: Includes unfair discrimination, exposure to harmful content, and unequal representation of different groups.
- Privacy & Security: Covers privacy leaks, security vulnerabilities, and attacks on AI systems.
- Misinformation: Includes the generation and spread of false or misleading information.
- Malicious Actors & Misuse: Involves the use of AI for large-scale disinformation campaigns, cyberattacks, weapons development and use, and other malicious activities.
- Human-Computer Interaction: Includes over-reliance on AI systems, unsafe use, and loss of human autonomy.
- Socioeconomic & Environmental Harms: Covers socioeconomic issues such as concentration of power, declining job quality, and unfair distribution of benefits, as well as the environmental impact of AI systems.
- AI System Safety, Failures & Limitations: Covers AI systems whose behavior conflicts with human goals, insufficient safety and robustness, lack of transparency, and related limitations.
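To make the two taxonomies above more concrete, here is a minimal Python sketch of how a single risk record might be represented under them. The class and field names (RiskEntry, entity, intent, timing, domain, source, page) are illustrative assumptions, not the repository's official schema.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

# Illustrative encoding of the Causal and Domain Taxonomies described above.
# Field names and value labels are assumptions for this sketch.

class Entity(Enum):
    HUMAN = "Human"
    AI = "AI"
    OTHER = "Other"

class Intent(Enum):
    INTENTIONAL = "Intentional"
    UNINTENTIONAL = "Unintentional"
    OTHER = "Other"

class Timing(Enum):
    PRE_DEPLOYMENT = "Pre-deployment"
    POST_DEPLOYMENT = "Post-deployment"
    OTHER = "Other"

# The seven domains of the Domain Taxonomy, as listed above.
DOMAINS = (
    "Discrimination & Toxicity",
    "Privacy & Security",
    "Misinformation",
    "Malicious Actors & Misuse",
    "Human-Computer Interaction",
    "Socioeconomic & Environmental Harms",
    "AI System Safety, Failures & Limitations",
)

@dataclass
class RiskEntry:
    """One catalogued risk: a description plus its causal and domain labels."""
    description: str
    entity: Entity
    intent: Intent
    timing: Timing
    domain: str                 # one of DOMAINS
    source: str                 # the document the risk was extracted from
    page: Optional[int] = None  # page number of the supporting quote

# A hypothetical entry, labelled the way the taxonomies above suggest.
example = RiskEntry(
    description="A deployed model generates and spreads misleading claims at scale.",
    entity=Entity.AI,
    intent=Intent.UNINTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
    domain="Misinformation",
    source="(hypothetical source paper)",
)
```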
The study shows that most risks (51%) are caused by the decisions or behavior of AI systems, and that 65% of risks emerge only after AI is deployed. This indicates that although humans play a key role in AI development, many risks arise once AI systems begin to operate autonomously. In addition, the study found that the most frequently discussed risk domains in the literature are AI system safety, failures and limitations (76%) and socioeconomic and environmental harms (73%).
Brian Jackson, chief research director at Info-Tech Research Group, described the database as “extremely helpful for leaders working to establish AI governance in their organizations. AI introduces many new risks to organizations and exacerbates some existing risks. It takes an enterprise risk expert to sort through all of them, but now MIT has done the hard work for organizations.”
He added: “It’s available in a handy Google Sheet that you can copy and customize to your needs. The database categorizes AI risks into cause and effect and into seven different areas. It’s an indispensable foundation of knowledge for anyone working on AI governance, and it’s also a great tool for them to use to create their own organization-specific catalog.”
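For readers who do copy the sheet, a common next step is to export it to CSV and filter it programmatically. The sketch below is a rough example, assuming a local export named ai_risk_repository.csv with columns roughly matching the taxonomies described above (Entity, Timing, Domain); the actual column headers in the sheet may differ, so adjust them to your copy.

```python
import pandas as pd

# A minimal sketch: filter a local CSV export of the risk spreadsheet.
# The file name and column names ("Entity", "Timing", "Domain") are
# assumptions; check your copy's actual headers before running.
df = pd.read_csv("ai_risk_repository.csv")

# Risks attributed to AI systems that arise after deployment,
# within the "Privacy & Security" domain.
subset = df[
    df["Entity"].str.contains("AI", case=False, na=False)
    & df["Timing"].str.contains("Post-deployment", case=False, na=False)
    & df["Domain"].str.contains("Privacy & Security", na=False)
]

print(f"{len(subset)} matching risks")
print(subset.head())

# How many catalogued risks fall in each domain.
print(df["Domain"].value_counts())
```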
Database official website: https://airisk.mit.edu/