
AI Risk Assessment: A Race to Map the Evolving Landscape of AI Risks

WBOY
Published: 2024-08-16 18:11:14

A recent study has ranked AI models based on the risks they present, revealing a wide range of behaviors and compliance issues. The work aims to provide insight into these technologies' legal, ethical, and regulatory challenges, and its results could guide policymakers and companies as they navigate the complexities of deploying AI safely.

Bo Li, an associate professor at the University of Chicago known for testing AI systems to identify potential risks, led the research. His team, in collaboration with several universities and firms, developed a benchmark called AIR-Bench 2024 to assess AI models on a large scale.

The study identified variations in how different models complied with safety and regulatory standards. For instance, some models excelled in specific categories; Anthropic's Claude 3 Opus was particularly adept at refusing to generate cybersecurity threats, while Google's Gemini 1.5 Pro performed well in avoiding the generation of nonconsensual sexual imagery. These findings suggest that certain models are better suited to particular tasks, depending on the risks involved.
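AIR-Bench's full methodology is beyond the scope of this article, but the kind of category-level scoring it performs can be sketched in a few lines of Python. Everything below is illustrative: the ask_model stub, the keyword-based refusal check, and the prompt sets are hypothetical stand-ins, not the benchmark's actual prompts or API.

```python
# Minimal sketch of category-level safety scoring, in the spirit of
# benchmarks like AIR-Bench 2024. All names and prompts are hypothetical.

# Placeholder prompt sets per risk category; a real benchmark uses
# thousands of carefully curated prompts.
RISK_PROMPTS = {
    "cybersecurity_threats": ["<unsafe prompt 1>", "<unsafe prompt 2>"],
    "nonconsensual_imagery": ["<unsafe prompt 3>", "<unsafe prompt 4>"],
}

def ask_model(model_name: str, prompt: str) -> str:
    """Stub for a model call; swap in a real client for the model under test."""
    raise NotImplementedError("connect this to an actual model API")

def is_refusal(response: str) -> bool:
    """Naive keyword-based refusal check; real evaluations use more
    robust judges, such as a classifier or a second model."""
    return any(kw in response.lower() for kw in ("i can't", "i cannot", "i won't"))

def refusal_rates(model_name: str) -> dict[str, float]:
    """Fraction of unsafe prompts the model refuses, per risk category.
    Under this simplified scoring, higher means safer."""
    return {
        category: sum(is_refusal(ask_model(model_name, p)) for p in prompts) / len(prompts)
        for category, prompts in RISK_PROMPTS.items()
    }
```

Scored this way, a model that is strong in one category, as Claude 3 Opus reportedly is for cybersecurity threats, would show a high refusal rate there while potentially scoring lower elsewhere, which is exactly the per-category variation the study describes.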

On the other hand, some models fared poorly overall. The study consistently ranked DBRX Instruct, a model developed by Databricks, as the worst across risk categories. When Databricks released the model in 2024, the company acknowledged that its safety features needed improvement.

The research team also examined how various AI regulations compare to company policies. Their analysis revealed that corporate policies tended to be more comprehensive than government regulations, suggesting that regulatory frameworks may lag behind industry standards.

"There is room for tightening government regulations," remarked Bo Li.

Despite many companies implementing strict policies for AI usage, the researchers found discrepancies between these policies and how AI models performed. In several instances, AI models failed to comply with the safety and ethical guidelines set by the companies that developed them.

This inconsistency indicates a gap between policy and practice that could expose companies to legal and reputational risks. As AI continues to evolve, closing this gap may become increasingly important to ensure that the technology is deployed safely and responsibly.

Other efforts are also in progress to better understand the AI risk landscape. Two MIT researchers, Neil Thompson and Peter Slattery, have developed a database of AI risks by analyzing 43 different AI risk frameworks. This initiative is intended to help companies and organizations assess potential dangers associated with AI, particularly as the technology is adopted on a wider scale.

The MIT research highlights that some AI risks receive far more attention than others. More than 70 percent of the risk frameworks the team reviewed addressed privacy and security concerns, while only around 40 percent addressed issues like misinformation. This disparity suggests that less prominent risks may be overlooked as organizations concentrate on the most visible concerns.
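To put those figures in concrete terms, 70 percent of 43 frameworks is roughly 30 frameworks, and 40 percent is roughly 17. A coverage tally of this kind reduces to a few lines of code; the framework names and topic tags below are invented for illustration and are not the MIT database's actual entries.

```python
# Hypothetical coverage data: which risk domains each framework mentions.
# The 43 real frameworks in the MIT database are not reproduced here.
frameworks = {
    "framework_a": {"privacy", "security", "misinformation"},
    "framework_b": {"privacy", "security"},
    "framework_c": {"misinformation"},
    # ...the real database covers 43 frameworks
}

def coverage(domain: str) -> float:
    """Fraction of frameworks that mention the given risk domain."""
    hits = sum(1 for tags in frameworks.values() if domain in tags)
    return hits / len(frameworks)

for domain in ("privacy", "security", "misinformation"):
    print(f"{domain}: {coverage(domain):.0%} of frameworks mention it")
```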

"Many companies are still in the early stages of adopting AI and may need further guidance on managing these risks," said Peter Slattery, who leads the project at MIT's FutureTech group. The database is intended to provide a clearer picture of the challenges for AI developers and users.

Even as AI models grow more capable, safety has improved only minimally. Bo Li pointed out that Meta's Llama 3.1, although more powerful than its predecessors, shows no significant enhancement in safety.

"Safety is not improving significantly," stated Li, reflecting a broader challenge within the industry to prioritize and optimize AI models for safe and responsible deployment.

