In a bid to address the potentially disastrous threats associated with artificial intelligence (AI), the renowned AI research and deployment firm OpenAI has announced a groundbreaking initiative to comprehensively assess the broad spectrum of risks the technology poses.
OpenAI is building a new team dedicated to tracking, evaluating, forecasting, and protecting against potential catastrophic risks stemming from AI, the firm announced.
Concerns that AI could one day surpass human intelligence have been raised repeatedly. Even so, companies like OpenAI have been aggressively developing new AI technologies in recent years despite acknowledging these hazards, which has only deepened the unease.
Preparedness: A New Initiative to Assess AI Risks
The new initiative called “Preparedness” will concentrate on potential AI risks relating to cybersecurity, individualized persuasion, and autonomous replication and adaptation, as well as chemical, biological, radiological, and nuclear hazards.
According to the firm, the Preparedness team, led by Aleksander Madry, will examine questions such as how dangerous frontier AI systems could be if misused, and whether hostile actors could exploit stolen AI model weights.
“We take seriously the full spectrum of safety risks related to AI, from the systems we have today to the furthest reaches of superintelligence. To support the safety of highly-capable AI systems, we are developing our approach to catastrophic risk preparedness”– OpenAI
Additionally, the firm said in a blog post that it is looking for individuals with a range of technical backgrounds to join the new Preparedness team. The company is also launching an AI Preparedness Challenge for catastrophic misuse prevention, offering $25,000 in API credits to the top 10 submissions.