Former OpenAI Policy Chief Establishes Institute for Independent AI Safety Audits
1 min read
AI Governance, Risk & Compliance
In short
- Miles Brundage, who previously led policy research at OpenAI, has launched a new initiative named AVERI, aimed at conducting independent audits of prominent AI models.
- This development highlights a growing concern within the industry regarding self-regulation, as Brundage argues that companies should not be permitted to evaluate their own technologies.
- The establishment of AVERI seeks to ensure accountability and transparency in AI development, addressing potential risks while also recognizing the opportunities that responsible innovation can provide.
As the landscape of artificial intelligence continues to evolve, the implications of independent audits like AVERI's could be significant, prompting a reevaluation of existing industry practices and fostering a more robust framework for safety and ethical considerations in AI deployment.
Source:
- Former OpenAI policy chief launches institute for independent AI safety audits — The Decoder (EN-US)