OpenAI Updates Safety Framework for High-Risk AI Capabilities

OpenAI has revised its safety framework to better manage the risks posed by advanced AI systems. The updated approach focuses on identifying and mitigating capabilities that could cause severe, irreversible harm. Capabilities are now grouped into two categories: established areas with known risks and existing safeguards, and emerging areas that require new safety strategies. Depending on the risk level, safeguards must be in place either during development or before release. OpenAI also plans to publish more detailed reports on both model performance and safety practices, a timely move ahead of upcoming EU AI regulations.