
The source announces that OpenAI has introduced two new open-source AI models developed specifically for content moderation, named gpt-oss-safeguard-120b and gpt-oss-safeguard-20b. These models are designed to identify and flag inappropriate content, such as toxic language or unsafe material, across various online platforms. A key feature is that developers can supply their own moderation policies, so enforcement follows each platform's specific needs and standards. The models also explain the reasoning behind their moderation decisions, giving developers visibility into how a policy was applied. By releasing these tools freely, OpenAI aims to provide scalable and controllable systems for managing content in the current AI landscape.
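A minimal sketch of how a developer might exercise this policy-driven moderation, assuming the smaller model is served behind an OpenAI-compatible endpoint (for example via a local inference server); the endpoint URL, model alias, policy text, and sample content below are illustrative assumptions, not details from the source:

```python
# Sketch: send a custom moderation policy plus one piece of user content to a
# locally hosted gpt-oss-safeguard model and print its verdict and reasoning.
# Assumes an OpenAI-compatible chat endpoint at the URL below (hypothetical).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# Developer-supplied policy: this is the customizable part described above.
policy = """\
Classify the user content against this policy.
VIOLATES: threats, harassment, or instructions for causing physical harm.
ALLOWED: criticism, strong opinions, fictional or historical discussion.
Return a verdict (ALLOWED or VIOLATES) followed by a short justification.
"""

content = "You'd better watch your back after what you posted."

response = client.chat.completions.create(
    model="gpt-oss-safeguard-20b",              # smaller of the two models
    messages=[
        {"role": "system", "content": policy},  # the moderation policy
        {"role": "user", "content": content},   # the content to evaluate
    ],
)

# The reply contains both the verdict and the model's stated reasoning,
# which is what lets developers audit why content was flagged.
print(response.choices[0].message.content)
```

Because the policy lives in the prompt rather than in the model weights, each platform can tighten or relax enforcement by editing that text instead of retraining a classifier.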