The rapid development of artificial intelligence (AI) is transforming our world. With these technological advances, however, AI safety has increasingly become a focal point of global concern. Recently, 16 companies worldwide, including OpenAI, Microsoft, and Zhipu AI, jointly signed an agreement known as the Frontier AI Safety Commitments, aiming to ensure the responsible development and safe use of AI technology.

Background: The Rising Prominence of AI Safety Issues

Recently, the departure of two key members from OpenAI sparked widespread concern in the industry about AI safety. They publicly accused OpenAI and its leadership of neglecting safety in favor of flashy products. In addition, Turing Award winners Yoshua Bengio, Geoffrey Hinton, and Yao Qizhi (Andrew Yao), among other experts, published an article in the journal Science calling on global leaders to take stronger action against the potential risks posed by AI.

Action: Signing the Frontier AI Safety Commitments

On May 22nd, in a significant moment for the history of artificial intelligence, companies including OpenAI, Google, Microsoft, and Zhipu AI signed the Frontier AI Safety Commitments. The commitments include several key points:

  1. Responsible Governance and Transparency: Ensuring the safe development and responsible governance of frontier AI.
  2. Risk Assessment: Defining, within an AI safety framework, how the risks of frontier AI models are to be measured.
  3. Risk Mitigation: Establishing clear processes to mitigate the safety risks of frontier AI models.

Impact: The Significance of the AI Safety Commitment

Turing Award winner Yoshua Bengio regards the signing of these commitments as "an important step towards establishing an international governance system to promote AI safety." Zhang Peng, CEO of Zhipu AI, likewise noted that as frontier technology advances, the responsibility to ensure AI safety becomes ever more important.

Technology: Superalignment Technology Enhances AI Safety

Zhipu AI shared its concrete AI safety practices at ICLR 2024, a top AI conference. The company believes Superalignment technology will help improve the safety of large models, and it has launched a Superalignment plan similar to OpenAI's. Its GLM-4V model is equipped with these safety measures to prevent it from generating harmful or unethical content while protecting user privacy and data security.

Regulation: Approval of the EU's Artificial Intelligence Act

On the same day, the Council of the European Union formally approved the Artificial Intelligence Act, the world's first comprehensive piece of AI regulation, which will come into effect next month. The act adopts a "risk-based" approach, imposing stricter rules on AI systems that pose greater risks of societal harm.

Conclusion

AI safety has become a shared concern of global technology companies and government agencies. As Professor Philip Torr of the Department of Engineering Science at the University of Oxford put it: "It is time to move from vague proposals to concrete commitments." This action by the world's leading tech companies lays a solid foundation for the safe development of AI and offers greater hope and security for humanity's future.