Abstract: The tech world has been shaken once again by news that Ilya Sutskever, co-founder of OpenAI, has announced a new company, Safe Superintelligence Inc. (SSI). The company's sole goal is to build a safe superintelligence directly, rather than to pursue short-term commercial gains. The move not only opens a new chapter in Sutskever's career but also signals that the debate between the safety-first and accelerationist camps in artificial intelligence will continue.


Amid the rapid development of artificial intelligence, safety has long been a central concern in the tech community, and it has now been pushed back to the forefront. Ilya Sutskever, a leading figure in deep learning, has chosen to leave OpenAI and found a new AI company, Safe Superintelligence Inc. (SSI), whose aim is to create a safe superintelligence.

The Entrepreneurial Philosophy of Safety First

The founding of SSI reflects Sutskever's deep conviction about artificial intelligence safety. He has emphasized that the company will focus on building a safe yet powerful AI system and will not release any commercial products or services in the near term. This "safety first" philosophy stands in stark contrast to OpenAI's development path.

The Ideological Divergence from OpenAI

Although Sutskever expressed his best wishes for OpenAI's future when he departed, his new company has clearly chosen a different path. The founding of SSI is not only a major turn in Sutskever's career but also a response to OpenAI's accelerationist course.

Safety as in Nuclear Safety

In describing SSI's safety goals, Sutskever has compared them to nuclear safety, meaning "safety as in nuclear safety," not merely "trust and safety." This indicates that his concern for AI safety has reached an unprecedented level. SSI intends to reach this goal through revolutionary engineering and scientific breakthroughs rather than superficial safeguards.

Strong Founding Team

In addition to Sutskever himself, SSI's founding team includes Daniel Gross, who previously led machine learning efforts at Apple, and Daniel Levy, who worked alongside Sutskever at OpenAI. This lineup gives SSI a solid foundation for its future development.

Investor Confidence and Challenges

SSI's business model and goals have inspired considerable confidence among investors. However, the company faces a potentially long stretch with no products or revenue before it reaches its ultimate goal of superintelligence, a major challenge it will have to weather as it develops.


The founding of SSI is not only a statement of Sutskever's personal stance on artificial intelligence safety but also a prompt for the entire industry: how to ensure the safety of artificial intelligence while pursuing technological progress will remain an enduring question for the field.

Full SSI Announcement:

Safe Superintelligence Inc.
Superintelligence is within reach.
Building safe superintelligence (SSI) is the most important technical problem of our time.
We have started the world's first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It's called Safe Superintelligence Inc.
SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.
We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.
This way, we can scale in peace.
Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.
We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent.
We are assembling a lean, cracked team of the world's best engineers and researchers dedicated to focusing on SSI and nothing else.
If that's you, we offer an opportunity to do your life's work and help solve the most important technical challenge of our age.
Now is the time. Join us.
Ilya Sutskever, Daniel Gross, Daniel Levy
June 19, 2024