OpenAI co-founder Ilya Sutskever launches a new company dedicated to building safe superintelligent AI


Ilya Sutskever, co-founder and former chief scientist of OpenAI, has announced a new company, Safe Superintelligence Inc (SSI). The move comes shortly after his departure from OpenAI and marks a deliberate shift to a singular focus on AI safety. Co-founded with Daniel Gross, who previously led AI efforts at Apple, and Daniel Levy, a former OpenAI engineer, SSI is committed to developing a powerful AI system while treating safety and capabilities as problems to be tackled together, with safety always taking precedence.

Sutskever announced his new venture on X, stating, "I am starting a new company." In a subsequent tweet, he elaborated that SSI would aim for "safe superintelligence in a straight shot, with one focus, one goal, and one product."

Sutskever's vision for SSI is straightforward: focus exclusively on creating superintelligent AI systems that are both highly capable and safe. This narrow mandate lets SSI sidestep the distractions and pressures facing larger AI firms such as OpenAI, Google, and Microsoft, which must balance research with commercial and managerial demands.

Sutskever's departure from OpenAI in May drew significant attention. He had played a pivotal role in the November 2023 attempt to remove CEO Sam Altman, an episode that caused substantial internal strife. Reflecting on that period, Sutskever expressed regret for his involvement and reaffirmed his commitment to the mission the company had been founded on. The experience appears to have shaped his approach at SSI, which emphasizes a steady, undistracted path toward the safe development of AI.

SSI's approach to AI safety is informed by Sutskever's work at OpenAI, where he co-led the Superalignment team with Jan Leike, who also left the company in May and joined rival AI lab Anthropic. That team focused on steering and controlling AI systems to ensure they remain beneficial. The mission continues at SSI, where safety and capabilities are treated as intertwined technical challenges to be solved through engineering and scientific breakthroughs.

In an interview with Bloomberg, Sutskever detailed SSI's business model, which is designed to insulate safety, security, and progress from short-term commercial pressures. This structure allows the company to concentrate solely on its mission, free of management overhead and product cycles. Unlike OpenAI, which started as a non-profit and later restructured around a capped-profit arm to cover the enormous costs of AI development, SSI has been set up as a for-profit company from the beginning, with a clear emphasis on raising the capital its ambitious goals will require.

SSI is currently building out its team, with offices in Palo Alto, California, and Tel Aviv, and is actively recruiting technical talent to join its mission of creating safe superintelligence.
