Google’s parent company Alphabet and chipmaking giant NVIDIA have invested in a new AI venture — Safe Superintelligence Inc. (SSI), founded by Ilya Sutskever, co-founder and former chief scientist of OpenAI. This marks a crucial turning point in the race toward safer, more powerful AI systems. Though still in stealth mode, SSI has reportedly raised around $2 billion in funding, reflecting enormous confidence in the project’s potential.
What is Safe Superintelligence Inc. (SSI)?
SSI was launched by Ilya Sutskever in June 2024, shortly after he left OpenAI in May of that year. The startup is built around a clear and ambitious mission: to develop safe superintelligence — powerful AI that remains aligned with human values and resistant to misuse. The company has not publicly released any products or services yet, but it has already gained traction with some of the world’s top investors.
Founders and Vision
In addition to Ilya Sutskever, the founding team includes Daniel Gross (ex-Apple AI director and prominent investor) and Daniel Levy (a former OpenAI researcher). Together, they bring a rare blend of technical brilliance, research depth, and a strong ethical vision to the AI landscape.

Why Are Tech Giants Betting on SSI?
Google and NVIDIA’s decision to invest in SSI is not just financial — it’s strategic. Both companies are deeply involved in AI, and backing a startup focused on safe superintelligence helps them remain relevant in a fast-changing industry. It also shows they are prioritizing safety and ethics, especially after rising concerns about unchecked AI development.
According to Reuters, the funding round, led by Greenoaks, valued SSI at $32 billion — despite the company having launched no product. That valuation speaks volumes about both the credibility of its founders and the growing demand for AI systems built with safety as the foundation.
How SSI is Different From OpenAI and Others
Unlike many tech companies that split attention across multiple areas, SSI’s website (https://www.ssi.inc/) boldly states: “SSI is our mission, our name, and our entire product roadmap — because it is our sole focus.” That clarity sets it apart in an era where AI labs often juggle research, product launches, and regulatory hurdles all at once.
By avoiding commercial distractions and focusing only on research and development of safe superintelligence, SSI believes it can move faster while keeping its work fully aligned with human interests. This is particularly relevant in light of growing global concerns about AI safety, regulation, and existential risks.
The Bigger Picture: What This Means for the AI Industry
The AI field is at a critical juncture. With breakthroughs happening rapidly, the risks associated with powerful AI systems have also increased. Experts like Ilya Sutskever have long stressed the importance of keeping safety at the forefront. SSI’s founding shows that safe superintelligence is not just an ideal — it’s a real, funded mission.
SSI might just be the beginning. As governments and institutions demand more accountability from AI companies, startups focused solely on safety and alignment could become the next major trend. If successful, SSI could set a model that others follow, potentially reshaping how AI is developed globally.
Conclusion
The launch and early success of Safe Superintelligence Inc. signal a critical evolution in AI: a shift from racing to build powerful systems to ensuring those systems are safe and beneficial. Backed by giants like Google and NVIDIA, SSI has both the funding and the talent to redefine the future of artificial intelligence.
As the company reveals more about its work, all eyes will be on how it translates its mission into real-world applications. One thing is certain: the investment pouring into SSI is more than a financial trend — it’s a step toward shaping the responsible future of AI.
External References:
- Reuters – Google, NVIDIA invest in Sutskever’s SSI
- SSI – Official Website
- The Verge – Why Ilya Sutskever Left OpenAI