Vogon Today

Selected News from the Galaxy

StartMag

Ilya Sutskever’s chimerical project to compete with Altman’s AI

Ilya Sutskever left OpenAI to create safe AI, and with his Safe Superintelligence he plans to take the time to do research before bringing AI products to market. Despite the lack of immediate profitability, it raised $1 billion from major backers. Facts, names and numbers

Ten employees, $1 billion in fundraising and an estimated value of $5 billion. This is Safe Superintelligence, the company co-founded by former OpenAI chief scientist Ilya Sutskever, who left Sam Altman's software house – considered by some to be too unscrupulous – after a difference of opinion.

Sutskever then decided to launch a different project in the name of safe artificial intelligence (AI). Unlike its competitors, the company will not bring a product to market right away. Even so, investors from Andreessen Horowitz to Sequoia Capital, to name just a few, have decided to back it, despite the fact that returns may not arrive in the short term.

SUTSKEVER'S DREAM

As Startmag wrote, the relationship between Sutskever and Altman had been turbulent for at least a year, and the final break came last May. Shortly thereafter, the former OpenAI chief scientist announced the launch of Safe Superintelligence, an AI company that aims to "address safety and capabilities together, and solve technical problems through revolutionary engineering and scientific breakthroughs". Joining him are Daniel Gross, former head of AI at Apple, and Daniel Levy, who worked with Sutskever at OpenAI.

"Our approach – explained the founders – means no distraction from management overhead or product cycles, and our business model means that safety, security and progress are all insulated from short-term commercial pressures."

Sutskever told Bloomberg: "This company is special because its first product will be safe superintelligence and it won't do anything else until then. It will be completely insulated from the external pressures of having to manage a large, complicated product and being locked in a rat race."

1 BILLION DOLLARS FOR SAFE SUPERINTELLIGENCE

Safe Superintelligence's mission is, therefore, to create a safe and powerful AI system within a pure research organization that has no intention of selling AI products or services in the short term – much as OpenAI set out to do at the start. However, according to Bloomberg, how to prevent an AI system from going out of control "remains a mostly philosophical exercise."

Despite these premises, Safe Superintelligence recently managed to raise $1 billion. "The financing – says Reuters – highlights how some investors are still willing to make outsized bets on exceptional talent focused on research into large language models (LLMs). This comes despite a general decline in interest in funding such companies, which may remain unprofitable for some time – a trend that has led several startup founders to leave their positions for tech giants."

INVESTORS

The investors named by the news agency include the major venture capital firms Andreessen Horowitz (which, however, does not disdain bolder AI either), Sequoia Capital, DST Global and SV Angel. Also participating was NFDG, the venture capital firm of computer scientist Nat Friedman and of Gross, who, in addition to being CEO, is responsible for Safe Superintelligence's computing power and fundraising.

"It is important for us to be surrounded by investors who understand, respect and support our mission, which is to go straight to safe superintelligence and, in particular, to dedicate a couple of years to research and development of our product before putting it on the market," Gross said.

THE NUMBERS AND HUMAN CAPITAL OF SAFE SUPERINTELLIGENCE

The company declined to share its valuation, but sources say it is estimated at $5 billion.

Safe Superintelligence currently has 10 employees and plans to use the funds to acquire computing power and hire top talent. It will focus on building a small team of highly trusted researchers and engineers, split between Palo Alto and Tel Aviv.

The company places great importance on personnel selection. Gross said they spend hours verifying that candidates have "good character" and that they look for people with "extraordinary skills" rather than placing too much emphasis on credentials and industry experience.

It is an approach that smacks of nostalgia: indeed, when Sutskever left OpenAI, the company dismantled its Superalignment team, which worked to ensure that AI remained aligned with human values in preparation for the day when the technology surpasses human intelligence.


This is a machine translation from the Italian of a post published on Start Magazine at the URL https://www.startmag.it/innovazione/il-chimerico-progetto-di-ilya-sutskever-per-competere-con-lia-di-altman/ on Thu, 05 Sep 2024 12:00:15 +0000.