Editor’s Note: The following is a brief letter from Ray Kurzweil, a director of engineering at Google and cofounder and member of the board at Singularity Group, Singularity Hub’s parent company, in response to the Future of Life Institute’s recent letter, “Pause Giant AI Experiments: An Open Letter.”
The FLI letter addresses the risks of accelerating progress in AI and the ensuing race to commercialize the technology, and it calls for a pause in the development of algorithms more powerful than OpenAI’s GPT-4, the large language model behind the company’s ChatGPT Plus and Microsoft’s Bing chatbot. The FLI letter has thousands of signatories, including deep learning pioneer Yoshua Bengio, University of California, Berkeley professor of computer science Stuart Russell, Stability AI CEO Emad Mostaque, Elon Musk, and many others, and has stirred vigorous debate in the AI community.
Regarding the open letter to “pause” research on AI “more powerful than GPT-4,” this criterion is too vague to be practical. And the proposal faces a serious coordination problem: those who agree to a pause may fall far behind corporations or nations that disagree. There are tremendous benefits to advancing AI in critical fields such as medicine and health, education, the pursuit of renewable energy sources to replace fossil fuels, and scores of other areas. I did not sign, because I believe we can address the signers’ safety concerns in a more tailored way that doesn’t compromise these vital lines of research.
I participated in the Asilomar AI Principles Conference in 2017 and was actively involved in the creation of guidelines for developing artificial intelligence in an ethical way. So I know that safety is a critical issue. But more nuance is needed if we wish to unlock AI’s profound benefits to health and productivity while avoiding the real perils.
— Ray Kurzweil
Inventor, best-selling author, and futurist

