OpenAI began dissolving its so-called "superalignment" group weeks ago, integrating members into other projects and research, according to the San Francisco-based firm.
Company co-founder Ilya Sutskever and team co-leader Jan Leike announced their departures from the ChatGPT-maker this week.
The dismantling of an OpenAI team focused on keeping sophisticated artificial intelligence under control comes as such technology faces increased scrutiny from regulators and fears mount regarding its dangers.
"OpenAI must become a safety-first AGI (artificial general intelligence) company," Leike wrote Friday in a post on X, formerly Twitter.
Leike called on all OpenAI employees to "act with the gravitas" warranted by what they are building.
OpenAI chief executive Sam Altman responded to Leike's post with one of his own, thanking him for his work at the company and saying he was sad to see Leike leave.
"He's right we have a lot more to do," Altman said. "We are committed to doing it."
Altman promised more on the topic in the coming days.
Sutskever said on X that he was leaving after almost a decade at OpenAI, whose "trajectory has been nothing short of miraculous."
"I'm confident that OpenAI will build AGI that is both safe and beneficial," he added, referring to computer technology that seeks to perform as well as -- or better than -- human cognition.
Sutskever, OpenAI's chief scientist, sat on the board that voted to remove chief executive Altman in November last year.
The ousting threw the startup into turmoil, and the board rehired Altman a few days later after staff and investors rebelled.
OpenAI early this week released a higher-performing and even more human-like version of the artificial intelligence technology that underpins ChatGPT, making it free to all users.
"It feels like AI from the movies," Altman said in a blog post.
Altman has previously pointed to the AI-based virtual assistant voiced by Scarlett Johansson in the movie "Her," who dates a man, as an inspiration for where he would like AI interactions to go.
The day will come when "digital brains will become as good and even better than our own," Sutskever said during a talk at a TED AI summit in San Francisco late last year.
"AGI will have a dramatic impact on every area of life."
South Korea, Britain host AI summit with safety top of agenda
Seoul (AFP) May 20, 2024 -
South Korea and Britain kick off a major international summit on artificial intelligence in Seoul this week, where governments plan to press tech firms on AI safety.
The meeting is a follow-up to the inaugural global AI safety summit at Bletchley Park in Britain last year, where dozens of countries voiced their fears to leading AI firms about the risks posed by their tech.
Safety is again on the agenda at the AI Seoul Summit starting Tuesday and representatives are expected from leading AI firms, including ChatGPT maker OpenAI, Google DeepMind, French AI firm Mistral, Microsoft and Anthropic.
"As with any new technology, AI brings new risks, including deliberate misuse from those who mean to do us harm," South Korean President Yoon Suk Yeol and UK Prime Minister Rishi Sunak said Monday in a joint article.
"However, with new models being released almost every week, we are still learning where these risks may emerge," they said in the piece, published by the South Korean daily JoongAng Ilbo and Britain's i newspaper.
The stratospheric success of ChatGPT soon after its 2022 release sparked a gold rush in generative AI, with tech firms around the world pouring billions of dollars into developing their own models.
Generative AI models can generate text, photos, audio and even video from simple prompts, and their proponents have heralded them as a breakthrough that will improve lives and businesses around the world.
But critics, rights activists and governments have warned that they can be misused in a wide variety of situations, including the manipulation of voters through fake news stories or so-called "deepfake" pictures and videos of politicians.
- Dramatic changes -
Many have called for international standards to govern the development and use of AI.
"When we meet with companies at the AI Seoul Summit, we will ask them to do more to show how they assess and respond to risk within their organisations," Yoon and Sunak wrote.
"We will also take the next steps on shaping the global standards that will avoid a race to the bottom."
The Seoul summit comes days after OpenAI confirmed that it had disbanded a team devoted to mitigating the long-term dangers of advanced AI.
The two-day summit will be partly virtual, with a mix of closed-door sessions and some open to the public in Seoul.
However, a group of six South Korean civil society organisations, including the prominent People's Solidarity for Participatory Democracy, criticised the summit's organisers for not including more developing nations.
"It would be beneficial to discuss international norms for AI in a more open forum where all countries and diverse stakeholders from around the world can participate equally, rather than in an elite club of a few developed countries," they said in a joint statement on Monday.
In addition to safety, the summit will discuss how governments can help spur innovation, including into AI research at universities.
Participants will also consider ways to ensure the technology is open to all and can aid in tackling issues such as climate change and poverty.
"It is just six months since world leaders met at Bletchley, but even in this short space of time, the landscape of AI has changed dramatically," Yoon and Sunak said.
"The pace of change will only continue to accelerate, so our work must accelerate too."
France will host the next AI safety summit.