

OpenAI nixes team focused on risk of AI causing ‘human extinction’

OpenAI eliminated a team focused on the risks posed by advanced artificial intelligence less than a year after it was formed – and a departing executive warned Friday that safety has “taken a backseat to shiny products” at the company.

The Microsoft-backed ChatGPT maker disbanded its so-called “Superalignment” team, which was tasked with creating safety measures for artificial general intelligence (AGI) systems that “could lead to the disempowerment of humanity or even human extinction,” according to a blog post last July.

The team’s dissolution, which was first reported by Wired, came just days after OpenAI executives Ilya Sutskever and Jan Leike announced their resignations from the Sam Altman-led company.


“OpenAI is shouldering an enormous responsibility on behalf of all of humanity,” Leike wrote in a series of X posts on Friday. “But over the past years, safety culture and processes have taken a backseat to shiny products. We are long overdue in getting incredibly serious about the implications of AGI.”

Sutskever and Leike, who headed OpenAI’s safety team, quit shortly after the company unveiled an updated version of ChatGPT capable of holding conversations and translating languages for users in real time.

The mind-bending reveal drew immediate comparisons to the 2013 sci-fi film “Her,” which features a superintelligent AI voiced by actress Scarlett Johansson.

When reached for comment, OpenAI referred to Altman’s tweet in response to Leike’s thread.

“I’m super appreciative of @janleike’s contributions to OpenAI’s alignment research and safety culture, and very sad to see him leave,” Altman said. “He’s right we have a lot more to do; we are committed to doing it. I’ll have a longer post in the next couple of days.”

Some members of the safety team are being reassigned to other parts of the company, CNBC reported, citing a person familiar with the situation.


AGI broadly refers to AI systems whose cognitive abilities are equal or superior to those of humans.

In its announcement of the safety team’s formation last July, OpenAI said it was dedicating 20% of its available computing power to long-term safety measures and hoped to solve the alignment problem within four years.

Sutskever gave no indication of the reasons behind his departure in his own X post on Tuesday, though he acknowledged he was “confident that OpenAI will build [AGI] that is both safe and beneficial” under Altman and the firm’s other leads.

Sutskever was notably one of four OpenAI board members who participated in a shocking move to oust Altman from the company last fall. The coup sparked a governance crisis that nearly toppled OpenAI.


OpenAI eventually welcomed Altman back as CEO and unveiled a revamped board of directors.

A subsequent internal review cited a “breakdown in trust between the prior Board and Mr. Altman” ahead of his firing.

Investigators also concluded that the leadership spat was not related to the safety or security of OpenAI’s advanced AI research or “the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners,” according to a release in March.
