
Sam Altman has an idea to get AI to “love humanity”: use it to poll billions of people on their value systems

OpenAI CEO Sam Altman wants AI to “love humanity,” a trait he believes could be incorporated into an AI system, though he isn’t certain.

“I think so,” Altman said when asked whether that was possible during an interview with Harvard Business School vice dean Debora Spar.

The question of an AI uprising was once reserved for Isaac Asimov’s science fiction or James Cameron’s action movies. But since the rise of AI, it has become, if not a hot-button issue, then at least a debate topic that warrants real consideration. What was once dismissed as a crank’s worry has become a genuine legal question.

Altman said OpenAI’s relationship with the government has been “quite constructive.” He added that a project as far-reaching and consequential as developing AI should have been a government project.

“In a well-functioning society, this would be a government project,” Altman said. “Given that that isn’t happening, I think it’s better that it’s happening this way, as an American project.”

The federal government has yet to make significant progress on AI safety legislation. There has been an effort in California to pass a bill that would hold AI developers accountable for catastrophic events, such as their models being used to develop weapons of mass destruction or to attack critical infrastructure. That bill passed the state legislature but was vetoed by California Governor Gavin Newsom.

Some preeminent figures in the field have warned that ensuring AI is fully aligned with humanity’s interests is a critical question. Nobel laureate Geoffrey Hinton, known as the godfather of AI, has said he cannot “see a path to safety.” Tesla CEO Elon Musk regularly warns that AI could lead to human extinction. Musk played a key role in founding OpenAI, providing the nonprofit with significant early funding, for which Altman says he still feels “gratitude,” despite the fact that Musk is now suing him.

Many organizations dedicated solely to this question have emerged in recent years, like the nonprofit Alignment Research Center and the startup Safe Superintelligence, founded by OpenAI’s former chief scientist.

OpenAI did not respond to a request for comment.

Altman said AI as currently designed is well suited to alignment, which he argues will make it much easier to ensure AI does not harm humanity.

“One of the things that has worked surprisingly well is the ability to tune the AI system to work in a specific way,” he said. “So if we can articulate what that means in a variety of circumstances, then I think we can make the system work that way.”

Altman also has a unique idea for how OpenAI and other developers can “articulate” the principles and ideals needed to ensure AI is on our side: using AI to poll the general public. He suggests asking AI chatbot users about their values and then using those answers to determine how to align AI so that it protects humanity.

“I’m interested in the thought experiment [in which] an AI talks to you for a few hours about your value system,” he said. It “does that with me, with everyone else. And then says, ‘Okay, I can’t make everyone happy all the time.’”

Altman hopes that by communicating with and understanding billions of people “at a deep level,” AI can identify the challenges facing society more broadly. From there, AI could reach a consensus about what it would need to do to bring about the community’s general well-being.
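
Altman hasn’t detailed how billions of such interviews would be distilled into guidance for a model, but the aggregation step he gestures at can be illustrated with a toy example. The Python sketch below is purely hypothetical (the questions, stances, and simple majority rule are all invented for illustration) and simply tallies users’ stated positions to surface the stance with the broadest support on each question.

```python
# A toy sketch of the aggregation step: gather each user's stated
# stances on a set of value questions, then report the stance with the
# broadest support. Everything here (questions, users, majority rule)
# is a hypothetical simplification, not anything OpenAI has described.
from collections import Counter

# Hypothetical elicited preferences: user -> {question: stance}
surveys = {
    "user_1": {"privacy_over_convenience": "agree", "ai_in_hiring": "oppose"},
    "user_2": {"privacy_over_convenience": "agree", "ai_in_hiring": "support"},
    "user_3": {"privacy_over_convenience": "disagree", "ai_in_hiring": "oppose"},
}

def consensus(surveys):
    """Return each question's most common stance and its share of support."""
    tallies = {}
    for answers in surveys.values():
        for question, stance in answers.items():
            tallies.setdefault(question, Counter())[stance] += 1
    return {
        question: (counts.most_common(1)[0][0],
                   round(counts.most_common(1)[0][1] / sum(counts.values()), 2))
        for question, counts in tallies.items()
    }

print(consensus(surveys))
# {'privacy_over_convenience': ('agree', 0.67), 'ai_in_hiring': ('oppose', 0.67)}
```

A real system would face much harder problems than counting votes, such as weighing minority views and reconciling stances that conflict across contexts, which is precisely the “can’t make everyone happy” tension Altman describes.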

OpenAI had a dedicated internal team tasked with ensuring that a future digital superintelligence doesn’t go awry and cause untold harm. In December 2023, the team released an early research paper describing a process in which one large language model would oversee another. This spring, the group’s leaders, Ilya Sutskever and Jan Leike, left OpenAI, and the team was disbanded, according to a report from CNBC at the time.
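
The paper’s specifics aside, the basic pattern the article describes (one model generating output, a second model vetting it before release) is simple to sketch. The Python example below uses small off-the-shelf Hugging Face models, with a sentiment classifier standing in for a real safety overseer; it is an illustration of the oversight loop under those assumptions, not OpenAI’s actual method, and the 0.9 acceptance threshold is arbitrary.

```python
# A toy oversight loop: one language model produces text, a second
# model vets it before it is accepted. The models and the threshold
# are illustrative stand-ins, not OpenAI's setup.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
overseer = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def supervised_answer(prompt):
    """Generate a reply, then let the overseer model vet it."""
    reply = generator(prompt, max_new_tokens=40)[0]["generated_text"]
    verdict = overseer(reply)[0]  # e.g. {"label": "POSITIVE", "score": 0.98}
    if verdict["label"] == "POSITIVE" and verdict["score"] > 0.9:
        return reply
    return None  # rejected; a real system would escalate for review

print(supervised_answer("AI safety research matters because"))
```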

Leike said he departed amid growing disagreements with OpenAI’s leadership over its commitment to safety as the company works toward artificial general intelligence, a term for AI that is as intelligent as a human.

“Building machines that are smarter than humans is an inherently dangerous endeavor,” Leike wrote on X. “OpenAI is shouldering a huge responsibility on behalf of all humanity. But over the years, safety culture and processes have taken a backseat to shiny products.”

When Leike left, Altman wrote on X that he was “very appreciative [of his] contributions to openai [sic] alignment research and safety culture.”
