
OpenAI, Anthropic Will Showcase Their Powerful AI to the US Government Before Release


OpenAI, Anthropic Partner with the US AI Safety Institute: If you have been following generative artificial intelligence and how quickly it is improving, you will know that many experts and industry leaders have voiced concerns about the risks it could pose to humanity. Now, in a step toward addressing those concerns, two of the leading AI companies, OpenAI and Anthropic, have agreed to give the US government access to the major AI models they develop before releasing them to the public, so their safety can be evaluated.


OpenAI, Anthropic partner with US AI Safety Institute

OpenAI CEO Sam Altman took to X (formerly Twitter) to announce the agreement. “We are excited to have reached an agreement with the US AI Safety Institute to test our future models before they are released,” Altman said.

OpenAI sees it as important that this happens at the national level, he added. “The United States needs to continue to lead!”

Simply put, the US government will be able to work with the AI companies to test advanced models before release, identify potential safety risks they could pose, and provide feedback on how to mitigate them.

“Safe, trustworthy AI is critical to the positive impact of technology. Our partnership with the US AI Safety Institute leverages their deep expertise to rigorously test our models before widespread deployment,” said Jack Clark, Anthropic Co-Founder and Head of Policy.


What is the US AI Safety Institute?

The US AI Safety Institute is part of the US Department of Commerce’s National Institute of Standards and Technology (NIST). It is a relatively new organization, created last year by the Biden administration to address the risks of AI. In the future, it will also work with the UK government’s AI Safety Institute to help AI companies ensure safety.

Commenting on the new partnership with OpenAI and Anthropic, Elizabeth Kelly, director of the US AI Safety Institute, said: “These agreements are just the beginning, but they are an important milestone as we work to help manage the future of AI responsibly.”
