Elon Musk Backs California AI Safety Bill
The first effort to codify AI regulations anywhere in the United States just won the backing of an influential voice at a crucial time.
Elon Musk, CEO of Tesla and founder of xAI, the company behind the Grok chatbot, has come out in support of California’s “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (Senate Bill 1047).
Should it pass the state Assembly and receive final approval from Gov. Gavin Newsom before the legislative session ends this week, it would establish initial guardrails around the technology. The bill would require developers to create safety protocols, build in the ability to shut down an out-of-control AI model, report security incidents, protect whistleblowers inside AI companies, take steps to prevent their AI from being exploited by malicious hackers, and accept liability if their AI software goes out of control.
However, the bill has been opposed by venture capitalists such as Marc Andreessen and has sparked heated debate among AI notables: Meta’s chief AI scientist Yann LeCun opposed it, while Geoffrey Hinton, a co-creator of AlexNet, supported it.
“This is a difficult decision and will make some people uncomfortable, but ultimately I think California should probably pass the AI safety bill SB 1047,” Musk posted on Monday, citing “risks to the public” from AI.
This is a difficult decision and will make some people uncomfortable, but ultimately I think California should probably pass the AI safety bill SB 1047.
For over 20 years, I have been an advocate for regulating AI, just as we regulate any product or technology that has the potential to pose a risk…
— Elon Musk (@elonmusk) August 26, 2024
To date, the only existing regulatory framework targets the very largest AI models, those trained with more than 10^26 floating point operations, which cost over $100 million to train. However, this is not a federal law on the statute books but an executive order from the Biden administration, one that could easily be overturned by his successor next year.
The bill would at least partially close that gap and provide some legal clarity for AI companies such as Microsoft-backed OpenAI, Amazon-backed Anthropic, and Google, even if they don’t necessarily agree with it.
“SB 1047 is a straightforward, common-sense, light-touch bill based on President Biden’s executive order,” California state Sen. Scott Wiener, the bill’s sponsor, said earlier this month.
Last week for California to act before the legislative session ends
If any state were to take on this responsibility, California would be the logical choice. Its $4 trillion economy is roughly the size of Germany’s or Japan’s in absolute dollar terms, thanks largely to Silicon Valley’s thriving tech sector, and it arguably does more to foster innovation than any of its G7 peers.
Speaking to Bloomberg TV, Wiener said he sympathized with the argument that Washington should lead on the issue, but pointed to a range of areas, including data privacy, social media, and net neutrality, that Capitol Hill has consistently failed to address convincingly.
“I agree, this issue should be addressed at the federal level,” Wiener told the broadcaster on Friday. “Congress has a very poor record of regulating the tech sector, and I don’t see that changing, so California should take the lead.”
Great argument by @AndrewYNg against California’s shameful SB1047 regulation, which would essentially kill open source AI and significantly slow or stop AI innovation. https://t.co/pZuLUXYCLR
— Yann LeCun (@ylecun) July 12, 2024
This week is the last chance for SB 1047 to pass: once it ends, the state legislature will recess ahead of the November election. Even if the bill passes, it still needs Newsom’s approval before the end of September, and last week the U.S. Chamber of Commerce urged him to veto it if it reaches his desk.
But regulating technology can be a fool’s errand, because policy always lags the pace of innovation, and interfering with the free market can inadvertently stifle it. That is the main criticism leveled at SB 1047.
Former OpenAI researcher says his colleagues are giving up
Just a year ago, Big Tech’s champions could largely quash any attempt at outside interference in the field. Most policymakers understand that the United States is locked in a high-stakes AI arms race with China, and neither side can afford to lose. If the United States imposes restrictions on its domestic industry, it could tip the balance in Beijing’s favor.
A rash of recent departures among senior AI safety experts at OpenAI, the company that kicked off the AI gold rush, has raised concerns that executives, including CEO Sam Altman, may be throwing caution to the wind in a push to commercialize the extremely expensive technology.
Former OpenAI safety researcher Daniel Kokotajlo told Fortune on Monday that nearly half of the company’s AGI safety staff had decided to leave the former nonprofit out of frustration with the organization’s current direction.
“It’s just people giving up individually,” he said in an exclusive interview. Kokotajlo chose to renounce any shares he had in the company rather than sign a non-disclosure agreement that would have forbidden him from talking about his former employer.
Musk may also be personally affected by the law. Last year, he founded his own artificial general intelligence startup, xAI, and he recently brought online a brand-new supercomputer cluster in Memphis, packed with AI training chips and staffed in part by experts he poached from Tesla.
But Musk is no ordinary competitor: he is tech-savvy, having co-founded OpenAI in December 2015 and personally recruited its former chief scientist. The Tesla CEO later fell out with Altman and ultimately sued the company not once but twice.