
Marc Andreessen once called online safety teams the enemy. He still wants walled gardens for children


In his polarizing “Techno-Optimist Manifesto” last year, venture capitalist Marc Andreessen listed a number of enemies of technological progress. Among them were “technology ethics” and “trust and safety,” a term used to refer to the work of online content moderation, which he argued had been used to subject humanity to “a campaign of mass demoralization” against new technologies such as artificial intelligence.

Andreessen’s statement drew both public and private criticism from people working in those fields, including at Meta, where Andreessen is a board member. Critics saw his writing as a misrepresentation of their work to keep internet services safer.

On Wednesday, Andreessen offered some clarification: When it comes to his 9-year-old son’s online life, he favors guardrails. “I want him to be able to sign up for internet services, and I want him to have a Disneyland-like experience,” the investor said during an onstage conversation at a conference hosted by Stanford University’s Institute for Human-Centered AI (HAI). “I love the free internet for everyone. One day he will enjoy the free internet too, but I want him to have walled gardens.”

In contrast to how his manifesto might be read, Andreessen went on to say that he welcomes tech companies, and by extension their trust and safety teams, setting and enforcing rules for the type of content allowed on their services.

“There are a lot of companies that have the power to decide this,” he said. “Disney imposes different rules of conduct at Disneyland than what happens on the streets of Orlando.” Andreessen alluded to how tech companies can face government penalties for allowing images of child sexual abuse and certain other types of content, so they cannot do without trust and safety teams.

So what kind of content moderation does Andreessen consider the enemy of progress? He explained that his concern is when two or three companies dominate cyberspace and become “attached” to the government in a way that makes certain restrictions pervasive, causing what he calls “potential social consequences,” though he did not specify what those restrictions might be. “If you end up in an environment where there is pervasive censorship, pervasive control, then you have a real problem,” Andreessen said.

The solution, as he describes it, is to ensure competition in the technology industry and diversity in approaches to content moderation, some with greater restrictions on speech and conduct than others. “What happens on these platforms really matters,” he said. “What happens in these systems really matters. What happens in these companies really matters.”

Andreessen did not mention X, the social platform run by Elon Musk and formerly known as Twitter, which his firm Andreessen Horowitz invested in when the Tesla CEO took over in late 2022. Musk soon laid off the majority of the company’s trust and safety staff, shut down Twitter’s AI ethics team, relaxed content rules, and reinstated users who had previously been permanently banned.

Those changes, combined with Andreessen’s investment and manifesto, created a perception that the investor wants fewer limits on online speech. His clarifying comments came as part of a conversation with Fei-Fei Li, co-director of Stanford’s HAI, titled “Removing Obstacles to a Robust AI Innovation Ecosystem.”

During the session, Andreessen also repeated arguments he has made over the past year that slowing the development of AI through regulations or other measures recommended by some AI safety advocates would repeat what he sees as America’s mistake of cutting investment in nuclear energy several decades ago.

Andreessen said nuclear power could be a “silver bullet” for many current concerns about carbon emissions from other power sources. Instead, America pulled back, and climate change has not been curbed the way it could have been. “It’s an extremely negative, risk-averse frame,” he said. “The assumption in the discussion is, if there are potential harmful effects, therefore there is a need to regulate, control, limit, suspend, stop, freeze.”

For similar reasons, Andreessen said, he would like to see the government invest more in AI research and infrastructure and give more freedom to AI experimentation, for example with open-source AI models, rather than restricting them in the name of security. But if he wants his son to have a Disneyland-like AI experience, some rules, whether from the government or trust and safety teams, may be necessary.
