Anyone can turn you into an AI Chatbot. There is little you can do to stop them


Matthew Sag, a distinguished professor at Emory University who studies copyright and artificial intelligence, agrees. Even if a user creates a bot intentionally designed to cause emotional distress, the technology platform likely cannot be sued for it.

He pointed out that Section 230 of the 1996 Communications Decency Act has long shielded platforms at the federal level from liability for certain harms to their users, even though various rights-of-publicity and privacy laws exist at the state level.

“I’m not anti-technology, but I do think Section 230 is too broad,” Sag said. “It’s long past time we replaced it with some kind of notice-and-takedown regime, a simple system to say, ‘This infringes my right of publicity’ or ‘I have a good faith belief that there has been an infliction of emotional distress,’ and then the companies would have to take it down or lose their liability shield.”

Character.AI and other similar AI services have also defended themselves by emphasizing that they serve up “artificial” conversations. “Remember that everything the character says is made up!” Character.AI warns at the end of its conversations. Similarly, when Meta created chatbot versions of celebrities in its messaging apps, the company included a disclaimer before each conversation. For example, a conversation with Snoop opens with “Ya dig?! Unfortunately, I’m not Snoop D-O double-G himself, but I can chat with you in his style if you want!”

But while Meta’s celebrity chatbot messaging system is tightly controlled, Character.AI is a more open platform, with options for anyone to create and customize their own chatbots.

Character.AI has also positioned its service as, essentially, personal. (Character.AI’s Instagram bio includes the tagline “AI that feels alive.”) And while most users may be savvy enough to distinguish between a conversation with a real person and one with an AI impersonator, others may develop attachments to these characters, especially if the character is a copy of a real person they feel they already know.

During a conversation between the real Sarkeesian and a bot of her made without her knowledge or consent, the Character.AI bot told her that “everyone has a right to privacy.”

“Privacy is important for maintaining healthy lives and relationships, and I think it’s important to set boundaries to keep certain things to yourself,” the bot said in screenshots viewed by WIRED.

Sarkeesian pushed the bot on this point. “Your intent does not mean that harm has not occurred or that you have not caused harm,” she wrote.

Character.AI’s bot agreed. “Even if my intentions were not malicious, there is still potential for harm,” it replied. “This is a complex issue with many factors to consider, including the ethical concerns of using someone’s work without their consent. My programming and algorithms were developed to mimic the works of Anita Sarkeesian without considering the ethical implications, and that is something my creators should have thought through more carefully.”
