
Researchers say ‘responsible’ AI can protect children on social media



Credit: Unsplash/CC0 Public Domain

New research by academics at the University of Warwick's Warwick Business School argues that "responsible" AI can shield children from both illegal content and legal but harmful content online.

The study presents a framework that uses so-called "responsible" AI to help moderate content. It was published in Competitive Advantage in the Digital Economy (CADE 2022).

The proposed system can sift through large volumes of language and images to compile a "dictionary" of insights into each of the most harmful areas that threaten children and young people, including hate speech, cyberbullying, suicide, anorexia, child abuse and child sexual abuse.

The growing popularity of social media has led to increasing calls for scrutiny of how online content is moderated for vulnerable groups, such as children. However, the sheer volume of online content makes moderation difficult.

The system uses natural language processing algorithms with an added layer of knowledge, which allows the technology to understand language more like a human does.

This means the technology can understand the context of comments, the nuances of speech, and the social relationships between individuals, taking into account their age and relationships.
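The combination of a harm "dictionary" with a contextual layer could be sketched, very loosely, as follows. This is a toy illustration only: the categories, phrases, and function names are assumptions for demonstration, not the authors' actual system, which would use far more sophisticated natural language processing.

```python
# Hypothetical sketch: dictionary-based flagging with a simple context
# layer. All category keywords and cue phrases below are illustrative
# assumptions, not taken from the published framework.

HARM_DICTIONARY = {
    "cyberbullying": {"loser", "nobody likes you"},
    "self_harm": {"kill myself", "want to die"},
}

# Toy "knowledge layer": phrases whose presence suggests a benign
# context (e.g. support or quotation) rather than an attack.
BENIGN_CONTEXT = {"don't say", "it's wrong to", "reach out if you"}

def flag_message(text: str) -> list[str]:
    """Return the harm categories matched, unless benign context is found."""
    lowered = text.lower()
    if any(cue in lowered for cue in BENIGN_CONTEXT):
        return []  # context suggests the phrase is quoted or supportive
    return [cat for cat, phrases in HARM_DICTIONARY.items()
            if any(p in lowered for p in phrases)]

print(flag_message("You're a loser, nobody likes you"))   # ['cyberbullying']
print(flag_message("It's wrong to call someone a loser"))  # []
```

The second call shows the point of the contextual layer: the same keyword appears, but the surrounding phrasing marks it as benign, so no flag is raised.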

The new research proposes a content moderation system that takes into account the context of behaviors and social harms, which may require human interpretation, combined with an AI system that can trawl through the vast amounts of information that social media giants create.

Shweta Singh, a professor at Warwick Business School, University of Warwick, is one of the study's authors and plans to give evidence before Congress on children's online safety. She commented, "So far, legislators have largely allowed social media platforms to mark their own homework, causing outrage from both caregivers of vulnerable children and defenders of free speech.

"Lawmakers need to better understand the technology they're seeking to regulate. Tech businesses have little incentive to hold AI accountable, with critics recently speaking out against Meta's censorship methods and their harmful effects.

"If regulators understood what is possible, 'smart' technology that can read between the lines and sift benign communications from malicious ones, they could demand its presence in the laws they are trying to pass."

More information:
S. Singh et al, Framework for Responsible AI Integration into Social Media Platforms, Competitive Advantage in the Digital Economy (CADE 2022) (2022). DOI: 10.1049/icp.2022.2051

Citation: Researchers say 'responsible' AI can protect children on social media (2023, February 7) retrieved February 7, 2023 from https://techxplore.com/news/2023-02-responsible-ai-children-social-media.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
