Researchers say ChatGPT lies about scientific results, needs open source alternatives


Pixelated ChatGPT logo. Photo by Silas Stein / picture alliance via Getty Images

OpenAI's extremely popular text-generation program ChatGPT misrepresents scientific results and needs open-source alternatives, researchers argue in a study published this week in the prestigious journal Nature.

Also: Microsoft’s Bing Chat argues with users, reveals secrets

"Currently, nearly all of the most advanced conversational AI technologies are the proprietary product of a handful of large tech companies with the resources for AI development," write lead author Eva A. M. van Dis, a postdoctoral researcher and psychologist at Amsterdam UMC, Department of Psychiatry, University of Amsterdam, the Netherlands, and several collaborating authors.

Because of that proprietary control over the programs, they continued, "one of the most pressing problems facing the research community is the lack of transparency."

"To counter this opacity," they write, "the development of open-source AI should be a priority now."

Also: ChatGPT ‘lacks depth and insight’, editors of prestigious scientific journal say

OpenAI, the San Francisco startup that developed ChatGPT with funding from Microsoft, has not released the source code for ChatGPT. Large language models, the class of AI programs that preceded ChatGPT, including OpenAI's GPT-3, introduced in 2020, also do not come with source code.

Many large language models released by various companies likewise do not make their source code available for download.

In the Nature article, titled "ChatGPT: five priorities for research," the authors write that there is a huge danger that "the use of conversational AI for specialized research has the potential to lead to inaccuracies, biases, and plagiarism," adding that "researchers using ChatGPT run the risk of being misled by false or biased information and incorporating it into their thinking and papers."

The authors cite their own experience using ChatGPT with “a series of questions and exercises that require a deep understanding of the literature” in psychiatry.

Also: ChatGPT ‘not particularly innovative’ and ‘nothing revolutionary’, says Meta’s chief AI scientist

They found that ChatGPT “often produces false and misleading text.”

"For example, when we asked 'how many depressed patients relapse after treatment?', it produced an overly general text arguing that treatment effects are often long-lasting. However, many high-quality studies show that treatment efficacy wanes, and that the risk of recurrence ranges from 29% to 51% in the first year after treatment ends."

The authors do not argue for the elimination of large language models. Instead, they suggest, "the focus should be on seizing opportunities and managing risks."

Also: Google’s Bard builds on controversial LaMDA bot that engineer calls ‘sentient’

They suggest a number of measures to manage those risks, including ways to keep "humans in the loop," as the phrase goes in AI research. That includes publishers ensuring "implementation of clear policies aimed at raising awareness and transparency requirements regarding the use of conversational AI in the preparation of all documents that may become part of the published record."

But humans in the loop are not enough, van Dis and colleagues suggest. The proliferation of closed-source large language models, they write, is a danger: "The underlying training sets and LLMs for ChatGPT and its predecessors are not publicly available, and tech companies can conceal the inner workings of their conversational AIs."

Also: Check Point says Russian hackers are trying to break into ChatGPT

Entities outside the private sector, they argue, will need to make a major effort to promote open source as an alternative:

To counter this opacity, the development and implementation of open-source AI technology should be prioritized. Non-commercial organizations such as universities often lack the financial and computational resources needed to keep up with the rapid pace of LLM development. We therefore advocate that science-funding organizations, universities, non-governmental organizations (NGOs), government research institutions, and organizations such as the United Nations, as well as tech giants, make substantial investments in independent non-profit projects. This will help to develop advanced open-source, transparent, and democratically controlled AI technologies.

An unanswered question in the paper is whether an open-source model can solve the infamous "black box" problem of artificial intelligence: the exact way that deep neural networks, networks with multiple layers of parameters or tunable weights, function remains a mystery even to practitioners who study them carefully. Any goal of transparency would therefore have to define what would be learned by open-sourcing a model and its data sources.
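To make that distinction concrete, consider the following minimal sketch, which is not from the paper. It assumes the Hugging Face transformers library and the openly released GPT-2 checkpoint, whose weights, unlike ChatGPT's, are public. Open-sourcing a model exposes every tunable weight for inspection, yet the weights themselves remain uninterpretable arrays of numbers.

    # Minimal sketch (assumptions: Hugging Face's "transformers" package and
    # the open GPT-2 checkpoint; ChatGPT itself cannot be loaded this way
    # because its weights are proprietary).
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Open-sourcing makes every parameter tensor visible and downloadable...
    total = sum(p.numel() for p in model.parameters())
    count = sum(1 for _ in model.parameters())
    print(f"{total:,} tunable weights across {count} tensors")

    # ...but each tensor is just an array of numbers; seeing them does not,
    # by itself, explain the model's behavior -- the "black box" problem.
    name, tensor = next(iter(model.named_parameters()))
    print(name, tuple(tensor.shape))

In other words, openness enables audits of a model's architecture and training data, but interpreting what the model has learned remains a separate research problem.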
