
Computer scientist discusses the pros and cons of ChatGPT


Artificial intelligence chatbot. Credit: Pixabay/CC0 Public Domain

With an uncanny ability to imitate human language and reasoning, ChatGPT seems to herald a revolution in artificial intelligence. The nimble chatbot can conjure up poems and essays, share recipes, translate languages, give advice, and tell jokes, among the countless applications users have tried since the OpenAI research lab released its natural language processing tool in November.

The excitement comes with some anxieties: that the technology could undermine people's ability to write and think critically for themselves, upend industries, and amplify our own stereotypes and biases.

For those who work in artificial intelligence, however, the tool is less of a shock, says Johns Hopkins University assistant professor of computer science Daniel Khashabi, who specializes in natural language processing and has worked on similar tools.

“ChatGPT seems like a sudden revolution that came out of nowhere,” he said. “But the technology has evolved gradually over the years, with rapid progress in the last few years.”

Still, Khashabi acknowledges that ChatGPT seems to usher in an unprecedented era, one brimming with potential for human progress. "This is really an opportunity for us to rethink our understanding of what it means to be smart," he said. "This is an exciting time because we have this opportunity to tackle new challenges and new horizons that we once felt were beyond our reach."

With Microsoft investing in the tool, OpenAI releasing a paid version, and Google planning to release its own experimental chatbot, the Hub caught up with Khashabi for insight into the technology and where it is headed.

Can you break down how ChatGPT works?

The first stage, generally called "self-supervised" learning, does not involve direct human feedback. The model learns the structure of language by ingesting large volumes of text from the web, for example sentences and paragraphs from Wikipedia, Twitter, Reddit, The New York Times, and so on, in many different languages. It is also trained on code written by programmers, from platforms like GitHub.
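To make that first stage concrete, here is a minimal, purely illustrative sketch of the self-supervised objective, predicting the next token from raw text with no human labels. The tiny corpus and the simple counting model are assumptions for illustration only; real systems train large neural networks on web-scale data.

```python
# Toy illustration of the self-supervised objective in stage one:
# the "label" for each position is simply the next token in raw text,
# so no human annotation is needed.
import random
from collections import defaultdict, Counter

corpus = (
    "the model learns the structure of language from raw text "
    "the model learns to predict the next word from the words before it"
)

tokens = corpus.split()
next_word_counts = defaultdict(Counter)
for current, following in zip(tokens, tokens[1:]):
    next_word_counts[current][following] += 1  # supervision comes from the text itself

def generate(start: str, length: int = 8) -> str:
    """Greedily continue a prompt using the learned transition counts."""
    words = [start]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))
```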

In the second stage, human annotators get involved to train the model to be more capable. They write responses to the various types of queries ChatGPT receives, so the model learns to perform tasks from instructions like "write an essay on this topic" or "revise this passage."
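A hypothetical sketch of how such annotator-written answers become supervised training examples is below. The field names and the helper function are illustrative assumptions, not OpenAI's actual data format or training pipeline.

```python
# Hypothetical sketch of stage two: annotator-written responses paired with
# the instructions they answer, turned into (input, target) training pairs.
annotated_examples = [
    {
        "instruction": "Write an essay on this topic: the history of the bicycle.",
        "response": "The bicycle emerged in the early 19th century ...",
    },
    {
        "instruction": "Revise this passage for clarity: 'The results was good.'",
        "response": "The results were good.",
    },
]

def to_training_pair(example: dict) -> tuple[str, str]:
    """Map one annotated example to the (prompt, target) pair the model is
    trained on, so it learns to follow instructions rather than just continue text."""
    return example["instruction"], example["response"]

training_pairs = [to_training_pair(ex) for ex in annotated_examples]
for prompt, target in training_pairs:
    print(f"PROMPT: {prompt}\nTARGET: {target}\n")
```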

Since OpenAI is sitting on a goldmine of funding, it can afford to hire many annotators and have them annotate a lot of high-quality data. I've heard rumors that the original system was trained with close to 100,000 rounds of human feedback. So there is a lot of human labor behind this.

But OpenAI's secret weapon is not its AI technology; it's the people who use its service. Every time someone queries the system, those queries are collected to adapt ChatGPT to what users are looking for and to identify weaknesses in the system. In other words, OpenAI's success lies in attracting millions of people to use its demo.
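The feedback loop described here might look something like the hypothetical sketch below: logging user queries along with a simple thumbs-up/down signal and counting where the system disappoints. Nothing in it reflects OpenAI's actual infrastructure; the class and field names are invented for illustration.

```python
# Hypothetical sketch of collecting user queries and feedback to find weak spots.
from collections import Counter
from dataclasses import dataclass

@dataclass
class QueryLog:
    query: str
    model_answer: str
    user_liked: bool  # e.g., a thumbs-up / thumbs-down button in the demo

logs = [
    QueryLog("translate 'hello' to French", "bonjour", True),
    QueryLog("who won the 2022 World Cup?", "France", False),
    QueryLog("who won the 2022 World Cup?", "Argentina", True),
]

# Count disliked answers per query to prioritize what to fix or re-annotate.
weak_spots = Counter(log.query for log in logs if not log.user_liked)
print(weak_spots.most_common())
```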

How have you personally tested ChatGPT?

It can be a great writing and brainstorming tool. I can write a summary of an idea I have in mind, ask ChatGPT to expand on it in a more sophisticated way, then pick the results I like and develop them further, either on my own or by continuing to use ChatGPT. It's a collaborative way of writing between humans and machines. As someone on Twitter aptly put it: "ChatGPT is an electric bike for your brain!"

What do you think of all the attention and press it is getting?

It’s another major milestone for the advancement of AI and deserves to be celebrated. It’s exciting that AI and natural language processing are getting closer to helping humans perform the tasks they care about.

However, I worry about overstating the state of AI. Progress has been made, but "general intelligence" has yet to emerge. Over the past few decades, we have continually revised our notion of what it means to be "smart" as we progressed. In the 1960s and 1970s, the milestone goal was to build a system that could play chess against a human. There are many examples like this. Every time we make progress, we think, "This is it!" But over time the hype dies down, we see the problems, and we identify new needs.

“Intelligence” has always been a moving target and likely remains one, but I’m excited by the progress I’ve seen in finding the shortcomings of systems like ChatGPT.

What are those shortcomings?

It's easy for ChatGPT to make things up. If you ask ChatGPT about something it hasn't seen before, it creates an illusion of truth in fluent, persuasive language. For example, if you ask it, "In what tournament did Venus Williams win her eighth Grand Slam?" it will make up an answer for you, even though Venus Williams has won seven Grand Slam titles. She wanted to win an eighth, as many outlets reported, but she didn't, and the model conflates "wanting to win" with "winning."

And the problem is that it does this so fluently. It may give you rubbish, but in such fluent, coherent language that, unless you're an expert in the field, you may believe what it says is true. That worries me; I think it's easy for us humans to be taken in by output that merely sounds convincing.

On the other hand, what’s interesting about ChatGPT?

We now have tools that can produce creative, fluent language, a challenge that took us years to solve. As an AI scientist, my excitement is about what comes next: the new problems we can now pose for AI to solve.

I'm less excited about the classic goal of AI, reverse-engineering human intelligence, and more excited about IA, intelligence augmentation. I think using AI to help people do things better and to extend human capabilities is a worthwhile goal. I'm very excited about those kinds of collaborative systems.

How do you see technology evolving?

We are still in the middle of this transition, but we will continue to make language models more efficient, resulting in much more compact yet high-quality models. As a result, we will see highly reliable conversational agents everywhere. Future models will be your assistant for navigating the web, completing many of the common web-based tasks we perform ourselves today.

The same set of technologies is also beginning to make its way into the physical world. Existing models such as ChatGPT are not aware of their environment; for example, they can't see where my phone is or how tired I am. Soon we will see versions of ChatGPT with eyes and ears, so to speak: models that use the different data modalities (text, images, audio, and so on) necessary to serve us in everyday life.

This will lead to models that learn from data about their physical surroundings, including physical objects, people, and their interactions. The impact here will be huge. In less than 10 years, every physical device we use daily (cars, refrigerators, washing machines, and so on) will be a conversational agent you can talk to. We will also see robots that can solve problems out of reach today. Imagine talking to your Roomba about things you want it to do, or not do, the way you chat with ChatGPT.

It is essential not to forget how these technologies will change things at the societal level. Future multimodal models, essentially ChatGPT with eyes and ears, will be everywhere and will affect everything, including public safety. But here is the concern: in a society where AI models with eyes and ears follow us constantly and keep getting better at it, what will become of our freedom and privacy?

That sounds like the dark society described in the novel 1984. Like any other technology, these models are a double-edged sword. The best we can do now is to stay alert, and to anticipate and debate these issues before the applications ramp up. Ideally, we should develop frameworks that guarantee our freedom and fairness, extrapolating from examples like ChatGPT to its future extensions. I am optimistic that we will.

Citation: Computer scientist discusses the pros and cons of ChatGPT (2023, February 10) retrieved 10 February 2023 from https://techxplore.com/news/2023-02-scientist-discusses-pros-cons-chatgpt.html

