
ChatGPT sounds exactly like us. How is that a good thing?


In 1950, Alan Turing, the British computer scientist who helped crack the Enigma code during World War II, wrote a paper posing a seemingly silly question: “Can machines think?” The launch late last year of the eerily lifelike ChatGPT seemed to bring us closer to an answer. Overnight, a fully formed silicon-based chatbot stepped out of the digital darkness. It can crack jokes, write ad copy, debug computer code, and converse about anything and everything. This disturbing new reality has been described as one of the “tipping points” in the history of artificial intelligence.

But it was a long time coming. This particular creation gestated in computer science labs for decades.

To test his proposal for a thinking machine, Turing described an “imitation game” in which an interrogator would question two subjects in another room. One would be a flesh-and-blood human being, the other a computer. The interrogator’s task was to figure out which was which by posing questions through a “remote printer.”

Turing imagined an intelligent computer answering questions fluently enough that the interrogator could not tell human from machine. While acknowledging that the computers of his generation could not pass the test, he predicted that by the end of the century, “one will be able to speak of machines thinking without expecting to be contradicted.”

His essay helped launch research into artificial intelligence. But it also sparked a protracted philosophical debate, because Turing’s argument effectively dismissed the importance of human consciousness. If a machine can only parrot the appearance of thought – without any awareness of doing so – is it really a thinking machine?

For years, the practical challenge of building a machine that could play the imitation game overshadowed these deeper questions. The main obstacle was that human language, unlike complex mathematical calculation, proved remarkably resistant to the application of computing power.

This was not for lack of trying. Harry Huskey, who had worked with Turing, returned to the US to build what the New York Times breathlessly hailed as an “electronic brain” capable of translating languages. The project, funded by the federal government, was driven by Cold War imperatives that made translation from Russian into English a priority.

The idea that words could be translated in a one-to-one fashion – like decoding a cipher – quickly collided with the complexity of syntax, not to mention the ambiguity inherent in individual words. Does “fire” mean a flame? Losing a job? Pulling a trigger?

Warren Weaver, one of the Americans behind these early efforts, realized that context was key. If “fire” appears near “gun,” one can draw certain conclusions. Weaver called these kinds of correlations the “statistical semantic properties of language,” an insight that would have important implications in the decades to come.
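Weaver’s insight – that co-occurrence statistics can disambiguate a word like “fire” – can be sketched in a few lines. The toy corpus and counts below are illustrative inventions, not Weaver’s actual method:

```python
from collections import Counter
from itertools import combinations

# Toy corpus; in practice such counts come from millions of documents.
corpus = [
    "the soldier aimed the gun and held fire",
    "the gun jammed before he could fire",
    "a warm fire burned in the hearth",
    "logs crackled in the fire near the hearth",
]

# Count how often each pair of words appears in the same sentence.
pair_counts = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for pair in combinations(sorted(words), 2):
        pair_counts[pair] += 1

# "fire" co-occurs with "gun" in some contexts and "hearth" in others;
# those statistics hint at which sense of "fire" is meant.
print(pair_counts[("fire", "gun")])     # → 2
print(pair_counts[("fire", "hearth")])  # → 2
```

A real system would normalize these counts into probabilities over a far larger window of text, but the principle – meaning inferred from statistical neighborhood – is the same.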

The achievements of this first generation were modest by today’s standards. Translation researchers found themselves hampered by the sheer variety of language, and by 1966 a government-sponsored report had concluded that machine translation was a dead end. Funding dried up for years.

But others pressed ahead with research into what became known as natural language processing, or NLP. These early efforts aimed to show that a computer, given enough rules to guide its responses, could at least play the imitation game.

Typical of these efforts was a program that a group of researchers unveiled in 1961. Dubbed “Baseball,” it was billed as a “first step” toward letting users ask the computer questions in plain English and have it answer directly. But there was a catch: users could only ask questions about the baseball statistics stored in the machine’s memory.

This chatbot was soon joined by other innovations born in the Jurassic era of digital technology: SIR (Semantic Information Retrieval), launched in 1964; ELIZA, which answered questions in the manner of an empathetic therapist; and SHRDLU, which let users instruct a computer to move shapes around using everyday language.

Though crude, many of these early experiments spurred innovations in how humans and computers might interact – for example, how a computer could be programmed to “listen” to a question, transform it, and answer in a way that sounds plausible and lifelike, reusing the words and ideas posed in the original query.
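That listen-transform-reflect trick can be sketched as a simple pattern-match-and-substitute routine. The patterns and pronoun swaps below are invented for illustration; they are not Weizenbaum’s original ELIZA script:

```python
import re

# Swap first-person words for second-person ones so the reply
# reflects the user's own phrasing back at them.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    # Match a known sentence pattern, then reuse the captured words.
    match = re.match(r"i feel (.*)", utterance, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"i am (.*)", utterance, re.IGNORECASE)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    return "Tell me more."

print(respond("I feel anxious about my job"))
# → "Why do you feel anxious about your job?"
```

No understanding is involved: the program never knows what a job is, yet the echoed phrasing sounds attentive – exactly the surface fluency these demonstrations traded on.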

Others sought to train computers to produce original works of poetry and prose using a combination of rules and randomly generated words. In the 1980s, for example, two programmers published The Policeman’s Beard Is Half Constructed, widely considered the first book written entirely by a computer.

But these demonstrations obscured a deeper revolution brewing in the NLP world. As computing power grew exponentially and more and more works became available in machine-readable formats, it became possible to build increasingly sophisticated models quantifying the probability that words would appear together.

This period, which one account aptly described as an era of “big data mining,” exploded with the advent of the internet, which provided an ever-growing corpus of text from which to derive “soft” probabilistic guidelines that let computers grasp the nuances of language. Instead of hard-and-fast rules that sought to anticipate every permutation of language, the new statistical approach was flexible and probabilistic.
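At its simplest, the statistical turn boils down to counting: estimate the probability of one word following another from a corpus. A minimal bigram sketch, with an invented toy corpus standing in for the internet-scale text real systems used:

```python
from collections import Counter

# Toy corpus; real models counted over billions of words and
# applied smoothing to handle pairs never seen in training.
text = ("the cat sat on the mat . the cat saw the dog . "
        "the dog sat on the rug .").split()

bigrams = Counter(zip(text, text[1:]))  # adjacent word pairs
totals = Counter(text[:-1])             # how often each word leads a pair

def prob(current: str, nxt: str) -> float:
    """Estimate P(next word | current word) from raw counts."""
    return bigrams[(current, nxt)] / totals[current]

# After "the", this corpus makes "cat" twice as likely as "rug" -
# a soft preference rather than a hard grammatical rule.
print(prob("the", "cat"))
print(prob("the", "rug"))
```

Chaining such conditional probabilities lets a program rank whole sentences by plausibility – the “soft” guidance the article describes, with no grammar rules written down anywhere.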

The proliferation of commercial chatbots grew out of this research, as did other applications: voice recognition, translation software, and the autocorrect features now ubiquitous in our increasingly wired lives. But as anyone who has ever yelled at an automated airline agent knows, these tools certainly have their limits.

In the end, it turned out that the only way to build a machine that could win the imitation game was to imitate the human brain, with its billions of interconnected neurons and synapses. So-called artificial neural networks work in an analogous way, sifting through data and forging ever-stronger connections over time through a process of feedback.

The key is another distinctly human tactic: practice, practice, practice. If you train a neural network by having it read books, it can start generating sentences that mimic the language in those books. And if you have a neural network read, say, everything ever written, it can communicate really, really well.
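That feedback process can be illustrated with the smallest possible “network” – a single artificial neuron that nudges its connection weights after every mistake. This perceptron-style sketch on a toy task (learning logical AND) is a stand-in for the vastly larger training loops behind modern models:

```python
# One artificial neuron learning by feedback: after each wrong answer,
# strengthen or weaken its connections a little. Toy task: logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # connection strengths ("synapses")
b = 0.0          # bias term
lr = 0.1         # learning rate: how big each nudge is

for _ in range(20):             # practice, practice, practice
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out      # feedback signal: how wrong were we?
        w[0] += lr * err * x1   # adjust each connection in proportion
        w[1] += lr * err * x2
        b += lr * err

preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)
# → [0, 0, 0, 1]
```

Nothing here is programmed to know what AND means; the correct behavior emerges from repeated correction – the same learn-by-feedback principle, scaled up billions of times, behind today’s language models.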

That, more or less, is what lies at the heart of ChatGPT. The model beneath it was trained on a vast volume of written work. Indeed, the whole of Wikipedia represents less than 1% of the text it ingested in learning to mimic human speech.

Thanks to this training, ChatGPT can win the imitation game. But something rather curious happened along the way. By Turing’s standard, machines can now think. Yet the only way they could achieve this feat was to become less like machines, with their rigid rules, and more like us.

It’s something to ponder amid all the anxiety ChatGPT has provoked. Imitation is the sincerest form of flattery. But is it the machines we need to fear, or ourselves?

© 2023 Bloomberg LP

