
Book review: Why we need to treat tomorrow’s robots like tools


Don’t be swayed by the enticing dial tones of tomorrow’s AIs and their siren songs of the singularity. No matter how human artificial intelligences and robots may look and act, they will never truly be people, and therefore should not be treated like people, argue Paul Leonardi, Duca Family Professor of Technology Management at the University of California, Santa Barbara, and Tsedal Neeley, Naylor Fitzhugh Professor of Business Administration at Harvard Business School, in their new book The Digital Mindset: What It Really Takes to Thrive in the Age of Data, Algorithms, and Artificial Intelligence. In the excerpt below, the pair argue that treating AI as if it were human hinders our interactions with cutting-edge technology and impedes its further development.

Digital Mindset cover

Harvard Business Review Press

Reprinted with permission from Harvard Business Review Press. Excerpted from THE DIGITAL MINDSET: What It Really Takes to Thrive in the Age of Data, Algorithms, and Artificial Intelligence by Paul Leonardi and Tsedal Neeley. Copyright 2022 Harvard Business School Publishing Corporation. All rights reserved.


Treat AI like a machine, even if it seems to act like a human

We are used to interacting with computers in an intuitive way: buttons, drop-down lists, sliders, and other features allow us to give computers commands. However, advances in AI are shifting our interactions with digital tools toward more natural, human-like exchanges. The so-called conversational user interface (UI) lets people engage with digital tools by writing or talking, much the way we interact with other people; think of Burt Swanson’s “conversation” with his assistant Amy. When you say “Hey Siri,” “Hello Alexa,” or “OK Google,” you are using a conversational UI. The growth of tools driven by conversational UIs is astounding. Every time you call an 800 number and are asked to spell your name, answer “yes,” or say the last four digits of your Social Security number, you are interacting with an AI that uses a conversational UI. Chatbots have become popular partly because they make good business sense and partly because they let us access services more efficiently and conveniently.

For example, if you have booked train travel with Amtrak, you may have interacted with an AI chatbot. Its name is Julie, and it answers more than 5 million questions annually from more than 30 million passengers. You can book rail travel with Julie simply by saying where you’re going and when. Julie can pre-fill forms on Amtrak’s scheduling tool and guide you through the rest of the booking process. Amtrak has seen an 800% return on its investment in Julie. The company saves more than $1 million in customer service costs each year by using Julie to field low-level, predictable questions. Bookings have increased by 25%, and bookings made through Julie generate 30% more revenue than bookings made through the website, because Julie is good at upselling customers!

One reason for Julie’s success is that Amtrak makes it clear to users that Julie is an AI agent, and it tells you why it decided to use AI rather than connect you directly with a human. That means people orient to it as a machine, not as a human being. They don’t expect too much from it, and they tend to ask questions in ways that elicit helpful answers. Amtrak’s decision may sound counterintuitive, since many companies try to pass off their chatbots as real people, and it would seem that interacting with a machine as if it were human is exactly how to get the best results. A digital mindset requires a shift in how we think about our relationship with machines. Even as they become more human-like, we need to think of them as machines, requiring explicit instructions and focused on narrow tasks.

x.ai, the company that created the Amy meeting scheduler, lets you schedule a meeting at work or invite a friend to your kid’s basketball game simply by emailing Amy (or her counterpart, Andrew) with your request, as if they were a live personal assistant. Yet Dennis Mortensen, the company’s CEO, observes that more than 90% of the inquiries the company’s help desk receives come from people trying to use natural language with the bots and struggling to get good results.

Perhaps that is why scheduling a simple meeting with a new acquaintance became so frustrating for Professor Swanson, who kept trying to use colloquialisms and the conventions of informal conversation. Beyond the way he talked, he made many perfectly valid assumptions about his interaction with Amy. He assumed Amy could understand his scheduling constraints and that “she” would be able to discern his preferences from the context of the conversation. Swanson was informal and casual; the bot doesn’t get that. It doesn’t understand that when you are asking for someone else’s time, especially if they are doing you a favor, frequent or last-minute changes to the meeting logistics won’t go over well. It turns out that casually interacting with an intelligent robot is harder than we thought.

Researchers have validated the idea that treating machines like machines works better than trying to be human with them. Stanford professor Clifford Nass and Harvard Business School professor Youngme Moon conducted a series of studies in which people interacted with anthropomorphic computer interfaces. They found that individuals tend to overuse human social categories, applying gender stereotypes to computers and ethnically identifying with computer agents. Their findings also showed that people exhibit over-learned social behaviors, such as politeness and reciprocity, toward computers. Importantly, people tend to engage in these behaviors, treating robots and other intelligent agents as if they were people, even when they know they are interacting with a computer rather than a human. It seems that our collective impulse to relate to people often creeps into our interactions with machines.

This problem of mistaking computers for humans is compounded when interacting with artificial agents through conversational UIs. Take, for example, a study we conducted with two companies that used AI assistants to answer routine business queries. One used an anthropomorphized, human-like AI. The other did not.

Workers at the company that used the anthropomorphized agent routinely got mad at it when it failed to return useful answers. They often said things like “He sucks!” or “I hope he does better” when referring to the results the machine produced. Most important, their strategies for improving the relationship with the machine mirrored the strategies they would use with other people in the office: they asked their questions more politely, rephrased them in different words, or tried to time their questions strategically for when they thought the agent would be “not so busy.” None of these strategies was particularly successful.

In contrast, workers at the other company reported being much more satisfied with their experience. They typed in search terms as they would into a computer, spelling everything out in great detail to ensure that the AI, which could not “read between the lines” or pick up on nuance, would heed their preferences. The second group frequently remarked on how surprised they were when their queries returned useful or even unexpected information, and they chalked up any problems that arose to the typical errors computers make.

For the foreseeable future, the data are clear: treating technologies as technologies, no matter how human-like or intelligent they appear, is the key to successful interactions with machines. A big part of the problem is that these tools set the expectation that they will respond in human-like ways, leading us to assume they can infer our intentions, when they can do neither. Successfully interacting with a conversational UI requires a digital mindset that understands we are still far from effective human-like interaction with the technology. Recognizing that an AI agent cannot accurately infer your intentions means it is important to spell out each step of the process and be clear about what you want to accomplish.



