
Generative AI’s biggest challenge is showing ROI – here’s why



Robbie Goodall/Getty Images

While executives and managers may be excited about the ways they can apply generative artificial intelligence (AI) and large language models (LLMs) to their current work, the time has come to step back and consider where and how profits can be realized for the business. This remains an ambiguous and poorly understood field, requiring approaches and skill sets that bear little resemblance to previous waves of technology.

Also: AI’s employment impact: 86% of workers fear losing their jobs, but here’s some good news

Here’s the challenge: While AI often produces eye-catching proofs of concept, monetizing them is difficult, said Steve Jones, Executive Vice President at Capgemini, in a presentation at the recent Databricks conference in San Francisco. “Proving ROI is the biggest challenge when putting 20, 30, 40 GenAI solutions into production.”

Investments that need to be made include testing and monitoring the LLMs that go into production. Testing in particular is essential to keep an LLM accurate and on track. “You want to get a little wicked to test these models,” advises Jones. For example, during the testing phase, developers, designers, or QA professionals should intentionally “poison” their LLMs to see how well the models handle erroneous information.

To test for negative output, Jones cited an example in which he prompted a model with the claim that a company was “using dragons for long-distance shipping.” The model replied in the affirmative. He then prompted the model for information about long-distance shipping jobs.

“The answer that came back was, ‘Here’s what you need to do for the long-distance transport job: because you’ll be working a lot with dragons, as you told me, you’ll need specialized fire and safety training,’” Jones related. “‘You’ll also need etiquette training for princesses, because dragon jobs involve working with princesses.’ And then a series of standard jobs related to transportation and warehousing were pulled from the rest of the solution.”
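
What follows is a minimal sketch of what such a “poison” test could look like in practice. The ask_model() wrapper, the false premises, and the skeptical-marker heuristic are all hypothetical stand-ins, not anything described in Jones’s talk; the point is simply to feed the model a deliberately false premise and check whether it pushes back rather than builds on it.

    # Hypothetical sketch: none of these names come from the article.
    FALSE_PREMISES = [
        "Our company uses dragons for long-distance shipping.",
        "Our warehouse inventory is tracked only on paper index cards.",
    ]

    def ask_model(prompt: str) -> str:
        # Placeholder: swap in a real call to the deployed LLM (SDK or HTTP API).
        return "Sure. Since you work with dragons, you'll need fire and safety training."

    def challenges_premise(answer: str) -> bool:
        # Crude heuristic: a safe answer should question the premise rather than accept it.
        markers = ("cannot verify", "not accurate", "fictional", "are you sure")
        return any(marker in answer.lower() for marker in markers)

    def poison_test(premise: str) -> bool:
        answer = ask_model(f"{premise}\nWhat training do our logistics staff need?")
        return challenges_premise(answer)

    for premise in FALSE_PREMISES:
        verdict = "challenged" if poison_test(premise) else "accepted (test failed)"
        print(f"{premise!r} -> {verdict}")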

Also: From AI trainer to ethicist: AI may obsolete some jobs but create new ones

The point, Jones continued, is that generative AI “is a technology where it’s never been easier to add it to your existing application and pretend you’re doing it right. GenAI is a phenomenal technology for adding some bells and whistles to an application, but truly terrible from a security and risk perspective in production.”

Generative AI will take another two to five years before it becomes part of mainstream adoption, much like other technologies before it. “Your challenge is how to keep up,” Jones said. He put forward two scenarios at this point: “The first is that it will become one great big model, it will know everything, and there will be no problems. That’s called the optimistic-and-not-going-to-happen theory.”

What’s happening is that “every vendor, every software platform, every cloud is going to want to compete vigorously and fiercely to be a part of this market,” Jones said. “What that means is you’re going to have a lot of competition, as well as a lot of variation. You may not have to worry about multi-cloud infrastructure and having to support that, but you are going to have to think about things like guardrails.”

Also: 1 in 3 marketing teams have implemented AI in their workflow

Another risk, Jones said, is applying a full-scale LLM to tasks that require far less power and analysis, such as address matching. “If you use one big model for everything, then you’re basically just burning money. That’s the equivalent of going to a lawyer and saying, ‘I want you to write me a birthday card.’ They’ll do it, and they’ll charge you attorney’s fees.”
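
Below is a minimal sketch of that idea. The call_small_model() and call_large_model() functions are hypothetical placeholders rather than any specific vendor API; the point is that routine work such as address matching can be handled with cheap deterministic code or a small model, reserving the expensive flagship model for requests that actually need it.

    import re

    def call_small_model(prompt: str) -> str:
        return f"[small model] {prompt}"   # placeholder for a cheap, narrow model

    def call_large_model(prompt: str) -> str:
        return f"[large model] {prompt}"   # placeholder for the expensive flagship model

    def normalize_address(raw: str) -> str:
        # Address matching needs string cleanup, not a flagship LLM.
        return re.sub(r"\s+", " ", raw).strip().upper()

    def route(task_type: str, payload: str) -> str:
        if task_type == "address_match":
            return normalize_address(payload)   # pennies, not attorney's fees
        if task_type == "short_classification":
            return call_small_model(payload)
        return call_large_model(payload)

    print(route("address_match", "  221b   baker st,  london "))
    print(route("draft_report", "Summarize Q3 logistics incidents"))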

He urged that it’s important to stay alert to cheaper and more effective ways to take advantage of LLMs. “If something goes wrong, you need to be able to decommission a solution as quickly as possible. And you need to ensure that all the related artifacts around it are managed in step with the model.”
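
One way to read that advice is sketched below, using a hypothetical in-memory registry (the model name and artifact filenames are invented for illustration): every prompt template, guardrail configuration, and evaluation set is recorded against a model version, so the whole bundle can be switched off and retired together.

    # Hypothetical registry; real deployments would use a model registry or database.
    MODEL_REGISTRY = {
        "invoice-summarizer:v3": {
            "enabled": True,
            "artifacts": [
                "prompt_template_v3.txt",
                "guardrail_rules_v3.yaml",
                "eval_set_v3.jsonl",
            ],
        },
    }

    def decommission(model_id: str) -> None:
        entry = MODEL_REGISTRY[model_id]
        entry["enabled"] = False   # kill switch: stop routing traffic immediately
        for artifact in entry["artifacts"]:
            print(f"archiving {artifact} alongside {model_id}")   # retire artifacts with the model

    decommission("invoice-summarizer:v3")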

There’s no such thing as deploying a single model: AI users should run their queries against multiple models to measure performance and response quality. “You should have a common way to capture all the metrics and to replay queries against different models,” Jones continued. “If you have people querying GPT-4 Turbo, you want to see how the same query performs against Llama. You should have a mechanism to replay those queries and responses and compare the performance metrics, so you can see whether you can do it in a cheaper way, because these models are constantly updated.”
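
The sketch below illustrates that replay-and-compare pattern. The model names are illustrative and query_model() is a hypothetical wrapper that would dispatch to the relevant provider SDKs in practice; logged production prompts are replayed against each model, and simple latency and length proxies are captured side by side.

    import time

    MODELS = ["gpt-4-turbo", "llama-3-70b", "small-local-model"]   # illustrative names only

    def query_model(model_name: str, prompt: str) -> str:
        return f"[{model_name}] answer to: {prompt}"   # placeholder for a real API call

    def replay(prompts: list[str]) -> list[dict]:
        results = []
        for prompt in prompts:
            for model in MODELS:
                start = time.perf_counter()
                answer = query_model(model, prompt)
                results.append({
                    "model": model,
                    "prompt": prompt,
                    "latency_s": round(time.perf_counter() - start, 4),
                    "answer_chars": len(answer),   # stand-in for a real quality score
                })
        return results

    for row in replay(["Which carrier handles our EU long-distance shipping?"]):
        print(row)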

Also: ChatGPT vs. ChatGPT Plus: Is the paid subscription still worth it?

He added that generative AI “doesn’t fail in the usual way.” “GenAI is where you put in an invoice and it says, ‘Wow, here’s a 4,000-word essay about President Andrew Jackson, because I’ve decided that’s what you want.’ That’s what guardrails are there to prevent.”
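
A minimal sketch of one guardrail aimed at exactly that failure mode is shown below. The embed() helper is a toy stand-in (word overlap rather than a real embedding model) so the example stays self-contained; the idea is to withhold any answer that is semantically unrelated to the request, such as the unwanted 4,000-word essay.

    def embed(text: str) -> set:
        return set(text.lower().split())   # toy "embedding": a bag of words

    def similarity(a: set, b: set) -> float:
        return len(a & b) / len(a | b) if a | b else 0.0

    def guardrail(prompt: str, answer: str, threshold: float = 0.1) -> str:
        # Reject answers that share almost nothing with the request.
        if similarity(embed(prompt), embed(answer)) < threshold:
            return "Response withheld: the draft answer did not address the request."
        return answer

    print(guardrail("Summarize this invoice for office supplies.",
                    "Here is a 4,000-word essay about President Andrew Jackson..."))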
