We need bold thinkers to challenge AI, not lazy prompt writers, says bank CIO
After Boston Consulting Group published a 2023 report finding that its consultants were more productive when using OpenAI’s GPT-4, the firm faced backlash suggesting that clients should simply use ChatGPT for free instead of retaining its services for millions of dollars.
Here’s the reasoning: consultants will get their answers or advice from ChatGPT anyway, so clients might as well skip the middleman and go straight to ChatGPT.
Also: Mastering AI without tech skills? Why complex systems require diverse learning
There’s a valuable lesson here for anyone hiring, or looking to be hired, for AI-related jobs, whether developers, consultants, or business users. The message of this critique is that anyone, even someone with limited or inadequate skills, can now use AI to get ahead, or at least to appear ahead. The playing field has been leveled. What’s needed are people who can bring perspective and critical thinking to the information and results that AI provides.
Even highly skilled scientists, technologists, and subject matter experts can fall into the trap of relying too heavily on AI for their outputs — rather than their own expertise.
According to research on the topic, “AI solutions can also exploit our cognitive limitations, making us vulnerable to illusions of understanding in which we believe we understand more about the world than we actually do.”
Even scientists trained to look critically at information are being lured by machine-generated insights, warn researchers Lisa Messeri of Yale University and M.J. Crockett of Princeton University.
“Such illusions obscure the scientific community’s ability to see the formation of scientific monocultures, in which certain types of methods, questions, and perspectives come to dominate alternatives, making science less innovative and more prone to error,” their study said.
Messeri and Crockett state that, in addition to concerns about AI ethics, bias, and job displacement, the risks of overreliance on AI as a source of expertise are only beginning to be understood.
In a mainstream business context, there are consequences when users become too dependent on AI, ranging from lost productivity to loss of trust. For example, users “can shift, change, and transform their actions to match the AI’s recommendations,” Microsoft’s Samir Passi and Mihaela Vorvoreanu observed in an overview of research on the topic. Users also “have difficulty evaluating AI performance and understanding how AI impacts their decisions.”
That is the view of Mai, chief innovation officer at Esquire Bank, who sees AI as a key tool for customer engagement while warning against overusing it as a substitute for human experience and critical thinking. Esquire Bank provides specialized financing to law firms and wants people who understand the business and what AI can do to advance it. I recently met Mai at Salesforce’s New York conference, where he shared his experience and perspective on AI.
Mai, who has risen through the ranks from programmer to multifaceted CIO, thinks AI is likely to be one of the most valuable productivity tools on the horizon. But he also worries that relying too much on generative AI, whether for content or code, will dull the quality and sharpness of human thinking.
Also: Beyond programming: AI creates a new generation of job roles
“We found that great brains and great results are not necessarily as good as someone who is willing to think critically and form their own perspective on what AI brings to the table in the form of suggestions,” he said. “We want people who are emotionally and interpersonally aware enough to say, ‘hmm, this doesn’t seem right,’ and brave enough to talk to someone to make sure there’s a person involved.”
Esquire Bank is using Salesforce tools to capture both sides of AI — generative and predictive. Predictive AI gives the bank’s decision makers insights into “which attorneys are visiting their website and helps personalize services based on those visits,” said Mai, whose CIO role includes customer engagement and IT systems.
As a fully virtual bank, Esquire uses many of its AI systems across its marketing teams, combining AI-generated content with predictive AI algorithms behind the scenes.
“Every person’s experience is different,” Mai says. “So we’re using AI to predict what content they’re going to get next. It’s based on all the analytics behind the scenes and in the system about what we can do with that particular prospect.”
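To make that idea concrete, here is a minimal, purely illustrative sketch of the kind of “next best content” logic Mai describes: score candidate pieces of content against a prospect’s page-visit analytics and surface the closest match. The data shapes, topic tags, and scoring below are hypothetical assumptions for illustration; Esquire Bank’s actual Salesforce-based system is not public.

```python
# Illustrative sketch only: a toy "next best content" picker that scores
# candidate articles against a prospect's page-visit history.
# All names and tags are hypothetical.
from collections import Counter

def next_best_content(visited_topics: list[str], candidates: dict[str, set[str]]) -> str:
    """Return the candidate content whose topic tags best match the visitor's history.

    visited_topics -- topic tags inferred from pages the attorney has viewed
    candidates     -- content title -> set of topic tags describing that content
    """
    interest = Counter(visited_topics)  # how often each topic appears in the visit history
    def score(tags: set[str]) -> int:
        return sum(interest[tag] for tag in tags)
    return max(candidates, key=lambda title: score(candidates[title]))

if __name__ == "__main__":
    history = ["litigation-funding", "case-cost-financing", "litigation-funding"]
    library = {
        "Guide to case cost financing": {"case-cost-financing", "litigation-funding"},
        "Opening a trust account": {"trust-accounts", "operations"},
    }
    print(next_best_content(history, library))  # -> "Guide to case cost financing"
```

A production system would replace this frequency count with a trained predictive model, but the shape of the decision is the same: behavioral signals in, a personalized content recommendation out.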
Also: Artificial intelligence is the technology that IT feels the most pressure to exploit.
While working closely with AI, Mai noticed an interesting change in human nature: people tend to set aside their own judgment and diligence as they come to rely on these systems. “For example, we found that some people get lazy – they prompt something and then decide, ‘well, that seems like a really good response,’ and send it.”
When Mai senses an overreliance on AI, “I’ll take them into my office and say, ‘I’m paying for your opinion, not the AI prompts and responses you’re going to pass along to me. I’m not looking for you to simply get the results and hand them back to me; I’m looking for your critique.’”
Still, he encourages his tech team members to shift mundane development tasks to generative AI tools and platforms, freeing up their time to work more closely with the business. “Developers are finding that 60% of the time they used to spend writing administrative code that wasn’t necessarily groundbreaking can now be handled by AI through prompts.”
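The workflow Mai describes might look something like the sketch below: a developer hands the routine boilerplate to a generative model via a prompt, then keeps the human review step he insists on. This assumes the OpenAI Python SDK (v1.x) with an API key in the environment, and the prompt and model name are placeholders; it is not a depiction of Esquire Bank’s actual tooling.

```python
# Illustrative sketch only: offloading routine boilerplate to a generative AI
# model via a prompt, while keeping a human review step.
# Assumes the OpenAI Python SDK (v1.x) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Write a small Python class that wraps CRUD calls to a REST endpoint "
    "for 'attorney' records, with retries and docstrings. Plain requests only."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": PROMPT}],
)

draft = response.choices[0].message.content

# The generated code is a draft, not a deliverable: a developer still reviews it,
# questions anything that "doesn't seem right," and owns the final result.
print(draft)
```

The point of the review comment at the end is Mai’s point: the model produces the administrative code, but the developer’s critique is what the bank is paying for.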
Also: Will AI Hurt or Help Workers? It’s Complicated
As a result, he sees “the line between a classic programmer and a business analyst blurring, because programmers don’t spend so much time doing things that don’t really add value. It also means that business analysts can become software developers.”
“It will be exciting when I can sit in front of a podium and say, ‘I want a system that does this, this, this and this’ and it does it.”