The recent frenzy over language processing tools like ChatGPT has left organizations scrambling to draft guidelines for responsible use. For example, the online publishing platform Medium has released a statement on AI-generated text promoting "transparency" and "disclosure."
My own organization has set up a generative AI FAQ page urging educators to "use artificial intelligence and chatbots wisely and ethically."
These ethical measures may seem modest, given this week's release of the more powerful GPT-4, which risks becoming a misinformation and propaganda machine. OpenAI reports that GPT-4 passed a simulated bar exam with a score in the top 10 percent of test-takers, whereas GPT-3.5 scored only in the bottom 10 percent.
ChatGPT is now powered by a supercomputer and cloud platform built and funded by Microsoft. This Microsoft-OpenAI partnership will accelerate the global spread of generative AI products through Microsoft's Azure platform.
Perhaps not coincidentally, GPT-4 was released less than two months after Microsoft laid off its ethics and society team. Disgruntled members of the team attributed the decision to pressure from Microsoft's C-suite, which emphasized the need to move AI products "into the hands of customers at a very high speed."
Silicon Valley's once-disavowed motto, "move fast and break things," may be back in fashion.
When I asked ChatGPT what responsible innovation was, it answered: "The process of developing and implementing new technologies, processes, or products in a way that addresses ethical, social, and environmental concerns. It involves taking into account the potential impacts and risks of innovation on various stakeholders, including customers, employees, communities and the environment."
ChatGPT's definition is accurate but abstract. Whose ideas are these, and how are they put into practice? In other words, who is responsible for responsible innovation?
Over the past decade, a number of companies, consulting firms, and organizations have developed responsible innovation initiatives to anticipate and mitigate the negative consequences of technological developments.
Google set up a responsible innovation team in 2018 to draw on "experts in ethics, human rights, user studies, and racial justice." The team's most visible product is Google's responsible AI guidelines. But the company's ethical record beyond this is questionable.
In fact, Google's biggest contributions to responsible innovation have come from the grassroots efforts of its own employees. This suggests that responsible innovation may need to grow from the bottom up, a tall order in an age of mass layoffs across the tech industry.
The Computer Society's Code of Ethics and Professional Conduct states that technology professionals have a responsibility to uphold the common good as they innovate. But without support from the top, or guidance from ethics experts and government regulators, what drives tech professionals to be "good"? Can tech companies be trusted to audit themselves?
Another problem with self-auditing is ethics washing, in which companies merely talk about ethics without acting on them. Meta's responsible innovation efforts are a prime example.
In June 2021, Meta's top product design executive praised the responsible innovation team she had helped launch in 2018, calling it evidence of "Meta's commitment to making the most ethically responsible decisions possible every day." By September 2022, the team had been disbanded.
Today, "responsible innovation" survives as a marketing slogan in the Meta Store. Meta's responsible AI team was likewise dissolved in 2021 and folded into its social impact team, which helps nonprofits make use of Meta products.
The shift from responsible innovation to social impact is an ethics-washing tactic: it deflects attention from unethical behavior by changing the subject to philanthropy. That is why it is essential to distinguish responsibility in technology design from "tech for good," today's popular philanthropic PR phrase.
Responsible innovation versus profit
Not surprisingly, the most sophisticated calls for responsible innovation come from outside corporate culture.
Principles sketched in a white paper from the Information and Communications Technology Council (ICTC), a Canadian nonprofit, invoke values such as self-awareness, fairness, and justice, concepts more familiar to philosophers and ethicists than to CEOs and founders.
The ICTC principles call on technology developers to go beyond minimizing negative consequences and work to reverse social power imbalances.
One might ask how these principles apply to recent developments in generative AI. When OpenAI claims to be "developing technology that empowers people," who is included in the term "people"? And under what circumstances will this "power" be exercised?
These questions echo the work of scholars such as Ruha Benjamin and Armond Towns, who interrogate who counts as "people" in such contexts and question the "human" in human-centered technology.
Such considerations would slow the AI race, but that might not be a bad outcome.
There is a persistent tension between financial value and ethical values in the technology industry. Responsible innovation initiatives were established to ease that tension, but lately such efforts are being pushed aside.
This tension is plainly visible in American conservatives' reaction to the recent failure of Silicon Valley Bank. Some prominent Republican figures, including Donald Trump, have wrongly blamed the banking turmoil on the bank's "woke" outlook and its commitment to responsible investment and equity initiatives.
In the words of Home Depot co-founder Bernie Marcus, "these banks are doing poorly because everyone is focused on diversity and all the woke issues," rather than on what Trump calls "normal business practices."
The future of responsible innovation may depend on whether so-called "normal business practices" can be reconciled with so-called "woke" issues such as ethical, social, and environmental concerns. If ethics can be dismissed by branding them "woke," the future of responsible innovation looks about as promising as the future of the CD-ROM.
Citation: AI arms race highlights urgent need for responsible innovation (2023, March 20), retrieved 20 March 2023 from https://techxplore.com/news/2023-03-ai-arms-highlights-urgent-responsible.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.