Imagine pulling up an AI-powered weather app and seeing clear skies in the forecast for a company picnic that afternoon, only to end up standing in the pouring rain holding a soggy hot dog. Or imagine your company rolling out an AI tool for customer support, only for it to integrate poorly with your CRM and lose valuable customer data.
According to new research, third-party AI tools are responsible for over 55% of AI-related failures in organizations. These failures can result in reputational damage, financial losses, loss of consumer trust, and even litigation. The survey, conducted by MIT Sloan Management Review and Boston Consulting Group, examined how organizations are addressing responsible AI and highlighted the real-world consequences of failing to do so.
“Enterprises have not fully adapted their third-party risk management programs to the AI context or challenges of safely deploying complex systems like generative AI products,” Philip Dawson, head of AI policy at Armilla AI, told MIT researchers. “Many do not subject AI vendors or their products to the kinds of assessment undertaken for cybersecurity, leaving them blind to the risks of deploying third-party AI solutions.”
The release of ChatGPT almost a year ago triggered a generative AI boom in technology. It wasn't long before other companies followed OpenAI and released their own AI chatbots, including Microsoft Bing and Google Bard. The popularity and capabilities of these bots also gave rise to ethical challenges and questions.
As ChatGPT's popularity soared, both as a standalone application and as an API, third-party companies began building on its capabilities, developing similar AI chatbots and generative AI solutions for customer support, content creation, IT help, and grammar checking.
Out of 1,240 respondents to the survey across 87 countries, 78% reported their companies use third-party AI tools by accessing, buying, or licensing them. Of these organizations, 53% use third-party tools exclusively, without any in-house AI tech. While over three-quarters of the surveyed companies use third-party AI tools, 55% of AI-related failures stem from using these tools.
Although 78% of the organizations surveyed rely on third-party AI tools, 20% of them failed to evaluate the substantial risks those tools pose. The study concluded that responsible AI (RAI) is harder to achieve when teams engage vendors without oversight, and that a more thorough evaluation of third-party tools is necessary.
“With clients in regulated industries such as financial services, we see strong links between model risk management practices predicated on some sort of external regulation and what we suggest people do from an RAI standpoint,” said Triveni Gandhi, responsible AI lead at AI company Dataiku.
Third-party AI can be an integral part of organizational AI strategies, so the problem can’t be wiped away by removing the technology. Instead, the researchers recommend thorough risk assessment strategies, such as vendor audits, internal reviews, and compliance with industry standards.
Given how quickly the RAI regulatory environment is evolving, the researchers believe organizations should prioritize responsible AI at every level, from regulatory departments up to the CEO. Organizations with a CEO who is hands-on in RAI reported 58% more business benefits than those whose CEO is not directly involved in RAI.
The research also found that organizations with a CEO who is involved in RAI are almost twice as likely to invest in RAI as those with a hands-off CEO.