As organizations move ahead with adopting generative artificial intelligence (AI), it is critical they do so while mitigating potential risks and maintaining a healthy level of skepticism.
Currently, 45% of businesses are piloting generative AI, while 10% already have such tools in production, according to a Gartner study released Tuesday. The survey polled 1,419 executives who attended a webinar last month on the business costs and risks of generative AI.
These figures are significantly higher than those from a previous poll Gartner ran in March and April this year, when 15% reported piloting the technology and just 4% had such tools in production.
In the latest survey, some 78% said the benefits of generative AI outweighed its risks, higher than the 68% who thought likewise in the earlier poll.
Gartner noted that 45% of businesses were scaling up their generative AI investments across multiple functions, with 22% doing so across at least three different functions. Investment or adoption of generative AI in software development clocked the highest rate, at 21%, followed by marketing and customer service at 19% and 16%, respectively.
“Organizations are not just talking about generative AI — they’re investing time, money, and resources to move it forward and drive business outcomes,” said Frances Karamouzis, Gartner’s group chief of research and distinguished analyst, noting that 55% of organizations had increased their investment in generative AI since its emergence in the public domain 10 months ago.
“Executives are taking a bolder stance on generative AI as they see the profound ways that it can drive innovation, optimization, and disruption,” said Karamouzis. “Business and IT leaders understand that the ‘wait and see’ approach is riskier than investing.”
When doubt is necessary
Businesses that decide to move ahead, though, should have a framework in place to ensure they adopt generative AI responsibly and ethically.
Some level of skepticism also should apply, including toward tools used to detect when AI has been used, said Kathy Baxter, Salesforce.com's principal architect of responsible AI.
Baxter noted that the technology has been democratized, enabling anyone to use generative AI without many guardrails. And while many organizations are doing a decent job of weeding out toxic content and continue to invest in such efforts, there is still little understanding of "how big a grain of salt" to take with AI-generated content.
Users may regard all such content as fact even when it is fabricated, Baxter said in an interview with ZDNET. Even AI-detection tools can be wrong in some instances, she noted, yet they may be assumed to be always accurate. Such perceptions can have an adverse impact when generative AI and its associated tools are used in areas such as education, where students may be wrongly accused of using AI in their work.
Expressing her concerns over such risks, she urged any individual or organization using generative AI to do so with “enough skepticism”.
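To make that point concrete, consider how often a seemingly accurate detector can be wrong at scale. The sketch below is a hypothetical illustration, not any vendor's actual tool; the base rate, sensitivity, and specificity figures are assumptions chosen for the example. It applies Bayes' rule to show how a "positive" flag from an AI detector can still be a coin flip.

```python
# Hypothetical illustration: how often is an AI-detector "positive" wrong?
# All numbers below are assumptions for the sake of the example, not
# measurements of any real detection tool.

def prob_ai_given_flag(base_rate: float, sensitivity: float, specificity: float) -> float:
    """Bayes' rule: P(AI-written | flagged) for a detector with the given characteristics."""
    true_positives = base_rate * sensitivity
    false_positives = (1 - base_rate) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Suppose 10% of submitted essays are AI-written (assumed base rate),
# and the detector catches 95% of them (sensitivity) while correctly
# clearing 90% of human-written work (specificity).
p = prob_ai_given_flag(base_rate=0.10, sensitivity=0.95, specificity=0.90)
print(f"Chance a flagged essay is actually AI-written: {p:.0%}")  # ~51%
```

Under those assumed numbers, nearly half of flagged essays would be false accusations, which is why treating detector output as ground truth is so risky.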
Like others in the industry, she echoed the need for sufficient guardrails to be in place to ensure AI is accurate and safe. It would help, too, if deployments were rolled out alongside risk-mitigation tools, she added. These can include fault detection and reporting features, as well as mechanisms to collect and provide human feedback.
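As a rough sketch of what such mechanisms might look like in practice, the snippet below wraps a model call with a fault check and a human-feedback hook. Everything here is hypothetical: `generate`, `looks_unsafe`, and the logging approach stand in for whatever model API, moderation checks, and feedback store an organization actually uses.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-guardrails")

@dataclass
class Review:
    output: str
    flagged: bool                     # did an automated fault check trip?
    human_rating: int | None = None   # filled in later by a human reviewer

def looks_unsafe(text: str) -> bool:
    # Placeholder fault check; a real deployment would call a
    # moderation or toxicity classifier here.
    banned = ("ssn:", "credit card")
    return any(term in text.lower() for term in banned)

def generate(prompt: str) -> str:
    # Stand-in for a real model API call.
    return f"[model answer to: {prompt}]"

def guarded_generate(prompt: str) -> Review:
    output = generate(prompt)
    flagged = looks_unsafe(output)
    if flagged:
        # Fault reporting: surface the problem instead of silently serving it.
        log.warning("Output flagged for review: %r", output[:80])
    return Review(output=output, flagged=flagged)

def record_feedback(review: Review, rating: int) -> None:
    # Mechanism to collect human feedback (e.g., thumbs up/down as 1 or 0),
    # persisted for later evaluation or model improvement.
    review.human_rating = rating
    log.info("Human feedback recorded: %d", rating)
```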
Also: 5 handy AI tools for school that students, teachers, and parents can use, too
Grounding AI also is critical, she said, stressing the importance of the data used to train AI models. Not many organizations, though, have good data hygiene, she noted.
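Grounding typically means constraining the model to answer from vetted source material rather than from whatever it absorbed in training. One common pattern is retrieval-augmented generation; the sketch below assumes hypothetical `retrieve` and `generate` helpers and simply shows the prompt shape that keeps answers tied to an organization's own curated data.

```python
# Minimal retrieval-augmented generation (RAG) sketch. The retrieval and
# model calls are hypothetical stand-ins; the point is a prompt that keeps
# answers grounded in vetted documents.

def retrieve(query: str, k: int = 3) -> list[str]:
    # Stand-in for a vector-store or keyword search over curated documents.
    return [f"[vetted passage {i} relevant to {query!r}]" for i in range(k)]

def generate(prompt: str) -> str:
    # Stand-in for a real model API call.
    return f"[grounded answer based on: {prompt[:60]}...]"

def grounded_answer(question: str) -> str:
    passages = retrieve(question)
    context = "\n".join(f"- {p}" for p in passages)
    prompt = (
        "Answer using ONLY the sources below. If they do not contain "
        "the answer, say you do not know.\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)

print(grounded_answer("What is our data-retention policy?"))
```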
In fact, just 4% of business and technology leaders described their data as fully accessible, according to the C-Suite Global AI Indicator Report released last month. Commissioned by Workday, the study polled 2,355 executives across Asia-Pacific, Japan, North America, and EMEA, who led various functions and included CEOs, CIOs, and CFOs.
More than half of respondents, at 59%, described their data as somewhat or completely siloed, the survey found.
While 98% believed there would be some immediate business benefit from deploying AI and machine learning, 49% said their organization was unprepared to do so due to a lack of tools, skills, and knowledge.
Some 43% expressed concern about the trustworthiness of AI and machine learning, with 67% of CEOs pointing to potential errors as a top risk of bringing on AI and machine learning.
Increased transparency is needed to build trust, but siloed data is obscuring leaders' ability to lean in.
Workday CTO Jim Stratton said: “Despite some uncertainty, leaders are optimistic that AI and machine learning will augment their workforce and drive productivity. Trust is paramount to embracing these benefits, and building trust requires the right data foundation and commitment to governance.”
Stratton urged organizations to prioritize data quality and transparency when implementing AI, in order to benefit from the technology.