Although AI has been around since the 1950s, it has seen tremendous growth within the past year. Tech giants have been implementing AI into their products and services, while individuals are using it to make their lives a little easier.
According to Deloitte, 74% of companies surveyed in its second edition Technology Trust Ethics Report have already begun testing generative AI, while 65% have begun to use it internally. The increasing awareness of AI’s new capabilities has led to the pressing question of how organizations can use this technology ethically.
Deloitte interviewed 26 specialists in various industries to gather information about how industry leaders are considering concerns about the ethical use of emerging technologies, including generative AI.
The company then tested hypotheses and delivered a 64-question survey to more than 1,700 businesses and technical professionals to gain further insights.
The report, authored by Beena Ammanath, managing director of Deloitte Consulting LLP and leader of Deloitte’s Technology Trust Ethics practice, defines emerging technologies as the following: cognitive technologies (including general and generative AI and chatbots), digital reality, ambient experiences, autonomous vehicles, quantum computing, distributed ledger technology, and robotics.
According to the survey, 39% of respondents (business leaders and developers of emerging technologies) thought cognitive technologies had the most potential for social good, compared with 12% for digital reality and 12% for ambient experiences.
However, 57% of survey respondents also thought that cognitive technologies had the greatest potential for serious ethical risk.
The most concerning statistic is that over half of the respondents (56%) said their “company does not have or are unsure if they have ethical principles guiding the use of generative AI.”
Compared to Deloitte’s report in 2022 about ethics and trust in emerging technologies, this year’s report reveals that “organizations find themselves wrestling with new ethical issues posed by wide-scale adoption of this once-again new technology.”
These issues are tied to concerns about how businesses and organizations are using these technologies.
Despite the many benefits of AI, 22% of respondents cited data privacy as a concern, while 14% cited a lack of transparency about how AI is trained on data to produce its outputs.
Data poisoning, along with intellectual property and copyright, was each cited as a concern by 12% of survey respondents. Data poisoning is the “pollution” of training data sets by bad actors, which can lead AI to produce inaccurate results.
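To make the data poisoning idea concrete, here is a minimal toy sketch (not from the Deloitte report): a simple nearest-centroid classifier is trained once on clean data and once on the same data after an attacker injects mislabeled points, dragging one class’s centroid far off target and degrading accuracy.

```python
import random

random.seed(0)

# Toy 1-D dataset: class 0 clusters near 0.0, class 1 clusters near 10.0.
clean = [(random.gauss(0, 1), 0) for _ in range(100)] + \
        [(random.gauss(10, 1), 1) for _ in range(100)]

def centroid_classifier(train):
    """Fit per-class means and predict the class with the nearest centroid."""
    c0 = [x for x, y in train if y == 0]
    c1 = [x for x, y in train if y == 1]
    m0, m1 = sum(c0) / len(c0), sum(c1) / len(c1)
    return lambda x: 0 if abs(x - m0) <= abs(x - m1) else 1

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

# A bad actor "pollutes" the training set by injecting outliers labeled
# as class 1, pulling that class's centroid far away from its true mean.
poison = [(-30.0, 1) for _ in range(300)]

clean_model = centroid_classifier(clean)
poisoned_model = centroid_classifier(clean + poison)

print(f"accuracy on clean training data:    {accuracy(clean_model, clean):.2f}")
print(f"accuracy after poisoned training:   {accuracy(poisoned_model, clean):.2f}")
```

The attack here is deliberately crude, but it captures the mechanism: the model’s learned parameters are only as trustworthy as the data they were fit on.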
Deloitte’s report also detailed the types of damage that survey respondents believe could arise when ethical violations are not taken seriously.
Reputational damage was the greatest concern, cited by 38% of respondents, followed by human damage such as misdiagnoses or data privacy violations (27%), regulatory penalties like copyright infringement (17%), financial damage (9%), and employee dissatisfaction (9%).
These damages are evident in the several lawsuits that have already been filed due to privacy violations, copyright infringement, and other issues related to the unethical use of AI.
So how can companies ensure they are using AI safely? Deloitte outlines a multi-step approach:
- Exploration: Companies can begin by letting product owners, business leaders, and AI/ML practitioners explore generative AI through workshops to see how it could create value. This way, companies can weigh the costs and benefits of incorporating AI into their businesses.
- Foundational: Companies could buy or build AI platforms to implement generative AI into their businesses. Among survey respondents, 30% said their companies chose to use existing capabilities from major AI platforms, 8% created their own in-house AI platforms, and 5% decided not to use generative AI.
- Governance: Creating standards and protocols for AI use could minimize the potentially harmful impacts of AI, so companies should determine what types of ethical principles they plan to uphold.
- Training and Education: Companies could mandate training that outlines the ethical principles of using AI. In addition, technical training that educates employees on using a variety of LLMs could provide further guidance about the ethical use of AI.
- Pilots: Engineers and product leaders could run experiments on a variety of use cases to test proofs of concept and pilot programs, then eliminate aspects that are too risky.
- Implementation: Companies should draft a plan for introducing a newly enhanced product into the market and assign accountability for product implementation and ownership. The company should also have a team of experts prepared to address any issues that may arise. Transparency is crucial at this step: companies should explain how user data is fed into the model, how the model reaches its output, and how likely the model is to hallucinate.
- Audit: According to one interviewee, companies will need to modify their policies depending on the risks of AI use. This could vary company by company, as not all organizations will incorporate AI for the same use case.
“The sooner companies work together to identify the risks and establish governance up front, the better their ability may be to help generate stakeholder value, elevate their brands, create new markets, and contribute to building a more equitable world,” said Ammanath.