

Singapore identifies six generative AI risks, sets up foundation to guide adoption


Image: generative AI apps. OLIVIER MORIN/AFP via Getty Images

Singapore has identified six top risks associated with generative artificial intelligence (AI) and proposed a framework on how these issues can be addressed. It has also established a foundation that aims to tap the open-source community to develop test toolkits that mitigate the risks of adopting AI.

Hallucinations, accelerated disinformation, copyright challenges, and embedded biases are among the key risks of generative AI outlined in a report released by Singapore’s Infocomm Media Development Authority (IMDA). The discussion paper details the country’s framework for “trusted and responsible” adoption of the emerging technology, including disclosure standards and global interoperability. The report was jointly developed with Aicadium, an AI tech company founded by state-owned investment firm Temasek Holdings.

Also: Today’s AI boom will amplify social problems if we don’t act now, says AI ethicist

The framework offers a look at how policy makers can boost existing AI governance to address the "unique characteristics" and immediate concerns of generative AI. It also discusses the investment needed to ensure governance outcomes in the longer term, IMDA said. 

In identifying hallucinations as a key risk, the report noted that, like all AI models, generative AI models make mistakes, and these errors are often vivid and take on an anthropomorphized, convincing quality. 

“Current and past versions of ChatGPT are known to make factual errors. Such models also have a more challenging time doing tasks like logic, mathematics, and common sense,” the discussion paper noted.

“This is because ChatGPT is a model of how people use language. While language often mirrors the world, these systems however do not yet have a deep understanding about how the world works.”

These false responses can also be deceptively convincing or authentic, the report added, pointing to how language models have created seemingly legitimate but erroneous responses to medical questions, as well as generated software code that contains security vulnerabilities. 
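To illustrate the kind of vulnerability the report alludes to, the sketch below shows a pattern language models are known to reproduce: building an SQL query by interpolating user input into the query string, which is open to injection, alongside a parameterized alternative. The function names and table schema here are hypothetical and chosen purely for illustration; they are not taken from the report.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Pattern often seen in generated code: interpolating user input
    # directly into the SQL string, which allows SQL injection
    # (e.g. username = "x' OR '1'='1").
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer equivalent: a parameterized query, where the database driver
    # handles escaping instead of the caller.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

The point of the contrast is that both versions look equally plausible on the surface, which is why generated code can ship vulnerabilities unnoticed.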

In addition, dissemination of false content is increasingly difficult to identify due to convincing but misleading text, images, and videos, which can potentially be generated at scale using generative AI. 

Also: How to use ChatGPT to write code

Impersonation and reputation attacks have become easier, including social-engineering attacks that use deepfakes to gain access to privileged individuals. 

Generative AI also makes other types of harm possible, as threat actors with little to no technical skill can potentially generate malicious code. 

Also: Don’t get scammed by fake ChatGPT apps: Here’s what to look out for

These emerging risks might require new approaches to the governance of generative AI, according to the discussion paper. 

Singapore’s Minister for Communications and Information Josephine Teo noted that global leaders are still exploring alternative AI architectures and approaches, with many cautioning about the dangers of AI.

AI delivers “human-like intelligence” at a potentially high level and at significantly reduced cost, which is especially valuable for countries such as Singapore where human capital is a key differentiator, said Teo, who was speaking at this week’s Asia Tech x Singapore summit.

The improper use of AI, though, can do great harm, she noted. “Guardrails are, therefore, necessary to guide people to use it responsibly and for AI products to be ‘safe for all of us’ by design,” she said. 

“We hope [the discussion paper] will spark many conversations and build awareness on the guardrails needed,” she added.

Also: 6 harmful ways ChatGPT can be used

During a closed-door dialog at the summit, she revealed that senior government officials also debated the recent advancements in AI, including generative AI models, and considered how these could fuel economic growth and impact societies. 

Officials reached a consensus that AI had to be “appropriately” governed and used for the good of humanity, said Teo, who provided a summary as chair of the discussion. Participants at the meeting included ministers representing nations such as Germany, Japan, Thailand, and the Netherlands. 

The delegates also concurred that increased collaboration and information exchange on AI governance policies would help identify common ground and lead to better alignment between approaches. This unity would lead to sustainable and fit-for-purpose AI governance frameworks and technical standards, Teo said. 

The officials urged greater interoperability between governance frameworks, which they believe is necessary to facilitate responsible development and adoption of AI technologies globally.  

There was also recognition that AI ethics should be infused at early stages of education, while investments in reskilling should be prioritized. 

Galvanizing the community 

Singapore has launched a not-for-profit foundation to “harness the collective power” and contributions of the global open-source community to develop AI-testing tools. The goal here is to facilitate the adoption of responsible AI, and promote best practices and standards for AI. 

Called AI Verify Foundation, it will set the strategic direction and development roadmap of AI Verify, which was introduced last year as a governance-testing framework and toolkit. The test toolkit has been made open source. 
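AI Verify's own interfaces are not described in the article, but as a rough illustration of the kind of check an automated governance-testing toolkit can run, the hedged sketch below computes a simple demographic-parity gap on model predictions. The function name, data, and threshold are assumptions for illustration only and do not represent AI Verify's actual API.

```python
import numpy as np

def demographic_parity_gap(preds: np.ndarray, groups: np.ndarray) -> float:
    # Difference in positive-prediction rates between two groups:
    # one crude fairness signal a testing toolkit might report.
    rate_a = preds[groups == 0].mean()
    rate_b = preds[groups == 1].mean()
    return abs(rate_a - rate_b)

# Toy check: binary predictions for two demographic groups (hypothetical data).
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")  # flag if above a chosen threshold, e.g. 0.2
```

In practice such toolkits bundle many tests of this sort, covering fairness, robustness, and explainability, and report the results against a governance framework rather than a single number.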

Also: This new AI system can read minds accurately about half the time

AI Verify Foundation’s current crop of 60 members includes IBM, Salesforce, DBS, Singapore Airlines, Zoom, Hitachi, and Standard Chartered. 

The foundation operates as a wholly owned subsidiary under IMDA. With AI-testing technologies still nascent, the Singapore government agency said tapping the open-source and research communities would help further develop the market segment. 

Teo said: “We believe AI is the next big shift since the internet and mobile. Amid very real fears and concerns about its development, we will need to actively steer AI toward beneficial uses and away from bad ones. This is core to how Singapore thinks about AI.”

In his speech at the summit, Singapore’s Deputy Prime Minister and Minister for Finance Lawrence Wong further reiterated the importance of establishing trust in AI, so the technology can have widespread acceptance. 

“We are already using machine learning to optimize decision making and generative AI will go beyond that to create potentially new content and generate new ideas,” Wong said. “Yet, there remain serious concerns. Used improperly, [AI] can perpetuate dangerous biases in decision making. And with the latest wave of AI, the risks are even higher as AI becomes more intimate and human-like.”

Also: AI is more likely to cause world doom than climate change, according to an AI expert

These challenges pose difficult questions for regulators, businesses, and society at large, he said. “What kind of work should AI be allowed to assist with? How much control over decision making should an AI have and what ethical safeguards should we put in place to help guide its development?”

Wong added: “No single person, organization, or even country, will have all the answers. We will all have to come together to engage in critical discussions to determine the appropriate guardrails that will be necessary to build more trustworthy AI systems.”

At a panel discussion, Alexandra van Huffelen from the Netherlands’ Ministry of Interior and Kingdom Relations recognized the potential benefits of AI, but expressed worries about its potential impact, especially amid mixed signals from the industry. 

The Minister for Digitalisation noted that market players, such as OpenAI, tout the benefits of their products, but at the same time issue caution that AI has the potential to destroy humanity. 

“That’s a crazy story to tell,” van Huffelen quipped, before asking a fellow panelist from Microsoft how he felt, given that his company is an investor in OpenAI, the company behind ChatGPT. 

Also: I used ChatGPT to write the same routine in these 10 languages

OpenAI’s co-founders and CEO last month jointly published a note on the company’s website, urging the regulation of “superintelligence” AI systems. They talked about the need for a global authority, such as the International Atomic Energy Agency, to oversee the development of AI. This agency should “inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security,” among other responsibilities, they proposed. 

“It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say,” they noted. “In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past…Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property.”

In response, Microsoft’s Asia President Ahmed Mazhari acknowledged that van Huffelen’s pushback was not unwarranted, noting that some of the same proponents who signed a petition in March to pause AI development had proceeded the following month to invest in their own AI chatbot. 

Also: The best AI chatbots: ChatGPT and alternatives to try

Pointing to the social harm that resulted from a lack of oversight of social media platforms, Mazhari said the tech industry has the responsibility to prevent a repeat of that failure with AI. 

He noted that the ongoing discussion and heightened awareness about the need for AI regulations, in particular in the areas of generative AI, was a positive sign for a technology that hit the market just six months ago. 

In addition, van Huffelen underscored the need for tech companies to act responsibly, alongside the need for rules to be established, and enforcement to ensure organizations adhered to these regulations. She said it remained “untested” whether this dual approach could be achieved in tandem. 

She also stressed the importance of establishing trust, so people want to use the technology, and ensuring users have control over what they do online, as they would in the physical world. 

Also: How does ChatGPT work?

Fellow panelist Keith Strier, Nvidia’s vice president of worldwide AI initiatives, noted the complexity of governance due to the wide accessibility of AI tools. This general availability means there are more opportunities to build unsafe products. 

Strier suggested that regulations should be part of the solution, but not the only answer, as industry standards, social norms, and education are just as crucial in ensuring the safe adoption of AI. 




