Amid increasing global alarm that artificial intelligence (AI) is poised to cause irreparable harm to human society in the near future, the Australian government, led by Labor Party leader Anthony Albanese, has launched its own review of the rapidly evolving technology.
Industry and Science Minister Ed Husic has released two papers that kick off an eight-week consultative process seeking input from a variety of stakeholders on a new framework.
Also: Today’s AI boom will amplify social problems if we don’t act now, says AI ethicist
One is a ‘rapid response’ report from the National Science and Technology Council (NSTC) that explores the opportunities and risks posed by generative AI.
This analysis has become urgent because of the speed at which tech companies in Australia and around the world are pivoting to AI, and the pace at which AI is seeping into almost every industry.
It’s a transformation that has triggered growing concern about AI’s intrusiveness and tendency toward bias, as well as about truthfulness and ‘hallucinations’.
For instance, Australia’s leading medical association has called for regulations that will keep a watch over AI in healthcare — a sector that is vulnerable to racial or age-related bias, with potentially catastrophic consequences.
The paper looks at a variety of similarly disruptive scenarios AI could bring about, including rampant misinformation and the resulting polarization of the population, along with widespread job displacement and growing inequity.
The second is a consultation paper that examines what other nations are doing to address AI regulation.
Toward a framework
The Australian government is looking closely at a similar review announced by the U.K.’s antitrust agency, the Competition and Markets Authority, as well as the AI-specific Act the European Union is deliberating.
The government will also no doubt look at recent actions of the U.S. government, which launched similar public assessments of major generative AI systems, drawing on communities of hackers, data scientists, independent community partners, and AI experts.
Also: How to use ChatGPT: Everything you need to know
“Given the developments over the last, in particular, six months, we want to make sure that our legal and regulatory framework is fit-for-purpose, and that’s why we’re asking people, either experts or the community, to be involved in this process, the discussion process, with the papers that we’ve put out, to let us know what their expectations are and what they want to see,” said Husic.
“We need the framework right, that people are confident that it’s working in favor or for the benefit of communities — it’s really important,” he added.
The Australian government’s consultation paper emphasizes that AI must still operate under the aegis of the country’s existing rules, as it does in the U.K. and Europe, which range from sector-specific regulations (such as those for healthcare and energy) to rules that apply across all industries (privacy, security, and consumer safeguards).
The deliberation will consider whether to strengthen those existing rules sector by sector, introduce dedicated AI legislation, or do both.
Also: AI is more likely to cause world doom than climate change
Husic said that if “high-risk” areas emerge from this process (he singled out a hypothetical abuse of facial recognition as an example), his government will be proactive in addressing those concerns within the emerging framework.
“We want people to be confident that the technology is working for us and not the other way around,” Husic added. “Governments have got a clear role to play in recognizing the risk and responding to it, putting the curbs in place.”