Singapore has released draft guidelines on how personal data should be managed when used to train artificial intelligence (AI) models and systems.
The document outlines how the country’s Personal Data Protection Act (PDPA) will apply when businesses use personal information to develop and train their AI systems, according to the Personal Data Protection Commission (PDPC), which administers the Act. The guidelines also include best practices for establishing transparency about how AI systems use personal data to make decisions, forecasts, and recommendations.
The guidelines, however, are not legally binding and do not supplement or alter any existing laws. Instead, they address issues and situations such as how companies may rely on existing exceptions within the PDPA when developing machine learning models or systems.
The guidelines also address how organizations can meet requirements involving consent, accountability, and notification when collecting personal data for machine learning and AI systems that produce predictions, decisions, and recommendations.
The document also explains when it is appropriate for companies to invoke two exceptions, one for research and one for business improvement, without having to seek consent for the use of personal data to train AI models.
The business improvement exception might apply when companies develop a product, or have an existing product, that they are looking to improve. It might also be relevant when the AI system is used to power decision-making processes that improve operational efficiency or that offer personalized products and services.
For instance, the business improvement exception can be applied to internal human resource recommendation systems that provide a first cut of potential candidates for a role. It might also be applied to the use of AI or machine learning models and systems to provide new features that improve the competitiveness of products and services.
Organizations, though, will have to ensure the business improvement purpose “cannot reasonably” be attained without using personal data in an individually identifiable way.
Under the research exception, organizations are permitted to use personal data to conduct research and development that might not have an immediate application in existing products and services or business operations. This can include joint commercial research work with other companies to develop new AI systems.
Organizations should ensure the research cannot be reasonably accomplished without the use of personal data in an identifiable form. There should also be clear public benefits in using the personal data for research, and the results of the research cannot be used to make decisions that affect the individual. In addition, published results of the research should not identify the individual.
The guidelines also recommend that organizations using personal data for AI systems conduct a data protection impact assessment, which evaluates the effectiveness of risk mitigation and remediation measures applied to the data.
With regard to data protection, organizations should include appropriate technical processes and legal controls when developing, training, and monitoring AI systems that use personal data.
“In the context of developing AI systems, organizations should practise data minimization as good practice,” the guidelines state.
“Using only personal data containing attributes required to train and improve the AI system or machine learning model will also reduce unnecessary data protection and cyber risks to the AI system.”
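The data-minimization practice the guidelines describe can be illustrated with a minimal sketch: before a record enters a training pipeline, keep only the attributes the model actually needs and drop direct identifiers. The field names and record below are hypothetical examples, not taken from the PDPC guidelines.

```python
# Hypothetical sketch of data minimization ahead of model training.
# Only attributes required by the model are retained; direct
# identifiers such as name and email are dropped.

REQUIRED_FEATURES = {"age_band", "purchase_count", "region"}

def minimize(record: dict) -> dict:
    """Return a copy of the record containing only the attributes
    required to train the model."""
    return {k: v for k, v in record.items() if k in REQUIRED_FEATURES}

# Hypothetical raw record collected by the business.
raw = {
    "name": "redacted",    # direct identifier: dropped
    "email": "redacted",   # direct identifier: dropped
    "age_band": "30-39",
    "purchase_count": 12,
    "region": "central",
}

training_row = minimize(raw)
print(training_row)
# {'age_band': '30-39', 'purchase_count': 12, 'region': 'central'}
```

In practice the allow-list of required attributes would come from the impact assessment described above, so that each field retained for training has a documented purpose.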
The PDPC is seeking public feedback on the draft guidelines, which should be submitted by August 31.
Partnership to test privacy safeguard tools
Singapore has also announced a partnership with Google that enables local businesses to test the use of "privacy enhancing technologies", or PETs, as the government calls them.
Touting these as further tools to help organizations build their datasets, Minister of Communications and Information Josephine Teo said: “PETs allow businesses to extract value from consumer datasets, while ensuring personal data is protected. Through facilitating data sharing, they can also help businesses develop useful data insights and AI systems.”
The use of PETs, for example, allows banks to collect data and build AI models for more effective fraud detection, while protecting their customers’ identity and financial data, Teo said.
To drive the adoption of PETs, the Infocomm Media Development Authority (IMDA) last year introduced a PET sandbox to offer businesses access to grants and resources to develop such solutions.
The collaboration with Google will allow Singapore organizations to test their Google Privacy Sandbox applications within the IMDA sandbox. This system provides a secure environment in which companies can use or share data without revealing sensitive information, the PDPC said.
It added that the IMDA and Google sandbox is available to businesses based in Singapore and is designed for adtech, publishers, and developers, among others.
According to Teo, the partnership marks Google’s first such collaboration with a regulator in Asia-Pacific to facilitate the testing and adoption of PETs.
Through the initiative, organizations could access a “safe space” to pilot projects using PETs on a platform on which they already operate, she said.
“With the deprecation of third-party cookies, businesses can no longer rely on these to track consumers’ behavior through the browser and will need PETs as an alternative,” she said. “Consumers will experience being served more relevant content without fearing their personal data is compromised.”