Recent research shows that AI-based deep learning models can determine a patient's race from radiology images such as chest X-rays more accurately than anthropology experts can. Sandeep Sharma, chief data scientist at Capgemini Consulting, points out that applying AI to the collection and analysis of personal data poses a huge risk to personal privacy, a threat exacerbated by the fact that many organizations using AI lack an adequate understanding of privacy.
Generally speaking, current enterprise AI applications involving personal information suffer from three significant problems: first, using data for purposes other than those for which it was collected; second, collecting personal information that falls outside the declared scope of collection; third, storing data for longer than necessary. Each of these may violate data privacy regulations such as the European Union's General Data Protection Regulation (GDPR).
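To make those three failure modes concrete, here is a minimal sketch of an audit check that flags data used off-purpose, collected outside the declared scope, or stored past its retention limit. The field names, purposes, and limits are invented for illustration, not drawn from any real compliance tool.

```python
# Hypothetical register of what was declared at collection time.
DECLARED = {
    "email": {"purpose": "account_notifications", "max_days": 365},
}

def check_processing(field: str, purpose: str, age_days: int) -> list[str]:
    """Flag the three problems: off-purpose use, out-of-scope
    collection, and over-retention."""
    spec = DECLARED.get(field)
    if spec is None:
        return [f"{field}: collected outside the declared scope"]
    issues = []
    if purpose != spec["purpose"]:
        issues.append(f"{field}: used for '{purpose}', declared '{spec['purpose']}'")
    if age_days > spec["max_days"]:
        issues.append(f"{field}: stored {age_days} days, limit {spec['max_days']}")
    return issues

print(check_processing("email", "marketing", 400))
print(check_processing("location", "analytics", 10))
```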
The risks posed by AI-based systems are many-sided. For example, Tom Whittaker, a senior associate in the technology team at the British law firm Burges Salmon, believes the potential for AI bias must be taken into account. AI systems rely on data, and where personal data is involved, bias can be introduced inadvertently through how that data is collected or how the model is trained.
At the same time, AI systems themselves may be compromised, leaking private personal information. Whittaker noted that part of the reason is that AI systems rely on large data sets, which makes them prime targets for cyberattacks. Moreover, the data an AI system outputs may expose personal information either directly or when combined with other information.
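The toy example below illustrates that combination risk: output that carries only quasi-identifiers such as a postcode and birth year can re-identify individuals once joined with another dataset. All records here are fabricated purely for the illustration.

```python
# Published model output that looks anonymous: quasi-identifiers plus a score.
model_output = [
    {"zip": "10115", "birth_year": 1980, "score": 0.91},
    {"zip": "10117", "birth_year": 1993, "score": 0.12},
]

# A separate, public dataset (e.g. an electoral register) with names.
public_register = [
    {"name": "A. Example", "zip": "10115", "birth_year": 1980},
]

# Joining the two on the quasi-identifiers re-identifies a person
# and attaches the supposedly anonymous score to them.
for row in model_output:
    for person in public_register:
        if (row["zip"], row["birth_year"]) == (person["zip"], person["birth_year"]):
            print(person["name"], "re-identified with score", row["score"])
```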
As AI systems spread into more applications, society is exposed to more pervasive risks; credit scoring, criminal risk analysis, and immigration adjudication are some examples. If AI, or the way it is used, is flawed, people may suffer greater privacy violations than they otherwise would.
However, some experts point out that AI can also have a positive impact on privacy. It can be used as a form of privacy-enhancing technology (PET), helping organizations comply with data-protection-by-design obligations.
Whittaker explained, “AI can be used to create synthetic data that replicates the patterns and statistical properties of personal data. AI can also minimize the risk of privacy violations by encrypting personal data, reducing human error, and detecting potential cybersecurity incidents.”
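As a concrete illustration of the synthetic-data point, the sketch below fits a multivariate Gaussian to a hypothetical table of numeric personal data and samples new rows that preserve its means and correlations. The columns and figures are invented for the example; real synthetic-data tools are considerably more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numeric columns standing in for real personal data.
real = np.column_stack([
    rng.normal(45, 12, 1000),       # age
    rng.normal(52000, 9000, 1000),  # income
    rng.poisson(3, 1000),           # clinic visits
])

# Fit a multivariate Gaussian: this captures the means and the
# correlations between columns, but not the raw records themselves.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Sample synthetic rows that replicate those statistical properties.
synthetic = rng.multivariate_normal(mean, cov, size=1000)

print("real means:     ", real.mean(axis=0).round(1))
print("synthetic means:", synthetic.mean(axis=0).round(1))
```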
Some governments have seen the positive side of AI. For example, Ott Velsberg, chief data officer at the Estonian Ministry of Economic Affairs and Communications, said that AI plays a key role across industries, and the Estonian government set the goal of achieving widespread application of AI in 2018.
To ensure compliance with data protection regulations while pursuing that goal, he explained, Estonia has developed a service that lets people share government-held data with external stakeholders. Estonia has also launched a data tracker on the government portal so citizens can see how their personal data is being processed.
AI is currently subject to existing regulations such as the GDPR, but more privacy rules are on the way. At present, the EU has the strongest legal privacy protections relating to AI.
Whittaker pointed out that the EU also plans to introduce further AI regulations, designed to prohibit certain AI systems from misusing data and to impose obligations on high-risk systems regarding how data is stored and used. These rules apply to AI systems deployed on the EU market and will affect companies that sell or deploy AI solutions into the EU.
Business leaders trying to manage AI risk should therefore understand both current and planned AI regulatory policies, because failure to comply can have serious consequences. Reports indicate that violations of the high-risk obligations under the EU's proposed AI legislation could result in fines of up to 20 million euros or up to 4% of annual turnover.
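For a sense of scale, the sketch below computes that ceiling, assuming (as in the GDPR's comparable provisions) that the higher of the two figures applies. This is an illustration of the arithmetic, not legal guidance.

```python
def max_fine(annual_turnover_eur: float) -> float:
    """Upper bound cited above: 20 million euros or 4% of annual
    turnover, taking whichever is higher (an assumption modeled on
    the GDPR's fine structure)."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# A company with 1 billion euros in turnover faces up to 40 million.
print(f"{max_fine(1_000_000_000):,.0f}")  # 40,000,000
```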
First, for enterprise organizations using AI systems, transparency about how data is used is crucial: if users do not know they are affected by AI decisions, they cannot understand or challenge them. Second, organizations must ensure that users have the right to know how their data is used and that it is used lawfully and fairly. Third, organizations should ensure that both their AI algorithms and the data those algorithms rely on are carefully designed, developed, and governed to avoid unnecessary negative consequences.
In a nutshell, good data security measures are indispensable for organizations applying AI: do not collect unnecessary data, delete information after a defined retention period, restrict access to the data appropriately, and maintain sound security practices throughout.
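A minimal sketch of what those retention and access rules can look like in code is shown below; the 90-day window, record layout, and role names are assumptions chosen purely for illustration.

```python
from datetime import datetime, timedelta, timezone

# Illustrative policy: keep records 90 days, allow only listed roles.
RETENTION = timedelta(days=90)
ALLOWED_ROLES = {"privacy_officer", "ml_engineer"}

now = datetime.now(timezone.utc)
records = [
    {"id": 1, "collected_at": now - timedelta(days=200)},  # past retention
    {"id": 2, "collected_at": now - timedelta(days=10)},   # still in window
]

def purge_expired(records, now):
    """Drop records older than the retention window."""
    return [r for r in records if now - r["collected_at"] <= RETENTION]

def can_access(role):
    """Restrict data access to explicitly allowed roles."""
    return role in ALLOWED_ROLES

records = purge_expired(records, now)
print([r["id"] for r in records])  # [2]
print(can_access("intern"))        # False
```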
To sum up, AI is undoubtedly a game-changing technology that will be widely used in business applications, but it must be managed responsibly to avoid privacy violations. To do this, business leaders need to think critically about how AI is used and how it could be abused, harnessing its positive impact while avoiding its negative effects.