
How responsible use of AI can create safer online spaces


  • Artificial intelligence algorithms have a huge impact on human life and wider society.
  • Ethical dilemmas surrounding artificial intelligence include digital disparity and its weaponization.
  • Autonomy should be balanced with human oversight, and responsible use of AI should be promoted so that the technology can be leveraged to address discrimination.

Driven by advances in computing, data science, and the availability of massive data sets, artificial intelligence (AI) has become a powerful everyday reality and business tool. Big tech companies such as Google, Amazon, and Meta are now developing AI-based systems. The technology can mimic human speech, detect disease, predict criminal activity, draft legal contracts, solve accessibility problems, and complete some tasks better than humans can. For businesses, AI holds the promise of predicting business outcomes, improving processes, and increasing efficiency, with significant cost savings.

But concerns about artificial intelligence are still growing.

Artificial intelligence algorithms have become so powerful that some experts even label AI as sentient; any corruption, tampering, bias, or discrimination in these systems can have a huge impact on organizations, human lives, and society.

Digital Discrimination

Artificial intelligence decision-making affects and changes people's lives on an ever-increasing scale. Its irresponsible use can exacerbate existing human biases and discriminatory practices, such as racial profiling, behavioral prediction, or sexual-orientation identification. This inherent bias arises because AI is only as good as the training data we provide it, and that data is susceptible to human bias.

Bias also occurs when machine learning algorithms are trained and tested on data that underrepresents certain groups, such as women, people of color, or people in particular age brackets. For example, research shows that people of color are particularly vulnerable to algorithmic bias in facial recognition technology.
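One way to surface this kind of bias is to break a model's error rates down by demographic group rather than reporting a single aggregate score. Below is a minimal sketch of such a per-group audit in Python; the group names, labels, and predictions are hypothetical stand-ins for a real model's output on a representative labelled test set.

```python
# A minimal sketch of a per-group error audit. The data and group labels
# below are hypothetical; a real audit would use the model's actual
# predictions on a representative labelled test set.

from collections import defaultdict

# (group, true_label, predicted_label) triples from a hypothetical test set
predictions = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, pred in predictions:
    total[group] += 1
    correct[group] += int(truth == pred)

for group in sorted(total):
    accuracy = correct[group] / total[group]
    print(f"{group}: n={total[group]}, accuracy={accuracy:.2f}")

# A large accuracy gap between groups (here 1.00 vs 0.50) is the signal
# that the model underperforms on an underrepresented group.
```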

Bias may also arise during use. For example, an algorithm designed for one application may be reused for an unintended purpose, leading to misinterpretation of its output.

Validating AI Performance

AI-led discrimination can be abstract, non-intuitive, subtle, invisible, and difficult to detect. The source code may be restricted, or auditors may not know how the algorithm is deployed. The complexity of opening up an algorithm to see how it is written and how it responds should not be underestimated.

Current privacy laws rely on notice and choice, with the result that consumers are asked to consent to lengthy privacy policies they rarely read. If such notices were applied to artificial intelligence, there would be serious consequences for the safety and privacy of consumers and society.

AI as a Weapon

While true AI malware may not exist yet, it is not far-fetched to assume that AI malware will enhance the capabilities of attackers. The possibilities are endless – malware that learns from the environment to identify and exploit new vulnerabilities, tools that test AI-based security, or malware that can poison AI with misinformation.

Digital content manipulated by artificial intelligence is already being used to create hyper-realistic synthetic copies of individuals in real time (also known as deepfakes). As a result, attackers will use deepfakes to create highly targeted social engineering attacks, cause financial losses, manipulate public opinion, or gain a competitive advantage.

“AI-led discrimination may be abstract, non-intuitive, subtle, invisible and difficult to detect. Source code may be restricted, or auditors may not know how the algorithm is deployed.” — Steve Durbin, CEO, Information Security Forum

Businesses have moral, social, and fiduciary responsibilities to manage the adoption of AI in an ethical manner. They can do this in a number of ways.

1. Translate ethics into metrics

Ethical AI adheres to clearly defined ethical principles and fundamental values such as individual rights, privacy, non-discrimination and, importantly, non-manipulation. Organizations must establish clear principles for identifying, measuring, assessing, and mitigating AI-led risks. They must then translate these principles into practical, measurable metrics and embed them into daily processes.
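As an illustration of what translating ethics into metrics can look like in practice, the sketch below computes one widely used fairness measure, the demographic parity gap (the difference in favourable-outcome rates between groups), and flags a breach for review. The decisions, group names, and threshold are hypothetical.

```python
# A minimal sketch of turning a principle (non-discrimination) into a
# measurable metric: the demographic parity gap. All data is hypothetical.

def positive_rate(outcomes):
    """Share of favourable decisions (1 = approved) in a list of outcomes."""
    return sum(outcomes) / len(outcomes)

# Favourable decisions (e.g. loan approved), keyed by protected group
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}

rates = {group: positive_rate(outcomes) for group, outcomes in decisions.items()}
parity_gap = max(rates.values()) - min(rates.values())

print(rates)                           # e.g. {'group_a': 0.75, 'group_b': 0.375}
print(f"parity gap: {parity_gap:.3f}")

# Embedding the metric into daily processes could mean checking it in a
# pipeline; 0.2 here is an illustrative tolerance, not a recommended value.
THRESHOLD = 0.2
if parity_gap > THRESHOLD:
    print("fairness metric breached - flag for ethics committee review")
```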

2. Understand the sources of bias

Having the right tools to investigate the sources of bias and understand their impact on fairness in decision-making is absolutely critical to developing ethical AI. Identify systems that use machine learning, determine their importance to the business, and implement processes, controls, and countermeasures against AI-led bias.
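One common source of bias is a training set whose composition drifts from the population the model will serve. The sketch below illustrates a simple representation check of that kind; the group names, counts, and reference shares are hypothetical, and a real audit would draw them from the actual training data and population figures.

```python
# A minimal sketch of one bias-source check: comparing the group composition
# of a training set against a reference population. Figures are hypothetical.

from collections import Counter

training_groups = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50
reference_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

counts = Counter(training_groups)
n = sum(counts.values())

for group, expected in reference_share.items():
    observed = counts[group] / n
    # Flag any group represented at less than half its reference share;
    # the 0.5 factor is an illustrative choice, not a standard.
    flag = "UNDERREPRESENTED" if observed < 0.5 * expected else "ok"
    print(f"{group}: observed {observed:.2f} vs expected {expected:.2f} [{flag}]")
```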

3. Balance autonomy with human oversight

Organizations should establish a cross-disciplinary ethics committee to oversee the ongoing management and monitoring of risks introduced by artificial intelligence systems in the organization and its supply chain. The committee must be composed of people from diverse backgrounds to ensure sensitivity to the full range of ethical issues.

The design of algorithms must take into account expert opinion, contextual knowledge, and awareness of historical biases. Manual authorization processes must be enforced in critical areas, such as financial transactions, to prevent them from being compromised by malicious actors.
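The sketch below illustrates one possible shape for such a manual-authorization gate: the model acts alone only on low-value, unambiguous cases, and everything else is routed to a human reviewer. The transaction fields and thresholds are hypothetical.

```python
# A minimal sketch of a manual-authorization gate. The model may act alone
# only on low-value, high-confidence cases; everything else goes to a human.
# All fields and thresholds below are hypothetical.

from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    model_fraud_score: float  # 0.0 (clearly safe) .. 1.0 (clearly fraudulent)

AMOUNT_LIMIT = 10_000.0    # above this, a human must approve regardless of score
SCORE_BAND = (0.05, 0.95)  # ambiguous scores also go to a human

def route(tx: Transaction) -> str:
    if tx.amount > AMOUNT_LIMIT:
        return "human_review"
    if SCORE_BAND[0] < tx.model_fraud_score < SCORE_BAND[1]:
        return "human_review"
    return "auto_approve" if tx.model_fraud_score <= SCORE_BAND[0] else "auto_block"

print(route(Transaction(amount=250.0, model_fraud_score=0.01)))     # auto_approve
print(route(Transaction(amount=50_000.0, model_fraud_score=0.01)))  # human_review
print(route(Transaction(amount=250.0, model_fraud_score=0.50)))     # human_review
```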

4. Empower employees and promote responsible AI

Nurture a culture that empowers individuals to raise concerns about AI systems without stifling innovation. Build internal trust and confidence in AI by handling roles, expectations, and responsibilities transparently. Recognize the need for new roles, and proactively upskill, reskill, or hire.

Where appropriate, empower users by giving them greater control over AI systems and access to recourse. Strong leadership is also critical to empowering employees and promoting responsible AI as a business imperative.

5. Use artificial intelligence to solve discrimination problems

Programmed inspections, in which algorithms are run alongside human decision-making processes, results are compared, and the reasons behind machine-led decisions are explained, are one example of how traditional fairness assessments can benefit from artificial intelligence. Another example is MIT's research program on Combating Systemic Racism, which works to develop and use computational tools to create racial equity across many different sectors of society.
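A programmed inspection of this kind can be as simple as running the model on cases a human has already decided and examining every disagreement. The sketch below is a minimal illustration with hypothetical case data; each disagreement becomes a prompt to ask which decision was fairer and why.

```python
# A minimal sketch of a "programmed inspection": run the algorithm alongside
# human decision-makers on the same cases and compare outcomes. Case data
# is hypothetical.

cases = [
    # (case_id, human_decision, model_decision)
    ("c1", "approve", "approve"),
    ("c2", "deny",    "approve"),
    ("c3", "approve", "approve"),
    ("c4", "deny",    "deny"),
    ("c5", "approve", "deny"),
]

disagreements = [(cid, h, m) for cid, h, m in cases if h != m]
agreement_rate = 1 - len(disagreements) / len(cases)

print(f"human/model agreement: {agreement_rate:.0%}")
for cid, human, model in disagreements:
    # Each disagreement is the starting point for the explanatory step the
    # text describes: which decision was fairer, and on what grounds?
    print(f"{cid}: human={human}, model={model} -> review for bias")
```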
