
Six factors against blind trust in artificial intelligence

王林
Release: 2023-05-27 12:45:12

The impact of ChatGPT has been felt across industries, proving that artificial intelligence is changing the world. Not all of these developments, however, are necessarily positive. Despite the exciting new opportunities AI offers in many areas, we cannot ignore that it lacks an inherent moral compass or fact-checking system to guide its decisions.

As the world becomes ever more AI-centric, we must insist on fact-checking everything we hear. Trusting AI blindly is unwise: some tools can misunderstand context, make mistakes with complete confidence, or even manipulate data.

6 Factors Against Blind Trust in Artificial Intelligence

1. Safety

The most obvious and fundamental factor is safety. Violating the basic principle of "first, do no harm" can have serious and irreparable consequences. In 2018, a driver died when a Tesla operating in self-driving mode collided with a concrete barrier. Although that case was a catastrophic outlier, a 2019 research paper showed that strategically painted lines on the road could hijack the driving algorithm and steer the vehicle off course or into a crash.

2. Robustness and Security

Restricting access, preserving the integrity of information, and maintaining consistent availability are the core of security. Countless attacks exploit weaknesses in the robustness and security of artificial intelligence, and new malicious attacks are devised all the time. In the absence of comprehensive protection measures, AI engineers are left to tailor a fresh safety measure to each new attack. AI can be fooled or weakened by design flaws or by adversarial attacks crafted against a specific weakness; a successful attacker could, for example, hijack an autonomous wheelchair to gain entry to a secured area.
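
To make the threat concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one of the best-known adversarial techniques, written in PyTorch. This is an illustrative sketch, not the specific attack from the research cited above; `model`, `image`, and `label` are assumed placeholders for a trained classifier, a normalized input batch, and its true class.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Fast gradient sign method (FGSM): nudge each pixel one small
    step in the direction that most increases the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # A perturbation bounded by epsilon is usually imperceptible to a
    # human but can be enough to flip the model's prediction.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```

A classifier with high accuracy on clean inputs can drop to near-random performance on inputs perturbed this way, which is why robustness has to be tested explicitly rather than assumed.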

3. Privacy

Preventing harm is also at the core of privacy principles. Data leaks are so frequent that every one of them gives bad actors an opportunity to identify or profile an individual without consent, using data about their well-being, finances, and personal life. A worrying 81% of Americans believe the risks of corporate data collection outweigh the benefits. Researchers have found that people of color and members of minority ethnic groups are more vulnerable than other groups: because they are underrepresented in the data, effectively anonymizing their information after the leaks mentioned above is all the more necessary.
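
The article does not prescribe a mechanism, but differential privacy is one widely studied way to publish statistics about a population, including underrepresented groups, without exposing any individual. Below is a minimal sketch of the Laplace mechanism; the records, query, and epsilon value are invented for illustration.

```python
import numpy as np

def private_count(records, predicate, epsilon=0.5):
    """Differentially private count via the Laplace mechanism.
    A count query has sensitivity 1 (adding or removing one person
    changes it by at most 1), so noise with scale 1/epsilon is
    enough to mask any single individual's presence."""
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical query: lower epsilon means stronger privacy
# guarantees but noisier answers.
records = ["group_a", "group_b", "group_a", "group_c"]
print(private_count(records, lambda r: r == "group_a"))
```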

4. Transparency and Explainability

In computer-based intelligence, transparency covers a broad spectrum. At a minimum, users know they are interacting with an artificial intelligence rather than a human; at the maximum, every process and piece of data is documented, accessible, and intelligible. The UK exam grading scandal is a telling illustration of what a lack of transparency can cause: when assigning grades, the algorithm considered not only a student's own performance but also the school's historical results and the number of students receiving each grade.
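
To see why that design drew criticism, consider a toy version of such a grading formula. The weights and numbers below are invented purely for illustration; the real Ofqual model was considerably more complex.

```python
def predicted_grade(student_score, school_history_avg, weight=0.6):
    """Toy grading model: blend an individual's score with the
    school's historical average. The heavier the school weight, the
    more a strong student at a weak school is pulled down."""
    return weight * school_history_avg + (1 - weight) * student_score

# The same 90-point student gets very different grades depending
# purely on which school they attend:
print(predicted_grade(90, 55))  # 69.0 at a historically weak school
print(predicted_grade(90, 85))  # 87.0 at a historically strong school
```

When such inputs and weights are undisclosed, affected students have no way to contest the outcome, which is exactly the transparency failure the scandal exposed.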

5. Ethics and Fairness

Ethics and fairness must be goals of artificial intelligence: it must adhere to established and enforceable social norms, otherwise known as laws. That is simple to state but technically difficult to achieve, and the real trouble begins when public authorities enforce outdated rules or adopt laissez-faire strategies. Striking the right ethical balance among stakeholder interests, means and ends, privacy rights, and data collection is the responsibility of AI engineers and owners. And not only in the workplace: big tech companies are regularly accused of perpetuating sexism, and activists and researchers argue that female-voiced assistants normalize the view of women as subservient assistants and caretakers.
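
Part of that balance can be measured. Below is a minimal sketch of demographic parity, one common quantitative fairness check; the decisions and group labels are invented for illustration.

```python
def demographic_parity_ratio(decisions, groups, group_a, group_b):
    """Compare positive-outcome rates between two groups. A ratio
    near 1.0 suggests parity; the informal 'four-fifths rule' used
    in US employment law flags ratios below 0.8 as suspect."""
    def rate(g):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(outcomes) / len(outcomes)
    return rate(group_a) / rate(group_b)

# Hypothetical hiring outcomes (1 = hired) for two groups:
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_ratio(decisions, groups, "b", "a"))  # ~0.33
```

A metric like this is only a starting point: it can flag a disparity, but deciding what counts as fair remains the ethical judgment the paragraph above describes.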

6. Accountability

Accountability ensures that a system can be audited against all of the dimensions described above. To monitor progress and protect their own interests, many organizations are now building responsible AI functions in-house; however, internal oversight may not permit any external control or review. Clearview AI is a case in point: its facial recognition technology reportedly outperforms everything else on the market, yet it is privately owned and controlled at its owner's discretion. Exploited by criminal gangs or authoritarian regimes, it could put countless people at risk.

Source: 51cto.com