The development and application of artificial intelligence depends on large amounts of data. The more data machine learning algorithms are fed, the better they can discover patterns, make decisions, predict behavior, personalize content, diagnose medical conditions, and detect cyber threats and fraud.
In fact, artificial intelligence and data have become so interdependent that it is said: "Algorithms without data are blind, and data without algorithms are dumb." But AI technology also carries risks, and not everyone wants to share their information or data, at least under the current rules of digital engagement. Hence the need for some measure of digital self-defense.
Some people cut off contact with the digital world entirely and become digital hermits; others take the safer path of digital self-defense, using privacy-enhancing technologies (PETs) to guard against data leaks.
People who practice digital self-defense distrust website privacy notices and use tools like Privacy Level Extension to verify them. Rather than declaring their preferences to advertisers, they rely on specialized tools, such as AI-powered privacy-preserving search engines and browsers, to search anonymously. These tools stop invisible trackers from following them across pages and offer new ways to explore the web, collaborate, and store data.
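The core idea behind such tracker blockers can be sketched in a few lines: compare the domain of each outgoing request against a filter list, and block third-party requests to known tracking domains. The blocklist entries and URLs below are hypothetical, and the domain heuristic is deliberately simplified; real blockers use curated filter lists and the Public Suffix List.

```python
# Minimal sketch of a tracker blocker's decision: a request is blocked
# when it is third-party relative to the page AND its domain appears on
# a blocklist. Blocklist entries and domains here are illustrative only.

from urllib.parse import urlparse

BLOCKLIST = {"tracker.example", "ads.example"}  # hypothetical filter entries

def registrable_domain(host: str) -> str:
    """Naive 'last two labels' heuristic; real blockers use the Public Suffix List."""
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def should_block(request_url: str, page_url: str) -> bool:
    req_dom = registrable_domain(urlparse(request_url).hostname or "")
    page_dom = registrable_domain(urlparse(page_url).hostname or "")
    is_third_party = req_dom != page_dom
    return is_third_party and req_dom in BLOCKLIST

# A third-party request to a listed tracker is blocked;
# a first-party request to the page's own site is allowed.
print(should_block("https://pixel.tracker.example/collect", "https://news.site/article"))  # True
print(should_block("https://news.site/style.css", "https://news.site/article"))            # False
```

This is why such blockers work silently in the background: the check runs per request, before any tracking script is ever fetched.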
Adopting such tools to counter the collection and analysis of personal data by AI algorithms is called digital self-defense, also known as surveillance self-defense.
Users who adopt digital self-defense are often tired of the ubiquitous tracking of their online behavior, which has even reached into their offline lives. Digital self-defense is not espionage; it is self-protection of privacy.
The movement began in 2013, when former NSA contractor Edward Snowden revealed the U.S. government's sweeping surveillance program, which gave the NSA unprecedented access to people's communications, pulling emails, photos, videos, real-time chats, and other data from the servers of technology giants such as Apple, Google, Skype, and Facebook.
Similarly intrusive tactics by Britain's intelligence agency GCHQ were also disclosed. The revelations raised awareness of privacy and security, changed the trajectory of EU data protection law, and drove consumer-centric, privacy-centric regulation such as the GDPR.
Tech giants scrambled to prove their privacy credentials, making encryption the default and declaring user privacy a priority. They positioned themselves as trusted data guardians whom consumers could rely on without resorting to digital self-defense.
But the 2018 Cambridge Analytica scandal revealed something arguably more disturbing than government surveillance: "surveillance capitalism." Driven by adtech, people's digital identities are scraped, packaged, analyzed, profiled, auctioned, traded, and weaponized by online trackers and data brokers to deliver "precision marketing" that shapes consumer behavior.
It turns out Amazon may know people better than they know themselves. Facebook can predict what people will say next on social media and, more alarmingly, how voters will vote. The Cambridge Analytica scandal showed how adtech "microtargeting" was combined with fake news and psychological-warfare tactics to sway voters' decisions, leading Facebook to ban political ads from its platform earlier this year.
The new darling of adtech is location-based marketing, which tracks and maps users' behavior into the offline world, combining app data with other collected sources to infer intimate details. The resulting rich profiles segment users by race, personality, financial status, age, eating habits, substance-abuse history, political affiliation, social networks, and more, and are then peddled on real-time bidding platforms and even the dark web, with potentially harmful consequences.
The personal data adtech collects isn't always accurate, either, and targeting decisions can be offensive.
For example, bail-bond ads in the United States have targeted users with African-American names, and big brands such as Pepsi have been promoted on extremist websites. One social media user, bombarded with parenting ads through her pregnancy and miscarriage, publicly called on the tech giants to fix their algorithms: "The algorithm was smart enough to know I was pregnant. And after I miscarried, it sent me hospital ads. That's creepy."
Precision marketing can be harmful, and often it doesn't even work. Anthropologist Tricia Wang points out that it commodifies human relationships at the expense of customer understanding, noting that 70% of CMOs believe ad technology fails to deliver valuable customer insights. Big data paints a big picture, but an incomplete and inaccurate one, lacking the human narrative.
Ironically, adtech risks purging customer-centricity from marketing because it cannot see the humans behind the zeros and ones.
Pernille Tranberg, co-founder of Data Ethics, is helping consumers and businesses find win-win alternatives to surveillance. She teaches consumers the basics of digital self-defense so they can protect their data using tools like Digi.Me and Tapx and trade it for fair market value on their own terms.
Website tools like Blockthrough block unknown or unsafe third-party trackers while allowing privacy-friendly ones. Brave rewards users for viewing privacy-respecting ads, so visitors get paid while sites still earn browsing revenue. Analytics providers like Matomo give website owners rich data along with privacy controls they can put to use.
Is the relationship between consumers and advertisers adversarial? Yes, but not entirely. Advertisers must respect people's choices rather than disabling ad blockers or tricking users into consent with deceptive interfaces and black-box models. The two sides should work together, not against each other; advertisers who fail to understand their customers will ultimately fail.
All of this shows that, if people are not careful, advertising technology that uses artificial intelligence to analyze data can invade their privacy. That is why people want to take back their agency: tired of being spied on, they are now monitoring advertiser behavior in turn.
Data shows that 1.7 billion people now use ad-blocking tools, which has been called "the largest boycott in human history." As companies like Apple and Google move away from third-party cookies, advertisers will need new ways to leverage consumer data while maintaining trust.
Privacy protection makes artificial intelligence humane, and it can save adtech. Privacy-friendly tools level the playing field and can encourage people to share more. Advertisers can analyze anonymized or encrypted data and feed it back into AI systems to unlock the genuine customer insights that surveillance-based adtech cannot provide.
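One simple form of the privacy-friendly analysis described above can be sketched as follows: pseudonymize user identifiers with a salted hash so raw IDs never reach the analysis step, and release only a noise-protected aggregate (the Laplace mechanism from differential privacy). The salt, records, and epsilon value below are illustrative assumptions, not a production scheme.

```python
# Sketch of privacy-preserving analysis: (1) replace raw identifiers with
# salted hashes before analysis; (2) publish only a noisy aggregate count,
# using Laplace noise as in differential privacy. All names and values
# here are illustrative.

import hashlib
import math
import random

SALT = b"rotate-this-secret"  # hypothetical secret salt, kept out of the dataset

def pseudonymize(user_id: str) -> str:
    """Replace a raw identifier with a truncated salted SHA-256 digest."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) by inverse-CDF of a uniform draw."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_count(true_count: int, epsilon: float = 1.0) -> float:
    """A counting query has sensitivity 1, so the noise scale is 1/epsilon."""
    return true_count + laplace_noise(1.0 / epsilon)

# Pseudonymize before analysis; publish only the noisy aggregate.
records = [("alice", "clinic.example"), ("bob", "clinic.example")]
safe_records = [(pseudonymize(uid), site) for uid, site in records]
print(noisy_count(len(safe_records)))  # roughly 2, plus Laplace noise
```

The point of the design is that neither step requires the analyst to ever see a raw identifier or an exact per-user count, which is precisely the trade that lets advertisers keep useful signal while dropping the surveillance.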
In short, people's awareness of digital self-defense is awakening, and AI practices that abuse data will have to evolve.
The above is the detailed content of "Awakening of Digital Self-Defense: Will Privacy Technology Kill Artificial Intelligence?", from the PHP Chinese website.