Generative AI is undoubtedly an incredible technological breakthrough. But, really, does it have to be shoved into every single app and service I use?
Here's why I think it doesn't.
One key concern of generative AI is privacy and security.
Dedicated AI apps like ChatGPT or Gemini can manage this risk to an extent. But when these features start intruding into Google Docs, Messages, and other day-to-day apps, it becomes really challenging to ensure that sensitive personal information isn't exposed to AI.
While AI companies' privacy policies vary, let's take OpenAI as an example. OpenAI states that it collects "Personal Information that is included in the input, file uploads, or feedback" and can share it with third parties.
Likewise, if you haven't opted out of OpenAI using your data to train its models, your information has likely already been used for training. Not to mention, researchers have managed to extract ChatGPT's training data, as reported by 404 Media.
Even when the privacy policies of Big Tech companies state that they don't share or use your data, it's hard to rely on their word, given their unimpressive track record.
If you've accidentally tapped on the Meta AI icon while searching for a chat in WhatsApp, you know how annoying these features can be. Why would anyone in their right mind launch WhatsApp on their phone to “Imagine a tasty dessert” or “Invent a new language”?
It's not just WhatsApp or Meta; you'll find Google's Gemini AI popping up in Messages, Gmail, Drive, and other places. AI Overviews in Google Search also deserve a mention here.
What makes these generative AI features particularly annoying is that they clutter the user interface of these apps, especially when they are given unnaturally prominent placement.
There are use cases where AI is greatly useful, and there are circumstances where it isn't. Or at least not yet.
Unfortunately, tech companies are more interested in jumping on the AI bandwagon without considering whether it will actually be helpful for users.
Likewise, the trend of shipping half-baked AI features, which are far from helpful, doesn't improve matters. Again, AI Overviews in Google Search, which once told a user to add glue to pizza, is a good example.
I may use AI to draft a toneless, professional email. But when I'm sending my family a message or commenting on my friend's post, I can't imagine using AI. There are also other reasons social platforms should not have generative AI.
Similarly, GenAI in creative tools can be helpful, but not every creative app needs AI. Creative professionals also continue to debate whether such tools can match the originality and intent of a human artist.
So, it doesn't really make sense to use generative AI in areas that require authenticity and uniqueness.
Training and using generative AI requires a substantial amount of energy. For perspective, training OpenAI's GPT-3 consumed an estimated 1,287 MWh [PDF] of electricity. Likewise, according to the IEA, a ChatGPT request consumes nearly 10 times more power than a Google search, and the energy used to train GPT-3 could power an average US household for about 120 years.
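That 120-year figure holds up to back-of-the-envelope arithmetic. The household consumption number below is my assumption (roughly the average annual US household electricity use reported by the EIA), not a figure from the article:

```python
# Rough sanity check of the GPT-3 training-energy comparison.
GPT3_TRAINING_MWH = 1287            # estimated energy to train GPT-3
HOUSEHOLD_KWH_PER_YEAR = 10_700     # assumed average US household usage (approx. EIA figure)

training_kwh = GPT3_TRAINING_MWH * 1000
years_of_household_power = training_kwh / HOUSEHOLD_KWH_PER_YEAR
print(f"~{years_of_household_power:.0f} years")  # ~120 years
```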
It's easier to justify large amounts of energy usage when generative AI is used for actual, productive work. However, shoving GenAI into every app results in a waste of energy (and, of course, more carbon emissions) without any real benefit.
To sum up, as useful as generative AI may be, it's certainly not a technology that's necessary for each app to function. On the contrary, it raises privacy concerns, makes the UI clunky, and results in more wasted energy. Most importantly, GenAI can have negative effects on your health.