Jump to content

Artificial intelligence

From Consumer Rights Wiki
Revision as of 23:18, 15 February 2026 by Banana (talk | contribs) (Added archive URLs for 2 citation(s) using CRWCitationBot)

⚠️ Article status notice: This Article's Relevance Is Under Review

This article has been flagged for questionable relevance. Its connection to the systemic consumer protection issues outlined in the Mission statement and Moderator Guidelines isn't clear.

Learn more ▼

Article Status Notice: Inappropriate Tone/Word Usage

This article needs additional work to meet the wiki's Content Guidelines and be in line with our Mission Statement for comprehensive coverage of consumer protection issues. Specifically it uses wording throughout that is non-compliant with the Editorial guidelines of this wiki.

Learn more ▼

Artificial intelligence (AI) is a field of computer science that produces systems designed to solve problems that humans typically solve using intelligence. In the consumer and industry space, it is commonly referred to as chatbots or large language models (LLMs), which have been a main focus of industry since the November 2022 launch of OpenAI's ChatGPT, with tens of billions of dollars in funding allocated to producing more popular LLMs. Also, a significant focus is on text-to-image models, which "draw" an image using a written prompt, and less commonly, text-to-video models, which extend the text-to-image concept across several smooth video frames.

AI is not a new concept; it has been of interest since the 1950s. AI is a catch-all term, encompassing many areas and techniques.

Generative artificial intelligence models are trained through vast amounts of existing human-generated content. Using the example of an LLM, by gathering statistics on patterns of words that people use, the model can generate sequences of words that seem similar to what a person might have written. LLM does not understand anything; they cannot reason. Everything they generate is just a randomly modulated pattern of tokens. People reading sequences of tokens sometimes perceive things they think are true. Sequences that do not make sense to the reader, or that are false, are called hallucinations. LLMs are typically trained to produce output that is pleasing to people, exhibiting dark patterns. For example, they often produce output which seems confidently written, use patterns which praise the user (sycophancy), and employ emotionally manipulative language.

LLMs are a glorified autocomplete. People are accustomed to interacting with others, and many overestimate the abilities of things that exhibit complex, person-like patterns. Promoters of “AI” systems take advantage of this tendency, using suggestive names (like “reasoning” and “learning”) and grand claims (“PHD level”), which make it harder for people to understand these systems.

From November 2022 to 2025, venture capitalists and companies invested hundreds of billions of dollars into AI but received minimal returns. When companies seek returns, consumers can expect that products may be orphaned, services may be reduced, customer data may be sold or repurposed, costs may rise, and companies may reduce staff or fail. Historically, AI has had brief periods of intense hype, followed by disillusionment, and “AI winters.”

The current well-funded industry of artificial intelligence tools has led to the rampant and unethical use of content. Startups aiming to develop AI services have been rapidly scraping the internet for content to train future models, and members of the field are concerned that they are approaching the limit of publicly available content to train from.[1]

Why is it a problem

Unethical training of data

Further reading: Artificial intelligence/training

Users' work is sometimes silently trained without their explicit consent, as was the case for Adobe's AI policy.

Privacy concerns of AI

AI can be and has been used to generate deepfakes of people with and without their consent, Deepfakes are the generation of media with the likeness of an individual. The types of deepfake media can range from harmless to harmful. The latter extreme including child pornography, revenge porn, blackmail, etc. Since the rampant rise of consumer AI, deepfakes have become even more prevalent. With some websites explicitly specializing in them.

Privacy concerns of online AI models

There are several concerns with using online AI models like ChatGPT, not only because they are proprietary, but also because there is no guarantee of where your data will be stored or used. Recent developments in local AI models offer an alternative to online AI models, as they can be used offline once downloaded from platforms like HuggingFace. Common models to run include Llama (Meta), DeepSeek (DeepSeek), Phi (Microsoft), Mistral (Mistral AI), Gemma (Google).

In some cases, these AI models can also be hijacked for malicious purposes. Demonstrated from the usage of Comet (Perplexity), users can run arbitrary prompts to the browser's built-in AI assistant via hiding text in the HTML comments, non-visible webpage text, or simple comments on a webpage.[2] These arbitrary prompts can then be exploited to hijack sensitive information, or worse, gain unauthorized access to high-value accounts, such as those for banking or gaming libraries.[3]

Further reading

References

  1. Tremayne-Pengelly, Alexandra (16 Dec 2024). "Ilya Sutskever Warns A.I. Is Running Out of Data—Here's What Will Happen Next". Observer. Archived from the original on 26 Nov 2025.
  2. "Tweet from Brave". X (formerly Twitter). Aug 20, 2025. Retrieved Aug 24, 2025.
  3. "Tweet from zack (in SF)". X (formerly Twitter). Aug 23, 2025. Archived from the original on 24 Aug 2025. Retrieved Aug 24, 2025.