Artificial intelligence: Difference between revisions

Drakeula (talk | contribs)
Drakeula (talk | contribs)
dark patterns
Line 5: Line 5:
Since the November 2022 launch of [[ChatGPT]], [[wikipedia:Large language model|large language model]] (LLM) chatbots have been a main focus of industry, with tens of billions of dollars in funding allocated to producing more popular LLMs. Also a significant focus are [[wikipedia:Text-to-image model|text-to-image models]], which "draw" an image using written prompt, and [[wikipedia:Text-to-video model|text-to-video models]], which extend the text-to-image concept across several smooth video frames.
Since the November 2022 launch of [[ChatGPT]], [[wikipedia:Large language model|large language model]] (LLM) chatbots have been a main focus of industry, with tens of billions of dollars in funding allocated to producing more popular LLMs. Also a significant focus are [[wikipedia:Text-to-image model|text-to-image models]], which "draw" an image using written prompt, and [[wikipedia:Text-to-video model|text-to-video models]], which extend the text-to-image concept across several smooth video frames.


[[wikipedia:Generative artificial intelligence|Generative artificial intelligence]] models are trained through vast amounts of existing human-generated content. Using the example of an LLM, by gathering statistics on patterns of words that people use, the model can generate sequences of words that seem similar to what a person might have written.  LLM do not understand anything, they can not reason.  Everything they generate is just a randomly modulated pattern of tokens.  People reading sequences of tokens sometimes see things they think of as being true.  Sequences which do not make sense to the reader, or which are false are called [[wikipedia:Hallucination (artificial intelligence)|hallucination]].  LLM are typically trained to produce output which is pleasing to people, therefore they often produce output which seems confidently-written, and use patterns which praise the user (sycophancy).   
[[wikipedia:Generative artificial intelligence|Generative artificial intelligence]] models are trained through vast amounts of existing human-generated content. Using the example of an LLM, by gathering statistics on patterns of words that people use, the model can generate sequences of words that seem similar to what a person might have written.  LLM do not understand anything, they can not reason.  Everything they generate is just a randomly modulated pattern of tokens.  People reading sequences of tokens sometimes see things they think of as being true.  Sequences which do not make sense to the reader, or which are false are called [[wikipedia:Hallucination (artificial intelligence)|hallucination]].  LLM are typically trained to produce output which is pleasing to people, exhibiting [[dark patterns]], for example they often produce output which seems confidently-written, use patterns which praise the user (sycophancy) and emotionally manipulative language.   


LLM are a glorified autocomplete.  People are used to dealing with people, and many overestimate the abilities of things that exhibit complex, person like patterns.  Promoters of “AI” systems take advantage of this tendency,  using suggestive names (like “reasoning,” and “learning”) and grand claims (“PHD level”), which make it harder for people to understand these systems.
LLM are a glorified autocomplete.  People are used to dealing with people, and many overestimate the abilities of things that exhibit complex, person like patterns.  Promoters of “AI” systems take advantage of this tendency,  using suggestive names (like “reasoning,” and “learning”) and grand claims (“PHD level”), which make it harder for people to understand these systems.
Line 104: Line 104:
In some cases, these AI models can also be hijacked for malicious purposes. Demonstrated from the usage of Comet ([[Perplexity]]), users can run arbitrary prompts to the browser's built-in AI assistant via hiding text in the HTML comments, non-visible webpage text, or simple comments on a webpage.<ref>{{Cite web |date=Aug 20, 2025 |title=Tweet from Brave |url=https://xcancel.com/brave/status/1958152314914508893#m |access-date=Aug 24, 2025 |website=X (formerly [[Twitter]])}}</ref> These arbitrary prompts can then be abused to hijack sensitive information, or worse, break into high-value accounts, such as for banking or game libraries.<ref>{{Cite web |date=Aug 23, 2025 |title=Tweet from zack (in SF) |url=https://xcancel.com/zack_overflow/status/1959308058200551721 |access-date=Aug 24, 2025 |website=X (formerly [[Twitter]])}}</ref>
In some cases, these AI models can also be hijacked for malicious purposes. Demonstrated from the usage of Comet ([[Perplexity]]), users can run arbitrary prompts to the browser's built-in AI assistant via hiding text in the HTML comments, non-visible webpage text, or simple comments on a webpage.<ref>{{Cite web |date=Aug 20, 2025 |title=Tweet from Brave |url=https://xcancel.com/brave/status/1958152314914508893#m |access-date=Aug 24, 2025 |website=X (formerly [[Twitter]])}}</ref> These arbitrary prompts can then be abused to hijack sensitive information, or worse, break into high-value accounts, such as for banking or game libraries.<ref>{{Cite web |date=Aug 23, 2025 |title=Tweet from zack (in SF) |url=https://xcancel.com/zack_overflow/status/1959308058200551721 |access-date=Aug 24, 2025 |website=X (formerly [[Twitter]])}}</ref>


== Further reading ==
==Further reading==


* [[Automatic Content Recognition]]
*[[Dark pattern]]
* [[Palantir]]
*[[Automatic Content Recognition]]
* [[Meta]]
*[[Palantir]]
* [[Yandex]]
*[[Meta]]
* [[TikTok & AI-powered Ad Tracking]]
*[[Yandex]]
* [[Flock License Plate Readers]]
*[[TikTok & AI-powered Ad Tracking]]
* [[Ring]]
*[[Flock License Plate Readers]]
* [[Waymo]]
*[[Ring]]
* [[Google]]
*[[Waymo]]
*[[Google]]


==References==
==References==