Further reading: rm irrelevant links
Banana (talk | contribs)
Added archive URLs for 2 citation(s) using CRWCitationBot
Line 11: Line 11:
From November 2022 to 2025, venture capitalists and companies invested hundreds of billions of dollars into AI but received minimal returns.  When companies seek returns, consumers can expect that products may be orphaned, services may be reduced, customer data may be sold or repurposed, costs may rise, and companies may reduce staff or fail.  Historically, AI has had brief periods of intense hype, followed by disillusionment, and “AI winters.”
From November 2022 to 2025, venture capitalists and companies invested hundreds of billions of dollars into AI but received minimal returns.  When companies seek returns, consumers can expect that products may be orphaned, services may be reduced, customer data may be sold or repurposed, costs may rise, and companies may reduce staff or fail.  Historically, AI has had brief periods of intense hype, followed by disillusionment, and “AI winters.”


The current well-funded industry of artificial intelligence tools has led to the rampant and unethical use of content. Startups aiming to develop AI services have been rapidly scraping the internet for content to train future models, and members of the field are concerned that they are approaching the limit of publicly available content to train from.<ref>{{Cite web |last=Tremayne-Pengelly |first=Alexandra |date=16 Dec 2024 |title=Ilya Sutskever Warns A.I. Is Running Out of Data—Here’s What Will Happen Next |url=https://observer.com/2024/12/openai-cofounder-ilya-sutskever-ai-data-peak/ |website=Observer}}</ref>
The current well-funded industry of artificial intelligence tools has led to the rampant and unethical use of content. Startups aiming to develop AI services have been rapidly scraping the internet for content to train future models, and members of the field are concerned that they are approaching the limit of publicly available content to train from.<ref>{{Cite web |last=Tremayne-Pengelly |first=Alexandra |date=16 Dec 2024 |title=Ilya Sutskever Warns A.I. Is Running Out of Data—Here’s What Will Happen Next |url=https://observer.com/2024/12/openai-cofounder-ilya-sutskever-ai-data-peak/ |website=Observer |url-status=live |archive-url=http://web.archive.org/web/20251126053705/https://observer.com/2024/12/openai-cofounder-ilya-sutskever-ai-data-peak/ |archive-date=26 Nov 2025}}</ref>


==Why is it a problem==
==Why is it a problem==
Line 25: Line 25:
There are several concerns with using online AI models like [[ChatGPT]], not only because they are proprietary, but also because there is no guarantee of where your data will be stored or used. Recent developments in local AI models offer an alternative to online AI models, as they can be used offline once downloaded from platforms like [https://huggingface.co/ HuggingFace]. Common models to run include Llama ([[Meta]]), DeepSeek ([[DeepSeek]]), Phi ([[Microsoft]]), Mistral ([[Mistral AI]]), Gemma ([[Google]]).
There are several concerns with using online AI models like [[ChatGPT]], not only because they are proprietary, but also because there is no guarantee of where your data will be stored or used. Recent developments in local AI models offer an alternative to online AI models, as they can be used offline once downloaded from platforms like [https://huggingface.co/ HuggingFace]. Common models to run include Llama ([[Meta]]), DeepSeek ([[DeepSeek]]), Phi ([[Microsoft]]), Mistral ([[Mistral AI]]), Gemma ([[Google]]).


In some cases, these AI models can also be hijacked for malicious purposes. Demonstrated from the usage of Comet ([[Perplexity]]), users can run arbitrary prompts to the browser's built-in AI assistant via hiding text in the HTML comments, non-visible webpage text, or simple comments on a webpage.<ref>{{Cite web |date=Aug 20, 2025 |title=Tweet from Brave |url=https://xcancel.com/brave/status/1958152314914508893#m |access-date=Aug 24, 2025 |website=X (formerly [[Twitter]])}}</ref> These arbitrary prompts can then be exploited to hijack sensitive information, or worse, gain unauthorized access to high-value accounts, such as those for banking or gaming libraries.<ref>{{Cite web |date=Aug 23, 2025 |title=Tweet from zack (in SF) |url=https://xcancel.com/zack_overflow/status/1959308058200551721 |access-date=Aug 24, 2025 |website=X (formerly [[Twitter]])}}</ref>
In some cases, these AI models can also be hijacked for malicious purposes. Demonstrated from the usage of Comet ([[Perplexity]]), users can run arbitrary prompts to the browser's built-in AI assistant via hiding text in the HTML comments, non-visible webpage text, or simple comments on a webpage.<ref>{{Cite web |date=Aug 20, 2025 |title=Tweet from Brave |url=https://xcancel.com/brave/status/1958152314914508893#m |access-date=Aug 24, 2025 |website=X (formerly [[Twitter]])}}</ref> These arbitrary prompts can then be exploited to hijack sensitive information, or worse, gain unauthorized access to high-value accounts, such as those for banking or gaming libraries.<ref>{{Cite web |date=Aug 23, 2025 |title=Tweet from zack (in SF) |url=https://xcancel.com/zack_overflow/status/1959308058200551721 |access-date=Aug 24, 2025 |website=X (formerly [[Twitter]]) |url-status=live |archive-url=http://web.archive.org/web/20250824201111/https://xcancel.com/zack_overflow/status/1959308058200551721 |archive-date=24 Aug 2025}}</ref>


==Further reading==
==Further reading==