Artificial intelligence

LLMs are glorified autocomplete.  People are used to dealing with people, and many overestimate the abilities of things that exhibit complex, person-like patterns.  Promoters of “AI” systems take advantage of this tendency, using suggestive terms (like “reasoning” and “learning”) and grand claims (“PhD-level”), which make it harder for people to understand these systems.


From November 2022 to 2025, venture capitalists and companies poured hundreds of billions of dollars into AI but received minimal returns.  When companies seek returns, consumers can expect products to be orphaned, services to be reduced, customer data to be sold or repurposed, costs to rise, and companies to reduce staff or fail.  Historically, AI has had brief periods of intense hype, followed by disillusionment and “AI winters.”


The current well-funded industry of artificial intelligence tools has resulted in rampant unethical use of content. Startups intending to produce AI services have been scraping the internet for content to train future models at a rapid pace, and members of the field are concerned that they are approaching the limit of publicly available content to train on.<ref>{{Cite web |last=Tremayne-Pengelly |first=Alexandra |date=16 Dec 2024 |title=Ilya Sutskever Warns A.I. Is Running Out of Data—Here’s What Will Happen Next |url=https://observer.com/2024/12/openai-cofounder-ilya-sutskever-ai-data-peak/ |website=Observer}}</ref>