Artificial intelligence: Difference between revisions
Rolled back prev edits. Added irrelevant notice, which should stay until the article is restructured to fit within scope.
Intro - refine, expand a bit, there is no intelligence in AI or in LLM.
{{Irrelevant}}{{ToneWarning}}
'''Artificial intelligence''' (AI) is a field of computer science that produces systems intended to solve problems which humans solve by using intelligence. So far, no AI solutions are intelligent. AI is not a new concept - it has been an area of interest since as early as the 1950s. AI is a catch-all term encompassing many areas and techniques, so merely saying that something uses AI tells one little about it.
Since the November 2022 launch of [[ChatGPT]], [[wikipedia:Large language model|large language model]] (LLM) chatbots have been a main focus of the industry, with tens of billions of dollars in funding allocated to producing more popular LLMs. Also a significant focus are [[wikipedia:Text-to-image model|text-to-image models]], which "draw" an image from a written prompt, and [[wikipedia:Text-to-video model|text-to-video models]], which extend the text-to-image concept across a sequence of video frames.
[[wikipedia:Generative artificial intelligence|Generative artificial intelligence]] models are trained on vast amounts of existing human-generated content. Using the example of an LLM: by gathering statistics on the patterns of words that people use, the model can generate sequences of words that seem similar to what a person might have written. LLMs do not understand anything and cannot reason; everything they generate is a randomly modulated pattern of tokens. People reading those sequences of tokens sometimes see things they take to be true. Sequences which do not make sense to the reader, or which are false, are called [[wikipedia:Hallucination (artificial intelligence)|hallucinations]]. LLMs are typically trained to produce output that is pleasing to people, so they often produce output which seems confidently written and use patterns which praise the user (sycophancy).
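To make the statistical idea concrete, the following is a minimal, purely illustrative sketch: a toy bigram model that counts which word tends to follow which in a tiny, made-up corpus, then samples new word sequences from those counts. This is not how production LLMs are built (they use neural networks over subword tokens), and the corpus and function names here are invented for the example; it only illustrates "gathering statistics on patterns of words" and generating plausible-looking text without any understanding.

<syntaxhighlight lang="python">
import random
from collections import defaultdict, Counter

# Tiny made-up training corpus (illustrative only).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Gather statistics: for each word, count the words observed to follow it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start, length=8):
    """Sample a word sequence from the bigram counts, with no notion of meaning."""
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:
            break
        choices, weights = zip(*counts.items())
        # Randomly pick the next word in proportion to how often it followed.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
</syntaxhighlight>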
LLMs are glorified autocomplete. People are used to dealing with people, and many overestimate the abilities of things that exhibit complex, person-like patterns. Promoters of “AI” systems take advantage of this tendency, using suggestive names (like “reasoning” and “learning”) and grand claims (“PhD level”), which make it harder for people to understand these systems.
As of 2025, venture capitalists and companies have poured hundreds of billions of dollars into AI but received minimal returns. As these companies seek returns, consumers can expect products to be orphaned, services to be reduced, customer data to be sold or repurposed, costs to rise, and companies to cut staff or fail. Historically, AI has gone through brief periods of intense hype, followed by disillusionment and “AI winters.”
The current, well-funded industry of artificial intelligence tools has resulted in rampant unethical use of content. Startups intending to produce AI services have been rapidly scraping the internet for content to train future models, and members of the field are concerned that they are approaching the limit of publicly available content to train from.<ref>{{Cite web |last=Tremayne-Pengelly |first=Alexandra |date=16 Dec 2024 |title=Ilya Sutskever Warns A.I. Is Running Out of Data—Here’s What Will Happen Next |url=https://observer.com/2024/12/openai-cofounder-ilya-sutskever-ai-data-peak/ |website=Observer}}</ref>
==Unethical website scraping==