Talk:Artificial intelligence

:I see AI more as a theme/background article.  AI is so pervasive now, and affects people in so many ways, that I think it makes sense to have at least one article on it.
:Things that I think such an article should cover include:
:* Data centers - environmental impacts, community impacts, energy demand and subsidy by electricity and water rate payers, and how many of these agreements are made in secret, even in nominally democratic/open governmental systems.  In the US, data centers are often located in marginalized communities, where people are not as organized to protect their community.  (This is not exclusively an AI thing; it might be worth a separate article about data centers in general, covering crypto mining operations, etc.)
:* Inaccuracy and inappropriate use of LLMs - "hallucinations", and people not understanding what an LLM is and assuming it is more capable than it is. LLMs make a poor substitute for human-written product reviews (inaccurate, and they praise whatever the user wants - even products that don't exist).
:* Control of information - use of LLMs in place of search is decimating independent information sources (taking away advertising revenue and views).
:* Intellectual property - piracy in training data (using stolen data), use of output.
:* Privacy and security - data poisoning, ease of subverting guardrails, gathering data for training, revealing prompts, law enforcement review of chatbot prompts and outputs, etc.
:* Concerns about possible effects on users - AI psychosis, etc.
:* Labor concerns - conditions of labelers/piece workers.
:* Liability - LLMs are often inaccurate; what happens when the AI harms people (libel, suicide, etc.)?
:I have sources for a bunch of this, will be adding them to the article talk page as time permits.  [[User:Drakeula|Drakeula]] ([[User talk:Drakeula|talk]]) 18:27, 4 September 2025 (UTC)