Drakeula (talk | contribs)
Drakeula (talk | contribs)
 
(4 intermediate revisions by the same user not shown)
Line 1: Line 1:
==Hello==
==Hello==
Hi.  Most of my wiki experience is from editing on wikipedia.
Hi.  Most of my wiki experience is from editing on wikipedia (15+ years ago).


My background is in computers, computers & society, and technical writing/editing.  Plus some public health.
My background is in computers, computers & society (especially privacy and user centered design), and technical writing/editing.  Plus some public health.




Line 8: Line 8:


==Draft for article on Artificial Intelligence==
==Draft for article on Artificial Intelligence==
Data mining/Big Data
Some techniques are considered "AI"  When AI is popular, it is used as a marketing term, may encompas other techniques.


===Machine learning (Generative AI is subset)===
===Machine learning (Generative AI is subset)===
Line 22: Line 27:


==Why it is a problem==
==Why it is a problem==
Natural language processing (LLM are in that area)


===LLM===
===LLM===
Line 33: Line 39:


'''Unreliable.'''  No way to make them reliable.  [No cure for hallucinations][Cite reducing medical disclaimers]
'''Unreliable.'''  No way to make them reliable.  [No cure for hallucinations][Cite reducing medical disclaimers]
Wordy, cliched, pointless.  Summaries wordy, included material not in source, strange selections what to include.[https://pivot-to-ai.com/2024/09/04/dont-use-ai-to-summarize-documents-its-worse-than-humans-in-every-way/ Don't use AI summarize]


'''Decreased security.'''  Agents especially.  If you use a large language model, realize that anything the "agent" can do on your behalf, anybody else can also tell it to do, just by giving it input.  (So, if an agent reads your e-mail, anybody sending you an e-mail can tell it what to do.  If you have the agent read a web page, or a paper, or evaluate a potential hire...)  Companies that use agents may be easier to hack.  If you give them your data, it may be more likely to fall into unauthorized hands.
'''Decreased security.'''  Agents especially.  If you use a large language model, realize that anything the "agent" can do on your behalf, anybody else can also tell it to do, just by giving it input.  (So, if an agent reads your e-mail, anybody sending you an e-mail can tell it what to do.  If you have the agent read a web page, or a paper, or evaluate a potential hire...)  Companies that use agents may be easier to hack.  If you give them your data, it may be more likely to fall into unauthorized hands.
Line 46: Line 54:
*Lack grounding in reality and safety. [suicides]  AI psychosis[ ]
*Lack grounding in reality and safety. [suicides]  AI psychosis[ ]


'''Fraud''' is a major use-case for generative AI.  Easy to generate low-quality output that looks like a particular type of communication with a specified message.  Fake reviews.  Fake scientific articles.     
'''Fraud''' is a major use-case for generative AI.  Easy to generate low-quality output that looks like a particular type of communication with a specified message.  Fake reviews.  Fake scientific articles.  [[check def. of fraud]   
 
Fake decisions (government reports, decide applications), pretend has judgement.     


Waste of  time/resources.  Lot more garbage to throw out to find anything of worth.  (Dead internet hypothesis.)   
Waste of  time/resources.  Lot more garbage to throw out to find anything of worth.  (Dead internet hypothesis.)   
Fake products/services. 
Identity theft/reputational threat 


'''Deepfakes.'''  Sell counterfeit song recordings (sometimes authorized, and some unauthorized).  Fake audio/video from a known/trusted source. [Fake songs artist reputation, ]
'''Deepfakes.'''  Sell counterfeit song recordings (sometimes authorized, and some unauthorized).  Fake audio/video from a known/trusted source. [Fake songs artist reputation, ]


Programs make creating real-seeming documentation of fake events easy.  (Nudify filters, )  Pushed  
Programs make creating real-seeming documentation of fake events easy.  (Nudify filters, )  Pushed  
Violate privacy - send information to unauthorized parties. 
Infer information from patterns.


==Examples (of abuse)==
==Examples (of abuse)==
Line 61: Line 79:
Pretend LLM can understand things and has judgement:
Pretend LLM can understand things and has judgement:


* UK government using MS Copilot (using ChatGPT) to decide visa and asylum applications, previous machine learning visa review tool was very racist.https://pivot-to-ai.com/2024/11/11/uk-home-office-speeds-up-visa-and-refugee-processing-with-copilot-ai-reject-a-bot/
*UK government using MS Copilot (using ChatGPT) to decide visa and asylum applications, previous machine learning visa review tool was very racist.https://pivot-to-ai.com/2024/11/11/uk-home-office-speeds-up-visa-and-refugee-processing-with-copilot-ai-reject-a-bot/
* Nevada using LLM to decide claim appeals.[https://pivot-to-ai.com/2024/09/10/nevada-to-clear-unemployment-claim-backlogs-with-one-weird-trick-pretending-an-ai-has-judgment/]
*Nevada using LLM to decide claim appeals.[https://pivot-to-ai.com/2024/09/10/nevada-to-clear-unemployment-claim-backlogs-with-one-weird-trick-pretending-an-ai-has-judgment/]
 
https://pivot-to-ai.com/2024/09/27/worst-government-use-of-chatgpt-in-a-child-protection-report/


Search summaries [what is google's name]
Search summaries [what is google's name]
Line 78: Line 98:




a
==Not AI==
 
===EULA of despair===
 
Might be useful to add link to EULA of despair to the relevant articles.  This is what I put in Discord's (as a template?).  See what others think.  It is dated, but still may be an interesting/educational perspective.
 
Discord terms of service are lengthy and complex,  in Oct 2025, just the base terms are 29 pages, 14th grade (Junior in college) reading level, estimated reading time 42 minutes.<ref>{{Cite web |title=Calculated using readabilitychecker.com based on current discord TOS. |url=readabilitychecker.com |access-date=9 Oct 2025}}</ref>  The terms incorporate extensive additional material; a 2021 version of Discord TOS, featured in "EULAs of despair", would take an estimated over 275 hours to read.<ref>{{Cite web |title=EULA of despair |url=https://www.pilotlab.org/eulas-of-despair |access-date=9 Oct 2025 |website=Penn State University Pilot Lab}}</ref>
<references />