Drakeula
Joined 4 September 2025
(One intermediate revision by the same user not shown) | |||
Line 33: | Line 33: | ||
'''Unreliable.''' No way to make them reliable. [No cure for hallucinations][Cite reducing medical disclaimers] | '''Unreliable.''' No way to make them reliable. [No cure for hallucinations][Cite reducing medical disclaimers] | ||
Wordy, cliched, pointless. Summaries wordy, included material not in source, strange selections what to include.[https://pivot-to-ai.com/2024/09/04/dont-use-ai-to-summarize-documents-its-worse-than-humans-in-every-way/ Don't use AI summarize] | |||
'''Decreased security.''' Agents especially. If you use a large language model, realize that anything the "agent" can do on your behalf, anybody else can also tell it to do, just by giving it input. (So, if an agent reads your e-mail, anybody sending you an e-mail can tell it what to do. If you have the agent read a web page, or a paper, or evaluate a potential hire...) Companies that use agents may be easier to hack. If you give them your data, it may be more likely to fall into unauthorized hands. | '''Decreased security.''' Agents especially. If you use a large language model, realize that anything the "agent" can do on your behalf, anybody else can also tell it to do, just by giving it input. (So, if an agent reads your e-mail, anybody sending you an e-mail can tell it what to do. If you have the agent read a web page, or a paper, or evaluate a potential hire...) Companies that use agents may be easier to hack. If you give them your data, it may be more likely to fall into unauthorized hands. | ||
Line 46: | Line 48: | ||
*Lack grounding in reality and safety. [suicides] AI psychosis[ ] | *Lack grounding in reality and safety. [suicides] AI psychosis[ ] | ||
'''Fraud''' is a major use-case for generative AI. Easy to generate low-quality output that looks like a particular type of communication with a specified message. Fake reviews. Fake scientific articles. | '''Fraud''' is a major use-case for generative AI. Easy to generate low-quality output that looks like a particular type of communication with a specified message. Fake reviews. Fake scientific articles. [[check def. of fraud] | ||
Fake decisions (government reports, decide applications), pretend has judgement. | |||
Waste of time/resources. Lot more garbage to throw out to find anything of worth. (Dead internet hypothesis.) | |||
'''Deepfakes.''' Sell counterfeit song recordings (sometimes authorized, and some unauthorized). Fake audio/video from a known/trusted source. | Fake products/services. | ||
Identity theft/reputational threat | |||
'''Deepfakes.''' Sell counterfeit song recordings (sometimes authorized, and some unauthorized). Fake audio/video from a known/trusted source. [Fake songs artist reputation, ] | |||
Programs make creating real-seeming documentation of fake events easy. (Nudify filters, ) Pushed | Programs make creating real-seeming documentation of fake events easy. (Nudify filters, ) Pushed | ||
Violate privacy - send information to unauthorized parties. | |||
Infer information from patterns. | |||
==Examples (of abuse)== | ==Examples (of abuse)== | ||
===LLM=== | ===LLM=== | ||
Customer service chatbots present misinformation as fact (rug pull). For example, misrepresent prices, misstate policies. Even if the company will say that is a mistake when challenged, the company may profit from people who don't notice, or don't know to challenge it.[Cite burger joint, system capabilities] | Customer service chatbots present misinformation as fact (rug pull). For example, misrepresent prices, misstate policies. Even if the company will say that is a mistake when challenged, the company may profit from people who don't notice, or don't know to challenge it.[Cite burger joint, system capabilities] | ||
Pretend LLM can understand things and has judgement: | |||
*UK government using MS Copilot (using ChatGPT) to decide visa and asylum applications, previous machine learning visa review tool was very racist.https://pivot-to-ai.com/2024/11/11/uk-home-office-speeds-up-visa-and-refugee-processing-with-copilot-ai-reject-a-bot/ | |||
*Nevada using LLM to decide claim appeals.[https://pivot-to-ai.com/2024/09/10/nevada-to-clear-unemployment-claim-backlogs-with-one-weird-trick-pretending-an-ai-has-judgment/] | |||
https://pivot-to-ai.com/2024/09/27/worst-government-use-of-chatgpt-in-a-child-protection-report/ | |||
Search summaries [what is google's name] | Search summaries [what is google's name] | ||
Search vs. AI summaries. Not clearly differentiated. Different levels of reliability.[ToDo: Check TOS] Publisher vs. platform. [Lawyers. Libel cases.] | Search vs. AI summaries. Not clearly differentiated. Different levels of reliability.[ToDo: Check TOS] Publisher vs. platform. [Lawyers. Libel cases.] | ||
Music platform refuses to label AI content. AI content not generally useful. | |||
People suicide, AI psychosis. | |||
===Generative not just LLM:=== | ===Generative not just LLM:=== | ||
Providers of nudify programs typically do not provide adequate user education on the legal and reputational dangers to users. They also do not adequately protect the photographic subjects (enforce that models must be informed and the user must have a valid release from the model). | Providers of nudify programs typically do not provide adequate user education on the legal and reputational dangers to users. They also do not adequately protect the photographic subjects (enforce that models must be informed and the user must have a valid release from the model). | ||
==Further Reading== | ==Further Reading== | ||
a |