Jump to content

User:Drakeula: Difference between revisions

From Consumer Rights Wiki
Drakeula (talk | contribs)
No edit summary
Drakeula (talk | contribs)
No edit summary
Line 18: Line 18:


Programs make creating real-seeming documentation of fake events easy.  (Nudify filters, )  Pushed  
Programs make creating real-seeming documentation of fake events easy.  (Nudify filters, )  Pushed  
Piracy.  Monopoly.  Unlicensed use of content created by others.  A few large providers (Google, OpenAI) compete with and threatens the existence of other providers without whom their product can not operate. 
Data centers.
Labor practices.


==Examples (of abuse)==
==Examples (of abuse)==
Line 25: Line 31:


Providers of nudify programs typically do not provide adequate user education on the legal and reputational dangers to users.  They also do not adequately protect the photographic subjects (enforce that models must be informed and the user must have a valid release from the model).   
Providers of nudify programs typically do not provide adequate user education on the legal and reputational dangers to users.  They also do not adequately protect the photographic subjects (enforce that models must be informed and the user must have a valid release from the model).   
Search summaries [what is google's name] 


==Further Reading==
==Further Reading==

Revision as of 07:30, 23 September 2025

Draft for article on Artificial Intelligence

Machine learning

Bias. Replicates patterns and deficiencies in the training data. May be intentional biases from the trainers, or patterns in the data set that the trainers did not consider or were unaware of. Examples. Image classifiers labeling African-Americans as gorillas. Self-driving car killing person (walking a bicycle was it), because the training set did not include persons with bicycles (what about walkers, canes, unicycles, strollers, ...?). Amazon hiring program was trained on a primarily male workforce, so it discarded resumes that contained the word women (or other markers).


Specifically LLM/Chatbots

Why it is a problem

Unreliable. No way to make them reliable.

Decreased security. Agents especially. If you use a large language model, realize that anything the "agent" can do on your behalf, anybody else can also tell it to do, just by giving it input. (So, if an agent reads your e-mail, anybody sending you an e-mail can tell it what to do. If you have the agent read a web page, or a paper, or evaluate a potential hire...) Companies that use agents may be easier to hack. If you give them your data, it may be more likely to fall into unauthorized hands.

Fraud is a major use-case for generative AI. Easy to generate low-quality output that looks like a particular type of communication with a specified message. Fake reviews. Fake scientific articles.

Deepfakes. Sell counterfeit song recordings (sometimes authorized, and some unauthorized). Fake audio/video from a known/trusted source.

Programs make creating real-seeming documentation of fake events easy. (Nudify filters, ) Pushed

Piracy. Monopoly. Unlicensed use of content created by others. A few large providers (Google, OpenAI) compete with and threatens the existence of other providers without whom their product can not operate.

Data centers.

Labor practices.

Examples (of abuse)

Customer service chatbots present misinformation as fact (rug pull). For example, misrepresent prices, misstate policies. Even if the company will say that is a mistake when challenged, the company may profit from people who don't notice, or don't know to challenge it.[Cite burger joint, system capabilities]

Vibe coding. Vibe means incompetent. If you are doing vibe coding, the AI will not teach you best practices, and what you are doing wrong. An experienced programmer may fix the problems caused by using AI coding (but in studies, this takes more time to do that than to do the job without AI).

Providers of nudify programs typically do not provide adequate user education on the legal and reputational dangers to users. They also do not adequately protect the photographic subjects (enforce that models must be informed and the user must have a valid release from the model).

Search summaries [what is google's name]

Further Reading