Draft for article on Artificial Intelligence
Machine learning
Bias. Replicates patterns and deficiencies in the training data. May be intentional biases from the trainers, or patterns in the data set that the trainers did not consider or were unaware of. Examples. Image classifiers labeling African-Americans as gorillas. Self-driving car killing person (walking a bicycle was it), because the training set did not include persons with bicycles (what about walkers, canes, unicycles, strollers, ...?). Amazon hiring program was trained on a primarily male workforce, so it discarded resumes that contained the word women (or other markers).
Data centers.
Labor practices.
Specifically LLM/Chatbots
Why it is a problem
Unreliable. No way to make them reliable.
Decreased security. Agents especially. If you use a large language model, realize that anything the "agent" can do on your behalf, anybody else can also tell it to do, just by giving it input. (So, if an agent reads your e-mail, anybody sending you an e-mail can tell it what to do. If you have the agent read a web page, or a paper, or evaluate a potential hire...) Companies that use agents may be easier to hack. If you give them your data, it may be more likely to fall into unauthorized hands.
Piracy. Monopoly. Unlicensed use of content created by others. A few large providers (Google, OpenAI) take content from other creators without license, paying or permission, compete with them, and threaten their existence. [These other creators are mostly small entities, without the resources to fight many hundred billion dollar companies. Every-day consumers lose out because when the journalists who supply Google with information, the product reviewers, the youtubers, are driven out of business, then the LLM summaries will be even further disconnected from reality, having no human content to feed on.]
Fraud is a major use-case for generative AI. Easy to generate low-quality output that looks like a particular type of communication with a specified message. Fake reviews. Fake scientific articles.
Deepfakes. Sell counterfeit song recordings (sometimes authorized, and some unauthorized). Fake audio/video from a known/trusted source.
Programs make creating real-seeming documentation of fake events easy. (Nudify filters, ) Pushed
Examples (of abuse)
Customer service chatbots present misinformation as fact (rug pull). For example, misrepresent prices, misstate policies. Even if the company will say that is a mistake when challenged, the company may profit from people who don't notice, or don't know to challenge it.[Cite burger joint, system capabilities]
Vibe coding. Vibe means incompetent. If you are doing vibe coding, the AI will not teach you best practices, and what you are doing wrong. An experienced programmer may fix the problems caused by using AI coding (but in studies, this takes more time to do that than to do the job without AI).
Search summaries [what is google's name]
Generative not just LLM:
Providers of nudify programs typically do not provide adequate user education on the legal and reputational dangers to users. They also do not adequately protect the photographic subjects (enforce that models must be informed and the user must have a valid release from the model).