*Vibe coding.  AI coding assistants are claimed to let anyone program ("vibe coding"; in this context, "vibe" means incompetent).  However, the AI will not teach you best practices or tell you what you are doing wrong.[Cite vibe code lose data]  The results of vibe coding tend to be difficult to modify or maintain (a short illustrative sketch follows this list).
*Delusions of competence.  One may hear news about "AI" analyzing medical tests as well as doctors, and not realize that this is very different from asking a chatbot.  People come away with the delusion that chatbots are competent.
**There are purpose-built expert systems that can diagnose particular conditions on particular scans, some with accuracy comparable to an expert's.  These systems still require expert knowledge to operate within their limitations and to interpret their results.  They are not generally available to the public.
**When asked to do a task, like interpreting medical results, a chatbot may produce a string of confident-sounding words that look like what an expert might produce.  However, it knows nothing, it intends nothing, it means nothing, and it can take no responsibility.[Cite reducing disclaimers]
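The following is a hypothetical sketch, not taken from any real assistant's output, of the kind of script a non-programmer might accept without review; the function name, file name, column index, and 20% markup are invented for illustration.  The comments flag why such code is hard to maintain and how it can lose data.

<syntaxhighlight lang="python">
import csv

def apply_markup(path):
    rows = []
    with open(path) as f:
        for row in csv.reader(f):
            # Magic column index and magic constant: nothing records what
            # column 2 holds or why the markup is 20%, so later changes are guesswork.
            rows.append([row[0], float(row[2]) * 1.2])
    # Overwrites the input file in place with no backup: a wrong column pick or a
    # bad parse above silently destroys the original data once this write runs.
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)

apply_markup("prices.csv")
</syntaxhighlight>

A maintainable version would name the columns, validate the input, and write to a new file, but an assistant will not insist on any of that unless the user already knows to ask.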


*Lack of grounding in reality and safety. [suicides]  "AI psychosis".


'''Fraud''' is a major use case for generative AI.  It is easy to generate low-quality output that looks like a particular type of communication and carries a specified message: fake reviews, fake scientific articles.


'''Deepfakes.'''  Counterfeit song recordings are sold (some authorized, some unauthorized).  Fake audio or video is attributed to a known or trusted source.