Talk:Artificial intelligence

Latest revision as of 18:15, 13 September 2025
Scope?
This article is wordy and I'm not sure how it's directly relevant to consumer rights. Scraping the internet and data collection may be seen as unethical, but they're pretty run-of-the-mill at this point. Beanie Bo (talk) 01:56, 29 August 2025 (UTC)
- I agree that the article needs significant revision. It has more detail than needed in some areas (e.g., web scraping) and totally misses other important areas.
- I see AI more as a theme/background article. AI is so pervasive now, and affects people in so many ways, that I think it makes sense to have at least one article on it.
- Things that I think such an article should cover include:
- Data centers - environmental impacts, community impacts, energy demand and subsidy by electricity and water rate payers, and how many of these agreements are made in secret, even in nominally democratic/open governmental systems. In the US, data centers are often located in marginalized communities, where people are not as organized to protect their community. (This is not exclusively an AI thing; it might be worth a separate article about data centers in general, covering crypto mining operations, etc.)
- Inaccuracy and inappropriate use of LLMs. "Hallucinations." People not understanding what an LLM is and assuming it is more capable than it is. LLMs make a poor substitute for human-written product reviews. (Inaccurate; they praise whatever the user wants - even products that don't exist.)
- Control of information - Use of LLM in place of search is decimating independent information sources (taking away advertising revenue, taking away views).
- Intellectual property - piracy in training data (using stolen data), use of output.
- Privacy and security - data poisoning, ease of subverting guardrails, gathering data for training, revealing prompts, law enforcement review of chatbot prompts and outputs, etc.
- Concerns about possible effects on users - AI psychosis, etc.
- Labor concerns - conditions of labelers/piece workers.
- Liability - LLMs are often inaccurate; what happens when the AI harms people (libel, suicide, etc.)?
- I have sources for a bunch of this, will be adding them to the article talk page as time permits. Drakeula (talk) 18:27, 4 September 2025 (UTC)
- You're completely right. My mistake. This article does need significant reworking to maintain relevance, and a lot of the technical details should be simplified to maintain the wiki's voice and tone. But it's pretty relevant overall, so with time, it could fit better. Beanie Bo (talk) 19:02, 4 September 2025 (UTC)
- posted this on the moderator's noticeboard, but posting it again here:
- I'd caution here that I think quite a few of the practices listed probably wouldn't be within scope.
- Certainly the following:
- - Labour concerns
- - Intellectual property
- - Control of information/search blocking
- - Environmental/social impact of data centres
- Feel like they're out-of-scope as they concern relationships not relevant to the wiki, between businesses and other businesses/creatives, businesses and their employees/workers, as well as between businesses and the wider environment. To prevent scope creep, we want to keep the wiki focused on the consumer-rights issues.
- And these ones feel like 'edge cases' for relevancy - I'd appreciate some wider input:
- - Possible effects and harm on users from improper function (I'd argue that in a lot of cases there's not much to be done on this front, but I think if insufficient steps are taken to warn and safeguard users, then they could be mentioned. Certainly things like character.ai and similar do feel very exploitative, but I'm not sure I'd bundle the normal assistants under the same umbrella there)
- - Liability (I'd say this can be relevant, but the emphasis should be placed on situations where people create systems using AI that take decisions that really shouldn't be left to AI, and harm consumers that way. This is always going to be a fuzzy line, and I'd expect extensive discussion over it - it feels analogous to the question of 'at what point does someone getting injured by their own chainsaw go from being manufacturer negligence, to user error?')
- More broadly, I think that 'AI' probably isn't the best title for an article, as it's such a wide field. AI technically includes almost anything done by a computer. If we go by dictionary definitions, the computer opponents in old strategy games would count as 'AI'. LLMs, generative image/video models, and traditional ML stuff like image recognition are all distinct enough, and related to different enough issues, that it feels like they'd be better separated into their own articles rather than bundled. Keith (talk) 16:04, 8 September 2025 (UTC)
- Since AI is so pervasive as a marketing term, I think it appropriate to have an article on it. If nothing else, to give an easy-to-find roadmap of other related articles that cover aspects. For example, I would not expect the layman to know what generative AI or LLMs are - even if much of the content of concern here winds up under such sub-articles.
- I agree that subdivision probably makes sense. However, at this point I don't have a clear idea of what bits to split off. Drakeula (talk) 08:51, 13 September 2025 (UTC)
- I am unclear why those particular items seem out of scope. Perhaps you could give more detail? What scoping rule do they violate? Is it because they are primarily "old" consumer concerns, rather than "new" ones? Drakeula (talk) 09:23, 13 September 2025 (UTC)
- Upon rereading @Keith's post, I think maybe I see a bit of how we see these issues differently. I see all of the issues I mentioned through the lens of how they affect the general public (whether users, or non-users who are affected by AI or its uses). Therefore, I see them all as "consumer" issues. Drakeula (talk) 10:21, 13 September 2025 (UTC)
- Control of information. Access to information is one of the central pillars of right to repair. Since AI radically changes what information people access, I am missing why it is out of scope.
- AI summaries in search lead to: loss of independent journalism (The Media's Pivot to AI Is Not Real and Not Going to Work, https://www.404media.co/the-medias-pivot-to-ai-is-not-real-and-not-going-to-work/); loss of review sites. Reviews provided by AI regurgitate manufacturer specs, give incorrect information about products, and give questionable recommendations (Beware of the Google AI salesman and its cronies, https://housefresh.com/beware-of-the-google-ai-salesman/). I haven't seen sources on this, but I see no reason why sites that post repair information/fora would be exempted from this widespread pattern.
- While not, as far as I know, related to AI, Mr. Rossmann recently noted that his repair business is no longer listed on Google. He said that is a big deal for a repair business of his size. That may make it harder for people to exercise the right to repair. Drakeula (talk) 09:23, 13 September 2025 (UTC)
- These are definitely consumer concerns and therefore, I think, worth mentioning. The cost of AI is largely hidden. Simply acknowledging that these are consumer concerns with AI, and pointing to good places to get started on the issues, would help. In an overview article, if we only list "new" consumer concerns, it may make things needlessly difficult for the reader.
- Data Centers. Electricity rates are projected to go up precipitously because of AI. Pollution and greenhouse gases affect everybody. (Microsoft and Google are missing carbon-reduction goals because of AI.) Tax subsidies are paid for by broad segments of the population.
- In siting an AI data center, the local commissioners and the company typically make a secret deal, where the company gets public subsidy of electricity, taxes, water, etc., and the community gets a few jobs (at several million per job), plus whatever other arrangements. Even though consumers are excluded from the negotiations, that is still a consumer issue: consumers pay the electricity rates, they breathe the air polluted by the on-site gas generators, they have their wells polluted by excessive water draw, they deal with noise pollution, the community has to deal with the electronic waste, etc. Drakeula (talk) 10:23, 13 September 2025 (UTC)
- Labor concerns. People make choices about clothing brands based on how the people making the clothes are treated. Why should it be different with AI? Just because I am not the person getting [bleeped] by the corporation doesn't mean it is only between the business and the employee. Sure, not everybody cares, but I think clippies might. Drakeula (talk) 09:46, 13 September 2025 (UTC)
- Intellectual property. Perhaps I could have picked a more evocative title? For the purposes of AI, every person is a "creative." This concerns use of your e-mails/Facebook posts/tweets/photographs/security camera footage/footage from your Roomba/footage from your car's cameras/etc. for training AI. Ownership of the logs of where you go, what you buy (and where and when). Who you communicate with. Ownership of your medical scans. The output of your medical monitors (CGM, CPAP, pacemaker, smartwatch, etc.), and their use in training/advertising/whatever. Publication of your AI queries, your chat logs. Use of AI responses. Drakeula (talk) 09:58, 13 September 2025 (UTC)
- I suppose "big data" might be a useful adjunct term to use here. Drakeula (talk) 10:03, 13 September 2025 (UTC)
Appeal posted re proposed deletion
I posted an appeal request regarding article deletion. Things I would like to see in this article are listed under Scope. I think it is easier to edit the existing article than to start a new one. If you have an opinion one way or the other, please add on to the appeal discussion on the moderator page (wouldn't want multiple appeals). Thanks. Drakeula (talk) 18:41, 4 September 2025 (UTC)
Sources - potentially useful
General sources:
Pivot to AI. https://pivot-to-ai.com/ By UK journalist David Gerard; daily news items about AI. He previously wrote about cryptocurrency, and he noticed that many of the same people who had been promoting crypto took the same pitches and applied them to AI, so he similarly pivoted to AI.
Ed Zitron's Where's Your Ed At. https://www.wheresyoured.at/ By a journalist and PR professional; reports on the financial aspects of AI.
Drakeula (talk) 18:15, 13 September 2025 (UTC)