}}[https://de.wikipedia.org/wiki/Google_Gemini Google Gemini] is a cloud-based Large Language Model (AI text generator) which includes image generation features, developed by American search and advertising giant [[Google]]/[[Alphabet]].
==Consumer impact summary==
{{Placeholder box|Overview of concerns that arise from the conduct towards users of the product (if applicable):
* User Freedom
* User Privacy
* Business Model
* Market Control}}
 
*Accusations of exporting an American moral code (e.g. the sexualization of nudity), woke sensibilities, and cancel culture<!-- This is a larger problem that affects pretty much all US big tech companies and their content moderation policies. As such, I'm not sure this article is the right place to open this can of worms. Also, we'll need proper citations for this.  -->


==Incidents==
This is a list of all consumer protection incidents related to this product. Any incidents not mentioned here can be found in the [[:Category:{{PAGENAME}}|{{PAGENAME}} category]].
===Over-eager political correctness filter (''February 2024'')===
Several news outlets reported that, when prompted to generate images of German WWII soldiers, Gemini produced not only images of male soldiers of central-European appearance, but also soldiers of various ethnicities and genders that were historically inaccurate, such as Black male and Asian female soldiers.
Gemini also refused to generate pictures of white couples when instructed to do so, but had no issues creating images of Black or Asian couples.
It is suspected that this was due to misguided efforts by Google to compensate for ethnic biases in the training data set, to cater to a "woke" zeitgeist, and to support cancel culture by using a so-called ''initial prompt'': a hidden fixed text, prepended to the user's prompt, that instructs the neural network to always generate pictures of all genders and ethnicities.
Google reacted by displaying a message stating that it was working on improving the depiction of people and would notify users when the feature returned.<ref>{{Cite web |last=Grant |first=Nico |date=2024-02-22 |title=Google Chatbot’s A.I. Images Put People of Color in Nazi-Era Uniforms |url=https://www.nytimes.com/2024/02/22/technology/google-gemini-german-uniforms.html |archive-url=https://archive.ph/Pkbez |archive-date=2025-07-02 |access-date=2025-07-02 |website=The New York Times}}</ref><ref>{{Cite web |last=Kleinman |first=Zoe |date=2024-02-28 |title=Why Google's 'woke' AI problem won't be an easy fix |url=https://www.bbc.com/news/technology-68412620 |url-status=live |access-date=2025-07-02 |website=BBC}}</ref>
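
The prompt-prepending mechanism described above can be sketched as follows. This is a hypothetical illustration only: the wording of the hidden text and the function names are assumptions for demonstration, and Google's actual initial prompt has not been published.

```python
# Hypothetical sketch of a hidden "initial prompt" being prepended to a
# user's request before it reaches the image-generation model.
# HIDDEN_INITIAL_PROMPT is an invented example, not Google's actual text.

HIDDEN_INITIAL_PROMPT = (
    "When generating images of people, depict a diverse range "
    "of genders and ethnicities. "
)

def build_model_input(user_prompt: str) -> str:
    """Prepend the fixed hidden text to whatever the user typed."""
    return HIDDEN_INITIAL_PROMPT + user_prompt

# The model never sees the user's prompt alone; the hidden instruction is
# applied regardless of context, which is how historically specific requests
# can end up with anachronistic results.
full_prompt = build_model_input("Draw a German WWII soldier.")
print(full_prompt)
```

Because the hidden instruction is applied unconditionally, it overrides context (such as a historical setting) that would otherwise constrain the output.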