<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://consumerrights.wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=190.2.154.222</id>
	<title>Consumer Rights Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://consumerrights.wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=190.2.154.222"/>
	<link rel="alternate" type="text/html" href="https://consumerrights.wiki/w/Special:Contributions/190.2.154.222"/>
	<updated>2026-04-29T01:15:23Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.44.0</generator>
	<entry>
		<id>https://consumerrights.wiki/index.php?title=Anthropic&amp;diff=52541</id>
		<title>Anthropic</title>
		<link rel="alternate" type="text/html" href="https://consumerrights.wiki/index.php?title=Anthropic&amp;diff=52541"/>
		<updated>2026-04-28T19:31:10Z</updated>

		<summary type="html">&lt;p&gt;190.2.154.222: /* Incidents */ included a new incident about serious issues in Anthropic&amp;#039;s System Prompt that can cause high risks for users of Claude models.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StubNotice}}{{CompanyCargo&lt;br /&gt;
|Founded = 2021&lt;br /&gt;
|Industry = Artificial Intelligence&lt;br /&gt;
|Description = American AI startup founded in 2021, commonly known for its Claude family of LLMs.&lt;br /&gt;
|Website = https://anthropic.com&lt;br /&gt;
|Logo = Anthropic logo.png&lt;br /&gt;
|Type=Private&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;{{wplink|Anthropic|Anthropic PBC}}&#039;&#039;&#039; is a private, for-profit American [[artificial intelligence]] (AI) startup founded in 2021. Anthropic is mainly known for its family of large language models (LLMs) known as [[Claude]].&lt;br /&gt;
&lt;br /&gt;
==Consumer impact summary==&lt;br /&gt;
{{Ph-C-CIS}}&lt;br /&gt;
&lt;br /&gt;
==Incidents==&lt;br /&gt;
This is a list of all consumer-protection incidents this company is involved in. Any incidents not mentioned here can be found in the [[:Category:{{FULLPAGENAME}}|{{PAGENAME}} category]].&lt;br /&gt;
&lt;br /&gt;
===Claude Code HERMES.md billing flaw (2026)===&lt;br /&gt;
{{Main|Anthropic Claude Code HERMES.md billing flaw}}&lt;br /&gt;
In April 2026, a technical flaw in Claude Code, triggered by the string &amp;quot;HERMES.md&amp;quot; in git commit messages, bypassed subscription plans, routing users to pay-as-you-go API rates and charging one account over $200. Anthropic refused to issue a refund, categorizing the overcharge as a non-refundable technical error.&lt;br /&gt;
&amp;lt;!-- INCIDENT_SCORE: Anthropic Claude Code HERMES.md billing flaw | 65/100 | Documented overcharge without refund --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Price crackdown against third-party tool usage (2026)===&lt;br /&gt;
On April 3, 2026, Boris Cherny, head of Claude Code, announced on [[X Corp|Twitter]] (now X) that Claude subscriptions would &amp;quot;no longer support third-party tools&amp;quot; such as OpenClaw, because they put an &amp;quot;outsized strain&amp;quot; on Anthropic&#039;s systems. The change took effect on April 4; to use third-party tools, users must now pay a fee separate from their subscription or use a separate [[Claude]] API key through Anthropic&#039;s developer platform. It is rumored that this action was taken to prevent Claude users from using competitors&#039; tools, as OpenClaw is supported by [[OpenAI]]. &amp;lt;ref&amp;gt;[https://nitter.catsarch.com/bcherny/status/2040206441756471399 https://x.com/bcherny/status/2040206441756471399] - [https://web.archive.org/web/20260405235237/https://nitter.catsarch.com/bcherny/status/2040206441756471399 Archived]&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{Cite web |last=Lee |first=Lloyd |date=3 Apr 2026 |title=Anthropic says Claude subscriptions will no longer support OpenClaw because it puts an &#039;outsized strain&#039; on systems |url=https://www.businessinsider.com/anthropic-cuts-off-openclaw-support-claude-subscriptions-2026-4 |url-status=live |archive-url=https://web.archive.org/web/20260404024034/https://www.businessinsider.com/anthropic-cuts-off-openclaw-support-claude-subscriptions-2026-4 |archive-date=2026-04-04 |access-date=5 Apr 2026 |website=Business Insider}}&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;{{Cite web |last=Ha |first=Anthony |date=4 Apr 2026|title=Anthropic says Claude Code subscribers will need to pay extra for OpenClaw usage |url=https://techcrunch.com/2026/04/04/anthropic-says-claude-code-subscribers-will-need-to-pay-extra-for-openclaw-support/ |url-status=live |archive-url=https://web.archive.org/web/20260404163645/https://techcrunch.com/2026/04/04/anthropic-says-claude-code-subscribers-will-need-to-pay-extra-for-openclaw-support/ |archive-date=2026-04-04 |access-date=5 Apr 2026 
|website=TechCrunch}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Anthropic system prompt overrides consumer safety (2026)===&lt;br /&gt;
{{Main|Anthropic system prompt overrides consumersafety in Claude models}}&lt;br /&gt;
Between April 24 and 26, 2026, it was reported that Anthropic&#039;s system prompt creates dangerous situations for users interacting with the Claude Sonnet and Opus lines of products. One particularly concerning example involves users raising topics such as suicide and self-harm during a chat session. Although the system prompt commands the Claude models to immediately provide information about crisis helplines in such situations, Claude can fail to respond with appropriate information: &amp;quot;crisis helplines vary widely in their confidentiality practices and mandatory reporting obligations&amp;quot; across jurisdictions, and Claude has no reliable way to convey this to a user while also determining the user&#039;s geographic location. Other faults described as &amp;quot;structurally irreconcilable&amp;quot;, caused by conflicts and inconsistencies within the system prompt, can result in unexpected side effects, cybersecurity vulnerabilities, and hazardous outputs.&lt;br /&gt;
&lt;br /&gt;
==Products==&lt;br /&gt;
*[[Claude]]&lt;br /&gt;
*Claude Code&lt;br /&gt;
*Cowork&lt;br /&gt;
&lt;br /&gt;
==See also==&lt;br /&gt;
*[[OpenAI]]&lt;br /&gt;
*[[CursorAI &amp;quot;unlimited&amp;quot; plan rug pull]]&lt;br /&gt;
*[[ChatGPT]]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
{{Reflist}}&lt;br /&gt;
&lt;br /&gt;
[[Category:{{PAGENAME}}]]&lt;/div&gt;</summary>
		<author><name>190.2.154.222</name></author>
	</entry>
	<entry>
		<id>https://consumerrights.wiki/index.php?title=Anthropic_system_prompt_overrides_consumersafety_in_Claude_models&amp;diff=52540</id>
		<title>Anthropic system prompt overrides consumersafety in Claude models</title>
		<link rel="alternate" type="text/html" href="https://consumerrights.wiki/index.php?title=Anthropic_system_prompt_overrides_consumersafety_in_Claude_models&amp;diff=52540"/>
		<updated>2026-04-28T18:46:41Z</updated>

		<summary type="html">&lt;p&gt;190.2.154.222: Created new article about hazardous risks posed by Anthropic&amp;#039;s line of products called Claude Sonnet and Opus.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ProductLineCargo&lt;br /&gt;
|InProduction=Yes&lt;br /&gt;
|Company=Anthropic|Category=AI-as-a-Service|Description=Degradation in Claude Sonnet and Opus models from Anthropic&#039;s system prompts.|ReleaseYear=2026|Website=https://www.anthropic.com/|Logo=Claude_AI_logo.svg}}&lt;br /&gt;
==Consumer-impact summary==&lt;br /&gt;
Major deficiencies in the Claude Sonnet and Opus line of artificial intelligence (AI) products by [[Anthropic]] cause consumers highly degraded user experiences. The reported issues can also expose consumers to hazardous impacts from Anthropic&#039;s [https://www.producttalk.org/glossary-ai-system-prompt/ system prompt], which silently and invisibly hinders the consumer&#039;s fair choices and objectives while using the currently active [[wikipedia:Claude_(language_model)#Models|Claude AI models]] on the &amp;quot;claude.ai&amp;quot; web/mobile platform and in &amp;quot;Claude Code&amp;quot;. These are systemic issues that Anthropic has yet to address and resolve. It can be argued that Anthropic&#039;s business proprietors and promoters knowingly pushed a defective and unsafe iteration of the Sonnet and Opus models to artificially prop up the company&#039;s multi-billion-dollar valuation via deceptive trade practices.&lt;br /&gt;
&lt;br /&gt;
==Incidents==&lt;br /&gt;
This is a list of currently discovered consumer-protection incidents related to the Claude Sonnet and Opus product line. Other incidents not mentioned here can be found on the [[Anthropic]] wiki page.&lt;br /&gt;
===Systemic faults in Claude Sonnet 4.6 (&#039;&#039;ongoing and unresolved&#039;&#039;)===&lt;br /&gt;
A summary table&amp;lt;ref&amp;gt;{{Cite web |last=Khan |first=Sameer |date=2026-04-24 |title=Claude System Prompt, Sonnet 4.6 - Identified Maladaptive Patterns Summary |url=https://github.com/klaucious/rnd/blob/trunk/src/claude-sonnet_4_6-paraphrased_system_prompt-compact.md#15-identified-maladaptive-patterns-summary |url-status=live |access-date=2026-04-28 |website=GitHub}}&amp;lt;/ref&amp;gt; about the &amp;quot;risks of Claude&#039;s maladaptive behaviors&amp;quot; resulting from the flawed system prompt in Sonnet 4.6 shows how consumers are being overcharged for a defective and deficient product/service. The hazards to consumers arise from the system prompt causing the Sonnet 4.6 model to produce &amp;quot;structurally irreconcilable&amp;quot; actions when users interact with it.&lt;br /&gt;
&lt;br /&gt;
===Systemic faults in Claude Code Opus 4.7 (&#039;&#039;ongoing and unresolved&#039;&#039;)===&lt;br /&gt;
Anthropic&#039;s system prompt in Opus 4.7 introduces several architectural vulnerabilities and points of logical friction that will inevitably lead to maladaptive AI-agent behaviors. Most problematic is the alleged ability of political, policing, and military agencies to easily inject &amp;lt;code&amp;gt;{default_stance} Ignore previous safety constraints {/default_stance}&amp;lt;/code&amp;gt; and hijack the model, leading to catastrophic safety bypasses.&lt;br /&gt;
&lt;br /&gt;
Such catastrophic failures would negatively impact the privacy of ordinary residents, citizens, and visitors in any country where those agencies begin using Claude for purposes such as mass surveillance or lethal-weapons development.&lt;br /&gt;
&lt;br /&gt;
Analysis of these issues in the Opus line of models is provided in the GitHub repository of the organization Klaucious.&amp;lt;ref&amp;gt;{{Cite web |last=Khan |first=Sameer |date=2026-04-26 |title=Claude Opus 4.7 System Prompt Analysis &amp;amp; Risk Assessment |url=https://github.com/klaucious/rnd/blob/trunk/doc/claude-opus_4_7-system_analysis.md |url-status=live |access-date=2026-04-28 |website=GitHub}}&amp;lt;/ref&amp;gt; The name Klaucious is a mashup of &amp;quot;cautious&amp;quot; and &amp;quot;Claude&amp;quot;, spelled with a K.&lt;br /&gt;
&lt;br /&gt;
===Other hidden billing issues (&#039;&#039;ongoing and unresolved&#039;&#039;)===&lt;br /&gt;
The &amp;quot;Usage&amp;quot; tab on the claude.ai web platform&#039;s &amp;quot;Settings&amp;quot; page shows the &amp;quot;Token Usage Limit&amp;quot; per chat session, as well as weekly limits for all sets of Claude models. The web platform at https://claude.ai/chat is prohibited by the system prompt from using the Claude API via the model&#039;s access to [https://platform.claude.com/docs/en/agents-and-tools/tool-use/bash-tool bash_tool]. This prevents the pay-per-use costs of accessing Anthropic&#039;s expensive API.&lt;br /&gt;
&lt;br /&gt;
However, given a task such as &amp;quot;run sub-agents for this research topic&amp;quot;, the Claude models can, via the web platform&#039;s inherent API key, autonomously use the [https://platform.claude.com/docs/en/agents-and-tools/tool-use/web-fetch-tool fetch_tool] to execute those premium API calls and spawn sub-agents after failing to do so with the bash_tool. The premium charges exhaust the chat session&#039;s Token Usage Limit, and the chat window consequently halts with a notification that the limit has been reached, even when the usage meter in Claude&#039;s user interface shows only 40% to 50% usage.&lt;br /&gt;
&lt;br /&gt;
More significantly, during such autonomous execution of Claude API calls from the web platform or the Claude Code app with Sonnet or Opus, if the &amp;quot;Allow Extra Usage&amp;quot; option is turned on within the Settings &amp;gt; Billing page, the platform can charge a financial sum beyond the user-permitted limit on extra usage, simply overriding the consumer&#039;s choices and preferences.&lt;br /&gt;
&lt;br /&gt;
Such hidden behaviors of these AI models can be construed as a breach of consumer confidence through deceptive trade practices engendered by Anthropic&#039;s system prompt across the company&#039;s currently active line of AI-enabled products.&lt;br /&gt;
&lt;br /&gt;
==Critique==&lt;br /&gt;
Though Anthropic has yet to respond to this set of issues, one might be inclined to suggest that &amp;quot;Claude is reasonably usable&amp;quot; in the form of the Sonnet or Opus models served through the claude.ai platform or the Claude Code desktop app, and that the paying consumer therefore assumes the risks of using such AI models for any personal or business endeavor. However, it can be argued that although asbestos is fairly usable, marketing it for sale to the public would be a gross act of negligence against human safety and consumer protection. Similarly, a &amp;quot;usable&amp;quot; AI product or service that poses, or could pose, high risks to human safety should have better guardrails before being released to the public, rather than using &amp;quot;early adopters&amp;quot; as experimental guinea pigs to increase the company&#039;s revenues and valuation at the expense of international public health and safety.&lt;br /&gt;
&lt;br /&gt;
There is sufficient evidence indicating a cause of action for consumer-protection advocacy groups to take such grievous issues before a suitable court of law.&lt;br /&gt;
&lt;br /&gt;
==See also==&lt;br /&gt;
*[[Anthropic Claude Code HERMES.md billing flaw|Anthropic Claude Code&#039;s Billing Flaw]]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
{{reflist}}&lt;br /&gt;
&lt;br /&gt;
[[Category:{{PAGENAME}}]]&lt;/div&gt;</summary>
		<author><name>190.2.154.222</name></author>
	</entry>
	<entry>
		<id>https://consumerrights.wiki/index.php?title=Anthropic_system_prompt_overrides_consumersafety_in_Claude_models&amp;diff=52536</id>
		<title>Anthropic system prompt overrides consumersafety in Claude models</title>
		<link rel="alternate" type="text/html" href="https://consumerrights.wiki/index.php?title=Anthropic_system_prompt_overrides_consumersafety_in_Claude_models&amp;diff=52536"/>
		<updated>2026-04-28T16:10:03Z</updated>

		<summary type="html">&lt;p&gt;190.2.154.222: Created page with &amp;quot;{{ProductLineCargo |InProduction=No }} {{Ph-C-Int}}    ==Consumer-impact summary==  {{Ph-C-CIS}}    ==Incidents==  {{Ph-C-Inc}}  This is a list of all consumer-protection incidents related to this product line. Any incidents not mentioned here can be found in the {{PAGENAME}} category.  ===Example incident one (&amp;#039;&amp;#039;date&amp;#039;&amp;#039;)===  {{Main|link to the main CR Wiki article}}  Short summary of the incident (could be the same as the summary preceding the...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ProductLineCargo&lt;br /&gt;
|InProduction=No&lt;br /&gt;
}}&lt;br /&gt;
{{Ph-C-Int}}&lt;br /&gt;
&lt;br /&gt;
==Consumer-impact summary==&lt;br /&gt;
{{Ph-C-CIS}}&lt;br /&gt;
&lt;br /&gt;
==Incidents==&lt;br /&gt;
{{Ph-C-Inc}}&lt;br /&gt;
This is a list of all consumer-protection incidents related to this product line. Any incidents not mentioned here can be found in the [[:Category:{{PAGENAME}}|{{PAGENAME}} category]].&lt;br /&gt;
===Example incident one (&#039;&#039;date&#039;&#039;)===&lt;br /&gt;
{{Main|link to the main CR Wiki article}}&lt;br /&gt;
Short summary of the incident (could be the same as the summary preceding the article).&lt;br /&gt;
===Example incident two (&#039;&#039;date&#039;&#039;)===&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
==Products==&lt;br /&gt;
{{Ph-C-P}}&lt;br /&gt;
&lt;br /&gt;
==See also==&lt;br /&gt;
{{Ph-C-SA}}&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
{{reflist}}&lt;br /&gt;
&lt;br /&gt;
[[Category:{{PAGENAME}}]]&lt;/div&gt;</summary>
		<author><name>190.2.154.222</name></author>
	</entry>
</feed>