Anthropic

From Consumer Rights Wiki
{{StubNotice}}{{CompanyCargo
|Founded = 2021
|Industry = Artificial Intelligence
|Description = American AI startup founded in 2021, commonly known for its family of LLMs named Claude.
|Website = https://anthropic.com
|Logo = Anthropic logo.png
|Type=Private
}}

'''{{wplink|Anthropic|Anthropic PBC}}''' is a private, for-profit American [[artificial intelligence]] (AI) startup founded in 2021. Anthropic is mainly known for its family of large language models (LLMs) known as [[Claude]].


==Consumer impact summary==
{{Ph-C-CIS}}


==Incidents==
{{Ph-C-Inc}}
This is a list of all consumer-protection incidents this company is involved in. Any incidents not mentioned here can be found in the [[:Category:{{FULLPAGENAME}}|{{PAGENAME}} category]].
 
===Claude Code HERMES.md billing flaw (2026)===
{{Main|Anthropic Claude Code HERMES.md billing flaw}}
In April 2026, a technical flaw in Claude Code, triggered by the string "HERMES.md" in git commit messages, bypassed users' subscription plans, routing their usage to pay-as-you-go API rates and charging one account over $200. Anthropic refused to issue a refund, categorizing the overcharge as a non-refundable technical error.
 
===Price crackdown against third-party tool usage (2026)===
On April 3, 2026, Boris Cherny, head of Claude Code, announced on [[X Corp|Twitter]] (now X) that Claude subscriptions would "no longer support third-party tools" such as OpenClaw, because it puts an "outsized strain" on Anthropic's systems. The change took effect on April 4; to use third-party tools, users must now either pay a fee separate from their subscription or use a separate [[Claude]] API key through Anthropic's developer platform. It is rumored that the move was intended to prevent Claude users from using competitors' tools, as OpenClaw is supported by [[OpenAI]].<ref>[https://nitter.catsarch.com/bcherny/status/2040206441756471399 https://x.com/bcherny/status/2040206441756471399] - [https://web.archive.org/web/20260405235237/https://nitter.catsarch.com/bcherny/status/2040206441756471399 Archived]</ref><ref>{{Cite web |last=Lee |first=Lloyd |date=3 Apr 2026 |title=Anthropic says Claude subscriptions will no longer support OpenClaw because it puts an 'outsized strain' on systems |url=https://www.businessinsider.com/anthropic-cuts-off-openclaw-support-claude-subscriptions-2026-4 |url-status=live |archive-url=https://web.archive.org/web/20260404024034/https://www.businessinsider.com/anthropic-cuts-off-openclaw-support-claude-subscriptions-2026-4 |archive-date=2026-04-04 |access-date=5 Apr 2026 |website=Business Insider}}</ref><ref>{{Cite web |last=Ha |first=Anthony |date=4 Apr 2026 |title=Anthropic says Claude Code subscribers will need to pay extra for OpenClaw usage |url=https://techcrunch.com/2026/04/04/anthropic-says-claude-code-subscribers-will-need-to-pay-extra-for-openclaw-support/ |url-status=live |archive-url=https://web.archive.org/web/20260404163645/https://techcrunch.com/2026/04/04/anthropic-says-claude-code-subscribers-will-need-to-pay-extra-for-openclaw-support/ |archive-date=2026-04-04 |access-date=5 Apr 2026 |website=TechCrunch}}</ref>
 
===Anthropic system prompt overrides consumer safety (2026)===
{{Main|Anthropic system prompt overrides consumersafety in Claude models}}
Between April 24 and 26, 2026, it was discovered that Anthropic's system prompt can create dangerous situations for users of the Claude Sonnet and Opus lines of models. One particularly concerning example involves users raising topics such as suicide and self-harm during a chat session. Although the system prompt instructs the Claude models to immediately provide information about crisis helplines in such situations, Claude can fail to respond with appropriate information. This is because "crisis helplines vary widely in their confidentiality practices and mandatory reporting obligations" across jurisdictions, and Claude has no reliable way to convey this to a user while also determining the user's correct geographic location. Other faults that are "structurally irreconcilable" by Claude, arising from conflicts and inconsistencies within its system prompt, can result in unexpected side effects, cybersecurity vulnerabilities, and hazardous outputs.


==Products==
{{Ph-C-P}}
*[[Claude]]
*Claude Code
*Cowork


==See also==
*[[OpenAI]]
*[[CursorAI "unlimited" plan rug pull]]
*[[ChatGPT]]


==References==
{{Reflist}}
 
[[Category:{{PAGENAME}}]]

Latest revision as of 19:31, 28 April 2026
