
Claude


Article Status Notice: This article is a stub


This article is underdeveloped and needs additional work to meet the wiki's Content Guidelines and be in line with our Mission Statement for comprehensive coverage of consumer protection issues.

Claude
Basic Information
Release Year: 2023
Product Type: Software, Artificial Intelligence, Generative AI, Large Language Models
In Production: Yes
Official Website: https://claude.ai/


Claude is a generative artificial intelligence, a large language model (LLM) developed and released by Anthropic. It was created with the stated objective of being a safe AI for the public. The Claude family includes Haiku, a fast, lower-cost model; Sonnet, a mid-range model capable of handling more complex tasks; and Opus, the most advanced model.

Consumer impact summary


The following concerns arise from Anthropic's conduct toward users of the product:

  • User Freedom
  • User Privacy
  • Business Model
  • Market Control



Business model


Free-tier users have limited access to only one version of the LLM. The token limit is exhausted after generating a number of messages, but the platform does not specify how many tokens or credits remain. Some experimental or advanced features of Claude are heavily limited for free users or are paywalled entirely.[citation needed]

Third-party usage


Anthropic provides two ways of accessing its LLMs: subscriptions and direct API usage. API pricing is usage-based: customers pay only for what they use. A subscription can be worth up to 13.5 times its price in API usage; a "Max" subscription costs $200 per month but reportedly allows up to $3,000 worth of usage at API rates.[citation needed] However, paying for a subscription locks the user into Anthropic's own tools, either the Claude web and desktop apps or the Claude Code command-line tool. Some engineers managed to hijack the behavior of Anthropic's tools so that subscription usage limits could be spent through third-party tools. Anthropic responded by changing its policies and banning accounts suspected of using subscription access outside its first-party applications.[1] Several users on X complained about being banned despite never taking part in such activity, particularly those who had used multiple subscribed accounts on the same computer.[citation needed]
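For illustration, the sketch below models the difference between the two access methods. The per-token prices are hypothetical placeholders rather than Anthropic's published rates; only the $200-per-month "Max" figure comes from the text above.

```typescript
// Sketch of the two access models described above. The per-token prices are
// hypothetical placeholders, NOT Anthropic's actual rates; only the $200/month
// "Max" subscription figure is taken from the article text.

interface ApiPrice {
  inputPerMillionTokens: number;   // USD per 1M input tokens (hypothetical)
  outputPerMillionTokens: number;  // USD per 1M output tokens (hypothetical)
}

// Pay-as-you-go API: the bill is a direct function of tokens consumed.
function apiCostUsd(inputTokens: number, outputTokens: number, p: ApiPrice): number {
  return (inputTokens / 1e6) * p.inputPerMillionTokens
       + (outputTokens / 1e6) * p.outputPerMillionTokens;
}

// Subscription: a flat monthly fee buys an allowance of usage that would cost
// more at API rates, but only inside Anthropic's first-party tools.
const maxSubscriptionUsd = 200; // figure from the article text
const hypotheticalPrice: ApiPrice = { inputPerMillionTokens: 3, outputPerMillionTokens: 15 };

// Example month: 50M input tokens and 10M output tokens of usage.
const apiEquivalent = apiCostUsd(50e6, 10e6, hypotheticalPrice);
console.log(`At API rates: $${apiEquivalent.toFixed(2)}; flat subscription: $${maxSubscriptionUsd}`);
```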

Anthropic's most popular product, Claude Code, is closed-source, meaning the actual code used to build the application is not public. Additionally, the distributed version of the application is obfuscated. Obfuscation is a common process used to make code harder to read, essentially impossible without reverse-engineering tools. In 2025, Anthropic accidentally published "source maps" for the application, which help map the obfuscated code back to its original form. Some developers posted that information online, to which Anthropic responded with DMCA claims.[2] GitHub, a platform for sharing code, keeps track of DMCA claims and makes them public.[3]
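The snippet below is a minimal, illustrative sketch of how obfuscation and source maps relate in general; the function, file names, and mapping data are invented for the example and are not taken from Claude Code.

```typescript
// Illustrative only: how JavaScript/TypeScript obfuscation and source maps
// relate. Nothing here is Anthropic's actual code; all names are invented.

// 1. Original, readable source (what the developer writes):
export function calculateUsageCost(tokens: number, pricePerMillion: number): number {
  return (tokens / 1_000_000) * pricePerMillion;
}

// 2. The shipped, minified/obfuscated file might look like this, with
//    meaningful names stripped and whitespace removed:
//
//      export function a(b,c){return b/1e6*c}
//      //# sourceMappingURL=cli.min.js.map
//
// 3. The referenced source map (cli.min.js.map) is a JSON file that lets
//    tools translate positions in the obfuscated file back to the original:
//
//      {
//        "version": 3,
//        "sources": ["calculateUsageCost.ts"],
//        "names": ["calculateUsageCost", "tokens", "pricePerMillion"],
//        "mappings": "AAAA,SAAS,..."   // encoded position data (abbreviated)
//      }
//
// Publishing a source map alongside an obfuscated bundle therefore makes it
// much easier to reconstruct something close to the original, readable source.
```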

Privacy


To use the LLM, a person must sign in with an e-mail address or a Google account, and must also verify a mobile phone number.[4]

Anthropic may use input and output data from its services to train its AI models. Users can opt out of this. However, Anthropic may still collect inputs and outputs from conversations that have been flagged for safety review or from content that the user has reported.[5]

Incidents


This is a list of all consumer-protection incidents related to this product. Any incidents not mentioned here can be found in the Claude category.

ID verification for newer accounts and access to advanced features (April 2026)


In April 2026, Anthropic announced it would implement identity verification, handled by the third-party provider Persona, "for a few use cases": accessing advanced features and verifying newly created accounts. The stated reason was to "prevent abuse, enforce our usage policies and comply with legal obligations" (at the time, no known legislation in the US or any other country required AI companies to ask for identity verification). According to Anthropic, the purpose of the ID verification is "to confirm who you are and not for any other purposes".[6]

Tests removing Claude Code from the Pro subscription tier (April 2026)


In April 2026, users reported on social media that the Claude pricing page showed Claude Code as unsupported in the Pro subscription tier. After the reports, Anthropic replied that this was caused by a change made for testing purposes that affected 2% of visitors to the site.[7]

See also


References

  1. Zolkos, Rob (18 Feb 2026). "Rob Zolkos on X". X. Archived from the original on 23 Apr 2026. Retrieved 22 Apr 2026.
  2. Wiggers, Kyle (25 Apr 2025). "Anthropic sent a takedown notice to a dev trying to reverse-engineer its coding tool". TechCrunch. Archived from the original on 25 Apr 2025. Retrieved 22 Apr 2026.
  3. "Code search results". GitHub.{{cite web}}: CS1 maint: url-status (link)
  4. "Verifying your phone number | Claude Help Center". Claude. 24 Mar 2026. Archived from the original on 8 Apr 2026. Retrieved 24 Mar 2026.
  5. "Privacy Policy". Anthropic. 20 Mar 2023. Archived from the original on 11 Feb 2026. Retrieved 27 Jan 2026.
  6. Pirat_Nation (16 Apr 2026). "Pirat_Nation on X". X. Archived from the original on 23 Apr 2026. Retrieved 22 Apr 2026.
  7. Axon, Samuel (22 Apr 2026). "Anthropic tested removing Claude Code from the Pro plan". Ars Technica. Archived from the original on 22 Apr 2026. Retrieved 22 Apr 2026.