<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://consumerrights.wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=146.70.202.155</id>
	<title>Consumer Rights Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://consumerrights.wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=146.70.202.155"/>
	<link rel="alternate" type="text/html" href="https://consumerrights.wiki/w/Special:Contributions/146.70.202.155"/>
	<updated>2026-04-29T01:57:50Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.44.0</generator>
	<entry>
		<id>https://consumerrights.wiki/index.php?title=Claude&amp;diff=47018</id>
		<title>Claude</title>
		<link rel="alternate" type="text/html" href="https://consumerrights.wiki/index.php?title=Claude&amp;diff=47018"/>
		<updated>2026-03-24T23:57:33Z</updated>

		<summary type="html">&lt;p&gt;146.70.202.155: /* Privacy */  I tried to cite the official source for my statement that it&amp;#039;s mandatory to verify phone number, im sorry if I didnt do it right, Im trying to help but I never edited a wiki or webpage before&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StubNotice}}{{ProductCargo&lt;br /&gt;
|Company=Anthropic&lt;br /&gt;
|ProductLine=&lt;br /&gt;
|ReleaseYear=2023&lt;br /&gt;
|InProduction=Yes&lt;br /&gt;
|ArticleType=Product&lt;br /&gt;
|Category=Software, Artificial Intelligence, Generative AI, Large Language Models&lt;br /&gt;
|Website=https://claude.ai/&lt;br /&gt;
|Description=&lt;br /&gt;
|Logo=Claude AI logo.svg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Claude&#039;&#039;&#039; is a generative artificial intelligence, a large language model (LLM) developed and released by [[Anthropic]] with the stated objective of being a safe AI for the public. The Claude family includes Haiku, a faster and cheaper model; Sonnet, a mid-tier model capable of more complex tasks; and Opus, the most advanced model.&lt;br /&gt;
&lt;br /&gt;
==Consumer-impact summary==&lt;br /&gt;
{{Ph-C-CIS}}&lt;br /&gt;
====Business model====&lt;br /&gt;
Free-tier users have limited access to only one version of the LLM. The usage limit is reached after generating some messages, but the platform does not specify how many tokens or credits remain. Some experimental or advanced features of Claude are heavily limited for free users or are simply paywalled. {{Citation needed}}&lt;br /&gt;
&lt;br /&gt;
====Third-Party Usage====&lt;br /&gt;
Anthropic provides two ways of accessing its LLMs: subscriptions and direct API usage. API pricing is simple: you pay for what you use. A subscription can be worth up to 13.5x its price in API usage - a &amp;quot;Max&amp;quot; subscription costs $200 per month but allows users to consume up to $3,000 worth of actual API costs [[https://she-llac.com/claude-limits source]]. However, paying for a subscription locks the user into Anthropic&#039;s own tools - either the web app or the command-line tool Claude Code. Several engineers managed to hijack the behavior of Anthropic&#039;s tools and thereby spend their subscription limits in third-party tools. Anthropic responded by changing its policies and banning accounts suspected of using their subscriptions outside of its first-party applications [[https://x.com/robzolkos/status/2024125323755884919 source]]. Several users on Twitter/X complained about being banned despite never taking part in such activities, notably those who had used multiple subscribed accounts on the same computer. For a more detailed (and opinionated) view of Anthropic&#039;s lack of transparency and unusual business practices, see [https://www.youtube.com/watch?v=M-pkXr-qqII this video].&lt;br /&gt;
&lt;br /&gt;
====DMCA====&lt;br /&gt;
Anthropic&#039;s most popular product, Claude Code, is closed-source, meaning the actual code used to build the application is not public. Additionally, the distributed version of the application is obfuscated. Obfuscation is a common technique for making code harder to read - essentially impossible to follow without reverse-engineering tools. In 2025 Anthropic accidentally published &amp;quot;source maps&amp;quot; for the application, which help map the obfuscated code back to its original form. Some developers posted this information online, to which Anthropic responded with DMCA claims [[https://techcrunch.com/2025/04/25/anthropic-sent-a-takedown-notice-to-a-dev-trying-to-reverse-engineer-its-coding-tool/ source]]. GitHub, a platform for sharing code, keeps track of the DMCA claims it receives and makes them public [[https://github.com/search?q=repo%3Agithub%2Fdmca%20anthropic&amp;amp;type=code relevant claims]].&lt;br /&gt;
&lt;br /&gt;
====Privacy====&lt;br /&gt;
In order to use the LLM, a person must sign up with an e-mail address or log in with a Google account, and must also verify a phone number; this step is mandatory and cannot be skipped.&amp;lt;ref&amp;gt;{{Cite web |author=Claude Help Center (Anthropic) |date=2026-03-24 |title=Verifying your phone number {{!}} Claude Help Center |url=https://support.claude.com/en/articles/8287232-verifying-your-phone-number |url-status=live |access-date=2026-03-24 |website=support.claude.com}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Anthropic may use input and output data from its services to train its AI models. Users can opt out. However, Anthropic may still collect inputs and outputs from conversations that have been flagged for safety review, or content that the user has reported.&lt;br /&gt;
&amp;lt;ref&amp;gt;{{Cite web |date=20 March 2023 |title=Privacy Policy |url=https://www.anthropic.com/legal/privacy |url-status=live |access-date=27 Jan 2026 |website=Anthropic |archive-url=http://web.archive.org/web/20260211053754/https://www.anthropic.com/legal/privacy |archive-date=11 Feb 2026}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Incidents==&lt;br /&gt;
{{Ph-C-Inc}}&lt;br /&gt;
&lt;br /&gt;
==See also==&lt;br /&gt;
*[[OpenAI]]&lt;br /&gt;
*[[Artificial intelligence]]&lt;br /&gt;
*[[ChatGPT]]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
{{reflist}}&lt;br /&gt;
[[Category:Claude]]&lt;/div&gt;</summary>
		<author><name>146.70.202.155</name></author>
	</entry>
	<entry>
		<id>https://consumerrights.wiki/index.php?title=Claude&amp;diff=47017</id>
		<title>Claude</title>
		<link rel="alternate" type="text/html" href="https://consumerrights.wiki/index.php?title=Claude&amp;diff=47017"/>
		<updated>2026-03-24T23:50:20Z</updated>

		<summary type="html">&lt;p&gt;146.70.202.155: /* Privacy */ Claude/Anthropic also asks for a phone number to link your account to your physical identity/phone number, this is mandatory and can&amp;#039;t be skipped sadly&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StubNotice}}{{ProductCargo&lt;br /&gt;
|Company=Anthropic&lt;br /&gt;
|ProductLine=&lt;br /&gt;
|ReleaseYear=2023&lt;br /&gt;
|InProduction=Yes&lt;br /&gt;
|ArticleType=Product&lt;br /&gt;
|Category=Software, Artificial Intelligence, Generative AI, Large Language Models&lt;br /&gt;
|Website=https://claude.ai/&lt;br /&gt;
|Description=&lt;br /&gt;
|Logo=Claude AI logo.svg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Claude&#039;&#039;&#039; is a generative artificial intelligence, a large language model (LLM) developed and released by [[Anthropic]] with the stated objective of being a safe AI for the public. The Claude family includes Haiku, a faster and cheaper model; Sonnet, a mid-tier model capable of more complex tasks; and Opus, the most advanced model.&lt;br /&gt;
&lt;br /&gt;
==Consumer-impact summary==&lt;br /&gt;
{{Ph-C-CIS}}&lt;br /&gt;
====Business model====&lt;br /&gt;
Free-tier users have limited access to only one version of the LLM. The usage limit is reached after generating some messages, but the platform does not specify how many tokens or credits remain. Some experimental or advanced features of Claude are heavily limited for free users or are simply paywalled. {{Citation needed}}&lt;br /&gt;
&lt;br /&gt;
====Third-Party Usage====&lt;br /&gt;
Anthropic provides two ways of accessing its LLMs: subscriptions and direct API usage. API pricing is simple: you pay for what you use. A subscription can be worth up to 13.5x its price in API usage - a &amp;quot;Max&amp;quot; subscription costs $200 per month but allows users to consume up to $3,000 worth of actual API costs [[https://she-llac.com/claude-limits source]]. However, paying for a subscription locks the user into Anthropic&#039;s own tools - either the web app or the command-line tool Claude Code. Several engineers managed to hijack the behavior of Anthropic&#039;s tools and thereby spend their subscription limits in third-party tools. Anthropic responded by changing its policies and banning accounts suspected of using their subscriptions outside of its first-party applications [[https://x.com/robzolkos/status/2024125323755884919 source]]. Several users on Twitter/X complained about being banned despite never taking part in such activities, notably those who had used multiple subscribed accounts on the same computer. For a more detailed (and opinionated) view of Anthropic&#039;s lack of transparency and unusual business practices, see [https://www.youtube.com/watch?v=M-pkXr-qqII this video].&lt;br /&gt;
&lt;br /&gt;
====DMCA====&lt;br /&gt;
Anthropic&#039;s most popular product, Claude Code, is closed-source, meaning the actual code used to build the application is not public. Additionally, the distributed version of the application is obfuscated. Obfuscation is a common technique for making code harder to read - essentially impossible to follow without reverse-engineering tools. In 2025 Anthropic accidentally published &amp;quot;source maps&amp;quot; for the application, which help map the obfuscated code back to its original form. Some developers posted this information online, to which Anthropic responded with DMCA claims [[https://techcrunch.com/2025/04/25/anthropic-sent-a-takedown-notice-to-a-dev-trying-to-reverse-engineer-its-coding-tool/ source]]. GitHub, a platform for sharing code, keeps track of the DMCA claims it receives and makes them public [[https://github.com/search?q=repo%3Agithub%2Fdmca%20anthropic&amp;amp;type=code relevant claims]].&lt;br /&gt;
&lt;br /&gt;
====Privacy====&lt;br /&gt;
In order to use the LLM, a person must sign up with an e-mail address or log in with a Google account, and must also verify a phone number; this step is mandatory and cannot be skipped.{{Citation needed}}&lt;br /&gt;
&lt;br /&gt;
Anthropic may use input and output data from its services to train its AI models. Users can opt out. However, Anthropic may still collect inputs and outputs from conversations that have been flagged for safety review, or content that the user has reported.&lt;br /&gt;
&amp;lt;ref&amp;gt;{{Cite web |date=20 March 2023 |title=Privacy Policy |url=https://www.anthropic.com/legal/privacy |url-status=live |access-date=27 Jan 2026 |website=Anthropic |archive-url=http://web.archive.org/web/20260211053754/https://www.anthropic.com/legal/privacy |archive-date=11 Feb 2026}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Incidents==&lt;br /&gt;
{{Ph-C-Inc}}&lt;br /&gt;
&lt;br /&gt;
==See also==&lt;br /&gt;
*[[OpenAI]]&lt;br /&gt;
*[[Artificial intelligence]]&lt;br /&gt;
*[[ChatGPT]]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
{{reflist}}&lt;br /&gt;
[[Category:Claude]]&lt;/div&gt;</summary>
		<author><name>146.70.202.155</name></author>
	</entry>
</feed>