While it is good practice for a bot to respect robots.txt, there is no requirement to do so, and no penalty for ignoring a website's wishes. It is likewise standard practice, though in no way enforced, for bots to use a [[wikipedia:User-Agent header|User-Agent header]] that uniquely identifies them. This allows a website operator to observe a bot's traffic patterns and potentially block the bot outright if its scraping is not desirable. The header also typically contains a URL or email address that can be used to contact the bot's operator if anomalies are observed in its traffic.
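A well-behaved bot can be sketched with Python's standard <code>urllib.robotparser</code> module. The bot name <code>ExampleBot</code>, the contact URL, and the sample robots.txt policy below are all hypothetical; a real bot would fetch the site's actual <code>/robots.txt</code> rather than parse an inline policy.

```python
from urllib import robotparser

# A descriptive User-Agent with a contact URL (hypothetical values).
USER_AGENT = "ExampleBot/1.0 (+https://example.org/bot-info)"

rp = robotparser.RobotFileParser()
# A real bot would call rp.set_url(".../robots.txt") and rp.read();
# here we parse a sample policy directly for illustration.
rp.parse([
    "User-agent: ExampleBot",
    "Disallow: /private/",
])

# The bot checks each URL against the policy before requesting it.
print(rp.can_fetch(USER_AGENT, "https://example.org/private/page"))  # False
print(rp.can_fetch(USER_AGENT, "https://example.org/public/page"))   # True
```

Nothing forces a bot to perform this check - the parser only reports what the site asked for, and an unethical crawler simply never consults it.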


Unethical [[Artificial_intelligence|AI]] scraper bots do not follow robots.txt - in fact, they may not even request the file at all. Instead, they start from an entry point such as the root home page (<code>/</code>) and work their way through an exponentially growing list of links as they find them, with little to no delay between requests. These bots send false User-Agent header strings corresponding to real web browsers on desktop or mobile operating systems - blocking them would also block legitimate users, or at least legitimate users on VPNs.
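The traversal described above is essentially a breadth-first walk of the site's link graph, queueing every newly discovered link. A minimal sketch over a toy, hypothetical link graph (no network access, no delays - mirroring the aggressive behavior described):

```python
from collections import deque

# Toy link graph standing in for a website's pages (hypothetical).
LINKS = {
    "/": ["/a", "/b"],
    "/a": ["/a1", "/a2"],
    "/b": ["/b1"],
    "/a1": [], "/a2": [], "/b1": [],
}

def crawl_order(entry="/"):
    """Breadth-first traversal from the entry point: every page visited
    adds its outgoing links to the frontier, so the queue keeps growing
    as long as new links are found."""
    seen, queue, order = {entry}, deque([entry]), []
    while queue:
        page = queue.popleft()
        order.append(page)          # a real crawler would fetch here
        for link in LINKS.get(page, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return order

print(crawl_order())  # ['/', '/a', '/b', '/a1', '/a2', '/b1']
```

Every visited page both costs the server a request and feeds the frontier with more targets, which is why such crawls snowball on link-dense sites.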


Some AI services opt to use separate User-Agent strings, potentially also ignoring robots.txt, when a request is made through user command rather than as part of model training. For example, ChatGPT identifies itself as <code>ChatGPT-User</code> rather than its standard <code>OpenAI</code> when it uses the "search the web" command - even if searching the web was an automatic decision. In a less favorable example, Perplexity AI in this same situation falsely identifies as a standard Chrome web browser running on Windows. AI companies defend this under the belief that they are not a "spider", but rather a "user agent" (like a web browser), when called upon by a user's request.<ref name="perplexity-aws" />
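The practical consequence for an operator can be sketched as a simple User-Agent check. Only the <code>ChatGPT-User</code> token below comes from the text above; the helper name and the spoofed browser string are illustrative assumptions.

```python
# Agent tokens that openly declare an AI service; only "ChatGPT-User"
# is named above - extend this tuple as needed (hypothetical helper).
DECLARED_AI_AGENTS = ("ChatGPT-User",)

def declared_ai_agent(user_agent: str) -> bool:
    """Return True if the User-Agent string openly declares an AI agent.
    A bot that spoofs a browser string passes this check unnoticed."""
    return any(token in user_agent for token in DECLARED_AI_AGENTS)

# An honestly labeled request is easy to identify and rate-limit:
print(declared_ai_agent("Mozilla/5.0 ChatGPT-User/1.0"))  # True

# A spoofed Chrome-on-Windows string is indistinguishable from a real user:
print(declared_ai_agent(
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"))  # False
```

This is why the spoofing described above matters: filtering on declared tokens only catches services that choose to identify themselves.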


On 17 March 2025, the Git source code host SourceHut announced that the service was being disrupted by large language model crawlers. Mitigations deployed to reduce disruption involved requiring login for some areas of the service, and blocking IP ranges of cloud providers, affecting legitimate use of the website by its users.<ref>{{Cite web |date=17 Mar 2025 |title=LLM crawlers continue to DDoS SourceHut |url=https://status.sr.ht/issues/2025-03-17-git.sr.ht-llms/ |website=sr.ht status |url-status=live |archive-url=http://web.archive.org/web/20251220125852/https://status.sr.ht/issues/2025-03-17-git.sr.ht-llms/ |archive-date=20 Dec 2025}}</ref> In response to the event, SourceHut founder Drew DeVault wrote a blog post entitled "[https://drewdevault.com/2025/03/17/2025-03-17-Stop-externalizing-your-costs-on-me.html Please stop externalizing your costs directly into my face]", discussing his frustrations with having ongoing and ever-adapting attacks that must be addressed in a timely fashion to reduce disruption to legitimate SourceHut users. DeVault estimates that between "20-100%" of his time is now spent addressing such attacks.


==References==