
User:Bythmusters

From Consumer Rights Wiki
Revision as of 20:28, 31 January 2026 by Bythmusters (talk | contribs) (Documented methodology to find pages without templates)

clippy's strongest soldier o7

CRW Backrooms
* Moderators' noticeboard
* Archive everything
* Cargo-complete
* SpecialPages
* Category root
* RecentChanges
* Todo categories
* AncientPages
* Uncategorized categories
* WantedPages
* Uncategorized pages
* FewestRevisions
* Pages w/ most categories
* ShortPages
* Most popular categories
* Statistics

Good lists to monitor and trim down every now and then: Double redirects, broken redirects

It's 2026, who isn't using https-only? Turn http links to https
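The http→https sweep can be done offline on scraped article text — a minimal sketch, assuming every http:// target actually serves https (worth spot-checking before saving):

```python
import re

def to_https(text):
    # naive rewrite: turns every "http://" into "https://"
    # assumes the target site actually serves https -- spot-check first
    return re.sub(r'\bhttp://', 'https://', text)

print(to_https("see http://example.com (https://ok.example is untouched)"))
```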

Search queries are a good way to find articles that have starter text: Pages containing [Incident]

Also template search for pages with: InfoboxCompany, InfoboxProductLine (mostly gone)

Scroll through every page at once by namespace: Special:AllPages

Finding articles without certain templates

As of 1/31/26, I am trying to find articles which are missing any of the four cargo templates. I manually scraped the list of pages transcluding each template (copying the lists from Special:WhatLinksHere) and the list of all pages in the main namespace (excluding redirects) from Special:AllPages into text documents. Using these lists, a basic Python script can count how many lists each page appears in: a page found in only 1 list (AllPages) has no cargo templates, a page found in 2 lists has one cargo template, and a page found in 3-5 lists has several cargo templates.

I am not an advanced programmer, but here is my script:

 # these files all generated 1/30/26 21 UTC
 
 # generate list of filepaths to be read
 filenames = ["allpages.txt", "company.txt", "incident.txt", "productline.txt", "product.txt"]
 pathprefix = "[your-path-here]"
 filepaths = []
 for x in filenames:
     filepaths.append(pathprefix + x)
 # print(filepaths)
 
 # read files line-by-line, sanitize lines, and count in dictionary
 match = " (transclusion) (← links | edit)"
 table = {}  # String to Int: line, count
 for x in filepaths:
     with open(x, "r") as file:  # auto closes file
         content = file.read()
         lines = content.split('\n')
         for line in lines:
             line = line.replace(match, "")
             # print(line)
             if line in table:
                 table[line] += 1
             else:
                 table[line] = 1
 # print(table)
 
 # read unsorted dict and sort into new dict with count as key
 sortedtable = {}  # Int to List[String]: count, lines
 for pair in table.items():
     line = pair[0]
     count = pair[1]
     if count in sortedtable:
         sortedtable[count].append(line)
     else:
         sortedtable[count] = [line]
 
 print("#####Only in AllPages#####")
 for x in sortedtable[1]:
     print(x)

Then the list can be paged through or saved from the shell: python3 table.py | less (or python3 table.py > out.txt to save it).
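The same counting could be done more compactly with collections.Counter — a sketch, not my actual script, using made-up in-memory lists in place of the five text files:

```python
from collections import Counter

# hypothetical in-memory stand-ins for the scraped text files
page_lists = {
    "allpages.txt": ["Acme", "Widget", "Orphan"],
    "company.txt":  ["Acme"],
    "product.txt":  ["Widget"],
}

counts = Counter()
for lines in page_lists.values():
    counts.update(lines)

# pages appearing only in allpages.txt carry no cargo templates
untemplated = [page for page, n in counts.items() if n == 1]
print(untemplated)
```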