User:Bythmusters: Difference between revisions

From Consumer Rights Wiki
Someone should tell Louis that this page is hyperlinked on the 1 year anniversary page, 4th paragraph.
clippy's strongest soldier o7
{| class="wikitable"
|+CRW Backrooms
! colspan="2" |[[Consumer Rights Wiki talk:Moderators' noticeboard|'''<big>Moderators' noticeboard</big>''']]
|-
![[Projects:Archive everything|Archive everything]]
![[Projects:Cargo-complete|Cargo-complete]]
|-
!'''[[Special:SpecialPages|<big>SpecialPages</big>]]'''
!'''[[:Category:Wiki root|<big>Category root</big>]]'''
|-
|[[Special:RecentChanges|RecentChanges]]
|[[:Category:Todo|Todo categories]]
|-
|[[Special:AncientPages|AncientPages]]
|[[Special:UncategorizedCategories|Uncategorized categories]]
|-
|[[Special:WantedPages|WantedPages]]
|[[Special:UncategorizedPages|Uncategorized pages]]
|-
|[[Special:FewestRevisions|FewestRevisions]]
|[[Special:MostCategories|Pages w/ most categories]]
|-
|[[Special:ShortPages|ShortPages]]
|[[Special:MostLinkedCategories|Most popular categories]]
|-
! colspan="2" |[[Special:Statistics|<big>Statistics</big>]]
|}
Redirect cleanup: [[Special:DoubleRedirects|Double redirects]], [[Special:BrokenRedirects|broken redirects]]
 
It's 2026, who isn't using https-only? [https://consumerrights.wiki/w/Special:LinkSearch?target=http%3A%2F%2F*&namespace= Turn http links to https]
 
Search queries are a good way to find articles that have starter text or broken elements: [https://consumerrights.wiki/index.php?search=%5BIncident%5D&title=Special%3ASearch&wprov=acrw1_-1 <nowiki>Pages containing [Incident]</nowiki>]
 
Template search also works, finding pages that use: [https://consumerrights.wiki/index.php?title=Special:WhatLinksHere/Template:InfoboxCompany&limit=500 InfoboxCompany] [https://consumerrights.wiki/w/Special:WhatLinksHere?target=Template%3AInfoboxProductLine&namespace=&limit=50 InfoboxProductLine] (mostly gone)
 
Scroll through every page at once by namespace: [[Special:AllPages]]
 
==Finding articles without certain templates==
As of 1/31/26, I am trying to find articles which are missing any of the four [[Help:Templates#Cargo|cargo templates]]. I manually scraped the lists of pages using each template (by copying lists from [[Special:WhatLinksHere]]) and the list of all pages in the main namespace (excluding redirects, from [[Special:AllPages]]) into text documents. Using these lists, a basic Python script can count how many lists each page appears in: any page found in only 1 list has no cargo templates, any page found in 2 lists has one cargo template, and any page found in 3-5 lists has multiple cargo templates.
 
I am not an advanced programmer but here is my script:
 
<nowiki># these files all generated 1/30/26 21 UTC
# build the list of file paths to read
filenames = ["allpages.txt", "company.txt", "incident.txt", "productline.txt", "product.txt"]
pathprefix = "[your-path-here]"
filepaths = [pathprefix + name for name in filenames]
# print(filepaths)

# read files line by line, strip the WhatLinksHere suffix, and count occurrences
match = " (transclusion) (← links | edit)"
table = {}  # page title -> number of lists it appears in
for path in filepaths:
    with open(path, "r") as file:  # auto closes file
        for line in file.read().split('\n'):
            line = line.replace(match, "")
            table[line] = table.get(line, 0) + 1
# print(table)

# invert the dict: count -> list of page titles seen that many times
sortedtable = {}  # Int to List[String]: count, lines
for line, count in table.items():
    sortedtable.setdefault(count, []).append(line)

print("#####Only in AllPages#####")
for title in sortedtable.get(1, []):  # .get avoids a KeyError if no page is unique
    print(title)</nowiki>
 
Then the list can be paged through with Bash: <code>python3 table.py | less</code>, or saved with <code>python3 table.py > output.txt</code>
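The membership-counting idea above can also be sketched more compactly with Python's <code>collections.Counter</code>. This version operates on in-memory lists of lines instead of files; the function name and the toy page titles are made up for illustration:

```python
from collections import Counter

# Suffix that Special:WhatLinksHere appends to each entry
MATCH = " (transclusion) (← links | edit)"

def pages_missing_templates(page_lists):
    """page_lists: one list of lines per scraped text file, AllPages first.
    Returns titles appearing in exactly one list, i.e. pages with no
    cargo templates."""
    counts = Counter()
    for lines in page_lists:
        for line in lines:
            title = line.strip().replace(MATCH, "")
            if title:  # skip blank lines
                counts[title] += 1
    return sorted(t for t, n in counts.items() if n == 1)

# Toy example: "Acme" appears only in AllPages, so it has no templates
allpages = ["Acme", "Widget Co"]
company = ["Widget Co (transclusion) (← links | edit)"]
print(pages_missing_templates([allpages, company]))  # → ['Acme']
```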

Latest revision as of 03:39, 3 February 2026
