Difference between revisions of "MediaWiki talk:Spam-whitelist"

From Dungeons and Dragons Wiki

Latest revision as of 15:43, 4 September 2017

I think we might have done this list wrong.

--[[User:DanielDraco|DanielDraco]] ([[User talk:DanielDraco|talk]]) 11:02, 24 June 2017 (MDT)

:Yep, definitely. I'll go through and clean up. Easy fix. --[[User:DanielDraco|DanielDraco]] ([[User talk:DanielDraco|talk]]) 11:02, 24 June 2017 (MDT)

::Wait, hold on, this doesn't seem to be working right. It says it only matches the host portion, but this link doesn't work: <nowiki>[http://www.facebook.com/whatever Bad link]</nowiki>, but this one does: [http://www.facebook.com/dndwiki Good link], presumably on the basis of the rule <code>www.facebook.com/dndwiki</code>. Anyone know how this page ''actually'' works? --[[User:DanielDraco|DanielDraco]] ([[User talk:DanielDraco|talk]]) 11:06, 24 June 2017 (MDT)

:::Okay, I did some tinkering, and it turns out that this page does not work at all as advertised. Each line is a regex, but it does ''not'' match only the domain. The documentation says that matching begins ''after'' the <code>http://</code> portion, but the <code>^</code> token doesn't match the beginning of the tested text. This makes it essentially impossible, as far as I can tell, to make the page work as intended. Maybe we can match the beginning of a hostname with something like <code>(?<=^https?://)</code>, but that's more experimentation than I want to do right now. For now, I'm just going to escape all the periods. It does its primary job of preventing spam either way. --[[User:DanielDraco|DanielDraco]] ([[User talk:DanielDraco|talk]]) 09:37, 4 September 2017 (MDT)
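The behavior described above can be sketched in a few lines of Python. This is a hypothetical simulation, not MediaWiki's actual code: it assumes, as the tinkering suggests, that each whitelist line is searched as an unanchored regex against the URL text after the protocol prefix. The function name <code>is_whitelisted</code> is made up for illustration.

```python
import re

def is_whitelisted(url, whitelist_lines):
    # Strip the protocol, mirroring the documented claim that
    # matching begins after the http:// (or https://) portion.
    target = re.sub(r'^https?://', '', url)
    # Each line is an unanchored regex; any match whitelists the URL.
    return any(re.search(line, target) for line in whitelist_lines)

# With the periods escaped, the rule matches only the literal host/path:
rules = [r'www\.facebook\.com/dndwiki']
print(is_whitelisted('http://www.facebook.com/dndwiki', rules))   # True
print(is_whitelisted('http://www.facebook.com/whatever', rules))  # False

# With unescaped periods, each '.' matches ANY character, so the rule
# also whitelists hosts it was never meant to cover:
loose_rules = ['www.facebook.com/dndwiki']
print(is_whitelisted('http://wwwXfacebookXcom/dndwiki', loose_rules))  # True
```

Under these assumptions, escaping the periods is exactly the right fix: it narrows each rule from "any character in that position" to the literal dot, without needing to solve the <code>^</code>-anchoring problem.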