

More X11 / Ubuntu old-Firefox Activity

         

Bubalo

11:52 am on Jul 8, 2023 (gmt 0)

Top Contributors Of The Month




Hi all.

I am a new member and this is my first post.

I am "web mastering" ( a steep learning curve for me) my own personal web site that features some of my artworks and photographs.

I have visited WebmasterWorld a few times before joining, for helpful guidance - particularly about the user agent abuse from Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0.

I am pretty sure, from the behavior I see on my site (but I could be wrong), that a scraper (possibly human, but probably an unnamed/unknown bot) is behind this user agent. It ignores my robots.txt block request. It switches IPs frequently - most of the IPs show up on the AbuseIPDB website as known dodgy IPs - but some of the IPs it uses are alarming, the latest being the French Atomic Energy Agency! There are also a lot of universities/schools, cloud proxies, and Amazon AWS.

The reason I think this is a scraper is from log reports - here is an example:

8 Jul 2023, 01:34:47104.219.213.35GET1.1200162,241425Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0
8 Jul 2023, 01:31:5844.229.15.165GET1.140316,3690Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0
8 Jul 2023, 01:30:2844.229.15.165GET1.140316,3690Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0

When it encounters a 403 response code (my IP block) it switches to a new IP, and when it gets a 200 response code it then takes anything from a few hundred to over two thousand Time Taken (ms) to GET what it wants. It usually does this in blocks of 3 attempts, and maybe only 5 or 6 attempts in one 24-hour period, before moving on to a different target on my website. It seems to be concentrating on GETting individual images (.jpeg) - there are hundreds of them on the site. There are only 13 different HTML pages on the site. There is no advertising on the site and it is NOT a commercial site, as nothing is offered for sale.

Of course I wondered at first whether this U/A was legitimate, soon after it appeared a few months ago, but when I noticed the forum message about the botnet coming back I became more suspicious. As I blocked the IPs it just seemed to switch to new IPs as fast as I blocked them.

I also noticed many of the IPs were associated with China, North Korea and Hong Kong, but as I blocked these the IP switching went worldwide - USA, UK, etc. So I tried blocking the countries China and HK, and then there was a marked increase in the U/A string using international IPs.

So far I have blocked probably a hundred different IPs, and incidents now seem to be slowing down - most now come out of the USA.

I have not used the .htaccess file to attempt to block, as I am pretty sure the X11 U/A will ignore that too.

A few days ago I decided, as an experiment, to lift the country block for China, and I got over 30 hits from X11 in 24 hours - so I blocked China again. I don't get any audience traffic from China other than hosting companies like Tencent, so I thought it was no great loss of traffic and worth a shot to see what happened.

So, X11 seems to originate from China - but what is behind it?

I notice on GitHub A LOT of people learning or using scraping use the X11 user agent string - and there is advice there for them to switch it often to another UA !

Legitimate traffic to my site does not seem to be down much, and it usually fluctuates up and down anyway, but I do fear the X11 trouble could get much worse, as others posting on WebmasterWorld have indicated has occurred on their web sites. I don't want this to happen to me. My host does not have an anti-scrape tool yet, and *loudflare has other problems I don't want to touch.

I thought to post my experience here and welcome all comments and suggestions from you guys who are more experienced.
Thanks.




[edited by: not2easy at 1:55 pm (utc) on Jul 8, 2023]
[edit reason] split thread cleanup [/edit]

not2easy

3:11 pm on Jul 8, 2023 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



Hi Bubalo and welcome to WebmasterWorld [webmasterworld.com]

I split your thread off from the older discussion here: [webmasterworld.com...] because it introduces several new topics that differ from that earlier discussion.

So far I have blocked probably a hundred different IPs, and incidents now seem to be slowing down - most now come out of the USA.

I have not used the .htaccess file to attempt to block, as I am pretty sure the X11 U/A will ignore that too.

How are you blocking IPs if not using htaccess to do that? Blocking individual IPs is not a very effective way to deal with unwanted traffic. Typically, the U/A does not get to ignore your htaccess file.

I do not know of a host that does offer anti-scrape tools. Some might include blacklists or whitelists in some form, but they are generally not included with shared hosting.

Usually we all need to find ways to keep unwanted traffic from our sites. A really old (2016) discussion does explain some of the basics: [webmasterworld.com...] If you are familiar with your access logs it helps a lot.

It is hard to read those fields in your log without separation characters, is this correct for the last entry?
8 Jul 2023, 01:30:28|44.229.15.165|GET1.1|403|16,3690|Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0| 
It appears that they are receiving a 403 response but a huge file in place of a 403 error document - or I've missed a separator. That IP is Amazon and can be blocked (from 44.192.0.0 - 44.255.255.255) with 44.192.0.0/10.

lucy24

4:05 pm on Jul 8, 2023 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



I have not used the .htaccess file to attempt to block, as I am pretty sure the X11 U/A will ignore that too.
Nobody can “ignore” htaccess, just as they can’t ignore the identical directive in the main config file. That’s assuming the config file has the appropriate AllowOverride settings so htaccess can be used in the first place.
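For reference, the main-config setting that controls this looks something like the following (the directory path is only an example; on shared hosting the host sets this, not you):

# httpd.conf / main server config - not .htaccess
<Directory "/var/www/example">
    # With AllowOverride None, .htaccess files in this tree are not processed at all
    AllowOverride All
</Directory>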

I have blocked probably a hundred different IP's
Do you mean individual IPs, down to the last digit? That's not worth it, except in the rare case of an infected human machine from an otherwise legitimate neighborhood. Look up the range and block the whole thing.

200162,241425
40316,3690
Can you unpack this? It looks like three separate numbers, starting with the response code:
200 162,241 425
403 16,369 0
Is the third number the elapsed time? Why on earth does a 403 generate 16 kilobytes? (Quick check reveals that mine totals about 8k, which is already pretty large because my 403 page is intended for humans, so it includes all headers and footers.)

Bubalo

7:08 pm on Jul 8, 2023 (gmt 0)

Top Contributors Of The Month



Hi not2easy - Thank you for your prompt and helpful reply.

I am using the raw access logs provided by my host to identify which IPs are accessing what, and when, on my site.

I then use the Block Traffic IP/IP Range tool provided by the host to block those IPs that are dodgy. I can also block countries using this security tool.

Blocking IPs like this is not very efficient, I know - it wastes a lot of time - but it does seem to block them.

It's a bit like playing a game of 'Whack-a-(Bot)-Mole', but this particular bot, Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0, is SO persistent as to be truly annoying - whack it and it just switches to another IP that is not blocked.

It's a waste of time really but important to me. Some of my work pops up on other websites/blogs but Google usually takes them down a few days after they appear or sometimes I have to request Google to take down, which takes longer.

I thought that by blocking the IPs X11 is using it might be prompted to stop the scraping, but no such luck/courtesy in this case.

I reported the problem to my host for advice on writing a bit of .htaccess code to block this particular UA, but they advised me to use the robots.txt file instead. Since doing this, X11 has just ignored the robots.txt instruction to disallow, which is why I think a bad bot/human is behind it.

The host said scraping is a huge problem but that at this time they do not have a solution/tool, although they would escalate the problem to senior managers at my request that they offer such a tool. I am not holding my breath for a positive reply.

I was also trying the easy options first (blocking individual IPs/ranges and using the robots.txt file) because, as a newbie, I was concerned about making a mistake writing code I didn't fully understand to .htaccess. I don't really understand the .htaccess file, and I know making a mistake in there could be worse than fiddling in the dark with something I don't, yet, fully understand.

Perhaps you can kindly suggest a good starting place for me to understand more about writing to the .htaccess file, or perhaps what code to write to block this X11 pest.

I can see now, from what you and lucy24 write, that it is the .htaccess file I am going to have to use next to block this bot. I am going to do more reading on this, and I will also look further in depth at the link you sent to the previous 2016 discussion on the subject.

I also have a bit of a physical disability, so sitting or standing for more than an hour at a time is not good for me.

Sorry I did not put the separators in place, but you are correct in how you have separated the log entry:

8 Jul 2023, 01:30:28|44.229.15.165|GET1.1|403|16,3690|Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0|

You have not missed a separator. It IS a big file, but what file in particular, I have no idea!

Yes, the IP is Amazon, and they have so many IPs/ranges all over the world that it is a task to block them all, but I will use those IPs you suggest.

I think this bot is malicious, and I don't want to fall into the trap of blocking good traffic or mucking up my own website trying to mitigate against X11, because this might actually be the objective of the bot operator - to screw up my own site. I am very happy with my host.

I DO block whole ranges of IP's that are known to be dodgy.

I made a mistake writing that it was GitHub that was advising scrapers how to get around website owners' blocks (sorry GitHub) - it was Stack Overflow giving that advice.

The log numbers I posted earlier unpack as follows:

200 and 403 are response codes
162,241, 16,369 and 16,369 are bytes
425 and 0 and 0 are Time Taken (ms)

So the 3rd number(s) 425, 0, 0 are the elapsed time(s)

I have no idea why my 403 generates 16 kilobytes, but the 403 page is created by the host and shows a graphic illustration (probably a .png) on a full page. But you are correct - why such a big file? This is something else I am going to have to inquire about.

Bubalo

7:12 pm on Jul 8, 2023 (gmt 0)

Top Contributors Of The Month



Thank you lucy24 for your reply. I trust it is OK that I have replied to you both, as best I can, in the one post above to not2easy. If not, do let me know the etiquette here. I got to get up and go walk about now. :-)

not2easy

7:47 pm on Jul 8, 2023 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



You can create your own 403 error file. I made my own so that if I accidentally block a human, they can let me know. You might want to do that after you learn a little more about editing your .htaccess file. You are so right not to just jump into that until you learn more about it but at least you are in a place where there are thousands of previous questions like yours.

You might want to check with the host before using
those IPs you suggest.
because 44.192.0.0 - 44.255.255.255 is a range covering all IPs from the first one shown to the last. 44.192.0.0/10 is the CIDR for all of them, and that would be the one to use IF your host allows you to add CIDRs.
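If you do eventually add that in .htaccess on Apache 2.4 rather than in the host's tool, a range block with that CIDR would look something like the sketch below - read up before pasting anything:

# Apache 2.4 syntax, assuming mod_authz_core is available
<RequireAll>
    Require all granted
    # one CIDR covers the whole 44.192.0.0 - 44.255.255.255 range
    Require not ip 44.192.0.0/10
</RequireAll>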

I believe that old thread mentioned above explains CIDRs and how to use them, but to be sure, you should read and ask a little more before you jump into blocking in .htaccess. I do not care for the current search here (bing) but it beats trying to find things page by page. The first time I ever edited my .htaccess file I made a mistake and my site was offline until I could fix it. That was over 20 years ago but I still don't ever upload an edited .htaccess until I review each line. Oh, and always keep a backup of the file you are replacing so worst case is a step back and not starting over.

The Apache forum where that old thread is found does have a "Library" (found under the Forum Options menu button) where you can read and learn things - keeping in mind that those older threads were based on an older version of Apache. The old syntax generally would work, but if you plan to be doing this for years you might want to look into the newer syntax for replacing those allow,deny lines before you have hundreds to edit. You will find that lucy24 and others here are the authority .htaccess people. I do OK, but I check twice before I'm sure.

tangor

10:38 pm on Jul 8, 2023 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



Study up on env .... this is useful for many things, but particularly for UA strings. Example:

SetEnvIfNoCase Request_URI "(robots\.txt)$" pass

Deny from env=ban
Allow from env=pass

SetEnvIfNoCase User-Agent "x11" ban



The top line ensures all visitors can see robots.txt at all times, regardless of blocks.
The next two lines deny or allow access based on those env variables.
The last line sets the "ban" variable for any user agent with x11 in the string, which is what triggers the Deny.

STUDY THIS FIRST, don't just cut and paste! As suggested above, keep a copy of .htaccess (.htaccess.old) when changing things.

Talking about size of 403s---Mine is just over 400 bytes. Saves bandwidth and I know who I am blocking and why...

lucy24

6:29 am on Jul 9, 2023 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



I was concerned about making a mistake writing code I didn't fully understand to .htaccess because I don't really understand the .htaccess file but I know making a mistake in there could be worse than fiddling in the dark with something I don't, yet, fully understand
This is a perfectly sound and reasonable starting position. A simple mistake such as a misplaced comma can bring down your entire site. So always test by opening some random page every time you have made changes in htaccess.

Deny from env=ban
Allow from env=pass
Um, er, ahem. Those are Apache 2.2 directives. They will work in 2.4 if mod_access_compat is installed (it almost certainly is), but if you are just getting started on access controls, you may as well proceed directly to current syntax and then you won’t have to unlearn anything.
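If you want to peek ahead, a rough 2.4-style sketch of the same idea (untested on your host, and it assumes mod_setenvif and mod_authz_core are available) would be along these lines:

SetEnvIfNoCase Request_URI "robots\.txt$" pass
SetEnvIfNoCase User-Agent "x11" ban

<RequireAny>
    # always let robots.txt requests through
    Require env pass
    <RequireAll>
        # everyone else is allowed unless flagged as "ban"
        Require all granted
        Require not env ban
    </RequireAll>
</RequireAny>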

Bubalo

8:33 am on Jul 9, 2023 (gmt 0)

Top Contributors Of The Month



Thanks to you all for very helpful replies.

So reassuring to be with good guys.

Looks like I have some more study to do about .htaccess and syntax, but before I begin, what does "env" actually mean in... Deny from env=ban
Allow from env=pass ? I get that these are Apache 2.2 syntax directives, so am I going to have to learn Apache too?

Also, about the X11 pest... I forgot to mention before, there is NEVER a Referer address shown for it in my logs, and it ALWAYS hits on HTTP version 1.1 and NEVER on HTTP/2.

blend27

1:11 pm on Jul 9, 2023 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



re: ALWAYS only hits on HTTP version 1.1

And there you go!

Is there a redirect to the HTTPS version of your site? If NOT, set one up and make it a 301. Most dumb bots do not follow it. Almost all modern browsers do.

Added..

What I am saying here is: if a request is made using plain HTTP/1.x, the site should redirect (301, if the file exists of course) to the HTTPS version of the site (the endpoint these days). The content served at that point is just a chunk of HTTP headers (a couple of bytes). Then learn from that event...
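A minimal mod_rewrite sketch of that sort of redirect, assuming the certificate is already in place (many hosts also offer a one-click "force HTTPS" option instead):

RewriteEngine On
# Permanently (301) redirect any plain-HTTP request to the HTTPS version
RewriteCond %{HTTPS} !=on
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]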

As for Firefox/72.0, my 13-year-old tablet stopped updating itself at version 76 a long time back. Unless your site might benefit from 72, just nuke the UA.

Bubalo

5:00 pm on Jul 9, 2023 (gmt 0)

Top Contributors Of The Month



Thank you blend27, (and all) for your post(s) .

I think you are on the right track.

I have been in touch with my host after your post and they have now created a SPECIFIC block against X11 in the .htaccess file.

I have also asked them to check if there is a 301 redirect to the https version of my site. They are looking into this and will get back to me soon. (Today is a Sunday.)

It is interesting/curious that ALL of the bad actors/bots visiting my site use HTTP 1.1 to access it and not HTTPS! I have been HTTPS registered for over 2 years now, and my URL check in Google confirms the padlock and https protocol - so my certificate is in place and Google knows it.

not2easy

5:27 pm on Jul 9, 2023 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



If you want to check whether there is a 301 redirect to the https version, you could paste the http: URL into your browser's address bar and see whether it renders the https: version - then you could look for your test in your logs to see whether the server returned a 301 response before the 200 response.

lucy24

5:49 pm on Jul 9, 2023 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



so am I going to have to learn Apache too?
All of it? Nah, just learn specific things as and when you need them. For access control, start with the two directives
Require {blahblah}
and
Require not {blahblah}
where blahblah can be a variety of things such as ip, or environmental variables (“env=something”, using the name--not value--of the environmental variable), or more complicated stuff that you need not learn all at once. Setting environmental variables is another useful skill; you can use them not only for access control but for things like picking which version of robots.txt a given visitor sees.
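Purely as an illustration of that robots.txt idea (the alternate file name here is hypothetical, it assumes mod_setenvif and mod_rewrite, and module behaviour can vary by host, so test it):

# Flag the unwanted UA (SetEnvIf sets "ban" to 1 by default)
SetEnvIfNoCase User-Agent "x11" ban
# Serve a stripped-down robots.txt to flagged visitors; everyone else gets the normal file
RewriteEngine On
RewriteCond %{ENV:ban} =1
RewriteRule ^robots\.txt$ robots-restricted.txt [L]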

explorador

3:12 pm on Jul 10, 2023 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



X2 to the htaccess filter/block

Depending on how your website is built, you could include your own code (PHP? Perl?) to deal with this, showing those IPs whatever you want them to see instead of your final content (this is called cloaking). You might want to TRY to check whether those bots accept cookies, or find some other way to identify them even if they come from a different IP but still carry the cookie you set on a previous transaction, just as regular users' computers store cookies regardless of changing IP.

About security, as general information: it's not always the best approach to show an error or a blocking-filter message, because when someone is trying to get you, they will use those messages to adapt to your security. In those instances, it's better to serve some alternate content, so the hacker or automated bot fails to understand there is a security layer in place. Random results, or unclear error messages/blocks, make it harder to detect security layers.

Bubalo

6:36 pm on Jul 10, 2023 (gmt 0)

Top Contributors Of The Month



Thank you explorador (and all) for your posting(s).

New security measures are now in place and I am monitoring. I will feed back results here.

All of your comments and advice have been of tremendous help to me and have increased my understanding of the issue.

To get the right answers one must know the right questions to ask. To give your time and expertise as you all do is truly noble, and I thank you all for your help.

It's a Wild Wild Web out there. It's nice to feel at home among friends.

tangor

11:07 pm on Jul 10, 2023 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



Stellar words appreciated---all too often folks ask, get, and never say thanks!

phranque

12:02 am on Jul 11, 2023 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



welcome to WebmasterWorld, Bubalo!

Bubalo

4:11 pm on Jul 12, 2023 (gmt 0)

Top Contributors Of The Month



Success!

X11 is now effectively blocked (403) by my .htaccess file.

There is also a redirect to https in place too.

Thanks again all for your helpful advice and suggestions - you guys were correct all the way.

It has also been informative monitoring the behavior of X11 since the block, and seeing it dash about the globe like a headless chicken, trying lots of different countries (the usual suspects) but totally failing to get into my site.

But, as I suspected (the evidence is in the log), it seems to have changed tack and switched to Python to scrape. It is actually quite easy to spot its doings when you know what its leavings look like.

I have been using my robots.txt file to block many of the various versions of Python that scrapers have been using against my site - but most ignore the robots.txt.

My host has very helpfully recommended a number of directives to block these Python versions in my .htaccess file either by individual Python versions or as a block against all Python.

My first thought was to block all Python, as it does seem to be the "go to" tool that scrapers popularly use, but I seem to recall reading somewhere - perhaps in a WebmasterWorld discussion - that there are some good bots that use Python, so a blanket Python block might not be a sensible option. Or perhaps I was just dreaming again about a better world.

Please let me know your thoughts about Python and any other comments about this X11 issue.

not2easy

5:32 pm on Jul 12, 2023 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



I am sorry to tell you that robots.txt cannot block anything. It can ask that bots do not crawl something, but there is nothing to make all bots either read or comply with it.

I block all python and pcore scrapers. You can block by UA in a number of formats; a common setup is shown at the end of this (VERY) old discussion: [webmasterworld.com...] - keep in mind that was not for Apache 2.4, because it did not exist in 2003. I refer to it only to give you food for thought in setting up a UA block system that can be updated easily using a rewrite rule. The best part is that it does not need the entire UA string; it uses a "contains" format like this example:
(badbot|bulid|pcor|pytho) 
so "pytho" blocks anything with Python in the UA string.

tangor

6:54 pm on Jul 12, 2023 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



Never found any use for Python UAs... blocked.

My robots.txt is a whitelist; all other bots are denied. Those that honor it are left alone, those that do not are nuked.
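As an illustration of the whitelist idea (the named crawlers are just examples), a robots.txt along these lines allows only the listed bots and asks everyone else to stay out - honored only by well-behaved bots, of course:

# Allow named crawlers everything
User-agent: Googlebot
Disallow:

User-agent: Bingbot
Disallow:

# Everyone else: please stay out
User-agent: *
Disallow: /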

YMMV

Bubalo

7:35 am on Jul 13, 2023 (gmt 0)

Top Contributors Of The Month



I will block the Python lot.
Everything in this discussion has opened my eyes and mind with advice, tips and cautions. Many thanks.

Bubalo

8:01 am on Jul 18, 2023 (gmt 0)

Top Contributors Of The Month



UPDATE
Since the .htaccess block took effect, the X11 UA has now morphed into this modified scraper string (the text after 72.0 is the modification):
Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0 (compatible; mozilla/5.0 (windows nt 6.1; win64; x64; rv:47.0) gecko/20100101 firefox/47.0; +https://github.com/rom1504/img2dataset)

Google rom1504/img2dataset and scroll down for the Y Combinator discussion.

I hesitate to publish more information here about my recent experience, as this is a public forum - it seems clear to me this "adaptation" has the potential to be a mega pest to all.

I am very interested to receive your comments

not2easy

12:17 pm on Jul 18, 2023 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



I might add "img2da" or "datase" to my UA blocklist. Select a UA string that is common only to the unwanted traffic. So long as you are checking your logs you can keep up with them.

Bubalo

1:15 pm on Jul 18, 2023 (gmt 0)

Top Contributors Of The Month



I have done so in my UA blocklist.
Do these bots ever give up after getting so many 403 blocks?

not2easy

1:21 pm on Jul 18, 2023 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



Yes - it isn't overnight, but they generally move on to greener pastures, if only because time is money and there are so many sites that never look into dealing with unwanted traffic. Most sites just count the traffic and do not realize that maybe half of it is not human.

Bubalo

1:49 pm on Jul 18, 2023 (gmt 0)

Top Contributors Of The Month



Thanks not2easy. Phew! Good to know. - I will keep checking logs.

Bubalo

6:47 pm on Aug 9, 2023 (gmt 0)

Top Contributors Of The Month



UPDATE:

Hi guys,

Since my recent blocking in .htaccess, there has been much more activity against my site, with X11 using hundreds of different IP addresses from all over the world. Sometimes the X11 appearances in my logs looked like DDoS attacks - large blocks of IPs.

Some of the countries it was coming out of were alarming too, like North Korea! So were some of the IPs, like the French Atomic Energy Commission.

I don't think the IPs themselves are the issue - although some of them are listed as dodgy, many are not and seem to be ordinary people's IP addresses.

A big clue is how it switches countries and IP addresses so frequently.

I have established that the "spider crawler" behind X11 uses TWO other User Agents as listed below:

Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0

Mozilla/5.0 (iPhone; CPU iPhone OS 15_5 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/19F77

python-requests/2.28.0

The iPhone U/A string seems to be linked in some way to Facebook but I can't be sure the others are linked to Facebook.

If it is Facebook behind all three U/A strings - why is Facebook (if it is Facebook behind X11) using hundreds (if not thousands) of different IP addresses from all over the world to access my site? And why does it not show itself as Facebook? Why is it hiding its identity?

This all has got me wondering what is going on.

Is this constant web scraping evidence of a benign but "unknown" spider crawler that is very good at hiding its identity (to make it harder to block), or is it perhaps some end-user software somewhere that is malfunctioning and turning itself into a very aggressive web crawler using random IP addresses? Some of the IP addresses it uses are probably compromised individual personal machines, but others are from the cloud, AWS, etc. - the usual suspects.

Any thoughts of yours on my questions and update will be much appreciated.



not2easy

6:51 pm on Aug 9, 2023 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



If these are all receiving 403s they aren't gaining anything for their efforts. I hope you are using UA blocking and not IPs only, that can go on forever.

lucy24

9:16 pm on Aug 9, 2023 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



Some of the countries it was coming out of were alarming too, like North Korea ! Some of the IP's too like French Atomic Energy Commission.
Yikes. Has your site offended someone? When you get endless hits from wildly improbable IPs, a last-resort possibility is some kind of DDoS attack: they don't actually want your files, they just want your server to become overloaded and be unable to serve files to legitimate people who do want them.

tangor

1:38 am on Aug 10, 2023 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



When you say "hundreds" "thousands", how many actually occur in a 24 hour period, and is that a constant number or rising to even greater numbers?

Serving a 403 is a "split second" and back to business. It would take an awful lot of 403s to tank a site for DDoS purposes.

I average 50-200 per day--mosquito bite.

Control the intrusions, of course, but never expect to eliminate them! The inventive mind of bad actors is enormous. :)