The log entries show that your server is returning a 404 error for their efforts. That is what it should do, and it will send them looking somewhere else.
Please describe the type of request you are trying to forbid.
The GET request was for the full URL. I was trying to block any GET request that began with http://.
What I tried did not work. Here is another one:
222.186.128.nn - - "GET http://example.net/fastenv HTTP/1.1" 404 - "-" "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)"
They all get 404s. Most of them are Chinanet IPs. Not sure why I would find this type of thing in my logs. The GET request URLs are all different.
I see that now -- I initially missed the fact that you were showing the path and not the full URL.
Your RewriteCond should be testing a more suitable server variable, such as REQUEST_URI.
Your regular expression needs some work:
- Not sure what the .*= is doing at the start; it is ambiguous, greedy and promiscuous, and will be inefficient in practice.
- You don't need to escape the colon with a backslash, as it's not a special character.
- The .*$ at the end is unnecessary since you are not capturing anything; remove it or use a more efficient pattern.
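Putting those points together, a rule along these lines might do what you want. This is only a sketch, assuming mod_rewrite is enabled and that you want a 403 for any request whose URI starts with a scheme -- it is not your exact rule:

```apache
RewriteEngine On
# Proxy-style probes ask for a full URL rather than a path,
# so the URI begins with "http://". Match that and forbid it.
RewriteCond %{REQUEST_URI} ^http:// [NC]
RewriteRule ^ - [F]
```

The `[F]` flag returns 403 Forbidden immediately; the `-` target means "don't rewrite, just apply the flag".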
Note that, technically, you don't need to do anything. Since the pages don't exist, they are already getting 404s, which shouldn't take up any more resources than a 403 would.
But it is perfectly understandable if you want to 403 them instead on the grounds that you don't like their face ;)
I let my host know about these types of log entries and they may have done something -- I don't know what -- but I have not seen any more of these types of hits for two days. But since these requests would not find anything on the server, they would be getting 404s anyway.
My host uses mod_security. Perhaps they changed a setting that would hinder this type of activity.
Thank you for all your help.
|I let my host know about these types of log entries and they may have done something -- don't know what -- but I have not seen any more of these types of hits for 2 days. |
As a precaution, I'd be looking for some assurance that this was NOT done for all 404s!
|My host uses mod security. Perhaps they changed a setting that would hinder this type of activity. |
So does mine -- it's an optional add-on -- but when it kicks in, you can see it in the error logs. Generally it's something truly sinister, like asking for nonexistent files with ".exe" at the end.
It's just as likely that the robot simply got bored and went away. The list of robots who hammer away forever, day after day for months and years, is really pretty short. You block IPs because if they allow one robot today, they'll allow an unrelated robot next week.
Lucy is correct. They just took the day off; they are back today. Just two IPs and two URLs, which I blocked in htaccess. From what I have been able to determine, they appear to be probes checking whether the server my site is hosted on can be used as a proxy. I would block them anyway. Most of them are from China, Russia, Poland, etc. Today's IPs were:
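For reference, blocking IPs or ranges in htaccess can look like this. This is a sketch using Apache 2.2-style access control; the /24 range is based on the earlier log sample, and you'd substitute the actual offending addresses:

```apache
# Allow everyone by default, then deny the listed ranges.
# Order Allow,Deny means a matching Deny overrides the Allow.
Order Allow,Deny
Allow from all
Deny from 222.186.128.0/24
```

Individual IPs can be listed the same way with `Deny from`, one per line.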
Oh, those are both HUGE China ranges. If you don't do business in China you can block 'em wholesale. I've got: