OK, serious question then. How come you treat TP with greater prejudice than any other host? Or do you also block places like SB, SL and others?
IMO TP has been a haven for rogue bots and assorted troublemakers for years, but I also block ranges from other hosts, colos, server pools, proxies and other culprits; currently numbering about 50.
Seriously though, SB takes abusive scrapers and other bad things very seriously and a few things running rogue from within their own network were shut down with extreme prejudice when I brought the AUP violation to their attention.
SB is one of the actual good guys IMO.
Just think how many bad boys we could take out with some Level3 focus behind the scenes ;)
Or those pesky old Road Runner ranges, which used to bother everybody and are no longer seen because RR has turned them into hub-relay-centers, presenting instead an otherwise useless and/or duplicated IP range for the majority of their customers (at least from an SSID perspective).
What I don't do publicly though is bad-mouth a host simply because some of their customers do bad things with IP addresses provided by the host. I dunno, sometimes I'm overly sensitive about this and really, it's kind of silly.
Yeah, Don, I followed that thread and thought to myself how nice it would be if we could do something like that. It's a nice dream.
Well, sorry for beating the hornet's nest. I'm going back to analyzing log files from last week now.
So someone like The Planet, who offers just hosting, is ruled out automatically from the main web scripts. To do that I prefer to get the DNS records and then locate the host behind the IP, once the forward/reverse lookup resolves properly. That way, regardless of IP, the host is always visible, and based on that you can block or allow access. I find it more effective than going after IPs/ranges.
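For anyone who wants to try it, here's a minimal sketch of that forward/reverse (FCrDNS) check in Python; the IP shown is a documentation-range address and the hosting-company suffix is a made-up example, not real data:

```python
import socket

def resolve_host(ip):
    """Return the rDNS name for ip, but only if the forward lookup
    of that name resolves back to the same ip (FCrDNS)."""
    try:
        name = socket.gethostbyaddr(ip)[0]               # reverse lookup
        forward_ips = socket.gethostbyname_ex(name)[2]   # forward lookup
    except socket.herror:
        return None   # no PTR record at all -- a red flag in itself
    except socket.gaierror:
        return None   # name doesn't resolve forward -- fails the check
    return name if ip in forward_ips else None

# Regardless of which IP the request came from, the host behind it
# is visible in the confirmed name.
host = resolve_host("192.0.2.10")                 # documentation-range IP
if host and host.endswith(".theplanet.example"):  # hypothetical suffix
    print("request originates from a pure hosting network")
```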
Well, there is a difference between hosts and ISPs. For hosts I do not expect visitors coming in, unless for specific purposes like retrieving feeds or some other automated process.
engima,
In theory, you're correct.
However, over time the methods used by internet providers have changed drastically.
In order to maximize their computers and/or data centers, many internet providers offer a variety of services beyond a basic internet connection (colo, hosting and more).
In addition, there are more and more orgs providing combined commercial services with no clear distinction and/or separation between a basic connection and 3rd-party services.
However, the question is when you make the decision to block access for a host. If I see in my reports that most or all IPs coming from the same host are doing something bad on my server, I can switch access on or off easily without worrying about IP ranges. All of this is controlled manually; I do not automate this functionality.
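In practice that on/off switch can be as simple as a hand-edited set of rDNS suffixes checked against the confirmed hostname; a sketch, with placeholder names rather than real hosts:

```python
# Hand-edited blocklist of host suffixes -- the manual on/off switch.
# Both entries are hypothetical placeholders, not real hosts.
BLOCKED_HOST_SUFFIXES = {
    ".badhost.example",      # host whose IPs showed up in this week's reports
    ".rogue-colo.example",   # colo range running unattended bots
}

def is_blocked(hostname):
    """True if the FCrDNS-confirmed hostname falls under a blocked host."""
    if hostname is None:
        return False   # decide separately how to treat unresolvable IPs
    hostname = hostname.lower().rstrip(".")
    return any(hostname.endswith(suffix) for suffix in BLOCKED_HOST_SUFFIXES)
```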
For large ISPs, blocking IPs individually remains an option, although it is not very effective as most of their addresses are assigned dynamically.
Another thing I've seen: .orgs that show up in the DNS records with normal browser signatures are another red flag (unless it's something known, e.g. the Internet Archive or the W3C, which I do see in my logs, but those aren't real visitors, they're bots).
And seriously, hosts and ISPs have all the power to discipline their customers. For instance, nowadays if you're late paying the bill for hosting you get emails and warnings, and your ISP brings up all sorts of reminder screens.
So how hard would it be to identify instantly whether a customer misbehaves, either intentionally or because his system is compromised? I don't believe it's too hard. Plus there is a market for the hosts right there: cleaning up compromised systems :)
One of the (many) problems is ISPs who will not even patch their servers. Hence the serious number of compromised servers tied into botnets at the moment. Admittedly their clients often have some kind of control over servers they rent but it's still reasonable for the ISP to test the servers for patch status, infestation and exploits.
I block server farms as I find them, using the mail server's quarantine bin as well as web server traps. Got quite a few at present - about 1800 IP ranges, including persistent dynamics in the Middle and Far East. I find the major nuisances here (UK) include Russia, Ukraine, India, China and the USA; not just servers but compromised broadband as well, often organised into botnets but operating "freelance" as well.
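For the curious, matching a visitor against a list of ranges like that is cheap; a rough sketch in Python, using documentation prefixes rather than anyone's real allocations:

```python
from ipaddress import ip_address, ip_network

# ~1800 ranges in reality; two documentation/example prefixes here.
BLOCKED_RANGES = [ip_network(cidr) for cidr in (
    "198.51.100.0/24",   # stand-in for a server-farm allocation
    "203.0.113.0/24",    # stand-in for a persistent-dynamic range
)]

def rejected(ip_str):
    ip = ip_address(ip_str)
    return any(ip in net for net in BLOCKED_RANGES)

print(rejected("203.0.113.77"))   # True -> quarantine bin / web server trap
```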
In the pipeline at the moment is a site-by-site block list of countries. Some of my clients target the world, but a lot are UK-centric and couldn't care less about the Far Eastern trade, for example.
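One way that per-site country list could look, assuming MaxMind's GeoLite2 country database and the geoip2 Python package (the actual tooling isn't stated here, and the site names and policy are hypothetical):

```python
import geoip2.database
import geoip2.errors

# Hypothetical per-site policy: each site lists the country codes it blocks.
SITE_BLOCKS = {
    "uk-centric-shop.example": {"CN", "RU", "UA"},  # placeholder policy
    "worldwide-site.example": set(),                # blocks nothing
}

reader = geoip2.database.Reader("GeoLite2-Country.mmdb")

def blocked_for_site(site, ip):
    try:
        country = reader.country(ip).country.iso_code
    except geoip2.errors.AddressNotFoundError:
        return False   # unknown location: let the plain IP blocklist decide
    return country in SITE_BLOCKS.get(site, set())
```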
As to white-listing - I consider it faster to reject by IP if it's a known baddy and then ask questions if it isn't rejected by that.
Each to their own. :)
Yes, there are many problems they're having; another one is with router firmware and wifi. Although the issues are well documented, it seems nothing happens to rectify the hacks that are going on.
Although not new, the last incident I remember reading about, a month ago, was attackers uploading malware to routers because they were unprotected or were using default passwords.
Now, if you sign up for an account with an ISP, they typically set up a password and username for you. How hard is it to set the same info on the modem/router you ship to the customer? I can only guess they prefer to cut costs by having a manufacturer or vendor ship the router to the customer directly. But in the long run it hurts their business, as more and more sites deploy countermeasures blocking access.
Since you're in the UK: I signed up with one of the major ISPs there, and they sent me the modem/router. Some months later, I started seeing emails from them suggesting I password-protect the router. Many people reading that kind of email simply won't even understand what it's all about. They (the ISPs) should have protected the router in the first place. Plus, they have full control to do updates remotely (at least the first time, when you activate the account before you go online). Why all these complications?
The servers I use were set up to auto-patch themselves, but that doesn't always happen and when it does it's at some inconvenient, high-traffic time of day (I altered mine). And it's not unknown for an MS update to go wrong - I'm still trying to recover from one problem resulting from a patch last October!
Apart from that, IIS servers are usually only patched monthly, allowing loads of time for hackers to look for exploits. In some cases holes aren't fixed for months anyway.
In any case I doubt very much if SQL databases, PHP and such-like are auto-patched - they aren't on my servers.
Of course, there is also the problem of illegal MS installations. Many of the countries with compromised computers have a high incidence of bootleg OS's and MS won't permit auto-update, so if the guy forgets or can't be bothered to do a manual update the OS is vulnerable, possibly for months.
I don't run Linux servers, but judging from my desktop machine I assume they can be set to patch themselves completely, and updates happen as soon as a patch is available rather than next month - far more sensible.
All of that assumes it isn't some home or office server set up accidentally or deliberately and then left.
You're talking about two different aspects (dynamic and servers) but the same applies all around.
The difference is that with dynamics almost no one has a clue about security. If it auto-updates, fine. If not, who knows?
All of which is off-topic, for which, Moderators, I apologise. To bring it back on topic:
I can't see any reason for not blocking all server farms and then opening up a hole for "wanted" bots etc. The problem is, once blocked, you have to check the rejection logs (if any) to see if there are any new goodies. Time-consuming, but perhaps quicker than the alternative?
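Checking the rejection logs for goodies can be semi-automated. A sketch that reuses the resolve_host() FCrDNS helper from earlier in the thread; the log format and the crawler suffixes shown are assumptions, so check the documentation of each bot you actually want to admit:

```python
# Scan a rejection log for IPs whose confirmed rDNS matches a crawler
# we actually want, so they can be moved to the allow list.
WANTED_SUFFIXES = (".googlebot.com", ".search.msn.com")  # rDNS-verifiable crawlers

def candidates(logfile):
    seen = set()
    with open(logfile) as fh:
        for line in fh:
            parts = line.split()
            if not parts:
                continue
            ip = parts[0]          # assumes the IP is the first field
            if ip in seen:
                continue
            seen.add(ip)
            name = resolve_host(ip)
            if name and name.lower().rstrip(".").endswith(WANTED_SUFFIXES):
                yield ip, name     # a goodie caught in the net

for ip, name in candidates("rejected.log"):
    print(f"consider allowing {ip} ({name})")
```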