approx 40K IP ranges
The main worry is the upper limit of IP ranges for iptables before it starts impacting the server performance.
My preferred method is to do the blocking in MySQL and PHP at the beginning of all your scripts or files and keep all the data crunching out of Apache, because Apache is flaky at best.
When you download the IP allocation data from ARIN, RIPE etc. and import it into the database, blocking countries becomes much more of an autopilot operation.
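A minimal sketch of that kind of check, assuming a hypothetical blocked_ranges table with unsigned integer range_start and range_end columns populated from the RIR data (the table, column names and credentials below are illustrative, not from the thread):

<?php
// Run at the top of every script, before any real work is done.
// IPv4 only: ip2long() returns false for IPv6 addresses.
$pdo = new PDO('mysql:host=localhost;dbname=geoip', 'user', 'pass'); // placeholder credentials

$ip = sprintf('%u', ip2long($_SERVER['REMOTE_ADDR'])); // client IP as an unsigned integer

$stmt = $pdo->prepare(
    'SELECT 1 FROM blocked_ranges WHERE :ip BETWEEN range_start AND range_end LIMIT 1'
);
$stmt->execute([':ip' => $ip]);

if ($stmt->fetchColumn()) {
    header('HTTP/1.1 403 Forbidden');
    exit; // a blocked visitor never reaches the rest of the application
}

In practice the result would usually be cached, and an index-friendly lookup on range_start with LIMIT 1 is the usual refinement, but the principle is the same.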
Is this 40k in ranges or 40k in file size?

40K in IP ranges. I have the website:IP address mapping of websites in com/net/org/biz/info/mobi/asia/us and approximately another fifteen million sites in various ccTLDs. It is part of a survey that I run. I suppose I could optimise the ranges. What I have noticed is a lot of Chinese and Indian subnets using US and CA IP ranges.
If it's 40k in ranges then you have the iptables compiled incorrectly (at least in regards to IP ranges).
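As an aside that isn't from the thread: when a deny list genuinely does run to thousands of CIDR blocks, ipset (assuming the utility is installed) keeps iptables performance flat, because the whole set is matched by a single rule against a hash table rather than a long rule chain. The prefixes below are RFC 5737 documentation ranges, used purely as placeholders:

ipset create blockedranges hash:net
ipset add blockedranges 192.0.2.0/24
ipset add blockedranges 198.51.100.0/24
iptables -I INPUT -m set --match-set blockedranges src -j DROP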
You could reduce those numbers by leaps and bounds with a few lines of mod_rewrite

That approach (starting with A ranges) might be ok for country level blocks but sorting genuine human users from data centres and hosting ranges might require a bit more precision. Blocking at the upstream provider IP range might deal with server farms. Those CN/IN subnets are a common thing. Where a country's internet infrastructure isn't well developed a high percentage (possibly 50% or more) of that country's websites might be hosted on IP ranges outside that country. The US, CA, DE and UK tend to be the most popular. Using an A approach is, perhaps, like using a chainsaw when a scalpel is required. That said, I do have a few countries blocked on some of my sites.
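For reference, the "few lines of mod_rewrite" being discussed would typically look something like the sketch below in .htaccess; the leading octets are placeholders, not actual country allocations:

RewriteEngine On
RewriteCond %{REMOTE_ADDR} ^(?:1|27|36)\. [OR]
RewriteCond %{REMOTE_ADDR} ^(?:42|58|60)\.
RewriteRule .* - [F,L]

Each condition matches whole /8 blocks by their first octet, which is exactly the chainsaw-versus-scalpel trade-off described above.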
There is also the collateral damage of blocking people who are using data centre IPs for web proxies.
Would the "allow" list be smaller?Possibly. It is certainly worth considering.
And the long range benefits of allowing the wider IP ranges are not that great unless your sales are in the millions.

It depends on the audience for the website as much as the sales. Blocking entire countries might be acceptable with some sites, especially if there is no financial argument for allowing traffic from that particular country. However a site that has a localised, country-level market and only sells to that country could benefit from blocking countries outside its market. The important point is that there is no one-size-fits-all approach to blocking.
You'll find out over time that it's far easier and less time consuming to keep the blade sharp on your chainsaw than it is to keep your scalpel sharp.

If I was simply basing the approach on detecting problem ranges as they hit my sites, then the A approach might make sense. However I don't use that approach. As part of the work I do on hoster statistics and domain name tracking, the IPs for about 3.6 million DNSes have to be checked (simple country level resolution in most cases) and that produces a list of approximately 3.3 million distinct IP addresses each month. That's separate from the surveys of the website IPs of com/net/org/biz/info/mobi/asia/us/etc. The website IP survey is part of a full web mapping project and it does produce a lot of IP data. There may be a vast difference between this relatively industrialised approach and the "block on detection" approach.
The country IP range database can be downloaded for free from Maxmind which easily allows countries to be blocked by country code with just a couple of lines of code, it's very fast, and avoids the bazillion lines of DENY statements.

Maxmind data can be useful for country level blocks but beyond that it has granularity problems.
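For country-level blocks, a sketch of that "couple of lines of code" approach, assuming the PECL geoip extension with MaxMind's free country database is installed (the blocked country code is an example only, not a recommendation):

<?php
$blocked = array('CN'); // example country code
$cc = geoip_country_code_by_name($_SERVER['REMOTE_ADDR']);
if ($cc !== false && in_array($cc, $blocked, true)) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}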
Or would it be better to use a set of deny statements in Apache's httpd.conf or .htaccess?
The iptables option is probably the more instinctive solution because it just drops the packets and doesn't return a 403.
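Side by side, the two options being weighed look roughly like this (198.51.100.0/24 is an RFC 5737 documentation prefix standing in for a real range). In iptables, where the packets simply disappear:

iptables -A INPUT -s 198.51.100.0/24 -j DROP

and the Apache 2.2 equivalent in httpd.conf or .htaccess, which answers with a 403 instead of silently dropping the connection:

Order Allow,Deny
Allow from all
Deny from 198.51.100.0/24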