The Whitelist
Key Elements
tangor




msg:4640883
 12:13 am on Jan 29, 2014 (gmt 0)

Whitelisting access to a web site is easier than blacklisting. One is who you let in, the other is endless whack-a-mole. To start:

.htaccess to allow all comers access to robots.txt

robots.txt allows a short list (Bing, Google, Yahoo, for example); all others 403

UAs allowed, a slightly longer list, but still very limited. No match: 403

Getting granular: referer keywords (this is a blacklist, but still pretty short): 403

These are my basic tools. Are there others, or perhaps refinements? And where does whitelisting fail? And how do you poke holes for desired exceptions? (I know HOW, but have yet to really find a need to do it.)
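A bare-bones sketch of the first three items, for the record (the UA names, keywords and hostnames below are placeholders, not my real lists, and untested as typed):

# robots.txt -- short allow list, everything else disallowed
User-agent: Googlebot
User-agent: bingbot
User-agent: Slurp
Disallow:

User-agent: *
Disallow: /

# .htaccess -- robots.txt stays open to all comers; everything else
# must pass the UA whitelist and the referer-keyword check
RewriteEngine On

# 1. anything may fetch robots.txt
RewriteRule ^robots\.txt$ - [L]

# 2. UA whitelist: no match, 403 (placeholder list, far from complete)
RewriteCond %{HTTP_USER_AGENT} !(Firefox|Chrome|Safari|Opera|MSIE|Googlebot|bingbot|Slurp) [NC]
RewriteRule .* - [F]

# 3. referer keyword blacklist, short and ugly (placeholder keywords)
RewriteCond %{HTTP_REFERER} (casino|pharma|warez) [NC]
RewriteRule .* - [F]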

These are concepts I've been using for a few years, but I suspect there are other methods, so this is a request for discussion on whitelisting in general. I went down the blacklisting hole for a number of years--with all the subsequent heartburn and agitation--until making the paradigm shift from taking out the bad to allowing the good. And though I'm not unhappy with the results so far, I wonder if the whitelist might be losing some potential traffic.

Your thoughts?

Thanks.

 

lucy24




msg:4643370
 8:04 pm on Feb 7, 2014 (gmt 0)

Getting the client to execute some javascript and verifying it happened is a good idea.

... and now we're onto the "would you be able to get into your own site?" parallel discussion.

Do you allow visitors using proxies?
Do you allow visitors using anonymizers?
Do you allow visitors without javascript?
Do you serve non-page files to requests without a referer?

If no, you've excluded a certain number of WebmasterWorld members.

dstiles




msg:4643388
 9:26 pm on Feb 7, 2014 (gmt 0)

> execute some javascript and verifying it happened is a good idea

I browse with JS turned off - always except for known sites such as my bank. This is a NoScript default anyway. I would not enable JS just to prove I'm human (although I assure you I am!).

Bots can already execute JS, so in any case that would prove little.

Once again, this is an EXTRA security feature on SOME sites but by no means a foolproof test.

Lucy:

Some (eg education, yahoo, some google, a few other known ones).
Not if I can detect them.
Yes: on my sites JS is an optional navigational helper or form-checker but not mandatory.
Some: first page access often has no referer but I block non-referers to forms.

keyplyr




msg:4643407
 12:37 am on Feb 8, 2014 (gmt 0)

Some: I filter through allowed range/UA list.
No
Yes but they get a "Please enable Javascript to use this website" alert and I'm afraid JS/Ajax is absolutely necessary at my personal site.
No: no images, scripts, PDFs or CSS without my site as the referrer.
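In .htaccess terms that last one is ordinary hotlink protection -- something in this spirit, with example.com standing in for the real host (a sketch, not my actual rules):

# requests for non-page files must carry my site as the referrer
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com [NC]
# (adding: RewriteCond %{HTTP_REFERER} !^$ would be the gentler variant
#  that lets blank referrers through)
RewriteRule \.(gif|jpe?g|png|js|css|pdf)$ - [F,NC]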

trintragula




msg:4643411
 1:13 am on Feb 8, 2014 (gmt 0)

Checking for javascript is subject to the same mechanism/policy framework described above: if it's prone to false positives (in this case because some humans disable it), and you care about that, then you either need to not use the (JS detector) mechanism, or provide some means for visitors to overcome it (e.g. by CAPTCHA or a Login).

A policy of sending a 403 to anyone who declines to use javascript would be uncompromising, though I dare say a lot of sites out there are sufficiently unusable without javascript that they might as well send a 403. See @moTi above, who highlighted the mechanism. (Keyplyr slipped in another example while I was writing this)

One alternative policy is sending an explanatory page, as Lucy mentioned for old browsers (and Keyplyr again...). Then a human visitor can make an informed choice about whether to enable javascript, or move on.

There are many mechanisms and many policies. Some pairings will work better than others. I think the distinction is useful, because it allows us to talk about bot detection mechanisms without getting diverted by judgements about when they're best used, and what action to take.

keyplyr




msg:4643413
 1:21 am on Feb 8, 2014 (gmt 0)

Well checking for JS to determine bot or human is a flawed & futile exercise. Half the bots I see support JS.

moTi




msg:4643414
 1:27 am on Feb 8, 2014 (gmt 0)

I hear reports that Google are already running javascript on pages they visit. Doubtless some of the other bots will follow suit with headless browsers if they haven't already.

I can tell you that on my sites googlebot.com and msn.com are the only bots that execute my (admittedly complicated) javascript so far, which I find rather astonishing: with headless browsing it shouldn't be a big problem for others to follow, but they haven't for years. Still, it's just a matter of time, that's right.

mouseMove + mouseOver, or a CSS/media-query content-overlay style, and touch (ontouchstart or onmsgesturechange) for mobile UAs -- think like a human... triggered by an unusual request not from.... (say you know where your average visitor is from).

Yeah, I've experimented in that direction as well -- namely at the time when Google had that great idea to let certain bots take a screenshot of every page visited and display it alongside the SERPs. You remember: they made it impossible to opt out unless you went without the text snippet, which was plain extortion in my book. Their screenshot tool acted completely like a rogue bot. Anyway, banning it without blacklisting was a good exercise, because in the process you became aware that their coding team had heavily optimized against webmasters to force the screenshotting down our throats no matter what. I remember that this damn thing was even immune to that mouse-gesture test, so it was pretty hard to block. I finally got it with a kind of delayed trigger, IIRC -- rather suboptimal. Most brutal bot ever.

BTW, that was the moment when I began to hate Google. Don't be evil, haha... well, obviously they have abandoned that screenshot nonsense in the SERPs by now.

trintragula




msg:4643464
 10:16 am on Feb 8, 2014 (gmt 0)

Well checking for JS to determine bot or human is a flawed & futile exercise. Half the bots I see support JS.


I think if there were any completely successful way of stopping bots, then only completely successful methods would be interesting.
In the absence of that, it seems reasonable to use a combination of methods to improve coverage.
For that reason, I'm reluctant to dismiss methods that only catch a subset, unless the subset is small.
This is particularly true with methods that hold out some hope of stopping botnets using browser UAs, which are otherwise hard to spot.

I'm interested to hear about people's experiences with referers: do browsers reliably send them? Do bots often not?
This is not an option I've explored much, though I have been partially monitoring them. I'm presuming this is how some image hotlink blockers work.

keyplyr




msg:4643489
 1:38 pm on Feb 8, 2014 (gmt 0)


I'm interested to hear about people's experiences with referers: do browsers reliably send them? Do bots often not?


IMO referrers are pretty much useless for whitelisting as well. All modern browsers offer a means of hiding where they came from & almost all bots show no referrer either.

I agree that no one specific method stands alone as completely effective, but only as one layer in a comprehensive filter.

Angonasec




msg:4643506
 4:38 pm on Feb 8, 2014 (gmt 0)

"almost all bots show no referrer either."

Ironically, referrer-spam in my access-logs has been mounting steadily for years, all fake referrers naturally, and, when legible, mostly obscene.

Widening the thread topic: although blacklisters are being swamped by an incoming tide of sewage... imagine how increasing usage of IPv6 will soon finish that defence off.

Then it will be farewell to the internet for many site owners I'm sure.

trintragula




msg:4643508
 4:55 pm on Feb 8, 2014 (gmt 0)

On the plus side, IPv6 is supposed to put an end to NAT, so it may marginally help in that respect. It will also reduce the need to reuse IP addresses in other ways, or to allocate addresses quite so randomly. So I'm not sure whether it will make things worse or better.

With regard to referers: it's another possible detection mechanism - if you have more policies available for dealing with matches than either letting them in or sending them a 403, then you can probably make some use of it, e.g. by challenging visitors that come without an appropriate referer to verify that they're not a bot. The trick as usual would be finding a method that stops bots without annoying your real visitors. I definitely think there's some mileage here.

I'm not so pessimistic about this.

lucy24




msg:4643538
 6:54 pm on Feb 8, 2014 (gmt 0)

Referers tend to be more site-specific than simpler checks such as UA or IP; you can't just download someone else's boilerplate and paste it in.

If I were based in eastern Europe, I would not be able to slap a global block on any-and-all .ua or .ru referers (with search-engine exemption).

If I had top-level pages I would not be able to block referers in the form "example.com/blahblah.html" (a form that does not exist and must therefore be bogus).

If I had deep internal links I would not be able to block requests for /directory/blahblah.html that gave example.com/$ as referer. (Very common form of fake referer.)

If I didn't have a domain-name-canonicalization redirect I would not be able to block requests giving "example.com" (without www.) as referer.

Those are all blacklists, but referer-based lockouts work in either direction. "A referer CAN be x, y, or z; a referer can NOT be a, b or c." If you have large files, a block on auto-referers for those files is a good safety measure.

I don't currently use it for anything, but I've still got code in place that redirects certain requests to a page that says you're only allowed here if you came in by a certain route, or you've been here before.

I am always surprised to hear of people who don't have a hotlinking function in place. I think it was one of the very first things I ever put in an htaccess file (after, duh, -Indexes). Some human browsers don't send a referer at all; it's rare for them to send a fake one.
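For illustration, two of those in .htaccess form, with example.com as the stand-in domain (a sketch, not my actual rules):

# a same-site referer without the www. can't happen once the domain is
# canonicalized, so it has to be fake
RewriteCond %{HTTP_REFERER} ^https?://example\.com(/|$) [NC]
RewriteRule .* - [F]

# geography permitting: refuse .ru / .ua referers, with a search-engine
# exemption (yandex is just the obvious example here)
RewriteCond %{HTTP_REFERER} ^https?://[^/]+\.(ru|ua)(/|$) [NC]
RewriteCond %{HTTP_REFERER} !yandex\. [NC]
RewriteRule .* - [F]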

dstiles




msg:4643562
 8:17 pm on Feb 8, 2014 (gmt 0)

Angonasec - farewell to the internet

Yesterday I read a Threatpost blog entry that quoted the respected Kaspersky virus researcher Costin Raiu as saying:

I operate under the principle that my computer is owned by at least three governments,

and...

"Email? Broken. Mobile communications? Broken. Web traffic? Really broken. Crypto? So, so broken"

I've been saying for years that the internet was badly designed from the very start and that it is most certainly badly broken. My hope is that by the time it becomes unusable I will have retired. :(

keyplyr




msg:4643567
 9:45 pm on Feb 8, 2014 (gmt 0)


Although blacklist-ers are being swamped by incoming tide of sewage... imagine how increasing usage of IPv6 will soon finish that defence off.

Why? You can block IPv6 the same way as IPv4.

trintragula




msg:4643571
 10:03 pm on Feb 8, 2014 (gmt 0)

By 'auto-referer' do you mean a request for a url that sends the same url as the referer? I've seen quite a few of those in passing...
I do have top-level content, and at this point I would be reluctant to move it lower, but I can certainly see the sense/value in that, also given the number of bots that come by that are interested in only the home page.
I'm finding this thread really useful - lots of ideas to think about.

@dstiles
broken internet: I think you could make the same argument about society in general. The internet just mirrors that. When things get bad enough, something gets done.
I think ultimately we as webmasters can't actually fix the bot problem globally - it's going to require changes in a different domain. But that's another topic. The best we can do for now is fight them at the castle gates.

tangor




msg:4643592
 11:16 pm on Feb 8, 2014 (gmt 0)

My move to whitelisting was primarily to make sure my site(s) are available to USERS and not being hammered by bots. I generally use hosting that is robust enough to handle both... but why pay for the traffic that doesn't count?

Whitelisting is the front end. Blacklisting is the clean up. Logs are the data source. Since I moved to whitelisting I've dramatically cut down on the non-useful traffic as a 403 is very responsive (and small) and many potential scrapers have been thwarted.

At the same time I freely admit I've probably lost some potential visitors... and that's what this discussion is about: How to make whitelisting better!

The internet is a work in progress. I'm not sure what might come next, but until there is a change, this is what we have and, for all its flaws, is still pretty flexible and open.

lucy24




msg:4643603
 11:46 pm on Feb 8, 2014 (gmt 0)

By 'auto-referer' do you mean a request for a url that sends the same url as the referer? I've seen quite a few of those in passing...

Yes, I don't know if it has an official name. There's no way to block these globally via RewriteRules or similar, though they're easy to recognize after the fact. Closely related is the robot that gives your root as the referer for everything. Both obviously are intended to circumvent rules that look at missing or inappropriate referers.

As far as I know, auto-referers never occur in nature. (Maybe, maybe with a mail or form-fill type of function that POSTs back to the same page?) If you refresh a page, it comes through as a referer-less request; if you click an in-page link (a same-page anchor), it isn't a request at all.

My current compromise is to make individual rules for a half-dozen or so very large files. This is subjective. On my site, >200k for an html file counts as "very large"; I don't think I even have anything in the 50-200k range, so it's a clean cutoff. If you have significant numbers of large files, it may be worth routing them all via a quick php script that checks the referer.
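One of those individual rules, roughly -- the filename is hypothetical and example.com stands in for the real domain:

# 403 the big file when the referer is the big file itself
RewriteCond %{HTTP_REFERER} ^https?://(www\.)?example\.com/big-file\.html$ [NC]
RewriteRule ^big-file\.html$ - [F]

The closely related root-as-referer robot gets the same treatment, with the bare root URL in the RewriteCond instead.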

Oops. Sorry, tangor, we're not really talking about whitelisting are we?

tangor




msg:4643605
 11:59 pm on Feb 8, 2014 (gmt 0)

Oops. Sorry, tangor, we're not really talking about whitelisting are we?

lucy24, we truly are... as blacklisting is part of whitelisting (and I, too, have large files, some up to 4 MB, which if open to bots would hammer my site's bandwidth horrifically).

A true whitelist (these and no others) is not quite webmaster suicide, but dang close. The web changes, the players change, hour by hour, day by day... logs are very revealing! There's got to be a balance to all this.

I just think whitelisting on the front end makes more sense than endless whack-a-mole blacklisting... which is an order of magnitude more work and never ends.

So, yeah, I'd like to keep focus on whitelisting if we can... and yet every comment so far expresses just how difficult that is to accomplish. And in the end we all might learn something new.

lucy24




msg:4643638
 2:39 am on Feb 9, 2014 (gmt 0)

Some of it's technology. Take mod_authz-thingy in Apache 2.2, or equivalents in earlier versions. (I haven't yet been able to explore the <If ...> options in 2.4 so I don't know how many new opportunities this creates.) You've basically got two toggles:

Allow some finite list-of-IPs (or, in a limited way, environmental variables)
Within this list, deny certain substandard applicants
OR
Lock out list-of-IPs, et cetera, as above
Within this list, admit certain exceptions.

And that's all. What you really want is nests:

Unconditionally deny such-and-such vast IP ranges (if you've got non-portable widgets like wilderness, that means everything non-ARIN)
Poke a hole for your good friend Boris who's got a floating IP within a /22 sector
Apply a further lockout to the botrunner who operates out of the same IP range but fortunately hasn't mastered the humanoid UA-plus-referer package
But don't lock out poor Boris, who is stuck with MSIE 5 and won't be able to change any time soon.

OR:

In addition to your A-level blocks, lock out that nasty server farm down the street.
But poke a hole for the good and worthy robot whose aims you approve of though it isn't yet big enough to have its own IP range.
Do not, however, allow the same favors to the good robot's nephew who shares a server and UA but was absent the day they taught "Disallow".*

Sure, I could put together a RewriteRule with conditions to make each of those happen. But now you're looking at sending each request through a gauntlet of evaluations, to the point where this in itself is wearing out the server.


* Surely someone, somewhere, has a custom error page that says simply "Which part of 'Disallow' did you not understand?"
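For anyone who hasn't met the two toggles, the whitelist flavor in 2.2 looks roughly like this (documentation-range IPs standing in for real ones):

Order Allow,Deny
# stand-in for Boris's floating range
Allow from 192.0.2.0/24
# the botrunner camped inside it
Deny from 192.0.2.128/25
# everything not Allowed is denied by default -- and that's the whole nest.
# Re-admitting Boris's own /29 inside that Deny is the third level that 2.2
# can't express without falling back on RewriteCond chains.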

wilderness




msg:4643640
 2:54 am on Feb 9, 2014 (gmt 0)

Unconditionally deny such-and-such vast IP ranges (if you've got non-portable widgets like wilderness, that means everything non-ARIN)
Poke a hole for your good friend Boris who's got a floating IP within a /22 sector


And here ya are, talking that binary trash again ;)

Last night I was contacted by a friend in Norway (who two years ago didn't even know what an IP # was) explaining that perhaps his range had changed and could I make an adjustment.

His Class C changed by a rather large range, which isn't unusual for RIPE IP's, and I added an exception for a /29. Since his other IP lasted nearly two years (quite unusual for a RIPE IP), I'm hoping this one will last at least as long.

incrediBILL




msg:4643641
 2:57 am on Feb 9, 2014 (gmt 0)

My move to whitelisting was primarily to make sure my site(s) are available to USERS and not being hammered by bots. I generally use hosting that is robust enough to handle both... but why pay for the traffic that doesn't count?


That's physically impossible.

The bots crawl your site whether you think they are or not because they look like almost any other browser with a few exceptions. Nothing you can do to your site, short of installing a bot blocker, will discourage bots and even then with all the 403s they just keep knocking. I set up a site that has no human traffic whatsoever just to show what starts to come knocking but attempts to hide as human traffic.

Just to make a point, which I can't do easily any other way, I'm going to break a cardinal rule of WebmasterWorld and link to one of my own blog posts, because there's a report I ran which is embedded in the middle of the blog post showing what kind of stuff comes from data centers:
[incredibill.com...]

The sneaky bots are color coded in the report to stand out so you can see the stuff trying to hide that is 100% bot and would slip right past the undiscerning eye.

It's too big to post here, and reformatting it for WebmasterWorld in a meaningful way would take hours, so I'm not even going to try; hence the link. Please forgive this transgression, as I think it's more important to get this data out there for others to see than a little rule-bending on this occasion.

Hope you find it useful.

tangor




msg:4643644
 3:19 am on Feb 9, 2014 (gmt 0)

My move to whitelisting was primarily to make sure my site(s) are available to USERS and not being hammered by bots. I generally use hosting that is robust enough to handle both... but why pay for the traffic that doesn't count?



That's physically impossible.

The bots crawl your site whether you think they are or not because they look like almost any other browser with a few exceptions.

My quote above remains true, just dang difficult... which is what we are all fighting.

And the takeaway from the above report (which should justify the rule-bending) is that whitelisting works. With blacklisting, too.

Blocking server farms, etc. is part of whitelisting in that they are not allowed in bulk... and the whitelist of what I do allow is the other side.

What we do not want to do is whitelist or blacklist ourselves away from the evolving web, new audiences, etc. However, along the way that evolution will produce more bad guys. This will not stop, but we can't give up either.

wilderness




msg:4643645
 3:25 am on Feb 9, 2014 (gmt 0)

The sneaky bots are color coded in the report to stand out so you can see the stuff trying to hide that is 100% bot and would slip right past the undiscerning eye.


Bill,
I cringe every time this topic appears.

With all due respect, there are different ways to skin a cat.

Many of the color-highlighted UA's in your list offer simple solutions in black-listing (and have been expressed here in SSID for some years); however, just because a simple black-listing solution exists, that doesn't mean one method is the only way to be effective, or that other methods are ineffective.

Rather, it just translates to different-strokes-for-different-folks.

Don

incrediBILL




msg:4643647
 3:33 am on Feb 9, 2014 (gmt 0)

Blocking server farms, etc. is part of whitelisting in that they are not allowed in bulk... and the whitelist of what I do allow is the other side.


Let's get the terms right:

Blocking the data centers is my FIREWALL

Punching holes in the firewall to allow certain crawlers is the WHITELIST, which could be just a user agent, but I prefer to restrict to a range of IP's when possible.

I don't do anything I'd call a specific blacklist, but I have several FILTERS that I apply to headers, user agents, etc., that examine certain details and kick out anything that falls outside the norm.
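To make the whitelist half concrete: the entry for a big crawler is really the pair UA plus IP range, and anything claiming that UA from outside the range gets bounced. A rough .htaccess rendering of the idea, not how my blocker actually does it (66.249.64.0/19 is the range most of us see Googlebot coming from, but verify with reverse DNS rather than trusting a regex):

# "Googlebot" claims from outside Google's usual crawl range get a 403
RewriteCond %{HTTP_USER_AGENT} Googlebot [NC]
RewriteCond %{REMOTE_ADDR} !^66\.249\.(6[4-9]|[78][0-9]|9[0-5])\.
RewriteRule .* - [F]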

tangor




msg:4643649
 3:50 am on Feb 9, 2014 (gmt 0)

Agree the terminology should be codified... and that's part of this discussion. Your Firewall is my whitelist, ie, who gets through. Agreed?

It is very easy to control the front door. It is who we LET in that gets to be the interesting part.

incrediBILL




msg:4643652
 3:54 am on Feb 9, 2014 (gmt 0)

Many of the color-highlighted UA's in your list offer simple solutions in black-listing


Sorry Don, but in this modern world where Google penalizes people for literally anything and everything you can't wait until the damage is done before blocking something. Blacklisting is hardly a simple solution: the damage has already been done by the time you find something new to blacklist, and it's never-ending whack-a-mole. It's what I call a no-win scenario, which is why I stopped doing it out of frustration after only a couple of months, many years ago.

By the time you find them to blacklist, the damage is done; the cows are out of the barn and you have to herd them all back in again. Next you need things like Google Alerts, Copyscape, etc. to find out what they did with your data, and then it's off to the races with link disavows, C&Ds, DMCAs and lawyers.

It's not different strokes really, it's proactive vs. reactive and I prefer to spend my time doing more productive things.

Not that my methods are foolproof -- damage still happens -- but it's very minimal and takes up much less of my time than it once did. It's so minimal it's almost negligible, and I can go many months without checking whether some scraper is doing bad deeds with my data.

Besides, you block the entire WORLD except the whitelisted USA IP ranges, so you've already blocked most of the data centers and bad countries to start. I know you're smirking now. Stop it. I listened to you and did that for a USA-only client with worldwide spam problems recently; it worked like a charm.

and that's part of this discussion. Your Firewall is my whitelist, ie, who gets through. Agreed?


No, my firewall is the firewall, NOBODY GETS THROUGH, unless... they're on the WHITELIST!

Think of the firewall as the bouncer outside the club: if you're not on that list on the clipboard (the whitelist), you don't get past the velvet ropes. That's right, the perfunctory with a clipboard (BBT fans?) will keep you out.

lucy24




msg:4643659
 4:58 am on Feb 9, 2014 (gmt 0)

His Class C changed by a rather large range, which isn't unusual for RIPE IP's, and I added an exception for a /29

If I lived in a RIPE country I would never get through to your site, because my ISP uses three different A segments. In general, my IP only changes when I turn off the modem-- which I never do if I can help it, but things do happen every few months. I could hardly expect a website to poke a fresh hole-- and close up the old one-- several times a year.

Your Firewall is my whitelist, ie, who gets through. Agreed?

Uh-oh. To me a firewall is a physical thing. That is, ahem, not literally a physical thing-- unless your server is located in a very low-rent neighborhood-- but it's one type of technology. Whether the firewall is configured as a whitelist or a blacklist doesn't change its essential firewallness.

tangor




msg:4643666
 5:26 am on Feb 9, 2014 (gmt 0)

No, my firewall is the firewall, NOBODY GETS THROUGH, unless... they're on the WHITELIST!


If we are to agree on terminology, we also have to agree with what others have to say. I said that: firewall = tangor's whitelist (i.e. who gets in).

You are doing it one level higher than I am but we are doing exactly the same thing: who we let in.

If we don't agree on that, then the conversation will continue to be muddied.

keyplyr




msg:4643668
 5:37 am on Feb 9, 2014 (gmt 0)

it's proactive vs. reactive

I disagree. Once again this is not a competition, not one or the other. There cannot be an effective whitelist defense without blacklisting. They are symbiotic.

People use stealth downloading tools that are undetectable until after the fact. There's lots of caching going on without leaving a clue. My site and those of a couple of clients have a very large international following, too dynamic to keep track of. There are many more examples where whitelisting fails and blacklisting is needed.

Of course some things can never be predicted or defended against. I just watched a friend's site (I saw his logs) get hammered by a botnet coming from a dozen major ISPs -- exactly the ones that would be whitelisted. No bad headers, hits were never too rapid, no unusual behavior, yet they managed to fully scrape a 2,000-page site w/ scripts/css/images.

wilderness




msg:4643669
 5:48 am on Feb 9, 2014 (gmt 0)

Sorry Don, but in this modern world where Google penalizes people for literally anything and everything you can't wait until the damage is done before blocking something.


Bill, Google may penalize me all they wish; however, they'd be cutting off their nose to spite their face.
The widget content on my sites doesn't exist on any other websites, with only a few exceptions.
Most visitors are aware that my sites' content is less than 1% of my total archived data, and should they plagiarize the website data, they would lose any possibility of inquiries into the non-web-active data. (It's the same for images.)

It's not different strokes really, it's proactive vs. reactive and I prefer to spend my time doing more productive things.


When a noob begins the process of black-listing I agree entirely; however, over time old reactive solutions become proactive. These days (except in very rare instances), I'm merely adding custom solutions in order to force the hand of widget visitors who refuse to use the contact link at the bottom of every page.

Besides, you block the entire WORLD except the whitelisted USA IP ranges,


That's what you believe.

because my ISP uses three different A segments.


lucy, there are multiple major North American providers that do this, however the same segments are consistently used.
My own provider's IP assignment has been dynamic for more than five years, and except for brief troubled periods the identical IP is re-assigned once a temporary issue is resolved.

lucy24




msg:4643671
 7:16 am on Feb 9, 2014 (gmt 0)

however the same segments are consistently used

I've never had the same IP twice. Sometimes several consecutive ones are close together-- same a.b.c. --but generally when they change, it's utterly random.

:: detour for multi-file search ::

In the past year (since January 2013) I've had at least 20 IP addresses, with durations lasting from a few days up to a month and a half. In addition to the 66.120.0.0/13, 67.112.0.0/12 and 69.224.0.0/12 ranges that I recognize, I spent several weeks at a 64.160 address that I didn't even realize belonged to my ISP. And once in a blue moon I do meet a botnet from some of those same ranges; if I were Ukrainian I'd probably have blocked me by now :)

Things do level off. I am not prepared to believe that there are fewer robots active today than there were 2 or 3 years ago. But my current logs show a lot less Robot Color than when I started keeping track. Most robots simply aren't that imaginative. They find a congenial host and some generic UA and stick with it-- and so eventually they get blacklisted.

I have to say it helps when not one but two major browsers go through new version numbers at a ridiculous pace, the way they've been doing for the last year or two. Humans tend to auto-upgrade, so the remaining robots claiming to be Netscape 2.0 when all the world is Netscape 13.7 stick out like sore thumbs.
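The extreme case is the UA that is nothing but "Mozilla/4.0" or "Mozilla/5.0" with no parenthetical at all. As far as I've ever seen, no real browser sends that, so a narrow rule (a sketch, untested) catches those without touching anyone's genuine museum-piece MSIE:

# a bare "Mozilla/x.y" and nothing else: script, not browser
RewriteCond %{HTTP_USER_AGENT} ^Mozilla/[0-9]\.[0-9]+$
RewriteRule .* - [F]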

tangor




msg:4643672
 7:36 am on Feb 9, 2014 (gmt 0)

so the remaining robots claiming to be Netscape 2.0 when all the world is Netscape 13.7 stick out like sore thumbs.


Shhhh! lucy24, don't give away the other side of whitelisting (which, as we all know, is who gets in).

UA blocks are not perfect, of course, but they are key to the total defense: if the bar isn't met, no entry. Works for me.
