
Foo Forum

Let's try this for a month or three...
last recourse against rogue bots
Brett_Tabke




msg:329347
 1:21 am on Nov 19, 2005 (gmt 0)

[webmasterworld.com...]

required login - the real story here...
MSN and Yahoo bots were blocked in October. This does everyone else.

 

bill




msg:329348
 7:55 am on Nov 19, 2005 (gmt 0)

That's going to be an issue for the site search [webmasterworld.com].

pmac




msg:329349
 8:50 am on Nov 19, 2005 (gmt 0)

Yeah, SE traffic is soooo 2001.

DoppyNL




msg:329350
 9:01 am on Nov 19, 2005 (gmt 0)

I've been using such a robots.txt for ages!
Works like a charm :P
And plenty of visitors from search engines :)

trillianjedi




msg:329351
 9:21 am on Nov 19, 2005 (gmt 0)

That's going to be an issue for the site search.

Yeah - that's my only problem with it. Is there anything we can use to replace search facilities for our own use if the SE's are no longer allowed in here?

TJ

bird




msg:329352
 9:27 am on Nov 19, 2005 (gmt 0)

Yeah, that kind of mandates a decent search facility onsite, one would think... Anything like that in the pipeline?

stever




msg:329353
 10:00 am on Nov 19, 2005 (gmt 0)

Since a good proportion of the current worth of WebmasterWorld (as a long-term subscribing member) is in the archives, it would seem a questionable decision to cut off access to it.

Just yesterday I was reading an interesting discussion from 2001, highly relevant to the current Google changes, which I found through a favourite search engine...

Those who cannot learn from history are doomed to repeat it.

vincevincevince




msg:329354
 11:38 am on Nov 19, 2005 (gmt 0)

Brett-

Is that served to everyone? Or just to unrecognised robots / IPs?

<edit (addition)>
If it's to everyone, then I think that is drastic but possibly inspired action.

If it's just to anyone unauthorised, then that would be in line with what has always been done with meta tags, etc, would it not?
</edit>

maccas




msg:329355
 11:50 am on Nov 19, 2005 (gmt 0)

Personally I could live a month or so without a site search, just for the insight we will all gain from this experiment. Maybe do it in two parts though: the first blocking all but Google, and the second all but Yahoo - or maybe do it to SearchEngineWorld?

mattglet




msg:329356
 4:44 pm on Nov 19, 2005 (gmt 0)

You're not feeling the hit of the Rackspace bandwidth charges, are you?

walkman




msg:329357
 5:25 pm on Nov 19, 2005 (gmt 0)

Why, Brett?
To see if they will respect the robots.txt, how fast you will be re-included, or...?

Brett_Tabke




msg:329358
 12:58 am on Nov 20, 2005 (gmt 0)

> why

Seeing what effect it will have on unauthorized bots. We spend 5-8 hours a week here fighting them. It is the biggest problem we have ever faced.

We have pushed the limits of page delivery and banning - IP based, agent based - to avoid the rogue bots, but it is becoming an increasingly difficult problem to control.

> robots.txt

Also - everyone will have to log in to access the site, starting now.

> search

A solution is being tested and worked on. It will probably take at least 60 days for the old pages to be purged from the engines.

trillianjedi




msg:329359
 10:58 am on Nov 20, 2005 (gmt 0)

Seems wrong to me to try and use robots.txt to ban rogue crawlers - the truly rogue crawlers don't obey it anyway.

Does robots.txt not get parsed in sequence - i.e. could you do the inverse of what you did previously: allow the crawlers you specifically want at the top of the file, and have the last record in the file ban everything?
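Something like this is what I mean - just a sketch, with Googlebot and Slurp standing in for whichever crawlers you would actually let in:

User-agent: Googlebot
Disallow:

User-agent: Slurp
Disallow:

User-agent: *
Disallow: /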

tigger




msg:329360
 11:07 am on Nov 20, 2005 (gmt 0)

>a solution is being tested and worked on. It will probably take atleast 60 days for the old pages to be purged from the engines

Wouldn't it have been better to have this system in place before stopping the bots crawling the site and cutting off our only way of searching the forum?

Brett_Tabke




msg:329361
 1:07 pm on Nov 20, 2005 (gmt 0)

> rogue crawlers don't obey it anyway.

That is part of it, and part of the testing. I have found that the majority DO obey robots.txt. However, most of them use weird agent names or browser agent names. The majority certainly do not support cookies.

Agreed tigger, but 12 million page views by rogue bots last week, while we were away at the conference, caused a change in that timeline.

tigger




msg:329362
 1:28 pm on Nov 20, 2005 (gmt 0)

OK, thanks - I can understand that.

vincevincevince




msg:329363
 3:36 pm on Nov 20, 2005 (gmt 0)

Strange - I had been checking Google's cache of the robots.txt to see what Google sees. Now there is no cache... Has it been removed through the URL removal tool? Or perhaps Google is getting a 404 (seems the easiest way!).

Not sure why. (Apologies if this is off-topic.)

phpmaven




msg:329364
 4:50 pm on Nov 20, 2005 (gmt 0)

Brett,

I can certainly agree that you have a major problem on your hands. However, I don't quite understand the logic of banning all bots.

I would suggest that you redirect all requests for robots.txt to a script called robots.php (or whatever), look up the IP in a list of known Googlebot IPs, and then feed Googlebot your usual robots.txt and everybody else the ban-all-bots robots.txt.

You could also log all requests for robots.txt to a DB or log file and go back and analyze which bots are following the disallow directives.

I'm doing something similar on my site and it's working pretty well.
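Roughly what I mean, as a sketch - the Googlebot IP prefix, log path and file names here are just placeholders, and you'd need a mod_rewrite rule (e.g. RewriteRule ^robots\.txt$ /robots.php [L]) to send robots.txt requests to the script:

<?php
// robots.php - decide which robots.txt to serve based on the requesting IP.
// The prefix list, file paths and fallback rules below are placeholders.
$googlebotPrefixes = array('66.249.');   // example prefix only - maintain your own verified list

$ip = $_SERVER['REMOTE_ADDR'];
$isGooglebot = false;
foreach ($googlebotPrefixes as $prefix) {
    if (strpos($ip, $prefix) === 0) {
        $isGooglebot = true;
        break;
    }
}

// log every request so you can go back and see which bots fetched (and obeyed) the file
$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '-';
@file_put_contents('/path/to/robots_requests.log', date('c') . " $ip $ua\n", FILE_APPEND);

header('Content-Type: text/plain');
if ($isGooglebot) {
    readfile('robots-normal.txt');        // the usual rules, served only to Googlebot IPs
} else {
    echo "User-agent: *\nDisallow: /\n";  // everyone else gets the ban-all version
}
?>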

Maybe you are already doing something along those lines, since you mentioned "cloaking".

pmkpmk




msg:329365
 5:06 pm on Nov 20, 2005 (gmt 0)

Foo? Why in "Foo"?

P.S. I would really love my competitors to try what you just did :-)

Brett_Tabke




msg:329366
 10:37 pm on Nov 20, 2005 (gmt 0)

phpmaven, we have been doing EVERYTHING you can think of. This is a part of that ongoing process. We can't require all people to log in and still allow bots onto the site (e.g. pure cloaking - we aren't the New York Times!). Even the random ad scripts we cloak off, to keep bots from seeing session-id-like content, get grumbles from a lot of members. The claims are either that we are selling links (they claimed our links to WestHost and now Rackspace are paid), or that we are cloaking to get higher PR when we block bots from seeing session ids. In other words: a no-win situation for us.

So, we start by banning bots, and then follow immediately with required cookies/logins for everyone. That will stop most of the bots. For the ones it doesn't, we will follow up with session ids and auto-banning in htaccess for page view abuse. Lastly, we will move to captcha logins, and then random login challenges with other captcha graphics requirements.
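The htaccess part is nothing fancy - each offending IP just gets a deny appended automatically, along these lines (the address is only an example):

# auto-banned for page view abuse
Order Allow,Deny
Allow from all
Deny from 192.0.2.57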

> Why in "Foo"?

I hate talking about it at all. It is like talking about security problems in public (given that I believe the majority of bots we see here are owned by members). However, it is better brought up by us than by someone else.

bcc1234




msg:329367
 10:55 pm on Nov 20, 2005 (gmt 0)

Brett, I know the rule about the URLs, but have you tried something like this?

[goudkov.com...]

but based on known IP ranges of good bots?

The idea is that a legitimate user will not request more than X pages in a specified amount of time, so you limit access for the ones that go over that limit.
You should work out the appropriate X from your stats, and make an exception for the known major bot networks.
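Just to sketch the idea - the threshold, storage and response here are purely illustrative, and this would sit near the top of page delivery:

<?php
// rough per-IP page view throttle
$limit  = 300;                                    // max pages per IP per hour - pick X from your own stats
$ip     = $_SERVER['REMOTE_ADDR'];
$bucket = '/tmp/pv_' . md5($ip . date('YmdH'));   // one counter file per IP per hour

$count = is_file($bucket) ? (int) file_get_contents($bucket) : 0;
file_put_contents($bucket, ++$count);

if ($count > $limit) {
    header('HTTP/1.1 503 Service Unavailable');
    header('Retry-After: 3600');
    exit('Too many requests - slow down.');
}
?>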

ogletree




msg:329368
 1:23 am on Nov 21, 2005 (gmt 0)

I'm glad you're going to do captcha. Until you do, what is going to happen is that people will write bots that get a user name and then hit randomly to look like humans. Unfortunately captcha has been broken - I have seen articles where bots can get past it. The good news is that what you are planning will keep away almost all bots. If somebody wants to crawl your site badly enough, there is not much you can do about it. You can just make it harder on them and more costly.

Brett_Tabke




msg:329369
 4:39 am on Nov 21, 2005 (gmt 0)

> bcc1234

Wouldn't work here, where it is not uncommon to have more than 1000 visitors that view more than 500 pages a day, or 200 visitors that view more than 1000 pages in an 8-hour day, or 50 visitors that view more than 2000 pages a day. Diminishing returns on a script like that.

bcc1234




msg:329370
 12:46 pm on Nov 21, 2005 (gmt 0)

It does not have to be just per day - you can try counting on an hourly basis with a fallback daily limit.

You could put a cap on the number of pageviews based on the 95th percentile of your normal users' stats, and have a whitelist system for those who really do read thousands of pages per day.
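A rough sketch of that combination - the numbers, paths and the whitelist address are only examples:

<?php
// hourly limit with a daily fallback, plus a whitelist for known heavy readers
$whitelist   = array('192.0.2.10');       // example address - known good bots / power users
$hourlyLimit = 300;                       // roughly the 95th percentile of normal users
$dailyLimit  = 1500;                      // daily fallback cap

function countHit($ip, $window) {
    // one counter file per IP per hour or per day
    $stamp = ($window == 'hour') ? date('YmdH') : date('Ymd');
    $file  = '/tmp/pv_' . $window . '_' . md5($ip . $stamp);
    $count = is_file($file) ? (int) file_get_contents($file) : 0;
    file_put_contents($file, ++$count);
    return $count;
}

$ip = $_SERVER['REMOTE_ADDR'];
if (!in_array($ip, $whitelist)
    && (countHit($ip, 'hour') > $hourlyLimit || countHit($ip, 'day') > $dailyLimit)) {
    header('HTTP/1.1 503 Service Unavailable');
    exit;
}
?>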

DaveN




msg:329371
 3:29 pm on Nov 21, 2005 (gmt 0)

Damn dangerous game, Brett... what if someone uses the URL removal tool in Google?

DaveN

pmkpmk




msg:329372
 3:49 pm on Nov 21, 2005 (gmt 0)

Don't you think that any such request for WebmasterWorld would raise some sort of extra scrutiny within Google? I guess this forum is very well known... (at least that's what I hope).

DaveN




msg:329373
 3:58 pm on Nov 21, 2005 (gmt 0)

It's automated... all you need to do is add the robots.txt, which Brett has done.

DaveN

MatthewHSE




msg:329374
 7:24 pm on Nov 21, 2005 (gmt 0)

I sure hope you'll keep us posted about any side-effects of this, Brett! ;)

Incidentally, any chance of getting a better site-search now that Google and AllTheWeb won't be indexing new content?

Brian_M




msg:329375
 10:29 pm on Nov 21, 2005 (gmt 0)

Hi Brett,

If you changed the robots.txt file to the following syntax (in the order shown below), wouldn't that allow only Googlebot in and keep all the other good bots out? Rogue bots will completely ignore the robots.txt anyway, but at least the site search would still work:

User-agent: Googlebot
Disallow:

User-agent: *
Disallow: /

Brett_Tabke




msg:329376
 3:59 am on Nov 22, 2005 (gmt 0)

> any side-effects of this, Brett!

Ya, the site is as fast as it has ever been.

Why would you possibly allow Google with a nonstandard robots.txt entry and not allow Jeeves, Yahoo, and MSN?
