Slurp has a thing for my robots file
mogwai
My site dropped out of Inktomi in August last year and hasn't had a single page spidered by slurp since. I lost around 20% of my traffic as a result of this drop!
These last couple of days slurp has been requesting my robots file around 8 - 10 times a day!
Has anyone else had this experience? Could this mean that my site is on its way back in to Ink?
Slurp has been pretty thirsty of late, a lot of folks have seen heavy activity.
Has it only requested robots.txt, not anything else? If so, you might want to give it a quick check to make sure you're not inadvertently blocking Slurp.
I've been trying to figure out why, but can't find a reason. The same happened to me: from 1/1 till 12/1 they didn't spider the robots.txt at all, or any other pages. And since 13/1 until today they have been spidering the robots.txt every day, some days only once, others 3-4 times, and one day 14 times..........
This is my robots file:

User-agent: *
Disallow: /404.shtml
Disallow: /espanol/404.shtml
Disallow: /svenska/404.shtml
Disallow: /cgi-bin/
Disallow: /scgi-bin/
As well, I saved it with Linux line endings and uploaded it in ASCII mode, or something like that, to make sure it's correct, but they keep on spidering only the robots.txt. I have another post on it named "Why does Inktomi spider me", where you can see I wrote them and they gave me a link to their web-search guidelines.............
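One quick way to double-check a robots.txt like the one above is to parse it with Python's standard-library robot parser and test a few paths. This is only a sketch: the "Slurp" token is an assumption here, and since this file has no Slurp-specific group, Slurp falls under the catch-all `User-agent: *` rules.

```python
# Sketch: test the robots.txt rules quoted above with Python's
# standard urllib.robotparser, to confirm a crawler isn't blocked.
import urllib.robotparser

rules = """\
User-agent: *
Disallow: /404.shtml
Disallow: /espanol/404.shtml
Disallow: /svenska/404.shtml
Disallow: /cgi-bin/
Disallow: /scgi-bin/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# "Slurp" is an assumed user-agent token; it matches the
# catch-all "User-agent: *" group in this file.
print(rp.can_fetch("Slurp", "/index.html"))  # True: ordinary pages are allowed
print(rp.can_fetch("Slurp", "/cgi-bin/x"))   # False: /cgi-bin/ is disallowed
```

If the first call prints True, the file itself is not what is keeping the spider out, and the problem lies elsewhere.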
Try using the robots.txt validator [searchengineworld.com]
helenp
It's been validated by 3 different validators, thanks anyway.
helenp
I just read something very interesting on a Spanish site.
They said, though they don't know why, that when Slurp spiders the robots.txt file the user agent is always like this:
"Mozilla/5.0 (Slurp/si; firstname.lastname@example.org; ["
and when it spiders the rest of the site it's like this:
"Mozilla/5.0 (Slurp/cat; email@example.com; ["
The difference is in Slurp/si vs. Slurp/cat.
I just checked: all my spidered robots file requests were with Slurp/si.
Does that mean there are two different spiders,
and that Slurp/cat maybe hasn't started to work properly yet?
mogwai
My robots.txt is very basic and valid, and it doesn't cause problems for other bots, so I don't think it's keeping Slurp out.
I checked another site's logs and found that Slurp is indexing pages as (Slurp/cat) and fetching robots files as (Slurp/si).
Looks like I need to wait for Slurp/si's big brother to visit. Hopefully it's only a matter of time.
The same happens to me.
See also this post in this forum: [
and this post in another forum: [
and in another forum:
[ ...] abakus-internet-marketing.de
This has been going on for quite a while now.