Home / Forums Index / Search Engines / Sitemaps, Meta Data, and robots.txt
Forum Library, Charter, Moderators: goodroi

Sitemaps, Meta Data, and robots.txt Forum

    
robots.txt - Disallow/Allow
eyalkattan




msg:3596681
 7:58 pm on Mar 10, 2008 (gmt 0)

Hello,

I'm relatively new to robots.txt and need some ideas to solve a challenge I have.

I would like to block my entire site except the index.html on the root directory.

Unfortunately, there are some files that must be in the root directory as well.

I tried to follow Google's guidelines, which suggest that you can do something like this:

User-agent: Googlebot
Disallow: /
Allow: /sitemap.xml
Allow: /index.html

However, when I test it with Google's own Webmaster Tools, it tells me that access is denied by robots.txt.

Any ideas what I am doing wrong, or how I can work around this?

Thanks
Eyal
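For anyone hitting the same wall: many robots.txt parsers of that era apply rules in file order, first match wins, so an Allow line placed after Disallow: / never gets a chance. Python's standard-library parser behaves the same way, which makes it a handy sanity check for rule ordering (the first rule set below mirrors the one posted above):

```python
import urllib.robotparser

def check(lines, agent, path):
    """Parse robots.txt lines and report whether `agent` may fetch `path`."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(lines)
    return rp.can_fetch(agent, path)

# Rules as posted: Disallow first, Allow lines after it.
posted = [
    "User-agent: Googlebot",
    "Disallow: /",
    "Allow: /sitemap.xml",
    "Allow: /index.html",
]

# Same rules with the Allow lines moved before the Disallow.
reordered = [
    "User-agent: Googlebot",
    "Allow: /sitemap.xml",
    "Allow: /index.html",
    "Disallow: /",
]

print(check(posted, "Googlebot", "/index.html"))     # False: "Disallow: /" matches first
print(check(reordered, "Googlebot", "/index.html"))  # True: the Allow line matches first
```

With a first-match parser, simply moving the Allow lines above Disallow: / may be enough; Google's own crawler uses longer-match precedence instead, so results can differ between bots.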

 

cyberdyne




msg:3598252
 9:24 am on Mar 12, 2008 (gmt 0)

Hi eyalkattan,
By using 'Disallow: /' you have told Googlebot NOT to crawl any of your site.

To permit Googlebot to crawl your site while disallowing all of your sub-directories, you should perhaps list them all individually. Something like:

User-agent: Googlebot
Disallow: /sub-directoryA
Disallow: /sub-directoryB
Disallow: /sub-directoryC

Hope this helps.

eyalkattan




msg:3598388
 12:51 pm on Mar 12, 2008 (gmt 0)

Yeah, this does the trick, but I was hoping to avoid listing my site's structure for security reasons.
I wonder why the spec is missing the "Allow" directive - it seems logical to be able to block the entire site and then allow individual files or folders.

wtkad




msg:3598405
 1:11 pm on Mar 12, 2008 (gmt 0)

If you don't want to reveal your site structure, remember that robots.txt matches partial names: you don't need to put the full directory name, just the first letter (except for 'i' and 's', since you're allowing files that start with those).

User-agent: Googlebot
Disallow: /a
Disallow: /b
Disallow: /c
...etc.

If you have other files or folders that start with 'i' or 's', add additional rules for those, using just enough of the name so that the two files you allow are the only things that don't match.
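That shortening trick can be automated. Here is a small sketch (the directory names are made up for illustration) that picks, for each root entry to be blocked, the shortest leading prefix that does not also match one of the allowed files:

```python
def shortest_blocking_prefix(name, allowed):
    """Shortest leading prefix of `name` that is not a prefix of any allowed name.

    Falls back to the full name. Note this still collides if `name` itself
    is a prefix of an allowed name (e.g. blocking 'index' while allowing
    'index.html'); such entries need different handling."""
    for i in range(1, len(name) + 1):
        prefix = name[:i]
        if not any(a.startswith(prefix) for a in allowed):
            return prefix
    return name

def disallow_lines(entries, allowed):
    """One Disallow line per blocked root entry, shortest prefixes, deduplicated."""
    prefixes = sorted({shortest_blocking_prefix(e, allowed)
                       for e in entries if e not in allowed})
    return ["Disallow: /" + p for p in prefixes]

# Hypothetical root directory listing:
entries = ["admin", "backup", "cgi-bin", "images", "scripts",
           "index.html", "sitemap.xml"]
allowed = {"index.html", "sitemap.xml"}
for line in disallow_lines(entries, allowed):
    print(line)
```

For this listing it emits Disallow: /a, /b, /c, /im, /sc - exactly the one-letter rules above, with "images" and "scripts" lengthened just enough to spare index.html and sitemap.xml.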

vincevincevince




msg:3598432
 1:34 pm on Mar 12, 2008 (gmt 0)

You could use .htaccess to 'physically' block access to anything but index.html
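In case it helps, such a block could look something like this in the root .htaccess - an untested sketch using Apache 2.2-style access directives, which assumes AllowOverride permits them:

```apache
# Sketch only: deny everything in this directory,
# then re-open the two public files.
Order Deny,Allow
Deny from all

<FilesMatch "^(index\.html|sitemap\.xml)$">
    Allow from all
</FilesMatch>
```

Unlike robots.txt, this blocks all visitors, not just crawlers, so it only fits if nothing else needs to be reachable.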

eyalkattan




msg:3599139
 1:09 am on Mar 13, 2008 (gmt 0)


User-agent: Googlebot
Disallow: /a
Disallow: /b
Disallow: /c
...etc.

This sounds like a nice workaround actually. I'll definitely give it a try.

Does this also apply to other bots, or just Google?
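For what it's worth, Disallow is part of the original robots.txt convention and is honoured by all well-behaved crawlers, not only Googlebot; to cover them all, the same prefix rules can be repeated under the wildcard user-agent. (Allow, by contrast, was a later extension that some older bots may ignore.) A sketch:

```
User-agent: *
Disallow: /a
Disallow: /b
Disallow: /c
```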

eyalkattan




msg:3599140
 1:14 am on Mar 13, 2008 (gmt 0)

You could use .htaccess to 'physically' block access to anything but index.html

The files I'm trying to block from the bot need to be accessible by index.html.
The way I architected the site, index.html loads dynamic pages from the Joomla CMS into a dynamic DIV. This way I can control what Google and other bots index on my site, as I don't wish every page of my site to be indexed.
Another reason is that I am able to load pages into the DIV without having to refresh the entire page.

I think setting .htaccess to block access to these files and folders would cause the site to malfunction.
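If the concern is only crawlers, while keeping the files reachable from index.html in a visitor's browser, one middle ground is to block by User-Agent instead of blocking everyone - again an untested Apache sketch, assuming mod_rewrite is available, with an illustrative (not exhaustive) bot list:

```apache
RewriteEngine On
# Let the two public files through for everyone.
RewriteCond %{REQUEST_URI} !^/(index\.html|sitemap\.xml)$
# Refuse the request only when the visitor identifies as a known crawler.
RewriteCond %{HTTP_USER_AGENT} (Googlebot|Slurp|msnbot) [NC]
RewriteRule .* - [F]
```

Bots that spoof a browser User-Agent would still get through, so this is a complement to robots.txt rather than a replacement.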


© Webmaster World 1996-2014 all rights reserved