I'm relatively new to robots.txt and need some ideas to solve a challenge I have.
I would like to block my entire site except for the index.html in the root directory.
Unfortunately, there are some other files that must live in the root directory as well.
I tried to follow Google's guidelines, which suggest that you can do something like this:
User-agent: Googlebot
Disallow: /
Allow: /sitemap.xml
Allow: /index.html
However, when I test it with Google's own Webmaster Tools, it tells me that access is denied by robots.txt.
Any ideas on what I'm doing wrong, or how I can work around this?
Thanks
Eyal
To permit Googlebot to crawl your site but disallow it from crawling all of your sub-directories, you could perhaps list them all individually. Something like:
User-agent: Googlebot
Disallow: /sub-directoryA
Disallow: /sub-directoryB
Disallow: /sub-directoryC
Hope this helps.
User-agent: Googlebot
Disallow: /a
Disallow: /b
Disallow: /c
...etc.
If you have other files or folders that start with i or s, add additional filters for those; you only need enough of each name so that the two files you want to allow are the only things that don't match.
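For example, a sketch with hypothetical names, assuming the root also holds an /images/ folder and a scripts.js file (robots.txt lines starting with # are comments):

User-agent: Googlebot
# blocks /images/ but not /index.html
Disallow: /ima
# blocks /scripts.js but not /sitemap.xml
Disallow: /sc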
You could use .htaccess to 'physically' block access to anything but index.html.
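Something along these lines; this is a rough sketch assuming Apache with mod_rewrite enabled, allowing only the two files mentioned above (adjust the paths to suit):

RewriteEngine On
# Allow only the root URL, /index.html and /sitemap.xml;
# every other request gets a 403 Forbidden.
RewriteCond %{REQUEST_URI} !^/$
RewriteCond %{REQUEST_URI} !^/index\.html$
RewriteCond %{REQUEST_URI} !^/sitemap\.xml$
RewriteRule .* - [F]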
The files I'm trying to block from the bot need to be accessible by index.html.
The way I architected the site, index.html loads dynamic pages from the Joomla CMS into a dynamic DIV. This way I can control what Google and other bots index on my site, as I don't wish every page of my site to be indexed.
Another reason is that I am able to load pages into the DIV without having to refresh the entire page.
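Roughly like this, as a simplified sketch with hypothetical paths and IDs, not my actual code:

<div id="content"></div>
<script>
// Fetch a Joomla-rendered page and inject it into the DIV,
// so the browser never does a full page reload.
function loadPage(url) {
  var xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      document.getElementById('content').innerHTML = xhr.responseText;
    }
  };
  xhr.open('GET', url, true);
  xhr.send();
}
// Hypothetical Joomla URL; the real site uses its own component paths.
loadPage('/index.php?option=com_content&view=article&id=1');
</script>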
I think setting .htaccess to block access to these files and folders may cause the site to malfunction.