I'm new to experimenting with robots.txt files and I want to try something.
I've found that optimizing a site's content generally works well across the search engines - except, obviously, not Google, not anymore. :(
I've recently de-optimized a client's index page after it took a hit during the Florida update, in an attempt to make the content seem more organic and less optimized.
Unfortunately, the rankings for that particular page then slipped on the other engines. So I made a clone of the index page and named it index2. I plan to reoptimize index2 back to the way it was and deny googlebot visitation rights to it. Obviously, I don't want to keep googlebot from visiting the rest of the site, though.
Is it possible to write a robots.txt file to keep googlebot from visiting just the index2 page?
Or is this just a stupid idea to begin with? Like I said, I have some room for experimenting here so I'm curious. Thanks in advance!
The other alternative is cloaking. Personally, I wouldn't recommend either approach, unless it's a throw-away domain. I'd simply find a way to integrate a second, different, and useful page that would rank well where the other one does not. Others may disagree.
The main problem I see with this two-page robots approach is that you'll be splitting your incoming links across two pages. Although the engines other than Google don't use the PageRank concept per se, some of them do take "link popularity" and clustering into account. So you'll take a hit from that angle as well. :(
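That said, the literal answer to your robots.txt question is yes: a Disallow line under a Googlebot-specific User-agent block applies only to that bot, and only to the paths you list; with no User-agent: * section, everyone else is allowed everywhere. A minimal sketch, assuming the clone lives at /index2.html (adjust the path to whatever your server actually serves), with a quick sanity check using Python's standard-library urllib.robotparser:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block only Googlebot, and only from /index2.html.
rules = """User-agent: Googlebot
Disallow: /index2.html
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("Googlebot", "/index2.html"))  # False - googlebot is blocked here
print(rp.can_fetch("Googlebot", "/"))             # True  - rest of the site still open
print(rp.can_fetch("Slurp", "/index2.html"))      # True  - other bots unaffected
```

Of course, robots.txt only keeps a well-behaved bot from crawling the page; Google can still list the bare URL if enough links point at it, which feeds right back into the link-splitting problem above.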