Since the old URLs will still work, we don't want Googlebot to come back and end up indexing the same page twice, so I'm thinking of using a robots.txt that looks like this:
User-Agent: Googlebot
Disallow: /*.cfm?*
The goal is that when Googlebot returns through a previously indexed URL, it will drop the old one and pick the page up again under the new format (.cfm/).
Has anyone done this, and if so, how well has it worked?
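One way to sanity-check the rule before relying on it is to translate the Googlebot-style pattern into a regular expression and run a few old-style and new-style paths through it. This is only a rough sketch: the helper name and the sample URLs below are made up for illustration, not anything Google publishes.

import re

def robots_pattern_to_regex(pattern):
    """Translate a Googlebot-style robots.txt path pattern into a regex:
    '*' matches any run of characters, a trailing '$' anchors the end,
    and everything else matches literally as a prefix of the URL path."""
    anchored = pattern.endswith("$")
    body = pattern[:-1] if anchored else pattern
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in body)
    return re.compile(regex + ("$" if anchored else ""))

rule = robots_pattern_to_regex("/*.cfm?*")

# Hypothetical old (query-string) and new (.cfm/) URL paths
samples = [
    "/product.cfm?id=42",  # old style -- should be blocked
    "/product.cfm/42/",    # new style -- should stay crawlable
]
for path in samples:
    blocked = rule.match(path) is not None
    print(path, "-> blocked" if blocked else "-> crawlable")

If that prints "blocked" only for the query-string path, the pattern is doing what you describe.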
Patterns must begin with / because robots.txt patterns always match absolute URLs.
* matches zero or more of any character.
$ at the end of a pattern matches the end of the URL; elsewhere $ matches itself.
* at the end of a pattern is redundant, because robots.txt patterns always match any URL which begins with the pattern.
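To make the last two rules concrete, the patterns can be translated by hand into regular expressions ('*' becomes '.*', a trailing '$' becomes an end anchor) and tested against a made-up URL. This is just an illustrative sketch of the matching behaviour, not Googlebot's actual implementation:

import re

url = "/page.cfm?id=7"

# "/*.cfm?*" and "/*.cfm?" translate to the same prefix match,
# so the trailing '*' adds nothing:
print(bool(re.match(r"/.*\.cfm\?.*", url)))  # True
print(bool(re.match(r"/.*\.cfm\?", url)))    # True

# A '$' anchors the pattern at the end of the URL, so it would stop
# matching URLs that carry anything after the '?':
print(bool(re.match(r"/.*\.cfm\?$", url)))           # False
print(bool(re.match(r"/.*\.cfm\?$", "/page.cfm?")))  # True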
Thanks