Just to let you know, since I didn't notice any thread on this ...
When deciding which pages to crawl on a particular host, Googlebot will obey the first record in the robots.txt file with a User-Agent starting with "googlebot". If no such entry exists, it will obey the first entry with a User-Agent of "*".
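To make that precedence concrete, here's a minimal sketch in Python of how that record selection works. This is a hypothetical helper illustrating the behaviour described above, not Google's actual code; the record contents are made up for the example.

```python
def pick_record(records, token="googlebot"):
    """Return the rules Googlebot would obey.

    records: list of (user_agent_value, rules) tuples in file order.
    First pass: the first record whose User-Agent starts with the
    crawler's token wins. Second pass: fall back to the first "*" record.
    """
    for agent, rules in records:
        if agent.lower().startswith(token):
            return rules
    for agent, rules in records:
        if agent == "*":
            return rules
    return None

# Example robots.txt with two records, wildcard first:
records = [
    ("*", ["Disallow: /private/"]),
    ("Googlebot-Image", ["Disallow: /images/"]),
]

# The "Googlebot-Image" record wins for Googlebot, even though the
# wildcard record appears earlier in the file.
print(pick_record(records))  # ['Disallow: /images/']
```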
It's the URI portion of the User-Agent header that's changed, so people using "user-agent cloaking" that relies on matching the entire string may be caught out by this.
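A quick sketch of why full-string matching breaks. The two User-Agent strings below are illustrative (I'm assuming the change was in the URL portion, as described above); the function names are mine:

```python
OLD_UA = "Googlebot/2.1 (+http://www.googlebot.com/bot.html)"
NEW_UA = "Googlebot/2.1 (+http://www.google.com/bot.html)"

def is_googlebot_fragile(ua):
    # Compares the entire string: silently stops matching
    # the moment the URL portion of the header changes.
    return ua == OLD_UA

def is_googlebot_robust(ua):
    # Checks only the product token at the start, so the URL
    # change doesn't matter.
    return ua.lower().startswith("googlebot")

print(is_googlebot_fragile(NEW_UA))  # False: cloaking script misses the bot
print(is_googlebot_robust(NEW_UA))   # True
```

If your cloaking or logging scripts do an exact-string comparison, that's the failure mode to check for.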
I was surprised to see it has been happening since March 3rd, because I didn't notice any change until July 9th.