Thanks, that's kinda what we thought.
Our main concern is this: when Google comes today, it sees site A. When it comes again, there's a 50% chance it sees site B. No big deal; it may just think we launched a new site. But when it comes yet again, it may see site A again. That's the scenario that worries us. Does it look like duplicate content? Is it an indication of black-hat stuff?
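To make the scenario concrete, here's a minimal sketch of the per-visit 50/50 split we're describing (the names are just for illustration):

    import random

    def serve_version() -> str:
        # Each request is assigned independently; there's no memory of
        # prior visits, so a repeat visitor (or Googlebot) can see A, B, A...
        return random.choice(["A", "B"])

    print([serve_version() for _ in range(5)])  # e.g. ['A', 'B', 'A', 'A', 'B']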
I'd argue that Google wouldn't consider this bad, for the following reasons. As I understand it, bad cloaking works like this:
-> request comes in for a page
-> examine the request (User-Agent, IP, etc.)
-> if it's from a search engine, serve the heavily SEO'd page
-> else serve the real, user-friendly page (or perhaps one on a different topic)
In this scenario, Google *always* sees the same page; it's just not necessarily representative of what a user sees.
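For clarity, here's a hypothetical sketch of that bad-cloaking logic; the bot signatures and page names are placeholders, not anyone's real setup:

    BOT_SIGNATURES = ("Googlebot", "Slurp", "msnbot")  # assumed UA substrings

    def choose_page(user_agent: str) -> str:
        # Serve one page to engines and another to humans; the deception
        # is that the engine consistently sees the SEO'd version.
        if any(bot in user_agent for bot in BOT_SIGNATURES):
            return "seo_stuffed_page.html"
        return "real_page.html"

    print(choose_page("Mozilla/5.0 (compatible; Googlebot/2.1)"))  # seo_stuffed_page.html
    print(choose_page("Mozilla/5.0 (Windows NT 10.0)"))            # real_page.html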
In our example, we make no attempt to dupe the engine, and as such it will often see different material. However, the two versions would be very similar from a site-metrics perspective:
-the material is on-topic
-both versions use the same keywords
-keyword counts will be similar
-linking structure will be comparable if not identical
-all the other *nice* SEO stuff you'd expect is there
-we will, however, use different graphics, product write-ups...
The real question is how Google might examine a page to determine whether cloaking is in effect. Will they use a bot that doesn't advertise it's a bot? Perhaps some other crawler that looks like a browser? If so, and that stealth crawler compares what Googlebot sees to what it sees itself, we'd be busted.
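If they did something like that, the check could be as simple as this sketch: fetch the same URL under two User-Agents and diff the responses (the URL and UA strings are placeholders; I have no idea what Google actually runs):

    import urllib.request

    def fetch(url: str, user_agent: str) -> bytes:
        # Request the page while presenting the given User-Agent.
        req = urllib.request.Request(url, headers={"User-Agent": user_agent})
        with urllib.request.urlopen(req) as resp:
            return resp.read()

    url = "http://www.example.com/"  # placeholder
    as_bot = fetch(url, "Mozilla/5.0 (compatible; Googlebot/2.1)")
    as_browser = fetch(url, "Mozilla/5.0 (Windows NT 10.0) Firefox/115.0")

    if as_bot != as_browser:
        print("pages differ: looks like cloaking (or just A/B testing)")

Note the ambiguity, though: with a random 50/50 split, two fetches under the *same* User-Agent could differ too, so a naive diff like this can't cleanly separate honest testing from cloaking. That's exactly the worry.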
How about this for a workaround:
It would seem that geo-cloaking is acceptable, so what if all traffic from Google's geo-location always goes to one site, while the remaining traffic gets the 50/50 split? While it would perhaps skew our data, would it minimize our risk of being de-listed?
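Something like this sketch, extending the split above (the IP prefix is a placeholder assumption, not an authoritative list of Google's addresses):

    import random

    GOOGLE_PREFIXES = ("66.249.",)  # placeholder; a real check would need a maintained list

    def pick_version(client_ip: str) -> str:
        # Pin anything that appears to come from Google's location to
        # version A; everyone else keeps the 50/50 split.
        if any(client_ip.startswith(p) for p in GOOGLE_PREFIXES):
            return "A"
        return random.choice(["A", "B"])

    print(pick_version("66.249.66.1"))   # always A
    print(pick_version("203.0.113.9"))   # A or B at random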