However, I'm not sure whether Google is making a special exception for the NYT, or if perhaps Googlebot has a "subscription" to the NYT...
You could always program your cloaking software to give "free subscriptions" to search engine spiders ;)
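For what it's worth, the crudest version of that spider check is just a User-Agent match. This is only an illustrative sketch (the bot list and function names are made up, not any real cloaking product's API), and note the obvious weakness: User-Agent strings are trivially spoofed, so a human claiming to be Googlebot would get the "free subscription" too.

```python
# Illustrative sketch of serving full content to spiders only.
# The bot list and function names here are hypothetical.

KNOWN_SPIDERS = ("googlebot", "slurp", "msnbot", "teoma")

def is_search_spider(user_agent: str) -> bool:
    """Crude check: does the User-Agent string name a known crawler?

    User-Agent strings are easy to fake, so a stricter setup would also
    verify the requesting IP (e.g. via a reverse DNS lookup) before
    handing out the full page.
    """
    ua = user_agent.lower()
    return any(bot in ua for bot in KNOWN_SPIDERS)

def page_for(user_agent: str, full_text: str, teaser: str) -> str:
    """Serve the full article to spiders, the signup teaser to everyone else."""
    return full_text if is_search_spider(user_agent) else teaser
```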
You can test this: just copy some text from an older post and paste it into Google.
Interestingly (and this is something you would want), the search results do not offer a cached version of the page.
Has anybody had any bad experiences with using cloaking for this purpose? Or are there any 'official' guidelines on this from any of the SEs, etc.?
Do WebmasterWorld and the NYT get 'special' treatment, or do the SEs really not consider this a problem?
I am not trying to fool the search engines in any way; I have a mass of data I would like indexed because it is of use to people, they just have to sign up to access it.
Another thing to think about: almost all engines cache the pages they visit, which would make it easy for human visitors to read your subscriber-only data simply by clicking the "Cache" link for your page. To protect against this, put a robots meta tag with the NOARCHIVE value on these pages.
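If it helps, the directive Google recognizes for suppressing the "Cache" link is NOARCHIVE in the robots meta tag, placed in the page's head:

```html
<!-- Tells crawlers not to show a cached copy of this page -->
<meta name="robots" content="noarchive">
```

The page still gets crawled and indexed as normal; only the cached-copy link disappears from the results.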