Forum Moderators: open
I always thought that making copies of someone's pages for any reason whatsoever without their express approval was abuse.
So how would using the no cache tag to tell Google not to copy web pages be a form of abuse?
Keeping the customer satisfied meant biting the bullet and taking the loss, which they did. But now they are considering adding the no-cache tag to ALL pages that include pricing information.
It's interesting to me that John Q. User is not always clear on the concept of the Google cache, even when he uses it!
At the same time, I feel that there are very legitimate reasons for using the no-cache tag, and I hope that Google won't simply take the easy way out and automatically penalize sites that use it.
Paid-for pages. I offer a 14-day free trial for a listing on my site. If Googlebot caches the page during those 14 days, then the 2-week free trial becomes a 2-month freebie.
I have not yet used the no-cache tag because I don't want to be wrongly associated with those "96% of people". Perhaps that is one reason the 96% figure is so high.
During that time, site after site using no-cache was pointed out. Most of the time Opera got the blame because it wouldn't cache the page, even though it fully supported the HTTP 1.1 W3C recommendation.
The latest switch occurred with Opera 6, where CGI pages (any page with a question mark in the URL) no longer cache unless the server sets an explicit Expires header. (See HTTP 1.1, [url=ftp://ftp.isi.edu/in-notes/rfc2616.txt]RFC 2616[/url], section 13.9 and the infamous paragraph 2.)
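The section 13.9 rule can be sketched as follows. This is a simplified illustration of the RFC's guidance (caches should not apply heuristic expiration to query-string URLs), not Opera's actual cache code; the function name and structure are my own:

```python
# Sketch of RFC 2616 section 13.9: responses for URLs containing "?"
# should not be given a freshness lifetime heuristically. A cache may
# reuse them only if the server supplies explicit freshness info
# (an Expires header or a Cache-Control max-age directive).

def may_cache_heuristically(url: str, headers: dict) -> bool:
    """Return True if a cache may reuse this response without revalidating."""
    has_explicit_freshness = (
        "Expires" in headers
        or "max-age" in headers.get("Cache-Control", "")
    )
    if "?" in url:
        # Query-string (CGI) URLs: explicit freshness info is required.
        return has_explicit_freshness
    # Other URLs may fall back to heuristic expiration.
    return True

print(may_cache_heuristically("/search?q=foo", {}))   # False: CGI page, no Expires
print(may_cache_heuristically("/page.html", {}))      # True: plain page
```

This is why a Google search results page (a `?`-URL with no Expires header) stopped caching in Opera 6, while ordinary static pages kept caching as before.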
The effect has been that pages that would previously cache no longer do - pages such as Google search queries. Do you realize how many CGI pages you run into in a day? Personally, I think they did it just for that reason: to get more money out of embedded affiliates such as Google, and to make all those CGI page counters roll with Opera UAs on them.
Lastly, proxy caches. Who knows what is going on with the likes of AOL's and MSN's proxy caches? We already know AOL uses their own cache for search engine listings. I think 95% of the no cache abuse that occurs is to overcome proxy caches.
Throw in the fact that most cable, DSL, and other broadband suppliers ship their modems and software with caching disabled by default - it sure makes you wonder why there is even a browser cache available. If you read some of what Berners-Lee and the chief cohorts on the W3C lists have been saying, I'm convinced the next major HTTP RFC put forth will have caching off by default unless a site specifically enables it.
I would like it if Google would start obeying the HTTP spec and caching standards - not partially, but fully. It would mean the end of many a corporate site that dominates the rankings in Google. Almost all of those sites use no-cache tags.
>96%
That's not my experience at all. Most of the majors are using no cache. Most of the majors don't need to be using no cache:
CNN, CNET, ZDNet, most of Yahoo (outside the directory), any site with a third-level domain named "my" (my.yahoo, etc.), eBay, MSN Communities - the list goes on for thousands upon thousands of sites. It's hard to find a top site that isn't guilty of no-cache abuse. God forbid that lazy Joe User can't press the reload button.
Anyway, if you are as fed up with no-cache abuse as I am, use Proxomitron [searchengineworld.com] - it makes the net usable again.
Googlebot uses "noarchive".
[google.com...]
[Later addition... In fact, as I look at the concurrent Google cloaking thread, most people are also calling it "no cache" although they must mean "no archive." It's that "view cached page" link that throws us]
There is a significant difference, obviously, but people are constantly mixing these two up. Do we now know for sure (almost a year after this thread started) that a regular HTTP Pragma: no-cache header has no effect on Google SERPs?
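To keep the two mechanisms straight, here is a sketch of the distinction the last few posts draw. It assumes (as the thread suggests but doesn't fully confirm) that Google's archive copy is controlled only by the robots meta tag, while the HTTP caching headers are aimed at browser and proxy caches; the helper function is mine, for illustration:

```python
# "No cache" vs "no archive" - two distinct mechanisms that keep
# getting conflated in this thread.

# 1. HTTP response headers: tell browsers and proxy caches not to
#    reuse a stored copy of the page.
no_cache_headers = {
    "Pragma": "no-cache",          # HTTP/1.0
    "Cache-Control": "no-cache",   # HTTP/1.1 (RFC 2616)
}

# 2. The robots meta tag: tells Googlebot it may index the page but
#    should not offer a "Cached" (archived) copy on the results page.
noarchive_tag = '<meta name="robots" content="noarchive">'

def blocks_google_archive(headers: dict, html: str) -> bool:
    """Under the thread's working assumption: only the noarchive meta
    tag suppresses Google's cached copy; HTTP caching headers target
    browser/proxy caches, not the search engine's archive."""
    return "noarchive" in html

print(blocks_google_archive(no_cache_headers, ""))   # False: headers alone don't do it
print(blocks_google_archive({}, noarchive_tag))      # True: meta tag does
```

That "view cached page" link on the SERP is exactly what `noarchive` removes, which is probably why people keep calling it "no cache."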