incrediBILL - 3:05 am on Jan 2, 2011 (gmt 0)
Was never endorsed or proposed by any standards body. New engines are not obligated to honor another search engine's proprietary commands.
That argument doesn't work: many de facto standards were never proposed by any standards body. They come about through majority adoption and often end up in formal standards later.
Besides, X-No-Archive is a long-standing Usenet header convention that was adopted by the search engines and mutated into the NOARCHIVE meta directive: [en.wikipedia.org...]
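To see how little is being asked here, a minimal sketch of the old header form (the header name and "Yes" value are the real Usenet convention; the helper function is mine):

```python
# Sketch: checking the Usenet-era X-No-Archive opt-out header.
# Matching is case-insensitive, per common archiver practice.

def no_archive_requested(headers):
    """Return True if a message opts out of archiving via X-No-Archive."""
    for name, value in headers.items():
        if name.lower() == "x-no-archive" and value.strip().lower() == "yes":
            return True
    return False

usenet_style = {"X-No-Archive": "Yes"}
print(no_archive_requested(usenet_style))  # True
```

The meta-tag descendant works the same way; the only thing that changed is where the flag lives.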
Before you run over a cliff with wild bs, you really should check out Blekko; it has some awesome features. You plugged your nose when Google came around with all its own issues (like 'caching') - give Blekko the same chance.
We were/are giving it a chance.
The BS started when the CEO came right out and said they wouldn't support NOARCHIVE.
You can either let them post full cached pages, which, unlike fair-use snippets, is a copyright violation, or completely opt out of Blekko; there is no middle ground.
If they force us to opt out just to protect our content, how is that giving them a chance?
Supporting one simple NOARCHIVE command, which is already almost universally supported, would solve this problem.
Google, Yahoo, Bing, Ask and even Gigablast support it: [gigablast.com...]
Even open source NUTCH supports it: https://issues.apache.org/jira/browse/NUTCH-167
Though not strictly a bug, this issue is potentially serious for users of Nutch who deploy live systems and might be threatened with legal action for caching copies of copyrighted material. The major search engines all observe this directive (even though apparently it's not standard), so there's every reason why Nutch should too.
Such universal adoption pretty much spells standard IMO.
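And honoring it is trivial. A rough sketch, using Python's stdlib html.parser (the class and function names are mine, not any engine's actual code), of the one check a crawler would make before caching a page:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects directives from <meta name="robots" content="..."> tags."""
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            for d in a.get("content", "").split(","):
                self.directives.add(d.strip().lower())

def may_cache(html):
    """Return False if the page carries a NOARCHIVE robots directive."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return "noarchive" not in parser.directives

page = '<html><head><meta name="robots" content="index, follow, noarchive"></head></html>'
print(may_cache(page))  # False
```

A few lines of parsing is all it takes, which is exactly why refusing to support it looks like a choice, not a limitation.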