I'm looking for an updated debate or discussion on the robots-nocontent [help.yahoo.com], google_ad_section_start [google.com], and part-page noindex [en.wikipedia.org] tags. These are non-standards-track efforts by Yahoo, Google, and Yandex to identify which content on a page is more or less important.
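For reference, a minimal sketch of what each engine's markup looks like (the class name, comment syntax, and tag name are as documented by each engine; the surrounding page content is illustrative only):

```html
<!-- Yahoo: mark a block the crawler should de-emphasize -->
<div class="robots-nocontent">
  Navigation links, footer, boilerplate
</div>

<!-- Google AdSense section targeting: emphasize or ignore a section -->
<!-- google_ad_section_start -->
<p>Main article text that ads should target.</p>
<!-- google_ad_section_end -->
<!-- google_ad_section_start(weight=ignore) -->
<p>Boilerplate that ads should ignore.</p>
<!-- google_ad_section_end -->

<!-- Yandex: exclude a fragment from indexing (non-standard tag) -->
<noindex>Text Yandex should not index.</noindex>
```

Note that google_ad_section_start is documented for AdSense ad targeting rather than web search ranking, which already makes the three mechanisms only loosely comparable.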
A 2007 WebmasterWorld thread on the topic is here: [webmasterworld.com]
The thread shows a lot of negative sentiment toward Yahoo's approach, as do blogs all over the net.
Now, three years later: Are these tags useful? Do they work as intended? Are people actually abusing them? Did other engines pick them up?
The stated use case had to do with excluding common page elements (e.g. menus, sidebars, framing, page templates) that appear over and over. But I figure search engines had better be good at figuring that out on their own.*
I'm more interested in marking good content. Or excluding irrelevant content, like a rotating "featured article" in a sidebar. Or excluding visual blocks like Amazon's "Customers Who Bought This Item Also Bought" or "specials on sale today". Or structuring a page that covers 5-10 distinct topics (for example, the abstracts of 5-10 distinct academic papers, or 5-10 distinct news articles).
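As a concrete sketch of the sidebar use case, here is how a product page might wrap the irrelevant blocks, assuming the Yahoo and Yandex mechanisms behave as documented (the surrounding markup and headings are hypothetical):

```html
<article>
  <h1>The Actual Product</h1>
  <p>Primary content the engines should index and rank.</p>
</article>

<aside class="robots-nocontent">  <!-- Yahoo: de-emphasize this block -->
  <noindex>                       <!-- Yandex: exclude from indexing -->
    <h2>Customers Who Bought This Item Also Bought</h2>
    <ul><li>Some other product</li></ul>
    <h2>Specials on sale today</h2>
  </noindex>
</aside>
```

As far as I can tell, these mechanisms only mark blocks as more or less indexable; none of them offers a way to group content by topic.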
Do these tags work for these purposes? Is there a way to say certain content is part of a "group" of content and distinct from other content?
In a world of seemingly declining search engine relevance, are these actual solutions?

* At least until we can publish our pages the way we author them (e.g. "template_1.xhtml" + "common.css" + "common.js" + "content_page_1.xml" == final page).