Are Directories Dead in 2016?
I'm actually surprised directories still exist. I had thought their day had long gone with the turn of the century and the advent of Mr. Google.

Google only started murdering directories in the late 2000s. Basically they were getting traffic and advertising revenue, and after Google scraped them for their content, one of the asinine Animal Farm kludges hit many of these directories: a new directory subpage would have very little content, and some directory owners may have auto-generated the schema using off-the-shelf directory software.
All the travel/review pages with hotels are nothing else than directories with products: hotels. Hotels do not need their own pages/sites as the directories bring them enough clients. But the dummies at Google are actively trying to kill them, because all they see around is spam.
Sending their users from a page of links to another page of links is a poor user experience. That's all it is.
Yet all major search engines employ thousands of cheap human raters to rate the web. Those raters have seconds to rank your site (including a directory), and they also have guidelines to follow.
You are mistaken in thinking that human evaluators are calling the shots. The human evaluators are simply creating data that is fed to the machine to learn from.
Wired.com: How do you recognize a shallow-content site? Do you have to wind up defining low quality content?
Singhal: ... we used our standard evaluation system that we’ve developed, where we basically sent out documents to outside testers. Then we asked the raters questions like: “Would you be comfortable giving this site your credit card? Would you be comfortable giving medicine prescribed by this site to your kids?”
Cutts: There was an engineer who came up with a rigorous set of questions, everything from, “Do you consider this site to be authoritative? Would it be okay if this was in a magazine? Does this site have excessive ads?” Questions along those lines.
Singhal: And based on that, we basically formed some definition of what could be considered low quality.
Wired.com: But how do you implement that algorithmically?
Cutts: I think you look for signals that recreate that same intuition, that same experience that you have as an engineer and that users have. Whenever we look at the most blocked sites, it did match our intuition and experience, but the key is, you also have your experience of the sorts of sites that are going to be adding value for users versus not adding value for users. And we actually came up with a classifier...
The top positive sites were selected as positive examples, the worst negative sites were selected as negative examples.
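To make the quoted mechanism concrete, here is a minimal sketch (in Python, using scikit-learn) of what "came up with a classifier" can mean in practice: rater judgments become labeled training examples, and the trained model, not the raters, scores everything else. The feature names and numbers below are invented purely for illustration; nothing here is Google's actual signal set or code.

```python
# Toy illustration of the approach Cutts describes: human raters label a
# sample of sites as high or low quality, and those labels train a model
# that generalizes from per-site signals (features). All features and
# values are hypothetical.
from sklearn.linear_model import LogisticRegression

# Hypothetical per-site signals: [ad_density, words_per_page (thousands), duplicate_ratio]
rated_sites = [
    ([0.05, 1.2, 0.02], 1),  # rater judged high quality -> positive example
    ([0.40, 0.15, 0.70], 0),  # rater judged low quality  -> negative example
    ([0.10, 0.9, 0.05], 1),
    ([0.55, 0.08, 0.85], 0),
]

X = [features for features, label in rated_sites]
y = [label for features, label in rated_sites]

# "We actually came up with a classifier": fit a model on the rated examples.
clf = LogisticRegression().fit(X, y)

# Unrated sites are then scored by the model, not by the raters themselves.
new_site = [[0.35, 0.2, 0.60]]
print(clf.predict(new_site))        # predicted class: 0 = low, 1 = high quality
print(clf.predict_proba(new_site))  # the model's confidence in each class
```

On this reading, the raters only need to judge a sample of sites: once trained, the classifier is what evaluates the rest of the web.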
The article you use to support your argument is an interview with two Google spokespeople.
Matt and Amit never say in the interview that human raters evaluated each and every site in the dataset. However, John Mueller in a recent hangout, possibly the last one, stated explicitly that they don't use human raters to evaluate websites...
Taking Panda as an example, Google already had a pretty good classification of high-quality and low-quality websites. It is not as if Google was unable to tell great content from pure spam before Panda; they did not need human raters to go through every site or webpage in their dataset.
It's a common-sense fact that giving a search engine user a page of directory links as an answer is a poor user experience. A good algorithm will be able to choose the actual site you need to see. Telling a search engine user to pick a link from ten links, then presenting them with a directory page of ten more links is, well, stupid.