Andem - 11:39 pm on Mar 17, 2012 (gmt 0)
To me, it is as if he is just describing Wikipedia and does not even understand the diversity and complexity of the web.
This is a very disturbing notion. Wikipedia, along with Google, is one of the largest scrapers on the web, and without the original thought and data they collect (read: steal) from other sites, they wouldn't exist.
Either way, a lot of the content I read on the English version of Wikipedia is simply not true and does not follow the so-called NPOV doctrine. If Google views dishonest statements and scraped material as high-quality content, then I fear the connected world is in dire straits.
Wikipedia absolutely hates original research. They put up huge warning flags when they suspect an article might contain anything original.
You're completely correct. In the eyes of Wikipedia, if the content is not sourced, then it belongs in the dustbin. Their idea of sources is rather questionable, though. I run a forum that has received several linkbacks from Wikipedia over the past five years, and many of them still exist today.
In your opinion, what is good quality content to Google?
To be honest, I haven't the slightest idea. Google has always had a grey area, which in my mind was a place to stay away from. Original content, a stable, strong, and natural portfolio of backlinks, and acceptable bounce rates used to be enough to count as quality in Google's eyes. Today, I come across so much rubbish, and so many sites completely unrelated to my query, that it seems the grey area is where you're supposed to be in order to rank effectively for decent queries.