Forum Moderators: Robert Charlton & goodroi
Google's gems: Wikipedia (virtually every search), 10-year-old (and untouched since) link/webring sites, public domain reprints with AdSense sites, and, of course, Amazon.
Lovely choices.
It becomes a problem with many authority sites on general terms. It always has been, I think.
For example, one of my friends has a business that sells a certain product. PROBLEM: last year a movie came out whose title is the product's name.
Since then, the whole SERP has been flooded with big brand names showing trailers of the movie, and you have to go to page 3 to find the industry leaders that REALLY sell the product.
It's entertaining, but as far as relevancy and usability go, I've seen better.
However, it seems to have been slightly improving for the past week or so, I'm told. Good!
Wikipedia is true to the fundamental principles of the Web. I can understand why Google might use Wikipedia as an authoritative "seed site" for its black boxes, and why Wikipedia articles rank high in searches. If Wikipedia's pages rank high in Google, that's fine by me.
Wikipedia suffers from the same syndrome - people are now actively trying to subvert it...
Ideally a research source would be a primary source (e.g. the papers of the scientist who discovered 'X'). Most of us don't have the inclination to read through mountains of research papers, so there are secondary sources, which research primary sources, distill the information, and make it more accessible to the rest of us. Next come tertiary sources, which primarily rely on the distilled information of secondary sources (most websites and news articles count as tertiary sources).
The further one gets from the primary source, the more convoluted and less reliable a source becomes. This factor is very pronounced in scientific research, especially when looking at "facts" like seemingly stable data sets. For instance, say we wanted to look at the properties of blue widgets: specific data points for its properties should be very easy to standardize; however, due to many factors (including typographical errors), discrepancies work their way into those data sets and get amplified down the chain.
Sometimes, when working with certain seemingly common data sets, I can be using five or more published books, which are secondary sources (i.e. they reference primary-source research papers), and all five books give a different answer for the same point of what should be consistent data (at which point I usually start banging my head against the desk).
Wikipedia should not be relied on for anything more than one of many starting points to dig down to more authoritative reference sources. The more important it is that one gets the right answer the less one should rely on Wikipedia for that answer or even where to start looking for that answer.
Oh, in regard to the Wikipedia citations for a given article: a citation is no guarantee that it was actually used to research that article. I've found several instances of Wikipedia citing pages on my site that have NEVER existed (strange but true).
As web publishers and individuals trying to make a living with ecommerce, we all have a vested interest in making sure we don't provide links to any Wikipedia resource as this only serves to feed the monster that threatens our own interests in our own search phrases. Maybe we can't stop the monster, but we don't need to feed it.
I also kind of agree with loudspeaker, though I'm still grappling with his application of Heisenberg's Uncertainty Principle -- which itself has an article in Wikipedia.
What I find worrisome with Google's cozying up to Wikipedia is the same thing I find worrisome about Google cozying up to the Open Directory Project.
Each of the three entities is a genuinely glorious realization of the conceptual underpinnings of the innovation we know as The Internet.
Wikipedia is a down in the pit, human, democratic, free-for-all for self-styled experts, specialists, dilettantes, even idiot savants. It's a great resource, but bring salt.
The ODP is wonderfully human, self-selecting, elitist, and editor-based -- rigorously and self-consciously above the fray. Hopefully, hutcheson won't take exception to this humble opinion.
Google's beauty, however, resides in its origins out of mathematics. The greatest danger it faces is human bias in any shape, manner, or form. It can't avoid it altogether, of course, just as Statistics can't avoid it -- but at what point does it risk becoming not itself?
A lot sooner than ten years from now, every significant query will be dominated by large, popular, medium-to-high quality websites, and users will be the better for it. If a site like Amazon had non-horrible SEO, it would dominate things far more than it does now. Wikipedia should be the least of any smaller outfit's worries.
The spam model is also taking shape. I can't believe so much junk is dominating Google SERPs: sites that get 10/10 for using every spam principle known to man, black hatter, and Google.
Could Google be giving more weight to wikis because they are dealing with a far larger spam issue at the moment, and this is a short-term solution to deal with the crisis?
Some of its articles may be "stubs," but it is not Wikipedia's fault that they get high rankings. And in some cases, the proper articles that rank high offer much greater value to Google searchers than what your site probably offers.
And with the added benefit of no advertisements. So I know that it is ranking higher than your site, but that does not mean that it shouldn't be this way or that Wikipedia is not a great resource.
It is also an authority site whose links do count, which is fair because every link inserted gets reviewed by many people who have the authority to remove it. I don't see why it shouldn't be in the Google SERPs. If your site is worth it, link it from the wiki page that ranks higher...
If your site is worth it, link it from the wiki page that ranks higher...
And there you have it! The mind numbing mentality which is "Wikidross" [webmasterworld.com].
Let's take something which is already substandard, turn it into something unstoppable ... and then use it to spam the web with even more substandard crap!
If this thinking prevails, hopefully the problem will eventually take care of itself and Google will wake up from its coma! They have created the monster and will have to deal with it ... at some point.
[edited by: Liane at 10:41 am (utc) on Aug. 10, 2006]
Side question: Who funds Wikipedia? Does it have revenue sources? When a porn mogul sets up a site like that, you have to wonder what the real motivation is.
Side question: Who funds Wikipedia? Does it have revenue sources? When a porn mogul sets up a site like that, you have to wonder what the real motivation is.
Look at the licensing model. For him, Wikipedia is a relatively cheap source of material for creating independent MFA and cloaker sites without fear of copyright abuse threats. This may be the most insidious SERP spam scam of all time.
Yes they are getting stronger, I have seen many of my internal pages that were once #1 pushed to #2 by wikipedia.
That said, there are still an awful lot that I rank above so clearly it is not simply their site trampling mine.
It seems clear that it is simply a link issue.
Whatever tweaks Google makes, two things remain basically the same:
You are what your links say you are
The page with most (decent) incoming links wins
Just a small question for europeforvisitors.
Some of us know your content. I just wonder whether you could have come into this thread and posted positively about Wikipedia if they were ranking over you for terms like
"3 stars" or "2 stars" or "5 stars" widgets in Wikilonia...?
I'd live with it. I certainly wouldn't get huffy about it or get mad at Google and Wikipedia. Where is it written that any of us is entitled to a top ranking for any given search string?
Also, I don't think Wikipedia is nearly as great a threat to the usability of Google's SERPs as computer-generated sites are. Search on a lot of travel topics (including destinations), and you'll find template pages that often have nothing but a keyword-based headline and an invitation to "Post your review." I've seen the same phenomenon in the tech sector while researching laptops. (A so-called "review" at big-corporate-initials-net will turn out to be nothing more than a few price-comparison listings.)
All things considered, Wikipedia is a useful resource, and the odds are pretty good that users are going to be happy with what they find when they go to a Wikipedia article. That doesn't mean Wikipedia articles should rank #1 for every term, and in fact they don't rank #1 for every term. (In my experience, they're usually a few notches down the first page of the search results--assuming that they're on the first page of results, which isn't always the case.)
Lately, the wikicancer promotes "Wikitravel."
Wikitravel uses the same software technology as Wikipedia, but it's a completely different product (one that's now owned by a commercial entity and partnered with another, slicker-looking travel guide that also uses a Creative Commons license).
This brings up the questions of whether Google gives an extra boost to sites with a Creative Commons license, and whether using such a license has become an SEO technique. Will we see an explosion of widget sites, affiliate sites, made-for-AdSense sites, etc. that adopt Wikilike characteristics or use a Creative Commons license so they can move higher on Google's SERPs? Or will that ploy lose its usefulness as more pages of questionable quality with Creative Commons licenses get fed into the "bad examples" side of Google's black box, and a Creative Commons license is no longer an indicator of valuable content?