

If quality is a goal how does Google score?


Hissingsid

10:52 am on Jun 19, 2006 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Hi,

A few folks have mentioned Wikipedia and Amazon appearing higher and more frequently in the results since Big Daddy.

Whatever algorithm Google employs, spammers, by trial and error or lucky chance, find ways to beat the system and get their pages to the top of the SERPs. Google has long stated that it wants to find automated methods of removing spam from its SERPs.

The only spam that really matters is the spam that appears in the top 20 results. It becomes increasingly unimportant the lower down you go and, except for more academic searches, anything after the top 20 is overlooked by 99% of users.

So how do you tweak your algorithm to exclude spam automatically? Well, I see two main possibilities:

1. Use identified spam sites/pages as models and filter out other sites/pages that match that model.
2. Use identified authority sites as models of what a good site should look like in terms of content, inbound and outbound links, etc. Then tweak the algorithm so that these model sites come into the top 20 and bring pages from other quality sites with them.

If they use option 2, then only spam pages that fit the model presented by the exemplary sites/pages will get into the top results, and if they fit that model then arguably they are not spam anyway. In gardening terms this is a bit like growing ground cover so that the plants you want choke out the weeds.
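Just to make option 2 concrete, here is a rough sketch of the kind of scoring I have in mind. It is purely illustrative, not anything Google has confirmed; the features, exemplar numbers, and weights are all made up for the example:

import math

# Hand-picked "exemplar" authority pages, described by made-up features:
# (content_words, outbound_links, linking_domains, internal_links)
EXEMPLARS = [
    (2400, 35, 180, 60),
    (1800, 20, 95, 45),
    (3100, 50, 240, 80),
]

def centroid(vectors):
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

MODEL = centroid(EXEMPLARS)

def quality_adjusted_score(base_score, page_features):
    # Blend the usual relevance score with how "authority-like" the page looks.
    fit = cosine(page_features, MODEL)        # 0..1 similarity to the exemplar profile
    return base_score * (0.5 + 0.5 * fit)     # pages far from the model get demoted

# A thin doorway-style page vs. a page that resembles the exemplars
print(quality_adjusted_score(10.0, (150, 2, 3, 400)))     # roughly 6.9
print(quality_adjusted_score(10.0, (2600, 40, 200, 70)))  # close to 10

The point is simply that with exemplar pages as the yardstick, anything that scores well has to look like them, which is the ground-cover effect I mean.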

If this is what they are doing, then one side effect would be that the model sites/pages they select will by default get into the top 10 more often than previously. In fact, in order for them not to always be #1, the algo must have some way for a stronger showing in at least one of its ingredients to produce a higher ranking. Allinanchor, for example.

When you think about it, this hypothesis makes sense. If they are planning to incrementally “improve” SERP quality then they have to have some way to assess it, both objectively and subjectively.

If this is what they are doing why isn’t it working?

Best wishes

Sid

larryhatch

11:10 am on Jun 19, 2006 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Great post Sid, and I wish I knew the answers.

I suppose a few spammers carefully study the authority sites and mimic them in certain ways.
G watches for this of course, with human review, and tweaks its algorithms. I'd bet on that.

In the long haul it has to be automated though; sheer volume demands it, and automation has its faults.

Then they decide they need an overhaul. Oh sh** oh dear!
Loads of inferior sites surge through the holes and it's back to the tweaking. That's my best guess. -Larry

texasville

6:07 pm on Jun 19, 2006 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Wikipedia and Amazon share one thing in particular: inbound links go to their individual pages, not to their index or main page. Other sites link to topics within those sites, not to the sites themselves.
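For what it's worth, that pattern is easy to measure. Here is a toy sketch (the function name and the example URLs are mine, just for illustration) of the kind of deep-link ratio a link-based signal could look at:

from urllib.parse import urlparse

def deep_link_ratio(inbound_urls):
    # Fraction of inbound link targets that are NOT the root/index page.
    if not inbound_urls:
        return 0.0
    deep = sum(1 for u in inbound_urls if urlparse(u).path not in ("", "/"))
    return deep / len(inbound_urls)

# Hypothetical targets pulled from a backlink report
links = [
    "https://en.wikipedia.org/wiki/PageRank",
    "https://en.wikipedia.org/wiki/Search_engine",
    "https://en.wikipedia.org/",
]
print(deep_link_ratio(links))  # 0.67 -- most links point at individual articles

Sites like Wikipedia and Amazon would score very high on something like this; a site whose inbound links all point at its home page would not.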

Right Reading

6:29 pm on Jun 19, 2006 (gmt 0)

10+ Year Member



Wikipedia and Amazon share one thing in particular: inbound links go to their individual pages, not to their index or main page.

I think that's one of the keys to what we're seeing with BD. I suppose there would be some kind of fractal pattern to the IBLs that might be difficult (though certainly not impossible) to replicate artificially.

You would think there would have to be some weight to internal site links in order to serve up the best pages, but this seems to have been strongly devalued, and may be one reason odd pages are popping up in results.

I also think G may be doing a topical analysis of sites (probably derived from something like what is called "page analysis" in their sitemap reports) and trying to favor pages that seem more "on topic," even if they have lower link-based PR than others.
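To illustrate what I mean, here is a back-of-the-envelope sketch of blending link-based PR with an "on topic" score. The weights and the simple term-overlap topic measure are my own invention, just to show how a lower-PR but more topical page could win:

def topic_score(page_terms, topic_terms):
    # Jaccard overlap between a page's terms and the topic terms for the query/site.
    page, topic = set(page_terms), set(topic_terms)
    return len(page & topic) / len(page | topic) if page | topic else 0.0

def blended_rank(pagerank, page_terms, topic_terms):
    # Arbitrary 40/60 split between (normalized) link-based PR and topical fit.
    return 0.4 * pagerank + 0.6 * topic_score(page_terms, topic_terms)

topic = ["garden", "ground", "cover", "plants"]
print(blended_rank(0.9, ["casino", "poker", "plants"], topic))            # about 0.46
print(blended_rank(0.4, ["garden", "ground", "cover", "plants"], topic))  # about 0.76

Under a weighting like that, odd-looking results would be exactly what you'd expect: pages with modest PR but a tight topical match jumping over better-linked pages.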