Google Sets, Broad Matching, Stemming, the "~" (tilde) operator, blah blah blah...
Are we overlooking this technology?
Are we looking for our SERPs in the right place?
I've noticed something in my SERPs which I'm still struggling to conceptualise, so I'm throwing this out for comment, to either be kicked down or expanded on.
Here's an observation:
I'm using an example based on 2 keyphrases which are both related in topic but concern different product models.
1) widget model tutorial
2) widget tutorial
...where widget is the same brand name/product and model is a version number or abbreviation of that product, EG: Dreamweaver MX <-- PURELY AN EXAMPLE!
Pre-Florida, my (steady) rankings:
1) widget model tutorial = #1
2) widget tutorial fluctuated between #1 and #2
These appeared on totally different pages using the above keyphrase queries.
Post-Florida:
1) widget model tutorial = #1 (No change).
2) widget tutorial = #427! (ergh!)
BUT... on the SERP for widget model tutorial, my dumped-on page for widget tutorial is right underneath at #2!
It's as though widget tutorial has been "re-classified", grouped, deemed as related to, or a variant of, this particular (widget model tutorial) SERP instead of the pre-Florida SERP...
Here's something else that I've noticed within the same SERP...
If you use the "~" operator, like so (the query here would be something along the lines of widget model ~tutorial):
You get "variants" of the word 'tutorial', like:
... as expected, BUT...
LOOK at results #3 - #10 for widget model tutorial:
3. widget model tutorial : Buy at the best price on [STORE] -
4. KvR : widget model Video Manual and Tutorial CD-ROM Set for the Mac
5. KvR : [DOMAIN-NAME] widget model Video Manual & Tutorial CD-Rom
6. [MANUFACTURER NAME] widget model Bundle w/Tutorial
7. [DOMAIN TRADEMARK]: widget model Video Manual and Tutorial from ...
8. widget model Video Manual and Tutorial CD-ROM Set for the Macintosh
9. Using [TRADEMARK NAME] Mouse Keyboard with widget model - [DOMAIN NAME]
10. widget model tutorial: Hitpoints und Slices Tutorial -
NOTE: Words within "[...]" omitted in compliance with TOS.
NOTE 1: Hold the thought [MANUFACTURER NAME] (above), see below.
Now, notice that the next occurrence (after #3) of the exact phrase widget model tutorial doesn't appear until #10.
Why? Well, I don't profess to know the answer - there may be off-page factors (I haven't looked as yet), and/or it could be something as simple as the colon (:) at the end of the word "tutorial".
But look at the occurrences of the word "manual"!
Call it themes, call it grouping, call it Sets, related, similar, variants, or whatever... is it now that G can better determine subject matter by also looking for "expected" words within the documents - both in pages linking in and within the target page itself, and/or contextually too?
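To make the "expected words" idea concrete, here's a toy sketch (NOT Google's actual algorithm - nobody outside knows that) of how simple co-occurrence counting over a corpus could surface "variant" words like manual for a word like tutorial. The corpus below is hypothetical data loosely modelled on the SERP titles above:

```python
from collections import Counter
from itertools import combinations

# Hypothetical mini-corpus, loosely based on the SERP titles above.
docs = [
    "widget model video manual and tutorial cd-rom set",
    "widget model bundle with tutorial",
    "widget model video manual and tutorial",
    "using mouse keyboard with widget model",
    "widget model tutorial hitpoints and slices tutorial",
]

def cooccurrence_counts(docs):
    """Count how often each pair of distinct words shares a document."""
    counts = Counter()
    for doc in docs:
        words = sorted(set(doc.split()))
        for a, b in combinations(words, 2):
            counts[(a, b)] += 1
    return counts

def related_to(word, counts, min_count=2):
    """Words co-occurring with `word` in at least `min_count` documents."""
    related = set()
    for (a, b), n in counts.items():
        if n >= min_count:
            if a == word:
                related.add(b)
            elif b == word:
                related.add(a)
    return related

counts = cooccurrence_counts(docs)
print(related_to("tutorial", counts))
# -> {'widget', 'model', 'video', 'manual', 'and'}
```

Note that "manual" falls out naturally as a word "expected" alongside "tutorial" - exactly the pattern in the SERP above. (A real system would also filter stopwords like "and", and work at a vastly larger scale.)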
RE: Note 1 (above):
KW1 = widget = Brand/Product name.
If I do a "~" search on KW1:
[MANUFACTURER NAME] comes back as one of the variants (see result #6 in the SERPs above).
G knows that [MANUFACTURER NAME] is related to the [BRAND/PRODUCT NAME] even though they are NOT "common language" words used outside of the industry.
What am I saying?
It seems to me that G "expects" to see related "variant" words within documents, so as to "better" classify, or understand, what a page is actually about.
This is, in theory, a smart algo not only for weeding out pages solely targeted at KWs without regard to the expected language, but also - again in theory - for applying more weight to, or "classifying", documents it can "understand".
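The weighting idea above can be sketched as a toy scorer (purely illustrative - the expected-word list and the 0.5 bonus are made-up assumptions, not anything G has published). A page stuffed with the bare query terms scores lower than a page that also uses the natural surrounding vocabulary:

```python
# Hypothetical "expected vocabulary" per query term (assumed, for illustration).
EXPECTED = {
    "tutorial": {"manual", "video", "lesson", "guide"},
}

def topical_score(page_words, query_words, expected=EXPECTED):
    """Count query-term hits, plus a bonus for each expected related word."""
    words = set(page_words)
    score = sum(1.0 for q in query_words if q in words)
    for q in query_words:
        for rel in expected.get(q, ()):
            if rel in words:
                score += 0.5  # related-word bonus (arbitrary toy weight)
    return score

stuffed = "widget tutorial widget tutorial widget tutorial".split()
natural = "widget tutorial with a video manual and step-by-step guide".split()
query = ["widget", "tutorial"]

print(topical_score(stuffed, query))  # -> 2.0 (query terms only)
print(topical_score(natural, query))  # -> 3.5 (query terms + 3 expected words)
```

Repeating the keyphrase buys the stuffed page nothing here, because scoring is over the set of words - only the presence of the "expected" language lifts the natural page. That's the multi-pincer idea in miniature.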
It's like a multi-pincer "attack" from different angles.
So maybe it's not necessarily that your pages have bombed, but that they've been "grouped" more "relevantly" elsewhere... and if it's not relevant - in your view - maybe it's time to look at what G is "expecting" to see, and help it "understand".
Forgive me here, but I'm trying to make sense of this myself whilst writing, so I hope that's clear.
Thoughts, observations, expansions?