As reported by rustybrick below, Matt Cutts and Google now understand what happened, and they are backing out of it, rolling the changes out through their many data centers.
And this brings up a bigger - and for me, even better - question. What were they trying to do? Is Google now continuing to do it, but without "the mistake"? How much "forced position" work does Google do in the top positions, especially now that Universal Search is the rule of the day?
We begin with observations from our community, as the signs of the rollback began to appear a few days ago:
[edited by: tedster at 9:00 pm (utc) on Jan. 29, 2008]
Since it was a "mistake" though, penalty isn't the right word any more than dropping a glass onto a tile floor is "penalizing" the glass. Penalities are deliberate.
As bad stuff goes, though, this was almost a joke from the get-go. But considering there are some terms that are worth $10k+ a day at #1 and maybe $4k a day at #2, any amount of "penalization" can matter a lot.
Google first jumbled up searches that were prone to universal search results last summer, and a different sort of jumble could have happened here. Often for certain terms the universal news result is pinned at a certain spot (like #4 or #10). This placement of widget sellers at #6 could be a haywire attempt to put a froogle-ish result at #6 for widget-y terms.
"business can expect that when I've given it to them for several years"
Only if they are complete doofuses. In this day and age, no one can assume anything with Google. It simply has too many flaws to be thought of in such a reliable way.
Universal Search entries now appear on several of these searches where there were none before.
Almost parallel with the rollback on this #6 ranking shift (or whatever you want to call it), there have been some major new manifestations of Universal Search. I've continued to see those news stories in position #4, and we've got those local map results [webmasterworld.com] up at the top, which seem to shift in response to user intent and the degree of disambiguation, and not always in the direction you'd expect....
I'm also noting on some of the travel results that when you add modifiers to certain searches, Google will display several organic results, usually three, above the map and list of ten. On one search I tried, though, searching without the modifier brought three results for authoritative guides to the subject up above the map... and adding the modifier apparently disambiguated the search, and a map came up at the top.
I've long thought that Google has different thresholds for different parts of its algo (like Hilltop) to kick in, based on how competitive a search might be. But "how competitive" is a vague term, which might relate to search volume, or the number of pages satisfying a query, or the degree of optimization, or all of these. In general, though, the more specific a search, the less competitive it was assumed to be.
In the local example I mentioned above, adding a modifier shifted the maps up and the organic results down.
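To make that guess a bit more concrete, here's a minimal sketch of the kind of tiered-threshold logic being described. Everything in it - the signal names, the normalizing ceilings, the cutoffs - is invented for illustration; Google publishes none of this.

```python
# Hypothetical sketch of the idea that different parts of the algo kick
# in at different thresholds depending on how competitive a query looks.
# Every signal name and number below is invented for illustration.

def competitiveness(search_volume, matching_pages, avg_optimization):
    """Blend rough signals into one competitiveness score in [0, 1]."""
    v = min(search_volume / 100_000, 1.0)      # query popularity
    p = min(matching_pages / 1_000_000, 1.0)   # pages satisfying the query
    o = min(avg_optimization, 1.0)             # degree of SEO in the results
    return (v + p + o) / 3

def ranking_layers(query_stats, low=0.3, high=0.6):
    """Pick which extra re-ranking layers (e.g. Hilltop-style checks) run."""
    score = competitiveness(**query_stats)
    if score >= high:
        return "full re-ranking"        # broad, competitive head term
    if score >= low:
        return "partial re-ranking"
    return "standard ranking only"      # specific long-tail query

print(ranking_layers({"search_volume": 90_000,
                      "matching_pages": 2_500_000,
                      "avg_optimization": 0.8}))  # -> full re-ranking
```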
As Marcia noted with regard to the "related to" suggestions she sees....
I think a guess that this was something phrase-based wouldn't be too far off the mark, and a suspicion that there's "query expansion" based on word sense disambiguation wouldn't be without foundation either.
Bingo! I don't think it stops just at the "related to" links at the bottom, though. I think it may also be entering into the order of organic results.
One of the strange things that I've seen in this #6 ranking shift is that a page that worked its way up in the rankings to #1 for [big red widgets], and then also eventually came to rank #1 for [red widgets], would drop down for the three-word phrase, but still stay up at the top for the two-word phrase. You could micro-analyze inbound anchor text and onpage factors, but this to me suggests something phrase-related might be going on.
Position 1 has not been recovered in every case. I see #4 relatively commonly, or bouncing between #1 and #4.
I'm also seeing some of the prior #1 positions dropping to organic #4, so they appear right below those nice news story image thumbnails (which are adding an extra result above organic #4). Seems that the #4 position is one Google is looking at, both with regard to news and with regard to local maps.
Warning: some serious jumping to conclusions here, although I won't claim I reached one. Heh, for all I know this is but an idea, and it might be WAY off, so consider the post just an attempt to stimulate the thread *grin*
Some of the below stuff is evident, I guess.
Remember this on SearchengineLand? An Insider's View of Google Universal Search. [searchengineland.com]
'k, read it again. Or just jump to that nice yellow chart.
Ask yourself ( especially knowing the screenshots of the AdWords application [blogoscoped.com] ) whether they also have 'verticals' for themes ( categories ) which they *could* combine using a variety of different emphasis when assembling the 'web search' results. When you've stopped guessing, just visit this page [google.com]. Note: this is for categorizing AdSense publishers' pages for AdWords users - to target by theme. Nowhere did I find the slightest official indication of them using it on web search... but...
So they have a not so clear user intent with a phrase that's very competitive ( which in my book means 'competitive in AdWords' otherwise why would Google care one tiny bit ). They have the technology to tell which verticals it would fit ( sure they do, they have theme based targeting in AdWords/AdSense ). They can also tell what the CTR is for different verticals... ie. not the popularity of single URLs but rather... popularity of entire themes/sub categories. They can also see what the most common 'second' searches are and what additional words to the phrase would define which intent. ( remember the AOL leak? people start from broad to narrow - not that we didn't know that ).
They also have search volumes, trends...
...and now they have implemented Universal Search on these SERPs.
You know, the Universal Search that was MEANT to combine different verticals based on 'likely user intent'.
...
Thank you for your time,
I hope I've made at least some of you people confused...
This is what I said [webmasterworld.com] on the original thread about this probably being phrase based.
IF ( all-caps ) this is the case, AND ( all-caps ) your site can be categorized into multiple themes, ie. verticals... adding words that ought to be present alongside the 'big money phrase' can make you compete in the universal search environment.
On the other hand, if a site is ONE THEME only for this kind of categorization, and when user intent is unlikely to be commercial ( there's another aspect to who has what intent with re-ranking retail sites but *cough* uhm... we won't go there ), let's assume Google didn't want pages with certain themes up at the top 3 / top 5 because MOST people weren't looking for them. I mean, this would translate as "those who did look for them will find them as long as they're top 10 anyway". It's only natural that well-SEO'd sites ( with ROI in mind ) will be higher in the SERPs, even if they discuss a less broad aspect of a generic theme, right?
Say, for 'skiing', the most commonly clicked results might fall into the vertical 'vacation destinations'. Let's use the impossible scenario that Marcia at one point had to optimize a skiing site and, as expected, due to the above-mentioned circumstances *smirk*, the pages lack depth; they fail to discuss things that are usually associated with skiing. However, Marcia does a great job of building links and optimizing the page for 'skiing'. IF Universal Search touches the user intent factor, the site that was previously #1 for skiing could fall in the rankings, even though CTR and bounce rates are OK, simply because Google finds the page lacking compared to the word sets associated with the most popular themes. Meaning they feel their users are missing info they were eager to find ( reading their minds, as this is but a guess based on their overly generic query + AdWords data ), and make Marcia's skiing site slip, probably even below the Universal Search box... down to at least #4.
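To pin down the shape of that guess, here's a toy sketch of the demotion being described: score a page's vocabulary against the word set of the theme most users want, and push shallow pages below the Universal Search box. The theme table, the coverage floor, and the sample page are all made up.

```python
# Toy sketch of the guessed-at demotion: compare a page's vocabulary
# with the word set of the theme users most often want for a broad
# query, and push the page down if coverage is too thin. The theme
# data, threshold, and sample page are invented for illustration.

# Word set users associate with the most-clicked theme for [skiing]
THEME_VOCAB = {
    "vacation destinations": {"resort", "lodging", "lift", "passes", "snow",
                              "season", "trails", "rentals"},
}

def theme_coverage(page_terms, theme):
    """Fraction of a theme's vocabulary that the page actually covers."""
    vocab = THEME_VOCAB[theme]
    return len(page_terms & vocab) / len(vocab)

def rerank(position, page_terms, dominant_theme, floor=0.4, slot=6):
    """If the page lacks depth on the dominant theme, demote it below
    the Universal Search box - here, straight to the infamous slot #6."""
    if position <= 5 and theme_coverage(page_terms, dominant_theme) < floor:
        return slot
    return position

# A well-linked but shallow skiing page: great anchors, little depth
marcias_page = {"skiing", "ski", "best", "links", "resources"}
print(rerank(1, marcias_page, "vacation destinations"))  # -> 6
```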
Again, this is not some conclusion, just 'where I'm at' with my guesswork here.
But lemme say that in my experience trust thresholds ( what newbies still call the sandbox ) are tied to AdWords competition and not search volume. The two are of course almost parallel.
...
*hehee*
[tinfoil hat off]
... was just playing with the thought. didn't mean it to take this long...
No time to write another lengthy post
*ack*
*runs off*
Now if only they'll admit they also made a mistake with the 950 penalty...!
On a side note, the Plex has also admitted mistakes in recent months with AdSense.
Too many big mistakes going on for a $500+/share corp!
Some of these top-level execs are in over their heads.
p/g
Too many big mistakes going on for a $500+/share corp! Some of these top-level execs are in over their heads.
Call me a skeptic, but I doubt if the top-level execs are personally tweaking filters or fine-tuning algorithms.
calling it (and continuing to believe) when there were many skeptics
Yesterday I had a chance to look at another SEO's collection of data for rankings that changed in December. There was a MAJOR clump that had all gone to position #6 - far beyond expected statistical deviation.
Some of these tracked query terms were relatively far down the long tail. But even there, the statistics were very clear. These stats looked at many hundreds of key search terms, taken from a sizable number of domains. Over 60% of the terms that went to lower positions in December went precisely to position #6.
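For anyone wanting to run the same check on their own tracking data, here's a rough way to compute that statistic. The sample tuples are invented; under a naive null model where a demoted term lands anywhere in positions 2-10 with equal chance, only about 1 in 9 (~11%) should hit #6 - which is what makes 60% so striking.

```python
# Sanity check on the "over 60% landed exactly at #6" observation.
# The sample data is invented; plug in real (term, old_pos, new_pos)
# rows from your own rank tracking.

tracked = [("blue widgets", 1, 6), ("widget repair", 2, 6),
           ("buy widgets", 3, 6), ("widget store", 1, 4),
           ("cheap widgets", 2, 6)]

demoted = [(t, old, new) for t, old, new in tracked if new > old]
to_six = sum(1 for _, _, new in demoted if new == 6)

share = to_six / len(demoted)
print(f"{to_six}/{len(demoted)} demoted terms went to #6 ({share:.0%})")
print("expected under a uniform drop to positions 2-10: ~11%")
```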
It will be interesting to see how rankings for those same queries migrate now.
We did a shotgun blast as well, and all the new content - 750 new pages - pulled us right out. Fortunately I have a large staff that can crank out some web pages when needed. On December 26th tedster announced it; I read it and identified that our sites were hit. Fortunately, it is a slow time of the year and I had the majority of my staff working. We peeled off a new site we were building and worked on adding new content for the phrases that were affected. By the time I announced our findings we had about 500 new web pages live, crawled and even cached.
Can I tell you what we did exactly to pull out? No, but I can tell you that all the content we added did pull us out and made our site a bit more "universal".
Since Google's doing a rollback
not doing a rollback but rather making the edges a bit more fuzzy
exactly my thoughts. The script was buggy because it sent everything to the same position, the #6 penalty box. ( yeah, sorry 'filterbox' just didn't sound right )
Who would have noticed this script if its effects hadn't been uniform?
They admitted their mistake... of letting us know.
...
Why, when there is absolutely zero evidence to support the idea, would someone assert "this was done for a reason"? Why are the FUD-powered black helicopters warming up now?
Why, despite the mountains of evidence to the contrary, do people think Google NEVER EVER makes a mistake and that this buggy, rumbling-bumbling search engine is exactly precisely the thing they want it to be?
The changes they make might not always help but there is certainly a reason behind it.
Define help.
The changes might hurt some webmasters while at the same time helping other webmasters who moved up, so depending on who you ask, it certainly helped someone - except those whining that they got moved down.
C'mon. Why did Google change its algorithm last year to allow all those hacked .edu domains to rise in the rankings?
Obviously they did not. The idea that actions they take NEVER have unintended consequences is just not sensible.
I believe that what we've seen is a second 'due diligence' test. Essentially a mini-algorithm to provide a second check on certain results.
Pass one: find the result set, using the standard algorithm.
Pass two: for each of the first five results, apply a due diligence test.
Where a result in the first five under the due diligence test fails to meet the threshold, then that site is demoted out of results 1-5.
The result is that results 1-5 have passed due diligence, and result 6 might well be a previously 1-5 ranking site which failed due diligence and is now demoted directly to slot 6. I guess that if two sites failed due diligence on one term, then one would be 6 and the next 7.
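Here's a short sketch of that two-pass pipeline, just to make the mechanics explicit. The scoring function is a placeholder - the theory says nothing about what the actual due diligence check is - but the demotion behavior matches the description: the first failure lands at exactly #6, the next at #7.

```python
# Sketch of the two-pass "due diligence" idea described above. The
# scoring function is a placeholder; the point is only the shape of
# the pipeline: rank normally, re-test candidates for the top five,
# and park failures in the first slots after the protected range.

def due_diligence_score(site):
    # Placeholder for some higher-level spam/manipulation check.
    # Here it's just a pre-computed attribute for illustration.
    return site["trust"]

def rerank(results, threshold=0.5, protected=5):
    passed, failed, queue = [], [], list(results)
    # Fill positions 1-5 only with sites that pass due diligence.
    while queue and len(passed) < protected:
        site = queue.pop(0)
        (passed if due_diligence_score(site) >= threshold else failed).append(site)
    # First failure lands at #6, a second at #7, then everything else.
    return passed + failed + queue

serps = [{"url": f"site{i}", "trust": t}
         for i, t in enumerate([0.9, 0.3, 0.8, 0.7, 0.6, 0.5, 0.4], start=1)]
print([s["url"] for s in rerank(serps)])
# -> ['site1', 'site3', 'site4', 'site5', 'site6', 'site2', 'site7']
```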
As for what the due diligence test would be: I've not seen enough examples of affected sites to come up with a good answer. I would hazard that it is focused on higher-level checks for spam and seeks to ask the question "did this site get this ranking by artificially manipulating the results?". Manually reviewed sites may be excepted.
This new 'feature' cannot be seen as easily now and may actually be rolled back, however I'm sure it'll be revisiting us at some point in the near future in a modified form which is not so easy to see; most likely through the 1-5 range being more variable (e.g. 1-10 for competitive terms, 1-8 for less competitive, 1-5 for non-commercial, 1-3 for low-competition...)
Falls apart right there, but more to the point...
You are suggesting this was NEW. Why wouldn't this be just as likely to be something old? Google lowers the scores of sites and pages all the time, so why wouldn't someone assume this was a cancerous version of one of Google's existing penalties, most obviously the -30?
There is no evidence to suggest this was Google deliberately trying something new, and considerably more to suggest that something already in the sauce, thought to be benign, accidentally mutated. A deodorant can is benign on its own, but if you add heat it explodes. If you didn't know that, how would you know to beware of it?
I have only one (2 word) term affected. That term:
1. Has "Searches related to:" at the bottom of the results page.
2. Is the biggest $ AdWords term by far in our market. In fact, over half of the searches for things related to our product are for this two-word term.
3. We were at #1 for this term for 2 years before the Florida update and then have been at #1 since, until we were hit by this effect.
4. Add another non-joining word to it and we go back to #1 or #2.
5. We have retained our top slots for our target 3 word terms.
6. One of the words in the two word term was identified as causing the problem during the Florida aftermath. Googleguy looked at the issue and we went back to #1. I strongly believe/know that the issue was one of disambiguation.
7. I've recently reported a few instances of inappropriate broad-match AdWords ads appearing for this term.
In my opinion they have targeted big-volume terms and implemented some form of phrase-based element into the algorithm.
The term in question takes the form:
Thing Service
Thing is a potentially ambiguous word.
Service can be applied to any number of different things.
When you join Thing & Service together in a search term, the amount of ambiguity is massively reduced. If they are looking at the phrase, then this changes the balance between these two words and reduces the effect of the semantically close words - the ones that occur on some pages and used to help Google disambiguate the "Thing" word.
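A toy contrast of the two scoring styles Sid is describing - word-plus-related-terms versus exact phrase. All the weights, the RELATED table, and both sample pages are invented; the point is only that once the exact phrase dominates, the semantically close words stop pulling their weight:

```python
# Toy contrast between the two scoring styles described above. Under
# word-level scoring, semantically close words help disambiguate the
# ambiguous "thing"; under phrase-level scoring the exact phrase swamps
# them, so a phrase-stuffed page can outrank a semantically rich one.

RELATED = {"thing": {"gadget", "device", "widget"}}  # disambiguating terms

def word_score(page_text, query):
    words = page_text.lower().split()
    score = float(sum(words.count(w) for w in query.split()))
    for w in query.split():  # credit for semantically close words
        score += sum(words.count(r) for r in RELATED.get(w, ()))
    return score

def phrase_score(page_text, query):
    return 3 * page_text.lower().count(query)  # exact phrase dominates

rich = "our thing service covers every gadget device and widget a thing can need"
thin = "thing service thing service"

for label, page in [("semantically rich:", rich), ("phrase-stuffed: ", thin)]:
    print(label, word_score(page, "thing service"), phrase_score(page, "thing service"))
# word-level: the rich page wins (6.0 vs 4.0)
# phrase-level: the stuffed page wins (6 vs 3)
```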
FWIW I think that this is a retrograde step, as it is easier to spam. Before, you had to have semantically rich pages on the subject; now you just need the phrase in the right places, at the right degree of repetition and prominence, and you will find yourself at the top.
In theory.
Cheers
Sid
[edited by: Hissingsid at 10:24 am (utc) on Jan. 31, 2008]
Question -- is it the same for the singular & plural two-word term on your site? (realizing that, without knowing the term in context, this might not make sense)
My experience is somewhat similar. We have 'related searches' for both the singular and plural (location widget / location widgets), but the #6 jumble only hit the plural.
Google shows 800k competing pages for the singular, 600k for the plural, but I am certain the plural draws higher PPC bids, FWIW.
Once the #6 was lifted, we bounced right back to #1/#2 for the plural and singular, but that was short lived -- currently at #4/#5, with some questionable sites above us that are OBVIOUSLY paid link driven, with no attempt to hide it.
I think whatever it is, it seems to still be in effect, but no longer with a hard #6 result -- the serps float a bit more.
Kyle
just ignore the overly modest 'disclaimers' and please do it.
- yeah, I'm so modest it hurts.
...
no, really, try applying the logic explained in there to your sites' content and rankings. don't worry, it's easier done than said.
I want to know whether I give too much credit to Google's in-house communication. I mean these things are all technologies they actually *use*, it's just that they were developed for different departments.
[edited by: Miamacs at 5:43 pm (utc) on Jan. 31, 2008]
Some of these tracked query terms were relatively far down the long tail. But even there, the statistics were very clear. These stats looked at many hundreds of key search terms, taken from a sizable number of domains. Over 60% of the terms that went to lower positions in December went precisely to position #6.
This is exactly what I saw in our statistics. We had dozens of terms (head to tail) that went precisely to position #6. They are now back in their previous ranges.
The reminder from this "mistake" is that Google has control knobs that can dramatically impact the traffic they send to a site, and they are willing to twist those knobs willy-nilly.
Don't build a business that depends on Google traffic (organic or PPC).
In one case, for example, the URL had been holding at #1 for more than a year. After being released from #6, it went first to #5, then to #4, and today it's at #2. For this query, a new "Product search box" has appeared at the top, creating an "eleventh result".
Another case went from a long time #1 to #6 and then directly to #2 in one jump. But still others have gone from #6 directly to #1.
If the ranking shifts had behaved like this from the beginning, instead of sending so many urls to exactly #6, there would have been a lot fewer reports of something funny/new going on.
[edited by: tedster at 12:50 am (utc) on Feb. 3, 2008]
http://sphinn.com/story/24687 - Matt Cutts: "In general if you think a site might have a penalty (perhaps from past behavior) and you think the site is clean presently, you can do a reconsideration request in our webmaster console to ask Google to take another look at the site."
This implies that sites dropped across the board. But it also implies Google was looking to select criteria that were intended to drop "some" sites.
What are the things that Google is looking to drop sites for? This is what I'm interested to know, since it heralds a new twist in Google's filter intentions.