This 77 message thread spans 3 pages.
|First it was Position 6 - Now Position 4 Gets Strange|
The result in position 4 has been rotating between three different firms all day for our industry's top value keyphrase. All other listings above and below remain constant.
|"You contradicted and answered your own statement" |
Um, no I didn't. If you put your underwear in your fourth drawer, and you put your gun in your fourth drawer too, that doesn't make your gun a pair of underwear.
Um, duh - yes, you did! Of course you contradicted yourself, steveb. If the gun and the underwear were both in the fourth drawer, that would be first order co-occurrence. It's an elementary syllogism:
define syllogism [google.com]
It's when there's second order (or higher order) co-occurrence that the beast starts to grow hair. But the #6, or in this case the #4, sweet spot consistently turning up for corresponding numerical, yet variable origin-based factors and relevancy-based scenarios, can't just be phumphkered away.
[edited by: Marcia at 11:40 am (utc) on May 8, 2008]
Google #4 vs. #6 Positioning. Here’s a similar situation. We’ve two critical key phrases.
Search w/ phrase-1. Results position is #7 w/ Adwords at top of page.
Immediately search w/ phrase-2. Results position is #3 w/ Adwords at top of page.
Immediately search w/ phrase-1 again. Results position is #3 w/ NO Adwords at top of page.
Immediately search w/ phrase-2 again. Results position is #3 w/ NO Adwords at top of page.
Immediately search w/ phrase-1 again. Results position is #3 w/ NO Adwords at top of page.
Immediately thereafter, regardless of which phrase I use, we're #3, which I'm not complaining about, but it's interesting that I have to enter phrase-2 before phrase-1 moves up in the SERPs.
Clear history, cache etc. Open a new IE and get the exact same results as described above.
With respect to phrase-1 and phrase-2, the number of results in each of these searches remains consistent.
One interesting footnote is that I can turn right around, search phrase-1 and/or phrase-2 in FF and don't get this behavior.
|The result in position 4 has been rotating between three different firms all day for our industry's top value keyphrase. All other listings above and below remain constant. |
This is exactly what I'm also seeing. The #4 slot is NOT being assigned to a totally random site but is rotating between 3 (sometimes 4) sites for that position, and it's those same sites that keep cycling before being pushed back down 10 to 20 positions.
I'm one of those rotating, originally from a #5 position, and the trouble is that nicely honed bounce rates are creeping up as the home page is dumped (roughly a 10% increase since this started about a week ago). So I'm struggling to see how using this #4 position as an experimental slot to incorporate analytics can have an overall positive effect.
I don't think it is just position four. In two SERPs I watch closely, the switching in and out has happened on several different positions on the first page of results.
Also, it seems to be something Google "switches" on and off.
In the two serps I referred to above (two top searches for a niche), they started with one, switched results in and out for a week or two, then that settled down for a while.
Then they started on the second one. Results on that still switching.
Also, I have sometimes thought that Google is trying out very slightly off-topic pages - pages that wouldn't normally get to the first page - as if they were testing whether searchers might want something slightly different. E.g. a mini widget site, when the search is for widget.
Neither SERP has videos or photos showing.
But IMHO something new is definitely going on that didn't used to happen.
This may just be a #4 coincidence but I'll report it anyway in case it chimes with anyone.
I have a secondary site that was slumbering around #20 on .co.uk and was about 10 further back on .com.
On Tuesday in an attempt to try to get it doing better on Yahoo (separate subject) I removed h1 tags and the content between them from all pages. These H1 tags had the search term I'm targeting on every page.
Today on google.co.uk this site is at #4 but on .com it is at #67. I've checked this using a DC checking tool and this confirms the result.
The site has only 30 pages and just 15 backlinks (on Yahoo site-explorer), and these are from just 9 domains. Apart from the h1 headers it has not been particularly optimised. It's just a site I was keeping on the back burner in case I needed it.
The domain name includes both words in the term but in the opposite order and one of them is a plural.
I'm noticing this for a few of my industry MAJOR keywords and unfortunately I seem to be the guinea pig. At first I thought it was related to the time of day, but after tracking for a bit it seems to happen at any given time during the day. However, I don't have enough time to track this hour by hour, every day. When the wheel stops, if it stops, I hope I land on top!
It's position 5 on the results page I watch the most. And what has been popping in there is very interesting. One is an edu page that matches in terms of topic but hasn't been updated since 2002. There are not many backlinks and half look very shady. It makes absolutely no sense that a page like this would rank at #5 even briefly.
The other page that has been there a lot is a nice enough site, with decent backlinks but not that many of them.
The third one that is in position #5 part of the time is the homepage of a well known and popular site in the field with plentiful backlinks. I'd say it is the site that belongs in that position.
I don't know if this info helps solve the puzzle.
"The fact that you keep putting things in the fourth drawer is a pattern."
Assuming it is the same pattern is silly. So let's forget the tangent. Even if it is a fact that the fourth slot is an experimental slot, it is totally ludicrous to suggest things in the same drawer were put there for the same reasons, or even in the same thought universe. You don't put a gun in the sock drawer for the same reasons you put socks in there, so the assumption that the reasoning must be the same is just folly.
"I repeat, why isn't every single top 10 SERP result being "tested" at the #4 spot?"
So far as I can see, NONE are. When the #4 phenomenon happens, it is only pages that are not in the top ten otherwise that are put there. And of course that is the way that the phenomenon makes some sense as an experiment.
|When the #4 phenomenon happens, it is only pages that are not in the top ten otherwise that are put there |
You could have a point but as I said I occupied a #5 slot prior to this rotation and had done since the middle of last year.
What cracks me up is how people on one hand say "Goog's engineers are the smartest people on the planet" and then on the other hand act like they are complete morons.
Let's look at the arguments (with logic PLEASE)
- Google is a master at testing.
- ANYONE who understands "testing" knows that multivariate(able) testing is how you truly test things.
- Google is somehow "testing" only a certain number of sites at only ONE position - #4.
This is the most ridiculous argument I have ever heard.
IF Google was "testing" click-thru, analytics, WHATEVER, they would be testing in a completely different manner
(like they test EVERYTHING else, but somehow this situation is different?!)
For example - a very basic test would be to split the top 10 into 3 sections. Top 1-3, 4-6, 7-10 and the compare the click-thrus(or WHATEVER) of all those sites in each position individually and collectively, rotating all 3 sections individually and collectively as well.
But somehow, the "geniuses" at Goog are ONLY testing a few sites at the #4 spot which would take YEARS to get any actionable information?!
Sigh.... I'm trying to be nice, but this argument is quickly pushing my sarcasm, condescension, and smart-aleck buttons.
Claiming the world is flat simply because you haven't done the research that proves it's round does NOT convince me.
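For what it's worth, the "split the top 10 into sections" design described above is easy to sketch. The site names below are placeholders, and this is only an illustration of that test design, not anything Google is known to run:

```python
from itertools import permutations

# Sections 1-3, 4-6 and 7-10, rotated as whole blocks. All site names
# here are illustrative placeholders, not real results.
results = [f"site{i}" for i in range(1, 11)]
sections = [results[0:3], results[3:6], results[6:10]]

# Every block-level ordering of the three sections gives one test
# variant whose click-through could be compared against the others.
variants = [
    [url for block in order for url in block]
    for order in permutations(sections)
]
print(len(variants))  # 6 variants, far more signal than rotating one slot
```

Six block-level variants, each rotating every result, is the kind of design a serious tester would reach for before fixating on a single slot.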
So whitenight - what would be your take on what is happening here?
Please see my earlier posts about this being a side-effect of the US testing.
(Which, by the way, is foremost in Goog's minds with their new monetization for Youtube)
... ah... 'k if you're seeing this, let me know:
I've gone after a set of SERPs where
- main word is well known food
- a search for this single word will produce loads of 'related searches'
Checked those out, as I've kept suspecting Universal Search as the reason for sites rotating at #4 ( or at least its infrastructure with a new 'channel' injected ).
So... here is what I saw when going after keyword + mycity, then the related searches:
- term #1: kw + mycity
Universal Search at #4 ( no grey lines, no highlighting )
Google Maps, local business results
- term #2: kw + otherword
Universal Search from #4 to #6 ( highlighted as such )
non-Google 'picks', no thumbs, but the easy to identify Univ. layout
- term #3: online + kw
Universal Search at #4 ( again no special highlighting )
A news result I was expecting to see. 8 minutes into its lifespan.
- term #4: whatever, can't remember anymore
Universal Search from #4
video results and thumbnails, one is not YouTube.
... so far so good. All related searches had something injected on their SERPs at one point or another.
- term #5: kw + recipe i.e. [keyword recipe]
... nothing obvious. *stares*
Which is of course suspicious
I waited it out and re-loaded the SERP every other hour.
And as expected, the #4 result was rotated.
W/o my usual longwinded explanations, the #4 result was NOT optimized for this keyterm.
query between quotes and the above operators w/ quotes...
...would all show it on the second page or even lower.
Checking its title, the reasons are pretty obvious: the term is in the plural, as the page itself is about many "recipes".
And as expected
"keyword recipes" - plural
keyword recipes - again plural
would bring it up at #1 and #2.
OK, since this was but a single test, I'll need more ( data ) on this, but if this IS the case, it'd be in line with other reports here. Meaning, there 'could' be a new kind of 'channel' from which Google injects results onto the SERPs ( I'd call this Universal Search as it's the name of the infrastructure bypassing verticals with other, different relevance checks ), and that is: the winners of the related 'plural/singular' or 'other popular adjective' contest.
Again, this is far from proven, could be a coincidence, etc... but I didn't have much time to play with it any further. You could check your data/results too to eliminate/prove this theory.
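The singular/plural mismatch described above can be illustrated with a toy check. The title and the matching logic are invented for the example; real relevance scoring is obviously far more involved:

```python
# Given the title of the page rotated into #4, test which query form its
# words actually match. Purely illustrative string matching -- not a
# claim about how Google scores relevance.
def matches_query(title, query):
    words = set(title.lower().split())
    return all(term in words for term in query.lower().split())

title = "Best Keyword Recipes for Beginners"  # hypothetical #4 page title
print(matches_query(title, "keyword recipe"))   # False: singular misses
print(matches_query(title, "keyword recipes"))  # True: plural matches
```

That is the shape of the observation: the page is a poor exact match for the singular query it was injected into, but a strong match for the plural variant.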
I've seen this for the past week or so. The #4 position for a high comp single word phrase has been rotated between 3 sites in the niche.
The three positions being rotated in this SERP are #4, #9 and #10. All 3 are commercial/vendor sites. There is one other commercial/vendor site at #3 which is unaffected. Checking DCs I've seen a reasonably even split between one set of #4, #9 and #10 and another. And then a rotation will occur and the split goes between two other pairs of #4, #9 and #10.
I can only attest to the pattern I've seen, but the curious issue for me is that other positions have remained quiet while these 3 positions have rotated in a quite regular pattern. The one factor that seems to be consistent from observations in this thread is the #4 position.
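A rotation like this is easy to spot from a simple observation log. Here is a minimal sketch, with made-up domains and snapshots standing in for real rank-check data gathered across datacenters:

```python
from collections import Counter

# Simulated rank-check log: which domain held positions 4, 9 and 10 on
# successive checks. The domains and snapshots are invented; a real log
# would come from periodic rank checks against several datacenters.
observations = [
    {4: "site-a.com", 9: "site-b.com", 10: "site-c.com"},
    {4: "site-b.com", 9: "site-c.com", 10: "site-a.com"},
    {4: "site-c.com", 9: "site-a.com", 10: "site-b.com"},
    {4: "site-a.com", 9: "site-b.com", 10: "site-c.com"},
]

def rotation_report(observations, positions=(4, 9, 10)):
    """Count how often each domain held each watched position."""
    report = {pos: Counter() for pos in positions}
    for snapshot in observations:
        for pos in positions:
            report[pos][snapshot[pos]] += 1
    return report

report = rotation_report(observations)
print(sorted(report[4]))  # all three domains cycle through position 4
```

If the same few domains keep turning up in the counts for each watched slot, that's the "regular pattern" rather than random churn.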
Of course there could be some very solid reason why a search engine would choose to test a single position. Not sure why whitenight feels this is implausible - perhaps the #4 position has a high correlation with certain user behaviour that Google would like to isolate?
Positions 1 to 5 are fixed here for my search word.
Here is what I see
Positions 1-5 are fixed with always the same sites, which have the KW in the URL.
Position 6 is the one position where a site from a second page sometimes gets popped in there.... for a few hours (generally anywhere from 6-12 hrs)
What this looks like to me for my kw is that the first 1-5 positions are fixed because the url has the KW embedded in it.
The rest is all about relevancy, optimization and how the algo tests that particular word.
It doesn't take much to build a rotating algo so that no one can guess how to optimize their site, because no one knows the randomness.
While this could be frustrating for a webmaster - this is actually a great way to spam-proof.
PLUS.... who says that an algo needs to be applied the same exact way on every KW?
The algo is built with a wide lateral base of decision-making ability so as to flush out what goes above the fold and what goes below the fold. Based on the many decision branches that the algo permits, I think all that is certain is what goes ABOVE the fold (positions 1-5) and BELOW the fold (positions 6-10).
Since there are so many sites, so many ways to optimize, so many ways to have backlinks and on and on... the algo takes it all into consideration and performs the screens... therefore there is no single factor that you can isolate...
Maybe we should all be focusing on the behavior of 1-5 and 6-10 (or wherever the stable line of demarcation is for each KW you follow).
Maybe this thread should be about behavior of sites above the fold and below the fold... but.... the FOLD is where the algo has determined a high percentage of stability versus a lower percentage of stability.... for each KW the "fold" could be in any position based on the number of sites that qualify...
[edited by: Arctrust at 12:28 pm (utc) on May 9, 2008]
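The "rotating algo" idea above really is cheap to build. A toy sketch, entirely hypothetical and not a claim about Google's actual mechanism: pick the occupant of one slot deterministically from the keyword and a time window, so the rotation looks random from outside but replays identically inside.

```python
import hashlib

# Deterministic "random-looking" slot rotation: hash the keyword plus an
# hour bucket and use it to index into a small candidate pool. Keyword
# and pool are made up for illustration.
def slot_pick(keyword, candidates, hour_bucket):
    digest = hashlib.sha256(f"{keyword}:{hour_bucket}".encode()).hexdigest()
    return candidates[int(digest, 16) % len(candidates)]

pool = ["site-a", "site-b", "site-c"]
picks = [slot_pick("widget", pool, h) for h in range(6)]
print(picks)  # varies by hour bucket, but replays identically every run
```

From the outside this is indistinguishable from random shuffling, which is exactly the spam-proofing argument: observers can't reverse-engineer what triggers the swap.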
|Of course there could be some very solid reason why a search engine would choose to test a single position. Not sure why whitenight feels this is implausible - perhaps the #4 position has a high correlation with certain user behaviour that Google would like to isolate? |
It allows Google to test whether a result deserves to move up the ranks based on its clickability whilst still maintaining quality by having three steady options in the top 3 places. It is an optimal position for testing this as the number 4 position is often either just above or just below the fold (depending on PPC ads and browser addons).
I have wondered for a long time if clickability is in the mix.
I think the ones staying solid at the top are there because they are so much stronger than the others. So there may be 2 or may be 5 depending on how many are way ahead in the overall algo.
The most fascinating part is the pages being switched in and out on the top 10. The one that popped in for a while today was far more general than the keyword. For example if the keyword had been cars it was showing a page about cars, planes and trains.
The other thing I'm noticing is that the backlinks to these pages vary wildly. Some are very solid in terms of many good related links, others have so few backlinks as to be funny. This is why I think they are testing something like how often the link is clicked on compared to the position, or perhaps even how often the searcher returns immediately to the search page instead of staying on the site.
I'm loving the different theories and near-flaming of them as well in this thread, although at the end of the day collectively it does not make us much wiser.
|It allows Google to test whether a result deserves to move up the ranks based on it's clickability whilst still maintaining quality by having three steady options in the top 3 places. It is an optimal position for testing this as the number 4 positions is often either just above or just below the fold (depending on PPC ads and browser addons). |
Personally, those are the lines I've been thinking along. With an average (browser) resolution of 1024x768, either by set desktop resolution and fullscreen browser or by clients using higher resolutions with non-maximized browser windows, it would make sense to test #4 as it is most likely to be the last result just above the fold. As for why others would be seeing the same phenomenon at positions #6, #9 and #10 (even #3), that could perhaps also be attributed to above-the-fold conditions/assumptions specific to the client.
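The fold arithmetic behind that theory is easy to play with. In this sketch every pixel figure is a guessed assumption for illustration, not a measured value:

```python
# Rough above-the-fold estimate. Assumptions (all guesses): ~250px of
# browser chrome plus Google's header, ~90px per organic result, and
# ~85px per PPC ad shown above the results.
def results_above_fold(viewport_height, ads_on_top=0,
                       header_px=250, result_px=90, ad_px=85):
    usable = viewport_height - header_px - ads_on_top * ad_px
    return max(usable // result_px, 0)

# At 768px tall with no ads, roughly five results are visible, putting
# position 4 just above the fold; three top ads push the fold up to #2.
print(results_above_fold(768))                 # 5
print(results_above_fold(768, ads_on_top=3))   # 2
```

Under these (guessed) numbers, #4 sits right at the visibility boundary with no ads present, and well below it once ads appear, which is consistent with the "just above or just below the fold" framing.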
Or i could be wildly off, who knows ^^
Something that I think helps to reinforce the idea that G is testing site clickability for position 4 is the new benchmarking feature in G Analytics. You can compare your site's performance against cumulative data from other sites in your sector considered the same size as yours. This would show how your site performs compared to others over specific periods (when you're at #4 and when you're not). Above average compared to the benchmark may ultimately indicate that your site is more worthy of that position.
lol, is anyone going to actually TEST Miamacs' completely logical, completely ON-POINT theory
or are we just going to ignore it because it doesn't fit their "world-view"?
"Testing and Logic?
We don't need no stickin' testing and logic!"
useful comment as ever
|Miamacs: #4 is often the place where universal search will inject something it thinks to be very on topic |
Apropos of that, I've just seen for the first time a "Local business results" listing for my zip code in the #4 slot, complete with a map and two listings. That's the first time I've ever seen this for my main search term -- or for any search term, actually.
Ok, how's this for "usefulness"
Your theories are WRONG. Just flat out WRONG.
Miamacs figured it out cause he wasn't addicted to some pie-in-the-sky theory on "clickability"
Clickability = Chicken Bone Throwing 2.0 ... in my book.
Here's 3 (out of 100 reasons - available upon request depending on whatever ridiculous arguments I hear about 'clickability') why it is NOT analytic-testing...
1. In order for Google to test "clickability" LIVE, it would take an infrastructure update 2-3 times the size of "Big Daddy".
It won't be some "minor" update to the algo either.
It will be HUGE with all types of upheaval.
You won't be able to miss it.
That hasn't happened yet, so that rules out it being LIVE.
2. As I said above, any THOUGHT-OUT tester knows that testing a FEW sites at #4 for ANYTHING would take YEARS of "click-thrus" to get any actionable information.
There are BILLIONS of pages across TRILLIONS of (possibly infinite) keyword terms that would need to be tested (see point #1 about the massive infrastructure needed to implement click-algo-analytics).
3. If and WHEN, Goog starts testing this LIVE, you will hear ALL about it from Goog, MC, their PR department, et al. to anyone who will listen.
Cause it's a MAJOR leap forward in Search Engines Development and will drive their stock prices through the roof.
It's literally like landing a person on Mars.
Again, "useful" arguments #4-#100 available upon request when I hear Chicken-bone-throwing 2.0
Now go back and re-read Miamacs' posts 20 times as recommended.
It'll help YOUR business if you'll learn from it....
Miamacs, your assessment fits what I see on my end as well. Position 4 in some sectors seems to be assigned to an injection of Universal Search media. This is not seen on all data centers, so it could be that it is also in testing.
Another possibility is that certain factors or conditions might trigger a Universal Result to appear, sliding a given website at that position up or down to make room for the extra entry. The media may appear during some searches and not others, even for the same query, depending on whether or not the conditions fire the trigger (this is based on the assumption that the result is appended to the final export just before those results are released live).
I don't think there's such an extreme polarization of viewpoints here. Yes, Google is most likely using the infrastructure of Universal Search to place some URLs in position #4. Maybe the word "clickability" is a stumbling block in the discussion.
Consider this - the snippet team has long measured the performance of their changes. You might call that "clickability", but let's just call it performance if it helps to find some common ground. Google's main algo team also measures the performance of the URLs in the SERPs.
So yes, Miamacs has put forth a good operating theory: Google uses the US infrastructure to try out other urls in that #4 position and then they can see if they perform up to position #4 standards. They apparently do this when the usual "images, news, blogs, etc" are not appropriate.
Here's a real world example. One of my client sites had been ranking at #11 forever. We couldn't seem to budge, no matter what we did, and of course we feel "they are worthy". Then last week, from the NYC area (and not Boston, LA, SF, or Atlanta) their url went to #4. It stayed for a week just from NYC - and just today it is in #4 spot from all locations.
Now we weren't seeing three different urls rotating through the #4 spot in this case, just a sudden promotion for one url - effective last week from NYC only, and now from any US location.
Nice well-rounded response to the potential cause of what we are observing, unlike some of the other commentary above.
The one oddity that I've seen is that results within positions 4-10 have been shuffling and not just shifting. It's all very odd, and very difficult to discern a pattern in this shuffling. Changes often occur intra-day, with the occasional site swapped in and out.
I can't help but feel that they could test results at #4 without making such wholesale and frequent changes to subsequent results on the page. Then again, my observations are limited. I only hope that we can actually figure out what is really going on sooner rather than later...
This thread, and my client going to #4 today have been tickling something in my memory. I just verified that the client who just popped into position #4 was also there for a short time four months ago, right before they fell back to #11.
We've been working with them to improve page load speed, using insights from the YSlow tool. The new improvements just went live last week, and the page load speed went from around 11 seconds to 3 seconds. Three days after the improved code went live was when we first saw the #4 return in NYC.
Now I'm thinking that this url might have been "stuck" at #11 because of the poor user experience. The timing of this ranking change is suggestive, at any rate.
While threads like these always go into black-helicopter land, this one seems awfully straightforward: some of the time, pages are placed in the #4 slot that have some issue with them (and not necessarily a negative one). Then it looks like sometimes the pages are allowed to rank where they should; other times the pages are penalized to some other ranking, be it #11 or not ranking at all.
Google has a lot of things they score. It's not odd at all to think that if a page has one factor that runs counter to the scoring of all the other items, they might flag those pages and see how they respond either in a specific slot, or without the one problematic negative added to the score.
The best part is Google seems to have identified some scoring negatives that are problems AS problems, and deserving pages are getting another look. That's nice.
A couple of years ago we would have had to wait a month or two to see changes, if they made a mistake it took months to put right. They seem to be creating bugs faster and swatting them much faster these days.
I'm lucky enough to hold down the number 1 spot for a competitive term in AU for quite a while now (months). It's been interesting to watch the top 3 results shift around on a daily basis although position #4? It just looks random to me. We go from 1 to 3 back to 1 then 2 back stable at 1 now - hang on refresh nahhhh it's changed again. Random. I need some more backlinks, relative of course. I wonder what the guy down the road sees?
I've been working to stabilize a site that keeps cycling from the second page, up to position #4 and then sliding back down to position 12 - often in one step but sometimes in several steps. The changes are very rapid, often within one day. By the way, this is a competitive and highly monetized phrase.
One factor I started studying is the size of the competition websites. These results seem to show that size of the domain may be a ranking factor. Also, before the cycling began, the site was very stably in position #4.
|URLS INDEXED |
Position 01 -- 1,570
Position 02 -- 12,100
Position 03 -- 99,800
Position 04 -- 87,000
Position 05 -- 27,700
Position 06 -- 22,100
Position 07 -- 242,000
Position 08 -- 1,630
Position 09 -- 124
Position 10 -- 689
Position 11 -- 125
Position 12 -- 1,040
Position 13 -- 976
Position 14 -- 13,600
Position 15 -- 401
Position 16 -- 1,660
Position 17 -- 10,400
Position 18 -- 1,490
Position 19 -- 3,750
Position 20 -- 686
|AVERAGE URLS INDEXED |
Average (#01-05) -- 45,634
Average (#06-10) -- 53,309
Average (#11-15) -- 3,228
Average (#16-20) -- 3,597
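As a sanity check, those bucket averages can be recomputed straight from the table above (they match, with the second bucket really being positions 6-10):

```python
# Indexed-URL counts per position, transcribed from the table above.
indexed = {
    1: 1570, 2: 12100, 3: 99800, 4: 87000, 5: 27700,
    6: 22100, 7: 242000, 8: 1630, 9: 124, 10: 689,
    11: 125, 12: 1040, 13: 976, 14: 13600, 15: 401,
    16: 1660, 17: 10400, 18: 1490, 19: 3750, 20: 686,
}

def bucket_average(lo, hi):
    """Mean indexed-URL count for positions lo..hi inclusive."""
    vals = [indexed[p] for p in range(lo, hi + 1)]
    return sum(vals) / len(vals)

print(bucket_average(1, 5))    # 45634.0
print(bucket_average(6, 10))   # 53308.6
print(bucket_average(11, 15))  # 3228.4
print(bucket_average(16, 20))  # 3597.2
```

The drop of more than an order of magnitude between the top-10 buckets and positions 11-20 is what makes site size look like a candidate factor here.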
The anomaly in position #1 is probably due to having backlinks that go way past any of the other sites.
So what do you think? Is size of site a potentially big ranking factor? Then other factors would help Google decide which of the smaller sites deserve a shot at position #4 (or wherever they are currently testing.)
Also note - in the time it took me to write this post, position #4 went from being a website to being News Results (Universal Search). Sometimes in the past, News Results have appeared at #10, but usually they do not appear at all.