
Google SEO News and Discussion Forum

Google & Traffic Shaping - a hidden method to the quality madness?
Shaddows
msg:4222998 - 9:35 pm on Oct 27, 2010 (gmt 0)

On the Monthly Thread [webmasterworld.com], I've posted about my experience of what appeared to me to be an extended period of testing.
Using a multivariate dataset - across a range of different keyphrases, user intents and user types - Google exposed our site in marginal but significant ways (putting us up one place, dropping Universal search, placing us above or below Shopping results, etc.). They did this with (at least) four separate sets.

Over the course of 6 weeks, we experienced a slow churn of referrals, with four discrete datasets:
BIG uplift in traffic since 1st SEPT (20% above trend) with a corresponding drop in conversion rate, so sales were broadly static (on trend). Referrals shifted at precisely the same time. No visible change in ranking.

A referral shift on 16th Sept, another on 6th October - both traffic- and conversion-neutral (relative to the 1st Sept).

Then the biggy. 12th October, huge referral shift. Traffic-neutral, but conversions back at the pre-Sept level. In other words, we are now 20% up on sales. The referrals are NOT the same as (or even particularly similar to) the pre-September set.

Does anyone else have any experience of what appears to be purposeful traffic shaping - with a definitive end result?

In the past, I've shied away from any theory that requires a "my site is special" mindset, but I am convinced this was outside the normal algo development cycle. My personal point of view is that Google is aggressively profiling users and sites, and trying to match the two within a specific context. Any takers?

I don't want this to become a "Google has no purpose, all my traffic is going to SPAM sites" free-for-all. Please post with qualitative data or some meta-analysis.

 

tedster
msg:4223029 - 10:23 pm on Oct 27, 2010 (gmt 0)

I think you're looking in a very good direction - one that I've also been playing with for a while.

Your idea of "traffic shaping" is a kind of umbrella or meta-idea that could absorb many topics that have been cropping up for a while, including:

1. How can traffic stay stable, even though conversions come in spurts and then die away?

2. Why do I see sudden changes in country sources for traffic, even though total traffic stays level?

3. Why do the UK SERPs have so many non-UK results at times?

4. Is Google throttling my traffic? Why doesn't it go up no matter what I do?

I'd like to go back at least two years to the time when Google began to focus on user intention and website types. We noticed this "intention engine" development as it applied to types of queries.

We talked about three very big buckets of intention - "informational, navigational and transactional" - although I'm sure Google has a much more refined set of user intention buckets than this. Another user intention could be "locational". There's little doubt that some queries have an implied geographic component.

Here's the missing piece in that analysis. In order to tailor specific SERPs to specific user intentions, Google must also assign each website, and possibly each URL, to a specific taxonomy. Only then would they understand which type of page should be returned to which type of user intention.

It seems to me that Google has cranked up some kind of statistical testing - one that tries out a given page against different types of query intentions, and then takes note of the results. After a while, they could discover which intention taxonomy works best and then make a more stable assignment of website type - and some pages might have more than one type.
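
To make the idea concrete, here is a toy sketch of that two-sided matching: a query scored into intent buckets, then matched against a page's assumed profile. This is purely illustrative - the buckets, cue words and scoring are invented, not Google's actual system.

    # Illustrative only: toy intent bucketing and query/page matching.
    INTENT_CUES = {
        "transactional": {"buy", "cheap", "price", "order", "discount"},
        "informational": {"how", "what", "why", "history", "guide"},
        "navigational": {"login", "homepage", "official"},
        "locational": {"near", "local", "nearby"},
    }

    def classify_query(query):
        """Score a query against each intent bucket by cue-word overlap."""
        words = set(query.lower().split())
        scores = {intent: len(words & cues) for intent, cues in INTENT_CUES.items()}
        total = sum(scores.values()) or 1
        return {intent: s / total for intent, s in scores.items()}

    def match_score(query_profile, page_profile):
        """Crude similarity between a query's intent mix and a page's profile."""
        return sum(query_profile[k] * page_profile.get(k, 0.0) for k in query_profile)

    page = {"informational": 0.8, "transactional": 0.2}  # hypothetical page taxonomy
    print(match_score(classify_query("how to buy cheap blue widgets"), page))

In a scheme like this, a page whose profile doesn't overlap the query's dominant bucket scores near zero no matter how strong its other signals - which is the mismatch effect discussed later in the thread.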

Yes, this thread is not the place for complaining.
Serious analysis only, please!

tedster
msg:4223041 - 10:34 pm on Oct 27, 2010 (gmt 0)

Along these lines, a week ago Bill Slawski posted an article about a Google patent: "Improved Web Page Classification from Google for Rankings and Personalized Search" [seobythesea.com]. The patent talks about Query Profiles and Document Profiles, along with User Profiles.

The kind of testing that Shaddows is thinking about could be a method for statistically testing whether the match between those profiles is accurate.

dvduval
msg:4223077 - 12:20 am on Oct 28, 2010 (gmt 0)

To me "profiling" could prove to be a slippery slope with google, and recursiveness of profiling formulas could backfire. If you start to show a different set of data to a group, the group can start to exhibit different behavior. But you are also dealing sets and subsets of data. The different behaviors will then start to have a recursive effect on the overall algorithm.

It's hard to imagine there would not be some "polarization" of search results. For example, if someone lives in a more "Republican" area, might they get more "Republican" results? Or if someone lives in an area where one ethnicity or language is dominant, might they in essence receive information that makes assumptions about their race or national origin? (at least subtly?)

I can understand changing results for users based on them wanting to see something in their language, but beyond that I want to see results that are reflective of the overall picture, or at least have a choice in the matter. Otherwise I feel a sense of discrimination in a very strange sort of way. This could backfire on Google.

tedster
msg:4223081 - 12:45 am on Oct 28, 2010 (gmt 0)

Good cautions, dvduval. I do think that as long as query profiling is in the mix - and it definitely is, BIGTIME - then they will catch and tweak any user dissatisfaction before it generates long-term effects.

I noticed a long while back that certain query terms would simply not show transactional sites on page one, no matter how strong the page or domain, and no matter how much it seemed like the right audience for the content. Certainly, AdWords purchased for those searches were getting clicks and conversions.

After using up a good amount of resources trying to rank "money pages" for queries that Google had profiled as informational, I got the message. It became clear that a taxonomy mismatch of query intention against webpage classification would undermine every other aspect of the ranking algo - title tags, anchor text, backlink power, etc.

chrislloyd515
msg:4223104 - 1:44 am on Oct 28, 2010 (gmt 0)

Regarding the localization aspect of this debate: we have a significant amount of data indicating that this is the case and that Google is actively trying to profile sites. We have a large number of sites that all need to rank for localized terms.

I suspected something like this could be happening, but we want our sites to rank worldwide, not localized to a specific country/area, so we took two approaches.

The first approach was to try to remove anything that Google could use to tie us to a specific area, the idea being that they would simply not localize us at all (and hopefully we would be seen worldwide).

The second approach was to localize us to specific areas through various techniques, including putting the company address on some pages and client addresses on others (the client addresses were in the areas we were targeting; the company address mainly wasn't).

So essentially the first lot of domains can't be localized, the second lot (which is made up of sub groups with different addresses in different places on the site) can be localized.

A few headlines from the data so far:

1. On Google.com from the US, the localizable domains FAR FAR FAR outranked the non-localizable domains; essentially, group 1 never ranked at all without impractically large numbers of inbound links.
2. Both sets rank equally well on all other worldwide Google variants, including Google.com accessed from anywhere but the US, indicating that this isn't fully worldwide yet.
3. The domains that ranked best included the full zipcode of a client on the homepage. We tested different forms of address, and it seems the most important factor is the zipcode; if that is missing, the results are closer to non-localized than localized (see the sketch just after this list).
4. It also seems better to have just one address on the homepage. That makes sense, as it's more likely to be your own address if there's just one (even though it might not be!)
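
A hedged illustration of the signal point 3 describes - a crawler-side zipcode check. The regex and the "exactly one address" heuristic are invented for the example; nothing here is a documented Google mechanism.

    # Illustrative only: extract a lone US ZIP as a page-localization signal.
    import re

    ZIP_RE = re.compile(r"\b\d{5}(?:-\d{4})?\b")  # US ZIP or ZIP+4

    def localization_signal(homepage_text):
        """Return a single unambiguous ZIP if exactly one appears, else None."""
        zips = set(ZIP_RE.findall(homepage_text))
        if len(zips) == 1:
            return zips.pop()  # one address: plausibly the site's own location
        return None  # zero or several: ambiguous, treat as non-localized

    print(localization_signal("Acme Widgets, 123 Main St, Springfield, IL 62701"))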

So yes, in my mind Google is definitely trying to profile sites, and it will reward sites when it can if you're trying to rank for localized search terms.

Hope this is helpful, it certainly was for us!

Sgt_Kickaxe
msg:4223109 - 2:01 am on Oct 28, 2010 (gmt 0)

> BIG uplift in traffic since 1st SEPT (20% above trend) with a corresponding drop in conversion rate


Sounds like cherry-picking the most likely to convert traffic and replacing it with low quality traffic. I wonder where the "good" traffic went. I also wonder if it's purely being siphoned off for financial gain or if your site metrics dictate whether this happens to your site.

It might also just be bad luck but you're not the first to make that complaint.

scottsonline
msg:4223128 - 3:12 am on Oct 28, 2010 (gmt 0)

Of course throttling and shaping are part of the quality control. Think of it as the second nuclear key. You may be able to fool Google with paid links and other schemes that would long ago have allowed you to dominate the SERPs, but by making your traffic conform to a curve, it gives them more time to assess your site. I was pretty sure of this all along; I'm convinced of it now.

The only way you can game them long term now is to weasel your way into the traffic growth AND have good metrics at that point. Otherwise you rob Peter to pay Paul.

By design you can't go from 4500 to 45000 clicks in a day unless there is a viral component. You can't do it through picking up several top quality links because they will even out your traffic anyway and only slowly ramp it up.

True quality control, and a great idea to take care of the average black hatter. What they cannot combat yet is the very sophisticated link systems I'm seeing in October.

rros
msg:4223153 - 4:29 am on Oct 28, 2010 (gmt 0)

Are those sophisticated link systems you talk about based on links that appear to be really real votes - one-of-a-kind, dropped with gusto - or are they the result of a more intricate form of innocent participation that results in large numbers of site-wides?

Shaddows
msg:4223252 - 8:19 am on Oct 28, 2010 (gmt 0)

> BIG uplift in traffic since 1st SEPT (20% above trend) with a corresponding drop in conversion rate
> Sounds like cherry-picking the most likely to convert traffic and replacing it with low quality traffic. I wonder where the "good" traffic went. I also wonder if it's purely being siphoned off for financial gain or if your site metrics dictate whether this happens to your site.
> It might also just be bad luck but you're not the first to make that complaint.

No, that's not it at all. I'm not complaining, I'm just offering data up to the group. I'm pretty non-emotional when it comes to data.

Nor do I think cherry-picked traffic was siphoned off. On 1st Sept I got TOTALLY RE-PROFILED TRAFFIC. Different referral strings, different targets (obviously), different behavior once landed. And there was 20% more of it. Sales remained on-trend. There was ZERO business impact.

There were two more total re-profilings (new referrals, new targets), with the same +20% traffic, and the same ±0 sales. Still ZERO impact on the bottom line.

Then, the fifth exposure pattern in 6 weeks (pre-test, 3 test sets, then this one) arrived. The traffic is still up at +20%, but now converting at the same rate as before: +20% sales.

I'm trying to offer an alternative vision to the prevailing "Google is sending me zombie traffic" POV. What if Google has re-profiled your traffic and found it was 'happier' somewhere else? What if the algo test-program for your site found most people treated it like an info site?

The link tedster posted is brilliant - it's exactly the type of thing I'm feeling: a multivariate test-program simultaneously testing both a set of sites and classifications of user.

It's the kind of statistical data mining that should make your head hurt just to comprehend. To watch it happening, you need a solid grasp of your site metrics - not just the headline figure, but the drilled-down detail.

To come out the other end, I think you need a clear, compelling offering that sits inside a Google box - or at least serves a statistically identifiable [user-context] complex.

scottsonline: by making your traffic conform to a curve it gives them more time to assess your site

Sure, but that's not the same thing. I'm talking about a deliberate regime of testing, using A LOT of display-variables, with varying users, in varying contexts, against varying sites.
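
For anyone who wants to spot this kind of wholesale referral re-profiling in their own logs, here is one hedged sketch: compare each period's keyphrase distribution against the previous one and flag large shifts. The data shape, smoothing and threshold are all invented for the example.

    # Illustrative only: flag abrupt shifts in the search-referral mix.
    from collections import Counter
    from math import log2

    def _kl(p, q):
        return sum(p[k] * log2(p[k] / q[k]) for k in p if p[k] > 0)

    def js_divergence(a, b):
        """Jensen-Shannon divergence between two keyphrase frequency counts."""
        keys = set(a) | set(b)
        na, nb = sum(a.values()), sum(b.values())
        p = {k: (a[k] + 1e-9) / na for k in keys}  # smoothed relative frequencies
        q = {k: (b[k] + 1e-9) / nb for k in keys}
        m = {k: (p[k] + q[k]) / 2 for k in keys}
        return 0.5 * _kl(p, m) + 0.5 * _kl(q, m)

    last_week = Counter({"blue widgets": 120, "buy widgets": 80, "widget guide": 40})
    this_week = Counter({"widget reviews": 110, "acme widgets": 90, "blue widgets": 30})
    if js_divergence(last_week, this_week) > 0.3:  # arbitrary alert threshold
        print("referral profile shift detected")

A score near zero means the same query mix as before; a score near 1 means the kind of "totally re-profiled" referral set described above, even when the total visit count stays flat.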

jdMorgan
msg:4223334 - 1:20 pm on Oct 28, 2010 (gmt 0)

> the kind of statistical data mining that should make your head hurt just to comprehend.

That's me. I look at an awful lot of data, but on self-reflection I find that after doing that, I tend to "go with my gut" -- some high-level abstraction of what I "feel" I'm seeing. Sort of a "seemingly-futile Excel analysis until 3:00 AM, with no confidence in conclusions or decisions until after a good sleep" kind of thing.

One of the things I "feel" is that the presence of google.com/url?cd=N referrers in the log indicates the activity of this profiling system (or at least its front end). That may be blindingly-obvious to the participants in this thread, but I though it worth a mention.

My highest-ranked, highest-traffic, most-frequently-spidered site is 99.99% informational, so I feel I'm getting a one-sided view of some of the factors being discussed here.

So back on the "data mining" side, I'm wondering if anyone here is seeing anything in their search referrals --the keyphrases most specifically-- that correlates with the behavior that Shaddows terms "exposure patterns" here or with the presumed "Google boxes" or "classification buckets." On sites that offer both informational and marketing content, is there any difference between the google.com/search referral patterns and the google.com/url?cd=N referral patterns? -- Does Google clearly "test" (using the latter referrer type) your marketing or your informational content URLs more?

Jim

scottsonline
msg:4223357 - 2:04 pm on Oct 28, 2010 (gmt 0)

rros, no site-wides - all single votes from themed pages. They take a dying domain and load it up with 50 category pages; on each one is a huge amount of content that appears to be scraped from dozens of sites and then dynamically mixed together. A search will yield two sentences from about 5-6 different sites, but the entire thing is broken language. On each page will be three to five links back, along with 2-3 links to other parts of the link farm. Each page showed over 100 domains linking to it. A complex, well-thought-out scheme. There are no ads, no other information. Clean CSS and text, PHP hosts but stripped pages. Each day hundreds more appear.


Shaddows: a fair share of our traffic is now Bing and Yahoo, and it's increasing month to month. On the days that we have poor Google conversions we also have poor Bing conversions. Dating back many years, the same weeks seem to have this happen. I cannot make the same connection you can on our company site. However, I do see one related effect.

We sponsored a scholarship for the first time for a major university. They wrote a very nice article, which appeared recently and has stayed on the front page of the department site - in context and by company name only. Our educational traffic went through the roof with Google. Probably not a coincidence.

What you report in detail has been noted by many. Maybe what we are seeing is sites failing the test.

tedster
msg:4223488 - 5:52 pm on Oct 28, 2010 (gmt 0)

> That's me. I look at an awful lot of data, but on self-reflection I find that after doing that, I tend to "go with my gut" -- some high-level abstraction of what I "feel" I'm seeing.

Me too - if I'm drop-dead honest, that's often how I use data. There's only so much time, and there are always unforeseeable consequences from any change, no matter what. So I look at the data and make a relatively informed gut decision. Sometimes I just "shoot from the hip."

But that's not the Google mindset at all, and that's why this thread is so important. Google is a profoundly data driven organization. They have found ways to test and measure factors I would never have dreamed could be quantified.

tedster
msg:4223492 - 5:57 pm on Oct 28, 2010 (gmt 0)

> To watch it happening, you need a solid grasp of your site metrics - not just the headline figure, but the drilled-down detail.

Exactly. If all you know is "traffic is up" or "sales are down" then you're not going to see even a trace of this Traffic Shaping creature running around.

Most companies are still not data-driven to the degree they need to be for powerful online success. I hope this thread is a wake-up call for anyone reading it.

Sgt_Kickaxe
msg:4223501 - 6:20 pm on Oct 28, 2010 (gmt 0)

I've been feeling the same for some time, tedster, and in my post above I hesitated to speak my mind. I do that a lot, unfortunately.

Anyway, profiling seems to be a fact of life for sites and their owners because everything is reduced to data and that data is classified and stored and rated in countless ways.

It's not a complete mystery, however. In fact, Google is telling us all about what it considers quality factors. On a per-factor basis, both good and bad, it's up to us to figure out how we measure up against the collective. Since we can't really know what the collective data is beyond an educated guess, the best game plan is to raise as many of our own quality factors as possible and avoid practices that may damage our own ratings.

Figuring out which darned factor outweighs another, however - no two sites seem the same!

Some factors lie outside the box, no doubt - perhaps such as "JohnDoe12345's sites send 99% of their traffic to eBay; reduce converting traffic accordingly to bring JohnDoe12345 back in line with similar sites; label JohnDoe12345 an affiliate and assign that label (with all its glorious effects) to all of his sites until this metric improves". Who knows.

Did this post just end up being added to the collective profiling of me as a webmaster? Perhaps. While I won't put on a tinfoil hat, it's technically possible now. Just being aware it's possible means I need to account for it in some way - enter the "gut feeling".

I'm more interested in knowing what you guys think would be "outside the box" quality factors than in speculating on whether they exist.

coachm
msg:4223583 - 8:50 pm on Oct 28, 2010 (gmt 0)

I have noticed some patterns which are not explainable but have been so persistent that I can't ignore them, and they simply have to do with results. We sell, and we also make AdSense money. Periodically we'll see that one of the two drops badly in terms of total revenue, while the other improves. It might stay like that for months, and then all of a sudden it shifts the other way.

The upshot is we almost never make MORE money from our sites, and the only explanation for this recurring phenomenon is that the TRAFFIC is being changed at the source - in this case, primarily Google search.
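
One hedged way to put a number on that see-saw: correlate the two monthly revenue series - a strongly negative coefficient supports the story. The figures below are invented placeholders.

    # Illustrative only: test whether sales and AdSense revenue move in opposition.
    monthly_sales = [90, 85, 60, 55, 80, 95]
    monthly_adsense = [40, 45, 70, 75, 50, 35]

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    print(round(pearson(monthly_sales, monthly_adsense), 2))  # near -1 = see-saw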

Of course, one can only speculate. If it is as I believe - that Google is altering the traffic flow and the characteristics of visitors coming to our sites via SERP alterations - the next question is whether this is intentional, or a byproduct of other things they do: an unintentional side effect of the kind that happens with complex systems.

I dunno.

scottsonline
msg:4223595 - 9:29 pm on Oct 28, 2010 (gmt 0)

coachm, we see this indirectly through CPM. The fluctuations we see directly in the SERPs appeared immediately in that wide dataset. Impressions were altered, and so was the click rate on sites we have advertised on for years. That was the first hint of this update, and retrospectively it indicates to us that those sites saw shifts in the "quality" of their results. Just as you may end up on a Chinese site from NYC, clickers looking for widgets get misdirected to articles on widgets, then bounce without reading or hitting an ad. The overall impressions number is nearly the same, but particular sites changed.

Does Google use true conversions as a metric? Particularly, what goes on behind the https?

freejung
msg:4223606 - 9:58 pm on Oct 28, 2010 (gmt 0)

Shaddows, I think you're on to something big here.

So here's an angle of meta-analysis that might be useful: assuming this is true, what do we do about it?

Put another way, suppose tedster is correct (that's often a good bet) that this idea explains several diverse problems we've been talking about lately. If you have one of these problems, how does this idea help to resolve it?

I'm willing to have a crack at mine -- the "traffic throttling," "why doesn't my traffic go up no matter what I do?" problem. This problem, as I've experienced it, looks like this: your established pages rank well and their rankings are steady. New pages don't seem to rank as well as they should. If you do manage to rank for a new keyword or improve rankings for an existing keyword, you experience corresponding losses in other areas to make up for it.

According to this theory, Google is conducting complex cross-categorization of websites and user groups, involving multivariate testing, to try to match the user groups to the website categories correctly, right?

I think you could probably characterize the categorization of a site roughly along three dimensions: taxonomic category, user intent, and magnitude (overall size, importance, and traffic level).
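
In code terms, that three-dimensional bucketing might look something like the toy sketch below. The field names, values and the composite "bucket key" are invented for illustration.

    # Illustrative only: a site categorized along the three proposed dimensions.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SiteProfile:
        taxonomy: str   # what the site is about, e.g. "fuzzy widgets"
        intent: str     # dominant user intent served, e.g. "informational"
        magnitude: int  # size/importance bucket, e.g. order of magnitude of traffic

        def bucket_key(self):
            """Sites sharing a key would compete for the same pool of traffic."""
            return (self.taxonomy, self.intent, self.magnitude)

    mine = SiteProfile("fuzzy widgets", "informational", 3)
    print(mine.bucket_key())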

So presumably, if your traffic is unnaturally stable (along with other metrics, such as particular referral strings, search traffic to each section of your site, etc.), what this means is that you've been well and definitively categorized along all three dimensions (and any others I'm leaving out).

The good news is that Google knows what your site is about, who your users are, and what they want. Google has decided exactly how much and what sort of traffic your site "deserves" to get. You can probably count on a steady stream of traffic and income despite the chaos going on all around you.

The bad news is... well, exactly the same as the above. You've been put in a particular bucket, and it may be quite difficult to get out of it. I can imagine three possible "escape vectors" as Shaddows called them:

1: Change your taxonomic category. Convince Google that your site is not really about fuzzy widgets, it's about the much more popular pointy widgets instead. Presumably this would involve switching out a lot of content and generally reorganizing the whole site, as well as its outlink and backlink profile. Sounds hard, and you might be better off just starting a new domain in the other category.

2: Change your user intent category. Convince Google that you're not really an informational site about fuzzy widgets, you're actually a fuzzy widget e-comm site, and you need to be sent people who are trying to find and purchase a fuzzy widget, rather than people who are just looking for information on how to use their fuzzy widget. This would probably involve removing some content or moving it to another domain, and developing the other sort of content more extensively.

3: Change your magnitude category. Convince Google that you're now a much bigger and better fuzzy widget info site, with more in-depth information on a much broader range of fuzzy widget topics, and you need to be taken to the next level of exposure. I'm going to go with this one, as it seems the most promising and I think in my case Google has me correctly categorized in the other two dimensions.

Strategy for #3:

A: Remove all potentially ambiguous signals related to taxonomy and intent. The idea is to solidify your existing categorization in those dimensions. Ruthlessly delete or move to another domain any content that doesn't fit strictly into the bucket Google appears to have put you in already.

B: Remove all potential signals of poor quality or spam. It may not appear that you are suffering from a penalty or filter, but these signals may be what is holding you back from being put into the next order of magnitude bucket. Audit your outlinks, clean up any sloppy coding and architecture, optimize your site speed, get rid of any spammy signals like keyword stuffing etc.

C: Most importantly -- add lots of new content. But don't just steadily add new content at a constant rate (that's what I normally do, and it's not working). Save up big chunks of content in one particular existing category or in a whole new category (still within your overall taxonomic space), and publish it all at once. Treat each publication like a product launch -- it should be accompanied by a strong push in social media, new links from your established partners, articles or blog entries or whatever you can manage, email to your loyal visitors if you can do that, anything to create "buzz" around the release of the new content.

The idea here is to remove any obstacles that may be standing in the way of moving to a new magnitude category, and then hit the Google oscillator with a series of sharp hammer blows in an attempt to shock it into phase-shifting to a new vibratory mode.

Maybe all of that is obvious, but writing it down helped me think it through, and I thought others might find the process useful as well.

Lapizuli
msg:4223614 - 10:18 pm on Oct 28, 2010 (gmt 0)

I don't know if this will help anyone, or whether it will be seen as flaky - or worse, obvious. And it's all Big Picture, not data.

But what's happening from our perspective is that Google is learning to be smart like a person.

If I ask you, "Where's the store?" while I'm standing outside a building, I could be asking about a retail establishment or a stash of goods or even a meeting of the Sober Teetotaler's of Rabid Entymologists, or something else.

You'd listen to me, then look around to try to figure out my meaning by the clues, plus your own instincts and smarts and empathic abilities.

Google is trying to become independently capable of discerning what I mean - in other words, to understand the connotation - without needing me to say, precisely and denotatively, what I want.

Google doesn't want better searchers - it wants to be as smart as a searcher, to be the electronic equivalent of "smart" - artificially intelligent, that is.

Every change they're implementing is moving them toward this, as far as I can see. Give 'em a few years, and Google will be the smartest human on the planet. Because while we can understand what a lot of people can say, most people can't understand what everyone on earth is saying. Google's well on their way to getting there.

jimbeetle
msg:4223618 - 10:34 pm on Oct 28, 2010 (gmt 0)

> Google is trying to become independently capable of discerning what I mean - in other words, to understand the connotation - without needing me to say, precisely and denotatively, what I want.

It definitely feels confident enough that it does, first evinced by the Vince update. And what did (maybe Guptha) say a few months ago? Something along the lines of "We're going to give you what we know you want, not what you think you need."

Google is confident it has intent down pat.

freejung
msg:4223634 - 11:27 pm on Oct 28, 2010 (gmt 0)

> But what's happening from our perspective is that Google is learning to be smart like a person.

What's interesting about this is that these kinds of statements _used_ to be regarded as flaky anthropomorphization -- but now we're seeing these speculations come to fruition. The new infrastructure has enabled Google to take its "brain" to the next level of virtual evolution.

rros
msg:4223659 - 12:24 am on Oct 29, 2010 (gmt 0)

Scotts... that complex system you referred to isn't new. It has been done before, and it doesn't appear to me that it will go unnoticed. If you could spot it, most likely Google can do so while dreaming.

I am on the "gut" camp too. This shaping and "quality madness" may ultimately be trying to answer a more philosophical question such as "does this page serve any {useful} purpose?" and I know the mathematical aspect of it will forever escape me.

scottsonline
msg:4223664 - 12:35 am on Oct 29, 2010 (gmt 0)

rros, to the point that sites are created for what appears to be a legitimate reason and get listed in DMOZ and many other places (instantly, it seems) - but then put one mislabeled link on the homepage to a page that carries a list, in full paragraph form, of links back to the sites that are paying the fee?

I don't know - I think it's pretty elaborate to set up 50-100 sites, get them all into DMOZ and other directories, and spend months building up the content so you can "fool" DMOZ editors, only to then turn around and turn them into link depots.

It's working. In the niches I watch, every single site on those pages is in the top ten after this update. Sure, each page may carry 1/20th of the juice it should, but 50 links from 50 different domains, often listed in DMOZ and the Google directory, carry enough weight.

This is a brand new development. I'm seeing it in multiple niches. Clearly a link buying service of some type.

tedster
msg:4223669 - 12:43 am on Oct 29, 2010 (gmt 0)

Back to "traffic shaping" - I've been brainstorming what types of query profiles might be in use.

Clearly "localized" and "navigational" are two that have a kind of natural built-in granularity. But I'm kind of stumped as to how "informational" and "transactional" might be made more granular - with regard to the query phrase profile itself (not the website profile, that's another story).

Any ideas on that?

briggidere
msg:4223677 - 1:21 am on Oct 29, 2010 (gmt 0)

I think that looking at AdWords conversion funnels can help show the possible ways they could profile search terms as informational or transactional.

A generic search for a certain brand of bicycle could be considered informational, as the user is researching their options, but a search for a particular model of that brand could be seen as more of a transactional query.

They could even go as far as using that user's profile and search history to try to determine the stage of the buying process they are at. Have they already done their research and are they ready to buy? Was the research done months ago, so they are refreshing their options again - in which case go back to informational rather than transactional, even if it's the model of that brand they searched for?
The same query could be both informational and transactional depending on previous user search history.
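
A hedged sketch of that last point - the same query scored differently depending on recent history. The cue words, thresholds and crude "topic" heuristic are all invented for illustration.

    # Illustrative only: history-aware intent for an otherwise ambiguous query.
    TRANSACTIONAL_HINTS = {"price", "buy", "deal", "review", "vs"}

    def intent_for(query, recent_queries):
        """Lean transactional if recent history shows research on the same topic."""
        topic = query.lower().split()[-1]  # crude topic key: last word of the query
        researched = sum(topic in q.lower() for q in recent_queries)
        hinted = sum(any(h in q.lower().split() for h in TRANSACTIONAL_HINTS)
                     for q in recent_queries)
        if researched >= 2 and hinted >= 1:
            return "transactional"  # research phase looks finished
        return "informational"

    history = ["acme roadster review", "acme roadster vs brand x", "bike sizes"]
    print(intent_for("acme roadster", history))  # -> transactional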

scottsonline
msg:4223682 - 2:03 am on Oct 29, 2010 (gmt 0)

My part-time job: 100 computers that are scrubbed on each shutdown/boot-up cycle, and 5 shared IPs assigned at random, so one user may touch all 5 in a week. Gmail is blocked. How could they accurately profile in that case?

Almost all our converting traffic is "widget a blue" and "red widget c". Our best traffic is "brand widget c". Any of those could be informational or transactional.

Those two searches meant something different 12 months ago, when 70% would buy. Now it's lower as the economy slides. Is that going to be misinterpreted?

I think they know this, and whatever metrics they are using are based more on original clicks. I will say that since the uptick in traffic, our bounce rate is 10% better.

Also: maybe what we thought was branding was the pooling of searches into the informational bracket, and therefore the brand came first. I do believe they are not far off from a major layout change, and that is part of this test too.

Lapizuli
msg:4223693 - 2:21 am on Oct 29, 2010 (gmt 0)

Clearly "localized" and "navigational" are two that have a kind of natural built-in granularity. But I'm kind of stumped as to how "informational" and "transactional" might be made more granular - with regard to the query phrase profile itself (not the website profile, that's another story).


Not entirely sure I'm understanding the question - do you mean, how does Google conduct qualitative analysis on quantitative data in order to recognize when a search string intends "when were blue widgets first invented" as opposed to "where can I buy blue widgets?"

If so... wouldn't the user's pre- and post-search behavior be tracked, and the searches' proximity and sequence become quantifiable over the course of many like sequences, then cross-referenced with lots of other data?

As in this sequence of searches:

"blue widgets"
Turning up Wikipedia-style informational results gets no nibble and a quick second attempt at:

"Acme blue widgets"
A list of price-comparison sites yields one listless nibble, plus a longer and deeper bite on one model-comparison page, telling Google it's on the right track. Then:

"Acme widgets"
Going from specific to general might show Google the question is now "who the heck stocks this brand?" Long perusal of SERPs and then another search conveys that the problem's still unsolved:

"widgets sale"
Ching-ching! After drawing from personalization data and establishing that enough "similar" users perform this approximate search sequence, Google knows enough to skip straight from the "blue widgets" to the "widgets sale" sort of results.
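
In aggregate, that learning step might look something like this hedged sketch: across many sessions that start with the same query, count where they end, and pre-blend results for the dominant endpoint. The sessions, counts and confidence threshold are invented.

    # Illustrative only: learn a query's likely endpoint from session sequences.
    from collections import Counter

    sessions = [
        ["blue widgets", "acme blue widgets", "acme widgets", "widgets sale"],
        ["blue widgets", "widgets sale"],
        ["blue widgets", "blue widget history"],
    ]

    endings = Counter((s[0], s[-1]) for s in sessions)  # (first query, last query)

    start = "blue widgets"
    candidates = {end: c for (st, end), c in endings.items() if st == start}
    best_end = max(candidates, key=candidates.get)
    if candidates[best_end] / sum(candidates.values()) > 0.5:  # arbitrary threshold
        print('for "%s", pre-blend results suited to "%s"' % (start, best_end))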

That's all speculation, but it seems like Google's got products for every Internet-related function and so has plenty of relational data. We see ourselves acting with free will; Google sees patterns.

If they can't see what you're doing when you park the car, they can see what door you go in and how long you stayed. If they can't see whether you actually bought something, they can see if you took out your wallet, rang up at the cash register, or looked at a brochure. And if they don't know whether you were happy with your visit, they at least know if you came again another day or went to a ball game and never returned - there or to any similar establishment... etc. It's all an ongoing study. We're all guinea pigs.

At least, that's what it looks like from a non-techie, couldn't-program-her-way-through-a-checkers-game-in-PASCAL kinda gal who probably misunderstood the question in the first place...!

Reno
msg:4223699 - 2:54 am on Oct 29, 2010 (gmt 0)

> They could even go as far as using that user's profile and search history to try to determine the stage of the buying process they are at.

I recently wrote an email through my Gmail account in which I was talking about floor lamps. Then I went back to general browsing, and immediately, on a number of sites (news, etc.), I was suddenly seeing ads for light bulbs, designer lighting, lampshades, and more along those lines. Coincidence? I think not. Google parsed my email, assumed I might be ready to buy a lamp, and targeted me with those specific ads. That was my profile that evening, and that, my friends, is laser-beam marketing at the cutting edge of online commerce.


briggidere
msg:4223701 - 3:05 am on Oct 29, 2010 (gmt 0)

Did you buy a lamp?

Everyone will have multiple profiles depending on how many PCs they use, the times of day they are on, which Google account they are logged in to, whether they have a smartphone, etc.

I use a normal PC at work as well as a laptop that I take home with me. I'm pretty sure the laptop has both a work and a personal profile attached to it, as most of my searches at home don't relate to what I do during the day. Maybe they even go as far as knowing when I am on my lunch break and start reading news, and switch the profiles for an hour.

tedster
msg:4223702 - 3:16 am on Oct 29, 2010 (gmt 0)

All good observations. I doubt that the kind of profiling that is being used for this kind of testing involves individuals.

It's got to be statistical testing across large groups - groups of users with certain patterns in their search histories (en masse), query types that fall into certain patterns, and websites of certain patterns.
