Shaddows - 8:19 am on Oct 28, 2010 (gmt 0)
Sounds like cherry-picking the most likely to convert traffic and replacing it with low quality traffic. I wonder where the "good" traffic went. I also wonder if it's purely being siphoned off for financial gain or if your site metrics dictate if this happens to your site.
BIG uplift in traffic since 1st SEPT (20% above trend) with a corresponding drop in conversion rate
It might also just be bad luck but you're not the first to make that complaint.
No, that's not it at all. I'm not complaining, I'm just offering data up to the group. I'm pretty non-emotional when it comes to data.
Nor do I think cherry-picked traffic was siphoned off. On 1st Sept I got TOTALLY RE-PROFILED TRAFFIC. Different referral strings, different targets (obviously), different behavior once landed. And there was 20% more of it. Sales remained on-trend. There was ZERO business impact.
There were two more total re-profilings (new referrals, new targets), each with the same +20% traffic and the same ±0 change in sales. Still ZERO impact on the bottom line.
Then the fifth exposure pattern in six weeks (pre-test, three test sets, then this one) arrived. The traffic is still up at +20%, but now converting at the same rate as before: +20% sales.
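The arithmetic behind those phases can be made explicit. A minimal sketch, using hypothetical numbers (the thread only gives percentages, not absolute figures): during the re-profiled test phases, +20% traffic with flat sales implies the effective conversion rate fell by a factor of 1.2; in the final phase, the original rate returns and sales rise in step with traffic.

```python
# Hypothetical baseline figures, for illustration only.
baseline_visits = 1000
baseline_rate = 0.05                                  # 5% conversion
baseline_sales = baseline_visits * baseline_rate      # 50 sales

# Re-profiled test phases: +20% traffic, sales stay on-trend,
# so the effective conversion rate must have dropped.
reprofiled_visits = baseline_visits * 1.2             # 1200
reprofiled_rate = baseline_sales / reprofiled_visits  # ~4.17%

# Final phase: +20% traffic converting at the ORIGINAL rate,
# so sales rise by the same 20%.
final_sales = reprofiled_visits * baseline_rate       # 60 sales

print(baseline_sales, round(reprofiled_rate, 4), final_sales)
```

The point of the sketch: a flat sales line under rising traffic is itself a measurable signal, not just "zombie traffic".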
I'm trying to offer an alternative vision to the prevailing "Google is sending me zombie traffic" POV. What if Google has re-profiled your traffic and found it was 'happier' somewhere else? What if the algo's test program for your site found that most people treated it like an info site?
The link Tedster posted is brilliant - it's exactly the kind of thing I'm sensing: a multivariate test program simultaneously testing both a set of sites and classifications of user.
It's the kind of statistical data mining that even comprehending should make your head hurt. To watch it happening, you need a solid grasp of your site metrics - not just the headline figure, but the drilled-down detail.
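As a concrete example of that drilled-down detail, here is a minimal sketch of one way to spot a re-profiling event: segment sessions by referral string and compare conversion rates before and after a cutoff date. The session log, field layout, and referral strings below are all invented for illustration, not from any specific analytics export.

```python
from collections import defaultdict
from datetime import date

# Toy session log: (date, referral, converted). Purely illustrative data.
sessions = [
    (date(2010, 8, 25), "google /serp?q=widgets", True),
    (date(2010, 8, 26), "google /serp?q=widgets", False),
    (date(2010, 9, 2),  "google /serp?q=widget+info", False),
    (date(2010, 9, 3),  "google /serp?q=widget+info", False),
    (date(2010, 9, 4),  "google /serp?q=buy+widgets", True),
]

cutoff = date(2010, 9, 1)

def segment_rates(rows):
    """Conversion rate per referral string: the drilled-down detail."""
    hits = defaultdict(int)
    wins = defaultdict(int)
    for _, ref, converted in rows:
        hits[ref] += 1
        wins[ref] += converted
    return {ref: wins[ref] / hits[ref] for ref in hits}

before = segment_rates(r for r in sessions if r[0] < cutoff)
after = segment_rates(r for r in sessions if r[0] >= cutoff)

# Referral strings present only after the cutoff are the signature
# of totally re-profiled traffic, as described above.
new_segments = sorted(set(after) - set(before))
print(new_segments)
```

The headline conversion rate would only show a dip; the per-referral breakdown shows the traffic mix itself changed.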
To come out the other end, I think you need a clear, compelling offering that sits inside a Google box- or at least serves a statistically identifiable [user-context] complex.
Sure, but that's not the same thing. I'm talking about a deliberate regime of testing, using A LOT of display variables, with varying users, in varying contexts, against varying sites.
scottsonline: by making your traffic conform to a curve, it gives them more time to assess your site