|tracking adsense performance|
my data is all over the place
| 2:57 pm on Apr 24, 2006 (gmt 0)|
I have over 600 pages on my site, and am using URL channels to track 200 of them.
My impressions, CTR, and earnings are all over the place... there's no discernible trend. If I make a change to the ad format, etc., there never seems to be a difference.
I have pages that will go a couple of days with no clicks, then 20, then none, then 1, then 2, then none for a couple of days, then 70, then 10, etc. ... it's all over the place. How do you know if any changes you make actually increase earnings? Any guidelines would be greatly appreciated.
| 3:07 pm on Apr 24, 2006 (gmt 0)|
That sounds pretty normal. I've got 5,000 or so pages on my site, arranged under a number of major topic headings, but stats for individual channels (even channels that cover entire sections) vary a great deal. That stands to reason, though: A section devoted to New York City will attract higher-bid ads than a section devoted to Gary, Indiana, and an article about "How to buy diamond jewelry" should attract higher-bid ads than an article on "How to buy cubic zirconia jewelry." The good news is that, by having a wide range of subtopics, you'll insulate yourself against the volatility that comes from relying too heavily on extremely competitive keyphrases.
| 3:15 pm on Apr 24, 2006 (gmt 0)|
I make custom channels for different page types. In my case: weblog entry, weblog archive, forum topic, user profile, etc. I have about 25 channels. Whenever I decide to make a change to a certain page type, I'll calculate the average CTR over 7 days (because a week contains business days and a weekend) and see if this number increases in the next week. I also check whether the changes I made have any influence on the percentage of returning visitors in Google Analytics. It's simple, but it works for me.
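The week-over-week comparison above is easy to automate. Here's a rough sketch in Python; the daily click and impression figures are made-up example data, not anyone's real channel stats:

```python
# Compare the 7-day average CTR for one channel before and after a page change.
# All daily numbers below are hypothetical, for illustration only.

def weekly_ctr(clicks, impressions):
    """Average CTR (as a percentage) over a window of daily click/impression counts."""
    return sum(clicks) / sum(impressions) * 100

# Week before the change (7 daily values: Mon-Sun, so weekdays and a weekend)
clicks_before = [12, 8, 15, 10, 9, 4, 5]
imps_before = [900, 850, 1000, 950, 880, 600, 620]

# Week after the change
clicks_after = [14, 11, 16, 13, 12, 6, 7]
imps_after = [920, 870, 990, 940, 900, 610, 630]

before = weekly_ctr(clicks_before, imps_before)
after = weekly_ctr(clicks_after, imps_after)
print(f"CTR before: {before:.2f}%  after: {after:.2f}%")
```

Aggregating clicks and impressions before dividing (rather than averaging the seven daily CTRs) weights each day by its traffic, so a quiet Sunday doesn't count as much as a busy Tuesday.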
| 3:58 pm on Apr 24, 2006 (gmt 0)|
Originally, I had custom channels for each department on my site (there were about 8 departments). That made it easier to track traffic, but difficult to make changes: each department had about 35 or 40 pages, and keeping the testing consistent meant changing all 35 or 40 pages for each test. That was a real pain.
| 4:16 pm on Apr 24, 2006 (gmt 0)|
Too-small data sets mean results that are not statistically significant.
For whatever it is you want to measure, calculate a moving average and see how wide you have to make the window before the data begins to look more or less like a smooth curve. Don't be surprised if it's 30 days or more at your current levels.
You wouldn't be surprised to see wildly varying results if you were only sampling data between 2:05 pm and 2:06 pm. When your traffic isn't large enough, sampling a 24-hour period (or even a 1-week period or longer) can produce the same effect. The solution is to sample over a larger period.
Studying a moving average is a cheap and dirty way to get a feel for how big a period you need to sample in order to test a change to any given statistic.