Forum Library, Charter, Moderators: buckworks & eWhisper & skibum

Statistical tests of clickthru rates
How long to wait before pulling under-performing ads?
JonBoy

Msg#: 216 posted 9:39 pm on Dec 18, 2002 (gmt 0)

Here's something for the mathematically minded. It must be a standard dilemma for everyone: how long do you leave an ad running before you decide you have enough evidence that it is better or worse than the alternative candidates?

You have an ad group in which you're leaving the keywords alone while trying out a few different versions of the copy in the titles and bodies of the ads. After a few days you see that ad A has 17 clicks and ad B has 12. What you wonder, though, is whether that's just chance or reflects a real difference in the pulling power of the ads.

This is of course the whole area of statistical significance testing: there are precise mathematical answers to this question. I'm a finicky kind of guy: I want to know the real mathematical answer for each case that comes up: is it time to take out the axe yet?

There are plenty of sites that let you plug in your figures and conduct statistical tests on them for you. One that I've found that does the job is [home.clara.net...] (warning: you need to know a little statistics to use it and understand what it tells you). Unfortunately, its output is a bit cluttered.

Does anyone know a better online tool, or a desktop software solution, or best of all has anyone worked out a simple table to print out and put on the wall? There could even be a rule of thumb: something like "the number of impressions you need before deciding between ad A and ad B equals (10 divided by (A's CTR - B's CTR))".

Real answers, not just hunches, would be great. If no one's got any thoughts, I'll see if I can work out a rule of thumb and post it here.
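One standard "real answer" to the 17-vs-12 question is a two-proportion z-test. The sketch below assumes 1,000 impressions per ad (a figure not given in the post) and uses only the Python standard library; the function name is mine:

```python
from math import erfc, sqrt

def two_proportion_p_value(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test: is the difference between two CTRs just chance?"""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)            # pooled CTR under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    return erfc(abs(z) / sqrt(2))                           # two-sided p-value

# 17 vs 12 clicks, assuming 1,000 impressions for each ad
p = two_proportion_p_value(17, 1000, 12, 1000)
print(p > 0.05)  # True: not yet enough evidence to pull either ad
```

With these assumed impression counts the p-value is around 0.35, so 17 vs 12 clicks could easily be chance: the axe can stay on the wall for now.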

andrewg

Msg#: 216 posted 11:15 pm on Dec 18, 2002 (gmt 0)

I agree that you need to let things play out until they are statistically significant - although unfortunately I don't have any "real, hard answers" to provide.

But I do have a caveat. The real determinant should be ROI, not CTR. If you write an ad that doubles your CTR, but which actually converts much worse to sales because it's attracting non-buyers, then you're actually behind the 8-ball.

There is such a thing as aiming for too much precision in this endeavor. Striving for relative excellence in the most important facets of the campaign, including post-click behavior, is more important than achieving perfection in, or a perfect understanding of, any single facet.

This is not an exact science, in spite of all the quantitative data that are put at our disposal.
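The caveat can be made concrete by comparing ads on expected profit per impression rather than CTR. All the numbers below are hypothetical, chosen so that the higher-CTR ad loses:

```python
def profit_per_impression(ctr, conversion_rate, profit_per_sale, cost_per_click):
    """Expected profit from showing the ad once: clicks cost money, sales earn it."""
    return ctr * (conversion_rate * profit_per_sale - cost_per_click)

# Ad A doubles ad B's CTR but attracts non-buyers (all figures invented)
ad_a = profit_per_impression(ctr=0.04, conversion_rate=0.005,
                             profit_per_sale=50, cost_per_click=0.30)
ad_b = profit_per_impression(ctr=0.02, conversion_rate=0.020,
                             profit_per_sale=50, cost_per_click=0.30)
print(ad_a < ad_b)  # True: the lower-CTR ad wins on ROI
```

Here ad A actually loses money on every impression despite twice the CTR, which is exactly the "behind the 8-ball" scenario.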

hannamyluv

Msg#: 216 posted 1:06 am on Dec 19, 2002 (gmt 0)

Our company has a policy of letting an ad run to 400 clicks and then seeing whether it produced a sale. After that, it must make one sale per 100 clicks. Based on that, we tweak the ad and the bid price to make it profitable.

The thing is that there really can't be one set way. I have ads for items that sell for \$10; I have to sell a lot to be profitable. On the other hand, I have a few items that go for \$300+, and I only need to sell one once in a while to stay profitable.

For others on here, who are vying for customers who may potentially spend thousands, it may be well worth the money to run for 1,000 clicks at \$1 a click to get that one sale.
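The break-even arithmetic behind "it depends on the item" can be sketched in a couple of lines. The \$1-per-click and \$300-item figures echo the examples above; the helper name is mine:

```python
def breakeven_conversion_rate(cost_per_click, profit_per_sale):
    """Minimum sales-per-click needed for an ad to pay for itself."""
    return cost_per_click / profit_per_sale

# At $1 a click, a $300-profit item only needs 1 sale per 300 clicks...
print(breakeven_conversion_rate(1.00, 300))
# ...while a $10-profit item at $0.10 a click needs 1 sale per 100 clicks
print(breakeven_conversion_rate(0.10, 10))
```

Anything above the break-even rate is profit, which is why a single \$300 sale can justify a click budget that would sink a \$10 item.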

stevenha

Msg#: 216 posted 2:00 am on Dec 19, 2002 (gmt 0)

A chi-squared test should do it. I searched for an online chi-squared calculator, and it took a while to find what I was looking for: basically a 2-row by 2-column form with a Calculate button. I mention that in case the URL gets snipped. One example URL is at [graphpad.com...]

If Group 1 is Ad #1 and Group 2 is Ad #2: if Ad #1 had 10,000 impressions and 100 clickthroughs, its outcome #1 = 100 (clicks) and outcome #2 = 9,900 (impressions without a click); and if Ad #2 had 30,000 impressions and 200 clickthroughs, its outcome #1 = 200 and outcome #2 = 29,800.

Enter these outcome values in the 2x2 table and calculate the chi-squared statistic. If the p-value is less than 0.05, the difference is significant. In this example the calculated p-value is 0.001, which is a significant difference.

whizkid

Msg#: 216 posted 6:31 pm on Dec 30, 2002 (gmt 0)

I have tackled this problem before, and I have an approximate solution. One thing is certain: I used conversion rate (i.e., number of sales divided by visits), and *not* clickthrough rate.

I think I have a solution but I'm not ready to share it. It doesn't involve chi-squared, but I will look at that possibility too. If someone with enough knowledge of mathematics wants to collaborate with me, I will share my findings. (Am I allowed to post an e-mail? I guess not, so contact me by sticky mail.) My solution involves the Central Limit Theorem and the negative binomial probability distribution. (Don't worry if you don't know what that is; if you know what the binomial or chi-squared probability is, I will exchange info with you.) I will share some of the general findings in this forum later.

If allowed to post an e-mail, then this is it: great_puzzles@hotmail.com. If not, the moderator will remove it (thanks to him).
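whizkid doesn't share his method, but the Central Limit Theorem angle he mentions commonly shows up as a normal-approximation confidence interval on the conversion rate. A minimal sketch, with hypothetical figures (8 sales in 400 visits) and a function name of my own:

```python
from math import sqrt

def conversion_rate_ci(sales, visits, z=1.96):
    """95% CLT (normal-approximation) confidence interval for a conversion rate."""
    rate = sales / visits
    half_width = z * sqrt(rate * (1 - rate) / visits)   # z * standard error
    return rate - half_width, rate + half_width

# Hypothetical: 8 sales in 400 visits -> observed rate of 2%
low, high = conversion_rate_ci(8, 400)
print(low < 0.02 < high)  # True: the interval brackets the observed rate
```

If two ads' intervals don't overlap, the difference in conversion rate is unlikely to be chance; with small sale counts the intervals are wide, which is the mathematical version of "let it run longer."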

webdiversity

Msg#: 216 posted 12:17 am on Dec 31, 2002 (gmt 0)

I'm no mathematician....

We opt for a 3 x 3 matrix, or if you have the time and patience, a 4 x 4.

A 3 x 3 is 3 titles and 3 descriptions on the same landing page, giving 9 combinations (16 in a 4 x 4). Run them until you get a decent number of clicks, which gives you the CTR ranked top to bottom for the 9 ad combos on the same keywords. Check that against the sales generated from those clicks, and that gives you a top-to-bottom ranking of the ads by conversion rate.

Once you have your best-performing ad combos, you can try different things with your landing pages to see if you can increase the clicks-to-sales ratio.
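Enumerating and ranking the 9 combinations can be sketched like this; the copy variants and click counts below are invented placeholders:

```python
from itertools import product

titles = ["Title 1", "Title 2", "Title 3"]          # hypothetical copy variants
descriptions = ["Desc A", "Desc B", "Desc C"]

combos = list(product(titles, descriptions))        # 9 combinations in a 3 x 3
# Invented observed stats per combo: (clicks, impressions)
stats = {combo: (10 + 3 * i, 1000) for i, combo in enumerate(combos)}

# Rank the combos by CTR, best first
ranked = sorted(combos, key=lambda c: stats[c][0] / stats[c][1], reverse=True)
for title, desc in ranked[:3]:                      # top 3 by CTR
    print(title, "/", desc)
```

The same sort, keyed on sales per click instead of CTR, gives the conversion-rate ranking described above.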

PPC search is too random for the numbers to be fudged with complex formulas. There are so many dampening factors you would have to throw into hard stats that there will always be too big a margin of error for the numbers to stack up in every industry.

Our approach works across industries and is very effective for finding killer combinations of titles, descriptions, keywords, and sales.
