Forum Moderators: buckworks & skibum

Message Too Old, No Replies

automatic bidding algo


catchmeifucan

11:58 pm on Oct 9, 2006 (gmt 0)

10+ Year Member



Has anyone researched an algorithm to optimize bidding for Google AdWords?

shorebreak

5:56 am on Oct 10, 2006 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



We've built an entire company around it, and we're managing over $250M in annual PPC spend with it.

-Shorebreak

catchmeifucan

4:32 pm on Oct 10, 2006 (gmt 0)

10+ Year Member



What is your company, and how does it work?

catchmeifucan

6:27 pm on Oct 10, 2006 (gmt 0)

10+ Year Member



For the e-commerce model,

Profit = Revenue - Cost
       = Impressions * CTR * ConvRate * AvgOrderSize - Impressions * CTR * CPC
       = Impressions * CTR * (ConvRate * AvgOrderSize - CPC)

Assumption: Impressions, ConvRate, and AvgOrderSize are constant over a short period of time. Let's assume CTR is some log function of CPC:

CTR = ln(CPC) - 1

The formula then simplifies to

Y = k(ln(x) - 1)(C - x), where Y = profit, x = CPC, k = Impressions, and C = ConvRate * AvgOrderSize.

Differentiate it, set the derivative to 0, and solve for x; that x gives the maximum of Y.
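
Working it out: dY/dx = k(C/x - ln(x)), so the optimum solves x * ln(x) = C. A minimal Python sketch of the numeric solve (note this CTR model only gives a positive CTR for CPC > e ≈ 2.72, so it only makes sense when the value per click C exceeds e; the example C is made up):

```python
import math

def optimal_cpc(C, lo=1.0, hi=100.0):
    """Solve x * ln(x) = C by bisection; x * ln(x) is increasing for x > 1."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if mid * math.log(mid) < C:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical value per click: C = ConvRate * AvgOrderSize = $5.00.
print(f"optimal CPC ~= ${optimal_cpc(5.0):.2f}")  # ~$3.77
```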

Any input or discussion is highly appreciated.

catchmeifucan

9:14 pm on Oct 11, 2006 (gmt 0)

10+ Year Member



Looks like nobody likes math here? Can anyone direct me to a better place to discuss this?

DamonHD

10:11 pm on Oct 11, 2006 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Hi,

Sure, you can find a local minimum/maximum theoretically. I spend most of my working life sitting next to high-powered maths guys and gals (quants) doing this for derivatives traders.

My main observation is that AW/AS is so noisy that it is difficult to imagine any simple solution not getting "trapped" in a local minimum/maximum or not being able to settle at all.

I usually fold at least 7 days' worth of data into any calculation (i.e. I low-pass-filter the most obvious cycle in the system), but often much more.
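
For illustration, a minimal sketch of that sort of low-pass filter; the 7-day window matches the cycle I mentioned, but the sample numbers are made up:

```python
def smooth(daily_values, window=7):
    """Trailing moving average: fold `window` days into each point."""
    out = []
    for i in range(len(daily_values)):
        chunk = daily_values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

daily_ctr = [0.041, 0.052, 0.038, 0.047, 0.049, 0.035, 0.044, 0.051]
print(smooth(daily_ctr))  # much flatter than the raw series
```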

Most of the people on WW are more salt-of-the-Earth, blue-collar HTML hackers: less theory, more pages!

Rgds

Damon

Tastatura

10:41 pm on Oct 11, 2006 (gmt 0)

10+ Year Member



I am not sure I quite understand your question, but AFAIK G's AdWords algorithm is not really a secret.
They use a "Vickrey auction" (second-price auction) with the twist that it runs in real time. Some people also refer to it as a "generalized second-price" (GSP) auction.

If you can understand how this mechanism/theory works, you can figure out the "best bid" for your goal(s).
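
A toy sketch of plain GSP pricing, if you want to play with the mechanics; real AdWords also weighs in ad quality when ranking, which this deliberately ignores, and the bids are made up:

```python
def gsp_prices(bids, slots):
    """Each winner pays the next-highest bid plus a penny (plain GSP)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    results = []
    for i in range(min(slots, len(ranked))):
        bidder, _ = ranked[i]
        # Last slot has nobody below it; charge a nominal floor price.
        price = ranked[i + 1][1] + 0.01 if i + 1 < len(ranked) else 0.05
        results.append((bidder, round(price, 2)))
    return results

print(gsp_prices({"a": 0.50, "b": 0.30, "c": 0.25}, slots=2))
# [('a', 0.31), ('b', 0.26)] - nobody pays their own bid
```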

HTH

Khensu

10:51 pm on Oct 11, 2006 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



I think the math heads are in the search & analytics forums.

Most of us just throw up pages and count pennies here. ;)

shorebreak

10:52 pm on Oct 11, 2006 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Catchmeifucan,

Your math assumes that you have accurate impression data, which is probably not going to be the case. Google's traffic estimator is (and IMO forever will be) broken - our estimates are that it's 30%+ off, 70% of the time.

That said, in order to both run a profitable campaign and maximize the total profits from that campaign, you need to find a way to accurately estimate AdWords traffic. IMO, the only way you can get that is if you work with someone who has an accurate AdWords traffic model.

-Shorebreak

RhinoFish

2:59 pm on Oct 12, 2006 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



this assumption, CTR = ln(CPC) - 1, is not accurate in my experience; it's not a Gaussian / natural-log distribution (nor approximately one).

and the derivative / max route assumes the function is continuous; that assumption also is not accurate in my experience.

empirical crunching with dampened feedback loops serves my needs better than functional analysis, and it allows step adjustments not present in continuous-function analysis, like a new competitor entering my niche.

so it's not that we don't want to talk math; many of us just analyze in a different manner. i can see how your approach might help save some money in the early stages, because of the weakness of my method with new data (if i guess wrong about the starting point, it's going to cost me a lot to get the data that corrects me). your method seems superior there. however, through years of doing this, i've learned to use the data of others, in the form of EPC, to get me into the right ballpark most of the time - and that takes me 1 sec to calculate - so i'm still avoiding using my 2 full years of college calculus and analytical functions.
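
a generic sketch of what such a dampened loop might look like - not my actual rule; the damping factor, step cap, and target roi are all made up for illustration:

```python
def adjust_bid(current_bid, observed_roi, target_roi,
               damping=0.25, min_bid=0.05, max_step=0.05):
    """Nudge the bid toward the ROI target, damped so noise can't whipsaw it."""
    error = (observed_roi - target_roi) / max(target_roi, 1e-9)
    step = current_bid * error * damping
    step = max(-max_step, min(max_step, step))  # cap each move
    return max(min_bid, round(current_bid + step, 2))

print(adjust_bid(current_bid=0.30, observed_roi=0.34, target_roi=1.00))
# 0.25 - roi below target, so the bid eases down one damped step
```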

[edited by: RhinoFish at 3:00 pm (utc) on Oct. 12, 2006]

catchmeifucan

6:24 pm on Oct 12, 2006 (gmt 0)

10+ Year Member



Great to see so many ideas and comments thrown up here. For me, scanning through the keyword list and adjusting bids based on experience (I used to use pivot tables) became more and more tedious. I want to find a way to automate this process with mathematical or statistical support, or an algorithm that applies empirical crunching.

let's say I have these data here for a keyword:
Time¦Impressions¦Clicks¦CTR¦Position¦Cost¦Bid¦CPC¦Conversions¦CPA¦ConvRate¦Value¦VPA¦VPC¦Profit¦ROI
Last 7 days¦17,387¦827¦4.760%¦3.547¦$218.95¦$0.30¦$0.26¦10¦$21.89¦1.210%¦$293.70¦$29.37¦$0.36¦$74.75¦34.140%
Last 14 days¦29,521¦1,302¦4.410%¦4.137¦$304.23¦$0.25¦$0.23¦22¦$13.83¦1.690%¦$634.40¦$28.84¦$0.49¦$330.17¦108.526%

With that data, how would you bid? Bid up or bid down? By how much? And why?

If I were to answer this question, I would either use the Google traffic estimator to estimate traffic at different bids, or find a function between traffic and bid from my historical data; then I'd use the average conversion rate and order value for the last 14 days to calculate the optimal bid.
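
For example, here is a rough sketch of that second route; the power-law click curve and the CPC ≈ 90%-of-bid ratio are my assumptions, not AdWords facts, and extrapolating a two-point fit beyond the observed bids is exactly where it gets shaky:

```python
import math

# (bid, weekly clicks): the prior week is derived from the table (1,302 - 827).
history = [(0.25, 1302 - 827), (0.30, 827)]

# Two-point power-law fit: clicks = a * bid^b (an assumed functional form).
(b0, c0), (b1, c1) = history
b = math.log(c1 / c0) / math.log(b1 / b0)
a = c0 / b0 ** b

conv_rate, avg_value = 0.0169, 28.84      # 14-day averages from the table
best_bid, best_profit = None, float("-inf")
for cents in range(10, 50):               # scan $0.10 .. $0.49
    bid = cents / 100.0
    clicks = a * bid ** b
    profit = clicks * (conv_rate * avg_value - 0.9 * bid)
    if profit > best_profit:
        best_bid, best_profit = bid, profit
print(f"best bid ${best_bid:.2f}, est. weekly profit ${best_profit:.2f}")
```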

What is your way?

I also have some questions here:
Damon: what does AW/AS stand for?
Tastatura: I understand the second-price auction, but I didn't figure out what the "best bid" is; can you show me?
Shorebreak: did you figure out an accurate AdWords traffic model? If yes, how? If not, what is your bidding model?
RhinoFish: I am pretty interested in your feedback model; can you show us your 1-sec calculation here?

-Kevin

shorebreak

6:33 pm on Oct 12, 2006 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Kevin,

We did figure out a way to accurately estimate traffic, but it's a function of us managing ~US$250M/year in search spend across 15M+ keywords. You can get very accurate at estimating traffic for any given keyword when you have that much data.

That traffic estimation capability, however, is something that's only available to our clients.

-Shorebreak

DamonHD

8:19 pm on Oct 12, 2006 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Hi

AW/AS = AdWords/AdSense.

Rgds

Damon

RhinoFish

2:05 pm on Oct 13, 2006 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



"I am pretty interested in your feedback model, can you show us your 1 sec calculation here?"

Nope, but you have data and know where to look, so you should be able to inspect for a consistent thumbrule.

RhinoFish

2:17 pm on Oct 13, 2006 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



"with that data, how would you bid? bid up or bid down? how much? and why?"

it looks like your time frames overlap - last 7 and last 14 - so that confuses me... is it really two 7-day periods, one the most recent and the other the one preceding that? that's what i'm assuming...

i'd need a third data point to decide which direction to move (between the two, or lower) - it takes 3 points to identify inflections... so maybe do a 20-cent bid data run... but i really think the amount of conversion data you have here (and therefore ROI and all the other derivatives) is too small a data set to make sound decisions. the conversion rates appear to be very different, and i think the small sample size is the reason - so lengthen the runs or do something else to get a larger sample size.

and this exercise is about determining the optimum bid, but this data set tells me to work on conversion rate and CTR first. so i wouldn't be monkeying with bids for this, i'd be working elsewhere.

[edited by: RhinoFish at 2:19 pm (utc) on Oct. 13, 2006]

mike_ppc

3:24 pm on Oct 13, 2006 (gmt 0)

10+ Year Member



if I understood it correctly, in the last 7 days your efficiency went down, so your bid raise had an adverse effect - then, bid down.
Even your profit went down ($74.75, compared to $255.42 in the preceding 7-14 day period).
Anyway, it's much too easy to see that. So, what do you want to say?

catchmeifucan

9:46 pm on Oct 13, 2006 (gmt 0)

10+ Year Member



Ok, let me add one line of data here.

Time¦Impressions¦Clicks¦CTR¦Position¦Cost¦Bid¦CPC¦Conversions¦CPA¦ConvRate¦Value¦VPA¦VPC¦Profit¦ROI
Last 7 days¦17,387¦827¦4.760%¦3.547¦$218.95¦$0.30¦$0.26¦10¦$21.89¦1.210%¦$293.70¦$29.37¦$0.36¦$74.75¦34.140%
Last 7-14¦12,134¦475¦3.910%¦4.983¦$85.28¦$0.25¦$0.18¦12¦$7.11¦2.53%¦$340.70¦$28.39¦$0.72¦$255.42¦299.51%
Last 14 days¦29,521¦1,302¦4.410%¦4.137¦$304.23¦$0.25¦$0.23¦22¦$13.83¦1.690%¦$634.40¦$28.84¦$0.49¦$330.17¦108.526%

By looking at the data set, yes, the last 7 days performed worse than the previous 7-day period in terms of conv rate and profit. But 1) is it statistically significant? 2) is it caused by raising the bid? The first question is easier to answer; a mean test may help. The second question, however, is not that easy to answer. It is clear that raising the bid helped position, CTR, and clicks, but one can't conclude that the drop in conversion rate was caused by raising the bid. (I had done some global analysis on the correlation between position and conv rate, but there's no significant evidence that these two are correlated.)

A brief observation would lead to the conclusion to bid down, as suggested by mike_ppc, but what if the drop in conv rate was not caused by raising the bid but by some other random factor? What if next week the conv rate comes back to 2.53%, which could mean a profit of $395.56 if the click volume remains at 827?
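
On question 1, a minimal sketch of a two-proportion z-test on the two periods' conversion rates (10/827 vs. 12/475); it comes out around |z| = 1.8, short of the 1.96 needed for significance at the 5% level:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: the two conversion rates are equal."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                        # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

print(f"z = {two_proportion_z(10, 827, 12, 475):.2f}")  # z = -1.78
```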

To RhinoFish: 'i'd need a third data point to decide which direction to move (between the two, or lower) - it takes 3 points to identify inflections... so maybe do a 20-cent bid data run...'

Can you explain?

'but i really think the amount of conversion data you have here (and therefore roi and all other derivatives) is too small a data set to make sound decisions'

First of all, a keyword that receives 800 clicks and 10 conversions in a week is not a low-volume keyword. I believe the users here see a lot of keywords that don't even receive 100 clicks in a week; at least for me, 80% of my keywords receive less than 100 clicks a week. I have to find a way to deal with these low-volume keywords.

Second, of course, the more data you have, the narrower the range for a given confidence interval. However, the AdWords space is so dynamic that I doubt how useful data from more than one month ago is. So RhinoFish, how much data do you use, and why? 30 conversions? What if the data is so sparse that you have to go back half a year to get that? How useful is that?

RhinoFish: I am guessing your empirical crunching and feedback model works approximately by comparing the performance of this 7-day period to the last 7-day period; you raise your bid if you see better performance, and you lower your bid if you see things go worse. But how much do you raise or lower your bid? Do you have statistical and mathematical calculations to support the bid changes? And are you really comfortable with it?

Show us your thumb rule. It may not be as valid as you thought.

RhinoFish

1:18 pm on Oct 16, 2006 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



"To RhinoFish: 'i'd need a third data point to decide which direction to move (between or lower) - takes 3 points to identify inflections... so maybe do a 20 cent bid data run...'

Can you explain?"

you have 2 data points listed and it seems the lower bid is better, but you don't yet know if the maximum occurs between the two points or below them both. if you take another step lower in bid and things drop off, you then know the inflection point in your optimization curve is between 20 and 30. if at 20 things look even better, you need to continue laddering down.

this is the same thing as saying... you have .: so far, but don't yet know if it'll be .:. or .:¦
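
in code, the same three-point test might look like this (the bid/profit points are hypothetical):

```python
def bracket_direction(points):
    """points: exactly 3 (bid, profit) pairs; says where the peak likely sits."""
    (b1, p1), (b2, p2), (b3, p3) = sorted(points)
    if p2 > p1 and p2 > p3:
        return f"peak bracketed between {b1} and {b3}"
    if p1 > p2 > p3:
        return f"keep laddering down below {b1}"
    return f"keep laddering up above {b3}"

print(bracket_direction([(0.20, 310.0), (0.25, 255.0), (0.30, 75.0)]))
# "keep laddering down below 0.2"
```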

RhinoFish

1:28 pm on Oct 16, 2006 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



"First of all, a keyword that receives 800 clicks and 10 conversations in a week is not a low volumn keyword. I believe the users here see a lot of keywords don't even receive 100 clicks in a week, at least for me, 80% of my keywords receive less than 100 clicks a week, I have to find a way to deal with these low volumn keywords.

Second, Of course, the more data you have, the less range it is given a confidence interval, however, Adwords space is so dynamic I doublt how useful it is to use data more than one month ago. So RhinoFish, how many data set do you use and why? 30 conversions? What if the data set is so sparse that you have to go back half a year to get that? How useful is that?"

Calling something low volume or not, or relaying the experiences of others, doesn't mean anything. all i am saying is that 10 data points (conversions) is an insufficient sample size to make sound decisions. i said before that you'd need to do something to increase the sample size, that's all. if you need to group similar keywords for a slightly more macro approach, to get the volume and size you need for sufficient analysis, i assert you'll end up making better decisions overall.

reading certainty into very granular data points and sample sizes will lead you astray; i see it often. if you inspect your conversion ratios across similar words in each of the two 7-day periods, and the sample sizes are insufficient, you'll see what i assert. use grouping to get decent, meaningful sample sizes and hone the group in on their proper targets. cull from that the voluminous words that are worth further inspection. but don't believe that microscopic analysis of 10 conversions over a 7-day period is sufficient to drive reasonable business decisions.

and before anyone gets their panties in a wad, run several consecutive 7-day periods with everything held constant and inspect the groups for deviation - if sample sizes are sufficient (and little else has changed), you should see some smoothness in the data. but if, in 7 days, 4 conversions happened on 1 day, 2 on another, and 1 on a third, you should guess that even picking a day of the week on which to dissect your 7-day periods is flawed with this size of sampling.

statistical myopia is a common ailment.
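
a small sketch of that consistency check - pool several consecutive 7-day runs and look at the spread (the counts are hypothetical):

```python
import statistics

weekly_conversions = [10, 12, 4, 15, 9]           # five consecutive 7-day runs
mean = statistics.mean(weekly_conversions)
cv = statistics.stdev(weekly_conversions) / mean  # coefficient of variation
print(f"mean {mean:.1f}, cv {cv:.2f}")            # high cv = sample too thin to trust
```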

RhinoFish

1:59 pm on Oct 16, 2006 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



"RhinoFish: I am guessing your emprical crunching and feedback model works approximately by comparing the performance of this 7 day period to the last 7 day period, you raise bid if you see better performance,"

in essence yes, but slightly more mechanical and the period isn't time based, but volume based. and i stop searching for perfect point well before most others because i believe the point is constantly shifting and that this approach leaves me more time to cover more markets. some would call it sloppy, i call it good enough. i monitor ongoing roi and reinspect anything that doesn't meet my requirements or shows degradation in any case.

"and you lower your bid if you see things go worse. But how much do you raise and lower your bid? Do you have statistical and mathematical calculation to support the change of bidding?"

bracketing once 2 data points exist. the intial data points are (1) guesstimate from my epc thumbrule and (2) 20% lower than that (an intentional bias towards bidding lower - hey, if you gotta pick a point, might as well be cheaper). here's a discussion for bracketing, it pertains to searches for mins, but the ideas the same:
[mathews.ecs.fullerton.edu...]
and i add a limit the iterations, as long as i'm positive roi - my "good enough" approach.

i also don't get all caught up in fibonacci versus secant versus others, because i consider my data to have high deviations and there's no real need on my part to improve efficiency (i limit my iterations) or accuracy (i dismiss marginal differences between two near-bid apparent optimums). if a thumbrule gets you in the ballpark and you're making money that meets your reqs, it's better to add another arrow to the quiver than it is to polish the lone arrow. lone-arrow optimization is a recipe for disaster; risk abatement is far more important to my business than maximizing ROI to the nth degree.

so in your scenario, where i suggested a 20-cent bid, simple bracketing here is very likely to do just fine. find the inflection point, inspect for reasonable ROI, and set. if you're so inclined, and you find point b is a confirmed inflection point (meaning 25 was better than 30 and 20), then split again for optimization - but i'd caution that thinking 22 vs 23 is worth inspecting is where the danger lies... your time is better spent adding more arrows.
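
a rough sketch of that limited-iteration bracketed search, written as a ternary search for a maximum (the linked page covers minimums; the idea mirrors). measure_profit stands in for running a live bid test, and every constant is illustrative:

```python
def good_enough_bid(epc_guess, measure_profit, max_iters=4):
    """Bracket from an EPC guess, biased 20% lower; stop after a few splits."""
    lo, hi = epc_guess * 0.8, epc_guess
    for _ in range(max_iters):  # limited iterations: good enough beats perfect
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if measure_profit(m1) < measure_profit(m2):
            lo = m1             # peak is in the upper two-thirds
        else:
            hi = m2             # peak is in the lower two-thirds
    return round((lo + hi) / 2.0, 2)

toy_curve = lambda bid: -(bid - 0.23) ** 2   # toy profit curve peaking at $0.23
print(good_enough_bid(0.30, toy_curve))      # 0.25 - close enough, stop there
```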

RhinoFish

2:09 pm on Oct 16, 2006 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



"Show us your thumb rule. It may not (be) as valid as you thought."

hehehe, really? it's just a guesstimating method for detemrining where to start - that's all. it's doesn't need a high degree of accuracy. there are valid reasons why epc would be very different from prog to prog, so i already know it's just a guesstimating tool, not a crystal ball. you've gotta start somewhere and that somewhere doesn't have to be the end run answer to be effective.

i'm not looking for razor-like tools (that's foolsgold), i like to bludgeon things with a sledge hammer and then refine that to awl-dom... i'll leave the electron microscopy for everyone else.

DamonHD

8:33 pm on Oct 16, 2006 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Hi,

I agree with RhinoFish; there's no point in aiming for electron microscopy when you and the whole of AS/AW are bouncing madly on an inflatable kiddie castle with thrash-metal blaring at full volume.

Unless you have VERY large volumes, and access to a few missing variables that most of us don't, you really will not have enough (current) data to do anything VERY clever.

IMHO: YMMV (IANAL, etc)...

Rgds

Damon

catchmeifucan

5:16 pm on Oct 17, 2006 (gmt 0)

10+ Year Member



RhinoFish: Very good discussion with you here. The essence of your 'theory' is: if you can't model something accurately, don't bother trying; instead, go after an approximation that you think is close enough. I would say it is a good idea, considering how dynamic the AW space is.

I'll keep exploring, and I hope for more discussion here so we can address this problem from different angles.

catchmeifucan

6:04 pm on Oct 19, 2006 (gmt 0)

10+ Year Member



RhinoFish: How do you deal with keywords with little traffic? I have a dilemma here. I have an ad group with a few hundred keywords; however, most of them (90%) receive less than 100 clicks in a month. An analysis of the spend on keywords with no conversions shows that more than 60% of my spend goes to them, but since each of them receives only a few clicks, I have insufficient evidence to bid them down or pause them. But like I mentioned above, in total they eat up more than 60% of my budget. What do you guys do?

RhinoFish

2:27 pm on Oct 20, 2006 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



if they're tightly related and in the same ad group (which they should be!), i treat them in an aggregate manner since the individual keywords don't have enough data to optimize.

you should spend more time and effort on the high-volume words, so when i have this situation, i often split the high-volume words out into a separate ad group. this allows me to view the low-vol words at the ad-group level for analysis. managing them individually can't be done (insufficient data) and isn't efficient (they're low-vol words) - so i treat them one level up, at the ad-group level, and bid the lot at the same bid.
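
a small sketch of that roll-up - pool the thin keywords at the ad-group level and judge (and bid) the group as one unit; field names and numbers are illustrative:

```python
def adgroup_rollup(keywords):
    """keywords: list of dicts with clicks, cost, conversions, value."""
    total = {k: sum(kw[k] for kw in keywords)
             for k in ("clicks", "cost", "conversions", "value")}
    total["conv_rate"] = total["conversions"] / max(total["clicks"], 1)
    total["roi"] = (total["value"] - total["cost"]) / max(total["cost"], 1e-9)
    return total

low_vol = [
    {"clicks": 40, "cost": 9.80, "conversions": 0, "value": 0.0},
    {"clicks": 25, "cost": 6.10, "conversions": 1, "value": 29.0},
    {"clicks": 12, "cost": 2.40, "conversions": 0, "value": 0.0},
]
print(adgroup_rollup(low_vol))  # enough pooled data to judge the group's roi
```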

"An analysis on the spend on keywords with no conversions shows that I am spending more than 60% on them"
if you're just stripping out keywords that have no conversions and then adding their spend, you're taking an aggrtegated look at data sets that individually are of insufficient size. and you've purposely removed those with conversions. i liken this to a 100-sided die and you've rolled it 100 times and have collected the data set of numbers that didn't come up yet and are about to conclude those are bad... which I think would be a wrong conclusion. but the 60% part seems weirdly high - perhaps you've lowered bids on some words that have data and are left with skewed data on the "bad" set. if they're low vol and not converting, they shouldn't be soaking up that much budget. but i bid for roi and raise - typically come in low and raise to grab share. if you typically enter high, it might explain the 60% budge grab by these words. without seeing all the details, i'm really just guessing here though - my gut tells me they're likely not tightly related words...