Forum Moderators: buckworks & skibum


Bid Management Software Recommendations

is300

11:29 pm on Dec 8, 2004 (gmt 0)

10+ Year Member



Does anyone have any Bid Management software recommendations? I'm looking at Atlas OnePoint right now to handle my Google and Overture ads. I'm looking for something that can handle it all and keep the conversion data confidential.

redzone

6:29 am on Jan 6, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Cline,
Though I understand your strategies, I tend to agree with Shorebreak that automated solutions can respond better to varied campaign strategies at different points in the campaign.

Our advertisers constantly run into the same scenario: 3-5 advertisers cohabiting semi-peacefully on a specific keyword, until a new advertiser jumps in and upsets the entire balance. A simple strategy in an automated environment is to raise the Acceptable Cost Per Conversion for a short duration, which will automatically increase position/bid, teaching the newbie to go play in a different sandbox.
Technology that incorporates variable Target Cost Per Action scheduling is also a key component for vertical markets that show lower conversion rates during late night hours or weekends.
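
To make those two rules concrete, here's a toy sketch of the idea (Python pseudocode written for this post; the function names and numbers are invented, not our production logic):

    from datetime import datetime
    from typing import Optional

    BASE_TARGET_CPA = 20.00  # illustrative "Acceptable Cost Per Conversion", in dollars

    def target_cpa(now: datetime, defend_until: Optional[datetime] = None) -> float:
        """Target CPA in effect right now, per the two rules above."""
        cpa = BASE_TARGET_CPA
        # Variable Target CPA scheduling: weekends and late nights convert
        # worse in many verticals, so accept less cost per conversion then.
        if now.weekday() >= 5 or now.hour >= 23 or now.hour < 6:
            cpa *= 0.75
        # Sandbox defense: temporarily raise the acceptable CPA, which lets
        # the bidder push position up until the new competitor backs off.
        if defend_until is not None and now < defend_until:
            cpa *= 1.30
        return cpa

    def max_bid(conversion_rate: float, now: datetime, defend_until=None) -> float:
        """Highest CPC that keeps expected cost per conversion within target."""
        return target_cpa(now, defend_until) * conversion_rate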

The point I'm trying to get across is that even though automated Bid Management Analytics technology requires human monitoring and tuning, the processes run 24/7/365 in the background, freeing up valuable resources for other tasks that "require" the human touch. In my 5 years working in the "paid search" space, I have yet to see a bidding strategy that could not be applied to an automated process.

midwestguy

10:16 pm on Jan 6, 2005 (gmt 0)

10+ Year Member



Until I read this thread, I didn't appreciate just how sophisticated bid management software and strategies have become. Now I'm curious about the systems folks are talking about here. ;-)

What type of computer platform do the bid management systems that handle large (i.e., $50-200K+ per month) campaigns run on? Solaris, Linux, Windows XP/2003, etc.? How powerful a machine?

What language are they developed in? Java with servlets running in an app server, Perl, C++, etc.?

What database is used? Oracle, PostgreSQL, MySQL, etc.? Is the database "mined" as a data warehouse?

I take it there are real time feeds from the search engines and web servers being fed into the bid management system, right?

With bid management systems like this, where does the computer typically "bottleneck"? File and database I/O, CPU usage crunching the numbers, etc.?

Quite fascinating to me! Thank you very much for sharing whatever you can on the above, as I'm always interested in what tools and approaches folks choose for what type of "back end" jobs, computer-wise.

Thanks a bunch!

Louis

shorebreak

12:35 am on Jan 7, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Midwest guy,

You're essentially asking about the infrastructure these systems are built on; in today's age of infrastructure on demand, I don't think there's much of an opportunity to differentiate based on the infrastructure itself. The core of these keyword management systems is the business logic and algorithms that take cost, revenue and margin data and make decisions.

That's another way of saying I don't know the answer to your question #:^)

redzone

4:36 am on Jan 7, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Louis,

We're an MS shop, running Dell hardware for our DB clusters: slave servers locally networked to two mirrored DB clusters co-lo'd on opposite coasts, plus many remote slave servers co-lo'd at various locations around the US. (Our philosophy is to remain fault tolerant against local/regional network connectivity issues.)

The DB clusters are fairly beefy, running quad Xeon processors with 15K RPM SCSI drives, and an MTS (Microsoft Transaction Server) that sits between the load balancer and the DB servers running MS-SQL.

We're running native MS-VB and MS-C++ executables for both our background bid management functions and our advertiser web interface. We found that scripting languages were too slow and didn't provide the flexibility we wanted. NSoftware has a great ActiveX/.Net control package for HTTP functions that we've used in development for several years.

Everything is real time, except for obtaining billable click/cost data from Overture/GAW. We track gross clicks/cost during the calendar day, then export the previous day's billable clicks/cost from Overture/GAW at night and re-calculate ROAS/CPA for our entire keyword base, as the paid search SEs will filter some click activity as non-billable.
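
Conceptually the nightly rollup is simple: replace the gross tallies with the engines' billable figures, then recompute the ratios. A toy illustration (Python with SQLite; the table and column names are invented for this post, not our schema):

    import sqlite3
    from datetime import date, timedelta

    def reconcile(db: sqlite3.Connection, day: date) -> None:
        """Overwrite gross click/cost with billable figures, recompute ROAS/CPA."""
        rows = db.execute(
            "SELECT keyword, billable_clicks, billable_cost, revenue, conversions "
            "FROM billable_export WHERE day = ?", (day.isoformat(),))
        for kw, clicks, cost, revenue, conversions in rows:
            roas = revenue / cost if cost else None   # return on ad spend
            cpa = cost / conversions if conversions else None
            db.execute(
                "UPDATE keyword_stats SET clicks = ?, cost = ?, roas = ?, cpa = ? "
                "WHERE keyword = ? AND day = ?",
                (clicks, cost, roas, cpa, kw, day.isoformat()))
        db.commit()

    # Run for the previous calendar day, once the engines' exports are available:
    # reconcile(conn, date.today() - timedelta(days=1))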

The main bottleneck in our environment is DB I/O. We provide real time bid cost data not only for our own systems, but also for 3rd party analytics systems. Installing an MTS server allowed us to move up to the next level in scalability, and dramatically relieved DB I/O stress.

A secondary bottleneck is the time at which GAW/Overture makes the previous day's billable click/cost data available. The fun begins in trying to roll up all that data in the shortest time possible, so accurate data is available to advertisers as early as possible.

It's an exciting environment to be a part of, as the majority of our processes are proprietary, and did not previously exist.

shorebreak

5:55 am on Jan 7, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Thanks Redzone. You've described the body; can you describe the brain?

redzone

3:47 pm on Jan 7, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Shorebreak,
We've found over the past several years that "One Size does Not Fit All"...

Think of our system's brain as Sybil... :)

A Centralized Algo Platform that can easily assume the correct personality to fit a specific advertiser's needs.

Straight rules-based bidding logic just doesn't fit every advertiser's campaign goals.

For example: we handle bid management and lead generation for an educational agency. They have two conflicting monthly goals: generate enough leads to keep the educational institution happy, and maximize profitability in generating those leads. Now multiply this condition across the several hundred campaigns they service.

Every campaign has its own monthly "Lead Goal" and its own profit-per-lead target. We took our core technology and built a customized wrapper around it that allowed their account reps to see lead forecasts versus monthly goals, and profitability forecasts versus targets. The key was to build a "What If" process that would let an account rep create scenario analysis models on an account to maximize profit while still maintaining monthly lead generation goals.
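
Stripped of the reporting layer, the "What If" pick is a constrained maximization: take the most profitable scenario that still makes the lead goal. A bare-bones illustration (Python; in practice the forecasts come from historical data, the numbers here are invented):

    from dataclasses import dataclass

    @dataclass
    class Scenario:
        label: str
        forecast_leads: int     # projected monthly leads at this bid level
        forecast_profit: float  # projected monthly profit at this bid level

    def best_scenario(scenarios, lead_goal):
        """Most profitable scenario that still meets the monthly lead goal."""
        feasible = [s for s in scenarios if s.forecast_leads >= lead_goal]
        if not feasible:  # goal unreachable at any bid level: get as close as we can
            return max(scenarios, key=lambda s: s.forecast_leads)
        return max(feasible, key=lambda s: s.forecast_profit)

    # Invented numbers, purely to show the shape of the trade-off:
    print(best_scenario([Scenario("bid down 10%", 180, 9000.0),
                         Scenario("hold bids",    220, 8200.0),
                         Scenario("bid up 10%",   260, 6900.0)],
                        lead_goal=200))  # -> "hold bids"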

I only use the term "Bid Management" because that's the terminology that is widely used. I try to think "outside the box", taking into account static constraints, and create technology solutions that are customized for specific business/agency/account goals.

The other side of the coin is providing the experience and consulting time to maximize the return on any technology. I think most advertisers understand their objectives and want control/input in attaining them, but don't have the time to properly monitor results and forecasted projections. I would rather be thought of by clients as a technology partner than a technology provider.

midwestguy

7:06 pm on Jan 7, 2005 (gmt 0)

10+ Year Member



Redzone,

Thank you so much for taking the time to share this. I am very grateful. It's quite helpful in aiding my understanding -- and very impressive, too!

Thank you again -- so very, very much!

Louis

P.S. Thank you too, Shorebreak!

redzone

10:44 am on Jan 8, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Shorebreak,

You didn't comment on my comments about your statement: "We've been trying to address the sparse data issue by applying algorithms that can cluster data from multiple keywords".

I was proofreading a case study earlier today for one of my partners, and came across an example of why "clustering" common words doesn't work. This client is in the event ticket business. Two keywords used in the case study are:
red sox tickets
boston red sox tickets

In your examples these two keywords would most likely be clustered together as common phrases.

BUT, the two keywords perform as differently as night and day. Why?

Two completely different consumer audiences use the terms.

We have found that "Boston" area folks tend to search with just the phrase "red sox tickets". They are quite aware the red sox are based in Boston... :)

While the majority of consumers outside the Boston area tend to search "boston red sox tickets".

"red sox tickets" has outperformed "boston red sox tickets" by more than a 2:1 ratio over the previous baseball season. The sox did win the series this past year, so search volume for playoff and series tickets was huge during the month of October, but both of my example phrases were still receiving some search traffic in October.

Clustering these two keywords together would have penalized "red sox tickets" for the less profitable return on "boston red sox tickets".

There are numerous other keyword phrases that follow the above example, and that is why we abandoned testing on clustering low-volume keywords early in the game.

We even treat blue widget, "blue widget", and [blue widget] as separate keywords, and track ROAS/CPA separately for each match type of the same phrase in GAW.
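
In code terms that just means the tracking key is the (phrase, match type) pair, never the phrase alone. A trivial sketch (Python; names and numbers invented):

    from collections import defaultdict

    # One stats bucket per (phrase, match_type) pair -- never per phrase alone.
    stats = defaultdict(lambda: {"cost": 0.0, "revenue": 0.0, "conversions": 0})

    def record_click(phrase, match_type, cost, revenue=0.0, converted=False):
        s = stats[(phrase, match_type)]  # match_type: "broad" / "phrase" / "exact"
        s["cost"] += cost
        s["revenue"] += revenue
        s["conversions"] += int(converted)

    record_click("blue widget", "broad", 0.42)
    record_click("blue widget", "exact", 0.38, revenue=25.00, converted=True)
    # ROAS/CPA for ("blue widget", "broad") and ("blue widget", "exact")
    # now accumulate independently, as they should.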

Your thoughts?

Robsp

1:39 pm on Jan 8, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



redzone,

This is also true for different geos. The same words can have very different performance in an all-languages/all-countries campaign vs a country-targeted one.

shorebreak

3:57 pm on Jan 9, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Redzone, agree with you 100% that clustering keywords based on semantic similarities is useless; I should've been clearer in saying that we cluster sparse-data keywords based on similarities in their impression, cost and revenue data.
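
Very roughly, the shape of it is something like this (a toy Python sketch of generic k-means on performance vectors, not our actual algorithms):

    import math
    import random

    # Toy k-means over keyword performance vectors. Sparse keywords would then
    # inherit bidding decisions from whichever cluster they land in.
    def kmeans(points, k, iters=20):
        centers = random.sample(points, k)
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for p in points:
                nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
                clusters[nearest].append(p)
            centers = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centers[i]
                       for i, c in enumerate(clusters)]
        return centers, clusters

    # Each vector: (normalized impressions, avg CPC, revenue per click)
    keywords = [(0.90, 0.35, 1.20), (0.80, 0.40, 1.10), (0.10, 0.30, 0.00),
                (0.05, 0.25, 2.50), (0.07, 0.28, 2.40)]
    centers, clusters = kmeans(keywords, k=2)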

redzone

12:06 am on Jan 10, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



RobSp: Definitely agreed on the geographics. Since GAW, Overture, and eSpotting all either allow geo-targeting or run segregated engines per country, geo-targeting is handled at the top level rather than filtered at the bottom end.

Shorebreak, I'm still lost on the sparse data clustering concepts. We always defined Sparse Data keywords as those that accumulate little impression/cost/revenue data over a specific time cycle, compared to broader keywords that receive enough data to support some analytics action. If a keyword fits the Sparse Data definition, how can you cluster based on impressions/cost/revenue, or better yet, why would you cluster based on those metrics?

Until you receive a large enough data sample on a keyword/campaign, the probability of receiving an "Action->Sale/Lead" in the first 5 clicks is about the same as not receiving any Action in the first 100 clicks. My stance is always to leave "Sparse Data" keywords alone until they have a large enough data sample to make an educated analytics decision. If the keyword isn't in top position, push it up over time to try and attain more traffic, etc...
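
In our world that's simply a gate in front of the analytics. Something like this, illustratively (Python; the threshold and actions are invented for this post):

    MIN_CLICKS = 100  # invented threshold: "enough sample" to act on

    def bid_action(clicks, cost, conversions, position, target_cpa):
        """What to do with a keyword, given its accumulated data sample."""
        if clicks < MIN_CLICKS:
            # Sparse data: make no CPA judgment yet. Nudge toward top position
            # over time so the keyword accumulates traffic faster.
            return "raise_bid" if position > 1 else "hold"
        if conversions == 0 or cost / conversions > target_cpa:
            return "lower_bid"  # enough sample, and it converts too expensively
        return "hold"

    print(bid_action(clicks=12, cost=4.80, conversions=0,
                     position=3.2, target_cpa=20.00))  # -> "raise_bid": still sparse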

shorebreak

4:56 pm on Jan 10, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



RZ,

I guess the approach to sparse data is part of our secret sauce.

Receptional

5:33 pm on Jan 10, 2005 (gmt 0)



Excellent debate, which I will have to read in much more depth later.

For our part, we have really tried to use automated software in various guises, but like others on this thread, we have found flaws in the logic to date. However, we use a different system for live CPC analysis which is very helpful in determining ROI more or less in real time. Since this is also part of our web analytics, it is often much more helpful for decision making, as reports can be sent directly to marketing people showing actual conversions by keyword and cost, allowing them to quickly draw their own conclusions.

Using the bid management technology itself, however, is not so successful. The rules are too grey at this stage to bother programming - even (in fact especially) when dealing with many thousands of terms. Clustering is a capability that we have in our system, but our strategy is to exact-match phrases where possible - dramatically reducing CPC, but in doing so, very few terms offer enough volume to produce statistically relevant ROI information. It is this that ruins the logic. No hypothesis can be made on fewer than several hundred impressions, and research shows that terms with many thousands of impressions tend to occur at the "research" stage of a web user's "buying cycle". By the time users reach the pricing or buying stage, their search terms are much more specific and the number of impressions drops below statistically relevant levels.
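
To put a rough number on "several hundred", the standard proportion-estimate formula tells the story (Python; the 2% conversion rate and +/-1% precision are my own assumed inputs):

    import math

    # Clicks needed to estimate a conversion rate to +/- `precision` at ~95%
    # confidence (normal approximation). Inputs below are my own assumptions.
    def clicks_needed(conv_rate, precision, z=1.96):
        return math.ceil(z * z * conv_rate * (1 - conv_rate) / precision ** 2)

    print(clicks_needed(0.02, 0.01))  # -> 753 clicks to pin a 2% rate inside 1%-3%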

This paradox has been our blessing in the end. When we really examined our position in the marketplace, we thought, "hmm, if anyone ever builds something that truly IS devoid of human involvement, then as consultants we can no longer add value". So - as far as we are concerned, with or without bid management software controlling a campaign, human intervention and analysis is still a vital ingredient.

Dixon.

redzone

6:11 pm on Jan 10, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Dixon,

Human intervention is inevitable, there's no doubt about that. We work very closely with account reps, both in a consulting role and in training them to analyze the data that our system generates.

My comment was that once an analyst determines strategies they want to implement into a campaign, these strategies can be automated, giving the analyst the resources to handle more campaign load.

redzone

6:15 pm on Jan 10, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Shorebreak,

Not sure about the need for the "smoke and mirrors" around a segment of a paid search campaign that represents less than 5% of the spend... :)

If a keyword generates less than "x" clicks per month, the effect of its cost/conversion expense on the campaign is minimal. Better to just let "time" be your friend, until a large enough data sample is available for an analytics decision to be made?

Receptional

7:06 pm on Jan 10, 2005 (gmt 0)



"once an analyst determines strategies they want to implement into a campaign, these strategies can be automated, giving the analyst the resources to handle more campaign load."

With this I cannot disagree, and in this instance software will crucify a human in a race. But I find the problem to be having "statistical confidence" in a strategy in the first place: the data usually pertains to volumes that are not statistically significant once you break the searches down to exact matches, and not statistically testable when you cluster terms into a group, as the groups themselves are not linked to user behaviour patterns ("buy a widget" vs "find a widget" are not similar in nature).

We have found that clustering into 3 groups (highly desirable, desirable and acceptable) is probably as useful as it gets mathematically, unless you have much more data than we tend to have. You may well manage more, as you have your own (impressive looking) "kit", but it is by no means clear that "buy a widget" is best bid at 40 cents and "find a widget" is best bid at 30 cents, however much data you have, since the dynamic nature of competitor bids and of days of the week (or seasons, or times of day, or fraud, or market conditions, or irrational bidding) distorts any constants that might statistically be assumed.
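
In other words, nothing cleverer than this (a Python sketch; the cut-off values are invented):

    # Three tiers by revenue per click; cut-offs are invented for illustration.
    def tier(revenue_per_click):
        if revenue_per_click >= 1.00:
            return "highly desirable"
        if revenue_per_click >= 0.40:
            return "desirable"
        return "acceptable"

    BID_CAP = {"highly desirable": 0.60, "desirable": 0.30, "acceptable": 0.10}
    print(BID_CAP[tier(1.25)])  # -> 0.6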

I know that the marketing world is against me here, and by virtue of us both being in internet marketing, that makes you right, whether I like it or not. The guys with the budgets generally demand software management solutions to back the campaigns, but when you get to the heavy maths, the evidence is not so often overwhelming enough for me to trust rules-based bidding... yet. Deep Blue doesn't yet beat Kasparov on a regular basis either - and there is a similarity of problem here.

(Mind you - I must accept that Deep Blue would beat me every time...)

Dixon.

redzone

7:31 pm on Jan 10, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Dixon,

Very well put.... :)

I agree there are intangibles that can affect campaign performance and that are very difficult to build into automated processes.

I'm a firm believer in looking at both sides of the coin. Though I think most strategies can be automated, the power is in the historical data, and how it is represented.

I feel the biggest over-spend in paid search campaigns occurs because the majority of advertisers don't have their historical data segmented by time period within days. They either blindly pause their campaigns (weekends/nights) because they can't isolate the specific keywords and time periods that are generating low conversions, or, more frequently, just leave the campaigns on 24/7/365.
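
The segmentation itself is cheap to build; a minimal sketch (Python; the block size and field names are invented for illustration):

    from collections import defaultdict
    from datetime import datetime

    # Conversion stats per (keyword, weekday, 4-hour block), so low-converting
    # dayparts can be bid down instead of pausing whole campaigns.
    daypart_stats = defaultdict(lambda: {"clicks": 0, "conversions": 0})

    def record(keyword, when: datetime, converted: bool):
        block = (when.weekday(), when.hour // 4)  # six 4-hour blocks per day
        s = daypart_stats[(keyword, block)]
        s["clicks"] += 1
        s["conversions"] += int(converted)

    def conversion_rate(keyword, block):
        s = daypart_stats[(keyword, block)]
        return s["conversions"] / s["clicks"] if s["clicks"] else 0.0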

Also, because many campaigns today combine performance, branding, and/or lead quotas, the campaign analyst must have the tools available to customize analytics configuration, tracking, and reporting.

Robsp

7:57 pm on Jan 10, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



It's good that humans are still needed once in a while to make decisions :)

I agree with Dixon that it can be very hard to make statistically sound decisions on limited data sets, and I have found very little research in this area. I find it hard to believe that old direct mail statistics work exactly the same on PPC. Anyone with some insights on this?

shorebreak

9:04 pm on Jan 10, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Redzone,

For our clients - who spend over $100M annually on PPC - the long tail of their keyword sets produces a disproportionate percentage of their profits from PPC, so sparse data clustering is absolutely a competitive advantage. If 20% of the revenue comes from the last 45,000 keywords in a 50,000-keyword portfolio, oftentimes 40-50% of the profit is in those same keywords, provided they're managed more efficiently and accurately than the competition's.

redzone

3:56 pm on Jan 11, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Shorebreak,

I'm not disagreeing that in many paid search campaigns 80%+ of the active keywords produce limited click activity in a calendar month. What I am disagreeing with is that those keywords should be clustered or grouped to try and determine some type of statistical behavior.

I've seen proper management of this group of keywords add as much as an additional 15% to the bottom line, but not account for 40-50% of the overall profit of a campaign. That math doesn't quite work..

Also, what's up with the $100 million in annual PPC spend blurb in multiple posts? :) I think it's common knowledge that we have all developed analytics technology tied to paid search management, and I think skibum/eWhisper have been gracious to let us openly discuss the technology. Let's try and stay away from the self-promotion aspects? Portfolio spend doesn't necessarily equate to leading-edge technology.

cline

5:55 pm on Jan 11, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Robsp, I'm an old direct mail statistician. The analysis problem of PPC looks fundamentally the same to me as direct mail, direct response print, direct response TV, and telemarketing. All of these media have their nuances, but PPC is not fundamentally different. The things I've seen that look most like PPC are in-house lists that are highly segmented by purchase theme. And the statistical rules I use to manage PPC are in essence identical, and produce identical results.

"Statistical significance" is an academic concept that confuses businesspeople more than it informs them. "Statistical significance" is an arbitrary level at which your results are considered worthy of being considered by the academic community. The difference between p=0.49 and p=0.051 is the difference between publishing or perishing for a post-doc. For a business analyst it's not worth remarking on.

Receptional, excellent post.

shorebreak

11:26 pm on Jan 26, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Redzone,

I don't feel I need to prove whether our math "works" or not; I'm just giving my opinion. I list the $100M number because it lends some statistical significance, given it's the largest sample size for PPC management out there.
