| 4:14 pm on Apr 25, 2014 (gmt 0)|
For one of my sites they suggested an experiment several months ago; I did it, and it did not increase revenue. They recently suggested doing the same experiment again...
I wish they'd pull this feature into DFP where 90% of my AdSense is served. Much larger sample sizes...
| 6:50 pm on Apr 25, 2014 (gmt 0)|
I tried their suggestion a few months ago, and revenue for that site tanked. Needless to say, it was the last experiment suggestion I tried.
| 7:03 pm on Apr 25, 2014 (gmt 0)|
netmeg, how'd the experiment go?
| 8:37 pm on Apr 25, 2014 (gmt 0)|
I dunno, I only approved it today.
| 9:24 pm on Apr 25, 2014 (gmt 0)|
I tried a few experiments, but I only saw short-term improvement.
| 1:49 am on Apr 26, 2014 (gmt 0)|
I've been using Experiments since it was launched, mostly with color schemes.
Most of the experiments I've run have just confirmed that my setup is optimal -- the original (my current setup) almost always wins against other variations. I think there were just 2-4 times where the variation won, so I adopted them.
I also saw that suggestion about removing visible borders. I've been running it for 9 days now, but I haven't seen a clear winner yet. I'm willing to let it run on one of my smaller sites for a few more days.
I actually love this feature. It makes experimenting so much easier.
| 12:51 pm on Apr 26, 2014 (gmt 0)|
Well so far, their version is winning.
| 5:59 pm on Apr 26, 2014 (gmt 0)|
I'm seeing this as a very useful tool. I ran one experiment a while back (a border experiment) and the results were so close as to be negligible. In that case I stayed with my original for personal aesthetic reasons. You can devise your own experiments, and I think that over time I'll be running a variety of them where fonts and colors are concerned. It's a nice feature to have, and I'm glad they introduced it.
| 3:17 pm on Apr 27, 2014 (gmt 0)|
Just looking at a single day of a current experiment, I'm seeing something that really demonstrates why you need to run these for a while. Where impressions are concerned, the ratio of variation impressions to original impressions is currently 3/2. Not a true split test. This also raises the question of when G decides to serve impressions from either group. Maybe impressions from the original group are served during historically low-CTR times of day and the variation impressions are served during peak hours. I'm not saying that's so, but who's to know unless you monitor this constantly? A 3/2 ratio is not a balanced experiment, and even if, as with my last experiment, impressions balance out over time, it does make me wonder whether both types are given the same opportunity to score during both peak hours and slack hours.
Right now I'm running a kind of bizarre experiment where I'm comparing Text Only to Text and Display Ads, to see if there really is more money to be made by using the Text and Display Ads option over just Text. It doesn't help that the impressions aren't balanced, but I'll give it a couple of weeks to a month before drawing any serious conclusions.
| 3:32 pm on Apr 27, 2014 (gmt 0)|
When you let Google split your traffic, they apply a multi-armed bandit model. This is not a true split; the configuration that earns the most money gets more traffic to prove itself (a very simplified explanation).
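To picture why a bandit produces lopsided impression counts, here is a minimal epsilon-greedy sketch. This is purely illustrative: Google has not published the algorithm it actually uses, and the variant names and numbers below are made up.

```python
import random

def choose_variant(stats, epsilon=0.1):
    # Epsilon-greedy bandit: with probability epsilon, explore a random
    # variant; otherwise serve the best earner seen so far. Illustrative
    # only -- not Google's actual (unpublished) allocation algorithm.
    if random.random() < epsilon:
        return random.choice(list(stats))  # explore
    # exploit: pick the variant with the highest revenue per impression
    return max(stats, key=lambda v: stats[v]["revenue"] / max(stats[v]["impressions"], 1))

# Made-up numbers: the variation earns more per impression, so the
# allocator keeps feeding it traffic, producing a lopsided split like
# the 3/2 impression ratio described in the thread.
stats = {
    "original":  {"impressions": 1000, "revenue": 2.0},
    "variation": {"impressions": 1500, "revenue": 4.5},
}
```

With `epsilon=0` the allocator always serves the variation here, since it earns 3.0 vs 2.0 per thousand impressions; the small exploration probability is what keeps the weaker arm from being starved entirely.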
| 4:50 pm on Apr 27, 2014 (gmt 0)|
|When you let Google split your traffic, they apply the multi-armed bandit model. |
Perhaps, but today the original version has more impressions and it's still losing.
| 10:25 pm on Apr 27, 2014 (gmt 0)|
As always, with all these newfangled Google things....
I'll let you kids be the guinea pigs. If after a month or so it proves to be a universal winner, I'll try it.
| 12:04 pm on Apr 28, 2014 (gmt 0)|
I'm no kid, but I don't really see a downside to it. I was already doing my own A/B testing, but it's interesting to see what Google comes up with. (To be honest, I've let my own testing lapse because I'm trying to launch three new sites.) So far they've made two suggestions, and both seem to be trending in their favor.
| 1:31 pm on Apr 28, 2014 (gmt 0)|
I think the whole concept of A/B testing is over the heads of a lot of publishers (no offense intended to anyone), and/or its value is not well appreciated by others. If Google is trying to keep its program alive, it makes sense for them to introduce a simpler version of the process that can get more publishers to actively test their implementations of AdSense. It's an excellent initiative on G's part. Getting publishers to better present the ads on their sites and make them more productive is good for advertisers, who are the ones G is really trying to please (well, them and their own accountants).
| 4:04 pm on Apr 28, 2014 (gmt 0)|
(I'm an advertiser; I know from A/B testing)
Interesting - my red vs blue link experiment is running neck and neck, but my text ads vs text/display is running 4 to 1 in favor of text only.
| 6:16 pm on Apr 28, 2014 (gmt 0)|
|When you let Google split your traffic, they apply the multi-armed bandit model |
OK, I get it now. Guess that was over my head when I first replied. Multi-armed bandit indeed. Explains a few things.
"If all else fails, read the instructions."
| 3:11 am on Apr 29, 2014 (gmt 0)|
I wish there was a way to test different ad sizes.
| 10:27 am on Apr 29, 2014 (gmt 0)|
I actually started using this tool the other week. I've been with AdSense for way too long and never really tried it. I have to say I'm pretty impressed so far. I agree with nickys on testing different ad sizes, and I'd suggest submitting that as feedback; the AdSense group is one of the most productive units of Google I have seen.
For example, I started a test two days ago, and the variation is up over 250%; the main difference was changing the ad link color from a light blue to a darker blue.
The other difference I see is running text only vs. text and images. I'll be the first to say I think image ads in AdSense are the worst, and I tend to avoid them at all costs. BUT the testing so far, at least in the short term, has shown that inviting more people to the party does increase total RPM, which is the main metric I go by for my AdSense account.
| 12:36 pm on Apr 29, 2014 (gmt 0)|
Since my sites are seasonal, and the type of image ads I get are for vacation spots and travel and whatnot, I'm wondering if I shouldn't re-do the image vs text test in a couple months. Most of my regulars aren't out yet with their new image ads.
nickys, you can test different ad sizes, but I don't think you can use the AdSense experiment to do it. Depending on how your site is built, there are all kinds of technologies you can use. Mine are WordPress, so I use an advertising plugin. Before WordPress, I used a little PHP script to do it. YMMV.
| 1:01 pm on Apr 29, 2014 (gmt 0)|
|I wish there was a way to test different ad sizes. |
This can also be done with server-side code. In this case, you generate the ad unit (and its container div) at runtime based on whatever logic you want to employ. A random number generator will get you close to a split test (based on odd and even numbers). It's important to note that if you're showing one ad and hiding another, you DO NOT render both ad codes in the HTML; hiding an ad in a hidden div is a no-no. With the server-side approach, your code grabs one ad code or the other and builds it into your page at runtime.
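A minimal sketch of that server-side approach, in Python for illustration (the slot IDs and variant names below are placeholders, not real ad codes):

```python
import random

# Placeholder ad codes -- substitute your real AdSense snippets.
AD_CODES = {
    "A": '<ins class="adsbygoogle" data-ad-slot="1111111111"></ins>',
    "B": '<ins class="adsbygoogle" data-ad-slot="2222222222"></ins>',
}

def pick_ad_code():
    # Roughly 50/50 server-side split: exactly one ad code is rendered
    # into the page. The losing variant is never sent to the browser,
    # so nothing is hidden with CSS.
    variant = random.choice(sorted(AD_CODES))
    return variant, AD_CODES[variant]

variant, html = pick_ad_code()  # build `html` into your page template
```

The key property is the one stressed above: only the chosen snippet ever reaches the HTML, so there is no hidden second ad unit on the page.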
| 1:18 am on May 1, 2014 (gmt 0)|
One more idea: if you could use the experiment feature to test different ad locations, now that would be awesome!
| 6:18 pm on May 1, 2014 (gmt 0)|
Have started to use the experiments - mainly looking at tweaking colours on the ads. (I haven't done a text vs text/image test yet)
I had also been doing split testing for some time on the ads using Analytics experiments and/or custom channels.
It seems to take a long time to get confidence levels above 95% on most ads; often experiments will get to the late 80s/early 90s and stick there for quite a time. (I think this may be a weakness of splitting the distribution unevenly, something I'm really not convinced by, given my experience with Analytics experiments.)
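For anyone curious why experiments stall in the high 80s: a modest CTR lift on tens of thousands of impressions genuinely does sit below 95% confidence. Here is a rough sketch using a generic two-proportion z-test with a normal approximation; the numbers are made up, and this is not necessarily the exact statistic AdSense computes.

```python
from math import sqrt, erf

def confidence_b_beats_a(clicks_a, imps_a, clicks_b, imps_b):
    # Two-proportion z-test via a normal approximation: returns the
    # approximate confidence that variant B's true CTR exceeds A's.
    # Generic statistics, not necessarily the figure AdSense shows.
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    se = sqrt(p_a * (1 - p_a) / imps_a + p_b * (1 - p_b) / imps_b)
    z = (p_b - p_a) / se
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF at z

# An ~11% relative CTR lift (1.8% -> 2.0%) on 10,000 impressions per
# arm lands in the mid-80s -- below the 95% threshold.
conf = confidence_b_beats_a(180, 10_000, 200, 10_000)
```

Running the example, `conf` comes out around 0.85, which matches the experience of experiments hovering in the late 80s for a long time before (maybe) crossing 95%.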
Have done red/blue/green tests on Ad Titles in the past - invariably blue has worked best. Have done some other colour tests on sites too and colour preferences seem to be much more gender linked than other design features.
carminejg3 - Some of my recent experiments have been with blue shades for the Ad Title - I have been getting about a 20% increase from softening the very bright default blue.
I have also been testing URL colours, with mixed results. I find the default green tends to spoil the overall look of a site when combined with blue, but it still performs reasonably well; it shows positive in some channels and negative in others.
| 6:23 pm on May 1, 2014 (gmt 0)|
P.S. Would love some day parting tools in Adsense.
| 6:33 pm on May 1, 2014 (gmt 0)|
Day parting in AdSense? How would that work?
| 6:43 pm on May 1, 2014 (gmt 0)|
At one point I set up custom channels for each hour of the day and day of the week, which I controlled and inserted into the AdSense code from the server side. That allowed me to get an idea of which times of day and days of the week had the best AdSense performance. It was interesting to know, but I wasn't really able to do anything actionable with the intel.
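The server-side part of that setup can be as simple as deriving a channel label from the current weekday and hour. The label format below is made up for illustration; in practice you would map each label to a real custom channel ID in your account and insert it into the ad code.

```python
from datetime import datetime

def channel_for(now):
    # Generate a bucket label like "dow3-hr14" (weekday 0 = Monday,
    # hour 0-23). Hypothetical format -- map each label to a real
    # AdSense custom channel ID and build it into the ad code
    # server-side, one channel per (weekday, hour) bucket.
    return f"dow{now.weekday()}-hr{now.hour}"

label = channel_for(datetime(2014, 5, 1, 14, 30))  # -> "dow3-hr14" (a Thursday)
```

With 7 x 24 = 168 buckets, each custom channel's report then shows how that hour of that weekday performed, which is exactly the kind of breakdown described above.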
| 7:52 pm on May 1, 2014 (gmt 0)|
There are many ways day parting in AdSense could work.
For instance, if your target audience is office workers, you could run differently coloured ads to target Asia-Pacific office hours, European office hours and Americas office hours.
A quick background study of how colours relate to emotions in different cultures is necessary here.
(Alternatively you could include geo-targeting of Ad colours schemes for the same result)
Or if you have a site with a predominantly gender-A audience at certain times of day, you could use a more gender-A-targeted ad setup at those times. (Similarly for age distributions and other factors.)
There may also be behavioural responses to certain colours at certain times of day - I haven't had time to do any background research on this but I definitely think it'd be worth testing.
| 8:19 pm on May 1, 2014 (gmt 0)|
Maybe. I dunno.
|Allowed me to get an idea of what times of day and days of the week had the best AdSense performance. It was interesting to know but I wasn't really able to do anything actionable with the intel. |
If your AdSense is linked to your Analytics account, you can get that information now. But yeah, what would you do with it?
| 9:29 pm on May 1, 2014 (gmt 0)|
I ran some tests of new designs for the header for a site not that long ago and found that the same design worked best across all platforms but one colour worked best on mobile and one on desktop.
It was only when I analysed the platforms with gender data that I understood why there was a difference in performance.
(Hmm - must go and try some platform specific ad colour schemes)
| 9:36 pm on May 1, 2014 (gmt 0)|
|If your AdSense is linked to your Analytics account, you can get that information now. |
Unfortunately if you are using DFP it doesn't come into analytics.