As stated, my test didn't last for just one Sunday. I removed the block list Friday night UK time and put it back late Sunday. The experiment lasted the 2 planned days plus most of the Monday, as it took that long for real ads to reappear.
However, as I said in the top post, the purpose of my experiment was to see whether Google had made any improvements to the targeting algorithm in the 7 months I'd been blocking, and whether the quality score algorithm had any effect. I stressed that point throughout the trial, and I repeatedly said that I didn't think you could read much into any of the figures.
The result, as far as I'm concerned, was that having deleted the list, the MFAs came back very quickly and replaced well-paying ads. Google have made no attempt to weed out MFAs through either quality scores or the targeting algo.
Back in July last year I reported extensively on the results of blocking MFAs for the first couple of weeks after I started. CTR dropped, and all the other metrics rose, especially EPC and the bottom-line $$. Increasing earnings by blocking MFAs (especially long term) is a reality, and one that Google don't really want to deal with, or don't know how to, IMHO. That's why my test concentrated on which ads showed.
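To see why CTR can fall while EPC and earnings rise, here's a back-of-an-envelope sketch in Python. Every number in it is invented purely for illustration, not taken from my stats; the point is only that stripping out a pile of cheap MFA clicks drops your click count but lifts the average value of the clicks that remain.

```python
# Hypothetical before/after figures: blocking cheap MFA clicks
# lowers CTR but raises EPC and total earnings.

impressions = 10_000

# Before blocking: real clicks diluted by a pile of cheap MFA clicks.
real_clicks, real_epc = 50, 0.60   # well-paying advertiser clicks
mfa_clicks, mfa_epc = 100, 0.03    # low-value MFA clicks

clicks = real_clicks + mfa_clicks
earnings = real_clicks * real_epc + mfa_clicks * mfa_epc
print(f"Before: CTR {clicks / impressions:.2%}, "
      f"EPC ${earnings / clicks:.2f}, earnings ${earnings:.2f}")

# After blocking: the MFA clicks vanish; assume a fifth of those
# visitors click the real ads instead (the 20% is pure guesswork).
clicks = real_clicks + int(mfa_clicks * 0.2)
earnings = clicks * real_epc
print(f"After:  CTR {clicks / impressions:.2%}, "
      f"EPC ${earnings / clicks:.2f}, earnings ${earnings:.2f}")
```

With those made-up numbers, CTR falls from 1.50% to 0.70% while earnings go from $33 to $42, which is the same shape as what I reported: lower CTR, higher EPC, more money.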
If I had continued to allow MFAs on the site, I would have lost money. I don't think anybody seriously doubts that.
As regards old and obsolete advertisers, for some time I've been saying that Google should provide information on the sites we have blocked: specifically, whether they are a) still serving ads, and b) actually online. I personally go through the list to weed out these sites on a regular basis. I ask for the tools because it's quite a tedious job, but it could be so simple if we had them.
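Until Google give us those tools, half of the job (the "is it actually online" check) can be semi-automated. Here's a rough sketch in Python; blocklist.txt is a hypothetical file with one domain per line, and none of this is an official tool. Bear in mind that a site answering an HTTP request tells you nothing about whether it's still an active advertiser, which is the half only Google can answer.

```python
# Rough sketch: check which domains in a blocklist still respond.
# Assumes blocklist.txt (hypothetical) holds one domain per line.
# "Online" only means the site answered HTTP; it says nothing about
# whether the domain is still an active advertiser.

import urllib.request
import urllib.error

def is_online(domain: str, timeout: float = 10.0) -> bool:
    """Return True if the domain answers an HTTP request at all."""
    req = urllib.request.Request(
        f"http://{domain}/",
        headers={"User-Agent": "Mozilla/5.0 (blocklist checker)"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout):
            return True
    except urllib.error.HTTPError:
        return True   # got an HTTP response, so the site is up
    except Exception:
        return False  # DNS failure, timeout, connection refused, etc.

with open("blocklist.txt") as f:
    domains = [line.strip() for line in f if line.strip()]

for domain in domains:
    status = "online" if is_online(domain) else "DEAD?"
    print(f"{status:7} {domain}")
```

Anything flagged DEAD? is a candidate for removal, which frees up slots in the filter list for the MFAs that are actually serving.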
There were differences between the tests Nitrous and I ran. His test concentrated on the financial aspect of releasing the blocklist. Yes, it was a short test, but the result was that he lost money. One of the main outcomes was indeed that his list was mostly redundant, so from his experiment we learned that lists do need to be maintained. In my case, the result was that Google's targeting hasn't changed: MFAs still need to be blocked.
Therefore both tests made valid points. The reality is that none of us can afford to run the experiment long term only to prove that we lose money, repeat visitors and site credibility, and get smart-priced down to the point of owing Google for showing ads :)
I don't think you can reject the results out of hand. We both made it clear that there were limitations to the tests, and if you combine what we learned from the two of them, it's a pretty powerful argument for managing your blocking effectively.
1. I know from my past experience with blocking that removing MFAs from your site increases profits. I'm not the only one: most people who have done it have reported a rise in earnings. Very few remove the block to see what happens.
2. It's clear that Google cannot, or will not, resolve the problem of MFAs appearing instead of real ads that pay money. The only way to deal with it is to block them yourself.
3. It's clear that many lists are full simply because they are not maintained.
Nobody can really argue against what both of us have repeatedly said about ad quality. If ads are relevant to what the visitor is there for, they enhance the experience (which stands out all the more since, these days, it seems most ads are in fact MFAs), and those visitors will click back to your site and become repeat visitors. I know this from having used a tracker for a while. Relevant ads also enhance the look and quality of your site, as well as being more profitable for the publisher.
Replacing well-paying ads with MFAs (and I did tell Google which ads were replaced, and by what) is a no-brainer: of course you are going to lose money!
I reject the statement that my test was "so flawed that any reasonable person would reject them out of hand". The purpose was to test whether there had been any improvements in targeting, and on that point the result is that there haven't been any. I stressed the purpose of the test throughout the thread. I did mention metrics, but only because people wanted to know, and I also stressed several times that I didn't think anything could be read into them.
The long-term effects of smart pricing and site credibility are relevant to the tests, but they are discussed more extensively in other past threads.