Statistics are easily manipulated, and Lapizuli's positions are hardly defensible. Even though his arguments sound persuasive, they're built on misconceptions about the efficacy of algorithmic models.
The core problem with statistical analysis is that people have been inculcated, for decades, with misleading data drawn from imprecise models, leaving them with many false and potentially damaging beliefs.
A 50% drop in revenue is a relevant event whether the website is small or large, just as half the population catching the flu in Rhode Island would be as statistically significant as the same thing happening in California. It's all relative, as Lapizuli noted.
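To put rough numbers on that relativity (the populations below are ballpark assumptions, roughly 1.1M for Rhode Island and 39M for California), here's a quick sketch:

```python
# Ballpark population figures -- assumptions for illustration only.
ri_pop = 1_100_000   # Rhode Island (approx.)
ca_pop = 39_000_000  # California (approx.)

rate = 0.5  # half of each population catches the flu

ri_cases = ri_pop * rate   # ~550,000 cases
ca_cases = ca_pop * rate   # ~19,500,000 cases

# The absolute counts differ by ~35x, but the attack rate -- the
# quantity that actually carries statistical meaning -- is identical.
print(ri_cases / ri_pop == ca_cases / ca_pop)  # True
```

The absolute numbers differ enormously; the rate, which is what matters, doesn't.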
Just because something is bigger doesn't mean it's a more accurate signal. The $1,000-a-day earner could be getting two-thirds of its revenue from a handful of pages that were damaged by an algo change, whereas the smaller site could just as easily be a smooth performer drawing hits from a wide variety of sources. In that instance, the revenue drop would be more of an aberration for the large site than for the small one.
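A minimal sketch of that concentration risk, with every figure assumed purely for illustration: a large site earning about $1,000 a day with two-thirds of it coming from three pages, versus a small site spreading $100 a day evenly across ten pages.

```python
# All revenue figures are illustrative assumptions, not real site data.
large_site = [222.22] * 3 + [37.04] * 9   # ~2/3 of ~$1,000/day from just 3 pages
small_site = [10.00] * 10                 # ~$100/day spread evenly across 10 pages

def fraction_lost(pages, n_lost):
    """Fraction of daily revenue lost if the top n_lost pages stop earning."""
    lost = sum(sorted(pages, reverse=True)[:n_lost])
    return lost / sum(pages)

# An algo change that knocks out a handful (three) of the top pages:
print(f"large site loses {fraction_lost(large_site, 3):.0%}")  # ~67%
print(f"small site loses {fraction_lost(small_site, 3):.0%}")  # 30%
```

Same nominal event, wildly different damage: the concentrated site's drop says more about its own page mix than about the algorithm.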
The argument would make more sense if it compared small websites covering topic X or Y against large websites on the same topic. Then you'd be comparing apples to apples. All other comparisons are generally "noise."
All this arguing about random fluctuations is really a waste of time: G isn't telling us anything, and most of us know these fluctuations happen all the time. A more telling occurrence would be a "Black Swan" event, as described by Nassim Nicholas Taleb.
That's really what will determine our fates; that, and socionomics. The rest is just fluff.