A/B testing is ineffective under a certain amount of traffic
This isn't really the case, although it all depends on the degree of reliability you're comfortable with. Let's say you had 10 "identical" visitors and split them evenly between A and B. All 5 converted on A, but none of the 5 on B. Intuitively, it seems that page A is the better performing page.
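That intuition can be put to a number. A small sketch (Python, standard library only): if the two pages truly performed identically, then the 5 conversions were equally likely to have come from any 5 of the 10 visitors, and the chance they all landed on A's side works out to a one-sided Fisher's exact test.

```python
from math import comb

# 10 visitors split evenly: 5 saw page A, 5 saw page B.
# A converted 5/5, B converted 0/5 -- 5 conversions in total.
# If the pages were identical, every way of picking which 5 of the
# 10 visitors convert is equally likely.
total_splits = comb(10, 5)          # 252 ways to pick the 5 converters
all_on_a = comb(5, 5) * comb(5, 0)  # the single split where all 5 are A's visitors
p_value = all_on_a / total_splits   # one-sided Fisher's exact test
print(p_value)                      # 1/252, roughly 0.004
```

So even 10 visitors can produce a nominally "significant" split; whether that's good enough is exactly the reliability question above.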
Most likely, if you ran the test across hundreds of visitors, B's conversion rate would creep up slightly and A's would drop slightly. But all you would have done is marginally increase your confidence that the result would repeat. A sample that is larger than necessary (i.e. one that only slightly improves the confidence level) is really just wasting time and money.
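The diminishing returns are easy to see with the normal approximation to the binomial (rough at small samples, but it shows the shape): the 95% margin of error shrinks with the square root of the sample size, so quadrupling your traffic only halves the error bar. The 10% conversion rate below is just an illustrative assumption.

```python
from math import sqrt

p = 0.10  # assumed true conversion rate (illustrative only)
for n in (100, 400, 1600, 6400):
    # 95% margin of error under the normal approximation
    moe = 1.96 * sqrt(p * (1 - p) / n)
    print(f"{n:>5} visitors: +/- {moe:.1%}")
```

Each 4x jump in visitors buys only a 2x tighter estimate, which is why piling on traffic past your required confidence level is mostly wasted spend.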
The rule of thumb I use for Adwords is between 150 and 300 clicks if I'm expecting around a 1% conversion rate. The higher the expected conversion rate, the fewer visitors required to reach my own desired confidence level. If I want to convert at 50% and I've had 20 visitors without a single conversion, it's not likely to happen ;)
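That last quip checks out arithmetically: if the true conversion rate really were 50%, the chance of 20 visitors in a row producing no conversions is 0.5 to the 20th power, i.e. worse than one in a million.

```python
# P(0 conversions in 20 visitors, assuming a true 50% conversion rate)
p_zero = (1 - 0.5) ** 20
print(p_zero)  # 1/1048576, about 9.5e-7
```

At that point you can abandon the 50% hypothesis with a clear conscience.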
Of course, the other significant factor is the importance of the test. A few wasted clicks on Adwords is not a particularly high-risk outcome. If you're running clinical trials, though, a 1% death rate is unlikely to be considered acceptable. You would want an extremely high confidence level ;)
There have been quite a few other posts on the subject, e.g.
Can't decide from the result [webmasterworld.com]
Statistical Significance [webmasterworld.com]