g1smd - 4:31 pm on Mar 24, 2013 (gmt 0)
I don't know how this would play out today, but several years ago a 20-page site using a popular wiki package had Google spidering and indexing thousands of old page revisions and thousands of "report" pages, due to the lack of a robots.txt file or any other process to keep them out.
Some swift redirecting ensued, some of it served only for search engine requests (not something I often do); this was later replaced by general robots.txt exclusions and some other htaccess magic. After a wait of six months or more, things were finally somewhat back to normal.
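The general shape of it looked something like the sketch below, assuming a MediaWiki-style URL structure (index.php?title=...&oldid=...) and assuming the .htaccess sits in the wiki's own directory; the parameter names and the bot list are illustrative, so adjust them to whatever your wiki package actually emits. The robots.txt wildcards are honoured by the major crawlers but not necessarily by everything:

# robots.txt - keep revision/history/diff URLs out of the index
User-agent: *
Disallow: /*oldid=
Disallow: /*action=
Disallow: /*diff=

# .htaccess - 301 old-revision URLs back to the current page,
# served only to search engine requests (hypothetical bot pattern)
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (Googlebot|bingbot|Slurp) [NC]
# query string carries a revision/history/diff parameter
RewriteCond %{QUERY_STRING} (?:^|&)(?:oldid|action|diff)= [NC]
# capture the page title so the redirect lands on the live version
RewriteCond %{QUERY_STRING} title=([^&]+)
RewriteRule ^index\.php$ /index.php?title=%1 [R=301,L]

Note that a URL blocked in robots.txt can't pass its 301 on to the crawler, which is presumably why the redirects had to go in first and the robots.txt exclusions only once the index had been cleaned up.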
Traffic was rarely an issue, as several large sites regularly brought in most of the referrals; Google was a minor source of traffic for the most part.