That's too simplistic an approach, for two reasons:
1. Many bad bots also read robots.txt to avoid those traps, so you only catch the stupid ones.
2. Firefox and Google prefetching can also request those pages if the trap links appear anywhere, and suddenly you have a bunch of people in your spider trap who aren't bots.
Trust me, I use the same tricks, but it takes more profiling of visitor behavior to determine whether the visitor is really a bot or just someone using what I consider abusive browser technology.
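The simplest piece of that profiling is just watching request rate per IP. A rough sketch of the idea (the window size, threshold, and function names here are my own illustrative guesses, not anyone's production values):

```python
# Minimal per-IP request-rate profiler. Thresholds are illustrative guesses;
# real profiling would also look at paths hit, headers, cookie handling, etc.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10   # how far back we look
MAX_REQUESTS = 20     # more hits than this in the window looks like a bot, not a reader

_history = defaultdict(deque)  # ip -> recent request timestamps


def looks_like_bot(ip, now=None):
    """Record a hit for this IP and report whether its rate exceeds a human pace."""
    now = time.time() if now is None else now
    hits = _history[ip]
    hits.append(now)
    # Drop timestamps that have fallen out of the sliding window.
    while hits and now - hits[0] > WINDOW_SECONDS:
        hits.popleft()
    return len(hits) > MAX_REQUESTS
```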
One of the simplest things to do is kick the visitor into a captcha challenge once they step into the spider trap, which lets a human using a browser with prefetch get back out while the robots stay stuck.
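Roughly what that flow looks like, as a minimal sketch (the trap URL, the in-memory flag store, and the captcha helper are all hypothetical placeholders, not my actual setup):

```python
# Spider-trap gate: visitors who hit the trap get flagged and must pass a captcha
# to keep browsing; bots that can't solve it stay locked out.

TRAPPED = set()  # IPs flagged by the trap (use a persistent store in a real setup)


def hit_spider_trap(ip):
    """Called from the hidden trap URL that robots.txt tells well-behaved bots to avoid."""
    TRAPPED.add(ip)


def handle_request(ip, path, solve_captcha):
    """Gate every request: flagged IPs see a captcha until they prove they're human.

    `solve_captcha` is a stand-in for whatever captcha round-trip you actually use.
    """
    if path == "/trap/do-not-follow":   # the honeypot link itself
        hit_spider_trap(ip)

    if ip in TRAPPED:
        if solve_captcha(ip):           # a human who got prefetched into the trap exits here
            TRAPPED.discard(ip)
        else:
            return "403 Forbidden"      # robots get stuck at this point
    return "200 OK"
```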
What you could have encountered, and what happened to me, was high-speed scrapers/bots overloading your server at the same time Google/Y!/MSN was crawling, so the SEs got page timeouts while waiting behind the scrapers. That situation will lower your SERPs, since the SEs seem to assume the page content may not be reliably available, and they only raise them again once those pages can be crawled properly without interference. So keeping bots out can raise your positions and income, but for different reasons than you speculate.