Perhaps this is what's happening:
At the exact moment Google requests your robots.txt, your server is unresponsive for a short period. Google then marks your site as having no robots.txt, proceeds as if there isn't one, and starts indexing everything it can find.
It probably has a failsafe: the pages have to be crawled a few times over a given period, with robots.txt checked each time, to make sure it has actually seen the robots.txt.
The problem is, if tripping your honeytrap causes your website to serve a 403 or 404 error to that crawler forever more, then when Google comes back to re-read your robots.txt it can't read it, and it never learns that it has done the wrong thing.
If this is the case, then honeytraps that permanently ban a crawler the first time it touches a folder disallowed by robots.txt will never work. You should remove this rule and try something different to stop rogue crawlers.
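For example, just as an illustration of the idea (this is a minimal sketch, not anyone's actual setup, and names like banned_ips and TRAP_PREFIX are made up), you could keep the honeytrap but always serve robots.txt even to a banned client, so Google can still re-read it:

```python
# Minimal sketch only -- hypothetical names, not a real implementation.
# Idea: even after a client trips the honeytrap and gets banned, keep
# serving /robots.txt so a legitimate crawler can still re-read it and
# see which folders it was never supposed to touch.

banned_ips = set()               # clients that have tripped the honeytrap
TRAP_PREFIX = "/secret-trap/"    # folder listed as Disallow: in robots.txt

def handle_request(ip: str, path: str) -> tuple[int, str]:
    """Return (status_code, body) for an incoming request."""
    # Always serve robots.txt, even to banned clients, so well-behaved
    # crawlers can keep checking what is off limits.
    if path == "/robots.txt":
        return 200, "User-agent: *\nDisallow: " + TRAP_PREFIX + "\n"

    # Trip the trap: remember the client, then refuse this request.
    if path.startswith(TRAP_PREFIX):
        banned_ips.add(ip)
        return 403, "Forbidden"

    # Previously banned clients are refused everywhere except robots.txt.
    if ip in banned_ips:
        return 403, "Forbidden"

    return 200, "normal page content"

if __name__ == "__main__":
    print(handle_request("203.0.113.5", "/secret-trap/page.html"))  # trips trap -> 403
    print(handle_request("203.0.113.5", "/index.html"))             # banned -> 403
    print(handle_request("203.0.113.5", "/robots.txt"))             # still served -> 200
```

The point is only that the permanent, blanket 403/404 is what hides robots.txt from Google; carving robots.txt out of the ban (or only banning after repeated hits) avoids that failure mode.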