incrediBILL - 4:12 pm on Oct 13, 2010 (gmt 0)
Flood protection would only slow down the most amateurish scrapers. Some are smarter than that and may even randomise their timing to look like a human user. What often gives them away is how sharply they differ from human browsing patterns.
Flood protection still works. Even scrapers that avoid floods trip up in other ways, like stepping into a honeypot by their third link, something neither browsers nor the SEs do. There are other tells as well, such as when a scraper reads far more pages per visit than the average human.
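To make the honeypot and page-count tells concrete, here's a minimal sketch in Python. The trap path, the human page average, and the multiplier are illustrative assumptions, not anything specific from this thread; the honeypot URL is assumed to be linked invisibly in pages and disallowed in robots.txt, so neither humans nor well-behaved search engines ever fetch it.

```python
from collections import defaultdict

# Hypothetical honeypot path: linked invisibly and disallowed in
# robots.txt, so no human and no legitimate SE ever requests it.
HONEYPOT_PATH = "/trap/do-not-follow.html"

# Assumed pages an average human reads per visit; 20x that is a tell.
HUMAN_AVG_PAGES = 8
SUSPICIOUS_MULTIPLIER = 20

page_counts = defaultdict(int)   # ip -> pages fetched this session
blocked = set()                  # ips we've decided to block

def check_request(ip: str, path: str) -> bool:
    """Return True if the request should be blocked."""
    if ip in blocked:
        return True
    # Anything touching the honeypot is neither a browser nor an SE.
    if path == HONEYPOT_PATH:
        blocked.add(ip)
        return True
    # Reading wildly more pages than a human average is another tell.
    page_counts[ip] += 1
    if page_counts[ip] > HUMAN_AVG_PAGES * SUSPICIOUS_MULTIPLIER:
        blocked.add(ip)
        return True
    return False
```

In practice the counters would be per-session and expire over time, but the logic is the same: the honeypot catches link-followers outright, and the page-count check catches the rest.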
It's hard to see any way to defeat scrapers altogether by blocking. Even if you come up with the perfect piece of software, scrapers can currently turn to botnets, and defeating those will be really tricky. However, when the big boys are caught at it, naming and shaming might help.
Blocking all data centers is a big start.
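A rough sketch of what data-center blocking can look like, assuming you have a list of hosting-provider IP allocations; the two ranges below are documentation placeholders standing in for a real, maintained list. Real visitors come from consumer ISPs, so a hit from a server-farm range is almost certainly a bot.

```python
import ipaddress

# Placeholder ranges; in practice, load a maintained list of
# data-center / hosting-provider allocations.
DATACENTER_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_datacenter_ip(ip: str) -> bool:
    """True if the address falls inside any known data-center range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in DATACENTER_RANGES)

# Example: this address sits inside the first placeholder range.
assert is_datacenter_ip("203.0.113.55")
```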
Some botnets, like "80legs", identify themselves, so no problem there.
Many other botnets use fake UAs, fake headers, or rotating UAs on the same IP (not common), or do other stupid things that make them pretty easy to spot most of the time.
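Here's a sketch of catching two of those tells: the UA-per-IP threshold and the header list are assumptions for illustration. A browser keeps one UA string for a whole session, so several different UAs from one address is the rotating-UA giveaway; and real browsers typically send request headers that scrapers faking a browser UA often forget.

```python
from collections import defaultdict

# Assumed threshold: more distinct UAs than this from one IP is a tell.
MAX_UAS_PER_IP = 3

uas_seen = defaultdict(set)  # ip -> set of UA strings observed

def looks_like_rotating_ua(ip: str, user_agent: str) -> bool:
    """Flag an IP that presents too many different User-Agent strings."""
    uas_seen[ip].add(user_agent)
    return len(uas_seen[ip]) > MAX_UAS_PER_IP

def missing_browser_headers(headers: dict) -> bool:
    """Browsers typically send these; scrapers with faked UAs often don't."""
    required = ("Accept", "Accept-Language", "Accept-Encoding")
    return any(h not in headers for h in required)
```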