enigma1 - 8:07 am on Oct 18, 2010 (gmt 0)
I'm not sure what any of this privacy stuff has to do with scrapers. Nothing. Off Topic.
It relates to the methods webmasters deploy to tell humans from bots and to decide whether scraping is going on. When browser privacy filters are in use, you can't reliably tell what's happening. Just because you block an IP when the UA is empty doesn't mean there's a scraper behind it. The same goes when someone triggers a honeypot because he uses a filter like the ones mentioned above to strip out resources: the honeypot links become visible and get clicked.
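A minimal sketch of the problem, assuming a naive filter that treats an empty UA or a honeypot hit as proof of a scraper (the function name and trap path here are hypothetical, just for illustration):

```python
# Hypothetical naive bot filter. A human whose privacy filter blanks
# the User-Agent header, or whose filter strips the CSS that hides a
# honeypot link, trips the same rules a scraper would.

HONEYPOT_PATHS = {"/trap/hidden-link"}  # hypothetical trap URL

def looks_like_scraper(user_agent: str, requested_path: str) -> bool:
    if not user_agent:
        # Privacy filters often blank or strip the UA header,
        # so this rule also catches filtered humans.
        return True
    if requested_path in HONEYPOT_PATHS:
        # A filter that strips stylesheets can expose the hidden
        # trap link, so a human may follow it too.
        return True
    return False

# A privacy-filtered human with a blanked UA gets flagged:
print(looks_like_scraper("", "/index.html"))                   # True
# So does anyone whose filter exposed the trap link:
print(looks_like_scraper("Mozilla/5.0", "/trap/hidden-link"))  # True
# Only an ordinary request passes:
print(looks_like_scraper("Mozilla/5.0", "/index.html"))        # False
```

The point is that both signals are ambiguous on their own: each one fires for filtered humans as well as for bots.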
So my opinion is that without manual examination, scraping-identification methods may fail. And they can only hurt site owners, not scrapers, because scrapers likely use compromised systems anyway.
So privacy tools can complicate scraping-identification methods.