In another thread here, there is a reference to Matt Cutts saying not to use robots.txt to block bot access to duplicate content [ youtube.com...]
This has me concerned.
On one of my sites, a few years ago, I had a badly written URL redirect script. It turned out that any URL could be passed to it as a parameter in the address bar, and the script would happily redirect there (an open redirect).
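For anyone unfamiliar with that kind of hole, the flaw boiled down to something like this (a simplified sketch in Python with made-up names, not my actual script):

```python
# Simplified sketch of the flaw (made-up names, not the actual script):
# a redirect endpoint that trusts whatever URL arrives in the query string.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        target = query.get("url", [""])[0]
        # The bug: no check that target belongs to this site, so
        # /redirect?url=http://anything.example works -- an open redirect.
        self.send_response(302)
        self.send_header("Location", target)
        self.end_headers()

HTTPServer(("", 8080), RedirectHandler).serve_forever()
```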
When my visitor numbers shot up from a couple of thousand a day to 50 thousand, I found there were many links to that script of mine from forums, with (well, you can guess the kind of stuff that was added as a parameter).
I immediately removed the script and got G to remove that page from their index. I also put a block on that old (and now non-existent) URL in robots.txt.
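The block itself is just a plain Disallow on the old script's path (path shown here is made up, but since robots.txt matches by prefix, one line covers every parameter variation):

```
User-agent: *
Disallow: /redirect.php
```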
However, there are still many links out there to that URL (Google seems to keep finding more), and I get a number of people hitting my site via that URL (and getting a 404) every day.
What concerns me is that in WMT, Google shows 47,000 URLs blocked by robots, on a site with under a thousand pages indexed. Where do I go from here? Should I remove the block and let Googlebot get a 404 for all those URLs with the c#!p in them?
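If I do lift the block, I'd at least want to confirm first that the dead path really returns a 404 no matter what junk is tacked on. A quick check would be something like this (domain and path made up):

```python
# Quick sanity check (made-up domain/path): confirm the removed script's
# URL returns a 404 regardless of what junk is in the parameter.
import urllib.request
import urllib.error

url = "http://example.com/redirect.php?url=http://spam.example/"
try:
    with urllib.request.urlopen(url) as resp:
        print(resp.status)   # anything other than an error status lands here
except urllib.error.HTTPError as e:
    print(e.code)            # expect 404 for the removed script
```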