"You may want to try to use Google's url removal facility to remove those pages that they have already indexed - robots.txt standard does not require search engines to remove already cralwed pages from their index if such pages became disallowed at a later date."
So is it best to let Googlebot crawl the pages once more after setting a robots meta tag of "noarchive" in the header, so that Google's cache gets flushed, and only then set up the Disallow?
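To make sure I'm describing the sequence clearly (the path below is just an example), I mean something like this: first serve the pages with

    <meta name="robots" content="noarchive">

in the <head>, let Googlebot recrawl them, and only afterwards add the block to robots.txt:

    User-agent: *
    Disallow: /old-pages/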
I realise I could delete them via the removal form, but there are quite a few pages and I'd like to automate the process.
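The one part I could script myself is confirming that every URL is actually serving the noarchive tag before the Disallow goes in. A rough sketch of what I have in mind (the URLs are placeholders, and a real HTML parser would be safer than this string check):

    # Check that each page serves a robots meta tag containing "noarchive"
    # before the Disallow rule is added to robots.txt.
    import urllib.request

    urls = [
        "https://www.example.com/old-page-1.html",
        "https://www.example.com/old-page-2.html",
    ]

    for url in urls:
        with urllib.request.urlopen(url) as resp:
            html = resp.read().decode("utf-8", errors="replace").lower()
        # Crude substring check; unusual markup could fool it.
        if 'name="robots"' in html and "noarchive" in html:
            print("OK      " + url)
        else:
            print("MISSING " + url)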