Msg#: 4282564 posted 11:06 am on Mar 17, 2011 (gmt 0)
Using robots.txt will eventually get those pages out of Google's index. Since you mention they are low quality, I doubt Google is crawling them often, so it may take weeks or months for all of the low quality pages to be de-indexed. That is probably why (as tedster points out) no one has yet confirmed this made their rankings bounce back for the rest of the site.
Msg#: 4282564 posted 12:15 am on Mar 21, 2011 (gmt 0)
I feel I must chime in here. Read John Mu's comments carefully. He is NOT suggesting disallowing with robots.txt. He's saying to add a meta robots tag with NOINDEX in cases where you are working on improving content. Those are very different! The former blocks crawling of the pages (or of any wildcard-matched pages). The latter tells Google to remove the pages from the index completely, and must be applied on a page-by-page basis.
John actually suggests NOT disallowing crawling of those pages, because if you do, Googlebot is blind to them (including the NOINDEX meta tag).
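To make the distinction concrete, here is a sketch of what this setup looks like (the paths are hypothetical examples):

```
<!-- In the <head> of each low-quality page being reworked:
     tells Google to drop the page from the index,
     but only works if Googlebot can still crawl it -->
<meta name="robots" content="noindex">

# robots.txt -- the pages must NOT be disallowed here,
# or Googlebot never sees the noindex tag:
User-agent: *
# Disallow: /low-quality/   <-- do NOT add a rule like this
```

Once the content is improved, the meta tag is removed and the pages can be re-indexed on the next crawl.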
Msg#: 4282564 posted 12:35 am on Mar 21, 2011 (gmt 0)
Thank you, Fred. I had not read those comments closely enough - you are correct. In fact, he specifically says "make sure that they're not disallowed by the robots.txt file."
Re-thinking the opening question, I now assume that robots.txt will NOT be OK. It sounds like Google might be scoring a site based on the past record of its URLs. It's still a bit ambiguous because that answer has a certain specific context, but my assumption now is that we need to remove the URLs or enhance their content. If enhancing the content takes time, then use noindex during the process.