simonlondon - 1:40 pm on Aug 29, 2013 (gmt 0)
Agency has suggested that if the content of a page is not cached, thanks to a robots.txt disallow, the content essentially doesn't exist anymore, and therefore the links don't either.
This depends on whether the page/content was already indexed/cached when you added the disallow line. Robots.txt is a crawl instruction tool, not an indexation instruction tool. If the page is already indexed, simply adding that line will not remove it from the index, so the content still exists and the links do as well. In fact, a disallowed URL can remain in the index (or even get added to it) purely on the strength of links pointing at it; to actually deindex a page, you need a noindex directive, and the page has to stay crawlable for the bot to see that directive.
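To make the distinction concrete, here's a minimal sketch (the path /private-page.html is just a placeholder). A disallow only stops crawling:

```
# robots.txt — blocks crawling, does NOT deindex an already-indexed URL
User-agent: *
Disallow: /private-page.html
```

Whereas removal from the index is done on the page itself, which must remain crawlable so the bot can see the tag:

```
<!-- in the <head> of the page you want dropped from the index -->
<meta name="robots" content="noindex">
```

Note the two conflict: if robots.txt blocks the page, the crawler never fetches it and never sees the noindex.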