If you Disallow a page in robots.txt, it seems that page can still acquire PageRank. Will that page be able to pass PageRank on to the pages it links to? I have looked around the web and have seen arguments for both sides.
(This is what I'd typed up before I saw that aakk9999 had posted. I'm going to go ahead and post as is.) Assuming that the links to B & C are the only two links from A...
No matter what you do, Google will split the PageRank between those two links... in this case sending 50% to each.
Taking it further... assuming plain vanilla links, if you had 3 links from A to 3 pages, B, C, and D, each would get 1/3 of the top-down PageRank from A. If you had "n" links from A, the PageRank would be divided up n ways, and each page would receive 1/n of the PageRank distributed from A.
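To make that arithmetic concrete, here's a small sketch of the 1/n split (the function name and page labels are just illustrative, not anything Google publishes):

```python
# Illustrative sketch: a page's outbound PageRank is split evenly
# across all of its links.

def distribute_pagerank(page_rank, outbound_links):
    """Each linked page receives an equal 1/n share of the PageRank
    that the source page passes along."""
    share = page_rank / len(outbound_links)
    return {target: share for target in outbound_links}

# Page A with links to B, C, and D: each receives 1/3 of A's PageRank.
print(distribute_pagerank(1.0, ["B", "C", "D"]))
```

With two links (just B and C), each share would be 1/2, as in the example above.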
But, looking at the example of just B & C...
...if you disallow C in robots.txt, that PageRank will go no further, no matter what kind of linking you set up from C to the rest of the site. You will have lost the use of that PageRank, because Google will not be spidering C, and therefore will not know of any links from C to follow.
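For reference, the robots.txt rule being discussed would look something like this (assuming, hypothetically, that page C lives at /page-c.html):

```
User-agent: *
Disallow: /page-c.html
```

This blocks crawling only; it does not by itself keep the URL out of the index, which is why the noindex meta tag discussed below exists.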
Similarly, if you used the rel="nofollow" attribute on the link from A to C, page C will also lose the PageRank distributed from A. The PageRank will be divided up as before, i.e., between B and C, but rel="nofollow" effectively creates a PageRank "black hole" on the link to C. The PageRank goes into the black hole to C, but it doesn't come out.
Now, other links to C from other pages on the web, or from other pages on your site, might also transmit PageRank to C. But if those links are also nofollowed on your site, then those too would create PageRank black holes, and the fractional PageRank from each source page would be thrown away. If you do that very many times, you will have lost PageRank that possibly could have been helpful elsewhere.
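Here's a quick sketch of that black-hole effect, under the model described in this post where nofollowed links still count toward the 1/n split but their share is discarded (the function and names are illustrative only):

```python
# Illustrative sketch: nofollowed links still count in the 1/n split,
# but the share assigned to them is thrown away rather than passed on.

def distribute_with_nofollow(page_rank, links):
    """links: list of (target, nofollowed) pairs.
    Returns (PageRank passed to each target, PageRank lost to black holes)."""
    share = page_rank / len(links)  # the split counts every link
    passed = {target: share for target, nofollowed in links if not nofollowed}
    lost = sum(share for _, nofollowed in links if nofollowed)
    return passed, lost

# A links to B (plain) and C (nofollowed): B gets 0.5, and 0.5 is lost.
passed, lost = distribute_with_nofollow(1.0, [("B", False), ("C", True)])
print(passed, lost)
```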
Suppose, though, that you did nothing, and just let page C be crawled and indexed. B and C would still be splitting the PageRank from A... but under this scenario a link or links from C could transmit the PageRank that C accumulates to other pages. You could recirculate the PageRank throughout your site. So, you might choose to link back to A from C, or you might choose to link to other pages within the site. Some PageRank will be lost due to the damping factor inherent in the PageRank algorithm, but most will be transmitted through links from C and then throughout the site, depending on your navigation setup. While page C would be spidered and would be in the index, it's easy enough to set it up so that it won't rank for anything likely to be searched.
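As a toy illustration of that recirculation, here's a rough power-iteration sketch using the commonly cited damping factor of 0.85. This is the textbook PageRank model, not Google's actual implementation; the page names come from the example above.

```python
# Toy PageRank simulation: with C crawlable, its link back to A
# recirculates PageRank instead of creating a dead end.

links = {
    "A": ["B", "C"],   # A splits its PageRank between B and C
    "B": ["A"],
    "C": ["A"],        # C is crawlable, so its link passes PageRank back
}

d = 0.85               # damping factor: some PageRank "evaporates" each hop
pr = {page: 1.0 / len(links) for page in links}

for _ in range(50):    # iterate until the values settle
    new_pr = {}
    for page in links:
        incoming = sum(pr[src] / len(outs)
                       for src, outs in links.items() if page in outs)
        new_pr[page] = (1 - d) / len(links) + d * incoming
    pr = new_pr

print(pr)  # A accumulates the most; B and C split what A passes on
```

If you instead deleted C's outbound link (the robots.txt dead-end scenario), A and B would end up with less PageRank, because C's share would stop circulating.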
If you absolutely don't want page C or any reference to page C to appear in the serps, and there are sometimes reasons for this, you should use the noindex robots meta tag...
<meta name="robots" content="noindex,follow"> As I noted, the "follow" value, which is the default behavior, allows PageRank to recirculate from C to other pages throughout your site. Note that the noindex,follow robots meta tag does not create a PageRank black hole.
The noindex meta tag is my method of choice for keeping user-accessible pages out of the index... but I use noindex only if I want to hide a page from searchers. Otherwise, I let Google crawl and index the page. Given that Google has set things up so that there are PageRank black holes, there's no PageRank loss caused by letting Google crawl the page. One caution: never use the noindex robots meta tag in combination with a robots.txt Disallow on the same page, because if Google cannot crawl the page, it will never see the meta tag.
The challenge, of course, is to avoid placing very many links to unimportant pages on the home page or high up in your nav hierarchy, because homepage links to unimportant pages will divert PageRank from higher-priority areas.
You can use iframes for some of your footer links, as has often been discussed here, or you can take it on some degree of faith that Google has figured things out well enough to recognize that these links are unimportant, and doesn't let them drain PageRank from more important navigation.