Shaddows - 11:33 am on Sep 3, 2012 (gmt 0)
1. If I block a page as 'do not crawl', how do the spiders still index it? If they don't crawl a page, how can they index it? Crawling is the very first step to indexing, right?
If Google DISCOVERS it, it INDEXES it. The page starts accumulating PageRank and all the other externally defined factors that exist in Google's world. The referenced thread has discovery examples.
2. Do the SE spiders actually care about what is in robots.txt?
Generally, but not strictly.
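To make the distinction concrete, here is a minimal robots.txt sketch (the path is hypothetical). Note that `Disallow` only asks spiders not to *crawl* those URLs; if other sites link to a blocked page, Google can still discover and index the URL itself:

```
# Applies to all crawlers
User-agent: *
# Request that /private/ not be crawled (does NOT prevent indexing)
Disallow: /private/
```

To keep a page out of the index entirely, the page must be crawlable so the spider can see a `noindex` robots meta tag; blocking it in robots.txt actually prevents that tag from being read.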
There is no problem with my robots.txt, though!
Can you post an example (sanitised) version of it?