TheMadScientist - 1:28 am on Nov 7, 2012 (gmt 0)
Having a domain accessible both with and without the www won't confuse a bot any more than texas.example.com and california.example.com would. Having the same content available on both can create duplicate content issues, but search algorithms handle that much better now than they used to; when both hostnames serve the same content, they basically just pick one to show in the results.
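If you want to see which situation you're in, here's a minimal Python sketch (example.com is a stand-in for your own hostnames) that shows whether each hostname redirects to the other or answers with content itself:

import urllib.request

# placeholder hostnames; swap in your own, with and without the www
for url in ("http://example.com/", "http://www.example.com/"):
    resp = urllib.request.urlopen(url)  # follows any redirects automatically
    final = resp.geturl()  # the final URL after redirects
    if final.rstrip("/") == url.rstrip("/"):
        print(url, "serves content directly")
    else:
        print(url, "redirects to", final)

If both hostnames print 'serves content directly' with the same content, you're in the pick-one-for-you situation described above; a site-wide 301 from one hostname to the other takes that choice back.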
If your page is really blocked via robots.txt you should see a URL-only listing in the results, so if you see more than that (like the title) I would suggest starting by looking at the robots header/meta tag.
You don't happen to use the 'noarchive' robots header or meta tag, do you? (That's the first one I would look at personally, because I would not be surprised at all if it's the issue.)
robots.txt block = URL only in the results.
robots header/meta noindex = no page in the results at all, so that's not the issue.
robots header/meta noarchive = my first guess as to the problem. (I know it suppresses the cached 'snapshot' of a page, and I'm pretty sure it's tied to the description in some way too; a quick way to check both the header and the meta tag is sketched below.)
Those are really the only 3 possibilities outside of a huge glitch at Google, because fetching the correct robots.txt file from each accessible domain/subdomain is critical when you're running a bot, and the chances of them getting that wrong after all the time they've been running one are very slim.
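To rule the header and the meta tag in or out quickly, here's a rough Python sketch; the URL is a placeholder (swap in the affected page), and it just prints whatever X-Robots-Tag response header and robots meta tag it finds:

import urllib.request
from html.parser import HTMLParser

url = "http://www.example.com/some-page"  # placeholder; use the affected page

resp = urllib.request.urlopen(url)
# X-Robots-Tag is the HTTP header version of the robots meta tag
print("X-Robots-Tag header:", resp.headers.get("X-Robots-Tag"))

class RobotsMetaFinder(HTMLParser):
    # report any <meta name="robots" content="..."> tag in the page
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            print("robots meta tag:", attrs.get("content"))

RobotsMetaFinder().feed(resp.read().decode("utf-8", errors="replace"))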
The only other thing I can think of as remotely possible is an erroneous redirect, but again, that should cause your page to show as URL only in the results if you're somehow redirecting your robots.txt to some other domain/subdomain. That's easy enough to check: type in yourdomain.com/robots.txt and make sure you stay at your domain.
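If you'd rather script that same check, here's a minimal Python sketch (www.example.com is a placeholder, use your own domain); if the two URLs printed don't match, something is redirecting your robots.txt:

import urllib.request

url = "http://www.example.com/robots.txt"  # placeholder; swap in your own domain
resp = urllib.request.urlopen(url)  # follows any redirects automatically
print("requested:", url)
print("ended at: ", resp.geturl())  # final URL after redirects; should match the request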