Recently, without realizing it, I had placed a faulty robots meta tag on the pages of two new websites.
I put <meta name=robots conteXt="index,follow"> instead of <meta name=robots conteNt="index,follow">
In both cases, Googlebot visited and indexed default.asp a few weeks ago, but hasn't been back to index the rest of the pages since.
I expect this is due to the little error I made. I have since corrected it.
Will Googlebot come back and re-index the default.asp pages, find the error corrected, and index the rest of the pages anyway? Or am I in a whole lot of trouble now?
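For anyone who wants to catch this kind of typo before a crawler does, here is a minimal sketch (not from this thread, and the class/function names are my own invention) using Python's stdlib `html.parser` to flag a robots meta tag that has no `content` attribute, which is exactly what a misspelling like `context` produces:

```python
# Hypothetical checker: flag <meta name="robots"> tags that lack a
# "content" attribute (e.g. because it was misspelled as "context").
from html.parser import HTMLParser


class RobotsMetaChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.problems = []  # attribute dicts of suspicious meta tags

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)  # HTMLParser lowercases tag and attribute names
        if attrs.get("name", "").lower() == "robots" and "content" not in attrs:
            self.problems.append(attrs)


def find_broken_robots_meta(html):
    """Return a list of robots meta tags missing a content attribute."""
    checker = RobotsMetaChecker()
    checker.feed(html)
    return checker.problems
```

Running `find_broken_robots_meta` on the broken tag from the post returns a non-empty list, while the corrected tag passes cleanly.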
The default behaviour, given no robots.txt or valid robots meta tag, is to index pages and follow links, at the crawler's discretion.
Now, you say default.asp has been indexed by Google, which would imply that Google made no assumption about the content of your broken robots meta tag.
Plus, there have been many threads here recently about Googlebot taking an index page and not coming back for weeks/months for anything more.
Just seems to be part of the way it works these days.