My programmer uploaded a robots.txt file that disallowed my entire website about 4 days ago. I just noticed it. My site wasn't getting crawled and I thought this was quite strange. Of course, I deleted this file... but I hope I didn't find out too late.
Is this going to hurt my site in any way? Obviously my site is not in the SERPs, but I am hoping they come back now that I've deleted this erroneous file.
Suffice it to say, I'm really pissed at my programmer right now.
The major search engines will index the site eventually now that the incorrect robots.txt file is removed. I'd suggest you serve an actual file, rather than just deleting the previous one. You can simply use this syntax:
User-agent: *
Disallow:
That disallows nothing, and it explicitly allows the crawlers in. It might speed things up a few days.
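If you want to sanity-check the difference between the two files yourself, Python's standard-library robots.txt parser can evaluate the rules locally. This is just a quick illustration (the `example.com` URL is a placeholder), not the logic Google actually runs:

```python
from urllib.robotparser import RobotFileParser

# The replacement file suggested above: an empty Disallow blocks nothing.
allow_all = ["User-agent: *", "Disallow:"]

# The kind of file the programmer uploaded: "Disallow: /" blocks the whole site.
block_all = ["User-agent: *", "Disallow: /"]

parser = RobotFileParser()
parser.parse(allow_all)
print(parser.can_fetch("Googlebot", "http://example.com/"))  # True

parser = RobotFileParser()
parser.parse(block_all)
print(parser.can_fetch("Googlebot", "http://example.com/"))  # False
```

The only difference is the trailing slash, which is why a one-character mistake can pull an entire site out of the index.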
Tedster, I took your advice and used that file. My site is showing in the Google results now, but when I log into Google Webmaster Tools, I still see quite a few of my URLs restricted, including my home page (it says "URL Restricted by Robots.txt", detected November 20, 2010). I'm assuming that since they detected this around the time of the problem, Google just stopped coming back to crawl my site? What do I do from here to get Google to realize I am no longer restricting my home page from them? Thanks,