| 5:14 am on Aug 12, 2010 (gmt 0)|
Googlebot may be confused by the two sections where you mention User-agent: Googlebot. From your robots.txt, it looks like you want to block all access from all robots. In that case the following should be sufficient:
User-agent: *
Disallow: /
| 10:07 pm on Aug 12, 2010 (gmt 0)|
Except it won't block all bots. Many do not actually look at robots.txt: they find a domain, they scrape it. :(
There's lots of info on this in the SE forum hereabouts, but start by blocking every server farm you can find, then add broadband suppliers from the most likely scraping countries, such as the Far East, Eastern Europe, and the Americas (South AND North). :(
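For illustration, a minimal .htaccess sketch of that kind of range blocking, using Apache 2.2-style directives (the CIDR ranges below are documentation placeholders, not real server-farm ranges; substitute the ones you have actually identified):

# Deny requests from known server farms / scraper ranges
# (192.0.2.0/24 and 198.51.100.0/24 are placeholder ranges)
Order Allow,Deny
Allow from all
Deny from 192.0.2.0/24
Deny from 198.51.100.0/24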
| 11:38 pm on Aug 12, 2010 (gmt 0)|
Have you tried the robots.txt test function on the Crawler access page of Google Webmaster Tools?
| 10:54 am on Aug 13, 2010 (gmt 0)|
This robots.txt is malformed... you need a BLANK LINE between each User-agent record, and a BLANK LINE at the end of the file, too.
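For illustration, a layout with the blank lines in place (the records and paths here are just examples, not the original poster's file):

User-agent: Googlebot
Disallow: /private/

User-agent: *
Disallow: /
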
| 11:00 am on Aug 13, 2010 (gmt 0)|
Have the index be a form with a password. When the password is entered correctly, set a cookie. Have every page check for the cookie; no cookie, redirect back to the form page. Problem solved.
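For illustration, a minimal sketch of that cookie gate in Python with Flask (SECRET_PASSWORD, the cookie name, and the routes are all placeholders; a real site would also want HTTPS and a token less guessable than a fixed string):

from functools import wraps
from flask import Flask, request, redirect, make_response, render_template_string

app = Flask(__name__)
SECRET_PASSWORD = "change-me"   # placeholder password
COOKIE_NAME = "site_access"     # placeholder cookie name

FORM = """
<form method="post">
  <input type="password" name="password">
  <input type="submit" value="Enter">
</form>
"""

def cookie_required(view):
    @wraps(view)
    def wrapper(*args, **kwargs):
        # No cookie: redirect back to the form page.
        if request.cookies.get(COOKIE_NAME) != "granted":
            return redirect("/")
        return view(*args, **kwargs)
    return wrapper

@app.route("/", methods=["GET", "POST"])
def index():
    if request.method == "POST" and request.form.get("password") == SECRET_PASSWORD:
        # Correct password: assign the cookie, then send the visitor on.
        resp = make_response(redirect("/private"))
        resp.set_cookie(COOKIE_NAME, "granted")
        return resp
    return render_template_string(FORM)

@app.route("/private")
@cookie_required
def private():
    return "Members-only content."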
| 5:12 pm on Aug 19, 2010 (gmt 0)|
Thanks for your time and help! Much appreciated!
| 12:59 pm on Aug 26, 2010 (gmt 0)|
If you merely want it not crawled, then robots.txt will stop it crawling, but references to URLs on your site could still appear in Google SERPs as URL-only entries.
If you don't mind Google accessing the pages, but you don't want anything at all to appear in the SERPs, then you need a meta robots noindex on every page.
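That is, in the <head> of each page:

<meta name="robots" content="noindex">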
If it really is private, then using .htpasswd is the way to go. That way there will be no access, no crawling, no indexing at all.
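For illustration, a minimal .htaccess sketch of that password protection (the AuthUserFile path is a placeholder; keep the password file outside the web root):

AuthType Basic
AuthName "Private area"
AuthUserFile /home/example/.htpasswd
Require valid-user

The password file itself is created with something like: htpasswd -c /home/example/.htpasswd username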