I need to create a couple of sites which must not be accessed by any search engine robot except for one. Does anybody have any comments on which would be the best method to use? I personally would prefer the robots.txt method because I already know how to do that. Can I be 100% sure that it would work this way?
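For what it's worth, a robots.txt for "block everyone except one bot" could look something like this (Googlebot is just an example here, substitute whichever crawler you want to allow):

```
# Allow only one named crawler; an empty Disallow means "nothing is off limits"
User-agent: Googlebot
Disallow:

# Everyone else is asked to stay out of the whole site
User-agent: *
Disallow: /
```

Well-behaved crawlers pick the most specific User-agent group that matches them, so the allowed bot uses its own group and ignores the catch-all one.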
The robots.txt file is used to tell well-behaved spiders where they should and should not index, so you probably need one of those. The key word here is well behaved: a bad bot will ignore your robots.txt and go for it anyway. If you have problems with a spider that is misbehaving (looking in secret areas, overloading the server with downloads, etc.), you would use your .htaccess to block it.
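A rough sketch of the .htaccess approach, assuming Apache with mod_rewrite enabled ("BadBot" is a placeholder for the offending user-agent string):

```
# Return 403 Forbidden to any request whose User-Agent matches BadBot
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} BadBot [NC]
RewriteRule .* - [F,L]
```

Keep in mind a really bad bot can fake its User-Agent, so for persistent offenders you may end up blocking by IP instead.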
Needinfo, the key word in your post is "must": if keeping the content out of public search engines is important, then you should do this at the server level. I'd also add a ROBOTS NOINDEX meta tag to the pages in question, but this, too, is only a suggestion to the bot.
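For reference, that meta tag goes in the head of each page you want kept out of the index:

```html
<meta name="robots" content="noindex, nofollow">
```

Like robots.txt, this only works for bots that choose to honor it; real protection (password auth, IP restrictions) has to happen at the server.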