This was discussed just a couple of hours ago: [webmasterworld.com...]
Are there any search engine bots still issuing HTTP/1.0 requests?
The intent is to remain HTTP/1.0 compatible so that they can spider very old servers, while still providing the Host header required by HTTP/1.1 to support multiple domains sharing the same IP address. That isn't possible under strict HTTP/1.0, which resolves a domain to a server by IP address only and never sends the client-requested domain name to the server at all.
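For illustration only (the hostname and path here are made up), the two request styles look roughly like this on the wire:

GET /somepage.html HTTP/1.0
(no Host header, so a shared-IP server cannot tell which site was wanted)

GET /somepage.html HTTP/1.0
Host: www.example.com
(an HTTP/1.0 request line, but with the HTTP/1.1-style Host header added)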
In the case at hand, blocking by the presence of HTTP/1.0 at the end of THE_REQUEST won't be sufficient because legitimate search spiders such as Yahoo! Slurp will be blocked. A combination of user-agent and HTTP/1.0 presence may be enough if this "user" is not switching user-agents dramatically. If he is, then perhaps block with (THE_REQUEST ends with HTTP/1.0) and (HTTP_USER_AGENT is NOT a search engine spider).
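A minimal mod_rewrite sketch of that last combination, assuming an Apache .htaccess where mod_rewrite is available; the spider names listed are only examples, not a complete list:

RewriteEngine On
# Match requests whose request line ends in HTTP/1.0
RewriteCond %{THE_REQUEST} HTTP/1\.0$
# ...but only when the user-agent does not look like a known search spider
RewriteCond %{HTTP_USER_AGENT} !(Googlebot|Slurp|msnbot) [NC]
# Return 403 Forbidden for everything matching both conditions
RewriteRule .* - [F]

Both RewriteCond lines must match (they are ANDed by default) before the rule fires.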
Jim
Could someone tell me how to do this?
One possibility is to accomplish what you want in reverse. Rather than seeking to deny somebody unknown, simply deny everybody and allow access only to the IP ranges of the known visitors (i.e., the "few friends").
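A rough .htaccess sketch of that approach, using Apache 2.2-style access control; the addresses shown are placeholders for the friends' actual IPs or ranges:

# Deny everyone by default
Order Deny,Allow
Deny from all
# Allow only the known visitors' addresses/ranges
Allow from 192.0.2.10
Allow from 198.51.100.0/24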
This basically amounts to a password-protected zone, and you'll need to make participants aware of the requirements OUTSIDE of the open forum.