However, when checked with HTTP/0.9 and HTTP/1.0, WebBug returned
400 Bad Request and HTTP/1.0 500 Internal Server Error.
What version of the HTTP protocol is Googlebot currently using? Will it be able to crawl my non-existent pages? How about other major search engines?
I just want to make my dynamic site crawlable by the major search engines, particularly Google and MSN.
I'm still trying to eliminate the HTTP/1.0 500 Internal Server Error, though. Any idea why this error occurs?
If you are using a shared server and you don't supply a Host header, your request will fail because the server doesn't know which site you are after.
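To see why the missing Host header matters, here's a minimal sketch of how a name-based virtual host picks a site from the request (the hostnames and the dispatch logic are illustrative, not any particular server's implementation):

```python
# Hypothetical shared server hosting two sites on one IP address.
SITES = {"example.com": "Example home page", "other.net": "Other home page"}

def dispatch(raw_request: str) -> str:
    """Return the status line a shared server might send for a raw HTTP request."""
    lines = raw_request.split("\r\n")
    headers = {}
    for line in lines[1:]:
        if ": " in line:
            name, value = line.split(": ", 1)
            headers[name.lower()] = value
    host = headers.get("host")
    if host is None:
        # No Host header: the server cannot tell which site is wanted.
        return "HTTP/1.1 400 Bad Request"
    if host not in SITES:
        return "HTTP/1.1 404 Not Found"
    return "HTTP/1.1 200 OK"

# An HTTP/1.0-style request with no Host header gets rejected:
print(dispatch("GET / HTTP/1.0\r\n\r\n"))                        # 400 Bad Request
# With a Host header the server can route the request:
print(dispatch("GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"))   # 200 OK
```

This is also why testing with HTTP/0.9 or a bare HTTP/1.0 request against a shared host produces errors even when the pages exist.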
I've never seen an HTTP/0.9 request, so I'd say HTTP/1.1 is the dominant form for browsers and some crawlers, with HTTP/1.0 as an alternative.
- Tony