In the case of a site I'm working on at the moment, if the engine doesn't accept cookies, it'll switch to passing a session id via GET, so how will the engines deal with that? I know that engines used to ignore URLs with QUERY_STRINGs, because they'd get sucked into loops, with the end result of bringing both servers down, but what about now?
Do they ignore pages with QUERY_STRINGs altogether? Do they follow the links anyway, but use some kind of intelligence to avoid loops? Or do they strip the QUERY_STRING and index the pages anyway? If it's the latter, they'll end up creating a load of useless session files on my machine, and I want to avoid that.
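One common way to avoid those useless session files is to simply not fall back to URL-based sessions when the client looks like a robot. A minimal sketch of that idea in Python follows; the names (`is_spider`, `build_link`, the `sid` parameter, and the user-agent patterns) are purely illustrative assumptions, not any particular framework's API:

```python
import re

# Hypothetical helper: guess whether the client is a search-engine robot
# from its User-Agent string. The pattern list is an illustrative sample.
SPIDER_PATTERN = re.compile(
    r"googlebot|slurp|crawler|spider|archiver", re.IGNORECASE
)

def is_spider(user_agent):
    """Best-effort guess that the client is a crawling robot."""
    return bool(user_agent and SPIDER_PATTERN.search(user_agent))

def build_link(path, session_id, cookies_accepted, user_agent):
    """Append the session id via GET only for real, cookieless browsers.

    Spiders get clean URLs, so no throwaway session files pile up on
    the server and the engine never sees QUERY_STRING session links.
    """
    if cookies_accepted or is_spider(user_agent):
        return path
    return "%s?sid=%s" % (path, session_id)
```

So a cookieless browser gets `/page.html?sid=abc123` while a robot just gets `/page.html`. The trade-off is that the spider effectively browses sessionless, which is usually exactly what you want.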
Any help appreciated. Ta.
Or am I taking it out of context?
Excite, Google, Infoseek, Webcrawler and Hotbot are all setting cookies from the submit page. What am I missing?
Brett is talking about spiders accepting cookies when crawling a site. Getting them set on your machine when you visit a search engine page is a different story.
But if you push Brett, I'm sure he'll tell you that the cookies the engines set on your machine do nothing either, except maybe confirm that one of the elements of a real browser is present. I have turned off cookies many times at submission without ill effects.
Surely not? Would cookies have anything to do with this?
If you oversubmit within a given period, even from completely different machines on independent IPs, you'll still get hammered for oversubmission, cookies or no cookies.