sinyala1 - 4:00 pm on May 11, 2002 (gmt 0)
Sorry to say, but I don't think it's accurate to claim you can catch all cloaked sites. It's all server side. If you're redirected to a default.htm or .html page, HOW is a search engine supposed to find out that it's a cloaked site? By checking its databases against... a human viewer? HOW is it possible to detect this? You request a web page, server-side scripts tell the server what to do, and you get sent to the page requested. Since this is the index page, and the spider could be sent to an actual index.html, how is a spider going to tell it's cloaked? By trying different DNS names/IPs? Spider traps would catch all of those, and the cloakers' lists would then just be updated.
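To make the mechanism concrete, here is a minimal sketch of the kind of server-side decision the post is describing: the server looks at the incoming request and picks which page to return. The spider user-agent names, the IP prefix, and the filenames `index.html`/`default.htm` are illustrative assumptions, not a real detection list.

```python
# Hypothetical sketch of server-side cloaking as described in the post.
# The agent names and IP prefix below are illustrative only.
KNOWN_SPIDER_AGENTS = ("googlebot", "slurp", "scooter")
KNOWN_SPIDER_IP_PREFIXES = ("216.239.",)  # assumed example prefix

def page_for_request(user_agent: str, remote_ip: str) -> str:
    """Decide which index page the server sends for this request."""
    ua = user_agent.lower()
    looks_like_spider = (
        any(name in ua for name in KNOWN_SPIDER_AGENTS)
        or any(remote_ip.startswith(p) for p in KNOWN_SPIDER_IP_PREFIXES)
    )
    if looks_like_spider:
        # The spider is served an ordinary-looking index page,
        # so from its side nothing appears out of place.
        return "index.html"
    # Human visitors get sent to the other page.
    return "default.htm"
```

Since the spider only ever sees a normal response to a normal request, there is nothing in the reply itself that marks it as cloaked; detection would require comparing against what a non-spider requester receives, which is exactly the point the post is making.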