Suppose I serve up static HTML pages to ESE bots but provide dynamic page URLs for non-bot access (e.g. the result of clicking on a search result record). (I'm thinking this would be done with meta tags, but I'm not really sure how it would be done. This is perhaps a whole 'nother topic.) My assumption is that (some?) bots would reject a non-static URL like "domain.com/gate.aspx?blahblahblah", but this may be a false assumption. (Note that the "domain.com/gate.aspx" part would be static and the accessed page would depend on the passed URL parameters, "blahblahblah".) Assume that the textual content on the pages is the same and that we are White Hat -- just trying to help bots index properly.
The dynamic pages are necessary for our business application and we create the static correlates just for the bots.
Is there any reason why an ESE (or the company using it) would care that this kind of cloaking is occurring? There may be technical obstacles in setting it up, but if it can be set up, would there then be any technical or ethical obstacles in execution?
Thanks for any help regarding this.
Leaving aside the technical issues of how to set it up, let's look at the URL issue. You will have a parameter in the URL which indicates which content is to be served... Most search engines that I am aware of will accept URLs with a limited number of parameters, so I don't think you will have a problem there.
The cloaking issue is separate. Commercial search engines have traditionally "had a problem" with cloaking, and have taken steps to try to eliminate cloaked pages from their indexes. However, I do not think that the enterprise search engines you are referring to will devote the resources to "out" the cloaked pages. Whether they have a problem with cloaked pages or not will obviously vary from company to company.
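To make the parameter point concrete, here is a rough Python-style sketch of the kind of dispatch I mean, where one query parameter indicates which content to serve (the "page" parameter name and the lookup table are made up purely for illustration, not your actual application):

def serve_dynamic(url):
    from urllib.parse import urlparse, parse_qs
    # Hypothetical lookup table standing in for the business application's data.
    pages = {"123": "<html>...page 123...</html>"}
    params = parse_qs(urlparse(url).query)
    page_id = params.get("page", ["unknown"])[0]
    return pages.get(page_id, "<html>not found</html>")

print(serve_dynamic("http://domain.com/gate.aspx?page=123"))

A URL like that carries only one parameter, which is well within what most crawlers will follow.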
have the search results click through a different link?
I'm not sure exactly what you mean by click through a different link...
To display different content to human and crawler visitors, it is necessary to identify the crawlers. Some people do this by checking the visitor's user-agent, and some by checking the visitor's IP address. If the user-agent or IP address is known to belong to a crawler, they serve the content meant for crawlers; otherwise they serve the content meant for humans.
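Here is a rough sketch in Python of that decision (the user-agent tokens, IP range, and URL patterns below are made up for illustration; a real list would be built from the crawlers you actually need to serve, and kept up to date):

import ipaddress

# Hypothetical examples -- replace with the engines' documented crawler
# user-agents and published IP ranges, or with addresses seen in your logs.
CRAWLER_UA_TOKENS = ("googlebot", "bingbot", "examplebot")
CRAWLER_IP_RANGES = [ipaddress.ip_network("203.0.113.0/24")]

def is_crawler(user_agent, remote_ip):
    """Return True if the request appears to come from a known crawler."""
    ua = (user_agent or "").lower()
    if any(token in ua for token in CRAWLER_UA_TOKENS):
        return True
    try:
        ip = ipaddress.ip_address(remote_ip)
    except ValueError:
        return False
    return any(ip in network for network in CRAWLER_IP_RANGES)

def choose_content(user_agent, remote_ip, page_id):
    """Serve the static correlate to crawlers, the dynamic page to humans."""
    if is_crawler(user_agent, remote_ip):
        return "/static/%s.html" % page_id      # pre-generated static correlate
    return "/gate.aspx?page=%s" % page_id       # normal dynamic application URL

IP checking is generally considered more reliable than user-agent checking, since anyone can send a crawler's user-agent string, but it takes more work to maintain the address list.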