
Browser sniffing for layout purposes -- affects spidering/indexing?

How does a robot behave if content varies by browser type?

         

Nika

6:16 am on Feb 5, 2005 (gmt 0)

10+ Year Member



Hi!

I'm using a certain combination of CSS styling and an h1 tag that lets me insert my keywords and key phrases, wrapped in <h1></h1>, higher up in the HTML code (near the top). I originally designed the website without placing keyword H tags near the beginning of the HTML body, and now I'm trying to fit them in there. I use H tags and keywords lower in the code too, but I want them in the upper part of the text as well.

The CSS/h1 combination worked perfectly without disrupting the website design — in IE, but not in Mozilla. So I had to use browser-detecting PHP code and vary the content wrapped in <h1></h1> depending on the browser detected. I simply leave out the wrapped text when Mozilla is detected and keep it when IE is detected. Anyway, it works.

However:

1. Could anybody tell me whether there would be any harm to page ranking from serving slightly different content to different users (IE vs. Mozilla)? Does this count as cloaking?

2. How does the spider handle this — will it index my IE-targeted content, or will it ignore it because the content varies by browser type? I generate this text in PHP as a variable and pass it to the HTML template, either empty or with content, depending on the browser name. The whole point was to deliver keywords to the search engine in a more favorable position — so will this work, or will the robot just skip it?

tedster

6:38 am on Feb 5, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



How does your script work if the user agent is neither IE nor Mozilla?

Sniffing scripts of all kinds can be very tricky and may lock out all kinds of user agents that you didn't think about (like Slurp and Googlebot).

When I use sniffing for anything at all, I always take the approach that a default version of the page must be served to any user agent that doesn't pass a sniff test.

For instance, make the IE version of the page the default, then sniff for Mozilla, Opera, whatever you feel you need. Do this instead of making every possible user agent go through the code "fork" - because you will probably not think of every possibility. This way you know that Slurp and all the other spiders will still get SOME version of your page.
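A minimal PHP sketch of that default-first approach (the function name and heading text are hypothetical, not from the thread): the keyword heading is the default for every user agent, and only a positively identified Gecko browser gets the alternate version, so spiders like Googlebot and Slurp fall through to the default.

```php
<?php
// Return the <h1> markup to emit for a given User-Agent string.
// The full heading is the DEFAULT; only a positively detected
// Gecko (Mozilla) browser gets the alternate, empty version.
// Unknown agents -- Googlebot, Slurp, anything new -- never match
// the sniff test, so every spider still gets SOME version.
function h1_block($userAgent)
{
    $default = '<h1>Widget Collection</h1>'; // hypothetical keyword phrase

    // Gecko browsers identify themselves with "Gecko/" in the UA
    // string; IE and the major spiders do not, so they never take
    // this branch.
    if (strpos($userAgent, 'Gecko/') !== false) {
        return ''; // the Mozilla-specific variant
    }

    return $default; // IE, Opera, Googlebot, Slurp, everything else
}
```

In the template, `echo h1_block($_SERVER['HTTP_USER_AGENT']);` would then emit the heading, and any agent the test never anticipated still receives the default markup.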

That said, you should be able to serve the same HTML content to both Mozilla and IE without sniffing. You might want to deal with some IE filters in the CSS or something like that, but the HTML should be able to stay the same. And that's the best solution by far.
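For IE-only styling differences, one common option of the period (a generic sketch, not Nika's actual markup) is Internet Explorer's conditional comments: every browser and every spider receives the identical h1, and only IE loads the extra stylesheet.

```html
<!-- Identical markup for every user agent, spiders included -->
<h1 class="top-heading">Widget Collection</h1>

<!-- Base stylesheet for Mozilla, Opera, and everything else -->
<link rel="stylesheet" type="text/css" href="base.css">

<!-- Only IE parses this block; other agents see an ordinary comment -->
<!--[if IE]>
<link rel="stylesheet" type="text/css" href="ie-fixes.css">
<![endif]-->
```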

Nika

10:37 pm on Feb 5, 2005 (gmt 0)

10+ Year Member



Many thanks!

Nika

4:43 pm on Mar 10, 2005 (gmt 0)

10+ Year Member



Always problems! Now it's about changing link positions around the website and trying to understand how Google is going to index them.

What happened is that I added new pages to the site and figured they would fit better if linked from the places where the links to the /goods/ pages used to be. Accordingly, the /goods/ pages, replaced by the new ones, migrated to other locations within the website (moved manually, of course). After I finished this clever rearranging, magic things started happening:

1. Google indexes the new pages but keeps the old ones in its cache, and even the URLs given in the cache point to the old names.

2. It also says it has no data about the old pages, though those are exactly what it has stored in its cache.

3. Besides, it fails to find any pages, new or old, through search — at least in the first 100 pages of results.

4. All in all, it seems Google has erased the old URLs from its database since it couldn't verify that they still exist, and refuses to recognize them at their new locations, because it has the new pages (those at the old locations) indexed under the old URL names — in some ugly way keeping even the old content, not just the URLs.

Does anybody know what all this mess is about, and how I could counter it?

tedster

3:22 am on Mar 11, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



If everything is still on the same domain, you could use a 301 redirect from the old urls to the new ones. However, it will take some time to filter through.

It's a server-side thing, and beyond the scope of this particular board which is for HTML and Browsers. But we have a Google Forum [webmasterworld.com] and Forums for both Apache [webmasterworld.com] and Microsoft [webmasterworld.com] servers. You should be able to find the information you need there.
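If the site runs on Apache, the 301 redirects tedster mentions can be done with mod_alias rules in an .htaccess file — a sketch with made-up paths (the real old and new URLs would go here):

```apache
# Permanent (301) redirects from the old URLs to their new locations.
# The paths below are illustrative only.
Redirect 301 /goods/collection_1.html http://www.example.com/collections/collection_1.html
Redirect 301 /goods/collection_2.html http://www.example.com/collections/collection_2.html
```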

Nika

9:44 pm on Apr 16, 2005 (gmt 0)

10+ Year Member



One more thing: could there be any harm, from the search engine point of view, in using different names for similar links to the same internal page? For example, the page mydomain/collection_1.html is anchored from the homepage as "COLLECTION" and from some other page as "GOODS" — both pointing to the same URL. Any opinion on this?

tedster

10:36 pm on Apr 16, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



This is really well outside the topics handled by this forum (HTML and Browsers). It's really about Search Engine Promotion [webmasterworld.com], isn't it?

The short answer is that your anchor text certainly does make a difference. But is varying internal anchor text in different spots around the site necessarily harmful? That's an in-depth conversation you should start in the forum I linked to, rather than here. That way we can keep each forum on-topic.

Nika

3:16 pm on Apr 17, 2005 (gmt 0)

10+ Year Member



Thanks tedster,

I just didn't think the point was worth starting a new thread. But if it's a serious matter, as you say, I'll post it to the right forum then.

Thanks again.