|Redirects and ranking|
| 12:41 am on Jun 2, 2000 (gmt 0)|
I'm working on a new site that will require some browser and resolution detection on its pages, with a redirect for some situations.
Are there some techniques to minimize the impact I've heard redirects can have on rankings? Currently we plan to use a full basic page of MSIE-friendly code with a JS detect in the HEAD. The redirect for Netscape users will be a "replace" so that the back button doesn't get confused.
Will this look like SPAM to the SEs?
| 8:10 am on Jun 2, 2000 (gmt 0)|
We used a js redirect, a meta refresh and a Perl script (not at the same time, I should add) and none worked well enough to say we'd continue. The js was the best, but we dumped all redirects in favour of a simple click-through. In the end, we decided it wasn't worth developing two sites to work with NS and IE. Sadly, IE won. Can't say I'm happy about that, but, in the real world, the client wanted good rankings and didn't want to pay for two sites.
The effect wasn't on the rankings so much as on getting the site indexed in the first place. AV was the most sensitive and refused to index the site.
All redirects are now removed and the site is fully indexed.
| 12:36 pm on Jun 2, 2000 (gmt 0)|
I'm coming to the conclusion that we must move away from any automatic redirects, even though click pages will look pretty "geeky" on this particular site. We'd really prefer the mechanics to be as invisible as possible.
I've got another question about the same issue. The content of the two versions of the site - MSIE and Netscape - is obviously very similar. Would it be best to use robots.txt and meta robots tags to keep the search engine spiders away from the set of near-duplicate pages?
Or is there a possibility of changing page titles and meta tags to gain an advantage and enhance the theme?
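For reference, shutting spiders out of one version usually means both a robots.txt rule and a per-page meta tag. Assuming the duplicate copy lived under a directory like `/ns/` (a hypothetical path, just for illustration), the robots.txt entry would look roughly like this:

```
# robots.txt at the site root -- assumes the duplicate (Netscape)
# pages live under /ns/, a made-up path for this example
User-agent: *
Disallow: /ns/
```

Belt and braces would be adding `<meta name="robots" content="noindex, nofollow">` to the HEAD of each duplicate page as well, for any spider that fetches a page before (or instead of) reading robots.txt.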
| 5:29 pm on Jun 2, 2000 (gmt 0)|
You may have just given us a clue to a small part of the AV problems many have been having. Once you cleaned out the redirects, did you have to get AV to take a special look at your site, or did you just resubmit?
We're now building in favor of click-throughs, but have several hundred pages remaining with JS or meta redirects. Guess I'll be busy!!!
In our new approach we're building the page that is clicked through to in a frame (header type). This avoids the appearance of lots of pages that seem to do nothing but leave the site - and the spiders usually don't go past the frame page. By having a dozen or so of those frame pages on the site, we also avoid lots of pages with only one destination on or off the site.
| 10:25 pm on Jun 3, 2000 (gmt 0)|
If you really want to keep the two separate formats (IE & Netscape), I would strongly recommend that you try to keep the spiders out of one of them. I have heard that AV in particular has been ignoring robots.txt and the meta robots tag, though I have no evidence to confirm or deny this. Judging by the number of questions I get about robots.txt and meta robots, I'd guess there could be a good number of incorrectly configured files and tags on the web. I'm also speculating that AV's Scooter was configured to ignore these specific instructions in order to seek out "spam" sites. I've heard so many people mention "...how to stop AV spidering, even with the file and tag in place..."
Even on my own site, AV has spidered some of my draft pages and found some specific, standalone home pages devised for directories. These pages look just like my index.htm page, but they allow me to closely monitor entries from the directories. Unfortunately, AV has spidered and indexed some of these pages (although it should not), and I'm holding my breath in case I end up being booted out for spamming, albeit unintentionally.
There are other people here with far greater knowledge of the technical aspects of redirects who may know of a working solution. My personal experience is that I will avoid any redirect, unfortunately - a shame, because redirects can make a site work well from the user's point of view. Good luck.
I tried experiments with putting links (normal and hidden) as high up the page as possible, and setting refresh delays greater than 10 seconds, to see if that helped. Actually, it did. I can't remember which engines it solved problems with, but I decided to remove the scripts and meta refreshes entirely to solve the indexing problem.
Once the site was tidied up, I resubmitted the key pages to the SEs - all of them. Not to the directories, of course (unless they had not listed the site). It took many months to get indexed, but in the end - success!
Good luck - keep us informed.
| 9:38 am on Jun 5, 2000 (gmt 0)|
Thanks for all the help.
We've decided we do need to have two versions of the site, but one of them will be kept isolated -- totally away from the spiders (we hope!). We plan to do the browser testing and redirects by calling .js files, so the actual redirect code will not be on the page.
Without the input from this forum, our design crew would never have thought of this solution. In fact, we never would have known we were headed right into a problem until it blew up in our face.
Beyond that, we decided to move as much JS and CSS as possible off the page and into separate files. Besides keeping the redirects out of the HTML, this will also improve the ratio of text to code.
I'll give a report on our results, whenever there is something new to report, and thanks again.