Forum Moderators: phranque
Some people link to [widget.com,...] but I want them to link to [widget.com...]
Is this useful for SEOs? Thank you.
This .htaccess redirect sends [domainname.com...] to [domainname.com...] - simply modify it for your needs.
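(The code itself didn't survive here; a typical non-www-to-www 301 redirect in .htaccess looks something like this, with domainname.com standing in as a placeholder for your own domain:)

```apache
# Sketch of a non-www -> www 301 redirect; "domainname.com" is a
# placeholder -- substitute your actual domain.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^domainname\.com [NC]
RewriteRule ^(.*)$ http://www.domainname.com/$1 [R=301,L]
```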
redman has already provided a good example of the code, so I'll take the SEO question...
The advantage of using a 301 redirect to redirect an alternate domain to your main domain is that it tells the search engines to list only the main domain, and to credit the link popularity or PageRank of the alternate domain(s) to the main domain. Over time, people who link to you may clean up their links when they check and see that their link is being redirected. The best approach to get "full credit" for all links is to ask everyone to update their links to point to your main domain, and then rely on the 301 redirect to take care of those who don't.
Jim
Yes, you're right. And to make it work even better with cookies, you might also want to correct the case of the requested domain by converting it to all-lowercase if needed.
BTW, Welcome to WebmasterWorld [webmasterworld.com]!
Jim
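(A sketch of the case-normalization Jim mentions, using mod_rewrite's internal tolower map. Note that the RewriteMap directive is only valid in the main server config or a VirtualHost block, not in .htaccess; domain.com is a placeholder:)

```apache
# In httpd.conf / VirtualHost context only -- defines an internal
# map that lowercases its input:
RewriteMap lowercase int:tolower

# Redirect any request whose hostname contains an uppercase letter
# to the all-lowercase equivalent:
RewriteCond %{HTTP_HOST} [A-Z]
RewriteRule ^(.*)$ http://${lowercase:%{HTTP_HOST}}/$1 [R=301,L]
```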
Is this correct?:
RewriteEngine On
RewriteCond %{HTTP_HOST} ^\.domainname\.com$ [NC]
RewriteRule ^(.*)$ [domainname.com...] [R=301,L]
This redirects [domainname.com...] to [domainname.com?...]
P.S.: About the cookies thing, that's really something I don't know about! Thanks!
You're almost there...
As it's written now, it redirects http://.domainname.com to http://www.domainname.com?
(You don't need the "\." at the beginning of your RewriteCond pattern. Further, I'd recommend leaving the "$" off the end of the RewriteCond pattern as well -- it can sometimes cause problems.)
Jim
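(Putting those suggestions together - dropping the leading "\." and the trailing "$" from the RewriteCond pattern - the corrected block would read, with domainname.com as a placeholder:)

```apache
RewriteEngine On
RewriteCond %{HTTP_HOST} ^domainname\.com [NC]
RewriteRule ^(.*)$ http://www.domainname.com/$1 [R=301,L]
```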
I have tried this in my .htaccess (first line obviously nothing to do with this question) ...
AddType application/x-httpd-php .html .htm
Options +FollowSymLinks
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteOptions inherit
RewriteCond %{HTTP_HOST} widgets\.com$ [NC]
RewriteRule ^(.*)$ http://www.widgets.com/$1 [R=301,L]
</IfModule>
... but the request just loops and the page never loads. Could somebody tell me what I'm doing wrong? Thanks.
(By the way, I left the final $ on the RewriteCond pattern because I thought that would redirect widgets.com/any_URL to www.widgets.com/any_URL - is this not correct?)
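(For what it's worth, one likely cause of the loop with that pattern: "widgets\.com$" also matches "www.widgets.com", so the redirect target satisfies the condition again and the rule keeps firing. Anchoring the pattern to the bare domain avoids this; a sketch:)

```apache
# widgets\.com$ matches www.widgets.com too, so every redirect
# re-triggers the rule; anchor the pattern to the bare domain instead:
RewriteCond %{HTTP_HOST} ^widgets\.com [NC]
RewriteRule ^(.*)$ http://www.widgets.com/$1 [R=301,L]
```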
I've used this code both to resolve any missing www and to handle parked .com and .org domains in one unique site (.net).
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.domain\.net
RewriteRule ^(.*)$ [domain.net...] [R=301,L]
ErrorDocument 404 /erreur404.html
Is this correct, and will search engines understand that there is only one site?
BTW, the empty line seems to be necessary between RewriteRule and ErrorDocument - is this usual?
Peter.
Welcome to WebmasterWorld [webmasterworld.com]!
In order to support HTTP/1.0 user-agents, you might want to add another RewriteCond. This will prevent an endless redirection loop if an HTTP/1.0 client makes a request.
HTTP/1.0 clients do not send a HOST request header, which would leave %{HTTP_HOST} blank. Therefore, the code would redirect, resulting in another request with a blank HOST header. This process would continue until a timeout occurred.
This code will not redirect if the HOST header is missing (blank):
RewriteEngine On
RewriteCond %{HTTP_HOST} .
RewriteCond %{HTTP_HOST} !^www\.domain\.net
RewriteRule ^(.*)$ http://www.domain.net/$1 [R=301,L]
I'm not sure about the blank-line-required question. I usually place my ErrorDocument directives ahead of my mod_rewrite code.
Jim
I quickly added the missing line to .htaccess, and then checked the logs. Big and happy surprise (I hope): when Googlebot came this morning, it seems to have managed to read pages OK even though the faulty .htaccess was installed (and tested with a browser). There are 20 or more different pages giving a 200 result like this in the log:
64.68.82.30 - - [09/Jan/2004:10:06:26 +0100] "GET /bla/bla/bla.html HTTP/1.0" 200 6274 "-" "Googlebot/2.1 (+http://www.googlebot.com/bot.html)"
How did I get away with it?
I should add that I'm on a shared host: several hundred sites, one IP address. Does this change anything regarding HTTP_HOST with 1.0 requests?
Peter.
Googlebot advertises itself as an HTTP/1.0 client, but it does send a HOST header, and it uses some of the protocol extensions added by HTTP/1.1, such as responding properly to a 410-Gone status. Basically, it's smart enough to adapt if it sees HTTP/1.1 responses.
I'd suspect that only one in ten thousand visitors would be coming through a true HTTP/1.0 client or proxy. I'm just a fan of covering all the bases so I don't get any nasty surprises. :)
If you are on a shared-IP virtual server, then any true HTTP/1.0 client won't be able to access your site - all it will be able to get to is the default server account. That's because with a shared IP, the HOST header is *required* in order to differentiate between all the hostnames sharing the common IP address. So, on a shared-IP setup, the line I added is pointless.
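(To illustrate the difference: an HTTP/1.0 request carries no Host line, while HTTP/1.1 requires one. The path and hostname below are placeholders:)

```http
GET /page.html HTTP/1.0

GET /page.html HTTP/1.1
Host: www.domain.net
```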
Jim