But don't try to remove the wrong version with the URL Console - it removes both versions for six months.
The right solution is to 301-redirect one version to the other. Google usually merges them well, but it can take anywhere from a few days to a few weeks. It's essential to leave at least one inbound link pointing to the wrong URL version, to make sure Google crawls the redirect; otherwise the wrong version stays in the index for a very long time.
Most people obviously include the www prefix when they type in a domain name. But most web-savvy people just type the domain without the 'www', and when I type in a domain without the prefix it does annoy me when I can't access the site and have to go back and type out 'www' for the billionth time.
I don't know of any bad effects of using the domain name without the 'www', so I would advise you to let your visitors access your site with it.
I guess there is a bit of web history behind that somewhere.
The "www" was conventionally used to indicate the *web* access point of a server (www.example.com), as opposed to the ftp access point (ftp.example.com), the gopher access point (gopher.example.com), or the access points any of a number of other possible services.
www.example.com (Web service)
mail.example.com (e-mail service)
news.example.com (Usenet newsgroups)
ftp.example.com (FTP service)
ns.example.com (DNS name service)
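In a DNS zone file those names are just ordinary records pointing at whichever machines run each service - a hypothetical sketch, with placeholder addresses:

    www   IN  A      192.0.2.10   ; web server
    mail  IN  A      192.0.2.20   ; mail server
    news  IN  A      192.0.2.30   ; news server
    ftp   IN  CNAME  www          ; FTP served from the same box as the web
    ns    IN  A      192.0.2.40   ; name server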
Right now Google handles domain.com and www.domain.com as two separate sites, so if you are using relative directory/page links you will end up with two duplicate sites in Google's eyes.
On balance, I think adding "www", which has more syllables than "World Wide Web", merely wastes talking time.
I'm pretty sure it's redundant, like the l in html
Do you mean I should change my internal links somehow (full URL?) and that would stop Google from seeing two sites?
It won't hurt and might help. The terminology involved is "relative" links, those without the http:// on the front of the <a href=""> stuff, and "absolute" links, meaning those with the full URL as the anchor.
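To make the terminology concrete, a quick sketch (the page name is just a placeholder):

    <!-- relative link: takes on whatever hostname the page was requested under -->
    <a href="/products.html">Products</a>

    <!-- absolute link: always points the bot at the www version -->
    <a href="http://www.example.com/products.html">Products</a>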
My own thoughts on this, derived from problems I had in the past: if an SE bot comes in on the non-preferred version even once (such as example.com rather than www.example.com) because of an incoming link in the wrong form, it can follow the relative links right through the entire site (after caching links and getting back to them later, etc.) and think there are two versions of every page. It only needs to hit one page on the site to cause this confusion; then it merrily indexes all of your pages a second time. Although G, and Y, and all of them should be smart enough to figure things out, it sometimes results in problems.

It's best to use absolute internal linking and not have to worry, because any non-www page that gets crawled directs the bots via internal navigation to the www versions of the linked-to pages. Even better is using Apache and taking care of things with .htaccess (via mod_rewrite), thereby making sure all requests for the wrong URLs are redirected to the right URL versions right off the bat.
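Something along these lines in .htaccess does the trick - a sketch, assuming mod_rewrite is enabled, with example.com standing in for your own domain:

    RewriteEngine On
    # catch requests for the bare domain...
    RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
    # ...and send a permanent (301) redirect to the www version
    RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

The [R=301] flag is what makes it a permanent redirect; leave it out and Apache sends a 302 by default.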
I have just recovered from that - it took almost 5 months to get it all sorted out on one domain - but my rankings have done nothing but climb to new heights since I fixed all of the sites.
The question on Windows servers has been covered a few times here in another forum - [webmasterworld.com...] is one of the messages I found real quick.
I haven't the time to post a long answer right now; but this question is asked several times per week, and there is much prior art in earlier threads...
I'd just like to add that it really doesn't matter for your ranking whether you keep the "www." or remove it, using only "domain.com". For that matter you could use "zzz.domain.com" if you wanted to.
Use whatever you like best, just pick one and stay with that one. Consistency is key.
Oh, and let me repeat: It must be a 301.
If you use an Apache server, doing this the right way is less than a one-minute job to set up properly, and it costs you nothing.
Still, being the master of your own domain, you should decide about that, and if you've already made up your mind, I'm sure your host can help.
Would it be possible to 404 anything that comes through on a non-www address? I think Google is better at cleaning up pages that no longer exist.
Is this a good idea?
I helped a site clean up by making a "fake sitemap" which we put on another site. That sitemap only listed the URLs that we wanted removed (these URLs were the starting point of the redirect) and it took a few weeks for that to happen.
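For what it's worth, that kind of "fake sitemap" can be as simple as a plain HTML page of links on the other site - a sketch, with placeholder URLs standing in for the old addresses you want recrawled:

    <html>
    <body>
      <!-- each link is an old URL that now 301s to its new home -->
      <a href="http://example.com/">old home page</a>
      <a href="http://example.com/page1.html">old page 1</a>
      <a href="http://example.com/page2.html">old page 2</a>
    </body>
    </html>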
As an aside, two weeks ago Google suddenly relisted all four versions of the URL for every page of the site. The non-www with trailing / are all fully indexed with title and description. The other three versions (non-www without trailing /, www with trailing /, and www without trailing /) are all shown as URL-only entries. Also included are some URL-only entries for pages that were removed 3 months ago, and a page that hasn't existed for 18 months.
This happened just as the latest update started. It looks like Google has broken the way they handle 301 redirects, at the same time that they haven't fixed the 302 redirect URL-hijacking problem. Maybe they just need to respider the whole web to collect the new redirect-elsewhere/unique-content/duplicate-content status of every URL in their index. If that is so, then get that 301 redirect installed right away. It cannot harm you, and can only help.
What still applies, though, is that it must be a 301.
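If you want to verify which status code your server actually returns, a quick check with curl works (example.com stands in for your domain):

    curl -I http://example.com/

    HTTP/1.1 301 Moved Permanently
    Location: http://www.example.com/

If the first line says 302 instead, the redirect is set up wrong.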
AFAIK, on MS servers (IIS, NT) you can either set up the redirect in the Control Panel, or you have to purchase/install an ISAPI filter.
Don't use the Response.Redirect method from ASP. That one returns a 302 server status code, not a 301. A 302 is the root of all evil. That one is more or less guaranteed to bring you trouble.
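In classic ASP you can send the 301 yourself instead - a minimal sketch, with the www target as a placeholder for your preferred version:

    <%
    ' send a permanent redirect rather than Response.Redirect's 302
    Response.Status = "301 Moved Permanently"
    Response.AddHeader "Location", "http://www.example.com" & Request.ServerVariables("URL")
    Response.End
    %>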