
Home / Forums Index / Google / Google SEO News and Discussion
Forum Library, Charter, Moderators: Robert Charlton & aakk9999 & brotherhood of lan & goodroi

Google SEO News and Discussion Forum

This 246 message thread spans 9 pages: < < 246 ( 1 2 [3] 4 5 6 7 8 9 > >     
Why Does Google Treat "www" & "no-www" As Different?
Canonical Question
Simsi




msg:3094365
 7:58 pm on Sep 23, 2006 (gmt 0)

...why does Google want to treat the "www" and non-"www" versions of a website as different sites? Isn't it pretty obvious that they are one site?

Or am I missing something?

 

AlgorithmGuy




msg:3095798
 11:41 am on Sep 25, 2006 (gmt 0)

I think this canonical thing is just one of those things they should flip on its head: assume that if the content is the same on both variations, it is the same website, then work backwards from there. That would be fairer on the average webmaster guy, IMO. Hell, what do I know? I've just gone all supplemental and lost all my PR

;)

Simsi,

I have seen websites go under for exactly what you describe. I feel for the webmaster in question, because I had trash content and stayed on top while the more informative site in our niche went under.

Google has published information saying that another webmaster cannot harm your website.

A top competitor was once giving us hassle over copyright issues. We noticed that this competitor, who ranked above us, had canonical weaknesses, so we submitted 4 versions of his website to the search engines, making sure the crawlers picked up on it. Google's cache later had all 4 versions displayed, and all 4 had identical content. Not a month after that, the Bourbon update relegated that site into oblivion. We were supported by links from DMOZ, another advantage we had over our competitor, who was not listed in the directory because the editors deemed his site to be of useless content while ours must have been deemed useful. I'd say his site was an authority in our niche and we played all the tricks with trash content. DMOZ got it wrong.

OK, I spilled the beans on what I did, but can you see the point? We were actually doing that site a favor by trying to promote all their domain versions. Nothing wrong in that, and no different from you pointing a link to another site.

[edited by: AlgorithmGuy at 11:52 am (utc) on Sep. 25, 2006]

Simsi




msg:3095815
 11:57 am on Sep 25, 2006 (gmt 0)

LOL - okay, I think I see what you did there, AG. But if Google assumed that non-www and www were one and the same website, wouldn't that have prevented you from taking advantage of it? The competitor is exactly the kind of person I think Google should be helping, as they clearly didn't know their canonicals. Or did I miss the point? :-D

[edited by: Simsi at 11:58 am (utc) on Sep. 25, 2006]

AlgorithmGuy




msg:3095844
 12:26 pm on Sep 25, 2006 (gmt 0)

LOL - okay, I think I see what you did there, AG. But if Google assumed that non-www and www were one and the same website, wouldn't that have prevented you from taking advantage of it? The competitor is exactly the kind of person I think Google should be helping, as they clearly didn't know their canonicals. Or did I miss the point? :-D

One of the reasons why I am interested in this thread you started is the very valid point you put forward.

No, you did not miss anything. Google is a money-making machine. Its core is geared up to make money. It sucks the blood out of websites. Around 40% of the roughly 95% of its revenue that comes from AdWords rests on end users' default way of searching.

An end user may click through a few websites, or many, before he makes a purchase. I have watched a guy looking for a mechanical valve that costs 10 dollars; by the time he purchased that valve he had cost advertisers, I estimate, about 50 dollars. Google preys on the fact that people will compare, on both price and value for money.

Google also preys on the obscurity of relevance: a click or two by an end user before he hits the target.

Google is not a search engine. Whoever thinks that Google is a search engine is misleading themselves. It is a business that amasses the hard work of webmasters, then displays what "it" wants the end user to see. This is the heart of Google's pay-per-click.

Trust me: do a search for any keyword. The stronger the niche, the better it illustrates this fact. You will be given ten results on the first page, but you are not aware that the top-ranking site could actually be the 16th or the 20th; anything from 0 to 75% of relevancy is filtered out by Google. This pushes the end user, desperately looking for the website that holds the information he wants, toward clicking the AdWords.

Yes, many sites that are far more relevant are filtered out of a given result. This also forces those hidden websites to go pay-per-click or never be found. And in the pay-per-click they sign up to, the highest bidder wins.

Google is a business that gathers information and displays what it wants to display. That is not the definition of a search engine. Google's main goal is its war against webmasters, the very people who championed it into popularity.

That from the company that began as "BackRub" and preaches "Do No Evil."

Canonicalization of a website is indeed not a difficult thing to handle. It requires resources, yes, but Google appoints its resources to lining the pockets of its shareholders with hard cash.

[edited by: AlgorithmGuy at 12:34 pm (utc) on Sep. 25, 2006]

photopassjapan




msg:3095846
 12:33 pm on Sep 25, 2006 (gmt 0)

We noticed that this competitor, who ranked above us, had canonical weaknesses, so we submitted 4 versions of his website to the search engines, making sure the crawlers picked up on it. Google's cache later had all 4 versions displayed, and all 4 had identical content. Not a month after that, the Bourbon update relegated that site into oblivion. We were supported by links from DMOZ, another advantage we had over our competitor, who was not listed in the directory.

OK, I spilled the beans on what I did, but can you see the point? We were actually doing that site a favor by trying to promote all their domain versions. Nothing wrong in that, and no different from you pointing a link to another site.

O.O

...um...
Mm.

So this is what precaution means when it comes to addressing your own site's canonical issues. Even if you make sure only one version goes public, the same version in internal linking and in all the inbound links you yourself have arranged and such... there might just be a source of information feeding the search engines that you didn't expect. Perhaps someone linking to you with a different URL... with all the good intentions of having you as a favorite. Perhaps someone helping you by posting your different URLs to search engines with even better intentions. ;)

AlgorithmGuy




msg:3095851
 12:43 pm on Sep 25, 2006 (gmt 0)

...um...
Mm.

So this is what precaution means when it comes to addressing your own site's canonical issues. Even if you make sure only one version goes public, the same version in internal linking and in all the inbound links you yourself have arranged and such... there might just be a source of information feeding the search engines that you didn't expect. Perhaps someone linking to you with a different URL... with all the good intentions of having you as a favorite. Perhaps someone helping you by posting your different URLs to search engines with even better intentions.

Exactly. And do not forget that omitting the www in your link exchange is no different, whether intentional or not. If your link partner or competitor is on a static IP number, link to them via the IP number only. There is more chance of a duplicate-content hit, because Google will indeed cache all three versions as triplicate content, and BANG, the prized and patented secret algo kicks in and without hesitation tanks the site being "done a favor". ;)

[edited by: AlgorithmGuy at 12:47 pm (utc) on Sep. 25, 2006]

lmo4103




msg:3095872
 1:01 pm on Sep 25, 2006 (gmt 0)

Is this just Google, or is it all of them?

g1smd




msg:3095879
 1:12 pm on Sep 25, 2006 (gmt 0)

If they are clever, then the IP address access will have a 301 redirect on it to one site on the server, or to an error message.
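A minimal sketch of what g1smd describes, assuming Apache virtual hosts; the IP address (203.0.113.10) and hostname (example.com) are placeholders, not a real setup:

```apache
# Hypothetical sketch: requests that arrive addressed to the raw IP hit
# this vhost, which 301s them to the one canonical hostname.
<VirtualHost *:80>
    ServerName 203.0.113.10
    Redirect permanent / http://www.example.com/
</VirtualHost>

# The real site answers only on its canonical hostname.
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example
</VirtualHost>
```

The alternative g1smd mentions, an error message, would simply swap the Redirect line for a vhost serving an error page.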

AlgorithmGuy




msg:3095892
 1:22 pm on Sep 25, 2006 (gmt 0)

If they are clever, then the IP address access will have a 301 redirect on it to one site on the server, or to an error message.

g1smd,

Don't forget, you would do a header check to see if they have that in place on their server. Highly unlikely.

Besides, one can exploit the limitations of what their server is able to filter into a 301 in this way.

You are vulnerable, and so are the vast majority of websites.

Unless Google addresses this problem, there is little you can do. A dot before the slash of your URL is another of the many ways to give a website the duplicate-content penalties that Google thrives on dishing out as a summarily automated duty.
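The header check AlgorithmGuy mentions can be scripted. A hedged sketch in Python (hostnames are placeholders; the pass/fail logic is split into its own function so it can be sanity-checked without a network call):

```python
# Sketch of a canonical "header check": request a page and see whether
# the server answers with a 301 pointing at the canonical URL.
import http.client

def fetch_status_and_location(host, path="/"):
    """HEAD the page; return (status code, Location header or None)."""
    conn = http.client.HTTPConnection(host, timeout=10)
    try:
        conn.request("HEAD", path)
        resp = conn.getresponse()
        return resp.status, resp.getheader("Location")
    finally:
        conn.close()

def is_canonical_redirect(status, location, canonical):
    """True only for a permanent (301) redirect to the canonical URL."""
    return status == 301 and location == canonical

# e.g. fetch_status_and_location("example.com") should come back as
# (301, "http://www.example.com/") on a correctly configured server.
```

A 302, or no redirect at all, is exactly the weakness being discussed.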

texasville




msg:3095908
 1:38 pm on Sep 25, 2006 (gmt 0)

Really, what it all comes down to is this:

Google refuses to accept that non-www and www are one and the same site even when they share the same suffix, such as .com or .org.
Argue all you want about how they are different, but try to buy just one of the versions. You can't. What registrar would even entertain the notion of checking whether just the www version of example.com was available? It's all nuts, and we are letting ourselves get TRAINED to think like Google.
We now do everything under the sun to try to compensate for Google's algorithmic shortcomings. Thousands of webmasters have to do the work of a few short-sighted engineers who didn't think of these loopholes when writing their formulas.
It's all absurd.

AlgorithmGuy




msg:3095915
 1:49 pm on Sep 25, 2006 (gmt 0)

If they are clever, then the IP address access will have a 301 redirect on it to one site on the server, or to an error message.

g1smd,

theBear might want to comment on this.

I have a theory that might work against the numbered-IP URL being attacked to cause duplicate content: a container to hold the IP in the Apache server (or any server), pointing to its own folder, so that the numbered IP can be used as a totally isolated website.

No 301 at all is better, far better, and productive, since you can utilize the numbered IP to give a link to the canonical URL. The attacker will then actually be doing you a favour by linking to it or providing crawlers paths to the numbered IP.

The Apache server can be configured to detect the request. No doubt about it: isolated, and made beneficial instead of being vulnerable.

This is a workable solution. No 301 needed.
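As I read AlgorithmGuy's "container" idea, it would look something like this in Apache terms (a hypothetical sketch; the IP and hostnames are placeholders): the bare IP gets its own document root, serving a tiny site that simply links to the canonical URL.

```apache
# Hypothetical sketch of the "isolated IP site" idea: the numbered IP
# answers as its own tiny site instead of mirroring the main one.
<VirtualHost *:80>
    ServerName 203.0.113.10
    DocumentRoot /var/www/ip-landing   # one page linking to the canonical URL
</VirtualHost>

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example      # the real site lives only here
</VirtualHost>
```

Any links an attacker points at the IP then benefit the landing page, which in turn links to the canonical site, rather than creating a duplicate of it.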

Simsi




msg:3095991
 2:57 pm on Sep 25, 2006 (gmt 0)

NOW I'm learning! Thanks AG :)

No 301 at all is better, far better, and productive, since you can utilize the numbered IP to give a link to the canonical URL. The attacker will then actually be doing you a favour by linking to it or providing crawlers paths to the numbered IP.

Like it :-D Guess whose .htaccess file has just been amended!

If they are clever, then the IP address access will have a 301 redirect on it to one site on the server, or to an error message.

I'd say "knowledgeable" more than "clever", g1. An academic with a website who is no techie is still clever, just not knowledgeable in the field of canonicalisation ;) Splitting hairs, I know :-D

[edited by: Simsi at 3:11 pm (utc) on Sep. 25, 2006]
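For reference, the usual .htaccess amendment for the www/non-www case looks like this (a sketch only; example.com is a placeholder, it assumes mod_rewrite is available, and note AlgorithmGuy's caveats elsewhere in the thread about relying on .htaccess at all):

```apache
# Sketch: 301 every non-www request to the www hostname.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

The same two lines with the condition and target swapped would enforce the non-www version instead; the point is to pick one and 301 the other.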

jessejump




msg:3096004
 3:01 pm on Sep 25, 2006 (gmt 0)

>>>>> you start dealing with "canonical forms" in high school algebra, where "3x + 5" equals "5 + x * 3" but you always want to write it the first way. I'm sure Google expected the readers to have at least that background.

I took HS algebra, trig, and college calculus, and never heard the term. If someone did mention it, they didn't use it often.
Come on, it's a geek term, FGBG (For Geeks, By Geeks); the public doesn't know it in the URL sense.

AlgorithmGuy




msg:3096060
 3:33 pm on Sep 25, 2006 (gmt 0)

Simsi,

There are experts here far better than I regarding .htaccess.

Never do a redirect of any sort unless you really know what you are doing and for what purpose. A .htaccess file is a last resort when you compare the alternative, more powerful methods, such as mod_rewrite configured in the main Apache configuration at the server end.

Once a site has gone live it is not too late to resolve, but .htaccess is notoriously inefficient compared to the greater effect afforded by server-side configuration of an Apache server.

On some servers a looping effect happens when you try to resolve a domain in .htaccess. It loops very fast, like lightning, and can bring your website to its knees.

Unfortunately, hosts are not search-engine knowledgeable. Nine out of ten probably won't know what you are talking about.

I once suggested to a server-software company that their software was killing websites and should be rectified. They had no idea until I explained the process to them: their server software was returning a 302, instead of a 301, when resolving a URL with a missing trailing slash to the slashed URL. This was causing untold complications for the people using the software. The company acknowledged that they had got it wrong and rectified it. An immediate patch was sent out to all their customers, and I was credited in the updated version. Surprisingly, nobody had complained about it; I came across it by sheer chance when a friend and webmaster asked me why his website was in oblivion. A look at the logs disclosed how the server was redirecting harvesting bots and deep-crawl bots: the 302 status codes served for URLs missing the trailing slash were telling Google that the website was temporary.

So Google had two stories to deal with: identical content under a 200 GET for a given website, while another bot delivers an identical site with a temporary status. BANG. Problems. Oblivion.

Please do not make assumptions about redirects unless you know what you are doing.

[edited by: AlgorithmGuy at 3:48 pm (utc) on Sep. 25, 2006]
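The missing-trailing-slash case in the story above is normally handled by Apache's mod_dir, which issues the redirect itself as a 301; a rule like this only matters on servers whose built-in behaviour is wrong. A hedged .htaccess-style sketch:

```apache
# Sketch: if the request maps to a real directory but lacks the
# trailing slash, answer with a permanent 301 -- never a temporary 302.
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^(.+[^/])$ /$1/ [R=301,L]
```

The whole point of the anecdote is the status code: a 302 tells crawlers the slashed location is temporary, while a 301 tells them it is the one true URL.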

AlgorithmGuy




msg:3096073
 3:41 pm on Sep 25, 2006 (gmt 0)

I took HS algebra, Trig, College Calculus - never heard the term. If someone did mention it, they didn't use it often.
Come on - it's a geek term - FGBG (For Geeks by Geeks)- the public doesn't know it in the URL sense.

I used to think the term applied to various things other than search or websites.

Google only adopted the term just over a year ago, and it was a deliberate attempt to mislead webmasters. In fact, Google once actually said that "its" interpretation of canonical may mean another website that contains your content and is better able to display it.

Yes, believe it or not: if a website copied your website and had a higher PageRank, Google would deem the copier the canonical holder of your URL. Since it has the same content but a higher PageRank, the canonical vote goes to the site that has your content.

Trust me on this. This is fact, not fiction. Documented evidence exists to prove it.

Google's interpretation of canonical is not what we think it is.

lmo4103




msg:3096124
 4:20 pm on Sep 25, 2006 (gmt 0)

I just browsed around at Webmaster Help Center [google.com] and found some information relevant to the current affairs that I distinctly don't remember reading before.

Bewenched




msg:3096146
 4:29 pm on Sep 25, 2006 (gmt 0)

AlgorithmGuy,
Don't forget about the SSL version of the site as well. It did happen to our site, with someone linking to us as

https://www.example.com
instead of
http://www.example.com

dupe content! supplemental!

[edited by: tedster at 11:03 pm (utc) on Sep. 18, 2008]
[edit reason] switch to example.com [/edit]

AlgorithmGuy




msg:3096185
 4:51 pm on Sep 25, 2006 (gmt 0)

AlgorithmGuy,
Don't forget about the SSL version of the site as well. It did happen to our site, with someone linking to us as
https://www.ourdomain.com
instead of
[ourdomain.com...]

dupe content! supplemental!

It may be possible, but I have never observed a site tank over its secure pages. However, anything is possible. We are talking about Google's canonicalization methods, and the one question on my mind is what Google deems its in-house interpretation of the word to be. It is certainly not what you and I think.

It is about what Google deems it, and how webmasters' websites are sent summarily into oblivion in its index as a direct result of the incompetence of a few hourly paid workers.

Documented evidence exists that Google claims the right to assign your canonical content and URL to another website that carries your source code. If that website has a higher PageRank, then Google deems that website to be the canonical URL and the worthy holder of your website's contents. I read it in black and white: the higher-PageRank site is on the up, while the lower-PageRank site, deemed by Google's notorious heuristics and Bayesian probabilities to be receding and diminishing in popularity, is not worthy of canonical status.

So, all in all, canonical means one thing to us but a totally different thing to Google.

Google reflects what it sees on the web. Whether right or wrong, that is how it goes about its business. It does not care that it brings businesses to their knees in its wake.

[edited by: AlgorithmGuy at 4:53 pm (utc) on Sep. 25, 2006]

Simsi




msg:3096196
 4:55 pm on Sep 25, 2006 (gmt 0)

AG:

So by dot before the slash, do you mean:

www.widgets.com./

?

Cheers

Simsi

tedster




msg:3096213
 5:08 pm on Sep 25, 2006 (gmt 0)

It does not care that it brings businesses to their knees in its wake.

From the people I know at Google, that's just too harshly critical an evaluation. They definitely know about the power they wield at Google, and they do care, both as individuals and as a company. Google's organic search has helped to BUILD many companies -- and anyone who feels they've been "brought to their knees" by a shift at Google was first elevated by Google (for free!) but proceeded to take that elevated position for granted. Yes, it hurts, but stuff does happen. This is real life, and not some protected womb.

Google has a huge technical job to do, and of course their FIRST job is creating and maintaining their own solvent business -- their core job cannot be teaching web technology to people who are trying to run a website.

Even so, Google has been a pioneer in communicating with webmasters and getting the word out through many different channels about common technical issues. These canonical issues are one area that Google has made a lot of noise about, and as I see it, they've really gone above and beyond in getting out straight information.

[edited by: tedster at 5:11 pm (utc) on Sep. 25, 2006]

AlgorithmGuy




msg:3096214
 5:09 pm on Sep 25, 2006 (gmt 0)

AG:
So by dot before the slash, do you mean:

www.widgets.com./

?

Cheers

Simsi

As you can see below, even MSN is vulnerable. Google itself is not, because Google knows this trick and resolves it at source. And msn.com is too big a website for Google to apply duplicate-content penalties to, but your website may not be tolerated in the same manner.

Below is a header check with a dot in the MSN URL. The header tells the agent that the site exists elsewhere and that, temporarily, the contents can be found there.

The mods here might ban me for disclosing this information to you. I hope you won't let me down. Note the dot in the URL and the header report, and ignore the second redirect; that one is deliberate by MSN.

URL = [msn.com....]
UAG = Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)
AEN =
REQ = GET ; VER = 1.1 ; FMT = AUTO
Sending request:
GET / HTTP/1.1
Host: www.msn.com.
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)
Connection: close

• Finding host IP address...
• Host IP address = 207.68.173.76
• Finding TCP protocol...
• Binding to local socket...
• Connecting to host...
• Sending request...
• Waiting for response...

Receiving Header:
HTTP/1.1·302·Found(CR)(LF)
Date:·Mon,·25·Sep·2006·16:58:09·GMT(CR)(LF)
Server:·Microsoft-IIS/6.0(CR)(LF)
P3P:·CP="BUS·CUR·CONo·FIN·IVDo·ONL·OUR·PHY·SAMo·TELo"(CR)(LF)
S:·appB32(CR)(LF)
X-Powered-By:·ASP.NET(CR)(LF)
X-AspNet-Version:·2.0.50727(CR)(LF)
Location:·http://msid.msn.com/mps_id_sharing/redirect.asp?www.msn.com./(CR)(LF)
Cache-Control:·private(CR)(LF)
Content-Type:·text/html;·charset=utf-8(CR)(LF)
Content-Length:·178(CR)(LF)
(CR)(LF)

End of Header (Length = 362)
• Elapsed time so far: 1 seconds
• Waiting for additional response until connection closes...

Total bytes received = 540
Elapsed time so far: 1 seconds

[edited by: AlgorithmGuy at 5:11 pm (utc) on Sep. 25, 2006]
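The trailing-dot variant in the trace above ("www.msn.com.") can be neutralised the same way as any other Host variant: match anything that is not the canonical hostname and 301 it. A hedged Apache sketch (example.com is a placeholder; note that a Host header with a trailing dot fails the exact-match condition below, so it gets redirected along with the bare domain and the raw IP):

```apache
# Sketch: collapse every Host variant (trailing dot, bare domain,
# raw IP) onto the single canonical hostname with a 301.
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

Requests already on www.example.com fail the negated condition, so no redirect loop occurs.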

AlgorithmGuy




msg:3096249
 5:40 pm on Sep 25, 2006 (gmt 0)

tedster,

Noted what you say regarding Google in its defence.

But can you please allow the slight violation of the TOS regarding the header report I put up?

It proves a point, does it not? Contrary to how altruistic you think Google is.

kwngian




msg:3096272
 5:58 pm on Sep 25, 2006 (gmt 0)


Why do that only to your competitors?

I don't believe all webmasters come here. I don't think many even know that having both www and non-www answered on the server may cause problems. What about servers on IIS?

What about domains with wildcarding enabled? We could submit subdomains A to Z and produce 26 versions of the same thing.

Do it to everybody who doesn't correctly address this issue and Google will probably notice, but not before they start boasting that advances in their technology have let their index grow two, three or four fold (depending on how many versions you submitted).

Right now, they're too busy filtering undesirable contents. ;P

Adam_Lasnik




msg:3096653
 10:33 pm on Sep 25, 2006 (gmt 0)

Canonicalization is an important issue. Not a simple one.

Algorithmguy wrote:
Matt is a representative of Google; I doubt he has ever written anything that has not been filtered by the PR department before its contents become public. He has never been subjected to a bombardment of random questions by learned webmasters where he is on the spot. He speaks from a porthole, nudged and hinted at as to what and what not to disclose.

Neither Matt nor I submit our posts, videos, anything like that to PR. And yes, both of us (along with MANY other Googlers) have been subjected to "bombardments of random questions by learned webmasters." Clearly, Algorithmguy, you've not been to PubCon or Search Engine Strategies or our Meet the Engineers events... :)

Walkman wrote:
I wish google would put that in the algo so when they compare pages, if the domain.com and www.domain.com come with identical pages, all is OK

Lammert, in post 1043, provided an excellent description of just one of many challenges we face in this context (specifically, that www and non-www pages are NOT typically identical when our crawler fetches 'em).

Walkman wrote:
I know about the Webmaster Central and all but what % of people don't?

Hopefully fewer and fewer every day! :) Between Matt's posts and videos, the speaking engagements that Matt, Vanessa, Amanda, I, and other Googlers take part in, and much more to come... hopefully ANYONE putting up a Web site will know of Webmaster Central.

lmo4103
I just browsed around at Webmaster Help Center and found some information relevant to the current affairs that I distinctly don't remember reading before.

Indeed, and thanks for noticing! We've significantly expanded and refreshed our help docs over the last few months, and they will continue to be rapidly developed (in multiple languages). And feedback is ALWAYS welcomed in this area. We want our docs to be as informative and useful as possible!

AlgorithmGuy, I hope you'll forgive me for not quoting your extensive conspiracy theory about how we allegedly cripple our search results to earn a few extra bucks. Let me just note that our Search Quality team members (and, indeed, Googlers on the whole) are recognized and rewarded based upon how much we improve the user experience in search. We actually pore regularly over graphs measuring user happiness and other surprisingly quantitative measures. And to stave off future claims: no, we did not kill Kennedy. And we are not hiding Elvis (velvet or otherwise) in our parking garage.*

Lastly, I want to reassure y'all that we do *NOT* ever deliberately penalize sites based upon innocent mistakes, such as listing/linking both www and non-www pages and so on. We recognize and regret that not all Webmasters understand canonicalization issues and we're doing our best to encourage best practices with this stuff, but in the meantime, we're also working hard to do work on the backend to minimize related problems in this area.

Thanks for your patience and, as always, your feedback!

* Ergh, just to be safe, let me uber-clarify: We aren't hiding Elvis ANYWHERE. And he was *not* seen munching on a banana and peanut butter sandwich in one of our cafes.

[Edited to fix an icky grammatical issue and correctly identify one of Elvis' favorite snacks]

[edited by: Adam_Lasnik at 10:35 pm (utc) on Sep. 25, 2006]

powerstar




msg:3096685
 10:54 pm on Sep 25, 2006 (gmt 0)

Technically the www denotes a subdomain, and it can point to different content

Still, common knowledge: www.domain.com and domain.com are almost always the same. product.domain.com and product1.domain.com should be different, but www is another matter. I never saw www.domain.com and domain.com with different content.

Simsi




msg:3096690
 10:54 pm on Sep 25, 2006 (gmt 0)

Lammert, in post 1043, provided an excellent description of just one of many challenges we face in this context (specifically, that www and non-www pages are NOT typically identical when our crawler fetches 'em).

Silly question (for anyone)...how do I get to "post 1043" please? Edit: is this it: [webmasterworld.com...]

And secondly, Adam, does the above statement mean that www and non-www are therefore not regarded as duplicate content in that case, or does it simply mean that most sites serve different content on the two? Or maybe I should read post 1043 first :)

Cheers

Simsi

[edited by: Simsi at 11:04 pm (utc) on Sep. 25, 2006]

herewego




msg:3096703
 11:04 pm on Sep 25, 2006 (gmt 0)

I may be missing the point here, but doesn't Google give its own way round this as well, using its "preferred domain" system in its Webmaster Tools section? Not 100%, from what they say, but it's got to help, I guess...
[google.com...]

Simsi




msg:3096704
 11:06 pm on Sep 25, 2006 (gmt 0)

I may be missing the point here, but doesn't Google give its own way round this as well, using its "preferred domain" system in its Webmaster Tools section? Not 100%, from what they say, but it's got to help, I guess...

Yes, that's true, but my original question behind the thread is: why does Google rely on a webmaster to know about, understand and "fix" a canonical issue, when it seems more helpful and obvious to treat the two as one website by default and let the CONTENT determine whether they are different or not?

[edited by: Simsi at 11:09 pm (utc) on Sep. 25, 2006]

g1smd




msg:3096722
 11:17 pm on Sep 25, 2006 (gmt 0)

Just imagine if Google had spidered this thread at both www and at non-www.

The content would NOT be the same, because someone would have made an extra post in the meantime. That is an extra challenge for their bots too.

Anyway, I'm happy [webmasterworld.com] with how it works.

g1smd




msg:3096728
 11:21 pm on Sep 25, 2006 (gmt 0)

>> I never saw a www.domain.com and domain.com to be different content <<

I have, twice, in the last few weeks.

On one the www had the whole site, and non-www just a few adverts and links.

On another, domain.com had their product catalogue, and www had their forum.

texasville




msg:3096771
 12:05 am on Sep 26, 2006 (gmt 0)

>>>>>>Yes that's true but my original question behind the thread is "why does Google rely on a webmaster to know about, understand and 'fix' a canonical when it seems more helpful and obvious to have the two referred to as one website by default and to let the CONTENT
determine whether they are different or not" .<<<<<<<

I totally agree, and as I stated before: why can't Google fix it at their end instead of asking thousands of webmasters to spend many, many hours fixing the big hole in their algo?
>>>>>>>>>>>>>>>>>>
>> I never saw a www.domain.com and domain.com to be different content <<
I have, twice, in the last few weeks.

On one the www had the whole site, and non-www just a few adverts and links.

On another, domain.com had their product catalogue, and www had their forum.
>>>>>>>>>>>>>>>>>>>
and that would not cause dupe content either. See what I mean? Google is wiring OUR brains backwards!

zCat




msg:3096786
 12:30 am on Sep 26, 2006 (gmt 0)

Ergh, just to be safe, let me uber-clarify: We aren't hiding Elvis ANYWHERE. And he was *not* seen munching on a banana and peanut butter sandwich in one of our cafes.

Interesting, so you aren't hiding Elvis; and your cafes do not provide his favorite snack. This implies he is at large within the Googleplex but possibly on a healthy diet. I vote we name the next major seismic shift in the Google index "The King".

Seriously, there is one site I follow with a certain vested interest which has massive SEO problems, including canonical ones, but after a few ups and downs over the summer Google seems to have got things worked out; at least a site: search on the non-www domain returns only www pages.


All trademarks and copyrights held by respective owners. Member comments are owned by the poster.
Home ¦ Free Tools ¦ Terms of Service ¦ Privacy Policy ¦ Report Problem ¦ About ¦ Library ¦ Newsletter
WebmasterWorld is a Developer Shed Community owned by Jim Boykin.
© Webmaster World 1996-2014 all rights reserved