
Google SEO News and Discussion Forum

Why Does Google Treat "www" & "no-www" As Different?
Canonical Question
Simsi




msg:3094365
 7:58 pm on Sep 23, 2006 (gmt 0)

...why does Google want to treat the "www" and non-"www" versions of a website as different sites? Isn't it pretty obvious that they are one site?

Or am I missing something?

 

AlgorithmGuy




msg:3095435
 2:03 am on Sep 25, 2006 (gmt 0)

theBear,

Again, on an Apache server one could create 123456.com and www.123456.com containers (virtual hosts) within the configuration file.

By creating two folders that the configuration file points to, you have two websites under one domain, each with its own unique content, independent of the other.

So an agent requesting 123456.com could be shown a real-estate site, while an agent requesting www.123456.com could be shown a chat site.
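
Roughly like this in the Apache configuration file (a minimal sketch of the idea only; the folder paths are made-up examples to match the 123456.com placeholder):

NameVirtualHost *:80

<VirtualHost *:80>
    # Agents requesting 123456.com are served the real-estate site.
    ServerName 123456.com
    DocumentRoot /var/www/realestate
</VirtualHost>

<VirtualHost *:80>
    # Agents requesting www.123456.com are served the chat site.
    ServerName www.123456.com
    DocumentRoot /var/www/chat
</VirtualHost>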

Correct me if I am wrong please.

AlgorithmGuy




msg:3095442
 2:17 am on Sep 25, 2006 (gmt 0)

theBear,

Part 3

Sorry about the extra posts, but I need to clear up a few points.

If the other two suggestions I pointed out actually work, then a definite conclusion can be drawn.

123456.com is indeed what should be the canonical URL, and all else, including the www, is merely a subdomain.

When creating a subdomain, that subdomain is just a variant of the www.

Yes? No?

Is it not the case that some clever idiot in the early days simply suggested the www, and we all fell for it as being intrinsic to websites? When in fact the www is no more than a subdomain. And since Google does not like subdomains, the www is in fact a burden to webmasters and should never have been introduced to the internet the way the original idiot(s) did, leading people astray.

One could in fact change the www to internetaddress.123456.com, which would suit better and be an advantage if the keyword "internetaddress" were used in a search.

theBear




msg:3095451
 2:26 am on Sep 25, 2006 (gmt 0)

It is possible to have thousands of both domains and subdomains on one Apache server. You are thinking of name-based sites sharing a single IP; the same thing can be done with multiple IP addresses on one server.

I have seen what could be called crossfire between name-based sites on a single (shared) IP address (this may be why, under some conditions, sites set up on a shared IP address have ranking problems).

We have such critters on our small hosting setup (I have yet to see any crossfire on the sites we host, so I expect several factors are at work in this situation).

It is also possible, through server configuration and/or programming, to share the same data between sites on a server, or for that matter between different servers and even hosting systems. A bit is a bit.

AlgorithmGuy




msg:3095461
 2:37 am on Sep 25, 2006 (gmt 0)

It is possible to have thousands of both domains and subdomains on one Apache server. You are thinking of name-based sites sharing a single IP; the same thing can be done with multiple IP addresses on one server.

theBear, you are a genius.

Yes, that makes a great deal of sense.

Would you agree that the canonical terminology would be best applied to the non-www domain, clearing up confusion in the process?

I think the Apache setup clearly shows that the www is indeed a subdomain and nothing more. That makes the www a burden to webmasters: since it is useless in search, it has no reason to be in a URL.

theBear




msg:3095463
 2:43 am on Sep 25, 2006 (gmt 0)

AlgorithmGuy,

The original deal only got names because people can somewhat deal with names. With names came name-mapping services. Now we have information in different forms and places; add in a bit of code and a few suggestions, and all heck breaks loose.

Wait until folks discover those port numbers, and the fact that a server can appear on multiple ports and that even different servers can show up on the so-called web server port.

Names, numbers, and what it means to be canonical aren't the same.

[edited by: theBear at 2:44 am (utc) on Sep. 25, 2006]

theBear




msg:3095465
 2:54 am on Sep 25, 2006 (gmt 0)

"Would you agree that the canonical terminology would be best applied to the non www domain. Clearing up confusion in the process."

Sorry here we are going to disagree, the orginal setup was extremely flexible. Canonical really means that a page has exectly one name that it responds to. So it need not be restricted to the domain instead of a subdomain.

Flexibility always provides for confusion, most people have trouble dealing with vast degrees of flexibility, here there are many.

I like flexibility it provides me with many ways of doing something.

We can have dynamic static pages, static dynamic links, and all sorts of fun stuff.

[edited by: theBear at 2:55 am (utc) on Sep. 25, 2006]

AlgorithmGuy




msg:3095466
 2:55 am on Sep 25, 2006 (gmt 0)

Names, numbers, and what it means to be canonical aren't the same.

theBear,

Agreed, but can we not at least isolate a general misunderstanding, and how Google in particular has failed to address a simple problem?

A simple explanation from Google has never come about regarding the purchase of a domain name. Since it is a search engine, and a monopoly at that, it should bear the responsibility of informing people that it sees multiple websites on one site if links point to contradictory versions of a URL in its harvesting of links. It will send two deep-crawl requests to one website and come back with duplicate content.

Google should also have mentioned that the www is actually a useless subdomain. It can be nothing other than a waste of typing, and a disadvantage against a website whose DNS A records resolve at the root so that only the non-www canonical URL is displayed. I say canonical because, in hierarchical order, the root domain is at the center of anything to do with a domain, whether a subdomain or an internal folder. The domain name is the focal point of a URL; all else is additional.

theBear




msg:3095469
 3:10 am on Sep 25, 2006 (gmt 0)

Now should Google say anything about what is what here?

Let me see, I would have a lot of trouble with Google saying much except to advise folks that a page should only answer to one name.

I have major problems with folks dictating how I go about making certain a page answers to only one name. You might say that I'm a member in good standing of the "I'll give it a name of my own choosing" fringe group.

AlgorithmGuy




msg:3095495
 3:28 am on Sep 25, 2006 (gmt 0)

Let me see, I would have a lot of trouble with Google saying much except to advise folks that a page should only answer to one name.

theBear,

Unfortunately, not all webmasters and website owners have anywhere near the knowledge you have regarding this issue and server control.

The vast majority are not aware of the terminology of canonicalization. And it was Google who coined the phrase: coined it without explaining what it meant, and simply allowed the terminology to create confusion as to its meaning. It is a classic stalling tactic.

Yes, I agree with you wholeheartedly that a page should only answer to one name. But the page's ability to answer is dictated by the server it is on. If the server tells that page to answer, it will answer any request.

I have never come across a host that advises against duplicate-content possibilities. In fact, I have come across hosts that argued with me and refused to alter a temporary status code when I told them that the header response was not what I wanted: I asked for a permanent 301, and they had used a 302 to resolve. When I came down on them harder, they altered the header response to a 301. And believe it or not, some servers could not do a 301 at all. I don't know if they still exist, but a Windows-based IIS setup I came across had no ability to send a 301 header. Microsoft software. Can you believe it?
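
For reference, the difference between the two is just the status line of the response headers (illustrative responses only; example.com is a placeholder). Search engines treat a 302 as temporary and keep the old URL indexed, which is why it matters.

The permanent redirect I asked for:

HTTP/1.1 301 Moved Permanently
Location: http://www.example.com/

The temporary redirect they had installed:

HTTP/1.1 302 Found
Location: http://www.example.com/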

Hosting companies leave a lot to be desired, it seems. We place our websites in the hands of what is often a one-man-band outfit with no clue so far as search technology is concerned. He probably sits with his feet up, having a coffee and a smoke, while crawlers amass duplicate and triplicate content from the websites he is hosting.

needhelp




msg:3095499
 3:36 am on Sep 25, 2006 (gmt 0)

Sorry, just enough time to scan the thread, so sorry if this is out of line, but... in Google's Webmaster Tools you can set canonical preferences for your site! I'm not sure if this can take the place of a 301 redirect; I hope so, because my host won't do a 301 for me...

Anyway, just FYI about the tool.

jdMorgan




msg:3095500
 3:37 am on Sep 25, 2006 (gmt 0)

Google hasn't failed at all. They are providing a courtesy service -- doing most Webmasters a favor, by running extra code in their back end to "figure out" that a particular site shows up under both the www and the non-www hostnames. It is utterly unfair of you to place the blame on them.

If you want to blame someone, maybe you should blame the hosting companies. They are the ones that set up servers that respond to both the www- and non-www hostnames. Of course, if they didn't, then you'd have other people complaining about that, too, because the hosting company would then decide whether all sites on its servers were accessible at the domain name or the www- subdomain (but not both).

Nope, can't blame them, either.

A "canonical" name is just what you want it to be. It means no more nor less than its dictionary definition of "usual" or "standard," "generally-accepted," or in our case, "preferred." So you may use "example.com" as your canonical hostname, or you may use its "www" subdomain -- or any other subdomain, as your canonical hostname.

But the word implies some responsibility as well: once you pick one, use it and nothing else. Otherwise, it's not canonical anymore.

As to where "www" came from, I thought I covered that, but maybe that was in the other concurrent "canonical" thread. In ancient (pre-iPod) times, corporations and academic institutions would set up their intranet as "example.com". This would be inside their firewall -- no public access allowed. Then they might also set up another public site on the "www" subdomain outside of their firewall (intentional DMZ & port-forwarding gloss-over here). This was done because the www subdomain would be accessible to the "world-wide web." So it may not have been "standard," but it was "typical" that world-wide-web-accessible servers were put on the www subdomain. Simple as that.

But there is absolutely nothing special about the www subdomain that could be used as justification to assume the www subdomain to "be the same" as the root domain. You could use "public.example.com" or "web.example.com" for your public site if you wanted to. And as long as that's the subdomain you always used, it would indeed be the canonical name of your site.

The confusion came in when inexpensive Web servers/hosting became available to the masses, instead of being "corporate/academic only." Without trained IT departments behind us, it is left to us to learn what we need to know about the infrastructure of our sites.

So again, no one is to blame for anything, but if you are wise, you will *tell* the search engines how to handle your sub/domains, instead of waiting around for some over-complicated algorithm in the depths of the Googleplex to figure it out. And the standard ways of telling them are to permanently redirect non-canonical hostnames to the canonical one, to modify your DNS zone file so that only the canonical hostname resolves to a host, or to return a 403 Forbidden response to any attempt to access a non-canonical hostname.
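
A minimal sketch of the first method, assuming Apache with mod_rewrite, a per-site .htaccess file, and www.example.com chosen as the canonical hostname (example.com is a placeholder):

RewriteEngine On
# Permanently redirect requests for the bare domain to the
# same path on the chosen canonical www hostname.
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]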

Jim

theBear




msg:3095505
 3:40 am on Sep 25, 2006 (gmt 0)

If what I was told by some IIS types is true (and I have very good reason to believe my sources), there are facilities available to generate the 301s.

I have enough problems without getting into servers I don't use.

It isn't just the server configuration or the DNS configuration; you have other possible issues that have nothing to do with subdomain/domain naming.

[edited by: theBear at 3:44 am (utc) on Sep. 25, 2006]

AlgorithmGuy




msg:3095530
 4:19 am on Sep 25, 2006 (gmt 0)

Google hasn't failed at all. They are providing a courtesy service -- doing most Webmasters a favor, by running extra code in their back end to "figure out" that a particular site shows up under both the www and the non-www hostnames. It is utterly unfair of you to place the blame on them.

jdMorgan,

That was very good reading thanks. Informative and well presented.

But I am going to stick to my guns, and not only maintain that Google is to some degree responsible, but also contradict what you say about Google providing a courtesy service.

Without Google the internet would be far more democratic, and no single search engine could wield the power of making or breaking internet-based businesses. How can Google be providing a service to a website that it has penalized simply because the webmaster is at a loss to understand the canonical issue?

I too would do webmasters a favor if I were profiteering from their blood, sweat, and tears.

Sorry, you made some fabulous points, but you showed altruistic favor towards Google for no real reason.

walkman




msg:3095548
 5:21 am on Sep 25, 2006 (gmt 0)

Yes, technically they are different, but in the www case, 99% of the time it is the same thing.

I wish Google would put that in the algo, so that when they compare pages, if domain.com and www.domain.com come back with identical pages, all is OK; no need to penalize anything.

hutcheson




msg:3095555
 5:33 am on Sep 25, 2006 (gmt 0)

I don't think Google ever suspected people wouldn't know what "canonicalization" meant. It's a standard Comp Sci term (borrowed from Math -- in fact, you start dealing with "canonical forms" in high school algebra, where "3x + 5" equals "5 + x * 3", but you always want to write it the first way). I'm sure Google expected the readers to have at least that background. And if not, nobody could blame Google for NOT thinking their job is "teaching remedial high school algebraic terminology"!

lammert




msg:3095582
 5:58 am on Sep 25, 2006 (gmt 0)

I wish Google would put that in the algo, so that when they compare pages, if domain.com and www.domain.com come back with identical pages, all is OK; no need to penalize anything.

In the old days of the web, this was easily done. Websites were mainly stored in static HTML files, and the chances that the URLs example.com/somepath and www.example.com/somepath returned exactly the same content were quite high, even when the URLs were spidered at different times or on different days.

Nowadays, much content is produced on the fly with PHP, ASP, etc., from databases, and websites have a much higher update frequency than a few years ago. Think of items like "current date and time", "quote of the day", "daily or even hourly updates on news sites", "syndicated content", just to name a few.

The chance that the content served by example.com/somepath and www.example.com/somepath is identical is extremely small nowadays. Looking in my log files, the size in bytes for every URL changes many times a day due to the effects mentioned above. How should we expect Google -- or any other search engine -- to glue together example.com and www.example.com, if we modern webmasters serve different content for the two domain versions in the first place?

Another problem is the Last-Modified server header. With static (HTML) files, this date tells the browser and the search engine bot the last date the file was updated. When example.com/someurl and www.example.com/someurl return the same date and time in the Last-Modified header, Google can assume that they are the same file.

With script-generated content (even when you use static, "search engine friendly" URLs), the Last-Modified server header changes with every request to the current date and time, or it is not sent at all by the server. How can Google determine whether two URLs are the same if the reported modification time differs?

The latter is one of the problems for people who rewrote a website from static HTML files to PHP-generated content, even when they put a lot of effort into making all the old URLs work with internal URL rewrites. Even when a normal visitor doesn't see a difference between the static and the dynamically generated site, there is a difference for the search engine. The Last-Modified date, one of the anchors the SE algorithm uses to determine whether example.com/someurl and www.example.com/someurl are the same, no longer gives reliable information, which may in some situations cause the algorithm to start thinking they are actually two different domains with different content.
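
As an illustration (made-up responses; the dates are placeholders): a static file answers both hostnames with the same stamp, while a script-generated page tracks the time of the request, so two fetches rarely match.

Static HTML file, fetched via either hostname:

HTTP/1.1 200 OK
Last-Modified: Tue, 19 Sep 2006 14:02:11 GMT

Script-generated page, fetched twice, a minute apart:

HTTP/1.1 200 OK
Last-Modified: Mon, 25 Sep 2006 05:58:00 GMT

HTTP/1.1 200 OK
Last-Modified: Mon, 25 Sep 2006 05:59:00 GMT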

jdMorgan




msg:3095583
 6:01 am on Sep 25, 2006 (gmt 0)

On a properly-configured site, where non-canonical hostnames are redirected to the canonical host, and all internal links are utterly consistent, no back-end canonicalization processing by search providers is required. The spider need only "discover" all URLs in the site, and need not follow all possible linking paths through the site.

But on a misconfigured site, where multiple hostnames resolve directly to content without canonical redirects, and internal linking is inconsistent, the spider must traverse all link-paths, maintain a count of the various hostnames and page-names used in links, compare page contents along the way, and then use some sort of voting algorithm to determine the "probable" canonical hostname. Throw in inconsistent backlinks from other sites, and this bad dream becomes a nightmare.

The processing requirements of the two cases above differ by several orders of magnitude, and it's likely that in the second case, multiple crawl cycles --taking weeks or months-- will be required to determine the probable canonical hostname, even on a small site. Make that a site with 100,000 frequently-changing product pages, and the process may never complete.

So, I say they're doing the Webmaster a favor by even trying.

I won't address whether Google "owes" anything to anybody. By allowing them to crawl our sites, we accept the deal of them providing organic search ranking and traffic in exchange for our bandwidth and content snippets. To complain about the deal while still participating in it is disingenuous. Anyone is free to opt out if they so desire.

I recommend taking control and implementing the redirect function described at length above, and in many other threads here, to avoid the whole canonicalization issue. As for the sites run by Webmasters who don't know about all this stuff, all I can think of to say is, "Life isn't fair." Their sites will suffer a small devaluation because they have split their PageRank across multiple URLs, but due to the logarithmic nature of PR, and the fact that it's only one of several hundred ranking factors, they won't be "penalized" to any great degree. This devaluation is a self-inflicted wound due to ignorance, and is not a penalty imposed by Google.

From all that I've seen, they save actual penalties for really egregious cases -- where violations of their "Webmaster Guidelines" are intentional. If you have four URLs resolving to your home page due to canonicalization problems, you may lose a little ranking. If you have four hundred URLs for one page, that's an entirely different matter...

Jim

walkman




msg:3095592
 6:16 am on Sep 25, 2006 (gmt 0)

>> So, I say they're doing the Webmaster a favor by even trying.

JD,
I see your point, but this is not "us" vs. "Google". Sure, Google can penalize a site, started by some not-so-tech-savvy doctor, that resolves both with and without the www, but why do so? In fact, handling it is in G's best interest, given that those pages may be the most relevant ones, had a penalty not been slapped on them. Yes, I know how to redirect them (thanks to you and others in the Apache forum, btw), and I know about Webmaster Central and all, but what % of people don't?

Moncao




msg:3095624
 7:31 am on Sep 25, 2006 (gmt 0)

[video.google.com...]

Simsi




msg:3095673
 9:08 am on Sep 25, 2006 (gmt 0)

I wish Google would put that in the algo, so that when they compare pages, if domain.com and www.domain.com come back with identical pages, all is OK; no need to penalize anything.

My thoughts exactly, Walkman. Surely Google can tell if the "www" and "non-www" contain different content and treat them as such, or, if they are the same, treat them as one. Penalising (as that appears to be the general assumption) for dupe content is unfair on the common man, who didn't study algebraic maths at school and is clearly therefore an outcast!

It appears that if you want good information from Google, you have to hope the author is a tech wizard too :(

[edited by: Simsi at 9:12 am (utc) on Sep. 25, 2006]

g1smd




msg:3095684
 9:27 am on Sep 25, 2006 (gmt 0)

I prefer to work to this logic. You buy the rights to use:

domain.com

on the Internet, the Internet being much, much more than the web; and then you set up the services on it that you require:

www.domain.com
smtp.domain.com
pop.domain.com
ftp.domain.com
irc.domain.com
etc

The fact that web server software usually assumes that you'll have a website directly at domain.com is a quirk that you can easily correct. You could just as easily have the mail server there or something else, or nothing.
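
In DNS zone file terms, that logic looks roughly like this (a sketch only; the IPs are placeholders, and the SOA and NS records are omitted):

; hypothetical zone for domain.com
domain.com.       IN  A      192.0.2.10
www.domain.com.   IN  A      192.0.2.10
smtp.domain.com.  IN  A      192.0.2.20
pop.domain.com.   IN  CNAME  smtp.domain.com.
ftp.domain.com.   IN  A      192.0.2.30
domain.com.       IN  MX 10  smtp.domain.com.

Drop the first A record and nothing answers at the bare domain.com at all; the services exist only at whatever subdomains you define.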

AlgorithmGuy




msg:3095738
 10:28 am on Sep 25, 2006 (gmt 0)

My thoughts exactly, Walkman. Surely Google can tell if the "www" and "non-www" contain different content and treat them as such, or, if they are the same, treat them as one. Penalising (as that appears to be the general assumption) for dupe content is unfair on the common man, who didn't study algebraic maths at school and is clearly therefore an outcast!

It appears that if you want good information from Google, you have to hope the author is a tech wizard too

Simsi,

"clearly therefore an outcast"

I am assuming you are referring to what hutcheson posted. If so, instead of thanking all the posters here for their knowledgeable assistance with your original post, you seem to display a tendency to berate some people's help towards your question.

You give no incentive to be helped. I just read the entire thread again; hutcheson made probably the best point so far. A multitude of information has been directed your way, and not one word of thanks, other than "you are missing the point" or your sad misinterpretations of a damned good post.

I believe a novice webmaster would walk away a much more learned webmaster after reading this thread. It does not take rocket science to see that theBear and others pulled in the wealth of comments here, attracting some of the best webmasters in the process, and you simply cannot see it.

Simsi




msg:3095748
 10:47 am on Sep 25, 2006 (gmt 0)

I am assuming you are referring to what hutcheson posted. If so, instead of thanking all the posters here for their knowledgeable assistance with your original post, you seem to display a tendency to berate some people's help towards your question.

You give no incentive to be helped. I just read the entire thread again; hutcheson made probably the best point so far. A multitude of information has been directed your way, and not one word of thanks, other than "you are missing the point" or your sad misinterpretations of a damned good post.

I believe a novice webmaster would walk away a much more learned webmaster after reading this thread. It does not take rocket science to see that theBear and others pulled in the wealth of comments here, attracting some of the best webmasters in the process, and you simply cannot see it.

You may be right there - I'm trying to give up smoking, so my tolerance levels are dipping and my vision is blurred :D I agree there have been some very interesting posts and some good advice in the process.

But the original point I was making was that it seems odd that Google expects the average webmaster to display this level of SEO knowledge, and that it affects the end-user as much as the webmaster seeking out traffic. The responses, while often enlightening, largely come from an SEO-savvy angle. All well and good... but look at it this way: if you were an expert in blue widgets and provided information, you're now expected to understand the canonical issue to reach your audience. Conversely, the audience may need you to understand this to find the information they are after.

The most helpful response to the original question was the point that a non-www domain can contain different content from the www domain. But I'd still argue that most people would expect the "www" to contain the default content anyway, particularly on a non-commercial website.

Yes, I can see the benefits of knowing SEO, and yes, I can see that there are logical ways to approach the usage of subdomains. But the intention of the original post was not to elicit ways to best utilise subdomains, merely to ask why Google expects the common man to understand the canonical issue.

No offense, arrogance or disrespect intended at all - if I wanted that I'd have gone to Usenet rather than WebmasterWorld ;) I never expected the post to provoke quite so much debate, to be honest.

Cheers

Simsi

[edited by: Simsi at 10:58 am (utc) on Sep. 25, 2006]

simonmc




msg:3095753
 10:54 am on Sep 25, 2006 (gmt 0)

Regardless of all the so-called fixes to the problem, it would take next to no effort for Google to compare the www and the non-www. If the www equals the non-www, then index one only. If the non-www is not equal to the www, then index both.

Not only would it save Google massive amounts of hard disk space and duplicate-content processing power, but it would also solve this stupid and unnecessary "NON-ISSUE".

g1smd




msg:3095755
 10:57 am on Sep 25, 2006 (gmt 0)

It is sad, but true, that most hosting companies have no clue about these matters either... but then again, most "hosting companies" are nothing but resellers of someone else's services, and they think they need no knowledge to start up in the business.

In the last year I have recommended to several dozen people that they dump their current hosting immediately. This was after they were told either that a 301 redirect was not possible, or that it was not required (and that the meta refresh, or 302 redirect, that they had just installed was all that was needed).

By education - which is what these forums are all about - maybe the word will eventually get out to most of those who need to know. In the meantime, by fixing these things on your site, you are ahead of the rest of the pack: those who are only publishing content and chasing links.

[edited by: g1smd at 10:59 am (utc) on Sep. 25, 2006]

AlgorithmGuy




msg:3095760
 10:58 am on Sep 25, 2006 (gmt 0)

Simsi,

In return, I too apologise to you.

I have a lot of respect regarding your knowledge and comments. Please accept my apologies as freely given.

Collectively, some very good webmasters here have made some unbelievably informative posts. I have learned a few things I did not know, thanks to your questions beckoning such a wealth of replies. Your contribution, too, ranks amongst them.

Unfortunately, some of us webmasters are a bit hot-headed. I'm no exception to this anomaly amongst us.

g1smd




msg:3095763
 11:02 am on Sep 25, 2006 (gmt 0)

>> hot headed <<

I was like that once. Heck, check some of my posts here from four years ago. However, these sorts of issues I have now been mulling over for several years, and I have considered all sides many times before coming to a solid conclusion.

[edited by: g1smd at 11:03 am (utc) on Sep. 25, 2006]

Simsi




msg:3095764
 11:02 am on Sep 25, 2006 (gmt 0)

Simsi,
In return, I too apologise to you.

I have a lot of respect regarding your knowledge and comments. Please accept my apologies as freely given.

Collectively, some very good webmasters here have made some unbelievably informative posts. I have learned a few things I did not know, thanks to your questions beckoning such a wealth of replies. Your contribution, too, ranks amongst them.

Unfortunately, some of us webmasters are a bit hot-headed. I'm no exception to this anomaly amongst us.

No worries, AG. Nothing wrong in what you said at all. I can take criticism when it's constructively put, and I agree anyway - LOL :-)

I just tend to look at things like this from a less techie angle sometimes, and it occurs to me that Google (etc.) maybe get a bit too bogged down in technicalities rather than looking at things from a grass-roots level. Not surprising - I guess it's why we have UAT and go-betweens between IT departments and the "users" :-)

I think this canonical thing is just one of those things that they should flip on its head: assume that if the content is the same on both variations, it is the same website, then work backwards from there. Just fairer on the average webmaster guy, IMO. Hell, what do I know - I've just gone all supplemental and lost all my PR :-D

PS: If WebmasterWorld had some smilies, maybe that would make life less stressful... ;-)

[edited by: Simsi at 11:12 am (utc) on Sep. 25, 2006]

g1smd




msg:3095777
 11:17 am on Sep 25, 2006 (gmt 0)

Problem is: what is a site? What is a domain? What is a sub-domain? And are they controlled by the same entity? Could one be a competitor encroaching on the namespace?

After www and non-www, people are going to want the same from .com vs. .co.uk; but to make that choice they would need to know, for sure, whether those are really controlled by the same entity.

AlgorithmGuy




msg:3095780
 11:19 am on Sep 25, 2006 (gmt 0)

>> hot headed <<
I was like that once. Heck, check some of my posts here from four years ago. However, these sorts of issues I have now been mulling over for several years, and I have considered all sides many times before coming to a solid conclusion.

g1smd,

The maturity expressed in your replies complements the impulsively made posts.

If we were all the same, no diversity would exist. Much like it does not exist in DMOZ, where the vast majority of editors are of a robotic intelligence, all endowed with the same rhetoric against webmasters.

Slightly off topic, sorry, but I could not resist that one. Take it with a pinch of salt.

AlgorithmGuy




msg:3095798
 11:41 am on Sep 25, 2006 (gmt 0)

I think this canonical thing is just one of those things that they should flip on its head: assume that if the content is the same on both variations, it is the same website, then work backwards from there. Just fairer on the average webmaster guy, IMO. Hell, what do I know - I've just gone all supplemental and lost all my PR

;)

Simsi,

I have seen websites go under for exactly what you describe. I feel for the webmaster in question, because I had trash content and stayed on top while the more informative site in our niche went under.

Google has put up information saying that another webmaster cannot harm your website.

A top competitor was once giving us hassle over copyright issues. We noticed that this competitor, who ranked above us, had canonical weaknesses, so we submitted four versions of his website to search engines, making sure that the crawlers picked up on them. Google's cache later displayed all four versions, and all four had identical content. Not a month after that, the Bourbon update relegated that site to oblivion. We were supported by links from DMOZ, another advantage we had over our competitor, who was not listed in the directory because editors deemed his site to be of useless content, while our site must have been deemed useful. I say that his site was an authority in our niche and we played all the tricks with trash content. DMOZ got it wrong.

OK, I spilled the beans on what I did, but can you see the point? We were actually doing that site a favor by trying to promote all their domain versions. Nothing wrong with that, and no different from you pointing a link at another site.

[edited by: AlgorithmGuy at 11:52 am (utc) on Sep. 25, 2006]
