Google SEO News and Discussion Forum

Why Does Google Treat "www" & "no-www" As Different?
Canonical Question
Simsi




msg:3094365
 7:58 pm on Sep 23, 2006 (gmt 0)

...why does Google want to treat the "www" and non-"www" versions of a website as different sites? Isn't it pretty obvious that they are one site?

Or am I missing something?

 

Romeo




msg:3097454
 1:40 pm on Sep 26, 2006 (gmt 0)

What a long and heated discussion.

Why Does Google Treat "www" & "no-www" As Different?

Maybe the answer to the original question is easy:
Google treats them as different, because they are different things.

Why should G be blamed because some website owners prefer to host the same (or highly similar) content under two or even more different URLs?
The website owner is in control of his DNS A records, his webserver's config (hostnames, aliases, defaults, etc.) or .htaccess redirects, not G.
If the website's owner doesn't handle that himself but delegates to a provider who doesn't care -- well, this doesn't seem to be G's fault either.

Au contraire: if G did not treat different stuff as different, wouldn't others then raise the question "Why does Google *not* treat "www" & "no-www" as different?"

Perhaps it is worth remembering the wise statement Jim made in a very early posting to this discussion:

Bottom line: If you want to rank well and profit from the Web, be prepared to either read and learn a lot of technical stuff, or to pay someone who's already done so. Like most other things in life, there are no shortcuts, and you get what you pay for.

Amen to that.

Kind regards,
R.

Simsi




msg:3097462
 1:43 pm on Sep 26, 2006 (gmt 0)

Can't agree with that, Romeo. Providing (and finding) good information should not rely on technical ability.

lmo4103




msg:3097479
 1:57 pm on Sep 26, 2006 (gmt 0)

Funny, they treat www & no-www as different unless they are removing it (them)... wha?

Please note that using a robots.txt file and our automatic URL removal system to remove just the http or www version of your site will not work. This will remove both versions for 180 days, and we cannot manually add your site back to our index.
Webmaster Help Center [google.com]

To err is human...to really foul things up requires a computer - unknown
Just imagine what you can do with lots of computers.

hutcheson




msg:3097482
 1:59 pm on Sep 26, 2006 (gmt 0)

>Providing (and finding) good information should not rely on technical ability.

But, but ...

(goggles....head spins....stares incredulously....)

In my home universe, "technical ability" is the ability to recognize and understand good information ... and nothing more.

AlgorithmGuy




msg:3097495
 2:04 pm on Sep 26, 2006 (gmt 0)

What a long and heated discussion.
Why Does Google Treat "www" & "no-www" As Different?

Maybe the answer to the original question is easy:
Google treats them as different, because they are different things.

Romeo,

I agree that they are different. Not only that, but there are a few other versions too within a single domain. And if you host on a dedicated IP, yet another multitude of the same problem appears.

What can be done is for Google to instruct webmasters as to which approach it wants taken, so that webmasters can follow a simple procedure.

Simsi is knowledgeable about websites. How come this problem is new to him? Because this is technical stuff he had not come across before, and it is knowledge that is essential if you want to protect your website.

The issue here is not about ranking, since everybody wants to be at the top, but about being treated fairly.

I doubt any webmaster here would want the damaging procedures I have disclosed to be targeted at their website.

The litmus test is this: which webmaster would allow that sort of attack on their website?

I suspect none. But the irony is that it is happening whether we like it or not. Our websites are vulnerable, far more than you think.

I can assure you that webmasters who think they are safe because they resolved the non-www are not at all protected.

.

[edited by: AlgorithmGuy at 2:07 pm (utc) on Sep. 26, 2006]

Simsi




msg:3097507
 2:08 pm on Sep 26, 2006 (gmt 0)

But, but ...

(goggles....head spins....stares incredulously....)

In my home universe, "technical ability" is the ability to recognize and understand good information ... and nothing more.

LOL. Not quite sure how to take that. But hopefully you know what I'm getting at without the need for clarification. I have a few friends who are largely hobbyists who provide brilliant website resources on a variety of topics but wouldn't know what a canonical issue was, or how it affected them, if it walked up to them wearing a name badge.

If Google wants us to concentrate on content rather than SEO, which is the impression I get, it would be better off treating these sorts of people as the norm, and us more SEO-savvy webmasters as the ones who can adapt to fit.

[edited by: Simsi at 2:13 pm (utc) on Sep. 26, 2006]

AlgorithmGuy




msg:3097536
 2:22 pm on Sep 26, 2006 (gmt 0)

In my home universe, "technical ability" is the ability to recognize and understand good information ... and nothing more.

hutcheson,

We need your help here. ;)

This debate will get hotter unless something that is wrong is rectified by Google.

Certain elements in here can be of benefit to you in improving DMOZ's aggressive attitude toward webmasters. An opportunity for you to see the "dynamic and constructive" ability of webmasters. Especially the "unique" ways of discussion and "content rich" posts that the DMOZ posts by editors lack.

"Unique and content rich" posts. Skilful replies and resourcefulness in creativity, amongst many other things that go into websites, that DMOZ editors cast asunder as being useless to their directory.

AlgorithmGuy




msg:3097548
 2:28 pm on Sep 26, 2006 (gmt 0)

Simsi,

Believe it or not, if a website was not listed in DMOZ, it used to mean that it was at a disadvantage against a competitor that was.

Can you believe this? And it still exists today. It means an enormous advantage to a competitor who knows how to make that link in DMOZ weigh a ton.

It can be a link that catapults a website from obscurity into a first-page ranking, if not the top spot, for its given keyword.

What it boils down to is this..... You and a competitor apply to be listed in DMOZ. You have similar content. You both sweated and invested in your websites. Dished out a lot of money. Created unique content much needed by the end user. You both submit your sites to DMOZ.

One editor looks at your website, determines that it is a frills-and-spills website, sees no benefit to the directory, and discards your submission. Another editor looks at the competitor's site and likes the colors, etc. His site gets listed. He now has an enormous advantage over your website, because he will see that page in DMOZ as a website given to him. He will promote that external page to the point that the DMOZ link pointing to his website weighs a ton in Google's eyes. He stays on top, and your site could even plummet because Google now sees a vast difference between your inbound links and his.
.
.

[edited by: AlgorithmGuy at 2:34 pm (utc) on Sep. 26, 2006]

Simsi




msg:3097553
 2:32 pm on Sep 26, 2006 (gmt 0)

It can be a link that catapults a website from obscurity into a first-page ranking, if not the top spot, for its given keyword.

I wish....my site sank like a stone since I got a DMOZ link :-) Coincidence though I'm sure.

AlgorithmGuy




msg:3097563
 2:37 pm on Sep 26, 2006 (gmt 0)

I wish....my site sank like a stone since I got a DMOZ link :-) Coincidence though I'm sure.

Google tries to understand theme, topic etc.

It is possible that a link pointing to your site can mislead the algo. It's called dilution. DMOZ is known to dilute because of misappropriation of links into wrong categories. It could be a strong link, but one not based on your website. Contradictions can result.

AlgorithmGuy




msg:3097579
 2:48 pm on Sep 26, 2006 (gmt 0)

In my home universe, "technical ability" is the ability to recognize and understand good information.

hutcheson,

Is it possible to convey your home technical abilities and skilful husbandry to DMOZ, or to make a suggestion, as an example of how to keep a good house, to the other editors at DMOZ who need this kind of ability?

Or indeed, do you take this technical ability with you when you are at DMOZ?

Are they still haphazardly making decisions that unbalance Google's index?

Are the trainees still allowed to randomly pick sites out of search results as they see fit and categorize a few of them based on the inexperienced editor's ideas about those sites?

In other words, the novice is encouraged to randomly pick a gorilla-suit website based on SUITS, and he then misappropriates the proper category? With his amended title and description that are very different from the website in question. Thus diluting the gorilla-suit website, possibly causing it to tank in Google? Especially when we know a website's title and description are very important, and that a webmaster is more able to describe his site than a DMOZ editor?

Do you think that you have the skill to determine what a website is about?

.
.
.

[edited by: AlgorithmGuy at 3:02 pm (utc) on Sep. 26, 2006]

texasville




msg:3097588
 2:51 pm on Sep 26, 2006 (gmt 0)

>>>>>>>Perhaps it is worth remembering the wise statement Jim made in a very early posting to this discussion:

Bottom line: If you want to rank well and profit from the Web, be prepared to either read and learn a lot of technical stuff, or to pay someone who's already done so. Like most other things in life, there are no shortcuts, and you get what you pay for. <<<<<<<<

That is still bull! That is assuming the site owner would know all these things and be able to evaluate whether the hosting company would know.
I still say the canonical issue does not seem to be an issue in other SEs. And sure, there are issues with the way it should be set up. As in BigDave's statement that there is forum software that can serve the same content thousands of different ways, BUT it is all served on the same base href! So why should a site be punished?
And it isn't necessarily sites with faulty software! Google can't get it right on simple little static HTML sites.
Simsi, to answer your original question: Google treats them differently because Google is broken.

AlgorithmGuy




msg:3097650
 3:25 pm on Sep 26, 2006 (gmt 0)

Some facts about DMOZ editors.

Remove images from a website and many editors at DMOZ would have extreme difficulty knowing what the website is about.

They seem to list websites they like and discard websites that do not appeal to them, based solely on aesthetics.

Google on the one hand tells a webmaster that he has to comply with its guidelines or be severely punished. DMOZ on the other hand looks for aesthetic appeal and artistic unique content based on the individual tastes of editors. The two worlds could not be further apart.

God help us.
.

BigDave




msg:3097818
 4:43 pm on Sep 26, 2006 (gmt 0)

It is very simple. The site owner does not have to read and know these things, they only have to know how to provide their content.

They do have to take responsibility for finding someone that knows enough, not just getting the cheapest option out there, at least if he cares about ranking well. He doesn't have to go with the most expensive option, but he should not buy solely on price.

The site owner is like a car owner. He has to learn enough to drive the car, but he does not need to know how to fix it if he does not want to get that technical.

But that car owner does need to get *someone* to fix his car!

There are certainly good, cheap options for car repair. He might have a retired mechanic for a neighbor, who still likes to work on cars. That might be the equivalent of having a friend that runs his own servers and is willing to put your site on them.

Then there are people that just go with the first place that they see that fixes cars, or they shop around for the cheapest place. That is the equivalent of what most people do with their web hosts.

Anyone with any sense would do their research to try and find a good mechanic, and hope that they are not the most expensive. It just so happens that there are good hosting packages out there that can set things up right for a novice, and at very reasonable prices.

It is up to those people that find out about these issues to post about their problems with the different hosts, to help these technophobes understand which hosts to avoid.

To me, this all sounds like the standard personal responsibility whine: "It's not my fault, it's that mean old Google! I shouldn't have to learn anything!"

In the ideal world, Google should be able to look at your content and properly rank it without even considering links. But it ain't a perfect world. If you want to rank well in the current world, you either need to be lucky, know what you are doing, or hire someone who knows what they are doing.

Right now, Google is at the top of the heap, and this is the way they do things. Google is full of technical people, and technical people are loath to do the wrong thing by default just because the majority of the people out there use that default behavior.

And remember, just because a site serves the same content to requests on both www and non-www does not mean that they all have this problem. It takes a combination of things being set up wrong for this to happen. So any "95% of the sites are set up this way" claims are bogus.

And if that isn't good enough for you, write your own search engine that ranks strictly on content and show Google how it is done.

texasville




msg:3097835
 4:57 pm on Sep 26, 2006 (gmt 0)

>>>>>And if that isn't good enough for you, write your own search engine that ranks strictly on content and show Google how it is done. <<<<<<<

That's like saying that if I don't like the choice of cars I can drive, I should go start my own car manufacturing company. Bogus.
It still doesn't answer the question: why treat it all differently when it is on the same base href? It's silly and inefficient.
We have been given HINTS by the googleplex through the years as to what we should do to repair the sites that Google has trashed simply because their algos have huge flaws.

And many a person has been burned by poor mechanics because they do NOT have the knowledge to determine what the mechanic SHOULD know.

For now, I have done all the ranting I am going to do on this subject. I stand by my assertion that google is slowly training all webmasters to googlethink. I think too many of us are starting to enjoy puzzling out google and coming up with solutions to problems google has created.
I will give them this...google did try to solve certain problems with their search algos and give us cleaner results. But google has stopped short of cleaning up the mess their algo engineers created. I think it has no real monetary value to it for them and it is much easier to just train webmasters into thinking it is all their fault and they are the ones that need to ferret out the solutions. After all...why else does this whole google forum exist but to try to figure out why our sites are in the position they are in. Almost every new user here starts out coming here to find out why his site tanked.

lmo4103




msg:3097850
 5:10 pm on Sep 26, 2006 (gmt 0)

The question is: why did www & no-www not hurt before, and now, all of a sudden, it really hurts?

AlgorithmGuy




msg:3097852
 5:14 pm on Sep 26, 2006 (gmt 0)

They do have to take responsibility for finding someone that knows enough, not just getting the cheapest option out there, at least if he cares about ranking well. He doesn't have to go with the most expensive option, but he should not buy solely on price.

BigDave,

Before I write what I think of your post, I'd like you to know that I respect your thoughts and acknowledge your expertise in such matters. You have been a great help to many webmasters by freely giving your good advice to questions regarding many subjects to do with webmastering.

But on this occasion I'd like to comment that this thread is about a webmaster who is up against all odds. I truly believe that Simsi can create a better website than I can. Why should I be able to rank higher because I know some tricks that he does not? My tricks are TRICKS, and not skills in creating a website.

It is not his skills that are lacking. It is Google that is the problem.

BigDave, find Simsi someone who knows enough to get him out of the trouble he is in. Direct Simsi to an able webmaster. I am sure that will be the end of his problems if you are correct in your comments.

After his site is fixed, we can assume, as Google says, that his site cannot be influenced by another webmaster. We can then do Simsi a favor of the type I have described at great length, to see whether his site remains in Google or tanks despite the efforts of the expert webmaster you choose for him.

But in reality, I can assure you that however you manage to help Simsi, his website can be brought down again with little effort, and the skilled master's efforts would amount to no more than what Simsi has already invested in his website.

All it takes to become an unfortunate webmaster is someone linking to you in the wrong manner. They may forget a slash while you are on a server that does a 302 to your canonical URL. Are you aware that some servers will give a temporary (302) header to Google in that case? Two conflicting headers, or sites, now exist in Google for the same domain. All it took was a missing slash. And this is just one of a multitude of ways to get penalties inflicted on your website by Google.

.
.

[edited by: AlgorithmGuy at 5:39 pm (utc) on Sep. 26, 2006]

BigDave




msg:3097877
 5:42 pm on Sep 26, 2006 (gmt 0)

Simsi cannot be helped until he (she?) takes responsibility for his problems.

As long as it is Google's fault, it is easier to whine about how Google is making it impossible for non-technical types.

The problem is that there is bad software out there, and there are bad setups by hosting providers.

Google has made some choices in how to deal with the issue, as have MSN, Yahoo! and Ask. Some of those choices work better for some websites than others.

The thing is, no one cares about Yahoo! MSN or ASK. They probably don't even notice if they don't rank in one of those.

So, yeah, in a way Google is broken. It would be wonderful if they could easily tell that they should merge pages on a site. But they can't and they don't. You have spouted off several "facts", but the only real fact here is that several sites have canonical issues in Google.

But with all that considered, Google is not "at fault" for problems with your site. Google is responsible for Google. You are responsible for your site. If you want to rank in Google, you have to do some things to meet their requirements. If you don't do those things, it is not "Google's fault", it is yours.

So, what can Simsi do? First, admit that *he* has to do something, and not depend on Google.

1. Use all absolute URLs internally. Unless you are bandwidth-limited, you should do that anyway for a wide variety of reasons.

2. Contact your current webhost and have them set up an A record or a 301 (a quick way to check the result is sketched below).

3. If your current webhost will not make those changes, move to a new host.

4. Learn how to make those changes yourself, or pay someone else to.

5. If your current hosting setup does not allow you to make those changes, move to a new host.

6. If unwilling to do any of the suggestions, learn to live with it just like most hobby webmasters that have this problem.
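
For item 2, a minimal sketch of how to verify the result once the change is made, assuming Python 3 is available; example.com and www.example.com are placeholders, and the sketch assumes the www hostname is the canonical one. It requests the bare-domain homepage without following redirects and reports whether a single 301 points at the www version.

# Canonical-host check (sketch, placeholder domain): does the bare
# domain answer with one permanent redirect to the www hostname?
import http.client

def fetch_status(host, path="/"):
    """Return (status code, Location header) without following redirects."""
    conn = http.client.HTTPConnection(host, 80, timeout=10)
    try:
        conn.request("GET", path)
        resp = conn.getresponse()
        return resp.status, resp.getheader("Location")
    finally:
        conn.close()

status, location = fetch_status("example.com")
print("example.com ->", status, location)
if status == 301 and (location or "").startswith("http://www.example.com"):
    print("Bare domain 301s to the www version.")
else:
    print("No single permanent redirect in place; both hosts may get indexed.")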

theBear




msg:3097882
 5:46 pm on Sep 26, 2006 (gmt 0)

I'm going to jump in for just one comment, my eyes are under attack by the current seasonal crud. So I'm keeping a low monitor reading profile.

In regards to:

" The question is: Why did www & no-www not used to hurt and now, all of a sudden, it really hurts? "

I think that the basic issue has always been around and became more visible due to the sheer size of the web, its link structure, its volatility, an increase of intentional page sniper activity, possibly changes in hosting server setup defaults, in short all kinds of "minor" things could all contribute.

AlgorithmGuy




msg:3097899
 6:02 pm on Sep 26, 2006 (gmt 0)

But with all that considered, Google is not "at fault" for problems with your site. Google is responsible for Google. You are responsible for your site. If you want to rank in Google, you have to do some things to meet their requirements. If you don't do those things, it is not "Google's fault", it is yours.

So, what can Simsi do? First, admit that *he* has to do something, and not depend on Google.

BigDave,

I am inclined to agree with what you say. Your posts are solid and good reading. You've sort of brought this "runaway thread" back into reality.

Yes, your views are more geared up towards tackling this problem in the real world.

I'm a bit hot-headed and may have pushed too hard to get an unattainable result for Simsi, disclosing a lot of dirty tricks in the process.

He/she, we just don't know. It is a mystery of its own, yet another problem to solve.

Simsi




msg:3097905
 6:05 pm on Sep 26, 2006 (gmt 0)

Er....before you guys get too much into what I'm doing right or wrong, the thread isn't about me :) I do appreciate your help but really, it was just an innocent question as to why the common man needs to understand canonicals to distribute information.

My problems are separate, may lie elsewhere and yes, are probably to do with something I should take responsibility for - I wasn't suggesting otherwise. Though I have learned some stuff in the process, notably from AG, for which I am truly grateful.

So as much as I appreciate the sentiment, can you leave me out of it please before every webmaster and his dog assumes I'm an arrogant (and ignorant) idiot - which may well be true but I'm not in the mood to read it in print :-D

[edited by: Simsi at 6:08 pm (utc) on Sep. 26, 2006]

AlgorithmGuy




msg:3097920
 6:27 pm on Sep 26, 2006 (gmt 0)

Simsi,

As a final note on this thread, I'd like to make sure that a few things you may or may not know are disclosed to you regarding canonical tricks of the trade.

Look at your server headers for all misspelled possibilities that may resolve to your canonical URL (a rough way to check them is sketched at the end of this post). None of these variants do your website any good.

And get your host to set things up, or do it yourself, so that no matter what happens, all agent and crawler requests are answered by only one resident version of your website's domain.

This ensures that no matter what others might do outside your domain, only you answer the door: the same index page.

Resolve via a 301 only. Apache mod_rewrite is best. If it is not available, talk to someone about the best alternative before looking to your .htaccess.

This then ensures that Google knows all errors point to your single domain and can attribute PR etc. to one single domain instead of many.

In Google Sitemaps, ask Google to treat your www and non-www as the same website. We are not sure how Google will do this, but it is worth doing.

If you are on a dedicated IP, protect yourself via the method I mentioned in an earlier post. Isolate the numbered container to work to your benefit as a separate entity. Don't 301 that one, and don't 301 all its versions to the numbered container.
.................
The processes above will afford you protection against the trailing dot, the missing trailing slash, the non-www, and the dot without the trailing slash, in all combinations with and without the www, including numbered IP access to your site. But that last one is totally isolated; it is only a byproduct of having a dedicated IP, yet a lethal byproduct that a host won't warn you about.

It really is your server you need to look at first. And BigDave's advice is sound. We have to look out for ourselves.

Many websites live in oblivion only because they are on a poorly configured server.

We live in the cyber era. But believe me, thousands of hosts across the globe are actually destructive to websites. Unknowing customers purchase hosting plans and never see the light of day because of a host's inability to configure the server correctly.

Vast fortunes are paid to these hosts for the luxury of staying in oblivion.

Good luck.
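
As a rough way to do the header check described above, a sketch under the same assumptions as the earlier one (Python 3, placeholder hostnames and a placeholder IP address): it fetches each variant URL without following redirects and prints the status and Location header, so you can see which variants fail to hand back a single 301 to the canonical URL.

# Variant-URL header dump (sketch); add any misspelled or alternate
# hostnames that resolve to your server.
import http.client
from urllib.parse import urlsplit

VARIANTS = [
    "http://example.com/",           # non-www
    "http://www.example.com./",      # hostname with a trailing dot
    "http://www.example.com/page",   # link with a missing trailing slash
    "http://192.0.2.10/",            # dedicated-IP access (placeholder address)
]

for url in VARIANTS:
    parts = urlsplit(url)
    conn = http.client.HTTPConnection(parts.hostname, parts.port or 80, timeout=10)
    try:
        conn.request("GET", parts.path or "/", headers={"Host": parts.netloc})
        resp = conn.getresponse()
        print(url, "->", resp.status, resp.getheader("Location") or "")
    except (OSError, http.client.HTTPException) as exc:
        print(url, "-> request failed:", exc)
    finally:
        conn.close()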

[edited by: AlgorithmGuy at 6:41 pm (utc) on Sep. 26, 2006]

WolfLover




msg:3097934
 6:36 pm on Sep 26, 2006 (gmt 0)

To everyone here who has posted: I greatly value your opinions and seek to learn from those who know more than I do. Here is my problem:

My site is over 3 years old. I WAS, until September 15, doing quite well, showing up in the SERPs at number one and two for MANY of my keywords. I made no significant changes to my site in recent times. My main site (the one that used to make a living for me) is an ecommerce site. I have been reading much about the duplicate content filter and feel that this may be what has happened to my site as of September 15. If you sell widgets and you sell these widgets in many different colors, shapes, sizes, and with many different options or variations, it is very difficult to make your product descriptions completely different.

Unlike many others who sell the same items on their websites, I have always written my own descriptions rather than just copying and pasting the manufacturer's description of the product. However, having said that, there are still only so many words and ways to describe the same item when each variety has only minor differences from the others.

I found this tool on an SEO-type website where you put in two different URLs to compare them. It is called a Similar Page Checker. I have no idea whether this is a good tool or not, but for the sake of argument, let's say it is. I put in two of my product URLs. One was for a black widget with a mandarin collar and one was for a black widget with a snap-down collar. Using the similar page tool, it said that my pages were 82% similar to each other (a rough approximation of this kind of check is sketched at the end of this post).

When I tried the category URL that contains many varieties of black widgets against the URL of one product, it said my product page was 68% similar to my category page.

So, this sounds like a high percentage but since it is unknown what percentage of similarity to another page trips the duplicate content filter, I'm not sure what to change to make the pages LESS similar.

Does anyone have any idea? What is an ecommerce site to do that sells many items that are very similar, but different styles?

I've tried over the years to use varying descriptions even though the products are very similar, I've not copied and pasted descriptions of products, and even though this is an ecommerce site, I've put up many other original content pages about the products that would be of interest to the sector of people who buy and have an interest in my products.

Also, I have no problem at all with changing my site, its content, etc. and working my behind off to get it done right. I was apparently doing things right before, but as of whatever happened on September 15, I'm now doing it all wrong.

If I were to move my site from the ecommerce hosting site (using a template that has some limitations on what I can do), how would this situation affect me?
For instance:

My pages for categories right now are at www.mysite.com/cat_productcategory.cfm
My pages for individual products are at www.mysite.com/pd_black_widget.cfm

If I move my site to another host that does not use Cold Fusion and instead I build the site myself using .html pages rather than the .cfm pages, will this not remove the few pages that are still ranking well? For instance, my old page is at www.mysite.com/cat_widget.cfm and if I change it to www.mysite.com/cat_widget.html would that not be a new page and not my old page that is ranked well by the search engines?

I have one more question, sorry for the long post.

Right now, most of my over 1000 pages that are indexed are in the supplemental results. I am assuming this happened with the September 15th data push or whatever it was. I do not know as I never knew to look for supplemental results until this happened.

If your pages are now in the supplemental results, should you build a new page for that particular product or should you just change the page that is now supplemental? My understanding is if it is supplemental it may be there a long time or forever?

Lastly, I'd just like to say this. Even though Google is far from perfect, I do appreciate the fact that for the past three years my site has ranked well and earned me enough money so that I do not have to work outside the home any longer. I just want to know what is wrong with my site and I will happily change it to fit the current algo. I'm happy to learn; I have no problem with working 16 hours a day if that is what it takes (I do that many days anyway). I just would like to know exactly what is wrong so I can fix it. The mysteriousness of Google's algo is what is most frustrating. I have no problem with following the rules, and I realize it is not a right to be listed in a search engine, as the search engine owners can choose to do what they want without regard to an individual's site, especially if they feel they are doing what is best for their users. My main complaint: let us know what we are doing wrong so we can fix it.

Thank you if you actually got through my long post. ;-)
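
A rough approximation of the kind of number such a Similar Page Checker reports can be computed with just the Python standard library; this is only a sketch, not the tool mentioned above (whose method is unknown), and nobody outside Google knows what similarity level, if any, trips a duplicate filter. The file names are hypothetical local copies of the two product pages.

# Word-level similarity of the visible text of two saved HTML pages (sketch).
import difflib
import re
from html.parser import HTMLParser

class TextOnly(HTMLParser):
    """Collect the visible text of an HTML page, ignoring markup."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

def visible_words(path):
    parser = TextOnly()
    with open(path, encoding="utf-8", errors="ignore") as fh:
        parser.feed(fh.read())
    return re.findall(r"\w+", " ".join(parser.chunks).lower())

a = visible_words("black_widget_mandarin_collar.html")   # hypothetical file
b = visible_words("black_widget_snap_collar.html")       # hypothetical file
ratio = difflib.SequenceMatcher(None, a, b).ratio()
print("Word-level similarity: {:.0%}".format(ratio))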

AlgorithmGuy




msg:3097954
 6:52 pm on Sep 26, 2006 (gmt 0)

I found this tool on an seo type website where you put in two different urls to compare them. It is called a Similar Page Checker.

WolfLover,

Google is very efficient at determining that your site should not be given a penalty for pages that look similar. In fact, it is a benefit to you that you have similar pages.

That tool will destroy your happiness. It is a gimmick.

With similar pages you stand a chance of an indented result in Google's search results.

You are going about your problem of duplicate content in the wrong manner.

A page at www.123456.com/123.html is an entity on its own, and even if you had a million identical pages you would not get a penalty. But if you also had 123456.com/123.html, you are now cheating.

Look closely: the www.123456.com/123.html page is on a subdomain; 123456.com/123.html belongs to another domain.

But the above is academic if your website resolves and you would be right to be worried.

All your competitor has to do to create this anomaly is to link to that phantom page, the loose cannon created by you for not resolving your website.

I'm afraid you have to look to see whether this is the case first, before you judge yourself on duplicate content.

Two pages alike in one domain is great. But two identical pages on two different domains is duplicate content. You might have 2, 3, 4 or more.
.

[edited by: AlgorithmGuy at 6:58 pm (utc) on Sep. 26, 2006]

BigDave




msg:3098086
 7:48 pm on Sep 26, 2006 (gmt 0)

WolfLover,

While it is good to be concerned about duplicate content, and people are jumping up and down about it and September 15, don't assume that it is the only reason you could have lost position.

Sometimes it really is that you get lost in the shuffle. My personal boring old blog, which is mostly non-duplicate recipes, lost 2/3 of its traffic from Google on the 15th.

My major review site lost about 20% and has crawled back up to normal since then. Very few of its pages are susceptible to duplicate content penalties.

My site dedicated to my favorite endorphin-producing pod doubled in traffic. It is more susceptible to duplicate content than the big review site.

It is good that you are looking at the duplicate content issue, but don't concentrate on it to the point of missing other potential issues.

swa66




msg:3098114
 8:13 pm on Sep 26, 2006 (gmt 0)

With all the smartness pushed into the algorithm, would it be so much to ask to consider not triggering on www.example.com and example.com giving out the same information? Actually, that was often the standard way of doing things long before Google came about.

Duplicate content, sure, it's a search engine problem as people abuse it, but having www.example.com and example.com hand out the same information isn't intentionally cheating; just choose one to list and forget the other, as long as it remains the same content.

I'm quite upset when search engines start to tell the world how to offer content. They should offer content so that their visitors are happy, the rest is the search engine's problem.

But Google tells you not to duplicate content by having subdomains that typically duplicate content, MSN gives you severe penalties for not hosting in the USA, ... I'm getting tired of having to game search engines in order to be treated fairly by them.

ralent




msg:3098186
 9:00 pm on Sep 26, 2006 (gmt 0)

OK ... I have two built-in A records in my DNS configuration panel that I have access to. They are *.mydomain.com and mydomain.com.

I can also create a custom A record. My question is, can I just delete the mydomain.com record to prevent everyone from ever seeing the non-www version of my site, or will that create other problems?

steveb




msg:3098189
 9:04 pm on Sep 26, 2006 (gmt 0)

"Why did www & no-www not used to hurt and now, all of a sudden, it really hurts?"

By "all of a sudden" you mean THREE years ago?

This an old, old, old problem/issue that has had threads dedicated to it for quite a long time now.

And it's always been pretty straightforward: do what you can to not have the same content on multiple URLs.

BigDave




msg:3098200
 9:10 pm on Sep 26, 2006 (gmt 0)

I'm quite upset when search engines start to tell the world how to offer content. They should offer content so that their visitors are happy, the rest is the search engine's problem.

I've never heard anyone from an SE telling anyone how to offer content. They would agree with you that "the rest is the search engine's problem".

The problem is, if you want to be listed and rank well, you suddenly become concerned about why your site is not being listed.

If you want your site to be listed and rank well, you can do something about it, or just wait.

The odds are that you care more about your ranking than Google does. There are a few truly authoritative pages out there that Google wants at the top of their list. In almost every other case, there is someone else who has produced a sufficiently good page that could easily replace your page for the users.

You have to remember, no search engine is about producing the very best page for every single query. Search engines are about providing a page that is *good enough* to answer the searcher's question.

Unless your page is one of those very rare ones that Google wants to have as #1 no matter what, you will have to occasionally acquiesce and take the search engine's needs into account.

You complain that they are dictating to you how to do things, while in the same breath you are telling them how to do things. Not exactly fair of you.

[edited by: BigDave at 9:12 pm (utc) on Sep. 26, 2006]

WolfLover




msg:3098279
 10:02 pm on Sep 26, 2006 (gmt 0)

It is good that you are looking at the duplicate content issue, but don't concentrate on it to the point of missing other potential issues.

BigDave, thank you and everyone else who posted for your insight.

BigDave, when you say not to concentrate on the duplicate content issue, I really am not sure what other issue may be the culprit of my pages going supplemental and losing over half my traffic.

What could suddenly be wrong with my site that was not wrong with it before September 15? I do not have any of the blackhat techniques like hidden text, etc.

Also, my question about pages going supplemental. If your pages are now supplemental, should you just forget about them? Build new pages to replace them and delete the old pages? Most of these supplemental pages have a PR of 3 and 4 and though I know most people say that PR is not important, I am just wondering if you ever get your pages back from supplemental? Like if I change whatever may be the issue on my pages, are they likely to ever be brought back or is it a waste of time and I need to delete the pages and put my products on new pages?

Apparently supplemental results do not come up in the search engines except at the very bottom, obviously doing no one any good.

Anyone have experience with your pages going supplemental and getting them back? If so, how long did it take? What did you do to achieve this?

hutcheson




msg:3098330
 10:58 pm on Sep 26, 2006 (gmt 0)

>This debate will get hotter unless something that is wrong is rectified by Google.

That's OK. The forum can stand it.

But what you call a "wrong" happens to be the W3C definition, and I very much doubt if any W3C member cares how hot a debate forum gets. And after all, W3C tends to assume universal technical ability. (Which is valid from their perspective -- anyone they deal with can either acquire it or rent it.)

And, when the silicon chips are down, Google is going to follow the W3C.

