
Google SEO News and Discussion Forum

Why Does Google Treat "www" & "no-www" As Different?
Canonical Question
Simsi

WebmasterWorld Senior Member 5+ Year Member



 
Msg#: 3094363 posted 7:58 pm on Sep 23, 2006 (gmt 0)

...why does Google want to treat the "www" and non-"www" versions of a website as different sites? Isn't it pretty obvious that they are one site?

Or am I missing something?

 

g1smd

WebmasterWorld Senior Member g1smd is a WebmasterWorld Top Contributor of All Time, 10+ Year Member



 
Msg#: 3094363 posted 11:34 pm on Sep 27, 2006 (gmt 0)

OK. So with your proposed fix, what HTTP server response code do you get for domain.com/ and for www.domain.com/ then?

theBear

WebmasterWorld Senior Member 10+ Year Member



 
Msg#: 3094363 posted 11:35 pm on Sep 27, 2006 (gmt 0)

"It becomes pointless and a useless excercise to do a 301 on your server since DNS records will bypass the server. Not a single crawler or agent will visit the server for a request for the resolved domain."

The aname method is the preferred way of avoiding the www/non-www problem at site setup time; however, it will not fix the default.asp, index.php, index.html, etc. issues.

AlgorithmGuy



 
Msg#: 3094363 posted 11:36 pm on Sep 27, 2006 (gmt 0)

It will only fix the www and non-www problem. I still prefer the 301 redirect, because while unwanted URLs still appear in the SERPs they still deliver visitors to the real site (via the redirect).

g1smd,

Hang on a minute. Let's get this one straight.

If your ANAME RECORDS are sorted out, it does not matter that Google has your unresolved URLs in its cache. I think you are getting mixed up.

If URLs of any sort exist anywhere on the planet from before you resolved things via the ANAME RECORDS, it makes no difference. Since the records are now sorted, even if someone clicks on a wrong URL version, that agent, crawler or whatever still must go through the DNS lookup first. The agent will be told that the URL resolves to the chosen hostname. It will be impossible for you to click an unresolved URL ever again.

Even if billions of your unresolved URLs are scattered everywhere, they are all resolved the instant you fix the ANAME RECORDS.

Your 301 would be pointless and never used. It would never do anything.

But a wise choice indeed is to resolve server issues with a 301.
.
.

[edited by: AlgorithmGuy at 11:50 pm (utc) on Sep. 27, 2006]

g1smd

WebmasterWorld Senior Member g1smd is a WebmasterWorld Top Contributor of All Time, 10+ Year Member



 
Msg#: 3094363 posted 11:38 pm on Sep 27, 2006 (gmt 0)

OK. So with your proposed fix, what HTTP server response code do you get for domain.com/ and for www.domain.com/ then?

theBear

WebmasterWorld Senior Member 10+ Year Member



 
Msg#: 3094363 posted 11:46 pm on Sep 27, 2006 (gmt 0)

g1smd, for the name remaining in the DNS aname list you get a path to the server and a return code from the server at the end of that path.

For the name(s) removed from the DNS aname list, you will eventually get a "(name) could not be found" response from the DNS system.

AlgorithmGuy



 
Msg#: 3094363 posted 11:48 pm on Sep 27, 2006 (gmt 0)

The aname method is the preferred way of avoiding the www/non-www problem at site setup time; however, it will not fix the default.asp, index.php, index.html, etc. issues.

theBear,

Nicely put.

That's right. A clean-up process needs to be done. If internal linking is clean and flows through the site, any loose cannons can then be sorted out by a good mod_rewrite.

HSB to KFC then to BASE. ;)

g1smd

WebmasterWorld Senior Member g1smd is a WebmasterWorld Top Contributor of All Time, 10+ Year Member



 
Msg#: 3094363 posted 11:52 pm on Sep 27, 2006 (gmt 0)

OK, so that should be done before a site goes online for the first time.

However, if both www and non-www are already indexed, then I am still going to recommend using the redirect. This is simply because where non-www URLs are still indexed and ranking, the redirect allows people to still access the site.

Your method effectively says "no site here" for non-www accesses, and gives no clue as to where the site is now. I am sure that most people would not be savvy enough to realise that they need to add the www to their requested URL to make it work.

The redirect adds it for you.
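For reference, a minimal .htaccess sketch of the kind of redirect being recommended here, assuming Apache with mod_rewrite enabled (example.com is a placeholder):

# Send every non-www request to the www hostname with a 301
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

With this in place, a request for example.com/page.html answers with a 301 and a Location header pointing at www.example.com/page.html, which is the behaviour g1smd describes: the redirect adds the www for the visitor.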

theBear

WebmasterWorld Senior Member 10+ Year Member



 
Msg#: 3094363 posted 11:56 pm on Sep 27, 2006 (gmt 0)

This of course doesn't take care of those pests known as vanity domains, nor does it cure the https issue.

Face it folks, it can be messy by the time you are done.

WolfLover

WebmasterWorld Senior Member 5+ Year Member



 
Msg#: 3094363 posted 12:14 am on Sep 28, 2006 (gmt 0)

That's right. A clean-up process needs to be done. If internal linking is clean and flows through the site, any loose cannons can then be sorted out by a good mod_rewrite.

HSB to KFC then to BASE.

Ok, now you've lost me. What is this HSB to KFC to BASE?

Can anyone point me in the direction to make sure my internal linking is clean and flows?

The way this template is set up is:
Left column has a link to every category of products.
Each category page has several pages of product listings; from the category page, you click on a product and it brings you to the product page itself.

In the left-hand column, I have content, information, etc. type pages. With the template, every single page you go to, as deep down as it goes, still has the left, right and top navigation, so there is a way to reach any category page or information page from every single page of the site.

To me that seems easy as I know that some sites you go to, you play hell trying to find the home page again once you get two or three levels deep. This is not the case with my site, you can always find any other category, home, information, etc. page, no matter what page you are on.

Is this what you mean by clean?

BigDave

WebmasterWorld Senior Member bigdave is a WebmasterWorld Top Contributor of All Time, 10+ Year Member



 
Msg#: 3094363 posted 1:02 am on Sep 28, 2006 (gmt 0)

If you want to keep others from linking to '/index.html' (or whatever) instead of '/', then don't name it index.html. Name it '123456.html', and change your DirectoryIndex as well.

Then if someone tries to cause you problems by linking to /index.html instead of /, they will get a 404.
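A minimal sketch of that setup, assuming Apache and using BigDave's example filename:

# Serve 123456.html when the bare directory URL ("/") is requested
DirectoryIndex 123456.html

Requests for the old /index.html then return a 404, while links to / keep working as before.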

AlgorithmGuy



 
Msg#: 3094363 posted 2:33 am on Sep 28, 2006 (gmt 0)

OK. So with your proposed fix, what HTTP server response code do you get for domain.com/ and for www.domain.com/ then?

g1smd,

OK, let's clear this one up.

You have, say, www.123456.com and 123456.com.

At the ANAME you point, say, 123456.com to resolve to www.123456.com.

This method can be questioned since the subdomained version is selected. We just don't know whether Google hates or loves subdomains used in this manner. So by default, it may be wiser to opt for the www to resolve to the non-www version. But in any case, let's look at what happens when a server request is made.

Let us assume a million pages exist in Google's cache, supplementals etc. that have the non-www pages listed. A human, an agent or a crawler goes to your server, and it can only do so AFTER it has consulted the DNS RECORDS. The DNS tells it that, no matter what it is looking for, the chosen www.123456.com is the place to go.

When it gets to the host server that the ANAME RECORDS point to, the server presents the requested page with a 200 if it has changed since the last request, or a 304 status code if it has not.

If you are worried about error links, such as missing slashes, dots, etc., that exist in Google: no matter what is clicked, the agent must conform to the DNS records. There is no other way for the agent to find your host.

If Google sends out its deepcrawl bots armed with links harvested while your domain was not resolved, you are afforded protection because the deepcrawl bot must first find the URL it is looking for. It must observe the same path: DNS first, then the ANAME records tell it where the server is.

The deepcrawl bot now knows that the link 123456.com is not available. We simply do not know how Google treats this situation. DO YOU? But logic tells us that the bot should obey the DNS and ANAME RECORDS. These told it that the site it is looking for does not exist and the bot MUST treat this as "permanent", and that it "should" go to www.123456.com for the info it is looking for.

The server will present the bot with the resolved or unresolved page: a 200, or a 304. If that is not the case, then Google has got it wrong. Logic has to prevail on this. If there was a missing slash on the URL then Apache will give a 301 status in the header. Poorly configured servers may give a 302, which again is wrong and damaging to a website.

You cannot beat the simplicity of resolving at source. You kill the problem rather than nurture it.
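For context, a minimal BIND-style zone sketch of the kind of record setup being described (names and the IP address are placeholders; note that a CNAME is not permitted at the zone apex, so the bare domain normally carries the A record and www aliases it):

; both hostnames resolve to the same server
example.com.       IN  A      192.0.2.1
www.example.com.   IN  CNAME  example.com.

Both names still resolve to the same server, which is why g1smd keeps asking what status code that server then returns for each hostname.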

AlgorithmGuy



 
Msg#: 3094363 posted 2:42 am on Sep 28, 2006 (gmt 0)

OK, so that should be done before a site goes online for the first time.
However, if both www and non-www are already indexed, then I am still going to recommend using the redirect. This is simply because where non-www URLs are still indexed and ranking, the redirect allows people to still access the site.

Your method effectively says "no site here" for non-www accesses, and gives no clue as to where the site is now. I am sure that most people would not be savvy enough to realise that they need to add the www to their requested URL to make it work.

The redirect adds it for you.

g1smd,

If you have only the www and non-www problem:

Even ten years later you can resolve it at the ANAME RECORDS. You need not do a 301.

Because no path will lead to the request. The path to the request no longer exists on the internet. It cannot be found nor manufactured. DEAD and buried. Impossible. Not even Google with all its might can find that path.

Any and all agents must adhere to the DNS and ANAME RECORDS.

There is absolutely no possibility of the non-www version being found if the DNS and ANAME RECORDS say it is part of the www version, and all requests must go to the server with the instruction that the non-www answers to the www version.

I simply do not know how Google handles this logic. If it handles it in any other way, then God help us all, because this would indeed be terrible news to us all.

It would be a shocking revelation indeed.

.

AlgorithmGuy



 
Msg#: 3094363 posted 2:57 am on Sep 28, 2006 (gmt 0)

A few ideas to mull over.

theBear, please correct my theory.

Why not utilize Apache's power and benefit from unethical linking? Think like the Chinese.

A disadvantage can indeed be an advantage.

We tend to interpret a misdemeanor as a threat. This is not good for webmasters.

Better still, create a page or pages that answer to any unethical linking. Google will still consider the links in those pages that you make sure point to your proper internal pages.

An unethical webmaster is a luxury if you treat him as such. Give him the pages he wants. Benefit from it.

404's are untidy. Make all incoming links work for you. Good or bad. If you treat bad links as good, you are on the right track.

I have not gone to these extremes, but I am sure you will agree that it can be done on an apache server with the right commands.

If an unethical link points to index.html and you would rather it pointed to the root domain, 301 it, as sketched below. If your site is based on .htm and not .html, then make sure that the server default is .htm and not .html, and create an .html page so that you don't get a 404 or need to do a 301. Make that page exist and use the value of the inbound link.

I have indeed done a few things like this and it works.
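For illustration, a hedged .htaccess sketch of that first suggestion, assuming Apache with mod_rewrite (the THE_REQUEST test keeps the internal DirectoryIndex lookup from looping):

# 301 direct requests for /index.html back to the directory root
RewriteEngine On
RewriteCond %{THE_REQUEST} ^[A-Z]+\ /([^\ ]*/)?index\.html\ HTTP/ [NC]
RewriteRule ^(.*/)?index\.html$ /$1 [R=301,L]

This way inbound links to /index.html (or /somedir/index.html) consolidate on the trailing-slash URL instead of splitting the page into two indexed copies.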

.

[edited by: AlgorithmGuy at 2:58 am (utc) on Sep. 28, 2006]

AlgorithmGuy



 
Msg#: 3094363 posted 3:06 am on Sep 28, 2006 (gmt 0)

Ok, now you've lost me. What is this HSB to KFC to BASE?

Wolflover,

That is geek lingo between theBear and me, sort of coded messages.

It reads as follows.

I went to the Hong-Kong Shanghai Bank and drew money out. Got some Kentucky Fried Chicken and returned to base. I sort of thought that is what he asked in the RFC. ;). ;)
.

[edited by: AlgorithmGuy at 3:07 am (utc) on Sep. 28, 2006]

theBear

WebmasterWorld Senior Member 10+ Year Member



 
Msg#: 3094363 posted 3:17 am on Sep 28, 2006 (gmt 0)

Well the 301s actually make use of most of the bad inbound links.

You can also put pages in certain spots to make use of intentionally misdirected inbounds. A slightly different alignment of BigDave's "confuse thy antagonist".

I think, however, that WolfLover needs his question answered; your HSB KFC BASE threw a loop that way.

<added>More along the lines of link flow, clean navigation, correct internal linking.</added>

[edited by: theBear at 3:22 am (utc) on Sep. 28, 2006]

AlgorithmGuy



 
Msg#: 3094363 posted 3:23 am on Sep 28, 2006 (gmt 0)

To me that seems easy as I know that some sites you go to, you play hell trying to find the home page again once you get two or three levels deep. This is not the case with my site, you can always find any other category, home, information, etc. page, no matter what page you are on.

Wolflover,

I am not 100% sure on internal linking but........

We used to create say 100 pages and the navigation consisted of breaking down the large cluster of links.

Child links etc.

I'm not too sure. Again, we simply don't know for sure, but if I were to hazard a guess, categorised linking is better, logically far better, than the same number of links on every page. All Googlebot is interested in is finding pages and harvesting links.

If, say, you have 500 pages and each page links to every other page, then you must have about a quarter of a million links on your website (500 × 499 is roughly 250,000). I don't know, I am just making an example. This is detrimental to your website. Not even some linkfarms would have that many links.

Just a thought, it sounds like you have a large website. Be careful.

There is nothing wrong with breaking a large website into categories, and PageRank amongst them can be better ascertained by Google.

[edited by: AlgorithmGuy at 3:25 am (utc) on Sep. 28, 2006]

Simsi

WebmasterWorld Senior Member 5+ Year Member



 
Msg#: 3094363 posted 7:46 am on Sep 28, 2006 (gmt 0)

LOL.

So who still thinks Google is right to assume the "www" and "non-www" should be different by default :-D

God help the Layman :-)

"Hi Dad. Model railway website? Sure. Just make sure you fix up your A-Name record, stick in a 301 permanent redirect to avoid canonicals and ensure the META tags are written to avoid supplemental penalties and you'll be fine"

Hi Son. You need to get out more.

[edited by: Simsi at 7:55 am (utc) on Sep. 28, 2006]

amznVibe

WebmasterWorld Senior Member 10+ Year Member



 
Msg#: 3094363 posted 9:16 am on Sep 28, 2006 (gmt 0)

If I had $1 for every website that doesn't have the no-www mapped to the www, I could retire!

(It's very annoying too; webmasters, please check/fix this on your websites!)

g1smd

WebmasterWorld Senior Member g1smd is a WebmasterWorld Top Contributor of All Time, 10+ Year Member



 
Msg#: 3094363 posted 9:19 am on Sep 28, 2006 (gmt 0)

OK, nice long answer, but I want a short answer to one specific point.

What HTTP status code do you get back when you try to access domain.com/this.page.html and what code do you get when you try to access www.domain.com/this.page.html using your method?

zCat

10+ Year Member



 
Msg#: 3094363 posted 9:21 am on Sep 28, 2006 (gmt 0)

If Dad's just putting up a hobby model railway site for family and friends, whether he ranks in Google etc. is not going to matter to him all that much (and if it does, then he's getting valuable free advice from his son ;).

If he's putting up an online model railway shop and wants to monetize organic (=free) traffic, then he is going to have to invest time and / or money to ensure that his online presence meets the prevailing conditions. Exactly as if he was opening a bricks-and-mortar store: if he wants to make money off it, he's going to have to put time and / or money into finding a good location to benefit from passing (=free) traffic, and also deal with all those arcane rules and regulations which authorities burden businesses with, which generally are a much larger PITA than setting up the odd 301 redirect.

tedster

WebmasterWorld Senior Member tedster is a WebmasterWorld Top Contributor of All Time, 10+ Year Member



 
Msg#: 3094363 posted 9:34 am on Sep 28, 2006 (gmt 0)

So who still thinks Google is right to assume the "www" and "non-www" should be different by default

I do.

As was explained early in the thread, Google MUST assume this, because it technically is the case. These technical issues are the way that domains work. No opinion of ours will change that reality. When there were fewer pages on the web, when the areas of intense competition were not yet so intense, when Google was not so deeply scrutinized, then more people were oblivious to the issue. But the issue was ALWAYS THERE.

A lot of what I do is helping my clients make their content clear and unambiguous for the search engines. This "www" issue is one of a long checklist I go through -- best practices that are the foundation for helping search engines gain an optimal understanding of a website.

God help the Layman

Where isn't that true? Does a layman create a winning marketing campaign? Give a top-notch speech? Paint a masterpiece? Pick up a violin and perform a heart-moving Bach sonata? Equal opportunity does not mean that poor execution can get the same results that pristine execution will.

If someone cares about search engine traffic, and they don't want to focus on technical issues but would rather manage the business aspects, or write content, or whatever -- then they need to partner with someone who will handle the tech for them. This is the real world, and that's the way it works!

[edited by: tedster at 8:01 pm (utc) on Sep. 28, 2006]

Simsi

WebmasterWorld Senior Member 5+ Year Member



 
Msg#: 3094363 posted 9:45 am on Sep 28, 2006 (gmt 0)

If someone cares about search engine traffic

If Dad's just putting up a hobby model railway site for family and friends, whether he ranks in Google etc. is not going to matter to him all that much

If someone cares about search engine traffic, and they don't want to focus on technical issues but would rather manage the business aspects, or write content, or whatever -- then they need to partner with someone who will handle the tech for them

The point I'm making though is that it's the end-user who wants good information on a topic who suffers if the hobbyist doesn't know the techie side. Although I appreciate the Internet is a great business tool, it's as much a source of good information. The above largely assumes everyone is using the web simply to make money and that enthusiasts are happy to trawl through all the sales, auction and catalogue listings to reach the information from a guy in the know.

If "www" and "non-www" contained different information in more than 50% of websites, then fair enough. But they don't, so why default the assumption that they do?

[edited by: Simsi at 9:54 am (utc) on Sep. 28, 2006]

esllou

WebmasterWorld Senior Member 10+ Year Member



 
Msg#: 3094363 posted 10:57 am on Sep 28, 2006 (gmt 0)

Are more than 50% of websites standards compliant? I don't think so, but it doesn't mean we should just throw out standards.

This is all about trying to bring a little order to the Wild West.

photopassjapan

5+ Year Member



 
Msg#: 3094363 posted 11:02 am on Sep 28, 2006 (gmt 0)

Search engine results aren't on the verge of becoming useless because of heavy competition between corporations and hobbyists, but because of MFA, spam and stolen content.

The duplicate filter, and all filters for that matter, were put there to cover up holes in the technology which Google runs, and to weed out people who steal content, fake content, or fake services, online or offline.

As was explained early in the thread, Google MUST assume this, because it technically is the case. These technical issues are the way that domains work. No opinion of ours will change that reality. When there were fewer pages on the web, when the areas of intense competition were not yet so intense, when Google was not so deeply scrutinized, then more people were oblivious to the issue. But the issue was ALWAYS THERE.

I know that the tech side of canonical links has always been an issue. But you know, I remember that I had a website ages ago... a nothing-but-a-hobby site, and I intentionally got the links out both with www and without www because that meant SEs and people would find it either way!

...

It's not the canonical issue but rather the fact that the countermeasures for sites which don't offer anything real made the dupe content filter necessary. And all sites with canonical issues... meaning almost all sites... can become the collateral damage of this practice.

The root of the problem is...
No algo will weed out the unwanted results anymore. They are so damn easy to trick if you go at them one by one, and if you don't have a real site you won't face too many of them at a time... for you won't have the CONTENT, USABILITY and DESIGN to worry about, am I not right?

For there are techies on all three sides: at Google, on here for example, and on the "shady" side, people who want to get to the top of the SERPs although they have no content, no services and, as a matter of fact, no website. Nothing but MFA or spam.

I agree that addressing it at this point means complying with Google, which right now tries to comply with spammers.

But I think we should get the word out that this should be the other way around. For spammers can be webmasters as well... and the circle becomes a downward spiral.

Google needs to implement human decision-making for what is to be considered spam or dupe content. Even with brutally sporadic efficiency... when it comes to filling the holes in its algos... this is it.

And not the umpteenth technical shortcut that suffocates the entire free web.

Spam can probably find its way to any level Google or other webmasters can. How's this for a thread title: "DNS spamming"... (this was a joke... I hope)

asusplay

5+ Year Member



 
Msg#: 3094363 posted 11:10 am on Sep 28, 2006 (gmt 0)

Tedster, I disagree with you in one respect. You say that Google is right in discriminating between the www and non-www versions due to technical issues, and that if you want to succeed in getting SE traffic then you must learn the technical side. That's fair enough if you are techie-minded, or know enough about the IT side of things, but personally a lot of those things are beyond my capabilities, so is it therefore right that any websites I do are condemned before they've even begun? What hope does that give small online businesses and webmasters, or is the internet just supposed to be the realm of the big established corporations, white-hat techie experts and black hatters?

Also, many website owners have no access to aname records and a lot of what has been discussed in this topic. People on Windows shared hosting don't even have access to IIS! So it's hard to rectify some of the most basic problems just because Google discriminates between the two versions even when there are no links to one of them.

g1smd

WebmasterWorld Senior Member g1smd is a WebmasterWorld Top Contributor of All Time, 10+ Year Member



 
Msg#: 3094363 posted 11:18 am on Sep 28, 2006 (gmt 0)

At the end of the day, web hosting companies should be setting up their servers correctly. Many do not. That is NOT Google's fault.

If your host does not let you do it, or they cannot do it themselves, or say that it cannot be done, then it really is time to find another host.

If you come up against a host that says that it isn't needed, then point them to the Google search results for "301 redirect" or "duplicate content" and see what they say.

It's a rare thing to find hosts listing 301 redirects among their available features, even where they are available. Maybe it is time they started doing so? It is one of the factors that guides my hosting choice.

AlgorithmGuy



 
Msg#: 3094363 posted 11:33 am on Sep 28, 2006 (gmt 0)

OK, nice long answer, but I want a short answer to one specific point.
What HTTP status code do you get back when you try to access domain.com/this.page.html and what code do you get when you try to access www.domain.com/this.page.html using your method?

g1smd,

domain.com/this.page.html
www.domain.com/this.page.html

This is the same page. Yes?

I've explained the method in detail.

Surely it is not possible for a crawler or agent to go directly from the browser to a server, or for Google's bots to go from Google's databanks directly to the server. That would be totally wrong, not logical. If Google stores server info so it can ignore DNS, then everything about Google must be suspect.

Are you suggesting that the server for domain.com/this.page.html will give anything other than a 200 or a 304?

If so, I will lose faith in everything about the internet and say that everything is up the creek.

There can only be one possible path, and that leads to www.domain.com/this.page.html, because the ANAME RECORDS will tell agents that the versions are combined as one, and the header report at the server should give a 200 or a 304. If you think I am wrong, please indicate, and I will be much wiser for my error.

But I cannot see an alternative logical route.

Yes, there may be an anomaly if you have discovered that the ANAME is disregarded by an agent. That is a problem to do with the agent and not the records or DNS.

If a crawler or agent goes by stored information about a server residing at a particular IP, then the agent may be able to go directly to the server and ignore the updated DNS.

If this turns out to be the case, an exploitable loophole will have been exposed by you. And I will be one of the first to raise my hat at the discovery and start to let people know that the system is corrupt and that we should treat the internet as a free-for-all.
.

[edited by: AlgorithmGuy at 11:37 am (utc) on Sep. 28, 2006]

AlgorithmGuy



 
Msg#: 3094363 posted 11:43 am on Sep 28, 2006 (gmt 0)

At the end of the day, web hosting companies should be setting up their servers correctly. Many do not.

g1smd,

Spot on.

A friend once told me he could not implement a 301 that I had described to him. He gave me his user and pass and I could not either. We called the host and they said that their setup is not geared to resolve via a 301.

This is absolutely crazy.

asusplay

5+ Year Member



 
Msg#: 3094363 posted 11:55 am on Sep 28, 2006 (gmt 0)

Yes, but the "web hosts should" argument doesn't wash with me. Just because this should be the case doesn't mean that realistically it is the case.

Here's my scenario. I have sites done in ASP which need to be on Windows hosting in the UK. I can either go on shared hosting, which is affordable, or dedicated hosting, which in the UK is extortionate and unaffordable. Shared hosting does not allow access to IIS, so I can only do 301 redirects from the non-www to the www version on the page itself, but cannot do a redirect from /index.asp to the root. No Windows host that I know of offers this capability, and I can't change my existing sites to PHP as it's not worth the complications.

So should my sites still be penalised? And I'm not alone. There's thousands in my situation.
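For anyone in the same boat, a minimal classic ASP sketch of the page-level 301 asusplay describes (hostnames are placeholders), placed at the very top of the page before any output:

<%
' Hedged sketch: redirect the bare domain to the www hostname with a 301
If LCase(Request.ServerVariables("HTTP_HOST")) = "example.com" Then
    Response.Status = "301 Moved Permanently"
    Response.AddHeader "Location", "http://www.example.com" & Request.ServerVariables("URL")
    Response.End
End If
%>

It only covers pages that actually run the script, though, which is the point being made: the /index.asp versus / problem still needs IIS-level configuration that shared hosting rarely exposes.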

mcskoufis

10+ Year Member



 
Msg#: 3094363 posted 12:17 pm on Sep 28, 2006 (gmt 0)

asusplay, there are thousands indeed. AG, you are so right about the Google team and their arbitrary algo changes.

Matt Cutts was accepting webmaster questions at some point and I did post a couple on the issue. He answered all but these ones. And Google's silence on the issue is an indication they have problems that they don't want to reveal.

I seriously don't see what the Google engineer's comments contributed to this conversation. If you speak, speak about the matter, not about what nice guys you are. I really could not care less.

What about not spending hours trying to understand the issues AG talks about, or not even bothering to think up a solution for a problem which Google itself created, and instead focusing on making my sites better for my visitors?

Isn't that what webmasters should be doing, instead of trying to figure out solutions which may or may not help your sites? Isn't that what they say in their webmaster info? So very misleading as well.

And in my personal opinion they do this intentionally so that you open an AdWords account. They have so many investors to satisfy, they need to be showing more and more profit as time progresses. Nothing else matters, no matter what the googleguys and googlegirls say.

I have a business, I have investors, I know what it is like. So don't give me this crap, Google, please.

They drink your blood and spit in your face afterwards (judging by the answers the great Cutts is providing on his blog, it's like reading your first ever SEO-related tutorial).

And congratulations to AlgorithmGuy, who has the technical knowledge to back these claims, and to all the guys who are helping us poor w****rs to fix our sites, which will take a year and we'll be out of business.

Great job, Google. So ethical... I am touched... haha... They pride themselves on being ethical as well.

Basically my joe-punter business partner, who knows nothing about computers/websites/the internet, told me yesterday that "you know what, Yahoo seems to be getting much better results than Google lately".

I don't want to appear to be supporting Yahoo, because they are the same to me.

I think there is such a trend currently, because with all their mystical algo-cooking they have screwed up big time. Not only webmasters are hit, but also their searchers. Most non-generic queries have spammy results in the top 10.

g1smd

WebmasterWorld Senior Member g1smd is a WebmasterWorld Top Contributor of All Time, 10+ Year Member



 
Msg#: 3094363 posted 12:26 pm on Sep 28, 2006 (gmt 0)

>> domain.com/this.page.html
>> www.domain.com/this.page.html

>> This is the same page. Yes?

Yes, it is the same page. If both respond "200 OK" then that is the duplicate content problem.

If one responds "301" and the other "200" then the problem is fixed.

Does your method "correct the URL to www" when you ask for non-www or does it just allow the server to serve the content, as is?
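A quick way to check the status codes being discussed here, using curl from a command line (the domain and page are placeholders):

# Fetch only the response headers; the first line of each shows the status code
curl -I http://example.com/this.page.html
curl -I http://www.example.com/this.page.html

A correctly redirecting site returns a 301 (with a Location header pointing at the canonical hostname) for one request and a 200 for the other; two 200s mean both versions are being served, which is the duplicate content problem g1smd describes.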
