Google SEO News and Discussion Forum

This 246-message thread spans 9 pages; this is page 4.
Why Does Google Treat "www" & "no-www" As Different?
Canonical Question
Simsi

WebmasterWorld Senior Member 5+ Year Member



 
Msg#: 3094363 posted 7:58 pm on Sep 23, 2006 (gmt 0)

...why does Google want to treat the "www" and non-"www" versions of a website as different sites? Isn't it pretty obvious that they are one site?

Or am I missing something?

 

zCat

10+ Year Member



 
Msg#: 3094363 posted 12:30 am on Sep 26, 2006 (gmt 0)

Ergh, just to be safe, let me uber-clarify: We aren't hiding Elvis ANYWHERE. And he was *not* seen munching on a banana and peanut butter sandwich in one of our cafes.

Interesting, so you aren't hiding Elvis; and your cafes do not provide his favorite snack. This implies he is at large within the Googleplex but possibly on a healthy diet. I vote we name the next major seismic shift in the Google index "The King".

Seriously, there is one site I follow with a certain vested interest which has massive SEO problems, including canonical issues, but after a few ups and downs over the summer Google seems to have got things worked out; at least a site: search on the non-www domain returns only www pages.

texasville

WebmasterWorld Senior Member 5+ Year Member



 
Msg#: 3094363 posted 12:43 am on Sep 26, 2006 (gmt 0)

>>>>>at least a site: search on the non-www domain returns only www pages. <<<<<

That don't sound right...

g1smd

WebmasterWorld Senior Member g1smd is a WebmasterWorld Top Contributor of All Time 10+ Year Member



 
Msg#: 3094363 posted 12:45 am on Sep 26, 2006 (gmt 0)

If you search for site:www.domain.com you should get to see only www pages.

If you search for site:domain.com you should get to see both www and non-www pages if both exist.

If the latter search returns only www pages and there are no non-www pages to be seen, then that is a Good Thing.

The latter count should be the same as a site:domain.com inurl:www count too, and site:domain.com -inurl:www should return zero in that case.
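
As a hypothetical worked example: if site:domain.com returns 120 results, then site:domain.com inurl:www should also return about 120, and site:domain.com -inurl:www should return zero, once only the www version is indexed.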

[edited by: g1smd at 1:03 am (utc) on Sep. 26, 2006]

TerrCan123

10+ Year Member



 
Msg#: 3094363 posted 1:01 am on Sep 26, 2006 (gmt 0)

A long time ago none of this even mattered to small webmasters who first discovered Google: Google determined your site was valuable and sent traffic your way regardless of these problems.

Now I see these .info domains spamming message boards and sitting at the top of SERPs, and all I can say is Google never addressed this www issue among others, and instead penalized millions of sites that never needed to be. In the process many of those sites stopped being maintained, the affiliate sites that needed some money for hosting and other expenses dried up, and now all that is left is the very top sites and the .info spammers.

A few mid-tier sites have somehow jumped through all the hoops and do show up in the SERPs, but in time they too will be penalized, only to be replaced by a younger crop.

Also, the blog sites that contain only drivel are placed at the top along with MySpace and eBay. What was once a search engine for expert sites has evolved into an AdWords portal that only gives out the information you want if you give exact parameters and keep adding them to weed out the chaff.

As far as I am concerned, www and non-www are the same, always were the same, always will be the same; and if 5% of websites utilize them some other way, then they should have been the exception, not the other 95% that couldn't care less about it or what it means.

I wish Google all the best, as they have given me loads of free traffic over the years, but this seems like such an easy issue, far easier than anything else they have worked on. Why they haven't removed all the penalties by now and just used one version without having to be told is rather amazing.

texasville

WebmasterWorld Senior Member 5+ Year Member



 
Msg#: 3094363 posted 1:03 am on Sep 26, 2006 (gmt 0)

g1smd-
You are right... wasn't thinking... however, I have a question for you:

A site has all pages supplemental except for the www.example.com/ page (index.html).
When doing a site search using site: http://example.com -www I only get the supplemental pages and not the example.com/ page.
Site has the 301 in place to www from non-www.
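
For anyone who wants to double-check that kind of redirect, here is a minimal sketch in Python (example.com is a placeholder, and the third-party requests package is assumed):

import requests

# example.com is a placeholder for the real domain.
r = requests.get("http://example.com/", allow_redirects=False, timeout=10)
print(r.status_code)                # 301 means the permanent redirect is in place
print(r.headers.get("Location"))    # should point at http://www.example.com/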

g1smd

WebmasterWorld Senior Member g1smd is a WebmasterWorld Top Contributor of All Time 10+ Year Member



 
Msg#: 3094363 posted 1:06 am on Sep 26, 2006 (gmt 0)

That was a statement. What is the question?

Quick. It's 02:07 here. Bedtime...

g1smd

WebmasterWorld Senior Member g1smd is a WebmasterWorld Top Contributor of All Time 10+ Year Member



 
Msg#: 3094363 posted 1:16 am on Sep 26, 2006 (gmt 0)

A question you should ask yourself is: what URLs are those Supplemental Results actually for?

Are they for URLs that issue a redirect, or that are 404? If so, then that is correct. They should be Supplemental, and they will disappear in a year.

If they are for URLs that return a "200 OK" then you likely have a Duplicate Content problem on your hands. Can you verify that there is only one live "200 OK" URL for each "page" of content that you have?
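
One rough way to run that check, sketched in Python (all URLs are placeholders; the third-party requests package is assumed):

import requests

# Placeholder URLs: common variants of one "page" of content. Anything
# beyond a single "200 OK" response is a potential Duplicate Content problem.
variants = [
    "http://example.com/",
    "http://www.example.com/",
    "http://www.example.com/index.html",
]

live = []
for url in variants:
    r = requests.get(url, allow_redirects=False, timeout=10)
    print(url, r.status_code, r.headers.get("Location", ""))
    if r.status_code == 200:
        live.append(url)

print("URLs answering 200 OK:", live)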

coldfused

5+ Year Member



 
Msg#: 3094363 posted 1:16 am on Sep 26, 2006 (gmt 0)

I have a question guys.

What if, in IIS, my sites are already set up as domain.com, not www.domain.com? Do I have to set up a website for www.domain.com, put all the files over from domain.com, and redirect domain.com to www.domain.com?

texasville

WebmasterWorld Senior Member 5+ Year Member



 
Msg#: 3094363 posted 1:23 am on Sep 26, 2006 (gmt 0)

They are all 200 OK. I do seem to have that problem, except that I have had the 301 in place for over a year, so they should not be supplemental. I have checked everything; used Xenu.
But should they appear at all? They are all www pages, and I used the -www command.

g1smd

WebmasterWorld Senior Member g1smd is a WebmasterWorld Top Contributor of All Time 10+ Year Member



 
Msg#: 3094363 posted 1:38 am on Sep 26, 2006 (gmt 0)

If you see a load of normal www URLs when you do a site:www.domain.com search, and then you see the same www URLs, but this time marked as Supplemental, when you do site:www.domain.com -inurl:www, then those Supplemental Results are usually for previous versions of content at those URLs and can safely be ignored.

g1smd

WebmasterWorld Senior Member g1smd is a WebmasterWorld Top Contributor of All Time 10+ Year Member



 
Msg#: 3094363 posted 1:41 am on Sep 26, 2006 (gmt 0)

>> What if, in IIS, my sites are already set up as domain.com, not www.domain.com? Do I have to set up a website for www.domain.com, put all the files over from domain.com, and redirect domain.com to www.domain.com? <<

Leave it as it is. Leave the site on domain.com URLs. Get a site-wide 301 redirect from www to non-www if www is already active. Make sure that only non-www URLs are indexed for your site. If www is not active then leave it that way.

BigDave

WebmasterWorld Senior Member bigdave is a WebmasterWorld Top Contributor of All Time 10+ Year Member



 
Msg#: 3094363 posted 2:05 am on Sep 26, 2006 (gmt 0)

Why doesn't google compare example.com and www.example.com to see if they are the same?

They do, and they have for years! But it takes them a while to make sure that it is the proper thing to do. If you change your site often, or if it is dynamic in nature, you make their job MUCH harder, if not impossible.

This isn't Google's fault. It is the domain owner's fault.

The domain owner is not necessarily expected to know what they are doing as far as SEO and webmastering go (this really is a webmaster issue, not an SEO one). But the domain owner is responsible for hiring competent support.

If your hosting service does not supply you with proper support, get a new hosting service. If you can't, for some reason, it is still your responsibility to figure out who you need to hire to fix it.

Running a successful internet presence is a technical job. If you do not have the skills, yet you wish to be successful, you will have to pay for those skills or accept the consequences.

As for making assumptions that www and non-www are the same: I know of seven major educational institutions that do not run the same info on both domains. The welcome-level pages are the same, but the deep professor and class pages are not. I would be quite willing to bet that Google considers these .edu pages to be more important than those of any member of WebmasterWorld.

tedster

WebmasterWorld Senior Member tedster is a WebmasterWorld Top Contributor of All Time 10+ Year Member



 
Msg#: 3094363 posted 2:30 am on Sep 26, 2006 (gmt 0)

Silly question (for anyone)...how do I get to "post 1043" please?

Not silly at all. I think Adam picked up on lammert's post count rather than the post number. The post is earlier in this very thread, and it is #3095582

Nowadays, much content is produced on the fly with PHP, ASP, etc. from databases, and websites have a much higher update frequency than a few years ago. Think of items like "current date and time", "quote of the day", "daily or even hourly updates on news sites", "syndicated content", just to name a few.

The chance that the content served by example.com/somepath and www.example.com/somepath is identical is extremely small nowadays. Looking in my log files, the size in bytes for every URL changes many times a day due to the effects mentioned above. How should we expect Google--or any other search engine--to glue together example.com and www.example.com, if we modern webmasters serve different content for both domain versions in the first place?

Another problem is the Last-Modified server header. With static (HTML) files, this date tells the browser and search engine bot the last date the file was updated. When example.com/someurl and www.example.com/someurl return the same date and time in the Last-Modified header, Google can assume that they are the same file.

With script-generated content (even when you use static "search engine friendly" URLs), the Last-Modified server header changes with every request to the current date and time, or it is not sent at all by the server. How can Google determine whether two URLs are the same if the last-modified time differs?
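
A sketch of that comparison in Python (example.com is a placeholder; the third-party requests package is assumed):

import hashlib
import requests

def fingerprint(url):
    # One fetch: return the Last-Modified header plus a hash of the body.
    # The URLs passed in below are placeholders.
    r = requests.get(url, timeout=10)
    return r.headers.get("Last-Modified"), hashlib.md5(r.content).hexdigest()

print(fingerprint("http://example.com/somepath"))
print(fingerprint("http://www.example.com/somepath"))

Static files tend to produce matching lines; script-generated pages often differ on every request, which is exactly what makes the two hostnames hard to glue together.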

[edited by: tedster at 8:09 am (utc) on Sep. 26, 2006]

texasville

WebmasterWorld Senior Member 5+ Year Member



 
Msg#: 3094363 posted 2:32 am on Sep 26, 2006 (gmt 0)

g1smd- they are never normal. Always supplemental. That's why I have the question about the index page disappearing when I do the -www search.
I can't figure out why they are supplemental, other than that they have no inbound links except from inside the site. All links just go to the .com.

texasville

WebmasterWorld Senior Member 5+ Year Member



 
Msg#: 3094363 posted 2:39 am on Sep 26, 2006 (gmt 0)

BigDave-
>>>>>This isn't Google's fault. It is the domain owner's fault.<<<<<

Sorry, but you are wrong. It IS google's fault. Google created that problem, not anyone else. If you have to do things on your server especially to accommodate ONE search engine, then it is that SE's fault. They didn't even realize what they were doing for a long time, and then it was released to the public by GG in roundabout ways. It was and is a mistake that has caused incredible problems for everyone. Pity the poor hobby site that is banished to obscurity because he can't afford to hire specialists to guide him through the maze that google has created and can't seem to fix.

And I feel really sorry for the students who must have a very difficult time finding their information at those .edu sites, because they have to figure out first whether it is www or non-www they are supposed to go to. Sounds like a typical upper-ed snafu to me.

BigDave

WebmasterWorld Senior Member bigdave is a WebmasterWorld Top Contributor of All Time 10+ Year Member



 
Msg#: 3094363 posted 3:46 am on Sep 26, 2006 (gmt 0)

Sorry, but you are wrong. It IS google's fault. Google created that problem, not anyone else.

The only way that it could be Google's fault, is if they are your host. The last time I checked, the pages that they host don't have canonical problems because every user gets their own subdomain.

Google does identify and combine pages when it is possible. You, as a site owner, and whoever you pay to set up your server, are the ones that make it impossible for them.

They didn't even realize what they were doing for a long time, and then it was released to the public by GG in roundabout ways.

When did you notice this problem? Because they have been combining www and non-www sites since at least 2001.

Or did you mean that they did not realize that more sites were setting things up in a way that made it impossible for them to ever be combined?

Or are you talking about when the algo was changed, so that some that had been successfully combined, were now split up? If you had been following best practices, as many members here, including GoogleGuy, suggested, you would not have had any problem.

Pity the poor hobby site that is banished to obscurity because he can't afford to hire specialists to guide him through the maze that google has created and can't seem to fix.

Yeah, it's a pity. But it's a hobby site, after all, so why do they care so much?

As for paying the specialists: on one site I pay a $3.99/month hosting fee, and they set up the servers well enough that I don't have to do anything to fix the canonical issues.

There are also sites out there with forum software that gives you thousands of different URIs for each page, with around a dozen parameters in each URI. Of course, it's Google's fault that they can't properly index those sites either.

Give me a break.

And I feel really sorry for the students who must have a very difficult time finding their information at those .edu sites, because they have to figure out first whether it is www or non-www they are supposed to go to. Sounds like a typical upper-ed snafu to me.

On the contrary, the students have little problem finding those pages, because the servers and links are all set up properly. Not only can the students find the information, so can the SEs.

You see, universities understand subdomains, even if you don't. They have had thousands of servers on subdomains since before there was a web.

And just to add to your confusion, start looking around at all the .edu sites that also have a www2, www3, etc. Usually their root page is the college's home page for those that mistype. That page will have absolute links to the www page, and the rest of the server is dedicated to other business.

Should google also treat URLs that begin with "https:" the same way as those that start with "http:", just because a lot of sites serve up the same content on both?

Simsi

WebmasterWorld Senior Member 5+ Year Member



 
Msg#: 3094363 posted 7:41 am on Sep 26, 2006 (gmt 0)

Tedster: thanks :)

Despite the apparent difficulties Google has in distinguishing content on www and non-www domains, I still think Google does this the wrong way round. Reading all the above, I've seen nothing that suggests a www and non-www domain should be treated differently as "the norm". The percentage of sites that provide different content on the two is nowhere near 50%. Probably not even 5%.

And the average Joe Hobbyist is stuffed, as is the audience who would benefit from his or her expert knowledge. Joe needs to get techie, and fast, or else his users will have to trawl through the affiliate and AdSense spam, generating money for other people, to get his info.

Sorry. But it just seems upside down to me.

g1smd: I can appreciate why it has its uses for experienced webmasters. But to me it's a counter-productive measure that helps to alienate the everyday information provider. I know it's just the age we live in, but little things like this help shape the future, and it seems the hobbyist is fast becoming an unwanted breed.

We're heading for a Catch-22, IMO. Human-edited directories have their foibles, in that they are slow to maintain, sometimes biased and often opinionated, but they should arguably provide wheat rather than chaff if done properly. Algos, on the other hand, are having to get very technical in order to try and beat the spam, although in Google's case a lot of that spam creates good revenue, so there is a potential conflict of interest.

Until an Algo can successfully determine usefulness from content alone, if indeed that is ever possible (or wanted!), or until it can successfully determine the usefulness of web pages from user tracking, we're stuck between a rock and a hard place it seems.

In the meantime, I feel those responsible for producing the algos should at least try to balance tech with common sense and this, to me, is one of those instances.

[edited by: Simsi at 7:59 am (utc) on Sep. 26, 2006]

tedster

WebmasterWorld Senior Member tedster is a WebmasterWorld Top Contributor of All Time 10+ Year Member



 
Msg#: 3094363 posted 8:15 am on Sep 26, 2006 (gmt 0)

This is the way servers work on the Internet -- and every search engine wrestles with it, not just Google. Let me try an analogy to make the point more clearly.

You would think someone foolish if they rented a car without knowing something about how a car is operated, wouldn't you? Why should it be any different for someone who rents space on a server? If they want healthy results, they will need to know a little bit about how the technology works.

kwngian

10+ Year Member



 
Msg#: 3094363 posted 8:54 am on Sep 26, 2006 (gmt 0)


That's a biased opinion.

People in their own areas of expertise, for example finance or creative work, aren't interested in webmastering, at least not to the extent of going through the nitty-gritty. Filtering out their knowledge just because they aren't technical enough just isn't right, nor in the interest of the people they're supposed to serve.

Simsi

WebmasterWorld Senior Member 5+ Year Member



 
Msg#: 3094363 posted 9:46 am on Sep 26, 2006 (gmt 0)

You would think someone foolish if they rented a car without knowing something about how a car is operated, wouldn't you? Why should it be any different for someone who rents space on a server? If they want healthy results, they will need to know a little bit about how the technology works.

But you don't expect them to have to re-wire the electrics to make the headlamps work properly, Tedster.

[edited by: Simsi at 9:49 am (utc) on Sep. 26, 2006]

AlgorithmGuy



 
Msg#: 3094363 posted 10:17 am on Sep 26, 2006 (gmt 0)

Canonicalization is an important issue. Not a simple one.

Adam_Lasnik

This is soothsayer talk. An allegorical language indoctrinated by google to its specially chosen scholastically able orators who are ever eager to sermonize google's brainwashing of webmasters. Give the webmasters something to chew on, talk to them but don't say anything. Utter to them our words of rhetoric, but don’t make any promises. Tell them of our utopian lifestyle at google, but let them live in the gutters of forums. Make videos that entertain like pop stars, but don't disclose anything and make sure you look good, especially to women and teenagers.

Digress in your comments with peanut butter talk to channel webmasters' anger towards something they like. Deflect technical questions towards sightings of mythical beings. When asked about canonicalization, talk about assassinations. Blather, digress and budge, but never encourage. Utter words of sightings if you come across aware individuals.

I have never read such a gigantic post that said nothing. It must have taken you a couple of hours to formulate and pass through editing by your PR department as acceptable to post. And I bet you were behind Matt in the queue, sipping your favorite beverage in a plastic throwaway cup, waiting for your post to be stamped as "passed for publishing".

To any webmaster who believes in reality and is able to see through google's rhetoric, and who knows that sightings of Elvis are not an intrinsic part of solving a canonical issue, I advise disassociating from Adam Lasnik's shamanistic propaganda and misleading comments that are specially formulated to divert the attention of webmasters.

Canonical issues are easy to solve. Solving them requires dedication, skill and proficiency, and an understanding of how the internet works. It also requires enthusiasm and a little know-how. It does not require brainwashing and fairytale stories of Elvis eating hamburgers in roadside cafes, or waiting in a queue behind Matt to get your enthusiasm endorsed.

What I wrote in my previous posts are cast-iron facts. Note that Adam Lasnik made no comment about my being wrong. He knows himself that it is all correct, and that the FINGER POINTS TO GOOGLE.

Any webmaster who takes sightings of Elvis, decoratively highlighted by Adam Lasnik, to be the gospel script for fixing a canonical issue will forever remain in the grip of his fantasies.
.

Webmasters whose websites are summarily sent into oblivion in google's index are all too aware of how misleading google is. They are told that no penalty exists and that they are indexed.

"No penalty" does not mean no downgrade. A downgrade is not a penalty. The sandbox is not a penalty. An algo that demotes a website based on points added or deducted is not a penalty. That is why a site can be without a penalty yet in total oblivion, even though it may be the authority in its niche.

And don't forget: when you look at a search result in google, you see ten results. You think you are given the top ten, but the real results are hidden; you are not aware that the page omits websites that have been filtered out of it.

A lot of what Adam says reminds me of some DMOZ editors.
.

The only thing worthwhile that I managed to sift from Adam Lasnik's "mammoth post" was indeed peanut butter. I love the stuff. It might be a good idea if he posts regularly, so that it reminds me to purchase a jar or two before I run out.
.

[edited by: AlgorithmGuy at 10:34 am (utc) on Sep. 26, 2006]

photopassjapan

5+ Year Member



 
Msg#: 3094363 posted 10:58 am on Sep 26, 2006 (gmt 0)

I think there was such a suggestion before. But I'll say it again.

Google will probably go down one way or another within a couple of years if it doesn't understand the growing discontent.

The only... and really only... way to please the human users is to apply a random, anonymous board of about three people, ten seconds per site (these "reviewers" shouldn't even know each other or who the other two are) and weed out the SERPs on a thematic but random basis (so no abuse would be possible by paying them for placement). Would this cost a lot of money? Yeah. But it would make Google revolutionary once again.

The three people would give sites a score from 0 (should be out of the index) to, let's say, 5 (should be in the top 10).

The three departments would be very broadly defined...

- Ethics and copyright
- Usability and design
- Informational value

Their decision would only fine-tune the algo's decisions.
That's the key... fine-tune it when the algo's results are so-so.
If a site gets 0 in any department, it should get reviewed by two others in that area before getting a strong verdict. Also, if the algo signals problems on any front, the site should be custom-checked.
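
As a toy sketch of how those scores might fine-tune an algorithmic ranking (every number and weight below is hypothetical, purely to make the proposal concrete):

def blended_score(algo_score, reviewer_scores):
    # Any 0 sends the site back for more reviews instead of a verdict.
    if 0 in reviewer_scores:
        return None
    human = sum(reviewer_scores) / len(reviewer_scores)  # average of 0..5
    # The reviewers only nudge the ranking; the algo still dominates.
    # The 0.8/0.2 split is an arbitrary illustration, not a real formula.
    return 0.8 * algo_score + 0.2 * (human / 5.0)

print(blended_score(0.70, [4, 3, 5]))  # fine-tuned score
print(blended_score(0.70, [0, 3, 5]))  # None: flagged for re-review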

Some might say producing such hybrid results would need Google to hire at least a thousand people who are at least novices in each area. And that would cost a fortune. Yeah... so? I thought Google is already MAKING a fortune off the service of being one (and not the only one) that basically just serves links to people who aren't like golden-era net users, digging deep into sites' links and following dozens of references just to find what they're after. The net can work without SEs... at least it could work without the SEs that filtered out directories because they were too easy to be used for spam. Cool, but then why are SERPs filled with spam? Google is a search engine, at least that's how I remember it. And this is the next step for internet search engines, because the net has changed for the worse. The free net will be, and right now already is, pushed underground; those days are over... however, if people want to use it for anything other than a shopping catalogue/reference guide for the monopolies and opportunist SEO/techie combos, the SEs will need to catch up with the facts.

...

...

Can't beat AlgorithmGuy in post speed <:)

...

Techie side, yeah. But you know the internet became so damn popular because it was so user friendly. IT professionals, webmasters and techies, publishers, companies, users... all have their own goals, preferences, ways and interests. The internet still is but a place where these interests coincide.

And the fact is... if you're ignorant towards either side, you'll go down. The only problem right now is...
and in my opinion has been for years now... who decides the emphasis. Who decides which side needs to adapt more to the other.

A company that was a good choice before, exactly because it was THE option next to the greats of IT... who were chasing each other's tails for monopoly and not caring that their directories and SEs served unrelated crap. But you know... now this cool company IS the monopoly not serving good-enough results. The idea was to elevate the publisher responsibilities and the techie stuff high enough in importance to make it harder for irrelevant pages or downright spammers to rank high just because of semantics or a good domain.
And who would have thought at that time that there are opportunist mathematicians, publishers and techies in this world?
Sounds silly to me. And I wonder how long we can live off of good memories.

Techies, publishers, companies, users, hosting companies... everyone has some interests in this... yeah.

But this thread was started by pointing out the fact that there's one factor in this... named Google...

Which can bend the balance of these people and their interests AT WILL. It is the hub, the nexus, the whatever, where all these lines cross each other right now. They stand there, deciding how to make money off of this fact... yes, of COURSE they're trying to make money. But the golden rule of the net is... first comes the good service, THEN the ideas on how to monetize. The first round tired out years ago... the college-kid image of Google is fading away, and before it disappears completely... it could still consider the fact that it won't be able to change users, techies, publishers, spammers, marketers, SEOs and advertising companies to comply with its original ideals (not sure if Google follows those, btw). For the world is not perfect.

...it could still outwit the least compromisable ones of them...
But filter algos for dupe content-spam-seo tricks... are just not enough anymore. They hurt everyone on the previous list in an equal amount... leaving the garden not weeded out, just all kinds of plants reduced in numbers.

And before you get me wrong...

I LOVE GOOGLE. That's why I'm so mad at Google.
I want to have the Google I loved BACK... the one that was like an all-knowing kid from the neighbourhood I could ask anything, and get an unbiased, clean recommendation on what sites to see.

Right now this kid is like a mathematician stuck inside an encyclopedia / phonebook / star-map agent body... and I know the real person is struggling to get out, but algorithms just don't seem to show the way.

[edited by: photopassjapan at 11:11 am (utc) on Sep. 26, 2006]

AlgorithmGuy



 
Msg#: 3094363 posted 11:01 am on Sep 26, 2006 (gmt 0)

Google is the monopoly of the internet. It has responsibilities to webmasters.

To say that google is an entity that established itself is misleading. Webmasters and webmasters' efforts created google. Without websites, BackRub would never have seen the light of day.

Canonical issues are simple to solve.

Google should disclose what it is doing to improve this issue.

All it takes is for google to let webmasters know what it is working on and how it is going about it. Resourceful webmasters can find a way to overcome what google is doing wrong.

If google is considering multiple websites on a single domain, it should disclose it. This way, a webmaster can make sure upon purchase of a domain that all possible versions resolve at source. He/she can then make sure that a properly configured server answers only to the domain chosen.

Google is the problem here, not what google is working on. It does what it wants in secret. This can only work against webmasters.

Hosts should not be relied upon. The past 10 years at least show just how inadequate hosts are as far as search technology goes. Hosting on a good server adequately configured to host a website is essential.

Out-of-date registrars would be avoided; they would become a relic of the past. A domain name would come with options for multiple websites on a single domain, properly sold by people who understand search technology.

Search technology came long after the selling of domain names. Google came along after everything else, and it has been allowed to usurp all in its wake.

All this duplicate content stuff is a lot of crap. It confuses webmasters, and it goes on and on without google addressing the very problem it has itself caused for webmasters.
.

AlgorithmGuy



 
Msg#: 3094363 posted 11:19 am on Sep 26, 2006 (gmt 0)

...

Can't beat AlgorithmGuy in post speed ;)

...

Nor can the google soothsayer. I write according to my conscience, not what is indoctrinated in me. The main difference is that I write about canonical issues without deception. Nor do I write about mythical observations. I write about what can or cannot be done to help solve canonical issues that matter to a webmaster.

I support webmasters who are being misled by "rhetoric" ordained by google upon its sightseeing hedonists in pursuit of peanut butter and Elvis. They make out that sightings of iconic figures are a way to assuage webmasters' concerns about canonical issues.

I don't wait in queues behind more senior google employees to have what I write endorsed by PR representatives whose only bodily function is the raising of a hand to stamp "PASSED FOR PUBLISHING" in a systematic manner, looking after the interests of google. These PR representatives have authority. I bet that what Adam wrote changed dramatically before he posted it. Google's rhetoric is stamped in every word and sentence he wrote.

.

[edited by: AlgorithmGuy at 11:45 am (utc) on Sep. 26, 2006]

photopassjapan

5+ Year Member



 
Msg#: 3094363 posted 11:48 am on Sep 26, 2006 (gmt 0)

Mm... yeah but...

You know...

When you poke people where it hurts just to tell them to go see a doctor... they are likely to become mad at you.

Not sure about inner censorship; if Googlers have faith in what they do, they are likely to be able to give out comments on their own while keeping the interests of the team in mind. (For which... due to this algo-only driven ranking and filtering... this secrecy is necessary when it comes to technology. The algo-only part is the key problem, though. If it weren't so automated, there needn't be secrecy either.)

I imagine everyone at Google struggles between their pride at having built the world's no. 1 service from literally nothing... and the conscience of having built the service on the work of others (you know... websites!) who are now disappointed with the latest tendency. But I'd like to... I'd really like to see this algo filter problem... the canonical issue, the dupe content issue, the spam filter issue... all these things as Google looking for a way to keep itself on top, but with the quality of its service. If all this is necessary, if all this is just trial and error, so be it.

Not that I don't understand your point of view, which is...
yeah, it does seem arrogant at times to be taken for a ride.
The only things that pop into my mind when trying to get through support or reading webmaster guidelines are...
Baka ni suruna. Mushi ni suruna.
In other words... don't take me for stupid... and don't ignore me.

In fact, when someone bends the subject with good rhetoric... that's how politicians try to pave the way in public opinion, where probably most people don't have a clue about the technical part of their reforms anyway.

However, this isn't a TV debate for the elections; this is WebmasterWorld. People here know just enough to see whether a post from the officials has information in it, or is just an indication of them listening...

You know, the fact that this thread has a post from anyone at Google carries only one point of information for me. Which is: they know there's something wrong. They know that something's got to change. And that something is not the hundreds of thousands or even millions of people who have websites... nor the spammers becoming honest all of a sudden.

I give my cheers to Google in good faith that they are trying hard to find their way back to being cool. A cool choice to search the net.

But at the same time AG is right.
This isn't a TV debate.
Officials could just as well give some actual points, information to chill the nerves of those who are upset. Rhetoric won't do.

I gave my constructive criticism by pointing it out, and I point it out again:

The free net will only be saved with hybrid results. Algos can stay, but fine-tune them with human editors... ten seconds per site, and if there are problems, you can always go back and try again.
The net's too large? MFA sites need only 2 seconds to identify.
There's no end to it? Still, the SERPs would become cleaner and cleaner day by day. Not a final resolution, for there will NEVER be a final resolution; this is but the next step.

The first SE to find a way of balancing man and machine once again will win.

Over and out ;)

[edited by: photopassjapan at 12:05 pm (utc) on Sep. 26, 2006]

AlgorithmGuy



 
Msg#: 3094363 posted 12:00 pm on Sep 26, 2006 (gmt 0)

The first SE to find a way of balancing man and machine once again will win.

photopassjapan,

Interesting post. Constructive elements and suggestions that make for delightful reading. Unlike the google representative, who, as you mentioned, seems to think this thread is a TV political election.

I agree with a lot of what you say. You have a concerned attitude about how the web is being manoeuvred in the wrong direction by google.

I've noted your observations, and I agree wholeheartedly with you that google should put right the things it has done to cause us this concern of nightmare proportions.
.

[edited by: AlgorithmGuy at 12:04 pm (utc) on Sep. 26, 2006]

Simsi

WebmasterWorld Senior Member 5+ Year Member



 
Msg#: 3094363 posted 12:05 pm on Sep 26, 2006 (gmt 0)

Well, this has kind of gone off at a tangent, which wasn't really the aim of the thread, but it has certainly provoked some debate for a one-liner, and if nothing else AG's insights into canonical manipulation have been an education! Personally I think Google has done a lot for webmasters, but I feel that now maybe it's all starting to get a little bit too technical.

I guess the question arose because I am always reading people saying that if you get your content right, your visitors will come. Which is a logical assumption and a good aim, but clearly no longer true. Unless you delve into the technical intricacies of search engine algos and understand your canonicals, your positioning tags and your link strategies, you can't compete on content alone. Indeed, the whole importance of "natural" links may have been a good idea once, before revenue generation was such a dominant force on the web, but it is now regarded widely as another webmaster-savvy "tool" to gain rank.

I consider myself very knowledgeable in my field because the market is my hobby as well as my business, but it's a field that's saturated with people making money; many of us, myself included, are regarded as marketing extensions for service and product providers. I always took the "content is king" approach, and I am one of the few in my field who will chastise the service providers for their failings as much as I will point out the positives of their product. In my field I think that is necessary.

But where Yahoo and MSN recognise that and treat me well, Google clearly views me as more of a technical failure, probably through no deliberate act, but due to the heavy technical weightings placed on my ability to rank. We can't all be top of the search, and by the nature of the game only a relatively small proportion can be. But it is frustrating to realise that your content is secondary to your technical knowledge at this point in time. I wonder how much the AdSense beast has contributed to this, and how much pressure there is on Google engineers to balance quality content with revenue-generating SERPs.

All that said and done, and to contradict myself a little, I still use Google to search and I still believe it has the best results. Just maybe not as good as they were, or could be, by some distance.

[edited by: Simsi at 12:15 pm (utc) on Sep. 26, 2006]

AlgorithmGuy



 
Msg#: 3094363 posted 12:20 pm on Sep 26, 2006 (gmt 0)

But it is frustrating to realise that your content is secondary to your technical knowledge at this point in time.

Simsi,

You represent honesty, trust and goodwill. You put forward a debate that is of concern to you: a human plea for help against a mighty quasi-search-engine that can influence your destiny on the internet at its whim.

I know what you are saying and I know your frustration.

But google is not going to help you. It will not give you the slightest attention. Accept this fact and you will become more aware of how google works.

Canonical issues and other complex technical matters are indeed the be-all and end-all of staying at the top of google's search results. The more you know about how google works, the better you are able to compete.

Content is not King. Google has fostered spam like no other search engine on the planet.

Its hype, pomp and pageantry about pagerank, and about the number of links pointing to a site being the democratic denominator of the web, have caused us untold problems. Believe it or not, canonical issues are ingrained in this linking fiasco.

A website has many versions. Others linking to you may link directly, indirectly, mistype, forget a slash, redirect, etc. You have no idea what this does to your domain. It turns google's appropriation of links, the basis of its pagerank, upside down. Its algorithmic software is then so confused by your multiple sites that you cannot possibly be indexed and ranked correctly.

It does not end there, either. Google reserves the right to appropriate your website's canonical status to another website to represent your content. Note that Adam made no comment about any of what I informed you of. If you are reading these posts and do not see how he avoided all of it, and did not see his attempts to divert from the subject matter, then you are on your own against a machine that you cannot win against.

Get Adam to disprove the facts that I have written. Bring in any search engine expert to evaluate the canonical problems. You will find that Adam will ask the mods here to erase the facts from view.

.

.

[edited by: AlgorithmGuy at 12:47 pm (utc) on Sep. 26, 2006]

AlgorithmGuy



 
Msg#: 3094363 posted 12:34 pm on Sep 26, 2006 (gmt 0)

Simsi,

I am here to help you.

I had no reason to be here other than seeing your original post.

Do not listen to a google representative. Don't listen to me either if you don't want to.

I took offence that Adam should try to divert the issue at hand.

I will singlehandedly challenge Adam and any google representative, be they singular or a multitude. It is all about right and wrong. They are wrong; they are misleading webmasters. I've put my case. Not one post has yet contradicted what I have informed you of about what google is doing to webmasters' websites. If you prize your website, you should continue to try and get a proper answer from google. But the only way is to prove with evidence that you have been mistreated.

Where in google's many propaganda pages do you see a note that tells you your website is vulnerable? They say the opposite instead. Nowhere does google say that you should make sure that your site resolves to answer to one name only. Nowhere does google say that if you use a dedicated IP you stand a far greater chance of getting duplicate content into its index, when a simple solution exists.

Adam is in no position to help you. I can. It is up to you.

At worst, you now know some nasty secrets about google. You have an insight.

This thread should now become a passion of what you believe in: right against wrong. Google has intervened with its representative. Adam would not have posted in this thread if this thread praised google. They have seen that you have raised the alarm. They will not want you to uncover what the problem with your website is. Your passion to fight the wrong that google has done to your website is your only tool. The intricacies and technical turmoil cannot be overcome unless google discloses how it sees websites. Comments made by webmasters aiming to help are made with good intent, but I know that google treats what you think is one domain as multiple domains. Weight is split, rank is split, and a multitude of other undisclosed factors determine where your site gets listed. Without knowing your problem, you will never solve it.

Google reflects what it sees, irrespective of what you do to help your website. It then filters out websites to promote its pay-per-click. End users are taken for a ride, and so are webmasters.

You will also encounter other webmasters supporting google, and some may work against you and me. Stick to your guns.
.
.

[edited by: AlgorithmGuy at 1:13 pm (utc) on Sep. 26, 2006]

AlgorithmGuy



 
Msg#: 3094363 posted 1:21 pm on Sep 26, 2006 (gmt 0)

Simsi,

How do you know that a competitor is not the reason your site is suffering? I showed you how easily that can be done. Why did Adam not contradict me? The reason is that he knows there are many other ways, and he is here to divert the subject and ease you away.

This is a tactic similar to DMOZ's. The DMOZ editors intervene only when the going gets rough, and they have little to say other than how good their operations are.

.
.
.

[edited by: AlgorithmGuy at 1:23 pm (utc) on Sep. 26, 2006]

Romeo

10+ Year Member



 
Msg#: 3094363 posted 1:40 pm on Sep 26, 2006 (gmt 0)

What a long and heated discussion.

Why Does Google Treat "www" & "no-www" As Different?

Maybe the answer to the original question is easy:
Google treats them as different, because they are different things.

Why should G be blamed that some website owners prefer to host the same (or highly similar) content under 2 or even more different URLs?
The website owner is in control of his DNS A records, his webserver's config (hostnames, aliases, defaults, etc.) and his .htaccess redirects, not G.
If the website's owner doesn't handle that himself but delegates it to a provider who doesn't care -- well, this doesn't seem to be G's fault either.

Au contraire: if G did not treat different stuff as different, would not others then raise the question "Why Does Google Treat "www" & "no-www" *NOT* As Different?"

Perhaps it is worth remembering the wise statement Jim made in a very early posting to this discussion:

Bottom line: If you want to rank well and profit from the Web, be prepared either to read and learn a lot of technical stuff, or to pay someone who's already done so. Like most other things in life, there are no shortcuts, and you get what you pay for.

Amen to that.

Kind regards,
R.
