Forum Moderators: Robert Charlton & goodroi
This is the kind of thing that happens in a large company. The left hand NEVER knows what the right hand is doing. I hate to say it, but this is the difference between Google three years ago pre-IPO and today's post-IPO Google.
[edited by: oddsod at 7:10 pm (utc) on Mar. 8, 2005]
But isn't the reason for this obvious? The Allegra update has everything so screwed up that even Google must now resort to "blackhat" to rank well in ... Google. :))
Given the wording of the titles and the robots exclusion [adwords.google.com], I would tend to assume the latter.
The big question for me is how this would look to Google if it were some other Web site (with a site search). There are often perfectly sensible reasons for doing things on a Web site that can look spammy to a third party (and to search engine software).
So you asked your guy on the inside about this, or is that just conjecture on your part?
Personally, I think ciml's explanation makes much more sense. It doesn't seem like Google would let someone that clueless touch their code... unless things are really falling apart behind the scenes these days.
I wouldn't doubt there are several people polishing their resumes as we speak.
>for keywording their AdWords Support search?
Ya, that's why I thought it might be a 'test' system. Interesting 3rd-level domain issues there with robots.txt not being obeyed... hmmm
> downplay
We've pointed the finger at Yahoo for doing the same (last spring?).
We caught MSN doing it a year ago.
Altavista's seo help pages were cloaked in 99.
All the se's geo cloak.
All the major flash sites cloak.
All the major sites from Ebay to Amazon cloak in one form or another (geo, language... etc).
All the major se's do stuff with XML with partners.
Inktomi used to hand the core of the algo to their major pfi partners.
Ink partners used to "enhance" (aka: Optimize) feeds like crazy.
AdWords is delivered via geo targeting/cloaking.
AdSense is delivered via geo targeting/cloaking.
You really think that all those high ranking google subpages ruled the top of the index by merit for 5 years running?
If you do, let me tell you about some land I've got a few miles south of Miami...
A sub-story here is the shock and awe that a green segment of the seo community is experiencing. Playing "catch me if you can" is amusing, but makes no one any money and is a diversion at best. I see no old timers the least bit surprised by this. It is their (Google's) se and they can do whatever they want with the index. If they want to say you shouldn't cloak and then turn around and cloak their own to the hilt - no big whoop - that's business and it is the year 2005. No one ever said life was fair. The /. response was as predictable as the sun coming up. Why is anyone surprised at all? The surprise is that something like this hasn't happened before.
> downplay
Until I hear the other side of this thing...ya, I'll wait-n-see on that...
It only takes a handful to work on the algo. I'd guess there aren't 20 people who know the algo intimately. Even noted tech guru Matt Cutts is only responsible for a little slice (the adult filter) of the algo.
> two main problems a search engine faces are
> cloaking and over optimised keyword stuffed pages
Say what?!? I don't know what the two main issues they face are, but cloaking and stuffed pages don't even make a top-100 list. However, it is a great way to control webmasters and get them cranked up when it doesn't work as billed.
The biggest issues Google has have little to do with the algo and the index. Their main cause for concern is sitting in an executive office in Redmond.
You guys did know that:
Yahoo has hired seo's (former moderators at WebmasterWorld at that).
MSN has hired seo's (former admins of WebmasterWorld at that) and are still looking for more from the sounds of it.
Amazon is currently looking for seo's.
Overture used to hire many seo's.
Ebay - some of the best seo'ers on the net.
Altavista blatantly looked for seo's before they were bought.
What do you think all those people are doing?
[edited by: Brett_Tabke at 12:00 am (utc) on Mar. 9, 2005]
One person pointed out that they don't look at the cached version when they click on the search results, so it wasn't affecting them at all... sigh.
Others couldn't understand what the big deal was; they don't realize that many of us eat or starve by our (or our clients') rankings, and Google holding webmasters to a standard while not living up to it themselves is fishy. Or, as was pointed out, it's about ensuring that subordinates know which lines not to cross to avoid this kind of embarrassment.
Of course, when the math and physics guys start swarming all over one of their stories I have a tendency to wonder what the big deal is, too.
Incidentally, that was the first story I submitted to slashdot that they accepted. But they rejected the one I submitted a few weeks ago about Jenna Jameson's moaning ringtones...
This certainly wasn't an accident, but probably an ill-conceived idea by some level of management that never crossed the desks of the higher-ups. Cloaking is cloaking, and no matter what spin you put on it, or what their intentions were, it's still something they have openly been against.
If this were Microsoft, we'd have seen nothing but negative comments. We'd be talking about the evil empire, unethical practices, and attempts at world domination. However, put Google's name in that slot, and it gets conveniently swept under the rug. Sort of like the autolink/smarttag issue.
Google owns the search and can do as they please. Does it go against their motto of "don't be evil"? Yes. Does it go against their webmaster guidelines? Yes. Does it affect anyone on this board? No.
In the end, we all have to make our own perception of what Google really is. This isn't an SEO issue, it's a public image issue.
We'd have seen nothing but SEO's putting in resumes to get the cherry cloaking job on the net. No one - but no one - would have been surprised.
I think the upset over the auto link was deserved. There is a line there where upset crosses into flame for the sake of flame (eg: Yahoo Directory 98-99, Altavista 99-00, Excite 99, SmartTags 02).
>Our non-webmaster geek-brethren are absolutely
>clueless about what the story was about.
Live, learn, and remember.
> One person pointed out that they don't look at the cached version when they click on the search results so it wasn't affecting them at all... sigh.
And I'm sure that's pretty much how the rest of the world thinks. Who cares what was done with some keywords here or there, as long as the content is relevant. However, there are many out there who think differently and want others to think as they do as well.
> Make pages for users, not for search engines. Don't deceive your users, or present different content to search engines than you display to users.
> Don't employ cloaking or sneaky redirects.
Like these guys [google.com]
I load up Firefox with the User Agent Switcher set to Googlebot and see text in the title that differs from what I see with the default IE or Firefox user agent.
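For anyone who wants to reproduce this without a browser extension, here's a rough Python sketch: fetch the same page with two different User-Agent headers and compare the &lt;title&gt; text. The URL is a placeholder and the title extraction is a simple regex, so treat this as illustrative only.

```python
import re
import urllib.request

def extract_title(html: str) -> str:
    """Pull the contents of the first <title> tag out of an HTML page."""
    match = re.search(r"<title>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    return match.group(1).strip() if match else ""

def fetch_title(url: str, user_agent: str) -> str:
    """Fetch a URL while presenting the given user agent."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req) as resp:
        return extract_title(resp.read().decode("utf-8", errors="replace"))

# Placeholder URL -- substitute the support page you want to check.
# url = "http://example.com/support/page"
# bot_title = fetch_title(url, "Googlebot/2.1")
# browser_title = fetch_title(url, "Mozilla/5.0")
# if bot_title != browser_title:
#     print("Different title served to Googlebot:", bot_title)
```

If the two titles differ, the server is varying content by user agent, which is exactly what the User Agent Switcher test shows.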
Those pages were primarily intended for the Google Search Appliances that do site search on individual help center pages. For example, [adwords.google.com...] has a search box, and that search is powered by a Google Search Appliance. In order to help the Google Search Appliance find answers to questions, the user support system checked for the user agent of "Googlebot" (the Google Search Appliance uses "Googlebot" as a user agent), and if it found it, it added additional information from the user support database into the title.
The issue is that in addition to being accessed via the internal site-search at each help center, these pages can be accessed by static links via the web. When the web-crawl Googlebot visits, the user support system thinks that it's the Google Search Appliance (the code only checks for "Googlebot") and adds these additional keywords.
That's the background, so let me talk about what we're doing. To be consistent with our guidelines, we're removing these pages from our index. I think the pages are already gone from most of our data centers--a search like [site:google.com/support] didn't return any of these pages when I checked. Once the pages are fully changed, people will have to follow the same procedure that anyone else would (email webmaster at google.com with the subject "Reinclusion request" to explain the situation).
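The bug described above boils down to a substring check on the user agent: the support system can't tell the internal Search Appliance apart from the public web crawler, because both identify as "Googlebot". A minimal sketch (the keyword string and user-agent values here are made up for illustration):

```python
# Hypothetical reconstruction of the flawed check: the support system
# only tests for the substring "Googlebot", so the web-crawl Googlebot
# and the internal Search Appliance are indistinguishable to it.

SUPPORT_KEYWORDS = "adwords billing support"  # stand-in for the real DB lookup

def render_title(base_title: str, user_agent: str) -> str:
    """Return the page title, appending extra keywords whenever the
    requester's user agent contains "Googlebot"."""
    if "Googlebot" in user_agent:  # too broad: matches both crawlers
        return base_title + " - " + SUPPORT_KEYWORDS
    return base_title

# The Search Appliance (intended) and the web crawler (unintended)
# both trigger the extra keywords; a normal browser does not.
print(render_title("AdWords Help", "Googlebot (Search Appliance)"))
print(render_title("AdWords Help", "Mozilla/5.0 (compatible; Googlebot/2.1)"))
print(render_title("AdWords Help", "Mozilla/5.0 (Windows)"))
```

A check against the appliance's full user-agent string, or serving the keywords only on the internal site-search URLs, would have kept the extra titles out of the public index.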
> In order to help the Google Search Appliance find answers to questions, the user support system checked for the user agent of "Googlebot" (the Google Search Appliance uses "Googlebot" as a user agent), and if it found it, it added additional information from the user support database into the title.
Doesn't the Google Search Appliance support metadata? Like the good old "keywords" :-)