Forum Moderators: Robert Charlton & goodroi

Update Saga. Part 5

         

Brett_Tabke

8:26 pm on Nov 9, 2005 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



What say you?

Over and done with?

All done all through?

zeus

8:56 pm on Nov 10, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



flyboy, do you have an IP?

flyboy

9:03 pm on Nov 10, 2005 (gmt 0)

10+ Year Member



on 66.102.11.104, 66.102.9.104, 66.102.11.99, and 66.102.9.99, using site:domain.com
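A check like flyboy's can be scripted. This is a minimal sketch, not a definitive tool: the `/search?q=` URL format is Google's classic query endpoint, the IPs are the datacenters listed above, and you would still need to fetch each URL with your own HTTP client and compare the indexed-page counts by hand.

```python
from urllib.parse import urlencode

# Datacenter IPs reported in the post above; the /search?q= path
# follows Google's classic web-search URL format (an assumption
# for this sketch).
DATACENTERS = ["66.102.11.104", "66.102.9.104", "66.102.11.99", "66.102.9.99"]

def site_query_urls(domain):
    """Build a site: query URL for each datacenter IP, so the
    indexed-page counts can be compared across datacenters."""
    qs = urlencode({"q": "site:" + domain})
    return ["http://%s/search?%s" % (ip, qs) for ip in DATACENTERS]

for url in site_query_urls("example.com"):
    print(url)
```

Fetching each URL and eyeballing the "Results 1 - 10 of about N" line is the manual step this sketch leaves out.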

Mountdoom

9:12 pm on Nov 10, 2005 (gmt 0)

10+ Year Member



Thought we were slowly getting back on track, and then big G jiggled its Jagger and a whole lot of keywords fell through the holes. Since Jagger started we must have lost 80% of traffic from around 2000 pages. For a minority of terms we are listed in the top 10; for the majority we are > 100. Even for the title of our home page (not a competitive phrase) we appear > 150.

Google may have implemented their algo, but there is still a helluva lot of data gathering going on, with googlebot busier than a one-legged man in a tap dancing contest. It all feels half cooked to me; we're all over the place, which is why I'm not losing heart just yet. My gut feeling is that all will come right in the end (belief in my product, perhaps). For others in a similar position, hang on!

2by4

9:15 pm on Nov 10, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



LegalAlien:
"That sounds a lot closer to a library, than to the other media examples you gave."

Google is not a publicly funded library; it is a publicly held for-profit corporation whose media product is the serps, which are in turn generated from the data you referred to. Think of it this way: The New York Times does not own the news it reports on, but it is nevertheless able to report on that news.

It's much more useful to think of Google as a media company than as a library in terms of its rights and supposed obligations. A public library has obligations to serve the public; that's what it's for. A for-profit media corporation has an obligation to return a profit to its investors. In Google's case, that means keeping income high enough to return such a profit, and they keep that income high enough by ensuring that, for the average user, the serps are more or less what those users were looking for. Fewer users mean a drop in revenue.

When Google does an update they are always taking a chance of losing users. However, historically, their position has remained remarkably stable, update after update. WebmasterWorld members have suggested hundreds, if not thousands, of times that Google would lose users and fail, and this has not happened. So let's draw the obvious conclusion: Google knows its target market better than WebmasterWorld members do. That's why all these posts about Google failing are so pointless; how many times can we be wrong before we finally realize we are wrong?

Having gotten to that point, maybe we can start working on the analysis that webfusion mentioned earlier.

[edited by: 2by4 at 9:23 pm (utc) on Nov. 10, 2005]

outland88

9:21 pm on Nov 10, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Enjoyed your posts Legal Alien.

"Our site receives over 4000 NON-SEARCH ENGINE free visitors per day, from articles, press releases, inbound links, etc. etc. All that traffic converts as well, if not better, than search engine traffic."

I dare say everything mentioned above is right smack in the middle of Google, so a divestiture of dependence on Google is not firmly established to me. The intent of the message is good, but the plain fact is you and I wouldn't be in these forums if we had cut the ties that bind. Google wouldn't even be crossing our minds.

Webmasters constitute a large percentage of buyers. They are the established and proven purchasers on the Internet not these mythical “users” I hear Google should devote so much time to. When webmasters are happy they’re spending time buying (from me) and they aren’t in these forums endlessly watching Google tinker and paw with their incomes.

steveb

9:22 pm on Nov 10, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



I'm still puzzled by a site that was hit Sept22 having a recovery for everything except two /sections/ of the site. All pages in those sections are pinned to the bottom of an allinurl search for the domain, and they rank ludicrously badly. The rest of the site was more or less "fixed" by the tweak/fix/refinement, but not these two sections... all the more strange because the sections have nothing in common. One has a single page; the other has a hundred pages focused on many different topics.

Anyone else see a recovery of most but not all of a domain hit Sept22? Even if you think you recovered from Sept22: if you have a domain under 1000 pages, how do the last pages listed at the bottom of an allinurl:example.com site:example.com search rank?

theBear

9:25 pm on Nov 10, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



I'm warming up the keyword generator right now BillyS.

2by4

9:31 pm on Nov 10, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



BillyS, glad you posted some specifics; talking about this stuff in the abstract is a pain.

In both cases, simply checking backlink counts shows the real source of the ranking, over 7000 in each case.

As is usually the case when I check a site like this, the SEO is all over the place, so people assume that stuff like keyword spamming is the cause of the ranking, when it almost never is. You're much better off targeting high-end keyword phrases with a small amount of relevant text than with a huge mountain of every combination. That's what I find, anyway.

This backlink inflation is what I always find for SEOed sites beating ours. Not the true authority .gov and .edu stuff, but the commercial stuff.

I have no doubt whatsoever that the absolute highest priority for Google currently is figuring out a way to start easing these sites out of the serps, one by one. But not as a block; I think it's going to happen slowly.

The main problem, of course, is learning how to tell the difference between a scraper/directory-type backlink and a true contextual backlink. It's not as easy as it sounds; that's my guess for why they keep asking for more and more spam reports.

CainIV

9:52 pm on Nov 10, 2005 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Anyone else see a recovery of most but not all of a domain hit Sept22?

This is exactly what I am seeing. Most of my sites have recovered to about July serp levels, but some of the older ranks I had on page are gone. A search using the method you described shows many of these pages are either listed as supplemental results or have no title or snippet...

Dayo - I can completely understand your issue, as it is happening to two of my own sites. However, I do see that the non-www version of both sites has not been cached since late October, while the www version has today's cache dates. Let us only hope that this helps... as I do have 301 redirects in place...
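For anyone setting up the kind of 301 redirect CainIV mentions, the usual approach is a rewrite rule that sends the non-www host to the www version so Google only sees one canonical copy. A minimal sketch, assuming Apache with mod_rewrite enabled and a hypothetical example.com (adjust the domain to your own):

```apache
# Hypothetical .htaccess sketch: 301-redirect non-www requests to www
# so only one host version gets crawled and cached.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

The R=301 flag makes the redirect permanent, which is what tells a crawler to transfer the old URL's standing to the new one.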

joeduck

9:56 pm on Nov 10, 2005 (gmt 0)

10+ Year Member



.... responsibility to its users, authors and publishers

Nicely put. Google lives up to this in many respects, but IMHO it needs a better system to communicate with sites killed as collateral damage in SPAM WARS XXIV.

This 1356-message thread spans 136 pages.