Google is not a publicly funded library; it is a publicly held for-profit corporation whose media product is the SERPs, which are in turn generated from the data you referred to. Think of it this way: The New York Times does not own the news it reports on, but it is nevertheless able to report on that news.
It's much more useful to think of Google as a media company than as a library in terms of its rights and supposed obligations. A public library has an obligation to serve the public; that's what it's for. A for-profit media corporation has an obligation to return a profit to its investors. In Google's case, that means keeping income high enough to return such a profit, and it keeps income that high by ensuring that, for the average user, the SERPs are more or less what those users were looking for. Fewer users mean a drop in revenue.
When Google does an update, it is always taking a chance of losing users. However, historically, its position has remained remarkably stable, update after update. WebmasterWorld members have suggested hundreds, if not thousands, of times that Google would lose users and fail, and this has not happened. So let's draw the obvious conclusion: Google knows its target market better than WebmasterWorld members do. That's why all these posts about Google failing are so pointless. How many times can we be wrong before we finally realize we are wrong?
Having gotten to that point, maybe we can start working on the analysis that webfusion mentioned earlier.
[edited by: 2by4 at 9:23 pm (utc) on Nov. 10, 2005]
>"Our site receives over 4000 NON-SEARCH ENGINE free visitors per day, from articles, press releases, inbound links, etc. etc. All that traffic converts as well, if not better, than search engine traffic." <
I dare say everything mentioned above is right smack in the middle of Google, so a divestiture of dependence on Google is not firmly established to me. The intent of the message is good, but the plain fact is that you and I wouldn't be in these forums if we had cut the ties that bind. Google wouldn't even be crossing our minds.
Webmasters constitute a large percentage of buyers. They are the established and proven purchasers on the Internet, not these mythical "users" I hear Google should devote so much time to. When webmasters are happy, they're spending time buying (from me), and they aren't in these forums endlessly watching Google tinker and paw with their incomes.
Anyone else seeing a recovery of most, but not all, of a domain hit Sept. 22? Even if you think you recovered from Sept. 22, if you have a domain under 1,000 pages, how do the last pages listed at the bottom of an allinurl:example.com or site:example.com search rank?
In both cases, simply checking backlink counts shows the real source of the ranking: over 7,000 in each case.
As is usually the case when I check a site like this, the SEO is all over the place, so people assume things like keyword spamming are the cause of the ranking, when they almost never are. You're much better off ranking for high-end keyword phrases with a small amount of relevant text than with a huge mountain of every combination. That's what I find, anyway.
This backlink inflation is what I always find for SEOed sites beating ours: not the true authority .gov and .edu stuff, but the commercial stuff.
I have no doubt whatsoever that Google's absolute highest priority right now is figuring out a way to start easing these sites out of the SERPs, one by one. But not as a block; I think it's going to happen slowly.
The main problem, of course, is learning how to tell the difference between a scraper/directory-type backlink and a true contextual backlink. That's not as easy as it sounds, and it's my guess for the reason behind the requests for more and more spam reports.
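To make that concrete: one crude, purely illustrative heuristic (entirely my own assumption, not anything Google is documented to do) would be to measure how much of a linking page's visible text sits inside anchor tags. Directory and scraper pages are mostly links with little surrounding prose, while a true contextual backlink tends to be embedded in paragraphs of text. A minimal sketch in Python:

```python
# Rough heuristic sketch: a backlink on a page that is mostly links
# (directory/scraper style) is probably worth less than one embedded
# in prose. The 0.5 threshold is an arbitrary illustrative choice.
from html.parser import HTMLParser


class LinkTextCounter(HTMLParser):
    """Tallies visible text inside <a> tags vs. text outside them."""

    def __init__(self):
        super().__init__()
        self.in_link = 0       # nesting depth of open <a> tags
        self.link_chars = 0    # characters of text inside links
        self.other_chars = 0   # characters of text outside links

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link += 1

    def handle_endtag(self, tag):
        if tag == "a" and self.in_link:
            self.in_link -= 1

    def handle_data(self, data):
        n = len(data.strip())
        if self.in_link:
            self.link_chars += n
        else:
            self.other_chars += n


def looks_like_directory(html, threshold=0.5):
    """True if more than `threshold` of the visible text sits inside links."""
    parser = LinkTextCounter()
    parser.feed(html)
    total = parser.link_chars + parser.other_chars
    return total > 0 and parser.link_chars / total > threshold


# Example: a bare list of links reads as directory-style,
# while a link surrounded by prose reads as contextual.
link_farm = '<ul><a href="/a">widgets</a><a href="/b">more widgets</a></ul>'
article = '<p>A long review of blue widgets, with one <a href="/a">link</a>.</p>'
```

A real classifier would obviously need far more signals (duplicate anchor text across pages, outbound link counts, topical relevance), but even a ratio this simple separates a bare link list from a link inside an article.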
>"Anyone else see a recovery of most but not all of a domain hit Sept22?" <
This is exactly what I am seeing. Most of my sites have recovered to about July SERP levels, but some of the older rankings I had are gone. A search using the method you described shows that many of these pages are either listed as supplemental results or have no title and snippet...
Dayo - I can completely understand your issue, as it is happening to two of my own sites. However, I do see that the non-www version of both sites has not been cached since late October, while the www version has today's cache dates. Let us hope that this helps, as I do have 301 redirects in place...
This follows my recent posts regarding the appearance of onerailway when searching for 'norwich web designers'. I have now found their PR7 site listed under unrelated phrases such as 'Engineering work' (291 million+ results) and 'world record attempt' (41 million+).
These phrases obviously have nothing to do with their core business, and yet they are mentioned in the text on their front page.
I leave it to you experts to analyse.
If I search for 'blue widgets', my site is returned in the top two positions, say /littlebluewidgets.html and /bigbluewidgets.html, while the main blue widgets page, /bluewidgets.html, is nowhere to be found.
Then, if I search 'site:mysite blue widgets', the #1 result is /bluewidgets.html, followed by /littlebluewidgets.html and /bigbluewidgets.html.
Does this make any sense?
This is across all datacenters, including jagger3.
'Better Policies' gets Translink listed at number 2 out of 166 million results.
"Discovery Guide" gets firstgreatwestern listed in the top ten.
Hopefully they are trying to fix the sandbox problem they created...
fix - created?
Dunno if this is still the case, but if you buy AdWords for that domain, you are suddenly there.