We are now back to page #2 or #3 on most two-word searches. We have reduced keyword density, but I think it is more of a tweak that Google is doing.
Keep it coming, Google.
No matter how G changes the algo, there will always be ways to spam and cheat. But if they continue with this one, the web will soon be full of those balloon sites I was describing earlier in this thread.
Moving deck chairs around on the Titanic!
Can you sink a search engine?
VERY Long PS
Has anyone noticed that links that lead somewhere, but not anywhere Googlebot can go, seem to be given the same weight in this algo as real, open links? Most of the outbound links from these crappy directories go via some form of script with a long query string in the URL. As a side issue, there's even one of these URLs listed in the SERPs above me which redirects to my site.
The main point is that Googlebot seems to note that there are links with the right anchor text in them, but it can't be checking where they go; it just uses them to feed the ranking algo irrespective of whether they are blind alleys or not.
There seem to be two kinds of these links: ones that point to a redirection script on the current domain (or an associated domain), with the destination fed in via the query string, and ones on the affiliate's domain, where the query string tells the click counter at the other end who sent the referral. Both kinds seem to be counted by Google in this new algo.
Perhaps this is specifically the new spam. If this is about DomainPark, that makes sense, because all of the links on pages generated by Google DomainPark would be this kind of link, but would have the right keywords in the anchor text.
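To make the mechanism concrete, here is a minimal sketch of the kind of obfuscated redirect link being described. The script name `go.cgi` and the parameter name `url` are made up for illustration; the point is that the anchor text sits in the HTML for the crawler to read, while the real destination is only revealed if the crawler requests the script and follows the redirect.

```python
from urllib.parse import urlparse, parse_qs

def redirect_target(link):
    """Extract the real destination hidden in a query-string redirect link.

    The HTML a crawler sees is just:
        <a href="/go.cgi?url=http%3A%2F%2Fexample.com%2F">keyword anchor text</a>
    so the anchor text is visible, but the destination is only known if
    the crawler actually fetches the script and follows its 302 response.
    """
    query = urlparse(link).query
    targets = parse_qs(query).get("url")  # parse_qs also decodes the %-escapes
    return targets[0] if targets else None

print(redirect_target("/go.cgi?url=http%3A%2F%2Fexample.com%2F"))
# → http://example.com/
```

If the engine credits the anchor text without resolving `redirect_target`, the directory gets link-counting benefit from links the bot never actually follows.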
What do you think?
Best wishes again
Has anyone got any theories about precisely *why* these pages with hundreds of unrelated outgoing links are performing so well in this daft algo?
A search engine ranks sites by keeping the cr*p at the bottom - and then sending the relevant non-cr*p to the top (simplified but true). When you have a filter applied to very content-rich, SEO'd sites (sites that give users what they are looking for), and the filter is probably 50x harsher than it needs to be, you get the once good and relevant sites pushed to the bottom, exposing a "vacuum of cr*piness" that escapes the filter - probably because those sites lack backward links but are still heavily spammy - and rises to the top. The "spammy directories" have always been there, just never so noticeable before.
[edited by: subway at 6:02 pm (utc) on Feb. 6, 2004]
"Post" as in after?
Is the Austin update over?
Are there still G DNS entries missing in action?
Are there still multiple sets of SERPs? (Or is it now everflux?)
Has the fat lady sung already? Is GG joining in the tune?
I have no answers for the big G's current state of being, just questions. The main one: is Austin over?
Don't forget the adversarial relationship which they have created with many searchers.
Does anyone know when we can expect our pages to be removed from Google? We put the noindex,nofollow tag for Googlebot all over our site on Tuesday, but we still appear for all sorts of unrelated searches.
When will that filter through so that we are gone completely?
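For reference, the tag in question is the standard robots meta tag, targeted specifically at Google's crawler; it only takes effect once Googlebot has recrawled each page, which is why the removal lags:

```html
<!-- placed in the <head> of every page; name="googlebot" targets Google's
     crawler specifically, while name="robots" would apply to all crawlers -->
<meta name="googlebot" content="noindex,nofollow">
```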
Google appears to have decided to create an adversarial relationship with webmasters.
I would be cautious about posting this sort of message, because it is precisely what may be used to justify search results that continue to decline in quality.
They have no relationship whatsoever to pre-Austin, pre-Florida, or pre-Paleozoic.
Previously, +a seemed to pull up the old results that were vaguely similar to normal results and vaguely similar to the old anchor text trash algo.
Now, I see results that seem to be algorithmically valuing "a" as if it were gold. It brings up sites where "...keyword a..." is in the title.
In other words, +a will never show old algorithm or "unfiltered" results. It will not show anything that can be sanely compared to the present "normal" results. Previously adding +a changed little. Now adding +a changes things to a different type of searching altogether.
But then, +www shouldn't change as much, since "...keyword www..." isn't as often in any page titles as "a". What I see here is a completely different ranking algorithm. There isn't a noticeable effect on "lost" sites.
Finally, keyword +keyword... what the heck should this do anyway? Make it so the keyword appears at least twice on a page? Value double keyword density? Highly value sites that have the keyword twice in a title? Beats me. But what I get is yet another very different group of results... although these are at least vaguely similar to "normal" results -- except that two of the worst-SEO'ed sites on the planet, which are genuine niche authority sites, happen to rank drastically higher this way (like moving from 100+ to the top twenty).
I dunno, but this is completely different than a couple days ago. Minimally +a is worthless for any comparison or speculating now.
Why would GG have problems following the links?
I thought that preventing Googlebot following links and thereby passing on PR was why this kind of obfuscated query was invented. Are you suggesting that robots are hitting PPC links and that Overture, Espotting etc are charging for the clicks?
One odd thing about some kinds of redirected URLs is that they actually get into the index as though they were themselves pages, but they have no content, so Google cannot list any description with the link. What purports to be a page is really just a URL that ultimately redirects to a page. If Googlebot had followed it properly, rather than stopping and indexing the URL as a page, it would have had some content to add to the index.
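The effect above can be sketched with a toy model (this is not Google's actual logic, and the function and response data are made up for illustration): if the crawler stops at the 302 instead of following it, the only "content" it has for that URL is an empty body, which is exactly the description-less listing being described.

```python
# Toy model of what a crawler can index for a fetched URL, depending on
# whether it follows redirects. All names here are illustrative.

def indexable_content(status, headers, body, follow_redirects=False, fetch=None):
    """Return the text a crawler could index for one fetched URL."""
    if status in (301, 302) and follow_redirects and fetch is not None:
        # follow the Location header and index the final page instead
        return fetch(headers["Location"])
    return body  # a bare 302 response has no body, so there is nothing to index

# hypothetical site: one redirect script pointing at one real page
pages = {"http://example.com/real-page": "Real page content here."}
fetch_final = lambda url: pages[url]

stopped = indexable_content(302, {"Location": "http://example.com/real-page"}, "")
followed = indexable_content(302, {"Location": "http://example.com/real-page"}, "",
                             follow_redirects=True, fetch=fetch_final)
print(repr(stopped))   # '' -- the URL gets indexed with no description
print(repr(followed))  # 'Real page content here.'
```

The `stopped` case is the URL-listed-as-a-page oddity; the `followed` case is what a full crawl of the redirect would have yielded.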