That's not your site going in and out of the sandbox.
It's different datacenters serving different results - one where you are prominent and one where you are not.
Well, put it this way. On one datacenter my website doesn't come up at all, nor do any of my search terms. On the other I am in the top 10, my website's main page comes up, and everything. What could cause this?
I was in the sandbox for the last 6 months, and now it looks like I may be out of it. Since some of the data centers show that I appear to be out of the sandbox, does that mean I probably am out?
Going right back to Suggy's first post. It seems to me that on-page body keyword density has gained relevance, while title content now counts for less.
Ran a little test (we have a 40,000-page site with a PR7 index page, and it was hit very, very hard).
Took one inside page (but linked from the index page) and one commercial search phrase (usually only about 750,000 results) for which we ranked top 5 before Allegra and >70 after Allegra, and nearly tripled the keyword density on it. Nothing - but nothing - else changed.
After the last spider visit (so with a fresh date of 9th Feb for sure) it now ranks 3rd on g.com and g.co.uk. Other pages in the SERPs look more or less identical.
Seems too simple to be true - we rank third on our company name, yet our index page doesn't actually use our company name much. I'll fix that tonight and post back tomorrow.
<half-baked theory>This is really all about "local". The difference between post-Allegra ranks on G.co.uk and G.com is much wider than pre-Allegra - ours is a UK site.</half-baked theory>
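For what it's worth, the kind of keyword-density figure the test above relies on can be estimated with a short script. This is a rough sketch only - the tokenisation and the exact counting method are my assumptions, not anything Google has published:

```python
import re

def keyword_density(page_text, phrase):
    """Rough share of the page's words taken up by occurrences of the phrase."""
    words = re.findall(r"[a-z0-9']+", page_text.lower())
    target = phrase.lower().split()
    if not words or not target:
        return 0.0
    # Count every position where the phrase appears as a consecutive word sequence.
    hits = sum(
        1
        for i in range(len(words) - len(target) + 1)
        if words[i:i + len(target)] == target
    )
    return hits * len(target) / len(words)

# "Nearly tripling the density" means roughly tripling phrase occurrences
# relative to total page length.
print(keyword_density("blue widgets are great so buy blue widgets", "blue widgets"))  # 0.5
```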
I notice that content management sites with very little actual content seem to be doing great.
There have been a few suggestions that, because GoogleGuy has asked for some reports, the update must be finished. However, wouldn't it make more sense for GG to be asking midway through, so the update could be fine-tuned towards the end?
My two cents:
If you have 8 billion pages and some have simply never ranked before - how do you bring them into good positions without completely killing other, well-ranking pages? Think about it: what would you do?
You have thousands of people optimizing their pages and building links to increase their rankings - and you don't want that. You don't want all those reliable keyword analyzers tracking thousands of keywords.
What would you do? Imagine you're Google and you have to please millions of webmasters - and pleasing them is how you keep your importance, because John Sixpack wants to see some referrals from Google. Imagine you have an index that has grown very fast. And imagine you have millions of users daily.
So, what do you do? Know what I would do? Build several different indexes and randomize the search results to split your traffic. You'd still be serving relevant stuff - there are always thousands of relevant results you could serve to users looking for information. I believe Google is doing exactly that - and that we might as well throw all our keyword tracking tools away :-)
[edited by: Brett_Tabke at 3:51 pm (utc) on Feb. 10, 2005]
[edit reason] off topic [/edit]
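itloc's idea above - several indexes, with results randomised between them to split traffic - would look something like this in miniature. Purely illustrative: the index names and sites are made up, and nothing here reflects how Google actually works:

```python
import random

# Hypothetical miniature of the theory: several separately built indexes,
# each holding relevant results in a different order.
INDEXES = {
    "index_a": ["site1.example", "site2.example", "site3.example"],
    "index_b": ["site2.example", "site4.example", "site1.example"],
    "index_c": ["site5.example", "site1.example", "site2.example"],
}

def serve_results(query, rng=random):
    """Serve results from one randomly chosen index per query."""
    # In a real engine the query would select candidate documents first;
    # in this toy, every index already holds the "relevant" results for it.
    chosen = rng.choice(sorted(INDEXES))
    return chosen, INDEXES[chosen]

name, results = serve_results("blue widgets")
```

Run it a few times and the same query returns a different (but still relevant) ordering - which is exactly why rank-tracking tools would stop being reliable under such a scheme.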
itloc, if your theory were true, then why do some websites sit solid at the top whilst others jump in and out every hour or so? I know first-hand, because I have several websites sitting still and others jumping around as Google rotates through the different datacenters.
It has been said many times (and I agree) that unstable SERPs are bad for searchers, as they tend to find a site the second time by remembering the keyword they used, rather than by adding the site to their favourites. This is just very confusing to the average Joe surfer. I think things will stabilise within a week or two.
I think on-page factors might be more important again. I have two similar sites: one has lots of text on the home page, the other is fairly minimalist; beyond that they are very much alike. Anyway, one has dropped and the other has shot up.
If it isn't related to this, my other theory is that it's down to IP addresses and cross-linking, as the two sites are on different servers.
PO: What if the top ten results were actually the top two results from 5 different schools of SEO?
I'm seeing the inclusion of a few pages which wouldn't normally be top ten, but which - through on-page factors, LSI, or something else - have been included in the top ten.
Suggy - I'm also seeing an increase in subpages in the 'bad' version of the two rotating indexes.
Google seems to be penalizing sites on searches for their own company name, possibly due to overuse of the name in inbound link text (lol). If this penalty has a general component (i.e. it is not entirely keyword-specific), then we might expect to see fewer home pages in this index, since these pages would be the most likely recipients of a company-name overuse penalty.
|Google seems to be penalizing sites on searches for their own company name possibly due to overuse of the name in inbound link text |
That would punish nearly everyone in DMOZ...
"Google seems to be penalizing sites on searches for their own company name possibly due to overuse of the name in inbound link text"
hahahaha. Google penalizing us for bad SEO. Sorry Google, will try to stuff as many keywords as possible in the anchor next time.
No big changes on our sites, apart from some problems with 302 redirects that Google has trouble with (a push to finally change them...).
Update not yet over
Results I monitor are jumping around big time; I presume more than one set of results is being served from different datacentres.
No rant EW
|No big changes on our sites apart from some problems with 302-redirects that Google has problems with (giving a push to finally change it...) |
... and 301s ..... and 404s ..... and anything with a vowel in the keyword .......
I know this may not have anything to do with being bounced out etc., but every one of my pages has this as the first line of code. I'm using CSS classes for text formatting, i.e. font, font size, font color.
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
I found some VERY strange and curious things.
I just compared DCs with the McDar tool and I saw my sites ranking very well on nearly all DCs. But when I try to search on google.com, my site is not there and the results are different.
That means Google shows different results depending on whether you look in the frame of the McDar tool or at the real datacenter in your own browser window. But why?
After some research I found out that the mcdar tool uses the following format: [18.104.22.168...]
But if you search regularly on the DC the search URL is [22.214.171.124...]
The difference is the two words "ie" and "search".
When searching the "ie" style I rank very well on nearly all DCs; when searching the "search" style I do not rank anywhere.
What's this all about?
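The two URL styles being compared can be reconstructed like this. A sketch only: the host is a placeholder for a datacenter IP, and the parameter sets are just the ones visible in the thread:

```python
from urllib.parse import urlencode

HOST = "A.B.C.D"  # placeholder for any Google datacenter IP

def ie_url(query):
    """McDar-style query against the stripped-down /ie interface."""
    return f"http://{HOST}/ie?{urlencode({'q': query})}"

def search_url(query, hl="de"):
    """Regular /search query; &hl sets the interface language (de = German)."""
    return f"http://{HOST}/search?{urlencode({'q': query, 'hl': hl})}"

print(ie_url("example phrase"))      # http://A.B.C.D/ie?q=example+phrase
print(search_url("example phrase"))  # http://A.B.C.D/search?q=example+phrase&hl=de
```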
The &hl=de parameter means the search is in the German language. The McDar tool specifies no language, so &hl=en (English) is assumed.
Err, the /ie means "International English" and you get taken to a site that says "Google English" rather than the usual "Google" as seen at www.google.com.
Right now I'm seeing not only the two sets of datacenter results, but a third set rolling around that is completely different - one I haven't seen to date, and definitely not any of the "update" sets so far. It may be a "freshie" update, as the cache dates are Feb 8th, but these are awfully significant changes for a fresh update.
Just noticed a new wrinkle in my site results.
Yesterday, allinurl:mysite.com produced 60,000 results, and only 350 or so were not in the supplemental results.
Also yesterday, allinurl: and site: searches were returning different results.
Now allinurl: and site: return the same number of results - only 5,500 or so - and I have nothing listed as being in the supplemental results.
|Err, the /ie means "International English" and you get taken to a site that says "Google English" rather than the usual "Google" as seen at www.google.com. |
I doubt it. The site you're taken to depends on the IP address you use. All Google IP addresses have a stripped-down interface with no ads. They show title-only, but the snippet is there via a mouse-over. This interface is the /ie interface. It has been unchanged for at least 3 years. My guess is that it's an Internet Explorer interface for some feature in IE. Anyone else know more about this?
"All Google IP addresses have a stripped-down interface with no ads."
"They show title-only, but the snippet is there via a mouse-over. This interface is the /ie interface."
The ie interface has nothing to do with the IP addresses.
Has something gone seriously wrong with the algo? A search on [google.co.uk...] for one of our key phrases returns a site with the words "conservatory" and "advice" in the domain name - and fair enough, they do offer advice on conservatories. But 5 of the next 9 results are just directories linking to the number one result! In total, 11 of the first 50 results are just links to this site with very little supporting text around them. How is this useful to anyone searching for information and wanting unique content? I'm dismayed.
I'll take that back if I'm wrong about the meaning of the /ie in the URL, but for sure the &hl=de is making the other URL a German-language search.
Whatever, when I visit a direct IP address, I see that directly underneath the "e" of the big Google logo there is the word "English". I don't ever see that at www.google.com.
|"All Google IP addresses have a stripped-down interface with no ads." |
You misinterpret. All Google IP addresses have the extra option of a stripped-down interface without the ads. I've been successfully scraping these for 2.5 years, thousands of times a day, hopping around various IP addresses. Are you telling me that I didn't know what I was doing all that time? By adding the /ie to any Google IP address -- as in [A.B.C.D...] -- you can see the simple interface.
www.google.com is nothing but a domain name that gets resolved to one of over 50 IP addresses. The Internet uses IP addresses, not domain names. Yahoo is different -- they intercept and redirect you unless the domain name is in the packet, even though you have reached their site with just their IP address. They've been doing this ever since the worm problems of last July.
All Google scrapers use the /ie interface because it is stable (the html code has never changed) and low-bandwidth (no ads or extra coding).
It doesn't always take you to a doorway page with "English" on the top. There are over 50 Google IP addresses you can try with the /ie interface, listed in other threads on WebmasterWorld. Try them all yourself and see.
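The "domain name resolves to one of 50+ IPs" point is easy to check for yourself. A sketch under stated assumptions: the hostname lookup needs network access, and the /ie path is simply appended to each address as described above:

```python
import socket

def resolve_ips(hostname="www.google.com"):
    """Return the IPv4 addresses a hostname currently resolves to."""
    infos = socket.getaddrinfo(hostname, 80, socket.AF_INET, socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

def ie_urls(ips):
    """Build the stable, low-bandwidth /ie interface URL for each address."""
    return [f"http://{ip}/ie" for ip in ips]

# Uncomment to try it (requires DNS access):
# for url in ie_urls(resolve_ips()):
#     print(url)
```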
"Are you telling me that I didn't know what I was doing all that time?"
No, I was telling you that your first statement was wrong, which it was. They offer the /ie interface as an option, which your second statement accurately states.
For all the very serious people here doing very serious new-algo analysis - maybe you should have a look back at the mother of this thread, where the more ordinary members discuss the ups and downs. :-)
|>> There is no such thing anymore as a "dc". There are just some ip's which are serving different results - just like google.com. << |
Thanks for the clarification. But in that case something is really wrong with Google. Why are my sites showing in prominent positions in searches for English sites when they are obviously German? The language is German, and I added a meta language tag.
When searching on German Google, the sites are nowhere.
But, for example, on [126.96.36.199...] I am found in the good positions without changing the language.
Really frustrating. I made some good content and did some good optimization, but Google puts my page under the wrong language.
Here's my summary for what it's worth:
20 primary keyphrases - mainly 'medical', not fiercely commercial.
Top 5 positions for these phrases over nearly 6 months, variety of pages.
Good keyphrase density, keyphrase in title, good anchor text, all pages 'static' - no dynamic elements to them
All 'white hat' - no dodgy techniques.
Now, all completely gone! Not much coming up even on a 'company name'/domain search.
My guess is this is either a totally revamped algo, or as I have seen previously, a 'work in progress' database.
What to do? Wait, hope and trust that Google will put this right.
I am seeing so many SERPs containing news releases from portal sites, etc., that this cannot be providing good-quality results.
Since the separate thread didn't seem to get posted - what is everyone's take on the possibility that, in their "anti-spam" campaign, Google has started using the new whois "blacklist" feature, which relies on the SPEWS database of IP blocks reported for spamming email? I have seen a very good correlation between some friends' domains losing their listings completely and the fact that a whois search shows those domains as "listed" under the blacklist feature. The only problem I can see with this theory is that the SPEWS database tends to blacklist entire blocks of IPs in response to spam reports - so these friends' sites just happened to be on a server that shared IP blocks with other dedicated servers that were spamming email.
Not a conspiracy theory or anything - just a small parallel that seems to exist between different hosts/different servers having the same problems.