Forum Moderators: Robert Charlton & goodroi
However, we have one site that is still #1 for the two main keywords.
I have looked at various theories, to no avail so far.
Here's another ---
Do any of you have badly affected sites in which the home page has AdSense with pictures right above the AdSense banner?
I have four pix roughly aligned above the three or four text AdSense listings.
Google actually wrote me an email a while back saying this was okay as long as the pictures were not intended to mislead visitors, just to "draw the eye" to the AdSense area.
BUT, the site I have that's not affected by the 27 June screwup does NOT have these pix above the AdSense area.
Yes, another screwy theory --- anyone else think this might be a problem?
No, it doesn't mean that at all.
June 27th was just another screwup. Problems with sites obviously have zero to do with an algo. What algo decides that a supplemental result that doesn't exist should replace the main index page for a domain, or that random pages from a domain will be pinned at the bottom of site: search results and rank horribly while the rest of the domain does just fine?
This has happened seven or eight times now, and while something new could happen next, the most likely thing to assume is things will next happen as they have in the past... some screwups this time will be fixed next time, while other screwups will take place.
It's interesting for me: my ranking is still much worse than before, but my index page shows up first for site:www.mysite.com and not for site:mysite.com.
Can someone explain what the difference is between the results for site:www.mysite.com and just site:mysite.com?
I pay a hosting company to host my site rather than doing it myself, and I've never done anything special regarding www or no www. Is there something important I am missing that I should ask them to do on the server? (A quick way to check is sketched below.)
Thanks :)
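For anyone wondering what the server is actually answering for the two hostnames, here is a minimal sketch, assuming Python is available and using example.com as a placeholder for your own domain. Ideally one hostname answers with a 301 pointing at the other; if both answer 200, Google is seeing two copies of the same site, and a permanent redirect from one hostname to the other is exactly what to ask the hosting company for.

import http.client

def check(host):
    # HEAD request to the root, without following redirects,
    # so the raw status code and Location header are visible
    conn = http.client.HTTPConnection(host, 80, timeout=10)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    location = resp.getheader("Location", "")
    print(host, resp.status, resp.reason, location)
    conn.close()

# example.com is a placeholder -- substitute the real domain
for host in ("example.com", "www.example.com"):
    check(host)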
If you read Matt's blog, it was a data refresh on the existing algo. That means they did not have all the data. Once the data dropped back in, the serps adjusted. A lot of the "deindexed" sites came back into the index.
So, like I said, if you took a drop in the serps, that means other sites outrank you in the existing algo.
I am just pointing out that Matt stated that the changes on the 27th are going to stick.
Matt also said there will be another data refresh in two weeks. They obviously realized they were missing some data.
No, it doesn't. Where did you get that idea?
"So, like I said, if you took a drop in the serps, that means other sites outrank you in the exisiting algo."
And that's not what you said. You said "So if your site dropped that means all the other sites are doing better in the existing algo" which is illogical at best. I don't know what you think you mean to say, but a data refresh means the data was refreshed. Like Matt said it was not an algo change but merely a data refresh, and obviously refreshing the data can mean some will be added, some will be lost, and some will be misinterpreted.
I was referring to a lot of the Big Daddy dropped data being reindexed and the supplemental index being recrawled and refreshed. So, with newer data now in the index, if you suddenly fell in the serps, then you really need to look at what sites replaced you.
Just keep in mind, during Big Daddy when a lot of sites dropped, people actually looked at their sites, found mistakes, and fixed them; I know I did. I was busy making metas unique, getting W3C compliant (which helped me find a lot of simple errors that were preventing me from being spidered properly), and so on. This all means people actually improved their sites, and Googlebot eventually caught the newer data. I started changing my site right at the beginning of the dropped-pages era of Big Daddy, and on the 27th Google finally caught up with it.
There is still a big problem when older sites drop off the charts altogether.
There is still a big problem when thousands of sites went supplemental overnight on the 27th.
There is still a big problem while Google is reporting site page counts that are not even within 20% of actual.
The site: command is still a big problem, reporting supplemental results ahead of the main index pages.
You can bet your left one Google is not even halfway happy about how things stand right now, and you can also bet we are not looking at a finished product this week. Expect further changes; nothing is more certain.
So, with newer data now in the index, if you suddenly fell in the serps, then you really need to look at what sites replaced you.
Not true whatsoever, especially in the case of canonical errors with perfectly fine sites and hyphenated domain problems.
Contrary to belief, being W3C compliant, as good as it is for the web world, has nothing to do with ranking well.
So, with newer data now in the index, if you suddenly fell in the serps, then you really need to look at what sites replaced you.
Agree with CainIV
Not true whatsoever - if we were talking about a change where some sites appeared at the top of the serps while others lost ground to them because they had more BLs, better PR, better title tags, better keyword density, etc., then fine.
But the fact is that the sites that have lost position are now not being read (I suppose this term fits) properly by Googlebot and one of the side effects of this is easy to see as the homepage is no longer top in a site:domain.com search.
This is not a new problem with Google and has been discussed lots of times and G have even talked about fixing it - but that was a year ago and still we are no nearer.
As Steveb says, it is logical to assume that we will continue on this path of screwup after screwup, as Google have been unable to fix this problem to date.
How often does Google compute site-wide things like:
% of IBLs that are trusted
% of OBLs that are relevant
% of OBLs that are affiliate links
% of OBLs that are reciprocal
% of pages in the site that are near duplicates
Also "refreshing data used by an existing algorithm" doesn't state whether the same parameter settings (weightings?) were used in the existing algorithm.
site-wide quality factor < X ==> splat
Just Guessing
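Purely to illustrate that guess, here is a toy sketch in Python. Every factor name, weight, and the threshold X are invented for the example; nothing here is something Google has stated.

def site_quality(factors, weights):
    # Weighted average of made-up site-wide percentages (0.0 - 1.0)
    return sum(factors[name] * w for name, w in weights.items())

weights = {
    "trusted_ibl_pct": 0.4,         # % of IBLs that are trusted
    "relevant_obl_pct": 0.3,        # % of OBLs that are relevant
    "non_affiliate_obl_pct": 0.1,   # % of OBLs that are not affiliate links
    "non_reciprocal_obl_pct": 0.1,  # % of OBLs that are not reciprocal
    "unique_page_pct": 0.1,         # % of pages that are not near duplicates
}

example_site = {
    "trusted_ibl_pct": 0.10,
    "relevant_obl_pct": 0.60,
    "non_affiliate_obl_pct": 0.90,
    "non_reciprocal_obl_pct": 0.70,
    "unique_page_pct": 0.80,
}

X = 0.5  # hypothetical threshold
score = site_quality(example_site, weights)
print("site-wide quality factor =", round(score, 2))
if score < X:
    print("splat -- demoted/filtered in this toy model")

Whether the weights stay fixed between refreshes, or get re-tuned as asked above, would change the outcome without any change to the site itself.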
I can't imagine how the heck you come up with that. It should be clear at this point that first you need to look at your domain. Other domains are entirely irrelevant to this phenomenon. This is not an update, so there is nothing algorithmic to talk about. It's about one batch of data replacing another, which means some errors were fixed and some errors were made, and in this case it seems clear that many more errors were made than were fixed by the data being refreshed. Talk about algos or penalties or ranking is not just missing the point, it's going off in unrelated directions.
It's a data update that happened (MC) but what is the data? And how is that used?
The site: problem is not new, just the sites that are suffering from it are new to the problem.
However, at the same time, the site: problem is screwy - it makes no sense that Google list some obscure or even supplemental page before the home page.
Still, you can't fix Google, so maybe concentrate on what Google is seeing in your "data".
As far as the W3C goes, validation helped me catch title tags that were not closed, bad href tags, etc. Once I fixed those and the pages were recrawled, they were no longer supplemental.
Unclosed title tags are a killer in Google. I learned that lesson the hard way.
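Not a substitute for running pages through the W3C validator, but here is a rough sketch in Python of the kind of quick check that flags an unclosed title tag before a crawler has to guess at it; index.html is a placeholder filename.

import re

def title_tag_balance(html):
    # Count opening vs closing title tags; a mismatch suggests an
    # unclosed <title> (the validator remains the real tool)
    opens = len(re.findall(r"<title\b", html, re.IGNORECASE))
    closes = len(re.findall(r"</title>", html, re.IGNORECASE))
    return opens, closes

with open("index.html", encoding="utf-8") as f:  # placeholder filename
    opens, closes = title_tag_balance(f.read())

if opens != closes:
    print("possible unclosed <title>:", opens, "opening vs", closes, "closing")
else:
    print("title tags look balanced")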
In my sector I see supplemental pages with much fresher caches; before the 27th the supplemental results were dated June-August, and now they are more recent. Supplementals do factor into the equation: as that index was recrawled and rescored, it is going to cause changes, and those changes are finally hitting with the data refresh.
It is not that any of you have bad sites; I have looked at most of your sites in the past, and they are good. What I am saying is: look at the sites that replaced you and the changes they made, and see if you spot a trend.
One of my sites had been suffering from supplementals since 22 September, and traffic from Google was low. On 27 June the changes came, and now the homepage and non-supplemental pages are listed first for me. I have extremely good results now.
I think the filter from 22 September has been lifted from the sites suffering under it and applied again to the new data. Back then the filter was assumed to be triggered by duplicate content and internal linking (using too many keywords in it).
My ten cents.
Well, I waited a month before writing my opinion.
I manage different websites created with the same HTML template.
For some old sites, relevant keywords that had stayed in the top 3 positions over the last year have disappeared or dropped in the rankings. Other new sites have jumped to first position; this is very strange.
The ranking that I see gives an advantage to:
- Affiliate programs (duplicate content)
- Keyword stuffing in the title and the H1 at the top
- Keyword density (more than 8% per keyword)
I put my hope in Matt Cutts, who said the advice he'd give is to make sure that the site adds value and has original content.
You can call this update / refresh many things, but a 'clean up' is not one.
There are many options available to you; just enter a word of your choice in place of the stars.
This update is a monumental '**** up' that no one will admit to, and it will keep being referred to as a data refresh.
Whilst suffering from the site: problem, I actually got an answer from Google confirming that the site was not suffering because of any penalty, so I don't think it's that.
Google also doesn't think it's a penalty to be hijacked--when your home page and other important pages completely disappear because someone has set up 302 redirects to them.
Re. the changes in site command --
I have a client that has lost up to 80% of his pages since April (the counts fluctuate up and down). Just recently, while the www version of his domain had been showing only the home page in the site: command, the page count is now the same as for the non-www version of his domain (still about 57% of pages missing, and most of them are totally unique content -- none of them are directly linked from the home page, however).
This site has been up for about 9 years and had a 302 redirect set up in February, and this is the first time Google has credited the site:www.domain command with more than the home page. Now both commands show the same number of pages.
So it appears to me it is finally fixing the canonicalization issue for this site, although about 2/3 of the pages are still missing.
BTW, this site has been ranking #1 for most of their terms, even before the design and while pages are missing, so those pages have not been penalized -- it is hurting in traffic, however (down by 1/3), because of the missing pages.
So it appears to me it is finally fixing the canonicalization issue
So if they're messing with canonicalization, could a www <<>> non-www 301 put an algo (beta) off base?
See also [webmasterworld.com...]