Do you really believe that ALL changes we have been witnessing since 22nd Sept 2005 are due to [FILTERS] deployment? And how many kinds of filters have we seen since that date?
It is not because I'm "trying to figure out why did you drop that glass" (Copyright steveb 2005, All rights reserved) ;-), but it will help a lot to find out what the affected sites should change after this update/no update is over in order to recover (or before filing the next reinclusion request).
And suppose, for "political" reasons, the folks at Google wouldn't admit that they have been testing algo changes (possibly in addition to filters); shouldn't we ourselves decide whether it's an update or just filters?
>> it will help a lot to find out what the affected sites should change after this update/no update is over in order to recover (or before filing the next reinclusion request) <<
If your sites adhere to the Google Webmaster Guidelines and you don't have any obvious technical problems (such as non-www URLs not being redirected to www versions or vice versa), then why change anything at all? Wouldn't it make more sense to file a reinclusion request and let Google sort out its problems?
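(Side note, for anyone not sure what that www/non-www problem looks like in practice: the idea is that requests for the bare domain get a permanent 301 redirect to the www version, so only one copy of each page gets indexed. Below is a rough sketch of that idea in TypeScript/Node terms, using the hypothetical host example.com; in reality most sites would do the same thing in their web server's own redirect settings rather than in application code.)

    // Rough sketch (assumes a Node-style server and the hypothetical host example.com):
    // answer requests for the bare domain with a permanent 301 to the www version,
    // so search engines treat www.example.com as the single canonical host.
    import * as http from "http";

    http.createServer((req, res) => {
      const host = (req.headers.host || "").toLowerCase();
      if (host === "example.com") {
        res.writeHead(301, { Location: "http://www.example.com" + (req.url || "/") });
        res.end();
        return;
      }
      res.writeHead(200, { "Content-Type": "text/plain" });
      res.end("Normal page content here.");
    }).listen(8080);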
Making changes for the sake of change (a common knee-jerk reaction to loss of rankings) is a bad idea for three reasons, IMHO:
1) It's likely to be a waste of time;
2) It may compound the problem;
3) It introduces new variables that make it harder for SEs to troubleshoot their own problems.
europeforvisitors seems to admit that Google has a problem. Or am I wrong?
I know I will not be changing anything on our site as a result of the recent traffic drop, unless Google issue some new guideline or such. Bar concentrating a bit more on repeat visitors...
No point trying to catch bubbles, especially when a lot of it is hearsay. Plus you might change a bunch of things, then they fix whatever they had wrong regarding your site, but to no benefit since you now have some other thing flagged or whatever.
I am just hopeful things will resolve, whether that takes a week or 6 months remains to be seen.
What I don't like is lack of transparency, though I understand that is open to abuse.
The answer should be obvious if you understand the problem. Adhering to the guidelines and doing things right yourself is not the issue here. The problem is Google has gone a step further and is creating problems.
Google keeps data on deleted pages for at least two years. This data can reassert itself at any time and reappear as a supplemental result. It seems clear that, especially if the content from the two-year-old page is now on a different page, you could have a problem, even though you did everything right and sensible in deleting the page two years ago.
People should be working on getting rid of their Supplemental listings. Using the remove URL tool doesn't work. Redirecting a supplemental page doesn't work. Deleting a page and linking to the 404 location doesn't work. Personally I'm trying something else, which I think has a good chance of working.
The point is simple: webmasters have to do things to make it less likely that Google will do something blunderingly stupid.
I was about to ask, what is the best guess as to when Google might get around to fixing these problems?
Seems like the fix is not at all trivial or they would have done something about it by now.
So are they really on a downward slope to oblivion? Things don't usually work like that... so how long might it take for them to regroup, fix the filter (or whatever) and get things moving again?
If (recent) past experience is anything to go by, it could be one or two months... although I really hope not!
>> europeforvisitors seems to admit that Google has a problem. Or am I wrong? <<
No search engine is perfect, and any change to algorithms or filters is likely to have at least some undesirable side effects. Furthermore, algorithms and filters must evolve--with or without side effects--simply to cope with the exponential growth of junk pages that have no value to anyone but their owners.
Disclaimer: I'm not suggesting that "problems at Google" are responsible for every penalty or loss of rankings. In some cases, Google obviously wants to purge types of content that may have squeaked through in the past, and for good reasons.
My domain name was registered about 8-9 years ago, and it's a very good one. No dashes or anything.
My guess is that Google has tightened filters regarding too-similar anchor text, and I've gotten caught in it. I probably only have 4-5 inbound links with "domain name" as anchor, but dozens with "domain.com," since many people link to me that way.
And since linking to a site by using the domain name is perfectly normal, NATURAL, and makes good sense, WHY would Google filter out results like that?
If you're going to link to Google, chances are you're going to have "Google" or "Google.com" in the anchor link, most likely it will not be "Search Engine", "Search Directory", "Feeling Lucky?", or something else similar.
If Google is in fact filtering sites because too many are linking to them with their domain name, then Google is 100% solely responsible for this mess, as well as being ignorant, short-sighted, and out of touch with how people truly link.
To filter a site due to too many anchor links being similar is an INVITATION to manipulation on the part of Google. If it's a widget site, it's perfectly normal, and should be expected that the word widget will appear in most of the links.
I have no idea if this is really happening or not, I'll leave that to the more knowledgeable people here. But I will say that I've probably heard more ridiculous things than this, but I'm not sure when!
My directory site has a list of names and a link saying "see more info about (name)", which goes to a detailed data page.
The title and metatags in the detailed data page include the (name) used in the anchor text...
All those detailed data pages got crushed...
I'm busy rewriting over 300 hand-coded shtml pages that have worked just fine for three years. Such are the breaks. I'm taking the opportunity to update poor coding, working on each page individually, reducing keyword density, cleaning up generally and hoping for the best. I shudder to think of what will happen a month from now when Google decides my keywords are too infrequent... :(
It's not enough to offer solid content. I do that in spades. Each original page on my site represents a project that I have designed, constructed, photographed, written up and coded so that others can reproduce the project (for personal use, only, of course).
That's a lot of hard work, folks... and it's so disheartening when you get hit with www and non-www woes; 302 hijackers; and those thieving people at a well-known network who adore frames and are now ranking higher than me for the search terms to my original projects - grrrrrr...
I've just started adding the frame-breaking code to the head of my pages. It's very satisfying to click on the links that are supposed to be "about" my subject and get directly to my site for once. HAH!
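For anyone who hasn't used it, the usual frame-breaking approach is a tiny script in the page head that checks whether the page is the topmost window and, if it isn't, replaces the framing page with the page itself. A minimal sketch of that idea (not necessarily the exact script the poster above is using):

    // Minimal frame-breaking sketch: if this page has been loaded inside someone
    // else's frameset, swap the whole browser window over to this page instead.
    if (window.top && window.top !== window.self) {
      window.top.location.href = window.self.location.href;
    }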
One unique circumstance added to my troubles. Months ago my web host had a server failure; the backups were very stale. I ended up with antique htm pages that I had thought deleted being restored to my server, resulting in duplicate content when I uploaded newer shtml pages. I thought I had deleted all re-uploaded htm pages but apparently missed a few. Thanks to Google, I'm now finding and exterminating these dinosaurs.
My point is: even if you have a valuable content-based site, chances are not all this mess is Google's fault. I think they are making a genuine effort to rid their index of scrapers and you gotta applaud that. I just wish it wasn't costing me so much in terms of time, money and ego.
My site has been filtered completely.
No need to pray any more... the Gods of Google have made their decision.
I think we don't have many options:
1.) Wait 3 or 6 months; perhaps we will get back in.
2.) Remove the latest indexed links from Google.
3.) Contact Google until somebody gives an answer.
Or?
Yes, Googlebot crawling decreases thanks to the Googlebug.
So what happens is: you don't get crawled by Google, scrapers crawl you faster than Google does, the scrapers get crawled by Google, and therefore the scrapers get rewarded as though their copy of your content were the original - and Google makes money off the scrapers.
Google win win win.
Don't know if it's deliberate - certainly sickening.
Google - how many more times - work out the canonical URL of my site, please. FFS - what do I have to do? The 301 is done!!!!! You know it is there - so what have you done, sandboxed it or something?
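(For what it's worth, one quick way to convince yourself the 301 really is in place is to request the non-canonical address and look at the status code and Location header that come back. A small sketch of that check, using hypothetical example.com URLs:)

    // Sketch of a 301 self-check (hypothetical URLs): request the non-canonical
    // address and confirm the server answers 301 pointing at the canonical one.
    import * as http from "http";

    http.get("http://example.com/", (res) => {
      console.log("Status:", res.statusCode);          // expect 301
      console.log("Location:", res.headers.location);  // expect http://www.example.com/
      res.resume(); // discard the response body
    });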
I just wish us webmasters and Google could cooperate on sorting out this problem.
I assume there is someone at the plex who is trying to troubleshoot - but a bit of two-way communication on canonical URLs would probably get it fixed much, much sooner.
I have tried to explain the symptoms - but get nothing back - am I just wasting time and energy?
>> I have tried to explain the symptoms - but get nothing back - am I just wasting time and energy? <<
I guess the situation at the plex is: whenever the folks at the Google Search Quality Team hear about a "canonical issue", they get more curious and ask: is it a kind of "Bacon polenta"? Mmmm. Bacon-y goodness! ;-)