| 5:19 am on Mar 17, 2011 (gmt 0)|
I've recently made some huge changes to my website, in an attempt to remove junk from Google's index. I felt a little anxious that the changes might not have the desired effect, but I've always held the view that if something I do lowers my rankings, I can undo it and have things return to how they were.
In the past, when I've made SEO blunders, undoing the changes has always remedied the drop in traffic, and rankings returned to their former glory. E.g. my coder recently noindexed my entire website and we dropped 10k visitors a day, but after he'd rectified the mistake, traffic quickly returned to normal.
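For anyone wanting to catch that kind of blunder before it costs 10k visitors a day, here's a minimal sketch (standard library only; the URL in the usage comment is a placeholder, and it only checks the meta tag, not the X-Robots-Tag HTTP header) that flags pages carrying a noindex directive:

```python
# Sketch: flag pages that carry a "noindex" robots meta directive.
# Standard library only; example.com below is a placeholder URL.
# Note: this does NOT check the X-Robots-Tag HTTP header, which can
# also noindex a page.
import re
import urllib.request

def has_noindex(html: str) -> bool:
    """Return True if the HTML contains a robots meta tag whose
    content includes 'noindex' (assumes name comes before content)."""
    pattern = re.compile(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
        re.IGNORECASE,
    )
    for match in pattern.finditer(html):
        if "noindex" in match.group(1).lower():
            return True
    return False

# Usage against a live page:
# html = urllib.request.urlopen("https://example.com/").read().decode("utf-8", "replace")
# print(has_noindex(html))
```

Running something like this over your key pages after every deploy is a cheap insurance policy.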
What's been your experience? Have you ever made a mistake, reverted your changes but not regained SERPS? If so, what do you think would be the reason for this?
Does Google have a memory? Does it record past issues that perhaps reflect on our rankings? I personally think it's more likely that rankings are based on the current state of your website and web presence.
I've always erred on the side of caution, building links at a steady pace etc., but I'm not entirely sure it's helped me. My competitors are buying links like they're going out of fashion and benefiting. I think Google often scaremongers, or rather sends Matt Cutts out to do it for them.
I guess ultimately what I'm wondering is: is it better to err on the side of caution, or to push the boundaries and learn exactly what you can and cannot get away with? I'm not talking about black hat SEO, but buying links, being less cautious about the rate at which you buy them, obtaining links from unrelated niches, etc.
If Google doesn't have a memory of bad practice, then pushing the boundaries is likely to be more beneficial than sticking 100% to their guidelines, though it would still be important to test and monitor, and of course be ethical in your practices.
I imagine leading SEOs are constantly pushing boundaries and not necessarily sticking tightly to Google's code.
It's 5:15 am and I realise my post is a little jumbled, but still some food for thought.
| 5:26 am on Mar 17, 2011 (gmt 0)|
|Does Google have a memory? |
It's an elephant ... Never forgets.
|I personally think it's more likely that rankings are based on the current state of your website and web presence. |
I've thought it was pattern based for a while, and I seem to have 'indirect confirmation' from Mu in his recently posted statement [reposted here 3 or 4 times, I think] about rankings not recovering until they can see whether 'the change is for good'.
|I guess ultimately what I'm wondering, is it better to err on the side of caution or push the boundaries to learn exactly what you can and cannot get away with. |
That's an 'each to their own' question in my opinion ... What's your level of risk tolerance?
| 10:04 am on Mar 17, 2011 (gmt 0)|
|Have you ever made a mistake, reverted your changes but not regained SERPS? |
YES! I have evidence that Google has recently become less forgiving of mistakes. Here are three cases from the last year:
First: SEOMoz did a "catastrophic canonicalization" experiment -- [seomoz.org...]
Second: My client put noindex,follow on a set of important pages and couldn't figure out what was wrong for a month.
Third: My client introduced a bug such that the site was serving nearly blank pages to Googlebot for a weekend.
In all three cases, traffic dropped while the problem was live. Once the problem had been corrected, the sites regained only about half of the traffic they had lost. The lingering effects have lasted months; my client still isn't in the SERPs where they used to be for many mid-tail terms.
My conclusion is that Google has a "site quality and stability" metric that is important for rankings and can get hurt by these types of mistakes.
I'm interested to hear more about your noindex problem. How long was it live, how much traffic did you lose, and how quickly did you regain it?
| 10:55 am on Mar 17, 2011 (gmt 0)|
The SEOMoz poster got everything back about a month afterwards, if I read it correctly. There's probably a 30-day time-out that Google hands out after letting you know they've noticed, say, hidden text. I'll worry, as in really worry, if nothing is restored after a month or so.
| 12:10 pm on Mar 17, 2011 (gmt 0)|
walkman, I don't think you are reading it correctly. Before the experiment they had 250 pages indexed; even a month after the experiment they only had 150 pages indexed. I'm not sure how much traffic they lost long term, but my experience with our sites seems to indicate that there is a long-term effect.
| 3:31 pm on Mar 17, 2011 (gmt 0)|
(Well, maybe Dr. Pete will come along and give an update)
FWIW I have made errors with redirections and a mobile site, and recovered in four hours. Just about anything we see is gonna be pretty much anecdotal.
| 3:42 pm on Mar 17, 2011 (gmt 0)|
Embarrassed to say that I apparently didn't have a WebmasterWorld account. My fear of @netmeg has overcome that obstacle ;)
I had to check the post/data to refresh my own memory. It's always tough to separate out effects over time, and I've been a bit lax about writing on that blog, but it looks like the indexed page count did eventually return to normal. Traffic recovered more quickly, though mostly because 2-3 pages get the vast majority of my traffic. Once those returned, organic search traffic was about 90% recovered. That may not be the case for sites with many long-tail rankings.
The biggest takeaway for me was that, unlike many SEO issues, the most powerful, most Google-friendly pages got hit the quickest and hardest. That's more than a little scary.
It's generally amazing how powerful the canonical tag is. Yesterday, I launched a post on a personal blog that I canonicalized to a permanent page. I gave out the blog post URL, though, which got a solid amount of Tweets and other social media activity. Usually, my posts take 3-4 days to get indexed (the blog is pretty new); this one was up in a few hours. What was interesting is that, even though most of the traffic went to the non-canonical URL, the canonical version showed up immediately in search. Google takes the tag pretty seriously, IMO.
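If you want to audit which URL a page is actually declaring as canonical, here's a small hedged sketch (standard library only; the function name and the example URL are my own illustration, not anyone's production tooling) that pulls the rel="canonical" target out of a page's HTML:

```python
# Sketch: extract the rel="canonical" URL from a page's HTML, if any.
# Assumes the rel attribute appears before href in the <link> tag.
import re

def canonical_url(html: str):
    """Return the URL declared in a <link rel="canonical"> tag, or None."""
    match = re.search(
        r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)["\']',
        html,
        re.IGNORECASE,
    )
    return match.group(1) if match else None

# Usage: comparing the declared canonical against the URL you fetched
# makes canonicalization mistakes (like the experiment above) easy to spot.
# canonical_url('<link rel="canonical" href="https://example.com/post">')
```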
| 3:49 pm on Mar 17, 2011 (gmt 0)|
Hey Dr Pete - thanks for the update and welcome to the forums. Looks like netmeg has some awesome powers ;)
| 3:50 pm on Mar 17, 2011 (gmt 0)|
@realmaverick - To your original question, I remember Matt Cutts mentioning once that Google looks at links over a time window, and that definitely fits my own observations. That's not to say they forget the past entirely, but I suspect that your recent link profile (all else being equal) has more impact than your link profile over all of time. Sometimes, good behavior in the present can offset bad behavior in the past.
Of course, that only applies to mild to moderate quality issues. A manual penalty may never go away without Google lifting it. I think there are definitely some risks that are too big to take.
| 3:53 pm on Mar 17, 2011 (gmt 0)|
Hey, Ted. I have no good excuse for never poking my head in here, other than that the amount of my day spent on forums and Q&A is already approaching levels that may threaten my health and/or marriage :) Love your work, and I've always been a fan of Brett's.
| 4:03 pm on Mar 17, 2011 (gmt 0)|
|I suspect that your recent link profile (all else being equal) has more impact than your link profile over all of time. |
That fits fairly well with the 'link churn' and 'link weighting' wrt 'freshness' (if I remember correctly) from one of their patent applications ... The description was to the effect of: if two pages each have 10 inbound links, one having gained them a year ago and one today, it could indicate that the page with the more recent links is 'fresher' than the one with the 'aged' or 'stale' links, and may be a better result.
So, from a 'hey, did they patent that yet?' perspective: yes, they did, and you're probably right wrt current link profile vs. historical, unless of course a stale result was determined to be a better fit for the query, in which case the historical profile would probably carry more weight.
It's Google ... There's always an either/or, right? lol
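Just to make the freshness idea concrete, here's a toy illustration of the mechanism described above (this is NOT Google's actual formula; the half-life value and the decay shape are made up purely to show how two pages with identical link counts could score differently):

```python
# Toy model of link "freshness": each inbound link's weight decays
# exponentially with age, so recent links count for more than stale ones.
# The 180-day half-life is an arbitrary, made-up knob for illustration.

def link_score(link_ages_days, half_life_days=180.0):
    """Sum exponentially decayed link weights over a list of link ages."""
    return sum(0.5 ** (age / half_life_days) for age in link_ages_days)

fresh = link_score([1] * 10)    # 10 links gained yesterday
stale = link_score([365] * 10)  # 10 links gained a year ago
# fresh > stale, even though both pages have exactly 10 inbound links
```

Under a model like this, the same raw link count produces very different scores depending on when the links were gained, which is exactly the 'fresher vs. stale' distinction the patent language describes.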