Forum Moderators: Robert Charlton & goodroi
First some background. The site is a well-established, deep-information site with many thousands of pages and a PR 6 on the home page. While we have attempted to get some links, most of the hundreds of links to us came spontaneously from a variety of professionals who find our content useful. Therefore, we're not at all dependent on the "latest" SEO tricks - totally white hat.
Up until this week, we got more than 15,000 Google referrals a day. We are not dependent on ranking for "blue widgets" or any other identifiable term - our referrals come from thousands of different keywords a day, which reflects the diversity of our content. Therefore, only a massive drop in the SERPs across the board could cause the >90% drop in referrals we are seeing.
We are still in the index with the same number of pages, and our backlinks don't seem to have changed. We still show the same PR throughout the site (for whatever that's worth, since if there were changes, they probably wouldn't show immediately anyway).
Here's the kicker: another site we own, let's call it widgetville.com, is showing up ahead of our real site, widgetville.org, in the SERPs when you search Google for "Widgetville". The higher-ranking widgetville.com listing is shown without title or description. Widgetville.com has been 301 redirected to widgetville.org. Widgetville.com does have a backlink or two out in the world, but not the hundreds that the real site, widgetville.org, has, so I don't understand the higher ranking.
If you search for "a bunch of widget words that you find on the front page", three other web sites who quote our mission statement appear on the page and our page doesn't. However, if you click on the link to show "omitted" results, we are listed as the omitted page.
In a way, it seems almost like our home page has been hijacked by our own non-functioning site. It also seems like the whole canonical root problem that trips up some site owners, except in our case it is between two domains, not a matter of Google getting confused between widgetville.com and widgetville.com/index.html.
We've had this problem before - a year ago - and I queried Google about it. I was told that it was a problem on their end, not mine, and they would fix it. The widgetville.com listing was removed, and within weeks my traffic grew from a trickle to the point where I started hiring people to deal with the blossoming new customer base. Now all of that is threatened.
So, anyone want to take a crack at explaining this or giving advice on how to handle it?
I have taken one step to see what happens. I've removed the 301 redirect from widgetville.com and put a simple sentence on the page that says to click on the link to widgetville.org. I did this to disassociate widgetville.com from widgetville.org in case Google was somehow seeing duplicate content from the 301. I'm not sure how that would happen exactly, since a 301 is the preferred method of dealing with pages that are no longer valid, but this whole thing throws me for a loop.
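For reference, the kind of whole-domain 301 that was in place here (and has now been removed) is usually just a few lines of mod_rewrite in .htaccess. A minimal sketch, assuming Apache; widgetville.com/.org are the stand-in names from this thread, not the poster's actual rules:

```apache
# Send every request on widgetville.com (with or without www)
# permanently to the same path on www.widgetville.org
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?widgetville\.com$ [NC]
RewriteRule ^(.*)$ http://www.widgetville.org/$1 [R=301,L]
```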
What to do?
NOTHING. Go back to building content, finding links, and then wait, wait, wait. You will make a comeback.
The worst thing you could do would be to go make a bunch of changes. Just let the algo work itself out.
Considering all that Google is playing with [webmasterworld.com], it was only a matter of time before sites like yours and EFV's took a hit. On the assumptions normally made vis-a-vis good content, Google-compliant on-site and off-site practices, etc., Google should sort itself out and put your site back where it was.
"Clean" content sites that took a hit on March 23 are now making a gradual comeback.
Brett's advice is the best IMHO.
But here's what we did while traffic was bad:
moved a bunch of sites to dedicated IPs
checked/tested that all sites had a 301 from domain.com to www.domain.com
checked reverse DNS
added a timestamp to the pages
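The domain.com to www.domain.com check in that list is typically done in .htaccess; a sketch, assuming Apache and a placeholder domain:

```apache
# Canonicalize the non-www hostname to www with a permanent redirect
RewriteEngine On
RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]
```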
Some sites are back and have since seen an increase in traffic (more than before), and some are making a gradual comeback.
(the traffic improvements have nothing to do with the changes we made, although they may help in the long run)
Doing the things above was simply a way to take my mind off what was going on. I know it isn't easy, but I found that, along with creating content, more constructive than worrying about the SERPs.
I know it's hard to take but it really is a matter of waiting for google to sort out whatever mess they created.
All the best
Dazz
In another post, Google as a Black Box, Giacomo proposed that we talk too much theory.
In Google, a search for 'webmasterworld' turns up nothing for one of Brett's most famous posts.
Now do the same in Yahoo, Brett's post is the top result.
I think Brett owes us an apology for his very very bad SEO. He should go back and reword those pages immediately in order to rank better in Google.... ;)
(Edited for clarity)
After that I excluded some files in robots.txt. I did this because I have some duplicate content, which can be reached via www.mysite.com/show?id=1000 as well as via www.mysite.com/this_is_about_widgets_1000.html.
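For that kind of cleanup, a robots.txt along these lines would keep spiders off the dynamic duplicates while leaving the static URLs crawlable (the /show path is taken from the example URLs above, not a prescription):

```text
# Block the dynamic duplicate URLs such as /show?id=1000;
# the /this_is_about_widgets_1000.html versions remain crawlable
User-agent: *
Disallow: /show
```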
In my opinion our serps can't become worse, so this is the best time to do some real cleanup.
Like diamondgrl's site, ours has a lot of different content and has been hit very hard by Allegra. Now it is time to wait for a comeback, but it is very hard to stay patient.
And, yes, we still keep adding content :-)
Here is a theory -
Are you caught in the Google Glue? No way to escape the glue. Check back in a year or so.
Or are you caught in the Google Goo? Still a few trickle through from Google but not as much as before. Might be months before the goo can be removed.
One minor thing I would work on is cleaning up your meta tags, as it sounds like there may be a problem there, just so that your optimized site description shows up again.
Something else to look into in the meantime, if this persists and really begins to hurt your bottom line, would be to expand any PPC advertising you are doing in order to offset the initial loss in Google traffic.
Primarily, though, work on building out additional content; it can only help in the long run.
It turns out that even though there was a 301 on widgetville.com, there was a robots.txt file that excluded all robots, so Google couldn't see the 301. I finally figured that out after checking the .com logs and realizing Google was only requesting robots.txt, not the index page.
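For the record, a blanket exclusion like that takes only two lines; presumably the widgetville.com robots.txt looked something like this:

```text
# Tells every crawler to stay off the whole site, so Googlebot
# never requests the pages and never sees their 301s
User-agent: *
Disallow: /
```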
Anyway, so now the .com is gone, thanks to my elimination of the robots.txt file, now that Google sees the 301.
Meanwhile, the SERPs have been doing all kinds of nutty things in relation to the widgetville.org site. If I search for "Widget Ville" on Google, sometimes it comes up fourth in the SERPs, even though it is a completely unique name and my site has, since day one, always been #1 in the SERPs for a search on the business name.
I'm not sure if this domain confusion somehow temporarily sucked all the PR out of widgetville.org and transferred it to widgetville.com, in which case it will slowly be regained now that the .com no longer exists. Not sure how or if that's possible. I'm just grasping at straws.
Of course, the .com confusion might have been completely independent of the drop in referrals, and the drop may be better explained by the broader earthquake in the SERPs that others are facing. But I have a couple of other data points indicating that sites very similar to this one did quite fine in the recent shakeup, which is why I think it's related to the domain confusion.
Time may - or may not - tell.
That page/site is WORTHLESS to Google right now; that's why I think it doesn't rank anywhere. If I search for "mydomain.com" (in quotes), I'm not even in the top 300; every other stupid directory or scraper site is ahead of me. And there's only one "mydomain.com", and I've owned it for 8+ years.
As for a fix? I've tried three different ways, including the emails to GoogleGuy with the special "topic" in the subject that he requested a year ago for this problem, and quite honestly I have yet to find a workable solution. I've tried the 301 route, the URL removal route, and even moved domains to completely different servers on a totally different IP range in a different country - so far not one has worked.
Anyway, so now the .com is gone, thanks to my elimination of the robots.txt file, now that Google sees the 301.
Doesn't quite pass the sniff test.
If your 301 redirect was in .htaccess, vhosts.conf, or somewhere similar, the web server itself should've issued the redirect before there was any processing of the request for robots.txt. I would assume this was more a problem with your 301 redirection than with the presence of the robots.txt file.
Domains get a 50-50 split - or, where people own the .net, .com.au, etc., they get split across the domains.
I had that problem with a .net and .com until I did a 301 redirect from .net to the .com.
I redirected domain.net, www.domain.net and domain.com all to www.domain.com based on suggestions in WW and about 1 month later it seemed to have cleaned itself up and my SERPs appear to have gotten a small bump out of this change.
It does pass the smell test, because the .htaccess file was 301ing only the index page, not every page on the domain. So Google did see the robots.txt file, and since robots.txt told it not to spider, it never saw the 301 on the index page.
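The distinction is easy to see in .htaccess terms; a sketch with placeholder domains, not the poster's actual file:

```apache
# 301 for the index page only -- robots.txt and every other URL
# are still served directly from this domain
Redirect 301 /index.html http://www.example.org/

# versus a site-wide 301 that also covers /robots.txt
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.org/$1 [R=301,L]
```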
And how did widgetville.com get into the index in recent days if there's a robots.txt exclusion? It seems to have been resurrected from the bowels of some Google datacenter, since at one point many, many months ago there was content on the page and there was no robots.txt or 301 telling Google to go away.
Here is the story:
When checking site:example.com, I found some old ^example.com URIs which I wanted to clean up and get removed from the index, so as not to get into any dupe trouble. I don't know where these URIs came from -- perhaps sloppy external links.
I have a long standing 301 rewrite [URI permanently no longer in use, all requests should use new URI] for all ^example.com to www.example.com.
So I tried the removal tool to get rid of those old ^example.com URIs manually, and was baffled: my server sends a 301 to the removal-check bot, the bot then follows that 301 and fetches the new URI (all seen in the server's log), and finally the removal tool said the "page" can't be removed, as it is still found up.
While there are two URIs behind a 301 -- an old abandoned one and a new one -- Google just doesn't see it this way: it sees only one "page" (whatever that is), confuses the old abandoned URI with the new one, and consequently refuses to just forget about the old URI!
I finally solved this mess by inserting another rewrite rule into the .htaccess, just before the 301 redirect section, to send the google-remove-bot a specially crafted plain 410 Gone -- some sort of rewrite cloaking?
# match requests for the old hostname coming from Googlebot or the removal bot
RewriteCond %{HTTP_HOST} ^example\.com
RewriteCond %{REMOTE_HOST} \.googlebot\.com$ [OR]
RewriteCond %{HTTP_USER_AGENT} googlebot-urlconsole
# answer with a plain 410 Gone; with [G] the "-" substitution is ignored
RewriteRule ^(.*)$ - [G,L]
This worked perfectly: the bot got its 410 and was happy. It finally believed that these old pages were gone, and they have now been removed from the index. Other users still get redirected as they should.
So whenever your 301s seem to go unhonoured, you may try this hard way as a last resort.
Regards,
R.
I also have a content site with lots of links from other content sites (which I didn't have to request), and I've always ranked well in Google. It seems strange to suddenly be getting more traffic from Yahoo than Google. I can only hope Google will reverse whatever detrimental changes it made and my traffic will recover.
Has anyone used status codes other than 3xx or 2xx and noticed strange behavior with them?