| 3:49 pm on Oct 31, 2011 (gmt 0)|
Not all penalties or filters are reported in Google Webmaster Tools.
Let's start with the basics:
Have you checked your log file to see if Googlebot is regularly crawling your site?
Have you tried to retrieve your site as Googlebot to make sure there is no issue?
Have you checked your page headers to make sure no one added anything to them?
Have you checked your robots.txt file & .htaccess file?
Have you checked your backlinks to make sure there is no change? Drop in good links or increase in spammy links?
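The first check in that list is easy to script. Here is a minimal Python sketch (the log format and every name in it are my own assumptions, not anything from this thread): it counts requests per path whose User-Agent claims to be Googlebot in a combined-format access log, plus the reverse-then-forward DNS lookup that separates real Googlebot traffic from impostors faking the UA string.

```python
import re
import socket
from collections import Counter

# Matches a combined-log-format line, e.g.:
# 66.249.66.1 - - [31/Oct/2011:15:49:00 +0000] "GET /index.php HTTP/1.1" 200 5124 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
LOG_RE = re.compile(
    r'^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+)[^"]*" (\d{3}) \S+ "[^"]*" "([^"]*)"'
)

def googlebot_hits(log_lines):
    """Count requests per URL path whose User-Agent claims to be Googlebot."""
    hits = Counter()
    for line in log_lines:
        m = LOG_RE.match(line)
        if m and "Googlebot" in m.group(6):   # group 6 = User-Agent
            hits[m.group(4)] += 1             # group 4 = request path
    return hits

def is_real_googlebot(ip):
    """Genuine Googlebot IPs reverse-resolve to googlebot.com or google.com,
    and that hostname forward-resolves back to the same IP (requires network)."""
    try:
        host = socket.gethostbyaddr(ip)[0]
    except OSError:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    return ip in socket.gethostbyname_ex(host)[2]
```

If the counter comes back empty over a week of logs, Googlebot has stopped crawling and the problem is upstream of rankings.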
| 3:57 pm on Oct 31, 2011 (gmt 0)|
This has happened on one of my sites today too.
I'm using CloudFlare and have a correct robots.txt file in place (not disallowing any bots, etc.), though I'm wondering if it's anything to do with the recent network issues our sites here in the UK appear to be having.
| 4:46 pm on Oct 31, 2011 (gmt 0)|
Are you searching from the UK or the US?
| 7:46 pm on Oct 31, 2011 (gmt 0)|
The AdWords preview tool lets you see the SERPs in any country. It's not always 100% perfect, but overall it's a good option to use if you suspect ranking issues, IMHO.
| 9:50 pm on Oct 31, 2011 (gmt 0)|
goodroi: The last time Google cached my site was 6 days ago. I have seen the cached version and it looks right. I tried a Googlebot simulation tool, which views the page like Googlebot does. The page looked like crap because all the URIs are relative, not absolute, so the images and CSS could not be loaded. The header (the head element, right?) contains meta, script, title and link elements. Backlinks look normal, and we are not linking to any websites but our own.
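On the relative-URI point: relative references are not themselves a crawling problem, since Googlebot resolves them against the page's own URL just as a browser does; a simulator that shows broken images and CSS is simply skipping that resolution step. A short Python illustration (the page URL is a hypothetical example, not the poster's site):

```python
from urllib.parse import urljoin

# Hypothetical page URL; relative references resolve against it
page = "http://www.example.com/shop/index.php"

# Plain relative path resolves against the page's directory
assert urljoin(page, "css/style.css") == "http://www.example.com/shop/css/style.css"

# A leading slash resolves against the site root
assert urljoin(page, "/images/logo.png") == "http://www.example.com/images/logo.png"

# ".." climbs one directory, exactly as a browser would
assert urljoin(page, "../about.html") == "http://www.example.com/about.html"
```

So relative URIs alone are unlikely to explain a ranking drop.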
Regarding AdWords, I did not quite understand what you suggested I do with it, will you please clarify? Thanks! :)
KJBweb: Beats me. I am sorry to hear you are experiencing a similar problem. It sucks.
Everyone: The website has an age verification system. Until recently (I removed this today), users who are not Googlebot load a page called verify.php, whilst Googlebot is sent straight to index.php. Users have to verify their age before being sent to index.php. This happens server-side. Is this causing us to be dropped?
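For what it's worth, the routing described there can be sketched as follows (Python pseudologic with hypothetical names; the actual site presumably does this in PHP). Serving bots different content than human visitors by inspecting the User-Agent is exactly what Google's guidelines call cloaking, which is why an age gate shown to everyone (or done client-side with a cookie/overlay) is the safer pattern:

```python
def route_request(user_agent, age_verified):
    """Sketch of the age-gate routing described above (names hypothetical).

    Branching on the User-Agent so that bots and humans see different
    pages is user-agent cloaking, a pattern Google's guidelines warn
    against; an interstitial shown to *all* visitors avoids that risk.
    """
    if "Googlebot" in (user_agent or ""):
        return "index.php"    # bots skip the gate entirely -- the risky part
    if age_verified:
        return "index.php"    # verified humans reach the content
    return "verify.php"       # everyone else must verify their age first
```

Under this logic a bot and an unverified human fetching the same URL receive different pages, which is the pattern to avoid.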
UPDATE: In my original thread, I wrote that I did a “site:my.domain” search. I did it again this afternoon, and now the website shows up. However, I cannot find the website when searching for the important search phrase.
Thanks for any further help guys! :D If you are ever in Norway, I’ll pay for your beer :D
| 8:12 am on Nov 1, 2011 (gmt 0)|
I suppose it just re-affirms what I already knew: not to rely on search engine traffic.
I'm not overly fussed as this week I'd planned to launch a PPC campaign for a new string of websites on Facebook; perhaps in due time the ranking will return on its own.
| 11:44 am on Nov 1, 2011 (gmt 0)|
Do you see the URL when you filter from Nettsøk (Web search) to:
Sider på norsk (Pages in Norwegian)
Sider fra Norge (Pages from Norway)
Oversatte sider fra utlandet (Translated pages from abroad)
Interested to see if your website appears when you filter for your search term.
| 1:34 pm on Nov 1, 2011 (gmt 0)|
Seoering, thanks for suggesting that. Unfortunately, it makes no difference.
| 2:10 pm on Nov 1, 2011 (gmt 0)|
^ OK. It's just that this is what is happening to me right now: URLs disappearing from one filter but fine in another...
| 12:19 am on Nov 2, 2011 (gmt 0)|
I seriously need to sort this out. I was reading Google's own information, which says that if you get a penalty, you can request a new evaluation (a reconsideration request).
I suppose the first step I need to take is to find out if I have been given one.
Do you agree? Where can webmasters find out?
| 12:37 am on Nov 17, 2011 (gmt 0)|
We still need help. It feels like everything I built is disappearing. The activity on the website has plunged.
I need help. I am begging for it.
Thank you for your time.
[edited by: tedster at 3:43 am (utc) on Nov 17, 2011]
| 6:36 am on Nov 17, 2011 (gmt 0)|
Marius, you are one of many who were severely hit by Panda. I truly hate to be the bearer of bad news, but I think it's important to devote as much time as you can to seeking alternative traffic sources, as there is a strong possibility that you will not regain your Google position anytime soon, if ever.
Tedster has said in another thread that there is some evidence of some recovery by some people, so I suppose there is some hope, but it's hard to pay your bills on hope, so again, IMO the wisest course may be to immediately prepare for the worst (no Google recovery) and hope for the best. Do that by continuing to build the best site you can without regard to what you think that Google may want, because you'll never figure that out ~ that's the whole point of Panda (to be inscrutable).
If you can devote as much time to finding alternatives as you would to pleasing Google (which may be nearly impossible), you'll be laying the groundwork for a stronger longterm recovery. Because as others have found here, you may regain some of your positions in the next week or month or year, only to lose them again the day after. That's a recipe for constant stress ~ I know, I've been there ~ which begs the question: Is Google really worth that?
I think not.
| 6:45 am on Nov 17, 2011 (gmt 0)|
|I think it's important to devote as much time as you can seeking alternative traffic sources, as there is a strong possibility that you will not regain your Google position anytime soon, if ever. |
That is also true, and no, it's not very happy news.
I'd say Matt Cutts gave webmasters very good advice right before Panda ever rolled out when he said "chase your visitors, not Google's algorithm." That's only appropriate marketing anyway. Search engines are just a middle man, so find ways to go straight to the source!
And then comes the paradox. The more you succeed in building that kind of traffic, the more Google may reward you.
| 8:01 am on Nov 17, 2011 (gmt 0)|
Yes, I agree. It's a #*$! idea to rely on one company. People who do this (like we did until today) let that company run their website, and we did exactly that.
I am still in the process of determining whether the Panda update is the cause of the change. How can I be sure?
There is at least one thing troubling me:
The search phrase is <snip>. Try it yourself if you want :)
I did a search. The top result is a news article containing a case study of somebody claiming to sell the product; the second is a website which essentially aims to do exactly what our website does: selling the product.
I read that the Panda update would use AI to rate websites more in line with how a human would. How can it be that a website which does exactly what we do, just ten times worse, ranks 50 places or more higher? It is essentially identical to our website. The only differences: it has low-quality photographs, while ours has photographs by a top photographer in our country; it has only a few instructions on how to order the product, while we have a shopping cart; we have a top-level domain name.
We also have more information. Apart from that, the two websites are identical. I fail to see why the above differences put us on page 5. Wouldn't it be more logical for two similar websites to rank next to each other?
We are in the middle of page 5; perhaps you can find us after reading the above description :) That would be fun.
Anyway, how can you be sure Panda has struck? It sounds probable, but I think the above is a good point. Do you agree?
Also, hit number four on page 1 of the search is a blog we own. What do you reckon?
Thank you for your time.
M
[edited by: Robert_Charlton at 7:13 pm (utc) on Nov 17, 2011]
[edit reason] removed specific [/edit]
| 6:54 pm on Nov 17, 2011 (gmt 0)|
Before Panda, content was kind of king, but finding myself in the same position as you (i.e. all original photography, all our own content, and lots of it), I am starting to think that content is actually a liability. Panda seems to hate content. Maybe it is illiterate and therefore can't tell what's comprehensive content and what's spam? Or maybe it "knows" that users don't like content?
| 6:57 pm on Nov 17, 2011 (gmt 0)|
wot suggy said "content is actually a liability".
| 12:20 pm on Nov 18, 2011 (gmt 0)|
My initial thoughts
Haha! Suggy and Santapaws, that is such sad news (but probably correct) that my brain issues a humorous response to protect my consciousness against the implications of what was written.
This is my hypothesis: as you know, the majority of the world's population (and of Google's searchers) responds positively to abstractions of information sources calibrated to evoke as many pleasurable emotional responses as possible. My observation is that original content sadly rarely falls into this category. The short answer: it is too boring or unpleasurable for laymen.
I know countless examples. Everyone wants to know that the birth of the universe was big and violent, because it awakens feelings of excitement, fear or humility. Only a fraction of the population wants to know about the studies carried out to determine what the 'big violence' was.
And even fewer want to read "Origin of Species". More people want to read "The Selfish Gene" because it is a more pleasurable read (can't blame them). However, I think the result is a bit sad: pioneers are infrequently rewarded relative to the businessmen who copy original content and make it more pleasurable.
There is at least one problem with my hypothesis. Our website is in an industry aiming to arouse the viewer, aiding him to reach orgasm. The website was packed with pleasurable awakenings. I reckon our website probably induces more pleasure than the websites dominating us by 50 places or more.
I hope you enjoyed reading my thoughts. But let me try to be more useful.
Trying to be useful
Have there so far been any studies trying to make sense of what puts some websites further up than others? If nobody knows of any, I will start a thread about it in a few days.
Please let me know.
Thank you for your time! :)
| 5:16 pm on Nov 18, 2011 (gmt 0)|
Not sure about all this pleasurable stuff, but what I can tell you is I am about to launch a new version of my website (e-commerce) with the content pared down to the minimum to see what effect this has. The sad thing is I am ditching content that I know helps sell the product, because I have split-tested it with emphatic results. Unfortunately, converting 10% more is no good if that very same content helps consign you to page 44 for your own product (yes, we make it!).
| 5:45 pm on Nov 18, 2011 (gmt 0)|
@Suggy, [I am starting to think that content is actually a liability.]
I think that you could be right, and here is my reckoning:
1. Google used to say: don't copy content, and that included manufacturers' descriptions etc. They have now changed that to: if you copy content and display it better than everybody else, we may reward that, even if it is copied content.
2. It seems (especially clear with Siri) that we are moving toward a ubiquitous web, one where search is dominated by many filters and user intent becomes more defined as AI databases grow. As such, the web isn't going to be saddled with the need for lots of "content", just defining content: content that defines the experience, content that hits the points of interest the user intends to receive/experience.
In other words: "Siri, find me the best hotel in Las Vegas for under $200 on January 30th - February 2nd" isn't going to produce a monologue from Siri like:
Title: Las Vegas Hotels January 30th - February 2nd
Click here to go to Amazon.com. Las vegas hotels january 30th are so awesome. We love hotels in vegas during new years. Hotels in vegas for under $200 can be difficult during the impacted new years time. Most hotels in vegas get sold out during the dates Jan 30th - Feb. 2nd. Did we mention we loved hotels in Vegas during new years. Click here to book your trip now.