Forum Moderators: Robert Charlton & goodroi
Google Updates and SERP Changes - April 2011
[edited by: tedster at 4:42 pm (utc) on Apr 1, 2011]
My domain didn't even rank for its own name. Do you see your domains ranking at all?
They were not even ranking for their own domain names, so it was the usual -50 box. I checked all of them, and not a single site was ranking for its own domain name.
Has anybody tried really thinning a domain down to just a couple of key pages? I have just been hit hard by the UK update, and I'm seriously considering 404ing 30,000 of my 31,400 indexed pages.
Has anybody recovered by doing this?
It seems that trimming the site will help you recover from Panda, if I am reading this right.
1. Removed Google AdSense
2. Removed Google Analytics
3. Removed most of the thin pages
4. noindex/follow on many thin pages
5. Added 80% images, 20% videos
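For step 4, the standard way to keep a thin page out of the index while still letting crawlers follow its links is a robots meta tag in the page head. A minimal sketch (the tag itself is Google-documented; where you place it in your templates is up to your site):

```html
<!-- In the <head> of each thin page: excludes the page from the
     index but still allows link discovery through it -->
<meta name="robots" content="noindex, follow">
```

Note that the page must remain crawlable (not blocked in robots.txt) for Googlebot to see this tag at all.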
Amit Singhal: Well, we named it internally after an engineer, and his name is Panda. So internally we called it the big Panda. He was one of the key guys; he basically came up with the breakthrough a few months back that made it possible.
Beginning roughly Thursday (April 14), I've been seeing that, for a given query, Google appears to be shifting the entry page it returns for a site toward pages it decides are more useful. It may have started earlier.
What should we do with PHP pages whose main purpose is to redirect to another site depending on the parameter passed? Should these links be put in a directory that is blocked by robots.txt? I can see that the spiders are indexing all the variations of these pages: several hundred, to be exact. Do these count as thin pages?
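If you do go the robots.txt route, the block is a one-line disallow on the redirect directory. A minimal sketch, assuming the redirect scripts live under a directory called /goto/ (that name is only an example, not from the thread):

```
# robots.txt - stop crawling of the redirect directory
# (/goto/ is a placeholder; use your actual directory)
User-agent: *
Disallow: /goto/
```

The trade-off, as the next poster describes, is that blocked URLs can linger in the index, because Googlebot can no longer fetch them to discover a 404 or a noindex.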
I have often changed out affiliate links. When I deleted links from the redirect file, Google wasn't able to discover the 404s (the directory was blocked in robots.txt) and continued to index them. Over the years, dozens accumulated. At the time of Panda, 30 redirect links were indexed in Google, and 90% of them were 404s (and had been for over a year, but Google didn't know it). I have since removed the robots.txt denial, and the dead redirect links have been deleted from the index. Only 2 valid links remain, but I am trying to figure out the best way to handle this in the future. If I were to add the deny back into robots.txt, those links would simply reappear in the index, because Googlebot keeps requesting them from memory; if it can't encounter a 404, it will reindex them.
Can a noindex, nofollow be added into a PHP redirect file, above the redirect script?
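Yes: since a redirect page never renders HTML, the equivalent of the meta tag is the X-Robots-Tag HTTP header, which Google has supported since 2007. It must be sent before any output, so it goes at the very top of the script. A minimal sketch, assuming a lookup-table style redirect file (the $destinations map and the 'id' parameter are illustrative, not from the thread):

```php
<?php
// Redirect script that also tells crawlers not to index the
// redirect URL itself. Headers must precede any output.

// Hypothetical map of affiliate IDs to destination URLs.
$destinations = array(
    'partner1' => 'http://www.example.com/',
);

$id = isset($_GET['id']) ? $_GET['id'] : '';

// Keep this URL out of the index and pass no link signals.
header('X-Robots-Tag: noindex, nofollow');

if (isset($destinations[$id])) {
    // Valid link: send the visitor on their way.
    header('Location: ' . $destinations[$id], true, 302);
} else {
    // Deleted or unknown link: return a real 404 so Googlebot
    // can finally see it and drop the URL from the index.
    header('HTTP/1.1 404 Not Found');
}
exit;
```

Because the header approach needs Googlebot to actually fetch the URL, it only works if the directory is *not* blocked in robots.txt, which also solves the stale-404 problem described above.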