I read in another thread that you wrote that you have a recip links page. That is probably what is causing your site some grief.
In addition, having reciprocal links (or a recip links page, or even a whole directory with links) is NOT what causes this phenomenon. There are sites with reciprocal link pages and even directories with a percentage of recips that are untouched and have top-notch rankings. And that is a verifiable fact.
Remember, the algo is completely automated with very little human input. You probably need to take a long hard look at who you're linking to and whether they are spamming.
Remember, Google guidelines state not to have your site link to bad neighborhoods. If one of the sites you are linking to is spamming Google, it can have a drastic effect on your site. Check to see if all the sites you link to are following Google guidelines. If they are not, you might want to drop that particular link.
If a site is SPAMMING through a pattern of linking out to bad neighborhoods, it'll cause a problem for the SITE - not for individual content pages that simply aren't ranking. That is not what's happening here, not by any means.
I don't know how many times it has to be repeated and requested: please don't accuse anyone hit by this phenomenon of somehow spamming. There's no basis in reality for it, and it causes unnecessary stress that's unfounded and unjustified. Trying to help is always appreciated, but this is serious; it's no place for folks to be tilting at windmills.
[edited by: tedster at 9:16 pm (utc) on Feb. 27, 2008]
Two of my domains that have suffered have both been used in email spamming recently - I've received loads of returned email but just ignored it.
If your server has been compromised and they are sending the messages through your server, then maybe (and that is a big maybe) that could be used as some part of a score.
But with it being so ridiculously easy to forge email headers it isn't reasonable to think that a search engine would apply any negative rank to a site based on the "from" address of spam email. It would be just way too easy to abuse and take out their competitors.
Just to pick up on your previous point - I have no recip links on my site - the only outbounds (about 20) are to big, well established, trustworthy sites, yet I'm still getting it in the neck - unlike others, I've yet to see any kind of bounce back either!
[edited by: LineOfSight at 9:27 pm (utc) on April 17, 2007]
[edited by: trinorthlighting at 9:47 pm (utc) on April 17, 2007]
It was good for me at the time - the quality of the links was very poor, with many of them off topic. I made a post previously [webmasterworld.com] saying that 4 out of the 10 main competitors of mine that have been unaffected all use some kind of recips / link directory, so I'm not sure that recip links are a bad thing; done correctly, they do work in your favour.
[edited by: LineOfSight at 9:50 pm (utc) on April 17, 2007]
I really doubt you will find a "recip link" page or "exchange link" page on any of the sites in the number one position.
I'm not getting involved in the spat here, just trying to get to the bottom of this....just checked for an uber competitive single keyword in my sector and the #1 site does use a 'directory' for generating recip links. That said, it doesn't contain too many links so there's not a great deal of margin for links to bad neighbourhoods at this stage
Where is the pattern in bull in a china shop?
Where is the pattern in a drunk wandering through a tea party?
There is no pattern. There is a large amount of collateral damage that is not happening by design or desire.
The -950 penalty (it'd be nice if folks would find another thread if they want to talk about other stuff) is not something that is mystical or hard to put a finger on. It is for the most part very precise: pages are penalized to an area of the results. It seems clear that penalization to this place can occur for different reasons, and by close to pure accident, but it wouldn't be right to assume that was all there was to it either. There could even be a "950 score" in the algorithm, where adding up scored points from any of dozens of algo elements could trigger the 950 penalty, from "too much varied anchor text" to "too many synonyms" to "has a purple background".
The pattern is that there is no pattern. That seems to be very hard for a lot of people to swallow, but it jibes with the way Google has done things for five years. There are a lot of aspects to the phenomenon that can be analyzed, and things to try to minimize your risk (different things for legit sites than for spam sites), but thinking you can fundamentally analyze the "patterns" of a staggering drunk is a fool's errand.
It seems clear that penalization to this place can occur for different reasons, and by close to pure accident, but it wouldn't be right to assume that was all there was to it either.
Those are not typically gov dominated serps, but have some gov results near the end.
1. There are some very high trustrank "domains" that are getting thrown into this on a "page" basis, which would explain the .gov issue... trusted everywhere, but the internal pages (at least for .edu/.gov that I've found) just aren't cutting the mustard for this penalty / reranking / blooper. Whether it is due to the pages themselves not having enough external trust thrown at them, or the pages being considered off-theme from the root domain, we really can't say with 100% certainty yet.
2. Doing my own little tests with on-theme top 10 ranking sites linking to a site exhibiting the end of serps phenomenon -- no change. If it is external on-theme link related, that isn't the only factor, which means there isn't a simple cure all.
I've been sidetracked with some other stuff, but has anyone performed the following analysis yet?
a. pick 3 sites from the top 10 and 3 sites that appear out of place in the end of serps.
b. pull list of unique domains from the localset of that phrase (in my instance, would be roughly 940).
c. how many links from the localset do each of those sites in section a have? Is it abnormal?
Run it for 30-odd phrases, and if nothing is out of the normal range, then localset links probably aren't a reason for the reranking; if something is out of the normal range... well... definitely add it to the equation. The problem, as mentioned before, is that the localset could well be hundreds of thousands of sites due to co-occurrence phrases.
If no one has done this yet, I'll have one of my programmers kick off some scripting to see what we come up with.
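For anyone who wants to script the steps above, here is a minimal sketch of the localset-link count (steps a-c). All the data here is hypothetical placeholder data: in practice the localset domains and each site's backlink lists would come from your own crawls or link-data exports, and `domain_of` is a naive normalizer, not a real registrable-domain parser.

```python
from urllib.parse import urlparse

def domain_of(url):
    """Naively normalize a URL to its host, stripping a leading 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def localset_link_count(site_backlinks, localset_domains):
    """Count how many distinct localset domains link to a site."""
    sources = {domain_of(u) for u in site_backlinks}
    return len(sources & set(localset_domains))

# Toy data: compare a top-10 site against an end-of-serps site.
localset = {"example-hub.com", "topical-site.org", "nichedir.net"}
sites = {
    "top1.com":    ["http://example-hub.com/a", "http://www.topical-site.org/b"],
    "bottom1.com": ["http://unrelated.biz/x"],
}
for site, backlinks in sites.items():
    print(site, localset_link_count(backlinks, localset))
```

Run it per phrase, collect the counts for the top-10 group and the end-of-serps group, and compare the two distributions; a consistent gap across the 30-odd phrases would be the signal worth adding to the equation.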
When I click on the Information Retrieval link I get results 1-100 with Berkeley in the top 100 twice (no filtering), wikipedia at 1 and 2, and another wikipedia listing in the top 100.
I can't find Berkeley at the bottom (900+), but do find one page from wikipedia, which is their fourth result for the search terms. (Does this mean Berkeley bounced back on a different data center or something? I didn't see Berkeley at the bottom with no filtering and 10 results either.)
I repeated the same search, 10 results per page, scrolled to the end, looked at the last 50, and the only site I saw that made me pause a little was a umd.edu page, but it was from 2005.
Do you need to look at 1000 results with the filtering off to see if there is a penalty?
(Like the wikipedia page above?)
I haven't had an issue with this, but would like to understand in case it's something I run into.
Sorry if this is a dumb question, but I don't know any of the sites for the searches appearing to be penalized, so the Berkley one is the only one I have a chance of getting my head around.
[edited by: jd01 at 4:19 am (utc) on April 18, 2007]
(After the 480lb woman story of course.)
Talk about a focused page. Don't view the source or calculate the keyword density.
I figured it out. It's the visitor behavior portion. I've been staring at the screen shaking my head for almost five minutes, wondering could it be so simple? 24 words in an H1, 2 photos with keyword names for each (no alt tags), 3 word title, 16 lines of code, no outbound links. I just figured out SEO!
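Since keyword density keeps coming up (and despite the warning above not to calculate it), here is a minimal sketch of the naive calculation: occurrences of the keyword divided by total words. The sample text is made up, and nothing here reflects how Google actually scores pages.

```python
import re

def keyword_density(text, keyword):
    """Naive keyword density: keyword occurrences / total word count."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

sample = "Mouse traps for every mouse. Buy mouse traps today."
print(round(keyword_density(sample, "mouse"), 2))  # 3 of 9 words -> 0.33
```

On a page like the one described (24-word H1, 3-word title, 16 lines of code), a count like this gets absurd fast, which is rather the point.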
I was really hoping to find the link to "480-Pound Woman Dies After Six Years On Couch".
It helps to realize how ludicrous this is getting. It makes me feel like spending more time on developing content and less on how erratic Google is getting.
Sorry, my error in posting the link Justin.
No worries on the link Marcia.
I wasn't seeing berkeley.edu there, and was wondering if this meant it was 'bouncing' on some DCs?
IOW recovering naturally, etc., because I'm almost sure that if you saw Berkeley at the bottom, you saw Berkeley at the bottom.
If not, the wikipedia example is fine. Just something I can get my head around so I know what to look for and how to look.
I was also wondering if the 'penalty' is applied more often to sites with more than one page on the topic?
EG wikipedia has 4, one of which ranks *very* poorly considering the site it is on.
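For "knowing what to look for and how to look": once you've exported the results (e.g. the top 1000 with filtering off) as a list of (rank, url) pairs, a one-line scan shows every position a domain's pages occupy. The results list below is toy placeholder data, not a real SERP.

```python
def positions_of(results, domain):
    """Return the ranks at which any page from `domain` appears."""
    return [rank for rank, url in results if domain in url]

# Hypothetical export: same site at the top AND 900+.
results = [
    (1,   "http://en.wikipedia.org/wiki/Information_retrieval"),
    (2,   "http://en.wikipedia.org/wiki/Relevance"),
    (947, "http://en.wikipedia.org/wiki/Some_page"),
]
print(positions_of(results, "wikipedia.org"))  # -> [1, 2, 947]
```

A spread like [1, 2, 947] - strong rankings plus one page parked at the end of the serps - is exactly the pattern being described for the wikipedia example above.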
A bunch of notes and edits. Not sure why, feeling some sarcasm I guess.
So, you plan to spam the search results for 'mouse' on your way out?
"Senior member of webmasterworld.com found with Mouse in hand…".
Maybe I'll take Needle… ever since I read 'spindle' on Matt Cutts' blog I've been tempted to stick one through my monitor while viewing his site, to see if I can poke a hole in his page.
"Senior member of webmasterworld.com found with Needle in screen… note said, 'Take that Matt!'".