Forum Moderators: Robert Charlton & goodroi
My site, like many others, has gone supple"mental". I have tried everything short of completely rebuilding the site (it's huge, so that would be quite an undertaking, and besides, Goog used to like it fine). I have used robots.txt to block undesirable URLs and used .htaccess 301 redirects to fix my www vs. non-www problem (which never seemed to matter before). I even joined bloody Sitemaps to get a better view of what was going on. According to Sitemaps, I am error free. All of this has added up to squat.
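For anyone fighting the same www vs. non-www split, the .htaccess fix is usually a mod_rewrite rule along these lines (a sketch only; "mydomain.com" stands in for your actual hostname, and mod_rewrite must be enabled on the server):

```apache
# Permanently (301) redirect bare-domain requests to the www host,
# so Google sees only one canonical version of each URL.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^mydomain\.com$ [NC]
RewriteRule ^(.*)$ http://www.mydomain.com/$1 [R=301,L]
```

The R=301 flag is what matters here: a permanent redirect tells Google to consolidate the two versions, where a default 302 would not.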
My earnest, desperate question is: what do I do next? I'm about ready to spit.
My site has been around for almost 6 years, consists entirely of quality original content, and does nothing black hat. We regularly produce new, timely content, but no one sees it because it gets no Google love. We have a hard time getting one-way links because no one finds our articles unless we tell them about them.
Is there anything I can do? My site was dinged by the update last spring and killed by BigDaddy. We are in supp hell and dying on the vine. </rant>
Any advice is greatly welcome. Bless this forum and its cadre of experts.
When I do a site: search for my domain, I pull up page after page of supp results, followed eventually by non-supp results. Interesting thing I just discovered, though: when I leave out the ".com" and just search for site:www.mydomain, I get all the non-supp pages first.
I have seen pages enter the index and then drop out without a trace the next time Google twisted a knob. I have not been scraped or duplicated elsewhere on the web that I can tell.
[webmasterworld.com...]
[edited by: FrostyMug at 5:30 pm (utc) on July 9, 2006]
I do get a "view omitted results" link after about 4-5 pages of supps. When I click it, I get page after page of supps, and then after about 30 pages of this I start getting non-supp results.
The supp results I see are a mix of real, proper URLs for my pages and incorrect versions created by my "force into frames" JavaScript. Those wrong URLs have been blocked by robots.txt for a few months now.
The difference in the two is this:
[mydomain.com...] is a good proper link.
[mydomain.com...] - is the wrong version caused by the javascript redirect.
Like I said earlier, the wrong ones are blocked by robots.txt, and my Sitemaps stats confirm this. But still they persist and trip me up.
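For reference, the robots.txt blocking looks roughly like this (the Disallow path here is hypothetical; substitute whatever prefix or pattern actually distinguishes the frame-generated URLs on your site):

```
# robots.txt -- keep crawlers out of the wrong, frame-generated URLs.
# "/frames/" is a placeholder pattern, not my real path.
User-agent: *
Disallow: /frames/
```

One caveat worth knowing: robots.txt only stops crawling. URLs Google already discovered can linger in the index as URL-only entries, which may be part of why the blocked versions still show up in site: searches months later.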