Forum Moderators: open
I am talking about traffic of nearly 7k from Google every day, so it's a sizable decrease.
Looking for early answers on how we could check things.
I'm still sticking with theory number 2 on message 120.
[webmasterworld.com...]
Seems to be gaining ground, along with an improved dupe content filter.
Cabbie's post was really funny! It's nice when someone uses humour to make a point. Absolutely agree with all of that! But some may not have gotten the joke... and it's just not our style to poke fun at beginners... But what do I know!
<grumbling>Nobody EVER listens to me. Hmmm... Well maybe there's a good reason for that... </grumbling>Naw! :)
What was the darn topic of the thread? ;)
1) shopping site
2) free printables site
3) party supply site
All have white backgrounds.
All have full keywords, meta etc.
All were top in the search engines for about 3 years.
Keywords on pages were the right amount.
Information & offers were on all pages for each topic.
Robots file is not blocking.
Google has crawled only one page per day since the crash, and it's the same page each time.
Some pages that had a PageRank of 4 are now showing 0.
One site is completely content, one is half and half, and the other is just shopping.
All three are different topics.
What I don't understand is that they are all different from each other and were tops in the search engines for years as we kept updating to google's rules. Now "Pffft" gone.
I know... I tried it. :(
I just hope my cat doesn't die on the day I get my traffic back. 'Cos the next time I lose 70% of my traffic I'll be looking for a cat to kill.
Thanks for the kind words, Dave. :)
"If you cross-link disproportionately between multiple host-names you may be wiped out in Google for all the terms contained in your anchor text"
As an explanation for "disproportionately" a number of probably 100+ was given.
As the traffic drop occurred while PR and backlinks were being updated, this might be a clue about what's happening.
Two of my sites have lost about 90% of their traffic. Both are quite heavily cross-linked to other sites (that offer related products, so it's just a service to generate sales and not a PR trick).
On the other hand, both sites have mirrors in different languages on separate domains, and those mirrors are still getting the same amount of traffic. That's what puzzles me most: it looks like something happened at random.
* more aggressive duplication filter (I'm not posting reviews to Usenet any more, some archive/mirrors of old posts are ranking above my own pages!)
* devaluing of internal link anchor text (possibly with some kind of penalty for link structure and a lack of outgoing links)
From the SERPs I have seen, Google has lost a lot of ground to competitors.
I am seeing nothing but spam and poorly SEO'ed sites at the top of many industries.
50 links in hidden text, spam keyword lists in noframes tags, and the list continues.
If this is the way forward, then the internet is in trouble.
I've been doing some light reading over the last two days and this could also be true.
Apparently an easy way to find duplicate sites is to look at their link structure and not their content.
I would imagine that if there were 50,000 sites with internal links that had the anchor text "furry green widgets for gerbils", the target pages would be classified into one set, and possibly only one result would be pulled out of that set. Similar URLs end up pointing to similar content (which is perhaps why Google keeps telling us to use "-" to separate words, when we know other separators work).
So, how about this.
<a href="/furry-green-widgets.html">Buy furry green widgets</a>
Remove all the stop words, generic words and noise from here.. and you're left with a page very specifically about "furry green widgets".
This could take out a lot of the datafeed driven sites and sites in travel etc, which have relied heavily on internal anchor text.
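To make the theory above concrete, here's a minimal Python sketch of how pages might be grouped by normalized internal anchor text. The stop-word list, the example anchors, and the domain names are all hypothetical; nobody outside Google knows how (or whether) such a classifier actually works.

```python
# Hypothetical sketch: strip stop/noise words from internal anchor text
# and group target pages by what's left. Pages whose normalized anchors
# collapse to the same phrase land in one set, from which a dupe filter
# might surface only a single result.
from collections import defaultdict

# Illustrative stop/noise words only -- not a real list.
STOP_WORDS = {"buy", "cheap", "best", "the", "a", "for", "online"}

def normalize_anchor(anchor: str) -> str:
    """Lowercase the anchor and drop stop words, keeping the topic words."""
    words = [w for w in anchor.lower().split() if w not in STOP_WORDS]
    return " ".join(words)

def group_by_anchor(links):
    """links: iterable of (anchor_text, target_url) pairs."""
    groups = defaultdict(set)
    for anchor, url in links:
        groups[normalize_anchor(anchor)].add(url)
    return groups

links = [
    ("Buy furry green widgets", "site-a.example/furry-green-widgets.html"),
    ("Cheap furry green widgets online", "site-b.example/widgets.html"),
    ("Gerbil care tips", "site-c.example/gerbils.html"),
]
groups = group_by_anchor(links)
# Both widget pages collapse into the single "furry green widgets" set;
# the gerbil page stays in its own set.
```

Under this (assumed) model, two datafeed sites using near-identical anchor text would end up competing for one slot, even with different page content.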
Yeah, I know I'm going to get slapped around for focusing on this... BUT
Iocaine Powder's basic meta-strategy is simple: Use past performance to judge future results.
and
One must remember, when participating in a contest of this type, that we are not attempting to model natural phenomena or predict user actions; instead, these programs are competing against hostile opponents who are trying very hard not to be predictable. <snip> Since it's the "dumb" entries that make RoShamBo interesting, the challenge is mostly to figure out the things entrants who are not very bright will think of.
Cool for me .. I'm generally not very bright with SEO techniques and do the dumbest things which well .. *shakes his head*
<added>Anyone remember when we were told that allinanchor or allintext (sorry... I barely knew that command then) was good for your results? What happened? Everyone focused on that... and then...</added>
[edited by: shri at 2:58 am (utc) on Aug. 12, 2004]
The way my site works is that pages are created via template, and content fills itself in little by little (user driven content) over time.
Pages with content are showing fine in the SERPs. Pages without content are showing as URLs only.
Previously, any new page would be indexed and would appear normally in the SERPs, regardless of how much content had been filled in.
It would make sense that internal links from those low-content pages classified as URL-only would be devalued in Google's eyes, hence the across-the-board lower rankings for my site.
Seems to be amped up dupe content filtering, leading to decreased internal linking clout.
A company with offices (or co-conspirators) in Uruguay, Mexico, and Brazil may be part of the cause of this. They have copied hundreds of sites and cloaked their porn site using the content of these hundreds of other sites.
If you find the offending sites in the index via a detailed enough search for your site, you will see that the cache is a copy of your site, while the real page is a fake front page that relates to the topic of your site but is really a front end to their porn site signup.
Both Google and Yahoo have been fooled into accepting the duplicate pages into their indexes and hitting the original or the copy or both with a duplicate content penalty.
E-mails to both Yahoo and Google have been ignored.
That's a very intriguing theory. Certainly, I've seen more and more excerpts of my site showing up in the SERPs (with better positioning than my page which they link to). Never dug deeper to check for full copies. How would we check this network's cloaked pages? Could you PM me the network's address?
I have seen this on 3 websites as of now.
Cheers
Copper
Certainly, I've seen more and more excerpts of my site showing up in the SERPs (with better positioning than my page which they link to).
I've found excerpts (or whole pages) from my site on other sites, too. One blatant offender is in Georgia (the republic, not the U.S. state). I just ran across some of my content on a "scraper" site in Russia whose Webmaster address (found on WHOIS) is at a teen porn domain.
I don't know how Google can determine whose duplicate content came first. Is it practical for Google to maintain its own "Wayback Machine" for everything on the Web? I suppose one technique that Google could use would be to look for things like shady SEO techniques, where inbound links are coming from, whether pages appear to be computer-generated "scraper site" pages, etc. and make a judgment based on the perceived overall legitimacy of each site. If Google isn't trying something like this now, maybe it should be doing so.
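As a thought experiment on how an engine might even flag copies in the first place, here's a small Python sketch using word shingles and Jaccard similarity. This is purely an assumption on my part, with made-up example text; nobody outside Google or Yahoo knows what their duplicate filters actually compute, and this says nothing about how they'd decide which copy came first.

```python
# Sketch of near-duplicate detection via k-word shingles and
# Jaccard similarity (a common textbook approach -- assumed here,
# not anything Google has confirmed using).
def shingles(text: str, k: int = 3) -> set:
    """Return the set of k-word windows in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Overlap of two shingle sets: |intersection| / |union|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

original = "green widgets keep gerbils happy and healthy all year round"
scraped = "green widgets keep gerbils happy and healthy all year long"
unrelated = "party supplies shipped overnight to your door"

sim_copy = jaccard(shingles(original), shingles(scraped))
sim_diff = jaccard(shingles(original), shingles(unrelated))
# sim_copy is high (most shingles are shared); sim_diff is 0.
```

A filter built on something like this could flag the scraper's page as a duplicate of the original, but the similarity score alone is symmetric; it gives no clue which page is the copy, which is exactly the problem described above.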
[edited by: europeforvisitors at 6:40 am (utc) on Aug. 12, 2004]