
Google SEO News and Discussion Forum

A real Google conundrum
Established site losing all its Google traffic
diamondgrl
WebmasterWorld Senior Member 10+ Year Member
Msg#: 28756 posted 1:07 am on Mar 27, 2005 (gmt 0)

I have a well-established site that over the past few days has lost almost all its Google referrals. I think I know what's wrong but have no idea how to fix it.

First, some background. The site is a well-established, deep-information site with many, many thousands of pages and a PR 6 on the home page. While we have attempted to get some links, most of the hundreds of links to us are spontaneous, from a variety of professionals who find our content useful. Therefore, we're not at all dependent on the "latest" SEO tricks - totally white hat.

Up until this week, we got >15000 Google referrals a day. We are not dependent on ranking for "blue widgets" or any other single identifiable term - our referrals come from thousands of different keywords a day, which reflects the diversity of our content. Therefore, only a massive drop in the SERPs across the board could cause the >90% drop in referrals we are seeing.

We are still in the index with the same number of pages, and our backlinks don't seem to have changed. We still show the same PR throughout the site (for whatever that's worth, since any changes probably wouldn't show immediately anyway).

Here's the kicker: another site we own, let's call it widgetville.com, is showing up ahead of our real site, widgetville.org, in the SERPs when you search Google for "Widgetville". The higher-ranking widgetville.com listing is shown without title or description. Widgetville.com has been 301 redirected to widgetville.org. Widgetville.com does have a backlink or two out in the world, but not the hundreds that the real site, widgetville.org, has, so I don't understand the higher ranking.

If you search for "a bunch of widget words that you find on the front page", three other websites that quote our mission statement appear on the page, but our page doesn't. However, if you click on the link to show "omitted" results, we are listed as the omitted page.

In a way, it seems almost as if our home page has been hijacked by our own non-functioning site. It also looks like the canonical-root problem that trips up some site owners, except that in our case it is between two domains - not a matter of Google getting confused between widgetville.com and widgetville.com/index.html.

We've had this problem before - a year ago - and I queried Google about it. I was told that it was a problem on their end, not mine, and that they would fix it. The widgetville.com listing was removed, and within weeks my traffic grew from a trickle to the point where I started hiring people to deal with the blossoming new customer base. Now all of that is threatened.

So, anyone want to take a crack at explaining this or giving advice on how to handle it?

I have taken one step to see what happens: I've removed the 301 redirect from widgetville.com and put a simple sentence on the page telling visitors to click the link to widgetville.org. I did this to disassociate widgetville.com from widgetville.org, in case Google was somehow seeing duplicate content through the 301. I'm not sure how that would happen exactly, since a 301 is the preferred method of dealing with pages that are no longer valid, but this whole thing throws me for a loop.
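For anyone who wants to verify what a domain is actually serving before and after a change like this, checking the raw HTTP response is enough. A minimal sketch in Python (standard library only; the widgetville names are this thread's hypothetical stand-ins):

    # Minimal diagnostic: see what widgetville.com actually returns.
    # Domain names are hypothetical stand-ins, as elsewhere in this post.
    import http.client

    conn = http.client.HTTPConnection("www.widgetville.com")
    conn.request("HEAD", "/")
    resp = conn.getresponse()

    # With the 301 in place, this should print "301 Moved Permanently"
    # plus a Location header pointing at the .org site; once the
    # redirect is removed, a plain "200 OK" with no Location is expected.
    print(resp.status, resp.reason)
    print(resp.getheader("Location"))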

 

MrSpeed
WebmasterWorld Senior Member 10+ Year Member
Msg#: 28756 posted 2:45 am on May 26, 2005 (gmt 0)

I'd be interested to know what the phrase was that you searched for. I worded my comment to avoid saying "100% original content"!

I did search with a sentence from one of the hotel descriptions.

Google seems to be on a rampage against datafeed type sites.

max_mm
10+ Year Member
Msg#: 28756 posted 3:25 am on May 26, 2005 (gmt 0)

Google seems to be on a rampage against datafeed type sites.

I beg to differ. It should read backwards:

"Google seems to be on a rampage against the sites that datafeeds scrape content from."

fearlessrick
WebmasterWorld Senior Member 10+ Year Member
Msg#: 28756 posted 3:54 am on May 26, 2005 (gmt 0)

Once again, the scourge of the corporate world has struck. Anyone notice how well eBay stores and Amazon affiliates did on Bourbon? And how poorly Yahoo stores do? I've done dozens of searches for all kinds of things the past few days, and I found that if you're a big corporation with a trademarked brand, you're fine, no matter what (I think Mike No Name mentioned this earlier in this thread).

Also, I don't find many blockbuster pages when searching for DVDs. Is that odd? No Yahoo, no Blockbuster, black is white, up is down, day is night. Sound familiar? Big Brother is watching, and his name is Google.

A politicized search algo... interesting concept, but wrong.

In 12-18 months I expect to see plenty of "What Went Wrong" exposés on the big G and how they went down the toilet.

Too much bourbon, that's a good start.

dbr1066
10+ Year Member
Msg#: 28756 posted 6:34 am on May 26, 2005 (gmt 0)

joeduck
No CSS in layout - we've used it for text formatting for over a year, so I don't think CSS is an issue here.

MrSpeed
If you're right, there are a lot of perfectly legitimate commercial (and mixed information and e-commerce) sites across a whole range of topics that are in a whole heap of trouble. This can't be good news.

aleksl
Msg#: 28756 posted 7:40 pm on May 31, 2005 (gmt 0)

Resurrecting this discussion...

I was thinking over the long weekend about Google crapping out, and here's an original thought.

Assumption: Google$ is trying to use filters to penalize sites that get too many links at once.

Thoughts: This kind of blind thinking is going to put Google$ out of business.

Explanation: Google$ engineers seem not to grasp a simple concept - that linear progress does not exist. Everything typically evolves in strides. You create a popular website - you will get many links quickly. You open a store and promote it (and if Google$ is against "promotion" per se on the web, it is done), and you will get a lot of publicity (and therefore links).

There is no definitive way of distinguishing between a link campaign and a popular website getting many links quickly. No way, no how. A link is a link. You can theorize about its location and importance and such, and argue that FFA pages and link purchasing are wrong - but the flaw is not in those actions, it is in Google's algo.
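To make that concrete, here is a toy sketch of such a blind filter in Python (pure guesswork - nobody outside Google$ knows the real one):

    # Toy link-velocity filter - an assumption about the kind of blind
    # threshold described above, NOT Google's actual algorithm.
    def looks_like_link_spam(new_links_this_month, threshold=1000):
        # A raw count cannot tell WHY the links arrived.
        return new_links_this_month > threshold

    link_campaign = 5000   # links bought in a month
    news_feature = 5000    # links earned after a popular launch
    print(looks_like_link_spam(link_campaign))  # True
    print(looks_like_link_spam(news_feature))   # True - identical signal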

Again - this thinking is based on the original assumption above. If that is not the case, disregard it.

Our sites are down to 1% of their Google$ traffic. As it turns out, Google$ was bringing <20% of the revenue but 75% of all traffic. What a waste of bandwidth!

joeduck
10+ Year Member
Msg#: 28756 posted 12:34 am on Jun 1, 2005 (gmt 0)

aleksl -

I challenged max_mm's ideas earlier about sites getting killed at Google due to huge numbers of incoming scraper links, but I now think he may be accurately identifying the meat of the problem for many sites like those in this thread.

Like diamondgrl's site, we now see inferior sites listed above ours even when I'm searching on our own unique text content.

If Google was sloppy implementing 'anti-spam' rules, they may be downgrading sites with huge incoming link patterns. Probably 10% of those sites are legitimate ones, but that may be an acceptable ratio to them at this time.

eyezshine
10+ Year Member
Msg#: 28756 posted 1:04 am on Jun 1, 2005 (gmt 0)

I think they are not only downgrading for fast link growth but also downgrading for link shrinkage. So you get a double whammy when Google downgrades your site.

So first you get a lot of links from scraper sites, and then Google downgrades you, which causes your site to lose its rankings, which causes your links to disappear from the scrapers that use Google results. Then Google sees a decline in links to your pages and penalizes them further, which causes them to go supplemental.

It's a roller coaster ride. The next site I build is going to block all engines except Google, to see if that stops the sandbox problem. This could be the reason for the sandboxing of new sites: they get penalized before they can rank in Google because the Yahoo scrapers create tons of links too fast.
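The blocking itself would just be a plain robots.txt; a minimal sketch along those lines, allowing only Googlebot (whether it actually dodges the sandbox is the experiment):

    # Hypothetical robots.txt: let Googlebot in, shut everyone else out.
    User-agent: Googlebot
    Disallow:

    User-agent: *
    Disallow: /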

sailorjwd
WebmasterWorld Senior Member 10+ Year Member
Msg#: 28756 posted 2:04 am on Jun 1, 2005 (gmt 0)

Of the three likely reasons for my site's crash in Google, I too give strong possibility to the tens of thousands of scraper links pointing at my site.

I think this is due to previously incredible rankings in Google across thousands of key phrases - many attractive to AdSense scrapers.

As I've read in other posts, Yahoo recently gave folks the ability to link into their results. Since I had even better rankings in Yahoo across about 100 topics, I suddenly gained thousands upon thousands of AdSense scraper links via Yahoo.

Even Amazon is killing me with my AdWords URL, which is getting into the G SERPs and may be interpreted as a link - by the thousands.

And I have a few prices on my website, and now some price-grabber sites are spamming the results with redirect links to me!

So, I need to add... way to go, Google Goobers.

<added> All this in a little 250-page site... give me a break! </added>

aleksl
Msg#: 28756 posted 3:18 am on Jun 1, 2005 (gmt 0)

Funny how this world works...

1. Yahoo! opens its search API (March 1st, 2005, or thereabouts).
2. Thousands of scrapers and legit sites go and get Yahoo! content.
3. This produces thousands of extra links to sites that were top results in Yahoo!.
4. Google$ can't handle it - it takes only about three months to find that out - and its filters start penalizing sites left and right, especially those in the top 10 that don't already have thousands of backlinks.

joeduck: Probably 10% of those sites are legitimate ones, but that may be an acceptable ratio to them at this time.

It all depends on the actual ratio of good to bad sites. Example: if Google$ thinks that 70% of the web is spam, but 10% of those flagged sites are in reality legitimate (7% of the whole web), then we are left with only 30% instead of the true 37% of legit sites - i.e. about 20% of the entire legitimate web is banned. That 70% number may be high, but considering the amount of "p0rn" searches...
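Spelled out in code, with the same toy numbers (a sketch of the arithmetic, nothing more):

    # aleksl's arithmetic with the toy numbers from the post above.
    flagged = 0.70                   # share of the web Google$ calls spam
    false_positive = 0.10            # share of flagged sites actually legit
    banned_legit = flagged * false_positive    # 0.07 of the whole web
    true_legit = (1 - flagged) + banned_legit  # 0.37 of the whole web
    print(banned_legit / true_legit)           # ~0.19, the "about 20%"
                                               # of the legit web banned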

danny
WebmasterWorld Senior Member 10+ Year Member
Msg#: 28756 posted 3:44 am on Jun 1, 2005 (gmt 0)

Maybe "mostly original content" doesn't cut it anymore.

My site is 100% original content.

Well, maybe not quite 100%, since I occasionally include quotations from the books I'm reviewing. (But even there they are usually quotations that don't appear anywhere else.)

Sparkys_Dad
5+ Year Member
Msg#: 28756 posted 7:35 am on Jun 1, 2005 (gmt 0)

Reading this thread makes me feel very lucky, as my site (600 pages, only 160 inbound links according to G - I always ignore or refuse requests to trade links) has remained rock-steady throughout Bourbon. Based on traffic, May has been our second-best month ever (first was March '05).

However, as G continues to apply filter after filter to their results and steadily increases the number of variables in their algo, many a lab rat suffers needlessly. I understand very well that it may only be a matter of time before they point their needle at my kazoo. And while G often rolls back some of their filtering after they grow tired of torturing helpless rodents, there is no guarantee of this.

I'm not certain that it is wise to spend time adding fresh content to a site whose rankings have dropped precipitously, as this may very well be throwing good money after bad. There are no guarantees with G, and your problem--in their eyes--may have to do with some obscure matter of form and not content.

I would opt for the fail-safe approach. Continue to monitor the SERPs but leave the old site be, and start a brand new one. If you don't already have a back-up domain, buy one today and get 600 words of original, relevant content onto the default page pronto. Then submit it to G. Should you ever be forced to use this back-up, you will already be in the index and stand a far better chance of evading the dreaded sandbox.

Should the needle ever come, I would immediately proceed to build the new site from scratch, taking utmost care to avoid reaching back into the old site folder for anything.

Devise a new and improved text-based, robot-friendly navigation scheme. Plan a strategy for unique but logical page titles, descriptions, headers and alt tags. Keep the layout simple and completely separate form from content with a brand new style sheet. Use breadcrumbs and a site index. Wherever possible, avoid reliance upon JavaScript and Flash. Keep the ratio of content to code as high as you possibly can. Submit to Yahoo!, DMOZ and the majors, but don't trade links with anyone. Forget sub-domains, robots.txt, htaccess and redirects - I have never had the need to employ any of these and have never had a major problem with an update. Above all else, keep it content-rich but simple. The fewer variables you have, the easier it will be to dissect problems in the future. Then continue to update it, and add ONLY unique content on a regular basis.

Use what you've learned since you designed the old site to improve the new one. I bet that you'll be pleasantly surprised at how much good stuff you've absorbed just hanging around this often-dreary place.

The current landscape of the Web makes writing off Google an impossibility. This may or may not change in the future. But whatever machinations lie ahead with G, I intend to be ready for that needle when it comes.

max_mm
10+ Year Member
Msg#: 28756 posted 1:00 pm on Jun 1, 2005 (gmt 0)

I challenged max_mm's ideas earlier about sites getting killed at Google due to huge numbers of incoming scraper links, but I now think he may be accurately identifying the meat of the problem for many sites like those in this thread.

Like diamondgrl's site, we now see inferior sites listed above ours even when I'm searching on our own unique text content.

If Google was sloppy implementing 'anti-spam' rules, they may be downgrading sites with huge incoming link patterns. Probably 10% of those sites are legitimate ones, but that may be an acceptable ratio to them at this time.

Common sense really....

Let's stop for a second and look at the facts:

1) It is a known fact that G will penalize your site for gaining too many links too fast.

2) It is a known fact that G can't properly handle 301 and 302 redirects and will penalize your site.

3) It is a known fact that G is having problems telling good links from bogus ones (does rel=nofollow tell you something? why would they need such a tag unless spam were already causing major havoc with their algo?)

4) It is a known fact that G will penalize your site for dup content.

Do I need to list more facts?

And what do all the above facts have in common? They all describe the techniques scrapers use to link to your site. Millions upon millions of scrapers.

Do the math and you'll see that the real reason for our problems is right there in front of our faces.

Hope you are reading this, GG. Seems like you guys are fast becoming clueless.
All you need to do is ask Yahoo to embed a special token in the URLs they provide to their search API members AND DISCOUNT THOSE LINKS (now tagged as scraper links) on your next crawl. It might help you clean up a good number of scrapers in the process too.

aleksl
Msg#: 28756 posted 2:35 pm on Jun 1, 2005 (gmt 0)

max_mm: All you need to do is ask Yahoo to embed a special token in the URLs they provide

I don't think this will ever happen. Yahoo is VERY happy with Google choking on its API content; in fact, that one reason alone (getting rid of a competitor) would have been reason enough for them to release the API. Why would Yahoo want to help Google?

Let me add this to the puzzle, so we can all stop and think about how deep Google$ is in it:

- they are hand-editing SERPs for major keywords (remember how Yahoo! did the same a few years back?)
- they are relying too heavily on PageRank, which is manipulated by webmasters, and they don't seem to have another "know-how" to replace it.
- because of a fundamental flaw in the backlinks theory (i.e. that "good sites will have gradual increase..."), Google$ is putting patch after patch onto a sinking ship (SERP editors, URL removal, spam reports, dup-content filters, backlink growth), and this IMHO doesn't look good at all.

It would be interesting if a guru here or GoogleGuy would step up to the plate and answer these questions.

max_mm
10+ Year Member
Msg#: 28756 posted 3:06 pm on Jun 1, 2005 (gmt 0)

I don't think this will ever happen. Yahoo is VERY happy with Google choking on its API content; in fact, that one reason alone (getting rid of a competitor) would have been reason enough for them to release the API. Why would Yahoo want to help Google?

You are absolutely correct. In fact, I recall when I first checked Yahoo's API page (about a month ago) that they specifically asked webmasters not to place the Yahoo logo on the generated API feed pages without special permission.

Why did they start offering the free feeds to webmasters to begin with? What do they possibly have to gain from millions of scrapers querying the Yahoo servers every given second and posting the feeds back to their pages?

It made no sense to me when I first saw it, I must admit. Yahoo offering free feeds? I knew back then that something must be cooking.

The returned feeds are not linked to Yahoo in any way - plain page titles, descriptions and site URLs. Nothing goes through Yahoo (when the user clicks the returned URLs). What do they stand to gain from offering this service free, UNLESS the big picture includes choking a certain competitor's PageRank algo and having this competitor's flagship No. 1 money makers, AdSense/AdWords, end up on millions of spammy sites?

Check out Yahoo's API page and see for yourself (not going to post the link, but it can be easily found here) how this brilliant plan all works. It may or may not be accidental... but it sure smells.

Something to think about, Google.

claus
WebmasterWorld Senior Member 10+ Year Member
Msg#: 28756 posted 8:24 pm on Jun 1, 2005 (gmt 0)

>> Yahoo to embed a special token in the URLs they provide

*cough* They've done so from the start. Take a look at the "ClickUrl" field in the sample response on this page [developer.yahoo.net]. It's something like:

http://rds.yahoo.com/S=2766679/K=madonna/v=2/XP=yws/SID=e/l=WS1/R=2/ ... (etc.)
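That marker makes the discounting max_mm asked for almost trivial. A sketch of the check in Python, assuming a link counter only needs to recognize the rds.yahoo.com redirect host from the sample above:

    # Sketch: discount backlinks that carry Yahoo's API redirect host.
    # Assumes rds.yahoo.com is the telltale marker, per the sample above.
    from urllib.parse import urlparse

    def is_yahoo_api_link(url):
        return urlparse(url).hostname == "rds.yahoo.com"

    links = [
        "http://rds.yahoo.com/S=2766679/K=madonna/v=2/XP=yws/SID=e/l=WS1/R=2/",
        "http://www.example.org/page.html",
    ]
    countable = [u for u in links if not is_yahoo_api_link(u)]
    print(countable)  # only the plain example.org link would be counted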

joeduck
10+ Year Member
Msg#: 28756 posted 12:48 am on Jun 2, 2005 (gmt 0)

Claus - are you saying that the token means the scraper links are NOT related to Yahoo, or that they have been removed by the scraper sites, or...? What's your take on this idea that Google is penalizing incoming links from spam sites?

max_mm
10+ Year Member
Msg#: 28756 posted 1:48 am on Jun 2, 2005 (gmt 0)


*cough* They've done so from the start. Take a look at the "ClickUrl" field in the sample response on this page. It's something like:

http://rds.yahoo.com/S=2766679/K=madonna/v=2/XP=yws/SID=e/l=WS1/R=2/ ... (etc.)

claus,

I already experimented a little with the Yahoo feeds. The API returns a click URL (like the one you listed, which goes through Yahoo) and the plain, natural site URL. It is up to you, the webmaster, to decide which URL to use on your pages. Hence the Yahoo URL can be easily filtered out.

Guess which URL 99.9% of scrapers use?
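A rough sketch of that choice as the feed consumer sees it - the XML shape here is illustrative only, assuming just the two fields this thread names (Url and ClickUrl), not Yahoo's verified schema:

    # Illustrative only: a made-up response with the two fields the
    # thread names. A scraper that publishes <Url> leaves no fingerprint.
    import xml.etree.ElementTree as ET

    sample = """<ResultSet>
      <Result>
        <Title>Example result</Title>
        <Url>http://www.example.org/page.html</Url>
        <ClickUrl>http://rds.yahoo.com/S=2766679/K=madonna/R=2/</ClickUrl>
      </Result>
    </ResultSet>"""

    for result in ET.fromstring(sample).iter("Result"):
        plain = result.findtext("Url")         # what 99.9% of scrapers publish
        tracked = result.findtext("ClickUrl")  # the traceable alternative
        print(plain, "|", tracked)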

ann
WebmasterWorld Senior Member, WebmasterWorld Top Contributor of All Time, 10+ Year Member
Msg#: 28756 posted 7:59 am on Jun 2, 2005 (gmt 0)

This sounds like we are caught in the middle of warring search engines and the blame SEEMS to lie more with Yahoo than with Google.

I think I may NOT show any Yahoo ads when they come out... MSN, anyone?

ann
WebmasterWorld Senior Member, WebmasterWorld Top Contributor of All Time, 10+ Year Member
Msg#: 28756 posted 8:34 am on Jun 2, 2005 (gmt 0)

On second thought, I think I will stick with Google. :)

claus
WebmasterWorld Senior Member 10+ Year Member
Msg#: 28756 posted 12:42 am on Jun 3, 2005 (gmt 0)

Claus - are you saying that the token means the scraper links are NOT related to Yahoo, or that they have been removed by the scraper sites, or...?

Don't really know what to answer. Some scrapers scrape web pages, some scrape SERPs (among these, some scrape Yahoo, Google, MSN, etc), and others still use APIs and other feeds.

I guess I'm trying to say that "scrapers" are many different things, technically speaking. And then, max_mm is right, you can use another URL if you want to.

What's your take on this idea that Google is penalizing incoming links from spam sites?

Google (the engine) tries to identify "spam sites" and they (the algorithms) do treat those sites differently from other sites - including the links to and from. However, there are several different kinds of "spam sites" and Google does not always manage to identify all of them.

Discussion of this issue is influenced by the ongoing update. AFAIK, that one is pretty big, and I'm really not going to draw any conclusions yet. It's too early. I'm 100% sure that some of those that have disappeared will come back, though (and, unfortunately, also that some might still disappear).

the yahoo URL can be easily filtered out.

Yes, of course, but I just wanted to add to the discussion that the Yahoo people had actually done something to make it possible to identify the source (if the programmer chooses to use it).

I don't think the Yahoo Developers Network is an attack on Google. Not in that sense at least. It's something that will make geeks think about all kinds of weird things to do with Yahoo SERPs, so of course it will create goodwill and that will benefit Yahoo, but I really don't think it's something they thought of as being harmful in any way.

max_mm
10+ Year Member
Msg#: 28756 posted 2:55 am on Jun 3, 2005 (gmt 0)

claus is 100% correct in everything he/she has said above.

Yes, of course, but I just wanted to add to the discussion that the Yahoo people had actually done something to make it possible to identify the source (if the programmer chooses to use it).

I just wanted to add, regarding the above, that unfortunately most scrapers are using, or would use, the plain natural site URL and not the Yahoo clickable URL, because if they did the latter it would quickly identify their pages as SERP scrapes - something I am sure most smart spammers don't want.

Hence the need for Yahoo to force a token into the API's URLs and limit this issue. It is affecting too many websites, and I really doubt Google's ability to detect incoming links from scraped pages with more than maybe 40% accuracy.

As claus said, although many of these scraper sites are easily detectable, there is still a very considerable number that are not, and they go under the "radar" undetected, influencing other sites' PageRank as a result.

GG says that this update is not yet over; however, I am very sceptical that my sites will ever return to their original pre-Bourbon positions. Each and every one of my affected sites gained an enormous, disproportionate number of incoming scraper links and dropped off the index completely. I see evidence of a direct relation to the scraper issue: the chronology of their disappearance tracks the massive, fast growth in incoming spam links and bad redirects.

The worst part: there is absolutely nothing I can do about junk sites linking to my sites, and my 7-year-old business (which I was extremely proud of) is taking a huge hit and sinking in front of my eyes.

I wish GoogleGuy could comment on this burning issue.

fearlessrick
WebmasterWorld Senior Member 10+ Year Member
Msg#: 28756 posted 3:59 am on Jun 3, 2005 (gmt 0)

Just thinking about this Yahoo API issue...

If Yahoo was so easily able to subvert Google's SERPs, then Google is truly vulnerable - and, to an extraordinary degree, so are individual websites that depend heavily on SE traffic.

Also, if Yahoo is partially or wholly responsible for the abject failure of Google's algo (and after two weeks, despite what GG and other G supporters contend, one has to at least admit the possibility that G's algo has been severely compromised), then this would amount to merely the first salvo in what should, and almost surely will, become an extended conflict between two enormous rivals.

Sites that depend on G for traffic, for income or as an advertising venue may wish to take a step back and reassess their alliances. Yahoo is a heavyweight, and they've already revealed their own version of the publisher network, so the battle is soon to be joined in earnest.

Personally, I think Google is headed quickly to the scrapheap of internet history. They entered the game late and Yahoo, not to mention Microsoft, have significant advantages, not the least of them experience.

reseller
WebmasterWorld Senior Member 5+ Year Member
Msg#: 28756 posted 6:17 am on Jun 3, 2005 (gmt 0)

max_mm

>The returned feeds are not linked to Yahoo in any way - plain page titles, descriptions and site URLs. Nothing goes through Yahoo (when the user clicks the returned URLs). What do they stand to gain from offering this service free, UNLESS the big picture includes choking a certain competitor's PageRank algo and having this competitor's flagship No. 1 money makers, AdSense/AdWords, end up on millions of spammy sites?<

But this thing is ultimately hurting whitehat publishers, not only Google.

I have always believed in the benefits of free competition in all directions, but this one doesn't seem fair at all.
