
Google SEO News and Discussion Forum

Subdomain seems to be penalized

lakr
4:11 am on Oct 4, 2007 (gmt 0)

Dear all,

It seems to me that one of my subdomains has been penalized by Google. The evidence:

1. When I search for "subdomain.example", it is no longer at the top like it used to be, even though I have gained more links to it.

2. It is no longer at the top for its own title searched in quotes: "Subdomain keyword keyword keyword"

3. It now ranks around 350 (down about 350 positions compared to last week) for a very competitive keyword where it used to be in the top 20.

My guesses for the penalty:

- It contains links to my affiliate programs, but I did use the "nofollow" attribute on those links, and many other sites using the same affiliate program are still okay.
- Keyword density: it contains many repetitive keywords: A Hotel, B Hotel, C Hotel, D Hotel... N Hotel. So "hotel" is repeated many times. I do not mean to spam Google, but I feel those repetitions are hurting me. Right?

I am not here to blame anyone, just to raise a question for everyone to discuss, so we can learn something from my case and find the best solution to this problem.

Can anyone suggest what I should do, given that all pages on my site link to this subdomain and their own rankings are still okay?

Much appreciated!

Lkr

Robert Charlton
4:57 am on Oct 4, 2007 (gmt 0)

lakr - This may be a variation of a problem that's showing up on site searches, and is also being reported in the serps thread.

See Ted's comment in this thread, which otherwise doesn't apply to your problem...

Index page near bottom on site:example.com results
[webmasterworld.com...]

When you see this, sometimes it's a temporary oddity as Google moves data around. If someone is seeing this right now for the first time, or even seeing the complete absence of the domain root on a site: query - especially when it's never been an issue before and traffic is still the same - then I'd just wait. Google is really shuffling data at the moment and in that kind of a period, stuff often happens that gets quickly straightened out....

Others are noting missing home pages in the October SERPs thread. It's not far-fetched to imagine that the default address of a subdomain might exhibit the same problem right now.

Or, as you suggest, you may have a different problem. I'd look for problems, but I'd sit tight for a while before I made any changes.

lakr
4:43 am on Nov 22, 2007 (gmt 0)

Dear all,

I seem to have found the reason for the penalty: I recently discovered that another website copied 100% of the content on my subdomain (from what I have read here, this is a proxy hack). My site is pure hand-written HTML, and I do not know how it could be hacked.

Its ranking, which was once great, is now extremely bad.

Please tell me how to deal with this problem.

Thank you very much. I really need your help now, because I am living on this site; all of my income is generated from it. So please.

Lkr

[edited by: tedster at 9:35 pm (utc) on Nov. 22, 2007]
[edit reason] spliced two threads together [/edit]

tedster
5:42 am on Nov 22, 2007 (gmt 0)

from what I have read here, this is a proxy hack

A proxy hijack is not really a flaw with your server's security. It is a flaw in Google's current methods of indexing. There are extra steps you can take to short-circuit a proxy hijack. We had a long discussion about this recently and it's archived in the Hot Topics area [webmasterworld.com], which is always pinned to the top of this forum's index page.

See [webmasterworld.com...] for help on how to defend against proxy server urls hijacking your rankings.

lakr
8:41 am on Nov 22, 2007 (gmt 0)

Thank you tedster. I also noticed that this kind of evil happened only to the websites whose content I submitted to Digg.

The website that proxy-hijacked my site also created an account on Digg. How evil they are.

Has Google done anything to prevent this?

Thanks.

Lkr

tedster
7:26 pm on Nov 22, 2007 (gmt 0)

The proxy server is not necessarily a bad guy here - it's just a proxy server. It acts as a proxy for the user agent that makes the original request.

ANY public proxy server can be used by anyone to request content from any public webserver at all. The proxy server is transparent to this process and it does not host or steal your content. Your server is still hosting and supplying the content, and simply replying to the proxy server's request. The proxy server just passes your information back to the user agent that made the original request.

The problem is that the REAL GOOGLEBOT can crawl and index any public site through any proxy server.
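
To make that concrete, here is a minimal sketch in Python of a request routed through a public proxy (the proxy address and target URL are hypothetical). The target server logs the proxy's IP address as the requester, while the user agent header can claim to be anything at all:

import urllib.request

# Hypothetical open proxy - any public proxy behaves the same way.
proxy = urllib.request.ProxyHandler({'http': 'http://proxy.example.net:8080'})
opener = urllib.request.build_opener(proxy)
# The user agent is just a request header; it proves nothing by itself.
opener.addheaders = [('User-Agent',
    'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)')]
# The target server sees this request arriving from the proxy's IP,
# even though the header says "Googlebot".
response = opener.open('http://subdomain.example.com/')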

The thread I linked to above discusses this problem. The first line of defense is to verify that googlebot user-agent requests are coming from an official googlebot IP address, as discussed here: How to verify Googlebot is Googlebot [webmasterworld.com]

Here's the information that Matt Cutts provided from Google's crawl team:

Telling webmasters to use DNS to verify on a case-by-case basis seems like the best way to go. I think the recommended technique would be to do a reverse DNS lookup, verify that the name is in the googlebot.com domain, and then do a corresponding forward DNS->IP lookup using that googlebot.com name; eg:

> host 66.249.66.1
1.66.249.66.in-addr.arpa domain name pointer crawl-66-249-66-1.googlebot.com.

> host crawl-66-249-66-1.googlebot.com
crawl-66-249-66-1.googlebot.com has address 66.249.66.1

I don't think just doing a reverse DNS lookup is sufficient, because a spoofer could set up reverse DNS to point to crawl-a-b-c-d.googlebot.com.

[googlewebmastercentral.blogspot.com...]
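
As a concrete illustration of that double lookup, here is a minimal sketch in Python using only the standard library socket module. The IP is the example from the quote above; this is an illustrative sketch, not Google's prescribed code:

import socket

def is_real_googlebot(ip):
    # Step 1: reverse DNS lookup, IP -> hostname.
    try:
        host = socket.gethostbyaddr(ip)[0]
    except socket.herror:
        return False
    # Step 2: the hostname must be in the googlebot.com domain.
    if not host.endswith('.googlebot.com'):
        return False
    # Step 3: forward DNS lookup, hostname -> IP, must round-trip to the
    # same address, closing the spoofed-reverse-DNS loophole Matt describes.
    try:
        return socket.gethostbyname(host) == ip
    except socket.gaierror:
        return False

print(is_real_googlebot('66.249.66.1'))  # True for the crawl IP above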

Getting proxy-crawled by the REAL googlebot is the situation that causes your ranking hijack problems at Google. But if the REAL googlebot requests your content through a proxy server, the IP address of the request will not match the googlebot user agent. Your server can detect this mismatch between the IP address and the googlebot user agent and refuse the proxied request. That fixes the issue.
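
For example, that refusal could be wired in as WSGI middleware - a hedged sketch reusing the is_real_googlebot() check above (the middleware name and the 403 response are my own illustrative choices, not a prescribed method):

def block_proxied_googlebot(app):
    """Refuse requests that claim to be googlebot but fail the IP check."""
    def middleware(environ, start_response):
        user_agent = environ.get('HTTP_USER_AGENT', '')
        remote_ip = environ.get('REMOTE_ADDR', '')
        if 'Googlebot' in user_agent and not is_real_googlebot(remote_ip):
            # A googlebot user agent from a non-Google IP is a proxy or a
            # spoofer, so refuse it before serving any content.
            start_response('403 Forbidden', [('Content-Type', 'text/plain')])
            return [b'Forbidden']
        return app(environ, start_response)
    return middleware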

ADDED COMMENTS
This problematic hijack process gets started because, somehow, googlebot got a proxy server url for your content and then crawled your site through the proxy. How did Google get those proxied urls? It can happen either accidentally (publicly open server logs, for example) or maliciously. But discovering how those proxy urls were provided to Google is not a fruitful defense. It also doesn't matter WHICH proxy server is involved.

If your server uses the googlebot verification technique outlined above (forward and reverse DNS look-ups), you will break the chain of events that causes confusion in Google's index.

There is a separate situation where someone scrapes your content and then actually hosts it on their own pages, often after some slicing and dicing. This is not a proxy hijack, although the impact on your rankings may be similar. The double IP verification of googlebot will not stop scraping, except in the case where the scraper request spoofs the googlebot user agent. There are many other ways to scrape content, however.

The best defense against scraped content outranking your original version is to build a strong and trusted website of your own. Scrapers have trouble outranking you, or getting your domain filtered out, unless your domain is already weakened in Google's view.
