Normally the site grows at a tempo of 200 to 500 pages a month indexed by Google and others ... but for about a week now I have noticed that my site is losing about
5,000 to 10,000 pages a week in the Google index.
At first I simply presumed that this was the unpredictable Google flux, until yesterday, when the main index page of www.widget.com disappeared completely out of the Google index.
The index-page was always in the top-3 position for our main topics, aka keywords.
I tried all the techniques to find my index page, such as allinurl:, site:, direct link, etc., but the index page has simply vanished from the Google index.
As a last resort I took a special chunk of text which can only belong to my index page: "company name own name town postcode" (a sentence of 9 words), and searched for it in Google.
My index page did not show up, but instead 2 other pages from other sites showed up as having this information on their page.
Let's call them:
www.foo1.net and www.foo2.net
Wanting to know what my "company text" was doing on those pages I clicked on:
www.foo1.com/mykeyword/www-widget-com.html
(with mykeyword being my site's main topic)
The page would not load, and the message:
"The page cannot be displayed"
appeared in my browser window.
Still wanting to know what was going on, I clicked "Cached" on the Google SERPs ... AND YES ... there was my index page, as fresh as it could be, updated only yesterday by Google itself (I have a daily date on the page).
Thinking that foo was using a 301 or 302 redirect, I used the "Check Headers Tool" from webmasterworld, only to get a code 200 for my index page on this other site.
So, foo must be using a meta redirect ... very quickly I made a little robot in Perl using LWP, adding a little code that would recognize any kind of redirect.
It fetched the page, but again got a code 200 with no redirects at all.
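The Perl/LWP robot itself isn't posted in the thread; as a rough Python sketch of the same idea (regex heuristic only, all URLs hypothetical), a fetcher that refuses to follow HTTP redirects and also scans the body for a meta refresh could look like this:

```python
import re
import urllib.request
from urllib.error import HTTPError

# Matches <meta http-equiv="refresh" content="0;url=..."> tags.
META_REFRESH = re.compile(
    r'<meta[^>]+http-equiv=["\']?refresh["\']?[^>]*'
    r'content=["\'][^"\']*url=([^"\'>]+)',
    re.IGNORECASE)

class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Refuse to follow 3xx responses so we can see them ourselves."""
    def redirect_request(self, *args, **kwargs):
        return None

def check_page(url):
    """Fetch a URL and report either an HTTP redirect (status plus
    Location header) or a meta refresh found in the body."""
    opener = urllib.request.build_opener(NoRedirect)
    try:
        resp = opener.open(url, timeout=10)
    except HTTPError as err:
        # With NoRedirect installed, a 301/302 surfaces as an HTTPError.
        return {'status': err.code, 'location': err.headers.get('Location')}
    body = resp.read().decode('utf-8', 'replace')
    match = META_REFRESH.search(body)
    return {'status': resp.getcode(),
            'meta_refresh': match.group(1).strip() if match else None}
```

If a page behaves the way the Perl robot reported, this would come back as a plain 200 with neither kind of redirect.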
Thinking foo's site was up again, I tried once more to load my page and foo's page with IE, Netscape and Opera, but always got:
"The page cannot be displayed"
Tried it a couple of times with the same result: LWP can fetch the page, but browsers cannot load any of the pages from foo's site.
Wanting to know more I typed in Google:
"site:www.foo1.com"
to get a huge load of pages listed, all constructed in the same way, such as:
www.foo1.com/some-important-keyword/www-some-good-site-com.html
I also found some more of my own best-ranking pages in this list, and after checking the Google index, all of those pages from my site had disappeared from it.
None of the pages found using "site:www.foo1.com" can be loaded with a browser, but they can all be fetched with LWP, and all of them are cached in their original form in the Google cache, under foo's cache link.
I have sent an email to Google about this and am still waiting for a response.
When you look at the source of your link, it's a redirect to *their* site, not yours.
So it would be a *huge* bug in Google to interpret a redirect to /index.php on a domain as redirect to an arbitrary page on another domain. This would make Google absolutely unusable :)
The only possible innocent explanation would be: it was a link to your site, but they changed it to their own site yesterday, *before* I looked at it, and Google takes some time to pick up the new version. This can be checked by looking at it again in a few days. But I bet in a few days the situation will still be the same, so it's cloaked.
It's a shame I can't get links from email or other websites into WebmasterWorld forum threads to work, otherwise I'd be publicising this thread all over - and not just in places frequented by webmasters and techies.
Almost time to vote with our feet and withdraw paid advertising from G and Y!, perhaps.
The "Google owes nothing to anyone except the surfer" line of argument has had its day, I believe. The web moves on and it's about time they set up a proper means of communication with individual webmasters, and I don't care how many zillion pages there are in their index - it's no excuse.
Of course they owe nothing to the webmasters. But in the surfer's interest they should equally level the playing field for webmasters.
There'll always be extremes: webmasters who'd never violate the guidelines, and ones who will do it at every opportunity. But if the undecided majority in the middle sees that it pays off and bears no risk, SERPs won't get better.
The offending site's webmaster has emailed me several times now, asking if he can help, doesn't understand, does not blackhat, blah, blah, blah.
But after some stickies back and forth with some people who are helping, I decided to try a few things to see whether this guy is telling the truth or is a really good con artist.
I searched for all the links on his site...
site:http.www.widget.com
I got a LOT of links.
I spent the last hour clicking every link to see where it went. I even found my link in there. And guess what? Out of all the links, mine was the only one that redirected: to his home page. Not sure why, but that is what happened.
That doesn't tell me much in itself because my link doesn't exist on his site anymore and a redirect to the home page would be perfectly acceptable.
But I did not find any sites that had a meta refresh going to the correct site. So one of two things has happened here: either my site was the only one that had a meta refresh originally (which I highly doubt), or the webmaster of the offending site got rid of all the meta refreshes. This seems doable. At least that is what I have deduced so far. The links were actually going to the offending site (as they should be).
BUT! And this is really BIG but...
Next I ran a lot of the links through a header checker and guess what? All of them show redirects to the correct sites. So I wonder how this is being pulled off? The links in the SERPs go to the offending site now and stop, but the header checker shows a redirect to the original site and then a 200 on that site. What The?
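The 302-then-200 chain a header checker reports can be reproduced by following Location headers one hop at a time. A minimal Python sketch (the fetch function is injected so the network layer can be stubbed; the URLs in the example are the hypothetical ones from this thread):

```python
def redirect_chain(url, fetch, max_hops=10):
    """Follow Location headers manually, recording every hop, so the
    whole chain (e.g. 302 -> 200) is visible, as in a header checker.

    `fetch` is any callable mapping url -> (status_code, headers_dict).
    """
    chain = []
    for _ in range(max_hops):
        status, headers = fetch(url)
        chain.append((url, status))
        location = headers.get('Location')
        if status in (301, 302, 303, 307, 308) and location:
            url = location  # hop to the redirect target
        else:
            break  # terminal response (e.g. 200); chain complete
    return chain
```

With a stub that 302s from the offending URL to the original site, the chain comes out as [(offending URL, 302), (original URL, 200)], which matches what the header checker showed.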
Next I randomly checked the cache for the links in the SERPs and found that almost half of the links show the original homepage cached, exactly like mine.
Next I randomly checked the offending links against the homepages using the link: command. Once again, almost half of the backlinks for the offending site are exactly the same as those of the original homepages.
So what does this all tell me? I have no idea! Whatever is happening appears to be random. The offending links are taking on the cache and the backlinks of the original sites. Why some and not others, I am not really sure.
Could be the guy was caught and he is trying to reverse the process but google is slow on showing the changes in the SERPs. Could be that he has no control over what is happening and the SERPs are just falling where they may.
Anywayz...
Thought I would do an update. I am going to keep this alive 'til I get my site back.
p.s. Sent new emails today to google. No response yet, not even the standard auto-reply.
It could just be me, since we've been doing a lot of moving, but has anyone else noticed if the specific pages apparently being "hijacked" are ones they've recently redirected themselves or made major changes to (like title or major keywords)?
All the ones I've noticed it on, in our case, are only ones which we recently 301'd and/or title-changed.
I have a theory that they may have been around far longer than we knew, but were being highly penalized in the SERPs, but are only now "bobbing" to the surface of the SERPs as we change the originals.
It has nothing to do with the webmaster doing any cloaking.
Cloaking is not required but it is possible that it is being used to cover tracks - that's all.
Here's a scary thought - one for the chainsaw.
What happens if, in addition to a meta redirect, you add a robots noindex meta?
I'd say it's an even money bet that the target page will be nuked without leaving any explanation in the Google cache.
Kaled.
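Whether a page carries both of those tags can be checked mechanically. A small Python sketch (regex-based, so only a rough heuristic; the example HTML and URLs are made up):

```python
import re

# Standard forms of the two tags discussed above.
_REFRESH = re.compile(r'http-equiv=["\']?refresh["\']?[^>]*'
                      r'content=["\'][^"\']*url=([^"\']+)', re.IGNORECASE)
_NOINDEX = re.compile(r'<meta[^>]+name=["\']robots["\'][^>]+noindex',
                      re.IGNORECASE)

def page_directives(html):
    """Report the meta refresh target (if any) and whether a robots
    noindex directive is present."""
    refresh = _REFRESH.search(html)
    return {'refresh_url': refresh.group(1) if refresh else None,
            'noindex': _NOINDEX.search(html) is not None}
```

A page combining both tags, as described above, would come back with a refresh target and noindex set to True.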
Googlebot visits this site about every 4-5 days, last seen 9/19, so it will be a few days before I know anything. I also stickymailed webdude with specifics so he can follow along and confirm things.
Googlebot only seems to treat a Meta Refresh like a 302 when a page redirects to an external page.
Eg: site1.com/bla.html => site2.com/blabla.html
Has anyone seen an internal Meta Refresh acting like a 302?
Eg: site1.com/bla.html => site1.com/blabla.html
DaveAtIFG, could you try both internal and external meta refreshes? It is my belief that they are treated differently.
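The internal-vs-external distinction can be pinned down programmatically. A sketch (hostname comparison only, ignoring ports and www-prefix variants):

```python
from urllib.parse import urlparse

def refresh_kind(page_url, target_url):
    """Classify a meta refresh as 'internal' (same host) or 'external'
    (different host) -- the distinction the posts above suggest
    Googlebot treats differently."""
    src = urlparse(page_url).hostname
    dst = urlparse(target_url).hostname or src  # relative target => same host
    return 'internal' if src == dst else 'external'
```

With the examples above, site1.com/bla.html => site2.com/blabla.html classifies as 'external', while a relative or same-host target classifies as 'internal'.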
You guys are really stupid, sorry.
I am beginning to agree with this too. Not the stupid part. However, I believe there might be other factors involved. I can't seem to put my finger on it. So far, the sites I have found that are exhibiting the cache and backlink problems seem to be random.
It's a bug in Google; I experienced it myself, because I linked to sites with a Location: redirect too.
It has nothing to do with the webmaster doing any cloaking.
As some have said, it *can* be innocent... I think that is right sometimes, though the flaw is also being manipulated in black-hat SEO, I believe, having traced IPs back to various SEO and marketing folks.
I agree with this too. It can be innocent, especially if it is a bug. But this is what I am trying to find out in my case. I don't want to flame a site that is innocent. So how do you find out? I have had some people check, and have gotten differing views on this.
I even found a cloak checker on the web, but it seemed cheesy and did not show what it found. It just basically said "yes" or "no" on the cloaking. It's hard to put any faith in that.
MikeNoLastName wrote...
It could just be me, since we've been doing a lot of moving, but has anyone else noticed if the specific pages apparently being "hijacked" are ones they've recently redirected themselves or made major changes to (like title or major keywords)?
Now, I find this extremely interesting because of 2 points. Yes, I changed the title of my homepage about two and a half months ago, from "Name of Site - Dedicated to Location Widget" to "Location Widget at Name of Site". Could there be a correlation here?
Also, I renamed some of the filenames on the site. I added 301s to all the renamed files, as per other threads and advice. I thought nothing of this at the time; I knew it would take a while for the SERPs to straighten out. BUT, I posted in other threads about a problem I thought I was having with googlebot and the 301s. The problem appeared to be two-fold. First, every time googlebot started a deep crawl, it would get to one of these redirects and then just stop. No more crawling of any other pages. It would then repeat the process the next day: crawl until it hit a 301, then stop. The next problem was that googlebot, in random fashion, would keep coming back and try to crawl the 301s. Just the 301s and nothing else. It was as if googlebot was having problems with the redirects. I checked the redirects with a header checker and it showed the 301s were being issued correctly; googlebot just couldn't seem to follow them.
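For reference, a 301 for a renamed file like the ones described above is typically set up in an Apache .htaccess file (assuming Apache; the filenames here are made up, as the thread doesn't give them):

```apache
# Permanent (301) redirects for renamed files -- hypothetical names
Redirect permanent /old-widgets.html http://www.widget.com/location-widgets.html
Redirect permanent /old-about.html   http://www.widget.com/about-widgets.html
```

A header checker run against /old-widgets.html should then report a 301 with a Location header pointing at the new filename.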
If this is related in some way as to why this offending site has replaced my link in the SERPs and shows the exact same backlinks and a cache snapshot of my homepage, I am at a total loss. I would just like to get this mess straightened out.
I stuck a couple of test pages up tonight to see how Google reacts to a 301, a 302, and a meta refresh. These are on an old and stable site. A header check on the page with the meta refresh returns a 302 response.
Thanks DaveAtIFG. This is going to be interesting, and maybe we can get to the bottom of this.
The affiliate is running their link to the client through a click-management script, and the affiliate site has a VERY high 'rank' within G. At this point, my client is going to end the relationship with the affiliate if they do not change the way they are linking, even though they are a KEY partner.
They made this decision because of the last email reply from G on this issue, which basically said that both URLs (the client URL and the affiliate URL) have the same content, so G is listing the one with the higher 'rank'.
I have to admit to being blown away by the reply from G, and I'm hoping it was from a low-level customer support person who was just wrong...although it would explain the existing behavior of G on this issue.
In this case the client's URL has been hijacked by another site with a higher 'rank', and G acknowledged it...and even explained it with a comment about the 'same content'.
They didn't address the fact that there is really only one page/site of content and that the affiliate is just re-directing.
I can't believe this is the way they will leave things, because it just doesn't make sense.
I found a cloaking checker that does give you the HTML of the pages it's looking at. When I used "Googlebot" as the user agent, the only differences I could find were that Googlebot doesn't see one of the two cookies (the tracking cookie). Of course, I don't know if Google doesn't see any tracking cookies, or just in this case. Also, Googlebot doesn't get the mySQL database error. Anybody?
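The basic test such a checker performs is just fetching the same URL with two different user agents and diffing the results. A Python sketch (the fetch callable is injected so it can be stubbed; the user-agent strings are illustrative):

```python
def cloak_check(url, fetch,
                agents=('Mozilla/5.0 (browser)', 'Googlebot/2.1')):
    """Fetch `url` once per User-Agent and report whether the returned
    bodies differ -- the core of a cloaking check.

    `fetch` is any callable (url, user_agent) -> body string.
    """
    bodies = {ua: fetch(url, ua) for ua in agents}
    return {'cloaked': len(set(bodies.values())) > 1, 'bodies': bodies}
```

Note that a user-agent swap alone cannot catch cookie-, session-, or IP-based cloaking, which may be why people here keep getting differing answers from such tools.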
I checked lots of the sites that this directory links to earlier, but I could never find any other instance of hijackings. Just my site.
Today, the other site still shows up in Google when searching for my domain name. I did try the Google URL remover, but it said it couldn't remove it because it's a live link.
Since the site removed the link when I asked them to, and the link and cached pages of my site disappeared shortly thereafter, I can't see filing a DMCA complaint. There is no cache when searching for my domain name, and when you try to get any page that doesn't exist on the server of the directory that linked to me, you get the same page that shows up when you search for my domain name. It seems my problem is now solely with Google still showing that site instead of mine when searching for my domain name, and I don't know how to make it go away, since Google won't help.