I was hit hard by the Dec 7th change, and I know several others who were too. I'd like to get some feedback from those affected and hopefully track down what our sites have in common so we can do something about it. Here are what I believe to be the relevant stats for my site. Please post some stats about yours and whether or not you were affected (positively or negatively) by this change.
Type of Site: credit cards
Age: 2.5 years
Inbound Links:
- 50% from syndicating articles
- 20% bought
- 10% from link directories
- 20% "natural"
Outbound Links: Only one, but it's a ROS (run-of-site) exchange with an Australian credit card site
Has outbound affiliate links: Yes
Does url re-writing: Yes
Sub-Domains: One for a forum
Rank before: 1-5 for primary keywords
Rank after: 18 - not found for primary keywords
Most of my traffic is from other sites anyway, as it is the main site for the topic. So looking at the stats it is pretty clear I can live without them.
They penalize me, and yet still crawl all over my site day in, day out? Every few hours, in fact. Talk about barefaced cheek.
Well it stops here, as I have just blocked them with robots.txt. Why should they take my bandwidth when all they do in return is insult me by chopping me out of their index?
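For anyone who wants to do the same, a minimal robots.txt sketch that shuts out Googlebot while leaving other crawlers alone would look something like this (Googlebot is Google's documented user-agent; everything else is just an example):

    # block Google's crawler from the whole site
    User-agent: Googlebot
    Disallow: /

    # every other crawler may still fetch everything
    User-agent: *
    Disallow:

Just bear in mind that robots.txt only stops crawling going forward; pages already in the index can linger there for a while.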
Thankfully, I can now put this mess to bed and move on.
Best of luck to all you guys, especially those of you who need them to work properly to make a living from it.
Just how did Google get to your WS_FTP logs to start with? Because they are spying on my system? I don't know.
The other approx. 10 pages, which include endless lists of essential keywords, backlinks and my domain name repeated many times (maybe 100 times per list), are not linked from anywhere on the web. I received the link only via an e-mail from my programmer, nothing more... How can Google find this and follow it? Toolbar? (I've got no Google Desktop.)
I'm sure the keyword lists have influenced the whole results for my domain very negatively. Now it is all hidden and disabled via robots.txt. I hope the good positions will be back after two spider runs.
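For what it's worth, disallowing just that directory of keyword lists (rather than the whole site) would look roughly like this in robots.txt; /keyword-lists/ is only a placeholder for whatever the directory is really called:

    # keep all crawlers out of the keyword-list directory only
    User-agent: *
    Disallow: /keyword-lists/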
Good luck to everybody from my side too, and check all the pages which are in the index ;o)
Google doesn't have to spy out anything; most folks leak so much information it is comical.
For all the others, I mean the one thing we have in common is the time period in which the disaster began. If it's not a data error, it could be a very aggressive new filter which was started at the highest adjustable level and will now be continuously dialed back over the next weeks.
My advice to myself is to just sit, wait, and hope. Work on other sites and other things I can control.
Is that true for all directories? Some folks have been known to have only part of their site blocked.
How long has that option statement been in the server configuration file?
Does anyone who works on your site view those files via browsers with toolbars installed?
No need to accuse Google of spying when there are tons of other paths.
Including, in some cases, error messages generated by scripts, believe it or not.
I'm just trying to understand how those logs got exposed. If it was a server-generated directory index that did it, then a lot of files would have been exposed.
Now fibalogger, I'm just some old furry woodland critter and thus know very little about these world wide wobbly things, however I do remember something about web servers returning a directory index if there was no default index page in a directory.
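If a server-generated listing is how the logs leaked, the usual fix on Apache is to switch directory indexes off. A minimal sketch, assuming an Apache server and access to httpd.conf or an .htaccess file:

    # in httpd.conf (inside the relevant <Directory> block) or in .htaccess:
    # stop Apache from generating a file listing when a directory has no index page
    Options -Indexes

With that in place, a request for the bare directory returns 403 Forbidden instead of a list of every file sitting in it.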
Maybe one year ago I renamed a file that was indexed by Google, via FTP on the webserver. The renamed file was never accessed by a browser and directory listing is denied, but weeks later I found the renamed file in the Google index. Since that moment I delete all files on the webserver that are not needed.
I don't think Google is spying on the webmasters. The bot is stupid and it takes every file that it can get. Often webserver misconfigurations are the problem.
I guess the stupid bot has added the list content to the whole domain's content, mixed it all together, and the keyword-spam filter was then triggered.
That makes me so bloody angry :(
Now the bad pages no longer exist. How many days do I have to wait for a correction within the SERPs? Is there a possibility to accelerate the procedure? For a start I've submitted the directory (which is disabled now) manually to Google for a new spider run. Maybe it works... :o/
Sometimes we think "penalized" and leave it at that, but it appears that Goog has different penalties; the +30 is one of the toughest ones. In that case I think it's harder to get back on top; it seems that removing the pages might not be enough... you need to either wait a long while or submit a reinclusion request. Anyway, I am not sure what penalty you or anyone else has, but apparently not all penalties are created equal, so double check.
And keep an eye on all the data on your webserver(s), because I believe Google follows more than just links. Maybe HTML or TXT pages which contain an unlinked http:// could be interpreted as a hyperlink too.
I'm absolutely gutted, disgusted, angry, stressed, deflated at what my site is being subjected to. This is after an already stressful year where Google's guidelines were followed with care.
The absolute worst thing about this whole event though has to be the fact that we have received no acknowledgement from anyone at Google that something is going wrong. And something IS going wrong. Very wrong.
My Google respect levels are hemorrhaging.
itloc
Additionally, since the original page contains a video that was uploaded to YouTube, that upload, which carries a small text snippet from the original text, also ranks higher. :\
Overall damage is 7% on that site (could be school holidays) and 30% on the other site. Not that bad, but still that index is well mixed up.
Still, with "allinurl: www.mydomain.com" and "site: www.mydomain.com" I see my site properly indexed. Strange things happen with "site: www.mydomain.com", though. If I issue this command with the SafeSearch Filtering preference set to "Use moderate filtering (Filter explicit images only - default behavior)" I don't see my first page (homepage) in the results, but with the SafeSearch Filtering preference set to "Use strict filtering (Filter both explicit text and explicit images)" I do see my homepage... That's strange - has the content filtering algo been altered? And one more thing to state - on the Google Webmaster Tools Diagnostic - Summary page I get a message that my home page is unreachable! The message is: "We can't currently access your home page because of an unreachable error"... I checked deep in the web server logs and all is OK - Googlebot successfully crawls my homepage every day...