Forum Moderators: Robert Charlton & goodroi


Switching Domains: My Attempt To Escape Negative SEO - My Progress Log

         

SerpsGuy

9:23 am on Oct 1, 2014 (gmt 0)

10+ Year Member



I have been spammed mercilessly for about four months: over 1.2 million negative SEO links from all over the world pointing to my domain.

I previously had page one rankings for almost all pages on my website. Once the spam hit, my primary income website sank like a brick. The spam was not just simple links; it was my own content, copied and pasted onto thousands of blogs, public and private.

My content was stolen, then broken up into small pieces and left as comments on thousands of blogs, news sites, etc. Much of the spam linked back to me with the same keyword. After a year and three months of trying to clean it up, leaving comments on Matt Cutts' blog asking for help, and disavowing over a million links, I gave up.

I did nothing for about six months. I basically lost motivation to do anything, which was pretty stupid of me, because now I am pretty broke and have nothing going for me. But if you have ever had your hard work destroyed, you understand how I felt and still sort of feel.

A few days ago I saw Google was performing another update that could possibly help new websites. So I rewrote a lot of my work and decided to start anew. This log is to share as much as I can to help other people who might have experienced the same scenario.


-----

Friday, September 26th 2014 - 4:42PM

Requested Removal of SpammedDomain from Webmaster Tools

Saturday, September 27th 2014 - 5:03PM

Registered NewDomainName.com
Changed SpammedSite.com - Blocked all robots & Removed Content.
5:17PM - Began Uploading Exported Wordpress XML File.

Monday, September 29th, 2014 - 4:38AM

Pinged the blog twice and submitted a single link to reddit.com (sorry, reddit fans; it's just a fast way to get Google to index a link).

1:19PM - Page submitted to reddit is indexed in Google. No other pages indexed yet.

6:40PM
Registered email for a new webmaster tools account

6:42PM - Submitted Homepage and All Child Links to be Crawled.

Wednesday, October 1st 2014

Found some of my content is still not unique enough; stolen copies remain on wordpress.com, blogger.com, and weebly.com. Submitted DMCA requests.

I see some rankings on pages 4-7, but nothing higher than that.

aakk9999

10:05 am on Oct 1, 2014 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



Good idea to keep the progress log and keep us updated. I am sure many will be interested in how it goes.

SerpsGuy

10:53 am on Oct 1, 2014 (gmt 0)

10+ Year Member



I tried to edit to add information, but I guess I cannot. I was hoping to keep the journal primarily in one post. Oh well.

Adding to the above: the new website was not redirected at all. And as a note to anyone who is currently trusting Google's word that negative SEO cannot affect you: it is a lie. Do not believe it.

Wednesday, October 1st 2014 - 4:06AM

Of 150 pages submitted, 52 are indexed in Google. Most of them rank around pages 6-7. This is something I noticed when my site was new a few years ago. If anyone has seen this page 6-7 effect and can share info, it would be appreciated.

clazzdev

7:08 pm on Oct 1, 2014 (gmt 0)

10+ Year Member



Really sorry to hear about what happened to you. Thanks for sharing your progress and good luck to you - I'll definitely be checking into this thread.

SerpsGuy

8:39 pm on Oct 1, 2014 (gmt 0)

10+ Year Member



Wednesday, October 1st 2014 - I checked a site: search for my pages; earlier it showed 150 pages indexed. Several hours later, it shows 42. So I think Google is still processing the pages, or maybe updating.

No new rankings; in fact, they seem to be worse for the terms that were previously ranked.

seoskunk

1:16 am on Oct 2, 2014 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



It's sad you had to give up your domain. I find the whole story a bit sad, tbh. I had a negative campaign run against me for years, and it's exhausting, so I know how you feel.

As long as this stuff works, there will always be a market for it. I think Google is working quite hard to prevent it from happening and to reward original content.

londrum

8:51 am on Oct 2, 2014 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



I don't mean to be a harbinger of doom, but the attack sounds like it was targeted directly at you (rather than it being automated, and just unlucky). If your new domain starts getting some rankings back, how do you know the same thing won't just happen again? That really would get you down.

I'm not saying you should still be going after the guy who did it, because it sounds like you have had enough of that already, but if it were me I would protect myself by building up other traffic sources. Forget Google for a while. Do some heavy social media and look into other stuff: web apps, ebooks, advertising. There are loads of other ways to get traffic without relying on Google, and negative SEO won't have any effect on those.

SerpsGuy

7:18 pm on Oct 2, 2014 (gmt 0)

10+ Year Member



I completely agree; I believe I was targeted, though I'm not sure why exactly. I suppose I could do some marketing on other channels, like other web properties. The thing is, I am really against web spam, and anything I put on another site would just be a rehash of my original work. I'm not sure how I feel about that. I want to provide the best information I can, and spinning my own content to sound different would lower the quality of my work.

Do you have any specific suggestions though? I am listening, and would appreciate any feedback.

londrum

7:55 pm on Oct 2, 2014 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



Building web apps (like the ones on iPhones and iPads, for example) isn't much of a jump if you can already build a website. You can still build them in HTML, and you can charge for them as well, which is a bonus. It shouldn't be too hard to tie one in with your website, but of course it depends on what type of website you've got.

WordPress plugins are easy to write as well (a bit of PHP wrapped around the same HTML). If you can get one into the WordPress directory (very easy), you can potentially get loads of downloads; people will stick your plugin on their own websites. You just need to work out a way to feed them back to your own site.

I do a few ebooks as well. You can write ePubs in HTML too, with just a couple of extra metadata files thrown in. You can sell those on Amazon easily and stick a few links back to your site within the book.

rish3

8:43 pm on Oct 2, 2014 (gmt 0)

10+ Year Member Top Contributors Of The Month



The two pieces of work seem to be:

1) Getting back your existing content. I don't see a real solution other than a rewrite. Perhaps a lot of work, but it seems to me you could rewrite it without losing quality.

2) Protecting new content.

- Add the noarchive robots meta tag, so that anyone who wants your content has to scrape it from your site directly (rather than from the Wayback Machine, Google's cache, etc.).

- Put in some anti-scraper defenses. Google "blocking bad bots", for example.

- Since you seem to have an industrial-strength problem, you could try an industrial-strength solution.

Google's spider understands JavaScript. Obfuscate your content and use JavaScript to clean it up when the page loads. I made a simplistic demo here: [jsfiddle.net...]


Notably, the code uses window.location.host as the "decryption" key, so if someone scraped that page and put it on a different domain, it wouldn't display the content.

A smart scraper could look at it and figure out how to steal the content, but most wouldn't bother.
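The gist of the approach looks something like this (a simplified sketch with made-up function names, not the exact fiddle code). The text is XOR-encoded against a key, and using the host name as the key means a scraped copy on another domain decodes to gibberish:

```javascript
// XOR each character of the text against the key, cycling the key.
// The operation is symmetric: applying it twice with the same key
// returns the original text.
function xorWithKey(text, key) {
  let out = "";
  for (let i = 0; i < text.length; i++) {
    out += String.fromCharCode(
      text.charCodeAt(i) ^ key.charCodeAt(i % key.length)
    );
  }
  return out;
}

// At publish time, encode with your own domain, e.g.:
//   const encoded = xorWithKey("My page content", "example.com");

// In the browser, decode with whatever host the page is actually
// served from; only the original domain yields readable text.
function revealContent(encoded) {
  const key = window.location.host;
  document.getElementById("content").textContent = xorWithKey(encoded, key);
}
```

Because the encoding is symmetric, the same function both encodes and decodes; a page served from any other host simply gets the wrong key back.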

SerpsGuy

10:32 pm on Oct 2, 2014 (gmt 0)

10+ Year Member



Thank you Rish3, and Londrum.

Londrum: I actually write plugins in my spare time. I just finished a review plugin, but I cannot really justify linking to my specific niche sites from a generic review plugin. Maybe I could give it away to other people in my niche in trade for a link or something, though? Ideas?

Rish3 - In your experience, does the noarchive tag affect rankings negatively?

I just recently blocked bad bots via robots.txt, but I read that bad bots aren't going to obey robots.txt anyway, so it's almost useless.
EDIT: I just googled what you suggested and added the following to my .htaccess: [ <link removed> ]

[edited by: SerpsGuy at 10:52 pm (utc) on Oct 2, 2014]

[edited by: aakk9999 at 2:12 pm (utc) on Oct 5, 2014]

SerpsGuy

10:35 pm on Oct 2, 2014 (gmt 0)

10+ Year Member



Thursday, October 2nd 2014

NewSite:
104 indexed pages.
No Worthwhile Rankings Yet.
Reported Duplicate Content Removed This Morning

I am considering reindexing my old website and just rewriting all the content on my new site. That would give me two chances to rank, assuming Google actually fixes the duplicate content issues.

I'm unsure about it. It's an option.

seoskunk

10:55 pm on Oct 4, 2014 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



I am considering reindexing my old website


Yep, sounds like a good idea: go back to your old site and 404 the new one. Sites can recover.

lucy24

8:08 am on Oct 5, 2014 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



I just recently blocked bad bots via robots.txt

Bad bots don't honor, or even read, robots.txt. That's part of the definition of a bad bot.

I hope you didn't throw away the IP bans you'd accumulated for the old site. You don't want to have to build up that list from scratch, even if you're starting new on everything else.

not2easy

1:26 pm on Oct 5, 2014 (gmt 0)

WebmasterWorld Administrator 10+ Year Member Top Contributors Of The Month



EDIT: I just googled what you suggested, and added the following to my .htaccess

I hope you did not add that to your .htaccess file; it cannot do what it claims to do.
RewriteCond %{HTTP_USER_AGENT} ^Custo [OR] 
RewriteCond %{HTTP_USER_AGENT} ^DISCo [OR]
RewriteCond %{HTTP_USER_AGENT} ^Download\ Demon [OR]
RewriteCond %{HTTP_USER_AGENT} ^eCatch [OR]


1. It is full of old UAs that haven't been seen in a decade.
2. The ^ anchor just before each UA name means the rule only fires when the User-Agent string begins with that exact name (caps and all); it can never match a UA that starts with 'Mozilla' or anything else. A quick look at your logs will show that real UAs, bad or good, rarely begin with the bot's name.
3. The UA is one bit of information in your access logs that is simple for scrapers to fake.
(4. Sites that promote bad practices to people trying to solve a problem should be beaten with sticks.)

All that code does is irritate your server. To learn the correct way to deal with scrapers, visit the SE Spiders / UA ID forum: [webmasterworld.com...] and start in the Library (the link is right next to the Forum Charter there).
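For contrast, a rule that at least matches the bot token anywhere in the UA string would look something like this (the bot names below are placeholders for illustration, not a vetted list):

```apache
RewriteEngine On
# [NC] makes the match case-insensitive; with no ^ anchor, the token
# can appear anywhere in the User-Agent string, not only at the start
RewriteCond %{HTTP_USER_AGENT} (BadBotOne|BadBotTwo|EvilScraper) [NC]
RewriteRule .* - [F]
```

Even then, remember point 3 above: the UA is trivial to fake, so treat a rule like this as one layer of defense, not a fix.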