|Google SEO Advice - Site was MIA for Several Years|
| 7:39 pm on May 15, 2014 (gmt 0)|
My site, while listed on the internet for the past 6 years (or so):
1. Was not updated at all.
2. Had several pages missing, broken, or returning errors.
3. Was obviously de-listed from Google (I looked through Webmaster Tools, and it said something to the effect that there were 495 broken or missing pages).
I have now rebuilt and updated my site - but where do I go from here? I'm sure SEO has changed significantly, so I'm not sure where to focus.
Also - since Google has obviously found many broken pages - what can I do? Can I redirect these old sites somehow?
Any help on where to go from here is appreciated.
| 11:51 pm on May 15, 2014 (gmt 0)|
If the broken links can be redirected to current and proper pages, do that... however, what do you mean by broken pages?
As for the rest: prepare a sitemap, submit it (Google, Bing, etc.), and see what happens. Fresh new content will get a quick look. Some sites fall out of the "current" bucket; they aren't updated often, the crawlers figure that out, and they adjust their crawl schedules accordingly. A sitemap will let them know you are alive and kicking.
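For reference, a sitemap is just an XML file listing your current URLs; the example.com addresses and dates below are placeholders to replace with your own:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per live page on the rebuilt site -->
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2014-05-15</lastmod>
  </url>
  <url>
    <loc>http://www.example.com/widgets.html</loc>
    <lastmod>2014-05-15</lastmod>
  </url>
</urlset>
```

Upload it to the site root (e.g. /sitemap.xml) and submit that URL through each engine's webmaster console.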
| 2:10 am on May 16, 2014 (gmt 0)|
|Also - since Google has obviously found many broken pages - what can i do? Can i redirect these old sites anyhow? |
When your fingers typed "sites" your brain meant "pages" right?
Take it on a case-by-case basis. At this point you don't even need to think about search engines. Much. Think about new humans randomly browsing the site, or old humans with bookmarks dating from 2007.
First and most important: make sure all your own links are correct. A site's internal links should never lead to a redirect. (There are rare exceptions, generally having to do with mixed http/https sites, but this probably won't apply to you.)
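One way to audit your own internal links is a quick script over the local HTML files. This is only a sketch; the OLD_PATHS set is hypothetical and should be filled with whatever paths you have renamed or removed:

```python
# Sketch: find internal links that still point at old, redirected paths,
# so they can be updated to link to the new location directly.
import re
from pathlib import Path

# Hypothetical list of retired paths -- substitute your own.
OLD_PATHS = {"/old-widgets.html", "/spring-2007-news.html"}
HREF_RE = re.compile(r'href="([^"]+)"')

def stale_links(root):
    """Return {file: [old paths linked]} for every HTML file under root."""
    hits = {}
    for page in Path(root).rglob("*.html"):
        found = [h for h in HREF_RE.findall(page.read_text())
                 if h in OLD_PATHS]
        if found:
            hits[str(page)] = found
    return hits
```

Anything the script reports is an internal link that would land on a redirect (or a 404) and should be fixed in the HTML itself.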
Then eyeball the list of bad links and nonexistent URLs. Some of those 495 will be garbage: misspelled links from other places, parameters appended by robots, sometimes even the search engine pulling URLs out of its ###. The rest fall into three groups: pages that now have a different name; pages you've intentionally removed; pages that for whatever reason don't exist.
Pages that have been renamed and/or relocated get 301 redirects. Sometimes two or three pages might all get redirected to one, but never ever do mass redirects. That's what your nice custom 404 page is for. Conversely, if an old page's content has been divided among two or more new pages, make your own decisions about whether to redirect.
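On Apache, those one-to-one (or few-to-one) 301s can go in the site's .htaccess or vhost config via mod_alias; the paths here are made up for illustration:

```apache
# One-to-one rename: old page permanently redirects to its new home.
Redirect 301 /old-widgets.html /widgets.html

# A few old pages consolidated into one new page -- fine in small numbers,
# but never a blanket redirect of everything to one URL.
Redirect 301 /spring-2007-news.html /news-archive.html
Redirect 301 /fall-2007-news.html /news-archive.html
```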
Pages that you've intentionally removed get a 410 response, which you have to code explicitly. If this applies to you, you will also need to make a nice custom 410 page, or at least point the 410 handler at your custom 404 page. (The Apache default 410 page is scary, and I don't suppose other servers are any better.)
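Again assuming Apache, both pieces (the explicit 410 and the friendlier error page) are one line each in .htaccess; the paths are hypothetical:

```apache
# Mark an intentionally removed page as Gone (410).
Redirect gone /discontinued-product.html

# Serve a custom page instead of the stark server default.
# Reusing the 404 page here also works.
ErrorDocument 410 /errors/410.html
```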
Pages that simply don't exist can be ignored. But if you notice a significant number of requests for some specific nonexistent pages, it may be worthwhile to rack your brains and figure out what page they think they're asking for.
If you need help on the mechanics, there are subforums for both Apache and IIS. (If you're Apache, you will see me again. If it's IIS, you won't ;))
Note that WMT lists both 410 and 404 as "crawl errors", but that doesn't mean you've done anything wrong. It's just for your information.
Finally: if you throw in a lot of 301s all at once, you will find the major search engines asking for garbage URLs like xcilthnkyjdghfl.html. This is OK; they're just making sure you're not doing "soft 404s" such as mass-redirecting to the root.