Forum Moderators: Robert Charlton & goodroi
I've set all internal links to nofollow and fixed the duplicate
title tag attributes. The site has no duplicate pages or redirect issues, and it has valid meta title and description attributes.
The problem is that when someone does a simple search query for a specific keyword, my website appears in the last 3-4 pages of results. For example: if there are 730 result pages, my website will be on page 728 or 729; if there are 200 result pages, my website will be on page 199.
I think this is some kind of Google hold/ban. That's why I need to know: what could cause my site's results to appear at the end of Google queries?
<Sorry, no specific URLs.
See Forum Charter [webmasterworld.com]>
[edited by: tedster at 9:05 am (utc) on Dec. 19, 2006]
How are you checking the SERPs? That's a lot of pages to scan manually.
I am guessing here, but are you using automated scanning software of some sort?
If you are, well, you should read Google's Webmaster Guidelines and other terms of service.
I mean the internal links - I added nofollow in order not to cause repeated linking.
This doesn't sound right at all to me. There's no reason to use rel="nofollow" on internal linking. You may have created some problems with that -- I would suggest removing the nofollow attribute from your internal links. It tells Google 'I don't trust the target page of this link, do not send it any PageRank from this page.'
If you have something like this on the home page:
menu-bar HOME WIDGETS BLUE-WIDGETS ABOUT CONTACT
Then only the homepage is crawled and indexed, but not WIDGETS, BLUE-WIDGETS, ABOUT, or CONTACT.
In order to get the whole site spidered, you need to remove the nofollow.
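For reference, here's the difference on an individual menu link; the file name is just a placeholder for illustration:

```html
<!-- Before: rel="nofollow" tells the bot not to credit or follow this link -->
<a href="/widgets.html" rel="nofollow">WIDGETS</a>

<!-- After: a plain internal link that passes PR and gets the target crawled -->
<a href="/widgets.html">WIDGETS</a>
```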
Also, even if those results are displayed as normal, if they are shown after supplemental results from other sites... they're practically supplemental themselves, waiting for the tag to be stamped on them. (Might not be the case, though.)
More interestingly... if you don't have sitemaps submitted, a good hint would be their last cache date.
I'm with tedster on this: you should not have put the nofollow on the links. Even if you have G sitemaps and thus let the bot know where to look for the pages, all the relevance coming from the internal links would eventually be dropped, not to mention that PR will be choked right at the homepage. Sooner or later all pages will drop to zero.
Meaning you WILL have your pages indexed in G normally (until they get marked as supplemental for having no PR at all)... but they won't come up for anything, and even if they do, a PR1 page will outrank them.
You'll need to weed out which pages to index and which not... and put the NOINDEX tag in the header, and/or disallow them in robots.txt. But nofollow WITHIN a site's internal navigation sounds pretty counterproductive if you ask me ;)
Not that I'm telling you all this from experience; I'm just improvising based on what I've read here... and am interested in your results ;D
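If it helps, the NOINDEX tag mentioned above is just a robots meta tag in the page's head; the page here is a made-up example:

```html
<head>
  <title>Blue Widgets - printer friendly version</title>
  <!-- keep this page out of the index, but still let the bot follow its links -->
  <meta name="robots" content="noindex,follow">
</head>
```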
Then this summer I decided to remove all the meta keyword and description tags.
Huh...?
You mean there are NO description tags?
That might be a problem.
Also, the PR you see now is not live; it's from sometime in late August.
It might be completely different by now.
There's no "sandbox"; it's the effect of having no trust, whether from a penalty keeping you from gaining it or from not being linked from trusted sites. A reinclusion request might have brought in someone from Google to re-evaluate your site and lift a penalty, but then again the penalty might still be there (if there was one), hence the low positions. Or it might have been issued again because of an offsite factor they didn't see just by looking at your site. You should double-check any links you have, in and out.
There's no mirror site, is there?
If you've gone over the following checklist:
- remove the nofollow from internal links
- add unique meta description and title to ALL pages
- make up your mind on canonical URLs and do the 301 redirects
- from www to non-www or the other way around
- from index.? to /
- if there are any dupe pages that can be disallowed by a pattern, disallow them in robots.txt
- if there's no pattern, use NOINDEX in the HTML header
- go through your incoming and outgoing links ( remove bad sites )
- remove massive footers, especially with tons of anchor text links
- internal navigation should be consistent with the words it uses
- remove sitewide outbound links
- remove sitewide inbound links to you from other sites
- remove unnecessary syndicated content, blog rss, news, temperature, whatever :P
- see if anyone is scraping you or not ( are those articles ONLY on your site? )
- see if the pages are being cached with the changes
- allow one or two weeks for the links to be refreshed in the database
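To make a couple of those checklist items concrete, here's roughly what the canonical-URL 301s could look like in an Apache .htaccess, assuming mod_rewrite is available (example.com stands in for your domain):

```apache
RewriteEngine On

# 301 the non-www host to www (pick one version and stick with it)
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

# 301 /index.html, /index.php etc. to the root /
RewriteCond %{THE_REQUEST} ^[A-Z]+\ /index\.[a-z]+
RewriteRule ^index\.[a-z]+$ / [R=301,L]
```

For the dupe pages that share a URL pattern, a robots.txt line like `Disallow: /print/` covers the whole set in one go; where there's no pattern, the NOINDEX meta tag on each page is the fallback.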
... once this is done and STILL nothing...
Then it might be safe to say that you ARE doing something that calls for a penalty in G's eyes :P (or the site is constantly being filtered by a quality filter, which is not a penalty).
(I haven't seen your site... besides, how would I know ;)
...
Also, if the 302s came AFTER the reinclusion request that got your trust back, that might be the reason for losing it again.
Okay... too many options.
...
What was the cache date of the pages?
The next step will probably be to remove the whole website from Google's index and later submit it again for reinclusion, in order to regain trust.
site:subdomain.domain.tld inurl:directoryname
And you'll see how it really is.
The site: command won't show supplementals for subdirectories.
btw. you can't remove a site and then get it indexed again... well, you can, but most parameters will remain the same. But as far as I can see, it's not trust that's causing you problems...
I very strongly suggest you add meta descriptions back to the pages, stat.