Forum Moderators: open
Greetings All,
I’m a new subscriber/old webmaster desperately seeking some opinions on the “new era” of Google SEO. Thanks in advance for everyone’s help, and I look forward to a long relationship with WebmasterWorld. – Sorry for the long post :)
Now to the problem…
I have been one of the lucky thousands of recipients of the PR0 penalty from Google, and have no idea why – I have never used questionable tactics for marketing or design that could be considered spam. Although others may disagree, I cannot see any PR returning in the near future, and am turning to other solutions.
I am considering launching a new domain (newsite.com) and blocking Googlebot from the PR0 site (oldsite.com) using both “User-agent: Googlebot / Disallow: /” in robots.txt, and <META NAME="GOOGLEBOT" CONTENT="NOINDEX, NOFOLLOW"> on all pages.
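For what it’s worth, a robots.txt that blocks only Googlebot reads “User-agent: Googlebot” followed by “Disallow: /”, and you can sanity-check that it does what you intend with Python’s standard-library parser. A minimal sketch (the domain below is just a placeholder):

```python
from urllib.robotparser import RobotFileParser

# The rules as they would appear in oldsite.com/robots.txt:
# block Googlebot entirely, leave every other crawler alone.
robots_txt = """\
User-agent: Googlebot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

googlebot_allowed = rp.can_fetch("Googlebot", "http://www.oldsite.com/page.html")
other_bot_allowed = rp.can_fetch("SomeOtherBot", "http://www.oldsite.com/page.html")

print(googlebot_allowed)  # False -- Googlebot is disallowed everywhere
print(other_bot_allowed)  # True  -- no rule matches other crawlers
```

Note that robots.txt only stops crawling; the NOINDEX meta tag is the belt-and-suspenders part that asks Google to drop pages it already knows about.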
The oldsite.com is too valuable to bring down: it has high placement on many other search engines, not to mention the hundreds of hours spent on datafeeds, PPC advertising, etc. My worry is that having these two sites will be seen as “identical content” should a manual review be done by Google -- even though the oldsite.com will be taken out of Google’s index specifically using their advertised protocol.
In short: Will both of these sites be penalized with an identical content penalty even though one site is not indexed by Google at my request?
I’m at a loss and was hoping that someone out there has tried this route to overcome the PR0 bug. I know, it is insanity when one wants to stop Google from indexing their site – but I’m desperate :)
There has GOT to be some reason for PR0. I know you have reached another conclusion already, but have you carefully reviewed Google's Terms of Service to see if you can't figure out the likely cause? Have you written to Google? I would not abandon your old site. Don't give up! Fix the problem, and you will be back in the index in a month or two. It would take longer than that to re-establish the rank of a site that took several years to build.
That being said, you shouldn't get a duplicate site penalty if you are preventing the Googlebot from seeing the original content on the old site. However, you may find another PR0 if you put the same site on a new domain... without first finding out what tripped a SPAM filter.
I believe a PR0 is manually applied by Google techs... not a SPAM filter penalty. A PR0 is applied only in blatant cases of violating Google's TOS. If I were you, I'd work to figure out the problem and fix it. Good luck.
[example.com...]
[example.com...]
[example.com...] (or similar)
[example.com...]
and if I were to type your web address (www.mydomain.com) into my browser, would I be redirected to one of the above versions, or even to a dynamically created page with session IDs etc.?
[edited by: ciml at 4:59 pm (utc) on Jan. 21, 2004]
[edit reason] Examplified domains. [/edit]
It is easy to assume that PR0 is a penalty -- perhaps because it is the easiest explanation to formulate. However, I find it hard to believe that a site with 500+ pages of completely static HTML, online for over six months, with a number of high-PR inbound links, would not even manage a PR1 on the index page. Believe me guys, the day I see any life in that green bar at all is the day I will start sleeping a little better.
Internetheaven:
I’d be happy to entertain any theories at this point. To answer your questions:
All internal pages point to the index page using -- HREF="http://www.example.com"
I have no redirects in place and do not use any dynamically generated assets whatsoever. As I mentioned previously, the site is about as simple as one could get – all straight HTML 4.0.
I’m sure that does not help you at all, but it’s the way it is. I have emailed Google and have received a response – I suppose I’m a lucky one. I am going to put a little more time and effort into getting some quality inbound links and other avenues of marketing, and then wait for the next index.
At this point I have put the idea of another site on hold, and will give the existing site the benefit of the doubt. But if I can’t get this PR thing straightened out soon, I’m about ready to give up on Google for this domain.
[edited by: Marcia at 9:26 pm (utc) on Jan. 21, 2004]
[edit reason] Examplified domain name. [/edit]
I have noticed PR becoming less and less important in the ranking game. I don't actually use PR in any research I do on increasing rankings as it does not seem to make a difference.
Also, could you confirm that it is a white bar you are seeing, not a grey bar?
Basically, I want to know what logic, if any, Google applies to determine duplicate content. Does Google compare 3,307,998,701 pages with each other?
No, there isn't a difference to Google. If the URL ends in a file name (.htm, .html, .cfm, .php, etc.), it won't be followed by a slash. If the URL is a domain or directory, it will have the forward slash on the end. Google's index handles this automatically.
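As an illustration of that rule (purely a sketch of the convention described above, not Google's actual canonicalization), a normalizer that appends the slash to directory-style URLs but leaves file URLs alone might look like:

```python
from urllib.parse import urlsplit, urlunsplit

def normalize(url):
    """Append a trailing slash to domain/directory URLs; leave file URLs as-is.
    Illustrative only -- crude heuristic: a last path segment with a dot in it
    is treated as a file name (.htm, .html, .cfm, .php, etc.)."""
    parts = urlsplit(url)
    path = parts.path or "/"          # bare domain -> root path
    last = path.rsplit("/", 1)[-1]
    if "." not in last and not path.endswith("/"):
        path += "/"                   # directory-style URL gets the slash
    return urlunsplit((parts.scheme, parts.netloc, path, parts.query, parts.fragment))

print(normalize("http://www.example.com"))            # http://www.example.com/
print(normalize("http://www.example.com/dir"))        # http://www.example.com/dir/
print(normalize("http://www.example.com/page.html"))  # http://www.example.com/page.html
```

Either form of the URL resolves to the same canonical entry, which is why the two spellings don't count as duplicates.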
[edited by: ciml at 5:07 pm (utc) on Jan. 21, 2004]
[edit reason] Examplified domains. [/edit]
Suffered is an understatement. The only traffic I see from Google is paid for through AdWords.
Nileshkurhade:
“Does Google compare 3,307,998,701 pages with each other?”
Here’s my take on this issue, for what it’s worth.
You have to keep in mind that Google does not really have to compare 4 billion pages. Many of the penalties that Google applies are slapped on you by an engineer who quickly reviews your site (or so Google has told me). I’m sure that you have a few competitors who may not be happy about you dominating search rankings with identical (or nearly identical) sites. If one of your competitors brings this to Google’s attention in a quick email (as happened to one of my sites about a year ago), you can probably bet they will take a look. If your sites are not sale/product based this probably isn’t a big concern.
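For the curious: automated near-duplicate detection doesn't need pairwise comparison of full pages either; a common trick is to reduce each page to a set of word "shingles" and measure overlap. A toy sketch of that idea (purely illustrative -- nobody outside Google knows their actual algorithm):

```python
def shingles(text, k=4):
    """Break text into the set of all k-word runs (shingles)."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Overlap between two shingle sets: 1.0 = identical, 0.0 = disjoint."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Two hypothetical pages differing by a single word:
page_a = "welcome to our widget store the best widgets online"
page_b = "welcome to our widget store the best gadgets online"

sim = jaccard(shingles(page_a), shingles(page_b))
print(sim)  # 0.5 -- one changed word knocks out every shingle that spans it
```

In practice the shingle sets are hashed down to small fingerprints, so flagging likely duplicates is cheap even across billions of pages.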
document_title_date.html
rather than the former
000serial.html
I kept the old files around, though all links on the old pages now point to the new pages.
What is the best way to handle the transition? Will I incur a penalty for having two copies of each page? Should I do a meta refresh (with or without the actual content still there?), or should I have my web server send out redirects? Or should I do nothing?
Thanks,
Sean
By deleting the content on the old pages, you won't have duplicate content.
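If you'd rather not leave the old URLs dead, a permanent (HTTP 301) redirect from each old file to its new name tells browsers and crawlers alike that the page has moved, so there is never a second copy to index. A minimal sketch using Python's standard library; the filenames in the mapping are made up for illustration:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical mapping from old serial-style names to new descriptive names.
REDIRECTS = {
    "/000123.html": "/widget_pricing_2004.html",
    "/000124.html": "/widget_faq_2004.html",
}

def redirect_target(path):
    """Return the new location for an old path, or None if no redirect applies."""
    return REDIRECTS.get(path)

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = redirect_target(self.path)
        if target:
            self.send_response(301)              # moved permanently
            self.send_header("Location", target)
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

# To actually serve the redirects:
# HTTPServer(("", 8000), RedirectHandler).serve_forever()
```

In practice you'd do the same thing with your web server's own config (e.g. a `Redirect permanent` rule in Apache) rather than a custom handler; the point is the 301 status plus the Location header.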