Forum Moderators: open
Domain_1 - I used to run this domain earlier this year, and it got indexed by Google.
Domain_2 - My new domain, which has been getting hammered by the Googlebot for the past month.
If someone were to request either of my domains, i.e. by typing the domain directly into the address bar, both domains were tied to the same site.
I have removed all content from domain_1 and set up a redirect to domain_2. Each domain now answers separately.
The weird thing is that when I check the SERPs after the update, Google is listing my pages from domain_2, but under the domain_1 name.
Example search result: domain_1/domain_2 page name.
I should probably add that I submitted domain_2 to Google about 2 1/2 weeks ago.
Any thoughts on what could be going on? Should I remove the redirect from domain_1 to domain_2 and let Google keep crawling?
Were both of these domains separate sites, containing unique information, and you are now pointing domain_1 at domain_2?
What types of SERP results are you getting - words and phrases common to both sites?
Do you have any of the domain_1 addresses hardlinked within domain_2?
My hosting company (my brother's company) tied both domain names to one website. So if you were to type in domain_1 or domain_2, the same website would answer.
Does that make any sense?
Right now, I have two individual websites, so that when someone types in
domain_1, website_1 answers the request, and when someone types in
domain_2, website_2 answers the request.
There are no hard links between the two sites.
Domain_1 has a redirect to domain_2. So if you type in domain_1,
you are automatically redirected to domain_2.
I know this is kind of a messed-up deal; I don't want to anger the Googlebot.
The thing is, I already had some decent links pointing to domain_1,
and I didn't want to lose that traffic by just tossing the domain.
HTTP/1.1 302 Object Moved
Location: [mydomain.com...]
Server: Microsoft-IIS/5.0
Content-Type: text/html
Content-Length: 150
I typed in redirectdomain.com and got the response above.
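That header dump is worth a second look: the status line shows a 302 ("Object Moved", i.e. a temporary redirect), and spiders treat 301 and 302 differently. A 301 says the move is permanent and the Location target should get the credit; a 302 says the original URL still stands, which is one known way to end up with mixed old-domain/new-domain listings like the ones described above. As a rough sketch (the parser and the domain name in it are mine, not from this thread), here's how you could pull the status code and Location out of a raw header block like that one:

```python
# Sketch: classify a raw HTTP redirect response like the one pasted above.
# A 301 ("Moved Permanently") tells spiders to credit the Location target;
# a 302 ("Found" / IIS's "Object Moved") says the move is only temporary.

def classify_redirect(raw_headers):
    """Return (status_code, location, is_permanent) from a raw header block."""
    lines = raw_headers.strip().splitlines()
    code = int(lines[0].split()[1])          # status line, e.g. "HTTP/1.1 302 Object Moved"
    location = None
    for line in lines[1:]:
        name, _, value = line.partition(":")
        if name.strip().lower() == "location":
            location = value.strip()
    return code, location, code == 301

# A response like the one pasted above (Location is a made-up placeholder,
# since the original post shortened it):
raw = """HTTP/1.1 302 Object Moved
Location: http://domain_2/
Server: Microsoft-IIS/5.0
Content-Type: text/html
Content-Length: 150"""

print(classify_redirect(raw))  # (302, 'http://domain_2/', False) -- temporary
```

If the goal is for domain_2 to replace domain_1 in the index, a 301 is the safer signal to send.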
But you might want to think about using robots.txt exclusions to control what the Googlebot (and any other spider) can see...
because what could get you banned in Google is par for the course in Inktomi, it would seem!
Methinks it best to stay away from getting "too tricky" with the technical stuff. What I mean is that it sure would be easy enough to dynamically include your files based on the requesting user agent and IP address,
but that would be cloaking/stealthing... and that's a bad thing unless you have a mighty good reason (and a note from your mother)!
But ROBOTS.TXT exclusions are allowed.
(Google doesn't like it if you exclude JavaScript [.js] or Cascading Style Sheet [.css] files, since you can effectively do creative cloaking/stealthing that way if you put your mind to it... such as dynamically serving a cloaked robots.txt file as well!)
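For what it's worth, a robots.txt exclusion is just a plain text file served from the domain root. A minimal sketch (the directory names below are made up for illustration, not paths from this thread):

```
# Hypothetical robots.txt at http://domain_1/robots.txt
# (directory names are examples only)
User-agent: *
Disallow: /cgi-bin/
Disallow: /old-pages/
```

`User-agent: *` applies the rules to every well-behaved spider; each `Disallow` line blocks crawling of one path prefix.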
One last thing. From a Google interview I read, you won't have any problems sharing an IP if it is a TRUE VIRTUAL DOMAIN, i.e. get the right configuration and you'll be fine.
'Cause what would happen if someone typed in the IP address of your site rather than a domain name?
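On that "true virtual domain" point: the usual setup is name-based virtual hosting, where the server picks the site from the HTTP Host header rather than from the IP alone. The server in this thread is IIS 5.0 (which does the same thing via host headers), but here's a rough sketch of the shape in Apache 1.3 syntax, with a documentation-range IP and made-up paths; a request for the bare IP simply falls through to whichever virtual host is listed first:

```
# Name-based virtual hosts sharing one IP (illustrative Apache 1.3 config;
# the IP and paths are placeholders, not from this thread)
NameVirtualHost 192.0.2.10

<VirtualHost 192.0.2.10>
    ServerName domain_1
    DocumentRoot /www/site1
</VirtualHost>

<VirtualHost 192.0.2.10>
    ServerName domain_2
    DocumentRoot /www/site2
</VirtualHost>
```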