Forum Moderators: Robert Charlton & goodroi

Perplexing Situation

Decius

3:30 am on Jan 8, 2007 (gmt 0)

10+ Year Member



Given the following properties:

1. URL A differs from URL B only by domain (i.e., domain1.com vs. domain2.com).
2. URL A has the exact same content as URL B (a duplicate).
3. URL A has no inbound links except from URL B.
4. URL B has many incoming links.

Can anyone explain to me how URL A can rank higher than URL B?

(this is not a theoretical question)

theBear

4:08 am on Jan 8, 2007 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Well, I'd say that URL B voted for URL A as being the top dog.

A link is a vote that the linked-to page sits higher in the pecking order than the linking page for the terms used as link text, etc.

Decius

5:33 am on Jan 8, 2007 (gmt 0)

10+ Year Member



That's not really a viable explanation.

If Google considers URL B authoritative enough to fling URL A high into the listings, it stands to reason that URL B would at least appear in the listings itself. However, URL B is nowhere to be seen.

Since URL B is absent from the listings, Google has evidently concluded that its content is not reliable and that URL B is not good for users. Why, then, would Google consider a link from it more trustworthy than its content?

And if Google does consider a link more trustworthy than the content, that is a contradiction. How can a site on "widgets" be trusted to promote another site on "widgets" when the original site itself is not considered relevant to "widgets"?

This situation makes no sense to me.

theBear

5:45 am on Jan 8, 2007 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



"Since URL B does not exist, Google has certainly concluded that the content on there is not reliable, and that URL B is not good for users. Why would Google consider a link more trustworthy than the content?"

Incorrect: URL B does exist. In fact, you said it existed, and it voted for URL A as the authority. Under that situation I would not expect URL B to outrank URL A.

What is your theory?

freelistfool

6:04 am on Jan 8, 2007 (gmt 0)

10+ Year Member



Matt Cutts mentioned in a recent post that one thing webmasters can do to help resolve duplicate-page issues is to add a link from the duplicate content to the real content...I think he was talking about syndicated content at the time. You do just the opposite, so Google most likely picked the "duplicate" page as the "real" one.
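For what it's worth, the usual way to tell search engines which domain is the "real" one is a site-wide 301 redirect from the duplicate to the canonical domain. This is only a sketch under assumptions the thread doesn't state: it assumes the duplicate site runs Apache with mod_rewrite enabled, and domain1.com / domain2.com are the placeholder names from the first post.

```apache
# .htaccess on domain1.com (the duplicate) — hypothetical example.
# Permanently redirects every request to the same path on domain2.com,
# so crawlers treat domain2.com as the canonical copy.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?domain1\.com$ [NC]
RewriteRule ^(.*)$ http://domain2.com/$1 [R=301,L]
```

Of course, this only applies when you control both domains and actually want one of them consolidated into the other.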

oneguy

6:23 am on Jan 8, 2007 (gmt 0)

10+ Year Member



3. URL A has no inbound links except from URL B.

Is that definitely true?

Can anyone explain to me how URL A can rank higher than URL B?

Maybe URL A has much better on-site factors. They matter. Sometimes they matter a lot.

jambad

3:43 pm on Jan 8, 2007 (gmt 0)

10+ Year Member



Which URL does Google see as the oldest?

Decius

7:19 pm on Jan 8, 2007 (gmt 0)

10+ Year Member



Incorrect url B does exist in fact you said it existed and it voted for url A as being the authority

By "not exist" I mean it does not exist in the listings. Obviously it exists as one of the factors is it has many inbound links.

Matt Cutts mentioned in a recent post that one thing webmasters can do to help resolve duplicate page issues is to add a link from the duplicate content to the real content...I think he was talking about syndicated content at the time. You do just the opposite so Google most likely picked the "duplicate" page as the "real" one.

This is possible, but unlikely, since the original content is reasonably new and has no duplicates (except for URL A).

3. URL A has no inbound links except from URL B.

Is that definitely true?

Yes. URL A is an internal test server that only exists for testing purposes.
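(As an aside: an internal test server can be kept out of the index entirely. A minimal sketch, assuming the test host serves its own robots.txt at its document root:

```text
# robots.txt served only by the test server (URL A's domain) — hypothetical.
# Disallows all compliant crawlers from the entire host, so the test copy
# never enters the index or competes with the live site.
User-agent: *
Disallow: /
```

This has to live only on the test host; the same file on the live domain would deindex the real site.)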

Maybe, URL A has much better on site factors. They matter. Sometimes, they matter a lot.

As stated, the only difference is the base domain. They are duplicate pages with duplicate URLs.

which url does Google see as oldest?

Certainly URL B. It isn't much older, but that domain is much, much older.

---------------------

Let's change the scenario a bit.

1. URL A differs from URL B only by domain (i.e., domain1.com vs. domain2.com).
2. URL A has the exact same content as URL B (a duplicate).
3. URL A has no inbound links except from URL C.
4. URL B has many incoming links, including a link from URL C.

If URL A ranks higher than URL B, and does so within a very short period of time, can it be concluded that URL B is penalized by Google? To the point where new domains are more likely to move higher?

I would assume yes... this is the only explanation I can come up with.

------------------

The domain for URL B is over 2.5 years old. The domain for URL A has existed (according to Google) for about one day.

I have controlled DOMAIN B for the last 2.5 years, and the only un-Google thing I have done is buy links at some point far in the past. No other known filters or rules have been broken. Most of that was done to try to fight the sandbox effect initiated in 2004.

------------------

This is my conclusion now, and it resolves a lot of confusion that I have so far been unable to clear up:

Incoming links that are not relevant, most likely site-wide ones, will actually penalize you, and may penalize you indefinitely. This contradicts the commonly held belief that Google would not permit outside sites to control your success or failure, and would instead simply discount incoming links it deems unnatural.

------------------

There are many glaring problems with this way of doing business:

1. Google is permitting webmasters to affect other webmasters by, in essence, creating "negative votes".

2. Google is using a very arbitrary method of determining a website's intentions (site-wide inbound links from irrelevant sources != black hat).

3. Google does not permit open communication about such penalties, which prevents webmasters from determining their mistakes. This is an effort to combat black hats. But combating black hats should be conducted via the algorithm, not via hidden penalties.

4. This entire method of penalization is far more likely to catch genuine business owners than black hats. Why? Because genuine business owners are far less likely than a black hat to register a new domain and move their business there overnight...

5. ...and registering a new domain and filtering traffic through it appears to be a very easy and viable workaround (as this entire example demonstrates).

------------------

I think a lot of webmasters who launched in 2004, the year of the IPO and the start of the horrid sandbox, suffered very similar events: they ranked very poorly solely because they were new, and they tried various things to offset this. Those efforts did not work because of the sandbox effect, and they also got their domains blacklisted. Since there was so much talk of the sandbox, they concluded that if they just waited it out, the site would suddenly appear in the listings. But because of the very efforts that got their domain blacklisted, no matter how long they wait, build links, or build their brand, they remain out of the listings.

When the easy solution is: switch domains.

Edit: "When the easy solution is, switch domains." + "just like a black hat would"

[edited by: Decius at 7:22 pm (utc) on Jan. 8, 2007]

tedster

7:24 pm on Jan 8, 2007 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Has URL B collected a lot of off-topic links?

Decius

7:25 pm on Jan 8, 2007 (gmt 0)

10+ Year Member



No, but DOMAIN B did.