I thought the only difference was that relative links permit you to navigate your website while it is on your hard disk.
When the site is on the web, there's no difference, is there?
Yeah, my original thoughts too.
But I've run into some issues with the error reporting in my Google Webmaster Tools.
For example, it says under:
HTML suggestions - Duplicate meta descriptions
that it's bolting a link which is also on that page onto the end of a good URL, which in turn does not bring up the correct page.
|It's bolting a link which is also on that page onto the end of a good URL, which in turn does not bring up the correct page. |
Check that the link has a leading slash, i.e. that it is root-relative.
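To see why the leading slash matters, here's a quick sketch using Python's standard-library urljoin, which applies the same resolution rule browsers and crawlers do (the domain and paths here are made up):

```python
from urllib.parse import urljoin  # standard-library URL resolution

page = "http://www.example.com/products/widgets.html"

# No leading slash: the href is resolved against the page's folder,
# so it gets "bolted onto" the end of the good URL.
print(urljoin(page, "products/widgets.html"))
# -> http://www.example.com/products/products/widgets.html

# Leading slash: the href is resolved from the site root, as intended.
print(urljoin(page, "/products/widgets.html"))
# -> http://www.example.com/products/widgets.html
```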
Relative links - you can lift and drop the whole site to a different location (e.g. from a test server to production), or switch web hosts for sites without their own domain (unlikely outside of personal sites these days).
Absolute links - if your site is scraped the internal links will pull visitors back to your own site
It used to be said that relative links would save a tiny fraction of i/o which could add up in the days of 28k modems.
wheel: Right, I see the "/" now, and it's picking it up from within the page because it's a relative link. Which in turn makes me wonder about absolute linking.
So which is better and more effective: relative or absolute?
Well there you go. Nobody can screw up absolute URLs. People can and do screw up relative URLs. Therefore, use absolute URLs.
In terms of testing sites with absolute URLs, that's easy. Just load your hosts file with the domain pointed to your test IP. Done.
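For instance (192.0.2.10 is a placeholder documentation address; substitute your actual test server's IP), a single hosts-file line makes this machine resolve the live domain to the test box:

```
# /etc/hosts (Linux/Mac) or C:\Windows\System32\drivers\etc\hosts (Windows)
192.0.2.10    www.example.com example.com
```

Remember to remove the line (and flush your DNS cache) when you're done testing.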
|1. Easier to code |
2. Easy for search engines to index (but that is ONLY true to a point, as Google WMT junks up the URLs and gives me errors)
Plus, I question both of these. I'm not sure why relative URLs are easier to code - you have to have a base somewhere, and it might as well be the domain. And I really don't see why relative URLs would be easier to index; Google doesn't have a problem indexing pages.
Agree with wheel. Absolute URLs have lots of advantages, not the least of which is that they make it harder for your site to be scraped.
If you're going to use relative URLs, always opt for "root" relative URLs. By this I mean the relative URL should start with "/" and contain the complete path from the root of your site down to the page to be displayed, like "/somefolder/somepage.html".
Using folder-relative URLs ("pagename.html", "./pagename.html", or "../pagename.html") makes it a piece of cake for someone to scrape your site. They don't have to fix a single URL in your code to have a site like yours up and running in no time.
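A small sketch of that point with Python's standard urljoin (the domains are hypothetical): a folder-relative link resolves against whatever host happens to serve the page, so a scraped copy keeps linking within the scraper's own site, while a root-relative link at least assumes your root layout:

```python
from urllib.parse import urljoin

# Your page, copied wholesale onto a scraper's domain:
stolen = "http://scraper.example/copied/somefolder/somepage.html"

# Folder-relative hrefs keep "working" on the copy with no edits at all:
print(urljoin(stolen, "otherpage.html"))
# -> http://scraper.example/copied/somefolder/otherpage.html

# Root-relative hrefs assume YOUR root layout, so they break on the copy
# unless the scraper rewrites every one of them:
print(urljoin(stolen, "/somefolder/otherpage.html"))
# -> http://scraper.example/somefolder/otherpage.html
```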
I use absolute on everything - links, images, script files.
If your site is dynamic, then set yourself up a variable called 'website' which is [thefulldomainpath.xtn...] before /the-rest-of-the-url.
Then when you move the site from test to live environment all you need to do is change that variable.
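A minimal sketch of that setup (the names WEBSITE and absolute_url are made up for illustration, as is the domain):

```python
# The one place you change when moving from test to live:
WEBSITE = "http://test.example.com"

def absolute_url(path):
    """Build an absolute internal link from a root-relative path."""
    return WEBSITE + path

print(absolute_url("/somefolder/somepage.html"))
# -> http://test.example.com/somefolder/somepage.html
```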
if you're obsessive about page weight like me, then you can hack huge chunks out of it by using relative URLs.
imagine if you had 20 like these on a page:
you can cut them all down to just this:
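With hypothetical URLs, the arithmetic is roughly this: trimming each absolute href down to its root-relative form saves the length of the scheme-plus-domain prefix on every link:

```python
absolute = '<a href="http://www.example.com/somefolder/somepage.html">'
relative = '<a href="/somefolder/somepage.html">'

saved_per_link = len(absolute) - len(relative)  # 22 bytes here
print(saved_per_link * 20)  # -> 440 bytes for 20 such links, pre-compression
```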
londrum, page weight matters, but after gzipping the output, 20 references like that would make practically zero noticeable difference, even on a dialup connection.
not on its own, no. but if you're trying to get page weight down, then relative URLs are the way to go.
In my experience absolute internal links are more effective than relative links in terms of rankings. I won't go into the mechanics behind why I believe this unless someone wants the drawn-out story.
Try telling this to the programmer on the team!
i like stories. give us the drawn out story.
yes CainIV give us the full story please?
I would like to have the drawn out story also.
Ok, well here goes...
A while back (circa 3 years ago) we hired a development team to build out a series of websites for us. The framework was custom-coded 'local directory' software which rewrote SE-friendly URLs from the db dynamically.
At the time, this question came up, and it was a significant enough issue within this particular IT team that it warranted some research (and since the frameworks would not be ready and finalized in dev for at least three months, I had some time to test the theory myself).
I set up a control test whereby I created two almost identical websites (approximately 50 pages), in the same niche, on the same host but on different C-class IPs. Title tags were, for the most part, almost identical, with slight variations.
One website was built entirely with relative links and one was not. All internal linking was 'very' close to the same.
I indexed the websites and revisited my little test.
What I found was that the absolute link website was leading - by a margin, but not by enough to conclude anything.
So I pointed links from the exact same sources to both websites using the same anchor text and pointing at the (relatively) same pages.
What I found was that once links were pointed at the websites in the same fashion, both improved, but the absolute link website improved immensely more than the relative links website.
What is more interesting is that I then began removing links to the website using absolute links, and still it outperformed the other.
I believe Google weighs links at both a domain level and a page level. And I believe that when Google sees a link pointed at another page using the entire absolute URL, it counts that as a full page reference and passes appropriate weight.
When it follows an absolute URL while spidering internal links, it "sees" the reference as a vote for the destination page and counts it the same as it would a link from another website, assuming (and especially when) the referring page on your website has inbound links to it.
When it spiders and follows a relative URL internally on the same domain, it treats it as a direct reference to a 'sister' or sibling page, as opposed to a 'page level' entity.
The reason why you cannot then simply link internally with absolute links, with no inbound links, and expect huge results is that those pages also need to be fed with inbound links; once they are, Google treats them in this respect like it would any other page on the Internet.
A good example of this is the barrage of mashup / content-algo websites that massively interlink and gain significant results on less link equity.
I have tested this as well using keyword anchors linking "home" in a domain using relative links (/index.html) and using absolute links and have seen the same results where the absolute version always performed better.
Again, it is my theory only...and apart from the control tests we can do, difficult to prove.
Definitely open to debate and discussion!
Question: Does the hostname in the absolutely-linked URLs contain a keyword that is relevant to the niche of these sites?
Good point Jim...
PLUS... if the full URL contains two or three keywords, would changing root-relative links to full-absolute links increase the page KW density over the limit and incur a filter?
I suspect it does.
The increased page size will definitely lower page loading speed, to which G has become sensitive to the point of obsession.
I'm sticking with root-relative internal links.
Absolute links, absolutely. I did otherwise early on, and learnt my lesson. Spent many hours afterwards fixing things.
|if the full URL contains two or three keywords, would changing root-relative links to full-absolute links increase the page KW density over the limit and incur a filter? |
Can't see that happening unless one is trying to game things already with many unnecessary KWs.
|The increased page size will definitely lower page loading speed, to which G has become sensitive to the point of obsession. |
I don't know, man. If your code is clean, with no auto-generated bloat, it's a minor factor.
I personally go for relative, to keep the size down.
With CainIV's interesting experiment, and Angonasec comment on keywords I will keep my mind open.
An idea for keyword stuffing (don't try this at home)
I would hope search engines process the final path when gathering keywords so this is not possible.
Another possible negative for relative links is that spiders can also get them wrong. Recently a spider (LexxeBot) has been trying to index pages for which it has obviously miscalculated the absolute path.
Stefan: No bloat at all; hand-made static HTML and CSS, clean and valid at W3C. Recent experience using FF and Google's page speed tools has shown me that extra KB are far from a minor factor. E.g. before we began polishing, one page scored 90/100; tiny reductions in the "weight" of the HTML and CSS brought us up to 99/100.
"So what?" you say?
Google noticed and rewarded the page :) without any other changes.
No effort has ever been made to trick any SE on our NFP site. The +domain itself contains keywords+, simply because they perfectly describe the site's purpose. We bought it long before we knew what a keyword was, when Google was still in a garage.
I'm convinced the great lump of extra KB (and possibly the extra domain KWs) that using absolute internal links would create would be detrimental to our Google ranking.
As I noted above, G seems hypersensitive nowadays.
welcome to WebmasterWorld [webmasterworld.com], Tiggerito!
|An idea for keyword stuffing (don't try this at home) |
allowing dot segments in the path can cause canonicalization issues, so make sure you know what you are doing or you may get more than you asked for.
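To illustrate with Python's standard urljoin, which follows the RFC 3986 resolution rules including dot-segment removal (the URLs and the "cheap-widgets" keyword segment are hypothetical): a keyword smuggled into a dot segment never survives into the resolved path, and two different hrefs collapsing to the same URL is exactly where canonicalization trouble starts:

```python
from urllib.parse import urljoin

page = "http://www.example.com/widgets/page.html"

# The "cheap-widgets" segment is discarded during dot-segment removal,
# so it never appears in the resolved URL:
print(urljoin(page, "cheap-widgets/../other.html"))
# -> http://www.example.com/widgets/other.html

# i.e. it resolves to exactly the same URL as the plain href:
print(urljoin(page, "other.html"))
# -> http://www.example.com/widgets/other.html
```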
I prefer root-relative links like "/that-page" or "/some-folder/some-stuff", always with a leading slash.
Since the site's URL structure is not at all the same as the server's internal folder structure, when using relative URLs (URLs that do NOT begin with a leading slash) it's not a simple thing to work out where, e.g., the images folder URL sits relative to the virtual URL folder the current HTML page is being served from.
When that resource is linked as "/media/thumbs/somepic.png" there's no ambiguity for programmer, user, or SE bot alike, as you're "counting from the root".
Never, ever, use the leading dot "../../../some-resource" construct. You're playing with fire.
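The ambiguity is easy to demonstrate with Python's standard urljoin (the URLs are hypothetical): the same folder-relative href resolves to different locations depending on nothing more than whether the page's own URL happens to end with a slash:

```python
from urllib.parse import urljoin

rel = "media/thumbs/somepic.png"

# No trailing slash on the page URL: resolved against the root folder.
print(urljoin("http://www.example.com/gallery", rel))
# -> http://www.example.com/media/thumbs/somepic.png

# Trailing slash: resolved against the /gallery/ folder instead.
print(urljoin("http://www.example.com/gallery/", rel))
# -> http://www.example.com/gallery/media/thumbs/somepic.png

# Root-relative: identical either way.
print(urljoin("http://www.example.com/gallery", "/media/thumbs/somepic.png"))
# -> http://www.example.com/media/thumbs/somepic.png
```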
After all this discussion let me give you a different perspective that I see all the time that most people aren't familiar with.
Just because you see something in Google's WMTs doesn't mean Google found that on your site.
Often scrapers incorrectly parse your page and jumble the URLs (it happens to me all the time), and then Google crawls the page that someone else made with the jumbled URL.
Next thing you know, you have hundreds or even thousands of bogus URLs hitting your site courtesy of Google, Bing, Yahoo, etc. crawling 3rd party idiot sites mucking up WMTs.
So if you can't reproduce the problem, it probably doesn't actually exist, except on sites run by idiot scrapers.
I remain open on the question.
I have always built sites with menu navigation and the like as relative links, because it is easier: I can browse the site on my hard drive. But when I have added local keyword links in the footers, I have used absolute links, because then I don't have to worry about the location of the referring page and having to adjust the relative link accordingly.
Thanks for the input. I never really thought about it like that before. Thanks.
|Question: Does the hostname in the absolutely-linked URLs contain a keyword that is relevant to the niche of these sites? |
In my test, yes both websites did.
absolute links all the way.