| 3:42 pm on Apr 26, 2007 (gmt 0)|
I do not believe it would make any difference.
| 4:19 pm on Apr 26, 2007 (gmt 0)|
I tried it once, but it didn't seem to make a difference.
Just make sure all your links to your home page go to http://www.example.com/ and not http://www.example.com/index.html, and same for any directories.
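A hedged mod_rewrite sketch (assuming Apache with mod_rewrite enabled, and www.example.com as a placeholder host) of how to mop up any /index.html requests that slip through anyway:

```apache
# Hypothetical .htaccess fragment: 301-redirect any direct request for
# index.html back to the bare directory URL, so only one version of the
# page can get indexed. The THE_REQUEST check prevents redirect loops
# caused by internal DirectoryIndex rewrites.
RewriteEngine On
RewriteCond %{THE_REQUEST} /index\.html
RewriteRule ^(.*/)?index\.html$ /$1 [R=301,L]
```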
| 5:26 pm on Apr 26, 2007 (gmt 0)|
No difference? That's cool
| 9:07 pm on Apr 26, 2007 (gmt 0)|
I have found hard coded to be more effective.
| 11:36 pm on Apr 26, 2007 (gmt 0)|
Unless these absolute paths make a big difference in the ratio of code to text on your pages, I would not do it. If it doesn't cut down on page size or code to text ratio, then it really isn't improving anything. If it ain't broke, don't fix it... especially if what you're doing isn't really fixing (or improving) anything. I say leave links alone unless you have a good reason. Just my opinion.
| 11:49 pm on Apr 26, 2007 (gmt 0)|
You should include a base href in the <head> section before making the change.
| 12:43 am on Apr 27, 2007 (gmt 0)|
>>I mean, is it safe to create a relative urls on this site?
If you do this, don't forget the base href, as mentioned. Also, I found that many a sloppy webmaster linking back to your content will forget to account for the relative link...
| 9:09 am on Apr 27, 2007 (gmt 0)|
Absolute paths are said to be safer than relative for hacking.
| 10:56 am on Apr 27, 2007 (gmt 0)|
Only do it if you are positive your hosting company has your server's httpd.conf set up correctly; otherwise you are setting yourself up for canonical-page issues that Google has only partially fixed.
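For what it's worth, a hedged sketch (assuming Apache with mod_rewrite, and www.example.com as the hostname you want to canonicalize onto) of the kind of server-side setting I mean:

```apache
# Hypothetical httpd.conf / .htaccess fragment: force a single canonical
# hostname so bots never see the same page under two different domains.
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```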
| 12:20 pm on Apr 27, 2007 (gmt 0)|
" Absolute paths are said to be safer than relative for hacking."
This idea has been mentioned when hijacking has been discussed. Why take chances?
| 1:18 pm on Apr 27, 2007 (gmt 0)|
That which is fully specified leaves no room for misinterpretation on the part of the search engine bots.
Even if the server configuration is correct, a mistake by a search engine bot in selecting the correct domain for a site on a shared, name-based server can cause all kinds of duplicate content.
| 11:41 am on May 7, 2007 (gmt 0)|
Thanks all for your answers
Let me ask you one more: is it dangerous to use a combination of absolute and relative links? I'm asking this because a long time ago I saw a thread that said it was harmful.
| 1:52 pm on May 7, 2007 (gmt 0)|
"you should include a base href in meta before change."
I always meant to ask: should the base href statement be just on the home page, or does it need to be on every page of a site?
| 6:13 pm on May 7, 2007 (gmt 0)|
I never use links like "somefile.html" or "../otherfile.html". They are too dangerous.
I mostly use links that begin with a "/", something like "/thatfolder/somefile.html" etc.
The base tag (one per page) is very useful in this situation too.
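For reference, a minimal sketch of what that looks like (www.example.com and the folder name are placeholders; the tag goes once in the <head> of each page):

```html
<head>
  <!-- Hypothetical example: every relative URL on this page now
       resolves against this URL instead of the page's own location. -->
  <base href="http://www.example.com/thatfolder/">
</head>
```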
| 9:08 pm on May 7, 2007 (gmt 0)|
Hi, I just came across this topic and it's something I am interested in. Instead of creating a new topic, I thought I would ask my questions here so the discussion can be expanded.
Firstly, a few weeks ago my (dedicated) server was running slow. My host looked into it and said they fixed it; however, they also said I should change all my absolute URLs to relative, because absolute URLs can cause more HTTP connections, or something like that. Is that true?
I am also worried about the claim that relative paths are more dangerous. Can someone elaborate on how they are more dangerous? I changed a lot of my links to use ../file.html and now I am just a tad worried.
Should I change all my relative URLs back to absolute?
| 10:44 pm on May 7, 2007 (gmt 0)|
At the bare minimum start the URL with a slash so that it counts structure from the root of the site.
| 12:16 pm on May 11, 2007 (gmt 0)|
absolute links are helpful when your content is scraped. sure you have duplicate content on many sites but at least the content has absolute links pointing to you :)
| 12:26 pm on May 11, 2007 (gmt 0)|
I just fixed a site that was using relative links with this ../../ type of stuff too. I converted all of the links to absolute and it actually made a very noticeable difference, especially with Yahoo.
| 12:53 pm on May 11, 2007 (gmt 0)|
I vote for it not making any difference to the serps - or as little as makes no difference. If it's any help, Wikipedia seems to be doing pretty well with relative links.
|absolute links are helpful when your content is scraped. sure you have duplicate content on many sites but at least the content has absolute links pointing to you :) |
Fair point, but balance this with the knowledge that you have then received links from what is clearly a bad neighbourhood. Six of one, half a dozen of the other, really.
As sites get more complex, relative URLs are easier to work with, I believe. But isn't there some kind of Apache module or .htaccess command that can turn relative links into hard links on the fly? There HAS to be!
| 1:56 pm on May 11, 2007 (gmt 0)|
.htaccess has nothing to do with it, actually.
Page-relative or server-relative links are resolved by the client, that is, they're resolved by the visitor's browser or by the search engine robot. The use of hard-coded absolute URLs eliminates the possibility of problems caused by errors in these clients' handling of relative URLs.
While these errors are rare, they do happen occasionally. So by using relative links, you're introducing an external dependency on your site -- You are counting on the client to correctly resolve the relative links.
Using <base href> in the <head> section of your HTML can help in those cases where the client is slightly befuddled.
On the other hand, the client might just as well make an error in handling an absolute link or a <base href>, so this factor is not decisive on its own.
By using page-relative links, you can preserve the linking function of your site while working on it using your local computer with no server installed. So ease-of-development gives the nod to page-relative links.
Using relative links also reduces the size of your on-page code, a factor that might rise to importance if your page has many links but is otherwise 'thin' on content. It's also easier to change domains on a site written with relative links, but if you have to change domains, you've got bigger problems than just editing links.
This 'ease of changing domains' is what makes relative-link sites attractive to site-copiers and scrapers, though.
There is nothing inherently dangerous about using a mixture of page-relative, server-relative, and canonical links, except as each has its advantages and drawbacks noted above.
Just to define my terms:
Page-relative: <img src="image.gif"> or <img src="../image.gif"> or <img src="images/image.gif"> or <img src="../images/image.gif">
Server-relative: <img src="/image.gif"> or <img src="/images/image.gif">
Absolute or canonical: <img src="http://www.example.com/image.gif"> or <img src="http://www.example.com/images/image.gif">
Again, the key point here is that it is the client that resolves relative links.
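One way to check what a well-behaved client will do with each of these is to resolve them yourself; a quick sketch using Python's standard-library urljoin (the page URL and filenames are just placeholders):

```python
from urllib.parse import urljoin

# The page the links appear on -- the "base" a client resolves against.
page = "http://www.example.com/articles/2007/april.html"

# Page-relative: resolved against the page's own directory.
assert urljoin(page, "image.gif") == "http://www.example.com/articles/2007/image.gif"
assert urljoin(page, "../image.gif") == "http://www.example.com/articles/image.gif"

# Server-relative: resolved against the root of the same host.
assert urljoin(page, "/images/image.gif") == "http://www.example.com/images/image.gif"

# Absolute/canonical: used as-is, regardless of the page it appears on.
assert urljoin(page, "http://www.example.com/image.gif") == "http://www.example.com/image.gif"

print("all three link styles resolved as expected")
```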
| 2:19 pm on May 11, 2007 (gmt 0)|
I agree wholeheartedly JD. (Who wouldn't agree with JD)
|Again, the key point here is that it is the client that resolves relative links. |
The only kind of client that might get it wrong is some hand-made scraper program. A decent search engine bot would never, ever have a problem. If they did, it wouldn't be your problem; it would be everyone's.
| 10:50 pm on May 11, 2007 (gmt 0)|
Using your terminology, I vote for server-relative links.
| 11:07 pm on May 11, 2007 (gmt 0)|
|Absolute paths are said to be safer than relative for hacking. |
This is a very good point, and also why I mentioned using base href on all pages.
We should also determine what is meant by a relative link. There's a difference between "/page.htm", "../page.htm", and "page.htm".
My advice would be to use "/page.htm" if going relative, with a base href in the <head>.
That's from the point of view of protecting against hackers, not what happens client-side.
| 6:28 am on May 14, 2007 (gmt 0)|
In theory, the way that you express your links should mean absolutely nothing to the spider, since the logical way for a spider to handle a link is (1) convert it into an absolute URL (2) then do whatever it is you do.
However, that's just in "theory." In "practice" it's a little different...
I wrote a web spider once, and I hacked together my own parsing subroutine that took the URI out of the HREF="..." part and tried to convert it into an absolute URL.
I soon realized that my spider had some bugs with relative links like "../../articles/page.html"
My solution? Toss those stupid links out! Who needs em? It's not like my little Perl script was going to spider the entire web anyhow, and it already had a million times more URLs to process than it ever could possibly process in a lifetime, so I just tossed 'em out. No big deal--my client was happy, and no time was wasted on such a trivial detail that the end user will be completely oblivious to.
Which gets me to thinking: could it be that the programmers at Yahoo, Inktomi, etc. were similarly minded, that they had a "who needs 'em" attitude when it came to certain link structures? I tend to doubt it, but on the other hand I wouldn't be entirely surprised.
So my motto is, keep things as easy as possible for the software to handle, whether that software is a web browser, or a search engine spider, or whatever.
"In theory, there's no difference between theory and practice. In practice, there is." -- Yogi Berra
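Incidentally, the conversion step my spider fumbled is a solved problem in modern standard libraries. A sketch of it in Python rather than my original Perl (the URLs and function name are hypothetical):

```python
import re
from urllib.parse import urljoin

def extract_links(page_url: str, html: str) -> list[str]:
    """Pull href values out of the markup and convert each one into an
    absolute URL -- the step the hand-rolled parser got wrong."""
    hrefs = re.findall(r'href="([^"]+)"', html, flags=re.IGNORECASE)
    return [urljoin(page_url, h) for h in hrefs]

html = '<a href="../../articles/page.html">x</a> <a href="/about.html">y</a>'
links = extract_links("http://www.example.com/blog/2007/05/post.html", html)
# The tricky ../../ case resolves cleanly instead of being tossed out:
# http://www.example.com/blog/articles/page.html
print(links)
```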
| 6:57 am on May 14, 2007 (gmt 0)|
|"Absolute paths are said to be safer than relative for hacking." |
Of course you mean to say "Absolute paths are safer than relative to PREVENT hacking." In other words, "Don't use relative links because if you do, you might get hacked."
I've heard this before myself. I think this is hogwash, pure and simple. You can use relative links out the wazoo, left and right, all day long, and you haven't changed your security situation one bit.
Well, a witchdoctor told me that if I hang garlic outside my door it will keep evil spirits away. He's probably just a crazy old superstitious fool, but I better go hang some garlic anyhow, because, "why risk it?"
My guess is that some HTML coders heard the word "hacker" and "relative path" used in the same sentence so often, that they irrationally became afraid of relative links, without actually understanding what the issue is.
If there really is an issue here, I'd love to know about it, but I strongly suspect it's just an old yarn that was born out of ignorance.
| 7:17 am on May 14, 2007 (gmt 0)|
You're right, there's nothing here that addresses hacking. It's scraping that people try to undermine by using absolute URLs, I think. Only the dumbest of scrapers are fooled - but there are still some dumb ones out there.
| 7:44 am on May 15, 2007 (gmt 0)|
Based on my basic SEO knowledge, I always prefer to use absolute URLs. They are safer and will benefit in the long run (keeping in mind my site is not a huge one; it has merely 20 pages).