Forum Moderators: open
Would work a lot better than:
<a href="http://www.example.com/">Home</a>
<a href="index.html">Home</a> is OK, as mentioned above, but there's no harm in adding a link at the bottom of every page that says something like <a href="http://www.widgets.com/">Widgets</a>.
Different search engines use different criteria to rank sites, so link to your pages with anchor text that describes them.
Sorry if my post reads like 'teaching granny to suck eggs 101'; just trying to help. Without seeing your site it's not easy to offer advice. Sticky your URL to me and I'll take a look and give you my opinion, for what it's worth.
The earlier posts in the thread got tangled up with our "abbreviation filter" that expands abbreviations of WebmasterWorld to the full text. But if we use example.com to avoid that issue, we're talking about:
<a href="/">Home</a>
-- or --
<a href="http://www.example.com/">Home</a>
On a slightly more cynical note... using the full URL in your 'home' link can get you a *wealth* of inbound links from the more careless of those darling folks who just like to rip off the content of others.
It sounds like you've had some interesting experiences in this regard, but I'm rather at a loss to figure out exactly what you mean. Using <a href="http://www.example.com/">Home</a> can somehow lead to getting your content 'ripped off'?
I've come across a couple of sites which were whole-cloth copied like this, but pretty much assumed that this was so oafish that only a one-in-a-million genuine oafey oaf would even contemplate it.
Apparently I'm wrong.
Tedster, you have captured my question.
What is your feeling on which is better to use?
I have never used the bare "/" to send people to the domain root, so I have no experience there. Although I admire the minimalism of "/", I always use "http://www.example.com/" and have had no trouble with search engines seeing the link.
I feel it is the very safest choice, because it will be the most common form of inbound links from other websites - so there's no chance of a technical complication inside the secret caves of the search engine algorithms.
Tedster, I humbly disagree with the 'safe' theory. Any robot will already know the 'host' and resolve any relative links correctly.
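That point about robots resolving relative links can be illustrated with a quick sketch using Python's standard urllib.parse, rather than any particular spider's actual code; the page URL and file names here are made up for the example:

```python
# How a crawler turns the relative links it finds on a page into
# absolute URLs: it already knows the URL of the page it fetched,
# and resolves each href against it.
from urllib.parse import urljoin

page = "http://www.example.com/widgets/index.html"

print(urljoin(page, "/"))             # http://www.example.com/
print(urljoin(page, "prices.html"))   # http://www.example.com/widgets/prices.html
print(urljoin(page, "../about.html")) # http://www.example.com/about.html
```

So a bare href="/" resolves to the domain root just as unambiguously as the full absolute URL does, at least as far as the fetching side is concerned.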
Why add code bloat to your pages when it's not needed?
Just my 2cents :)
Birdman
If the boss asks for a current copy of the website on CD, I can just download the files, burn them to CD, and it works exactly as it did on the server. Of course, you'll lose any SSI or other server-dependent functions.
As I said, if you are having no problems, you may convince me to switch. But I have been bitten one too many times by apparently simple technology glitches that do make a difference (such as with or without "www", "_" vs. "-", or "index.html" vs. "http://www.example.com/").
I am a crazy man about code bloat, so I like the file size savings. The most recent home page design I did had a total page weight of 11.6kb (including css, js and image files). I was so excited I added a couple of rollovers as a luxury that I could afford!
No matter what technology a search engine uses, it will have a URL class to handle the different methods of internal linking. I'm working on a tool that uses Perl's URI module, and it does a good job of it.
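Not having seen the tool itself, here's a rough sketch of the kind of cleanup such a URL class typically does, written with Python's urllib.parse instead of Perl's URI module; the specific rules shown (lowercase the host, drop the default port, supply a root path) are my own assumptions for illustration, not what any engine actually does:

```python
# Toy URL normalizer: collapses trivially different spellings of the
# same address into one canonical string, so differently formed but
# equivalent links can be counted together.
from urllib.parse import urlsplit, urlunsplit

def normalize(url: str) -> str:
    parts = urlsplit(url)
    host = parts.netloc.lower()                       # hostnames are case-insensitive
    if parts.scheme == "http" and host.endswith(":80"):
        host = host[:-3]                              # :80 is the default for http
    path = parts.path or "/"                          # empty path means the root
    return urlunsplit((parts.scheme.lower(), host, path, parts.query, ""))

print(normalize("HTTP://WWW.Example.COM:80"))  # http://www.example.com/
```

Note that even a normalizer like this can't safely equate "/" with "/index.html"; that mapping is a server configuration detail, which is exactly why engines treat them as separate URLs.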
Sorry to be sounding like an expert here, because I'm NOT ;)
One question (Tedster): do you link to all your pages with absolute URLs, or just back to the homepage?
Birdman
I'm with you on the code bloat thing!
"handle the different methods of internal linking"
Yes, but Google is awful at seeing the root of the domain and knowing that it is the same as index.html or home.htm or whatever. Partly, of course, because there is no guarantee that both URLs point to the same resource. Nevertheless, two differently formed but equivalent URLs often collect independent PageRank, backlink anchor text, etc.
As I said, it's not Googlebot I worry about, it's how their back end crunches the data that Googlebot collects. The fact that you can see your internally linked pages with a link: command is heartening.