Forum Moderators: open
Back in the old days, I remember being told that within the HTML coding, copy placed higher on the page gets more weight. That means that if you have a complex table layout that pushes the copy down the page, Google would be busy looking at all the coding rather than the copy at the bottom of the page.
Is this really still true? Was this ever true?
You might use CSS [webmasterworld.com], but some of us think that CSS is not really easy. ;)
From my experience (limited, of course), tightly-coded, recurrent navigation does not impair positioning.
Split your page into a header table, and for the content and links use two tables. Put a table at the top with the content in it, aligned to the right; put a table below it, aligned to the left, with the navigation in it. Use percentages for the widths, e.g. 75% for content and 25% for links.
That would put the content above in the HTML but would show next to the navigation in a browser.
Works for me.
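A rough sketch of the layout described above (the cell contents and exact markup are my own illustration; the widths and align values are the ones suggested):

```html
<!-- Header table spans the full width -->
<table width="100%"><tr><td>Header / logo</td></tr></table>

<!-- Content table comes FIRST in the source, floated to the right -->
<table width="75%" align="right"><tr><td>Main copy here</td></tr></table>

<!-- Navigation table comes second in the source, floated to the left,
     but in a browser it renders beside the copy, not below it -->
<table width="25%" align="left"><tr><td>Nav links</td></tr></table>
```

So a spider reading the file top to bottom hits the copy before the navigation, while a visitor still sees the usual nav-on-the-left layout.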
I would actually be rather shocked if HTML tags before the copy made any difference. By that, I mean what is strictly between '<' and '>'. Of course you can end up with a lot of text that is not a part of your content mixed in with those tags pushing down your content, and that is a different story.
A site might get away with that if the competition was no better, but if a site came along that had a better "signal to noise" ratio in its source code, that new site would likely gain ground, all other things being equal.
I have been able to prove this personally by changing page construction and seeing radically different rankings even taking into account links.
From a programming point of view, it would be significantly more difficult to take the position in the file into account than the position within the content.
The first thing you do is pull out the content between tags that you give bonus points to, such as title and h1.
Then you just strip out all tags. They just slow down all the additional processing that you will do, and complicate your parsing algos. That's file parsing 101, throw out the garbage before doing your real work.
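Something like this sketch, which is just my own illustration of the "extract the bonus fields, then throw out the garbage" approach (not anyone's actual algo):

```python
import re

def extract_text(html):
    """Pull out 'bonus' fields first, then strip every remaining tag
    so later processing only sees plain content."""
    # Grab text inside tags that get extra weight, e.g. <title> and <h1>.
    bonus = re.findall(r"<(title|h1)[^>]*>(.*?)</\1>", html, re.I | re.S)
    # Then throw away all markup before doing the real work.
    plain = re.sub(r"<[^>]+>", " ", html)
    plain = re.sub(r"\s+", " ", plain).strip()
    return {tag.lower(): text.strip() for tag, text in bonus}, plain

weights, body = extract_text(
    "<html><title>Widgets</title><body><h1>Cheap Widgets</h1>"
    "<p>Buy widgets here.</p></body></html>")
# weights -> {'title': 'Widgets', 'h1': 'Cheap Widgets'}
```

Once the tags are gone, "position" can only mean position within the remaining content, which is the point.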
"more difficult to take the position in the file into account than the position within the content"
I don't know enough about programming to comment on that. Intuitively I'd have guessed it would be the reverse.
What I do know is this: when I find ways to move the content "up" and reduce code clutter on my own pages, or persuade a client to do the same, the effect on rankings is always either neutral or positive, never negative.
You're talking about a very 'simple' algo for digesting HTML. Just take note of some important tags (h1, strong, etc.) and then throw the rest away.
I can't prove this, but I think they are using something more sophisticated that keeps all the tags in view while parsing. That way you don't lose ANY information during parsing, and you can always react according to the state you're currently in.
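A stateful parse along those lines might look like this sketch (my own assumed approach, with made-up weights, nothing to do with Google's real code):

```python
from html.parser import HTMLParser

class WeightedTextParser(HTMLParser):
    """Instead of stripping tags up front, track which elements we are
    currently inside and weight each run of text by that state."""
    BOOST = {"title": 10, "h1": 5, "strong": 2}  # invented numbers

    def __init__(self):
        super().__init__()
        self.stack = []      # currently open tags = the parser's state
        self.weighted = []   # (weight, text) pairs, in document order

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if tag in self.stack:
            self.stack.remove(tag)

    def handle_data(self, data):
        text = data.strip()
        if not text:
            return
        # React according to the current state: the enclosing tags
        # decide how much this piece of text is worth.
        weight = max([self.BOOST.get(t, 1) for t in self.stack], default=1)
        self.weighted.append((weight, text))

p = WeightedTextParser()
p.feed("<title>Widgets</title><body><h1>Cheap Widgets</h1>"
       "<p>Buy <strong>now</strong>.</p></body>")
# p.weighted -> [(10, 'Widgets'), (5, 'Cheap Widgets'),
#                (1, 'Buy'), (2, 'now'), (1, '.')]
```

Nothing is discarded here; every tag feeds the state machine, so the parser can weight, ignore, or penalise text depending on where it sits.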
When it makes sense, you can expect that google will choose the simple approach. Especially when it will lead to a speed improvement with no detrimental effects to their results.
Also, in practice I haven't seen any sign that Google favours pages with fewer tags.