| 8:55 pm on Jan 3, 2009 (gmt 0)|
I've not heard that before, ever. Can't see how it would slow G down.
| 9:18 pm on Jan 3, 2009 (gmt 0)|
>told by a web-designer
I think I see the problem.
My guess is that his theory is that the HTML is often heavier with tables, therefore it must slow the spider. From a practical standpoint, at least for basic SEO, I've not known tables to cause any problems. I suppose that in very, very competitive SERPs - everything else being the same - one might argue that table-less design might edge out a table-based page. The trouble is, of course, that everything else is never the same.
Tell him to stick with his color palette.
| 1:37 am on Jan 4, 2009 (gmt 0)|
it's not really true that spiders have a problem with crawl speed and tables.
the issue is clarity of semantic purpose for less physically-abled visitors (which includes spiders).
if the table is used to represent tabular content, then it is the semantically correct markup to use.
if the table is used for style and layout, then you are adding mud to the meaning and purpose of your content.
| 10:51 am on Jan 4, 2009 (gmt 0)|
nah, the engine bots don't read the markup, they strip it out, so it isn't an issue.
The only time it matters is where the way the page content linearises means it comes out as gobbledygook - it's a pretty rare case for table-based layouts and virtually unheard of for CSS-based layouts.
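To make "linearises" concrete, here's a minimal Python sketch (purely illustrative - real engines are far more sophisticated) that pulls the text out of a table-based layout in source order, which is roughly what linearisation means:

```python
# Toy illustration only: extract text in source order to see how a
# table "linearises". Not a claim about how any real spider works.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

layout = """
<table><tr>
  <td>Sidebar link</td>
  <td>Main article text</td>
</tr></table>
"""
p = TextExtractor()
p.feed(layout)
# Cells come out left-to-right, row by row, in source order.
print(" | ".join(p.chunks))  # Sidebar link | Main article text
```

If the linearised order still reads sensibly, the layout table is doing no harm by this measure.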
| 2:29 pm on Jan 4, 2009 (gmt 0)|
I understand now - tables lower the text-to-HTML ratio (more markup around the same amount of text).
| 6:16 pm on Jan 4, 2009 (gmt 0)|
Thank you for all of your answers. My tables don't seem to have caused me any issues in the past, so I think I will carry on using them. I understand the idea of the HTML ratio, as obviously it increases the number of levels the spiders have to crawl through.
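That "ratio" can be made concrete with a toy calculation. A hedged sketch in Python - the formula here is invented for illustration, and no search engine has published any such metric:

```python
# Hypothetical "text to HTML" ratio for the same content marked up two
# ways. The metric is an assumption for demonstration purposes only.
import re

def text_ratio(html):
    text = re.sub(r"<[^>]+>", "", html)     # strip all tags
    text = re.sub(r"\s+", " ", text).strip()  # collapse whitespace
    return len(text) / len(html)

table_version = "<table><tr><td>Hello world</td></tr></table>"
div_version = "<div>Hello world</div>"

print(round(text_ratio(table_version), 2))  # 0.25
print(round(text_ratio(div_version), 2))    # 0.5
```

Same content, but the table wrapper roughly halves the ratio in this contrived case - whether that matters to anyone's algorithm is exactly what's in dispute in this thread.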
>tell him to stick with his colour palette
:) i might just do that!
| 5:27 pm on Jan 5, 2009 (gmt 0)|
html ratio, number of levels...these things are irrelevant.
Simply put - the search engines strip out all HTML markup (tags) and only read your content. Whether you use tables, divs or marquees it doesn't matter.
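A toy demonstration of that claim - `strip_tags` below is a naive stand-in for whatever the engines actually do, not the real pipeline:

```python
# If all markup is stripped before indexing, the same text wrapped in a
# table, a div, or a marquee is indistinguishable afterwards.
import re

def strip_tags(html):
    return re.sub(r"<[^>]+>", "", html).strip()

table_page = "<table><tr><td>widgets for sale</td></tr></table>"
div_page = "<div>widgets for sale</div>"
marquee_page = "<marquee>widgets for sale</marquee>"

print(strip_tags(table_page) == strip_tags(div_page) == strip_tags(marquee_page))  # True
```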
| 5:47 pm on Jan 5, 2009 (gmt 0)|
|Simply put - the search engines strip out all HTML markup (tags) and only read your content. |
What about semantics? Do the search engines interpret the HTML markup prior to stripping?
| 5:59 pm on Jan 5, 2009 (gmt 0)|
I'm with pageoneresults. I don't think any of the search engines strip all of the HTML from documents.
If I was writing a search engine, I would preserve all of the HTML in my internal copy. The aim would be for all of the tags to be recognised, even if the current algorithm did not interpret the use of a particular tag or element. Titles, headings, links - they might be some of the first tags I'd start trying to interpret, but I'd like to be able to interpret all of them. CSS classes and IDs too ;)
I've also never been that concerned by HTML/text ratio. Either a tag gets interpreted, or it gets ignored. Badly formed tags get mis-interpreted, but other than that, I don't think sloppy code is a major barrier. Maybe a missed opportunity, but not really a barrier, unless it's seriously malformed.
It would be nice to be able to assume that a table contained tabular data, I guess.
| 6:19 pm on Jan 5, 2009 (gmt 0)|
I'm going to jump out on a limb here and say that there may be some challenges when using <table>s for layout purposes. It has been a long time since I've used a tabled structure for anything other than forms - and I can still draft up one heck of a table if you want one. :)
I remember years ago when participating in discussions about using <table>s. There were some inherent problems back then when you enclosed the entire design in one encompassing <table>. The UA had to parse that entire <table> before displaying the contents within. Users would end up with a blank page for a second or two while the parsing took place. Back then, the recommendation was to break the site up into multiple tables so they would display one at a time and the user would not be left with a blank page on first load.
I would think that using a tabled layout for today's Internet is against best practice and may also be devoid of any "real" semantics. I'd like to think that the SEs are processing HTML markup as it was intended to be processed. I think they always have. They've just perfected that process over the years and refined it to the nth degree, or at least Google have.
I'm sure there are a few <table> enthusiasts here who do quite well with their layouts. I think that is a prime example of how well the bots have progressed in interpreting the meaning of the content. But, they can only "guess" so much. Why leave that to chance?
| 7:48 pm on Jan 5, 2009 (gmt 0)|
Technically, yes, the search engines do see your markup and do save it. Google's cache clearly exhibits this practice.
Realistically I don't see why the search engines would impose any kind of explicit benefit or penalty due to your markup. They are just returning content based upon a query. This however, is just speculation.
You certainly could suffer from an implicit penalty due to the spiders not understanding or not being able to find your content. However both semantic and table-based sites can run into this same problem if the coder didn't know what he was doing. I find it hard to believe (and have yet to see) that nesting a few tables is going to affect your rankings.
Anecdotally I own both table-based and semantics-based websites, and I'm not seeing any perceivable difference when it comes to the SERPS, crawl rate, etc.
| 9:39 pm on Jan 5, 2009 (gmt 0)|
Hmmm... thinking some more - I know I have read comments from Google people that the HTML is stripped and doesn't affect the index (can't recall if it was Matt Cutts or Vanessa Fox, but I'm pretty sure it was one of the two, ages ago).
We all know that choosing what goes in the title element and the Hn elements, among others, does affect how the site returns in the SERPs, as well as the anchor text of a link affecting that link.
So obviously the HTML does affect how the page is parsed in some way.
I'm now on the fence on this one, guys - without some testing that I am too busy to do, we can't know what the engines are doing with the HTML. :(
(But I still don't think that tables that linearise properly will be a problem for the crawl)
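One way to square the two views is that the markup could be consumed for signals (title, Hn, anchor text) and then discarded. The tags and weights in this Python sketch are pure assumptions for illustration - nobody outside the engines knows the real values:

```python
# Hedged sketch: if an engine weights some tags before discarding the
# rest of the markup, it might look something like this. The tag list
# and the weights are invented for demonstration only.
import re

def weighted_terms(html):
    weights = {"title": 3, "h1": 2, "a": 1}  # assumed weights, not real
    scores = {}
    for tag, weight in weights.items():
        for inner in re.findall(rf"<{tag}[^>]*>(.*?)</{tag}>", html, re.S):
            for word in re.findall(r"\w+", inner.lower()):
                scores[word] = scores.get(word, 0) + weight
    return scores

page = "<title>Widget shop</title><h1>Widgets</h1><a href='/x'>cheap widgets</a>"
print(weighted_terms(page))
```

Under this (assumed) model, a layout `<table>` contributes nothing either way: it carries no weighted tag, so it is simply ignored rather than penalised.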
Edit: trivial spelling
[edited by: leadegroot at 10:04 pm (utc) on Jan. 5, 2009]
| 9:54 pm on Jan 5, 2009 (gmt 0)|
Ah-ha, I almost forgot, I've been down this path before. :)
How Do Search Engine Robots Work?
|search engines consist of five discrete software components: |
1. Spider: a robotic, browser-like program that downloads webpages.
2. Crawler: a wandering spider that automatically follows links found on pages.
3. Indexer: a blender-like program that dissects webpages that are downloaded by spiders.
4. The Database: a warehouse of the pages downloaded and processed.
5. Results Engine: digs search results out of the database.
I would think it is the Indexer that handles the semantic side of things?
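The five components above can be sketched as a toy in Python - the `fake_web` dict and all of the logic here are invented for demonstration, not a claim about how any real engine is built:

```python
# Toy sketch of the five components: spider, crawler, indexer,
# database, results engine. Everything here is illustrative only.
import re
from collections import defaultdict

fake_web = {  # stand-in for the actual web
    "/home": '<h1>Welcome</h1><a href="/about">about us</a>',
    "/about": '<table><tr><td>We sell widgets</td></tr></table>',
}

def spider(url):                      # 1. Spider: download a page
    return fake_web.get(url, "")

def crawler(html):                    # 2. Crawler: find links to follow
    return re.findall(r'href="([^"]+)"', html)

def indexer(html):                    # 3. Indexer: strip markup, tokenize
    text = re.sub(r"<[^>]+>", " ", html)
    return [w.lower() for w in re.findall(r"\w+", text)]

database = defaultdict(set)           # 4. Database: inverted index

def build_index(start):
    queue, seen = [start], set()
    while queue:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        html = spider(url)
        for word in indexer(html):
            database[word].add(url)
        queue.extend(crawler(html))

def results_engine(query):            # 5. Results engine: query the index
    return sorted(database.get(query.lower(), set()))

build_index("/home")
print(results_engine("widgets"))      # ['/about']
```

Note how the `<table>` markup around "We sell widgets" vanishes at the indexer stage - which is the conceptual point being debated in this thread.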
| 10:12 pm on Jan 5, 2009 (gmt 0)|
|I would think it is the Indexer that handles the semantic side of things? |
Yes, that is my understanding.
So if we accept this logic of how-the-bots-handle-pages, then a table can't slow down the crawl, beyond page bloat making the page bigger, because it is just markup, and the spider doesn't care. It is the Indexer, conceptually, that could be slowed down by a table. But IMHO it would have to be really broken or mega huge before it was an issue. (I know it is a common opinion that broken html (incorrectly nested tags, etc) can limit the engine's ability to list the page.)
I think that if, e.g., Google had found table-based layout to be an issue, then we would see semantic markup on their pages. We don't - it's all old-school, table-based HTML.
| 11:11 pm on Jan 5, 2009 (gmt 0)|
Almost one year to the day Edward, LOL