If a page is not validating, why would you think it would be properly spidered? Proper code gives the spiders direction.
You should try to clean up the code as much as possible. Start at the top of the page: validation errors cascade, so cleaning up one may resolve many.
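If you want a quick way to see what needs cleaning, you can feed the page to a checker programmatically. The rough sketch below is just an illustration: it assumes the public Nu Html Checker JSON endpoint at validator.w3.org/nu and its usual message format, so adjust it to whatever validator you actually use. It lists each message with its line number so you can work from the top of the page down.

    import json
    import urllib.request

    def check_page(html_bytes):
        # Assumption: the Nu Html Checker accepts raw HTML POSTs and
        # returns JSON with a "messages" list when out=json is set.
        req = urllib.request.Request(
            "https://validator.w3.org/nu/?out=json",
            data=html_bytes,
            headers={
                "Content-Type": "text/html; charset=utf-8",
                "User-Agent": "validation-check-sketch",
            },
        )
        with urllib.request.urlopen(req) as resp:
            messages = json.load(resp)["messages"]
        # Sort by line number so you can fix errors from the top down.
        for msg in sorted(messages, key=lambda m: m.get("lastLine", 0)):
            print(msg.get("lastLine"), msg["type"], msg["message"])

    check_page(open("index.html", "rb").read())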
However, do be aware that certain types of coding errors can prevent part of the page content from being spidered at all. If it isn't spidered, it can't be indexed or ranked.
Remember that one of the things Google notices is how other sites respond to you. Over time, the difference between providing a glitch-free user experience for "90% of the market" versus aiming for 99% could quite conceivably result in one site acquiring a stronger link profile than another. That's not guaranteed (nothing ever is in SEO), but considering that competitive SEO is often a game of inches, no advantage should be tossed aside lightly.
Never turn down a chance to do something better than the other guy.
Does validation serve only to avoid glitches and strange rendering problems?
If it causes rendering problems you will probably be aware of it; if it causes indexing problems, will you know? Minor violations are probably not an issue, but a missing end tag, for example, could easily result in the structure of the page being misinterpreted.
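To make the missing-end-tag case concrete, here is a small sketch of my own (not anything the engines publish) using Python's built-in html.parser to keep a stack of open elements. Whatever is still on the stack at the end of the document was never closed, and as far as a parser is concerned, everything after the missing tag ends up nested inside that element.

    from html.parser import HTMLParser

    VOID = {"br", "img", "meta", "link", "hr", "input"}  # no end tag expected

    class OpenTagTracker(HTMLParser):
        """Crude sketch: track open elements to spot missing end tags."""
        def __init__(self):
            super().__init__()
            self.stack = []

        def handle_starttag(self, tag, attrs):
            if tag not in VOID:
                self.stack.append((tag, self.getpos()[0]))

        def handle_endtag(self, tag):
            # Pop back to the matching start tag, if there is one.
            if any(t == tag for t, _ in self.stack):
                while self.stack and self.stack.pop()[0] != tag:
                    pass

    page = """<div id="nav">
      <p>Home | About
    <div id="content">
      <p>Article text goes here.
    </div>
    """

    tracker = OpenTagTracker()
    tracker.feed(page)
    for tag, line in tracker.stack:
        print(f"<{tag}> opened on line {line} was never closed")

Here the author forgot to close the nav div, so the parser ends up treating the entire content area as part of the navigation block, which is exactly the kind of structural misreading that can trip up a spider.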
I'm not sure how much weight Google places on validated versus unvalidated pages (run Google's own homepage through the validator and you'll find plenty of errors), but validating can't hurt and usually takes little effort.
Validating your pages just ensures that you don't have to worry about the above happening. At least you'll know all of your elements are in proper working order. For me, that's one of the better feelings in this work: I see those green checkmarks in my Developer Toolbar showing valid HTML/XHTML/CSS and I'm one happy camper.
I've also spent the last few years building a toolset that mimics the bots and the browser. We've spent an untold number of hours performing updates to deal with failed syntax in web pages. When we first launched, we were getting errors almost every day because we hadn't thought about this or that. After two years of that, I think we've finally got it. In the process, I learned a bit about how bots traverse code and what can make or break them. I've seen some pretty nasty HTML get processed by our bot, and I'm sure Google is light years ahead of us in that area.
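Just to illustrate the kind of thing I mean (this is a toy example, nothing like our actual toolset), even Python's built-in parser will happily pull links out of pretty mangled markup. That forgiveness is roughly why bad syntax often still gets crawled, right up until the one error that genuinely confuses the parser.

    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        """Toy crawler step: collect href values, even from sloppy markup."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    # Unquoted attribute, unclosed tags, stray angle bracket -- still parses.
    messy = '<p><a href=/page-one>one<a href="/page-two">two</p> < <a href=/three>three'
    extractor = LinkExtractor()
    extractor.feed(messy)
    print(extractor.links)   # ['/page-one', '/page-two', '/three']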