Forum Moderators: open
Some HTML errors are harmless, but others will cause great chunks of your pages to be skipped while the spider (or other parser) finds the next bit it can make sense of.
Different spiders will choke on different errors, as will newer/older versions of the same spider.
You could aim to develop an encyclopedic knowledge of which errors are "safe" (or even beneficial) under given conditions.
That could take years of experimenting and self-denial.
Or you could take zero risk by not deliberately inserting errors into HTML.
Personally, I've never seen the point of deliberately inserting errors into HTML -- it's a bit like deliberately inserting errors into address labels. Sure, 99% of the time the mail carrier will get it right. But that's hardly a justification for the problems caused by the other 1%.
View 1. If it renders correctly in most or all browsers, it doesn't really matter -- just as long as people can properly view your page content.
View 2. Even if it does render correctly, it's considered sloppy and not best practice. And it may break in the future when new browser versions come out that aren't as forgiving as their predecessors.
Although View 1 is practical in the sense that the page does what you want and need, it's generally frowned upon. Personally, I fall into View 2, and although not ALL of my code is perfect, I do try to make it that way.
[edited by: nanotopia at 5:41 pm (utc) on June 13, 2005]
Spiders do have error recovery routines for common situations. But the worst types of errors can be generated by missing "close quotes", doubled close quotes, missing angle brackets and small typo-like errors in the markup.
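To see how a single missing close quote can make a parser skip a chunk of markup, here is a small sketch using Python's standard `html.parser` (the markup and the `page.html` URL are made up for illustration; real spiders use their own parsers, which may recover differently):

```python
from html.parser import HTMLParser

class LinkAudit(HTMLParser):
    """Collect every href the parser actually recognizes as a link."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

def hrefs(markup):
    parser = LinkAudit()
    parser.feed(markup)
    parser.close()  # flush any buffered, half-parsed input
    return parser.links

good = '<p><a href="page.html">Widgets</a> and more</p>'
bad  = '<p><a href="page.html>Widgets</a> and more</p>'  # missing close quote

print(hrefs(good))  # the link is recognized
print(hrefs(bad))   # the mangled <a> tag is never parsed, so the link is lost
```

With this particular parser, the unterminated attribute value means the `<a>` start tag is never recognized at all, so the link simply disappears from the parsed output -- exactly the "great chunks skipped" failure mode, in miniature.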
No search engine is going to let a non-standard attribute affect your ranking; however, it's still better to learn to write to standards. The discipline of it has very positive side effects.