Some analysts believe the appointment means that the much-criticised browser will get a polish before Longhorn is released and IE's importance begins to fade.
As for handling broken web pages, life would be a lot simpler for all of us if invalid web pages broke completely rather than being interpreted differently by each browser, spider, and device--but that ship sailed a long time ago. That's one reason the W3C introduced XHTML, where any error does result in a completely broken page, since it's XML.
Of course, XHTML has major issues with backwards compatibility and a lack of new functionality. Intel can afford to throw away x86 compatibility in Itanium because the Itanium line massively outperforms Xeon and Opteron in a lot of applications, but XHTML doesn't provide us with a large boost in functionality, certainly not one large enough to justify throwing away IE compatibility. Hence, Intel's Itanium sales are booming, and XHTML is a total flop.
Outdated IE6 May Get Makeover before Longhorn
Which is a very smart move, since the current IE version in Longhorn is only 6.5, and the only "changes" are whatever XP SP2 already includes, plus some "OS integration" pieces to fit Longhorn. But the rendering engine is exactly the same. :o
As for handling broken web pages, life would be a lot simpler for all of us if invalid web pages broke completely rather than being interpreted differently by each browser, spider, and device--but that ship sailed a long time ago.
No, it wouldn't. A fundamental maxim of programming is that a program should be liberal in what it accepts and strict in what it produces. All browsers should render a valid page in a similar manner (some of the implementation is purposely left to the UA even by W3C recs), but if there are problems on the page, the browser should offer something, anything.
- personally, I often use invalid pages for testing scripts (no root element, no body element, just <pre> and some output). I appreciate that the browser always gives me something.
- everyone makes mistakes
- it is up to the developer who cares about such things to use a validator and make sure the pages are right. It is not up to MS, Mozilla or the W3C to force everyone to have valid pages. Why should the guy who just wants to put a picture of his grandson on the web need to know how to validate a page? Not failing on the first error is a GOOD thing.
- requiring every page to validate to some DOCTYPE makes those pages more accessible, but it makes the web as a whole less accessible since only geeks like us know/care what the W3C is and how to find the validator.
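That "liberal in what it accepts" behaviour is easy to see with any tolerant HTML parser. Here's a minimal sketch using Python's standard-library `html.parser`, which, like a browser, never raises on malformed markup -- it just reports whatever it can make sense of (the class name and fragment are mine, for illustration):

```python
from html.parser import HTMLParser

# A tolerant parser in the browser tradition: it never fails on
# malformed input, it simply reports the tags it recognises.
class TagLogger(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

# No doctype, no <html>, no <body>, and two unclosed tags -- wildly
# invalid HTML, but the parser still gives us something.
fragment = "<pre>some script output<b>bold"
parser = TagLogger()
parser.feed(fragment)
print(parser.tags)  # -> ['pre', 'b']
```

That is exactly the behaviour the "just want to show a photo of my grandson" crowd depends on.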
Think of it as the difference between catching things at compile time and catching them at runtime. It's a basic principle of software development.
A fundamental principle of XML is that it will break if there is a mistake: nothing will be rendered.
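By contrast, an XML parser is required to stop at the first well-formedness error rather than recover. A quick sketch with Python's standard-library `xml.etree.ElementTree` (the sample strings are mine):

```python
import xml.etree.ElementTree as ET

# Well-formed XHTML-style markup parses fine...
good = "<p>hello <b>world</b></p>"
print(ET.fromstring(good).tag)  # -> p

# ...but a single unclosed tag is fatal: the XML spec forbids
# recovery, so the whole document is rejected and nothing renders.
bad = "<p>hello <b>world</p>"
try:
    ET.fromstring(bad)
except ET.ParseError as err:
    print("parse error:", err)
```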
There are good reasons for making a language behave one way, and good reasons to make it behave the other way. So which should we use? That really depends on what we want to use the language for: a web markup language for use by the general public (everyone from kids to grandparents) should be very forgiving, whereas a web markup language for use by experienced developers should be very strict.
I think it is a mistake to allow any language to fall down the gap in between: to be partly forgiving and partly strict. A language that isn't fully one way or the other doesn't have the advantages of either.
ergophobe, I think you've missed the point. If the page just failed on the first error, you wouldn't need a validator to know there was a problem.
Nope. I understood your point. I just disagree. As Purple says (and as I said in the previous post), the web should be an open environment and people should be able to get a page up with a minimum of knowledge. If a page fails totally on each error, you are demanding that everyone who creates web pages have a complete understanding of the markup language.
If you want a page to fail utterly, all you have to do is use XHTML and serve it up with the XHTML MIME type (application/xhtml+xml).
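For anyone who wants to try that strict behaviour, the usual approach is to have the server send pages as application/xhtml+xml; browsers that support it (Mozilla, for one) then use their XML parser and refuse to render a malformed page. A sketch of the Apache directive, assuming the pages use an .xhtml extension:

```apache
# Serve .xhtml files with the XML MIME type so conforming browsers
# parse them strictly and fail hard on any well-formedness error.
AddType application/xhtml+xml .xhtml
```

Note that IE itself does not handle application/xhtml+xml, which is part of the backwards-compatibility problem mentioned earlier in the thread.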
Tom
When I first started programming (C++ and Java), I remember it being pretty frustrating to work on something for hours without being able to see any results because there was a mistake somewhere in my code.
I am pretty sure not as many people would tackle web pages themselves if it were a similar environment. Or after a couple of hours of either a blank page or parse errors, they would give up.
my 2 cents
The point that I'm really trying to make, though, is somewhat different. I'll grant that if you are building a site that is conducting transactions, giving out important information and so on, you want a site that is robust. In that case you hire a developer who knows not only how to use a validator, but also how to do error checking, form validation and so on.
On the other hand, if my niece wants to put pages up for friends to see (which she was doing when she was 10 or so), all she cares about is getting the photo to show more or less. This should be as simple as possible and if the page is full of errors yet the browser can figure it out, that's great.
By analogy, if you are going to publish a book, you should "validate" against the Chicago Manual of Style or AP Style Manual if you want a professional product. The word processor, however, shouldn't fail utterly to display anything if your letter to an old friend is not up to CMS or AP standards.
Tom