Herenvardo - 10:31 pm on Nov 3, 2008 (gmt 0)
...I suspect this isn't going to break the web
I already admitted it: the <i> element redefinition was a bad example. Still, it will alter how some older documents are handled by some UAs, which falls outside the definition of "backwards compatibility". Also, I gave you a more practical example in my previous post, which I'm not going to repeat, and you simply ignored it; so I don't see any point in discussing this one further.
Ok. So you see well-formedness as an issue. Actually, it's a feature: a well-formed page means the browser can unambiguously figure out its structure and semantics from the markup (which is needed to ensure that CSS is applied correctly and that scripts run properly). If you prefer to write non-well-formed documents and let browsers guess how to deal with them, then don't bother with standards at all: just toss in whatever elements you want, and if the browser can handle it, it will.
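To make that concrete, here is a made-up fragment (not from anyone's actual page) showing the kind of ambiguity well-formedness rules out:

    <!-- Not well-formed: <b> and <i> overlap, so the tree is ambiguous.
         Does </b> close inside the <i>, or does <i> reach past it?
         Tag-soup parsers each pick their own answer, and CSS or scripts
         written against one answer break against the other. -->
    <p>Some <b>bold and <i>italic</b> text</i> here.</p>

    <!-- Well-formed: the elements nest properly, so every XML parser
         builds exactly the same tree. -->
    <p>Some <b>bold and <i>italic</i></b><i> text</i> here.</p>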
In other words: it is completely irrelevant how much flexibility and/or extensibility a standard (like XML and its derivatives) offers if the document author doesn't care about standards to begin with.
But what about namespace clashes, e.g. <input> in XForms and XHTML1 Forms?
In the event that, for some obscure reason, you really needed to use both XForms and "classic" forms in the same document :o XML provides several mechanisms to deal with namespace clashes (it was designed with extensibility in mind, after all, so it had to deal with this somehow ;)). If you give me an example of that obscure reason, I can probably suggest the best approach to resolve the clashes for your specific case; but in the general case, an xmlns attribute on each <form> should be enough (unless you really want to mix both form models within a single form, which I wouldn't advise, but which would still be syntactically doable).
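Just to sketch one of those mechanisms (a rough, made-up example: the action URL and field names are invented, and the XForms model that would normally sit in the document head is left out), declaring a prefix for the XForms namespace keeps the two <input> vocabularies apart:

    <html xmlns="http://www.w3.org/1999/xhtml"
          xmlns:xf="http://www.w3.org/2002/xforms">
      <body>
        <!-- Classic XHTML form: unprefixed names resolve to the XHTML namespace -->
        <form action="/submit" method="post">
          <input type="text" name="user" />
        </form>

        <!-- XForms control: the xf: prefix makes this input unambiguously XForms' -->
        <xf:input ref="user">
          <xf:label>User name</xf:label>
        </xf:input>
      </body>
    </html>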
The problem for XHTML is that extensibility doesn't require "fail on error" as a cost...
Actually, the cost wasn't that big. Currently, almost no browser (if any at all) is 100% compliant with the standards it claims to support, so they could simply have decided to skip that part. I used to use a Firefox add-on that achieved exactly this: when an XHTML page failed with an XML parse error, the add-on re-fed it to the parser as "text/html". Any browser could have taken a similar approach: if the XML parsing fails, just show something in the status bar like "This page cannot be rendered according to standards", and let the user get a fuller explanation by some means (for example, by clicking on that notice). Why all browsers decided to strictly enforce the most controversial characteristic of XML, while ignoring or getting wrong most of the other standards, is beyond me.