Forum Moderators: open
Suffice to say, I'm confused. I don't know enough about them to make an intelligent choice myself. I'd sure like to hear what other people have to say in defense of (or against) each method!
Thanks for the tip! Very handy! :))
Jordan
========
Here is a bookmarklet / favlet to show the current mode based on Nick's tip:
javascript:(function(){var mode=document.compatMode,m;
if(mode){if(mode=='BackCompat')m='Quirks';
else if(mode=='CSS1Compat')m='Standards Compliance';
else m='an unknown';alert('The document is being rendered in '+m+' Mode.');}})();
(line breaks added to stop sidescroll)
I'm not sure what you mean by 'buggy-html mode'. I understood that XHTML 1.0 Strict, served as text/html, triggered standards-compliance mode.
AIUI serving XHTML as text/xml is problematic, so it appears most folks serve XHTML as text/html which triggers tag-soup mode in IE. I haven't really looked into this as I can see no benefit in moving from good HTML + CSS to XHTML + CSS anyway ;)
See: [hixie.ch...]
MonkeeSage:
"W3C's new XHTML standards have no discernible purpose/advantage..."
I see at least these:
- Gets rid of attributes and elements that have been deprecated by advances in other standards (e.g., CSS, DOM)
...which is fairly pointless since browsers will probably have to support tag soup for all time due to the billions of web pages already out there.
- Forces well-formedness (which ensures compatibility / portability with other well-formed standards)
And the benefit of this is? I know it means we can do stuff like XSLT for different browsers but we can already do all that kind of stuff with simple programming.
- Enables the possibility for faster rendering (even if it is not yet taken advantage of by any particular browser)
Rendering time is really not a bottleneck in the RW - connection speed is. XHTML requires slightly more markup ;)
- Modularizes document entities based on specific content types (XHTML 1.1)
And this helps us and users how? I don't mean to be harsh, but as soon as we all recognise that XHTML is a waste of time the sooner we can get around to starting to innovate again. Innovation does not generally mean doing the same thing over and over again in more complicated ways. It means doing new things :)
Given a choice between folks all over the world being able to publish their own web pages, or only the programming priesthood being allowed/capable of publishing, I would go for the former every time. I remember when techie-generated content dominated the web way back in the early-to-mid 90s. It was mind-blowingly great - but nowhere near as fantastic as the (more inclusive) web is now! :)
In the real world we don't continue making Model Ts just because some people couldn't handle a Corvette. We continue to advance the standards and try to educate the unlearned (or else simply take away their driving privileges if they refuse to learn)...why would this be any different?
Personally, I want a browser that is as small and fast as possible, and the only way I see that happening is if the standards become more and more strict, precise and enforced.
Think of the rendering time that is consumed presently just because the browser cannot assume that there will always be a </p> when it finds a <p>, &c. Just view a large page: connection speed isn't the problem, it's the 10 or 20 seconds (after your cable modem has long since downloaded the page) during which about 6 lines at a time slowly appear on the screen as the document is parsed. A well-formed-markup parser like Xerces, by contrast, can get through 250K of markup in a couple of seconds.
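To put the point in code (a toy sketch, nothing like a real browser engine; the function name isWellFormed and the regex are my own illustration): when every open tag is guaranteed a matching close, one linear pass with a stack is all a parser needs to verify structure, with no error-recovery guesswork:

```javascript
// Toy well-formedness check: assumes every element is explicitly
// closed (as XHTML requires). One pass, one stack -- no heuristics
// for recovering from missing </p> tags like a tag-soup parser needs.
function isWellFormed(markup) {
  const stack = [];
  // Matches open tags, close tags, and self-closing tags like <br />.
  const tagRe = /<(\/?)([a-zA-Z][a-zA-Z0-9]*)[^>]*?(\/?)>/g;
  let m;
  while ((m = tagRe.exec(markup)) !== null) {
    const [, closing, name, selfClosing] = m;
    if (selfClosing) continue;                    // <br /> etc.
    if (closing) {
      if (stack.pop() !== name) return false;     // mismatched close tag
    } else {
      stack.push(name);                           // remember open tag
    }
  }
  return stack.length === 0;                      // everything closed?
}
```

A tag-soup document like `<p><em>hi</p></em>` fails immediately; a strict parser can simply reject it instead of guessing what the author meant.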
But it's just too strict (is the complaint). But it's strict because of the speed and power (just like driving laws are strict). And if they can't handle actually making corrections to a page instead of just making it in twenty minutes and that's that...well, let them use a text file or something. They can serve text files on the web too. Or better yet just give them a weblog and tell them to pipe down, hehe. ;)
'That's my story and I'm sticking to it!'
Of course, these are just my views and are only representative of a small number of people (who make residence in my head and talk to me when no one else is looking). :)
Jordan
Ps. "...most folks serve XHTML as text/html which triggers tag-soup mode in IE."
Using the favlet I made from Nick's tip about the compatMode attribute, I can't find any doctype, HTML or XHTML that triggers 'CSS1Compat' mode in IE, all that I've tried so far are showing 'BackCompat'. IE is just a stinky old browser, not an argument against progressing the standards... ;P
========
Added:
"BTW: I emailed Karl Dubost, QA Manager at the W3C with a link to this discussion but he said he "cannot reach the forum" - don't know what that means, maybe he's using an XHTML2 user agent?!"
LOL! That, or IE. :D
... I can't find any doctype, HTML or XHTML that triggers 'CSS1Compat' mode in IE, all that I've tried so far are showing 'BackCompat'.
The following DOCTYPE triggers CSS1Compat in Windows IE6 and (I'm told) Mac IE5+. (Assuming the XML prolog is omitted.)
Nick
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
I found out the problem I was having. You can't have *anything* before the doctype declaration or it goes Quirks...I had just commented out the XML declaration. Once I totally removed it, several DTDs that I had previously tested went into Standards Compliance mode, XHTML 1.0 Strict among them. Thanks for the pointers. :)
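For anyone else hitting the same thing, here's a minimal sketch of the shape that worked for me (based only on what's reported in this thread, so test it yourself): the DOCTYPE is the very first thing in the file, with no XML declaration and no comment before it.

```html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Standards-mode test</title></head>
  <body><p>Check the result with the compatMode favlet above.</p></body>
</html>
```

Put even a comment above that DOCTYPE and, at least in my tests, IE6 drops back to 'BackCompat'.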
mattur:
Well, not webcops, but a DMV at least. But if there are webcops I'm sure the uniforms will be take-offs on the RCMP uniforms, hehehe. Red will look simply smashing on you. Anyhow, all of my head voices have come to a consensus, even if the rest of the world hasn't...now if I can just get them to hear the voices too...they are saying 'if you build it [XHTML2], they will code...'*, hehe. ;p
* - A chocolate candy to anyone who knows what this reference is to.
Jordan
After years of waiting for people to desert their outmoded browsers - after all XHTML was introduced in 1999! - I decided to bite the bullet about two weeks ago and start converting my pages from HTML 4.01 tag soup to validated XHTML 1.0.
To my surprise, my soup wasn't nearly so soupy as I expected - from the beginning I have always nested tags properly and (through personal aesthetic preference) always written tags in lowercase. I have also been using <strong> and <em> instead of <b> and <i> since about June 1997 (about 2 months after I started learning HTML 3.2!) so I have long been clear on the point about markup describing document structure rather than appearance.
So now what?! I have gone from a page that virtually validated in HTML 4.01 to a page that perfectly validates in XHTML 1.0 but is:
a) longer
b) apparently, according to this discussion and what I've read elsewhere, one that triggers quirks mode in IE exactly as I suspect my dodgy tag soup did before.
This doesn't strike me as progress.
At the same time as tuning into XHTML, I've switched from a table-based layout to CSS...
But CSS 1 and 2 work just as well with HTML 4.01 as they do with XHTML 1.0, don't they? (and the former wouldn't trigger quirks mode, would it?).
Three things are important to me right at this point:
1) My pages should render as quickly as possible in all browsers (IE, Opera, Firebird, Konqueror, the lot...)
2) My pages should validate
3) Whichever validated HTML spec I use, it should be able to work seamlessly (browser deficiencies notwithstanding) with CSS 1 & 2.
Can anyone - and I'm listening to MonkeeSage especially here, because idealistically I most favour his approach - tell me why I wouldn't be better off, spending five minutes with a text editor and changing the validated XHTML 1.0 transitional into validated HTML 4.01 transitional?
I'm a bit of a pipe-dreamer. In real life, I say be pragmatic. Whatever gets the job done, use that. With the present state of things, you'll probably not gain a single advantage from using XHTML, and your pages will just be longer.
In the world of ivory towers, XHTML may (arguably) be better, but down here in the real world, there isn't much to gain from it right now. mattur is right that the king has no clothes when it comes to the actual state of affairs.
Sounds like you already have good coding habits, so I really can't see anything to gain from using XHTML. So if there is anything to lose by it, I'd say don't use it. :)
Jordan
As mentioned above, it's possible that an xhtml page has marginally more markup than html, but I wouldn't expect this to have a major influence on serps.
What does this mean as far as browsers go? It means that all browsers have to approach all html/xhtml out there as potentially buggy, no matter what the doctype declaration is. Only now they actually change how they render stuff, slightly.
This validation problem isn't hypothetical; it's how almost all web pages work today, and will continue to work for the foreseeable future.
Keeping a page error free is hard, making it error free in the first place is hard, and it's guaranteed that most of the 'webdesigners' out there have no clue what this means or why it matters. Which means this issue isn't going anywhere: browsers will have to keep treating all incoming html as buggy, no matter what the doctype declaration says.
In terms of rendering, now that the boxes out there mostly run at better than 1 gigahertz, that just isn't that important; maybe you can find a minuscule difference, but if you're into fast rendering, use Opera 7. Besides, I just built a friend a box using an old 200 MHz machine, and the pages render fine. A little slow, especially with advanced css stuff, but that's because the processor is being asked to do so much more work now in terms of creating the page look.
To me it's purely a challenge to create error free xhtml, that's all it is, there is no real reason to do it.
XML of course has to be error free, but if you are transforming it with XSL or whatever, the output can be whatever you want it to be: html 4, whatever you want.
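For example (a hypothetical sketch; the <articles>, <article> and <title> element names are made up for illustration), an XSLT 1.0 stylesheet can turn strict, well-formed XML into plain HTML 4-style output on the server, so the browser never sees the XML at all:

```xml
<?xml version="1.0"?>
<!-- Illustrative only: transforms a made-up <articles> feed
     into an HTML list. method="html" tells the processor to
     emit tag-soup-friendly HTML rather than XML output. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>
  <xsl:template match="/articles">
    <ul>
      <xsl:for-each select="article">
        <li><xsl:value-of select="title"/></li>
      </xsl:for-each>
    </ul>
  </xsl:template>
</xsl:stylesheet>
```

The source stays rigorously error free; what you serve can stay lax and backwards compatible.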
Jordan
"Using XHTML/CSS for an Effective SEO Campaign"
[alistapart.com...]
"Better living through XHTML"
[alistapart.com...]
;)
Jordan
The second is Zeldman's usual xhtml is "future proof"/"more semantic" nonsense. He actually suggests re-visiting your old sites just to upgrade them to xhtml! Let's just say you can tell he's a graphic designer, not a techie ;)
So that's it then. We're all agreed that xhtml is completely pointless ;)
The first article contradicts itself by advocating xhtml then emphasising "maintaining a good content to code ratio". Cross out xhtml and substitute html throughout the article.
Does it? Maybe the idea that XHTML means a higher code to content ratio is just mattur's "usual xhtml is useless nonsense." ;)))
*MonkeeSage runs away before he gets something thrown at him*
I know that XHTML can be a bit more code, but if you know how to code HTML correctly, there is no reason it must be. Where is code bloat going to be introduced? Closing tags. But, if you know how to code well-formed HTML, you have the same closing tags. And the XHTML version of a well-formed HTML page can actually be smaller since it deprecates some tags and attributes (doesn't mean it will be -- YBMV -- Your Bloat May Vary ;p ).
Also in the second article it's not just Jeff Zeldman, it's also the New York Public Libraries (NYPL), who have 5 or 6 pages dedicated to XHTML advocacy (and their list of benefits is almost exactly the same as the one I compiled earlier in this thread).
Until browser support gets better, I'm not a real strong advocate of converting your well-formed HTML pages to XHTML in the real world (as my post to ronin indicates), but on the theoretical-intellectual side of things, I like to stir up the dust and see where things stand when it all settles. ;)
Jordan
AIUI Zeldman did the NYPL pages - hence the bogus nature of the xhtml "benefits" I cited above in #msg6. The pages were created before the xhtml2 draft was released. Zeldman doesn't understand techie stuff. Don't believe the hype...
I would love a new version of xhtml or html, but imho it has to: be easy to use, backwards compatible, do new things (i.e. there are clear benefits to encourage /normal/ people to upgrade their browsers) and <controversial>finally abandon the semantic web pipe dream</controversial>.
xhtml2 seems doomed because it isn't backwards compatible. In other words it will require "this site requires Mozilla 2.0, please upgrade your browser" web-blocks. To me this doesn't seem like progress. No current browser will be able to access an xhtml2 site, including those on phones, pdas, webtv etc that cannot be upgraded.
Well, I can't argue with you about the semantic web. I'm a pipe dreamer, but I'm not crazy. ;) Berners-Lee has seemingly made the mistake of following the earlier Russell and Wittgenstein. Instead of using natural language as a model, he wants to base markup on truth tables, sorites and rigorous analysis. IMHO, he is going to end up with a language in which every element dies the death of a thousand qualifications. As someone (I forget who) said, 'When a man speaks, alas, it is no longer the man who speaks.' The problem of abstract and particular is quite a nasty one, especially as it relates to conlangs.
Anyway, now that we have thoroughly confused everyone, I guess we'll really just have to wait and see what happens when XHTML2 hits the streets, and see what direction W3C decides to move in after that. I have a feeling that they will be moving back toward markup that describes content in its natural relations, rather than markup that describes unique ideas or families of ideas. Form follows function in markup, not the other way 'round.
Jordan