So practically, what does this mean? From what I've learned so far it looks like the jump from transitional 4.0 to strict 4.0 is a much bigger jump than going from strict 4.0 to XHTML.
Would it make more sense to go straight to XHTML? Looks like it's just a matter of writing strict HTML 4 and adding on a small handful of extra rules. Things like closing every element, for instance -- writing <br></br> or <br /> instead of <br> seems a little odd, but it's no biggie. Well, maybe I'll need to update my editor first.
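That "small handful of extra rules" can be sketched in a before/after fragment like this (a minimal illustration; the attribute values are made up):

```html
<!-- HTML 4.01: tolerated by browsers -->
<P>Hello<BR>
<IMG SRC="logo.gif" ALT="logo">

<!-- XHTML 1.0: lowercase tags, quoted attributes, every element closed -->
<p>Hello<br />
<img src="logo.gif" alt="logo" /></p>
```

The `<br />` form (with the space before the slash) is the usual compromise, since older browsers tend to cope with it better than `<br></br>`.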
By the way, I found a very simple tutorial [w3schools.com]. I can use the W3C for looking things up, but I have a lot of trouble learning something new there.
So what is everyone doing, or planning to do? Strict XHTML requires CSS, so external stylesheets seem to be the first order of business. From there, it seems a relatively simple jump to strict XHTML 1.0. Am I right?
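That first order of business can be as small as one line per page (a minimal sketch; `site.css` is a made-up filename):

```html
<head>
  <title>Example page</title>
  <!-- all presentation rules move out of the markup into one external file -->
  <link rel="stylesheet" type="text/css" href="site.css" />
</head>
```

Once every page points at the same stylesheet, a layout change is one file edit instead of a site-wide search and replace.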
Who thinks like that?
er... TBL, w3c, Zeldman, ...
And to be clear: XML is a great technology. But that does not justify the xhtml concept. Web pages are for people. SOAP interfaces are for machines ;)
A colleague of mine used to rant about using w3c valid html 2 (I think), back in the mid 90s. He always went on about how when standards-compliant browsers came through his pages would work and mine wouldn't. He put those little w3c html 2 valid tick buttons on his sites. He traded off working well back then for working well in the future. His pages are still all over the place in modern browsers. Mine still work without any problems. So I have to admit I am probably slightly over-sceptical about "future compatibility" claims... ;)
Our standard front end is HTML 4.01 transition with CSS1. We provide different stylesheets for different browsers, sometimes modified based upon cookie, etc., we do a LOT in CSS. The PHP scripts generate those pages.
Our administrative tool's 1.x version was HTML + Flash 5. The Flash 5 menu was drawn on the fly from an XML output (from the PHP 4.0 toolkit). Our new admin tool, 2.x, is Java2 1.4 on the client, but the server portion is still in PHP. The Admin program is an XML application written in PHP 4.2.
Without XML, we'd lose our mind. For a new project, we're contemplating exporting portions as web services through SOAP. XML is AWESOME.
However, for web pages, XML/XHTML is pointless. As for well-formedness, I do it in my own code, but I need to smack some of our developers around. Those of us who learned HTML years ago need to get out of the "tags should be in all-caps" habit, since that convention changed. Well-formed XHTML is easier for a browser to parse than dodgy HTML, which is a good thing: you are likely to have fewer problems with minor browser revisions.
However, don't tell us that XHTML is the future, it isn't. The innovations will all be on the server-side until NN4 dies (never?!?).
I worry less about the perfect appearance on NN4, but my UI NEEDS to work on NN4. Those of you who write off 5%-8% of your users? Well, that would cost us almost $100k/year; we can't do that.
Do what works for you, but writing off 8% of your users because of your standards-compliant Jihad is not a decision that I'll join. I work on robot-friendly sites because search engines help me make money. Throwing it away to be "compliant" is counter-productive.
The idea that XHTML forces you to abandon old browsers is plain old FUD.
I am one of those with 8% NS4 visitors, and I serve them validating XHTML 1.0 without a problem. There are a few small visual differences, but those are negligible. My visitors come for the information I offer, and don't really care whether the menu column uses a line spacing of 1.4 or 1.5 (if they even notice).
Dismissing XHTML 1.0 after your own site validates to HTML 4.01 is just silly. The two are essentially the same and can be converted back and forth automatically in most cases.
No current browser really supports XHTML 2.0, so there's no reason to get worked up about it yet. Yes, the development of new browser features is slowing down, and this is a good thing. But defining now the standards that future browsers are supposed to support in three or five years is definitely better than the situation we had in the past. Do you really want each vendor to invent their own "line" tag?
with a reformed-smoker's anti-tobacco-like zeal, are shouting "pages should work for everyone and tags should be closed"
So people are really just losing their minds moving towards a mirage, rather than trying to make it so your information can have a greater audience by separating structure from presentation.
XHTML/XML are kinda pointless.
XHTML forces you to use the proper "grammar" of HTML. If we could get everyone to use it, we might finally be able to get rid of all those maddening IE vs. Netscape compatibility issues.
Think of it this way:
Imagine how hard it must be for the folks at Microsoft, Netscape and Opera to develop their browsers knowing that 98% of the coders out there are using invalid HTML. They need to take into account the fact that some bonehead might put <b><i>Wow!</b></i> when they really mean <b><i>Wow!</i></b>. They've had to go out of their way to make sure that broken code gets displayed okay.
If everyone wrote XHTML, that kind of mis-nesting simply wouldn't be valid. Think about how much more reliable and fast browsers could be.
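To make the point concrete, here is a small sketch using Python's strict XML parser as a stand-in for how an XHTML parser behaves: the properly nested markup parses, while the "bonehead" version is rejected outright instead of the parser having to guess what was meant.

```python
import xml.etree.ElementTree as ET

well_formed = "<p><b><i>Wow!</i></b></p>"  # properly nested
mis_nested = "<p><b><i>Wow!</b></i></p>"   # closed in the wrong order

# A strict XML parser accepts the well-formed fragment without complaint...
ET.fromstring(well_formed)

# ...and rejects the mis-nested one immediately, rather than silently
# repairing it the way a tag-soup HTML browser must.
try:
    ET.fromstring(mis_nested)
except ET.ParseError as err:
    print("rejected:", err)
```

A browser that only had to handle input like the first fragment could skip all of the error-recovery machinery it currently carries for the second.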
I think this dispute about XHTML is really caused by the fact that the purity of coding one can envision with XHTML and CSS is more promise than reality at this point.
Well, yes, that's true -- to a point. For most web sites you can probably do everything you need with XHTML and CSS today. Support for advanced functionality is spotty (and even a few "core" things in NN4), but unless we developers keep pushing those limits, what reason will Microsoft and Netscape have to add more support?
To support your point you add "and if Amazon used xhtml then everyone could programmatically steal all their content really easily!"
Maybe I'm a bit dense, but how does XHTML make stealing content really easy? XHTML tags look just like well written HTML tags. There's no context like you have with XML. How would parsing through all of that be any easier than parsing through standard HTML?
BTW, this is a great topic! Way to get everyone involved!
Actually, that's exactly what Amazon wants you to do. They have even gone the extra mile to provide a pure XML feed [associates.amazon.com], in order to make it especially easy and painless for you to "steal their content".
There goes another "argument"... ;)
Perhaps you missed my post earlier about building a valid XHTML Transitional site that both functions and looks fine in NN4?
Saying that XHTML doesn't work in NN4 is just not true. You should quit trumpeting it as though it were.
and on and on...
Aside from the personal satisfaction it gave me it also gives me peace of mind knowing that my pages should work well and look nice regardless of what browser they're using.
If you're looking for some more tangible benefit, I anticipate having to spend less time dealing with website issues because now I am confident that whatever the problem is it can't be my code so I can just tell the complainer that it must be their browser and be done with it. :)
The separation of content from display is a meaningless point.
The content is separated, it's in the database.
I'm a novice and know nothing of databases, but it seems to me that content can only be separate. No matter the vehicle, if it's different, it can't be housed or altered globally.
Separating display from content has to be a step forward for any site with similar pages.
Regarding backward compatibility, as a novice I've found it "relatively" simple to create XHTML 1.0 Strict pages that look and work fine in 4x browsers, using CSS to alter layout. Just looking at what was used to create pages in the past and what can be done now was enough to convince me, not to mention the relative assurance of forward compatibility.
how does XHTML make stealing content really easy
xhtml is html reformulated as xml. xml and hence xhtml can be easily retrieved and processed using an xml parser such as MS' msxml4.0. Or any other parser. You can do the same with plain old html (screen scraping), xhtml just makes it a bit easier.
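For example, here is a minimal sketch (in Python, against a made-up product listing) of the kind of processing an XML parser makes trivial once pages are well-formed XHTML, compared with screen-scraping tag soup:

```python
import xml.etree.ElementTree as ET

# A hypothetical product page marked up as well-formed XHTML.
page = """<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <ul id="products">
      <li>Widget</li>
      <li>Gadget</li>
    </ul>
  </body>
</html>"""

# XHTML elements live in the XHTML namespace, so queries must name it.
NS = {"x": "http://www.w3.org/1999/xhtml"}

root = ET.fromstring(page)
items = [li.text for li in root.findall(".//x:li", NS)]
print(items)  # ['Widget', 'Gadget']
```

No regular expressions, no guessing about unclosed tags: any off-the-shelf XML parser (msxml, expat, whatever) can pull structured data straight out of the page.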
That's why the w3c have put an "x" in front of "html". The idea is that the semantic web will have all these machines semi-autonomously pulling x(ht)ml files and processing them without human intervention, using the semantic hints in the markup. That's why the limitations inherent in shared meta information spaces are relevant. That's why the w3c have reformulated html as xhtml - xhtml is the "bridge" between html and the proposed semantic web's xml.
They have even gone the extra mile to provide a pure XML feed
That's right. As I said above, web pages for humans, SOAP for machines. We don't need xhtml. SOAP/xml-rpc are ideal solutions for this kind of thing.
I s'pose if you are unfamiliar with xml then xhtml is a straightforward introduction. If you are a developer then you're probably already familiar with xml, because it is everywhere - already.
I think the w3c are blinkered - AIUI they're attempting to retrofit the existing dominant technology for new purposes. I want to see the Next Big Thing that replaces the web. It'll probably have its own protocol, and the clients will still support/subsume http, ftp, gopher etc in the same way that browsers subsumed (sort of) gopher and ftp. But it'll be something new. If you're struggling to get folks to upgrade their browser, why not instead get them to install a new client that does something new when they start it?
I'm doing my bit to be The One who thinks up the Next Big Thing: I've assembled some resources including several PCs, a LAN, an internet connection, literally dozens of O'Reilly books, a copy of How Buildings Learn, The Matrix (Collector's Edition) on dvd and a pretty-damn-big widescreen Sony telly. I'll keep you all updated on my progress. So far, all I've done is watch The Matrix. 137 times. So far. But I'm hoping for a major breakthrough any day now... in the meantime my money's on Dave Winer coming up with something ;)
You're either part of the solution (and with the standards) or ... part of the problem (and that is why browsers still have their quirks modes).
Browsers' quirks modes exist, and will remain, for badly coded sites ...
This is a battle between coders who believe that, realistically, standards won't fit their whole target audience (because of old browsers and differing implementations of the standards) ... and coders who believe in the standards even though, in reality, 100% compliance is not achievable -- which is what I personally believe.
Well, I am one of those coders (dreamers) who believe in the standards ... why?!?
Because it is better to have a common goal and hit it partially than to have no goal at all and code for a specific browser.
And before I forget ... YES, it's time for XHTML. It is not tomorrow; we do not need to wait for the browsers to catch up with the code ... they know their faults and they should be working on them!