Forum Moderators: open
wow....the table says it all.
Google better not be determining ranking with code validation [webmasterworld.com] after looking at that ;)
Fair enough SE's should have ethics, but hypocrisy will never be a good line to spin :)
I complained to the webmaster at Zeldman's ALA that the page I was reading (one on Doctypes) didn't validate and had a large number of errors.
His first response was that there were no errors on ALA pages because they were written by a world-famous expert. Then, when I sent him a copy of the validation results, he said the errors were related to that specific page and were "a bunch of harmless, mostly meaningless warnings."
I was disillusioned!
All I see is a small header identifying the US version, and a search box -- well, that's really all I usually want to see, but you can download a search box with very little code!
I dropped them a note. They need to do better than this.
Remaining problems at DMOZ mostly involve <LI> tags on pages that say "This category needs an editor" and some unescaped ampersands in a few site URLs. These will probably be fixed in due course, and are trivial compared to previous validation problems. Some pages have not yet been regenerated; that process will take a few days.
How's this message? Written and sent using the new O2 XDA unit that I am trying out. Niice.
[edited by: g1smd at 3:51 pm (utc) on July 17, 2002]
AFAIK, that is the first search engine to be in compliance with HTML standards since Yahoo was in 1996.
I'm not going to challenge that. :)
Validation isn't everything, though. I'd be pretty happy with sites that sport so-called well-formed HTML (that is, no spaghetti code with merging of elements).
Because there are lots of tools and robots that use ODP it makes sense to aim at valid HTML. That makes writing tools and robots simpler than having to support some tailor-made parsing.
Any directory and search engine that wants to keep up by offering web services and XML-based features will have to think validating code.
Valid HTML is also quite useful if you're using CSS.
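To illustrate the point about tools and robots: here's a minimal sketch (the snippet and URL are hypothetical, not ODP's actual markup) of why valid HTML makes tool-writing simpler -- Python's stdlib parser can walk well-formed markup directly, with no tailor-made cleanup pass.

```python
# Hypothetical example: extracting links from a valid, well-formed
# directory listing using only the standard-library HTML parser.
from html.parser import HTMLParser


class LinkCollector(HTMLParser):
    """Collects href attribute values from <a> tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)


# Note the properly escaped &amp; in the href -- exactly what the
# validator demands of ODP pages.
snippet = '<ul><li><a href="http://example.com/?a=1&amp;b=2">Example</a></li></ul>'
collector = LinkCollector()
collector.feed(snippet)
print(collector.links)  # the parser decodes &amp; back to a plain &
```

With spaghetti markup, a robot author has to write (and maintain) custom error-recovery code before getting to this point; with valid markup, the generic parser just works.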
It is reported that unescaped ampersands in URLs on the ODP should now all be fixed and properly escaped. The ODP should now be error-free on public category pages (though there may be the odd '&' in a site title still to find and correct). It may take a while for changes to propagate through the whole directory, as it takes a few days for all pages to be regenerated.
There are some known HTML problems in the new guidelines pages, but these are still being edited for content.
ummm yeeeeah I'm going to have to disagree with their site design. (My best impression of Office Space movie)
While Overture showed up fine in my Opera, the use of Flash really sucks and does not even work. I even gave it a shot on a 56k connection, bwaaaahahaha!
Okay - forgive my rude comments. Wonderful site! Just wonderful... ummm
eboda
I'm really surprised that others mentioned on the list have not responded. This thread has been referenced in some high profile areas on the net and you would think that at least one other besides dmoz would take the steps to validate.
Hats off to dmoz for an excellent move towards promoting W3C validation. I really think this is an historical event. Could we get dmoz to sport the W3C validation icon? Might as well rub it in for the others. That might just do it and cause them to go over the edge! ;)
In your post above you stated:
"This thread has been referenced in some high profile areas on the net..."
I'm curious -- how do you know this? Is it because you frequent the referred-to high-profile areas... or is there a method of checking to see where a thread is mentioned or referred to on the net? (That's what I'm wondering about.)
Thanks,
Louis
You can bet that every major news source relative to our industry has one of Brett's rss feeds. When Brett develops his charts, people listen. Heck, Google execs probably have this thread bookmarked. dmoz now has a copy of it framed in their corporate offices. MSN probably has a copy of it on a dartboard.
It would be nice to know just how popular this topic has been since it was first posted. Hey Brett, is that top secret information?
[lists.w3.org...]
It's the European Lycos sites, except for lycos.de.
If you try to validate them now, there are a bunch of errors, (including the & problem, alt tags missing, etc.).
If you look at www.lycos.fr for instance, it's what was described in the link (compare it in NN4, then in MSIE or Gecko), but now it doesn't validate AT ALL. In other words, they started to do it right, but now the pages are getting worse and worse (as far as validation is concerned).
Most of the Lycos Europe sites will shortly be moving to a new design, which will validate as XHTML 1.0 Transitional and use CSS for layout. Netscape 4 users will get a plain-text version, with no formatting.
Extremely interesting in fact! I'm certain they will get their "growing pains" sorted out... If they need any help, or run into any tough problems, they can just post here. ;)
As quoted from webstandards.org...
> We would like to think that Web developers and journalists have good intentions. But like the rest of us, sometimes they just miss the point, for various reasons. Perhaps it will require a wake up call of the sort that AOL is sure to spring on those of us who haven't been paying attention; when a browser that supports standards and discourages Web developer laziness and laxity becomes the default for AOL's millions of consumers, maybe we'll hear a different tune. It's up to you.
Any directory and search engine that wants to keep up by offering web services and XML-based features will have to think validating code.
Not so. Google offers their SOAP API, but their web pages don't validate.
pageoneresults:
But we do all know by now that validation <> cross-browser compatibility, don't we...? I've got pages/sites that haven't changed since the early 90s -- they don't validate but they still work fine (*and* are accessible), even in the latest browsers. I wonder what XHTML/CSS support will be like in 8 years... will it be fixed by then? ;)
It seems to me that folks often suppose a false dichotomy: a page validates (and "therefore" works in all browsers) OR it only works in browser X. This is patently not the case...
validation results [validator.w3.org]
[edited by: Brett_Tabke at 5:07 pm (utc) on Aug. 6, 2002]
[edit reason] shortened long url [/edit]
Brett, would it be possible to get an update on the tabled results? I think this thread needs to stay alive and not be buried on the board. Many still do not realize the importance of this, and I for one am doing everything I can to promote Web Standards.
It's pretty sad that the first site I've seen in a long while that has problems displaying in Opera is from a MAJOR search engine organization. What's even worse, is I've seen similar designs that work just fine when rendered in Opera. Sad... very sad.
See: XHTML settings were even worse [validator.w3.org].
OK Querystring problem
"http://somelink.asp?a=one&b=two"
I've substituted the & with &amp;, and although it's &amp; that's showing in my source code,
The validator is showing:
the unescaped &
with this warning:
Error: unknown entity "b"
followed by a second error pointing at the = sign:
^Error: reference not terminated by refc delimiter
I do understand the error, but I cannot understand why the validator is not picking up the character code from the source.
I've tried not putting in &amp; and just putting in the plain & -- still no joy?
anybody know how to validate this one?
Suzy
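For anyone hitting the same wall: here's a small sketch of what the validator wants (the URL is hypothetical, echoing the earlier example). The raw querystring separator & must appear as &amp; in the HTML source; browsers decode it back to & before making the request, so the link still works.

```python
# Escaping a querystring URL for use in an href attribute, and showing
# that the browser-side decode recovers the original URL.
from html import escape, unescape

raw_url = "somelink.asp?a=one&b=two"

# What belongs in the page source: the & becomes &amp;
in_source = escape(raw_url)
print(in_source)

# What the browser actually requests after decoding the entity:
assert unescape(in_source) == raw_url
```

The validator's "unknown entity" complaint comes from the parser seeing a bare & and trying to read what follows (here, "b") as an entity name -- which is why the fix has to happen in every file that emits the link, included pages too.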
Forgot to change the same links on an included page!
but now I'm getting a completely different error.. bahhhh!
The ASP page now appears perfectly (running on my server), but when I attempt to validate it, it looks like a completely different source is being used -- well, half of it...
It's as if the validator is validating an exception error, yet this exception error is not appearing on my page.
This refers to the second half of the link mentioned in the previous post (the bit after the &). I'm using request.querystring to get the value from the second pair. I'm obviously getting it, as my pages are displaying, and I've checked using a response.write statement, but the validator is not getting it?
Is there anything I should know, or is this a parsing problem? And/or can I still use the Valid icons?
getting annoyed now!
Suzy
Brett. Thanks for the New Stats at: [webmasterworld.com...] .
It is only the ODP (DMOZ) that has code that validates on results pages. It's amazing that no other directory or search engine has managed to correct their errors yet!
From discussions in some other forum [htmlforums.com], it appears that [zapmeta.com...] is now also aware of their errors, and the need to validate the code.
Keep an eye on their validation results [validator.w3.org] for the fixed version to appear.
Not so. Google offers their SOAP API, but their web pages don't validate.
Read my statement one more time, please. I wrote that they have to think validating code, not that their web pages needed to validate to get such web services going.
Regarding the chart [webmasterworld.com], both Dmoz and AllTheWeb have problems when the SERP contains URLs with stuff like ampersands. When you have to fix stuff like that for some XML-based web service you may as well use correct URLs on the web pages as well. ;)