| 8:27 pm on Jan 3, 2002 (gmt 0)|
>anyone have any ideas how spiders will get round this?
XSL/XSLT allows you to 'pour' -- for lack of a better word -- your XML-tagged data into an HTML template. I picture it sort of like a CSS sheet for XML tags: you specify where in the HTML template each XML element's content goes.
I've only played with it, but I'll try to fetch an expert. I am sure someone can explain it better than I can.
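Roughly, the idea looks like this. First a data document with made-up tags (the element names here are just illustrations, not any standard vocabulary):

```xml
<?xml version="1.0"?>
<product>
  <name>Widget</name>
  <price>9.99</price>
</product>
```

Then a stylesheet that pours that data into an HTML shell:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Match the root element and wrap its data in HTML -->
  <xsl:template match="/product">
    <html>
      <body>
        <h1><xsl:value-of select="name"/></h1>
        <p>Price: <xsl:value-of select="price"/></p>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

The content lives in the first file as pure data; the second file decides how it renders as HTML.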
| 7:29 pm on Jan 4, 2002 (gmt 0)|
I don't think there will be a widespread takeover anytime soon; HTML is far too entrenched, and XML merely builds on that structure.
| 7:34 pm on Jan 4, 2002 (gmt 0)|
I have definitely noticed a lot less XML hype recently.
| 12:58 am on Jan 9, 2002 (gmt 0)|
Most commercial engines record XML to the extent that it exists on the Web. I've got an XHTML site indexed and positioning as one would expect, or even better. I think XML is an SEO killer app because you can mark up pure content and leave the formatting to the DTD and XSL (for standalone XML) or to CSS (for XHTML). The page can look like a plain text page with very, very little code.
My suggestion is learn it!
| 10:28 am on Jan 9, 2002 (gmt 0)|
There is really no need to worry about XML or XSL, because when a spider (or any browser) visits a site built around XML and XSL, it receives HTML. Therefore, there is no change for the spiders.
I know this because I am the SEO for a company whose site is built around ASP, XML and XSL. All of these transform easily (for the spiders and browsers) into HTML. The code is transformed.
XML + XSL = HTML.
| 12:17 pm on Jan 9, 2002 (gmt 0)|
>> XML + XSL = HTML
Yup. The reason XML will get popular is that it can also be used to exchange other data between apps. The XML >> HTML thing is just one example of what can be done. Having the ability to push/pull info between the web and your back-office systems lets you do some really quite neat things.
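As a sketch of that data-exchange idea: once a feed is plain XML, any app with an XML parser can pull it in and work with the data. This uses Python's standard `xml.etree` module as a stand-in for whatever parser your platform offers, and the order feed is entirely hypothetical:

```python
import xml.etree.ElementTree as ET

# Hypothetical order feed exchanged between a web front end
# and a back-office system.
feed = """\
<orders>
  <order id="1001"><total>19.50</total></order>
  <order id="1002"><total>7.25</total></order>
</orders>"""

# Parse the XML and sum the <total> of every <order>.
root = ET.fromstring(feed)
grand_total = sum(float(o.findtext("total")) for o in root.iter("order"))
print(grand_total)  # prints 26.75
```

The point is that the same document that feeds your HTML templates can feed your accounting system, with no screen-scraping involved.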
| 12:54 pm on Jan 9, 2002 (gmt 0)|
Anyone recommend a good source/site to start learning XML?
| 2:50 pm on Jan 9, 2002 (gmt 0)|
>>>XML + XSL = HTML<<<
In many server-side scenarios this is absolutely true. However, everything depends on your implementation. It holds when you use an XML app to compile server-side and deliver HTML (browser-specific); many off-the-shelf products do this now. But you can also define your own XML doc, closer to what I mention above: pure text, very little code. In that case the spider does *not* record HTML but XML. I have verified this independently.
As for resources, I like IBM, WDVL and CNET.
| 3:37 pm on Jan 9, 2002 (gmt 0)|
Of course, XML+XSL=HTML is implemented purposely so pages display across all browsers, where your own doc definition may not.
| 3:43 pm on Jan 9, 2002 (gmt 0)|
>> The spider does *not* record HTML but XML. I have verified this independently.
May I ask who these people who have verified this are? I work VERY closely with a number of the major search engines and directories, and I can *assure* you that a spider will read the code that is sent to the browser, NOT the XML (unless you fail to transform it).
| 8:58 pm on Jan 9, 2002 (gmt 0)|
Yes yes. Of course.
I did not mean to imply they don't record what the browser sees. Send raw XML to a browser with Gecko or MSXML and you're serving XML. That is all I meant. I apologize -- I didn't mean to confuse.
Since XML does not work without a parser, we're forced to use XHTML at the moment, which is fully backwards compatible (in the W3C-standards sense, anyway). It works wonderfully, and search engines record it (more) easily. My guess is XML will still transform the Web -- it is only a matter of time.
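For reference, a minimal XHTML 1.0 Strict page of the kind being described -- the title and content are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Example</title></head>
  <body>
    <p>Plain content, very little markup.</p>
  </body>
</html>
```

Because it is well-formed XML, an XML parser can process it, yet old HTML browsers and spiders render it like any other HTML page.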
| 2:42 am on Jan 10, 2002 (gmt 0)|
Basically, SEO as we know it will become an even more useless task than it is right now: SEO will be more about following good practice and taking all of the right steps than about trying to "fool" search engines. Unless you're using things like cloaking, of course -- but I hope that nobody considers that when doing SEO.
What I'm talking about is the ultimate goal, down the road: when many web sites use XML (translated via XSL into HTML, of course), search engines will be able to read and understand the XML of the page, not just the HTML. And this will give search engines the ability to understand what you have written, instead of just the way you have formatted it.
When XML is ubiquitous, search engines will answer your questions in the results page rather than linking to another page that answers your question - that's a fundamental difference. And this will only happen, of course, if TBL's vision for the web is actually realized. And that doesn't look likely at present - look at the abysmal rate of CSS adoption.
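The difference described in the last post, sketched with hypothetical tags:

```xml
<!-- Presentational HTML tells an engine how text looks: -->
<b>Hamlet</b> by <i>William Shakespeare</i>

<!-- Semantic XML tells it what the text *is* (made-up tags): -->
<book>
  <title>Hamlet</title>
  <author>William Shakespeare</author>
</book>
```

A spider reading the second version could, in principle, answer "who wrote Hamlet?" directly rather than just return pages containing the words.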