Forum Moderators: mademetop

XML and SEO..

How will the takeover of HTML by XML affect SEO?

Borny

5:00 pm on Jan 3, 2002 (gmt 0)



Hi, season's greetings to all,

Does anyone have any thoughts / opinions on how the eventual takeover of HTML by XML will affect SEO processes?

My understanding of XML is very minimal; I understand that custom tags can be defined for an individual site. This, I would imagine, will cause lots of trouble for spiders. Does anyone have any ideas how spiders will get round this? It certainly seems they will have to become a LOT more advanced than they are now if they are to keep doing the job they do.

Any thoughts / opinions would be appreciated.

Many thanks,

Borny.

grnidone

8:27 pm on Jan 3, 2002 (gmt 0)



>anyone have any ideas how spiders will get round this?

Yes.

XSL/XSLT allows you to 'pour' -- for lack of a better word -- your XML-tagged data into an HTML template. I picture it sort of like a CSS sheet for XML tags: you specify which XML tags go where in an HTML template.

I've only played with it, but I'll try to fetch an expert. I am sure someone can explain it better than I can.
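To make the 'pour' idea concrete, here is a minimal sketch (the filenames, tags, and data are made up for illustration): an XML file holding pure data, and an XSLT stylesheet that drops those values into an HTML template.

```xml
<!-- products.xml: pure data in custom tags (hypothetical example) -->
<products>
  <product>
    <name>Blue Widget</name>
    <price>9.99</price>
  </product>
</products>
```

```xml
<!-- products.xsl: pours the XML data into an HTML template -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/">
    <html>
      <body>
        <!-- loop over each product element and emit HTML for it -->
        <xsl:for-each select="products/product">
          <h2><xsl:value-of select="name"/></h2>
          <p>Price: <xsl:value-of select="price"/></p>
        </xsl:for-each>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

Run the two through any XSLT 1.0 processor (server-side or in the browser) and plain HTML comes out the other end, which is what a spider would fetch.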

JamesR

7:29 pm on Jan 4, 2002 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



I don't think there will be a widespread takeover anytime soon; HTML is way too entrenched, and XML merely builds on that structure.

agerhart

7:34 pm on Jan 4, 2002 (gmt 0)

WebmasterWorld Senior Member agerhart is a WebmasterWorld Top Contributor of All Time 10+ Year Member



I have definitely noticed a lot less XML hype recently.

detlev

12:58 am on Jan 9, 2002 (gmt 0)



Most commercial engines record XML to the extent that it appears on the Web. I've got an XHTML site indexed and positioning as one would expect, or even better. I think XML is an SEO killer app because you can mark up pure content and leave the formatting to the DTD and XSL (for standalone XML) or CSS (for XHTML). A page can look like a text page with very, very little code.
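For anyone who hasn't seen one, a page along the lines detlev describes might look like this: a minimal, valid XHTML 1.0 Strict document where all presentation lives in an external CSS file (the title, body text, and stylesheet filename here are hypothetical).

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <title>Widget Guide</title>
    <!-- all formatting is delegated to the external stylesheet -->
    <link rel="stylesheet" type="text/css" href="style.css" />
  </head>
  <body>
    <h1>Widget Guide</h1>
    <p>Nearly everything in this markup is content, not formatting.</p>
  </body>
</html>
```

Because XHTML is well-formed XML that old HTML browsers can still render, a spider sees almost nothing but content.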

My suggestion is learn it!

-detlev

cfel2000

10:28 am on Jan 9, 2002 (gmt 0)

10+ Year Member



There is really no need to worry about XML or XSL, because when a spider (or any browser) visits a site built around XML and XSL, it receives HTML. Therefore, there is no change for the spiders.

I know this because I am the SEO for a company whose site is built around ASP, XML and XSL. All of it transforms easily (as far as the spiders and browsers are concerned) into HTML. The code is transformed.

XML + XSL = HTML.

TallTroll

12:17 pm on Jan 9, 2002 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



>> XML + XSL = HTML

Yup. The reason XML will get popular is that it can also be used to exchange other data between apps. The XML>>HTML thing is just one example of what can be done. Being able to push/pull info between the Web and your back-office systems lets you do some really quite neat things.

kapow

12:54 pm on Jan 9, 2002 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Can anyone recommend a good source/site to start learning XML?

detlev

2:50 pm on Jan 9, 2002 (gmt 0)



>>>XML + XSL = HTML<<<

In many server-side scenarios this is absolutely true. However, everything depends on your implementation. It holds when an XML app compiles server-side and delivers (browser-specific) HTML; many off-the-shelf products do this now. But you can also define your own XML doc that is closer to what I mention above: pure text, very little code. In that case the spider records XML, *not* HTML. I have verified this independently.
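A standalone XML document of the kind described above might associate its formatting like this, using the xml-stylesheet processing instruction so the transform happens client-side (the tag names and filename are hypothetical):

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="page.xsl"?>
<!-- An XML-aware browser fetches page.xsl and applies it itself;
     a spider fetching this URL sees these XML tags, not HTML. -->
<article>
  <headline>Widgets Explained</headline>
  <body>Pure content, almost no formatting code.</body>
</article>
```

This is the opposite of the server-side setup: the raw XML is what goes over the wire, and rendering is the client's job.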

As for resources, I like IBM, WDVL and CNET.

-d

detlev

3:37 pm on Jan 9, 2002 (gmt 0)



Of course, XML+XSL=HTML is implemented purposefully so pages display across all browsers, where your own doc definition may not.

Good luck!

-d

cfel2000

3:43 pm on Jan 9, 2002 (gmt 0)

10+ Year Member



>> The spider does *not* record HTML but XML. I have verified this independently.

May I ask who verified this? I work VERY closely with a number of the major search engines and directories, and I can *assure* you that a spider will read the code that is sent to the browser, NOT the XML (unless you fail to transform it).

detlev

8:58 pm on Jan 9, 2002 (gmt 0)



Yes yes. Of course.

I did not mean to imply they don't record what the browser sees. Serve XML to a browser with an XML parser (Gecko, or IE's MSXML) and you're serving XML. That is all I meant. I apologize; I didn't mean to confuse.

Since XML does not work without a parser, we're forced to use XHTML at the moment, which is fully backwards compatible (in the W3C-standards sense, anyway). It works wonderfully, and search engines record it (more) easily. My guess is XML will still transform the Web; it is only a matter of time.

-detlev

prowsej

2:42 am on Jan 10, 2002 (gmt 0)

10+ Year Member



Basically, SEO will become an even more useless task than it is right now. SEO will be more about following good practice and taking all of the right steps than about trying to "fool" search engines. Unless you're using things like cloaking, of course - but I hope that nobody considers that when doing SEO.

What I'm talking about is the ultimate goal, down the road: when many web sites use XML (translated via XSL into HTML, of course), search engines will be able to read and understand the XML of the page, not the HTML. And this will give search engines the ability to understand what you have written, instead of just the way you have formatted it.

When XML is ubiquitous, search engines will answer your question on the results page rather than linking to another page that answers it - that's a fundamental difference. And this will only happen, of course, if TBL's vision for the Web is actually realized. That doesn't look likely at present - look at the abysmal rate of CSS adoption.
