shtml validation.

help with error, please.

awoyo

7:11 pm on Jan 31, 2003 (gmt 0)

10+ Year Member



Howdy,

All my HTML pages validate at w3.org, but when I change the extension of index.html to .shtml the validator gives me this error...

"Sorry, I am unable to validate this document because its content type is application/octet-stream, which is not currently supported by this service."

Is there a standard "meta" type string I can use to bring the page into compliance, or does "not supported" *really* mean not supported?

What I use in the .html pages is as follows (changing the meta content type does not help)...

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title></title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">

Jim

DrDoc

7:19 pm on Jan 31, 2003 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Well, let me ask you this: are you uploading the page to the validator, or are you validating it online by URL?

If you're uploading it, why not change the extension to *.html and then change it back when you're done validating? Otherwise, I can't see why it shouldn't work if you're validating it online.

I can't say that I've run into this problem before, since I validate everything on my own computer...

awoyo

7:47 pm on Jan 31, 2003 (gmt 0)

10+ Year Member



Thanks, Doc. Actually, I have done what you said, but my next concern is a Google question: if the page doesn't validate, how does Googlebot like or dislike it?

dingman

7:58 pm on Jan 31, 2003 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



its content type is application/octet-stream

So tell the server that .shtml files are HTML! It will then tell the validator the same, and all should be well. On Apache, these directives (in httpd.conf or an .htaccess file) do it:

# serve .shtml files with a text/html content type
AddType text/html .shtml
# pass .shtml files through mod_include for SSI processing
AddHandler server-parsed .shtml
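
One hedged note to go with that: on some Apache setups, SSI processing also has to be switched on for the directory. A minimal sketch, assuming mod_include is loaded and AllowOverride permits Options in .htaccess:

# allow SSI directives to be processed in this directory
# (IncludesNOEXEC permits includes but blocks #exec, a safer default)
Options +IncludesNOEXEC

Re-run the validator afterwards; its error message reports the content type the server is sending, so it doubles as a check that the change took.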

tedster

8:06 pm on Jan 31, 2003 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Googlebot isn't a purist about valid pages. If it were, Google would not have a very big database! But some kinds of errors can be a problem, and we have no way of knowing what will or won't cause a hiccup.

Case in point: I had a page that was doing well on Google for a particular search. Then all of a sudden it vanished. No penalty, and it was still in the index. But it was not returned for that particular phrase any longer.

Turns out there was a small change in the code around that money phrase -- a class was added to a <b> tag -- and the page wasn't validated afterwards.

The situation? Somehow, the closing ">" got clipped, so all the following text looked like it was inside the tag.

<b class="new" My site's big money phrase was here.</b>

Googlebot had only one choice: ignore it. But a simple click on the validate button would have caught that.
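
For comparison, here is the tag as it should have been; a single character is the difference between a styled phrase and text that Googlebot throws away:

<b class="new">My site's big money phrase was here.</b>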

Bots are like simple browsers -- roughly version 2 or 3 -- and they only do minimal error recovery. It simply pays to validate your pages ANYTIME you make an edit. You don't know what may be a problem for them.