Forum Moderators: mademetop
Well, they are inexperienced, and the pages are incorrectly formatted, like this:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<TITLE>NPSGLOBAL -Retail Merchant Accounts Software and Services</TITLE>
Notice the DOCTYPE in the wrong place and the dual <HEAD> tags.
Will this negatively affect spidering or ranking on Google, AltaVista?
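For comparison, a hypothetical correctly ordered skeleton for that page (DOCTYPE first, a single <HEAD>, and the <TITLE> inside it) would look like this:

```html
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<HTML>
<HEAD>
<TITLE>NPSGLOBAL -Retail Merchant Accounts Software and Services</TITLE>
</HEAD>
<BODY>
...page content...
</BODY>
</HTML>
```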
I just did a test this week,
and I used the meta style tag that Google likes.
My position on Google improved significantly (#1).
So I think it definitely does affect Google if you use the wrong type or have syntax errors.
You will never be excluded for having bad HTML on your page, but it won't help you much either.
[edited by: ikbenhet1 at 9:20 pm (utc) on Aug. 31, 2002]
When I started using the validator, it really amazed me how many stupid errors there were in my code. I guess that is only natural when you are high on coffee and beyond your bedtime ;-)
"it really amazed me how many stupid errors there were in my code"
I'm with you on that one. I use Homesite and I've now developed a "validate button" reflex after I do any editing at all.
I wish that the page builders for many major sites would be required to do it as well. It's the stupid errors, copy/paste mistakes, etc., that can really do you in over time.
I'm certain that, although Googlebot can accommodate "some" errors, there are certain types of errors (like a missing ">"), and perhaps a threshold of total errors as well, that make part or all of a page impossible for them to index.
I know for a fact that Googlebot can index very sloppy code. When I
created my site I knew nothing about HTML and thus made a huge amount
of mistakes. You can't imagine how much. But as my pages looked fine
I did not even know that anything was wrong. There was no spam or
anything like that, just very sloppy code. But apparently Googlebot had no problems in indexing and I got PR6 before I even knew that there were errors.
Recently I created another smaller site. Although my code was not
as sloppy as before I made one big error, the same one on
almost every page = a missing ">". Googlebot indexed the site before
I managed to correct it. But again: the bot had no problems and the
site got PR5.
I am eternally grateful to Google for not penalizing us novices
for the coding errors we make.
"and I used the meta style tag that Google likes.
My position on Google improved significantly (#1)"
1. Is this just the standard meta tag stuff, or something special?
2. I typically use the "Clean up HTML" command in Dreamweaver. For Dreamweaver experts, does this clean up any HTML mistakes I've made?
This cleans up some obvious stuff like nested or empty tags etc., but doesn't come close to catching the stuff that a "real" page validator will.
No, you only need it to send your code to the validator, and to tell the browser what type of document is going to be served.
I would update the doctype statement from <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> to the newer version <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> as the earlier one is obsolete.
I'm actually using an HTML validator as my primary HTML editor.
How can you use a validator, i.e. a program that checks your HTML code against some DTD, as an editor? Or are you using some editor that checks your code while you enter it, just like Word's spell checker does? That would be a nice feature, although I'm not sure whether constantly checking against the DTD would make the editor too slow.
<p keyword1 sentence, well written copy, etc.
<p> keyword2 paragraph with more choice content.
... is that the "keyword1" sentence looks like part of the tag - like a tag attribute. So the words in the "keyword1" sentence probably won't be included in the algo's computations, even though the page itself will be indexed.
Once a spider sees a correct tag further along in the page, it's back on course. So the keyword2 paragraph would make it.
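As a rough illustration (using Python's lenient html.parser — not Google's parser, of course, but it recovers from a missing ">" in a similar way), you can watch the keyword1 sentence get swallowed into the broken tag while the keyword2 paragraph survives as text:

```python
from html.parser import HTMLParser

# Sketch of tedster's point: with the ">" missing, the parser keeps
# scanning for the end of the tag, so the keyword1 words end up as
# bogus attribute names instead of indexable page text.
class TextCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.text = []   # data the parser sees as visible text
        self.attrs = []  # attribute names the parser sees on tags

    def handle_starttag(self, tag, attrs):
        self.attrs.extend(name for name, _ in attrs)

    def handle_data(self, data):
        self.text.append(data.strip())

p = TextCollector()
p.feed('<p keyword1 sentence, well written copy, etc.\n'
       '<p> keyword2 paragraph with more choice content.')
print(p.text)   # only the keyword2 paragraph appears as text
print(p.attrs)  # the keyword1 words show up as attribute names
```

Running it shows the keyword1 sentence never reaches handle_data at all — exactly the "part of the tag" effect described above.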
Thank you, Tedster. A good clarification about the missing ">".
Could someone please explain the "charset" part in this:
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
Is it OK only for pages in English? Or for other languages too?
I tried site search but encountered a problem with some "validator" threads (not in other threads):
I saw the page but before I could read it, the browser changed it into an error message. I tried it several times but the result was always the same. Has anyone else experienced this? I have had no problems with my computer or with my browser otherwise.
UTF-8 is the default encoding for XML files, and it works for English and other languages alike. When editing your XHTML files you need to be aware that beyond plain ASCII it is not byte-compatible with the ISO-8859-1 (Latin-1) encoding.
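A small Python sketch of that incompatibility — plain English (ASCII) text produces identical bytes in both encodings, but accented characters diverge, which is why declaring the wrong charset mangles non-English pages:

```python
# UTF-8 and ISO-8859-1 (Latin-1) agree on plain ASCII English...
ascii_text = "Merchant Accounts"
assert ascii_text.encode("utf-8") == ascii_text.encode("latin-1")

# ...but differ as soon as accented characters appear.
accented = "café"
print(accented.encode("utf-8"))    # b'caf\xc3\xa9'  (two bytes for é)
print(accented.encode("latin-1"))  # b'caf\xe9'      (one byte for é)

# Decoding with the wrong charset mangles the text ("mojibake"):
print(accented.encode("utf-8").decode("latin-1"))  # 'cafÃ©'
```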
And thank you all! You know so much and are so helpful that it is a delight to read your messages. I know no other place where one would learn as much as here.