No and yes. Some improper coding could cause the bots to get hung up and time out, but short of that, clean coding will really not better you in the SERPs.
It is a good thing you did, as now, whatever browser your site is viewed in, you can feel confident it will present itself as designed, in most cases.
I feel I have gotten more of a boost from clean coding in MSN than in Google or Yahoo.
What you did to clean your coding up is not about the rankings but about taking pride in your work.
While cleaning up code may not have a direct effect on search engine ranking, it might:
- Help increase repeat visitors.
- Increase the likelihood of getting a link from an authority site.
There are probably many more benefits, but these two stand out in my brain (besides pride of ownership).
I spent a lot of time over the past six months cleaning up my code so that each page validates. While I don't think it matters much to Google if you skip an alt attribute here and there, there is the idea that this could matter in the future. As Google marches along, it adds to the characteristics that are important to the algo. By doing this housekeeping, you are ready if that happens.
There's also a benefit to you. By learning what it takes to write valid code - and especially the strict version rather than transitional - you will improve your comprehension of the medium you work in significantly - so you will be able to accomplish a wider variety of objectives on your pages, and with greater speed.
But Google absolutely does not currently rate a site by how valid the code is; many spokespeople from Google have said this, many times.
Interesting to hear all of your responses - Thanks for the input.
It definitely felt good to go through and clean up all the code, a positive step in the direction of becoming an actual (professional) webmaster.
Cleaning up the code won't help your site in the SERPs, but it will help via page load speed and compatibility, and it will also stop bots falling over themselves when they come across a piece of code they don't understand.
This has been said before, but it bears repeating, I think. The most problematic kinds of true errors in your HTML mark-up can be very difficult to spot by eye - they really need a tool, such as the W3C HTML Validator [validator.w3.org], to be ruled out with certainty.
What are these problems? Things like an unclosed quotation mark or a missing angle bracket on a tag. You can stare for hours at your source code and miss that kind of thing. But until you fix that kind of error, a section you intended as content may just look like an invalid attribute value, or something like that. Browsers have different error recovery routines, and the fact that the content displays on screen is no guarantee that Google's index will "see" it as content.
Eventually Google's error recovery routines may pick up a clue farther along in the code - and after that point, the rest of the page can be indexed. But there can easily be a gap, sometimes with important content, that just gets skipped. I speak here from painful experience.
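To make that concrete, here is a made-up fragment (the file name and text are only for illustration) where one missing quote can hide a whole block of copy:

    <a href="/widgets.html>See our full range of widgets</a>
    <p>Hand-built widgets, shipped worldwide since 1987.</p>

Because the href value is never closed, a parser can read the link text and the paragraph after it as part of one long attribute value rather than as page content, at least until its error recovery hits a later quote mark and gets back on track.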
There are several types of HTML coding errors that potentially can be bot stoppers.
It is best to run a sample of pages through the HTML validator to make sure that nothing major is going on.
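If you just want to spot-check a live page, the validator will also take the address straight in its query string, along these lines (with example.com standing in for your own domain):

    http://validator.w3.org/check?uri=http://www.example.com/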
[Hmmm. Tedster types quicker.]
Also, use the Google Webmaster Central tools to submit your sitemap and see what errors it finds; that helps.
|Cleaning up the code won't help your site in the SERPs but it will help via page load speed|
I don't think that's the case. Pages that validate have - in most cases - more code than those which don't validate. That's probably the main reason the Google homepage doesn't validate at all...
OK, I admit it, I use a WYSIWYG editor.
The one thing that constantly gets dinged when trying to validate is the <meta blah/> tag, which has no ending tag but has the closing slash built into the defining tag.
If I were to go through the site and change every <tag/> to <tag> </tag>, could this actually improve the ranking of the page?
I only mention this because, in another thread, the front page "bullet" in the source was affecting someone's results.
According to SES NYC, it doesn't have a big impact unless your code is so screwy that the bots can't spider it.
Also, the cleaner the code, the more efficiently the site should work across different platforms, so you can never go wrong having good code.
Using a WYSIWYG is just fine. Spend the time creating content-rich pages and getting good neighborhood links.
I have never encountered the <tag /> problem, as I have never felt that XHTML offered anything extra that I might need compared to HTML 4.01 Strict. In fact I use HTML 4.01 Transitional most of the time.
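For anyone comparing, these are the doctype declarations in question, and the trailing slash issue follows from them: the self-closing <meta /> style is an XHTML convention, while under either HTML 4.01 doctype you simply write the meta tag with no slash and no end tag at all.

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">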
It doesn't help in the SERPs. Although, you should clean the code so you're not making it difficult for the SE to find the content. I have come across many sites that have great content, but when you look at the source code, the content is buried way deep, especially if you're working with a lot of nested tables. Many times the code cleaning is as simple as cleaning up the mess of unclosed tags and JS. Using CSS where possible also helps when cleaning the code. If anything, your pages will load faster and give a better user experience.
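As a rough before-and-after sketch (the class name and wording are made up), the same sentence can sit several tables deep or right near the top of the markup:

    <table><tr><td><table><tr><td><table><tr><td>
    Our widgets ship worldwide.
    </td></tr></table></td></tr></table></td></tr></table>

    <div class="content"><p>Our widgets ship worldwide.</p></div>

Same content either way, but in the second case a bot reaches it without wading through layers of layout markup first.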