Forum Moderators: Robert Charlton & goodroi
I have been working on a particular website for a while now. I originally started with a non-JavaScript menu, but in an effort to improve SEO I switched to a JavaScript-based menu so that the menu markup wouldn't get mixed in with the real content. The JavaScript is also compressed to help loading times.
The problem is that Google hasn't really taken to the site over the last six months; it has only been indexing links from the first page and not going any deeper. More recently, the site jumped up to about 6000 indexed pages (mostly supplemental), but all but a few of those pages have the original non-JavaScript menu cached.
The new JavaScript menu isn't doing anything dodgy, but it uses Ajax, so the cached copies of the pages fail to load the JavaScript. The results of a site:www.domain.... query are also unstable and change with every click.
Has anyone got an idea whether Google is avoiding the site purely because of the JavaScript? Should I revert to the original menu, and would noscript tags make any difference in this case?
Thanks
Happy
Still, if you want the internal urls of a site to be frequently spidered, and most of all to pass on backlink influence of all kinds, then currently those links do need to be "vanilla html" -- <a href="[the target url]">
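For instance, a menu entry that stays crawlable might look like the following sketch. The ids and the Ajax behaviour here are hypothetical; the point is just that the anchor itself is plain html, with the script layered on top:

```html
<!-- Plain anchor: spiders can follow this and it passes link value -->
<a id="menu-products" href="/products.html">Products</a>

<script type="text/javascript">
// Progressive enhancement (hypothetical): attach Ajax behaviour to the
// existing anchor. Crawlers that ignore JS still see a vanilla link.
document.getElementById('menu-products').onclick = function() {
    // load the section via Ajax here instead of a full page load
    return false; // suppress normal navigation for JS-enabled visitors
};
</script>
```

JS-enabled visitors get the Ajax menu; everyone else, including the spider, gets an ordinary link to the same URL.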
<script type="text/javascript">
window.onload = function() {
    document.getElementById('showcommentsform').onclick = function() {
        document.getElementById('commentformcontainer').innerHTML = "Blah, blah blah, the content I want to display is here";
        document.getElementById('showcommentsform').style.display = "none";
        return false;
    };
};
</script>
Is this ok for Google?
You mean that some JS links can be seen and followed by Google?
There is a phenomenon that I've heard called "URL Hunger". During the index size wars between the major search engines, it sometimes went to extremes. For example:
Googlebot will try to spider any character string it picks up that looks like a url -- that has certain characteristics such as beginning with the http: protocol, and so on. This doesn't mean Google will treat that url as a link on the page -- it's only a "real" link if it's in an anchor tag, etc, etc.
Let's assume that googlebot sees an on-page script, and somewhere within the script there is the line location.href="http://www.example.com/funkypage.asp". Googlebot will often put that "we-hope-it's-a-URL" into its crawl queue, if it didn't already have it from some other source.
Unless that URL also shows up as an html anchor tag, its occurrence in the script will not influence ranking calculations in the same way that a "regular" link does. But the "we-hope-it's-a-URL" may get crawled, and if the server resolves it, then that URL may get into the Google index in some way. It may not stick if it's not supported by other links, somewhere or other on the web, but it often will get an audition.
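To illustrate the distinction, here is a hypothetical page containing the same URL twice, once as a bare string inside a script and once as an html anchor:

```html
<script type="text/javascript">
function goFunky() {
    // Googlebot may extract this string out of "URL hunger" and queue it
    // for crawling, but it does not count as a link on the page
    location.href = "http://www.example.com/funkypage.asp";
}
</script>

<!-- By contrast, an html anchor is a "real" link for ranking purposes -->
<a href="http://www.example.com/funkypage.asp">Funky page</a>
```

Only the anchor version passes link influence; the script version at best gets the URL discovered.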
So I basically have this in the page:
<div id="commentformcontainer"></div>
And with JS I dynamically fill that container with content which is defined at the top of the page (in the JavaScript section).
Is this ok?
Thanks again,
Manca
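A more crawler-friendly variant of that pattern would keep fallback content in the markup itself, so there is something indexable even when the script never runs. A sketch, reusing the ids from the post above (the fallback link is hypothetical):

```html
<!-- Fallback content lives in the html, so spiders can index it;
     the script replaces it for JS-enabled visitors -->
<div id="commentformcontainer">
    <a href="comments.html">Read the comments</a>
</div>

<script type="text/javascript">
// Hypothetical enhancement: swap in the dynamic content for JS users
document.getElementById('commentformcontainer').innerHTML = "Blah, blah blah, the content I want to display is here";
</script>
```

This way the answer to the noscript question above largely takes care of itself: the non-JS content is simply there in the page, rather than hidden behind a script.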
<div id="submenuID"><a href="smID.html">Partial sitemap</a></div>
Has anyone else using this method suddenly developed SERP problems on Sept 15? If so, please post.
Anyone got any clue?