| 12:23 pm on Feb 22, 2010 (gmt 0)|
|I have always used JS to collapse content onload. |
I do similar, except rather than use the onload event (which only fires after all the content has loaded, including images), I include a script before the closing BODY tag to hide the content - less flicker.
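For example, something like this just before the closing BODY tag (a rough sketch; the id "details" is just a placeholder for whatever element wraps your collapsible content):

  <div id="details">...collapsible content...</div>
  <script type="text/javascript">
  // Runs as soon as the parser reaches this point, before onload fires,
  // so the content is hidden without waiting for images to download.
  document.getElementById('details').style.display = 'none';
  </script>
  </body>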
I have also sometimes included a CSS file (but only when JS is enabled) that hides the content.
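Along these lines (again just a sketch; "js-enabled.css" is a made-up filename). Only JS-enabled browsers execute the script, so only they load the stylesheet that hides the content:

  <script type="text/javascript">
  // Non-JS browsers never request this stylesheet.
  document.write('<link rel="stylesheet" type="text/css" href="js-enabled.css">');
  </script>

with js-enabled.css containing something like #details { display: none; }.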
|I have been toying with using some DHTML of late. |
Is this different to JS?
| 10:53 pm on Mar 2, 2010 (gmt 0)|
I am curious about this topic as well. I am looking to implement some collapsible content on product pages on an e-commerce site, but I would like to make sure that all content is taken into account for SEO purposes. I wasn't really aware this would be an issue until reading this post. Does that mean that if you have collapsible content that doesn't load immediately, search engine crawlers may just skip over that data?
| 9:38 am on Mar 3, 2010 (gmt 0)|
|Does that mean if you have collapsible content and it doesn't load immediately that engine crawls may just skip over that data? |
Quite possibly, yes. The SEs only see what you see if you view the source of the page after it has first loaded. Any content created or pulled in with JS (i.e. AJAX calls) is unlikely to be seen by the SEs.
| 12:18 pm on Mar 3, 2010 (gmt 0)|
penders could be correct, but somewhere in the misty past I've read that JS is readable by the SEs and is indexed... though that is only a "misty past" memory. I will say that if it is in the page it is read; whether it is acted upon is another matter, but it is read.
| 1:26 pm on Mar 3, 2010 (gmt 0)|
|somewhere in the misty past I've read that JS is readable by the SEs and is indexed... |
But how long ago is the misty past? Certainly in the dim and distant past SEs (i.e. SE bots) never touched JS. In the very recent past I think some SEs might be able to parse a limited amount of JS.
Looking at it from an SE point of view... JS could do anything and might not contribute to the page content at all, apart from taking time to process, so I don't think it's an easy problem for the SEs to solve, and I think they are wise to be cautious in this respect.
| 3:23 pm on Mar 16, 2010 (gmt 0)|
Just to update, I agree with penders. theSwak, I would err on the side of caution. The last thing you want to do is unintentionally hide valuable content from search engines.
There are many theories as to what spiders can crawl via JS. They have developed somewhat in their sophistication and can navigate complex menu structures (it's of benefit to search engines, or at least they see it as one, to index as many pages as they can), but I have found spiders tend not to crawl content hidden behind JS.
When using JS I always expose the content first, then hide it. I have run experiments loading pages with the content hidden and using JS to display it onclick. That approach effectively hides the content and it's not crawled.
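To illustrate the expose-then-hide approach (a rough sketch; "spec" and toggle() are just placeholder names):

  <h3 onclick="toggle('spec')">Specifications</h3>
  <div id="spec">
  Product details here, present in the HTML source so spiders can read them.
  </div>
  <script type="text/javascript">
  function toggle(id) {
    var el = document.getElementById(id);
    el.style.display = (el.style.display == 'none') ? '' : 'none';
  }
  // Hide after the markup is parsed: the content is still in the
  // source for spiders, but starts collapsed for JS-enabled visitors.
  toggle('spec');
  </script>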
Run some variations and see what content appears in your cached pages before you implement site-wide.