
Search Engine Spidering and Javascript

         

Road King 48

1:53 am on Apr 25, 2007 (gmt 0)

10+ Year Member



I've built a portfolio page for a client that uses JavaScript to flip the portfolio image and text without reloading the page. They were contacted by an SEO consultant who said their search engine rankings/spidering are adversely affected by the JavaScript. So, two questions:

1. Does anyone know if scripts keep Google from spidering the page?
2. If html text is only contained in a script, will that text not be spidered then?

Thanks!

Dabrowski

2:03 am on Apr 25, 2007 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



No, a spider will ignore any JavaScript completely, so the SEO consultant was right. You should research unobtrusive JavaScript: the page is written in static HTML first, and the JS then modifies it after it has loaded.

You can make creative use of stylesheets to handle the user-facing presentation; the spider won't read those either.
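For example, one sketch of that idea (the "js" class name and the portfolio markup are assumptions for illustration): have the script flag its own presence on the root element, and let the stylesheet hide extra items only when that flag is present, so non-JS visitors and spiders still get the full static HTML:

```javascript
// Append a "js" class to an element's class attribute. A stylesheet
// rule such as:
//   .js #portfolio li { display: none; }
// would then hide items only for script-capable browsers; without
// scripting, all items stay visible in the static HTML.
function flagScripting(el) {
  el.className = el.className ? el.className + ' js' : 'js';
  return el.className;
}

// In a browser, run this as early as possible so the CSS rule
// applies before the page is rendered.
if (typeof document !== 'undefined') {
  flagScripting(document.documentElement);
}
```

The point is that the hiding is opt-in: it only happens when the script has actually run.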

Fotiman

9:09 pm on Apr 25, 2007 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



Right, Dabrowski was correct. Any HTML that is printed using JavaScript will be ignored by search engines.

Try reading up on "Progressive Enhancement" and using Semantic Markup. Keep in mind that your content is the most important part of a page and should be accessible whether JavaScript is available or not. JavaScript should be treated as an enhancement only.

As Dabrowski mentioned, you could do something like this:

<ul id="portfolio">
<li id="customer1">... Image and text go here ...</li>
<li id="customer2">... Image and text go here ...</li>
<li id="customer3">... Image and text go here ...</li>
</ul>

Since it sounds like the content is really a list of images and text, mark them up as such. Then use JavaScript to parse this list and format it for whatever your "pageflipper" script does.
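A minimal sketch of that approach, assuming the markup above (the function name is made up; adapt it to whatever your pageflipper script actually does):

```javascript
// Show one portfolio item at a time. All the items are already in
// the static HTML, so spiders index the full list even if this
// script never runs.
function makeFlipper(items) {
  var current = 0;
  function show(i) {
    for (var j = 0; j < items.length; j++) {
      items[j].style.display = (j === i) ? '' : 'none';
    }
    current = i;
  }
  show(0); // start with the first item visible
  return {
    next: function () { show((current + 1) % items.length); },
    current: function () { return current; }
  };
}

// Wire up only in a browser, after the static HTML has loaded.
if (typeof document !== 'undefined') {
  window.onload = function () {
    var list = document.getElementById('portfolio');
    if (list) makeFlipper(list.getElementsByTagName('li'));
  };
}
```

If JavaScript is unavailable, the visitor simply sees the plain list, which is exactly what the spider sees.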

londrum

9:27 pm on Apr 25, 2007 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



If it were me, I would rethink the design a bit. If what you are saying is literally true, and you are flipping the image/text without reloading the page, then presumably you will have the same URL for every page.
You need to find a way to get different URLs at least, to give the engines something to index, and to give yourself somewhere to put unique page titles, descriptions, body text, alt attributes, title attributes, etc. That is what helps you rank in the engines, and it gives people something to bookmark as well.
I wouldn't fancy keeping the same URL with a load of ?variables on the end either, because search engines tend to leave those out of the index.

Fotiman

10:19 pm on Apr 25, 2007 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member Top Contributors Of The Month



I got the impression that this was a portfolio page, possibly listing a bunch of items in the portfolio. For example, if this were a web design firm, then the portfolio would contain a snapshot image of one page from a site, with some text about the site and perhaps a link to more details (or the site itself). I wouldn't expect this little bit of information to be on its own page.

tedster

2:56 am on Apr 26, 2007 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



"uses Javascript to flip the portfolio image and text without reloading the page."

This sounds like the content is all loaded in one html document, with javascript being used simply to show/hide the divs involved by changing the visibility rules. In that case, all the content WILL be indexed as one page.
The problem such a page would face is that visitors arriving from a search engine may have searched for terms that are not visible in the default configuration of the page.

Also note that while javascript may not be executed by search engines, it is downloaded as text and examined to a degree, mostly to see if it contains urls that may not have been discovered through other crawling avenues. These "links" are not scored as true backlinks in the ranking algorithms, but they can trigger the spidering of the url involved.
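As an illustration (the path below is hypothetical): even script that a crawler never executes still exposes any literal URLs in it, because fetching the .js file and scanning it as plain text is enough:

```javascript
// A crawler doesn't run this function, but downloading the script
// as text is enough for it to spot the literal URL inside.
function showCustomerDetails() {
  window.location.href = '/portfolio/customer2.html';
}
```

That discovered URL may get spidered even though it was never rendered as a real link.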

Dabrowski

6:13 pm on Apr 26, 2007 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



Agreed. Instead of having all the content in one page, if you want search engines to take an interest you should separate it into individual pages. This will also help your PR.

If for some reason they absolutely cannot be separated, you could use [whisper]doorway pages[/whisper]. You could simply duplicate the content into pages of their own, add a link in each of these pages back to the actual page, and reference them somewhere spiders will find them but users won't, such as your sitemap (disallowing them in robots.txt would stop spiders from reading them, which is the opposite of what you want here).

That way the spiders should see them, but your users won't.

No hate mail please, I know doorways are evil.