Hi Regent, welcome to webmasterworld :)
One method is just to use
document.write('<a href="http://www.example.com">example</a>')
Or you could use something like one of these:
<a href="#" onclick="window.location='http://www.example.com';">example</a>
Thanks. This helps a lot.
Are the examples you have given effective at preventing 'PR bleed'?
If a link is not 'spiderable', does that mean that Google will not count it in its algorithm?
Try using an external js file. At the moment it still works to hide links, but be prepared to change as spiders change.
No longer true for Google. If you can read it, g-bot will read it. What g-bot does not yet do is parse js.
What might be "effective at preventing 'PR bleed'" will also be effective at making a site look like a dead end -- many links in, none out -- in general, SEs like to see sites that are well linked.
>>No longer true for Google. If you can read it, g-bot will read it. What g-bot does not yet do is parse js.
Google implemented reading js for links a couple of months ago. It will read the js, and if it sees something that looks like a link it will try to follow it -- basically anything that has an anchor or an href. So in both of your examples above it will pull out the http*://www.example.com.
And yes, you can make it more difficult because g-bot will not parse the js (at least not yet). So if you write an expression that concatenates strings into a full url (however it works: 'www.' + 'example' + '.com' or something) you should (maybe) be okay (for now).
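To illustrate the concatenation idea, here's a minimal sketch. The helper name and the example.com domain are mine, purely for illustration -- the point is just that no complete URL ever appears in the raw source:

```javascript
// Sketch: assemble the URL from string fragments so a bot that scans the
// raw source (but doesn't evaluate js) never sees a complete address.
// buildLink and example.com are placeholders, not any real API.
function buildLink(text) {
  var url = 'http://' + 'www.' + 'example' + '.com';
  return '<a href="' + url + '">' + text + '</a>';
}

// On the page you would emit it at parse time, e.g.:
// document.write(buildLink('example'));
```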
And I don't know which ones offhand but I believe that a couple of other bots have been following js links for some time. Can anybody confirm either way?
Hmmm... this doesn't make much sense to me.
I have built quite a nice site (so I feel), which utilises a DHTML menu; the spiders couldn't follow it... while a database dump of my products into a single DIR (a single index.asp plus 2,000 product pages) got picked up within two weeks...
E.g., " … Scanners","/ShopDisplayProducts.asp?id=291&cat=Scanners",,"Scanners ...",0
... which explains why it is not picked up.
Any ideas on what I could do to have a menu the spiders could follow?
>>Any ideas on what i could do to have a menu the spiders could follow?<<
Could you not put the links in a <noscript> tag within your document <head> section?
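Something like this, as a rough sketch (the URLs are placeholders). One caveat: in HTML 4, <noscript> is only valid inside <body>, so the fallback links would need to go there rather than in <head>:

```html
<noscript>
  <!-- plain, spiderable fallback links; URLs are placeholders -->
  <a href="/ShopDisplayProducts.asp?cat=Scanners">Scanners</a>
  <a href="/ShopDisplayProducts.asp?cat=Printers">Printers</a>
</noscript>
```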
>> Any ideas on what i could do to have a menu the spiders could follow?
The problem is the parameter 'id'. This suggests a session ID, and search engines don't like session IDs: following them would yield a lot of identical pages. So to get the spiders to follow your links, rename the parameter to something like 'code'.
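In other words, something along these lines (the parameter name 'code' is just an example):

```
Before: /ShopDisplayProducts.asp?id=291&cat=Scanners
After:  /ShopDisplayProducts.asp?code=291&cat=Scanners
```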
|Google implemented reading js for links a couple of months ago. It will read the js, and if it sees something that looks like a link it will try to follow it -- basically anything that has an anchor or an href. So in both of your examples above it will pull out the http*://www.example.com. |
I agree. Matt Cutts said something along the same lines at Pubcon 4.
Thank you kindly for your response...
I still believe that g-bot can't follow the kind of coding I've mentioned above.
I fixed the DHTML menu problem by putting text-based links underneath the menu. The DHTML menus live in another layer. This also caters for visitors with JavaScript disabled.
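Roughly what I mean (element names and URLs are placeholders):

```html
<!-- the script-driven DHTML menu lives in its own layer -->
<div id="dhtmlMenu"></div>

<!-- plain text links underneath: spiderable, and usable with JavaScript off -->
<p>
  <a href="/ShopDisplayProducts.asp?cat=Scanners">Scanners</a> |
  <a href="/ShopDisplayProducts.asp?cat=Printers">Printers</a>
</p>
```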
In regard to "id" meaning "session id", well, I think the bots need better algos. :)
Real session ids are much longer than 5 digits.
Another story, related to link development: g-bot also got lost (and it's not its fault) when hitting the "view cart" link. This triggered an error ("Cart is empty"), which also involves a 302 redirect (to the error message page). I have now disabled that link (together with "checkout") whenever the cart is empty.