Drummond, whose search company avoids some of these complexities by tapping humans to do its indexing, notes that it would be possible to simplify the problem and merely determine, for example, whether or not the Web application has made a data request to Facebook. Presumably, that's what Google is currently doing. [blogs.forbes.com...]
<a href="ajax.htm?foo=32" onClick="navigate('ajax.html#foo=32'); return false">foo 32</a>
Googlebot is now able to construct much of the page and can access the onClick event contained in most tags. For now, if the onClick event calls a function that then constructs the URL, Googlebot can only interpret it if the function is part of the page (rather than in an external script).
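To make that distinction concrete, here is a minimal sketch (the goTo function and nav.js file are made-up names for this example). Googlebot can follow the first link, whose handler is defined inline on the page, but not the second, where the same handler lives in an external script:

<script>
// handler defined on the page itself: Googlebot can interpret the URL it builds
function goTo(id) { window.location.href = 'ajax.html#foo=' + id; }
</script>
<a href="#" onClick="goTo(32); return false">foo 32</a>

<!-- same handler moved to an external file: Googlebot can't resolve the URL -->
<script src="nav.js"></script>
<a href="#" onClick="goTo(32); return false">foo 32</a>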
Some examples of code that Googlebot can now execute include:
<tr onclick="myfunction('index.html')">
<a href="#" onclick="window.open('welcome.html')">open new window</a>
These links pass both anchor text and PageRank.
And what stops the server-side code from simply not sending the client-side scripts when the request comes from a spider/bot?
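Technically, nothing. A server can sniff the User-Agent header and strip the scripts before responding. A rough Node.js sketch of the idea (the bot regex and page markup are illustrative only, and note that serving bots different content than users is cloaking, which Google's guidelines prohibit):

const http = require('http');

const page = '<html><body>' +
  '<script src="nav.js"></script>' +
  '<a href="#" onClick="goTo(32); return false">foo 32</a>' +
  '</body></html>';

http.createServer(function (req, res) {
  const ua = req.headers['user-agent'] || '';
  // crude crawler check against a few common bot user agents
  const isBot = /googlebot|bingbot|slurp/i.test(ua);
  // bots get the page with all <script> blocks stripped out
  const body = isBot ? page.replace(/<script[\s\S]*?<\/script>/gi, '') : page;
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(body);
}).listen(8080);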
But webmasters *do* want bots like Googlebot to see the site the way that users do.