
Home / Forums Index / Google / Google SEO News and Discussion
Forum Library, Charter, Moderators: Robert Charlton & aakk9999 & brotherhood of lan & goodroi

Google SEO News and Discussion Forum

On Google Understanding Javascript
brotherhood of LAN

 12:14 am on May 24, 2014 (gmt 0)


We have been gradually improving how we do this for some time. In the past few months, our indexing system has been rendering a substantial number of web pages more like an average user's browser with JavaScript turned on.

To make things easier to debug, we're currently working on a tool for helping webmasters better understand how Google renders their site. We look forward to making it available to you in the coming days in Webmaster Tools.

I've never read much about how Google understands Javascript, other than that it's capable of executing some of it. Having some guidance in WMT may help us understand how Google sees JS-rendered aspects of a page.



 1:45 am on May 24, 2014 (gmt 0)

Considering they were doing page previews, they wrote Chrome, which has its own much faster javascript engine, and it's not too hard to build headless browsers like PhantomJS and SlimerJS, which everyone uses for scraping these days...

... I'd be more surprised at what Google CAN'T do with javascript; that would probably be the more interesting list.


 3:23 am on May 24, 2014 (gmt 0)

I was going to ask if what you see in Preview is a good indication of what google can do when it comes to executing javascript. But then I detoured to search (I figured out a good page to try it out on, using current Safari for safety) ... and once again the ### Preview is hiding. So the question remains not only unanswered but unasked :(

brotherhood of LAN

 3:40 am on May 24, 2014 (gmt 0)

Bill, it's clear that Google can use JS no problem for all the reasons you listed, though due to JS's event-driven nature, how deep do they look?

My guess is that content is analysed after any "onload" events, essentially what the user would see after the page has 'loaded' (OT, fwiw it's pretty difficult to decide when a 'page' has 'loaded' in the context of what the document is 'about'). I doubt that it hovers/clicks/any-event every pixel and node of a page. Even inspecting every listener would be quite time consuming.
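The "analysed after onload" guess can be pictured with a toy model. This is purely illustrative (the function name and content strings are made up, and this is not Google's actual pipeline): a renderer that fires only the load handler sees load-injected content, but never content gated behind a user event like a click.

```javascript
// Toy model: a page whose content is split between what appears on
// load and what appears only after a user event. A renderer that
// fires only the load handler indexes the first kind, never the second.
function renderPage(firedEvents) {
  var content = ['static HTML'];
  var handlers = {
    load:  function () { content.push('injected onload'); },
    click: function () { content.push('revealed on click'); }
  };
  firedEvents.forEach(function (ev) {
    if (handlers[ev]) handlers[ev]();
  });
  return content;
}

console.log(renderPage(['load']));          // crawler-style render
console.log(renderPage(['load', 'click'])); // what a clicking user sees
```

The gap between the two calls is exactly the open question in the thread: whether the indexer ever fires anything beyond load.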

I think I'll Google around and see what people have been saying about Google's understanding of JS. Good to see the 1st result for "google parsing javascript" is our very own forum!

It seems the majority of commentary is from 3-4 years ago simply acknowledging that Google has fetched what was previously hidden behind JS events.


 6:33 am on May 24, 2014 (gmt 0)

due to JS's event-driven nature, how deep do they look?

Not just events. There's system information too. I check for the presence of specific, named fonts, and make certain minor but visible adjustments accordingly. (For comparison purposes: The plainclothes bingbot behaves as though it has Euphemia but not Pigiarniq. This is plausible. So far I haven't been able to find out what the Googlebot says about itself.)
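Font checks like this are commonly done by comparing rendered text widths. A minimal sketch of the comparison logic (the widths would come from `canvas.measureText()` or a hidden offscreen element in a real page; the numbers and function name below are illustrative stand-ins, not real measurements):

```javascript
// Width-comparison trick for detecting whether a named font is
// available (sketch): render the same test string in
// "Candidate, monospace" and in bare "monospace". If the measured
// widths differ, the candidate font was actually used.
// In a browser the widths come from canvas.measureText() or an
// offscreen <span>; here they are parameters so the logic is clear.
function fontAppearsInstalled(widthWithCandidate, widthFallbackOnly) {
  // Equal widths mean the browser fell back to the generic font,
  // i.e. the candidate font is not available.
  return widthWithCandidate !== widthFallbackOnly;
}

console.log(fontAppearsInstalled(412, 400)); // widths differ: font likely present
console.log(fontAppearsInstalled(400, 400)); // same width: fallback was used
```

A bot that executes this kind of check ends up reporting whatever fonts its rendering environment happens to ship with, which is what makes the bingbot/Googlebot comparison above possible.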

For a human visitor, <script> and <noscript> are mutually exclusive. A robot of any kind-- including but not limited to search engines-- can choose to follow up on both. (Again for comparison purposes: facebookexternalhit takes the noscript option.)


 3:48 pm on May 26, 2014 (gmt 0)

Waiting for <meta name="robots" content="doNotTriggerEvents"> in order to prevent G from messing with my outbound links statistics...


 7:59 pm on May 26, 2014 (gmt 0)

looking forward to that tool. my website, which is huge, is best crawled by following the javascript menus. i didn't want to clutter my pages with an excessive amount of ugly, seo-ish html links, and i was never a fan of sending xml sitemaps to google for crawling. so to help googlebot navigate through the pages, i made extra html sitemaps with full css layout for users. but eventually i cancelled them, as human visitors got no added value from those pages. the thing is, i don't want to produce output solely for bots that don't understand javascript and more or less hide it from humans.

looking at the logs, i still don't know if and to what extent googlebot follows my javascript links, or whether it only navigates through my html links, respectively only revisits the urls it has "stored". i never understood what the problem was, as headless browsers have existed for some time now..


 12:51 am on May 28, 2014 (gmt 0)

google announced today on the webmaster central blog that the Fetch as Google feature of Webmaster Tools will now show how Googlebot would render the page:
http://www.webmasterworld.com/google/4675119.htm [webmasterworld.com]


 1:20 am on May 28, 2014 (gmt 0)

Chrome, which has its own much faster javascript engine

I am continually stumped by Chrome's lack of support for so many things, and recently had to scrap functions performed by JavaScript because they either worked once and then went into the ether, or didn't work at all. There is not, and never has been, any API compliance that one can build on. One day it works; a month later there's a new browser version and it doesn't. What version are they up to now? That's a whole lot of inconsistency and quicksand to build upon.

But then I noticed that those original functions which did work OK in Firefox are now fickle like Chrome. It's getting like... if the function is important, do NOT use JavaScript.

What was the function? It was a trigger to log time spent on a web page either upon leaving the page or losing focus to another tab. So our client was quite peeved when his users had not been billed.
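Part of why such a trigger is fragile is that unload-type events don't fire reliably, but the accounting itself is simple. A sketch of the time-on-page bookkeeping with timestamps passed in (in a real page they would come from `Date.now()` inside `visibilitychange`/`blur`/`pagehide` handlers; whether those handlers actually fire is exactly the browser inconsistency complained about above, so treat the event names as the standard DOM ones, not a guaranteed recipe):

```javascript
// Accumulate visible time across show/hide transitions, so a total
// can be reported when the page is left. Timestamps are parameters
// here to keep the logic testable; in a browser they would be
// captured by visibility/focus event handlers.
function visibleTime(events) {
  // events: [{type: 'show'|'hide', t: milliseconds}] in chronological order
  var total = 0;
  var shownAt = null;
  events.forEach(function (ev) {
    if (ev.type === 'show') {
      shownAt = ev.t;
    } else if (ev.type === 'hide' && shownAt !== null) {
      total += ev.t - shownAt;
      shownAt = null;
    }
  });
  return total;
}

console.log(visibleTime([
  { type: 'show', t: 0 },    { type: 'hide', t: 5000 },
  { type: 'show', t: 9000 }, { type: 'hide', t: 12000 }
])); // 8000 ms of visible time
```

The failure mode the poster hit is the "hide" that never arrives: if the final handler doesn't fire, the last interval is simply lost, and with it the billing record.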


 3:40 am on May 28, 2014 (gmt 0)

the Fetch as Google feature of Webmaster Tools will now show how Googlebot would render the page

:: detour to check some specimen pages ::

Oh, come on, Googlebot. You want us to believe you don't have a single member of this list?
Euphemia, "Euphemia UCAS", Pigiarniq, "DejaVu Sans", AiPaiNutaaq, Ballymun, "Ballymun RO", Code2000, NunacomU, Uqammaq
To make sure, I checked the one page where I name a font explicitly rather than rely on font substitution. Further experimentation confirms that they understand font substitution and are perfectly willing to do classical Greek. They can even render Devanagari correctly. Whew. But really, Googlebot, no Euphemia? Let's not be ridiculous.

(There was a point to the preceding. I wanted to compare their behavior to that of the plainclothes bingbot.)

:: further detour to logs ::

They really do use the Googlebot UA, not the Preview they use for most WMT functions.


 5:26 am on May 28, 2014 (gmt 0)

Bill, it's clear that Google can use JS no problem for all the reasons you listed, though due to JS's event-driven nature, how deep do they look?

All I know for sure is what scrapers are now capable of doing, including what I can do with PhantomJS, and I'd assume Google, with a really big team, can do a heck of a lot more.

My first assumption would be that any AJAX content can be read, unless they're not as good as the scrapers.

JAB Creations

 7:28 am on May 30, 2014 (gmt 0)

Google has crawled AJAX+pushState XML pages on my XHTML website so it can analyze the window.onclick event. As far as I can tell Googlebot is now a highly automated Chrome browser; in other words if you're cloaking in any form or fashion it's going to see it. If you're triggering popups to uncanny websites, it's going to see it.

- John

© Webmaster World 1996-2014 all rights reserved