Forum Moderators: Robert Charlton & goodroi
In order to make pages without hash fragments crawlable, you include a special meta tag in the head of the HTML of your page. The meta tag takes the following form:
<meta name="fragment" content="!">
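To make the mapping concrete: under the AJAX crawling scheme, a hash-bang URL is re-requested by the crawler with the fragment moved into an `_escaped_fragment_` query parameter, and a page carrying the meta tag above (with no hash at all) is re-requested with an empty `_escaped_fragment_=`. A minimal sketch of that rewriting; the function names are ours, and per the spec some special characters in the fragment would additionally need percent-escaping, which is omitted here:

```javascript
// Sketch of the URL rewriting Google's AJAX crawling scheme performs.
// A URL like /products#!page=2 is crawled as
// /products?_escaped_fragment_=page=2.
function hashBangToEscapedFragment(url) {
  const i = url.indexOf('#!');
  if (i === -1) return url; // nothing to rewrite
  const base = url.slice(0, i);
  const fragment = url.slice(i + 2);
  const sep = base.includes('?') ? '&' : '?';
  return base + sep + '_escaped_fragment_=' + fragment;
}

// A page with <meta name="fragment" content="!"> and no hash fragment
// is crawled with an empty _escaped_fragment_ parameter appended.
function metaFragmentCrawlUrl(url) {
  const sep = url.includes('?') ? '&' : '?';
  return url + sep + '_escaped_fragment_=';
}

console.log(hashBangToEscapedFragment('/products#!page=2'));
console.log(metaFragmentCrawlUrl('/about'));
```

The server is then expected to answer the `_escaped_fragment_` request with a fully rendered HTML snapshot of what the JS would have built.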
I mean, Google is practically inviting you to cloak your page for Gbot.
Care must be taken if you use GET rather than POST, because GET can be indexed as a separate URL.

Thank you for bringing up an important question, aakk9999. I don't think you can use POST here - they are looking for a URL crafted in a particular way, so it has to be GET.
[edited by: aakk9999 at 3:58 pm (utc) on Aug 23, 2013]
[edit reason] ToS [/edit]
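This is also why it has to be GET at the HTTP level: the crawler's marker travels in the query string of the URL itself, not in a request body. A minimal server-side sketch of pulling it out, using Node's built-in WHATWG URL parser (the function name and example base are ours):

```javascript
// The crawler puts _escaped_fragment_ in the query string of a GET
// request, so the server can detect it by parsing the URL alone.
// A POST body never enters into it.
function extractEscapedFragment(requestUrl) {
  // The base is only needed to parse relative request paths.
  const u = new URL(requestUrl, 'http://example.com');
  return u.searchParams.get('_escaped_fragment_'); // null if absent
}

console.log(extractEscapedFragment('/products?_escaped_fragment_=page=2'));
console.log(extractEscapedFragment('/products'));
```

If the parameter is present (even empty), the server should return the pre-rendered snapshot instead of the JS shell.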
The crawl budget will effectively be slashed in half, because each page needs to be downloaded twice (once for each of the two versions).
I would be especially happy if anyone can share real-world experience with lazy loading of content pages.

Mod's note: Real-world experience can be shared, but please observe ToS with regards to domain name, niche and keywords.
Paginate and let the SEs index the separate pages.

By that, do you mean feed bots the ?_escaped_fragment_= (fully assembled) version of the page, or rely entirely on their being able to follow the chain of pushState changes? I guess this is where my grasp of the subject starts slipping: how would Gbot receive the pushState change? Don't they have to actually execute JS code, as in "emulate" scrolling, for that?
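The usual answer to "how would Gbot receive the pushState change?" is: don't make it. Render real, linked page URLs that a bot can follow without executing any JS, and let client-side script enhance those links into infinite scroll with pushState for human visitors. A hedged sketch under an assumed `?page=N` URL scheme (all names here are ours):

```javascript
// Each scroll "chunk" corresponds to a real, crawlable page URL.
function pageUrl(basePath, page) {
  return page === 1 ? basePath : basePath + '?page=' + page;
}

// Static pagination links the bot can follow without running JS.
function paginationHtml(basePath, current, total) {
  const links = [];
  for (let p = 1; p <= total; p++) {
    links.push(p === current
      ? '<span class="current">' + p + '</span>'
      : '<a href="' + pageUrl(basePath, p) + '">' + p + '</a>');
  }
  return '<nav class="pagination">' + links.join(' ') + '</nav>';
}

// In the browser, the infinite-scroll layer would fetch the next
// chunk and call history.pushState(null, '', pageUrl(base, n)) as it
// loads; the crawler never sees that, it just follows the <a> links.
console.log(paginationHtml('/articles', 1, 3));
```

The pushState calls then become pure progressive enhancement rather than something indexing depends on.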
One work-around to jQuery Mobile not being able to use #'s might be to just paginate the mobile version normally [separate URLs] and rel=canonical those to the main [non-mobile] fragmented URL.

Well, the idea was to feed fragmented pages to mobiles (in small chunks) and whole pages to bots. I'm going to have to investigate whether there's a JQM workaround that makes it work with pushState.
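The work-around described above can be sketched as a small URL mapping. The scheme here is entirely hypothetical (mobile pages at /m/<slug>/page/<n>, the main fragmented version at /<slug>#!page=<n>); the point is only the shape of the rel=canonical each mobile page would emit:

```javascript
// Hypothetical URL scheme: paginated mobile pages live at
// /m/<slug>/page/<n>, while the main (fragmented) version of the same
// content lives at /<slug>#!page=<n>. Each mobile page points its
// canonical at the corresponding main URL.
function canonicalForMobilePage(mobileUrl) {
  const m = mobileUrl.match(/^\/m\/([^/]+)\/page\/(\d+)$/);
  if (!m) return null; // not a paginated mobile URL
  return '<link rel="canonical" href="/' + m[1] + '#!page=' + m[2] + '">';
}

console.log(canonicalForMobilePage('/m/widgets/page/2'));
```

Whether search engines honor a canonical pointing at a hash-bang URL is exactly the kind of thing that would need real-world testing, as the thread suggests.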