aakk9999 - 12:33 pm on Aug 23, 2013 (gmt 0)
I mean, Google is practically inviting you to cloak your page for Gbot.
With <meta name="fragment" content="!"> you are only inviting Google to see the HTML the way your user would see it after scrolling down the page. Since Googlebot has to get the page in one request (rather than multiple requests, as a user does), you are simply giving it everything in advance.
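For anyone unfamiliar with the mechanics: under Google's AJAX crawling scheme, a page that opts in with that meta tag gets fetched by Googlebot at a `?_escaped_fragment_=` URL, and hash-bang (`#!`) URLs are rewritten the same way. A minimal sketch of that URL mapping (the helper name `escaped_fragment_url` is mine, just for illustration):

```python
from urllib.parse import quote

def escaped_fragment_url(url):
    """Map a page URL to the snapshot URL Googlebot requests under
    the AJAX crawling scheme (hypothetical helper for illustration)."""
    if '#!' in url:
        # Hash-bang URL: the fragment after #! becomes the parameter value.
        base, frag = url.split('#!', 1)
        sep = '&' if '?' in base else '?'
        return base + sep + '_escaped_fragment_=' + quote(frag, safe='=')
    # Page opted in via <meta name="fragment" content="!">:
    # Googlebot requests the same URL with an empty _escaped_fragment_.
    sep = '&' if '?' in url else '?'
    return url + sep + '_escaped_fragment_='
```

Your server is expected to return the full, pre-rendered HTML (all the scroll "bits" included) at that snapshot URL.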
Care must be taken if you use GET rather than POST, because a GET request can be indexed as a separate URL. I am not really sure what the recommendation is here: should the GET URL be blocked by robots.txt, should a robots noindex be used, or a rel canonical? Or should POST always be used when requesting the further bits of the page?
Otherwise - that is, if GET is used to grab the additional page content - Googlebot will see one complete page (where your server has returned all the "bits" together), but parts of that content will also exist on separate URLs, which can create duplicate content.
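One way to cover the duplicate-content risk (a sketch, not a definitive recommendation) is to answer those partial-content GET endpoints with an `X-Robots-Tag: noindex` header, plus a `Link` header pointing rel=canonical at the full page. The function name and framework-free dict are hypothetical; in practice you would set these headers in whatever server framework you use:

```python
def fragment_response_headers(canonical_url):
    """HTTP headers for a partial-content GET endpoint (illustrative sketch):
    keep the fragment URL out of the index and point bots at the full page."""
    return {
        # Tell crawlers not to index this partial response as its own page.
        'X-Robots-Tag': 'noindex',
        # Point at the complete page the fragment belongs to.
        'Link': '<{}>; rel="canonical"'.format(canonical_url),
    }
```

robots.txt blocking would also keep the fragment URLs out, but then Googlebot cannot fetch them at all, which may or may not matter depending on how the complete page is assembled.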