1script - 5:07 pm on Aug 23, 2013 (gmt 0)
@aakk9999: yes, the script that delivers the separate chunks is definitely going to be both POST-only and blocked in robots.txt; I thought you were talking about the ?_escaped_fragment_= page.
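To make that concrete, here's a minimal sketch of what I mean (Python/Flask is just my choice for illustration; the /chunk endpoint name and the data source are made up, not our actual code). The script only answers POST, and robots.txt disallows it as a second line of defence:

from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for the real data source: ~200kB slices of the page body.
CHUNKS = ["chunk 0 text ...", "chunk 1 text ...", "chunk 2 text ..."]

@app.route("/robots.txt")
def robots():
    # Well-behaved bots never request the chunk endpoint at all.
    return "User-agent: *\nDisallow: /chunk\n", 200, {"Content-Type": "text/plain"}

@app.route("/chunk", methods=["POST"])
def chunk():
    # Crawlers fetch with GET, so a POST-only route answers them with
    # 405 Method Not Allowed even if they ignore robots.txt.
    i = int(request.form.get("i", 0))
    if i >= len(CHUNKS):
        return jsonify({"done": True})
    return jsonify({"done": False, "html": CHUNKS[i]})

if __name__ == "__main__":
    app.run()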
Pagination is still important: some pages without pagination would be so long (1MB+) that feeding them in their entirety to any bot would be just silly - most of their content would be ignored (Bing is already generating page size errors as it is).
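Roughly how I picture the split, assuming the body is cut at paragraph boundaries into ~200kB pieces (the function and sizes are illustrative only):

def split_into_chunks(paragraphs, max_bytes=200 * 1024):
    chunks, current, size = [], [], 0
    for p in paragraphs:
        p_size = len(p.encode("utf-8"))
        # Start a new chunk once adding this paragraph would cross ~200kB.
        if current and size + p_size > max_bytes:
            chunks.append("\n".join(current))
            current, size = [], 0
        current.append(p)
        size += p_size
    if current:
        chunks.append("\n".join(current))
    return chunks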
So, in reality, we are talking about an even more complicated setup:
Real users will only see
http://www.example.com/page1.html (if they scroll far enough, it will eventually be 1MB of text)
But bots will have to get several pages:
http://www.example.com/page1.html?_escaped_fragment_= (first 200kB chunk)
http://www.example.com/page1-1.html?_escaped_fragment_= (second 200kB chunk)
http://www.example.com/page1-2.html?_escaped_fragment_= (third 200kB chunk)
http://www.example.com/page1-3.html?_escaped_fragment_= (fourth 200kB chunk)
http://www.example.com/page1-4.html?_escaped_fragment_= (fifth 200kB chunk)
And all these extra pages will have to be linked from the first (canonical, if you will) fragment page, http://www.example.com/page1.html?_escaped_fragment_=
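Something like this is what the snapshot side would have to emit for each fragment page (the page1-N naming follows the URLs above; the exact link markup is my assumption, the AJAX-crawling scheme doesn't prescribe one):

def snapshot_html(slug, chunk_index, total_chunks, chunk_html):
    # page1.html?_escaped_fragment_= is chunk 0; page1-1.html etc. follow.
    def url(i):
        name = slug if i == 0 else "%s-%d" % (slug, i)
        return "http://www.example.com/%s.html?_escaped_fragment_=" % name

    # Only the fragment pages carry these links; the JS version loads
    # further chunks via POST instead, so real users never see them.
    links = "".join(
        '<a href="%s">part %d</a> ' % (url(i), i + 1)
        for i in range(total_chunks)
        if i != chunk_index
    )
    return "<html><body>%s<nav>%s</nav></body></html>" % (chunk_html, links)

print(snapshot_html("page1", 0, 5, "<p>first 200kB chunk...</p>"))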
Real users won't need those links, so you see, this is where the idea that bots and people see the same content starts to break down. I know, this is a little more than a wee bit complicated, and that's why I'm almost certain Gbot will not get it right (and Bing will be simply lost).
@FranticFish: sorry for my previous comment, I forgot that this actually is a LMGTFY type of forum due to the ToS policy against external links. Just wanted to say that I did Google it, but nothing useful turned up; or, more accurately, the answers I got created more open questions than I had when I started, hence this thread.