Forum Moderators: open
Since you confirmed that the version they use is the "fresh crawl" version, you already know it quite exactly ... if you include in your page code the time and date of the moment the page was requested by Googlebot, you'll know it even more exactly. Or did I miss your point?
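One minimal way to do that timestamp trick, sketched in Python (the function name and the comment format are just illustrative, not anything Google requires):

```python
from datetime import datetime, timezone

def render_page(body_html):
    """Append the exact serve time to the page as an HTML comment.

    Whatever copy Googlebot cached will then carry the timestamp of
    the moment it fetched the page, so you can match cache to crawl.
    """
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")
    return body_html + "\n<!-- served: " + stamp + " -->"
```

Viewing Google's cached copy of the page then shows the comment with the fetch time in the source.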
I want to know which version of my file Google got from the deep crawl.
Did you look into your log analysis? You should be able to see which version was cached during the deep crawl. User agent and freshbot vs. deepbot tracking is explained at the Googlebot: Deepbot and Freshbot FAQ and Information [webmasterworld.com].
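If your log tool doesn't break this out, a few lines of Python can pull the Googlebot hits from a raw access log. This assumes the common Apache "combined" log format; the sample line in the test is made up:

```python
import re

# Matches Apache "combined" format:
# ip ident user [date] "method path proto" status bytes "referer" "agent"
LINE = re.compile(
    r'^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+)[^"]*" (\d+) \S+ "[^"]*" "([^"]*)"'
)

def googlebot_hits(lines):
    """Return (date, path) pairs for requests whose user agent mentions Googlebot."""
    hits = []
    for line in lines:
        m = LINE.match(line)
        if m and "Googlebot" in m.group(6):
            date = m.group(2).split(":")[0]  # e.g. 15/Jun/2003
            hits.append((date, m.group(4)))
    return hits
```

Comparing the dates of those hits against the dates you edited a file tells you which version each crawl could have picked up.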
I admit, this doesn't tell you exactly which version was cached by deepbot, or which version your current rankings are based on (it could even be a different one). However, I wouldn't stress too much until at least a month has passed ... I wouldn't change optimization earlier than six weeks after a deep crawl ... as long as there's no emergency (in case you did something seriously wrong).
The deep crawl came on the 15th. I updated my site around the 16th/17th. And the deep crawl CAME again on the 26th for some files, including my main page.
Because deepbot also crawls the non-www version of my site, I really can't tell from the log whether the bot is crawling www.mysite.com or mysite.com for certain files. I have many files which have been crawled twice.
Hmmm. Hmmm? As long as the non-www version simply redirects to the www version, or both share the same root and you didn't duplicate your pages, there's only one page crawled per domain - which would be the same for both and easy to track. I fear I don't get it ... :/
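The reason you can't tell the two hostnames apart is that the default log format doesn't record the Host header. If you add it (Apache's `%v` or `%{Host}i` at the start of the LogFormat, or one log file per vhost), splitting the Googlebot hits per host becomes trivial. A sketch assuming the host is the first field of each line (the hostnames in the test are placeholders):

```python
from collections import Counter

def hits_by_host(lines):
    """Count Googlebot requests per host.

    Assumes each log line starts with the virtual host / Host header
    value (e.g. Apache LogFormat beginning with "%v" or "%{Host}i").
    """
    counts = Counter()
    for line in lines:
        if "Googlebot" in line:
            host = line.split(" ", 1)[0]
            counts[host] += 1
    return counts
```

That way you can see at a glance whether deepbot really fetched the same file under both www.mysite.com and mysite.com.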
It would be nice to find a 'deep cache' Easter egg, but I'm not sure that Google is very keen to make it so easy for us to peek into their data.