The more I think about how they could store and associate redirects, the more the 'ranking lag' seems to make sense, because they would have to get from the spidering of the page (noticing the redirect) to updating the association (in the data they use for calculations) and finally to the re-calculation of the inbound link effect (the PageRank calculation)... It would probably take some time to get all the way through that process, especially on a larger site.
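Something like this is what I picture, as a minimal sketch in Python (all the names here are hypothetical, and a plain inbound-link count stands in for the real PageRank math). The point is just that each stage runs on its own cycle, so the redirect only shows up in rankings after all three stages have run:

```python
# A minimal sketch of the three-stage lag, assuming each stage runs on
# its own schedule. Everything here is hypothetical; an inbound-link
# count stands in for the actual PageRank calculation.
from dataclasses import dataclass, field

@dataclass
class RankingPipeline:
    pending: list = field(default_factory=list)      # crawl results awaiting the graph update
    link_graph: dict = field(default_factory=dict)   # source URL -> URL it redirects/links to

    def spider(self, url, target):
        # Stage 1: spider the page and notice the redirect (or link).
        self.pending.append((url, target))

    def update_associations(self):
        # Stage 2: fold crawl results into the data used for calculations.
        # Until this runs, calculations still see the old associations.
        for source, target in self.pending:
            self.link_graph[source] = target
        self.pending.clear()

    def recalculate(self):
        # Stage 3: re-run the inbound-link calculation over the updated
        # graph. Only after this pass do rankings reflect the redirect.
        scores = {}
        for target in self.link_graph.values():
            scores[target] = scores.get(target, 0) + 1
        return scores
```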
Imagine just an 80-page domain (Site A) being moved to a different domain (Site B), and how it could play out on the ranking calculation side of the process through just a single redirect (I've put rough numbers on it in a sketch after the walkthrough)...
Site A is moved to Site B.
Site A pages 1 through 10 are spidered and the 'inbound link' data is updated and sent to the calculation process.
Site A loses the weight from the top 10 pages during the process.
(Site A loses its most important inbound link weight, causing it to be lowered in the rankings.)
Site B gains the weight from the top 10 pages during the process, but only has 1/8th of the content.
(Site B only has the link weight from the 10 pages previously on Site A, and has very few internal links and 'deep links' in the calculation process.)
Site A pages 11 through 40 are spidered and the 'inbound link' data is updated and sent to the calculation process.
Site A loses the weight from the next 30 pages.
(Now half of the pages and most of the inbound links are in the calculation process for Site B. Site A has very little 'link credibility', but still retains half the content.)
Site B gains the weight from the next 30 pages.
(Now Site B has most of the inbound links associated with it in the calculation process, but only half the pages of Site A.)
The final 40 pages are spidered and the 'inbound link' data is updated and sent to the calculation process.
Site B finally replaces Site A in the rankings.
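To put toy numbers on that walkthrough (all hypothetical, with the weights front-loaded to match the assumption that the top 10 pages carry the most inbound link weight), here's a small Python sketch of how the weight could shift batch by batch:

```python
# Toy numbers for the 80-page move above: as each batch is respidered,
# its weight moves from Site A to Site B. Weights are hypothetical and
# front-loaded, since the top 10 pages carry the most inbound weight.
batches = [(10, 50.0), (30, 30.0), (40, 20.0)]   # (pages respidered, weight moved)
total_pages = sum(pages for pages, _ in batches)      # 80
site_a_weight = sum(weight for _, weight in batches)  # 100
site_b_weight = 0.0
site_b_pages = 0

for pages, weight in batches:
    site_a_weight -= weight   # Site A loses the batch's inbound weight...
    site_b_weight += weight   # ...and Site B gains it
    site_b_pages += pages
    print(f"after {site_b_pages:2d} pages: "
          f"Site A weight {site_a_weight:5.1f}, "
          f"Site B weight {site_b_weight:5.1f}, "
          f"Site B content {site_b_pages / total_pages:.0%}")
```

With those numbers, by the second batch Site B already holds 80% of the weight but only half the content, which is exactly the mismatch in the walkthrough.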
It's actually a bit more complicated when you think about internal links and other possible changes which may impact rankings, and I can see how it would take some time to get all the way through the re-calculation and ranking process, some of which must depend on the spidering frequency of the pages.
The more redirects present, the more complicated the process gets... Personally, I do sometimes wonder whether they request redirect locations immediately. I think I would, but I wouldn't follow the links on the page itself; at least that's my approach right now anyway.
Edited: Terminology... Technically, I wouldn't 'follow' redirects, but would rather store the location being redirected to and request that location before moving on to the next URL in the queue. (That's my thinking right now, today, anyway.) So technically I would not 'follow' the redirect, but would insert the redirect's target location into the crawl before moving on to the next location. It's a bit of a technical difference in the process, but still a difference.
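For what it's worth, here's a rough Python sketch of that approach (a hypothetical crawler of my own, using the requests library; not anyone's actual implementation). The redirect target is stored and pushed to the front of the queue so it's the very next URL requested, rather than being fetched inline as part of the same request:

```python
# Hypothetical crawler: don't 'follow' redirects inline; store the
# Location and insert it at the front of the queue so it's requested
# next, before moving on to the rest of the crawl.
from collections import deque
from urllib.parse import urljoin

import requests  # assuming a plain HTTP client; any would do

def crawl(seed_urls):
    queue = deque(seed_urls)
    seen = set(seed_urls)
    while queue:
        url = queue.popleft()
        response = requests.get(url, allow_redirects=False)
        if response.status_code in (301, 302, 303, 307, 308):
            # Store the target and put it at the front of the queue
            # (resolved against the current URL in case it's relative).
            target = urljoin(url, response.headers.get("Location", ""))
            if target and target not in seen:
                seen.add(target)
                queue.appendleft(target)
            continue  # nothing else to do with a redirect response
        # ...otherwise parse response.text for links and append them
        # to the back of the queue as usual...
```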