londrum - 11:00 am on Jul 24, 2011 (gmt 0)
i had a problem like this once, and found out that sticking a disallow on a directory in robots.txt does not actually mean what most of us think it means.
search engines are not obliged to remove that directory from their index just because you have disallowed it. they usually do eventually, of course, which is probably why we assume it works that way, but all a disallow really does is stop them from spidering the directory again. anything that is already in their index stays there.
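for example, a block like this only stops bots from fetching the directory again (the /images/ path is just a stand-in for whatever you are actually blocking):

User-agent: *
Disallow: /images/

it says nothing about what should happen to urls from that directory that are already indexed.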
what you need to do is remove the disallow from robots.txt and put a noindex header on the images instead. you can get PHP to send a header like that (X-Robots-Tag: noindex). once the images have been re-spidered and dropped from the index, you can put the disallow back in robots.txt.
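just as a rough sketch of the PHP side (the /images/ folder, the ?file= parameter and the script name are only examples here, you'd wire it up to your own setup, e.g. with a rewrite rule):

<?php
// serve an image but tell bots not to index it.
// assumes the real files sit in ./images/ and the request
// looks like image.php?file=photo.jpg (both just examples)
$file = isset($_GET['file']) ? basename($_GET['file']) : '';  // strip any directory tricks
$path = __DIR__ . '/images/' . $file;

if (!is_file($path)) {
    header('HTTP/1.1 404 Not Found');
    exit;
}

header('X-Robots-Tag: noindex');            // the header the bots act on
header('Content-Type: image/jpeg');         // adjust for png/gif etc.
header('Content-Length: ' . filesize($path));
readfile($path);

the bots have to be able to crawl the images to see that header, which is why the disallow has to come off first.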
that might explain why you are getting 403s for the images. the imagebot is still trying to pick up images that it already has in its index.