This is a complex application, since "behaviour over time" enters into it. That is, you need to track a specific client over time and over multiple HTTP requests. This requires a database to be searched and updated for each incoming HTTP request. Only if a particular client from a particular IP address fails to fetch (in your example) a non-cacheable image over some period of time (say 20 seconds, or after a certain number of additional requests from that same IP) would you want to consider blocking it.
This will likely require a database record for each IP address (perhaps keyed by an MD5 hash of the IP to speed lookups), containing the time of the last request, a list of URLs that will require the non-cacheable image to be fetched, and a counter indicating the remaining time or remaining number of requests after one of those URLs is fetched before you will consider the client at that IP to be malicious.
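To make the record-keeping concrete, here's a minimal in-memory sketch in Python. This is just an illustration of the tracking logic described above, not a production implementation -- a real version would need a persistent, shared database, and the thresholds, page list, and image path below are invented example values (the 20-second window is the figure from your example):

```python
import time

# Example parameters -- invented for illustration, not tuned values.
GRACE_SECONDS = 20                               # time allowed to fetch the image
GRACE_REQUESTS = 5                               # or this many further requests
TRAP_PAGES = {"/index.html", "/articles.html"}   # pages that embed the image
TRAP_IMAGE = "/img/beacon.png"                   # the non-cacheable image

# In-memory stand-in for the database, keyed by IP address.
clients = {}

def record_request(ip, path, now=None):
    """Update per-IP state for one request; return True if the client
    has failed to fetch the trap image within the allowed window."""
    now = now if now is not None else time.time()
    rec = clients.setdefault(ip, {"deadline": None, "requests_left": None})
    rec["last_seen"] = now

    if path == TRAP_IMAGE:
        # Image fetched -- behaves like a browser, so clear the countdown.
        rec["deadline"] = None
        rec["requests_left"] = None
        return False

    if rec["deadline"] is not None:
        # Countdown is running: charge this request against the allowance.
        rec["requests_left"] -= 1
        if now > rec["deadline"] or rec["requests_left"] <= 0:
            return True  # never fetched the image in time

    if path in TRAP_PAGES and rec["deadline"] is None:
        # A trap page was served: the image must now arrive in time.
        rec["deadline"] = now + GRACE_SECONDS
        rec["requests_left"] = GRACE_REQUESTS

    return False
```

A browser that loads `/index.html` and then the image clears its countdown; a scraper that keeps requesting pages without ever fetching the image trips either the time limit or the request counter, whichever comes first.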
The main drawback is server load: the script will need to do a database lookup and update for every HTTP request arriving at your server, which makes this a poor approach for a busy site.
This approach also suffers from the "closing the barn door after the horse has already left" flaw: by the time you declare a client to be malicious, it may have already collected what it wanted. If you set the parameters too strictly, you risk blocking legitimate users; too loosely, and you let the damage be done.
Personally, I prefer the simpler methods described in two "bad-bot" scripts published here on WebmasterWorld -- the first by Key_Master [webmasterworld.com] and the second by xlcus [webmasterworld.com], with later versions of Key_Master's script "enhanced" by myself and others [webmasterworld.com], and of xlcus's script enhanced by AlexK [webmasterworld.com].
However, if you decide to pursue development of a new behaviour-based blocking script, some of the ideas in those threads may be helpful.
Jim