My access logs go back to September 2011, so I did a little experiment: I pulled out the first few page fetches by Googlebot from that time to see when each page was next fetched (and which hostname the fetch was directed at):
28/Sep/2011:11:00:04 ... 301 ... example.com
28/Sep/2011:11:00:32 ... 200 ... www.example.com
20/Jun/2012:23:40:51 ... 200 ... www.example.com
(nearly 9 months between re-fetches of the page on www.example.com, and no re-fetch of the example.com URL in the nearly 11 months since)
28/Sep/2011:11:00:05 ... 301 ... example.com
28/Sep/2011:11:00:33 ... 200 ... www.example.com
20/May/2012:10:28:50 ... 200 ... www.example.com
28/Sep/2011:11:00:06 ... 301 ... example.com
28/Sep/2011:11:00:36 ... 200 ... www.example.com
15/Jan/2012:08:54:03 ... 200 ... www.example.com
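
Something along these lines will pull the same information out of combined-format access logs; it's only a rough sketch — the file names and URL path below are placeholders, and it assumes the virtual host (%v) is logged as the last field of each line, so adjust for your own LogFormat:

    #!/usr/bin/env python3
    # Scan rotated access logs, keep only Googlebot requests for one URL, and
    # print when each fetch happened, what status it got, and which hostname
    # it was directed at. LOG_FILES and PATH are placeholders; the script
    # assumes %v (the virtual host) is the last field on each line.
    import re

    LOG_FILES = ["access.log.2011", "access.log.2012"]   # placeholder names
    PATH = "/products/widget.html"                        # placeholder URL

    fetch_re = re.compile(
        r'\[(?P<ts>[^]]+)]'          # [28/Sep/2011:11:00:04 +0000]
        r' "\S+ (?P<path>\S+)'       # "GET /products/widget.html
        r'[^"]*" (?P<status>\d{3})'  # ...HTTP/1.1" 301
    )

    for fname in LOG_FILES:
        with open(fname, errors="replace") as fh:
            for line in fh:
                if "Googlebot" not in line:
                    continue
                m = fetch_re.search(line)
                if m and m.group("path") == PATH:
                    vhost = line.rsplit(None, 1)[-1]   # %v as last field
                    print(m.group("ts"), m.group("status"), vhost)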
So it looks like Google may actually be treating the 301'd content on example.com as effectively deleted and not coming back for it; the problem is that I have so many pages that the average gap between the last 200 fetch of a page pre-June 2011 and the "new" 301 fetch is enormous. It could literally take years for Googlebot to fetch, and react to, a 301 for every page it has queued on the wrong hostnames. The domain has been active for 5+ years, so it's possible that a lot of what is still showing in the index was fetched before I set the preferred domain in GWT a year ago.
I would guess, apart from manual intervention by Google, that a custom robots.txt disallowing everything on example.com and 188.8.131.52 would be the only way to quickly shed the unwanted URLs from their index.
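
For what it's worth, the "disallow everything" robots.txt itself is just two lines:

    User-agent: *
    Disallow: /

The catch is that it has to be served only for requests to example.com and the IP address, and not for www.example.com, so on a shared docroot it would need to be served conditionally on the Host header (e.g. via a rewrite rule or a separate vhost) rather than dropped in as a single static file.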