>> If the robots are disallowed from fetching those URL-paths, they...will never see the redirects from old to new URLs.
> Even if the redirects are done in .htaccess?
If you tell a (robots.txt-compliant) robot (using a Disallow in robots.txt) not to fetch an old URL, then it won't request that old URL from your server, and so will never trigger the 301 redirect in your .htaccess that you intended to use to "tell it" the new URL. Simple as that.
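To make that concrete, a purely hypothetical robots.txt entry along these lines (the path is just an osCommerce-style example, not taken from your actual file):

    User-agent: *
    Disallow: /product_info.php

means a compliant robot will never request any /product_info.php?... URL from your server in the first place, so whatever 301 rule you have waiting for that URL in .htaccess never gets a chance to fire for that robot.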
>> I'd dump the robots.txt directives
> Are you specifically talking about my main OSC PHP product URLs that have been replaced with new SEO URLs? Or, do you mean ALL dupe content URL variations?
I mean all duplicate-content URLs: use the 301 redirects to "correct" them in the SE indexes, keeping in mind the clarification above.
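As a rough sketch (the hostname, product ID, and new URL here are made-up placeholders, not your real setup), an old osCommerce query-string URL can be 301'd to its SEO replacement in .htaccess with mod_rewrite:

    RewriteEngine On
    RewriteCond %{QUERY_STRING} ^products_id=42$
    RewriteRule ^product_info\.php$ http://www.example.com/blue-widget-p-42.html? [R=301,L]

The trailing "?" on the substitution drops the old query string, so the robot (and the visitor) lands on the clean URL with a 301 status telling the SEs to update their index entry.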
>> as long as you don't continue to link to them, that is.
> I've changed all my *internal* links to new SEO URLs, but there are still tons of external links (websites, blog posts, etc.) that are pointing to old and dupe content urls.
In that case, the SE robots will continue to request these obsolete and non-SE-friendly URLs. Over time, as these old links disappear from the Web, the spidering frequency will decrease, but you'll still need the 301s in place to redirect them. You may be able to accelerate their "fading-out" by asking your major linking partners to update their links to your site.
In simplest terms, use robots.txt for fetching/bandwidth control, on-page meta-robots tags for page-content and SE-indexing (results-listing) control, and URL redirection for URL control. In other words, robots.txt and meta-robots protect the contents of the box, while redirection simply changes the labeling on the outside of the box.
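For example, a page you want robots to keep fetching (so they still follow its links) but not list in the results can carry the standard on-page tag:

    <meta name="robots" content="noindex,follow">

Neither robots.txt nor that tag changes which URL the page lives at; only the redirect does that.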