There is no duplicate-content "penalty" -- that's a very pernicious Webmaster Myth. While there may indeed be "penalties" for massive quantities of intentional duplicate content, there is no penalty for minor, accidental duplicate content.
The real "penalty" is that you have two (or more) URLs competing with each other for incoming links and PageRank/link-popularity, which dilutes the ranking of each of them.
Your RewriteRule "creates" nothing. All it does is tell the server to serve the content from the internal
filepath /cgi-bin/news.cgi?a=article&ID=1234 when an HTTP request for the
URL /news/1234.html is received from a Web client.
Keeping URLs and filepaths as separate and distinct concepts, associated *only* by the action of a server, will help a lot when thinking about rewrites and redirects.
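For reference, an internal rewrite of the kind described above typically looks something like this (a sketch only -- your actual rule may differ in its pattern details):

```apache
# Internal rewrite: when a client requests the "friendly" URL
# /news/1234.html, serve the output of the CGI script instead.
# No [R] flag, so this is server-internal; the client never sees
# the /cgi-bin/ filepath.
RewriteRule ^news/([0-9]+)\.html$ /cgi-bin/news.cgi?a=article&ID=$1 [L]
```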
"The only problem is that it does not 301 the /cgi-bin/news.cgi?a=article&ID=$1
version to /news/1265144816.html so I might get penalized for duplicate content."
Were you expecting it to? That's not what your rule does... So you need a second (and complementary) rule to implement that function:
RewriteCond %{THE_REQUEST} ^[A-Z]+\ /cgi-bin/news\.cgi\?a=article&ID=([0-9]+)(&[^\ ]*)?\ HTTP/
RewriteRule ^cgi-bin/news\.cgi$ http://www.example.com/news/%1.html? [R=301,L]
(Note the trailing "?" on the substitution: it discards the original query string. Without it, mod_rewrite would re-append ?a=article&ID=... to the redirected URL.)
This new rule should precede your rewrite code posted above, and it should precede your domain canonicalization redirect and any other less-specific redirects. These redirects should then be followed by your existing internal rewrites, again in order from most-specific to least-specific.
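Putting it all together, the overall ordering might look like the following sketch. The domain-canonicalization rule and the internal-rewrite pattern shown here are assumptions for illustration -- substitute your own:

```apache
RewriteEngine On

# 1) External redirects (client-visible 301s), most-specific first.
# Redirect direct client requests for the script-path to the friendly URL:
RewriteCond %{THE_REQUEST} ^[A-Z]+\ /cgi-bin/news\.cgi\?a=article&ID=([0-9]+)(&[^\ ]*)?\ HTTP/
RewriteRule ^cgi-bin/news\.cgi$ http://www.example.com/news/%1.html? [R=301,L]

# Domain canonicalization (assumed example): redirect non-canonical hostnames.
RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

# 2) Internal rewrites (server-internal, no [R] flag), again
#    most-specific first. This serves the script output for the
#    friendly URL without exposing the /cgi-bin/ path:
RewriteRule ^news/([0-9]+)\.html$ /cgi-bin/news.cgi?a=article&ID=$1 [L]
```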
The complex RewriteCond is required to differentiate the /cgi-bin/news.cgi?a=article&ID=1234 script-path being directly requested by a client
as a URL from the same path being internally requested as the result of your existing internal rewrite rule. The %{THE_REQUEST} variable holds the original HTTP request line exactly as received from the client, and it is never modified by mod_rewrite, so the condition matches only direct client requests. Without this test, the two rules would unconditionally countermand each other, resulting in an 'infinite' redirect/rewrite loop.
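To make the two cases concrete, here is a trace of what each request path looks like under the rules above:

```apache
# Case 1: client requests the friendly URL /news/1234.html
#   THE_REQUEST = "GET /news/1234.html HTTP/1.1"
#   -> RewriteCond does not match, no redirect issued
#   -> internal rewrite serves /cgi-bin/news.cgi?a=article&ID=1234
#
# Case 2: client requests /cgi-bin/news.cgi?a=article&ID=1234 directly
#   THE_REQUEST = "GET /cgi-bin/news.cgi?a=article&ID=1234 HTTP/1.1"
#   -> RewriteCond matches, 301 redirect to /news/1234.html
#   -> client re-requests the friendly URL, landing in Case 1
```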
Jim