You can add an "Allow" for "/links.html" after the "Disallow" for "/links.htm", but this will only work for the major search engines which support the "Allow:" extension to the Standard for Robot Exclusion. Many search engines don't support this extension.
If you want a solution that works for all robots, rename one of the two pages so that a prefix match no longer causes a "collision" between their names.
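For example, if "/links.html" were renamed to something like "/hotlist.html" (a hypothetical name, chosen only for illustration), the robots.txt record could simply be:

    User-agent: *
    # Only /links.htm starts with this prefix now, so no other page is caught
    Disallow: /links.htm

and "/hotlist.html" would remain crawlable by every robot, since it no longer shares a prefix with the disallowed path.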
The prefix-matching behavior of robots.txt is one of many things to take into account when naming resources and directories -- along with access control, cache control, HTTP protocol requirements (naming restrictions), maintainer privilege levels (who in your organization has access to maintain which directories?), server performance, site organization, and SEO. Picking a good "name" (a URL) for a resource is not something that should or can be done hastily; it requires careful consideration.