If you just want the pages out of Google, a 401, 404, or any other 4xx status would do; so would a page returning HTTP status 200 with a <META NAME="ROBOTS" CONTENT="NOINDEX"> element. Google would still need a link to each URL in order to crawl it and see that it's dead, or you could submit the URLs yourself.
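For the status-code route, a line in .htaccess will do it; a minimal sketch, assuming Apache and with /old-page.html standing in for one of your dead URLs:

    # Return 410 Gone for a retired URL (a 404 works just as well)
    Redirect gone /old-page.html

For the NOINDEX route, the element goes in the <head> of the still-live page:

    <head>
      <meta name="robots" content="noindex">
    </head>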
Another approach is to block the URLs with /robots.txt or the META robots exclusion and then use Google's automatic URL removal tool.
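The /robots.txt block is short; a sketch, where the paths are placeholders for your own dead URLs:

    User-agent: Googlebot
    Disallow: /old-page.html
    Disallow: /old-section/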
If you have links to those URLs and want a chance of passing the link benefit to their replacements, an HTTP 301 redirect or a META refresh element with a zero-second delay will often achieve this. Google has at times done strange things with redirects, so I would use this approach only if the destination URLs already have substantially more or better links than the old ones.
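Both are one-liners; a sketch, again assuming Apache, with example.com URLs standing in for your own:

    # .htaccess: permanent (301) redirect from the old URL to its replacement
    Redirect 301 /old-page.html http://www.example.com/new-page.html

Or, in the <head> of the old page, the zero-second META refresh:

    <meta http-equiv="refresh" content="0; url=http://www.example.com/new-page.html">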
Lastly, you could just put up a page linking to the new page. This passes about 97% of the PageRank, though not the full benefit of the multiple links to the old URLs.
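Such a pointer page can be as plain as this (it returns a normal 200 status; the URL is a placeholder):

    <p>This page has moved to
    <a href="http://www.example.com/new-page.html">its new location</a>.</p>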
You could also drop a bit of code into the .htaccess file that shows the main home page whenever a page is missing. It's search engine friendly because the spider is still told the page is missing (the 404 status goes out as normal), but the visitor never knows.
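The sketch below assumes Apache and that /index.html is your home page. Note the local path: give ErrorDocument a full URL instead and Apache issues a 302 redirect, losing the 404 status.

    # Serve the home page as the body of every 404 response;
    # crawlers still receive the 404 status code
    ErrorDocument 404 /index.html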
I do this to cover old dead pages, but it also picks up typos: if someone mistypes a URL, they are sent to the main home page instead, which sometimes helps keep a visitor on-site.