We are guilty too! We download whole sites for research purposes and then browse them offline, with the ISP connection switched off. Most good offline downloaders can parse both relative and absolute links.
We use WinHTTrack.. a simple freeware program which can follow links and download one site, or many sites to any depth. It also downloads much faster than saving one page at a time because, like LeechFTP, it runs several threads at once. It follows robots.txt.
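For anyone curious what these downloaders actually do per page, it boils down to three steps: parse robots.txt, extract the links, and resolve relative links against the page's URL. A rough Python sketch, using only the standard library (the HTML snippet, robots rules, and user-agent name are made-up examples, not WinHTTrack's actual internals):

```python
# Minimal sketch of one crawl step: parse robots.txt, extract <a href>
# links, resolve relative/absolute URLs, and keep only the allowed ones.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.robotparser import RobotFileParser

class LinkExtractor(HTMLParser):
    """Collects href attributes from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawlable_links(html, base_url, robots_txt, agent="mybot"):
    """Resolve every link against base_url; keep those robots.txt allows."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    extractor = LinkExtractor()
    extractor.feed(html)
    # urljoin handles both relative hrefs and already-absolute ones.
    resolved = [urljoin(base_url, href) for href in extractor.links]
    return [u for u in resolved if rp.can_fetch(agent, u)]

page = ('<a href="/about.htm">About</a> '
        '<a href="https://example.com/private/x.htm">X</a>')
robots = "User-agent: *\nDisallow: /private/"
# Keeps only the allowed, resolved link.
print(crawlable_links(page, "https://example.com/index.htm", robots))
```

A real downloader repeats this for every fetched page and queues the allowed links; running several such fetch loops in parallel threads is what makes tools like WinHTTrack fast.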
We do not do it to steal code, and I would guess that most offline browsing has no nefarious or cheating purpose. But if you run mainly a marketing or advertising site, you may have reason to suspect other motives when people download material en masse.
People do it to us regularly, and we are pleased they find our content useful enough to download for later reading.. even the whole site. When we find that people have published our material under their own name, we do pursue it, and we use several methods to detect such illegal copying. It is harder to find breaches of copyright when people make multiple copies to distribute offline to others. That is still illegal, but the mere act of downloading even a whole site for personal use is not, I think, a problem at all.
Your logs still reflect the page views, at least the first time someone fetches a page, and server-based and browser-based caching already introduces random error into your page counts. You may also still see hits from saved copies: whenever a locally stored page (e.g. my documents/yoursite/blah.htm) references an absolute URL, the reader is online, and they didn't download all the page elements, those requests still come back to your server.
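The reason those stray hits appear is simple: a relative link in a saved page resolves against the local copy, while an absolute link keeps pointing at the live server. A tiny Python check illustrates the distinction (the URLs are made-up examples):

```python
from urllib.parse import urlparse

def is_absolute(href):
    # An absolute URL carries its own host, so a browser opening the
    # saved page will fetch it from the live server (and hit your logs);
    # a relative href is served from the local copy instead.
    return bool(urlparse(href).netloc)

print(is_absolute("https://example.com/img/logo.gif"))  # True: hits your server
print(is_absolute("img/logo.gif"))                      # False: local copy
```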
As publishers ourselves, we have to accept that publishing information on the Web means allowing people to view it, whether online or offline, though not to breach copyright by copying code or reproducing content on other domains without clearance. The same applies whenever you publicly publish anything, such as a book.