It looks like HOST_NAME and test_url are constructed incorrectly: test_url should contain only the local URL-path info. Specifically, the Host: header should contain only "www.example.com", and test_url should contain only the server-relative path to the page, starting with "/". Example:
    HEAD / HTTP/1.1
    Host: www.example.com
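As a minimal sketch of the split described above (HOST_NAME and test_url are the hypothetical variable names from the question; the request is built by hand here just to make the layout visible):

```python
HOST_NAME = "www.example.com"  # host only: no scheme, no path
test_url = "/"                 # server-relative path, must start with "/"

# The request line carries only the path; the host goes in the Host: header.
request = (
    f"HEAD {test_url} HTTP/1.1\r\n"
    f"Host: {HOST_NAME}\r\n"
    f"Connection: close\r\n"
    f"\r\n"
)
print(request)
```

Sending `request` over a plain TCP socket to port 80 of HOST_NAME would then produce a valid HTTP/1.1 HEAD exchange.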
If you intend to use this code to access many Web sites, I'd like to ask that you add a User-Agent: header and provide us with a Web page explaining why you're accessing our sites. Otherwise, I regret that on my sites you'll always get a 403 response, unless I check your Web page and decide to allow your user agent. I'd also recommend that you read and follow robots.txt if you intend to fetch multiple URLs from other sites; it's the polite thing to do, and it saves you from being added to blacklists.
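To sketch the robots.txt advice, Python's standard `urllib.robotparser` can check whether a given user agent may fetch a path. The robots.txt content and the "MyChecker" agent string below are made-up examples; in practice you would fetch `http://<host>/robots.txt` and use your own identifying string:

```python
from urllib import robotparser

# Hypothetical robots.txt content; normally fetched from the target site.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Assumed agent string: product/version plus a URL explaining the bot.
USER_AGENT = "MyChecker/1.0 (+http://example.com/about-bot.html)"

print(rp.can_fetch(USER_AGENT, "/index.html"))  # allowed
print(rp.can_fetch(USER_AGENT, "/private/x"))   # disallowed
```

Including a URL in the User-Agent string, as shown, is exactly the kind of self-identification the answer asks for.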
No, it's not a bot or something that grabs site content without permission. (I'm very sensitive about this, please!) It's for a customer who keeps asking why I can't access *his* file and provide the service he bought.
And thank you for your help! It works now, and I will add the User-Agent line.