I'm working on a large web cache, and the server I'm using is limited in resources; fetching web pages with libcurl is killing the server's performance.
The feature I need most is the ability to read HTTP status headers, e.g. a 301/302 redirect, after which the script should follow the new URL and fetch the page data.
Curl can do this via:
curl_setopt ($ch, CURLOPT_FOLLOWLOCATION, 1);
Is there any other function to do this?
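One lighter-weight alternative is PHP's native HTTP stream wrapper, which follows `Location` redirects on its own. This is a minimal sketch, assuming `allow_url_fopen` is enabled in php.ini; the URL is a placeholder:

```php
<?php
// Fetch a page with PHP's built-in stream wrapper instead of libcurl.
// 'follow_location' makes it chase 301/302 redirects automatically.
$context = stream_context_create([
    'http' => [
        'method'          => 'GET',
        'follow_location' => 1,   // follow 301/302 Location headers
        'max_redirects'   => 10,  // give up after 10 hops
        'timeout'         => 15,  // seconds
    ],
]);

$body = file_get_contents('http://example.com/', false, $context);
if ($body === false) {
    die("fetch failed\n");
}

// $http_response_header holds the raw headers of every hop, so the
// intermediate 301/302 status lines are still inspectable:
foreach ($http_response_header as $header) {
    echo $header, "\n";
}
```

The redirect chain is still visible via `$http_response_header`, so you get the same header information `CURLOPT_FOLLOWLOCATION` gives you, without libcurl's per-handle overhead.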