| 12:32 pm on Nov 14, 2004 (gmt 0)|
I'm working on a large web cache, and the server I'm using is limited in resources; using libcurl to fetch pages is killing the server's performance.
The feature I most need is reading the HTTP status headers (e.g. a 301/302 redirect); the script then has to follow the new URL and fetch that page's data.
Curl can do this via:
curl_setopt ($ch, CURLOPT_FOLLOWLOCATION, 1);
Is there any other function to do this?
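One lighter-weight option in PHP itself is the HTTP stream wrapper: `file_get_contents()` with an `http` stream context follows `Location` headers by default (the `follow_location` context option), and the raw response headers land in `$http_response_header`. If you'd rather do it by hand, the core logic is just spotting a 301/302 status line and its `Location` header. A minimal sketch of that parsing step, assuming `extract_location` as a made-up helper name:

```php
<?php
// Sketch: the header-parsing half of following redirects without libcurl.
// extract_location() is a hypothetical helper, not a built-in function.
// It returns the Location target if the raw response headers describe a
// 301/302 redirect, or null otherwise.
function extract_location(array $headers): ?string {
    $isRedirect = false;
    foreach ($headers as $line) {
        // Status line, e.g. "HTTP/1.1 302 Found"
        if (preg_match('#^HTTP/\d\.\d\s+30[12]\b#', $line)) {
            $isRedirect = true;
        } elseif ($isRedirect && stripos($line, 'Location:') === 0) {
            return trim(substr($line, strlen('Location:')));
        }
    }
    return null;
}
```

A fetch loop would then call this on `$http_response_header` after each request and re-fetch the returned URL until it comes back null (with a cap on hops, as CURLOPT_MAXREDIRS would give you).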
| 1:17 pm on Nov 16, 2004 (gmt 0)|
How about wget [gnu.org]?
| 2:26 pm on Nov 16, 2004 (gmt 0)|
Yeah, I thought of that recently and already implemented it.
WebmasterWorld is a Developer Shed Community owned by Jim Boykin.
© Webmaster World 1996-2014 all rights reserved