security / performance tradeoff in image uploads
do I put them in webroot, or use a php passthru?
mincklerstraat
11:23 am on Oct 15, 2004 (gmt 0)

I'm writing a script in which images, and clients' ability to upload images, are essential. It would also be nice to have these images in per-page subdirectories, to make archiving pages for later use easier. So I'm looking at basically two options, or more if there are any suggestions. The script will be under the webroot and used by different URLs on the same reseller account on a shared server (paid, not free), and probably even encoded. Some of the photos are big - 580x580, up to 25K - and each pageview will have about one big photo and up to eight 6K photos.

Option 1: the more security-conscious model
Uploaded photos are stored outside the webroot in a directory that is either owned by or in the same group as the webserver. Every time a new directory is added, the script chmods this image directory to be writeable, adds the directory, and chmods it back to non-writeable. Every time a photo or batch of photos is uploaded, the same happens to the individual destination directory.
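
A minimal sketch of that chmod wrap, assuming hypothetical paths, a form field named "photo", and a script running as (or in the group of) the webserver user:

<?php
// Hypothetical layout: image store outside the webroot, one subdirectory per page.
$imageRoot = '/home/account/imagestore';   // assumed path
$subdir    = $imageRoot . '/page123';      // example per-page subdirectory

// Open the parent up just long enough to create the new directory.
chmod($imageRoot, 0775);
if (!is_dir($subdir)) {
    mkdir($subdir, 0755);
}
chmod($imageRoot, 0555);                   // back to non-writeable

// The same wrap around an actual upload.
$safeName = '12345.jpg';                   // name generated by the script, not by the client
chmod($subdir, 0775);
move_uploaded_file($_FILES['photo']['tmp_name'], $subdir . '/' . $safeName);
chmod($subdir, 0555);
?>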

The photos are accessed through PHP - either mod_rewrite sends image/subdir/pic.jpg to imagemaker.php?subdir=subdir&pic=pic.jpg (the most likely scenario, to encourage ISP caching), or else the images are referenced directly to the PHP file itself in the HTML. The PHP file sends the appropriate content-type and cache headers and returns the image using either readfile() or imagecreatefromjpeg() and imagejpeg().
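
A sketch of what such an imagemaker.php could look like using the readfile() route; the rewrite rule in the comment, the storage path, and the cache lifetime are all assumptions:

<?php
// Assumed rewrite rule:
//   RewriteRule ^image/([^/]+)/([^/]+\.jpg)$ /imagemaker.php?subdir=$1&pic=$2 [L]

$storage = '/home/account/imagestore';     // assumed path outside the webroot

$subdir = isset($_GET['subdir']) ? $_GET['subdir'] : '';
$pic    = isset($_GET['pic'])    ? $_GET['pic']    : '';

// Allow only simple names: no slashes, no "..".
if (!preg_match('/^[A-Za-z0-9_-]+$/', $subdir) || !preg_match('/^[A-Za-z0-9_-]+\.jpg$/i', $pic)) {
    header('HTTP/1.0 404 Not Found');
    exit;
}

$file = $storage . '/' . $subdir . '/' . $pic;
if (!is_file($file)) {
    header('HTTP/1.0 404 Not Found');
    exit;
}

header('Content-Type: image/jpeg');
header('Content-Length: ' . filesize($file));
header('Last-Modified: ' . gmdate('D, d M Y H:i:s', filemtime($file)) . ' GMT');
header('Cache-Control: public, max-age=86400');   // encourage ISP/browser caching
readfile($file);
?>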

Ups: directory that is owned by / in group of apache (and potentially writeable) is out of the webroot

Downs: more scripting, php works harder, small performance hit (? how big?).

Note: it seems vBulletin has been using this technique for years already; but vBulletin is so common that thousands of people know its source. I've also seen it used on sites that don't seem to use open source or well-known CMSes. I wouldn't have a problem with having well-known source code.

Option 2: the more performance-minded option
Everything is the same as above, except the directories for the photos are inside the webroot and simply referenced directly in the HTML.

Ups: PHP doesn't get used for the images for every image view.

Downs: a directory that has ordinary read/write permissions - except that it is owned by the webserver or in the webserver's group, and is momentarily chmodded writeable by the server whenever photos are added - sits out in the webroot. That seems to me like a fairly minor security concern, and on shared hosting not really much more of a potential liability than Option 1.

I can also see potential problems when multiple people are adding photos (this shouldn't happen often - probably only a few edits per day). I'll probably need a db table to flag the moments when photos are actually being added, to avoid the scenario where the directory becomes unwritable at the moment a second person is trying to upload photos.
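
As an alternative to the db flag, an exclusive flock() on a small lock file would serialize the chmod/upload/chmod sequence; a sketch, with hypothetical paths and variable names:

<?php
// $subdir and $dest would come from the upload handler (hypothetical names).
$lock = fopen('/home/account/imagestore/.upload.lock', 'a');   // assumed path
if ($lock) {
    flock($lock, LOCK_EX);      // blocks until any other upload finishes
    chmod($subdir, 0775);
    move_uploaded_file($_FILES['photo']['tmp_name'], $dest);
    chmod($subdir, 0555);
    flock($lock, LOCK_UN);
    fclose($lock);
}
?>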

I'd be grateful for any advice in choosing one of these options or in suggesting others.

 

coopster
10:32 pm on Oct 29, 2004 (gmt 0)

This is a good question, yet it never took off. I was hoping it would bring more discussion to the topic of image uploads and security. Time to give it a bump ;)

Perhaps somebody can first explain the concern over an images directory that has permissions set to 777.

jatar_k
9:11 pm on Oct 30, 2004 (gmt 0)

Option 1: the more security-conscious model

if you are using the images in the website, then I don't see the point of storing them outside of the webroot unless you need to protect them from download or direct linking somehow. All that processing every time you need to serve an image seems like a lot of overhead for very little purpose.

>> Downs: more scripting, php works harder, small performance hit (? how big?).

small for each and every image served, but the page is already heavy (25K + 8 * 6K) - a medium-to-big page already - and that doesn't take into account the js, css, html and other images for menus or site logos that might need to be served per page view

>> Ups: directory that is owned by / in group of apache (and potentially writeable) is out of the webroot

huge waste of resources unless it is absolutely necessary and it doesn't sound like it is

I also don't necessarily understand what the security gain of the whole thing is. So the images are in or out of the webroot - so what? I don't mean that in any confrontational way; I just always see this brought up and don't quite get it. What are you trying to protect?

Option 2: the more performance-minded option

>> I can also see potential problems when multiple people are adding photos

why? Given the chmod'ing of the dir back and forth, fine - but why chmod it back and forth in the first place? What vulnerabilities are you actually concerned with?

I just don't seem to understand the whole issue. What vulnerabilities, specifically, are inherent in having a writable dir that is written to using PHP? That's where the real protection should be enforced; the perms on the dir shouldn't really be a concern.

mincklerstraat
7:22 am on Oct 31, 2004 (gmt 0)

Thanks for bumping, coopster, and thanks for your answer, jatar. Jatar, your answer is comforting, since I really don't like the idea of PHP doing all that work either.

>> What vulnerabilities are you actually concerned with?
To be honest, I'm not certain, and in a way I'm thinking of being a copycat (copying vbulletin and other scripts).

Since before I started learning PHP, I've been hearing about the evils of having directories chmodded 777 and files chmodded 666. Frequently I see the advice that things which are so writeable be placed outside of the webroot. I too wonder precisely which sort of attack this would fend off.

For example, here's one exploit I've seen. About a year ago, a site I'm involved with got cross-site-scripted: a third-party script that was lousy security-wise did some unverified dynamic includes, and combined with register_globals on, that made for a walk-in sort of XSS attack - the attacker included a script he'd placed on a remote site. The script kiddie didn't get far, since our PHP files were mostly chmodded 644, and he didn't seem to put a lot of time into it. However, once he'd gotten that far, he could very easily have traversed down and written to any other writeable files that were *below* the webroot. Also, any files owned by the webserver could easily have been chmodded by him anyway to make them writeable.

My guess is that putting directories under the webroot and making them owned by the webserver instead of world-writeable is mostly a measure that helps security only by virtue of a bit of obscurity - a script kiddie has to work a bit harder to write over your files.

Making files not world-writeable also has the advantage that, if your shared host is configured with different web users per account, it's more difficult for others with access to the host to write to your files.

Anyone else know why the advice is so common, 'keep stuff that's writeable out of webroot'? And, 'make it writeable by your server's user, but not world-writeable?'

jollymcfats
3:47 pm on Oct 31, 2004 (gmt 0)

Anyone else know why the advice is so common, 'keep stuff that's writeable out of webroot'? And, 'make it writeable by your server's user, but not world-writeable?'

A writable directory under the webroot opens you up to PHP injection and, more insidiously, content injection attacks. For the moment, let's take as a given that any directory that a web server can write to can be exploited by someone with access to the server or webserver process.

With an exploitable directory directly visible to the public via URL, someone can embarrass you by serving pr0n out of your site, or do horrible damage by creating pages that look like your site but are filled with offensive or libelous content. These bunk pages carry your URL. The URL may start with /images/, but people and search engines that link in won't notice.

By keeping your writable directories out of webroot, and protecting your URL-space with gatekeeper logic, you drastically limit the damage someone may cause. Sure they might slip in some offensive image when they find out that files have to be named \d{5}.gif, but that's not as damaging as slipping in why_we_support_terrorism.php.
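
That gatekeeper logic can be as small as a strict pattern check on the requested name before anything is read from disk; a sketch assuming the \d{5}.gif naming scheme mentioned above:

<?php
// Serve only files that match the naming scheme the upload script enforces.
$pic = isset($_GET['pic']) ? $_GET['pic'] : '';
if (!preg_match('/^\d{5}\.gif$/', $pic)) {
    header('HTTP/1.0 403 Forbidden');
    exit;
}
// ...safe to build the path and readfile() it from here on.
?>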

I have to disagree with jatar. Serving content out of PHP is not a big deal. You've got the standard parse-and-compile hit, unless you're using a bytecode cache, and I can't imagine any reasonable code before your readfile() that would add a measurable execution cost.

The real problem with serving static content from PHP is that you're tying up a "fat" server process with the task of spoon-feeding your content to slow clients. You only have so much RAM, and if all of it is taken up with bloated web server instances dribbling out the bytes to modem users, new connections will queue up or be dropped.

People avoid this by setting up a separate web server instance for static serving. GIFs get sent out from a very lightweight process that doesn't grow per-connection, and you can support vastly more modem users with the same amount of RAM.

But if you're not doing this, then serving from PHP doesn't matter. Serving a static GIF from an Apache loaded with mod_php ties up the same amount* of resources regardless of whether you let Apache serve it or you hand-serve it from PHP.

*Actually, Apache will use sendfile(), which ideally transfers the content directly from the OS disk buffers to the NIC without copying it into the process's memory space. But I think there's also a Zend product that adds sendfile() support to PHP.

mincklerstraat
3:30 pm on Nov 1, 2004 (gmt 0)

Thank you very much for this thoughtful response, jollymcfats. There's some good info here for inspiring thought. I'd still like to hear from others on this issue if there are more comments to be made.

coopster
12:51 pm on Nov 3, 2004 (gmt 0)

I agree with jollymcfats and appreciate the details laid out here for us to absorb. Once a server is compromised, you can have varying degrees of trouble depending on what the world-writeable directory is serving up. However, we've already jumped over the "how" and gone right into the "when" or "after" stages of compromise.


For the moment, let's take as a given that any directory that a web server can write to can be exploited by someone with access to the server or webserver process.

How does somebody possibly write to a world-writeable directory when they do not have access to the server or webserver process?

jollymcfats
3:12 pm on Nov 3, 2004 (gmt 0)

Without access to the server or process, an attacker could exploit an unpatched security flaw, say a buffer overflow vulnerability. With that they can write to a writable directory and either inject more code for a full intrusion or just wreak havoc.

But in this case, mincklerstraat is targeting a shared hosting environment, where there are lots of people with access to the server and the process. Any of them could access the writable directory, either on purpose or by accident.

DaButcher
7:35 pm on Nov 3, 2004 (gmt 0)

Remember to check the filetype before the server copies the file from tmp to a "real" file.

You might also add a function that catches users uploading "suspicious" content. Even if your script manages to deny them, you might want to suspend their account if they try more than x suspicious things in y days.

check: filetype
check: is_file
etc. (a sketch of these checks follows)
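
A minimal sketch of those checks, assuming a form field named "photo" and JPEG-only uploads (both assumptions):

<?php
$tmp = $_FILES['photo']['tmp_name'];

if (!is_uploaded_file($tmp)) {                        // really arrived via HTTP upload?
    exit('Invalid upload.');
}
if ($_FILES['photo']['size'] > 30000) {               // ~25K max photo, plus a little headroom
    exit('File too large.');
}
$info = getimagesize($tmp);                           // inspects the actual bytes,
if ($info === false || $info[2] !== IMAGETYPE_JPEG) { // not the client-supplied MIME type
    exit('Not a JPEG image.');
}
// $destDir and $newName would be set elsewhere (hypothetical names).
move_uploaded_file($tmp, $destDir . '/' . $newName);
?>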

MattyMoose
12:21 am on Nov 5, 2004 (gmt 0)


Without access to the server or process, an attacker could exploit an unpatched security flaw, say a buffer overflow vulnerability. With that they can write to a writable directory and either inject more code for a full intrusion or just wreak havoc.

Really, once they've gotten access to an overflowed process (root or www, regardless), they've essentially got r/w access to everything everyone has, no matter what your permissions are.

The more "interesting" problem comes into play when someone has figured out your scripts (whether they be well thought out, good scripts, or whatever), and they find that they can escape a string and PHP will parse it, or inject some SQL statements. Something like an image gallery, where someone can upload a file, but the destination filename is in the GET string, or they can CURL and POST to your script, and over-write your index.php with "image1.gif", which is actually a "You gotZ H4x0r3d" welcome page.

That's when 775 (or whatever perms) isn't good enough anyway. You'd then be making everything 555 by default, so that the PHP user can't write to those files. Then you think: what if they included a "chmod 777 /index.php" in a string somewhere - then what? Then you're looking at system or user-immutable flags, and basically you've ... well, you fill in the blank. ;)

... Basically ... how far do you want to go? Yep, I've set up a website with system immutable flags so that NO-ONE, not even root, can modify a file... It's great, except that if you forgot a semicolon, you now have to reboot into single-user mode, turn down the securelevel of the server to add it in, then reboot again and change the securelevel back. Wicked.


But in this case, mincklerstraat is targeting a shared hosting environment, where there are lots of people with access to the server and the process. Any of them could access the writable directory, either on purpose or by accident.

Very true, and that's why I agree, 777 isn't a good idea -- denying users even the opportunity to think or care about what their permissions are set at is the key. jail/chroot/whatever them, so that user2 can't see user1, no matter what (barring any serious exploits/screwups).

But basically, no matter how much you attempt to protect and cover yourself, you're going to be exposed and vulnerable to people that are *SERIOUS* about breaking into your site, and making your life a living hell.

755 isn't going to help you. ;)
777 isn't going to be the one and only thing that lets them in.

I looked into various solutions to Cross-site includes and such for Apache and PHP, and I thought I'd tell you about my findings:

The first was with Apache's suexec option, which will work only (AFAIK) with CGIs (php CGI, perl, etc). [httpd.apache.org...]

Then I hit upon a thread from long ago on another site, where they mentioned the "php_admin_value open_basedir" option in httpd.conf. This makes it so that PHP won't include or open any files outside that path.

<VirtualHost *:80>
    ServerName username
    DocumentRoot /home/username/
    php_admin_value open_basedir /usr/home/username
</VirtualHost>

You'll get an error like this:

Warning: main(): open_basedir restriction in effect. File(/home/username2/include.php) is not within the allowed path(s): (/usr/home/username) in /usr/home/username/index.php on line 2

This isn't too hard to do, and something like it should be pretty common.

Note that this won't stop a user from logging into a shell (via telnet/ssh/ftp) and doing something like "cat /usr/home/username2/my_secret_code.php". There are several ways to prevent that, including FreeBSD's jail (and most systems have chroot, etc.), but they're harder to implement.

I tried using the suexec Apache settings, thinking that any requests for a particular virtual host would be run as the stated user:

<VirtualHost *:80>
    ServerName username
    DocumentRoot /home/username/
    User username
    Group groupname
    php_admin_value open_basedir /usr/home/username
</VirtualHost>

So any requests for "h**p://servername" would be answered by an httpd thread running as "username". That would let you have directory permissions like:

drwxrwx--- username groupname /usr/home/username

So that username and groupname would have full control, and other users (username2, etc) wouldn't be able to even read any of the files.

It didn't work out that way for me, and Apache complained about not having appropriate permissions through the path.

I may have misunderstood what suexec is for, or just misconfigured it, so if anyone has used it or tried to do something like this before, let me know. ;)

FreeBSD and Jail is your friend. ;)

-MM

mincklerstraat
11:38 am on Nov 6, 2004 (gmt 0)

MattyMoose, thank you very much for this contribution. Sounds like you've got a very good handle on your stuff security-wise. I especially like the open_basedir idea; pity that something like this can't be done with .htaccess.