On a site with over 10,000 little JPGs, that method gets cumbersome. Uploading, downloading, backing up, reverting & syncing the web project was often an 8-hour job that I'd run overnight. In the morning, I'd check, and if the gods were benevolent, all those files would be transferred. But not always.
Then last year, I was building another project where users could upload a little photo as an avatar, and it struck me: why not store the photos in the database? So I did.
Then I used the same technique on another project, where users uploaded little thumbnail photos of nifty objects...
Sure, it's simple to create an <img> element that points directly to the image's physical location, but through the magic of URL rewriting (.htaccess) and a little PHP hocus pocus, using the built-in GD library or Imagick, delivering images at an artificial URL from a blob in a database isn't too difficult either.
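The rewriting half can be a single rule. A rough sketch, assuming Apache's mod_rewrite and a hypothetical image.php script that does the database lookup (the script name is mine, just for illustration):

```
# .htaccess: silently map /images/12345.jpg onto image.php?id=12345
RewriteEngine On
RewriteRule ^images/(\d+)\.jpg$ image.php?id=$1 [L]
```

The visitor (and their browser) only ever sees the .jpg URL; the PHP script behind it fetches the blob and sends it with an image/jpeg content type.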
Maybe I'll describe the technique in detail, in another post.
I'd like to get a good, thorough pile of pros and cons about storing images as blobs. I've done it on three projects now, and this latest one is potentially image-heavy (i.e. it's storing real photos, not just profile avatars).
- your images are suddenly sortable, indexable, and easily retrievable using SQL commands. Usually I access images via a simple row identifier, but not always!
- it's more elegant (imho)
- it keeps my scripts and templates separate from my data. As I work on the back-end of my site, I can use the same methods on images as I use to keep my live databases in sync.
- it makes your database bigger. Quite a lot bigger, actually.
- Response time is probably a little slower. I haven't measured it, but I'd expect some overhead from the extra database round trip on every image request.
- processing overhead: I usually suck the blob into Imagick or GD before outputting it to HTTP. It may not be necessary to do that, but that's how I'm doing it now.
- scalability concerns: what happens when the number of images gets into the thousands? millions? My database might get so bulky that it impacts "regular" data performance.
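On the processing-overhead point above: the GD/Imagick round trip can be skipped entirely when no resizing is needed, by streaming the raw bytes straight through. A rough sketch, assuming a `photos` table with `id` and `img_data` columns (names are mine, not gospel):

```php
<?php
// Stream a JPEG blob straight out of MySQL with no GD/Imagick re-encode.
// $db is an already-open mysqli handle; table/column names are illustrative.
function stream_image($db, $id) {
    $stmt = $db->prepare('SELECT img_data FROM photos WHERE id = ?');
    $stmt->bind_param('i', $id);
    $stmt->execute();
    $stmt->bind_result($blob);
    if ($stmt->fetch()) {
        header('Content-Type: image/jpeg');
        header('Content-Length: ' . strlen($blob));
        echo $blob;   // raw bytes, untouched: no decode/re-encode cost
    }
}
```

Since the blob was a valid JPEG when it went in, there's nothing for GD to do on the way out unless you're resizing or watermarking.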
One idea I'm toying with is to keep my image blobs in a cloud (like Amazon S3), instead of a database. Really it's just switching one storage medium for another; I'd still need a script that grabs the blob and streams it out with the right mimetype. But then I'm not as likely to run into database bloat problems.
This has already been discussed. I would definitely recommend reading what cooper has to say, as I agree with him wholeheartedly.
It depends on the browser, and the URL.
If you use URL rewriting (which you almost certainly will with this technique) a URL like this one will cache nicely:
http://www.example.com/images/12345.jpg
(12345 is obviously a db row id)
but if you employ a querystring, like this:
http://www.example.com/images/product.jpg?id=12345
then browser caching will be hit-or-miss.
The caching problem with querystringed URLs can be mitigated with an "Expires" header, but I find that the first method is better & easier.
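If you're stuck with the querystring form, those caching headers can be sent from PHP before the image bytes go out. A sketch (the helper name is mine):

```php
<?php
// Build the response headers that let browsers cache an image for $days days.
function image_cache_headers($days) {
    $secs = $days * 86400;   // 86400 seconds per day
    return array(
        'Expires: ' . gmdate('D, d M Y H:i:s', time() + $secs) . ' GMT',
        'Cache-Control: max-age=' . $secs,
    );
}

// In image.php, before echoing the blob:
//   foreach (image_cache_headers(30) as $h) { header($h); }
```

Cache-Control is the modern header; Expires is there for older browsers. Either way, the extensionless-rewrite URL style still caches more predictably.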
Make sure you validate that the id is an integer to prevent people from injecting crap into your SQL SELECT query, and send a 404 status header if SQL can't find that row, or if anything else goes wrong while retrieving the image blob.
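That validation can be one small function. A sketch (helper names are mine; fetch_image_row is hypothetical):

```php
<?php
// Return the id as an int if it's purely digits, otherwise false.
// Rejecting anything non-numeric keeps junk out of the SQL SELECT.
function clean_image_id($raw) {
    return ctype_digit((string) $raw) ? (int) $raw : false;
}

// Typical use at the top of image.php:
//   $id = clean_image_id(isset($_GET['id']) ? $_GET['id'] : '');
//   if ($id === false || !($row = fetch_image_row($id))) {
//       header('HTTP/1.1 404 Not Found');
//       exit;
//   }
```

Using a prepared statement for the SELECT belts-and-braces this, but the digits-only check also catches garbage before you touch the database at all.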