| 3:33 pm on Nov 13, 2009 (gmt 0)|
The reason you need one isn't so much the load. It's the ability to tweak the server to your own specs to really make it snap. Fine-tuning Apache, gzip, MySQL, memory, etc. can all have a noticeable effect on your site's performance, and it's easiest to do that if you've got access to the underlying files so you can make the changes.
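For instance, enabling compression is one of the cheaper wins. With Apache's mod_deflate it can be as small as this (an illustrative fragment; assumes the mod_deflate module is loaded):

```apache
# Compress text responses before sending; images are already compressed,
# so only text types are listed here.
AddOutputFilterByType DEFLATE text/html text/css application/javascript
```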
| 3:36 pm on Nov 13, 2009 (gmt 0)|
And FWIW, have a look around. I pay about $100US a month for colocation in what is one of my country's main internet locations. If your site earns any real income, that cost level isn't a big deal.
However, what may be a big deal is keeping the server running yourself - when you go this route you're mostly on your own in terms of keeping the server humming along. That will add substantially to your overhead, either through increased costs or increased time.
| 8:38 pm on Nov 27, 2009 (gmt 0)|
Thanks for the ideas.
I was actually incorrect wrt the numbers. I just realized it's up to 5GB/hr (mid-afternoon spikes).
Until I learn how to manage a server, what type of expert can I ask to tweak the server for maximum speed?
Also, would I get a faster site if I use one dedicated server for static files and one only for dynamic files instead of all on one dedicated server?
Some of the site's pages are very long and have many "calls" each. I'm really not satisfied with the current download speed.
CPU usage is usually over 100%, but I'm told the meter is typically inaccurate.
When it's accurate, is CPU the focal point for increasing speed? How much faster did you get your site to run with tweaks?
| 9:10 pm on Nov 27, 2009 (gmt 0)|
Get a managed dedicated server (and tell them what you need it to handle at setup, i.e. day 1) so that is the setup they are managing. A good managed dedi is not as expensive as you may think.
| 10:16 pm on Nov 27, 2009 (gmt 0)|
What specs on a managed dedi are suitable for the current demand?
My current host offers several of them, from 3.0GHz with 2GB RAM to 2.4GHz with 64GB RAM.
What RAM do I need to deal with 5GB/hr?
| 10:52 pm on Nov 27, 2009 (gmt 0)|
>>What RAM do I need to deal with 5GB/hr?
it's not the bandwidth, it's what you're doing to produce the pages.
are you just serving lots of video or are you doing lots of heavy processing and db calls per page?
you should be able to speed everything up with sensible partial caching of pages, or even full caching of pages.
also optimization of your database and the queries will help a LOT.
| 11:43 pm on Nov 27, 2009 (gmt 0)|
There's some stock video but not much. I've just started into video and probably won't go much further until the speed issues are resolved.
I've been using OpenX to deliver some ads (two to three per page). Most of the ads are delivered by another server. My site delivers remnant ad code.
There is also a database for some small online polls.
The bandwidth demand is mostly for images. I was going to use Amazon S3 to reduce the load, but the pricing isn't that impressive or competitive, and I still haven't been convinced speed will increase.
The pages are html, not php or anything else.
What's the deal with page caching?
P.S. Does a large .htaccess file slow loading? How about a large .css file?
| 12:18 am on Nov 29, 2009 (gmt 0)|
>>What's the deal with page caching?
two types. first, caching the whole page or parts of the page on your own server. the benefits of this are felt when pages make several db calls to be 'created': if the content/part-page won't change in the next x timescale, then by caching the page/part-of-page on the server, when the page is called you just serve the cached copy, saving the db calls, etc.
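As a rough sketch of that first type (the function names and timings here are made up for illustration, not from any particular framework):

```python
import time

# Minimal time-based fragment cache: if a cached copy is younger than
# max_age seconds, serve it and skip the expensive db work entirely.
_cache = {}  # key -> (timestamp, rendered_html)

def get_fragment(key, max_age, render):
    now = time.time()
    entry = _cache.get(key)
    if entry is not None and now - entry[0] < max_age:
        return entry[1]          # cache hit: no db calls made
    html = render()              # cache miss: hit the db, rebuild
    _cache[key] = (now, html)
    return html
```

So something like `get_fragment("poll-sidebar", 300, build_poll_html)` would rebuild the poll block from the database at most once every five minutes, no matter how many visitors request the page.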
second type: ISPs will cache pages that are called by their subscribers, so they can serve the cached page if another subscriber calls it and save themselves bandwidth. you need to set the server and headers up correctly to let them know they can do this, if you want them to.
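For the headers part of that second type, one common approach on Apache is mod_expires plus mod_headers (the lifetimes below are only an illustration; pick values that match how often your content actually changes):

```apache
# Tell browsers and intermediate caches how long they may keep things.
ExpiresActive On
ExpiresByType image/jpeg "access plus 1 week"
ExpiresByType text/css   "access plus 1 day"
# "public" explicitly allows shared (ISP/proxy) caches to store the response.
Header set Cache-Control "public"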
>>P.S. Does a large .htaccess file slow loading? How about a large .css file?
.htaccess can be slow. there are some great threads here on WebmasterWorld showing you things like how to deal with images etc in .htaccess, to save image calls being tested against all the rules!
with a dedicated server you can of course bypass .htaccess (and in fact should) by hardcoding the rules directly into the Apache config file, but they still need to be optimised.
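For example (a sketch only; the paths and file types are placeholders), moving the rules into the vhost config and short-circuiting static files spares every image request from walking the whole rewrite chain:

```apache
<VirtualHost *:80>
    DocumentRoot /var/www/example
    RewriteEngine On
    # Let static files through untouched so they skip the rules below
    RewriteCond %{REQUEST_URI} \.(jpe?g|png|gif|css|js)$ [NC]
    RewriteRule .* - [L]
    # ...the rest of the rewrite rules go here...
</VirtualHost>
```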
if you are serving lots of images, especially multiple images per page, you might see benefit from serving some images from a separate domain or subdomain.
you really should read this
it's an authority, so posting the link i think is ok. you can buy the book if you want but most of the info is available online (i bought the book and was glad i did - it's only 100-odd pages and a very quick read).
| 6:15 am on Dec 8, 2009 (gmt 0)|
I think I just ordered an optimization book by Souders.
I'm considering using a separate domain for images but I'm leaning toward first testing a new host that offers 4 GB RAM (Guaranteed) with 8 GB (Burst). Right now I have 1 GB (Guaranteed) and 4 GB (Burst).
digitalv back in 2004 wrote:
"Retrieving images from a server doesn't hurt the CPU any more than retrieving web pages, it's all a bunch of 1's and 0's that get turned into a picture on the client side - so don't waste your time putting images on another server. The first thing you should try is adding more RAM to your server."
I read another old post on WW where the webmaster said she completely divided her hosting duties between multiple servers (one main, one for images, one for db, and another for email). Google and Yahoo have different servers for images; and Yahoo apparently uses one server just for js.
If the RAM upgrade doesn't cut it, I may get one server for images but I'm considering using two to share the load.
| 1:10 pm on Dec 8, 2009 (gmt 0)|
On the Yahoo page re performance (link: see two posts above) they discuss the pros and cons of using separate domains for content (e.g. images):
|Reducing the number of unique hostnames has the potential to reduce the amount of parallel downloading that takes place in the page. Avoiding DNS lookups cuts response times, but reducing parallel downloads may increase response times. My guideline is to split these components across at least two but no more than four hostnames. This results in a good compromise between reducing DNS lookups and allowing a high degree of parallel downloads |
So the more images you have, the bigger the benefit of hosting them on a sub- or separate domain would be.
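One simple way to split images across the two-to-four hostnames that guideline suggests, while keeping each image on a stable host so it stays cacheable (the hostnames here are hypothetical):

```python
import zlib

HOSTS = ["img1.example.com", "img2.example.com"]

def image_url(path):
    # Hash the path so the same image always maps to the same host;
    # assigning hosts randomly would defeat browser and proxy caching,
    # since the "same" image would appear under several URLs.
    idx = zlib.crc32(path.encode("utf-8")) % len(HOSTS)
    return "http://%s%s" % (HOSTS[idx], path)
```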
Also take into account that serving big files (images/video) slows disc performance; dividing content across different discs can make sense for the user experience.
The same is true for writing logfiles. If you need them, try to write them to a different disc so the writing doesn't interfere with the read process for serving content.
| 9:21 am on Dec 10, 2009 (gmt 0)|
Let's get back to earth:
|20,000 visitors/day |
Unless you have a really badly written dynamic website which combines information from hundreds of different SQL queries per request, you by no means need a heavy server to serve this load. 4GB RAM with 8GB burst is overkill to handle 500MB per day. See it this way: with your current 1GB guaranteed RAM you can cache all the pages, images etc. that people request over a period of 2 full days. If you keep in mind that many requests are identical (.js files, images etc), the 1GB you have now will be sufficient to handle your load, unless something strange is happening in your scripting.
The same goes for splitting images over different domain names. The argument that browsers only make 2 parallel requests per domain name is not valid anymore in 2009; in all new browsers this value has been significantly increased by default. There are two reasons why large sites split images to other domains:
- Use of a content delivery network (CDN), which has multiple servers in different geo locations to keep latency as low as possible.
- Optimizing the webserver settings on the image server to handle more requests per second with less memory overhead. With the most-used Apache server model, for example, every connection is handled by a private copy of the Apache program, which often occupies 10 to 15 MB of RAM. That copy will only accept a new connection when the previous connection is closed in the right way, or times out. In practice most connections time out, with a timeout value of several seconds. This means that every copy of Apache serves a few images in milliseconds of time, then sits waiting for a connection close. During this time the process occupies its 15MB of memory without doing anything with it - a real waste of resources.
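That timeout waste can be reduced (though not eliminated) in Apache itself; a sketch of the relevant directives, with illustrative values only:

```apache
# Shorten how long an idle process waits before freeing its slot,
# and cap how many resident processes can pile up.
Timeout 10
KeepAlive On
KeepAliveTimeout 2
MaxClients 150
```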
A specialized image HTTP server can either be a stripped-down version of a standard web server (Apache without the PHP, mod_rewrite modules etc.) with optimized settings for connection timeouts, or a lightweight HTTP server which is not built for running heavy scripts but is good at handling many concurrent HTTP connections with little memory overhead - something like lighttpd or nginx. The latter especially is getting popular nowadays as a fast replacement for Apache and IIS.
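A minimal nginx config for a static image host might look like this (the paths, hostname, and expiry value are illustrative):

```nginx
worker_processes  2;

events {
    # Each worker multiplexes many connections in one small process,
    # instead of one ~15MB Apache process per connection.
    worker_connections  4096;
}

http {
    include       mime.types;
    server {
        listen 80;
        server_name img.example.com;
        root /var/www/images;
        expires 7d;           # long cache lifetime for static files
        keepalive_timeout 5;  # short timeout frees connections quickly
        access_log off;       # skip log writes on every static hit
    }
}
```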
| 2:12 pm on Dec 10, 2009 (gmt 0)|
^He was wrong about 500 MB/day.
| 9:19 pm on Jan 14, 2010 (gmt 0)|
2,500 daily uniques
300,000 pageviews a day
50 gigs of bandwidth a day
I use two VPSes: a 128-megabyte VPS with lighttpd/fastcgi/MySQL to serve the PHP files, and an unmetered VPS to serve the images and static content. It costs me around 20 bucks a month and is really fast.