*Using same server partially for ad delivery (15% of ads).
Goal: Optimized downloading speed for minimum bounce rate and maximum pageviews/visitor incl. no delays during peak hours.
Revenue based on display ads...
If avg pageviews per day go up by just one page/visitor, that's 20,000 extra pages @ $4 CPM = $80/day (x365 = $29,200/yr).
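A quick back-of-the-envelope check of those figures (all numbers come from the post itself, not measured data):

```python
# Sanity-check the revenue estimate quoted above
extra_pageviews_per_day = 20_000   # one extra page per visitor
cpm = 4.0                          # $4 per 1,000 ad pageviews

daily = extra_pageviews_per_day / 1000 * cpm
yearly = daily * 365
print(f"${daily:.0f}/day, ${yearly:,.0f}/yr")  # $80/day, $29,200/yr
```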
Plan to push the speed limit as far as it will increase average pageviews per visit.
Previous hosting "upgrade" made no difference to site speed or av pvs.
Any hardware recs from a webmaster with similar server stress?
The reason you need one isn't so much the load. It would be the ability to tweak the server to your own specs to really make it snap. Fine tuning apache, gzip, mysql, memory, etc etc all can have noticeable improvements on your site's performance. And it's easiest to do that if you've got access to the underlying files so you can make the changes.
However, what may be a big deal is keeping the server running yourself - when you go this route you're mostly on your own in terms of keeping the server humming along. That will add substantially to your overhead, either through increased costs or increased time.
I was actually incorrect wrt the numbers. I just realized it's up to 5GB/hr (mid-afternoon spikes).
Until I learn how to manage a server, what type of expert can I ask to tweak the server for maximum speed?
Also, would I get a faster site if I use one dedicated server for static files and one only for dynamic files instead of all on one dedicated server?
Some of the site's pages are very long and have many "calls" each. I'm really not satisfied with the current download speed.
CPU Usage is usually over 100% but I'm told the meter is typically inaccurate.
When it's correct, is CPU the focal point for increasing speed? How much faster did you get your site to run with tweaks?
It's not the bandwidth, it's what you're doing to produce the pages.
Are you just serving lots of video, or are you doing lots of heavy processing and db calls per page?
You should be able to speed everything up with sensible partial caching of pages, or even full caching of pages.
Also, optimizing your database and the queries will help a LOT.
I've been using OpenX to deliver some ads (two to three per page). Most of the ads are delivered by another server. My site delivers remnant ad code.
There is also a database for some small online polls.
The bandwidth demand is mostly for images. I was going to use Amazon S3 to reduce the load, but the pricing isn't that impressive or competitive, and I still haven't been convinced speed will increase.
The pages are html, not php or anything else.
What's the deal with page caching?
P.S. Does a large .htaccess file slow loading? How about a large .css file?
There are two types. The first is caching the whole page, or parts of the page, on your own server. The benefit is felt when pages make several db calls to be 'created': if the content (or part-page) won't change within the next X timescale, then by caching the page (or part-page) on the server, you can just serve the cached copy when the page is called, saving the db calls, etc.
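A minimal sketch of that first type in Python; `render_page` and the 10-minute TTL are hypothetical stand-ins for whatever actually builds your pages from the db:

```python
import time

CACHE = {}   # page key -> (built html, timestamp)
TTL = 600    # keep cached pages for 10 minutes (illustrative value)

def render_page(key):
    # stand-in for the expensive db calls that normally build the page
    return f"<html>page for {key}</html>"

def get_page(key):
    hit = CACHE.get(key)
    if hit and time.time() - hit[1] < TTL:
        return hit[0]                  # fresh cached copy: no db calls
    html = render_page(key)            # rebuild only when stale/missing
    CACHE[key] = (html, time.time())
    return html
```

The same idea applies to fragments (e.g. a poll widget) instead of whole pages: cache the expensive part, assemble the rest per request.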
The second type: ISPs will cache pages that are called by their subscribers, so they can serve the cached page if another subscriber calls it, and thus save themselves bandwidth. You need to set the server and headers up correctly to let them know they can do this, if you want them to.
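Setting that up generally means sending caching headers. A rough Apache sketch (assumes mod_expires is loaded; the one-week lifetime is just an illustration, not a recommendation):

```apache
# Let downstream caches (ISPs, browsers) keep static files for a week
ExpiresActive On
ExpiresByType image/jpeg "access plus 1 week"
ExpiresByType image/png  "access plus 1 week"
ExpiresByType text/css   "access plus 1 week"
```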
>>P.S. Does a large .htaccess file slow loading? How about a large .css file?
.htaccess can be slow. There are some great threads here on WebmasterWorld showing things like how you should deal with images in .htaccess, to save image calls being tested against all the rules!
With a dedicated server you can of course bypass .htaccess (and in fact should) by hardcoding the rules directly into the Apache config file, but they still need to be optimised.
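As a sketch of that, inside the vhost config you can short-circuit rule processing for images, so every image request isn't tested against the whole ruleset (assumes mod_rewrite and an /images/ path; adapt to your own layout):

```apache
# In the <VirtualHost> or server config, not .htaccess
RewriteEngine On
# Pass image requests through untouched and stop rule processing ([L])
RewriteRule ^/images/ - [L]
# ...the rest of the rewrite rules go below...
```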
If you are serving lots of images, especially multiple images per page, you might see a benefit from serving some images from a separate domain or subdomain.
You really should read this.
It's an authority, so I think posting the link is OK. You can buy the book if you want, but most of the info is available online. (I bought the book and was glad I did; it's only 100-odd pages and a very quick read.)
I'm considering using a separate domain for images but I'm leaning toward first testing a new host that offers 4 GB RAM (Guaranteed) with 8 GB (Burst). Right now I have 1 GB (Guaranteed) and 4 GB (Burst).
digitalv back in 2004 wrote:
"Retrieving images from a server doesn't hurt the CPU any more than retrieving web pages, it's all a bunch of 1's and 0's that get turned into a picture on the client side - so don't waste your time putting images on another server. The first thing you should try is adding more RAM to your server."
I read another old post on WW where the webmaster said she completely divided her hosting duties between multiple servers (one main, one for images, one for db, and another for email). Google and Yahoo have different servers for images; and Yahoo apparently uses one server just for js.
If the RAM upgrade doesn't cut it, I may get one server for images but I'm considering using two to share the load.
Reducing the number of unique hostnames has the potential to reduce the amount of parallel downloading that takes place in the page. Avoiding DNS lookups cuts response times, but reducing parallel downloads may increase response times. My guideline is to split these components across at least two but no more than four hostnames. This results in a good compromise between reducing DNS lookups and allowing a high degree of parallel downloads.
So the more images you have, the bigger the benefit of hosting them on a sub- or separate domain would be.
Also take into account that serving big files (images/video) slows disk performance; dividing content across different disks can make sense for the user experience.
The same is true for writing logfiles. If you need them, try to write them to a different disk so the writing doesn't interfere with the read process for serving content.
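In Apache that's just a matter of pointing the log directives at the other disk (the paths here are hypothetical):

```apache
# Logs on a second disk so log writes don't compete with content reads
CustomLog /mnt/disk2/logs/access.log combined
ErrorLog  /mnt/disk2/logs/error.log
```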
Unless you have a really badly written dynamic website which combines information from hundreds of different SQL queries per request, you by no means need a heavy server to serve this load. 4GB RAM with 8GB burst is overkill to handle 500MB per day. See it this way: with your current 1GB guaranteed RAM you can cache all the pages, images etc. that people request in a period of 2 full days. If you keep in mind that many requests are identical (.js files, images etc.), the 1GB you have now will be sufficient to handle your load, unless something strange is happening in your scripting.
The same for splitting images over different domain names. The argument that browsers only do 2 parallel queries per domain name is not valid anymore in 2009. In all new browsers this value has been significantly increased by default. There are two reasons why large sites split images to other domains:
A specialized image HTTP server can either contain a stripped down version of a standard web server (Apache without PHP, mod_rewrite modules etc) and optimized settings for connection timeout, or a lightweight HTTP server which is not optimized for running heavy scripts, but which is good in handling many concurrent HTTP connections with little memory overhead. Something like lighttpd or nginx. Especially the latter is getting popular nowadays as a fast replacement of Apache and IIS.
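For what it's worth, a minimal nginx server block for an image-only host might look like this (names and paths are illustrative, not a tested production config):

```nginx
server {
    listen 80;
    server_name img.example.com;   # hypothetical image subdomain
    root /var/www/images;
    expires 7d;        # let clients/ISPs cache images aggressively
    access_log off;    # skip access logging to cut disk writes
}
```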
I use two VPSes: a 128 MB VPS with lighttpd/FastCGI/MySQL to serve the PHP files, and an unmetered VPS to serve the images and static content. It costs me around 20 bucks a month and is really fast.