Too much depends on YOUR software, YOUR configuration, and YOUR users' usage patterns.
You can only benchmark and then project from what you know about your current resource usage.
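To illustrate that kind of projection, here is a rough back-of-envelope sketch in Python. All the numbers are hypothetical placeholders (request volume, peak factor, connection hold time, per-process RAM); plug in whatever your own benchmarks show.

```python
# Rough capacity projection from measured usage (all numbers hypothetical).
requests_per_month = 3_000_000            # page requests per month
seconds_per_month = 30 * 24 * 3600
avg_rate = requests_per_month / seconds_per_month  # ~1.16 req/s on average
peak_factor = 10                          # assume peak traffic is 10x average
service_time = 2.0                        # seconds a connection stays open

# Little's law: concurrent connections = arrival rate * time in system
concurrent = avg_rate * peak_factor * service_time

ram_per_conn_mb = 10                      # e.g. one Apache prefork process
print(f"~{concurrent:.0f} concurrent connections, "
      f"~{concurrent * ram_per_conn_mb:.0f} MB RAM in a process-per-connection model")
```

Even at 3 million requests a month, the steady-state concurrency is tiny; it's the per-connection overhead that decides how much RAM you need.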
It's all text and only a few small images.
If you are only sending plain text files and images, you could go for a small-footprint HTTPD server like thttpd. You can serve millions of visitors per month with just a few hundred megabytes of RAM in your server.
We run in 512K of RAM with roughly 3 million page requests from the server each month (plus images etc), and very intensive use of CGI scripts.
I suspect Matt means "M" when he says "K", but yes, you probably could run an absolutely minimal Web server in 128kB!
I used to run my firewall (40kB of beautifully-crafted C code) and mail gateway and other stuff on a ~25MHz SunOS box with 4MB, which had some capacity to spare. I called it "lemon" because if anyone tried to break in, that's what they'd find!
... but yes, you probably could run an absolutely minimal Web server in 128kB
You can. Actually, my smallest webserver is running in 128 kB of RAM and 512 kB of flash ROM for program code and file storage. And that server includes a display driver and an FTP server for uploading HTML files. But it is running on a low-power CPU, and I doubt it could handle many concurrent connections, which would be the case with millions of visitors per month.
On Apache-based webservers without a database, the main cause of RAM usage is the timeout period after pages have been successfully transferred. With the prefork model Apache uses by default, a full-featured process, which often uses 10 MB or more of RAM, is tied up with something as simple as waiting for a timeout. That is where small-footprint servers like thttpd are handy: the timeout handling is just a state somewhere in the main process instead of a complete process with its own memory space.
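To make the "timeout as state" idea concrete, here is a minimal sketch of my own (not thttpd's actual code) of an event-driven server: every idle connection is just a timestamp in a dict, and reaping timed-out connections is a dict scan, not killing a 10 MB process. The port, timeout value, and one-request-per-connection behaviour are all simplifying assumptions.

```python
# Sketch of event-driven timeout handling: one process, timeouts as state.
import selectors, socket, time

TIMEOUT = 15  # seconds an idle connection may linger before being reaped

def run(port):
    sel = selectors.DefaultSelector()
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", port))
    listener.listen()
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ)
    last_active = {}  # socket -> last activity time: the whole "timeout state"

    def close(conn):
        last_active.pop(conn, None)
        sel.unregister(conn)
        conn.close()

    while True:
        for key, _ in sel.select(timeout=1):
            if key.fileobj is listener:
                conn, _ = listener.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
                last_active[conn] = time.monotonic()
            else:
                conn = key.fileobj
                if not conn.recv(4096):      # peer closed the connection
                    close(conn)
                    continue
                conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\nok")
                close(conn)                  # one request per connection
        # Reap idle connections: a dict scan, not process management.
        now = time.monotonic()
        for conn, t in list(last_active.items()):
            if now - t > TIMEOUT:
                close(conn)

if __name__ == "__main__":
    run(8080)
```

A waiting client here costs a socket plus one dict entry, versus an entire preforked Apache child doing nothing but counting down its Timeout directive.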