This can be done with a relatively simple Perl script and a cron job, though you would have to hire someone to write it, since it would be custom work. Another thing to consider is integrating the monitoring more tightly into the software running on the machine: when resources pass a certain threshold, the application could not only send an email but also display a 'We are experiencing a heavy load, please try your query again in a few minutes' notice, kill database queries that have been open for too long, and log (and possibly email) the details. That combination can be even more effective than monitoring alone.
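For the script half, something along these lines would do; this is just a sketch, and the threshold, addresses, and file path are made up (it also assumes a local MTA is listening on port 25):

```python
#!/usr/bin/env python3
# Minimal resource check meant to be run from cron every few minutes.
# Threshold and addresses are hypothetical; tune them for your machine.
import os
import smtplib
from email.message import EmailMessage

LOAD_THRESHOLD = 4.0                 # assumed value; pick per machine
ADMIN = "admin@example.com"          # hypothetical recipient

one_min, _, _ = os.getloadavg()      # 1/5/15-minute load averages

if one_min > LOAD_THRESHOLD:
    msg = EmailMessage()
    msg["Subject"] = f"High load on {os.uname().nodename}: {one_min:.2f}"
    msg["From"] = "monitor@example.com"
    msg["To"] = ADMIN
    msg.set_content(
        f"1-minute load average is {one_min:.2f} (threshold {LOAD_THRESHOLD})."
    )
    with smtplib.SMTP("localhost") as smtp:  # assumes a local MTA
        smtp.send_message(msg)
```

Drop it into cron with something like `*/5 * * * * /usr/local/bin/check_load.py`, and the same pattern extends to disk space, free memory, or open database connections.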
These typically use a "management server" running on a Linux system (though Windows is sometimes supported as well), which really should not be one of the systems under management. A client ("agent"), either mandatory or optional, may be installed on each managed system. These tools usually also perform some monitoring (such as pings) that requires nothing installed on the managed systems. In some cases they skip the client entirely and simply open an SSH connection and run commands in a shell.
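To make the agentless case concrete, here is a rough sketch of that style of check; the host names are invented, key-based SSH authentication is assumed, and real products do far more than this:

```python
#!/usr/bin/env python3
# Agentless check: run a command over SSH and parse the output.
import subprocess

HOSTS = ["web01", "db01"]  # hypothetical host names

for host in HOSTS:
    try:
        out = subprocess.run(
            ["ssh", "-o", "BatchMode=yes", host, "cat", "/proc/loadavg"],
            capture_output=True, text=True, timeout=10, check=True,
        ).stdout
        one_min = float(out.split()[0])
        print(f"{host}: load {one_min:.2f}")
    except (subprocess.SubprocessError, ValueError) as exc:
        print(f"{host}: unreachable or bad output ({exc})")
```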
Rather than monitoring the server and manually trying to prevent disasters, I would try to restrict the processes themselves. Apache and MySQL have settings to control the number of subprocesses, memory usage, and so on (Apache's MaxRequestWorkers and MySQL's max_connections, for example). Linux itself also has many options to limit the amount of virtual and physical memory occupied by processes, the maximum number of processes running under a specific username, the allowed number of open files per process and system-wide, etc.
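As one illustration of those per-process kernel limits, a small wrapper can apply rlimits before launching a worker; the limit values and the worker command below are invented, and the system-wide equivalents live in /etc/security/limits.conf:

```python
#!/usr/bin/env python3
# Apply kernel resource limits to a child process before it starts.
# Values are examples only; "some-worker" is a hypothetical command.
import resource
import subprocess

GIB = 1024 ** 3

def apply_limits():
    # Cap virtual memory at 1 GiB (allocations beyond this fail)
    resource.setrlimit(resource.RLIMIT_AS, (GIB, GIB))
    # Cap open file descriptors at 256
    resource.setrlimit(resource.RLIMIT_NOFILE, (256, 256))

# preexec_fn runs in the child after fork() and before exec(),
# so the limits apply only to the worker, not to this wrapper.
subprocess.run(["some-worker", "--serve"], preexec_fn=apply_limits)
```

A runaway process then hits the kernel's ceiling and fails on its own, instead of dragging the whole machine down while a monitor scrambles to react.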