Changing your password is unlikely to help if your server is somehow vulnerable.
Security staff usually have a six-step response process in place for dealing with security incidents:

1. Preparation

Too late now for some of you, but there is a lot you can do both to avoid problems in the first place and to prepare for what you'll do when you have an incident.
2. Identification and Detection
Chain of custody starts here. Assigning leadership for the response is also done here, as is coordination.
3. Containment

Make sure it doesn't get worse. The thing is, once a hacker can change files on a web server, the game is almost over. Either they got access to a database (e.g. via SQL injection, something you should learn about and prevent in step 1), in which case the entire database can no longer be trusted. What if they also changed something else unnoticed, even accidentally? Or they found another way in, and you need to identify (step 2) how they got in from your logs.
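For illustration, here's a minimal Python sketch of that kind of log triage. The log path is an assumption about your server's layout, and the injection signatures are illustrative, not a complete filter; real attacks are far more varied.

```python
import re
from pathlib import Path

# Hypothetical log location; adjust to your server's layout.
LOG_FILE = Path("/var/log/nginx/access.log")

# A few illustrative SQL-injection signatures. Treat this as a
# starting point for triage, not as an exhaustive detector.
SUSPICIOUS = re.compile(
    r"(union\s+select|information_schema|sleep\(|'\s*or\s+'1'\s*=\s*'1|%27|--\s)",
    re.IGNORECASE,
)

def suspicious_lines(path: Path):
    """Yield (line_number, line) for requests that look like injection probes."""
    with path.open(errors="replace") as fh:
        for lineno, line in enumerate(fh, start=1):
            if SUSPICIOUS.search(line):
                yield lineno, line.rstrip()

if __name__ == "__main__":
    for lineno, line in suspicious_lines(LOG_FILE):
        print(f"{lineno}: {line}")
```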
Decisions need to be taken here: continue or abort? There are risks and benefits to both, so they should be weighed.
Backup of the hacked system? Yes, for two reasons (a sketch follows this list):
- preserve what you still have
- preserve evidence
DO NOT overwrite older backups while doing this.
(step 1: prepare for making this backup ...)
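A rough sketch of what such a backup could look like, in Python. Forensically, a full disk image (e.g. with dd) is better; this file-level version just shows the two ideas that matter here: every run writes a new timestamped archive, so older backups are never overwritten, and a SHA-256 digest is recorded alongside it for the chain of custody. The paths and naming convention are assumptions.

```python
import hashlib
import shutil
import sys
from datetime import datetime, timezone
from pathlib import Path

def evidence_backup(source: str, dest_dir: str) -> Path:
    """Archive `source` into a NEW timestamped file and record its SHA-256.

    A fresh name per run means older backups are never overwritten.
    """
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest = Path(dest_dir) / f"evidence-{stamp}"
    archive = shutil.make_archive(str(dest), "gztar", root_dir=source)

    digest = hashlib.sha256()
    with open(archive, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)

    # Store the digest next to the archive: part of the chain of custody.
    Path(archive + ".sha256").write_text(f"{digest.hexdigest()}  {archive}\n")
    return Path(archive)

if __name__ == "__main__":
    print(evidence_backup(sys.argv[1], sys.argv[2]))
```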
4. Eradication

Find and remove the vulnerability, and improve your defenses. If SQL injection was the way in, for example, the fix is to stop splicing user input into query strings; see the sketch below.
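As a concrete example of that fix: use your database driver's placeholders instead of building the query string yourself. The sketch below uses Python's built-in sqlite3 for illustration; the table and column names are made up, but the same pattern exists in every mainstream driver.

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # VULNERABLE: attacker-controlled input spliced into the SQL text.
    #   conn.execute(f"SELECT * FROM users WHERE name = '{username}'")
    # SAFE: the driver passes the value separately, so it can never be
    # interpreted as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

# Quick demonstration against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")
# The classic injection payload is now just a weird, harmless username.
print(find_user(conn, "' OR '1'='1"))  # -> []
print(find_user(conn, "alice"))        # -> [(1, 'alice')]
```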
Also find out everything that went on after the initial attack, and learn from that. Comparing the live files against a known-good manifest is one way to scope the damage (sketched below).
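This assumes you kept a manifest of SHA-256 hashes from a known-good deployment (step 1 again); the JSON format here is made up for the sketch.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def diff_against_manifest(root: str, manifest_file: str):
    """Report files that changed, appeared, or vanished since the manifest.

    `manifest_file` is assumed to map relative paths to SHA-256 hex
    digests captured from a known-good deployment.
    """
    manifest = json.loads(Path(manifest_file).read_text())
    root_path = Path(root)
    current = {
        str(p.relative_to(root_path)): sha256_of(p)
        for p in root_path.rglob("*") if p.is_file()
    }
    changed = [f for f in manifest if f in current and current[f] != manifest[f]]
    added = sorted(set(current) - set(manifest))
    removed = sorted(set(manifest) - set(current))
    return changed, added, removed
```

Anything in the changed or added lists deserves a close look: that's where backdoors and web shells tend to show up.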
5. Recovery

Yes, almost the last step: recover. Reinstall systems as needed (it's most often easier to start again than to trust something that was hacked, where you might not have found all the backdoors, rootkits, etc.). It also removes the lingering doubt you'll always have if you don't do this.
Rebuild data to a known safe state.
Be extremely careful with any data from the backup made in step 3, but also with older backups, as they too can already contain problems (don't reintroduce the vulnerability, etc.).
Validation and putting things back in business are of course part of this step. Before trusting a restore, verify it against the hash recorded when the backup was taken (a sketch follows).
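A quick check along these lines, assuming the `.sha256` convention from the containment sketch above, makes sure the archive is still the one you took:

```python
import hashlib
import sys
from pathlib import Path

def verify_archive(archive: str) -> bool:
    """Compare an archive's SHA-256 against the digest recorded at backup time."""
    recorded = Path(archive + ".sha256").read_text().split()[0]
    digest = hashlib.sha256()
    with open(archive, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == recorded

if __name__ == "__main__":
    archive = sys.argv[1]
    if not verify_archive(archive):
        sys.exit(f"{archive} does not match its recorded hash; do not restore from it")
    print(f"{archive} verified")
```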
6. Lessons learned
Probably the most important step, as you use it to feed the entire process and improve every other step so you do better next time: training developers so they can code with fewer problems, improving incident preparedness, improving communication, ...
Important here is that you can also learn from incidents that others have had.
These steps aren't always fully sequential, but don't try to get back in business before you know what happened; in my experience, that backfires badly.
Now, I realize most of you don't manage your own servers, so your situation is more complex: you'll need to coordinate all of this with the provider of that service. It's entirely possible the host got whacked through nothing you did; a neighbor on the same machine, or the machine itself, might have introduced something that got exploited. Few hosts are going to be very open in their communication about this, but you need to involve them anyway, as much as possible.