Forum Moderators: coopster
Ok. This is the constant stone inside my shoe:
I code as best I can -> that's under my control.
The server promises "99.9% uptime guaranteed" -> that's not under my control.
For example:
I count every (every) byte in my users' accounts. What if a script fails in the middle of an editing session due to a server failure? (100 Kb =/= 0 Kb)
I mean, is it possible, or is there a routine, pattern, or something, to face this kind of problem?
What do you think about it?
Please comment...
------------------------------
Store the relevant info in a DB - it seldom gets corrupted, even on a server failure.
Moreover, the server going down is very rare, so to tell the truth the problem is not that, but the server-client connection (or the client double-clicking, etc.)
I don't know if this will help you in any way, but to tell the truth I still don't trouble myself with such thoughts :)
Michal Cibor
$y = 1000;
$x = /* some integer, 0 if no results, or NULL if the DB connection fails */;
Then you never want
$y = $x;
or
$y / $x;
You want
if (isset($x)) // DB connection worked.
{
    if (!$x) { echo "no results"; exit; }
}
else { echo "DB connection failed"; exit; }
Is that the sort of thing that you mean?
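To show the pattern end to end, here is a runnable sketch with the real DB lookup replaced by a stub; fetchCount and describe are made-up names for illustration, not a real API:

```php
<?php
// Stub standing in for a real DB lookup: returns NULL on connection
// failure, 0 for no results, or a positive row count. Illustrative only.
function fetchCount(?int $simulated): ?int {
    return $simulated;
}

function describe(?int $x): string {
    if (!isset($x)) {        // NULL: the DB connection failed
        return "DB connection failed";
    }
    if (!$x) {               // 0: the query worked but found nothing
        return "no results";
    }
    return "got $x rows";    // only now is $x safe to use in arithmetic
}

echo describe(fetchCount(null)), "\n"; // DB connection failed
echo describe(fetchCount(0)), "\n";    // no results
echo describe(fetchCount(42)), "\n";   // got 42 rows
```

The key point is that isset() lets you tell "connection failed" (NULL) apart from "no results" (0) before you touch $x.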
ergophobe:
By (100 Kb =/= 0 Kb) I mean that at the end of a script run, I want to store 100 Kb (in a DB field called size) if the submitted data is exactly 100 Kb, and I don't want to store 0 Kb (or NULL) due to a system failure.
Even if the DB connection is working at the start, it is not necessarily working at the end.
mcibor:
Your words made me more comfortable, but if we accept that a crash is in our future (thanks again, Murphy), then I want to be able to manage it.
so...
Do you know if, for example, a certain type of DB manages this kind of thing by itself?
Something like (on the DB side):
1) a process arrives to run -> (save the request)
2) run the process
2.a) all OK -> tell the user OK
2.b) fail -> run it again 1, 2, 3 times, or save the request to run it when the system is back up.
And not simply:
2.b) fail -> say: sorry for you (or even say nothing!)
...
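A minimal sketch of step 2.b done at the application level, assuming a callable $op and a retry count; runWithRetry is a hypothetical helper, not any real API:

```php
<?php
// Hypothetical helper: retry a flaky operation a few times before
// giving up, and surface the error instead of saying nothing.
function runWithRetry(callable $op, int $maxAttempts = 3) {
    $lastError = null;
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        try {
            return $op();          // 2.a) all OK -> return result to caller
        } catch (Exception $e) {   // 2.b) fail -> try again
            $lastError = $e;
        }
    }
    throw $lastError;              // all attempts failed: report it
}

// Simulated flaky operation: fails twice, then succeeds.
$calls = 0;
$result = runWithRetry(function () use (&$calls) {
    $calls++;
    if ($calls < 3) {
        throw new Exception("server down");
    }
    return "OK";
});
echo $result, " after ", $calls, " attempts\n"; // OK after 3 attempts
```

Saving the request to rerun later (your second option) would need a persistent queue on top of this, which is a bigger job.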
As far as I understand what you're asking, some DB systems can do part of what you want, and none can do all of it. In some systems the RDBMS will automatically ensure data integrity if you use transactions and so forth. In older versions of MySQL, without transactions, you have to handle rollbacks within your program logic, which means that if the server goes down during the process, you might end up with corrupted data. If that's unacceptable, you would need to take additional measures.
As for reissuing a set of DB operations when it fails the first time, that is something an RDBMS should not do, since it doesn't know what you want to do.
You, as the programmer, need to return error codes and decide what to do on error.
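To make "use transactions" concrete, here is a small sketch using PDO with an in-memory SQLite database so it runs anywhere; the same beginTransaction/commit/rollBack calls work against MySQL with InnoDB tables. The uploads table and its size column are made up for illustration:

```php
<?php
// Sketch of a transaction with rollback. Either the whole unit of
// work is stored, or none of it is -- never a half-written row.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE uploads (id INTEGER PRIMARY KEY, size INTEGER)');

try {
    $db->beginTransaction();
    $db->exec('INSERT INTO uploads (size) VALUES (100)');
    // Simulate a failure halfway through the unit of work...
    throw new Exception('failure mid-script');
    $db->commit();                 // never reached in this sketch
} catch (Exception $e) {
    $db->rollBack();               // partial work is undone: no 0 Kb rows
}

// The table holds either the full row or nothing.
$count = (int) $db->query('SELECT COUNT(*) FROM uploads')->fetchColumn();
echo "rows: $count\n"; // rows: 0
```

Note the limitation ergophobe mentions: this protects data integrity, but it will not retry the work for you; after the rollback it is still up to your code to decide what happens next.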
>> ... since it doesn't know what you want to do.
enlightening words
I already have an efficient system of alerts for the end user (inform, error, success, warning, fatal).
I will study "transactions". Now I've got to go. Thanks for your words.-
---
[webmasterworld.com...]
.- InnoDB is going to be the standard past MySQL 4.0. Good!
.- Oracle acquires InnoDB...! Will it continue to be GPL?
This discussion goes for a ride in the database forum here: performance == MyISAM vs. reliability == InnoDB [webmasterworld.com], please visit for comments...
---