| 5:33 pm on Mar 20, 2008 (gmt 0)|
| 5:35 pm on Mar 20, 2008 (gmt 0)|
I can't think of a single-word answer. But if you just want a catch-phrase, perhaps something like:
"Allowing all employees to have root access increases our accidental-misconfiguration exposure."
"This access restriction is necessary to reduce our employee-error susceptibility."
"We don't allow shell access, in order to limit user-error risk."
Or in more informal terms, "access controls help us idiot-proof the system." :)
| 5:39 pm on Mar 20, 2008 (gmt 0)|
control measures ?
Love crosswords me ;)
| 6:14 pm on Mar 20, 2008 (gmt 0)|
idiot proofing (or lack thereof)
[edited by: LifeinAsia at 6:15 pm (utc) on Mar. 20, 2008]
| 6:39 pm on Mar 20, 2008 (gmt 0)|
How about the integrity of the system?
| 7:29 pm on Mar 20, 2008 (gmt 0)|
You want to make it foolproof, though that's not the most complimentary term, is it?
Reworking your example a bit:
"You don't give an employee root FTP access to your web server not because you don't trust them, but because this helps to make the system foolproof."
| 8:11 pm on Mar 20, 2008 (gmt 0)|
Roget's doesn't offer much help: idiot-proof is pretty much it, as ugly as it sounds. The other terms are overly broad.
But we have a problem fitting it into the right part of speech.
If it matters, a search turns up plenty of uses of the term "idiot-proofness", though I doubt you will find it appearing in a dictionary any time soon.
The field of study that deals with this would be human factors engineering.
| 10:31 pm on Mar 20, 2008 (gmt 0)|
Foolproof is the correct term. Idiotproof is a synonym. Whether you hyphenate or not is down to your own house style.
| 11:01 pm on Mar 20, 2008 (gmt 0)|
Although synonyms, I personally feel that "idiotproof" has the connotation of being more locked down than "foolproof." Idiots tend to be more resourceful at causing problems than fools. :)
| 9:58 am on Mar 21, 2008 (gmt 0)|
you can't trust a fool but an idiot doesn't know what he's doing.
(this example isn't meant to be judgmental or ageist in any way.)
a fool is like a baby who could push or sit on any button without even realizing it's a button.
an idiot is like a 100 year old who has never seen the button before or might be afraid to push it or might not realize what happens when it is pushed.
either way, fool proofing and idiot proofing are similar exercises - securing the button and in some cases ignoring the button push.
| 10:24 am on Mar 21, 2008 (gmt 0)|
Maybe "resilience", or you could talk about "points of failure".
| 7:01 pm on Mar 21, 2008 (gmt 0)|
I have asked myself this very question many times because it's something I absolutely, positively have to do with everything I code. An anecdote I often use: "If I think it can't be broken, turn it loose on a customer and they will find a way."
The closest I could ever come up with is invulnerable to user error.
|You don't give an employee root FTP access to your web server not because you don't trust them, but because it decreases vulnerability to user error. |
| 8:36 pm on Mar 22, 2008 (gmt 0)|
Some good points, but I think the idea/concept that I'm thinking about is a little broader. It's not just about stopping dumbarses from pushing the self-destruct button; it's about setting up systems that prevent problems from happening, whether those problems are caused by human error or other factors.
Here's a carpentry example: You use pushsticks instead of your hands because there is a (small) chance that your hand could slip.
Another tech example: We changed the way we update our web site. It's pretty high traffic, so it can't be down for even one second. So, when we want to update it, we first copy all the files to a "development" web site on our web server. Once we're confident that the development site is error-free, we flip a switch, which makes the development site the live site.
If we just uploaded files to the live site via FTP, our connection could blow up halfway through the transfer, and customers would see a half-updated (and probably broken) web site.
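The "flip a switch" step above can be sketched as an atomic symlink swap (a sketch only, not necessarily how that particular server does it; `publish`, `staging_dir`, and `live_link` are made-up names):

```python
import os

def publish(staging_dir, live_link):
    """Atomically repoint the 'live' symlink at a fully staged release.

    Visitors always see a complete tree: either the old release or the
    new one, never a half-uploaded mixture.
    """
    # Create the replacement symlink under a temporary name first...
    tmp_link = live_link + ".tmp"
    if os.path.lexists(tmp_link):
        os.remove(tmp_link)
    os.symlink(staging_dir, tmp_link)
    # ...then swap it into place with a single atomic rename.
    os.replace(tmp_link, live_link)
```

The key property is that the rename is a single filesystem operation, so there is no instant at which the site is "half updated".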
So that is the same concept, but doesn't involve idiots or fools. The terms that keep going through my head are:
But...those only describe 80% of what I mean...UH!
What's the word?!?
This is driving me nuts.
I need one single word that describes this concept. Anyone who has read "Made to Stick" will understand why. The person that comes up with it can be awarded the status of coming up with a new industry buzzword :)
| 8:59 pm on Mar 22, 2008 (gmt 0)|
Not only accidents, dimwits, and chancers, but there is also the need for protection from malicious intent from within.
| 7:27 pm on Mar 23, 2008 (gmt 0)|
Here's one for you: Bokitoproof.
A word contrived after a gorilla named Bokito escaped from his enclosure in Rotterdam zoo last year and went on the rampage. From Wikipedia:
|The word "Bokitoproof", meaning "durable enough to resist the actions of an enraged gorilla", and by extension "durable enough to resist the actions of a non-specific extreme situation", was voted the Dutch language "Woord van het jaar" (Word of the Year) for 2007. |
| 9:58 am on Mar 24, 2008 (gmt 0)|
How about safe?
Or fault tolerant?
Or just plain well written?
| 1:49 pm on Mar 24, 2008 (gmt 0)|
Thank you, I knew the term existed :)
| 10:29 pm on Mar 24, 2008 (gmt 0)|
fault tolerance refers more to how gracefully a system degrades upon failure than to protection from improper or unexpected operational or environmental input.
a simple web-related example regarding form input:
- fault tolerance could be returning a useful error message instead of 500 internal server error if someone submits a form without any input.
- what you are looking for is a term describing what prevents malicious or accidental script injection using that form.
| 11:56 pm on Mar 24, 2008 (gmt 0)|
human fault tolerance
But that's pretty unwieldy.
If we spoke German, we'd just drop the spaces, jam them together, and call it a day...
| 12:53 am on Mar 25, 2008 (gmt 0)|
vulnerability, reliability or stability
| 6:24 pm on Mar 30, 2008 (gmt 0)|
| 3:18 am on Mar 31, 2008 (gmt 0)|
Reminds me of Murphy's Law;
Anything that can go wrong, will go wrong .... at the worst possible moment.
Murphy was an optimist.
| 12:03 am on Apr 1, 2008 (gmt 0)|
"Risk Exposure" is currently standing as the reining champion :)
| 12:13 am on Apr 1, 2008 (gmt 0)|
Although "Risk Exposure" is backwards...it would be nice to have a term that sounds positive instead of negative.
What's the opposite of "Exposure"?
"Risk Concealment" sounds goofy...
| 7:54 pm on Apr 4, 2008 (gmt 0)|
I would say that an environment where you lock down user rights to the minimum needed for their task is called "protected".
Back in the early days of the 8086, every MS-DOS application could write to every memory location. To prevent this, the 80286 processor was given logic to grant some applications access to certain memory locations but not others. Applications using this logic were said to run in "protected mode". It gives applications access to their own resources, but nothing beyond their assigned rights. That is exactly the same as your situation.
In a protected environment, even if the software application/user goes mad, it won't touch the integrity of the system, because integrity is controlled at a higher level than the one where the application/user has control rights.
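That least-privilege idea can be modelled in a toy sketch (all names here are made up for illustration):

```python
class ProtectedEnvironment:
    """Toy model of least privilege: each user holds only the rights
    assigned at a higher level; everything else is denied."""

    def __init__(self):
        self._rights = {}  # user -> set of permitted operations

    def grant(self, user, operation):
        self._rights.setdefault(user, set()).add(operation)

    def perform(self, user, operation):
        # The check happens at a level the user has no control over,
        # so even a "mad" user can't exceed their assigned rights.
        if operation not in self._rights.get(user, set()):
            raise PermissionError(f"{user} may not {operation}")
        return f"{user} performed {operation}"
```

An ungranted operation fails with `PermissionError` no matter what the user attempts, which is the whole point of "protected".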
| 10:17 pm on Apr 9, 2008 (gmt 0)|
I got it!
| 10:43 pm on Apr 9, 2008 (gmt 0)|
My vote is for failsafe :)
| 10:45 pm on Apr 9, 2008 (gmt 0)|
In many professional circles it's called "Risk Management".
On the other side of the same professional coin it's called "Damage Limitation"...
| 10:57 pm on Apr 9, 2008 (gmt 0)|
"Risk Management" says Banks/Loans/Should we give this person a mortgage to me.
I like failsafe though...