Forum Moderators: phranque
I'm not sure I understand your question... File locking's purpose is to prevent exactly this. See the flock function in Perl, for example. File locking is a separate and distinct operation from file read and/or write permissions. It is more akin to letting one single process lock and "own" the file until that process unlocks it for modification by other processes. In a multi-process environment like a server, file locking forces serial execution of the requesting processes. That is, the winner gets to read/write the file, and the losers have to wait until the winner is finished before they get their chance to read/write.
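For example, Perl's flock has close analogues in most languages' standard libraries. Here's a minimal sketch of exclusive whole-file locking in Java (the file name and the read-increment-write logic are made up for illustration):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class LockDemo {
    public static void main(String[] args) throws IOException {
        // Open (or create) a data file; "rw" gives read/write access.
        try (RandomAccessFile raf = new RandomAccessFile("counter.txt", "rw");
             FileChannel channel = raf.getChannel();
             // lock() blocks until this process holds an exclusive lock on
             // the whole file; any other locker must wait its turn.
             FileLock lock = channel.lock()) {
            raf.seek(0);
            String line = raf.readLine(); // null on a brand-new empty file
            int value = (line == null || line.isEmpty())
                    ? 0 : Integer.parseInt(line.trim());
            raf.setLength(0);             // truncate before rewriting
            raf.writeBytes(Integer.toString(value + 1));
        } // lock, channel, and file are all released here
    }
}
```

The whole read-modify-write sequence happens while the lock is held, so two processes running this at once can't clobber each other's update.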
I can explain more about file locking from a Perl (or even a hardware) standpoint, if you'll elaborate your question a bit more.
Jim
BTW, I don't know Perl, but I can apply any concepts you have. I am using C#, which is pretty much Java.
Most programming languages have a method to implement file-locking. If a language does not have such a method, then you may have to call a routine in a different language which does.
Showing my age: it used to be common to have to call an assembly-language routine that would do a "Test and Set" operation on a memory location associated with the file. This "Test and Set" was "atomic," meaning "indivisible." That is, it tested the memory location's current value and set it (to all ones, IIRC) in a single, uninterruptible operation. After this was complete, the code would examine the value the memory location held before the "Set". If that value was zero, then no one else had requested the file, and the process testing the location now owned it. If the value was already all ones, then someone else had previously locked the file, and this process would have to wait, periodically repeating the Test and Set, and continue only once the initial value came back as zero. Meanwhile, the process which owned the file would do its thing, and then release the file by clearing that memory location to zero.
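That Test-and-Set loop maps directly onto the atomic compare-and-set primitives modern languages expose (C# has Interlocked.CompareExchange; a Java sketch of the same idea, with class and method names made up here):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    // The "memory location associated with the file":
    // false = free (zero), true = owned (all ones).
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Atomic Test and Set: compareAndSet succeeds only if the old
        // value was false, i.e. no one else owned the lock.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // busy-wait until the owner releases
        }
    }

    public void unlock() {
        // "Clear the memory location to zero" so the next waiter proceeds.
        locked.set(false);
    }
}
```

Real code would usually sleep or use an OS-provided lock instead of spinning, but the test/own/release shape is exactly the one described above.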
All this level of detail is usually hidden in modern high-level languages, but is supported by all modern computer architectures. It's just a matter of figuring out HOW it is supported by your programming language. Not being a C# programmer, I can't tell you, but look for "file locking" or "atomic operations" - or some of the other keywords in the text above.
One detail: it is critical that only one process be allowed to access the file at a time. A second process must not be allowed to *read or write* the file until the first process is finished reading and writing. Otherwise, the second process will modify a stale copy of the file, overwrite the changes made by the first process, and the corruption you are worried about will happen. With this in mind, it becomes obvious that such files must be limited in size in order to prevent slowdowns. The permissible size of the file will depend on how many concurrent users you have, the CPU speed and load on the server, the priority under which your code executes, and the speed of the file system (including hardware). Sharing files among multiple processes often requires design changes at a very high level to make the sharing efficient.
You do have one advantage, in that you are almost always going to be doing an append. That means that you can split the file, and make only a small section (the last, newest section) of it writable. The rest can be "archived" and remain static. When you need to display it, then glue the static archive and the "newest section" together, and display that. When the "new section" gets too big, close it, append it to the previous static section, and open a new "new section". You'll have to lock both sections while you do this, too. :)
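To sketch that split-file scheme in Java (class, method, and file names are all made up; synchronized stands in for the locking of both sections):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class RollingLog {
    private final Path archive;  // static, append-only history
    private final Path current;  // small writable "newest section"
    private final long maxBytes; // roll threshold (tune to taste)

    public RollingLog(Path archive, Path current, long maxBytes) {
        this.archive = archive;
        this.current = current;
        this.maxBytes = maxBytes;
    }

    // Append to the small "new section"; when it gets too big,
    // fold it into the archive and start a fresh one.
    public synchronized void append(String line) throws IOException {
        Files.write(current, (line + System.lineSeparator()).getBytes(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        if (Files.size(current) > maxBytes) {
            Files.write(archive, Files.readAllBytes(current),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            Files.write(current, new byte[0],
                    StandardOpenOption.TRUNCATE_EXISTING);
        }
    }

    // "Glue the static archive and the newest section together" for display.
    public synchronized String readAll() throws IOException {
        String head = Files.exists(archive) ? new String(Files.readAllBytes(archive)) : "";
        String tail = Files.exists(current) ? new String(Files.readAllBytes(current)) : "";
        return head + tail;
    }
}
```

Note that synchronized only serializes threads within one process; across processes you'd still want the file locking discussed earlier.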
OK, now we need a C# expert who can put this in context for you... :)
Jim
If I recall correctly, you are on ASP.NET. If that is so, then every time you want to write to the file, grab an Application object lock. That will effectively serialize all writes to the file. Any request to write to the file just queues up, so that page will take longer to be returned to the client, but it will eventually succeed. For reading, just don't bother grabbing the lock.
Make sure you open the file in write/shared-read mode (see the docs).
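In Java terms, that application-wide lock would just be one shared lock object that all writers synchronize on, while readers go straight to the file (names below are made up; in C# the same shape would be a lock statement on a shared object, opening the file with FileShare.Read):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class SharedLogFile {
    // One lock object shared by the whole application, like an
    // ASP.NET Application-level lock: every writer queues up here.
    private static final Object WRITE_LOCK = new Object();
    private static final Path FILE = Paths.get("app-log.txt"); // made-up name

    public static void appendLine(String line) throws IOException {
        synchronized (WRITE_LOCK) { // serializes all writes
            Files.write(FILE, (line + System.lineSeparator()).getBytes(),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }
    }

    public static String readAll() throws IOException {
        // Readers skip the lock, as suggested above; they may briefly
        // see a partially appended last line, which this design accepts.
        return Files.exists(FILE) ? new String(Files.readAllBytes(FILE)) : "";
    }
}
```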
+++