Forum Moderators: phranque


Multiple users writing to a file at the same time

How do you deal with this programmatically?


korkus2000

2:45 am on Oct 18, 2003 (gmt 0)

WebmasterWorld Senior Member 10+ Year Member



I want to use a txt file as a data source. How do you account for multiple users trying to write to the file at the same time? The file is locked during the read/write operation, so how do you get the file to append info without overwriting one of the updates?

jdMorgan

3:05 am on Oct 18, 2003 (gmt 0)

korkus2000,

I'm not sure I understand your question... File locking's purpose is to prevent exactly this. See the flock function in Perl, for example. File locking is a separate and distinct operation from file read and/or write protection. It lets a single process lock and "own" the file until that process unlocks it, making it available for modification by other processes again. In a multi-process environment like a server, file locking forces serial execution of the requesting processes. That is, the winner gets to read/write the file, and the losers have to wait until the winner is finished before they get their chance to read/write.
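As an editorial sketch of the serialization Jim describes (in Java rather than Perl, since it maps closely to the C# the original poster turns out to be using): each writer takes an exclusive OS-level lock around its append, so concurrent writers queue up and take turns. The file name and method names here are illustrative, not from the thread.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class LockedAppend {

    // Append a line while holding an exclusive lock on the file, so that
    // concurrent writers are forced into serial, one-at-a-time execution.
    public static void append(Path file, String line) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE,
                StandardOpenOption.WRITE,
                StandardOpenOption.APPEND)) {
            try (FileLock lock = ch.lock()) { // blocks until the lock is granted
                ch.write(ByteBuffer.wrap((line + "\n").getBytes()));
            } // lock released here; the next waiting writer proceeds
        }
    }

    public static void main(String[] args) throws IOException {
        Path f = Path.of("guestbook.txt");
        append(f, "first entry");
        append(f, "second entry");
        System.out.println(Files.readAllLines(f));
    }
}
```

Because the lock is taken around the whole open-write-close sequence, the "loser" can never clobber the "winner": by the time it acquires the lock, the winner's append is already on disk.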

I can explain more about file locking from a Perl (or even a hardware) standpoint, if you'll elaborate on your question a bit more.

Jim

korkus2000

4:10 am on Oct 18, 2003 (gmt 0)

That makes more sense. I mean lock as in only one user can do a write operation at a time. If I have lots of users submitting a form to append text to the file, like a guestbook, will the guy who lost out overwrite the entry of the person who won when he finally gets to write, or should the read and write happen as one operation? Does the OS handle this? Do I need to execute a sleep method so the script waits if an exception is raised? I have always used databases, which pretty much handle this for you.

BTW, I don't know Perl, but I can apply any concepts you have. I am using C#, which is pretty much Java.

jdMorgan

4:56 am on Oct 18, 2003 (gmt 0)

korkus2000,

Most programming languages have a method to implement file-locking. If a language does not have such a method, then you may have to call a routine in a different language which does.

Showing my age: it used to be common to have to call an assembly-language routine that would do a "Test and Set" operation on a memory location associated with the file. This "Test and Set" was "atomic," meaning "indivisible." That is, it tested the memory location's current value and set it (to all ones, IIRC) in a single, uninterruptible operation. After this was complete, the code would then examine the value the memory location held before the "Set." If the value was zero, then no one else had requested the file, and the process doing the test now owned the file. If the value was already all ones, then someone else had previously locked the file, and this process would have to wait, periodically repeating the Test and Set, and continue only when the initial value came back as zero. Meanwhile, the process that owned the file would do its thing, and then release the file by clearing that memory location back to zero.
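The Test-and-Set loop Jim describes survives almost verbatim in modern high-level languages as compare-and-set on an atomic flag. A minimal sketch in Java (class and method names are made up for illustration); the atomic compare-and-set plays the role of the old assembly routine, and the spin loop is the "wait and retry":

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class TestAndSetLock {
    // The "memory location associated with the file": false = free, true = owned.
    private final AtomicBoolean locked = new AtomicBoolean(false);

    // "Test and Set": atomically check the flag is clear and claim it in one
    // indivisible step. If we lose the race, retry until the owner releases.
    public void lock() {
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();
        }
    }

    // Release: clear the flag so a waiting process can win the next Test and Set.
    public void unlock() {
        locked.set(false);
    }

    public static void main(String[] args) throws InterruptedException {
        TestAndSetLock lock = new TestAndSetLock();
        int[] counter = {0};
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                lock.lock();
                counter[0]++; // critical section: only one thread at a time
                lock.unlock();
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter[0]); // 20000: no updates lost
    }
}
```

Without the lock, the two threads would interleave their read-increment-write sequences and lose updates, which is exactly the stale-copy overwrite problem the thread is about.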

All this level of detail is usually hidden in modern high-level languages, but is supported by all modern computer architectures. It's just a matter of figuring out HOW it is supported by your programming language. Not being a C# programmer, I can't tell you, but look for "file locking" or "atomic operations" - or some of the other keywords in the text above.

One detail: it is critical that only one process be allowed to access the file at a time. A second process must not be allowed to *read or write* the file until the first process is finished reading and writing. Otherwise, the second process will modify a stale copy of the file, overwrite the changes made by the first process, and the corruption you are worried about will happen. With this in mind, it becomes obvious that such files must be limited in size in order to prevent slowdowns. The permissible size of the file will depend on how many concurrent users you have, CPU speed and load on the server, the priority under which your code executes, and the speed of the file system (including hardware). Sharing files among multiple processes often requires design changes at a very high level to make the sharing efficient.

You do have one advantage, in that you are almost always going to be doing an append. That means that you can split the file, and make only a small section (the last, newest section) of it writable. The rest can be "archived" and remain static. When you need to display it, then glue the static archive and the "newest section" together, and display that. When the "new section" gets too big, close it, append it to the previous static section, and open a new "new section". You'll have to lock both sections while you do this, too. :)
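The split-file scheme above can be sketched in a few methods (Java again; the file names, the rollover threshold, and the use of a single in-process lock in place of real file locking are all assumptions for illustration):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;

public class SplitGuestbook {
    // Hypothetical rollover threshold for the small writable "new section".
    static final long MAX_NEW_BYTES = 64 * 1024;

    // Append only to the small "new section"; the archive stays static.
    public static void append(Path archive, Path current, String line) throws IOException {
        synchronized (SplitGuestbook.class) { // stands in for real file locking
            Files.writeString(current, line + "\n",
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            if (Files.size(current) > MAX_NEW_BYTES) {
                // Rollover: fold the new section into the archive (both are
                // "locked" here, as Jim notes), then start a fresh new section.
                Files.write(archive, Files.readAllBytes(current),
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
                Files.delete(current);
            }
        }
    }

    // To display, glue the static archive and the new section together.
    public static List<String> readAll(Path archive, Path current) throws IOException {
        List<String> out = new ArrayList<>();
        if (Files.exists(archive)) out.addAll(Files.readAllLines(archive));
        if (Files.exists(current)) out.addAll(Files.readAllLines(current));
        return out;
    }

    public static void main(String[] args) throws IOException {
        Path arch = Path.of("guestbook.archive"), cur = Path.of("guestbook.new");
        append(arch, cur, "first entry");
        append(arch, cur, "second entry");
        System.out.println(readAll(arch, cur));
    }
}
```

The payoff is that the lock is only ever contended over a small file, so the read-modify-write window stays short no matter how large the guestbook grows.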

OK, now we need a C# expert who can put this in context for you... :)

Jim

plumsauce

9:00 am on Oct 18, 2003 (gmt 0)

korkus2000,

If I recall, you are on ASP.NET. If so, then every time you want to write to the file, grab an Application object lock. That will effectively serialize all writes to the file. Any request to write to the file just queues up, so that page will take longer to be returned to the client, but it will eventually succeed. For reading, just don't bother grabbing the lock.

Make sure you open the file in write/shared read mode.
(see docs)
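The Application-lock idea isn't ASP.NET-specific: within a single server process, one shared lock object serializes the writers while readers skip it. A rough Java analogue of the same pattern (names are hypothetical, and this only works when all writers live in one process, as they do under a single Application object):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

public class AppLockWriter {
    // One lock shared by the whole application, playing the role of the
    // ASP.NET Application lock: writers queue up on it, one at a time.
    private static final ReentrantLock WRITE_LOCK = new ReentrantLock();

    public static void appendEntry(Path file, String line) throws IOException {
        WRITE_LOCK.lock(); // queued requests block here and take turns
        try {
            Files.writeString(file, line + "\n",
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } finally {
            WRITE_LOCK.unlock(); // always release, even if the write fails
        }
    }

    // Readers don't grab the lock; at worst they see the file mid-growth,
    // which is harmless for an append-only guestbook.
    public static List<String> readEntries(Path file) throws IOException {
        return Files.exists(file) ? Files.readAllLines(file) : List.of();
    }

    public static void main(String[] args) throws IOException {
        Path f = Path.of("guestbook.txt");
        appendEntry(f, "hello");
        System.out.println(readEntries(f));
    }
}
```

The try/finally around the write is the important part: a page that throws mid-write must still release the lock, or every later request queues up forever.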

+++