Safe to have multiple processes writing to the same file at the same time? [CentOS 6, ext4]

What you're doing seems perfectly OK, provided you're using the POSIX "raw" IO syscalls such as read(), write(), lseek() and so forth.

If you use C stdio (fread(), fwrite() and friends) or some other language runtime library that does its own userspace buffering, then the answer by "Tilo" is relevant: because of that buffering, which is to some extent outside your control, the different processes might overwrite each other's data.

Wrt OS locking, while POSIX states that writes or reads of size less than PIPE_BUF are atomic for some special files (pipes and FIFOs), there is no such guarantee for regular files. In practice, I think it's likely that IOs within a page are atomic, but there is no such guarantee. The OS only does locking internally to the extent that is necessary to protect its own internal data structures. One can use file locks, or some other interprocess communication mechanism, to serialize access to files. But all this is relevant only if you have several processes doing IO to the same region of a file. In your case, as your processes are doing IO to disjoint sections of the file, none of this matters, and you should be fine.
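
For illustration, here is a minimal sketch of that pattern (the file layout, region size and helper name are made up, not something from the question): each process does its IO with pwrite() at an offset inside its own region, so there is no shared file position to race on.

    /* Minimal sketch: each process writes only to its own, disjoint region
     * of a shared file using raw POSIX IO. Layout and sizes are made up. */
    #include <fcntl.h>
    #include <sys/types.h>
    #include <unistd.h>

    int write_my_region(const char *path, int proc_index,
                        const char *buf, size_t len)
    {
        /* Hypothetical layout: process i owns the byte range
         * [i * REGION_SIZE, (i + 1) * REGION_SIZE). */
        const off_t REGION_SIZE = 1 << 20;
        off_t offset = (off_t)proc_index * REGION_SIZE;

        int fd = open(path, O_WRONLY | O_CREAT, 0644);
        if (fd < 0)
            return -1;

        /* pwrite() writes at an explicit offset and does not use (or move)
         * the file position, so concurrent writers to disjoint regions
         * do not step on each other's seek pointer. */
        ssize_t n = pwrite(fd, buf, len, offset);

        close(fd);
        return (n == (ssize_t)len) ? 0 : -1;
    }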


No, generally it is not safe to do this!

You need to obtain an exclusive write lock for each process -- that implies that all the other processes will have to wait while one process is writing to the file. The more I/O-intensive processes you have, the longer the wait time.

It is better to have one output file per process, and to format those files with a timestamp and a process identifier at the beginning of each line, so that you can later merge and sort those output files offline.

Tip: check the file format of web-server log files -- these are written with the timestamp at the beginning of the line, so that they can later be combined and sorted.
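
For illustration, a minimal sketch of that approach in C (the file-name pattern, line format and helper name are just assumptions): each process appends to its own file, with a sortable timestamp and the PID at the start of every line, so the per-process files can be merged and sorted later.

    /* Sketch: one log file per process, each line prefixed with a
     * timestamp and the PID so the files can be merged and sorted later. */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    void log_line(const char *msg)
    {
        char path[64];
        snprintf(path, sizeof(path), "/tmp/myapp.%d.log", (int)getpid());

        FILE *f = fopen(path, "a");
        if (!f)
            return;

        time_t now = time(NULL);
        char ts[32];
        strftime(ts, sizeof(ts), "%Y-%m-%dT%H:%M:%S", localtime(&now));

        /* "timestamp pid message" -- sortable by the leading timestamp */
        fprintf(f, "%s %d %s\n", ts, (int)getpid(), msg);
        fclose(f);
    }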


EDIT

UNIX processes use a certain / fixed buffer size when they open files (e.g. 4096 bytes) to transfer data to and from the file on disk. Once the write buffer is full, the process flushes it to disk -- that means it writes the complete full buffer to disk! Please note that this happens when the buffer is full -- not when there is an end-of-line! That means that even for a single process which writes line-oriented text data to a file, those lines are typically cut somewhere in the middle at the time the buffer is flushed. Only at the end, when the file is closed after writing, can you assume that the file contains complete lines!

So depending on when your processes decide to flush their buffers, they write to the file at different times -- i.e. the order is not deterministic / predictable. When a buffer is flushed to the file, you cannot assume that it contains only complete lines -- it will usually end in a partial line, thereby messing up the output if several processes flush their buffers without synchronization.
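
To make the flush point concrete, here is a small sketch (file name and buffer size are arbitrary): with the default full buffering, data reaches the kernel only when the buffer fills, possibly mid-line; setvbuf() with line buffering, or an explicit fflush() after each complete line, moves the flush to the newline. This alone does not make unsynchronized concurrent writers safe -- it only controls where the cuts happen.

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("shared.log", "a");
        if (!f)
            return 1;

        /* Ask for line buffering instead of the default full buffering
         * used for regular files. */
        setvbuf(f, NULL, _IOLBF, 8192);

        fprintf(f, "a complete line, flushed at the newline\n");

        /* Alternatively, keep the default buffering and flush by hand
         * after each complete line: */
        fprintf(f, "another complete line\n");
        fflush(f);

        fclose(f);
        return 0;
    }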

Check this article on Wikipedia: http://en.wikipedia.org/wiki/File_locking#File_locking_in_UNIX

Quote:

The Unix operating systems (including Linux and Apple's Mac OS X, sometimes called Darwin) do not normally automatically lock open files or running programs. Several kinds of file-locking mechanisms are available in different flavors of Unix, and many operating systems support more than one kind for compatibility. The two most common mechanisms are fcntl(2) and flock(2). A third such mechanism is lockf(3), which may be separate or may be implemented using either of the first two primitives.

You should use either flock or mutexes to synchronize the processes and make sure that only one of them can write to the file at a time.
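
For example, here is a minimal sketch of serializing writers with flock(2) (the helper name and open flags are only an illustration; fcntl(2) locks would work similarly): each process takes an exclusive lock around its write, so only one of them touches the file at a time.

    #include <fcntl.h>
    #include <string.h>
    #include <sys/file.h>
    #include <unistd.h>

    int locked_append(const char *path, const char *line)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0)
            return -1;

        /* Blocks until no other process holds the lock. */
        if (flock(fd, LOCK_EX) != 0) {
            close(fd);
            return -1;
        }

        ssize_t n = write(fd, line, strlen(line));

        flock(fd, LOCK_UN);
        close(fd);
        return (n < 0) ? -1 : 0;
    }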

As I mentioned earlier, it is probably faster, easier and more straightforward to have one output file for each process, and then later combine those files if needed (offline). This approach is used by some web servers, for example, which need to log to multiple files from multiple threads -- and need to make sure that the different threads are all high-performing (e.g. not having to wait for each other on a file lock).


Here's a related post (check Mark Byers' answer -- the accepted answer is not correct/relevant):

Is it safe to pipe the output of several parallel processes to one file using >>?


EDIT 2:

In the comments you said that you want to write fixed-size binary data blocks from the different processes to the same file.

Only if your block size is exactly the system's file-buffer size could this work!

Make sure that your fixed block length is exactly the system's file-buffer size. Otherwise you will get into the same situation as with the incomplete lines: e.g. if you use 16k blocks and the system uses 4k blocks, then in general you will see 4k blocks in the file in seemingly random order -- there is no guarantee that you will always see 4 blocks in a row from the same process.
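
As a hedged sketch of keeping each block in one piece (BLOCK_SIZE and the helper name are assumptions, not a guarantee of atomicity): issue each block as a single raw write() on an O_APPEND descriptor, so there is no userspace buffer that can flush part of a block on its own schedule, and treat a short write as an error.

    #include <fcntl.h>
    #include <unistd.h>

    #define BLOCK_SIZE 4096   /* assumed block size, for illustration only */

    int append_block(int fd, const unsigned char block[BLOCK_SIZE])
    {
        /* One syscall per block -- no stdio buffer that could flush
         * part of a block at an arbitrary point. */
        ssize_t n = write(fd, block, BLOCK_SIZE);
        return (n == (ssize_t)BLOCK_SIZE) ? 0 : -1;
    }

    /* Each process would open the shared file once, e.g.:
     *   int fd = open("data.bin", O_WRONLY | O_CREAT | O_APPEND, 0644);
     */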