Post by Russell Lewis
I used -a on the tee command, and >> on the redirection of stdout, so that
neither would crop the log file (and presumably destroy the output of the
other), but I'm not well versed in what happens when there are multiple
open file pointers to the same file on disk.
Using tee -a causes tee to open the file with the O_APPEND flag.
man 2 open
O_APPEND
The file is opened in append mode. Before each write(2), the
file offset is positioned at the end of the file, as if with
lseek(2).
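That behavior is easy to see from the shell, since >> also opens the file
with O_APPEND. A minimal sketch (the temp file and the line text are just
for the example, not from the original thread):

```shell
log=$(mktemp)

# Open fd 3 on the log in append mode. O_APPEND is a property of the
# open file description, so it applies to every write through fd 3.
exec 3>>"$log"

echo "first via fd 3" >&3
echo "second via direct >>" >> "$log"   # a separate open of the same file
echo "third via fd 3" >&3               # still lands at EOF after the file grew

exec 3>&-
cat "$log"
```

Both descriptors append, so the three lines come out in order even though
they went through two independent opens of the file.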
At the kernel level, data is therefore always written to the end of the
file: every write(2) appends. Which gets back to the question of how much
data is chunked into each write(2) call. If the stream is line buffered,
it goes out line by line. If it is block buffered into larger chunks, then
it is whatever that larger chunk is.
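To illustrate the point, here is a sketch with two concurrent appenders,
one via >> and one via tee -a (the file and line text are made up for the
example). Whole write(2) chunks may interleave in any order, but with
O_APPEND neither writer clobbers the other's data:

```shell
log=$(mktemp)

# Writer A appends via shell redirection (>> opens with O_APPEND).
( for i in 1 2 3 4 5; do echo "writer-A line $i"; done ) >> "$log" &

# Writer B appends via tee -a, discarding tee's copy on stdout.
( for i in 1 2 3 4 5; do echo "writer-B line $i"; done ) | tee -a "$log" >/dev/null &

wait
wc -l < "$log"   # all 10 lines survive; their order depends on scheduling
```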
Bob
P.S. And for years that was all there was to it. But if you are using
NFS, which you are probably *not* and therefore do NOT need to worry
about this, then the following comes into play.
O_APPEND may lead to corrupted files on NFS filesystems if more
than one process appends data to a file at once. This is
because NFS does not support appending to a file, so the client
kernel has to simulate it, which can't be done without a race
condition.
That caveat only applies to NFS file systems, so don't let it scare you
away from using O_APPEND. NFS is not a POSIX-compliant file system and has
many cases that need special handling.
If you are using NFS, then what I do is work in a temporary directory in
/tmp, created using mktemp -d and cleaned up in the EXIT trap handler of
the script. That ensures I am writing to a POSIX-compliant file system.
Then at the end of the task I copy all of the data from the local client
back to the NFS-mounted file system. That also has the nice side effect of
protecting the running job from an overloaded NFS server, and it reduces
the load on the NFS file server too.
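That pattern might look roughly like this; the NFS destination path is a
hypothetical placeholder, not something from the post:

```shell
#!/bin/sh
# Sketch: do the heavy I/O on a local (POSIX) scratch directory,
# then copy results back to the NFS mount at the end.

workdir=$(mktemp -d) || exit 1
trap 'rm -rf "$workdir"' EXIT    # clean up even on error or interrupt

cd "$workdir" || exit 1

# ... the real job's writes happen here, on the local file system ...
echo "result data" > output.log

# At the end of the task, copy everything back to the NFS mount.
# NFS_DEST is a hypothetical example path; the real copy is left
# commented out for the sketch.
NFS_DEST="/nfs/project/results"
# cp -- * "$NFS_DEST"/
```

The EXIT trap is what makes the cleanup reliable: it runs whether the
script finishes normally or dies partway through.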