Discussion:
[Help-bash] shell redirection
Christof Warlich
2016-10-19 17:28:24 UTC
Hi,
looks like I still haven’t fully grasped shell redirection:
I want my script(s) to print stderr to the terminal, while both stdout
_/and/_ stderr should go to a logfile. I thought that this is an easy
task, but so far, I failed miserably.
Any ideas how this could be done, ideally without using any temporary
files or named pipes?
Many thanks,
Chris
Greg Wooledge
2016-10-19 18:09:49 UTC
Post by Christof Warlich
I want my script(s) to print stderr to the terminal, while both stdout
_/and/_ stderr should go to a logfile.
Not possible without hacks that will break synchronization between the
streams.
Post by Christof Warlich
Any ideas how this could be done, ideally without using any temporary
files or named pipes?
A temp file probably wouldn't help, because you'd lose *all* sync
information. Instead of having

stdOut1
stdErr1
O2
E2

you'd end up with

O1
O2
O3
E1
E2
E3

Probably not what you want.

You could set up two piped readers, one for each stream, and have them
both write to your log file (opened in *append* mode by each one).
Then the stderr reader (but not the stdout reader) would also write
to the terminal.

So, two process substitutions (which are roughly equivalent to named
pipes), and each one is reading in (presumably) a line-oriented way.
That's the best you're likely to get.
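
Something along these lines (an untested sketch; "myscript" and
"build.log" are just placeholder names):

# stdout reader appends to the log only; stderr reader appends to the
# log and also copies its lines back to the terminal's stderr.
./myscript \
  > >(tee -a build.log > /dev/null) \
  2> >(tee -a build.log >&2)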
Christof Warlich
2016-10-19 18:34:09 UTC
Post by Greg Wooledge
Post by Christof Warlich
I want my script(s) to print stderr to the terminal, while both stdout
_/and/_ stderr should go to a logfile.
Not possible without hacks that will break synchronization between the
streams.
Post by Christof Warlich
Any ideas how this could be done, ideally without using any temporary
files or named pipes?
A temp file probably wouldn't help, because you'd lose *all* sync
information. Instead of having
stdOut1
stdErr1
O2
E2
you'd end up with
O1
O2
O3
E1
E2
E3
Probably not what you want.
You could set up two piped readers, one for each stream, and have them
both write to your log file (opened in *append* mode by each one).
Then the stderr reader (but not the stdout reader) would also write
to the terminal.
So, two process substitutions (which are roughly equivalent to named
pipes), and each one is reading in (presumably) a line-oriented way.
That's the best you're likely to get.
Thanks for the quick response. So, as far as I understand,
a line-oriented piped reader approach would avoid mixing output
of stdout and stderr _within_ lines, but the sequence of whole lines
w.r.t. stdout and stderr may still be garbled ...?!

That's really rather disappointing: My idea was to make the
sequence of commands executed by my scripts (and possible
errors) visible on the terminal by employing set -x, while writing
a logfile containing _all_ information for "debugging" purposes
if needed.

This would be particularly useful for scripts that produce loads
of output, e.g. when generating toolchains.

Anyhow, thanks for caring :-)

Chris
Greg Wooledge
2016-10-19 18:38:21 UTC
Post by Christof Warlich
Thanks for the quick response. So, as far as I understand,
a line-oriented piped reader approach would avoid mixing output
of stdout and stderr _within_ lines, but the sequence of whole lines
w.r.t. stdout and stderr may still be garbled ...?!
Yes, that's correct. Imagine the two readers each receive one line,
a microsecond apart. If the kernel's scheduling causes one of them
to delay by 2 microseconds, it'll write its line last. Over the
course of a very long run, this could cause a few unexpected line
order switches.
Post by Christof Warlich
That's really rather disappointing: My idea was to make the
sequence of commands executed by my scripts (and possible
errors) visible on the terminal by employing set -x, while writing
a logfile containing _all_ information for "debugging" purposes
if needed.
This seems strange to me. I would expect the set -x output to be
much greater in size than the regular output, for most scripts.
So, suppressing stdout on the terminal doesn't seem like it would
help you a lot.
Post by Christof Warlich
This would be particularly useful for scripts that produce loads
of output, e.g. when generating toolchains.
I guess I never deal with that kind of script.
Bob Proulx
2016-10-19 22:04:48 UTC
Post by Greg Wooledge
Post by Christof Warlich
Thanks for the quick response. So, as far as I understand,
a line-oriented piped reader approach would avoid mixing output
of stdout and stderr _within_ lines, but the sequence of whole lines
w.r.t. stdout and stderr may still be garbled ...?!
Yes, that's correct. Imagine the two readers each receive one line,
a microsecond apart. If the kernel's scheduling causes one of them
to delay by 2 microseconds, it'll write its line last. Over the
course of a very long run, this could cause a few unexpected line
order switches.
Yes. In addition to that, which concerns line-buffered output, there
will be programs that buffer their stdout into chunks larger than a
line. The interleaving is always based upon the underlying chunk that
was written all at once. If that chunk is a line then it is a line.
If it is a 1024 byte chunk then that is what gets interleaved. Many
scripts may only ever deal with line buffered output, but others
might run programs that write bigger chunks.
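
If a program block-buffers because its stdout is not a terminal, GNU
coreutils' stdbuf can often coax it back to line buffering. A hedged
sketch ("some_tool" and "build.log" are made-up names, and this only
helps programs that rely on stdio's default buffering):

# With stdout going to a file or pipe, some_tool may write in 4 KiB
# blocks; stdbuf -oL requests line-buffered stdout instead.
stdbuf -oL some_tool >> build.log
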
Post by Greg Wooledge
Post by Christof Warlich
That's really rather disappointing: My idea was to make the
sequence of commands executed by my scripts (and possible
errors) visible on the terminal by employing set -x, while writing
a logfile containing _all_ information for "debugging" purposes
if needed.
This seems strange to me. I would expect the set -x output to be
much greater in size than the regular output, for most scripts.
So, suppressing stdout on the terminal doesn't seem like it would
help you a lot.
A filter program like grep, sed, or awk often produces as much output
as input, in which case the set -x output would be small by comparison.
But Unix filters are intentionally designed to support a particular
style: input goes through the filter to the output, errors are left
directed to the tty so that they are visible, and the exit code
reflects the error status regardless of what was written to stderr.
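
In that style only stdout is captured and the script acts on the exit
status; a small illustration (the file names here are made up):

# stdout goes to the capture file, stderr stays on the tty, and the
# exit code tells us whether the filter succeeded.
if grep -E 'error|warning' build-output.txt > findings.txt; then
    echo "matches saved in findings.txt"
else
    echo "no matches (or grep failed)" >&2
fi
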
Post by Greg Wooledge
Post by Christof Warlich
This would be particularly useful for scripts that produce loads
of output, e.g. when generating toolchains.
I guess I never deal with that kind of script.
I think the usual approach is to modify the program to write an
explicit log file if that is specifically desired. But if it is a
filter-like program such as grep, sed, or awk, then just worry about
the errors.

Bob
John McKown
2016-10-20 01:32:34 UTC
Post by Christof Warlich
Hi,
I want my script(s) to print stderr to the terminal, while both stdout
_/and/_ stderr should go to a logfile. I thought that this is an easy task,
but so far, I failed miserably.
Any ideas how this could be done, ideally without using any temporary
files or named pipes?
Many thanks,
Chris
I've read all the replies, and tried some really strange things myself.
The closest that I can come to what you want (and it is not all that close)
is to use the "script" program to run your program.

script -c 'somecmd p1 parm2 --option' logfile.txt

Both stdout & stderr come to the terminal & get written to logfile.txt.
There is no way, that I know of, to put stdout into "logfile.txt" but
suppress it from the terminal.
--
Heisenberg may have been here.

Unicode: http://xkcd.com/1726/

Maranatha! <><
John McKown
Russell Lewis
2016-10-20 04:02:20 UTC
As others have mentioned, there's no way to ensure that the lines reach the
logfile in order (that problem exists in the source program itself, and also
in the tee command below). But here's a command line which seems to work
(under trivial testing):
{ echo "fred"; echo "wilma" 1>&2; } 2> >(tee -a logfile) 1>>logfile

Basically, it redirects stderr to a process substitution (that is, it
connects stderr to the stdin of tee). The process substitution's stdout is
of course the same as the current shell - so stderr goes both to the log
file and to the tty; then we redirect stdout to point to the logfile as
well.

I used -a on the tee command, and >> on the redirection of stdout, so that
neither would truncate the log file (and presumably destroy the output of the
other), but I'm not well versed in what happens when there are multiple
open file descriptors to the same file on disk. From trivial testing, it
seems to work, but I worry that perhaps there are race conditions I haven't
explored. (I originally thought about using some magic with /dev/fd to
duplicate file descriptors, but I wasn't sure that it was necessary complexity
- or that I could get it totally correct.)
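
Applied to the original use case (set -x trace visible on the terminal,
everything captured in the log), the same pattern might look like this,
with "build.sh" and "build.log" as placeholder names:

bash -x ./build.sh 2> >(tee -a build.log) >> build.log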

Russ
Post by John McKown
Post by Christof Warlich
Hi,
I want my script(s) to print stderr to the terminal, while both stdout
_/and/_ stderr should go to a logfile. I thought that this is an easy task,
but so far, I failed miserably.
Any ideas how this could be done, ideally without using any temporary
files or named pipes?
Many thanks,
Chris
I've read all the replies, and tried some really strange things myself.
The closest that I can come to what you want (and it is not all that close)
is to use the "script" program to run your program.
script -c 'somecmd p1 parm2 --option' logfile.txt
Both stdout & stderr come to the terminal & get written to logfile.txt.
There is no way, that I know of, to put stdout into "logfile.txt" but
suppress it from the terminal.
--
Heisenberg may have been here.
Unicode: http://xkcd.com/1726/
Maranatha! <><
John McKown
Christof Warlich
2016-10-20 16:15:29 UTC
Hi Russ,
Post by Russell Lewis
{ echo "fred"; echo "wilma" 1>&2; } 2> >(tee -a logfile) 1>>logfile
thanks a lot, that's cool :-). And it serves my purpose amazingly well,
even with the sporadic line sequence issue. Anyhow, so far I have only
occasionally seen lines being out of sequence when using builtin shell
commands like echo, and that's not an issue in my use-case anyway.

Cheers,

Chris
Bob Proulx
2016-10-20 17:01:34 UTC
Post by Russell Lewis
I used -a on the tee command, and >> on the redirection of stdout, so that
neither would crop the log file (and presumably destroy the output of the
other), but I'm not well versed in what happens when there are multiple
open file pointers to the same file on disk.
Using tee -a causes tee to open the file for O_APPEND mode.

man 2 open

O_APPEND
The file is opened in append mode. Before each write(2), the
file offset is positioned at the end of the file, as if with
lseek(2).

With O_APPEND, data is always written to the end of the file at the
level of the kernel: every write(2) system call appends. Which gets
back to the question of how much data is chunked into each write(2)
call. If the output is line buffered then that is line by line. If it
is blocked into larger chunks then it is whatever that larger chunk
happens to be.
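
A quick throwaway test of the append behaviour ("same.log" is just a
scratch file name):

# Two background writers appending to the same file.  Each echo is one
# write(2) on a descriptor opened with O_APPEND, so every write lands
# at the current end of file and nothing gets overwritten, though the
# interleaving order between the two writers is unspecified.
rm -f same.log
for i in {1..100}; do echo "A $i"; done >> same.log &
for i in {1..100}; do echo "B $i"; done >> same.log &
wait
wc -l same.log    # expect 200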

Bob

P.S. And for years that was all there was to it. But if you are using
NFS, which you are probably *not* and therefore do NOT need to worry
about this, then the following comes into play.

O_APPEND may lead to corrupted files on NFS filesystems if more
than one process appends data to a file at once. This is
because NFS does not support appending to a file, so the client
kernel has to simulate it, which can't be done without a race
condition.

That only applies to NFS file systems. So don't let it scare you from
using it. NFS is not a POSIX compliant file system and has many cases
that need special handling.

If you are using NFS, then what I do is work in a temporary directory
in /tmp, created using mktemp -d and cleaned up in the EXIT trap
handler of the script. That ensures I am using a POSIX compliant file
system. Then at the end of the task I copy all of the data from the
local client back to the NFS mounted file system. That also has the
nice side effect of protecting the running job from an overloaded NFS
server and reducing the load on the file server.
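
A sketch of that workflow (the paths and script name are illustrative):

#!/bin/bash
# Do the work on a local, POSIX-behaved file system; copy back at the end.
tmpdir=$(mktemp -d) || exit 1
trap 'rm -rf "$tmpdir"' EXIT

./build.sh > "$tmpdir/build.log" 2>&1      # the job logs locally, off NFS

cp "$tmpdir"/build.log /nfs/project/logs/  # copy results to the NFS mount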
