Discussion: Killing subshells
Paul Wagner
2018-10-30 13:37:01 UTC
Dear bashers,

I am trying to create a simple script to schedule wget to grab radio
programmes. Since I occasionally lose the connection, I ended up with
something like

function job() {
    i=0
    while true
    do
        i=$((i+1))
        wget -a "$name.log" -O "$name-$i.mp3" "$url"
    done &
    sleep $length
    kill %
}

while true
do
    t=$(date '+%M %H')
    while read startmin starthour length url name
    do
        [[ $t == $startmin' '$starthour ]] && job &
    done < conf-file
    sleep 60
done

Unfortunately, the kill does not end wget. What am I missing?

Kind regards,

Paul
Jesse Hathaway
2018-10-30 14:41:02 UTC
Post by Paul Wagner
I am trying to create a simple script to schedule wget to grab radio
programmes. Since I occasionally lose the connection, I ended up with
something like
...
Unfortunately, the kill does not end wget. What am I missing?
Paul,

This is happening because your script does not have job control
enabled, so background jobs are not started in their own process
groups; instead they are part of the script's process group. As a
result, kill % signals only the subshell running the while loop, and
the wget it spawned keeps running. I tested by enabling job control in
a script and killing the entire process group by passing a negative
PID to kill. With those two changes, the children are killed:

#!/bin/bash

set -o errexit
# enable job control so we get process groups
set -m

function job() {
    while true; do
        sleep 9999
    done &
    printf 'Job: %s\n' "$!"
    pstree -gp "$$"
    sleep 1
    kill -9 -"$!"
    wait
}

if job; then
    printf 'Children Left: %s\n' "$(pgrep -f 9999 | wc -l)"
fi
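Applying the same two changes to your original job function, a sketch
might look like this (I kept your variable names; untested against
your conf-file setup):

```shell
#!/bin/bash
# With job control enabled, each background job becomes the leader of
# its own process group, so one kill can take down both the loop
# subshell and the wget running inside it.
set -m

function job() {
    local i=0
    while true; do
        i=$((i+1))
        wget -a "$name.log" -O "$name-$i.mp3" "$url"
    done &
    local pid=$!        # leader PID of the new process group
    sleep "$length"
    kill -- -"$pid"     # negative PID: signal the whole group
}
```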
Bob Proulx
2018-10-31 03:57:49 UTC
Post by Paul Wagner
Dear bashers,
I am trying to create a simple script to schedule wget to grab radio
programmes. Since I occasionally lose the connection, I ended up with
something like
...
Post by Paul Wagner
kill %
Jesse has already pointed out the main problem you asked about.
However, since you are using wget, note that wget can retry on its
own. It might be a more robust solution to use wget's retry options
with shorter timeouts. Perhaps something like this:

wget --retry-connrefused --waitretry=10 --read-timeout=30 --timeout=20 -t 0

It is also important to always check the exit code, and if you want
you could loop and retry upon error. Perhaps something like this:

until wget --retry-connrefused --waitretry=10 --read-timeout=30 --timeout=20 -t 0 \
        -a "$name.log" -O "$name-$i.mp3" "$url"; do
    sleep 30
done

Of course that loops indefinitely, but exiting after a fixed number
of attempts is reasonable too.

However, if you want to put things into the background, then I
recommend always capturing the PID from $! into a variable. Then set
up a trap handler so that upon exit any background job can be sent the
signal. Perhaps something like this quickly created snippet:

#!/bin/sh
unset rc
cleanup() {
    test -n "$rc" && kill $rc && unset rc
}
trap "cleanup" EXIT
trap "cleanup; trap - HUP; kill -HUP $$" HUP
trap "cleanup; trap - INT; kill -INT $$" INT
trap "cleanup; trap - QUIT; kill -QUIT $$" QUIT
trap "cleanup; trap - TERM; kill -TERM $$" TERM
sleep 30 &
rc=$!
sleep 10

That way an interrupt with Control-C will also send a kill to the
background task.
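Note that killing the plain PID is enough for a single background
command like the sleep above, but for your wget-in-a-loop you would
still want Jesse's process-group kill. Combining the two ideas, perhaps
something like this sketch (bg_pgid is just an illustrative name):

```shell
#!/bin/bash
# Job control: the background loop becomes its own process group,
# and the trap handler kills the whole group on exit or interrupt.
set -m
unset bg_pgid
cleanup() {
    if test -n "$bg_pgid"; then
        kill -- -"$bg_pgid" 2>/dev/null
        unset bg_pgid
    fi
}
trap "cleanup" EXIT
trap "cleanup; trap - INT; kill -INT $$" INT
trap "cleanup; trap - TERM; kill -TERM $$" TERM

record() {
    local i=0
    while true; do
        i=$((i+1))
        wget -a "$name.log" -O "$name-$i.mp3" "$url"
    done &
    bg_pgid=$!          # group leader PID under set -m
    sleep "$length"
    cleanup
}
```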

Hope that helps!
Bob
