Question

I've got a Python script that calls ffmpeg via subprocess to do some mp3 manipulations. It works fine in the foreground, but if I run it in the background, it gets as far as the ffmpeg command, which itself gets as far as dumping its config to stderr. At that point everything stops and the parent task is reported as stopped, without an exception being raised anywhere. I've tried a few other simple commands in place of ffmpeg; they execute normally in the foreground or background.

This is the minimal example of the problem:

import subprocess

inf = "3HTOSD.mp3"
outf = "out.mp3"

args = [    "ffmpeg",
            "-y",
            "-i",   inf,
            "-ss",  "0",
            "-t",   "20",
            outf
        ]

print "About to do"

result = subprocess.call(args)

print "Done"

I really can't work out why or how a wrapped process can cause the parent to terminate without at least raising an error, or why it only happens in such a niche circumstance. What is going on?

Also, I'm aware that ffmpeg isn't the nicest of packages, but I'm interfacing with something that has ffmpeg compiled into it, so using it here as well seems sensible.

Solution

It might be related to Linux process in background - “Stopped” in jobs? e.g., using parent.py:

from subprocess import check_call

check_call(["python", "-c", "import sys; sys.stdin.readline()"])

should reproduce the issue (the parent.py script shown as Stopped) if you run it in bash as a background job:

$ python parent.py &
[1] 28052
$ jobs
[1]+  Stopped                 python parent.py

When a process in a background job tries to read from the controlling terminal, it is sent the SIGTTIN signal, whose default action is to stop the process. That is why the job shows up as Stopped.
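As an illustration (not part of the original answer), you can check from Python whether the script is running in the foreground by comparing its process group with the terminal's foreground process group; a background job that reads from the terminal is the one that gets stopped by SIGTTIN:

import os
import sys

try:
    # A background job's process group differs from the terminal's foreground group
    in_foreground = os.getpgrp() == os.tcgetpgrp(sys.stdin.fileno())
except OSError:  # stdin is not a terminal at all (e.g. already redirected)
    in_foreground = False

print("foreground" if in_foreground else "background")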

The solution is to redirect the input:

import os
from subprocess import check_call
try:
    from subprocess import DEVNULL
except ImportError: # Python 2
    DEVNULL = open(os.devnull, 'r+b', 0)

check_call(["python", "-c", "import sys; sys.stdin.readline()"], stdin=DEVNULL)

If you don't need to see ffmpeg's stdout/stderr, you could also redirect them to /dev/null:

from subprocess import STDOUT

check_call(ffmpeg_cmd, stdin=DEVNULL, stdout=DEVNULL, stderr=STDOUT)
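Putting it together, here is a minimal sketch of the question's script with this fix applied (the file names and ffmpeg arguments are taken from the question; DEVNULL is set up as above):

import os
from subprocess import check_call, STDOUT
try:
    from subprocess import DEVNULL  # Python 3.3+
except ImportError:  # Python 2
    DEVNULL = open(os.devnull, 'r+b', 0)

inf = "3HTOSD.mp3"
outf = "out.mp3"
args = ["ffmpeg", "-y", "-i", inf, "-ss", "0", "-t", "20", outf]

# ffmpeg can no longer try to read from the terminal, so the background job is not stopped
check_call(args, stdin=DEVNULL, stdout=DEVNULL, stderr=STDOUT)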

Other tips

I like to use the commands module; it's simpler to use, in my opinion. (Note that commands is Python 2 only; it was removed in Python 3 in favour of subprocess.)

import commands
cmd = "ffmpeg -y -i %s -ss 0 -t 20 %s 2>&1" % (inf, outf)
status, output = commands.getstatusoutput(cmd)
if status != 0:
    raise Exception(output)
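For Python 3, a rough equivalent sketch using subprocess.run (with inf and outf as in the question):

import subprocess

inf = "3HTOSD.mp3"
outf = "out.mp3"

# Capture stdout and stderr together, mirroring the 2>&1 redirect above
result = subprocess.run(["ffmpeg", "-y", "-i", inf, "-ss", "0", "-t", "20", outf],
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
if result.returncode != 0:
    raise Exception(result.stdout.decode(errors="replace"))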

As a side note, sometimes PATH can be an issue, and you might want to use an absolute path to the ffmpeg binary.

matt@goliath:~$ which ffmpeg
/opt/local/bin/ffmpeg
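If PATH does turn out to be the problem, one way to resolve the absolute path from Python is shutil.which (Python 3.3+); this is just a sketch of that idea:

import shutil

ffmpeg_path = shutil.which("ffmpeg")  # same lookup the shell's `which` performs
if ffmpeg_path is None:
    raise RuntimeError("ffmpeg not found on PATH")

args = [ffmpeg_path, "-y", "-i", "3HTOSD.mp3", "-ss", "0", "-t", "20", "out.mp3"]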

From the python/subprocess/call documentation:

Wait for command to complete, then return the returncode attribute.

So as long as the process you called does not exit, your program does not go on.

You could instead create a Popen object, redirect its standard error into a pipe, and terminate the process when an error shows up there.

Maybe something like this works:

import subprocess

proc = subprocess.Popen(args, stderr=subprocess.PIPE)  # capture stderr in a separate pipe
while proc.poll() is None:        # loop while the process is still running
    err = proc.stderr.readline()  # read stderr a line at a time
    if err:
        proc.terminate()
        break
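A simpler variant of the same idea, if you only need to inspect stderr after ffmpeg exits, is Popen.communicate(); a sketch (Python 3), with the ffmpeg command taken from the question:

import subprocess

args = ["ffmpeg", "-y", "-i", "3HTOSD.mp3", "-ss", "0", "-t", "20", "out.mp3"]
proc = subprocess.Popen(args, stderr=subprocess.PIPE)
_, err = proc.communicate()  # waits for ffmpeg to exit and collects its stderr
if proc.returncode != 0:
    raise RuntimeError(err.decode(errors="replace"))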
Licensed under: CC-BY-SA with attribution