Question

I have problems using the subprocess module to obtain the output of crashed programs. I'm using Python 2.7 and subprocess to call a program with strange arguments in order to get some segfaults. To call the program, I use the following code:

import subprocess

proc = subprocess.Popen(called,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
out, err = proc.communicate()
print out, err

called is a list containing the name of the program and the argument (a string of random bytes, excluding the NULL byte, which subprocess doesn't accept at all).

The code behaves as expected and shows me the stdout and stderr when the program doesn't crash, but when it does crash, out and err are empty instead of showing the famous "Segmentation fault".

I wish to find a way to obtain out and err even when the program crashes.

I also tried the check_output / call / check_call methods.

Some additional information:

  • I'm running this script on 64-bit Arch Linux in a Python virtual environment (that shouldn't matter here, but you never know :p)

  • The segfault happens in the C program I'm trying to run and is a consequence of a buffer overflow

  • The problem is that when the segfault occurs, I can't get the output of what happened with subprocess

  • I get the returncode right: -11 (SIGSEGV)

  • Using Python I get:

      ./dumb2 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA 
      ('Exit code was:', -11) 
      ('Output was:', '') 
      ('Errors were:', '')
    
  • While outside Python, in the shell, I get:

     ./dumb2 $(perl -e "print 'A'x50")  
     BEGINNING OF PROGRAM 
     AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
     END OF THE PROGRAM
     Segmentation fault (core dumped)
    
  • The return value in the shell tells the same story: echo $? returns 139, i.e. 128 + 11, which matches the -11 (a short sketch of the arithmetic follows this list)
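For reference, a minimal sketch of that arithmetic (assuming Linux, where SIGSEGV is signal 11; the numbers are the ones from the runs above):

import signal

shell_status = 139                            # what `echo $?` reports after the crash
assert shell_status - 128 == signal.SIGSEGV   # 139 = 128 + 11
# subprocess reports the same signal as a negative returncode:
assert -(shell_status - 128) == -signal.SIGSEGV   # i.e. -11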


Solution 2

Came back here: it works like a charm with subprocess from Python 3, and if you are on Linux there is a backport to Python 2 called subprocess32 which works quite well.
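A minimal sketch of that newer approach, assuming the subprocess32 backport is installed on Python 2 (pip install subprocess32); on Python 3 the plain subprocess module is used the same way. The called list here just mirrors the example from the question:

import subprocess32 as subprocess

called = ["./dumb2", "A" * 50]  # example program and argument from the question
proc = subprocess.Popen(called,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
out, err = proc.communicate()
print out, err
print proc.returncode  # -11 when the child is killed by SIGSEGV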

Older solution: I used pexpect and it works:

import pexpect  # $ pip install pexpect

def cmd_line_call(name, args):
    child = pexpect.spawn(name, args)
    # Wait for the end of the output
    child.expect(pexpect.EOF)
    out = child.before  # all the data before the EOF (stderr and stdout interleaved)
    child.close()  # that will set the return code for us
    # signalstatus and exitstatus are treated the same here (fine for my purpose)
    if child.exitstatus is None:
        returncode = child.signalstatus
    else:
        returncode = child.exitstatus
    return (out, returncode)
    

PS: it is a little slower (because it spawns a pseudo-tty).
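Usage might look something like this (it relies on the cmd_line_call helper above; ./dumb2 and the argument are just the names from the question):

out, returncode = cmd_line_call("./dumb2", ["A" * 50])
print out          # stdout and stderr interleaved, as seen on the pseudo-tty
print returncode   # e.g. 11 (the signal number) when the child segfaults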

Other tips

"Segmentation fault" message might be generated by a shell. To find out, whether the process is kill by SIGSEGV, check proc.returncode == -signal.SIGSEGV.

If you want to see the message, you could run the command in the shell:

#!/usr/bin/env python
from subprocess import Popen, PIPE

proc = Popen(shell_command, shell=True, stdout=PIPE, stderr=PIPE)
out, err = proc.communicate()
print out, err, proc.returncode

I've tested it with shell_command="python -c 'from ctypes import *; memset(0,1,1)'", which causes a segfault, and the message is captured in err.

If the message is printed directly to the terminal then you could use pexpect module to capture it:

#!/usr/bin/env python
from pipes import quote
from pexpect import run # $ pip install pexpect

out, returncode = run("sh -c " + quote(shell_command), withexitstatus=1)
signal = returncode - 128 # 128+n
print out, signal

Or using pty stdlib module directly:

#!/usr/bin/env python
import os
import pty
from select import select
from subprocess import Popen, STDOUT

# use pseudo-tty to capture output printed directly to the terminal
master_fd, slave_fd = pty.openpty()
p = Popen(shell_command, shell=True, stdin=slave_fd, stdout=slave_fd,
          stderr=STDOUT, close_fds=True)
buf = []
while True:
    if select([master_fd], [], [], 0.04)[0]: # has something to read
        data = os.read(master_fd, 1 << 20)
        if data:
            buf.append(data)
        else: # EOF
            break
    elif p.poll() is not None: # process is done
        assert not select([master_fd], [], [], 0)[0] # nothing to read
        break
os.close(slave_fd)
os.close(master_fd)
print "".join(buf), p.returncode-128

import subprocess

proc = subprocess.Popen(called, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

print(proc.stdout.read())
print(proc.stderr.read())

This should work better.
Personally I'd go with:

from subprocess import Popen, PIPE

handle = Popen(called, stdout=PIPE, stderr=PIPE)  # called is already a list of args, so no shell=True
output = ''
error = ''

while handle.poll() is None:
    output += handle.stdout.readline()  # readline() already includes the newline
    error += handle.stderr.readline()

# drain whatever is still buffered once the process has exited
output += handle.stdout.read()
error += handle.stderr.read()

handle.stdout.close()
handle.stderr.close()

print('Exit code was:', handle.poll())
print('Output was:', output)
print('Errors were:', error)

And I'd probably use epoll() on stderr if possible, as reading it sometimes blocks because it's empty, which is why I end up passing stderr=STDOUT when I'm lazy.
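A minimal sketch of that lazy variant: merging stderr into stdout means there is only one pipe to read, so nothing blocks on an empty stderr (called is the same list as in the question):

from subprocess import Popen, PIPE, STDOUT

handle = Popen(called, stdout=PIPE, stderr=STDOUT)
output = handle.communicate()[0]  # stdout and stderr interleaved

print('Exit code was:', handle.returncode)
print('Output was:', output)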

License: CC-BY-SA with attribution
Not affiliated with Stack Overflow