Question

I'm doing something similar to this question but have a more subtle problem.

I have an API client class that makes HTTP requests. I store the QNetworkReply on the object so I can access its data from the slot connected to the "finished" signal. On the next request this attribute is rebound to the next QNetworkReply, so Python should be able to free the previous reply object and, with it, the underlying network resources. Instead, the old reply objects seem to get stuck somewhere, causing a memory leak, and if the app runs long enough, a delay on quit, presumably because all the requests ever issued are finally being deleted.

Simplified but complete example:

import sys, signal
from PySide import QtCore, QtNetwork

class Poller(QtCore.QObject):
    url = QtCore.QUrl("http://localhost:5000/")

    def __init__(self, parent=None):
        super(Poller,self).__init__(parent)
        self.manager = QtNetwork.QNetworkAccessManager()

    def start(self):
        request = QtNetwork.QNetworkRequest(self.url)
        self.reply = self.manager.get(request)
        self.reply.finished.connect(self.readReply)
        self.reply.error.connect(self.error)

    def readReply(self):
        text = self.reply.readAll()
        self.reply.close() # doesn't help
        self.reply.deleteLater() # doesn't help
        QtCore.QTimer.singleShot(10, self.start)

    @QtCore.Slot(QtNetwork.QNetworkReply.NetworkError)
    def error(self, err):
        self.reply.finished.disconnect()
        sys.stderr.write('Error: ' + str(err) + '\n')
        QtCore.QTimer.singleShot(10, self.start)

signal.signal(signal.SIGINT, signal.SIG_DFL)
app = QtCore.QCoreApplication(sys.argv)
p = Poller()
p.start()
app.exec_()

The HTTP server being polled doesn't much matter; for this test I'm using the Flask hello-world app. In fact, since this example doesn't use connection keep-alive, netstat shows a steadily increasing number of zombie TCP connections in the TIME_WAIT state, and eventually I start getting PySide.QtNetwork.QNetworkReply.NetworkError.UnknownNetworkError once 30,000+ ports have been used up: further evidence that the QNetworkReply is not being properly freed.

The same problem happens with both PySide and PyQt4. What am I doing wrong, or could this be a bug?


Solution

First, it turns out to be normal for TCP connections to linger in TIME_WAIT for a while after closing. I was only exhausting ports because I had set the singleShot timer to 0 ms for testing.

I rewrote the example in C++. deleteLater worked as expected, and I could reproduce both the memory growth and slow quit by omitting it. (Since the memory is managed by Qt, all the reply objects had to be deleted by the QNetworkAccessManager destructor).

Interestingly, on closer examination, the slow quit does not happen in Python when deleteLater is used, but the memory growth does. So I guess the C++ object is getting deleted but there are still resources being used somewhere. This is still mysterious to me.

The fix, though, is to call setParent(None) on the QNetworkReply. This can be done at any time, even as soon as it is returned from the QNetworkAccessManager. The reply is initially parented to the manager; changing the parent to null means Qt is no longer responsible for its memory, so it is handled properly by Python's reference counting, even without deleteLater.

(I found this hint somewhere online; unfortunately I can't find it now, or I would link it.)

Edit: I thought this worked in my testing, but my app is still leaking memory.

Edit 2: I have no leak with PyQt4 on Python 2, but I do with PySide, and with both on Python 3.

Licensed under: CC-BY-SA with attribution