Question

I am using a PersistentConnection for publishing large amounts of data (many small packages) to the connected clients.

The data flow is essentially one-way (each client calls endpoints on other servers to set up its various subscriptions, so it will not push any data back to the server via the SignalR connection).

Is there any way to detect that the client cannot keep up with the messages sent to it?

A typical example would be a mobile client on a poor connection (e.g. while roaming, where the speed may vary a lot). If we are sending 100 messages per second but the client can only handle 10, we will eventually lose messages once the server-side message buffer overflows.
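
For reference, the buffer I mean is SignalR's per-connection message buffer. As far as I can tell it can only be resized at startup, not monitored, which merely postpones the problem for a persistently slow client. A minimal sketch (the class name SignalRBufferConfig is just a placeholder of mine):

using Microsoft.AspNet.SignalR;

public static class SignalRBufferConfig
{
    // Called from Application_Start (SignalR 2). The default is 1000 messages
    // per connection; raising it only delays message loss for a slow client,
    // it does not detect one.
    public static void Configure()
    {
        GlobalHost.Configuration.DefaultMessageBufferSize = 2000;
    }
}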

I was looking for a server-side event, similar to what already exists on the SignalR client, e.g.

protected override Task OnConnectionSlow(IRequest request, string connectionId) {}

but that is not part of the framework (for good reasons, I assume).

I have considered the approach (suggested elsewhere on Stack Overflow) of letting the client tell the server (e.g. every 10-30 seconds) how many messages it has received; if that number differs significantly from the number of messages sent to the client, the client is probably not keeping up.

Such an event would be used to tell the distributed backend that the client cannot keep up, so the data generation rate can be turned down.
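
A rough, untested sketch of what I have in mind (FeedbackConnection, SendTracked and OnClientFallingBehind are placeholder names of mine, not SignalR APIs; the client would post back a small JSON ack like { "received": n }):

using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;
using Newtonsoft.Json;

public class FeedbackConnection : PersistentConnection
{
    // Per-connection counters: messages pushed by the server vs. messages the client acknowledged.
    // Not fully thread-safe; this is only a sketch of the idea.
    private static readonly ConcurrentDictionary<string, ClientProgress> Progress =
        new ConcurrentDictionary<string, ClientProgress>();

    // The backend publishes through this instead of calling Connection.Send directly,
    // so the sent count stays up to date.
    public static Task SendTracked(string connectionId, object payload)
    {
        var progress = Progress.GetOrAdd(connectionId, _ => new ClientProgress());
        progress.Sent++;
        var context = GlobalHost.ConnectionManager.GetConnectionContext<FeedbackConnection>();
        return context.Connection.Send(connectionId, payload);
    }

    // The client posts { "received": n } every 10-30 seconds.
    protected override Task OnReceived(IRequest request, string connectionId, string data)
    {
        var ack = JsonConvert.DeserializeObject<AckMessage>(data);
        var progress = Progress.GetOrAdd(connectionId, _ => new ClientProgress());
        progress.Received = ack.Received;

        if (progress.Sent - progress.Received > 500)
        {
            // The client is far behind: tell the distributed backend to throttle this client.
            OnClientFallingBehind(connectionId, progress.Sent, progress.Received);
        }
        return base.OnReceived(request, connectionId, data);
    }

    private static void OnClientFallingBehind(string connectionId, long sent, long received)
    {
        // Placeholder: publish to the backend so it can turn down the data rate for this client.
    }

    private class AckMessage { public long Received { get; set; } }
    private class ClientProgress { public long Sent; public long Received; }
}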

Solution

There's no way to do this right now other than coding something custom. We have discussed this in the past as a potential feature, but it isn't anywhere on the roadmap right now. It's also not clear what "slow" means, as that's up to the application to decide. There'd probably have to be some kind of bandwidth-, time-, or message-based setting that would make this hypothetical event trigger.

If you want to hook in at a really low level, you could use OWIN middleware to replace the client's underlying stream with one that you own, so that you'd see all of the data going over the wire (you'd have to do the same for WebSockets, though, and that might be non-trivial).

Once you have that, you could write some time-based logic that determines whether a flush is taking too long, and kill the client connection that way.

That's very fuzzy, but it's basically a brain dump of how a feature like this could work.
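
Very roughly, something like this (untested; SlowClientDetectionMiddleware and TimingStream are made-up names, only the synchronous write/flush path is shown, and as noted above WebSockets would need separate handling):

using System;
using System.Diagnostics;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Owin;

// Wraps the response body stream so you can observe every write/flush going to the client.
public class SlowClientDetectionMiddleware : OwinMiddleware
{
    private readonly TimeSpan _flushThreshold;

    public SlowClientDetectionMiddleware(OwinMiddleware next, TimeSpan flushThreshold)
        : base(next)
    {
        _flushThreshold = flushThreshold;
    }

    public override Task Invoke(IOwinContext context)
    {
        // Swap the real body stream for one that measures how long flushes take.
        context.Response.Body = new TimingStream(context.Response.Body, _flushThreshold);
        return Next.Invoke(context);
    }
}

// Delegates everything to the real stream, but times flushes.
public class TimingStream : Stream
{
    private readonly Stream _inner;
    private readonly TimeSpan _threshold;

    public TimingStream(Stream inner, TimeSpan threshold)
    {
        _inner = inner;
        _threshold = threshold;
    }

    public override void Flush()
    {
        var sw = Stopwatch.StartNew();
        _inner.Flush();
        if (sw.Elapsed > _threshold)
        {
            // Flushing to this client took too long: treat it as slow
            // (log it, raise your own event, or kill the connection).
        }
    }

    public override void Write(byte[] buffer, int offset, int count) { _inner.Write(buffer, offset, count); }
    public override bool CanRead { get { return _inner.CanRead; } }
    public override bool CanSeek { get { return _inner.CanSeek; } }
    public override bool CanWrite { get { return _inner.CanWrite; } }
    public override long Length { get { return _inner.Length; } }
    public override long Position
    {
        get { return _inner.Position; }
        set { _inner.Position = value; }
    }
    public override int Read(byte[] buffer, int offset, int count) { return _inner.Read(buffer, offset, count); }
    public override long Seek(long offset, SeekOrigin origin) { return _inner.Seek(offset, origin); }
    public override void SetLength(long value) { _inner.SetLength(value); }
}

You'd register it in the OWIN startup class before SignalR, with something like app.Use<SlowClientDetectionMiddleware>(TimeSpan.FromSeconds(5)) ahead of app.MapSignalR(...), so every connection's response stream gets wrapped.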

Licensed under: CC-BY-SA with attribution