It turned out this was due to re-entrancy of the request function. Because I was unlocking in the middle to allow IRQs to come in, the request function could be called again and would take the lock (while the original request handler was waiting on IO); the wrong handler would then get the IRQ, and everything went south in a cascade of failed IO.
The way I solved this was to set a "busy" flag at the start of the request function, clear it at the end, and return immediately on entry if the flag is already set:
static void mydev_submit_req(struct request_queue *q)
{
    struct mydevice *dev = q->queuedata;

    // We are already processing a request,
    // so reentrant calls can take a hike.
    // They'll be back.
    if (dev->has_request)
        return;

    // We own the IO now; new requests need to wait.
    // The queue lock is held when this function is called,
    // so no need for an atomic set.
    dev->has_request = 1;

    // Access the request queue here, while the queue lock is held.

    spin_unlock_irq(q->queue_lock);

    // Perform IO here, with IRQs enabled.
    // You can't access the queue or the request here, so make sure
    // you got the info you need out before you released the lock.

    spin_lock_irq(q->queue_lock);

    // You can end the requests as needed here, with the lock held.

    // Allow new requests to be processed after we return;
    // the lock is still held when the function returns.
    dev->has_request = 0;
}
I am still not sure why I consistently got the stack trace from swiotlb_unmap_sg_attrs(), however.