Commit 1b8f21b7 authored by Ming Lei, committed by Jens Axboe

blk-mq: introduce blk_mq_complete_request_sync()

NVMe's error handler follows the typical steps of tearing down
hardware to recover the controller:

1) stop blk_mq hw queues
2) stop the real hw queues
3) cancel in-flight requests via
	blk_mq_tagset_busy_iter(tags, cancel_request, ...),
	marking each request as aborted
4) destroy real hw queues

However, there may be a race between #3 and #4, because
blk_mq_complete_request() may run q->mq_ops->complete(rq) remotely and
asynchronously, so ->complete(rq) may run after #4.

This patch introduces blk_mq_complete_request_sync() to fix the
above race.

Cc: Sagi Grimberg <>
Cc: Bart Van Assche <>
Cc: James Smart <>
Reviewed-by: Keith Busch <>
Reviewed-by: Christoph Hellwig <>
Signed-off-by: Ming Lei <>
Signed-off-by: Jens Axboe <>
parent 1978f30a
block/blk-mq.c
@@ -654,6 +654,13 @@ bool blk_mq_complete_request(struct request *rq)
 }
 EXPORT_SYMBOL(blk_mq_complete_request);
 
+void blk_mq_complete_request_sync(struct request *rq)
+{
+	WRITE_ONCE(rq->state, MQ_RQ_COMPLETE);
+	rq->q->mq_ops->complete(rq);
+}
+EXPORT_SYMBOL_GPL(blk_mq_complete_request_sync);
+
 int blk_mq_request_started(struct request *rq)
 {
 	return blk_mq_rq_state(rq) != MQ_RQ_IDLE;
include/linux/blk-mq.h
@@ -302,6 +302,7 @@ void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list);
 void blk_mq_kick_requeue_list(struct request_queue *q);
 void blk_mq_delay_kick_requeue_list(struct request_queue *q, unsigned long msecs);
 bool blk_mq_complete_request(struct request *rq);
+void blk_mq_complete_request_sync(struct request *rq);
 bool blk_mq_bio_list_merge(struct request_queue *q, struct list_head *list,
 		struct bio *bio);
 bool blk_mq_queue_stopped(struct request_queue *q);