1. 27 Sep, 2019 1 commit
      block: fix null pointer dereference in blk_mq_rq_timed_out() · 8d699663
      Yufen Yu authored
We hit a null pointer dereference BUG in blk_mq_rq_timed_out(),
as follows:
      [  108.825472] BUG: kernel NULL pointer dereference, address: 0000000000000040
      [  108.827059] PGD 0 P4D 0
      [  108.827313] Oops: 0000 [#1] SMP PTI
      [  108.827657] CPU: 6 PID: 198 Comm: kworker/6:1H Not tainted 5.3.0-rc8+ #431
      [  108.829503] Workqueue: kblockd blk_mq_timeout_work
      [  108.829913] RIP: 0010:blk_mq_check_expired+0x258/0x330
      [  108.838191] Call Trace:
      [  108.838406]  bt_iter+0x74/0x80
      [  108.838665]  blk_mq_queue_tag_busy_iter+0x204/0x450
      [  108.839074]  ? __switch_to_asm+0x34/0x70
      [  108.839405]  ? blk_mq_stop_hw_queue+0x40/0x40
      [  108.839823]  ? blk_mq_stop_hw_queue+0x40/0x40
      [  108.840273]  ? syscall_return_via_sysret+0xf/0x7f
      [  108.840732]  blk_mq_timeout_work+0x74/0x200
      [  108.841151]  process_one_work+0x297/0x680
      [  108.841550]  worker_thread+0x29c/0x6f0
      [  108.841926]  ? rescuer_thread+0x580/0x580
      [  108.842344]  kthread+0x16a/0x1a0
      [  108.842666]  ? kthread_flush_work+0x170/0x170
      [  108.843100]  ret_from_fork+0x35/0x40
The bug is caused by a race between timeout handling and completion of a
flush request.
When the timeout handler blk_mq_rq_timed_out() tries to read
'req->q->mq_ops', 'req' has already been completed and reinitialized by the
next flush request, which calls blk_rq_init() to zero 'req'.
      After commit 12f5b931
       ("blk-mq: Remove generation seqeunce"),
normal request lifetimes are protected by a refcount. Until 'rq->ref'
drops to zero, the request cannot really be freed. Thus, these requests
cannot be reused before the timeout handler finishes.
However, a flush request defines .end_io, and rq->end_io() is still
called even if 'rq->ref' hasn't dropped to zero. After that, the 'flush_rq'
can be reused by the next flush request handle, resulting in the null
pointer dereference BUG above.
We fix this problem by covering the flush request with 'rq->ref'.
If the refcount is not zero, flush_end_io() returns and waits for the
last holder to call it again. To record the request status, we add a new
field, 'rq_status', which is used in flush_end_io().
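A minimal user-space sketch of the guard described above, assuming simplified stand-in types: 'flush_queue_sketch', the 'rq_ref' field, and the helper names here are illustrative, not the kernel's actual API.

```c
#include <assert.h>

enum rq_status_sketch { RQ_IDLE, RQ_COMPLETE, RQ_TIMED_OUT };

struct flush_queue_sketch {
    int rq_ref;                      /* stands in for rq->ref */
    enum rq_status_sketch rq_status; /* the new status-recording field */
};

/* Returns 1 if the flush request may be freed/reused now, or 0 if a
 * timeout handler still holds a reference; in that case the last holder
 * is expected to re-invoke the end_io path after dropping its ref. */
static int flush_end_io_sketch(struct flush_queue_sketch *fq,
                               enum rq_status_sketch status)
{
    fq->rq_status = status;   /* record the outcome for the last holder */
    if (--fq->rq_ref > 0)
        return 0;             /* not the last reference: bail out */
    return 1;                 /* last reference dropped: safe to reuse */
}
```

With this guard, the flush request cannot be recycled while blk_mq_rq_timed_out() still holds a reference, which is exactly the window the oops above hit.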
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Bart Van Assche <bvanassche@acm.org>
      Cc: stable@vger.kernel.org # v4.18+
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Bob Liu <bob.liu@oracle.com>
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
 - move rq_status from struct request to struct blk_flush_queue
 - remove unnecessary '{}' pair.
 - let a spinlock protect 'fq->rq_status'
 - move rq_status after the flush_running_idx member of struct blk_flush_queue
Signed-off-by: Jens Axboe <axboe@kernel.dk>
  2. 30 Apr, 2019 1 commit
  3. 24 Mar, 2019 1 commit
  4. 30 Jan, 2019 1 commit
      blk-mq: fix a hung issue when fsync · 85bd6e61
      Jianchao Wang authored
Florian reported an I/O hang when calling fsync(). It is triggered by the
following race condition.
      data + post flush         a flush
        case REQ_FSEQ_DATA
          issued to driver      blk_mq_dispatch_rq_list
                                  try to issue a flush req
                                  failed due to NON-NCQ command
                                  .queue_rq return BLK_STS_DEV_RESOURCE
      request completion
        req->end_io // doesn't check RESTART
          case REQ_FSEQ_POSTFLUSH
              do nothing because previous flush
              has not been completed
                                    insert rq to hctx->dispatch
                                    due to RESTART is still set, do nothing
To fix this, replace the blk_mq_run_hw_queue call in mq_flush_data_end_io
with blk_mq_sched_restart, which checks and clears the RESTART flag.
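The fix above can be sketched in a few lines of user-space C, with simplified stand-ins: 'hctx_sketch' and its boolean 'restart' model the SCHED_RESTART state bit; none of these names are the real blk-mq API.

```c
#include <assert.h>
#include <stdbool.h>

struct hctx_sketch {
    bool restart;   /* models the SCHED_RESTART bit */
    int  runs;      /* counts how often dispatch was re-run */
};

/* Models blk_mq_sched_restart(): re-run the queue only when RESTART is
 * set, clearing it so a flush request parked in hctx->dispatch gets
 * another chance to be issued. */
static void sched_restart_sketch(struct hctx_sketch *h)
{
    if (h->restart) {
        h->restart = false;
        h->runs++;  /* parked flush request is dispatched here */
    }
}
```

The point of the change is that the data-stage end_io now observes and clears the flag, instead of re-running the queue unconditionally and leaving the flag set forever.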
      Fixes: bd166ef1
       (blk-mq-sched: add framework for MQ capable IO schedulers)
Reported-by: Florian Stecker <m19@florianstecker.de>
Tested-by: Florian Stecker <m19@florianstecker.de>
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
  5. 16 Nov, 2018 1 commit
  6. 15 Nov, 2018 1 commit
  7. 07 Nov, 2018 4 commits
  8. 13 Oct, 2018 1 commit
  9. 09 Jun, 2018 1 commit
  10. 06 Jun, 2018 1 commit
  11. 04 Nov, 2017 3 commits
      blk-mq: don't allocate driver tag upfront for flush rq · 923218f6
      Ming Lei authored
      The idea behind it is simple:
1) for the none scheduler, the driver tag has to be borrowed for the flush
   rq, otherwise we may run out of tags, and that causes an IO hang. And
   get/put driver tag is actually a noop for none, so reordering tags
   isn't necessary at all.
2) for a real I/O scheduler, we need not allocate a driver tag upfront
   for the flush rq. It works just fine to follow the same approach as
   normal requests: allocate the driver tag for each rq just before calling
   .queue_rq().
One driver-visible change is that the driver tag isn't shared in the
flush request sequence. That won't be a problem, since we always do that
in the legacy path.
Then the flush rq need not be treated specially wrt. get/put driver tag.
      This cleans up the code - for instance, reorder_tags_to_front() can be
      removed, and we needn't worry about request ordering in dispatch list
      for avoiding I/O deadlock.
      Also we have to put the driver tag before requeueing.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
      blk-flush: use blk_mq_request_bypass_insert() · 598906f8
      Ming Lei authored
In the following patch, we will use RQF_FLUSH_SEQ to decide:
1) if the flag isn't set, the flush rq needs to be inserted via
   blk_insert_flush()
2) otherwise, the flush rq needs to be dispatched directly since
   it is in the flush machinery now.
So we use blk_mq_request_bypass_insert() for requests that bypass the
flush machinery, just like the legacy path did.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
      blk-flush: don't run queue for requests bypassing flush · 9c71c83c
      Ming Lei authored
blk_insert_flush() should only insert the request, since a queue run
always follows it.
In the case of bypassing flush, we don't need to run the queue because
every blk_insert_flush() is followed by one queue run.
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
  12. 25 Aug, 2017 1 commit
  13. 23 Aug, 2017 1 commit
      block: replace bi_bdev with a gendisk pointer and partitions index · 74d46992
      Christoph Hellwig authored
      This way we don't need a block_device structure to submit I/O.  The
block_device has different lifetime rules from the gendisk and
      request_queue and is usually only available when the block device node
      is open.  Other callers need to explicitly create one (e.g. the lightnvm
      passthrough code, or the new nvme multipathing code).
      For the actual I/O path all that we need is the gendisk, which exists
      once per block device.  But given that the block layer also does
      partition remapping we additionally need a partition index, which is
      used for said remapping in generic_make_request.
      Note that all the block drivers generally want request_queue or
      sometimes the gendisk, so this removes a layer of indirection all
      over the stack.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
  14. 21 Jun, 2017 1 commit
  15. 09 Jun, 2017 1 commit
      block: introduce new block status code type · 2a842aca
      Christoph Hellwig authored
Currently we use normal Linux errno values in the block layer, and while
we accept any error, a few have overloaded magic meanings.  This patch
instead introduces a new blk_status_t value that holds block layer specific
status codes and explicitly explains their meaning.  Helpers to convert from
and to the previous special meanings are provided for now, but I suspect
we want to get rid of them in the long run - those drivers that have an
errno input (e.g. networking) usually get errnos that don't know about
the special block layer overloads, and similarly returning them to userspace
will usually return something that strictly speaking isn't correct
for file system operations, but that's left as an exercise for later.
For now the set of errors is a very limited set that closely corresponds
to the previous overloaded errno values, but there is some low-hanging
fruit to improve it.
      blk_status_t (ab)uses the sparse __bitwise annotations to allow for sparse
      typechecking, so that we can easily catch places passing the wrong values.
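The sparse trick described above can be sketched as follows; under sparse (__CHECKER__) the annotations make blk_status_t incompatible with plain integers, while a normal compiler sees nothing. The status values and the errno mapping below are illustrative stand-ins, not the full kernel set.

```c
#include <assert.h>

#ifdef __CHECKER__
#define __bitwise __attribute__((bitwise))
#define __force   __attribute__((force))
#else
#define __bitwise
#define __force
#endif

typedef unsigned int __bitwise blk_status_t;

#define BLK_STS_OK    ((__force blk_status_t)0)
#define BLK_STS_IOERR ((__force blk_status_t)1)  /* illustrative value */

/* Stand-in for the transition helpers the commit mentions, mapping a
 * block status back to a negative errno (mapping here is illustrative). */
static int sketch_blk_status_to_errno(blk_status_t status)
{
    return status == BLK_STS_OK ? 0 : -5;  /* -EIO for anything else */
}
```

Passing a bare `int` where a `blk_status_t` is expected would then produce a sparse warning, which is how the wrong-value bugs get caught at analysis time rather than at runtime.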
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
  16. 19 Apr, 2017 1 commit
  17. 24 Mar, 2017 1 commit
  18. 17 Feb, 2017 1 commit
      block: don't defer flushes on blk-mq + scheduling · 7520872c
      Jens Axboe authored
For blk-mq with scheduling, we can potentially end up with ALL
driver tags assigned and sitting on the flush queues. If we
defer because of an in-flight data request, then we can deadlock
if that data request doesn't already have a tag assigned.
This fixes a deadlock when running the xfs/297 xfstest, where
thousands of syncs can cause the drive queue to stall.
Signed-off-by: Jens Axboe <axboe@fb.com>
Reviewed-by: Omar Sandoval <osandov@fb.com>
  19. 31 Jan, 2017 1 commit
      block: fold cmd_type into the REQ_OP_ space · aebf526b
      Christoph Hellwig authored
Instead of keeping two levels of indirection for request types, fold it
all into the operations.  The little caveat here is that previously
cmd_type only applied to struct request, while the request and bio op
fields were set to plain REQ_OP_READ/WRITE even for passthrough
operations.
Instead this patch adds new REQ_OP_* for SCSI passthrough and driver
private requests, although it has to add two for each so that we
can communicate the data in/out nature of the request.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
  20. 27 Jan, 2017 2 commits
  21. 17 Jan, 2017 1 commit
  22. 09 Dec, 2016 1 commit
  23. 09 Nov, 2016 1 commit
  24. 02 Nov, 2016 1 commit
  25. 01 Nov, 2016 1 commit
  26. 28 Oct, 2016 2 commits
      block: better op and flags encoding · ef295ecf
      Christoph Hellwig authored
      Now that we don't need the common flags to overflow outside the range
      of a 32-bit type we can encode them the same way for both the bio and
      request fields.  This in addition allows us to place the operation
      first (and make some room for more ops while we're at it) and to
      stop having to shift around the operation values.
In addition this allows passing around only one value in the block layer
instead of two (and eventually also in the file systems, but we can do
that later) and thus clean up a lot of code.
      Last but not least this allows decreasing the size of the cmd_flags
      field in struct request to 32-bits.  Various functions passing this
      value could also be updated, but I'd like to avoid the churn for now.
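The encoding described above can be sketched with illustrative bit widths: the operation occupies the low bits of a single 32-bit value and the common flags sit above it, so one value can travel through the block layer. REQ_OP_BITS_SKETCH and the names below are stand-ins, not the kernel's actual layout.

```c
#include <assert.h>

#define REQ_OP_BITS_SKETCH 8
#define REQ_OP_MASK_SKETCH ((1u << REQ_OP_BITS_SKETCH) - 1)

enum op_sketch { OP_READ = 0, OP_WRITE = 1, OP_FLUSH = 2 };
#define REQ_SYNC_SKETCH (1u << REQ_OP_BITS_SKETCH)  /* first flag slot */

/* Combine an operation and flags into one 32-bit cmd_flags value. */
static unsigned int make_cmd_flags(enum op_sketch op, unsigned int flags)
{
    return (unsigned int)op | flags;  /* op in the low bits, flags above */
}

/* Extract the operation without any shifting of flag values. */
static enum op_sketch cmd_op(unsigned int cmd_flags)
{
    return (enum op_sketch)(cmd_flags & REQ_OP_MASK_SKETCH);
}
```

Placing the op first is what "stop having to shift around the operation values" buys: the op is recovered with a single mask, and new ops only consume the reserved low range.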
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
      block: split out request-only flags into a new namespace · e8064021
      Christoph Hellwig authored
A lot of the REQ_* flags are only used on struct requests, and are only of
use to the block layer and a few drivers that dig into struct request
internals.
This patch adds a new req_flags_t rq_flags field to struct request for
them, and thus dramatically shrinks the number of common request flags.  It
also removes the unfortunate situation where we have to fit the fields
from the same enum into 32 bits for struct bio and 64 bits for
struct request.
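The split described above can be sketched with illustrative names: bio-visible common flags stay in cmd_flags, while request-only state moves to a separate rq_flags field with its own req_flags_t type, so the two sets no longer share one enum or one bit budget.

```c
#include <assert.h>

typedef unsigned int req_flags_t;

#define RQF_SORTED_SKETCH    ((req_flags_t)(1u << 0)) /* request-only */
#define RQF_FLUSH_SEQ_SKETCH ((req_flags_t)(1u << 1)) /* request-only */
#define REQ_SYNC_SKETCH      (1u << 0)                /* shared with bio */

struct request_sketch {
    unsigned int cmd_flags; /* op + common flags, same layout as bio */
    req_flags_t  rq_flags;  /* request-only flags, invisible to bios */
};
```

Because the two fields are distinct types, a request-only flag can never be set on a bio by accident, and each namespace can grow independently.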
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Shaun Tancheff <shaun.tancheff@seagate.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
  27. 26 Oct, 2016 1 commit
  28. 15 Sep, 2016 1 commit
  29. 07 Jun, 2016 4 commits
  30. 13 Apr, 2016 1 commit