  Sep 26, 2023
    • badblocks: switch to the improved badblock handling code · aa511ff8
      Coly Li authored
      
      This patch removes the old code of badblocks_set(), badblocks_clear()
      and badblocks_check(), and turns them into wrappers that call
      _badblocks_set(), _badblocks_clear() and _badblocks_check().
      
      With this change the badblocks handling switches to the improved
      algorithms in _badblocks_set(), _badblocks_clear() and
      _badblocks_check().
      
      This patch only contains the deletion of the old code; the new code
      for the improved algorithms was added in the previous patches.
      
      Signed-off-by: Coly Li <colyli@suse.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Geliang Tang <geliang.tang@suse.com>
      Cc: Hannes Reinecke <hare@suse.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: NeilBrown <neilb@suse.de>
      Cc: Vishal L Verma <vishal.l.verma@intel.com>
      Cc: Xiao Ni <xni@redhat.com>
      Reviewed-by: Xiao Ni <xni@redhat.com>
      Acked-by: Geliang Tang <geliang.tang@suse.com>
      Link: https://lore.kernel.org/r/20230811170513.2300-7-colyli@suse.de
      
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • badblocks: improve badblocks_check() for multiple ranges handling · 3ea3354c
      Coly Li authored
      
      This patch rewrites badblocks_check() in a similar coding style to
      _badblocks_set() and _badblocks_clear(). The only difference is that
      bad block checking may now handle multiple ranges in the bad table.
      
      If a checking range covers multiple bad block ranges in the bad block
      table, as in the following condition (C is the checking range; E1, E2
      and E3 are three bad block ranges in the bad block table),
        +------------------------------------+
        |                C                   |
        +------------------------------------+
          +----+      +----+      +----+
          | E1 |      | E2 |      | E3 |
          +----+      +----+      +----+
      The improved badblocks_check() algorithm divides the checking range C
      into multiple parts and handles them in 7 runs of a while-loop,
        +--+ +----+ +----+ +----+ +----+ +----+ +----+
        |C1| | C2 | | C3 | | C4 | | C5 | | C6 | | C7 |
        +--+ +----+ +----+ +----+ +----+ +----+ +----+
             +----+        +----+        +----+
             | E1 |        | E2 |        | E3 |
             +----+        +----+        +----+
      The start LBA and length of range E1 are set as first_bad and
      bad_sectors for the caller.
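      The per-run splitting above can be sketched as a small userspace
      model (illustrative only; this is not the kernel routine, and the
      names are made up for the sketch):

```c
#include <assert.h>

struct bb_range { unsigned long long start, len; };

/* Count the while-loop runs a check over [start, start + len) takes
 * against a sorted bad table: every run consumes either one clean gap
 * or the overlap with one bad range, mirroring the C1..C7 split. */
static int check_runs(const struct bb_range *tbl, int n,
                      unsigned long long start, unsigned long long len)
{
    unsigned long long cur = start, end = start + len;
    int i = 0, runs = 0;

    while (cur < end) {
        /* skip bad ranges that end at or before the current position */
        while (i < n && tbl[i].start + tbl[i].len <= cur)
            i++;
        if (i < n && tbl[i].start <= cur) {
            /* head of the remaining range is bad: consume the overlap */
            unsigned long long e = tbl[i].start + tbl[i].len;
            cur = e < end ? e : end;
        } else if (i < n && tbl[i].start < end) {
            /* clean gap up to the next bad range */
            cur = tbl[i].start;
        } else {
            /* clean tail after the last bad range */
            cur = end;
        }
        runs++;
    }
    return runs;
}
```

      With three bad ranges strictly inside the checking range, as in the
      E1/E2/E3 figure, the model takes 7 runs.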
      
      The return value rule is consistent for multiple ranges. For example,
      if the bad block table contains the following bad block ranges,
         Index No.     Start        Len         Ack
             0          400          20          1
             1          500          50          1
             2          650          20          0
      the return value, first_bad and bad_sectors from calling
      badblocks_check() with different checking ranges can be the following
      values,
          Checking Start, Len     Return Value   first_bad    bad_sectors
                     100, 100          0           N/A           N/A
                     100, 310          1           400           10
                     100, 440          1           400           10
                     100, 540          1           400           10
                     100, 600         -1           400           10
                     100, 800         -1           400           10
      
      In order to make code review easier, this patch names the improved
      bad block range checking routine _badblocks_check() and does not
      change the existing badblocks_check() code yet. A later patch will
      delete the old code of badblocks_check() and turn it into a wrapper
      that calls _badblocks_check(). This way the newly added code is not
      mixed with the deleted old code, which makes the review clearer and
      easier.
      
      Signed-off-by: Coly Li <colyli@suse.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Geliang Tang <geliang.tang@suse.com>
      Cc: Hannes Reinecke <hare@suse.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: NeilBrown <neilb@suse.de>
      Cc: Vishal L Verma <vishal.l.verma@intel.com>
      Cc: Xiao Ni <xni@redhat.com>
      Reviewed-by: Xiao Ni <xni@redhat.com>
      Acked-by: Geliang Tang <geliang.tang@suse.com>
      Link: https://lore.kernel.org/r/20230811170513.2300-6-colyli@suse.de
      
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • badblocks: improve badblocks_clear() for multiple ranges handling · db448eb6
      Coly Li authored
      
      With the fundamental ideas and helper routines from the
      badblocks_set() improvement, clearing bad blocks for multiple ranges
      is much simpler.
      
      Following a similar idea, this patch simplifies bad block range
      clearing into 5 situations. No matter how complicated the clearing
      condition is, we only look at the head part of the clearing range
      against the already-set bad block ranges in the bad block table. The
      remaining part is handled in the next run of the while-loop.
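      The head-part-per-run clearing loop can be sketched in userspace C
      (a simplified model with illustrative names, not the kernel code;
      the split case corresponds to front_splitting_clear() below):

```c
#include <assert.h>

struct clr_range { unsigned long long start, len; };

/* Clear [start, start + len) from a sorted bad table, handling only
 * the head of the clearing range in each step. Returns the new table
 * size; the caller's array must have room for one extra entry in case
 * the clearing range splits a set range in the middle. */
static int model_clear(struct clr_range *tbl, int n,
                       unsigned long long start, unsigned long long len)
{
    unsigned long long end = start + len;
    int i = 0, k;

    while (i < n) {
        unsigned long long bs = tbl[i].start, be = bs + tbl[i].len;

        if (be <= start) {                  /* wholly before: skip */
            i++;
        } else if (bs >= end) {             /* wholly after: done */
            break;
        } else if (bs < start && be > end) {
            /* splitting case: keep [bs, start) and [end, be) */
            tbl[i].len = start - bs;
            for (k = n; k > i + 1; k--)
                tbl[k] = tbl[k - 1];
            tbl[i + 1].start = end;
            tbl[i + 1].len = be - end;
            return n + 1;
        } else if (bs < start) {            /* front part survives */
            tbl[i].len = start - bs;
            i++;
        } else if (be > end) {              /* tail part survives */
            tbl[i].start = end;
            tbl[i].len = be - end;
            break;
        } else {                            /* fully cleared: remove */
            for (k = i; k < n - 1; k++)
                tbl[k] = tbl[k + 1];
            n--;
        }
    }
    return n;
}
```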
      
      Based on the existing helpers added for badblocks_set(), this patch
      adds two more helpers,
      - front_clear()
        Clear the bad block range from the bad block table which is front
        overlapped with the clearing range.
      - front_splitting_clear()
        Handle the condition that the clearing range hits the middle of an
        already-set bad block range in the bad block table.
      
      As with badblocks_set(), the first part of the clearing range is
      handled with the relative bad block range found by prev_badblocks().
      In most cases a valid hint is provided to prev_badblocks() to avoid
      unnecessary bad block table iteration.
      
      This patch also documents the detailed algorithm in code comments at
      the beginning of badblocks.c, including which five simplified
      situations are categorized and how all the bad block range clearing
      conditions are handled by these five situations.
      
      Again, to make code review easier and avoid mixing the code changes
      together, this patch does not modify badblocks_clear() and implements
      another routine called _badblocks_clear() for the improvement. A
      later patch will delete the current code of badblocks_clear() and
      turn it into a wrapper around _badblocks_clear(), so the code change
      is much clearer to review.
      
      Signed-off-by: Coly Li <colyli@suse.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Geliang Tang <geliang.tang@suse.com>
      Cc: Hannes Reinecke <hare@suse.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: NeilBrown <neilb@suse.de>
      Cc: Vishal L Verma <vishal.l.verma@intel.com>
      Cc: Xiao Ni <xni@redhat.com>
      Reviewed-by: Xiao Ni <xni@redhat.com>
      Acked-by: Geliang Tang <geliang.tang@suse.com>
      Link: https://lore.kernel.org/r/20230811170513.2300-5-colyli@suse.de
      
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • badblocks: improve badblocks_set() for multiple ranges handling · 1726c774
      Coly Li authored
      
      Recently I received a bug report that current badblocks code does not
      properly handle multiple ranges. For example,
              badblocks_set(bb, 32, 1, true);
              badblocks_set(bb, 34, 1, true);
              badblocks_set(bb, 36, 1, true);
              badblocks_set(bb, 32, 12, true);
      Then indeed badblocks_show() reports,
              32 3
              36 1
      But the expected bad blocks table should be,
              32 12
      Obviously only the first 2 ranges are merged, and badblocks_set()
      returns early, ignoring the rest of the setting range.
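      The expected result can be reproduced with a small userspace model
      of what a multi-range-capable set should do (an illustrative sketch
      with made-up names, not the kernel code):

```c
#include <assert.h>

struct set_range { unsigned long long start, len; };

/* Insert [start, start + len) into a sorted table, merging every
 * overlapping or adjacent range, so the whole setting range ends up
 * in the table. Returns the new table size. */
static int model_set(struct set_range *tbl, int n,
                     unsigned long long start, unsigned long long len)
{
    unsigned long long end = start + len;
    int i = 0, j, k;

    /* find the first range that ends at or after 'start' */
    while (i < n && tbl[i].start + tbl[i].len < start)
        i++;
    /* absorb every range overlapping or adjacent to [start, end) */
    j = i;
    while (j < n && tbl[j].start <= end) {
        if (tbl[j].start < start)
            start = tbl[j].start;
        if (tbl[j].start + tbl[j].len > end)
            end = tbl[j].start + tbl[j].len;
        j++;
    }
    /* replace ranges i..j-1 with the single merged range */
    tbl[i].start = start;
    tbl[i].len = end - start;
    for (k = j; k < n; k++)
        tbl[i + 1 + k - j] = tbl[k];
    return n - (j - i) + 1;
}
```

      Replaying the four calls above against this model leaves a single
      entry, 32 12, which is the table the report expected.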
      
      This behavior is improper: if the caller of badblocks_set() wants to
      set a range of blocks in the bad blocks table, all of the blocks in
      the range should be handled, even if a previous part encounters a
      failure.
      
      The desired way for badblocks_set() to set a bad block range is,
      - Set as many blocks of the setting range as possible in the bad
        blocks table.
      - Merge bad block ranges so that they occupy as few slots as
        possible in the bad blocks table.
      - Be fast.
      
      Indeed the above proposal is complicated, especially with the following
      restrictions,
      - The setting bad blocks range can be acknowledged or not acknowledged.
      - The bad blocks table size is limited.
      - Memory allocation should be avoided.
      
      The basic idea of this patch is to categorize all possible bad block
      range setting combinations into a much smaller number of simplified,
      special conditions. Inside badblocks_set() there is an implicit loop
      formed by jumping between the labels 're_insert' and
      'update_sectors'. No matter how large the setting bad block range
      is, each run of the loop handles just a minimized range from the
      head with a pre-defined behavior from one of the categorized
      conditions. The logic is simple and the code flow is manageable.
      
      The different relative layouts between the setting range and the
      existing bad block ranges are checked and handled (merge, combine,
      overwrite, insert) by the helpers from the previous patch. This
      patch makes all the helpers work together following the above idea.
      
      This patch only contains the algorithm improvement for
      badblocks_set(). The following patches contain the improvements for
      badblocks_clear() and badblocks_check(). The algorithm in
      badblocks_set() is fundamental and typical; the improvements in the
      clear and check routines are based on the helpers and ideas in this
      patch.
      
      In order to make the change clearer for code review, this patch does
      not directly modify the existing badblocks_set(), and just adds a
      new routine named _badblocks_set(). A later patch will remove the
      current badblocks_set() code and turn it into a wrapper around
      _badblocks_set(), so the newly added changes are not mixed with the
      deleted code and the code review can be easier.
      
      Signed-off-by: Coly Li <colyli@suse.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Geliang Tang <geliang.tang@suse.com>
      Cc: Hannes Reinecke <hare@suse.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: NeilBrown <neilb@suse.de>
      Cc: Vishal L Verma <vishal.l.verma@intel.com>
      Cc: Wols Lists <antlists@youngman.org.uk>
      Cc: Xiao Ni <xni@redhat.com>
      Reviewed-by: Xiao Ni <xni@redhat.com>
      Acked-by: Geliang Tang <geliang.tang@suse.com>
      Link: https://lore.kernel.org/r/20230811170513.2300-4-colyli@suse.de
      
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • badblocks: add helper routines for badblock ranges handling · c3c6a86e
      Coly Li authored
      
      This patch adds several helper routines to improve bad block range
      handling. These helper routines will be used later in the improved
      versions of badblocks_set()/badblocks_clear()/badblocks_check().
      
      - Helpers prev_by_hint() and prev_badblocks() are used to find the
        bad range in the bad table which the searching range starts at or
        after.
      
      - The following helpers are to decide the relative layout between the
        manipulating range and existing bad block range from bad table.
        - can_merge_behind()
          Return 'true' if the manipulating range can backward merge with the
          bad block range.
        - can_merge_front()
          Return 'true' if the manipulating range can forward merge with the
          bad block range.
        - can_combine_front()
          Return 'true' if two adjacent bad block ranges before the
          manipulating range can be merged.
        - overlap_front()
          Return 'true' if the manipulating range exactly overlaps with the
          bad block range in front of its range.
        - overlap_behind()
          Return 'true' if the manipulating range exactly overlaps with the
          bad block range behind its range.
        - can_front_overwrite()
          Return 'true' if the manipulating range can forward overwrite the
          bad block range in front of its range.
      
      - The following helpers add the manipulating range into the bad
        block table. A different routine is called for each specific
        relative layout between the manipulating range and the other bad
        block ranges in the bad block table.
        - behind_merge()
          Merge the manipulating range with the bad block range behind it,
          and return the merged length in units of sectors.
        - front_merge()
          Merge the manipulating range with the bad block range in front
          of it, and return the merged length in units of sectors.
        - front_combine()
          Combine the two adjacent bad block ranges before the manipulating
          range into a larger one.
        - front_overwrite()
          Overwrite part or all of the bad block range which is in front
          of the manipulating range. The overwrite may split the existing
          bad block range and generate more bad block ranges in the bad
          block table.
        - insert_at()
          Insert the manipulating range at a specific location in the bad
          block table.
      
      All the above helpers are used in later patches to improve the bad block
      ranges handling for badblocks_set()/badblocks_clear()/badblocks_check().
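      The search helpers' hint idea can be sketched as a userspace model
      (illustrative only; model_prev() is a made-up name combining the
      roles of prev_by_hint() and prev_badblocks(), not the kernel code):

```c
#include <assert.h>

struct srch_range { unsigned long long start, len; };

/* Return the index of the last range in the sorted table whose start
 * is <= s, or -1 if none. Try 'hint' first so a caller iterating
 * forward can usually skip the binary search entirely. */
static int model_prev(const struct srch_range *tbl, int n,
                      unsigned long long s, int hint)
{
    int lo = 0, hi = n - 1, ret = -1;

    /* hint hit: tbl[hint].start <= s < tbl[hint + 1].start */
    if (hint >= 0 && hint < n && tbl[hint].start <= s &&
        (hint == n - 1 || s < tbl[hint + 1].start))
        return hint;

    /* fall back to a binary search over the whole table */
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (tbl[mid].start <= s) {
            ret = mid;
            lo = mid + 1;
        } else {
            hi = mid - 1;
        }
    }
    return ret;
}
```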
      
      Signed-off-by: Coly Li <colyli@suse.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Geliang Tang <geliang.tang@suse.com>
      Cc: Hannes Reinecke <hare@suse.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: NeilBrown <neilb@suse.de>
      Cc: Vishal L Verma <vishal.l.verma@intel.com>
      Cc: Xiao Ni <xni@redhat.com>
      Reviewed-by: Xiao Ni <xni@redhat.com>
      Acked-by: Geliang Tang <geliang.tang@suse.com>
      Link: https://lore.kernel.org/r/20230811170513.2300-3-colyli@suse.de
      
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block: fix kernel-doc for disk_force_media_change() · a578a253
      Randy Dunlap authored
      
      Drop one function parameter's kernel-doc comment since the parameter
      was removed. This prevents a kernel-doc warning:
      
      block/disk-events.c:300: warning: Excess function parameter 'events' description in 'disk_force_media_change'
      
      Fixes: ab6860f6 ("block: simplify the disk_force_media_change interface")
      Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
      Reported-by: kernel test robot <lkp@intel.com>
      Closes: lore.kernel.org/r/202309060957.vfl0mUur-lkp@intel.com
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: linux-block@vger.kernel.org
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Link: https://lore.kernel.org/r/20230926005232.23666-1-rdunlap@infradead.org
      
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>