  Oct 28, 2020
    • docs: lockdep-design: fix some warning issues · e3e7439d
      Mauro Carvalho Chehab authored
      
      There are several warnings caused by a recent change:
      224ec489 ("lockdep/Documention: Recursive read lock detection reasoning")
      
      Those are reported by the htmldocs build:
      
          Documentation/locking/lockdep-design.rst:429: WARNING: Definition list ends without a blank line; unexpected unindent.
          Documentation/locking/lockdep-design.rst:452: WARNING: Block quote ends without a blank line; unexpected unindent.
          Documentation/locking/lockdep-design.rst:453: WARNING: Unexpected indentation.
          Documentation/locking/lockdep-design.rst:453: WARNING: Blank line required after table.
          Documentation/locking/lockdep-design.rst:454: WARNING: Block quote ends without a blank line; unexpected unindent.
          Documentation/locking/lockdep-design.rst:455: WARNING: Unexpected indentation.
          Documentation/locking/lockdep-design.rst:455: WARNING: Blank line required after table.
          Documentation/locking/lockdep-design.rst:456: WARNING: Block quote ends without a blank line; unexpected unindent.
          Documentation/locking/lockdep-design.rst:457: WARNING: Unexpected indentation.
          Documentation/locking/lockdep-design.rst:457: WARNING: Blank line required after table.
      
      Besides the reported issues, there are some missing blank
      lines that end up producing wrong HTML output, and some
      literals are not properly marked as such.
      
      Also, the symbols used in the IRQ enabled/disabled table
      are not displayed as expected, as they are not marked as
      literals, and another table uses a different notation for them.
      
      Fixes: 224ec489 ("lockdep/Documention: Recursive read lock detection reasoning")
      Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
      Link: https://lore.kernel.org/r/3b9431ac5c01e38111cd59928a93e7259ab7db0f.1603791716.git.mchehab+huawei@kernel.org
      
      
      Signed-off-by: Jonathan Corbet <corbet@lwn.net>
  Sep 10, 2020
    • seqlock: Introduce seqcount_latch_t · 80793c34
      Ahmed S. Darwish authored
      
      Latch sequence counters are a multiversion concurrency control mechanism
      where the seqcount_t counter even/odd value is used to switch between
      two copies of protected data. This allows the seqcount_t read path to
      safely interrupt its write side critical section (e.g. from NMIs).
      
      Initially, latch sequence counters were implemented as a single write
      function above plain seqcount_t: raw_write_seqcount_latch(). The read
      side was expected to use plain seqcount_t raw_read_seqcount().
      
      A specialized latch read function, raw_read_seqcount_latch(), was later
      added and became the standard way for latch read paths. Due to the
      dependent load, it has one fewer read memory barrier than the plain
      seqcount_t raw_read_seqcount() API.
      
      Only raw_write_seqcount_latch() and raw_read_seqcount_latch() should be
      used with latch sequence counters. Having *unique* read and write path
      APIs means that latch sequence counters are actually a data type of
      their own -- just inappropriately overloading plain seqcount_t.
      
      Introduce seqcount_latch_t. This adds type-safety and ensures that only
      the correct latch-safe APIs can be used.
      
      To avoid breaking bisection, let the latch APIs also accept plain
      seqcount_t or seqcount_raw_spinlock_t. After converting all call sites to
      seqcount_latch_t, only that new data type will be allowed.
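
      For illustration, a minimal sketch of the latch pattern with the new
      type, modeled on the usage example in seqlock.h; the latched_clock
      structure and function names are hypothetical:

          #include <linux/seqlock.h>
          #include <linux/types.h>

          struct latched_clock {
                  seqcount_latch_t        seq;
                  u64                     offset[2];      /* the two data copies */
          };

          /* Write side: writers must already be serialized by the caller. */
          static void latched_clock_update(struct latched_clock *lc, u64 val)
          {
                  raw_write_seqcount_latch(&lc->seq);     /* odd: readers use offset[1] */
                  lc->offset[0] = val;
                  raw_write_seqcount_latch(&lc->seq);     /* even: readers use offset[0] */
                  lc->offset[1] = val;
          }

          /* Read side: NMI-safe; it may interrupt the writer at any point. */
          static u64 latched_clock_read(struct latched_clock *lc)
          {
                  unsigned int seq;
                  u64 val;

                  do {
                          seq = raw_read_seqcount_latch(&lc->seq);
                          val = lc->offset[seq & 0x01];   /* even/odd picks a copy */
                  } while (read_seqcount_latch_retry(&lc->seq, seq));

                  return val;
          }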
      
      References: 9b0fd802 ("seqcount: Add raw_write_seqcount_latch()")
      References: 7fc26327 ("seqlock: Introduce raw_read_seqcount_latch()")
      References: aadd6e5c ("time/sched_clock: Use raw_read_seqcount_latch()")
      Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Link: https://lkml.kernel.org/r/20200827114044.11173-4-a.darwish@linutronix.de
  Jul 29, 2020
    • seqlock: Extend seqcount API with associated locks · 55f3560d
      Ahmed S. Darwish authored
      
      A sequence counter write side critical section must be protected by some
      form of locking to serialize writers. If the serialization primitive is
      not disabling preemption implicitly, preemption has to be explicitly
      disabled before entering the write side critical section.
      
      There is no built-in debugging mechanism to verify that the lock used
      for writer serialization is held and preemption is disabled. Some usage
      sites like dma-buf have explicit lockdep checks for the writer-side
      lock, but this covers only a small portion of the sequence counter usage
      in the kernel.
      
      Add new sequence counter types which allow associating a lock with the
      sequence counter at initialization time. The seqcount API functions are
      extended to provide appropriate lockdep assertions depending on the
      seqcount/lock type.
      
      For sequence counters with associated locks that do not implicitly
      disable preemption, preemption protection is enforced in the sequence
      counter write side functions. This removes the need to explicitly add
      preempt_disable/enable() around the write side critical sections: the
      write_begin/end() functions for these new sequence counter types
      automatically do this.
      
      Introduce the following seqcount types with associated locks:
      
           seqcount_spinlock_t
           seqcount_raw_spinlock_t
           seqcount_rwlock_t
           seqcount_mutex_t
           seqcount_ww_mutex_t
      
      Extend the seqcount read and write functions to branch out to the
      specific seqcount_LOCKTYPE_t implementation at compile time. This avoids
      a kernel API explosion for each new seqcount_LOCKTYPE_t added. Add the
      compile-time type detection logic to a new, internal seqlock header.
      
      Document the proper seqcount_LOCKTYPE_t usage, and rationale, at
      Documentation/locking/seqlock.rst.
      
      If lockdep is disabled, this lock association is compiled out and adds
      neither storage nor runtime overhead.
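
      For illustration, a minimal sketch of a seqcount_spinlock_t whose
      writers serialize on the associated spinlock; the stats structure and
      function names are hypothetical:

          #include <linux/seqlock.h>
          #include <linux/spinlock.h>

          struct stats {
                  spinlock_t              lock;
                  seqcount_spinlock_t     seq;    /* associated with "lock" */
                  u64                     packets, bytes;
          };

          static void stats_init(struct stats *s)
          {
                  spin_lock_init(&s->lock);
                  seqcount_spinlock_init(&s->seq, &s->lock);
          }

          /* Writer: lockdep asserts that s->lock is held. */
          static void stats_update(struct stats *s, u64 len)
          {
                  spin_lock(&s->lock);
                  write_seqcount_begin(&s->seq);
                  s->packets++;
                  s->bytes += len;
                  write_seqcount_end(&s->seq);
                  spin_unlock(&s->lock);
          }

          /* Reader: lockless; retries if it raced with a writer. */
          static u64 stats_read_bytes(struct stats *s)
          {
                  unsigned int seq;
                  u64 bytes;

                  do {
                          seq = read_seqcount_begin(&s->seq);
                          bytes = s->bytes;
                  } while (read_seqcount_retry(&s->seq, seq));

                  return bytes;
          }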
      
      Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Link: https://lkml.kernel.org/r/20200720155530.1173732-10-a.darwish@linutronix.de
    • Documentation: locking: Describe seqlock design and usage · 0d24f65e
      Ahmed S. Darwish authored
      
      Proper documentation for the design and usage of sequence counters and
      sequential locks does not exist. Complete the seqlock.h documentation as
      follows:
      
        - Divide all documentation on a seqcount_t vs. seqlock_t basis. The
          description for both mechanisms was intermingled, which is incorrect
          since the usage constraints for each type are vastly different.
      
        - Add an introductory paragraph describing the internal design of, and
          rationale for, sequence counters.
      
        - Document seqcount_t writer non-preemptibility requirement, which was
          not previously documented anywhere, and provide a clear rationale.
      
        - Provide template code for seqcount_t and seqlock_t initialization
          and reader/writer critical sections; a sketch of the seqlock_t
          pattern follows after this list.
      
        - Recommend using seqlock_t by default. It implicitly handles the
          serialization and non-preemptibility requirements of writers.
      
      At seqlock.h:
      
        - Remove references to brlocks as they've long been removed from the
          kernel.
      
        - Remove references to gcc-3.x since the kernel's minimum supported
          gcc version is 4.9.
      
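      For illustration, a minimal sketch of the recommended seqlock_t
      reader/writer pattern; the foo_* names are hypothetical:

          #include <linux/seqlock.h>

          static DEFINE_SEQLOCK(foo_seqlock);
          static u64 foo_a, foo_b;        /* protected by foo_seqlock */

          /*
           * Writer: the embedded spinlock serializes writers and keeps
           * the write side non-preemptible for the critical section.
           */
          static void foo_update(u64 a, u64 b)
          {
                  write_seqlock(&foo_seqlock);
                  foo_a = a;
                  foo_b = b;
                  write_sequnlock(&foo_seqlock);
          }

          /* Reader: lockless; retries if it overlapped a writer. */
          static u64 foo_sum(void)
          {
                  unsigned int seq;
                  u64 sum;

                  do {
                          seq = read_seqbegin(&foo_seqlock);
                          sum = foo_a + foo_b;
                  } while (read_seqretry(&foo_seqlock, seq));

                  return sum;
          }
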
      References: 0f6ed63b ("no need to keep brlock macros anymore...")
      References: 6ec4476a ("Raise gcc version requirement to 4.9")
      Signed-off-by: Ahmed S. Darwish <a.darwish@linutronix.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Link: https://lkml.kernel.org/r/20200720155530.1173732-2-a.darwish@linutronix.de
  May 28, 2020
    • locking: Introduce local_lock() · 91710728
      Thomas Gleixner authored
      
      preempt_disable() and local_irq_disable/save() are in principle per CPU big
      kernel locks. This has several downsides:
      
        - The protection scope is unknown
      
        - Violation of protection rules is hard to detect by instrumentation
      
        - On PREEMPT_RT, such sections, unless in low-level critical code,
          can violate its preemptibility constraints.
      
      To address this, PREEMPT_RT introduced the concept of local locks, which
      are strictly per CPU.
      
      The lock operations map to preempt_disable(), local_irq_disable/save() and
      the enabling counterparts on non RT enabled kernels.
      
      If lockdep is enabled, local locks gain a lock map which tracks the usage
      context. This will catch cases where an area is protected by
      preempt_disable() but the access also happens from interrupt context.
      Local locks have identified quite a few such issues over the years; the
      most recent example is:
      
        b7d5dc21 ("random: add a spinlock_t to struct batched_entropy")
      
      Aside from the lockdep coverage, this also improves code readability as
      it precisely annotates the protection scope.
      
      PREEMPT_RT substitutes these local locks with 'sleeping' spinlocks to
      protect such sections while maintaining preemptibility and CPU locality.
      
      local locks can replace:
      
        - preempt_enable()/disable() pairs
        - local_irq_disable/enable() pairs
        - local_irq_save/restore() pairs
      
      They are also used to replace code which implicitly disables preemption
      like:
      
        - get_cpu()/put_cpu()
        - get_cpu_var()/put_cpu_var()
      
      with PREEMPT_RT friendly constructs.
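
      For illustration, a minimal sketch of per-CPU data protected by a
      local lock; the cpu_stats names are hypothetical:

          #include <linux/local_lock.h>
          #include <linux/percpu.h>

          struct cpu_stats {
                  local_lock_t    lock;
                  unsigned long   count;
          };

          static DEFINE_PER_CPU(struct cpu_stats, cpu_stats) = {
                  .lock = INIT_LOCAL_LOCK(lock),
          };

          static void cpu_stats_inc(void)
          {
                  /*
                   * On !PREEMPT_RT this maps to preempt_disable(); on
                   * PREEMPT_RT it takes a per-CPU 'sleeping' spinlock.
                   * Lockdep records the protection scope either way.
                   */
                  local_lock(&cpu_stats.lock);
                  this_cpu_ptr(&cpu_stats)->count++;
                  local_unlock(&cpu_stats.lock);
          }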
      
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Link: https://lore.kernel.org/r/20200527201119.1692513-2-bigeasy@linutronix.de