  Mar 21, 2020
    • lockdep: Introduce wait-type checks · de8f5e4f
      Peter Zijlstra authored
      
      Extend lockdep to validate lock wait-type context.
      
      The current wait-types are:
      
      	LD_WAIT_FREE,		/* wait free, rcu etc.. */
      	LD_WAIT_SPIN,		/* spin loops, raw_spinlock_t etc.. */
      	LD_WAIT_CONFIG,		/* CONFIG_PREEMPT_LOCK, spinlock_t etc.. */
      	LD_WAIT_SLEEP,		/* sleeping locks, mutex_t etc.. */
      
      Where lockdep validates that the current lock (the one being acquired)
      fits in the current wait-context (as generated by the held stack).
      
      This ensures that there is no attempt to acquire mutexes while holding
      spinlocks, to acquire spinlocks while holding raw_spinlocks, and so on. In
      other words, it's a fancier might_sleep().
      
      Obviously RCU made the entire ordeal more complex than a simple
      single-value test, because RCU can be acquired in (pretty much) any
      context, and while it presents a context to nested locks it is not the
      same as the context it was acquired in.
      
      Therefore it's necessary to split the wait_type into two values, one
      representing the acquire (outer) and one representing the nested context
      (inner). For most 'normal' locks these two are the same.
      
      [ To make static initialization easier we have the rule that:
        .outer == INV means .outer == .inner; because INV == 0. ]
      
      It further means it's required to find the minimal .inner of the held
      stack to compare against the .outer of the new lock; because while 'normal'
      RCU presents a CONFIG type to nested locks, if it is taken while already
      holding a SPIN type it obviously doesn't relax the rules.
      
      Below is an example output generated by the trivial test code:
      
        raw_spin_lock(&foo);
        spin_lock(&bar);
        spin_unlock(&bar);
        raw_spin_unlock(&foo);
      
       [ BUG: Invalid wait context ]
       -----------------------------
       swapper/0/1 is trying to lock:
       ffffc90000013f20 (&bar){....}-{3:3}, at: kernel_init+0xdb/0x187
       other info that might help us debug this:
       1 lock held by swapper/0/1:
        #0: ffffc90000013ee0 (&foo){+.+.}-{2:2}, at: kernel_init+0xd1/0x187
      
      The way to read it is to look at the new -{n:m} part in the lock
      description; -{3:3} for the attempted lock, and try to match that up
      against the held locks, which in this case is the single -{2:2}.
      
      This tells us that the lock being acquired requires a more relaxed
      environment than the held lock stack provides.
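      The validation rule described above can be sketched as a small userspace
      C model. This is not the kernel's implementation; the names
      lock_class, lock_outer() and wait_context_ok() are hypothetical, and the
      model only illustrates the minimal-.inner comparison and the
      ".outer == INV means .outer == .inner" static-init rule from the text:

```c
#include <assert.h>
#include <stddef.h>

/* Wait-type ordering mirrors the changelog: smaller is stricter. */
enum wait_type {
	LD_WAIT_INV = 0,	/* unknown/ignored; INV == 0 eases static init */
	LD_WAIT_FREE,		/* wait free, rcu etc.. */
	LD_WAIT_SPIN,		/* spin loops, raw_spinlock_t etc.. */
	LD_WAIT_CONFIG,		/* spinlock_t etc.. */
	LD_WAIT_SLEEP,		/* sleeping locks, mutex_t etc.. */
};

struct lock_class {
	enum wait_type inner;	/* context presented to nested locks */
	enum wait_type outer;	/* context required to acquire this lock */
};

/* Static-init rule: .outer == INV means .outer == .inner. */
static enum wait_type lock_outer(const struct lock_class *l)
{
	return l->outer ? l->outer : l->inner;
}

/*
 * Valid iff the new lock's outer wait-type fits within the strictest
 * (minimal) inner wait-type of all currently held locks.
 */
static int wait_context_ok(const struct lock_class *held, size_t nheld,
			   const struct lock_class *next)
{
	enum wait_type curr_inner = LD_WAIT_SLEEP;	/* most relaxed default */

	for (size_t i = 0; i < nheld; i++)
		if (held[i].inner && held[i].inner < curr_inner)
			curr_inner = held[i].inner;

	return lock_outer(next) <= curr_inner;
}
```

      With these enum values, spinlock_t is {3:3} and raw_spinlock_t is {2:2},
      matching the annotations in the example splat: acquiring a {3:3} lock
      while the held stack presents inner type 2 fails the check.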
      
      Currently only the normal locks and RCU are converted; the rest of the
      lockdep users default to .inner = INV, which is ignored. More conversions
      can be done when desired.
      
      The check for spinlock_t nesting is not enabled by default. It's a separate
      config option for now, as there are known problems which are currently
      being addressed. The config option makes it possible to identify these
      problems and to verify that the solutions found indeed solve them.
      
      The config switch will be removed and the checks will be permanently
      enabled once the vast majority of issues have been addressed.
      
      [ bigeasy: Move LD_WAIT_FREE,… out of CONFIG_LOCKDEP to avoid compile
      	   failure with CONFIG_DEBUG_SPINLOCK + !CONFIG_LOCKDEP]
      [ tglx: Add the config option ]
      
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Link: https://lkml.kernel.org/r/20200321113242.427089655@linutronix.de
    • lib/vdso: Enable common headers · 8c59ab83
      Vincenzo Frascino authored
      
      The vDSO library should only include the headers required for a
      userspace library (UAPI and a minimal set of kernel headers). To make
      this possible it is necessary to isolate, from the kernel headers, the
      common parts that are strictly necessary to build the library.
      
      Refactor the unified vdso code to use the common headers.
      
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Link: https://lkml.kernel.org/r/20200320145351.32292-26-vincenzo.frascino@arm.com
  Mar 12, 2020
    • xarray: Fix early termination of xas_for_each_marked · 7e934cf5
      Matthew Wilcox (Oracle) authored
      
      xas_for_each_marked() uses entry == NULL as the termination condition
      of the iteration. When xas_for_each_marked() is protected only by RCU,
      this can however race with xas_store(xas, NULL) in the following way:
      
      TASK1                                   TASK2
      page_cache_delete()         	        find_get_pages_range_tag()
                                                xas_for_each_marked()
                                                  xas_find_marked()
                                                    off = xas_find_chunk()
      
        xas_store(&xas, NULL)
          xas_init_marks(&xas);
          ...
          rcu_assign_pointer(*slot, NULL);
                                                    entry = xa_entry(off);
      
      And thus xas_for_each_marked() terminates prematurely, possibly leading
      to missed entries in the iteration (translating to missing writeback of
      some pages or a similar problem).
      
      If we find a NULL entry that has been marked, skip it (unless we're trying
      to allocate an entry).
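      The fix can be illustrated with a small userspace C model of a marked
      iteration. The names chunk and for_each_marked() are hypothetical and
      this is not the XArray code; it only shows why skipping a marked slot
      whose entry has been concurrently set to NULL, rather than terminating,
      avoids losing later entries:

```c
#include <assert.h>
#include <stddef.h>

#define NSLOTS 8

/* Tiny model of one xarray chunk: entry slots plus a per-slot mark. */
struct chunk {
	void *slot[NSLOTS];
	unsigned char marked[NSLOTS];
};

/*
 * Collect marked entries. Before the fix, the loop stopped at the first
 * NULL entry, so a racing store of NULL into a marked slot could hide
 * all later marked entries. With the fix, a marked-but-NULL slot is
 * simply skipped.
 */
static size_t for_each_marked(const struct chunk *c, void *out[], size_t cap)
{
	size_t n = 0;

	for (size_t i = 0; i < NSLOTS && n < cap; i++) {
		if (!c->marked[i])
			continue;
		if (!c->slot[i])
			continue;	/* the fix: skip, don't terminate */
		out[n++] = c->slot[i];
	}
	return n;
}
```

      In the race from the changelog, slot 2 below plays the role of the page
      deleted by page_cache_delete() after the iteration started: its entry is
      NULL but its mark is still visible, and the entries after it must still
      be found.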
      
      Reported-by: Jan Kara <jack@suse.cz>
      CC: stable@vger.kernel.org
      Fixes: ef8e5717 ("page cache: Convert delete_batch to XArray")
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>