- Jun 22, 2023
-
-
Yury Norov authored
The bitmap_{from,to}_arr64() optimization is overly optimistic on 32-bit LE architectures when it's wired to bitmap_copy_clear_tail(). bitmap_copy_clear_tail() takes care of unused bits in the bitmap only up to the next word boundary. But on 32-bit machines, when copying bits from a bitmap to an array of 64-bit words, the unused part of the recipient array is expected to be cleared up to the 64-bit boundary, so the last 4 bytes may stay untouched when nbits % 64 <= 32. While the copying part of the optimization works correctly, the clear-tail trick makes the corresponding tests fail, and rightly so: test_bitmap: bitmap_to_arr64(nbits == 1): tail is not safely cleared: 0xa5a5a5a500000001 (must be 0x0000000000000001) Fix it by removing the bitmap_{from,to}_arr64() optimization for 32-bit LE arches. (A userspace sketch of the failure mode follows below.) Reported-by:
Guenter Roeck <linux@roeck-us.net> Link: https://lore.kernel.org/lkml/20230225184702.GA3587246@roeck-us.net/ Fixes: 0a97953f ("lib: add bitmap_{from,to}_arr64") Signed-off-by:
Yury Norov <yury.norov@gmail.com> Tested-by:
Guenter Roeck <linux@roeck-us.net> Reviewed-by:
Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by:
Alexander Lobakin <aleksander.lobakin@intel.com>
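To make the failure mode concrete, here is a small standalone userspace model of the 32-bit LE case (not the kernel code itself): the bitmap lives in 32-bit words, the recipient u64 array is poisoned the way the test does it, and a copy that only clears up to the next 32-bit word boundary leaves the upper half of the u64 untouched.

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          uint32_t bitmap[1] = { 0x1 };      /* nbits == 1, stored in 32-bit words */
          uint64_t arr[1];

          memset(arr, 0xa5, sizeof(arr));    /* poison the recipient, like the test does */

          /* copy + clear only up to the next 32-bit word boundary */
          memcpy(arr, bitmap, sizeof(bitmap));

          /* prints a5a5a5a500000001 on little-endian, not 0000000000000001 */
          printf("%016llx\n", (unsigned long long)arr[0]);
          return 0;
  }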
-
Yury Norov authored
Tests that don't use the expect_eq() macro to determine that a test has failed must increment failed_tests explicitly, as sketched below. Reported-by:
Guenter Roeck <linux@roeck-us.net> Link: https://lore.kernel.org/lkml/20230225184702.GA3587246@roeck-us.net/ Signed-off-by:
Yury Norov <yury.norov@gmail.com> Reviewed-by:
Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by:
Alexander Lobakin <aleksander.lobakin@intel.com>
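A minimal sketch of the kind of change described, with hypothetical test code; the point is only that a hand-rolled check has to bump the global counter itself:

  /* a test that does not go through expect_eq() */
  if (memcmp(result, expected, sizeof(expected))) {
          pr_err("test_bitmap: unexpected result\n");
          failed_tests++;         /* previously missing */
  }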
-
- Jun 12, 2023
-
-
Lorenzo Stoakes authored
It turns out that alloc_pages_bulk_array() does not treat the page_array parameter as an output parameter, but rather reads the array and skips any entries that have already been allocated. This is somewhat unexpected and breaks this test, as we allocate the pages array uninitialised on the assumption it will be overwritten. As a result, the test was referencing uninitialised data and causing the PFN to not be valid and thus a WARN_ON() followed by a null pointer deref and panic. In addition, this is an array of pointers not of struct page objects, so we need only allocate an array with elements of pointer size. We solve both problems by simply using kcalloc() and referencing sizeof(struct page *) rather than sizeof(struct page). Link: https://lkml.kernel.org/r/20230524082424.10022-1-lstoakes@gmail.com Fixes: 869cb29a ("lib/test_vmalloc.c: add vm_map_ram()/vm_unmap_ram() test case") Signed-off-by:
Lorenzo Stoakes <lstoakes@gmail.com> Reviewed-by:
Uladzislau Rezki (Sony) <urezki@gmail.com> Reviewed-by:
Baoquan He <bhe@redhat.com> Cc: Christoph Hellwig <hch@infradead.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
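A sketch of the two-part fix described above (variable names are illustrative, not the verbatim patch): zero the array so the bulk allocator treats every slot as not yet allocated, and size it for pointers rather than for struct page objects.

  struct page **pages;

  /* zeroed array of page pointers, not uninitialised struct pages */
  pages = kcalloc(nr_pages, sizeof(struct page *), GFP_KERNEL);
  if (!pages)
          return -ENOMEM;

  /* every NULL slot is now seen as "still to be allocated" */
  nr_allocated = alloc_pages_bulk_array(GFP_KERNEL, nr_pages, pages);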
-
Arnd Bergmann authored
The xarray.c file contains the only call to radix_tree_node_rcu_free(), and it comes with its own extern declaration for it. This means the function definition causes a missing-prototype warning: lib/radix-tree.c:288:6: error: no previous prototype for 'radix_tree_node_rcu_free' [-Werror=missing-prototypes] Instead, move the declaration for this function to a new header that can be included by both, and do the same for the radix_tree_node_cachep variable that has the same underlying problem but does not cause a warning with gcc. [zhangpeng.00@bytedance.com: fix building radix tree test suite] Link: https://lkml.kernel.org/r/20230521095450.21332-1-zhangpeng.00@bytedance.com Link: https://lkml.kernel.org/r/20230516194212.548910-1-arnd@kernel.org Signed-off-by:
Arnd Bergmann <arnd@arndb.de> Signed-off-by:
Peng Zhang <zhangpeng.00@bytedance.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
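The shape of the shared declarations, as a sketch (the header name and exact contents are assumed from the changelog, not copied from the tree):

  /* lib/radix-tree.h -- internal declarations shared by radix-tree.c and xarray.c */
  #ifndef _LIB_RADIX_TREE_H
  #define _LIB_RADIX_TREE_H

  struct rcu_head;

  void radix_tree_node_rcu_free(struct rcu_head *head);
  extern struct kmem_cache *radix_tree_node_cachep;

  #endif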
-
- Jun 08, 2023
-
-
Ben Hutchings authored
irq_cpu_rmap_release() calls cpu_rmap_put(), which may free the rmap. So we need to clear the pointer to our glue structure in rmap before doing that, not after. Fixes: 4e0473f1 ("lib: cpu_rmap: Avoid use after free on rmap->obj array entries") Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Reviewed-by:
Simon Horman <simon.horman@corigine.com> Link: https://lore.kernel.org/r/ZHo0vwquhOy3FaXc@decadent.org.uk Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
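A sketch of the reordering described; the structure and field names follow the changelog loosely and are not verified against the tree:

  static void irq_cpu_rmap_release(struct kref *ref)
  {
          struct irq_glue *glue =
                  container_of(ref, struct irq_glue, notify.kref);

          /* Clear the back-pointer while rmap is guaranteed to still exist ... */
          glue->rmap->obj[glue->index] = NULL;

          /* ... because dropping the reference may free rmap right here. */
          cpu_rmap_put(glue->rmap);
          kfree(glue);
  }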
-
- May 31, 2023
-
-
Mirsad Goran Todorovac authored
The following kernel memory leak was noticed after running tools/testing/selftests/firmware/fw_run_tests.sh:

  [root@pc-mtodorov firmware]# cat /sys/kernel/debug/kmemleak
  .
  .
  .
  unreferenced object 0xffff955389bc3400 (size 1024):
    comm "test_firmware-0", pid 5451, jiffies 4294944822 (age 65.652s)
    hex dump (first 32 bytes):
      47 48 34 35 36 37 0a 00 00 00 00 00 00 00 00 00  GH4567..........
      00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    backtrace:
      [<ffffffff962f5dec>] slab_post_alloc_hook+0x8c/0x3c0
      [<ffffffff962fcca4>] __kmem_cache_alloc_node+0x184/0x240
      [<ffffffff962704de>] kmalloc_trace+0x2e/0xc0
      [<ffffffff9665b42d>] test_fw_run_batch_request+0x9d/0x180
      [<ffffffff95fd813b>] kthread+0x10b/0x140
      [<ffffffff95e033e9>] ret_from_fork+0x29/0x50
  unreferenced object 0xffff9553c334b400 (size 1024):
    comm "test_firmware-1", pid 5452, jiffies 4294944822 (age 65.652s)
    hex dump (first 32 bytes):
      47 48 34 35 36 37 0a 00 00 00 00 00 00 00 00 00  GH4567..........
      00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    backtrace:
      [<ffffffff962f5dec>] slab_post_alloc_hook+0x8c/0x3c0
      [<ffffffff962fcca4>] __kmem_cache_alloc_node+0x184/0x240
      [<ffffffff962704de>] kmalloc_trace+0x2e/0xc0
      [<ffffffff9665b42d>] test_fw_run_batch_request+0x9d/0x180
      [<ffffffff95fd813b>] kthread+0x10b/0x140
      [<ffffffff95e033e9>] ret_from_fork+0x29/0x50
  unreferenced object 0xffff9553c334f000 (size 1024):
    comm "test_firmware-2", pid 5453, jiffies 4294944822 (age 65.652s)
    hex dump (first 32 bytes):
      47 48 34 35 36 37 0a 00 00 00 00 00 00 00 00 00  GH4567..........
      00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    backtrace:
      [<ffffffff962f5dec>] slab_post_alloc_hook+0x8c/0x3c0
      [<ffffffff962fcca4>] __kmem_cache_alloc_node+0x184/0x240
      [<ffffffff962704de>] kmalloc_trace+0x2e/0xc0
      [<ffffffff9665b42d>] test_fw_run_batch_request+0x9d/0x180
      [<ffffffff95fd813b>] kthread+0x10b/0x140
      [<ffffffff95e033e9>] ret_from_fork+0x29/0x50
  unreferenced object 0xffff9553c3348400 (size 1024):
    comm "test_firmware-3", pid 5454, jiffies 4294944822 (age 65.652s)
    hex dump (first 32 bytes):
      47 48 34 35 36 37 0a 00 00 00 00 00 00 00 00 00  GH4567..........
      00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    backtrace:
      [<ffffffff962f5dec>] slab_post_alloc_hook+0x8c/0x3c0
      [<ffffffff962fcca4>] __kmem_cache_alloc_node+0x184/0x240
      [<ffffffff962704de>] kmalloc_trace+0x2e/0xc0
      [<ffffffff9665b42d>] test_fw_run_batch_request+0x9d/0x180
      [<ffffffff95fd813b>] kthread+0x10b/0x140
      [<ffffffff95e033e9>] ret_from_fork+0x29/0x50
  [root@pc-mtodorov firmware]#

Note that the size 1024 corresponds to the size of the test firmware buffer. The actual number of buffers leaked is around 70-110, depending on the test run. The cause of the leak is as follows: the firmware buffer provided to request_partial_firmware_into_buf() and request_firmware_into_buf() isn't released by release_firmware(); we allocated it, so we are responsible for deallocating it manually. This deallocation is introduced in a number of contexts where previously only release_firmware() was called, which was insufficient (see the sketch below). Reported-by:
Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr> Fixes: 7feebfa4 ("test_firmware: add support for request_firmware_into_buf") Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Dan Carpenter <error27@gmail.com> Cc: Takashi Iwai <tiwai@suse.de> Cc: Luis Chamberlain <mcgrof@kernel.org> Cc: Russ Weight <russell.h.weight@intel.com> Cc: Tianfei zhang <tianfei.zhang@intel.com> Cc: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Cc: Zhengchao Shao <shaozhengchao@huawei.com> Cc: Colin Ian King <colin.i.king@gmail.com> Cc: linux-kernel@vger.kernel.org Cc: Kees Cook <keescook@chromium.org> Cc: Scott Branden <sbranden@broadcom.com> Cc: Luis R. Rodriguez <mcgrof@kernel.org> Cc: linux-kselftest@vger.kernel.org Cc: stable@vger.kernel.org # v5.4 Signed-off-by:
Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr> Link: https://lore.kernel.org/r/20230509084746.48259-3-mirsad.todorovac@alu.unizg.hr Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
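A sketch of the manual deallocation added in the release paths; the field names (fw, fw_buf) follow the driver loosely and should be treated as assumptions:

  /* release_firmware() does not free a buffer we handed in ourselves */
  release_firmware(req->fw);
  req->fw = NULL;

  if (req->fw_buf) {
          kfree(req->fw_buf);     /* buffer given to request_firmware_into_buf() */
          req->fw_buf = NULL;
  }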
-
Mirsad Goran Todorovac authored
Dan Carpenter spotted that test_fw_config->reqs will be leaked if trigger_batched_requests_store() is called two or more times. The same happens with trigger_batched_requests_async_store(). This bug wasn't triggered by the tests, but was observed during Dan's visual inspection of the code. The recommended workaround is to return -EBUSY if test_fw_config->reqs is already allocated, as sketched below. Fixes: 7feebfa4 ("test_firmware: add support for request_firmware_into_buf") Cc: Luis Chamberlain <mcgrof@kernel.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Russ Weight <russell.h.weight@intel.com> Cc: Tianfei Zhang <tianfei.zhang@intel.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Colin Ian King <colin.i.king@gmail.com> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: linux-kselftest@vger.kernel.org Cc: stable@vger.kernel.org # v5.4 Suggested-by:
Dan Carpenter <error27@gmail.com> Suggested-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr> Reviewed-by:
Dan Carpenter <dan.carpenter@linaro.org> Acked-by:
Luis Chamberlain <mcgrof@kernel.org> Link: https://lore.kernel.org/r/20230509084746.48259-2-mirsad.todorovac@alu.unizg.hr Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
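The recommended guard, sketched for the batched-request store paths; the allocation call is schematic only:

  mutex_lock(&test_fw_mutex);
  if (test_fw_config->reqs) {
          rc = -EBUSY;    /* previous batch still allocated; don't leak it */
          goto out_unlock;
  }

  test_fw_config->reqs =
          vzalloc(array_size(test_fw_config->num_requests,
                             sizeof(struct test_batched_req)));
  if (!test_fw_config->reqs) {
          rc = -ENOMEM;
          goto out_unlock;
  }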
-
Mirsad Goran Todorovac authored
Dan Carpenter spotted a race condition in a couple of situations like these in the test_firmware driver:

  static int test_dev_config_update_u8(const char *buf, size_t size, u8 *cfg)
  {
          u8 val;
          int ret;

          ret = kstrtou8(buf, 10, &val);
          if (ret)
                  return ret;

          mutex_lock(&test_fw_mutex);
          *(u8 *)cfg = val;
          mutex_unlock(&test_fw_mutex);

          /* Always return full write size even if we didn't consume all */
          return size;
  }

  static ssize_t config_num_requests_store(struct device *dev,
                                           struct device_attribute *attr,
                                           const char *buf, size_t count)
  {
          int rc;

          mutex_lock(&test_fw_mutex);
          if (test_fw_config->reqs) {
                  pr_err("Must call release_all_firmware prior to changing config\n");
                  rc = -EINVAL;
                  mutex_unlock(&test_fw_mutex);
                  goto out;
          }
          mutex_unlock(&test_fw_mutex);

          rc = test_dev_config_update_u8(buf, count,
                                         &test_fw_config->num_requests);

  out:
          return rc;
  }

  static ssize_t config_read_fw_idx_store(struct device *dev,
                                          struct device_attribute *attr,
                                          const char *buf, size_t count)
  {
          return test_dev_config_update_u8(buf, count,
                                           &test_fw_config->read_fw_idx);
  }

The function test_dev_config_update_u8() is called from both the locked and the unlocked context: config_num_requests_store() and config_read_fw_idx_store() can both be called asynchronously, as they are driver methods, while test_dev_config_update_u8() and its siblings change the value their u8 *cfg (or similar) argument points to. To avoid a deadlock on test_fw_mutex, the lock is dropped before calling test_dev_config_update_u8() and re-acquired within test_dev_config_update_u8() itself, but alas this creates a race condition. Having two locks wouldn't assure race-proof mutual exclusion. This situation is best avoided by introducing a new, unlocked function __test_dev_config_update_u8() which can be called from the locked context, and reducing test_dev_config_update_u8() to:

  static int test_dev_config_update_u8(const char *buf, size_t size, u8 *cfg)
  {
          int ret;

          mutex_lock(&test_fw_mutex);
          ret = __test_dev_config_update_u8(buf, size, cfg);
          mutex_unlock(&test_fw_mutex);

          return ret;
  }

i.e. doing the locking and calling the unlocked primitive, which enables both locked and unlocked versions without duplicating code. The same approach was applied to all functions called from both the locked and the unlocked context, which safely mitigates both deadlocks and race conditions in the driver. The unlocked versions __test_dev_config_update_bool(), __test_dev_config_update_u8() and __test_dev_config_update_size_t() were introduced to be called from the locked contexts, so the main driver lock no longer has to be released in a way that opens a race condition. The locked versions test_dev_config_update_bool(), test_dev_config_update_u8() and test_dev_config_update_size_t() are called from the driver methods, without multiplying the locking and unlocking code for each method and without complicating the code by carrying the return value across the lock. (A sketch of the unlocked primitive follows below.) Fixes: 7feebfa4 ("test_firmware: add support for request_firmware_into_buf") Cc: Luis Chamberlain <mcgrof@kernel.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Russ Weight <russell.h.weight@intel.com> Cc: Takashi Iwai <tiwai@suse.de> Cc: Tianfei Zhang <tianfei.zhang@intel.com> Cc: Shuah Khan <shuah@kernel.org> Cc: Colin Ian King <colin.i.king@gmail.com> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: linux-kselftest@vger.kernel.org Cc: stable@vger.kernel.org # v5.4 Suggested-by:
Dan Carpenter <error27@gmail.com> Signed-off-by:
Mirsad Goran Todorovac <mirsad.todorovac@alu.unizg.hr> Link: https://lore.kernel.org/r/20230509084746.48259-1-mirsad.todorovac@alu.unizg.hr Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
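For completeness, a sketch of the unlocked primitive that pairs with the locked wrapper quoted above; this is the assumed shape, not the verbatim patch:

  static int __test_dev_config_update_u8(const char *buf, size_t size, u8 *cfg)
  {
          u8 val;
          int ret;

          ret = kstrtou8(buf, 10, &val);
          if (ret)
                  return ret;

          /* caller holds test_fw_mutex */
          *cfg = val;

          /* Always return full write size even if we didn't consume all */
          return size;
  }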
-
- May 22, 2023
-
-
Tetsuo Handa authored
syzbot is reporting a lockdep warning in fill_pool() because the allocation from debugobjects is using GFP_ATOMIC, which is (__GFP_HIGH | __GFP_KSWAPD_RECLAIM) and therefore tries to wake up kswapd, which acquires kswapd_wait::lock. Since fill_pool() might be called with arbitrary locks held, fill_pool() should not assume that acquiring kswapd_wait::lock is safe. Use __GFP_HIGH instead and remove __GFP_NORETRY as it is pointless for !__GFP_DIRECT_RECLAIM allocation. Fixes: 3ac7fe5a ("infrastructure to debug (dynamic) objects") Reported-by:
syzbot <syzbot+fe0c72f0ccbb93786380@syzkaller.appspotmail.com> Signed-off-by:
Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/6577e1fa-b6ee-f2be-2414-a2b51b1c5e30@I-love.SAKURA.ne.jp Closes: https://syzkaller.appspot.com/bug?extid=fe0c72f0ccbb93786380
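The flag change, sketched; the exact surrounding code in fill_pool() is assumed, not quoted:

  /* __GFP_HIGH grants access to reserves without waking kswapd;
   * __GFP_NORETRY is meaningless without __GFP_DIRECT_RECLAIM, so drop it. */
  static const gfp_t gfp = __GFP_HIGH | __GFP_NOWARN;

  new = kmem_cache_zalloc(obj_cache, gfp);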
-
- May 17, 2023
-
-
Peng Zhang authored
Make mas->min and mas->max point to a node range instead of a leaf entry range. This allows mas to still be usable after mas_empty_area() returns. Users would get unexpected results from other operations on the maple state after calling the affected function. For example, x86 MAP_32BIT mmap() acts as if there is no suitable gap when there should be one. Link: https://lkml.kernel.org/r/20230505145829.74574-1-zhangpeng.00@bytedance.com Fixes: 54a611b6 ("Maple Tree: add new data structure") Signed-off-by:
Peng Zhang <zhangpeng.00@bytedance.com> Reported-by:
"Edgecombe, Rick P" <rick.p.edgecombe@intel.com> Reported-by:
Tad <support@spotco.us> Reported-by:
Michael Keyes <mgkeyes@vigovproductions.net> Link: https://lore.kernel.org/linux-mm/32f156ba80010fd97dbaf0a0cdfc84366608624d.camel@intel.com/ Link: https://lore.kernel.org/linux-mm/e6108286ac025c268964a7ead3aab9899f9bc6e9.camel@spotco.us/ Reviewed-by:
Liam R. Howlett <Liam.Howlett@oracle.com> Tested-by:
Rick Edgecombe <rick.p.edgecombe@intel.com> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
- May 09, 2023
-
-
Roy Novich authored
Add a return value to dim_calc_stats(). It indicates to the caller whether curr_stats was assigned by the function, and avoids using curr_stats uninitialized in {rdma/net}_dim when there is no time delta between samples (see the sketch below). Coverity reported this potential use of an uninitialized variable. Fixes: 4c4dbb4a ("net/mlx5e: Move dynamic interrupt coalescing code to include/linux") Fixes: cb3c7fd4 ("net/mlx5e: Support adaptive RX coalescing") Signed-off-by:
Roy Novich <royno@nvidia.com> Reviewed-by:
Aya Levin <ayal@nvidia.com> Reviewed-by:
Saeed Mahameed <saeedm@nvidia.com> Signed-off-by:
Tariq Toukan <tariqt@nvidia.com> Reviewed-by:
Leon Romanovsky <leonro@nvidia.com> Reviewed-by:
Michal Kubiak <michal.kubiak@intel.com> Link: https://lore.kernel.org/r/20230507135743.138993-1-tariqt@nvidia.com Signed-off-by:
Paolo Abeni <pabeni@redhat.com>
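A sketch of how a caller can use the new return value; the exact call sites in net_dim/rdma_dim are assumed, not quoted:

  struct dim_stats curr_stats;

  /* returns false when there is no time delta between the samples,
   * in which case curr_stats was never written and must not be used */
  if (!dim_calc_stats(&dim->start_sample, end_sample, &curr_stats))
          return;

  if (net_dim_decision(&curr_stats, dim))
          schedule_work(&dim->work);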
-
- May 03, 2023
-
-
Kefeng Wang authored
dump_user_range() is used to copy user pages to a coredump file, but if a hardware memory error occurs during the copy (called from __kernel_write_iter() in dump_user_range()), the kernel crashes:

  CPU: 112 PID: 7014 Comm: mca-recover Not tainted 6.3.0-rc2 #425
  pc : __memcpy+0x110/0x260
  lr : _copy_from_iter+0x3bc/0x4c8
  ...
  Call trace:
   __memcpy+0x110/0x260
   copy_page_from_iter+0xcc/0x130
   pipe_write+0x164/0x6d8
   __kernel_write_iter+0x9c/0x210
   dump_user_range+0xc8/0x1d8
   elf_core_dump+0x308/0x368
   do_coredump+0x2e8/0xa40
   get_signal+0x59c/0x788
   do_signal+0x118/0x1f8
   do_notify_resume+0xf0/0x280
   el0_da+0x130/0x138
   el0t_64_sync_handler+0x68/0xc0
   el0t_64_sync+0x188/0x190

Generally, the '->write_iter' of file ops will use copy_page_from_iter() and copy_page_from_iter_atomic(). Change memcpy() to copy_mc_to_kernel() in both of them to handle an #MC during the source read, which stops coredump processing and kills the task instead of panicking the kernel. But the source address may not always be a user address, so introduce a new copy_mc flag in struct iov_iter{} to indicate that the iter can do a machine-check-safe memory copy, and also introduce helpers to set/check the flag (sketched below). For now, it's only used in coredump's dump_user_range(), but it could be extended to any other scenario with a similar issue. Link: https://lkml.kernel.org/r/20230417045323.11054-1-wangkefeng.wang@huawei.com Signed-off-by:
Kefeng Wang <wangkefeng.wang@huawei.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Christian Brauner <brauner@kernel.org> Cc: Miaohe Lin <linmiaohe@huawei.com> Cc: Naoya Horiguchi <naoya.horiguchi@nec.com> Cc: Tong Tiangen <tongtiangen@huawei.com> Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
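A sketch of the flag and its helpers as described above; the names follow the changelog and should be treated as assumptions rather than the exact upstream API:

  /* in struct iov_iter (under CONFIG_ARCH_HAS_COPY_MC): bool copy_mc; */

  static inline void iov_iter_set_copy_mc(struct iov_iter *i)
  {
          i->copy_mc = true;
  }

  static inline bool iov_iter_is_copy_mc(const struct iov_iter *i)
  {
          return IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC) && i->copy_mc;
  }

  /* coredump's dump_user_range() would then opt in before writing, e.g.
   * iov_iter_set_copy_mc(&iter); */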
-
- May 02, 2023
-
-
Peter Zijlstra authored
There is an explicit wait-type violation in debug_object_fill_pool() for PREEMPT_RT=n kernels which allows them to more easily fill the object pool and reduce the chance of allocation failures. Lockdep's wait-type checks are designed to check the PREEMPT_RT locking rules even for PREEMPT_RT=n kernels and object to this, so create a lockdep annotation to allow this to stand. Specifically, create a 'lock' type that overrides the inner wait-type while it is held -- allowing one to temporarily raise it, such that the violation is hidden. Reported-by:
Vlastimil Babka <vbabka@suse.cz> Reported-by:
Qi Zheng <zhengqi.arch@bytedance.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by:
Qi Zheng <zhengqi.arch@bytedance.com> Link: https://lkml.kernel.org/r/20230429100614.GA1489784@hirez.programming.kicks-ass.net
-
Thomas Gleixner authored
The recent fix to ensure atomicity of lookup and allocation inadvertently broke the pool refill mechanism. Prior to that change, debug_object_activate() and debug_object_assert_init() invoked debug_object_init() to set up the tracking object for statically initialized objects. That's no longer the case, and debug_object_init() is now the only place which does pool refills. Depending on the number of statically initialized objects, this can be enough to actually deplete the pool, which was observed by Ido via a debugobjects OOM warning. Restore the old behaviour by adding explicit refill opportunities to debug_object_activate() and debug_object_assert_init(). Fixes: 63a75969 ("debugobject: Prevent init race with static objects") Reported-by:
Ido Schimmel <idosch@nvidia.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Tested-by:
Ido Schimmel <idosch@nvidia.com> Link: https://lore.kernel.org/r/871qk05a9d.ffs@tglx
-
- Apr 26, 2023
-
-
Sergey Senozhatsky authored
Sometimes we use seq_buf to format a string buffer, which we then pass to printk(). However, in certain situations the seq_buf string buffer can get too big, exceeding the PRINTKRB_RECORD_MAX bytes limit, and causing printk() to truncate the string. Add a new seq_buf helper. This helper prints the seq_buf string buffer line by line, using \n as a delimiter, rather than passing the whole string buffer to printk() at once. Link: https://lkml.kernel.org/r/20230415100110.1419872-1-senozhatsky@chromium.org Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Sergey Senozhatsky <senozhatsky@chromium.org> Reviewed-by:
Petr Mladek <pmladek@suse.com> Tested-by:
Yosry Ahmed <yosryahmed@google.com> Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org>
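A sketch of such a helper under the stated constraint; the name and shape are assumptions based on the changelog, not the verbatim patch:

  void seq_buf_do_printk(struct seq_buf *s, const char *lvl)
  {
          const char *start, *lf;
          int len;

          if (s->size == 0 || s->len == 0)
                  return;

          start = s->buffer;
          /* emit one printk() per '\n'-delimited line */
          while ((lf = memchr(start, '\n', s->len - (start - s->buffer)))) {
                  len = lf - start + 1;
                  printk("%s%.*s", lvl, len, start);
                  start = lf + 1;
          }

          /* flush any trailing text that lacks a newline */
          if (start < s->buffer + s->len)
                  printk("%s%.*s\n", lvl,
                         (int)(s->buffer + s->len - start), start);
  }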
-
- Apr 24, 2023
-
-
Linus Torvalds authored
Use the same pattern as the compat version of this code does: instead of copying the whole array to a kernel buffer and then having a separate phase of verifying it, just do it one entry at a time, verifying as you go. On Jens' /dev/zero readv() test this improves performance by ~6%. [ This was obviously triggered by Jens' ITER_UBUF updates series ] Reported-and-tested-by:
Jens Axboe <axboe@kernel.dk> Link: https://lore.kernel.org/all/de35d11d-bce7-e976-7372-1f2caf417103@kernel.dk/ Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
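A sketch of the per-entry copy-and-verify pattern; this is simplified (the real import path handles more cases) and the surrounding declarations are illustrative:

  static ssize_t import_iovec_sketch(const struct iovec __user *uvec,
                                     unsigned long nr_segs, struct iovec *iov)
  {
          ssize_t total_len = 0, ret = -EFAULT;
          unsigned long seg;

          if (!user_access_begin(uvec, nr_segs * sizeof(*uvec)))
                  return -EFAULT;

          for (seg = 0; seg < nr_segs; seg++) {
                  void __user *base;
                  ssize_t len;

                  unsafe_get_user(len, &uvec[seg].iov_len, uaccess_end);
                  unsafe_get_user(base, &uvec[seg].iov_base, uaccess_end);

                  /* verify each entry as it is copied, no second pass */
                  if (len < 0) {
                          ret = -EINVAL;
                          goto uaccess_end;
                  }
                  iov[seg].iov_base = base;
                  iov[seg].iov_len = len;
                  total_len += len;
          }
          user_access_end();
          return total_len;

  uaccess_end:
          user_access_end();
          return ret;
  }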
-
- Apr 21, 2023
-
-
Peng Zhang authored
In the case of reverse allocation, mas->index and mas->last do not point to the correct allocation range, which causes users to get incorrect allocation results, so fix it. The bug is only triggered when the interface is used in a specific way; the only current user is the VMA code, and the way it uses the interface today will not trigger it, though a future user might. Also re-check whether the size is still satisfied after the lower bound has been increased, which is a corner case that was handled incorrectly in previous versions. Link: https://lkml.kernel.org/r/20230419093625.99201-1-zhangpeng.00@bytedance.com Fixes: 54a611b6 ("Maple Tree: add new data structure") Signed-off-by:
Peng Zhang <zhangpeng.00@bytedance.com> Cc: Liam R. Howlett <Liam.Howlett@Oracle.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
Yajun Deng authored
__show_mem() needs to iterate over all zones that have memory, we can simplify the code by using for_each_populated_zone(). Link: https://lkml.kernel.org/r/20230417035226.4013584-1-yajun.deng@linux.dev Signed-off-by:
Yajun Deng <yajun.deng@linux.dev> Acked-by:
Vlastimil Babka <vbabka@suse.cz> Acked-by:
Michal Hocko <mhocko@suse.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
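The simplification, sketched as a schematic before/after (not the verbatim diff):

  /* before: iterate all zones and skip the empty ones by hand */
  for_each_zone(zone) {
          if (!populated_zone(zone))
                  continue;
          /* ... dump per-zone state ... */
  }

  /* after: the helper already filters out unpopulated zones */
  for_each_populated_zone(zone) {
          /* ... dump per-zone state ... */
  }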
-
Xie Yongji authored
Export group_cpus_evenly() so that some modules can make use of it to group CPUs evenly according to NUMA and CPU locality. Signed-off-by:
Xie Yongji <xieyongji@bytedance.com> Acked-by:
Jason Wang <jasowang@redhat.com> Message-Id: <20230323053043.35-2-xieyongji@bytedance.com> Signed-off-by:
Michael S. Tsirkin <mst@redhat.com>
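A minimal sketch of how a module might consume the newly exported helper; the group count, header path and error handling are illustrative assumptions:

  #include <linux/group_cpus.h>

  static int init_queues(unsigned int nr_queues)
  {
          struct cpumask *masks;

          /* one NUMA/locality-aware mask per group, or NULL on failure */
          masks = group_cpus_evenly(nr_queues);
          if (!masks)
                  return -ENOMEM;

          /* ... assign masks[i] to queue i ... */

          kfree(masks);
          return 0;
  }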
-
- Apr 18, 2023
-
-
Noah Goldstein authored
This has a slight benefit for x86 and no effect on other targets. The benefit to x86 is that it changes the codegen for setting a node to black from `mov %r0, %r1; or $RB_BLACK, %r1` to `lea RB_BLACK(%r0), %r1`, which saves an instruction. In all other cases it just replaces one ALU operation with another (or -> add), which performs the same on all machines I am aware of. Total instructions in rbtree.o: before - 802, after - 782, so it saves about 20 `mov` instructions. The sketch below checks the or/add equivalence this relies on. Link: https://lkml.kernel.org/r/20230404221350.3806566-1-goldstein.w.n@gmail.com Signed-off-by:
Noah Goldstein <goldstein.w.n@gmail.com> Cc: Michel Lespinasse <michel@lespinasse.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
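The reason the `or` can become a single `lea` is that a node pointer is aligned, so its low bit is already clear and or-ing in RB_BLACK (== 1) is the same as adding it. A tiny standalone userspace check of that equivalence:

  #include <assert.h>
  #include <stdint.h>
  #include <stdlib.h>

  #define RB_BLACK 1UL

  int main(void)
  {
          /* stand-in for a struct rb_node pointer, which is at least
           * 4-byte aligned, so bit 0 of its address is always 0 */
          uintptr_t parent = (uintptr_t)aligned_alloc(8, 64);

          assert((parent & 1) == 0);
          assert((parent | RB_BLACK) == (parent + RB_BLACK));

          free((void *)parent);
          return 0;
  }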
-
Peng Zhang authored
The type of variable pointed to by pivs is unsigned long, but the type used in sizeof is a pointer type. Change it to unsigned long. This change has no runtime effect, as sizeof(ul) == sizeof(ul *). Link: https://lkml.kernel.org/r/20230411023513.15227-1-zhangpeng.00@bytedance.com Fixes: 54a611b6 ("Maple Tree: add new data structure") Signed-off-by:
Peng Zhang <zhangpeng.00@bytedance.com> Reported-by:
David Binderman <dcb314@hotmail.com> Reviewed-by:
Liam R. Howlett <Liam.Howlett@oracle.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
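The class of bug, illustrated (not the exact line from the tree): sizeof is taken on the pointer variable instead of on what it points to, which happens to be harmless only because both are 8 bytes on 64-bit targets.

  /* pivs points at an array of unsigned long pivots */
  unsigned long *pivs;
  size_t bytes;

  /* what was written: size of the pointer variable itself */
  bytes = count * sizeof(pivs);

  /* what is meant: size of the pointed-to element */
  bytes = count * sizeof(*pivs);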
-
Peng Zhang authored
Simplify code of mas_wr_node_walk() without changing functionality, and improve readability. Remove some special judgments. Instead of dynamically recording the min and max in the loop, get the final min and max directly at the end. Link: https://lkml.kernel.org/r/20230314124203.91572-3-zhangpeng.00@bytedance.com Signed-off-by:
Peng Zhang <zhangpeng.00@bytedance.com> Reviewed-by:
Liam R. Howlett <Liam.Howlett@oracle.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
Uladzislau Rezki (Sony) authored
Add vm_map_ram()/vm_unmap_ram() test case to our stress test-suite. [akpm@linux-foundation.org: fix whitespace, per Lorenzo] Link: https://lkml.kernel.org/r/20230330190639.431589-2-urezki@gmail.com Signed-off-by:
Uladzislau Rezki (Sony) <urezki@gmail.com> Reviewed-by:
Lorenzo Stoakes <lstoakes@gmail.com> Reviewed-by:
Baoquan He <bhe@redhat.com> Cc: Christoph Hellwig <hch@infradead.org> Cc: Dave Chinner <david@fromorbit.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Oleksiy Avramchenko <oleksiy.avramchenko@sony.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
Liam R. Howlett authored
The internal function mas_awalk() was incorrectly skipping the last entry in a node, which could potentially be NULL. This is only a problem for the left-most node in the tree - otherwise that NULL would not exist. Fix mas_awalk() by using the metadata to obtain the end of the node for the loop, and the logical pivot as opposed to the raw pivot value. Link: https://lkml.kernel.org/r/20230414145728.4067069-2-Liam.Howlett@oracle.com Fixes: 54a611b6 ("Maple Tree: add new data structure") Signed-off-by:
Liam R. Howlett <Liam.Howlett@oracle.com> Reported-by:
Rick Edgecombe <rick.p.edgecombe@intel.com> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
Liam R. Howlett authored
Stop using maple state min/max for the range by passing through pointers for those values. This will allow the maple state to be reused without resetting. Also add some logic to fail out early on searching with invalid arguments. Link: https://lkml.kernel.org/r/20230414145728.4067069-1-Liam.Howlett@oracle.com Fixes: 54a611b6 ("Maple Tree: add new data structure") Signed-off-by:
Liam R. Howlett <Liam.Howlett@oracle.com> Reported-by:
Rick Edgecombe <rick.p.edgecombe@intel.com> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
- Apr 17, 2023
-
-
Christoph Hellwig authored
This was only ever used by btrfs, and the usage just went away. This effectively reverts df91f56a ("libcrc32c: Add crc32c_impl function"). Acked-by:
Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by:
Christoph Hellwig <hch@lst.de> Reviewed-by:
David Sterba <dsterba@suse.com> Signed-off-by:
David Sterba <dsterba@suse.com>
-
- Apr 16, 2023
-
-
Akinobu Mita authored
This fixes a build error when CONFIG_FAULT_INJECTION_CONFIGFS=y and CONFIG_CONFIGFS_FS=m. Since the fault-injection library cannot be built as a module, avoid building configfs as a module in this configuration. Fixes: 4668c7a2 ("fault-inject: allow configuration via configfs") Reported-by:
kernel test robot <lkp@intel.com> Link: https://lore.kernel.org/oe-kbuild-all/202304150025.K0hczLR4-lkp@intel.com/ Signed-off-by:
Akinobu Mita <akinobu.mita@gmail.com> Signed-off-by:
Jens Axboe <axboe@kernel.dk>
-
Peng Zhang authored
In mas_alloc_nodes(), "node->node_count = 0" is meant to initialize the node_count field of a new node, but the node may not be new. It may be a node that existed before and whose node_count already has a value; setting it to 0 causes a memory leak. At that point, mas->alloc->total is greater than the actual number of nodes in the linked list, which may cause many other errors, for example an out-of-bounds access in mas_pop_node(), which may then return addresses that should not be used. Fix it by initializing node_count only for new nodes. Also, along the way, remove an if-else statement to simplify the code. Link: https://lkml.kernel.org/r/20230411041005.26205-1-zhangpeng.00@bytedance.com Fixes: 54a611b6 ("Maple Tree: add new data structure") Signed-off-by:
Peng Zhang <zhangpeng.00@bytedance.com> Reviewed-by:
Liam R. Howlett <Liam.Howlett@oracle.com> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
- Apr 15, 2023
-
-
Thomas Gleixner authored
Statically initialized objects are usually not initialized via the init() function of the subsystem. They are special-cased and the subsystem provides a function to validate whether an object which is not yet tracked by debugobjects is statically initialized. This means the object only starts to be tracked on first use, e.g. activation. This works perfectly fine, unless there are two concurrent operations on that object. Schspa decoded the problem:

  T0                                  T1
  debug_object_assert_init(addr)
    lock_hash_bucket()
    obj = lookup_object(addr);
    if (!obj) {
      unlock_hash_bucket();
      -> preemption
                                      lock_subsystem_object(addr);
                                        activate_object(addr)
                                        lock_hash_bucket();
                                        obj = lookup_object(addr);
                                        if (!obj) {
                                          unlock_hash_bucket();
                                          if (is_static_object(addr))
                                            init_and_track(addr);
                                          lock_hash_bucket();
                                          obj = lookup_object(addr);
                                          obj->state = ACTIVATED;
                                          unlock_hash_bucket();
                                        subsys function modifies content of
                                        addr, so static object detection no
                                        longer works.
                                      unlock_subsystem_object(addr);
    if (is_static_object(addr))       <- fails

debugobjects then emits a warning and invokes the fixup function, which reinitializes the already active object in the worst case. This race has existed forever, but was never observed until mod_timer() got a debug_object_assert_init() added which is outside of the section holding the timer base lock, right at the beginning of the function, to cover the lockless early exit points too. Rework the code so that the lookup, the static-object check and the tracking-object association happen atomically under the hash bucket lock. This prevents the issue completely, as all callers are serialized on the hash bucket lock and therefore cannot observe inconsistent state. Fixes: 3ac7fe5a ("infrastructure to debug (dynamic) objects") Reported-by:
<syzbot+5093ba19745994288b53@syzkaller.appspotmail.com> Debugged-by:
Schspa Shi <schspa@gmail.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Stephen Boyd <swboyd@chromium.org> Link: https://syzkaller.appspot.com/bug?id=22c8a5938eab640d1c6bcc0e3dc7be519d878462 Link: https://lore.kernel.org/lkml/20230303161906.831686-1-schspa@gmail.com Link: https://lore.kernel.org/r/87zg7dzgao.ffs@tglx
-
- Apr 13, 2023
-
-
Nick Alcock authored
Since commit 8b41fc44 ("kbuild: create modules.builtin without Makefile.modbuiltin or tristate.conf"), MODULE_LICENSE declarations are used to identify modules. As a consequence, uses of the macro in non-modules will cause modprobe to misidentify their containing object file as a module when it is not (false positives), and modprobe might succeed rather than failing with a suitable error message. So remove it in the files in this commit, none of which can be built as modules. Signed-off-by:
Nick Alcock <nick.alcock@oracle.com> Suggested-by:
Luis Chamberlain <mcgrof@kernel.org> Cc: Luis Chamberlain <mcgrof@kernel.org> Cc: linux-modules@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: Hitomi Hasegawa <hasegawa-hitomi@fujitsu.com> Signed-off-by:
Luis Chamberlain <mcgrof@kernel.org>
-
Nick Alcock authored
Since commit 8b41fc44 ("kbuild: create modules.builtin without Makefile.modbuiltin or tristate.conf"), MODULE_LICENSE declarations are used to identify modules. As a consequence, uses of the macro in non-modules will cause modprobe to misidentify their containing object file as a module when it is not (false positives), and modprobe might succeed rather than failing with a suitable error message. So remove it in the files in this commit, none of which can be built as modules. Signed-off-by:
Nick Alcock <nick.alcock@oracle.com> Suggested-by:
Luis Chamberlain <mcgrof@kernel.org> Cc: Luis Chamberlain <mcgrof@kernel.org> Cc: linux-modules@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: Hitomi Hasegawa <hasegawa-hitomi@fujitsu.com> Signed-off-by:
Luis Chamberlain <mcgrof@kernel.org>
-
Nick Alcock authored
Since commit 8b41fc44 ("kbuild: create modules.builtin without Makefile.modbuiltin or tristate.conf"), MODULE_LICENSE declarations are used to identify modules. As a consequence, uses of the macro in non-modules will cause modprobe to misidentify their containing object file as a module when it is not (false positives), and modprobe might succeed rather than failing with a suitable error message. So remove it in the files in this commit, none of which can be built as modules. Signed-off-by:
Nick Alcock <nick.alcock@oracle.com> Suggested-by:
Luis Chamberlain <mcgrof@kernel.org> Acked-by:
Jacob Keller <jacob.e.keller@intel.com> Cc: Luis Chamberlain <mcgrof@kernel.org> Cc: linux-modules@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: Hitomi Hasegawa <hasegawa-hitomi@fujitsu.com> Cc: Jacob Keller <jacob.e.keller@intel.com> Signed-off-by:
Luis Chamberlain <mcgrof@kernel.org>
-
Nick Alcock authored
Now blake2s-generic.c can no longer be a module, drop all remaining module-related code as well. Signed-off-by:
Nick Alcock <nick.alcock@oracle.com> Requested-by:
Herbert Xu <herbert@gondor.apana.org.au> Cc: Luis Chamberlain <mcgrof@kernel.org> Cc: linux-modules@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: Hitomi Hasegawa <hasegawa-hitomi@fujitsu.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: linux-crypto@vger.kernel.org Signed-off-by:
Luis Chamberlain <mcgrof@kernel.org>
-
Nick Alcock authored
Since commit 8b41fc44 ("kbuild: create modules.builtin without Makefile.modbuiltin or tristate.conf"), MODULE_LICENSE declarations are used to identify modules. As a consequence, uses of the macro in non-modules will cause modprobe to misidentify their containing object file as a module when it is not (false positives), and modprobe might succeed rather than failing with a suitable error message. So remove it in the files in this commit, none of which can be built as modules. Signed-off-by:
Nick Alcock <nick.alcock@oracle.com> Suggested-by:
Luis Chamberlain <mcgrof@kernel.org> Cc: Luis Chamberlain <mcgrof@kernel.org> Cc: linux-modules@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: Hitomi Hasegawa <hasegawa-hitomi@fujitsu.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: linux-crypto@vger.kernel.org Signed-off-by:
Luis Chamberlain <mcgrof@kernel.org>
-
Akinobu Mita authored
This provides a helper function to allow configuration of fault-injection for configfs-based drivers. The config items created by this function have the same interface as the one created under debugfs by fault_create_debugfs_attr(). Signed-off-by:
Akinobu Mita <akinobu.mita@gmail.com> Link: https://lore.kernel.org/r/20230327143733.14599-2-akinobu.mita@gmail.com Signed-off-by:
Jens Axboe <axboe@kernel.dk>
-
- Apr 12, 2023
-
-
Josh Poimboeuf authored
After commit 6376ce56feb6 ("iov_iter: import single vector iovecs as ITER_UBUF"), GCC does an inter-procedural compiler optimization which moves the user_access_begin() out of copy_compat_iovec_from_user() and into its callers: lib/iov_iter.o: warning: objtool: .altinstr_replacement+0x0: redundant UACCESS disable lib/iov_iter.o: warning: objtool: iovec_from_user.part.0+0xc7: call to copy_compat_iovec_from_user.part.0() with UACCESS enabled lib/iov_iter.o: warning: objtool: __import_iovec+0x21d: call to copy_compat_iovec_from_user.part.0() with UACCESS enabled Enforce the "no UACCESS enable across function boundaries" rule by disabling cloning for copy_compat_iovec_from_user(). Fixes: 6376ce56feb6 ("iov_iter: import single vector iovecs as ITER_UBUF") Reported-by:
Stephen Rothwell <sfr@canb.auug.org.au> https://lkml.kernel.org/lkml/20230327120017.6bb826d7@canb.auug.org.au Signed-off-by:
Josh Poimboeuf <jpoimboe@kernel.org> Tested-by:
Jens Axboe <axboe@kernel.dk> Signed-off-by:
Jens Axboe <axboe@kernel.dk>
-
- Apr 08, 2023
-
-
Andy Shevchenko authored
When we get a random number to generate a flag in the valid range of UNESCAPE flags, use UNESCAPE_ALL_MASK. It's more correct and prevents missed updates of the test coverage in the future, if any. Link: https://lkml.kernel.org/r/20230327142604.48213-1-andriy.shevchenko@linux.intel.com Signed-off-by:
Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
Alexey Dobriyan authored
ELF is an acronym and therefore should be spelled in all caps. I left one exception at Documentation/arm/nwfpe/nwfpe.rst, which looks like it is written in the first person. Link: https://lkml.kernel.org/r/Y/3wGWQviIOkyLJW@p183 Signed-off-by:
Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
- Apr 06, 2023
-
-
Lorenzo Stoakes authored
Provide a means to copy a page to user space from an iterator, aborting if a page fault would occur. This supports compound pages, but may be passed a tail page with an offset extending further into the compound page, so we cannot pass a folio. This allows the function to be called from atomic context and to _try_ to copy to user pages if they are faulted in, aborting if not. The function does not use _copy_to_iter() in order to not specify might_fault(); this is similar to copy_page_from_iter_atomic(). This is being added so that an iterator-based form of vread() can be implemented while holding spinlocks. Link: https://lkml.kernel.org/r/19734729defb0f498a76bdec1bef3ac48a3af3e8.1679511146.git.lstoakes@gmail.com Signed-off-by:
Lorenzo Stoakes <lstoakes@gmail.com> Reviewed-by:
Baoquan He <bhe@redhat.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: David Hildenbrand <david@redhat.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Liu Shixin <liushixin2@huawei.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Uladzislau Rezki (Sony) <urezki@gmail.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
Peng Zhang authored
There is a concurrency bug that may cause the wrong value to be loaded when a CPU is modifying the maple tree.

  CPU1:
  mtree_insert_range()
    mas_insert()
      mas_store_root()
        ...
        mas_root_expand()
          ...
          rcu_assign_pointer(mas->tree->ma_root, mte_mk_root(mas->node));
          ma_set_meta(node, maple_leaf_64, 0, slot);    <---IP

  CPU2:
  mtree_load()
    mtree_lookup_walk()
      ma_data_end();

When CPU1 is about to execute the instruction pointed to by IP, the ma_data_end() executed by CPU2 may return the wrong end position, which will cause the value loaded by mtree_load() to be wrong. An example of triggering the bug: add mdelay(100) between rcu_assign_pointer() and ma_set_meta() in mas_root_expand(), then run:

  static DEFINE_MTREE(tree);

  int work(void *p)
  {
          unsigned long val;

          for (int i = 0; i < 30; ++i) {
                  val = (unsigned long)mtree_load(&tree, 8);
                  mdelay(5);
                  pr_info("%lu", val);
          }
          return 0;
  }

  mt_init_flags(&tree, MT_FLAGS_USE_RCU);
  mtree_insert(&tree, 0, (void *)12345, GFP_KERNEL);
  run_thread(work)
  mtree_insert(&tree, 1, (void *)56789, GFP_KERNEL);

In RCU mode, mtree_load() should always return the value either before or after the data structure is modified; in this example, mtree_load(&tree, 8) may return 56789, which is not expected - it should always return NULL. Fix it by putting ma_set_meta() before rcu_assign_pointer(). Link: https://lkml.kernel.org/r/20230314124203.91572-4-zhangpeng.00@bytedance.com Fixes: 54a611b6 ("Maple Tree: add new data structure") Signed-off-by:
Peng Zhang <zhangpeng.00@bytedance.com> Reviewed-by:
Liam R. Howlett <Liam.Howlett@oracle.com> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-