- Jul 25, 2023
-
-
Ritesh Harjani (IBM) authored
When the filesystem block size is smaller than the folio size (either with mapping_large_folio_support() or with blocksize < pagesize) and the folio is uptodate in the page cache, even a one-byte write can cause an entire folio to be written to disk during writeback. This happens because we currently have no mechanism to track per-block dirty state within struct iomap_folio_state; we only track uptodate state. This patch implements support for tracking per-block dirty state in the iomap_folio_state->state bitmap. This should help improve filesystem write performance and reduce write amplification.

Performance testing of the fio workload below reveals a ~16x performance improvement using nvme with XFS (4k blocksize) on Power (64K pagesize). FIO-reported write bandwidth improved from around ~28 MBps to ~452 MBps.

1. <test_randwrite.fio>

	[global]
	ioengine=psync
	rw=randwrite
	overwrite=1
	pre_read=1
	direct=0
	bs=4k
	size=1G
	dir=./
	numjobs=8
	fdatasync=1
	runtime=60
	iodepth=64
	group_reporting=1

	[fio-run]

2. Our internal performance team also reported that this patch improves their database workload performance by around ~83% (with XFS on Power).

Reported-by: Aravinda Herle <araherle@in.ibm.com>
Reported-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
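A minimal sketch of the dirty-tracking idea, modeled on the helpers this series adds to fs/iomap/buffered-io.c (abridged, not the full patch): the ifs->state bitmap is doubled in size, and the per-block dirty bits are stored after the uptodate bits.

    static void ifs_set_range_dirty(struct folio *folio,
            struct iomap_folio_state *ifs, size_t off, size_t len)
    {
        struct inode *inode = folio->mapping->host;
        unsigned int blks_per_folio = i_blocks_per_folio(inode, folio);
        unsigned int first_blk = off >> inode->i_blkbits;
        unsigned int last_blk = (off + len - 1) >> inode->i_blkbits;
        unsigned int nr_blks = last_blk - first_blk + 1;
        unsigned long flags;

        spin_lock_irqsave(&ifs->state_lock, flags);
        /* Dirty bits live in the second half of ifs->state. */
        bitmap_set(ifs->state, first_blk + blks_per_folio, nr_blks);
        spin_unlock_irqrestore(&ifs->state_lock, flags);
    }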
-
Ritesh Harjani (IBM) authored
We don't need to allocate an ifs in ->write_begin() for writes whose position and length completely overlap the given folio, so such cases are skipped. Currently, when the folio is uptodate, we only allocate an ifs at writeback time (in iomap_writepage_map()). This is fine until now, but once we add support for a per-block dirty state bitmap in the ifs, it could cause a performance regression: if we don't allocate the ifs during ->write_begin(), then we never mark the necessary dirty bits in the ->write_end() call, and we would have to mark all the bits dirty at writeback time, which could cause the same write amplification and performance problems we have now.
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
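A sketch of the skip described above (the overlap condition is modeled on the merged __iomap_write_begin(); the surrounding code is elided):

    /*
     * A write that spans the whole folio will dirty every block,
     * so no per-block state tracking structure is needed.
     */
    if (pos <= folio_pos(folio) &&
        pos + len >= folio_pos(folio) + folio_size(folio))
            return 0;

    ifs = ifs_alloc(iter->inode, folio, iter->flags);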
-
Ritesh Harjani (IBM) authored
This patch factors out the iomap_write_delalloc_punch() function, which is responsible for the actual punch-out operation. The reason for doing this is to avoid deep indentation when a later patch (which adds per-block dirty status handling to iomap) brings in punch-out of individual non-dirty blocks within a dirty folio, to avoid leaking delalloc blocks.
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
-
Ritesh Harjani (IBM) authored
It makes things much easier if we have an iomap_punch_t typedef for the "punch" function pointer in all delalloc-related punch, scan and release functions. It will be useful in later patches when we factor out the iomap_write_delalloc_punch() function.
Suggested-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
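The typedef in question, as it appeared in fs/iomap/buffered-io.c at the time (later kernels changed the return type):

    typedef int (*iomap_punch_t)(struct inode *inode, loff_t offset,
            loff_t length);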
-
Ritesh Harjani (IBM) authored
folio_next_index() returns an unsigned long value which, left-shifted by PAGE_SHIFT, could overflow on a 32-bit system. Instead use folio_pos(folio) + folio_size(folio), which does this correctly.
Suggested-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
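The hazard in miniature (illustrative, not the literal diff): on 32-bit, folio_next_index() returns an unsigned long, so the shift wraps before the result is widened to loff_t.

    loff_t end;

    /* Buggy on 32-bit: the shift happens in unsigned long and can
     * overflow before the assignment widens it to 64 bits. */
    end = folio_next_index(folio) << PAGE_SHIFT;

    /* Fixed: folio_pos() returns loff_t, so the arithmetic is 64-bit. */
    end = folio_pos(folio) + folio_size(folio);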
-
Ritesh Harjani (IBM) authored
This patch adds two helper routines, ifs_is_fully_uptodate() and ifs_block_is_uptodate(), for managing the uptodate state of the "ifs" state bitmap. In later patches the ifs state bitmap array will also handle the dirty state of all blocks of a folio; hence this patch adds helper routines for handling the uptodate state of the ifs state bitmap.
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
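The two helpers, roughly as merged into fs/iomap/buffered-io.c:

    static bool ifs_is_fully_uptodate(struct folio *folio,
            struct iomap_folio_state *ifs)
    {
        struct inode *inode = folio->mapping->host;

        return bitmap_full(ifs->state, i_blocks_per_folio(inode, folio));
    }

    static bool ifs_block_is_uptodate(struct iomap_folio_state *ifs,
            unsigned int block)
    {
        return test_bit(block, ifs->state);
    }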
-
Ritesh Harjani (IBM) authored
iomap_folio_state (ifs) can be derived directly from the folio, making it unnecessary to pass "ifs" as an argument to iomap_set_range_uptodate(). This patch eliminates the "ifs" argument from the iomap_set_range_uptodate() function. Also, the definitions of iomap_set_range_uptodate() and ifs_set_range_uptodate() are moved above ifs_alloc(). Upcoming patches will introduce additional helper routines for handling dirty state, with the intention of consolidating all of the "ifs" state handling routines in one place.
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
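The derivation is a one-liner, since the ifs lives in folio->private; a sketch of the reworked entry point (abridged):

    static void iomap_set_range_uptodate(struct folio *folio, size_t off,
            size_t len)
    {
        struct iomap_folio_state *ifs = folio->private;

        if (ifs)
            ifs_set_range_uptodate(folio, ifs, off, len);
        else
            folio_mark_uptodate(folio);
    }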
-
Ritesh Harjani (IBM) authored
struct iomap_page actually tracks per-block state of a folio, hence it makes sense to rename some of these function names and data structures, e.g.:

1. struct iomap_page (iop) -> struct iomap_folio_state (ifs)
2. iomap_page_create() -> ifs_alloc()
3. iomap_page_release() -> ifs_free()
4. iomap_iop_set_range_uptodate() -> ifs_set_range_uptodate()
5. to_iomap_page() -> folio->private

Since later patches are also going to add per-block dirty state tracking to iomap_folio_state, this patch also renames the "uptodate" and "uptodate_lock" members of iomap_folio_state to "state" and "state_lock". We don't really need the to_iomap_page() function; instead, directly open code it as folio->private.
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
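After the rename, the structure looks roughly like this (a sketch abridged from fs/iomap/buffered-io.c at this point in the series):

    /*
     * Structure allocated for each folio to track per-block uptodate
     * state; later patches add per-block dirty state to the same bitmap.
     */
    struct iomap_folio_state {
        atomic_t        read_bytes_pending;
        atomic_t        write_bytes_pending;
        spinlock_t      state_lock;
        unsigned long   state[];
    };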
-
- Jul 24, 2023
-
-
Matthew Wilcox (Oracle) authored
If we have a large folio, we can copy in larger chunks than PAGE_SIZE. Start at the maximum page cache size and shrink by half every time we hit the "we are short on memory" problem.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
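The shape of the loop, abridged from iomap_write_iter() as merged (folio lookup, clamping and error handling elided):

    size_t chunk = PAGE_SIZE << MAX_PAGECACHE_ORDER;

    do {
        size_t offset = pos & (chunk - 1);
        size_t bytes = min(chunk - offset, iov_iter_count(i));
        size_t copied;

        /* ... get a folio, clamp bytes to folio_size() ... */
        copied = copy_folio_from_iter_atomic(folio, offset, bytes, i);

        if (copied == 0) {
            /* Short on memory: retry with a smaller target chunk. */
            if (chunk > PAGE_SIZE)
                chunk /= 2;
        }
        /* ... advance pos/written on success ... */
    } while (iov_iter_count(i));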
-
Matthew Wilcox (Oracle) authored
Use the size of the write as a hint for the size of the folio to create.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
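How the hint is passed, modeled on iomap_get_folio() after this change; fgf_set_order() encodes the preferred folio order into the FGP flags:

    struct folio *iomap_get_folio(struct iomap_iter *iter, loff_t pos,
            size_t len)
    {
        fgf_t fgp = FGP_WRITEBEGIN | FGP_NOFS;

        if (iter->flags & IOMAP_NOWAIT)
            fgp |= FGP_NOWAIT;
        /* Hint: ask for a folio large enough to hold the whole write. */
        fgp |= fgf_set_order(len);

        return __filemap_get_folio(iter->inode->i_mapping,
                pos >> PAGE_SHIFT, fgp,
                mapping_gfp_mask(iter->inode->i_mapping));
    }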
-
Matthew Wilcox (Oracle) authored
Similarly to gfp_t, define fgf_t as its own type to prevent various misuses and confusion. Leave the flags as FGP_* for now to reduce the size of this patch; they will be converted to FGF_* later. Move the documentation to the definition of the type instead of burying it in the __filemap_get_folio() documentation.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Kent Overstreet <kent.overstreet@linux.dev>
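The type and a few representative flags, as in include/linux/pagemap.h (abridged). The __bitwise annotation lets sparse flag any mixing of fgf_t with plain integers or gfp_t:

    typedef unsigned int __bitwise fgf_t;

    #define FGP_ACCESSED    ((__force fgf_t)0x00000001)
    #define FGP_LOCK        ((__force fgf_t)0x00000002)
    #define FGP_CREAT       ((__force fgf_t)0x00000004)
    #define FGP_WRITE       ((__force fgf_t)0x00000008)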
-
Matthew Wilcox (Oracle) authored
The check for the folio being under writeback is unnecessary; the caller has checked this and the folio is locked, so the folio cannot be under writeback at this point. The comment is somewhat misleading in that it talks about one specific situation in which we can see a dirty folio. There are others, so change the comment to explain why we can't release the iomap_page.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
-
Matthew Wilcox (Oracle) authored
We do not need to release the iomap_page in iomap_invalidate_folio() to allow the folio to be split. The splitting code will call ->release_folio() if there is still per-fs private data attached to the folio. At that point, we will check if the folio is still dirty and decline to release the iomap_page. It is possible to trigger the warning in perfectly legitimate circumstances (eg if a disk read fails, we do a partial write to the folio, then we truncate the folio), which will cause those writes to be lost.
Fixes: 60d82310 ("iomap: Support large folios in invalidatepage")
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
-
- Jul 17, 2023
-
-
Christoph Hellwig authored
We have the new value for ki_pos right at hand in iter.pos, so assign that instead of recalculating it from ret.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Ritesh Harjani (IBM) <ritesh.harjani@gmail.com>
-
Christoph Hellwig authored
When a write* call writes some data it should return the amount of data written, not the error code that caused it to stop. Fix a recent regression in iomap_file_buffered_write that caused it to return the errno instead.
Fixes: 219580ee ("iomap: update ki_pos in iomap_file_buffered_write")
Reported-by: kernel test robot <oliver.sang@intel.com>
Reported-by: Cyril Hrubis <chrubis@suse.cz>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Ritesh Harjani (IBM) <ritesh.harjani@gmail.com>
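The fixed tail of iomap_file_buffered_write(), combining this fix with the ki_pos assignment from the previous entry (a sketch of the merged logic):

    while ((ret = iomap_iter(&iter, ops)) > 0)
        iter.processed = iomap_write_iter(&iter, i);

    if (unlikely(iter.pos == iocb->ki_pos))
        return ret;                     /* nothing written: report the error */
    ret = iter.pos - iocb->ki_pos;      /* bytes actually written */
    iocb->ki_pos = iter.pos;            /* new position straight from iter.pos */
    return ret;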
-
- Jun 29, 2023
-
-
Lu Hongfei authored
fs: iomap: Change the type of blocksize from 'int' to 'unsigned int' in iomap_file_buffered_write_punch_delalloc

The return value type of i_blocksize() is 'unsigned int', so the type of blocksize has been changed from 'int' to 'unsigned int' to ensure data type consistency.
Signed-off-by: Lu Hongfei <luhongfei@vivo.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
-
- Jun 09, 2023
-
-
Christoph Hellwig authored
Use the common helpers for direct I/O page invalidation instead of open coding the logic. This leads to a slight reordering of checks in __iomap_dio_rw to keep the logic straight.
Link: https://lkml.kernel.org/r/20230601145904.1385409-9-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andreas Gruenbacher <agruenba@redhat.com>
Cc: Anna Schumaker <anna@kernel.org>
Cc: Chao Yu <chao@kernel.org>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Ilya Dryomov <idryomov@gmail.com>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Cc: Miklos Szeredi <mszeredi@redhat.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: Xiubo Li <xiubli@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Christoph Hellwig authored
All callers of iomap_file_buffered_write need to update ki_pos, so move that into the common code.
Link: https://lkml.kernel.org/r/20230601145904.1385409-8-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Andreas Gruenbacher <agruenba@redhat.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Acked-by: Damien Le Moal <dlemoal@kernel.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Anna Schumaker <anna@kernel.org>
Cc: Chao Yu <chao@kernel.org>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Ilya Dryomov <idryomov@gmail.com>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Cc: Miklos Szeredi <mszeredi@redhat.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: Xiubo Li <xiubli@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Christoph Hellwig authored
Add a helper to invalidate page cache after a dio write.
Link: https://lkml.kernel.org/r/20230601145904.1385409-7-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Acked-by: Darrick J. Wong <djwong@kernel.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andreas Gruenbacher <agruenba@redhat.com>
Cc: Anna Schumaker <anna@kernel.org>
Cc: Chao Yu <chao@kernel.org>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Ilya Dryomov <idryomov@gmail.com>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Cc: Miklos Szeredi <mszeredi@redhat.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: Xiubo Li <xiubli@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Christoph Hellwig authored
Move the ki_pos update down a bit to prepare for a better common helper that invalidates pages based on an iocb.
Link: https://lkml.kernel.org/r/20230601145904.1385409-3-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andreas Gruenbacher <agruenba@redhat.com>
Cc: Anna Schumaker <anna@kernel.org>
Cc: Chao Yu <chao@kernel.org>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Ilya Dryomov <idryomov@gmail.com>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Cc: Miklos Szeredi <mszeredi@redhat.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: Xiubo Li <xiubli@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
- Jun 01, 2023
-
-
Johannes Thumshirn authored
When the iomap buffered-io code can't add a folio to a bio, it allocates a new bio and adds the folio to that one. This is done using bio_add_folio(), but doesn't check for errors. As adding a folio to a newly created bio can't fail, use the newly introduced bio_add_folio_nofail() function.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Link: https://lore.kernel.org/r/58fa893c24c67340a63323f09a179fefdca07f2a.1685532726.git.johannes.thumshirn@wdc.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
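An illustrative sketch of the pattern (not the literal iomap hunk; the bdev, len and offset variables are stand-ins): when adding to the current bio fails because it is full, a fresh bio is allocated and the _nofail variant documents that the second add cannot fail.

    if (!bio || !bio_add_folio(bio, folio, len, offset)) {
        if (bio)
            submit_bio(bio);
        bio = bio_alloc(bdev, bio_max_segs(1), REQ_OP_READ, GFP_NOFS);
        /* A freshly allocated bio always has room for one folio. */
        bio_add_folio_nofail(bio, folio, len, offset);
    }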
-
- May 24, 2023
-
-
Christoph Hellwig authored
Replace BIO_NO_PAGE_REF with a BIO_PAGE_REFFED flag that has the inverted meaning: it is only set when a page reference has been acquired that needs to be released by bio_release_pages().
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
cc: Al Viro <viro@zeniv.linux.org.uk>
cc: Jens Axboe <axboe@kernel.dk>
cc: Jan Kara <jack@suse.cz>
cc: Matthew Wilcox <willy@infradead.org>
cc: Logan Gunthorpe <logang@deltatee.com>
cc: linux-block@vger.kernel.org
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20230522205744.2825689-4-dhowells@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
David Howells authored
ZERO_PAGE can't go away, no need to hold an extra reference.
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
cc: Al Viro <viro@zeniv.linux.org.uk>
cc: linux-fsdevel@vger.kernel.org
Reviewed-by: Christian Brauner <brauner@kernel.org>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20230522205744.2825689-2-dhowells@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- Apr 21, 2023
-
-
Ritesh Harjani (IBM) authored
Add trace_iomap_dio_rw_begin, trace_iomap_dio_rw_queued and trace_iomap_dio_complete tracepoints. trace_iomap_dio_rw_queued mostly only tells us that the request was queued and -EIOCBQUEUED was returned; it is trace_iomap_dio_rw_begin and trace_iomap_dio_complete which have all the details.

<example output log>
    a.out-2073 [006] 134.225717: iomap_dio_rw_begin: dev 7:7 ino 0xe size 0x0 offset 0x0 length 0x1000 done_before 0x0 flags DIRECT|WRITE dio_flags DIO_FORCE_WAIT aio 1
    a.out-2073 [006] 134.226234: iomap_dio_complete: dev 7:7 ino 0xe size 0x1000 offset 0x1000 flags DIRECT|WRITE aio 1 error 0 ret 4096
    a.out-2074 [006] 136.225975: iomap_dio_rw_begin: dev 7:7 ino 0xe size 0x1000 offset 0x0 length 0x1000 done_before 0x0 flags DIRECT dio_flags aio 1
    a.out-2074 [006] 136.226173: iomap_dio_rw_queued: dev 7:7 ino 0xe size 0x1000 offset 0x1000 length 0x0
    ksoftirqd/3-31 [003] 136.226389: iomap_dio_complete: dev 7:7 ino 0xe size 0x1000 offset 0x1000 flags DIRECT aio 1 error 0 ret 4096
    a.out-2075 [003] 141.674969: iomap_dio_rw_begin: dev 7:7 ino 0xe size 0x1000 offset 0x0 length 0x1000 done_before 0x0 flags DIRECT|WRITE dio_flags aio 1
    a.out-2075 [003] 141.676085: iomap_dio_rw_queued: dev 7:7 ino 0xe size 0x1000 offset 0x1000 length 0x0
    kworker/2:0-27 [002] 141.676432: iomap_dio_complete: dev 7:7 ino 0xe size 0x1000 offset 0x1000 flags DIRECT|WRITE aio 1 error 0 ret 4096
    a.out-2077 [006] 143.443746: iomap_dio_rw_begin: dev 7:7 ino 0xe size 0x1000 offset 0x0 length 0x1000 done_before 0x0 flags DIRECT dio_flags aio 1
    a.out-2077 [006] 143.443866: iomap_dio_rw_queued: dev 7:7 ino 0xe size 0x1000 offset 0x1000 length 0x0
    ksoftirqd/5-41 [005] 143.444134: iomap_dio_complete: dev 7:7 ino 0xe size 0x1000 offset 0x1000 flags DIRECT aio 1 error 0 ret 4096
    a.out-2078 [007] 146.716833: iomap_dio_rw_begin: dev 7:7 ino 0xe size 0x1000 offset 0x0 length 0x1000 done_before 0x0 flags DIRECT dio_flags aio 0
    a.out-2078 [007] 146.717639: iomap_dio_complete: dev 7:7 ino 0xe size 0x1000 offset 0x1000 flags DIRECT aio 0 error 0 ret 4096
    a.out-2079 [006] 148.972605: iomap_dio_rw_begin: dev 7:7 ino 0xe size 0x1000 offset 0x0 length 0x1000 done_before 0x0 flags DIRECT dio_flags aio 0
    a.out-2079 [006] 148.973099: iomap_dio_complete: dev 7:7 ino 0xe size 0x1000 offset 0x1000 flags DIRECT aio 0 error 0 ret 4096

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
[djwong: line up strings all prettylike]
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
-
Ritesh Harjani (IBM) authored
IOMAP_DIO_NOSYNC was earlier added for use in btrfs, but it seems it is not useful for aio dsync writes anyway. For the aio dsync case, we queue the request and return -EIOCBQUEUED. Now, since IOMAP_DIO_NOSYNC doesn't let iomap_dio_complete() call generic_write_sync(), we may lose the sync write. Hence kill this flag as it is not in use by any FS now.
Tested-by: Disha Goel <disgoel@linux.ibm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
-
- Apr 06, 2023
-
-
Matthew Wilcox authored
This particular combination of flags is used by most filesystems in their ->write_begin method, although it does find use in a few other places. Before folios, it warranted its own function (grab_cache_page_write_begin()), but I think that just having specialised flags is enough. It certainly helps the few places that have been converted from grab_cache_page_write_begin() to __filemap_get_folio().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Link: https://lore.kernel.org/r/20230324180129.1220691-2-willy@infradead.org
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
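The combination, as defined in include/linux/pagemap.h:

    #define FGP_WRITEBEGIN  (FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE)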
-
Christoph Hellwig authored
Instead of returning NULL for all errors, distinguish between:

- no entry found and not asked to allocate (-ENOENT)
- failed to allocate memory (-ENOMEM)
- would block (-EAGAIN)

so that callers don't have to guess the error based on the passed-in flags. Also pass the error through to the direct callers: filemap_get_folio, filemap_lock_folio, filemap_grab_folio and filemap_get_incore_folio.

[hch@lst.de: fix null-pointer deref]
Link: https://lkml.kernel.org/r/20230310070023.GA13563@lst.de
Link: https://lkml.kernel.org/r/20230310043137.GA1624890@u2004
Link: https://lkml.kernel.org/r/20230307143410.28031-8-hch@lst.de
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Ryusuke Konishi <konishi.ryusuke@gmail.com> [nilfs2]
Cc: Andreas Gruenbacher <agruenba@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Naoya Horiguchi <naoya.horiguchi@linux.dev>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
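After this change, callers use the usual ERR_PTR idiom instead of testing for NULL (a sketch of the calling convention):

    struct folio *folio;

    folio = filemap_get_folio(mapping, index);
    if (IS_ERR(folio)) {
        /* -ENOENT: not in cache; __filemap_get_folio() can also
         * return -ENOMEM or -EAGAIN depending on the FGP flags. */
        return PTR_ERR(folio);
    }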
-
- Feb 15, 2023
-
-
Christoph Hellwig authored
No users left now that btrfs takes REQ_OP_WRITE bios from iomap and splits and converts them to REQ_OP_ZONE_APPEND internally.
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: David Sterba <dsterba@suse.com>
-
- Feb 03, 2023
-
-
Matthew Wilcox (Oracle) authored
Patch series "Convert writepage_t to use a folio". More folioisation. I split out the mpage work from everything else because it completely dominated the patch, but some implementations I just converted outright. This patch (of 2): We always write back an entire folio, but that's currently passed as the head page. Convert all filesystems that use write_cache_pages() to expect a folio instead of a page. Link: https://lkml.kernel.org/r/20230126201255.1681189-1-willy@infradead.org Link: https://lkml.kernel.org/r/20230126201255.1681189-2-willy@infradead.org Signed-off-by:
Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Christoph Hellwig <hch@infradead.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
- Jan 18, 2023
-
-
Andreas Gruenbacher authored
The operations in struct page_ops all operate on folios, so rename struct page_ops to struct folio_ops.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
[djwong: port around not removing iomap_valid]
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
-
Andreas Gruenbacher authored
The ->page_prepare() handler in struct iomap_page_ops is now somewhat misnamed, so rename it to ->get_folio().
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
-
Andreas Gruenbacher authored
Add an __iomap_get_folio() helper as the counterpart of the existing __iomap_put_folio() helper. Use the new helper in iomap_write_begin(). Not a functional change.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
-
Andreas Gruenbacher authored
Change the iomap ->page_prepare() handler to get and return a locked folio instead of doing that in iomap_write_begin(). This allows recovery from out-of-memory situations in ->page_prepare(), which eliminates the corresponding error handling code in iomap_write_begin(). The ->put_folio() handler now also isn't called with NULL as the folio value anymore. Filesystems are expected to use the iomap_get_folio() helper for getting locked folios in their ->page_prepare() handlers.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
-
Andreas Gruenbacher authored
Add an iomap_get_folio() helper that gets a folio reference based on an iomap iterator and an offset into the address space. Use it in iomap_write_begin().
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
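The helper roughly as introduced (a sketch; a later patch in this log adds a length argument for large-folio hints, and __filemap_get_folio() at this point still returned NULL on failure):

    struct folio *iomap_get_folio(struct iomap_iter *iter, loff_t pos)
    {
        unsigned fgp = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE |
                       FGP_NOFS;
        struct folio *folio;

        if (iter->flags & IOMAP_NOWAIT)
            fgp |= FGP_NOWAIT;

        folio = __filemap_get_folio(iter->inode->i_mapping,
                pos >> PAGE_SHIFT, fgp,
                mapping_gfp_mask(iter->inode->i_mapping));
        if (!folio)
            return ERR_PTR(-ENOMEM);
        return folio;
    }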
-
Andreas Gruenbacher authored
The ->page_done() handler in struct iomap_page_ops is now somewhat misnamed in that it mainly deals with unlocking and putting a folio, so rename it to ->put_folio().
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
-
Andreas Gruenbacher authored
When an iomap defines a ->page_done() handler in its page_ops, delegate unlocking the folio and putting the folio reference to that handler. This allows fixing a race between journaled data writes and folio writeback in gfs2: before this change, gfs2_iomap_page_done() was called after unlocking the folio, so writeback could start writing back the folio's buffers before they could be marked for writing to the journal. Also, try_to_free_buffers() could free the buffers before gfs2_iomap_page_done() was done adding the buffers to the current transaction. With this change, gfs2_iomap_page_done() adds the buffers to the current transaction while the folio is still locked, so the problems described above can no longer occur. The only current user of ->page_done() is gfs2, so other filesystems are not affected. To catch out any out-of-tree users, switch from a page to a folio in ->page_done().
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
-
Andreas Gruenbacher authored
Add an __iomap_put_folio() helper to encapsulate unlocking the folio, calling ->page_done(), and putting the folio. Use the new helper in iomap_write_begin() and iomap_write_end(). This effectively doesn't change the way the code works, but prepares for successive improvements.
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
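The helper in outline (a sketch; within this series the callback was still called ->page_done(), and its argument types changed as the series progressed):

    static void __iomap_put_folio(struct iomap_iter *iter, loff_t pos,
            size_t ret, struct folio *folio)
    {
        const struct iomap_page_ops *page_ops = iter->iomap.page_ops;

        if (folio)
            folio_unlock(folio);
        if (page_ops && page_ops->page_done)
            page_ops->page_done(iter->inode, pos, ret, folio);
        if (folio)
            folio_put(folio);
    }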
-
- Nov 28, 2022
-
-
Dave Chinner authored
A recent multithreaded write data corruption has been uncovered in the iomap write code. The core of the problem is that partial folio writes can be flushed to disk while a new racing write can map it and fill the rest of the page:

    writeback                       new write

    allocate blocks
      blocks are unwritten
    submit IO
    .....
                                    map blocks
                                    iomap indicates UNWRITTEN range
                                    loop {
                                      lock folio
                                      copyin data
    .....
    IO completes
      runs unwritten extent conv
      blocks are marked written
                                      <iomap now stale>
                                      get next folio
                                    }

Now add memory pressure such that memory reclaim evicts the partially written folio that has already been written to disk. When the new write finally gets to the last partial page of the new write, it does not find it in cache, so it instantiates a new page, sees the iomap is unwritten, and zeros the part of the page that it does not have data from. This overwrites the data on disk that was originally written.

The full description of the corruption mechanism can be found here:
https://lore.kernel.org/linux-xfs/20220817093627.GZ3600936@dread.disaster.area/

To solve this problem, we need to check whether the iomap is still valid after we lock each folio during the write. We have to do it after we lock the page so that we don't end up with state changes occurring while we wait for the folio to be locked. Hence we need a mechanism to be able to check that the cached iomap is still valid (similar to what we already do in buffered writeback), and we need a way for ->begin_write to back out and tell the high level iomap iterator that we need to remap the remaining write range.

The iomap needs to grow some storage for the validity cookie that the filesystem provides to travel with the iomap. XFS, in particular, also needs to know some more information about what the iomap maps (attribute extents rather than file data extents) for the validity cookie to cover all the types of iomaps we might need to validate.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
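The mechanism in outline (a sketch; field and hook names as merged around this series): the filesystem stamps the mapping with an opaque cookie when it creates the iomap, and the write path asks the filesystem to revalidate it once the folio is locked.

    struct iomap {
        /* ... */
        u64 validity_cookie;    /* filesystem-private generation */
        const struct iomap_page_ops *page_ops;
    };

    struct iomap_page_ops {
        /* ... */
        /*
         * Called with the folio locked; return false if the cached
         * iomap has gone stale and the range must be remapped.
         */
        bool (*iomap_valid)(struct inode *inode,
                            const struct iomap *iomap);
    };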
-
Dave Chinner authored
iomap_file_buffered_write_punch_delalloc() currently invalidates the page cache over the unused range of the delalloc extent that was allocated. While the write allocated the delalloc extent, it does not own it exclusively as the write does not hold any locks that prevent either writeback or mmap page faults from changing the state of either the page cache or the extent state backing this range.

Whilst xfs_bmap_punch_delalloc_range() already handles races in extent conversion - it will only punch out delalloc extents and it ignores any other type of extent - the page cache truncate does not discriminate between data written by this write or some other task. As a result, truncating the page cache can result in data corruption if the write races with mmap modifications to the file over the same range.

generic/346 exercises this workload, and if we randomly fail writes (as will happen when iomap gets stale iomap detection later in the patchset), it will randomly corrupt the file data because it removes data written by mmap() in the same page as the write() that failed.

Hence we do not want to punch out the page cache over the range of the extent we failed to write to - what we actually need to do is detect the ranges that have dirty data in cache over them and *not punch them out*.

To do this, we have to walk the page cache over the range of the delalloc extent we want to remove. This is made complex by the fact we have to handle partially up-to-date folios correctly and this can happen even when the FSB size == PAGE_SIZE because we now support multi-page folios in the page cache.

Because we are only interested in discovering the edges of data ranges in the page cache (i.e. hole-data boundaries) we can make use of mapping_seek_hole_data() to find those transitions in the page cache. As we hold the invalidate_lock, we know that the boundaries are not going to change while we walk the range. This interface is also byte-based and is sub-page block aware, so we can find the data ranges in the cache based on byte offsets rather than page, folio or fs block sized chunks. This greatly simplifies the logic of finding dirty cached ranges in the page cache.

Once we've identified a range that contains cached data, we can then iterate the range folio by folio. This allows us to determine if the data is dirty and hence perform the correct delalloc extent punching operations. The seek interface we use to iterate data ranges will give us sub-folio start/end granularity, so we may end up looking up the same folio multiple times as the seek interface iterates across each discontiguous data region in the folio.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
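The cache walk in outline (a sketch, abridged from the release/scan loop this patch adds):

    while (start_byte < scan_end_byte) {
        loff_t data_end;

        /* Find the start of the next data range in the page cache... */
        start_byte = mapping_seek_hole_data(inode->i_mapping,
                start_byte, scan_end_byte, SEEK_DATA);
        if (start_byte == -ENXIO || start_byte == scan_end_byte)
            break;          /* no more data; punch out the rest */

        /* ...and where that data ends. */
        data_end = mapping_seek_hole_data(inode->i_mapping,
                start_byte, scan_end_byte, SEEK_HOLE);

        /* Walk [start_byte, data_end) folio by folio, punching
         * delalloc blocks only over ranges that are not dirty. */
        error = iomap_write_delalloc_scan(inode, &punch_start_byte,
                start_byte, data_end, punch);
        if (error)
            break;

        start_byte = data_end;
    }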
-
- Nov 23, 2022
-
-
Dave Chinner authored
Because that's what Christoph wants for this error handling path only XFS uses. It requires a new iomap export for handling errors over delalloc ranges. This is basically the XFS code as it stands, but even though Christoph wants this as iomap functionality, we still have to call it from the filesystem-specific ->iomap_end callback, and call into the iomap code with yet another filesystem-specific callback to punch the delalloc extent within the defined ranges.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
-