Commit c5d9ab85 authored by Linus Torvalds

Merge tag 'f2fs-for-6.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs

Pull f2fs update from Jaegeuk Kim:
 "In this round, there are a number of updates on mainly two areas:
  Zoned block device support and Per-file compression. For example,
  we've found several issues to support Zoned block device especially
  having large sections regarding to GC and file pinning used for
  Android devices. In compression side, we've fixed many corner race
  conditions that had broken the design assumption.

  Enhancements:
   - Support file pinning on zoned block devices with large sections
   - Enhance data recovery after a sudden power cut on zoned block
     devices
   - Add more error injection cases to more easily detect kernel panics
   - Add a proc entry to show the entire disk layout
   - Improve various error paths that panicked via BUG_ON in block
     allocation and GC
   - Support SEEK_DATA and SEEK_HOLE for compressed files

  Bug fixes:
   - Avoid a use-after-free issue in f2fs_filemap_fault
   - Fix some race conditions that broke the atomic write design
     assumptions
   - Fix to truncate meta inode pages forcibly
   - Resolve various per-file compression issues w.r.t. space
     management and compression policies
   - Fix some swap-related bugs

  In addition, we removed deprecated code such as io_bits and
  heap-style allocation, and also fixed minor error-handling routines
  with neater debugging messages"

* tag 'f2fs-for-6.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (60 commits)
  f2fs: fix to avoid use-after-free issue in f2fs_filemap_fault
  f2fs: truncate page cache before clearing flags when aborting atomic write
  f2fs: mark inode dirty for FI_ATOMIC_COMMITTED flag
  f2fs: prevent atomic write on pinned file
  f2fs: fix to handle error paths of {new,change}_curseg()
  f2fs: unify the error handling of f2fs_is_valid_blkaddr
  f2fs: zone: fix to remove pow2 check condition for zoned block device
  f2fs: fix to truncate meta inode pages forcely
  f2fs: compress: fix reserve_cblocks counting error when out of space
  f2fs: compress: relocate some judgments in f2fs_reserve_compress_blocks
  f2fs: add a proc entry show disk layout
  f2fs: introduce SEGS_TO_BLKS/BLKS_TO_SEGS for cleanup
  f2fs: fix to check return value of f2fs_gc_range
  f2fs: fix to check return value __allocate_new_segment
  f2fs: fix to do sanity check in update_sit_entry
  f2fs: fix to reset fields for unloaded curseg
  f2fs: clean up new_curseg()
  f2fs: relocate f2fs_precache_extents() in f2fs_swap_activate()
  f2fs: fix blkofs_end correctly in f2fs_migrate_blocks()
  f2fs: ro: don't start discard thread for readonly image
  ...
parents 0d7ca657 eb70d5a6
--- a/Documentation/ABI/testing/sysfs-fs-f2fs
+++ b/Documentation/ABI/testing/sysfs-fs-f2fs
@@ -205,7 +205,7 @@ Description:	Controls the idle timing of system, if there is no FS operation
 What:		/sys/fs/f2fs/<disk>/discard_idle_interval
 Date:		September 2018
 Contact:	"Chao Yu" <yuchao0@huawei.com>
-Contact:	"Sahitya Tummala" <stummala@codeaurora.org>
+Contact:	"Sahitya Tummala" <quic_stummala@quicinc.com>
 Description:	Controls the idle timing of discard thread given
 		this time interval.
 		Default is 5 secs.
@@ -213,7 +213,7 @@ Description:	Controls the idle timing of discard thread given
 What:		/sys/fs/f2fs/<disk>/gc_idle_interval
 Date:		September 2018
 Contact:	"Chao Yu" <yuchao0@huawei.com>
-Contact:	"Sahitya Tummala" <stummala@codeaurora.org>
+Contact:	"Sahitya Tummala" <quic_stummala@quicinc.com>
 Description:	Controls the idle timing for gc path. Set to 5 seconds by default.
 
 What:		/sys/fs/f2fs/<disk>/iostat_enable
@@ -701,9 +701,9 @@ Description:	Support configuring fault injection type, should be
 		enabled with fault_injection option, fault type value
 		is shown below, it supports single or combined type.
 
-		===================      ===========
-		Type_Name                Type_Value
-		===================      ===========
+		===========================      ===========
+		Type_Name                        Type_Value
+		===========================      ===========
 		FAULT_KMALLOC            0x000000001
 		FAULT_KVMALLOC           0x000000002
 		FAULT_PAGE_ALLOC         0x000000004
@@ -722,8 +722,10 @@ Description:	Support configuring fault injection type, should be
 		FAULT_SLAB_ALLOC         0x000008000
 		FAULT_DQUOT_INIT         0x000010000
 		FAULT_LOCK_OP            0x000020000
-		FAULT_BLKADDR            0x000040000
-		===================      ===========
+		FAULT_BLKADDR_VALIDITY   0x000040000
+		FAULT_BLKADDR_CONSISTENCE 0x000080000
+		FAULT_NO_SEGMENT         0x000100000
+		===========================      ===========
 
 What:		/sys/fs/f2fs/<disk>/discard_io_aware_gran
 Date:		January 2023
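[Editor's note] The fault type values in the table above combine as a bitmask. A minimal user-space sketch of driving the knob, assuming the sysfs attribute is named inject_type (paired with inject_rate) and the kernel is built with CONFIG_F2FS_FAULT_INJECTION; the device name is hypothetical:

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	/* Values copied from the table above. */
	#define FAULT_BLKADDR_VALIDITY		0x000040000UL
	#define FAULT_BLKADDR_CONSISTENCE	0x000080000UL
	#define FAULT_NO_SEGMENT		0x000100000UL

	int main(void)
	{
		/* Combined type: exercise both new block-address checks
		 * plus the new no-free-segment path. */
		unsigned long mask = FAULT_BLKADDR_VALIDITY |
				     FAULT_BLKADDR_CONSISTENCE |
				     FAULT_NO_SEGMENT;
		char buf[32];
		/* "sda1" is a placeholder for the mounted f2fs device. */
		int fd = open("/sys/fs/f2fs/sda1/inject_type", O_WRONLY);

		if (fd < 0) {
			perror("open");
			return 1;
		}
		snprintf(buf, sizeof(buf), "0x%lx", mask);
		if (write(fd, buf, strlen(buf)) < 0)
			perror("write");
		close(fd);
		return 0;
	}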
--- a/Documentation/filesystems/f2fs.rst
+++ b/Documentation/filesystems/f2fs.rst
@@ -126,9 +126,7 @@ norecovery		 Disable the roll-forward recovery routine, mounted read-
 discard/nodiscard	 Enable/disable real-time discard in f2fs, if discard is
			 enabled, f2fs will issue discard/TRIM commands when a
			 segment is cleaned.
-no_heap			 Disable heap-style segment allocation which finds free
-			 segments for data from the beginning of main area, while
-			 for node from the end of main area.
+heap/no_heap		 Deprecated.
 nouser_xattr		 Disable Extended User Attributes. Note: xattr is enabled
			 by default if CONFIG_F2FS_FS_XATTR is selected.
 noacl			 Disable POSIX Access Control List. Note: acl is enabled
@@ -184,9 +182,9 @@ fault_type=%d		 Support configuring fault injection type, should be
			 enabled with fault_injection option, fault type value
			 is shown below, it supports single or combined type.
 
-			 ===================      ===========
-			 Type_Name                Type_Value
-			 ===================      ===========
+			 ===========================      ===========
+			 Type_Name                        Type_Value
+			 ===========================      ===========
			 FAULT_KMALLOC            0x000000001
			 FAULT_KVMALLOC           0x000000002
			 FAULT_PAGE_ALLOC         0x000000004
@@ -205,8 +203,10 @@ fault_type=%d		 Support configuring fault injection type, should be
			 FAULT_SLAB_ALLOC         0x000008000
			 FAULT_DQUOT_INIT         0x000010000
			 FAULT_LOCK_OP            0x000020000
-			 FAULT_BLKADDR            0x000040000
-			 ===================      ===========
+			 FAULT_BLKADDR_VALIDITY   0x000040000
+			 FAULT_BLKADDR_CONSISTENCE 0x000080000
+			 FAULT_NO_SEGMENT         0x000100000
+			 ===========================      ===========
 mode=%s			 Control block allocation mode which supports "adaptive"
			 and "lfs". In "lfs" mode, there should be no random
			 writes towards main area.
@@ -228,8 +228,6 @@ mode=%s			 Control block allocation mode which supports "adaptive"
			 option for more randomness.
			 Please, use these options for your experiments and we strongly
			 recommend to re-format the filesystem after using these options.
-io_bits=%u		 Set the bit size of write IO requests. It should be set
-			 with "mode=lfs".
 usrquota		 Enable plain user disk quota accounting.
 grpquota		 Enable plain group disk quota accounting.
 prjquota		 Enable plain project quota accounting.
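[Editor's note] The same mask can also be set at mount time via the fault_injection and fault_type options documented above. A minimal sketch, assuming a kernel built with CONFIG_F2FS_FAULT_INJECTION; the device and mount point are hypothetical, and fault_injection=1 means the configured faults fire on every eligible call:

	#include <stdio.h>
	#include <sys/mount.h>

	int main(void)
	{
		/* Restrict injection to the two new block-address fault
		 * types: FAULT_BLKADDR_VALIDITY (0x40000) |
		 * FAULT_BLKADDR_CONSISTENCE (0x80000) = 0xc0000. */
		const char *opts = "fault_injection=1,fault_type=0xc0000";

		if (mount("/dev/loop0", "/mnt/f2fs", "f2fs", 0, opts)) {
			perror("mount");
			return 1;
		}
		return 0;
	}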
--- a/fs/f2fs/checkpoint.c
+++ b/fs/f2fs/checkpoint.c
@@ -154,49 +154,47 @@ static bool __is_bitmap_valid(struct f2fs_sb_info *sbi, block_t blkaddr,
 	if (unlikely(f2fs_cp_error(sbi)))
 		return exist;
 
-	if (exist && type == DATA_GENERIC_ENHANCE_UPDATE) {
-		f2fs_err(sbi, "Inconsistent error blkaddr:%u, sit bitmap:%d",
-			 blkaddr, exist);
-		set_sbi_flag(sbi, SBI_NEED_FSCK);
-		return exist;
-	}
+	if ((exist && type == DATA_GENERIC_ENHANCE_UPDATE) ||
+		(!exist && type == DATA_GENERIC_ENHANCE))
+		goto out_err;
+	if (!exist && type != DATA_GENERIC_ENHANCE_UPDATE)
+		goto out_handle;
+	return exist;
 
-	if (!exist && type == DATA_GENERIC_ENHANCE) {
-		f2fs_err(sbi, "Inconsistent error blkaddr:%u, sit bitmap:%d",
-			 blkaddr, exist);
-		set_sbi_flag(sbi, SBI_NEED_FSCK);
-		dump_stack();
-	}
+out_err:
+	f2fs_err(sbi, "Inconsistent error blkaddr:%u, sit bitmap:%d",
+		 blkaddr, exist);
+	set_sbi_flag(sbi, SBI_NEED_FSCK);
+	dump_stack();
+out_handle:
+	f2fs_handle_error(sbi, ERROR_INVALID_BLKADDR);
 	return exist;
 }
 
-bool f2fs_is_valid_blkaddr(struct f2fs_sb_info *sbi,
+static bool __f2fs_is_valid_blkaddr(struct f2fs_sb_info *sbi,
 					block_t blkaddr, int type)
 {
-	if (time_to_inject(sbi, FAULT_BLKADDR))
-		return false;
-
 	switch (type) {
 	case META_NAT:
 		break;
 	case META_SIT:
 		if (unlikely(blkaddr >= SIT_BLK_CNT(sbi)))
-			return false;
+			goto err;
 		break;
 	case META_SSA:
 		if (unlikely(blkaddr >= MAIN_BLKADDR(sbi) ||
 			blkaddr < SM_I(sbi)->ssa_blkaddr))
-			return false;
+			goto err;
 		break;
 	case META_CP:
 		if (unlikely(blkaddr >= SIT_I(sbi)->sit_base_addr ||
 			blkaddr < __start_cp_addr(sbi)))
-			return false;
+			goto err;
 		break;
 	case META_POR:
 		if (unlikely(blkaddr >= MAX_BLKADDR(sbi) ||
 			blkaddr < MAIN_BLKADDR(sbi)))
-			return false;
+			goto err;
 		break;
 	case DATA_GENERIC:
 	case DATA_GENERIC_ENHANCE:
@@ -213,7 +211,7 @@ bool f2fs_is_valid_blkaddr(struct f2fs_sb_info *sbi,
 				  blkaddr);
 			set_sbi_flag(sbi, SBI_NEED_FSCK);
 			dump_stack();
-			return false;
+			goto err;
 		} else {
 			return __is_bitmap_valid(sbi, blkaddr, type);
 		}
@@ -221,13 +219,30 @@ bool f2fs_is_valid_blkaddr(struct f2fs_sb_info *sbi,
 	case META_GENERIC:
 		if (unlikely(blkaddr < SEG0_BLKADDR(sbi) ||
 			blkaddr >= MAIN_BLKADDR(sbi)))
-			return false;
+			goto err;
 		break;
 	default:
 		BUG();
 	}
 
 	return true;
+err:
+	f2fs_handle_error(sbi, ERROR_INVALID_BLKADDR);
+	return false;
+}
+
+bool f2fs_is_valid_blkaddr(struct f2fs_sb_info *sbi,
+					block_t blkaddr, int type)
+{
+	if (time_to_inject(sbi, FAULT_BLKADDR_VALIDITY))
+		return false;
+
+	return __f2fs_is_valid_blkaddr(sbi, blkaddr, type);
+}
+
+bool f2fs_is_valid_blkaddr_raw(struct f2fs_sb_info *sbi,
+			block_t blkaddr, int type)
+{
+	return __f2fs_is_valid_blkaddr(sbi, blkaddr, type);
 }
 
 /*
@@ -889,7 +904,7 @@ static struct page *validate_checkpoint(struct f2fs_sb_info *sbi,
 	cp_blocks = le32_to_cpu(cp_block->cp_pack_total_block_count);
 
-	if (cp_blocks > sbi->blocks_per_seg || cp_blocks <= F2FS_CP_PACKS) {
+	if (cp_blocks > BLKS_PER_SEG(sbi) || cp_blocks <= F2FS_CP_PACKS) {
 		f2fs_warn(sbi, "invalid cp_pack_total_block_count:%u",
 			  le32_to_cpu(cp_block->cp_pack_total_block_count));
 		goto invalid_cp;
@@ -1324,7 +1339,7 @@ static void update_ckpt_flags(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	if (cpc->reason & CP_UMOUNT) {
 		if (le32_to_cpu(ckpt->cp_pack_total_block_count) +
-			NM_I(sbi)->nat_bits_blocks > sbi->blocks_per_seg) {
+			NM_I(sbi)->nat_bits_blocks > BLKS_PER_SEG(sbi)) {
 			clear_ckpt_flags(sbi, CP_NAT_BITS_FLAG);
 			f2fs_notice(sbi, "Disable nat_bits due to no space");
 		} else if (!is_set_ckpt_flags(sbi, CP_NAT_BITS_FLAG) &&
@@ -1527,7 +1542,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 		cp_ver |= ((__u64)crc32 << 32);
 		*(__le64 *)nm_i->nat_bits = cpu_to_le64(cp_ver);
 
-		blk = start_blk + sbi->blocks_per_seg - nm_i->nat_bits_blocks;
+		blk = start_blk + BLKS_PER_SEG(sbi) - nm_i->nat_bits_blocks;
 		for (i = 0; i < nm_i->nat_bits_blocks; i++)
 			f2fs_update_meta_page(sbi, nm_i->nat_bits +
 					(i << F2FS_BLKSIZE_BITS), blk + i);
@@ -1587,8 +1602,9 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 	 */
 	if (f2fs_sb_has_encrypt(sbi) || f2fs_sb_has_verity(sbi) ||
 		f2fs_sb_has_compression(sbi))
-		invalidate_mapping_pages(META_MAPPING(sbi),
-				MAIN_BLKADDR(sbi), MAX_BLKADDR(sbi) - 1);
+		f2fs_bug_on(sbi,
+			invalidate_inode_pages2_range(META_MAPPING(sbi),
+				MAIN_BLKADDR(sbi), MAX_BLKADDR(sbi) - 1));
 
 	f2fs_release_ino_entry(sbi, false);
@@ -1730,7 +1746,7 @@ void f2fs_init_ino_entry_info(struct f2fs_sb_info *sbi)
 		im->ino_num = 0;
 	}
 
-	sbi->max_orphans = (sbi->blocks_per_seg - F2FS_CP_PACKS -
+	sbi->max_orphans = (BLKS_PER_SEG(sbi) - F2FS_CP_PACKS -
 			NR_CURSEG_PERSIST_TYPE - __cp_payload(sbi)) *
 				F2FS_ORPHANS_PER_BLOCK;
 }
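[Editor's note] A quick sanity check on the max_orphans formula above: with the default 4 KiB blocks and 2 MiB segments, BLKS_PER_SEG() is 512, and assuming the usual constants (F2FS_CP_PACKS = 2, NR_CURSEG_PERSIST_TYPE = 6, F2FS_ORPHANS_PER_BLOCK = 1020) and no extra checkpoint payload, this works out to (512 - 2 - 6 - 0) * 1020 = 514,080 orphan inodes per checkpoint pack.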
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -512,8 +512,8 @@ static int lzorle_compress_pages(struct compress_ctx *cc)
 	ret = lzorle1x_1_compress(cc->rbuf, cc->rlen, cc->cbuf->cdata,
 					&cc->clen, cc->private);
 	if (ret != LZO_E_OK) {
-		printk_ratelimited("%sF2FS-fs (%s): lzo-rle compress failed, ret:%d\n",
-				KERN_ERR, F2FS_I_SB(cc->inode)->sb->s_id, ret);
+		f2fs_err_ratelimited(F2FS_I_SB(cc->inode),
+				"lzo-rle compress failed, ret:%d", ret);
 		return -EIO;
 	}
 	return 0;
@@ -780,9 +780,9 @@ void f2fs_decompress_cluster(struct decompress_io_ctx *dic, bool in_task)
 	if (provided != calculated) {
 		if (!is_inode_flag_set(dic->inode, FI_COMPRESS_CORRUPT)) {
 			set_inode_flag(dic->inode, FI_COMPRESS_CORRUPT);
-			printk_ratelimited(
-				"%sF2FS-fs (%s): checksum invalid, nid = %lu, %x vs %x",
-				KERN_INFO, sbi->sb->s_id, dic->inode->i_ino,
+			f2fs_info_ratelimited(sbi,
+				"checksum invalid, nid = %lu, %x vs %x",
+				dic->inode->i_ino,
 				provided, calculated);
 		}
 		set_sbi_flag(sbi, SBI_NEED_FSCK);
@@ -1418,6 +1418,8 @@ void f2fs_compress_write_end_io(struct bio *bio, struct page *page)
 	struct f2fs_sb_info *sbi = bio->bi_private;
 	struct compress_io_ctx *cic =
 			(struct compress_io_ctx *)page_private(page);
+	enum count_type type = WB_DATA_TYPE(page,
+				f2fs_is_compressed_page(page));
 	int i;
 
 	if (unlikely(bio->bi_status))
@@ -1425,7 +1427,7 @@ void f2fs_compress_write_end_io(struct bio *bio, struct page *page)
 
 	f2fs_compress_free_page(page);
-	dec_page_count(sbi, F2FS_WB_DATA);
+	dec_page_count(sbi, type);
 
 	if (atomic_dec_return(&cic->pending_pages))
 		return;
@@ -1441,12 +1443,14 @@ void f2fs_compress_write_end_io(struct bio *bio, struct page *page)
 }
 
 static int f2fs_write_raw_pages(struct compress_ctx *cc,
-					int *submitted,
+					int *submitted_p,
 					struct writeback_control *wbc,
 					enum iostat_type io_type)
 {
 	struct address_space *mapping = cc->inode->i_mapping;
-	int _submitted, compr_blocks, ret, i;
+	struct f2fs_sb_info *sbi = F2FS_M_SB(mapping);
+	int submitted, compr_blocks, i;
+	int ret = 0;
 
 	compr_blocks = f2fs_compressed_blocks(cc);
@@ -1461,6 +1465,10 @@ static int f2fs_write_raw_pages(struct compress_ctx *cc,
 	if (compr_blocks < 0)
 		return compr_blocks;
 
+	/* overwrite compressed cluster w/ normal cluster */
+	if (compr_blocks > 0)
+		f2fs_lock_op(sbi);
+
 	for (i = 0; i < cc->cluster_size; i++) {
 		if (!cc->rpages[i])
 			continue;
@@ -1485,7 +1493,7 @@ static int f2fs_write_raw_pages(struct compress_ctx *cc,
 		if (!clear_page_dirty_for_io(cc->rpages[i]))
 			goto continue_unlock;
 
-		ret = f2fs_write_single_data_page(cc->rpages[i], &_submitted,
+		ret = f2fs_write_single_data_page(cc->rpages[i], &submitted,
 						NULL, NULL, wbc, io_type,
 						compr_blocks, false);
 		if (ret) {
@@ -1493,26 +1501,29 @@ static int f2fs_write_raw_pages(struct compress_ctx *cc,
 				unlock_page(cc->rpages[i]);
 				ret = 0;
 			} else if (ret == -EAGAIN) {
+				ret = 0;
 				/*
 				 * for quota file, just redirty left pages to
 				 * avoid deadlock caused by cluster update race
 				 * from foreground operation.
 				 */
 				if (IS_NOQUOTA(cc->inode))
-					return 0;
-				ret = 0;
+					goto out;
 				f2fs_io_schedule_timeout(DEFAULT_IO_TIMEOUT);
 				goto retry_write;
 			}
-			return ret;
+			goto out;
 		}
 
-		*submitted += _submitted;
+		*submitted_p += submitted;
 	}
 
-	f2fs_balance_fs(F2FS_M_SB(mapping), true);
+out:
+	if (compr_blocks > 0)
+		f2fs_unlock_op(sbi);
 
-	return 0;
+	f2fs_balance_fs(sbi, true);
+
+	return ret;
 }
 
 int f2fs_write_multi_pages(struct compress_ctx *cc,
@@ -1806,16 +1817,18 @@ void f2fs_put_page_dic(struct page *page, bool in_task)
  * check whether cluster blocks are contiguous, and add extent cache entry
  * only if cluster blocks are logically and physically contiguous.
  */
-unsigned int f2fs_cluster_blocks_are_contiguous(struct dnode_of_data *dn)
+unsigned int f2fs_cluster_blocks_are_contiguous(struct dnode_of_data *dn,
+						unsigned int ofs_in_node)
 {
-	bool compressed = f2fs_data_blkaddr(dn) == COMPRESS_ADDR;
+	bool compressed = data_blkaddr(dn->inode, dn->node_page,
+					ofs_in_node) == COMPRESS_ADDR;
 	int i = compressed ? 1 : 0;
 	block_t first_blkaddr = data_blkaddr(dn->inode, dn->node_page,
-						dn->ofs_in_node + i);
+						ofs_in_node + i);
 
 	for (i += 1; i < F2FS_I(dn->inode)->i_cluster_size; i++) {
 		block_t blkaddr = data_blkaddr(dn->inode, dn->node_page,
-						dn->ofs_in_node + i);
+						ofs_in_node + i);
 
 		if (!__is_valid_data_blkaddr(blkaddr))
 			break;
@@ -1878,12 +1891,8 @@ void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi, struct page *page,
 
 	set_page_private_data(cpage, ino);
 
-	if (!f2fs_is_valid_blkaddr(sbi, blkaddr, DATA_GENERIC_ENHANCE_READ))
-		goto out;
-
 	memcpy(page_address(cpage), page_address(page), PAGE_SIZE);
 	SetPageUptodate(cpage);
-out:
 	f2fs_put_page(cpage, 1);
 }
This diff is collapsed.
--- a/fs/f2fs/debug.c
+++ b/fs/f2fs/debug.c
@@ -41,7 +41,7 @@ void f2fs_update_sit_info(struct f2fs_sb_info *sbi)
 	total_vblocks = 0;
 	blks_per_sec = CAP_BLKS_PER_SEC(sbi);
 	hblks_per_sec = blks_per_sec / 2;
-	for (segno = 0; segno < MAIN_SEGS(sbi); segno += sbi->segs_per_sec) {
+	for (segno = 0; segno < MAIN_SEGS(sbi); segno += SEGS_PER_SEC(sbi)) {
 		vblocks = get_valid_blocks(sbi, segno, true);
 		dist = abs(vblocks - hblks_per_sec);
 		bimodal += dist * dist;
@@ -135,7 +135,7 @@ static void update_general_status(struct f2fs_sb_info *sbi)
 	si->cur_ckpt_time = sbi->cprc_info.cur_time;
 	si->peak_ckpt_time = sbi->cprc_info.peak_time;
 	spin_unlock(&sbi->cprc_info.stat_lock);
-	si->total_count = (int)sbi->user_block_count / sbi->blocks_per_seg;
+	si->total_count = BLKS_TO_SEGS(sbi, (int)sbi->user_block_count);
 	si->rsvd_segs = reserved_segments(sbi);
 	si->overp_segs = overprovision_segments(sbi);
 	si->valid_count = valid_user_blocks(sbi);
@@ -176,11 +176,10 @@ static void update_general_status(struct f2fs_sb_info *sbi)
 	si->alloc_nids = NM_I(sbi)->nid_cnt[PREALLOC_NID];
 	si->io_skip_bggc = sbi->io_skip_bggc;
 	si->other_skip_bggc = sbi->other_skip_bggc;
-	si->util_free = (int)(free_user_blocks(sbi) >> sbi->log_blocks_per_seg)
+	si->util_free = (int)(BLKS_TO_SEGS(sbi, free_user_blocks(sbi)))
 		* 100 / (int)(sbi->user_block_count >> sbi->log_blocks_per_seg)
 		/ 2;
-	si->util_valid = (int)(written_block_count(sbi) >>
-				sbi->log_blocks_per_seg)
+	si->util_valid = (int)(BLKS_TO_SEGS(sbi, written_block_count(sbi)))
 		* 100 / (int)(sbi->user_block_count >> sbi->log_blocks_per_seg)
 		/ 2;
 	si->util_invalid = 50 - si->util_free - si->util_valid;
@@ -208,7 +207,7 @@ static void update_general_status(struct f2fs_sb_info *sbi)
 			if (!blks)
 				continue;
 
-			if (blks == sbi->blocks_per_seg)
+			if (blks == BLKS_PER_SEG(sbi))
 				si->full_seg[type]++;
 			else
 				si->dirty_seg[type]++;
--- a/fs/f2fs/dir.c
+++ b/fs/f2fs/dir.c
@@ -830,13 +830,14 @@ int f2fs_do_add_link(struct inode *dir, const struct qstr *name,
 	return err;
 }
 
-int f2fs_do_tmpfile(struct inode *inode, struct inode *dir)
+int f2fs_do_tmpfile(struct inode *inode, struct inode *dir,
+					struct f2fs_filename *fname)
 {
 	struct page *page;
 	int err = 0;
 
 	f2fs_down_write(&F2FS_I(inode)->i_sem);
-	page = f2fs_init_inode_metadata(inode, dir, NULL, NULL);
+	page = f2fs_init_inode_metadata(inode, dir, fname, NULL);
 	if (IS_ERR(page)) {
 		err = PTR_ERR(page);
 		goto fail;
@@ -995,9 +996,8 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d,
 		de = &d->dentry[bit_pos];
 		if (de->name_len == 0) {
 			if (found_valid_dirent || !bit_pos) {
-				printk_ratelimited(
-					"%sF2FS-fs (%s): invalid namelen(0), ino:%u, run fsck to fix.",
-					KERN_WARNING, sbi->sb->s_id,
+				f2fs_warn_ratelimited(sbi,
+					"invalid namelen(0), ino:%u, run fsck to fix.",
 					le32_to_cpu(de->ino));
 				set_sbi_flag(sbi, SBI_NEED_FSCK);
 			}
--- a/fs/f2fs/extent_cache.c
+++ b/fs/f2fs/extent_cache.c
@@ -43,7 +43,6 @@ bool sanity_check_extent_cache(struct inode *inode)
 	if (!f2fs_is_valid_blkaddr(sbi, ei->blk, DATA_GENERIC_ENHANCE) ||
 	    !f2fs_is_valid_blkaddr(sbi, ei->blk + ei->len - 1,
 					DATA_GENERIC_ENHANCE)) {
-		set_sbi_flag(sbi, SBI_NEED_FSCK);
 		f2fs_warn(sbi, "%s: inode (ino=%lx) extent info [%u, %u, %u] is incorrect, run fsck to fix",
 			  __func__, inode->i_ino,
 			  ei->blk, ei->fofs, ei->len);
@@ -856,10 +855,8 @@ static int __get_new_block_age(struct inode *inode, struct extent_info *ei,
 		goto out;
 
 	if (__is_valid_data_blkaddr(blkaddr) &&
-	    !f2fs_is_valid_blkaddr(sbi, blkaddr, DATA_GENERIC_ENHANCE)) {
-		f2fs_bug_on(sbi, 1);
+	    !f2fs_is_valid_blkaddr(sbi, blkaddr, DATA_GENERIC_ENHANCE))
 		return -EINVAL;
-	}
 out:
 	/*
 	 * init block age with zero, this can happen when the block age extent
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
--- a/fs/f2fs/gc.h
+++ b/fs/f2fs/gc.h
@@ -96,7 +96,7 @@ static inline block_t free_segs_blk_count(struct f2fs_sb_info *sbi)
 	if (f2fs_sb_has_blkzoned(sbi))
 		return free_segs_blk_count_zoned(sbi);
 
-	return free_segments(sbi) << sbi->log_blocks_per_seg;
+	return SEGS_TO_BLKS(sbi, free_segments(sbi));
 }
 
 static inline block_t free_user_blocks(struct f2fs_sb_info *sbi)
@@ -104,7 +104,7 @@ static inline block_t free_user_blocks(struct f2fs_sb_info *sbi)
 	block_t free_blks, ovp_blks;
 
 	free_blks = free_segs_blk_count(sbi);
-	ovp_blks = overprovision_segments(sbi) << sbi->log_blocks_per_seg;
+	ovp_blks = SEGS_TO_BLKS(sbi, overprovision_segments(sbi));
 
 	if (free_blks < ovp_blks)
 		return 0;
--- a/fs/f2fs/namei.c
+++ b/fs/f2fs/namei.c
@@ -851,7 +851,7 @@ static int f2fs_mknod(struct mnt_idmap *idmap, struct inode *dir,
 
 static int __f2fs_tmpfile(struct mnt_idmap *idmap, struct inode *dir,
 			  struct file *file, umode_t mode, bool is_whiteout,
-			  struct inode **new_inode)
+			  struct inode **new_inode, struct f2fs_filename *fname)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(dir);
 	struct inode *inode;
@@ -879,7 +879,7 @@ static int __f2fs_tmpfile(struct mnt_idmap *idmap, struct inode *dir,
 	if (err)
 		goto out;
 
-	err = f2fs_do_tmpfile(inode, dir);
+	err = f2fs_do_tmpfile(inode, dir, fname);
 	if (err)
 		goto release_out;
@@ -930,22 +930,24 @@ static int f2fs_tmpfile(struct mnt_idmap *idmap, struct inode *dir,
 	if (!f2fs_is_checkpoint_ready(sbi))
 		return -ENOSPC;
 
-	err = __f2fs_tmpfile(idmap, dir, file, mode, false, NULL);
+	err = __f2fs_tmpfile(idmap, dir, file, mode, false, NULL, NULL);
 
 	return finish_open_simple(file, err);
 }
 
 static int f2fs_create_whiteout(struct mnt_idmap *idmap,
-				struct inode *dir, struct inode **whiteout)
+				struct inode *dir, struct inode **whiteout,
+				struct f2fs_filename *fname)
 {
-	return __f2fs_tmpfile(idmap, dir, NULL,
-				S_IFCHR | WHITEOUT_MODE, true, whiteout);
+	return __f2fs_tmpfile(idmap, dir, NULL, S_IFCHR | WHITEOUT_MODE,
+						true, whiteout, fname);
 }
 
 int f2fs_get_tmpfile(struct mnt_idmap *idmap, struct inode *dir,
 		     struct inode **new_inode)
 {
-	return __f2fs_tmpfile(idmap, dir, NULL, S_IFREG, false, new_inode);
+	return __f2fs_tmpfile(idmap, dir, NULL, S_IFREG,
+				false, new_inode, NULL);
 }
 
 static int f2fs_rename(struct mnt_idmap *idmap, struct inode *old_dir,
@@ -989,7 +991,14 @@ static int f2fs_rename(struct mnt_idmap *idmap, struct inode *old_dir,
 	}
 
 	if (flags & RENAME_WHITEOUT) {
-		err = f2fs_create_whiteout(idmap, old_dir, &whiteout);
+		struct f2fs_filename fname;
+
+		err = f2fs_setup_filename(old_dir, &old_dentry->d_name,
+							0, &fname);
+		if (err)
+			return err;
+
+		err = f2fs_create_whiteout(idmap, old_dir, &whiteout, &fname);
 		if (err)
 			return err;
 	}
@@ -1104,14 +1113,11 @@ static int f2fs_rename(struct mnt_idmap *idmap, struct inode *old_dir,
 		iput(whiteout);
 	}
 
-	if (old_is_dir) {
-		if (old_dir_entry)
-			f2fs_set_link(old_inode, old_dir_entry,
-						old_dir_page, new_dir);
-		else
-			f2fs_put_page(old_dir_page, 0);
+	if (old_dir_entry)
+		f2fs_set_link(old_inode, old_dir_entry, old_dir_page, new_dir);
+	if (old_is_dir)
 		f2fs_i_links_write(old_dir, false);
-	}
 
 	if (F2FS_OPTION(sbi).fsync_mode == FSYNC_MODE_STRICT) {
 		f2fs_add_ino_entry(sbi, new_dir->i_ino, TRANS_DIR_INO);
 		if (S_ISDIR(old_inode->i_mode))
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -852,21 +852,29 @@ int f2fs_get_dnode_of_data(struct dnode_of_data *dn, pgoff_t index, int mode)
 
 	if (is_inode_flag_set(dn->inode, FI_COMPRESSED_FILE) &&
 					f2fs_sb_has_readonly(sbi)) {
-		unsigned int c_len = f2fs_cluster_blocks_are_contiguous(dn);
+		unsigned int cluster_size = F2FS_I(dn->inode)->i_cluster_size;
+		unsigned int ofs_in_node = dn->ofs_in_node;
+		pgoff_t fofs = index;
+		unsigned int c_len;
 		block_t blkaddr;
 
+		/* should align fofs and ofs_in_node to cluster_size */
+		if (fofs % cluster_size) {
+			fofs = round_down(fofs, cluster_size);
+			ofs_in_node = round_down(ofs_in_node, cluster_size);
+		}
+
+		c_len = f2fs_cluster_blocks_are_contiguous(dn, ofs_in_node);
 		if (!c_len)
 			goto out;
 
-		blkaddr = f2fs_data_blkaddr(dn);
+		blkaddr = data_blkaddr(dn->inode, dn->node_page, ofs_in_node);
 		if (blkaddr == COMPRESS_ADDR)
 			blkaddr = data_blkaddr(dn->inode, dn->node_page,
-						dn->ofs_in_node + 1);
+						ofs_in_node + 1);
 
 		f2fs_update_read_extent_tree_range_compressed(dn->inode,
-					index, blkaddr,
-					F2FS_I(dn->inode)->i_cluster_size,
-					c_len);
+				fofs, blkaddr, cluster_size, c_len);
 	}
 out:
 	return 0;
@@ -1919,7 +1927,7 @@ void f2fs_flush_inline_data(struct f2fs_sb_info *sbi)
 		for (i = 0; i < nr_folios; i++) {
 			struct page *page = &fbatch.folios[i]->page;
 
-			if (!IS_DNODE(page))
+			if (!IS_INODE(page))
 				continue;
 
 			lock_page(page);
@@ -2841,7 +2849,7 @@ int f2fs_restore_node_summary(struct f2fs_sb_info *sbi,
 	int i, idx, last_offset, nrpages;
 
 	/* scan the node segment */
-	last_offset = sbi->blocks_per_seg;
+	last_offset = BLKS_PER_SEG(sbi);
 	addr = START_BLOCK(sbi, segno);
 	sum_entry = &sum->entries[0];
@@ -3158,7 +3166,7 @@ static int __get_nat_bitmaps(struct f2fs_sb_info *sbi)
 	if (!is_set_ckpt_flags(sbi, CP_NAT_BITS_FLAG))
 		return 0;
 
-	nat_bits_addr = __start_cp_addr(sbi) + sbi->blocks_per_seg -
+	nat_bits_addr = __start_cp_addr(sbi) + BLKS_PER_SEG(sbi) -
 						nm_i->nat_bits_blocks;
 	for (i = 0; i < nm_i->nat_bits_blocks; i++) {
 		struct page *page;
--- a/fs/f2fs/node.h
+++ b/fs/f2fs/node.h
@@ -208,10 +208,10 @@ static inline pgoff_t current_nat_addr(struct f2fs_sb_info *sbi, nid_t start)
 
 	block_addr = (pgoff_t)(nm_i->nat_blkaddr +
 		(block_off << 1) -
-		(block_off & (sbi->blocks_per_seg - 1)));
+		(block_off & (BLKS_PER_SEG(sbi) - 1)));
 
 	if (f2fs_test_bit(block_off, nm_i->nat_bitmap))
-		block_addr += sbi->blocks_per_seg;
+		block_addr += BLKS_PER_SEG(sbi);
 
 	return block_addr;
 }
--- a/fs/f2fs/recovery.c
+++ b/fs/f2fs/recovery.c
@@ -354,7 +354,7 @@ static unsigned int adjust_por_ra_blocks(struct f2fs_sb_info *sbi,
 	if (blkaddr + 1 == next_blkaddr)
 		ra_blocks = min_t(unsigned int, RECOVERY_MAX_RA_BLOCKS,
 							ra_blocks * 2);
-	else if (next_blkaddr % sbi->blocks_per_seg)
+	else if (next_blkaddr % BLKS_PER_SEG(sbi))
 		ra_blocks = max_t(unsigned int, RECOVERY_MIN_RA_BLOCKS,
 							ra_blocks / 2);
 	return ra_blocks;
@@ -611,6 +611,19 @@ static int check_index_in_prev_nodes(struct f2fs_sb_info *sbi,
 	return 0;
 }
 
+static int f2fs_reserve_new_block_retry(struct dnode_of_data *dn)
+{
+	int i, err = 0;
+
+	for (i = DEFAULT_FAILURE_RETRY_COUNT; i > 0; i--) {
+		err = f2fs_reserve_new_block(dn);
+		if (!err)
+			break;
+	}
+
+	return err;
+}
+
 static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
 					struct page *page)
 {
@@ -680,14 +693,12 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
 		if (__is_valid_data_blkaddr(src) &&
 			!f2fs_is_valid_blkaddr(sbi, src, META_POR)) {
 			err = -EFSCORRUPTED;
-			f2fs_handle_error(sbi, ERROR_INVALID_BLKADDR);
 			goto err;
 		}
 
 		if (__is_valid_data_blkaddr(dest) &&
 			!f2fs_is_valid_blkaddr(sbi, dest, META_POR)) {
 			err = -EFSCORRUPTED;
-			f2fs_handle_error(sbi, ERROR_INVALID_BLKADDR);
 			goto err;
 		}
@@ -712,14 +723,8 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
 		 */
 		if (dest == NEW_ADDR) {
 			f2fs_truncate_data_blocks_range(&dn, 1);
-			do {
-				err = f2fs_reserve_new_block(&dn);
-				if (err == -ENOSPC) {
-					f2fs_bug_on(sbi, 1);
-					break;
-				}
-			} while (err &&
-				IS_ENABLED(CONFIG_F2FS_FAULT_INJECTION));
+
+			err = f2fs_reserve_new_block_retry(&dn);
 			if (err)
 				goto err;
 			continue;
@@ -727,16 +732,8 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
 		/* dest is valid block, try to recover from src to dest */
 		if (f2fs_is_valid_blkaddr(sbi, dest, META_POR)) {
 			if (src == NULL_ADDR) {
-				do {
-					err = f2fs_reserve_new_block(&dn);
-					if (err == -ENOSPC) {
-						f2fs_bug_on(sbi, 1);
-						break;
-					}
-				} while (err &&
-					IS_ENABLED(CONFIG_F2FS_FAULT_INJECTION));
+				err = f2fs_reserve_new_block_retry(&dn);
 				if (err)
 					goto err;
 			}
@@ -756,8 +753,6 @@ static int do_recover_data(struct f2fs_sb_info *sbi, struct inode *inode,
 				f2fs_err(sbi, "Inconsistent dest blkaddr:%u, ino:%lu, ofs:%u",
 					dest, inode->i_ino, dn.ofs_in_node);
 				err = -EFSCORRUPTED;
-				f2fs_handle_error(sbi,
-						ERROR_INVALID_BLKADDR);
 				goto err;
 			}
@@ -852,7 +847,7 @@ static int recover_data(struct f2fs_sb_info *sbi, struct list_head *inode_list,
 		f2fs_ra_meta_pages_cond(sbi, blkaddr, ra_blocks);
 	}
 	if (!err)
-		f2fs_allocate_new_segments(sbi);
+		err = f2fs_allocate_new_segments(sbi);
 	return err;
 }
@@ -864,7 +859,6 @@ int f2fs_recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only)
 	int ret = 0;
 	unsigned long s_flags = sbi->sb->s_flags;
 	bool need_writecp = false;
-	bool fix_curseg_write_pointer = false;
 
 	if (is_sbi_flag_set(sbi, SBI_IS_WRITABLE))
 		f2fs_info(sbi, "recover fsync data on readonly fs");
@@ -895,8 +889,6 @@ int f2fs_recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only)
 	else
 		f2fs_bug_on(sbi, sbi->sb->s_flags & SB_ACTIVE);
 skip:
-	fix_curseg_write_pointer = !check_only || list_empty(&inode_list);
-
 	destroy_fsync_dnodes(&inode_list, err);
 	destroy_fsync_dnodes(&tmp_inode_list, err);
@@ -914,11 +906,13 @@ int f2fs_recover_fsync_data(struct f2fs_sb_info *sbi, bool check_only)
 	 * and the f2fs is not read only, check and fix zoned block devices'
 	 * write pointer consistency.
 	 */
-	if (!err && fix_curseg_write_pointer && !f2fs_readonly(sbi->sb) &&
-			f2fs_sb_has_blkzoned(sbi)) {
-		err = f2fs_fix_curseg_write_pointer(sbi);
-		if (!err)
-			err = f2fs_check_write_pointer(sbi);
+	if (f2fs_sb_has_blkzoned(sbi) && !f2fs_readonly(sbi->sb)) {
+		int err2 = f2fs_fix_curseg_write_pointer(sbi);
+
+		if (!err2)
+			err2 = f2fs_check_write_pointer(sbi);
+		if (err2)
+			err = err2;
 		ret = err;
 	}
This diff is collapsed.
--- a/fs/f2fs/segment.h
+++ b/fs/f2fs/segment.h
@@ -48,21 +48,21 @@ static inline void sanity_check_seg_type(struct f2fs_sb_info *sbi,
 
 #define IS_CURSEC(sbi, secno)						\
 	(((secno) == CURSEG_I(sbi, CURSEG_HOT_DATA)->segno /		\
-	  (sbi)->segs_per_sec) ||					\
+	  SEGS_PER_SEC(sbi)) ||						\
 	 ((secno) == CURSEG_I(sbi, CURSEG_WARM_DATA)->segno /		\
-	  (sbi)->segs_per_sec) ||					\
+	  SEGS_PER_SEC(sbi)) ||						\
	 ((secno) == CURSEG_I(sbi, CURSEG_COLD_DATA)->segno /		\
-	  (sbi)->segs_per_sec) ||					\
+	  SEGS_PER_SEC(sbi)) ||						\
	 ((secno) == CURSEG_I(sbi, CURSEG_HOT_NODE)->segno /		\
-	  (sbi)->segs_per_sec) ||					\
+	  SEGS_PER_SEC(sbi)) ||						\
	 ((secno) == CURSEG_I(sbi, CURSEG_WARM_NODE)->segno /		\
-	  (sbi)->segs_per_sec) ||					\
+	  SEGS_PER_SEC(sbi)) ||						\
	 ((secno) == CURSEG_I(sbi, CURSEG_COLD_NODE)->segno /		\
-	  (sbi)->segs_per_sec) ||					\
+	  SEGS_PER_SEC(sbi)) ||						\
	 ((secno) == CURSEG_I(sbi, CURSEG_COLD_DATA_PINNED)->segno /	\
-	  (sbi)->segs_per_sec) ||					\
+	  SEGS_PER_SEC(sbi)) ||						\
	 ((secno) == CURSEG_I(sbi, CURSEG_ALL_DATA_ATGC)->segno /	\
-	  (sbi)->segs_per_sec))
+	  SEGS_PER_SEC(sbi)))
 
 #define MAIN_BLKADDR(sbi)						\
 	(SM_I(sbi) ? SM_I(sbi)->main_blkaddr :				\
@@ -77,40 +77,37 @@ static inline void sanity_check_seg_type(struct f2fs_sb_info *sbi,
 #define TOTAL_SEGS(sbi)							\
 	(SM_I(sbi) ? SM_I(sbi)->segment_count :				\
		le32_to_cpu(F2FS_RAW_SUPER(sbi)->segment_count))
-#define TOTAL_BLKS(sbi)	(TOTAL_SEGS(sbi) << (sbi)->log_blocks_per_seg)
+#define TOTAL_BLKS(sbi)	(SEGS_TO_BLKS(sbi, TOTAL_SEGS(sbi)))
 
 #define MAX_BLKADDR(sbi)	(SEG0_BLKADDR(sbi) + TOTAL_BLKS(sbi))
 #define SEGMENT_SIZE(sbi)	(1ULL << ((sbi)->log_blocksize +	\
					(sbi)->log_blocks_per_seg))
 
 #define START_BLOCK(sbi, segno)	(SEG0_BLKADDR(sbi) +			\
-	 (GET_R2L_SEGNO(FREE_I(sbi), segno) << (sbi)->log_blocks_per_seg))
+	 (SEGS_TO_BLKS(sbi, GET_R2L_SEGNO(FREE_I(sbi), segno))))
 
 #define NEXT_FREE_BLKADDR(sbi, curseg)					\
	(START_BLOCK(sbi, (curseg)->segno) + (curseg)->next_blkoff)
 
 #define GET_SEGOFF_FROM_SEG0(sbi, blk_addr)	((blk_addr) - SEG0_BLKADDR(sbi))
 #define GET_SEGNO_FROM_SEG0(sbi, blk_addr)				\
-	(GET_SEGOFF_FROM_SEG0(sbi, blk_addr) >> (sbi)->log_blocks_per_seg)
+	(BLKS_TO_SEGS(sbi, GET_SEGOFF_FROM_SEG0(sbi, blk_addr)))
 #define GET_BLKOFF_FROM_SEG0(sbi, blk_addr)				\
-	(GET_SEGOFF_FROM_SEG0(sbi, blk_addr) & ((sbi)->blocks_per_seg - 1))
+	(GET_SEGOFF_FROM_SEG0(sbi, blk_addr) & (BLKS_PER_SEG(sbi) - 1))
 
 #define GET_SEGNO(sbi, blk_addr)					\
	((!__is_valid_data_blkaddr(blk_addr)) ?			\
	NULL_SEGNO : GET_L2R_SEGNO(FREE_I(sbi),			\
		GET_SEGNO_FROM_SEG0(sbi, blk_addr)))
-#define BLKS_PER_SEC(sbi)					\
-	((sbi)->segs_per_sec * (sbi)->blocks_per_seg)
 #define CAP_BLKS_PER_SEC(sbi)					\
-	((sbi)->segs_per_sec * (sbi)->blocks_per_seg -		\
-	 (sbi)->unusable_blocks_per_sec)
+	(BLKS_PER_SEC(sbi) - (sbi)->unusable_blocks_per_sec)
 #define CAP_SEGS_PER_SEC(sbi)					\
-	((sbi)->segs_per_sec - ((sbi)->unusable_blocks_per_sec >>\
-	(sbi)->log_blocks_per_seg))
+	(SEGS_PER_SEC(sbi) -					\
+	BLKS_TO_SEGS(sbi, (sbi)->unusable_blocks_per_sec))
 #define GET_SEC_FROM_SEG(sbi, segno)				\
-	(((segno) == -1) ? -1 : (segno) / (sbi)->segs_per_sec)
+	(((segno) == -1) ? -1 : (segno) / SEGS_PER_SEC(sbi))
 #define GET_SEG_FROM_SEC(sbi, secno)				\
-	((secno) * (sbi)->segs_per_sec)
+	((secno) * SEGS_PER_SEC(sbi))
 #define GET_ZONE_FROM_SEC(sbi, secno)				\
	(((secno) == -1) ? -1 : (secno) / (sbi)->secs_per_zone)
 #define GET_ZONE_FROM_SEG(sbi, segno)				\
@@ -138,16 +135,6 @@ static inline void sanity_check_seg_type(struct f2fs_sb_info *sbi,
 #define SECTOR_TO_BLOCK(sectors)					\
	((sectors) >> F2FS_LOG_SECTORS_PER_BLOCK)
 
-/*
- * indicate a block allocation direction: RIGHT and LEFT.
- * RIGHT means allocating new sections towards the end of volume.
- * LEFT means the opposite direction.
- */
-enum {
-	ALLOC_RIGHT = 0,
-	ALLOC_LEFT
-};
-
 /*
  * In the victim_sel_policy->alloc_mode, there are three block allocation modes.
  * LFS writes data sequentially with cleaning operations.
@@ -364,7 +351,7 @@ static inline unsigned int get_ckpt_valid_blocks(struct f2fs_sb_info *sbi,
 	unsigned int blocks = 0;
 	int i;
 
-	for (i = 0; i < sbi->segs_per_sec; i++, start_segno++) {
+	for (i = 0; i < SEGS_PER_SEC(sbi); i++, start_segno++) {
 		struct seg_entry *se = get_seg_entry(sbi, start_segno);
 
 		blocks += se->ckpt_valid_blocks;
@@ -449,7 +436,7 @@ static inline void __set_free(struct f2fs_sb_info *sbi, unsigned int segno)
 	free_i->free_segments++;
 
 	next = find_next_bit(free_i->free_segmap,
-			start_segno + sbi->segs_per_sec, start_segno);
+			start_segno + SEGS_PER_SEC(sbi), start_segno);
 	if (next >= start_segno + usable_segs) {
 		clear_bit(secno, free_i->free_secmap);
 		free_i->free_sections++;
@@ -485,7 +472,7 @@ static inline void __set_test_and_free(struct f2fs_sb_info *sbi,
 		if (!inmem && IS_CURSEC(sbi, secno))
 			goto skip_free;
 		next = find_next_bit(free_i->free_segmap,
-				start_segno + sbi->segs_per_sec, start_segno);
+				start_segno + SEGS_PER_SEC(sbi), start_segno);
 		if (next >= start_segno + usable_segs) {
 			if (test_and_clear_bit(secno, free_i->free_secmap))
 				free_i->free_sections++;
@@ -573,23 +560,22 @@ static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi,
 			unsigned int node_blocks, unsigned int dent_blocks)
 {
-	unsigned int segno, left_blocks;
+	unsigned segno, left_blocks;
 	int i;
 
-	/* check current node segment */
+	/* check current node sections in the worst case. */
 	for (i = CURSEG_HOT_NODE; i <= CURSEG_COLD_NODE; i++) {
 		segno = CURSEG_I(sbi, i)->segno;
-		left_blocks = f2fs_usable_blks_in_seg(sbi, segno) -
-				get_seg_entry(sbi, segno)->ckpt_valid_blocks;
-
+		left_blocks = CAP_BLKS_PER_SEC(sbi) -
+				get_ckpt_valid_blocks(sbi, segno, true);
 		if (node_blocks > left_blocks)
 			return false;
 	}
 
-	/* check current data segment */
+	/* check current data section for dentry blocks. */
 	segno = CURSEG_I(sbi, CURSEG_HOT_DATA)->segno;
-	left_blocks = f2fs_usable_blks_in_seg(sbi, segno) -
-			get_seg_entry(sbi, segno)->ckpt_valid_blocks;
+	left_blocks = CAP_BLKS_PER_SEC(sbi) -
+			get_ckpt_valid_blocks(sbi, segno, true);
 	if (dent_blocks > left_blocks)
 		return false;
 	return true;
@@ -638,7 +624,7 @@ static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi,
 
 	if (free_secs > upper_secs)
 		return false;
-	else if (free_secs <= lower_secs)
+	if (free_secs <= lower_secs)
 		return true;
 	return !curseg_space;
 }
@@ -793,10 +779,10 @@ static inline int check_block_count(struct f2fs_sb_info *sbi,
 		return -EFSCORRUPTED;
 	}
 
-	if (usable_blks_per_seg < sbi->blocks_per_seg)
+	if (usable_blks_per_seg < BLKS_PER_SEG(sbi))
 		f2fs_bug_on(sbi, find_next_bit_le(&raw_sit->valid_map,
-				sbi->blocks_per_seg,
-				usable_blks_per_seg) != sbi->blocks_per_seg);
+				BLKS_PER_SEG(sbi),
+				usable_blks_per_seg) != BLKS_PER_SEG(sbi));
 
 	/* check segment usage, and check boundary of a given segment number */
 	if (unlikely(GET_SIT_VBLOCKS(raw_sit) > usable_blks_per_seg
@@ -915,9 +901,9 @@ static inline int nr_pages_to_skip(struct f2fs_sb_info *sbi, int type)
 		return 0;
 
 	if (type == DATA)
-		return sbi->blocks_per_seg;
+		return BLKS_PER_SEG(sbi);
 	else if (type == NODE)
-		return 8 * sbi->blocks_per_seg;
+		return SEGS_TO_BLKS(sbi, 8);
 	else if (type == META)
 		return 8 * BIO_MAX_VECS;
 	else
@@ -969,3 +955,13 @@ static inline void wake_up_discard_thread(struct f2fs_sb_info *sbi, bool force)
 	dcc->discard_wake = true;
 	wake_up_interruptible_all(&dcc->discard_wait_queue);
 }
+
+static inline unsigned int first_zoned_segno(struct f2fs_sb_info *sbi)
+{
+	int devi;
+
+	for (devi = 0; devi < sbi->s_ndevs; devi++)
+		if (bdev_is_zoned(FDEV(devi).bdev))
+			return GET_SEGNO(sbi, FDEV(devi).start_blk);
+	return 0;
+}
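[Editor's note] The helpers driving these conversions are defined in fs/f2fs/f2fs.h, whose diff is collapsed in this view. Going by the one-to-one replacement of shift expressions in the hunks above, they presumably expand along these lines; an inferred sketch, not the verbatim definitions:

	/* Inferred from the conversions above; the real definitions
	 * live in fs/f2fs/f2fs.h (collapsed here). */
	#define BLKS_PER_SEG(sbi)	((sbi)->blocks_per_seg)
	#define SEGS_PER_SEC(sbi)	((sbi)->segs_per_sec)
	#define SEGS_TO_BLKS(sbi, segs)	((segs) << (sbi)->log_blocks_per_seg)
	#define BLKS_TO_SEGS(sbi, blks)	((blks) >> (sbi)->log_blocks_per_seg)
	/* BLKS_PER_SEC() moves out of segment.h as well: */
	#define BLKS_PER_SEC(sbi)	(SEGS_TO_BLKS(sbi, SEGS_PER_SEC(sbi)))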
This diff is collapsed.
--- a/fs/f2fs/sysfs.c
+++ b/fs/f2fs/sysfs.c
@@ -493,8 +493,8 @@ static ssize_t __sbi_store(struct f2fs_attr *a,
 		spin_lock(&sbi->stat_lock);
 		if (t > (unsigned long)(sbi->user_block_count -
 				F2FS_OPTION(sbi).root_reserved_blocks -
-				sbi->blocks_per_seg *
-				SM_I(sbi)->additional_reserved_segments)) {
+				SEGS_TO_BLKS(sbi,
+					SM_I(sbi)->additional_reserved_segments))) {
 			spin_unlock(&sbi->stat_lock);
 			return -EINVAL;
 		}
@@ -551,7 +551,7 @@ static ssize_t __sbi_store(struct f2fs_attr *a,
 	}
 
 	if (!strcmp(a->attr.name, "migration_granularity")) {
-		if (t == 0 || t > sbi->segs_per_sec)
+		if (t == 0 || t > SEGS_PER_SEC(sbi))
 			return -EINVAL;
 	}
@@ -1492,6 +1492,50 @@ static int __maybe_unused discard_plist_seq_show(struct seq_file *seq,
 	return 0;
 }
 
+static int __maybe_unused disk_map_seq_show(struct seq_file *seq,
+						void *offset)
+{
+	struct super_block *sb = seq->private;
+	struct f2fs_sb_info *sbi = F2FS_SB(sb);
+	int i;
+
+	seq_printf(seq, "Address Layout   : %5luB Block address (# of Segments)\n",
+					F2FS_BLKSIZE);
+	seq_printf(seq, " SB            : %12s\n", "0/1024B");
+	seq_printf(seq, " seg0_blkaddr  : 0x%010x\n", SEG0_BLKADDR(sbi));
+	seq_printf(seq, " Checkpoint    : 0x%010x (%10d)\n",
+			le32_to_cpu(F2FS_RAW_SUPER(sbi)->cp_blkaddr), 2);
+	seq_printf(seq, " SIT           : 0x%010x (%10d)\n",
+			SIT_I(sbi)->sit_base_addr,
+			le32_to_cpu(F2FS_RAW_SUPER(sbi)->segment_count_sit));
+	seq_printf(seq, " NAT           : 0x%010x (%10d)\n",
+			NM_I(sbi)->nat_blkaddr,
+			le32_to_cpu(F2FS_RAW_SUPER(sbi)->segment_count_nat));
+	seq_printf(seq, " SSA           : 0x%010x (%10d)\n",
+			SM_I(sbi)->ssa_blkaddr,
+			le32_to_cpu(F2FS_RAW_SUPER(sbi)->segment_count_ssa));
+	seq_printf(seq, " Main          : 0x%010x (%10d)\n",
+			SM_I(sbi)->main_blkaddr,
+			le32_to_cpu(F2FS_RAW_SUPER(sbi)->segment_count_main));
+	seq_printf(seq, " # of Sections : %12d\n",
+			le32_to_cpu(F2FS_RAW_SUPER(sbi)->section_count));
+	seq_printf(seq, " Segs/Sections : %12d\n",
+			SEGS_PER_SEC(sbi));
+	seq_printf(seq, " Section size  : %12d MB\n",
+			SEGS_PER_SEC(sbi) << 1);
+
+	if (!f2fs_is_multi_device(sbi))
+		return 0;
+
+	seq_puts(seq, "\nDisk Map for multi devices:\n");
+	for (i = 0; i < sbi->s_ndevs; i++)
+		seq_printf(seq, "Disk:%2d (zoned=%d): 0x%010x - 0x%010x on %s\n",
+			i, bdev_is_zoned(FDEV(i).bdev),
+			FDEV(i).start_blk, FDEV(i).end_blk,
+			FDEV(i).path);
+	return 0;
+}
+
 int __init f2fs_init_sysfs(void)
 {
 	int ret;
@@ -1573,6 +1617,8 @@ int f2fs_register_sysfs(struct f2fs_sb_info *sbi)
 				victim_bits_seq_show, sb);
 	proc_create_single_data("discard_plist_info", 0444, sbi->s_proc,
 				discard_plist_seq_show, sb);
+	proc_create_single_data("disk_map", 0444, sbi->s_proc,
+				disk_map_seq_show, sb);
 	return 0;
 put_feature_list_kobj:
 	kobject_put(&sbi->s_feature_list_kobj);
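[Editor's note] Since the entry is registered under sbi->s_proc, the new layout dump should appear as /proc/fs/f2fs/<disk>/disk_map once the filesystem is mounted. A minimal user-space sketch for reading it; the device name is a placeholder:

	#include <stdio.h>

	int main(void)
	{
		char line[256];
		/* "sda1" stands in for whatever device holds the f2fs image. */
		FILE *f = fopen("/proc/fs/f2fs/sda1/disk_map", "r");

		if (!f) {
			perror("fopen");
			return 1;
		}
		/* Prints the SB/CP/SIT/NAT/SSA/Main layout plus, on
		 * multi-device setups, the per-device block ranges. */
		while (fgets(line, sizeof(line), f))
			fputs(line, stdout);
		fclose(f);
		return 0;
	}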
--- a/fs/f2fs/verity.c
+++ b/fs/f2fs/verity.c
@@ -258,21 +258,23 @@ static struct page *f2fs_read_merkle_tree_page(struct inode *inode,
 					       pgoff_t index,
 					       unsigned long num_ra_pages)
 {
-	struct page *page;
+	struct folio *folio;
 
 	index += f2fs_verity_metadata_pos(inode) >> PAGE_SHIFT;
 
-	page = find_get_page_flags(inode->i_mapping, index, FGP_ACCESSED);
-	if (!page || !PageUptodate(page)) {
+	folio = __filemap_get_folio(inode->i_mapping, index, FGP_ACCESSED, 0);
+	if (IS_ERR(folio) || !folio_test_uptodate(folio)) {
 		DEFINE_READAHEAD(ractl, NULL, NULL, inode->i_mapping, index);
 
-		if (page)
-			put_page(page);
+		if (!IS_ERR(folio))
+			folio_put(folio);
 		else if (num_ra_pages > 1)
 			page_cache_ra_unbounded(&ractl, num_ra_pages, 0);
-		page = read_mapping_page(inode->i_mapping, index, NULL);
+		folio = read_mapping_folio(inode->i_mapping, index, NULL);
+		if (IS_ERR(folio))
+			return ERR_CAST(folio);
 	}
-	return page;
+	return folio_file_page(folio, index);
 }
 
 static int f2fs_write_merkle_tree_block(struct inode *inode, const void *buf,
--- a/include/linux/f2fs_fs.h
+++ b/include/linux/f2fs_fs.h
@@ -27,6 +27,7 @@
 
 #define F2FS_BYTES_TO_BLK(bytes)	((bytes) >> F2FS_BLKSIZE_BITS)
 #define F2FS_BLK_TO_BYTES(blk)		((blk) << F2FS_BLKSIZE_BITS)
+#define F2FS_BLK_END_BYTES(blk)		(F2FS_BLK_TO_BYTES(blk + 1) - 1)
 
 /* 0, 1(node nid), 2(meta nid) are reserved node id */
 #define F2FS_RESERVED_NODE_NUM		3
@@ -40,12 +41,6 @@
 
 #define F2FS_ENC_UTF8_12_1	1
 
-#define F2FS_IO_SIZE(sbi)	BIT(F2FS_OPTION(sbi).write_io_size_bits) /* Blocks */
-#define F2FS_IO_SIZE_KB(sbi)	BIT(F2FS_OPTION(sbi).write_io_size_bits + 2) /* KB */
-#define F2FS_IO_SIZE_BITS(sbi)	(F2FS_OPTION(sbi).write_io_size_bits) /* power of 2 */
-#define F2FS_IO_SIZE_MASK(sbi)	(F2FS_IO_SIZE(sbi) - 1)
-#define F2FS_IO_ALIGNED(sbi)	(F2FS_IO_SIZE(sbi) > 1)
-
 /* This flag is used by node and meta inodes, and by recovery */
 #define GFP_F2FS_ZERO		(GFP_NOFS | __GFP_ZERO)
@@ -81,6 +76,7 @@ enum stop_cp_reason {
 	STOP_CP_REASON_CORRUPTED_SUMMARY,
 	STOP_CP_REASON_UPDATE_INODE,
 	STOP_CP_REASON_FLUSH_FAIL,
+	STOP_CP_REASON_NO_SEGMENT,
 	STOP_CP_REASON_MAX,
 };