Commit 3f79b187 authored by Kairui Song, committed by Andrew Morton

swapfile: get rid of volatile and avoid redundant read

Patch series "Clean up and fixes for swap", v2.

This series cleans up some code paths, saves a few cycles, and slightly reduces the
object size.  It also fixes a rare race issue with statistics.


This patch (of 4):

Convert a volatile variable to a more readable READ_ONCE.  This also keeps
the code from redundantly reading the variable twice when it races with a
concurrent update.
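
For illustration, the pattern the patch applies can be sketched outside the
kernel as well: take one snapshot of the concurrently updated byte and test
the snapshot, instead of dereferencing a volatile pointer in each comparison.
This is a minimal userspace sketch only; the read_once() macro, the
SWAP_MAP_BAD value, and the other names below are stand-ins, not the kernel's
actual definitions.

	/* Minimal sketch of "read once into a local, then test the local". */
	#include <stdio.h>

	/* Stand-in for the kernel's READ_ONCE(): one volatile load of x. */
	#define read_once(x) (*(const volatile __typeof__(x) *)&(x))

	#define SWAP_MAP_BAD 0x3f	/* placeholder value for this sketch */

	static unsigned char swap_map_entry;	/* updated concurrently in the kernel */

	static int should_skip(void)
	{
		/*
		 * Before: "if (*swap_map == 0 || *swap_map == SWAP_MAP_BAD)"
		 * loads the byte twice; a racing writer can change it between
		 * the two loads, so the two tests may see different values.
		 *
		 * After: snapshot it once, then test the snapshot.
		 */
		unsigned char swp_count = read_once(swap_map_entry);

		return swp_count == 0 || swp_count == SWAP_MAP_BAD;
	}

	int main(void)
	{
		swap_map_entry = 0;
		printf("skip: %d\n", should_skip());
		return 0;
	}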

Link: https://lkml.kernel.org/r/20221219185840.25441-1-ryncsn@gmail.com
Link: https://lkml.kernel.org/r/20221219185840.25441-2-ryncsn@gmail.com
Signed-off-by: Kairui Song <kasong@tencent.com>
Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 497b099d
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1835,13 +1835,13 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 	pte_t *pte;
 	struct swap_info_struct *si;
 	int ret = 0;
-	volatile unsigned char *swap_map;
 
 	si = swap_info[type];
 	pte = pte_offset_map(pmd, addr);
 	do {
 		struct folio *folio;
 		unsigned long offset;
+		unsigned char swp_count;
 
 		if (!is_swap_pte(*pte))
 			continue;
@@ -1852,7 +1852,6 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 
 		offset = swp_offset(entry);
 		pte_unmap(pte);
-		swap_map = &si->swap_map[offset];
 		folio = swap_cache_get_folio(entry, vma, addr);
 		if (!folio) {
 			struct page *page;
@@ -1869,8 +1868,10 @@ static int unuse_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 			folio = page_folio(page);
 		}
 		if (!folio) {
-			if (*swap_map == 0 || *swap_map == SWAP_MAP_BAD)
+			swp_count = READ_ONCE(si->swap_map[offset]);
+			if (swp_count == 0 || swp_count == SWAP_MAP_BAD)
 				goto try_next;
+
 			return -ENOMEM;
 		}
 