Commit 79aa925b authored by Mike Kravetz, committed by Linus Torvalds

hugetlb_cgroup: fix reservation accounting

Michal Privoznik was using "free page reporting" in QEMU/virtio-balloon
with hugetlbfs and hit the warning below.  QEMU with free page hinting
uses fallocate(FALLOC_FL_PUNCH_HOLE) to discard pages that are reported
as free by a VM.  Reporting is done at pageblock granularity, so when the
guest reports a free 2M chunk, QEMU punches out one huge page with
fallocate(FALLOC_FL_PUNCH_HOLE).

  WARNING: CPU: 7 PID: 6636 at mm/page_counter.c:57 page_counter_uncharge+0x4b/0x50
  Modules linked in: ...
  CPU: 7 PID: 6636 Comm: qemu-system-x86 Not tainted 5.9.0 #137
  Hardware name: Gigabyte Technology Co., Ltd. X570 AORUS PRO/X570 AORUS PRO, BIOS F21 07/31/2020
  RIP: 0010:page_counter_uncharge+0x4b/0x50
  ...
  Call Trace:
    hugetlb_cgroup_uncharge_file_region+0x4b/0x80
    region_del+0x1d3/0x300
    hugetlb_unreserve_pages+0x39/0xb0
    remove_inode_hugepages+0x1a8/0x3d0
    hugetlbfs_fallocate+0x3c4/0x5c0
    vfs_fallocate+0x146/0x290
    __x64_sys_fallocate+0x3e/0x70
    do_syscall_64+0x33/0x40
    entry_SYSCALL_64_after_hwframe+0x44/0xa9

Investigation of the issue uncovered bugs in hugetlb cgroup reservation
accounting.  This patch addresses them.

Fixes: 075a61d0 ("hugetlb_cgroup: add accounting for shared mappings")
Reported-by: Michal Privoznik <mprivozn@redhat.com>
Co-developed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Tested-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Mina Almasry <almasrymina@google.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Cc: <stable@vger.kernel.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Muchun Song <songmuchun@bytedance.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Tejun Heo <tj@kernel.org>
Link: https://lkml.kernel.org/r/20201021204426.36069-1-mike.kravetz@oracle.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 46b1ee38
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -648,6 +648,8 @@ static long region_del(struct resv_map *resv, long f, long t)
 			}
 
 			del += t - f;
+			hugetlb_cgroup_uncharge_file_region(
+				resv, rg, t - f);
 
 			/* New entry for end of split region */
 			nrg->from = t;
@@ -660,9 +662,6 @@ static long region_del(struct resv_map *resv, long f, long t)
 			/* Original entry is trimmed */
 			rg->to = f;
 
-			hugetlb_cgroup_uncharge_file_region(
-				resv, rg, nrg->to - nrg->from);
-
 			list_add(&nrg->link, &rg->link);
 			nrg = NULL;
 			break;
@@ -678,17 +677,17 @@ static long region_del(struct resv_map *resv, long f, long t)
 		}
 
 		if (f <= rg->from) {	/* Trim beginning of region */
-			del += t - rg->from;
-			rg->from = t;
-
 			hugetlb_cgroup_uncharge_file_region(resv, rg,
 							    t - rg->from);
+
+			del += t - rg->from;
+			rg->from = t;
 		} else {		/* Trim end of region */
+			hugetlb_cgroup_uncharge_file_region(resv, rg,
+							    rg->to - f);
+
 			del += rg->to - f;
 			rg->to = f;
-
-			hugetlb_cgroup_uncharge_file_region(resv, rg,
-							    rg->to - f);
 		}
 	}
@@ -2443,6 +2442,9 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
 		rsv_adjust = hugepage_subpool_put_pages(spool, 1);
 		hugetlb_acct_memory(h, -rsv_adjust);
+		if (deferred_reserve)
+			hugetlb_cgroup_uncharge_page_rsvd(hstate_index(h),
+					pages_per_huge_page(h), page);
 	}
 
 	return page;