Commit 2f37511c authored by Wei Yang, committed by Linus Torvalds

mm/hugetlb: narrow the hugetlb_lock protection area during preparing huge page

set_hugetlb_cgroup[_rsvd]() only manipulates page-local data, which does not
need to be protected by hugetlb_lock.

Let's take it out of the hugetlb_lock-protected region.
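
For illustration, this is the resulting function reconstructed from the hunk
below (a sketch, not the verbatim tree contents): the cgroup fields are set
while the page is still private to the allocating context, so only the shared
per-hstate counters remain under hugetlb_lock.

static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
{
	INIT_LIST_HEAD(&page->lru);
	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
	/* Page-local cgroup fields: nobody else can see this page yet. */
	set_hugetlb_cgroup(page, NULL);
	set_hugetlb_cgroup_rsvd(page, NULL);
	/* hugetlb_lock now only covers the global counters. */
	spin_lock(&hugetlb_lock);
	h->nr_huge_pages++;
	h->nr_huge_pages_node[nid]++;
	spin_unlock(&hugetlb_lock);
}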
Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Baoquan He <bhe@redhat.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Link: https://lkml.kernel.org/r/20200831022351.20916-7-richard.weiyang@linux.alibaba.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 15a8d68e
@@ -1504,9 +1504,9 @@ static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 {
 	INIT_LIST_HEAD(&page->lru);
 	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
-	spin_lock(&hugetlb_lock);
 	set_hugetlb_cgroup(page, NULL);
 	set_hugetlb_cgroup_rsvd(page, NULL);
+	spin_lock(&hugetlb_lock);
 	h->nr_huge_pages++;
 	h->nr_huge_pages_node[nid]++;
 	spin_unlock(&hugetlb_lock);