Commit b0365c8d authored by Hillf Danton, committed by Linus Torvalds

mm: hugetlb: fix non-atomic enqueue of huge page

If a huge page is enqueued under the protection of hugetlb_lock, the
operation is atomic and safe.  Move the spin_unlock() of hugetlb_lock
in gather_surplus_pages() to after the enqueue loop, so that
enqueue_huge_page() always runs with the lock held.
Signed-off-by: Hillf Danton <dhillf@gmail.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: <stable@vger.kernel.org>		[2.6.37+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 34845636
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -901,7 +901,6 @@ static int gather_surplus_pages(struct hstate *h, int delta)
 	h->resv_huge_pages += delta;
 	ret = 0;
 
-	spin_unlock(&hugetlb_lock);
 	/* Free the needed pages to the hugetlb pool */
 	list_for_each_entry_safe(page, tmp, &surplus_list, lru) {
 		if ((--needed) < 0)
@@ -915,6 +914,7 @@ static int gather_surplus_pages(struct hstate *h, int delta)
 		VM_BUG_ON(page_count(page));
 		enqueue_huge_page(h, page);
 	}
+	spin_unlock(&hugetlb_lock);
 
 	/* Free unnecessary surplus pages to the buddy allocator */
 free:
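For readers outside the kernel tree, the following is a minimal standalone
sketch of the locking pattern this fix enforces: items must be moved onto a
shared list while the lock is still held, and the lock is dropped only after
the enqueue loop completes.  The names here (pool_lock, pool, enqueue, gather)
are illustrative stand-ins using pthreads, not the kernel's symbols.

/* Build with: cc -std=c99 sketch.c -lpthread */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;

struct node { struct node *next; int id; };

static struct node *pool;	/* shared free list, guarded by pool_lock */

/* Caller must hold pool_lock, mirroring enqueue_huge_page(). */
static void enqueue(struct node *n)
{
	n->next = pool;
	pool = n;
}

static void gather(struct node *batch[], int nr)
{
	pthread_mutex_lock(&pool_lock);
	/* ... bookkeeping done under the lock ... */

	/*
	 * Before the fix, the unlock sat here, so the loop below
	 * walked and modified the shared list with no protection.
	 */
	for (int i = 0; i < nr; i++)
		enqueue(batch[i]);

	pthread_mutex_unlock(&pool_lock);	/* fix: unlock after the loop */
}

int main(void)
{
	struct node a = { .id = 1 }, b = { .id = 2 };
	struct node *batch[] = { &a, &b };

	gather(batch, 2);
	for (struct node *n = pool; n; n = n->next)
		printf("pooled node %d\n", n->id);
	return 0;
}

The design point is the same as in the patch: the unlock moves past the
enqueue loop so that every writer of the shared list holds the lock,
making each enqueue atomic with respect to concurrent allocators.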