Commit ace2e92e authored by Jeremy Fitzhardinge, committed by Jeremy Fitzhardinge

xfs: eagerly remove vmap mappings to avoid upsetting Xen

XFS leaves stray mappings around when it vmaps memory to make it
virtually contiguous.  This upsets Xen if one of those pages is being
recycled into a pagetable, since it finds an extra writable mapping of
the page.

This patch solves the problem in a brute-force way, by making XFS
always eagerly unmap its mappings.  David Chinner says this shouldn't
have any performance impact on filesystems with default block sizes;
it will only affect filesystems with large block sizes.
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Acked-by: David Chinner <dgc@sgi.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: XFS masters <xfs-masters@oss.sgi.com>
Cc: Stable kernel <stable@kernel.org>
Cc: Morten Bøgeskov <xen-users@morten.bogeskov.dk>
Cc: Mark Williamson <mark.williamson@cl.cam.ac.uk>
parent a122d623
@@ -187,6 +187,19 @@ free_address(
 {
 	a_list_t	*aentry;
 
+#ifdef CONFIG_XEN
+	/*
+	 * Xen needs to be able to make sure it can get an exclusive
+	 * RO mapping of pages it wants to turn into a pagetable.  If
+	 * a newly allocated page is also still being vmap()ed by xfs,
+	 * it will cause pagetable construction to fail.  This is a
+	 * quick workaround to always eagerly unmap pages so that Xen
+	 * is happy.
+	 */
+	vunmap(addr);
+	return;
+#endif
+
 	aentry = kmalloc(sizeof(a_list_t), GFP_NOWAIT);
 	if (likely(aentry)) {
 		spin_lock(&as_lock);
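For context, the hunk above short-circuits XFS's deferred-unmap path: the context lines show free_address() queueing the vmap address on an a_list_t so a later purge pass can vunmap() the whole batch, and it is that deferral which leaves the stray mappings Xen trips over when pinning a page as a pagetable. The sketch below reconstructs that logic; the list layout, the as_free_head name, and the allocation-failure fallback are assumptions inferred from the visible context lines, not the verbatim fs/xfs source.

#include <linux/vmalloc.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/*
 * Deferred-unmap bookkeeping; mirrors the a_list_t seen in the diff
 * (the field names here are assumptions).
 */
typedef struct a_list {
	void		*vm_addr;
	struct a_list	*next;
} a_list_t;

static a_list_t *as_free_head;
static DEFINE_SPINLOCK(as_lock);

static void
free_address(
	void		*addr)
{
	a_list_t	*aentry;

#ifdef CONFIG_XEN
	/*
	 * The patch: unmap eagerly, so no stray writable alias of the
	 * page survives to the point where Xen pins it as a pagetable.
	 */
	vunmap(addr);
	return;
#endif
	/*
	 * Lazy path: queue the address; a later purge pass walks the
	 * list and vunmap()s each entry in a batch.
	 */
	aentry = kmalloc(sizeof(a_list_t), GFP_NOWAIT);
	if (likely(aentry)) {
		spin_lock(&as_lock);
		aentry->next = as_free_head;
		aentry->vm_addr = addr;
		as_free_head = aentry;
		spin_unlock(&as_lock);
	} else {
		/* No memory to defer the work; unmap right away. */
		vunmap(addr);
	}
}

The trade-off matches the commit message: deferring exists to amortize the cost of unmapping (each vunmap() can trigger a global TLB flush), but XFS only vmaps buffers on large-block-size filesystems, so always taking the eager path costs little while guaranteeing Xen never sees a leftover writable mapping.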