Commit de21fd42 authored by Hugh Dickins, committed by Ben Hutchings

shmem: fix faulting into a hole, not taking i_mutex

commit 8e205f77 upstream.

Commit f00cdc6d ("shmem: fix faulting into a hole while it's
punched") was buggy: Sasha sent a lockdep report to remind us that
grabbing i_mutex in the fault path is a no-no (write syscall may already
hold i_mutex while faulting user buffer).
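(For the unfamiliar, the inversion is easy to see in a sketch; the call
chain is simplified and the exact write path varies by kernel version, so
treat the function names as approximate:

	sys_write()
	  generic_file_aio_write()
	    mutex_lock(&inode->i_mutex);	/* write syscall holds i_mutex */
	    generic_file_buffered_write()
	      copy_from_user()			/* user buffer may be unmapped */
	        -> page fault
	          shmem_fault()
	            mutex_lock(&inode->i_mutex);	/* what f00cdc6d did: same
						   lock class, and the very
						   same inode if the buffer
						   is an mmap of this file */

so lockdep complains about the recursive acquisition even before the
self-deadlock case is actually hit.)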

We tried a completely different approach (see following patch) but that
proved inadequate: good enough for a rational workload, but not good
enough against trinity - which forks off so many mappings of the object
that contention on i_mmap_mutex while hole-puncher holds i_mutex builds
into serious starvation when concurrent faults force the puncher to fall
back to single-page unmap_mapping_range() searches of the i_mmap tree.

So return to the original umbrella approach, but keep away from i_mutex
this time.  We really don't want to bloat every shmem inode with a new
mutex or completion, just to protect this unlikely case from trinity.
So extend the original with wait_queue_head on stack at the hole-punch
end, and wait_queue item on the stack at the fault end.
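In outline, the handshake works as below (a minimal sketch in the shape of
the patch that follows; identifiers are illustrative and error handling is
omitted):

	/* hole-punch end: waitqueue head lives on the puncher's stack */
	DECLARE_WAIT_QUEUE_HEAD_ONSTACK(waitq);
	shmem_falloc.waitq = &waitq;

	spin_lock(&inode->i_lock);
	inode->i_private = &shmem_falloc;	/* raise the umbrella */
	spin_unlock(&inode->i_lock);

	/* ... unmap and truncate the pages of the hole ... */

	spin_lock(&inode->i_lock);
	inode->i_private = NULL;		/* lower the umbrella */
	wake_up_all(&waitq);			/* deliberately under i_lock */
	spin_unlock(&inode->i_lock);

	/* fault end: wait entry lives on the faulting task's stack */
	DEFINE_WAIT(wait);

	prepare_to_wait(waitq, &wait, TASK_UNINTERRUPTIBLE);
	spin_unlock(&inode->i_lock);	/* taken earlier to inspect i_private */
	schedule();			/* sleep until the puncher's wake_up_all() */
	spin_lock(&inode->i_lock);
	finish_wait(waitq, &wait);	/* safe: see note on i_lock below */
	spin_unlock(&inode->i_lock);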

This involves further use of i_lock to guard against the races: lockdep
has been happy so far, and I see fs/inode.c:unlock_new_inode() holds
i_lock around wake_up_bit(), which is comparable to what we do here.
i_lock is more convenient, but we could switch to shmem's info->lock.
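To spell out the race which makes holding i_lock across wake_up_all()
matter (a sketch, not part of the patch): the waitqueue head lives on the
puncher's stack, so the faulter must never touch it once the puncher has
returned.

	faulting task				hole-punching task
	-------------				------------------
	prepare_to_wait(waitq, ...)
	spin_unlock(&inode->i_lock)
	schedule()
						spin_lock(&inode->i_lock)
						inode->i_private = NULL
						wake_up_all(waitq)
						spin_unlock(&inode->i_lock)
	spin_lock(&inode->i_lock)
	finish_wait(waitq, ...)
	spin_unlock(&inode->i_lock)

Because both sides take i_lock around their waitqueue operations,
finish_wait() cannot overlap wake_up_all(): by the time the faulter holds
i_lock, either the wake has not yet happened (waitq is still valid) or it
has completed and the wait entry is already off the queue, in which case
finish_wait() never dereferences waitq at all.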

This issue has been tagged with CVE-2014-4171, which will require commit
f00cdc6d and this and the following patch to be backported: we
suggest to 3.1+, though in fact the trinity forkbomb effect might go
back as far as 2.6.16, when madvise(,,MADV_REMOVE) came in - or might
not, since much has changed, with i_mmap_mutex a spinlock before 3.0.
Anyone running trinity on 3.0 and earlier? I don't think we need care.
Signed-off-by: Hugh Dickins <hughd@google.com>
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Tested-by: Sasha Levin <sasha.levin@oracle.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Lukas Czerner <lczerner@redhat.com>
Cc: Dave Jones <davej@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
parent f159cc25
@@ -82,6 +82,7 @@ static struct vfsmount *shm_mnt;
  * a time): we would prefer not to enlarge the shmem inode just for that.
  */
 struct shmem_falloc {
+	wait_queue_head_t *waitq; /* faults into hole wait for punch to end */
 	pgoff_t start;		/* start of range currently being fallocated */
 	pgoff_t next;		/* the next page offset to be fallocated */
 };
@@ -1074,37 +1075,57 @@ static int shmem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 	/*
 	 * Trinity finds that probing a hole which tmpfs is punching can
 	 * prevent the hole-punch from ever completing: which in turn
 	 * locks writers out with its hold on i_mutex.  So refrain from
-	 * faulting pages into the hole while it's being punched, and
-	 * wait on i_mutex to be released if vmf->flags permits.
+	 * faulting pages into the hole while it's being punched.  Although
+	 * shmem_truncate_range() does remove the additions, it may be unable to
+	 * keep up, as each new page needs its own unmap_mapping_range() call,
+	 * and the i_mmap tree grows ever slower to scan if new vmas are added.
+	 *
+	 * It does not matter if we sometimes reach this check just before the
+	 * hole-punch begins, so that one fault then races with the punch:
+	 * we just need to make racing faults a rare case.
+	 *
+	 * The implementation below would be much simpler if we just used a
+	 * standard mutex or completion: but we cannot take i_mutex in fault,
+	 * and bloating every shmem inode for this unlikely case would be sad.
 	 */
 	if (unlikely(inode->i_private)) {
 		struct shmem_falloc *shmem_falloc;
 		spin_lock(&inode->i_lock);
 		shmem_falloc = inode->i_private;
-		if (!shmem_falloc ||
-		    vmf->pgoff < shmem_falloc->start ||
-		    vmf->pgoff >= shmem_falloc->next)
-			shmem_falloc = NULL;
-		spin_unlock(&inode->i_lock);
-		/*
-		 * i_lock has protected us from taking shmem_falloc seriously
-		 * once return from vmtruncate_range() went back up that stack.
-		 * i_lock does not serialize with i_mutex at all, but it does
-		 * not matter if sometimes we wait unnecessarily, or sometimes
-		 * miss out on waiting: we just need to make those cases rare.
-		 */
-		if (shmem_falloc) {
+		if (shmem_falloc &&
+		    vmf->pgoff >= shmem_falloc->start &&
+		    vmf->pgoff < shmem_falloc->next) {
+			wait_queue_head_t *shmem_falloc_waitq;
+			DEFINE_WAIT(shmem_fault_wait);
+
+			ret = VM_FAULT_NOPAGE;
 			if ((vmf->flags & FAULT_FLAG_ALLOW_RETRY) &&
 			   !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) {
+				/* It's polite to up mmap_sem if we can */
 				up_read(&vma->vm_mm->mmap_sem);
-				mutex_lock(&inode->i_mutex);
-				mutex_unlock(&inode->i_mutex);
-				return VM_FAULT_RETRY;
+				ret = VM_FAULT_RETRY;
 			}
-			/* cond_resched? Leave that to GUP or return to user */
-			return VM_FAULT_NOPAGE;
+
+			shmem_falloc_waitq = shmem_falloc->waitq;
+			prepare_to_wait(shmem_falloc_waitq, &shmem_fault_wait,
+					TASK_UNINTERRUPTIBLE);
+			spin_unlock(&inode->i_lock);
+			schedule();
+
+			/*
+			 * shmem_falloc_waitq points into the vmtruncate_range()
+			 * stack of the hole-punching task: shmem_falloc_waitq
+			 * is usually invalid by the time we reach here, but
+			 * finish_wait() does not dereference it in that case;
+			 * though i_lock needed lest racing with wake_up_all().
+			 */
+			spin_lock(&inode->i_lock);
+			finish_wait(shmem_falloc_waitq, &shmem_fault_wait);
+			spin_unlock(&inode->i_lock);
+			return ret;
 		}
+		spin_unlock(&inode->i_lock);
 	}
 
 	error = shmem_getpage(inode, vmf->pgoff, &vmf->page, SGP_CACHE, &ret);
@@ -1135,7 +1156,9 @@ int vmtruncate_range(struct inode *inode, loff_t lstart, loff_t lend)
 		struct address_space *mapping = inode->i_mapping;
 		loff_t unmap_start = round_up(lstart, PAGE_SIZE);
 		loff_t unmap_end = round_down(1 + lend, PAGE_SIZE) - 1;
+		DECLARE_WAIT_QUEUE_HEAD_ONSTACK(shmem_falloc_waitq);
 
+		shmem_falloc.waitq = &shmem_falloc_waitq;
 		shmem_falloc.start = unmap_start >> PAGE_SHIFT;
 		shmem_falloc.next = (unmap_end + 1) >> PAGE_SHIFT;
 		spin_lock(&inode->i_lock);
@@ -1150,6 +1173,7 @@ int vmtruncate_range(struct inode *inode, loff_t lstart, loff_t lend)
 
 		spin_lock(&inode->i_lock);
 		inode->i_private = NULL;
+		wake_up_all(&shmem_falloc_waitq);
 		spin_unlock(&inode->i_lock);
 	}
 	mutex_unlock(&inode->i_mutex);