    x86/mce: Avoid infinite loop for copy from user recovery · 81065b35
    Tony Luck authored
    There are two cases for machine check recovery:
    
    1) The machine check was triggered by ring3 (application) code.
       This is the simpler case. The machine check handler simply queues
       work to be executed on return to user. That code unmaps the page
       from all users and arranges to send a SIGBUS to the task that
       triggered the poison.
    
    2) The machine check was triggered in kernel code that is covered by
       an exception table entry. In this case the machine check handler
       still queues a work entry to unmap the page, etc. but this will
       not be called right away because the #MC handler returns to the
       fix up code address in the exception table entry.
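
    In both cases the queued work boils down to something like the following
    (a simplified sketch of kill_me_maybe() from
    arch/x86/kernel/cpu/mce/core.c, not the literal code; the real function
    handles more corner cases):

        static void kill_me_maybe(struct callback_head *cb)
        {
                struct task_struct *p = container_of(cb, struct task_struct, mce_kill_me);

                /*
                 * Unmap the poisoned page from all users; this may itself
                 * queue a SIGBUS for the tasks that had it mapped.
                 */
                if (!memory_failure(p->mce_addr >> PAGE_SHIFT,
                                    MF_ACTION_REQUIRED | MF_MUST_KILL))
                        return;

                /* Recovery failed: make sure the task that hit the poison dies */
                force_sig_mceerr(BUS_MCEERR_AR, (void __user *)p->mce_addr, PAGE_SHIFT);
        }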
    
    Problems occur if the kernel triggers another machine check before the
    return to user processes the first queued work item.
    
    Specifically, the work is queued using the ->mce_kill_me callback
    structure in the task struct for the current thread. Attempting to queue
    a second work item using this same callback results in a loop in the
    linked list of work functions to call. So when the kernel does return to
    user, it enters an infinite loop processing the same entry forever.
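
    To see why: task_work_add() pushes the callback_head onto a singly linked
    stack with "work->next = head; head = work". A minimal user-space sketch
    of that same manipulation (the names below are made up for illustration
    only) shows that the second add makes the node point at itself:

        #include <assert.h>

        struct cb { struct cb *next; };

        static struct cb *head;

        /* Same push pattern that task_work_add() uses */
        static void push(struct cb *work)
        {
                work->next = head;
                head = work;
        }

        int main(void)
        {
                static struct cb mce_kill_me;   /* one node per task, like current->mce_kill_me */

                push(&mce_kill_me);
                push(&mce_kill_me);             /* second #MC before the task_work ran */

                /* The node now points to itself, so any list walker loops forever */
                assert(mce_kill_me.next == &mce_kill_me);
                return 0;
        }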
    
    There are some legitimate scenarios where the kernel may take a second
    machine check before returning to the user.
    
    1) Some code (e.g. futex) first tries a get_user() with page faults
       disabled. If this fails, the code retries with page faults enabled,
       expecting that handling the resulting page fault will fix the problem
       (see the sketch after this list).
    
    2) Copy from user code retries a copy in byte-at-a-time mode to check
       whether any additional bytes can be copied.
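
    For example, the first pattern looks roughly like this (modeled on the
    futex code; read_user_value() is an illustrative name, not an actual
    kernel helper, and the real callers drop their locks before the retry):

        static int read_user_value(u32 __user *uaddr, u32 *val)
        {
                int ret;

                /* First attempt: locks may be held, so page faults are disabled */
                pagefault_disable();
                ret = __get_user(*val, uaddr);
                pagefault_enable();

                if (!ret)
                        return 0;

                /*
                 * Retry with page faults enabled, expecting the fault handler
                 * to fix things up. If the page is poisoned, this access takes
                 * a second machine check instead.
                 */
                return get_user(*val, uaddr);
        }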
    
    On the other side of the fence are some bad drivers that do not check
    the return value from individual get_user() calls and may access
    multiple user addresses without noticing that some/all calls have
    failed.
    
    Fix by adding a counter (current->mce_count) to keep track of repeated
    machine checks before task_work() is called. The first machine check saves
    the address information and calls task_work_add(). Subsequent machine
    checks taken before that task_work callback is executed check that the
    address is in the same page as the first machine check (since the callback
    will offline exactly one page).
    
    The expected worst case is four machine checks before moving on (e.g. one
    user access with page faults disabled, then a repeat to the same address
    with page faults enabled, and then that same pair again while copying the
    tail bytes byte-at-a-time). Just in case there is some code that loops
    forever, enforce a limit of 10.
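
    Concretely, the queuing path ends up shaped roughly like this (a
    simplified sketch of the idea, not the verbatim diff):

        static void queue_task_work(struct mce *m, char *msg, int kill_current_task)
        {
                int count = ++current->mce_count;

                /* First machine check: save the details and pick the callback */
                if (count == 1) {
                        current->mce_addr = m->addr;
                        current->mce_kill_me.func = kill_current_task ? kill_me_now
                                                                      : kill_me_maybe;
                }

                /* Some code path keeps touching poison in a loop: give up */
                if (count > 10)
                        mce_panic("Too many consecutive machine checks while accessing user data",
                                  m, msg);

                /* The callback offlines exactly one page, so later #MCs must hit that page */
                if (count > 1 && (current->mce_addr >> PAGE_SHIFT) != (m->addr >> PAGE_SHIFT))
                        mce_panic("Consecutive machine checks to different user pages", m, msg);

                /* Queue the (single per task) callback_head only once */
                if (count > 1)
                        return;

                task_work_add(current, &current->mce_kill_me, TWA_RESUME);
        }

    When the callback eventually runs it resets current->mce_count to zero, so
    the next user access that hits poison starts counting from scratch.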
    
     [ bp: Massage commit message, drop noinstr, fix typo, extend panic
       messages. ]
    
    Fixes: 5567d11c ("x86/mce: Send #MC singal from task work")
    Signed-off-by: Tony Luck <tony.luck@intel.com>
    Signed-off-by: Borislav Petkov <bp@suse.de>
    Cc: <stable@vger.kernel.org>
    Link: https://lkml.kernel.org/r/YT/IJ9ziLqmtqEPu@agluck-desk2.amr.corp.intel.com