1. 08 Mar, 2021 1 commit
    • x86/stackprotector/32: Make the canary into a regular percpu variable · 3fb0fdb3
      Andy Lutomirski authored
      On 32-bit kernels, the stackprotector canary is quite nasty -- it is
      stored at %gs:(20), which is nasty because 32-bit kernels use %fs for
      percpu storage.  It's even nastier because it means that whether %gs
      contains userspace state or kernel state while running kernel code
      depends on whether stackprotector is enabled (this is
      CONFIG_X86_32_LAZY_GS), and this setting radically changes the way
      that segment selectors work.  Supporting both variants is a
      maintenance and testing mess.
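
      For a concrete picture (a sketch of typical i386 codegen, not
      taken from the commit), a stack-protected function reads the
      canary through %gs even though every percpu access in the same
      function goes through %fs:

          /*
           * prologue:  movl %gs:20, %eax    - load the canary
           * body:      movl %fs:var, %ecx   - percpu data lives in %fs
           * epilogue:  xorl %gs:20, %edx    - recheck the canary;
           *            jne  ...             - on mismatch, branch to a
           *                                   __stack_chk_fail() call
           */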
      
      Merely rearranging so that percpu and the stack canary
      share the same segment would be messy as the 32-bit percpu address
      layout isn't currently compatible with putting a variable at a fixed
      offset.
      
      Fortunately, GCC 8.1 added options that allow the stack canary to be
      accessed as %fs:__stack_chk_guard, effectively turning it into an ordinary
      percpu variable.  This lets us get rid of all of the code to manage the
      stack canary GDT descriptor and the CONFIG_X86_32_LAZY_GS mess.
      
      (That name is special.  We could use any symbol we want for the
       %fs-relative mode, but for CONFIG_SMP=n, gcc refuses to let us use any
       name other than __stack_chk_guard.)
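
      The options in question are -mstack-protector-guard-reg=fs and
      -mstack-protector-guard-symbol=__stack_chk_guard. Once those are
      passed, the canary is declared like any other percpu variable; a
      minimal sketch (the commit's actual file layout may differ):

          #include <linux/percpu.h>

          /* the canary, now just another percpu variable; the compiler
           * emits %fs:__stack_chk_guard for every access to it */
          DECLARE_PER_CPU(unsigned long, __stack_chk_guard);

          DEFINE_PER_CPU(unsigned long, __stack_chk_guard);
          EXPORT_PER_CPU_SYMBOL(__stack_chk_guard);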
      
      Forcibly disable stackprotector on older compilers that don't support
      the new options and turn the stack canary into a percpu variable. The
      "lazy GS" approach is now used for all 32-bit configurations.
      
      This also makes load_gs_index() work on 32-bit kernels. On 64-bit kernels,
      it loads the GS selector and updates the user GSBASE accordingly. (This
      is unchanged.) On 32-bit kernels, it loads the GS selector and updates
      GSBASE, which is now always the user base. This means that the overall
      effect is the same on 32-bit and 64-bit, which avoids some ifdeffery.
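
      A minimal sketch of what the 32-bit variant can look like
      (assuming the kernel's loadsegment() helper; the 64-bit version
      is untouched):

          #ifdef CONFIG_X86_32
          static inline void load_gs_index(unsigned int selector)
          {
                  /* reload %gs; GSBASE is now always the user base */
                  loadsegment(gs, selector);
          }
          #endif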
      
       [ bp: Massage commit message. ]
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Link: https://lkml.kernel.org/r/c0ff7dba14041c7e5d1cae5d4df052f03759bef3.1613243844.git.luto@kernel.org
  2. 06 Mar, 2021 4 commits
  3. 05 Mar, 2021 33 commits
  4. 04 Mar, 2021 2 commits
    • kernel: provide create_io_thread() helper · cc440e87
      Jens Axboe authored
      Provide a generic helper for setting up an io_uring worker. Returns a
      task_struct so that the caller can do whatever setup is needed, then call
      wake_up_new_task() to kick it into gear.
      
      Add a kernel_clone_args member, io_thread, which tells copy_process() to
      mark the task with PF_IO_WORKER.
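
      A sketch of the intended call pattern (io_thread_fn and data are
      hypothetical; create_io_thread() returns an ERR_PTR on failure):

          struct task_struct *tsk;

          tsk = create_io_thread(io_thread_fn, data, NUMA_NO_NODE);
          if (IS_ERR(tsk))
                  return PTR_ERR(tsk);
          /* caller-specific setup on the new task goes here */
          wake_up_new_task(tsk);      /* kick the worker into gear */
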
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • io_uring: reliably cancel linked timeouts · dd59a3d5
      Pavel Begunkov authored
      Linked timeouts are fired asynchronously (i.e. from soft-irq
      context) and use the generic cancellation paths to do their work,
      including poking into io-wq. The problem is that accessing
      tctx->io_wq is racy: io_uring_task_cancel() and others may be
      running at that exact moment. Mark linked timeouts with
      REQ_F_INFLIGHT for now, making sure there are no outstanding
      timeouts left before io-wq destruction.
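
      Roughly, the fix marks the linked timeout request at setup time
      (a sketch of the idea, not the upstream diff; the helper name is
      hypothetical):

          /* hypothetical helper: flag the request so task/io-wq
           * teardown waits for it to complete before tctx->io_wq
           * goes away */
          static void io_mark_inflight(struct io_kiocb *req)
          {
                  req->flags |= REQ_F_INFLIGHT;
          }
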
      Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>