1. 21 Dec, 2020 4 commits
  2. 08 Dec, 2020 5 commits
  3. 12 Nov, 2020 3 commits
  4. 28 Oct, 2020 1 commit
  5. 27 Oct, 2020 7 commits
    • ARM: 9017/2: Enable KASan for ARM · 42101571
      Linus Walleij authored
      This patch enables the kernel address sanitizer for ARM. XIP_KERNEL
      has not been tested and is therefore not allowed for now.
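
      The enablement itself boils down to a Kconfig selection; a hedged
      sketch of what it plausibly looks like (the guard condition is
      inferred from the text above, not quoted from the patch):

        config ARM
                select HAVE_ARCH_KASAN if MMU && !XIP_KERNEL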
      
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: kasan-dev@googlegroups.com
      Acked-by: Dmitry Vyukov <dvyukov@google.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
      Tested-by: Florian Fainelli <f.fainelli@gmail.com> # Brahma SoCs
      Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # i.MX6Q
      Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
    • ARM: 9016/2: Initialize the mapping of KASan shadow memory · 5615f69b
      Linus Walleij authored
      This patch initializes the KASan shadow region's page table and memory.
      KASan initialization happens in two stages:
      
      1. At the early boot stage the whole shadow region is mapped to just
         one physical page (kasan_zero_page). This is done by the function
         kasan_early_init, which is called from __mmap_switched (arch/arm/
         kernel/head-common.S).
      
      2. After paging_init has been called, we use kasan_zero_page as the
         zero shadow for memory that KASan does not need to track, and we
         allocate new shadow space for the memory that KASan does need to
         track. This is done by the function kasan_init, which is called
         from setup_arch.
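
      As an illustration, here is a minimal, self-contained toy model in C
      of the two stages (the pointer-array "page table" and every name in
      it are invented for this sketch; it is not the kernel implementation):

        #include <assert.h>
        #include <stdlib.h>

        #define PAGE_SIZE    4096
        #define SHADOW_PAGES 64

        static unsigned char zero_page[PAGE_SIZE];      /* "kasan_zero_page" */
        static unsigned char *shadow_pte[SHADOW_PAGES]; /* toy "page table"  */

        /* Stage 1: every shadow entry points at the shared zero page,
         * so any shadow load reads 0 ("fully accessible"). */
        static void early_init(void)
        {
            for (int i = 0; i < SHADOW_PAGES; i++)
                shadow_pte[i] = zero_page;
        }

        /* Stage 2: give the tracked range private, writable shadow pages;
         * untracked ranges keep the shared zero page. */
        static void late_init(int first, int last)
        {
            for (int i = first; i <= last; i++)
                shadow_pte[i] = calloc(1, PAGE_SIZE);
        }

        int main(void)
        {
            early_init();
            late_init(8, 15);
            assert(shadow_pte[0] == zero_page); /* untracked */
            assert(shadow_pte[8] != zero_page); /* tracked   */
            return 0;
        }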
      
      When using KASan we also need to increase the THREAD_SIZE_ORDER
      from 1 to 2, as the extra calls for shadow memory use quite a bit
      of stack.
      
      As we need to make a temporary copy of the PGD when setting up
      shadow memory, we create a helpful PGD_SIZE definition for both
      LPAE and non-LPAE setups.
      
      The KASan core code unconditionally calls pud_populate() so this
      needs to be changed from BUG() to do {} while (0) when building
      with KASan enabled.
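
      Taken together, the three build-time changes above could look roughly
      like the following header fragments (a hedged sketch, not a verbatim
      diff of the patch; the exact PGD_SIZE expressions are assumptions,
      the rest follows the text above):

        /* Deeper call chains for shadow memory need a bigger stack. */
        #ifdef CONFIG_KASAN
        #define THREAD_SIZE_ORDER  2
        #else
        #define THREAD_SIZE_ORDER  1
        #endif

        /* One PGD_SIZE for both page-table formats, so the shadow setup
         * code can make its temporary copy of the PGD either way
         * (assumed values). */
        #ifdef CONFIG_ARM_LPAE
        #define PGD_SIZE  (PTRS_PER_PGD * sizeof(pgd_t))
        #else
        #define PGD_SIZE  (PAGE_SIZE << 2)
        #endif

        /* The KASan core calls pud_populate() unconditionally, so it must
         * be a no-op instead of a hard BUG() when KASan is enabled. */
        #ifdef CONFIG_KASAN
        #define pud_populate(mm, pmd, pte)  do { } while (0)
        #else
        #define pud_populate(mm, pmd, pte)  BUG()
        #endif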
      
      After the initial development by Andrey Ryabinin several modifications
      have been made to this code:
      
      Abbott Liu <liuwenliang@huawei.com>
      - Add ARM LPAE support: if LPAE is enabled, the KASan shadow region's
        mapping table needs to be copied in the pgd_alloc() function.
      - Move kasan_pte_populate, kasan_pmd_populate, kasan_pud_populate and
        kasan_pgd_populate from the .meminit.text section to the .init.text
        section. Reported by Florian Fainelli <f.fainelli@gmail.com>
      
      Linus Walleij <linus.walleij@linaro.org>:
      - Drop the custom manipulation of TTBR0 and just use
        cpu_switch_mm() to switch the pgd table.
      - Adapt to handle 4th level page table folding.
      - Rewrite the entire page directory and page entry initialization
        sequence to be recursive, based on ARM64's kasan_init.c.
      
      Ard Biesheuvel <ardb@kernel.org>:
      - Necessary underlying fixes.
      - Crucial bug fixes to the memory set-up code.
      Co-developed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Co-developed-by: Abbott Liu <liuwenliang@huawei.com>
      Co-developed-by: Ard Biesheuvel <ardb@kernel.org>
      
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: kasan-dev@googlegroups.com
      Cc: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: Mike Rapoport <rppt@linux.ibm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
      Tested-by: Florian Fainelli <f.fainelli@gmail.com> # Brahma SoCs
      Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # i.MX6Q
      Reported-by: Russell King - ARM Linux <rmk+kernel@armlinux.org.uk>
      Reported-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
    • ARM: 9015/2: Define the virtual space of KASan's shadow region · c12366ba
      Linus Walleij authored
      Define KASAN_SHADOW_OFFSET, KASAN_SHADOW_START and KASAN_SHADOW_END for
      the Arm kernel address sanitizer. We are "stealing" lowmem (the 4GB
      addressable by a 32-bit architecture) out of the virtual address
      space to use as shadow memory for KASan as follows:
      
       +----+ 0xffffffff
       |    |
       |    | |-> Static kernel image (vmlinux) BSS and page table
       |    |/
       +----+ PAGE_OFFSET
       |    |
       |    | |->  Loadable kernel modules virtual address space area
       |    |/
       +----+ MODULES_VADDR = KASAN_SHADOW_END
       |    |
       |    | |-> The shadow area of kernel virtual address.
       |    |/
       +----+->  TASK_SIZE (start of kernel space) = KASAN_SHADOW_START the
       |    |   shadow address of MODULES_VADDR
       |    | |
       |    | |
       |    | |-> The user space area in lowmem. The kernel address
       |    | |   sanitizer do not use this space, nor does it map it.
       |    | |
       |    | |
       |    | |
       |    | |
       |    |/
       ------ 0
      
      0 .. TASK_SIZE is the memory that can be used by shared
      userspace/kernelspace. It is used for userspace processes and for
      passing parameters and memory buffers in system calls etc. We do not
      need to shadow this area.
      
      KASAN_SHADOW_START:
       This value begins at MODULES_VADDR's shadow address. It is the
       start of kernel virtual space. Since we have modules to load, we
       need to also cover that area with shadow memory, so we can find
       memory bugs in modules.
      
      KASAN_SHADOW_END
       This value is the shadow address of 0x100000000: the mapping that
       would come after the end of kernel memory at 0xffffffff. It is the
       end of the kernel address sanitizer shadow area, and also the start
       of the module area.
      
      KASAN_SHADOW_OFFSET:
       This value is used to map an address to the corresponding shadow
       address by the following formula:
      
         shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;
      
       As you would expect, >> 3 is equal to dividing by 8, meaning each
       byte in the shadow memory covers 8 bytes of kernel memory, so one
       bit of shadow memory is used per byte of kernel memory.
      
       The KASAN_SHADOW_OFFSET is provided in a Kconfig option depending
       on the VMSPLIT layout of the system: the kernel and userspace can
       split up lowmem in different ways according to needs, so we calculate
       the shadow offset depending on this.
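
      A minimal sketch of this mapping in C, using the values of the
      default 3G/1G split derived further below (illustrative only, not
      kernel code):

        #include <assert.h>
        #include <stdint.h>

        #define KASAN_SHADOW_OFFSET 0x9f000000UL /* default 3G/1G split */

        /* One shadow byte covers 8 bytes of kernel memory. */
        static uintptr_t kasan_mem_to_shadow(uintptr_t addr)
        {
            return (addr >> 3) + KASAN_SHADOW_OFFSET;
        }

        int main(void)
        {
            /* MODULES_VADDR maps to the first shadow byte ... */
            assert(kasan_mem_to_shadow(0xbf000000UL) == 0xb6e00000UL);
            /* ... and the top of memory maps to the last one. */
            assert(kasan_mem_to_shadow(0xffffffffUL) == 0xbeffffffUL);
            return 0;
        }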
      
      When KASan is enabled, the definition of TASK_SIZE is not an 8-bit
      rotated constant, so we need to modify the code that accesses
      TASK_SIZE in the assembly (*.S) files.
      
      The kernel and modules may use different amounts of memory,
      according to the VMSPLIT configuration, which in turn
      determines the PAGE_OFFSET.
      
      We use the following KASAN_SHADOW_OFFSETs depending on how the
      virtual memory is split up:
      
      - 0x1f000000 if we have 1G userspace / 3G kernelspace split:
        - The kernel address space is 3G (0xc0000000)
        - PAGE_OFFSET is then set to 0x40000000 so the kernel static
          image (vmlinux) uses addresses 0x40000000 .. 0xffffffff
        - On top of that we have the MODULES_VADDR which under
          the worst case (using ARM instructions) is
          PAGE_OFFSET - 16M (0x01000000) = 0x3f000000
          so the modules use addresses 0x3f000000 .. 0x3fffffff
        - So the addresses 0x3f000000 .. 0xffffffff need to be
          covered with shadow memory. That is 0xc1000000 bytes
          of memory.
        - 1/8 of that is needed for its shadow memory, so
          0x18200000 bytes of shadow memory is needed. We
          "steal" that from the remaining lowmem.
        - The KASAN_SHADOW_START becomes 0x26e00000, to
          KASAN_SHADOW_END at 0x3effffff.
        - Now we can calculate the KASAN_SHADOW_OFFSET for any
          kernel address as 0x3f000000 needs to map to the first
          byte of shadow memory and 0xffffffff needs to map to
          the last byte of shadow memory. Since:
          SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
          0x26e00000 = (0x3f000000 >> 3) + KASAN_SHADOW_OFFSET
          KASAN_SHADOW_OFFSET = 0x26e00000 - (0x3f000000 >> 3)
          KASAN_SHADOW_OFFSET = 0x26e00000 - 0x07e00000
          KASAN_SHADOW_OFFSET = 0x1f000000
      
      - 0x5f000000 if we have 2G userspace / 2G kernelspace split:
        - The kernel space is 2G (0x80000000)
        - PAGE_OFFSET is set to 0x80000000 so the kernel static
          image uses 0x80000000 .. 0xffffffff.
        - On top of that we have the MODULES_VADDR which under
          the worst case (using ARM instructions) is
          PAGE_OFFSET - 16M (0x01000000) = 0x7f000000
          so the modules use addresses 0x7f000000 .. 0x7fffffff
        - So the addresses 0x7f000000 .. 0xffffffff need to be
          covered with shadow memory. That is 0x81000000 bytes
          of memory.
        - 1/8 of that is needed for its shadow memory, so
          0x10200000 bytes of shadow memory is needed. We
          "steal" that from the remaining lowmem.
        - The KASAN_SHADOW_START becomes 0x6ee00000, to
          KASAN_SHADOW_END at 0x7effffff.
        - Now we can calculate the KASAN_SHADOW_OFFSET for any
          kernel address as 0x7f000000 needs to map to the first
          byte of shadow memory and 0xffffffff needs to map to
          the last byte of shadow memory. Since:
          SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
          0x6ee00000 = (0x7f000000 >> 3) + KASAN_SHADOW_OFFSET
          KASAN_SHADOW_OFFSET = 0x6ee00000 - (0x7f000000 >> 3)
          KASAN_SHADOW_OFFSET = 0x6ee00000 - 0x0fe00000
          KASAN_SHADOW_OFFSET = 0x5f000000
      
      - 0x9f000000 if we have 3G userspace / 1G kernelspace split,
        and this is the default split for ARM:
        - The kernel address space is 1GB (0x40000000)
        - PAGE_OFFSET is set to 0xc0000000 so the kernel static
          image uses 0xc0000000 .. 0xffffffff.
        - On top of that we have the MODULES_VADDR which under
          the worst case (using ARM instructions) is
          PAGE_OFFSET - 16M (0x01000000) = 0xbf000000
          so the modules use addresses 0xbf000000 .. 0xbfffffff
        - So the addresses 0xbf000000 .. 0xffffffff need to be
          covered with shadow memory. That is 0x41000000 bytes
          of memory.
        - 1/8 of that is needed for its shadow memory, so
          0x08200000 bytes of shadow memory is needed. We
          "steal" that from the remaining lowmem.
        - The KASAN_SHADOW_START becomes 0xb6e00000, to
          KASAN_SHADOW_END at 0xbeffffff.
        - Now we can calculate the KASAN_SHADOW_OFFSET for any
          kernel address as 0xbf000000 needs to map to the first
          byte of shadow memory and 0xffffffff needs to map to
          the last byte of shadow memory. Since:
          SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
          0xb6e00000 = (0xbf000000 >> 3) + KASAN_SHADOW_OFFSET
          KASAN_SHADOW_OFFSET = 0xb6e00000 - (0xbf000000 >> 3)
          KASAN_SHADOW_OFFSET = 0xb6e00000 - 0x17e00000
          KASAN_SHADOW_OFFSET = 0x9f000000
      
      - 0x8f000000 if we have 3G userspace / 1G kernelspace with
        full 1 GB low memory (VMSPLIT_3G_OPT):
        - The kernel address space is 1GB (0x40000000)
        - PAGE_OFFSET is set to 0xb0000000 so the kernel static
          image uses 0xb0000000 .. 0xffffffff.
        - On top of that we have the MODULES_VADDR which under
          the worst case (using ARM instructions) is
          PAGE_OFFSET - 16M (0x01000000) = 0xaf000000
          so the modules use addresses 0xaf000000 .. 0xafffffff
        - So the addresses 0xaf000000 .. 0xffffffff need to be
          covered with shadow memory. That is 0x51000000 bytes
          of memory.
        - 1/8 of that is needed for its shadow memory, so
          0x0a200000 bytes of shadow memory is needed. We
          "steal" that from the remaining lowmem.
        - The KASAN_SHADOW_START becomes 0xa4e00000, to
          KASAN_SHADOW_END at 0xaeffffff.
        - Now we can calculate the KASAN_SHADOW_OFFSET for any
          kernel address as 0xaf000000 needs to map to the first
          byte of shadow memory and 0xffffffff needs to map to
          the last byte of shadow memory. Since:
          SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
          0xa4e00000 = (0xaf000000 >> 3) + KASAN_SHADOW_OFFSET
          KASAN_SHADOW_OFFSET = 0xa4e00000 - (0xaf000000 >> 3)
          KASAN_SHADOW_OFFSET = 0xa4e00000 - 0x15e00000
          KASAN_SHADOW_OFFSET = 0x8f000000
      
      - The default value of 0xffffffff for KASAN_SHADOW_OFFSET
        is an error value. We should always match one of the
        above shadow offsets.
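
      All four derivations follow the same arithmetic, which the short C
      program below reproduces (the MODULES_VADDR values are taken from
      the list above; illustrative only, not kernel code):

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            /* MODULES_VADDR for the 1G/3G, 2G/2G, 3G/1G, 3G_OPT splits */
            uint64_t mods[] = { 0x3f000000, 0x7f000000, 0xbf000000, 0xaf000000 };

            for (int i = 0; i < 4; i++) {
                uint64_t size    = 0x100000000ULL - mods[i]; /* region to shadow   */
                uint64_t shsize  = size >> 3;                /* 1 byte per 8 bytes */
                uint64_t shend   = mods[i];                  /* just below modules */
                uint64_t shstart = shend - shsize;
                uint64_t offset  = shstart - (mods[i] >> 3);

                printf("MODULES_VADDR=%#010llx shadow=[%#010llx..%#010llx) offset=%#010llx\n",
                       (unsigned long long)mods[i], (unsigned long long)shstart,
                       (unsigned long long)shend, (unsigned long long)offset);
            }
            return 0;
        }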
      
      When we do this, TASK_SIZE will sometimes take on values that do
      not fit into the immediate field of a mov assembly instruction.
      To account for this, we need to rewrite some assembly that uses
      TASK_SIZE, like this:
      
      -       mov     r1, #TASK_SIZE
      +       ldr     r1, =TASK_SIZE
      
      or
      
      -       cmp     r4, #TASK_SIZE
      +       ldr     r0, =TASK_SIZE
      +       cmp     r4, r0
      
      This is done to avoid an immediate #TASK_SIZE that would need to
      fit into a limited number of bits.
      
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: kasan-dev@googlegroups.com
      Cc: Mike Rapoport <rppt@linux.ibm.com>
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
      Tested-by: Florian Fainelli <f.fainelli@gmail.com> # Brahma SoCs
      Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # i.MX6Q
      Reported-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
    • ARM: 9014/2: Replace string mem* functions for KASan · d6d51a96
      Linus Walleij authored
      Functions like memset()/memmove()/memcpy() do a lot of memory
      accesses.
      
      If a bad pointer is passed to one of these functions it is important
      to catch this. Compiler instrumentation cannot do this since these
      functions are written in assembly.
      
      KASan replaces these memory functions with instrumented variants.
      
      The original functions are declared as weak symbols so that
      the strong definitions in mm/kasan/kasan.c can replace them.
      
      The original functions have aliases with a '__' prefix in their
      name, so we can call the non-instrumented variant if needed.
      
      We must use __memcpy()/__memset() in place of memcpy()/memset()
      when we copy .data to RAM and when we clear .bss, because
      kasan_early_init cannot be called before the initialization of
      .data and .bss.
      
      For the kernel compression and EFI libstub's custom string
      libraries we need a special quirk: even if these are built
      without KASan enabled, they rely on the global headers for their
      custom string libraries, which means that e.g. memcpy()
      will be defined to __memcpy() and we get link failures.
      Since these implementations are written in C rather than
      assembly, we use e.g. __alias(memcpy) to redirect any
      users back to the local implementation.
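
      A hedged C sketch of the linkage pattern (a kernel-style fragment,
      simplified from what the text above describes; compile freestanding,
      since the names collide with libc):

        #include <stddef.h>

        /* The raw, uninstrumented implementation keeps the __ prefix. */
        void *__memcpy(void *dest, const void *src, size_t n)
        {
            char *d = dest;
            const char *s = src;

            while (n--)
                *d++ = *s++;
            return dest;
        }

        /* The plain name is only a weak alias, so the strong instrumented
         * definition in mm/kasan/ wins at link time when KASan is on. */
        void *memcpy(void *dest, const void *src, size_t n)
            __attribute__((weak, alias("__memcpy")));

        /* Decompressor/EFI-stub quirk (separate file): there the headers
         * have already rewritten memcpy to __memcpy, so that file instead
         * declares
         *     void *__memcpy(void *d, const void *s, size_t n) __alias(memcpy);
         * to point the __-prefixed name back at its local implementation. */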
      
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: kasan-dev@googlegroups.com
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
      Tested-by: Florian Fainelli <f.fainelli@gmail.com> # Brahma SoCs
      Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # i.MX6Q
      Reported-by: Russell King - ARM Linux <rmk+kernel@armlinux.org.uk>
      Signed-off-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
      Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
    • ARM: 9013/2: Disable KASan instrumentation for some code · d5d44e7e
      Linus Walleij authored
      Disable instrumentation for arch/arm/boot/compressed/*
      since that code is executed before the kernel has even
      set up its mappings and is definitely out of scope for
      KASan.
      
      Disable instrumentation of arch/arm/vdso/* because that code
      is not linked with the kernel image, so the KASan management
      code would fail to link.
      
      Disable instrumentation of arch/arm/mm/physaddr.c. See commit
      ec6d06ef ("arm64: Add support for CONFIG_DEBUG_VIRTUAL")
      for more details.
      
      Disable the KASan check in the function unwind_pop_register(),
      because it does not matter if the KASan check fails when
      unwind_pop_register() reads the stack memory of a task.
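
      In kbuild terms, the directory and file opt-outs above are
      KASAN_SANITIZE settings in the respective Makefiles, roughly like
      this sketch (paths follow the text above); the unwind_pop_register()
      case is instead an in-code annotation, presumably a no-check
      accessor such as READ_ONCE_NOCHECK():

        # arch/arm/boot/compressed/Makefile: runs before mappings exist
        KASAN_SANITIZE := n

        # arch/arm/vdso/Makefile: not linked with the kernel image
        KASAN_SANITIZE := n

        # arch/arm/mm/Makefile: opt out a single object file
        KASAN_SANITIZE_physaddr.o := n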
      
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: kasan-dev@googlegroups.com
      Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
      Tested-by: Ard Biesheuvel <ardb@kernel.org> # QEMU/KVM/mach-virt/LPAE/8G
      Tested-by: Florian Fainelli <f.fainelli@gmail.com> # Brahma SoCs
      Tested-by: Ahmad Fatoum <a.fatoum@pengutronix.de> # i.MX6Q
      Reported-by: Florian Fainelli <f.fainelli@gmail.com>
      Reported-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Abbott Liu <liuwenliang@huawei.com>
      Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
    • ARM: 9012/1: move device tree mapping out of linear region · 7a1be318
      Ard Biesheuvel authored
      On ARM, setting up the linear region is tricky, given the constraints
      around placement and alignment of the memblocks, and how the kernel
      itself as well as the DT are placed in physical memory.
      
      Let's simplify matters a bit by moving the device tree mapping to
      the top of the address space, right between the end of the vmalloc
      region and the start of the fixmap region, and create a read-only
      mapping for it that is independent of the size of the linear region
      and how it is organized.
      
      Since this region was formerly used as a guard region, which will now be
      populated fully on LPAE builds by this read-only mapping (which will
      still be able to function as a guard region for stray writes), bump the
      start of the [underutilized] fixmap region by 512 KB as well, to ensure
      that there is always a proper guard region here. Doing so still leaves
      ample room for the fixmap space, even with NR_CPUS set to its maximum
      value of 32.
      Tested-by: Linus Walleij <linus.walleij@linaro.org>
      Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
      Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
    • ARM: 9011/1: centralize phys-to-virt conversion of DT/ATAGS address · e9a2f8b5
      Ard Biesheuvel authored
      Before moving the DT mapping out of the linear region, let's prepare
      for this change by removing all the phys-to-virt translations of the
      __atags_pointer variable, and perform this translation only once at
      setup time.
      Tested-by: Linus Walleij <linus.walleij@linaro.org>
      Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
      Acked-by: Nicolas Pitre <nico@fluxnic.net>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
  6. 25 Oct, 2020 17 commits
  7. 24 Oct, 2020 3 commits
    • Merge tag 'block-5.10-2020-10-24' of git://git.kernel.dk/linux-block · d7691390
      Linus Torvalds authored
      Pull block fixes from Jens Axboe:
      
       - NVMe pull request from Christoph
           - rdma error handling fixes (Chao Leng)
           - fc error handling and reconnect fixes (James Smart)
           - fix the qid displace when tracing ioctl command (Keith Busch)
           - don't use BLK_MQ_REQ_NOWAIT for passthru (Chaitanya Kulkarni)
           - fix MTDT for passthru (Logan Gunthorpe)
           - blacklist Write Same on more devices (Kai-Heng Feng)
           - fix an uninitialized work struct (zhenwei pi)
      
       - lightnvm out-of-bounds fix (Colin)
      
       - SG allocation leak fix (Doug)
      
       - rnbd fixes (Gioh, Guoqing, Jack)
      
       - zone error translation fixes (Keith)
      
       - kerneldoc markup fix (Mauro)
      
       - zram lockdep fix (Peter)
      
       - Kill unused io_context members (Yufen)
      
       - NUMA memory allocation cleanup (Xianting)
      
       - NBD config wakeup fix (Xiubo)
      
      * tag 'block-5.10-2020-10-24' of git://git.kernel.dk/linux-block: (27 commits)
        block: blk-mq: fix a kernel-doc markup
        nvme-fc: shorten reconnect delay if possible for FC
        nvme-fc: wait for queues to freeze before calling update_hr_hw_queues
        nvme-fc: fix error loop in create_hw_io_queues
        nvme-fc: fix io timeout to abort I/O
        null_blk: use zone status for max active/open
        nvmet: don't use BLK_MQ_REQ_NOWAIT for passthru
        nvmet: cleanup nvmet_passthru_map_sg()
        nvmet: limit passthru MTDS by BIO_MAX_PAGES
        nvmet: fix uninitialized work for zero kato
        nvme-pci: disable Write Zeroes on Sandisk Skyhawk
        nvme: use queuedata for nvme_req_qid
        nvme-rdma: fix crash due to incorrect cqe
        nvme-rdma: fix crash when connect rejected
        block: remove unused members for io_context
        blk-mq: remove the calling of local_memory_node()
        zram: Fix __zram_bvec_{read,write}() locking order
        skd_main: remove unused including <linux/version.h>
        sgl_alloc_order: fix memory leak
        lightnvm: fix out-of-bounds write to array devices->info[]
        ...
    • Merge tag 'io_uring-5.10-2020-10-24' of git://git.kernel.dk/linux-block · af004187
      Linus Torvalds authored
      Pull io_uring fixes from Jens Axboe:
      
       - fsize was missed in previous unification of work flags
      
       - Few fixes cleaning up the flags unification creds cases (Pavel)
      
       - Fix NUMA affinities for completely unplugged/replugged node for io-wq
      
       - Two fallout fixes from the set_fs changes. One local to io_uring, one
         for the splice entry point that io_uring uses.
      
       - Linked timeout fixes (Pavel)
      
       - Removal of ->flush() ->files work-around that we don't need anymore
         with referenced files (Pavel)
      
       - Various cleanups (Pavel)
      
      * tag 'io_uring-5.10-2020-10-24' of git://git.kernel.dk/linux-block:
        splice: change exported internal do_splice() helper to take kernel offset
        io_uring: make loop_rw_iter() use original user supplied pointers
        io_uring: remove req cancel in ->flush()
        io-wq: re-set NUMA node affinities if CPUs come online
        io_uring: don't reuse linked_timeout
        io_uring: unify fsize with def->work_flags
        io_uring: fix racy REQ_F_LINK_TIMEOUT clearing
        io_uring: do poll's hash_node init in common code
        io_uring: inline io_poll_task_handler()
        io_uring: remove extra ->file check in poll prep
        io_uring: make cached_cq_overflow non atomic_t
        io_uring: inline io_fail_links()
        io_uring: kill ref get/drop in personality init
        io_uring: flags-based creds init in queue
    • Merge tag 'libata-5.10-2020-10-24' of git://git.kernel.dk/linux-block · cb6b2897
      Linus Torvalds authored
      Pull libata fixes from Jens Axboe:
       "Two minor libata fixes:
      
         - Fix a DMA boundary mask regression for sata_rcar (Geert)
      
         - kerneldoc markup fix (Mauro)"
      
      * tag 'libata-5.10-2020-10-24' of git://git.kernel.dk/linux-block:
        ata: fix some kernel-doc markups
        ata: sata_rcar: Fix DMA boundary mask