1. 23 Nov, 2020 2 commits
    • powerpc/64s: Fix allnoconfig build since uaccess flush · b6b79dd5
      Stephen Rothwell authored
      Using DECLARE_STATIC_KEY_FALSE needs linux/jump_label.h.
      
      Otherwise the build fails with, e.g.:
      
        arch/powerpc/include/asm/book3s/64/kup-radix.h:66:1: warning: data definition has no type or storage class
           66 | DECLARE_STATIC_KEY_FALSE(uaccess_flush_key);
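      
      A minimal sketch of the fix, assuming nothing beyond what the warning
      shows: include the header that defines the macro before using it.
      
        #include <linux/jump_label.h>   /* defines DECLARE_STATIC_KEY_FALSE() */
      
        DECLARE_STATIC_KEY_FALSE(uaccess_flush_key);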
      
      Fixes: 9a32a7e7 ("powerpc/64s: flush L1D after user accesses")
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      [mpe: Massage change log]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20201123184016.693fe464@canb.auug.org.au
    • Merge tag 'powerpc-cve-2020-4788' into fixes · 962f8e64
      Michael Ellerman authored
      From Daniel's cover letter:
      
      IBM Power9 processors can speculatively operate on data in the L1 cache
      before it has been completely validated, via a way-prediction mechanism. It
      is not possible for an attacker to determine the contents of impermissible
      memory using this method, since these systems implement a combination of
      hardware and software security measures to prevent scenarios where
      protected data could be leaked.
      
      However, these measures don't address the scenario where an attacker induces
      the operating system to speculatively execute instructions using data that
      the attacker controls. This can be used for example to speculatively bypass
      "kernel user access prevention" techniques, as discovered by Anthony
      Steinhauser of Google's Safeside Project. This is not an attack by itself,
      but there is a possibility it could be used in conjunction with
      side-channels or other weaknesses in the privileged code to construct an
      attack.
      
      This issue can be mitigated by flushing the L1 cache between privilege
      boundaries of concern.
      
      This patch series flushes the L1 cache on kernel entry (patch 2) and after the
      kernel performs any user accesses (patch 3). It also adds a self-test and
      performs some related cleanups.
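      
      As a conceptual illustration only, a "displacement" L1D flush amounts to
      reading enough cache lines to evict every set and way. The kernel's real
      flushes are firmware-assisted or hand-written assembly over a dedicated
      per-CPU buffer; the buffer, sizes and C shape below are assumptions.
      
        #include <stddef.h>
      
        /* Conceptual model: touch 2x the L1D size at cache-line stride so that
         * every set/way gets displaced. Not the kernel's actual implementation. */
        static void l1d_displacement_flush(const volatile char *buf,
                                           size_t l1d_size, size_t line_size)
        {
                size_t i;
      
                for (i = 0; i < 2 * l1d_size; i += line_size)
                        (void)buf[i];
        }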
  2. 19 Nov, 2020 9 commits
    • powerpc/64s: rename pnv|pseries_setup_rfi_flush to _setup_security_mitigations · da631f7f
      Daniel Axtens authored
      pseries|pnv_setup_rfi_flush already does the count cache flush setup, and
      we just added entry and uaccess flushes, so the name is no longer very
      accurate. On both platforms we then also immediately set up the STF flush.
      
      Rename them to _setup_security_mitigations and fold the STF flush in.
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • selftests/powerpc: refactor entry and rfi_flush tests · 0d239f3b
      Daniel Axtens authored
      For simplicity in backporting, the original entry_flush test contained
      a lot of duplicated code from the rfi_flush test. De-duplicate that code.
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • selftests/powerpc: entry flush test · 89a83a0c
      Daniel Axtens authored
      Add a test, modelled on the RFI flush test, which counts the number of
      L1D misses during a simple syscall with the entry flush on and off.
      
      For simplicity of backporting, this test duplicates a lot of code from
      rfi_flush. We clean that up in the next patch.
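      
      The measurement idea, sketched with a raw perf_event_open() counter rather
      than the selftest harness (the event encoding and the iteration count are
      illustrative, not copied from the test):
      
        #include <linux/perf_event.h>
        #include <sys/ioctl.h>
        #include <sys/syscall.h>
        #include <unistd.h>
        #include <string.h>
        #include <stdio.h>
      
        int main(void)
        {
                struct perf_event_attr attr;
                long long misses = 0;
                int fd, i;
      
                memset(&attr, 0, sizeof(attr));
                attr.size = sizeof(attr);
                attr.type = PERF_TYPE_HW_CACHE;  /* count L1D read misses */
                attr.config = PERF_COUNT_HW_CACHE_L1D |
                              (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                              (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
                attr.disabled = 1;
      
                fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
                ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
      
                for (i = 0; i < 1000000; i++)    /* the "simple syscall" loop */
                        getppid();
      
                ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
                read(fd, &misses, sizeof(misses));
                printf("L1D misses: %lld\n", misses);
                return 0;
        }
      
      Running this once with the entry flush enabled and once with it disabled
      (via debugfs) gives the two counts the test compares.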
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc: Only include kup-radix.h for 64-bit Book3S · 178d52c6
      Michael Ellerman authored
      In kup.h we currently include kup-radix.h for all 64-bit builds, which
      includes Book3S and Book3E. The latter doesn't make sense, as Book3E
      never uses the Radix MMU.
      
      This has worked up until now, but almost by accident, and the recent
      uaccess flush changes introduced a build breakage on Book3E because of
      the bad structure of the code.
      
      So disentangle things so that we only use kup-radix.h for Book3S. This
      requires some more stubs in kup.h and fixing an include in
      syscall_64.c.
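      
      A sketch of the resulting structure in kup.h; the stub is illustrative,
      as the exact set of stubs isn't spelled out here:
      
        #ifdef CONFIG_PPC_BOOK3S_64
        #include <asm/book3s/64/kup-radix.h>  /* only Book3S can run the Radix MMU */
        #else
        static inline void kuap_check_amr(void) { }  /* stub name assumed */
        #endif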
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/64s: flush L1D after user accesses · 9a32a7e7
      Nicholas Piggin authored
      IBM Power9 processors can speculatively operate on data in the L1 cache
      before it has been completely validated, via a way-prediction mechanism. It
      is not possible for an attacker to determine the contents of impermissible
      memory using this method, since these systems implement a combination of
      hardware and software security measures to prevent scenarios where
      protected data could be leaked.
      
      However, these measures don't address the scenario where an attacker induces
      the operating system to speculatively execute instructions using data that
      the attacker controls. This can be used for example to speculatively bypass
      "kernel user access prevention" techniques, as discovered by Anthony
      Steinhauser of Google's Safeside Project. This is not an attack by itself,
      but there is a possibility it could be used in conjunction with
      side-channels or other weaknesses in the privileged code to construct an
      attack.
      
      This issue can be mitigated by flushing the L1 cache between privilege
      boundaries of concern. This patch flushes the L1 cache after user accesses.
      
      This is part of the fix for CVE-2020-4788.
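      
      A sketch of how the flush is gated in kup-radix.h, with the static key
      letting it be patched out on unaffected CPUs (set_kuap() details elided;
      treat this as a sketch of the shape, not the exact diff):
      
        DECLARE_STATIC_KEY_FALSE(uaccess_flush_key);
      
        /* Runs when the kernel closes a user-access window */
        static inline void prevent_user_access(void __user *to, const void __user *from,
                                               unsigned long size, unsigned long dir)
        {
                set_kuap(AMR_KUAP_BLOCKED);
                if (static_branch_unlikely(&uaccess_flush_key))
                        do_uaccess_flush();
        }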
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/64s: flush L1D on kernel entry · f7964378
      Nicholas Piggin authored
      IBM Power9 processors can speculatively operate on data in the L1 cache
      before it has been completely validated, via a way-prediction mechanism. It
      is not possible for an attacker to determine the contents of impermissible
      memory using this method, since these systems implement a combination of
      hardware and software security measures to prevent scenarios where
      protected data could be leaked.
      
      However, these measures don't address the scenario where an attacker induces
      the operating system to speculatively execute instructions using data that
      the attacker controls. This can be used for example to speculatively bypass
      "kernel user access prevention" techniques, as discovered by Anthony
      Steinhauser of Google's Safeside Project. This is not an attack by itself,
      but there is a possibility it could be used in conjunction with
      side-channels or other weaknesses in the privileged code to construct an
      attack.
      
      This issue can be mitigated by flushing the L1 cache between privilege
      boundaries of concern. This patch flushes the L1 cache on kernel entry.
      
      This is part of the fix for CVE-2020-4788.
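      
      A sketch of the runtime control, following the existing rfi_flush debugfs
      precedent (names mirror that precedent; the exact guards are assumptions):
      
        static int entry_flush_set(void *data, u64 val)
        {
                if (val != 0 && val != 1)
                        return -EINVAL;
                entry_flush_enable(val == 1);  /* patch the flush in or out */
                return 0;
        }
      
        static int entry_flush_get(void *data, u64 *val)
        {
                *val = entry_flush ? 1 : 0;
                return 0;
        }
      
        DEFINE_SIMPLE_ATTRIBUTE(fops_entry_flush, entry_flush_get, entry_flush_set, "%llu\n");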
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • selftests/powerpc: rfi_flush: disable entry flush if present · fcb48454
      Russell Currey authored
      We are about to add an entry flush. The rfi (exit) flush test measures
      the number of L1D flushes over a syscall with the RFI flush enabled and
      disabled. But if the entry flush is also enabled, the effect of enabling
      and disabling the RFI flush is masked.
      
      If there is a debugfs entry for the entry flush, disable it while the RFI
      flush test runs and restore it afterwards, as sketched below.
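      
      A sketch of that logic, using the selftest debugfs helpers (helper names
      and return conventions assumed from the selftest utilities):
      
        int rc, entry_flush_orig;
      
        /* Only touch the entry flush if this kernel exposes it */
        rc = read_debugfs_file("powerpc/entry_flush", &entry_flush_orig);
        if (rc == 0 && entry_flush_orig != 0)
                write_debugfs_file("powerpc/entry_flush", 0);
      
        /* ... measure L1D misses with the RFI flush on and off ... */
      
        if (rc == 0 && entry_flush_orig != 0)
                write_debugfs_file("powerpc/entry_flush", entry_flush_orig);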
      Reported-by: Spoorthy S <spoorts2@in.ibm.com>
      Signed-off-by: Russell Currey <ruscur@russell.cc>
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powernv/memtrace: don't abuse memory hot(un)plug infrastructure for memory allocations · 0bd4b96d
      David Hildenbrand authored
      Let's use alloc_contig_pages() for allocating memory and remove the
      linear mapping manually via arch_remove_linear_mapping(). Mark all pages
      PG_offline, such that they will definitely not get touched - e.g.,
      when hibernating. When freeing memory, try to revert what we did.
      
      The original idea was discussed in:
       https://lkml.kernel.org/r/48340e96-7e6b-736f-9e23-d3111b915b6e@redhat.com
      
      This is similar to CONFIG_DEBUG_PAGEALLOC handling on other
      architectures, whereby only single pages are unmapped from the linear
      mapping. Let's mimic what memory hot(un)plug would do with the linear
      mapping.
      
      We now need MEMORY_HOTPLUG and CONTIG_ALLOC as dependencies. Add a TODO
      that we want to use __GFP_ZERO for clearing once alloc_contig_pages()
      understands that.
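      
      A sketch of the allocation path this describes (flags, loop shape and the
      unmapping call's arguments are illustrative):
      
        /* Allocate the trace buffer without abusing memory hotunplug */
        struct page *page = alloc_contig_pages(nr_pages, GFP_KERNEL | __GFP_THISNODE,
                                               nid, NULL);
        if (!page)
                return -ENOMEM;
      
        for (i = 0; i < nr_pages; i++)
                __SetPageOffline(page + i);    /* keep hibernation etc. away */
      
        /* Remove the linear (direct) mapping so stray kernel accesses fault */
        arch_remove_linear_mapping(PFN_PHYS(page_to_pfn(page)), nr_pages << PAGE_SHIFT);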
      
      Tested in QEMU/TCG with 10 GiB of main memory:
        [root@localhost ~]# echo 0x40000000 > /sys/kernel/debug/powerpc/memtrace/enable
        [  105.903043][ T1080] memtrace: Allocated trace memory on node 0 at 0x0000000080000000
        [root@localhost ~]# echo 0x40000000 > /sys/kernel/debug/powerpc/memtrace/enable
        [  145.042493][ T1080] radix-mmu: Mapped 0x0000000080000000-0x00000000c0000000 with 64.0 KiB pages
        [  145.049019][ T1080] memtrace: Freed trace memory back on node 0
        [  145.333960][ T1080] memtrace: Allocated trace memory on node 0 at 0x0000000080000000
        [root@localhost ~]# echo 0x80000000 > /sys/kernel/debug/powerpc/memtrace/enable
        [  213.606916][ T1080] radix-mmu: Mapped 0x0000000080000000-0x00000000c0000000 with 64.0 KiB pages
        [  213.613855][ T1080] memtrace: Freed trace memory back on node 0
        [  214.185094][ T1080] memtrace: Allocated trace memory on node 0 at 0x0000000080000000
        [root@localhost ~]# echo 0x100000000 > /sys/kernel/debug/powerpc/memtrace/enable
        [  234.874872][ T1080] radix-mmu: Mapped 0x0000000080000000-0x0000000100000000 with 64.0 KiB pages
        [  234.886974][ T1080] memtrace: Freed trace memory back on node 0
        [  234.890153][ T1080] memtrace: Failed to allocate trace memory on node 0
        [root@localhost ~]# echo 0x40000000 > /sys/kernel/debug/powerpc/memtrace/enable
        [  259.490196][ T1080] memtrace: Allocated trace memory on node 0 at 0x0000000080000000
      
      I also made sure allocated memory is properly zeroed.
      
      Note 1: We currently won't be allocating from ZONE_MOVABLE - because our
      	pages are not movable. However, as we don't run with any memory
      	hot(un)plug mechanism around, we could make an exception to
      	increase the chance of allocations succeeding.
      
      Note 2: PG_reserved isn't sufficient. E.g., kernel_page_present(), used
      	along with PG_reserved in hibernation code, will always return "true"
      	on powerpc, resulting in the pages getting touched. It's too
      	generic - e.g., it also indicates boot allocations.
      
      Note 3: For now, we keep using memory_block_size_bytes() as minimum
      	granularity.
      Suggested-by: Michal Hocko <mhocko@kernel.org>
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Reviewed-by: Oscar Salvador <osalvador@suse.de>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20201111145322.15793-9-david@redhat.com
    • powerpc/mm: remove linear mapping if __add_pages() fails in arch_add_memory() · ca2c36ca
      David Hildenbrand authored
      Let's revert what we did in case something goes wrong and we return an
      error - as already done on arm64 and s390x.
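      
      A sketch of the error path this adds to arch_add_memory() (variable names
      assumed):
      
        rc = __add_pages(nid, start_pfn, nr_pages, params);
        if (rc)
                arch_remove_linear_mapping(start, size);  /* undo what we mapped */
        return rc;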
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Reviewed-by: Oscar Salvador <osalvador@suse.de>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20201111145322.15793-8-david@redhat.com