  1. 08 Oct, 2020 2 commits
  2. 21 Aug, 2020 1 commit
  3. 18 Aug, 2020 1 commit
  4. 26 Jul, 2020 1 commit
  5. 09 Jun, 2020 1 commit
    • mm: pgtable: add shortcuts for accessing kernel PMD and PTE · e05c7b1f
      Mike Rapoport authored
      The powerpc 32-bit implementation of pgtable has nice shortcuts for
      accessing kernel PMD and PTE for a given virtual address.  Make these
      helpers available for all architectures.
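
      For flavour, a minimal sketch of such helpers (modeled on what this
      commit adds to include/linux/pgtable.h; the bodies are reproduced
      from memory and may differ in detail):

        static inline pmd_t *pmd_off_k(unsigned long va)
        {
        	/* walk the kernel page tables down to the PMD entry */
        	return pmd_offset(pud_offset(p4d_offset(pgd_offset_k(va), va), va), va);
        }

        static inline pte_t *virt_to_kpte(unsigned long vaddr)
        {
        	pmd_t *pmd = pmd_off_k(vaddr);

        	return pmd_none(*pmd) ? NULL : pte_offset_kernel(pmd, vaddr);
        }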
      
      [rppt@linux.ibm.com: microblaze: fix page table traversal in setup_rt_frame()]
        Link: http://lkml.kernel.org/r/20200518191511.GD1118872@kernel.org
      [akpm@linux-foundation.org: s/pmd_ptr_k/pmd_off_k/ in various powerpc places]
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200514170327.31389-9-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. 26 May, 2020 2 commits
  7. 25 Feb, 2020 1 commit
  8. 18 Feb, 2020 1 commit
    • powerpc/32s: Fix DSI and ISI exceptions for CONFIG_VMAP_STACK · 232ca1ee
      Christophe Leroy authored
      hash_page() needs to read page tables from kernel memory. When the
      entire kernel memory is mapped by BATs, which is normally the case when
      CONFIG_STRICT_KERNEL_RWX is not set, it works even if the page hosting
      the page table is not referenced in the MMU hash table.
      
      However, if the page where the page table resides is not covered by
      a BAT, a DSI fault can be encountered from hash_page(), and it loops
      forever. This can happen when CONFIG_STRICT_KERNEL_RWX is selected
      and the alignment of the different regions is too small to allow
      covering the entire memory with BATs. This also happens when
      CONFIG_DEBUG_PAGEALLOC is selected or when booting with 'nobats'
      flag.
      
      Also, if the page containing the kernel stack is not present in the
      MMU hash table, registers cannot be saved and a recursive DSI fault
      is encountered.
      
      To allow hash_page() to properly do its job at all times and load the
      MMU hash table whenever needed, it must run with the data MMU disabled.
      This means it must be called before re-enabling the data MMU. To allow
      this, registers clobbered by hash_page() and create_hpte() have to
      be saved in the thread struct together with SRR0, SRR1, DAR and DSISR.
      It is also necessary to ensure that the DSI prolog doesn't overwrite
      registers saved by the prolog of the currently running exception. That means:
      - DSI can only use SPRN_SPRG_SCRATCH0
      - Exceptions must free SPRN_SPRG_SCRATCH0 before writing to the stack.
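
      As an illustration of the state involved, a hedged sketch (the field
      names here are invented, not the kernel's actual thread_struct layout):

        struct hash_fault_state {		/* illustrative only */
        	unsigned long srr0, srr1;	/* return address / MSR at fault */
        	unsigned long dar, dsisr;	/* faulting address and cause */
        	unsigned long gpr[4];		/* GPRs clobbered by hash_page() */
        };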
      
      This also fixes the Oops reported by Erhard when create_hpte() is
      called by add_hash_page().
      
      Due to prolog size increase, a few more exceptions had to get split
      in two parts.
      
      Fixes: cd08f109 ("powerpc/32s: Enable CONFIG_VMAP_STACK")
      Reported-by: Erhard F. <erhard_f@mailbox.org>
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Tested-by: Erhard F. <erhard_f@mailbox.org>
      Tested-by: Larry Finger <Larry.Finger@lwfinger.net>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=206501
      Link: https://lore.kernel.org/r/64a4aa44686e9fd4b01333401367029771d9b231.1581761633.git.christophe.leroy@c-s.fr
  9. 27 Jan, 2020 1 commit
  10. 19 Nov, 2019 1 commit
  11. 28 Aug, 2019 1 commit
  12. 20 Aug, 2019 4 commits
  13. 30 May, 2019 1 commit
  14. 02 May, 2019 7 commits
  15. 21 Apr, 2019 2 commits
    • powerpc/32s: Implement Kernel Userspace Access Protection · a68c31fc
      Christophe Leroy authored
      This patch implements Kernel Userspace Access Protection for
      book3s/32.
      
      Due to limitations of the processor page protection capabilities,
      the protection is only against writing. Read protection cannot be
      achieved using page protection.
      
      The previous patch modifies the page protection so that RW user
      pages are RW for Key 0 and RO for Key 1, and it sets Key 0 for
      both user and kernel.
      
      This patch sets the userspace segment registers to Ku 0 and Ks 1.
      When the kernel needs to write to RW pages, the associated segment
      register is then changed to Ks 0 in order to allow the kernel
      write access.
      
      In order to avoid having to read all segment registers when
      locking/unlocking the access, some data is kept in the thread_struct
      and saved on stack on exceptions. The field identifies both the
      first unlocked segment and the first segment following the last
      unlocked one. When no segment is unlocked, it contains value 0.
      
      As the hash_page() function is not able to easily determine if a
      protfault is due to a bad kernel access to userspace, protfaults
      need to be handled by handle_page_fault when KUAP is set.
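
      A minimal sketch of the unlock idea (illustrative, not the kernel's
      actual kup code; the Ks mask is an assumption based on the 32-bit
      segment register layout):

        /* Clear Ks in the segment register covering 'addr' so supervisor
         * accesses to that segment use Key 0, i.e. writes are allowed. */
        static inline void kuap_unlock_seg(unsigned long addr)
        {
        	unsigned long sr;

        	asm volatile("mfsrin %0, %1" : "=r"(sr) : "r"(addr));
        	sr &= ~0x40000000;	/* Ks bit (assumed mask) */
        	asm volatile("mtsrin %0, %1; isync" : : "r"(sr), "r"(addr));
        }
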
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      [mpe: Drop allow_read/write_to/from_user() as they're now in kup.h,
            and adapt allow_user_access() to do nothing when to == NULL]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/32s: Implement Kernel Userspace Execution Prevention. · 31ed2b13
      Christophe Leroy authored
      To implement Kernel Userspace Execution Prevention, this patch
      sets NX bit on all user segments on kernel entry and clears NX bit
      on all user segments on kernel exit.
      
      Note that powerpc 601 doesn't have the NX bit, so KUEP will not
      work on it. A warning is displayed at startup.
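
      Schematically, kernel entry could do something like the following
      (a sketch under assumed masks and bounds, not the actual patch):

        /* Set the No-execute (N) bit in every user segment register. */
        static inline void kuep_lock(void)
        {
        	unsigned long addr, sr;

        	for (addr = 0; addr < 0xc0000000; addr += 0x10000000) {
        		asm volatile("mfsrin %0, %1" : "=r"(sr) : "r"(addr));
        		sr |= 0x10000000;	/* N bit (assumed mask) */
        		asm volatile("mtsrin %0, %1" : : "r"(sr), "r"(addr));
        	}
        	asm volatile("isync");
        }
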
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  16. 12 Mar, 2019 1 commit
    • treewide: add checks for the return value of memblock_alloc*() · 8a7f97b9
      Mike Rapoport authored
      Add checks for the return value of memblock_alloc*() functions and call
      panic() in case of error.  The panic message repeats the one used by
      panicking memblock allocators, with the parameters adjusted to include
      only relevant ones.
      
      The replacement was mostly automated with semantic patches like the one
      below, with manual massaging of format strings.
      
        @@
        expression ptr, size, align;
        @@
        ptr = memblock_alloc(size, align);
        + if (!ptr)
        + 	panic("%s: Failed to allocate %lu bytes align=0x%lx\n", __func__, size, align);
      
      [anders.roxell@linaro.org: use '%pa' with 'phys_addr_t' type]
        Link: http://lkml.kernel.org/r/20190131161046.21886-1-anders.roxell@linaro.org
      [rppt@linux.ibm.com: fix format strings for panics after memblock_alloc]
        Link: http://lkml.kernel.org/r/1548950940-15145-1-git-send-email-rppt@linux.ibm.com
      [rppt@linux.ibm.com: don't panic if the allocation in sparse_buffer_init fails]
        Link: http://lkml.kernel.org/r/20190131074018.GD28876@rapoport-lnx
      [akpm@linux-foundation.org: fix xtensa printk warning]
      Link: http://lkml.kernel.org/r/1548057848-15136-20-git-send-email-rppt@linux.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
      Reviewed-by: Guo Ren <ren_guo@c-sky.com>		[c-sky]
      Acked-by: Paul Burton <paul.burton@mips.com>		[MIPS]
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>	[s390]
      Reviewed-by: Juergen Gross <jgross@suse.com>		[Xen]
      Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>	[m68k]
      Acked-by: Max Filippov <jcmvbkbc@gmail.com>		[xtensa]
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Rob Herring <robh+dt@kernel.org>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  17. 08 Mar, 2019 1 commit
    • arch: simplify several early memory allocations · b63a07d6
      Mike Rapoport authored
      There are several early memory allocations in arch/ code that use
      memblock_phys_alloc() to allocate memory, convert the returned physical
      address to the virtual address and then set the allocated memory to
      zero.
      
      Exactly the same behaviour can be achieved simply by calling
      memblock_alloc(): it allocates the memory in the same way as
      memblock_phys_alloc(), then it performs the phys_to_virt() conversion
      and clears the allocated memory.
      
      Replace the longer sequence with a simpler call to memblock_alloc().
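
      Schematically, the conversion looks like this (a sketch of the
      pattern, not a literal diff from one of the converted call sites):

        /* before: allocate physical memory, convert, zero by hand */
        paddr = memblock_phys_alloc(size, align);
        ptr = phys_to_virt(paddr);
        memset(ptr, 0, size);

        /* after: one call that allocates, converts and zeroes */
        ptr = memblock_alloc(size, align);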
      
      Link: http://lkml.kernel.org/r/1546248566-14910-6-git-send-email-rppt@linux.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Michal Simek <michal.simek@xilinx.com>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  18. 23 Feb, 2019 5 commits
    • powerpc/mm/32s: Use BATs for STRICT_KERNEL_RWX · 63b2bc61
      Christophe Leroy authored
      Today, STRICT_KERNEL_RWX is based on the use of regular pages
      to map kernel pages.
      
      On Book3s 32, it has three consequences:
      - Using pages instead of BAT for mapping kernel linear memory severely
      impacts performance.
      - Exec protection is not effective because no-execute cannot be set at
      page level (except on 603 which doesn't have hash tables)
      - Write protection is not effective because PP bits do not provide RO
      mode for kernel-only pages (except on 603 which handles it in software
      via PAGE_DIRTY)
      
      On the 603+, we have:
      - Independent IBAT and DBAT allowing limitation of exec parts.
      - NX bit can be set in segment registers to forbit execution on memory
      mapped by pages.
      - RO mode on DBATs even for kernel-only blocks.
      
      On the 601, there is nothing much we can do other than warn the user
      about it, because:
      - BATs are common to instructions and data.
      - BAT do not provide RO mode for kernel-only blocks.
      - segment registers don't have the NX bit.
      
      In order to use IBAT for exec protection, this patch:
      - Aligns _etext to BAT block sizes (128kb)
      - Sets NX bit in kernel segment register (except on the vmalloc area when
      CONFIG_MODULES is selected)
      - Maps kernel text with IBATs.
      
      In order to use DBAT for write protection, this patch:
      - Aligns RW DATA to BAT block sizes (4M)
      - Maps kernel RO area with write prohibited DBATs
      - Maps remaining memory with remaining DBATs
      
      Here is what we get with this patch on a 832x when activating
      STRICT_KERNEL_RWX:
      
      Symbols:
      c0000000 T _stext
      c0680000 R __start_rodata
      c0680000 R _etext
      c0800000 T __init_begin
      c0800000 T _sinittext
      
      ~# cat /sys/kernel/debug/block_address_translation
      ---[ Instruction Block Address Translation ]---
      0: 0xc0000000-0xc03fffff 0x00000000 Kernel EXEC coherent
      1: 0xc0400000-0xc05fffff 0x00400000 Kernel EXEC coherent
      2: 0xc0600000-0xc067ffff 0x00600000 Kernel EXEC coherent
      3:         -
      4:         -
      5:         -
      6:         -
      7:         -
      
      ---[ Data Block Address Translation ]---
      0: 0xc0000000-0xc07fffff 0x00000000 Kernel RO coherent
      1: 0xc0800000-0xc0ffffff 0x00800000 Kernel RW coherent
      2: 0xc1000000-0xc1ffffff 0x01000000 Kernel RW coherent
      3: 0xc2000000-0xc3ffffff 0x02000000 Kernel RW coherent
      4: 0xc4000000-0xc7ffffff 0x04000000 Kernel RW coherent
      5: 0xc8000000-0xcfffffff 0x08000000 Kernel RW coherent
      6: 0xd0000000-0xdfffffff 0x10000000 Kernel RW coherent
      7:         -
      
      ~# cat /sys/kernel/debug/segment_registers
      ---[ User Segments ]---
      0x00000000-0x0fffffff Kern key 1 User key 1 VSID 0xa085d0
      0x10000000-0x1fffffff Kern key 1 User key 1 VSID 0xa086e1
      0x20000000-0x2fffffff Kern key 1 User key 1 VSID 0xa087f2
      0x30000000-0x3fffffff Kern key 1 User key 1 VSID 0xa08903
      0x40000000-0x4fffffff Kern key 1 User key 1 VSID 0xa08a14
      0x50000000-0x5fffffff Kern key 1 User key 1 VSID 0xa08b25
      0x60000000-0x6fffffff Kern key 1 User key 1 VSID 0xa08c36
      0x70000000-0x7fffffff Kern key 1 User key 1 VSID 0xa08d47
      0x80000000-0x8fffffff Kern key 1 User key 1 VSID 0xa08e58
      0x90000000-0x9fffffff Kern key 1 User key 1 VSID 0xa08f69
      0xa0000000-0xafffffff Kern key 1 User key 1 VSID 0xa0907a
      0xb0000000-0xbfffffff Kern key 1 User key 1 VSID 0xa0918b
      
      ---[ Kernel Segments ]---
      0xc0000000-0xcfffffff Kern key 0 User key 1 No Exec VSID 0x000ccc
      0xd0000000-0xdfffffff Kern key 0 User key 1 No Exec VSID 0x000ddd
      0xe0000000-0xefffffff Kern key 0 User key 1 No Exec VSID 0x000eee
      0xf0000000-0xffffffff Kern key 0 User key 1 No Exec VSID 0x000fff
      
      Aligning _etext to 128kb allows mapping up to 32Mb of text with 8 IBATs:
      16Mb + 8Mb + 4Mb + 2Mb + 1Mb + 512kb + 256kb + 128kb (+ 128kb) = 32Mb
      (A 9th IBAT is unneeded as 32Mb would need only a single 32Mb block)
      
      Aligning data to 4M allows mapping up to 512Mb of data with 8 DBATs:
      16Mb + 8Mb + 4Mb + 4Mb + 32Mb + 64Mb + 128Mb + 256Mb = 512Mb
      
      Because some processors only have 4 BATs and because some targets need
      DBATs for mapping other areas, the following patch will allow
      modifying the _etext and data alignment.
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mm/32s: add setibat() clearibat() and update_bats() · 5e04ae85
      Christophe Leroy authored
      setibat() and clearibat() allow manipulating IBATs independently
      of DBATs.
      
      update_bats() allows updating the BATs after init. This is done
      with the MMU off.
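
      The shape of the new interface, as a sketch (the argument lists are
      assumed from the description and from setbat(), not quoted from the
      patch):

        void setibat(int index, unsigned long virt, phys_addr_t phys,
        	     unsigned int size, pgprot_t prot);
        void clearibat(int index);
        void update_bats(void);	/* rewrites the BATs with the MMU off */
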
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mm/32s: use _PAGE_EXEC in setbat() · df25f863
      Christophe Leroy authored
      Do not set an IBAT when setbat() is called without _PAGE_EXEC.
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mm/32s: rework mmu_mapin_ram() · e4d6654e
      Christophe Leroy authored
      This patch reworks mmu_mapin_ram() to be more generic and map as many
      blocks as possible. It now supports blocks not starting at address 0.
      
      It scans the DBAT array to find free entries instead of forcing the use
      of BAT2 and BAT3, as sketched below.
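
      The scan is along these lines (a sketch; the array and field names
      are invented for illustration):

        /* pick the first free DBAT instead of hard-coding BAT2/BAT3 */
        for (i = 0; i < n_bats; i++)
        	if (bat_addrs[i].limit == 0)	/* unused slot */
        		break;
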
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mm/32: add base address to mmu_mapin_ram() · 14e609d6
      Christophe Leroy authored
      Up to now, mmu_mapin_ram() has always mapped RAM from the beginning.
      But some platforms like the WII have to map a second block of RAM.
      
      This patch adds the base address of the block as a parameter of
      mmu_mapin_ram(). At the moment, only base address 0 is supported.
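
      In other words, the prototype gains a base parameter (a sketch
      derived from the description):

        /* before */ unsigned long mmu_mapin_ram(unsigned long top);
        /* after  */ unsigned long mmu_mapin_ram(unsigned long base, unsigned long top);
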
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  19. 21 Feb, 2019 1 commit
  20. 19 Dec, 2018 3 commits
  21. 31 Oct, 2018 1 commit
    • memblock: rename memblock_alloc{_nid,_try_nid} to memblock_phys_alloc* · 9a8dd708
      Mike Rapoport authored
      Make it explicit that the caller gets a physical address rather than a
      virtual one.
      
      This will also allow using the memblock_alloc prefix for memblock allocations
      returning virtual address, which is done in the following patches.
      
      The conversion is done using the following semantic patch:
      
      @@
      expression e1, e2, e3;
      @@
      (
      - memblock_alloc(e1, e2)
      + memblock_phys_alloc(e1, e2)
      |
      - memblock_alloc_nid(e1, e2, e3)
      + memblock_phys_alloc_nid(e1, e2, e3)
      |
      - memblock_alloc_try_nid(e1, e2, e3)
      + memblock_phys_alloc_try_nid(e1, e2, e3)
      )
      
      Link: http://lkml.kernel.org/r/1536927045-23536-7-git-send-email-rppt@linux.vnet.ibm.com
      Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Palmer Dabbelt <palmer@sifive.com>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Serge Semin <fancer.lancer@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  22. 14 Oct, 2018 1 commit