1. 14 Oct, 2018 21 commits
  2. 13 Oct, 2018 19 commits
    • Christophe Leroy
      drivers/block/z2ram: use ioremap_wt() instead of __ioremap(_PAGE_WRITETHRU) · ed18e423
      Christophe Leroy authored
      _PAGE_WRITETHRU is a target-specific flag. Prefer generic functions.
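      A minimal before/after sketch of the substitution (the actual call
      sites in z2ram differ in detail):

         /* Before: powerpc-private helper with a target-specific flag */
         void __iomem *old_map = __ioremap(paddr, size, _PAGE_WRITETHRU);

         /* After: generic write-through mapping, available on any arch */
         void __iomem *new_map = ioremap_wt(paddr, size);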
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • Christophe Leroy
      drivers/video/fbdev: use ioremap_wc/wt() instead of __ioremap() · e04e3950
      Christophe Leroy authored
      _PAGE_NO_CACHE is a platform-specific flag. It is also misleading,
      because one would think it requests a fully non-cached page,
      whereas a fully non-cached page is _PAGE_NO_CACHE | _PAGE_GUARDED.

      _PAGE_NO_CACHE alone means a write-combined non-cached page, so
      let's use ioremap_wc() instead.

      _PAGE_WRITETHRU is also a platform-specific flag. Use ioremap_wt()
      instead.
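      A rough sketch of the two substitutions (the actual framebuffer
      call sites will differ):

         void __iomem *fb;

         /* write-combined mapping of the framebuffer */
         fb = __ioremap(addr, size, _PAGE_NO_CACHE);   /* before */
         fb = ioremap_wc(addr, size);                  /* after  */

         /* write-through mapping */
         fb = __ioremap(addr, size, _PAGE_WRITETHRU);  /* before */
         fb = ioremap_wt(addr, size);                  /* after  */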
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
      Acked-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • Christophe Leroy
      powerpc/32: Add ioremap_wt() and ioremap_coherent() · 86c391bd
      Christophe Leroy authored
      Other arches have ioremap_wt() to map IO areas write-through.
      Implement it on PPC as well in order to avoid drivers using
      __ioremap(_PAGE_WRITETHRU).

      Also implement ioremap_coherent() to avoid drivers using
      __ioremap(_PAGE_COHERENT).
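      A sketch of what the PPC32 implementations can look like, assuming
      they funnel into the existing __ioremap_caller() like the other
      ioremap variants (details are an assumption, not the verbatim
      patch):

         void __iomem *ioremap_wt(phys_addr_t addr, unsigned long size)
         {
                 return __ioremap_caller(addr, size, _PAGE_WRITETHRU,
                                         __builtin_return_address(0));
         }

         void __iomem *ioremap_coherent(phys_addr_t addr, unsigned long size)
         {
                 return __ioremap_caller(addr, size, _PAGE_COHERENT,
                                         __builtin_return_address(0));
         }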
      Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • Gautham R. Shenoy
      powerpc/rtas: Fix a potential race between CPU-Offline & Migration · dfd718a2
      Gautham R. Shenoy authored
      Live Partition Migrations require all the present CPUs to execute
      the H_JOIN call, and hence rtas_ibm_suspend_me() onlines any
      offline CPUs before initiating the migration.
      
      Commit 85a88cab
      ("powerpc/pseries: Disable CPU hotplug across migrations")
      disables any CPU hotplug operations once all the offline CPUs are
      brought online, to prevent any further state change. Once CPU
      hotplug is disabled, the code assumes that all the CPUs are
      online.
      
      However, there is a small window in rtas_ibm_suspend_me(), between
      onlining the offline CPUs and disabling CPU hotplug, in which a
      concurrent CPU-offline operation initiated by userspace can
      succeed, nullifying the aforementioned assumption. In that
      unlikely case the offlined CPUs will not call H_JOIN, resulting in
      a system hang.
      
      Fix this by verifying that all the present CPUs are actually
      online after CPU hotplug has been disabled, failing which we
      restore the state of the offline CPUs in rtas_ibm_suspend_me() and
      return -EBUSY.
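      A hedged sketch of the added check (the restore path via
      rtas_offline_cpus_mask() and the offline_mask variable are assumed
      from context):

         cpu_hotplug_disable();

         /* Did a concurrent CPU-offline slip in between onlining the
          * offline CPUs and disabling hotplug? */
         if (!cpumask_equal(cpu_online_mask, cpu_present_mask)) {
                 cpu_hotplug_enable();
                 /* put the originally-offline CPUs back offline */
                 rtas_offline_cpus_mask(offline_mask);
                 return -EBUSY;
         }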
      
      Cc: Nathan Fontenot <nfont@linux.vnet.ibm.com>
      Cc: Tyrel Datwyler <tyreld@linux.vnet.ibm.com>
      Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
      Reviewed-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • Gautham R. Shenoy
      powerpc/cacheinfo: Report the correct shared_cpu_map on big-cores · 500fe5f5
      Gautham R. Shenoy authored
      Currently, on POWER9 SMT8 core systems, sysfs reports the
      shared_cpu_map for L1 caches (both data and instruction) as the
      cpu-ids of all the threads in the SMT8 core. This is incorrect,
      since on POWER9 SMT8 cores there are two groups of threads, each
      of which shares its own L1 cache.
      
      This patch addresses this by reporting the shared_cpu_map correctly in
      sysfs for L1 caches.
      
      Before the patch
         /sys/devices/system/cpu/cpu0/cache/index0/shared_cpu_map : 000000ff
         /sys/devices/system/cpu/cpu0/cache/index1/shared_cpu_map : 000000ff
         /sys/devices/system/cpu/cpu1/cache/index0/shared_cpu_map : 000000ff
         /sys/devices/system/cpu/cpu1/cache/index1/shared_cpu_map : 000000ff
      
      After the patch
         /sys/devices/system/cpu/cpu0/cache/index0/shared_cpu_map : 00000055
         /sys/devices/system/cpu/cpu0/cache/index1/shared_cpu_map : 00000055
         /sys/devices/system/cpu/cpu1/cache/index0/shared_cpu_map : 000000aa
         /sys/devices/system/cpu/cpu1/cache/index1/shared_cpu_map : 000000aa
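      (0x55 selects threads {0,2,4,6} of the core and 0xaa threads
      {1,3,5,7}, i.e. the two L1-sharing groups.) A hedged sketch of the
      idea, reusing the cpu_smallcore_mask() helper introduced earlier
      in this series (the function shape is illustrative):

         /* mask to report for an L1 cache index (sketch) */
         static const struct cpumask *l1_shared_cpu_map(int cpu)
         {
                 if (has_big_cores)
                         return cpu_smallcore_mask(cpu); /* L1-sharing siblings */
                 return cpu_sibling_mask(cpu);           /* all SMT siblings */
         }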
      Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • Gautham R. Shenoy
      powerpc: Use cpu_smallcore_sibling_mask at SMT level on bigcores · 8e8a31d7
      Gautham R. Shenoy authored
      POWER9 SMT8 cores consist of two groups of threads, where the
      threads in each group share an L1 cache. The scheduler is not
      aware of this distinction, as the current sched-domain hierarchy
      defines all the threads of the core at the SMT domain:
      
      	SMT  [Thread siblings of the SMT8 core]
      	DIE  [CPUs in the same die]
      	NUMA [All the CPUs in the system]
      
      Due to this, we can observe run-to-run variance when running a
      multi-threaded benchmark bound to a single core, depending on how
      the scheduler spreads the software threads across the two groups
      in the core.
      
      Fix this by defining each group of threads that shares an L1 cache
      to be the SMT level. The full group of threads in the SMT8 core is
      defined to be the CACHE level. The sched-domain hierarchy after
      this patch will be:
      
      	SMT	[Thread siblings in the core that share L1 cache]
      	CACHE 	[Thread siblings that are in the SMT8 core]
      	DIE  	[CPUs in the same die]
      	NUMA 	[All the CPUs in the system]
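      A hedged sketch of the mask switch, following the shape of the
      powerpc topology table (names may not match the patch exactly):

         static const struct cpumask *smallcore_smt_mask(int cpu)
         {
                 return cpu_smallcore_mask(cpu);
         }

         /* at SMT-level setup: use L1-sharing siblings on big cores */
         if (has_big_cores)
                 power9_topology[0].mask = smallcore_smt_mask;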
      Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • Gautham R. Shenoy
      powerpc: Detect the presence of big-cores via "ibm,thread-groups" · 425752c6
      Gautham R. Shenoy authored
      On IBM POWER9, the device tree exposes a property array identified
      by "ibm,thread-groups" which indicates which groups of threads
      share a particular set of resources.

      As of today we only have one form of grouping, identifying the
      groups of threads in the core that share the L1 cache, translation
      cache and instruction data flow.
      
      This patch adds helper functions to parse the contents of
      "ibm,thread-groups" and populate a per-cpu variable caching
      information about the siblings of each CPU that share the L1
      cache, translation cache and instruction data flow.
      
      It also defines a new global variable named "has_big_cores" which
      indicates whether the cores in this configuration have multiple
      groups of threads that share an L1 cache.

      For each online CPU, it maintains a cpu_smallcore_mask which
      indicates the online siblings that share the L1 cache with it.
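      A sketch of the parsed shape of the property (field names and the
      MAX_THREAD_LIST_SIZE bound are assumptions):

         /* "ibm,thread-groups" decodes to roughly this, per core */
         struct thread_groups {
                 unsigned int property;          /* which resource is shared */
                 unsigned int nr_groups;         /* thread groups in the core */
                 unsigned int threads_per_group; /* threads in each group */
                 unsigned int thread_list[MAX_THREAD_LIST_SIZE];
         };

         /* per-cpu mask of the L1-sharing siblings of each CPU */
         DEFINE_PER_CPU(cpumask_var_t, cpu_smallcore_map);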
      Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • Michael Ellerman
      powerpc: Fix stackprotector detection for non-glibc toolchains · bf6cbd0c
      Michael Ellerman authored
      If GCC is not built with glibc support, we must explicitly tell it
      which register to use for the TLS-based stack protector guard;
      otherwise it will error out and the cc-option check will fail.
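      A hedged Kconfig-style sketch of the kind of probe involved (r13
      holds the paca pointer on 64-bit, r2 points to current on 32-bit;
      the exact location and wording of the check are assumptions):

         select HAVE_STACKPROTECTOR if PPC64 && $(cc-option,-mstack-protector-guard=tls -mstack-protector-guard-reg=r13)
         select HAVE_STACKPROTECTOR if PPC32 && $(cc-option,-mstack-protector-guard=tls -mstack-protector-guard-reg=r2)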
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • Michael Ellerman
      powerpc/xmon: Show the stack protector canary in xmon · 50530f5e
      Michael Ellerman authored
      This is helpful for debugging stack protector crashes.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • Joel Stanley
      powerpc: Use SWITCH_FRAME_SIZE for prom and rtas entry · ed9e84a4
      Joel Stanley authored
      Commit 6c171994 ("powerpc/of: Remove useless register save/restore
      when calling OF back") removed the saving of srr0 and srr1 when calling
      into OpenFirmware. Commit e31aa453 ("powerpc: Use LOAD_REG_IMMEDIATE
      only for constants on 64-bit") did the same for rtas.
      
      This means we don't need to save the extra stack space and can use
      the common SWITCH_FRAME_SIZE.
      
      There were already no users of _SRR0 and _SRR1 so we can remove them
      too.
      
      Link: https://github.com/linuxppc/linux/issues/83
      Signed-off-by: Joel Stanley <joel@jms.id.au>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • Michael Bringmann
      powerpc/pseries/mobility: Extend start/stop topology update scope · 65b9fdad
      Michael Bringmann authored
      The powerpc mobility code may receive RTAS requests to perform PRRN
      (Platform Resource Reassignment Notification) topology changes at any
      time, including during LPAR migration operations.
      
      In some configurations where the affinity of CPUs or memory is
      being changed on that platform, the PRRN requests may apply to or
      refer to outdated information until the device tree has been fully
      updated.
      
      This patch extends the window during which topology updates are
      suppressed in an LPAR migration from just the
      rtas_ibm_suspend_me() / 'ibm,suspend-me' call(s) to the entire
      migration_store() operation, so that all changes to the device
      tree are applied before any PRRN requests are accepted and
      applied.
      
      For tracking purposes, pr_info notices are added to
      start_topology_update() and stop_topology_update() in numa.c.
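      A simplified sketch of the widened window inside migration_store()
      (error handling omitted; the suppression previously lived inside
      the suspend call itself):

         stop_topology_update();             /* suppress PRRN handling */
         rc = rtas_ibm_suspend_me(streamid); /* 'ibm,suspend-me' */
         post_mobility_fixup();              /* apply device-tree updates */
         start_topology_update();            /* PRRN requests are safe again */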
      Signed-off-by: Michael Bringmann <mwb@linux.vnet.ibm.com>
      Reviewed-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • Joel Stanley
      powerpc/Makefile: Fix PPC_BOOK3S_64 ASFLAGS · 960e3002
      Joel Stanley authored
      Ever since commit 15a3204d ("powerpc/64s: Set assembler machine type
      to POWER4") we force -mpower4 to be passed to the assembler
      irrespective of the CFLAGS used (for Book3s 64).
      
      When building a powerpc64 kernel with clang, clang will not add -many
      to the assembler flags, so any instructions that the compiler has
      generated that are not available on power4 will cause an error:
      
        /usr/bin/as -a64 -mppc64 -mlittle-endian -mpower8 \
         -I ./arch/powerpc/include -I ./arch/powerpc/include/generated \
         -I ./include -I ./arch/powerpc/include/uapi \
         -I ./arch/powerpc/include/generated/uapi -I ./include/uapi \
         -I ./include/generated/uapi -I arch/powerpc -I arch/powerpc \
         -maltivec -mpower4 -o init/do_mounts.o /tmp/do_mounts-3b0a3d.s
        /tmp/do_mounts-51ce54.s:748: Error: unrecognized opcode: `isel'
      
      GCC does include -many, so the GCC driven gas call will succeed:
      
        as -v -I ./arch/powerpc/include -I ./arch/powerpc/include/generated -I
        ./include -I ./arch/powerpc/include/uapi
        -I ./arch/powerpc/include/generated/uapi -I ./include/uapi
        -I ./include/generated/uapi -I arch/powerpc -I arch/powerpc
         -a64 -mpower8 -many -mlittle -maltivec -mpower4 -o init/do_mounts.o
      
      Note that isel is power7 and above for IBM CPUs. GCC only generates it
      for Power9 and above, but the above test was run against the clang
      generated assembly.
      
      Peter Bergner explains:
      
        When using -many -mpower4, gas will first try and find a matching
        power4 mnemonic and failing that, it will then allow any valid
        mnemonic that gas knows about. GCC's use of -many predates me
        though.
      
        IIRC, Alan looked at trying to remove it, but I forget why he
        didn't. Could be either a gcc or gas issue at the time. I'm not sure
        whether the issue still exists or not. He and I have modified how gas
        works internally a fair amount since he tried removing gcc use of
        -many.
      
        I will also note that when using -many, gas will choose the first
        mnemonic that matches in the mnemonic table and we have (mostly)
        sorted the table so that server mnemonics show up earlier in the
        table than other mnemonics, so they'll be seen/chosen first.
      
      By explicitly setting -many we can build with Clang and GCC while
      retaining the -mpower4 option.
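      A hedged sketch of the resulting Makefile line (the exact variable
      and surrounding flags are assumptions):

         # pass -many alongside -mpower4 so clang-built objects assemble too
         CFLAGS-$(CONFIG_PPC_BOOK3S_64) += -Wa,-mpower4 -Wa,-many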
      Signed-off-by: Joel Stanley <joel@jms.id.au>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • YueHaibing
      powerpc/pseries/memory-hotplug: Fix return value type of find_aa_index · b45e9d76
      YueHaibing authored
      The variable 'aa_index' is defined as an unsigned value in
      update_lmb_associativity_index(), but find_aa_index() may return -1
      when dlpar_clone_property() fails. So change find_aa_index() to return
      a bool, which indicates whether 'aa_index' was found or not.
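      A sketch of the signature change (parameter shapes are
      illustrative):

         /* Before: index returned directly, -1 overloaded as failure */
         static int find_aa_index(struct device_node *dr_node,
                                  struct property *ala_prop,
                                  const u32 *lmb_assoc);

         /* After: success/failure as a bool; the index comes back
          * through a pointer, so no negative value ever lands in the
          * unsigned aa_index */
         static bool find_aa_index(struct device_node *dr_node,
                                   struct property *ala_prop,
                                   const u32 *lmb_assoc, u32 *aa_index);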
      
      Fixes: c05a5a40 ("powerpc/pseries: Dynamic add entires to associativity lookup array")
      Signed-off-by: YueHaibing <yuehaibing@huawei.com>
      Reviewed-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
      [mpe: Tweak changelog, rename is_found to just found]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • Sam Bobroff
      powerpc/eeh: Cleanup control flow in eeh_handle_normal_event() · b90484ec
      Sam Bobroff authored
      Rather than mixing "if (state)" blocks and gotos, convert entirely to
      "if (state)" blocks to make the state machine behaviour clearer.
      Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • Sam Bobroff
      powerpc/eeh: Cleanup eeh_ops.wait_state() · fef7f905
      Sam Bobroff authored
      The wait_state member of eeh_ops does not need to be platform
      dependent; it's just logic around eeh_ops.get_state(). Therefore,
      merge the two (slightly different!) platform versions into a new
      function, eeh_wait_state() and remove the eeh_ops member.
      
      While doing this, also correct:
      * The wait logic, so that it never waits longer than max_wait.
      * The wait logic, so that it never waits less than
        EEH_STATE_MIN_WAIT_TIME.
      * One call site where the result is treated like a bit field before
        it's checked for negative error values.
      * In pseries_eeh_get_state(), rename the "state" parameter to "delay"
        because that's what it is.
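      A hedged sketch of the merged helper's shape (constants and the
      timeout return value are assumptions consistent with the points
      above):

         int eeh_wait_state(struct eeh_pe *pe, int max_wait)
         {
                 int ret, mwait;

                 while (1) {
                         ret = eeh_ops->get_state(pe, &mwait);
                         if (ret != EEH_STATE_UNAVAILABLE)
                                 return ret;

                         if (max_wait <= 0)      /* never wait past max_wait */
                                 return -ETIMEDOUT;

                         /* never wait less than the platform minimum */
                         mwait = max(mwait, EEH_STATE_MIN_WAIT_TIME);
                         mwait = min(mwait, max_wait);
                         max_wait -= mwait;
                         msleep(mwait);
                 }
         }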
      Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • Sam Bobroff
      powerpc/eeh: Cleanup eeh_pe_state_mark() · e762bb89
      Sam Bobroff authored
      Currently, eeh_pe_state_mark() marks a PE (and its children) with
      a state and then performs additional processing if that state
      includes EEH_PE_ISOLATED.
      
      The state parameter is always a constant at the call site, so
      rearrange eeh_pe_state_mark() into two functions and just call the
      appropriate one at each site.
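      A sketch of the resulting split, with illustrative prototypes:

         /* plain marking, no extra processing */
         void eeh_pe_state_mark(struct eeh_pe *pe, int state);

         /* EEH_PE_ISOLATED plus the extra isolation processing */
         void eeh_pe_mark_isolated(struct eeh_pe *pe);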
      Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • Sam Bobroff
      powerpc/eeh: Cleanup unnecessary eeh_pe_state_mark_with_cfg() · eed4bdbe
      Sam Bobroff authored
      The function eeh_pe_state_mark_with_cfg() just performs the work of
      eeh_pe_state_mark() and then, conditionally, the work of
      eeh_pe_state_clear(). However, it is only ever called with a
      constant state such that the condition is always true, so replace
      it with direct calls.
      Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • Sam Bobroff
      powerpc/eeh: Cleanup eeh_enabled() · 54644927
      Sam Bobroff authored
      Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • Sam Bobroff
      powerpc/eeh: Cleanup logic in eeh_rmv_from_parent_pe() · 9a3eda26
      Sam Bobroff authored
      Move the call to eeh_dev_to_pe() up, so that later it's clear that
      "pe" isn't NULL.
      Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>