1. 16 Oct, 2017 7 commits
  2. 13 Oct, 2017 3 commits
  3. 10 Oct, 2017 1 commit
  4. 06 Oct, 2017 7 commits
  5. 05 Oct, 2017 2 commits
    • powerpc/jprobes: Validate break handler invocation as being due to a jprobe_return() · 3368f569
      Naveen N. Rao authored
      Fix a circa 2005 FIXME by implementing a check to ensure that we
      actually got into the jprobe break_handler() due to the trap in
      jprobe_return().
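
      A minimal sketch of the idea, not the actual diff (the
      jprobe_return_end symbol and the kcb pointer below are assumed for
      illustration): before honouring the break handler, check that the
      faulting NIP points into the trap emitted by jprobe_return().

        /*
         * Illustrative check in the break handler path: only treat this
         * trap as a jprobe_return() if it was raised by the trap
         * instruction inside jprobe_return() itself.
         */
        if (regs->nip >= (unsigned long)jprobe_return &&
            regs->nip < (unsigned long)jprobe_return_end) {
                /* Genuine jprobe_return(): restore the saved registers. */
                memcpy(regs, &kcb->jprobe_saved_regs, sizeof(*regs));
                return 1;
        }
        /* Not ours: fall back to normal trap handling. */
        return 0;
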
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/jprobes: Disable preemption when triggered through ftrace · 6baea433
      Naveen N. Rao authored
      KPROBES_SANITY_TEST throws the below splat when CONFIG_PREEMPT is
      enabled:
      
        Kprobe smoke test: started
        DEBUG_LOCKS_WARN_ON(val > preempt_count())
        ------------[ cut here ]------------
        WARNING: CPU: 19 PID: 1 at kernel/sched/core.c:3094 preempt_count_sub+0xcc/0x140
        Modules linked in:
        CPU: 19 PID: 1 Comm: swapper/0 Not tainted 4.13.0-rc7-nnr+ #97
        task: c0000000fea80000 task.stack: c0000000feb00000
        NIP:  c00000000011d3dc LR: c00000000011d3d8 CTR: c000000000a090d0
        REGS: c0000000feb03400 TRAP: 0700   Not tainted  (4.13.0-rc7-nnr+)
        MSR:  8000000000021033 <SF,ME,IR,DR,RI,LE>  CR: 28000282  XER: 00000000
        CFAR: c00000000015aa18 SOFTE: 0
        <snip>
        NIP preempt_count_sub+0xcc/0x140
        LR  preempt_count_sub+0xc8/0x140
        Call Trace:
          preempt_count_sub+0xc8/0x140 (unreliable)
          kprobe_handler+0x228/0x4b0
          program_check_exception+0x58/0x3b0
          program_check_common+0x16c/0x170
          --- interrupt: 0 at kprobe_target+0x8/0x20
                           LR = init_test_probes+0x248/0x7d0
          kp+0x0/0x80 (unreliable)
          livepatch_handler+0x38/0x74
          init_kprobes+0x1d8/0x208
          do_one_initcall+0x68/0x1d0
          kernel_init_freeable+0x298/0x374
          kernel_init+0x24/0x160
          ret_from_kernel_thread+0x5c/0x70
        Instruction dump:
        419effdc 3d22001b 39299240 81290000 2f890000 409effc8 3c82ffcb 3c62ffcb
        3884bc68 3863bc18 4803d5fd 60000000 <0fe00000> 4bffffa8 60000000 60000000
        ---[ end trace 432dd46b4ce3d29f ]---
        Kprobe smoke test: passed successfully
      
      The issue is that we aren't disabling preemption in
      kprobe_ftrace_handler(). Disable it.
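
      A sketch of the shape of the fix (illustrative, not the exact diff;
      the function itself lives in the powerpc KPROBES_ON_FTRACE code):

        void kprobe_ftrace_handler(unsigned long nip, unsigned long parent_nip,
                                   struct ftrace_ops *ops, struct pt_regs *regs)
        {
                struct kprobe *p;

                /*
                 * The kprobe handlers expect to run with preemption disabled
                 * and re-enable it later; disable it on entry here too so the
                 * preempt count stays balanced (sketch only).
                 */
                preempt_disable();

                p = get_kprobe((kprobe_opcode_t *)nip);
                if (unlikely(!p) || kprobe_disabled(p))
                        goto out;

                /* ... existing breakpoint emulation and handler calls ... */
        out:
                preempt_enable_no_resched();
        }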
      
      Fixes: ead514d5 ("powerpc/kprobes: Add support for KPROBES_ON_FTRACE")
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      [mpe: Trim oops a little for formatting]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  6. 04 Oct, 2017 18 commits
  7. 03 Oct, 2017 1 commit
  8. 28 Sep, 2017 1 commit
    • cxl: Enable global TLBIs for cxl contexts · 03b8abed
      Frederic Barrat authored
      The PSL and nMMU need to see all TLB invalidations for the memory
      contexts used on the adapter. For the hash memory model, this is done
      by making all TLBIs global as soon as the cxl driver is in use. For
      radix, we need something similar, but we can be more precise and only
      make global the invalidations for contexts actually used by the
      device.
      
      The new mm_context_add_copro() API increments the 'active_cpus' count
      for the contexts attached to the cxl adapter. As soon as there is more
      than one active cpu, the TLBIs for the context become global. The
      active cpu count must be decremented when detaching, both to restore
      locality where possible and to avoid overflowing the counter.
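
      As a rough illustration of how a driver uses the new API around
      attach/detach (the cxl_* function names here are hypothetical; only
      mm_context_add_copro()/mm_context_remove_copro() come from this
      change):

        /* Attach: make the context's invalidations global while the
         * adapter can translate on its behalf. */
        static int cxl_attach_mm(struct mm_struct *mm)
        {
                mm_context_add_copro(mm);       /* bumps active_cpus */
                /* ... program the context into the PSL/nMMU ... */
                return 0;
        }

        /* Detach: lets the context go back to local invalidations on
         * radix; on hash the count stays elevated (see below). */
        static void cxl_detach_mm(struct mm_struct *mm)
        {
                /* ... tear down the hardware context first ... */
                mm_context_remove_copro(mm);    /* drops active_cpus */
        }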
      
      The hash memory model support is somewhat limited, as we can't
      decrement the active cpus count when mm_context_remove_copro() is
      called, because we can't flush the TLB for a mm on hash. So TLBIs
      remain global on hash.
      Signed-off-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
      Fixes: f24be42a ("cxl: Add psl9 specific code")
      Tested-by: Alistair Popple <alistair@popple.id.au>
      [mpe: Fold in updated comment on the barrier from Fred]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>