1. 26 Apr, 2016 5 commits
  2. 22 Apr, 2016 2 commits
  3. 21 Apr, 2016 6 commits
    • tool/perf: Add sample_reg_mask to include all perf_regs · bb62bad6
      Madhavan Srinivasan authored
      Add a sample_reg_mask array covering the pt_regs registers.
      This is needed for printing the supported regs (-I? option).
      Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
      Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      bb62bad6
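
      A minimal sketch of the kind of table this adds, assuming the
      SMPL_REG()/SMPL_REG_END helpers from tools/perf/util/perf_regs.h and the
      PERF_REG_POWERPC_* ids from the companion kernel patches; the entries
      shown are illustrative, not a copy of the patch:

        #include <asm/perf_regs.h>
        #include "../../util/perf_regs.h"

        /* One entry per register that can be listed for -I?; most GPRs and
         * special registers are omitted here for brevity. */
        const struct sample_reg sample_reg_masks[] = {
                SMPL_REG(r0, PERF_REG_POWERPC_R0),
                SMPL_REG(r1, PERF_REG_POWERPC_R1),
                SMPL_REG(nip, PERF_REG_POWERPC_NIP),
                SMPL_REG(msr, PERF_REG_POWERPC_MSR),
                SMPL_REG(link, PERF_REG_POWERPC_LINK),
                SMPL_REG_END
        };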
    • tools/perf: Map the ID values with register names · dc642e83
      Anju T authored
      Map ID values with corresponding register names. These names are then
      displayed when the user runs perf record with the -I option followed by
      perf report/script with the -D option.
      
      To test this patchset, for example:
      
        $ perf record -I ls   # record machine state at interrupt
        $ perf script -D      # read the perf.data file
      
      Sample output obtained with this patch looks as follows:
      
        496768515470 0x1988 [0x188]: PERF_RECORD_SAMPLE(IP, 0x1): 4522/4522:
        0xc0000000001e538c period: 1 addr: 0
        ... intr regs: mask 0x7ffffffffff ABI 64-bit
        .... r0    0xc0000000001e5e34
        .... r1    0xc000000fe733f9a0
        .... r2    0xc000000001523100
        .... r3    0xc000000ffaadeb60
        .... r4    0xc000000003456800
        .... r5    0x73a9b5e000
        .... r6    0x1e000000
        .... r7    0x0
        .... r8    0x0
        .... r9    0x0
        .... r10   0x1
        .... r11   0x0
        .... r12   0x24022822
        .... r13   0xc00000000feec180
        .... r14   0x0
        .... r15   0xc000001e4be18800
        .... r16   0x0
        .... r17   0xc000000ffaac5000
        .... r18   0xc000000fe733f8a0
        .... r19   0xc000000001523100
        .... r20   0xc00000000009fd1c
        .... r21   0xc000000fcaa69000
        .... r22   0xc0000000001e4968
        .... r23   0xc000000001523100
        .... r24   0xc000000fe733f850
        .... r25   0xc000000fcaa69000
        .... r26   0xc000000003b8fcf0
        .... r27   0xfffffffffffffead
        .... r28   0x0
        .... r29   0xc000000fcaa69000
        .... r30   0x1
        .... r31   0x0
        .... nip   0xc0000000001dd320
        .... msr   0x9000000000009032
        .... orig_r3 0xc0000000001e538c
        .... ctr   0xc00000000009d550
        .... link  0xc0000000001e5e34
        .... xer   0x0
        .... ccr   0x84022882
        .... softe 0x0
        .... trap  0xf01
        .... dar   0x0
        .... dsisr 0xf00040060000004
         ... thread: :4522:4522
         ...... dso: /root/.debug/.build-id/b0/ef11b1a1629e62ac9de75199117ee5ef9469e9
                   :4522 4522 496.768515: 1 cycles: c0000000001e538c
                   .perf_event_context_sched_in (/boot/vmlinux)
      Signed-off-by: Anju T <anju@linux.vnet.ibm.com>
      Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      dc642e83
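
      As a rough illustration of the id-to-name mapping described above, the
      tools-side powerpc perf_regs header could carry something along these
      lines; the table layout and helper name are assumptions, not a quote of
      the patch:

        #include <asm/perf_regs.h>

        /* Indexed by PERF_REG_POWERPC_* id; perf report/script -D prints
         * these names next to the sampled values. Gaps elided for brevity. */
        static const char *reg_names[] = {
                [PERF_REG_POWERPC_R0]    = "r0",
                [PERF_REG_POWERPC_R1]    = "r1",
                [PERF_REG_POWERPC_NIP]   = "nip",
                [PERF_REG_POWERPC_MSR]   = "msr",
                [PERF_REG_POWERPC_DSISR] = "dsisr",
        };

        static inline const char *perf_reg_name(int id)
        {
                return (id < PERF_REG_POWERPC_MAX) ? reg_names[id] : NULL;
        }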
    • powerpc/perf: Add support for sampling interrupt register state · ed4a4ef8
      Anju T authored
      The perf infrastructure uses a bit mask to determine the valid registers
      to display. Define a register mask for the supported registers defined in
      uapi/asm/perf_regs.h. The bit positions also correspond to the register
      IDs the perf infrastructure uses to fetch the register values.
      CONFIG_HAVE_PERF_REGS enables sampling of the interrupted machine state.
      Signed-off-by: Anju T <anju@linux.vnet.ibm.com>
      [mpe: Add license, use CONFIG_PPC64, fix 32-bit build]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      ed4a4ef8
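
      A hedged sketch of the kernel side this describes: a mask covering all
      ids defined in uapi/asm/perf_regs.h, plus an id-to-pt_regs-offset lookup
      used to fetch values. The macro and table names here are assumptions for
      illustration only:

        #include <linux/types.h>
        #include <linux/bug.h>
        #include <linux/stddef.h>
        #include <asm/ptrace.h>
        #include <asm/perf_regs.h>

        /* Every id below PERF_REG_POWERPC_MAX is supported. */
        #define PERF_POWERPC_MASK ((1ULL << PERF_REG_POWERPC_MAX) - 1)

        #define PT_REGS_OFFSET(id, r) [id] = offsetof(struct pt_regs, r)

        static unsigned int pt_regs_offset[PERF_REG_POWERPC_MAX] = {
                PT_REGS_OFFSET(PERF_REG_POWERPC_R0, gpr[0]),
                /* ... remaining GPRs and special registers ... */
                PT_REGS_OFFSET(PERF_REG_POWERPC_NIP, nip),
                PT_REGS_OFFSET(PERF_REG_POWERPC_MSR, msr),
                PT_REGS_OFFSET(PERF_REG_POWERPC_LINK, link),
        };

        u64 perf_reg_value(struct pt_regs *regs, int idx)
        {
                if (WARN_ON_ONCE((unsigned int)idx >= PERF_REG_POWERPC_MAX))
                        return 0;
                return regs_get_register(regs, pt_regs_offset[idx]);
        }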
    • powerpc/perf: Assign an id to each powerpc register · 1bfadabf
      Anju T authored
      The enum definition assigns an 'id' to each register in "struct pt_regs"
      of arch/powerpc. The order of these values in the enum definition is
      based on the order of the members in pt_regs.
      Signed-off-by: Anju T <anju@linux.vnet.ibm.com>
      [mpe: Rename LNK to LINK, use _UAPI_ASM for include guards]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      1bfadabf
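
      The register names in the perf script output shown above suggest an enum
      along these lines in uapi/asm/perf_regs.h; treat the exact identifiers as
      an assumption rather than a quote of the patch:

        enum perf_event_powerpc_regs {
                PERF_REG_POWERPC_R0,
                PERF_REG_POWERPC_R1,
                /* ... R2 through R31, in pt_regs order ... */
                PERF_REG_POWERPC_NIP,
                PERF_REG_POWERPC_MSR,
                PERF_REG_POWERPC_ORIG_R3,
                PERF_REG_POWERPC_CTR,
                PERF_REG_POWERPC_LINK,
                PERF_REG_POWERPC_XER,
                PERF_REG_POWERPC_CCR,
                PERF_REG_POWERPC_SOFTE,
                PERF_REG_POWERPC_TRAP,
                PERF_REG_POWERPC_DAR,
                PERF_REG_POWERPC_DSISR,
                PERF_REG_POWERPC_MAX,
        };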
    • powerpc/book3s64: Remove __end_handlers marker · 057b6d7e
      Hari Bathini authored
      The __end_handlers marker was intended to mark the end of the code that
      gets called from the exception prologs, but it hasn't kept pace with code
      changes. Case in point: slb_miss_realmode is called from exception prolog
      code but isn't below the __end_handlers marker. So the marker is little
      more than a comment, and a misleading one when it falls out of sync with
      the code, as it has now. Avoid this confusion by adding a better comment
      and removing the __end_handlers marker altogether.
      Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      057b6d7e
    • powerpc/book3s64: Fix branching to OOL handlers in relocatable kernel · 8ed8ab40
      Hari Bathini authored
      Some of the interrupt vectors on 64-bit POWER server processors are only
      32 bytes long (8 instructions), which is not enough for the full
      first-level interrupt handler. For these we need to branch to an
      out-of-line (OOL) handler. But when we are running a relocatable kernel,
      the interrupt vectors up to the __end_interrupts marker are copied down
      to real address 0x100. So branching to labels (i.e. OOL handlers) outside
      this section must be handled differently to account for the relocatable
      kernel (see LOAD_HANDLER()), which needs at least 4 instructions.
      
      However, branching from an interrupt vector means that we corrupt the
      CFAR (come-from address register) on POWER7 and later processors, as
      mentioned in commit 1707dd16. So EXCEPTION_PROLOG_0 (6 instructions),
      which contains the part up to the point where the CFAR is saved in the
      PACA, should be part of the short interrupt vectors before we branch out
      to the OOL handlers.
      
      But as mentioned already, there are interrupt vectors on 64-bit POWER
      server processors that are only 32 bytes long (like vectors 0x4f00,
      0x4f20, etc.), which cannot accommodate both of the above at the same
      time owing to space constraints. Currently, in these interrupt vectors,
      we simply branch out to the OOL handlers without using LOAD_HANDLER(),
      which leaves us vulnerable when running a relocatable kernel (e.g. the
      kdump case). While this has been the case for some time now and kdump is
      used widely, we were fortunate not to see any problems so far, for three
      reasons:
      
        1. In almost all cases, the production kernel (relocatable) is used
           for kdump as well, which means that the crashed kernel's OOL handler
           is at the same place we end up branching to from the short interrupt
           vector of the kdump kernel.
        2. Also, the OOL handler was unlikely to be the reason for the crash in
           almost all kdump scenarios, which meant we had a sane OOL handler
           from the crashed kernel that we branched to.
        3. On most 64-bit POWER server processors, the page size is large
           enough that marking the interrupt vector code as executable (see
           commit 429d2e83) also marks the OOL handler code from the crashed
           kernel, which sits right below the interrupt vector code of the
           kdump kernel, as executable.
      
      Let us fix this by moving the __end_interrupts marker down past the OOL
      handlers, to make sure that we also copy the OOL handlers to real address
      0x100 when running a relocatable kernel.
      
      This fix has been tested successfully in a kdump scenario, on an LPAR
      with 4K page size, using different default/production and kdump kernels.
      
      Also tested by manually corrupting the OOL handlers in the first kernel
      and then kdump'ing, and then causing the OOL handlers to fire - mpe.
      
      Fixes: c1fb6816 ("powerpc: Add relocation on exception vector handlers")
      Cc: stable@vger.kernel.org
      Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
      Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      8ed8ab40
  4. 18 Apr, 2016 1 commit
  5. 14 Apr, 2016 5 commits
    • powerpc/livepatch: Add live patching support on ppc64le · 85baa095
      Michael Ellerman authored
      Add the kconfig logic & assembly support for handling live patched
      functions. This depends on DYNAMIC_FTRACE_WITH_REGS, which in turn
      depends on the new -mprofile-kernel ftrace ABI, which is currently only
      supported on ppc64le.
      
      Live patching is handled by a special ftrace handler. This means it runs
      from ftrace_caller(). The live patch handler modifies the NIP so as to
      redirect the return from ftrace_caller() to the new patched function.
      
      However there is one particularly tricky case we need to handle.
      
      If a function A calls another function B, and it is known at link time
      that they share the same TOC, then A will not save or restore its TOC,
      and will call the local entry point of B.
      
      When we live patch B, we replace it with a new function C, which may
      not have the same TOC as A. At live patch time it's too late to modify A
      to do the TOC save/restore, so the live patching code must interpose
      itself between A and C, and do the TOC save/restore that A omitted.
      
      An additional complication is that the livepatch code cannot create a
      stack frame in order to save the TOC. That is because if C takes > 8
      arguments, or is varargs, A will have written the arguments for C in
      A's stack frame.
      
      To solve this, we introduce a "livepatch stack" which grows upward from
      the base of the regular stack, and is used to store the TOC & LR when
      calling a live patched function.
      
      When the patched function returns, we retrieve the real LR & TOC from
      the livepatch stack, restore them, and pop the livepatch "stack frame".
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Reviewed-by: Torsten Duwe <duwe@suse.de>
      Reviewed-by: Balbir Singh <bsingharora@gmail.com>
      85baa095
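
      The TOC/LR handling above is done in assembly in the ftrace/livepatch
      handler; purely as a conceptual C illustration (not the kernel code
      itself), the livepatch stack behaves roughly like this:

        /* One frame of saved caller state per call into a patched function. */
        struct lp_frame {
                unsigned long toc;      /* caller's TOC pointer (r2) */
                unsigned long lr;       /* caller's link register    */
        };

        /* Before redirecting A's call to the patched function C, save A's
         * TOC & LR; the livepatch stack grows upward from the stack base. */
        static void lp_push(struct lp_frame **sp, unsigned long toc, unsigned long lr)
        {
                (*sp)->toc = toc;
                (*sp)->lr  = lr;
                (*sp)++;
        }

        /* When C returns, pop the frame and restore A's TOC & LR. */
        static void lp_pop(struct lp_frame **sp, unsigned long *toc, unsigned long *lr)
        {
                (*sp)--;
                *toc = (*sp)->toc;
                *lr  = (*sp)->lr;
        }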
    • powerpc/livepatch: Add livepatch stack to struct thread_info · 5d31a96e
      Michael Ellerman authored
      In order to support live patching we need to maintain an alternate
      stack of TOC & LR values. We use the base of the stack for this, and
      store the "live patch stack pointer" in struct thread_info.
      
      Unlike the other fields of thread_info, we cannot statically initialise
      that value, so it must be done at run time.
      
      This patch just adds the code to support that; it is not enabled until
      the next patch, which actually adds live patch support.
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Acked-by: Balbir Singh <bsingharora@gmail.com>
      5d31a96e
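
      A minimal sketch of that run-time initialisation, assuming a
      livepatch_sp field in struct thread_info and the usual ppc64 layout with
      thread_info at the base of the kernel stack (the names and the one-word
      offset past the stack-end marker are assumptions):

        #ifdef CONFIG_LIVEPATCH
        static inline void klp_init_thread_info(struct thread_info *ti)
        {
                /* The livepatch stack starts just above thread_info at the
                 * base of the stack; skip one word for the stack-end marker. */
                ti->livepatch_sp = (unsigned long *)(ti + 1) + 1;
        }
        #else
        static inline void klp_init_thread_info(struct thread_info *ti) { }
        #endif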
    • powerpc/livepatch: Add livepatch header · f63e6d89
      Michael Ellerman authored
      Add the powerpc specific livepatch definitions. In particular we provide
      a non-default implementation of klp_get_ftrace_location().
      
      This is required because the location of the mcount call is not constant
      when using -mprofile-kernel (which we always do for live patching).
      Signed-off-by: Torsten Duwe <duwe@suse.de>
      Signed-off-by: Balbir Singh <bsingharora@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      f63e6d89
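
      A hedged sketch of that non-default implementation, assuming the
      -mprofile-kernel mcount call always lands within a small window after
      the function entry (the 16-byte figure here is an assumption):

        #include <linux/ftrace.h>

        #define klp_get_ftrace_location klp_get_ftrace_location
        static inline unsigned long klp_get_ftrace_location(unsigned long faddr)
        {
                /* The mcount call is not at faddr itself, so let ftrace
                 * search a small range past the function entry point. */
                return ftrace_location_range(faddr, faddr + 16);
        }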
    • livepatch: Allow architectures to specify an alternate ftrace location · 28e7cbd3
      Michael Ellerman authored
      When livepatch tries to patch a function it takes the function address
      and asks ftrace to install the livepatch handler at that location.
      ftrace will look for an mcount call site at that exact address.
      
      On powerpc the mcount location is not the first instruction of the
      function, and in fact it's not at a constant offset from the start of
      the function. To accommodate this, add a hook which arch code can
      override to customise the behaviour.
      Signed-off-by: Torsten Duwe <duwe@suse.de>
      Signed-off-by: Balbir Singh <bsingharora@gmail.com>
      Signed-off-by: Petr Mladek <pmladek@suse.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      28e7cbd3
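
      The hook presumably defaults to the function address itself when an
      architecture does not override it; a sketch of that generic fallback
      (the #ifndef/override pattern is an assumption):

        /* Arch headers may provide their own klp_get_ftrace_location();
         * otherwise the mcount site is taken to be the function address. */
        #ifndef klp_get_ftrace_location
        static inline unsigned long klp_get_ftrace_location(unsigned long faddr)
        {
                return faddr;
        }
        #endif

      The livepatch core would then pass the location returned by this hook,
      rather than the raw function address, to ftrace when registering the
      handler.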
    • ftrace: Make ftrace_location_range() global · 04cf31a7
      Michael Ellerman authored
      In order to support live patching on powerpc we would like to call
      ftrace_location_range(), so make it global.
      Signed-off-by: Torsten Duwe <duwe@suse.de>
      Signed-off-by: Balbir Singh <bsingharora@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      04cf31a7
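
      In effect this just exposes the existing prototype so arch code (such as
      the powerpc livepatch header above) can call it; a sketch of the
      declaration as it would appear in include/linux/ftrace.h:

        unsigned long ftrace_location_range(unsigned long start, unsigned long end);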
  6. 12 Apr, 2016 5 commits
  7. 11 Apr, 2016 16 commits