1. 28 Apr, 2012 2 commits
    • ftrace/x86: Remove the complex ftrace NMI handling code · 4a6d70c9
      Steven Rostedt authored
      Because ftrace function tracing must modify code that can be executed
      in NMI context, which is not stopped by stop_machine(), ftrace had to
      use a complex algorithm with multiple stages of setup and memory
      barriers to make it work.
      
      With the new breakpoint method, this is no longer required. The code
      can be modified without any problem in NMI context, and without
      stop_machine() altogether. Remove the complex code, as it is no
      longer needed.
      
      Also, many of the notrace annotations could be removed from the NMI
      code, as it is now safe to trace it. The exception is do_nmi() itself,
      which does some special work to handle running on the debug stack.
      The breakpoint method can cause NMIs to double-nest the debug stack
      if it is not set up properly, and that setup is done in do_nmi(), so
      that function must not be traced.
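
      For reference, keeping a function away from the tracer is just a
      matter of the notrace annotation on its definition; a minimal sketch
      (the exact do_nmi() signature is approximate here):

        /*
         * Sketch only: notrace keeps the compiler from emitting the
         * mcount call for this function, so the function tracer can
         * never run inside it.
         */
        dotraplinkage notrace void do_nmi(struct pt_regs *regs, long error_code)
        {
                /*
                 * Set up / verify the debug stack so that nested NMIs and
                 * the breakpoint handler do not clobber each other, then
                 * handle the NMI itself.
                 */
        }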
      
      (Note: arch/sh may want to do the same.)
      
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace/x86: Have arch x86_64 use breakpoints instead of stop machine · 08d636b6
      Steven Rostedt authored
      This method changes x86 to add a breakpoint to the mcount locations
      instead of calling stop_machine().
      
      Now that iret can be handled by NMIs, we perform the following to
      update code:
      
      1) Add a breakpoint to all locations that will be modified
      
      2) Sync all cores
      
      3) Update all locations to be either a nop or call, except for the
         first byte, which still holds the breakpoint op
      
      4) Sync all cores
      
      5) Replace the breakpoint with the first byte of the new code.
      
      6) Sync all cores
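
      A rough sketch of this flow in C, with illustrative helper names
      (run_sync() and is_ftrace_site() are stand-ins rather than the actual
      kernel symbols; modifying_ftrace_code is the flag mentioned in the
      note below):

        static const unsigned char brk = 0xcc;  /* int3 opcode */

        /* Illustrative: patch one mcount site to a new nop/call */
        static void update_ftrace_site(unsigned long ip,
                                       const unsigned char *new_insn, int size)
        {
                probe_kernel_write((void *)ip, &brk, 1); /* 1) add breakpoint */
                run_sync();                              /* 2) sync all cores */

                /* 3) write the rest of the nop/call, leaving the int3 byte */
                probe_kernel_write((void *)(ip + 1), new_insn + 1, size - 1);
                run_sync();                              /* 4) sync all cores */

                /* 5) replace the breakpoint with the new code's first byte */
                probe_kernel_write((void *)ip, new_insn, 1);
                run_sync();                              /* 6) sync all cores */
        }

        /*
         * The int3 trap checks unlikely(modifying_ftrace_code) and, for a
         * breakpoint sitting on an mcount site, simply steps over it:
         */
        static int handle_ftrace_int3(struct pt_regs *regs) /* illustrative */
        {
                if (!is_ftrace_site(regs->ip - 1))  /* ip is past the int3 byte */
                        return 0;                   /* not one of ours */
                regs->ip += MCOUNT_INSN_SIZE - 1;   /* resume after the site */
                return 1;
        }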
      
      [
        Added updates that Masami suggested:
         Use unlikely(modifying_ftrace_code) in int3 trap to keep kprobes efficient.
         Don't use NOTIFY_* in ftrace handler in int3 as it is not a notifier.
      ]
      
      Cc: H. Peter Anvin <hpa@zytor.com>
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  2. 24 Apr, 2012 3 commits
    • ring-buffer: Add per_cpu ring buffer control files · 438ced17
      Vaibhav Nagarnaik authored
      Add a debugfs entry called buffer_size_kb under the per_cpu/ folder
      for each CPU, to control the size of each CPU's ring buffer
      independently.
      
      If the global file buffer_size_kb is used to set the size, all of the
      individual ring buffers are adjusted to the given size, and
      buffer_size_kb reports the common size to maintain backward
      compatibility.
      
      If the buffer_size_kb file under the per_cpu/ directory is used to
      change the buffer size of a specific CPU, only that CPU's ring buffer
      is resized. When tracing/buffer_size_kb is then read, it reports 'X'
      to indicate that the sizes of the per-CPU ring buffers are no longer
      all equal.
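
      A minimal sketch of the dispatch this implies; the helper below is
      hypothetical, and the cpu_id argument to ring_buffer_resize() along
      with RING_BUFFER_ALL_CPUS are assumptions based on this description:

        /*
         * Hypothetical helper: route a write to the global buffer_size_kb
         * file, or to a per_cpu/cpuN/buffer_size_kb file, to the right
         * resize call.
         */
        static int set_buffer_size_kb(struct trace_array *tr,
                                      unsigned long size_kb, int cpu_id)
        {
                unsigned long size = size_kb << 10;     /* KB -> bytes */

                if (cpu_id == RING_BUFFER_ALL_CPUS)
                        /* global file: same size for every per-CPU buffer */
                        return ring_buffer_resize(tr->buffer, size,
                                                  RING_BUFFER_ALL_CPUS);

                /*
                 * per-CPU file: only this CPU's buffer is touched; reading
                 * the global buffer_size_kb then reports 'X' if the sizes
                 * differ.
                 */
                return ring_buffer_resize(tr->buffer, size, cpu_id);
        }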
      
      Link: http://lkml.kernel.org/r/1328212844-11889-1-git-send-email-vnagarnaik@google.com
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Michael Rubin <mrubin@google.com>
      Cc: David Sharp <dhsharp@google.com>
      Cc: Justin Teravest <teravest@google.com>
      Signed-off-by: Vaibhav Nagarnaik <vnagarnaik@google.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Remove an unneeded check in trace_seq_buffer() · 5a26c8f0
      Dan Carpenter authored
      memcpy() returns a pointer to its destination buffer. Hopefully, that
      is not NULL here, or we would already have Oopsed.
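
      For context, memcpy() is defined to return its first (destination)
      argument, so a NULL check on its return value can never trigger for a
      valid destination; a minimal illustration of the redundant pattern:

        #include <string.h>

        /*
         * memcpy() always returns its destination argument, so checking
         * the return value for NULL is dead code: it could only be NULL
         * if 'dst' were NULL, and then the copy would have faulted first.
         */
        void copy_example(char *dst, const char *src, size_t len)
        {
                void *ret = memcpy(dst, src, len);  /* ret == dst, always */

                if (!ret) {             /* never true for a valid dst */
                        /* unreachable */
                }
        }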
      
      Link: http://lkml.kernel.org/r/20120420063145.GA22649@elgon.mountain
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro>
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Add percpu buffers for trace_printk() · 07d777fe
      Steven Rostedt authored
      Currently, trace_printk() writes into a single buffer to calculate
      the size and format needed to save the trace. To do this safely in
      an SMP environment, a spin_lock() is taken so that only one writer
      at a time can use the buffer. But this can also affect what is being
      traced, and adds synchronization that would not be there otherwise.
      
      Ideally, percpu buffers would be used, but since trace_printk() is
      only used in development, having per-cpu buffers for something that
      is never used is a waste of space. Thus, the trace_bprintk() format
      section is changed to hold static fmts as well as dynamic ones. Then
      at boot up, we can check whether the section that holds the
      trace_printk formats is non-empty; if it contains something, we know
      a trace_printk() has been added to the kernel, and the trace_printk
      per-cpu buffers are allocated at that time. The same check is also
      done at module load time, in case a module that contains a
      trace_printk() is added.
      
      Once the buffers are allocated, they are never freed. If you use
      a trace_printk() then you should know what you are doing.
      
      A buffer is made for each type of context:
      
        normal
        softirq
        irq
        nmi
      
      The context is checked and the appropriate buffer is used. This
      allows for totally lockless usage of trace_printk(), which no longer
      even needs to disable interrupts.
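
      A sketch of how the per-context buffer selection could look; the
      struct, the names, and the static definition are illustrative (the
      patch allocates the buffers only once a trace_printk() user is
      detected, and preemption handling is omitted):

        #include <linux/hardirq.h>  /* in_nmi(), in_irq(), in_softirq() */
        #include <linux/percpu.h>

        #define TRACE_PRINTK_BUF_SIZE 1024      /* illustrative size */

        /* One buffer per context level on each CPU. */
        struct printk_ctx_bufs {
                char normal[TRACE_PRINTK_BUF_SIZE];
                char softirq[TRACE_PRINTK_BUF_SIZE];
                char irq[TRACE_PRINTK_BUF_SIZE];
                char nmi[TRACE_PRINTK_BUF_SIZE];
        };
        static DEFINE_PER_CPU(struct printk_ctx_bufs, printk_ctx_bufs);

        /*
         * Pick the buffer matching the current context. Because each
         * context level on a CPU has its own buffer, no lock is taken
         * and interrupts are never disabled.
         */
        static char *get_trace_printk_buf(void)
        {
                struct printk_ctx_bufs *b = this_cpu_ptr(&printk_ctx_bufs);

                if (in_nmi())
                        return b->nmi;
                if (in_irq())
                        return b->irq;
                if (in_softirq())
                        return b->softirq;
                return b->normal;
        }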

      Requested-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  3. 13 Apr, 2012 2 commits
    • Merge tag 'v3.4-rc2' into perf/core · a385ec4f
      Ingo Molnar authored
      Merge Linux 3.4-rc2: we were on v3.3, update the base.
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • Merge tag 'perf-core-for-mingo' of... · 659c36fc
      Ingo Molnar authored
      Merge tag 'perf-core-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core
      
      Fixes and improvements for perf/core:
      
      . Overhaul the tools/ makefiles, gluing them to the top level Makefile, from
        Borislav Petkov.
      
      . Move the UI files from tools/perf/util/ui/ to tools/perf/ui/. Also move
        the GTK+ browser to tools/perf/ui/gtk/, from Namhyung Kim.
      
      . Only fall back to the sw cycles counter on ENOENT for the hw
        cycles, from Robert Richter.

      . Trivial fixes, from Robert Richter.
      
      . Handle the autogenerated bison/flex files better, from Namhyung and Jiri Olsa.
      
      . Navigate jump instructions in the annotate browser: just press
        enter or ->. It still needs support for a jump navigation history,
        i.e. to go back.
      
      . Search string in the annotate browser: same keys as vim:
           / forward
           n next backward/forward
           ? backward
      
      . Clarify number of events/samples in the report header, from Ashay Rane
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  4. 11 Apr, 2012 13 commits
  5. 08 Apr, 2012 1 commit
  6. 07 Apr, 2012 19 commits