  09 Dec, 2016 (8 commits)
    • tracing/fgraph: Have wakeup and irqsoff tracers ignore graph functions too · 1a414428
      Steven Rostedt (Red Hat) authored
      Currently both the wakeup and irqsoff tracers do not handle set_graph_notrace
      well. The ftrace infrastructure will ignore the return paths of all such
      functions, leaving them hanging without an end:
      
        # echo '*spin*' > set_graph_notrace
        # cat trace
        [...]
                _raw_spin_lock() {
                  preempt_count_add() {
                  do_raw_spin_lock() {
                update_rq_clock();
      
      Where the '*spin*' functions should have looked like this:
      
                _raw_spin_lock() {
                  preempt_count_add();
                  do_raw_spin_lock();
                }
                update_rq_clock();
      
      Instead, have the wakeup and irqsoff tracers ignore the functions that are
      set by set_graph_notrace, like the function_graph tracer does. Move the
      logic in the function_graph tracer into a header to allow the wakeup and
      irqsoff tracers to use it as well.
      
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
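
      As a rough illustration of the shared check described above (the helper
      name here is made up; only ftrace_graph_notrace_addr() is an existing
      tracer interface), the wakeup and irqsoff graph-entry callbacks can test
      the function against set_graph_notrace before recording it, just like
      the function_graph tracer:

        /* Hypothetical helper placed in a shared tracer header. */
        static inline bool fgraph_entry_ignored(struct ftrace_graph_ent *trace)
        {
                /* true when the function is listed in set_graph_notrace */
                return ftrace_graph_notrace_addr(trace->func);
        }

        static int wakeup_graph_entry(struct ftrace_graph_ent *trace)
        {
                if (fgraph_entry_ignored(trace))
                        return 0;  /* skip: its return is not traced either */
                /* ... record the function entry as before ... */
                return 1;
        }
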
    • fgraph: Handle a case where a tracer ignores set_graph_notrace · 794de08a
      Steven Rostedt (Red Hat) authored
      Both the wakeup and irqsoff tracers can use the function graph tracer when
      the display-graph option is set. The problem is that they ignore the
      set_graph_notrace file, and record the entry of functions that would be
      ignored by the function_graph tracer. This causes the biased trace->depth
      to be recorded into the ring buffer: set_graph_notrace works by adding a
      large negative number to the trace->depth when a graph function is to be
      ignored.
      
      On trace output, the graph function uses the depth to record a stack of
      functions. But since the depth is negative, it accesses the array with a
      negative index, resulting in an out-of-bounds access that can oops the
      kernel or corrupt data.
      
      Have the print functions handle cases where a tracer still records functions
      even when they are in set_graph_notrace.
      
      Also add warnings if the depth is below zero before accessing the array.
      
      Note, the function graph logic will still prevent the return of these
      functions from being recorded, which means that they will be left hanging
      without a return. For example:
      
         # echo '*spin*' > set_graph_notrace
         # echo 1 > options/display-graph
         # echo wakeup > current_tracer
         # cat trace
         [...]
            _raw_spin_lock() {
              preempt_count_add() {
              do_raw_spin_lock() {
            update_rq_clock();
      
      Where it should look like:
      
            _raw_spin_lock() {
              preempt_count_add();
              do_raw_spin_lock();
            }
            update_rq_clock();
      
      Cc: stable@vger.kernel.org
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Fixes: 29ad23b0 ("ftrace: Add set_graph_notrace filter")
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
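
      As an illustration of the guard described above (a simplified sketch,
      not the exact diff; FTRACE_NOTRACE_DEPTH and FTRACE_RETFUNC_DEPTH are
      existing tracer constants), the print path now sanity-checks the depth
      before using it as an index into the per-cpu function stack:

        /* A record from a tracer that ignored set_graph_notrace carries a
         * depth biased by a large constant, i.e. well below zero. */
        if (call->depth < -1)
                call->depth += FTRACE_NOTRACE_DEPTH;

        /* Only touch the array with a sane depth; warn otherwise. */
        if (call->depth < FTRACE_RETFUNC_DEPTH &&
            !WARN_ON_ONCE(call->depth < 0))
                cpu_data->enter_funcs[call->depth] = call->func;
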
    • tracing: Replace kmap with copy_from_user() in trace_marker writing · 656c7f0d
      Steven Rostedt (Red Hat) authored
      Instead of using get_user_pages_fast() and kmap_atomic() when writing
      to the trace_marker file, just allocate enough space on the ring buffer
      directly, and write into it via copy_from_user().
      
      Writing into the trace_marker file used to allocate a temporary buffer
      to perform the copy_from_user(), as we didn't want to write into the
      ring buffer if the copy failed. But as a trace_marker write is supposed
      to be extremely fast, and allocating memory causes other tracepoints to
      trigger, Peter Zijlstra suggested using get_user_pages_fast() and
      kmap_atomic() to keep the user space pages in memory and read them
      directly. But Henrik Austad had issues with this because it required
      taking the mm->mmap_sem, causing long delays with the write.
      
      Instead, just allocate the space in the ring buffer and use
      copy_from_user() directly. If it faults, return -EFAULT and write
      "<faulted>" into the ring buffer.
      
      Link: http://lkml.kernel.org/r/20161208124018.72dd0f86@gandalf.local.home
      
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Henrik Austad <henrik@austad.us>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Updates: d696b58c "tracing: Do not allocate buffer for trace_marker"
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
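
      A condensed sketch of the reserve-then-copy path described above
      (ring_buffer_lock_reserve(), ring_buffer_event_data() and struct
      print_entry are existing tracing interfaces; size limits and newline
      handling are omitted here):

        event = ring_buffer_lock_reserve(buffer, sizeof(*entry) + cnt + 1);
        if (!event)
                return -EBADF;  /* ring buffer disabled */

        entry = ring_buffer_event_data(event);

        /* Copy straight from user space into the reserved slot.  On a
         * fault, keep the event but record "<faulted>" instead. */
        if (copy_from_user(&entry->buf, ubuf, cnt)) {
                memcpy(&entry->buf, "<faulted>", sizeof("<faulted>"));
                written = -EFAULT;
        } else {
                entry->buf[cnt] = '\0';
                written = cnt;
        }

        ring_buffer_unlock_commit(buffer, event);
        return written;
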
    • ftrace/x86_32: Set ftrace_stub to weak to prevent gcc from using short jumps to it · 847fa1a6
      Steven Rostedt (Red Hat) authored
      With new binutils, gcc may get smart with its optimization and change a jmp
      from a 5-byte jump to a 2-byte one, even though it is jumping to a global
      function. But that global function happened to be within a 2-byte jump's
      range, and gcc was able to optimize it. Unfortunately, that jump is also
      modified when function graph tracing begins. Since ftrace expected the
      jump to be 5 bytes, but it was only two, it overwrote code after the jump,
      causing a crash.
      
      This was fixed for x86_64 with commit 8329e818, with the same subject as
      this commit, but nothing was done for x86_32.
      
      Cc: stable@vger.kernel.org
      Fixes: d61f82d0 ("ftrace: use dynamic patching for updating mcount calls")
      Reported-by: Colin Ian King <colin.king@canonical.com>
      Tested-by: Colin Ian King <colin.king@canonical.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
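
      The fix itself is in the x86_32 entry assembly (declaring ftrace_stub
      via the WEAK() macro instead of as a plain global label), but the effect
      can be illustrated in C: a weak symbol may be overridden at link time,
      so the toolchain cannot assume the nearby definition is the final target
      and keeps the full 5-byte jump that ftrace knows how to patch.

        /* Illustration only, not the actual patch: a weak definition can be
         * replaced by another object file, so gcc/binutils will not relax a
         * jump to it into the 2-byte short form. */
        void __attribute__((weak)) ftrace_stub(void)
        {
        }
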
    • tracing: Allow benchmark to be enabled at early_initcall() · 9c1f6bb8
      Steven Rostedt (Red Hat) authored
      The trace event start-up selftests fail when the trace benchmark is
      enabled, because the benchmark is disabled during boot. It really only
      needs to be disabled before scheduling is set up, as it creates a thread.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
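
      A sketch of the gating this implies, with illustrative names: an
      early_initcall() flips a flag once it is safe to create kernel threads,
      and the benchmark refuses to start before that point (the registration
      side is sketched under "Do not start benchmark on boot up" below).

        static bool ok_to_run;

        /* Runs after scheduling is set up but before normal initcalls,
         * early enough for the trace event start-up selftests. */
        static __init int trace_benchmark_boot_init(void)
        {
                ok_to_run = true;
                return 0;
        }
        early_initcall(trace_benchmark_boot_init);
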
    • tracing: Have system enable return error if one of the events fail · 989a0a3d
      Steven Rostedt (Red Hat) authored
      If one of the events within a system fails to enable when "1" is written
      to the system's "enable" file, the write should return an error. Note,
      some events may still be enabled, but the user should know that something
      did go wrong.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
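
      A rough sketch of the behaviour described above, using hypothetical
      type and helper names rather than the real event-subsystem structures:
      the loop keeps enabling the remaining events but remembers and returns
      the first failure.

        static int system_enable_all(struct list_head *events)
        {
                struct event_file *file;        /* hypothetical */
                int ret = 0;

                list_for_each_entry(file, events, node) {
                        int r = event_enable(file);     /* hypothetical helper */

                        /* Keep going, but remember the first error so the
                         * write to the "enable" file can report it. */
                        if (r < 0 && !ret)
                                ret = r;
                }
                return ret;
        }
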
    • tracing: Do not start benchmark on boot up · 1dd349ab
      Steven Rostedt (Red Hat) authored
      Trace events can be enabled very early on boot up via a boot command line
      parameter. The benchmark tool creates a new thread to perform the trace
      event benchmarking. But at start up, it is called before scheduling is
      set up, and because it creates a new thread before the init thread is
      created, this crashes the kernel.
      
      Have the benchmark fail to register when started via the kernel command
      line.
      
      Also, since the registration of a tracepoint can now fail, return -ENOMEM
      instead of just warning if the thread cannot be created.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
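
      A sketch of the registration side, again with illustrative names: it
      refuses to run before the early_initcall gate shown earlier, and a
      failed thread creation now propagates -ENOMEM instead of only warning.

        int trace_benchmark_reg(void)
        {
                /* Still false while the kernel command line enables events. */
                if (!ok_to_run) {
                        pr_warn("trace benchmark cannot be started via kernel command line\n");
                        return -EBUSY;  /* any error: just refuse to register */
                }

                bm_event_thread = kthread_run(benchmark_event_kthread, NULL,
                                              "event_benchmark");
                if (IS_ERR(bm_event_thread))
                        return -ENOMEM; /* registration now fails cleanly */

                return 0;
        }
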
    • tracing: Have the reg function allow to fail · 8cf868af
      Steven Rostedt (Red Hat) authored
      Some tracepoints have a registration function that gets called when the
      tracepoint is enabled. There may be cases where the registration function
      must fail (for example, if it cannot allocate enough memory). In this
      case, the tracepoint should also fail to register; otherwise the user
      would not know why the tracepoint is not working.
      
      Cc: David Howells <dhowells@redhat.com>
      Cc: Seiji Aguchi <seiji.aguchi@hds.com>
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
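
      A self-contained sketch of the idea (types and names are made up; the
      real change threads an int return value through the tracepoint's
      regfunc and the trace event registration paths):

        /* The per-tracepoint registration hook now returns an error code. */
        typedef int (*tp_regfunc_t)(void);

        struct tp_like {
                tp_regfunc_t regfunc;   /* optional; used to return void */
        };

        static int tp_enable(struct tp_like *tp)
        {
                if (tp->regfunc) {
                        int ret = tp->regfunc();

                        /* e.g. -ENOMEM: the tracepoint stays unregistered
                         * and the caller learns why enabling it failed. */
                        if (ret)
                                return ret;
                }
                /* ... attach the probe and enable the tracepoint ... */
                return 0;
        }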