1. 27 Feb, 2010 1 commit
  2. 26 Feb, 2010 2 commits
  3. 25 Feb, 2010 7 commits
    • tracing: Simplify memory recycle of trace_define_field · 7b60997f
      Wenji Huang authored
      Stop freeing field->type, since it is not necessary.
      Reviewed-by: Li Zefan <lizf@cn.fujitsu.com>
      Signed-off-by: Wenji Huang <wenji.huang@oracle.com>
      LKML-Reference: <1266997226-6833-5-git-send-email-wenji.huang@oracle.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      7b60997f
    • tracing: Remove unnecessary variable in print_graph_return · c85f3a91
      Wenji Huang authored
      The "cpu" variable is declared at the start of the function and
      also within a branch, with the exact same initialization.
      
      Remove the local variable of the same name in the branch.
      Signed-off-by: Wenji Huang <wenji.huang@oracle.com>
      LKML-Reference: <1266997226-6833-3-git-send-email-wenji.huang@oracle.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      c85f3a91
    • tracing: Fix typo of info text in trace_kprobe.c · a5efd925
      Wenji Huang authored
      Signed-off-by: Wenji Huang <wenji.huang@oracle.com>
      LKML-Reference: <1266997226-6833-2-git-send-email-wenji.huang@oracle.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      a5efd925
    • tracing: Fix typo in prof_sysexit_enable() · 6574658b
      Wenji Huang authored
      Signed-off-by: Wenji Huang <wenji.huang@oracle.com>
      LKML-Reference: <1266997226-6833-1-git-send-email-wenji.huang@oracle.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      6574658b
    • tracing: Remove CONFIG_TRACE_POWER from kernel config · 1ab83a89
      Li Zefan authored
      The power tracer has been converted to power trace events.
      Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      LKML-Reference: <4B84D50E.4070806@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      1ab83a89
    • tracing: Fix ftrace_event_call alignment for use with gcc 4.5 · 86c38a31
      Jeff Mahoney authored
      GCC 4.5 introduces behavior that forces the alignment of structures
      to use the largest possible value. The default value is 32 bytes, so
      if some structures are defined with a 4-byte alignment while others
      have no alignment constraint at all, they will be aligned at 32 bytes.

      For things like the ftrace events, this results in a non-standard
      array. When initializing the ftrace subsystem, we traverse the
      _ftrace_events section and call the initialization callback for each
      event. When the structures are misaligned, we could be treating
      another part of the structure (or the zeroed-out space between them)
      as a function pointer.

      This patch forces the alignment of all the ftrace_event_call
      structures to 4 bytes.

      Without this patch, the kernel fails to boot very early when built
      with gcc 4.5.

      It's trivial to check the alignment of the members of the array, so
      it might be worthwhile to add something to the build system to do
      that automatically. Unfortunately, that only covers this case. I've
      asked one of the gcc developers about adding a warning when this
      condition is seen.
      
      Cc: stable@kernel.org
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      LKML-Reference: <4B85770B.6010901@suse.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      86c38a31
    • ftrace: Remove memory barriers from NMI code when not needed · 0c54dd34
      Steven Rostedt authored
      The code in stop_machine that modifies the kernel text has a bit
      of logic to handle the case of NMIs. stop_machine does not prevent
      NMIs from executing, and if an NMI were to trigger on another CPU
      as the modifying CPU is changing the NMI text, a GPF could result.
      
      To prevent the GPF, the NMI calls ftrace_nmi_enter() which may
      modify the code first, then any other NMIs will just change the
      text to the same content which will do no harm. The code that
      stop_machine called must wait for NMIs to finish while it changes
      each location in the kernel. That code may also change the text
      to what the NMI changed it to. The key is that the text will never
      change content while another CPU is executing it.
      
      To make the above work, the call to ftrace_nmi_enter() must also
      do a smp_mb() as well as atomic_inc().  But for applications like
      perf that require a high number of NMIs for profiling, this can have
      a dramatic effect on the system. Not only does it execute a full
      memory barrier on both nmi_enter() and nmi_exit(), it also modifies
      a global variable with an atomic operation. This kills
      performance on large SMP machines.
      
      Since the memory barriers are only needed when ftrace is in the
      process of modifying the text (which is seldom), this patch
      adds a "modifying_code" variable that gets set before stop machine
      is executed and cleared afterwards.
      
      The NMIs will check this variable and store it in a per-CPU
      "save_modifying_code" variable that they use to decide whether
      to run the memory barriers and the atomic decrement on NMI exit.
      Acked-by: default avatarPeter Zijlstra <peterz@infradead.org>
      Signed-off-by: default avatarSteven Rostedt <rostedt@goodmis.org>
      0c54dd34
  4. 24 Feb, 2010 13 commits
  5. 23 Feb, 2010 17 commits