1. 16 Nov, 2008 11 commits
    • tracing/ftrace: change the type of the init() callback · 1c80025a
      Frederic Weisbecker authored
      Impact: extend the ->init() method with the ability to fail
      
      This brings a way to know whether the initialization of a tracer
      succeeded. A tracer must return 0 on success and a traditional error
      code (e.g. -ENOMEM) if it fails.
      
      If a tracer fails to init, it is free to print a detailed warning; the
      tracing API will not. Switching to a new tracer will then just return
      the error from the init callback.
      
      Note: this will be used for the return tracer.
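      
      A minimal sketch of what an init() under this scheme could look like
      (the tracer, its buffer and the trace_array parameter below are
      illustrative assumptions, not code from this patch):
      
        #include <linux/slab.h>
        #include <linux/errno.h>
        #include <linux/printk.h>
        
        /* struct trace_array comes from kernel/trace/trace.h */
        
        #define MY_TRACER_BUF_SIZE 4096         /* hypothetical */
        
        static void *my_tracer_buf;             /* hypothetical */
        
        /* Return 0 on success; a negative errno makes switching to this
         * tracer fail cleanly instead of silently. */
        static int my_tracer_init(struct trace_array *tr)
        {
                my_tracer_buf = kmalloc(MY_TRACER_BUF_SIZE, GFP_KERNEL);
                if (!my_tracer_buf) {
                        pr_warn("my_tracer: cannot allocate trace buffer\n");
                        return -ENOMEM;
                }
                return 0;
        }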
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1c80025a
    • tracing/ftrace: fix unexpected -EINVAL when longest tracer name is set · e6e7a65a
      Frederic Weisbecker authored
      Impact: fix confusing write() -EINVAL when changing the tracer
      
      Commit d9e54076 reintroduced the bug that made setting a new tracer
      return -EINVAL when its name is the longest of all tracer names. This
      patch corrects it.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e6e7a65a
    • ftrace: make filtered functions effective on setting · ee02a2e5
      Steven Rostedt authored
      Impact: set filtered functions at time the filter is set
      
      It can be confusing when the set_filter_functions is set (or cleared)
      and the functions being recorded by the dynamic tracer do not
      match.
      
      This patch causes the code to be updated if the function tracer is
      enabled and the filter is changed.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ee02a2e5
    • ftrace: fix dyn ftrace filter · 982c350b
      Steven Rostedt authored
      Impact: correct implementation of dyn ftrace filter
      
      The old decisions made by the filter algorithm were complex and incorrect.
      This led to inconsistent enabling or disabling of functions when
      the filter was used.
      
      This patch simplifies that code and in doing so, corrects the usage
      of the filters.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      982c350b
    • ftrace: allow NULL pointers in mcount_loc · 20e5227e
      Steven Rostedt authored
      Impact: make ftrace_convert_nops() more permissive
      
      Due to the way different architecture linkers combine the data sections
      of mcount_loc (the section that lists all the locations that
      call mcount), zeros may end up in that section. This is usually
      due to alignment padding that the linker inserts as zeros.
      
      This patch makes the nop-conversion code skip any pointer in
      the mcount_loc section that is NULL.
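      
      A hedged sketch of the resulting loop shape (the section bounds and
      the conversion helper below are illustrative, not the actual
      ftrace_convert_nops() code):
      
        /* hypothetical helper that records the site and patches in a nop */
        extern void record_and_nop_call_site(unsigned long ip);
        
        /* Walk the mcount_loc section and convert each call site, skipping
         * the NULL entries the linker may have padded in. */
        static void convert_mcount_locs(unsigned long *start, unsigned long *end)
        {
                unsigned long *p;
        
                for (p = start; p < end; p++) {
                        if (!*p)                /* linker alignment padding */
                                continue;
                        record_and_nop_call_site(*p);
                }
        }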
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      20e5227e
    • ftrace: pass module struct to arch dynamic ftrace functions · 31e88909
      Steven Rostedt authored
      Impact: allow archs more flexibility on dynamic ftrace implementations
      
      Dynamic ftrace has largely been developed on x86. Since x86 does not
      have the same limitations as other architectures, the ftrace interaction
      between the generic code and the architecture specific code was not
      flexible enough to handle some of the issues that other architectures
      have.
      
      Most notable are module trampolines. Because of the limited branch
      distance from modules to core kernel code on some architectures, the
      module load code must create a trampoline that performs the larger
      jump into core kernel code.
      
      The problem arises when this happens to a call to mcount. Ftrace checks
      all code before modifying it and makes sure the current code is what
      it expects. Right now, there is not enough information to handle modifying
      module trampolines.
      
      This patch changes the API between generic dynamic ftrace code and
      the arch dependent code. There are now two functions for modifying code:
      
        ftrace_make_nop(mod, rec, addr) - convert the code at rec->ip into
             a nop, where the original text is calling addr. (mod is the
             module struct if called by module init)
      
        ftrace_make_caller(rec, addr) - convert the code at rec->ip, which
             should be a nop, into a call to addr.
      
      The record "rec" now has a new field called "arch" where the architecture
      can add any special attributes to each call site record.
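      
      A sketch of what an architecture back-end might provide under this API
      (parameter types are assumed from the description above, and the
      actual code patching is reduced to comments):
      
        int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
                            unsigned long addr)
        {
                /*
                 * 1. Read the instruction at rec->ip.
                 * 2. Verify it is a call to addr -- possibly routed through a
                 *    module trampoline, which is where mod and rec->arch help.
                 * 3. Replace it with an arch-specific nop.
                 */
                return 0;       /* negative errno if the code was unexpected */
        }
        
        int ftrace_make_caller(struct dyn_ftrace *rec, unsigned long addr)
        {
                /* The site is expected to currently be a nop; verify that
                 * and patch in a call to addr. */
                return 0;
        }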
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      31e88909
    • ftrace: replace raw_local_irq_save with local_irq_save · d51ad7ac
      Steven Rostedt authored
      Impact: fix lockdep disabling itself when function tracing is enabled
      
      The raw_local_irq_save calls used in ftrace are causing problems with
      lockdep (it thinks the irq flags are out of sync and disables itself
      with a warning).
      
      The raw ops here are not needed, and the normal local_irq_save is fine.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d51ad7ac
    • ftrace: do not process freed records · 918c1154
      Steven Rostedt authored
      Impact: keep from converting freed records
      
      When the tracer is started or stopped, it converts all code pointed
      to by the saved records into callers to ftrace or nops. When modules
      are unloaded, their records are freed, but they still exist within
      the record pages.
      
      This patch changes the code to skip over freed records.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      918c1154
    • ftrace: disable ftrace on anomalies in trace start and stop · b17e8a37
      Steven Rostedt authored
      Impact: robustly disable ftrace if an error occurs while starting or stopping tracing
      
      Currently only the initial conversion to nops will disable ftrace
      on an anomaly. But if an anomaly happens when starting or stopping the
      tracer, it will silently fail.
      
      This patch adds a check there too, to disable ftrace and warn if the
      conversion fails.
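      
      Roughly, the added check has this shape (the modify helper and the
      kill switch below are hypothetical stand-ins for the real internals):
      
        extern int __modify_call_site(struct dyn_ftrace *rec, int enable);  /* hypothetical */
        extern void disable_dynamic_ftrace(void);                           /* hypothetical */
        
        static void ftrace_start_stop_record(struct dyn_ftrace *rec, int enable)
        {
                int ret = __modify_call_site(rec, enable);
        
                if (ret) {
                        /* The code at rec->ip was not what we expected: warn
                         * and shut dynamic ftrace down rather than risk
                         * corrupting kernel text. */
                        WARN_ON_ONCE(1);
                        disable_dynamic_ftrace();
                }
        }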
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      b17e8a37
    • ftrace: remove condition from ftrace_record_ip · f3c7ac40
      Steven Rostedt authored
      Impact: let module functions be recorded when dyn ftrace not enabled
      
      When dynamic ftrace had a daemon and a hash to record the locations
      of mcount callers at run time, the recording needed to stop when
      ftrace was disabled. But now that the recording is done at compile time
      and the ftrace_record_ip is only called at boot up and when a module
      is loaded, we no longer need to check if ftrace_enabled is set.
      In fact, this breaks module load if it is not set because we skip
      over module functions.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f3c7ac40
  2. 13 Nov, 2008 8 commits
  3. 12 Nov, 2008 19 commits
    • tracing/function-return-tracer: call prepare_ftrace_return by registers · 1dc1c6ad
      Frederic Weisbecker authored
      Impact: Optimize a bit the function return tracer
      
      This patch changes the calling convention of prepare_ftrace_return to
      pass its arguments by register. This will optimize it a bit and
      prepare it to support dynamic tracing.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1dc1c6ad
    • tracing/function-return-tracer: make the function return tracer lockless · 62d59d17
      Frederic Weisbecker authored
      Impact: remove spinlocks and irq disabling in function return tracer.
      
      I've tried to figure out all of the race conditions that could happen
      when the tracer pushes or pops a return address trace to/from the
      current thread_info.
      
      Theory:
      
      _ One thread can only execute on one cpu at a time. So this code
        doesn't need to be SMP-safe. Just drop the spinlock.
      
      _ The only race could happen between the current thread and an
        interrupt. If an interrupt is raised, it will increase the index of
        the return stack storage, run to the end of its own tracing, and
        finally free the index it used. So we don't need to disable
        irqs.
      
      This is the theory. In practice, I've tested it on a two-core SMP and
      had no problem at all. Perhaps -tip testing could confirm it.
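      
      A minimal sketch of the push side under that reasoning (the structure,
      depth and field names below are assumptions for illustration; the real
      state lives in thread_info):
      
        #include <linux/errno.h>
        
        #define RET_STACK_DEPTH 50              /* assumed depth */
        
        struct ret_stack_entry {
                unsigned long ret;              /* original return address */
        };
        
        struct ret_trace_state {                /* assumed per-thread state */
                int curr_ret_stack;
                struct ret_stack_entry ret_stack[RET_STACK_DEPTH];
        };
        
        static int push_return_trace(struct ret_trace_state *ts, unsigned long ret_addr)
        {
                if (ts->curr_ret_stack >= RET_STACK_DEPTH - 1)
                        return -EBUSY;          /* stack full: drop the entry */
        
                /* Reserve the slot first: an interrupt firing after this
                 * increment simply uses the next slot, so no lock and no
                 * irq disabling is needed. */
                ts->curr_ret_stack++;
                ts->ret_stack[ts->curr_ret_stack].ret = ret_addr;
                return 0;
        }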
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      62d59d17
    • ftrace: rename trace_unlikely.c file · 94b80ffd
      Steven Rostedt authored
      Impact: File name change of trace_unlikely.c
      
      The "unlikely" name for the tracer is quite ugly. We renamed all the
      parts of it to "branch" and now it is time to rename the file too.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      94b80ffd
    • ftrace: add tracer called branch · 80e5ea45
      Steven Rostedt authored
      Impact: added new branch tracer
      
      Currently the tracing of branch profiling (likely and unlikely hits)
      is only activated by the iter_ctrl. This patch adds a tracer called
      "branch" that will just trace the branch profiling. The advantage
      of adding this tracer is that it can be added to the ftrace selftests
      on startup.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      80e5ea45
    • ftrace: rename unlikely iter_ctrl to branch · 9f029e83
      Steven Rostedt authored
      Impact: rename of iter_ctrl unlikely to branch
      
      The unlikely name is ugly. This patch converts the iter_ctrl command
      "unlikely" and "nounlikely" to "branch" and "nobranch" respectively.
      
      It also renames a lot of internal functions to use "branch" instead
      of "unlikely".
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9f029e83
    • trace: rename unlikely profiler to branch profiler · 2ed84eeb
      Steven Rostedt authored
      Impact: name change of unlikely tracer and profiler
      
      Ingo Molnar suggested changing the config from UNLIKELY_PROFILE
      to BRANCH_PROFILING. I never did like the "unlikely" name so I
      went one step further and renamed all the unlikely configurations
      to a "BRANCH" variant.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2ed84eeb
    • tracing/fastboot: fix perlcritic warning · 68f96c0c
      Stephen Hemminger authored
      Impact: cleanup
      
      Fix the following warning from the Perl syntax checking tool perlcritic.
      This tool is a lint-like tool that checks for Perl best practices.
      
      Loop iterator is not lexical at line 113, column 1.
      See page 108 of PBP.  (Severity: 5)
      Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      68f96c0c
    • tracing: finetune branch-tracer output · 68d119f0
      Ingo Molnar authored
      Steve suggested changing the output from this:
      
      >  bash-3471  [003]   357.014755: [ MISS ] sched_info_dequeued:sched_stats.h:177
      >  bash-3471  [003]   357.014756: [ .... ] update_curr:sched_fair.c:489
      >  bash-3471  [003]   357.014758: [ .... ] calc_delta_fair:sched_fair.c:411
      
      to this:
      
      >  bash-3471  [003]   357.014755: [ MISS ] sched_info_dequeued:sched_stats.h:177
      >  bash-3471  [003]   357.014756: [  ok  ] update_curr:sched_fair.c:489
      >  bash-3471  [003]   357.014758: [  ok  ] calc_delta_fair:sched_fair.c:411
      
      as it makes it clearer to the user what it means exactly.
      Acked-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      68d119f0
    • tracing: branch tracer, fix vdso crash · 2b7d0390
      Ingo Molnar authored
      Impact: fix bootup crash
      
      the branch tracer did not disable tracing in
      arch/x86/vdso/vclock_gettime.c, which caused bootup crashes such as:
      
        [  201.840097] init[1]: segfault at 7fffed3fe7c0 ip 00007fffed3fea2e sp 000077
      
      Also clean up the ugly ifdefs in arch/x86/kernel/vsyscall_64.c by
      creating a DISABLE_UNLIKELY_PROFILE facility that lets code turn off
      instrumentation on a per-file basis.
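      
      Conceptually, the facility works along these lines (only
      DISABLE_UNLIKELY_PROFILE is named by this patch; the config symbol and
      the instrumented macro are assumptions):
      
        /* In a file that must never be instrumented (vDSO / vsyscall code),
         * before any #include: */
        #define DISABLE_UNLIKELY_PROFILE
        
        /* ...and in the header that defines the profiling variant: */
        #if defined(CONFIG_UNLIKELY_PROFILE) && !defined(DISABLE_UNLIKELY_PROFILE)
        # define unlikely(x)    profiled_unlikely(x)            /* instrumented */
        #else
        # define unlikely(x)    __builtin_expect(!!(x), 0)      /* plain form */
        #endif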
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2b7d0390
    • tracing: branch tracer, tweak output · f88c4ae9
      Ingo Molnar authored
      Impact: modify the tracer output, to make it a bit easier to read
      
      Change the output from:
      
      >  bash-3471  [003]   357.014755: [INCORRECT] sched_info_dequeued:sched_stats.h:177
      >  bash-3471  [003]   357.014756: [correct] update_curr:sched_fair.c:489
      >  bash-3471  [003]   357.014758: [correct] calc_delta_fair:sched_fair.c:411
      
      to:
      
      >  bash-3471  [003]   357.014755: [ MISS ] sched_info_dequeued:sched_stats.h:177
      >  bash-3471  [003]   357.014756: [ .... ] update_curr:sched_fair.c:489
      >  bash-3471  [003]   357.014758: [ .... ] calc_delta_fair:sched_fair.c:411
      
      it's good to have fields aligned vertically, and the only important
      information is a prediction miss, so display only that information.
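      
      The formatting boils down to something like this (trace_seq_printf is
      the existing ftrace helper; the branch-entry field names are assumed):
      
        static void print_branch_entry(struct trace_seq *s,
                                       const struct trace_branch *entry)
        {
                trace_seq_printf(s, "[%s] %s:%s:%d\n",
                                 entry->correct ? " .... " : " MISS ",
                                 entry->func, entry->file, entry->line);
        }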
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f88c4ae9
    • tracing: likely/unlikely branch annotation tracer · 52f232cb
      Steven Rostedt authored
      Impact: new likely/unlikely branch tracer
      
      This patch adds a way to record the instances of the likely() and unlikely()
      branch condition annotations.
      
      When "unlikely" is set in /debugfs/tracing/iter_ctrl the unlikely conditions
      will be added to any of the ftrace tracers. The change takes effect when
      a new tracer is passed into the current_tracer file.
      
      For example:
      
       bash-3471  [003]   357.014755: [INCORRECT] sched_info_dequeued:sched_stats.h:177
       bash-3471  [003]   357.014756: [correct] update_curr:sched_fair.c:489
       bash-3471  [003]   357.014758: [correct] calc_delta_fair:sched_fair.c:411
       bash-3471  [003]   357.014759: [correct] account_group_exec_runtime:sched_stats.h:356
       bash-3471  [003]   357.014761: [correct] update_curr:sched_fair.c:489
       bash-3471  [003]   357.014763: [INCORRECT] calc_delta_fair:sched_fair.c:411
       bash-3471  [003]   357.014765: [correct] calc_delta_mine:sched.c:1279
      
      This shows the normal tracer heading, as well as whether the condition was
      correct ("[correct]") or mistaken ("[INCORRECT]"), followed by the function,
      file name and line number.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      52f232cb
    • tracing: profile likely and unlikely annotations · 1f0d69a9
      Steven Rostedt authored
      Impact: new unlikely/likely profiler
      
      Andrew Morton recently suggested having an in-kernel way to profile
      likely and unlikely macros. This patch achieves that goal.
      
      When configured, every(*) likely and unlikely macro gets a counter attached
      to it. When the condition is hit, the hits and misses of that condition
      are recorded. These numbers can later be retrieved by:
      
        /debugfs/tracing/profile_likely    - All likely markers
        /debugfs/tracing/profile_unlikely  - All unlikely markers.
      
      # cat /debug/tracing/profile_unlikely | head
       correct incorrect  %        Function                  File              Line
       ------- ---------  -        --------                  ----              ----
          2167        0   0 do_arch_prctl                  process_64.c         832
             0        0   0 do_arch_prctl                  process_64.c         804
          2670        0   0 IS_ERR                         err.h                34
         71230     5693   7 __switch_to                    process_64.c         673
         76919        0   0 __switch_to                    process_64.c         639
         43184    33743  43 __switch_to                    process_64.c         624
         12740    64181  83 __switch_to                    process_64.c         594
         12740    64174  83 __switch_to                    process_64.c         590
      
      # cat /debug/tracing/profile_unlikely | \
        awk '{ if ($3 > 25) print $0; }' |head -20
         44963    35259  43 __switch_to                    process_64.c         624
         12762    67454  84 __switch_to                    process_64.c         594
         12762    67447  84 __switch_to                    process_64.c         590
          1478      595  28 syscall_get_error              syscall.h            51
             0     2821 100 syscall_trace_leave            ptrace.c             1567
             0        1 100 native_smp_prepare_cpus        smpboot.c            1237
         86338   265881  75 calc_delta_fair                sched_fair.c         408
        210410   108540  34 calc_delta_mine                sched.c              1267
             0    54550 100 sched_info_queued              sched_stats.h        222
         51899    66435  56 pick_next_task_fair            sched_fair.c         1422
             6       10  62 yield_task_fair                sched_fair.c         982
          7325     2692  26 rt_policy                      sched.c              144
             0     1270 100 pre_schedule_rt                sched_rt.c           1261
          1268    48073  97 pick_next_task_rt              sched_rt.c           884
             0    45181 100 sched_info_dequeued            sched_stats.h        177
             0       15 100 sched_move_task                sched.c              8700
             0       15 100 sched_move_task                sched.c              8690
         53167    33217  38 schedule                       sched.c              4457
             0    80208 100 sched_info_switch              sched_stats.h        270
         30585    49631  61 context_switch                 sched.c              2619
      
      # cat /debug/tracing/profile_likely | awk '{ if ($3 > 25) print $0; }'
         39900    36577  47 pick_next_task                 sched.c              4397
         20824    15233  42 switch_mm                      mmu_context_64.h     18
             0        7 100 __cancel_work_timer            workqueue.c          560
           617    66484  99 clocksource_adjust             timekeeping.c        456
             0   346340 100 audit_syscall_exit             auditsc.c            1570
            38   347350  99 audit_get_context              auditsc.c            732
             0   345244 100 audit_syscall_entry            auditsc.c            1541
            38     1017  96 audit_free                     auditsc.c            1446
             0     1090 100 audit_alloc                    auditsc.c            862
          2618     1090  29 audit_alloc                    auditsc.c            858
             0        6 100 move_masked_irq                migration.c          9
             1      198  99 probe_sched_wakeup             trace_sched_switch.c 58
             2        2  50 probe_wakeup                   trace_sched_wakeup.c 227
             0        2 100 probe_wakeup_sched_switch      trace_sched_wakeup.c 144
          4514     2090  31 __grab_cache_page              filemap.c            2149
         12882   228786  94 mapping_unevictable            pagemap.h            50
             4       11  73 __flush_cpu_slab               slub.c               1466
        627757   330451  34 slab_free                      slub.c               1731
          2959    61245  95 dentry_lru_del_init            dcache.c             153
           946     1217  56 load_elf_binary                binfmt_elf.c         904
           102       82  44 disk_put_part                  genhd.h              206
             1        1  50 dst_gc_task                    dst.c                82
             0       19 100 tcp_mss_split_point            tcp_output.c         1126
      
      As you can see from the above, there's a bit of work to do in rethinking
      the use of some unlikelys and likelys. Note: the unlikely case had 71
      entries that were more than 25% incorrect.
      
      Note:  After submitting my first version of this patch, Andrew Morton
        showed me a version written by Daniel Walker, from which I picked up
        the following ideas:
      
        1)  Using __builtin_constant_p to avoid profiling fixed values.
        2)  Using __FILE__ instead of instruction pointers.
        3)  Using the preprocessor to stop all profiling of likely
             annotations from vsyscall_64.c.
      
      Thanks to Andrew Morton, Arjan van de Ven, Theodore Tso and Ingo Molnar
      for their feed back on this patch.
      
      (*) Not every unlikely is recorded; those that are used by vsyscalls
       (a few of them) had to have profiling disabled.
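      
      To make the mechanism concrete, here is an illustrative-only sketch of
      how a counter can be attached to an annotation; the names below are
      not the kernel's, and the real patch additionally places the records
      where the profile_likely/profile_unlikely files can enumerate them:
      
        struct branch_profile {
                const char *func, *file;
                unsigned int line;
                unsigned long correct, incorrect;
        };
        
        #define profiled_unlikely(cond) ({                              \
                int ____r = !!(cond);                                   \
                if (!__builtin_constant_p(cond)) {                      \
                        static struct branch_profile ____p = {          \
                                .func = __func__,                       \
                                .file = __FILE__,                       \
                                .line = __LINE__,                       \
                        };                                              \
                        if (____r)                                      \
                                ____p.incorrect++;                      \
                        else                                            \
                                ____p.correct++;                        \
                }                                                       \
                __builtin_expect(____r, 0);                             \
        })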
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Theodore Tso <tytso@mit.edu>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1f0d69a9
    • Merge branches 'tracing/fastboot', 'tracing/function-return-tracer' and... · cb9382e5
      Ingo Molnar authored
      Merge branches 'tracing/fastboot', 'tracing/function-return-tracer' and 'tracing/ring-buffer' into tracing/core
      cb9382e5
    • ring-buffer: fix deadlock from reader_lock in read_start · 642edba5
      Steven Rostedt authored
      Impact: deadlock fix in ring_buffer_read_start
      
      ring_buffer_iter_reset was called from ring_buffer_read_start, and
      both functions grabbed the reader_lock, which deadlocks.
      
      This patch separates the internals of ring_buffer_iter_reset out into
      its own function so that both APIs can take the reader_lock safely.
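      
      The shape of the fix is roughly the following (a sketch; the iterator
      internals are elided and the member names are assumed from this era of
      the ring-buffer code):
      
        /* Does the actual repositioning; callers must hold reader_lock. */
        static void rb_iter_reset(struct ring_buffer_iter *iter)
        {
                /* ... reset the iterator's read position ... */
        }
        
        void ring_buffer_iter_reset(struct ring_buffer_iter *iter)
        {
                struct ring_buffer_per_cpu *cpu_buffer = iter->cpu_buffer;
                unsigned long flags;
        
                spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
                rb_iter_reset(iter);
                spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
        }
        
        /* ring_buffer_read_start() already holds reader_lock, so it can now
         * call rb_iter_reset() directly instead of the locked wrapper. */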
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      642edba5
    • ring-buffer: no preempt for sched_clock() · 47e74f2b
      Steven Rostedt authored
      Impact: disable preemption when calling sched_clock()
      
      The ring_buffer_time_stamp still uses sched_clock as its counter.
      But it is a bug to call it with preemption enabled. This requirement
      should not be pushed to the ring_buffer_time_stamp callers, so
      the ring_buffer_time_stamp needs to disable preemption when calling
      sched_clock.
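      
      In essence the change has this shape (a sketch; any additional
      normalization the real function applies is omitted):
      
        u64 ring_buffer_time_stamp(int cpu)
        {
                u64 time;
        
                /* sched_clock() must not run while we can migrate CPUs; the
                 * notrace variants avoid recursing into the tracer. */
                preempt_disable_notrace();
                time = sched_clock();
                preempt_enable_notrace();
        
                return time;
        }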
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      47e74f2b
    • tracing/fastboot: Use the ring-buffer timestamp for initcall entries · 74239072
      Frederic Weisbecker authored
      Impact: Split the boot tracer entries in two parts: call and return
      
      Now that we are using the sched tracer from the boot tracer, we want
      to use the same timestamp as the ring-buffer, so that time captures
      are consistent between sched events and initcall events.
      
      So we get rid of the boot tracer's old time capture and split the
      initcall events into two parts: call and return. This way we have the
      ring-buffer timestamp of both.
      
      An example trace:
      
      [   27.904149584] calling  net_ns_init+0x0/0x1c0 @ 1
      [   27.904429624] initcall net_ns_init+0x0/0x1c0 returned 0 after 0 msecs
      [   27.904575926] calling  reboot_init+0x0/0x20 @ 1
      [   27.904655399] initcall reboot_init+0x0/0x20 returned 0 after 0 msecs
      [   27.904800228] calling  sysctl_init+0x0/0x30 @ 1
      [   27.905142914] initcall sysctl_init+0x0/0x30 returned 0 after 0 msecs
      [   27.905287211] calling  ksysfs_init+0x0/0xb0 @ 1
       ##### CPU 0 buffer started ####
                  init-1     [000]    27.905395:      1:120:R   + [001]    11:115:S
       ##### CPU 1 buffer started ####
                <idle>-0     [001]    27.905425:      0:140:R ==> [001]    11:115:R
                  init-1     [000]    27.905426:      1:120:D ==> [000]     0:140:R
                <idle>-0     [000]    27.905431:      0:140:R   + [000]     4:115:S
                <idle>-0     [000]    27.905451:      0:140:R ==> [000]     4:115:R
           ksoftirqd/0-4     [000]    27.905456:      4:115:S ==> [000]     0:140:R
                 udevd-11    [001]    27.905458:     11:115:R   + [001]    14:115:R
                <idle>-0     [000]    27.905459:      0:140:R   + [000]     4:115:S
                <idle>-0     [000]    27.905462:      0:140:R ==> [000]     4:115:R
                 udevd-11    [001]    27.905462:     11:115:R ==> [001]    14:115:R
           ksoftirqd/0-4     [000]    27.905467:      4:115:S ==> [000]     0:140:R
                <idle>-0     [000]    27.905470:      0:140:R   + [000]     4:115:S
                <idle>-0     [000]    27.905473:      0:140:R ==> [000]     4:115:R
           ksoftirqd/0-4     [000]    27.905476:      4:115:S ==> [000]     0:140:R
                <idle>-0     [000]    27.905479:      0:140:R   + [000]     4:115:S
                <idle>-0     [000]    27.905482:      0:140:R ==> [000]     4:115:R
           ksoftirqd/0-4     [000]    27.905486:      4:115:S ==> [000]     0:140:R
                 udevd-14    [001]    27.905499:     14:120:X ==> [001]    11:115:R
                 udevd-11    [001]    27.905506:     11:115:R   + [000]     1:120:D
                <idle>-0     [000]    27.905515:      0:140:R ==> [000]     1:120:R
                 udevd-11    [001]    27.905517:     11:115:S ==> [001]     0:140:R
      [   27.905557107] initcall ksysfs_init+0x0/0xb0 returned 0 after 3906 msecs
      [   27.905705736] calling  init_jiffies_clocksource+0x0/0x10 @ 1
      [   27.905779239] initcall init_jiffies_clocksource+0x0/0x10 returned 0 after 0 msecs
      [   27.906769814] calling  pm_init+0x0/0x30 @ 1
      [   27.906853627] initcall pm_init+0x0/0x30 returned 0 after 0 msecs
      [   27.906997803] calling  pm_disk_init+0x0/0x20 @ 1
      [   27.907076946] initcall pm_disk_init+0x0/0x20 returned 0 after 0 msecs
      [   27.907222556] calling  swsusp_header_init+0x0/0x30 @ 1
      [   27.907294325] initcall swsusp_header_init+0x0/0x30 returned 0 after 0 msecs
      [   27.907439620] calling  stop_machine_init+0x0/0x50 @ 1
                  init-1     [000]    27.907485:      1:120:R   + [000]     2:115:S
                  init-1     [000]    27.907490:      1:120:D ==> [000]     2:115:R
              kthreadd-2     [000]    27.907507:      2:115:R   + [001]    15:115:R
                <idle>-0     [001]    27.907517:      0:140:R ==> [001]    15:115:R
              kthreadd-2     [000]    27.907517:      2:115:D ==> [000]     0:140:R
                <idle>-0     [000]    27.907521:      0:140:R   + [000]     4:115:S
                <idle>-0     [000]    27.907524:      0:140:R ==> [000]     4:115:R
                 udevd-15    [001]    27.907527:     15:115:D   + [000]     2:115:D
           ksoftirqd/0-4     [000]    27.907537:      4:115:S ==> [000]     2:115:R
                 udevd-15    [001]    27.907537:     15:115:D ==> [001]     0:140:R
              kthreadd-2     [000]    27.907546:      2:115:R   + [000]     1:120:D
              kthreadd-2     [000]    27.907550:      2:115:S ==> [000]     1:120:R
                  init-1     [000]    27.907584:      1:120:R   + [000]    15:  0:D
                  init-1     [000]    27.907589:      1:120:R   + [000]     2:115:S
                  init-1     [000]    27.907593:      1:120:D ==> [000]    15:  0:R
                 udevd-15    [000]    27.907601:     15:  0:S ==> [000]     2:115:R
       ##### CPU 0 buffer started ####
              kthreadd-2     [000]    27.907616:      2:115:R   + [001]    16:115:R
       ##### CPU 1 buffer started ####
                <idle>-0     [001]    27.907620:      0:140:R ==> [001]    16:115:R
              kthreadd-2     [000]    27.907621:      2:115:D ==> [000]     0:140:R
                 udevd-16    [001]    27.907625:     16:115:D   + [000]     2:115:D
                <idle>-0     [000]    27.907628:      0:140:R   + [000]     4:115:S
                 udevd-16    [001]    27.907629:     16:115:D ==> [001]     0:140:R
                <idle>-0     [000]    27.907631:      0:140:R ==> [000]     4:115:R
           ksoftirqd/0-4     [000]    27.907636:      4:115:S ==> [000]     2:115:R
              kthreadd-2     [000]    27.907644:      2:115:R   + [000]     1:120:D
              kthreadd-2     [000]    27.907647:      2:115:S ==> [000]     1:120:R
                  init-1     [000]    27.907657:      1:120:R   + [001]    16:  0:D
                <idle>-0     [001]    27.907666:      0:140:R ==> [001]    16:  0:R
      [   27.907703862] initcall stop_machine_init+0x0/0x50 returned 0 after 0 msecs
      [   27.907850704] calling  filelock_init+0x0/0x30 @ 1
      [   27.907926573] initcall filelock_init+0x0/0x30 returned 0 after 0 msecs
      [   27.908071327] calling  init_script_binfmt+0x0/0x10 @ 1
      [   27.908165195] initcall init_script_binfmt+0x0/0x10 returned 0 after 0 msecs
      [   27.908309461] calling  init_elf_binfmt+0x0/0x10 @ 1
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      74239072
    • tracing/fastboot: move boot tracer structs and funcs into their own header. · 3f5ec136
      Frederic Weisbecker authored
      Impact: Cleanups on the boot tracer and ftrace
      
      This patch brings some cleanups to the boot tracer headers. The
      functions and structures of this tracer are not related to ftrace
      and should therefore have their own header file.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3f5ec136
    • Merge branches 'tracing/ftrace' and 'tracing/urgent' into tracing/core · d06bbd66
      Ingo Molnar authored
      Conflicts:
      	kernel/trace/ring_buffer.c
      d06bbd66
  4. 11 Nov, 2008 2 commits