- 09 Aug, 2007 34 commits
-
Ingo Molnar authored
remove the 'u64 now' parameter from pick_next_entity(). ( identity transformation that causes no change in functionality. ) Signed-off-by: Ingo Molnar <mingo@elte.hu>
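This entry and the following ones in the series apply the same mechanical change. A minimal sketch of the before/after shape, using simplified stand-in types and a hypothetical helper name (the real scheduler functions differ):

/* Illustrative sketch only -- simplified stand-ins for the kernel's
 * scheduler types; not the actual functions touched by these patches. */
#include <stdint.h>

struct cfs_rq { uint64_t clock; };
struct sched_entity { uint64_t exec_start; };

/* Before: helpers carried an explicit 'u64 now' argument. */
static void touch_entity_old(struct cfs_rq *cfs_rq,
			     struct sched_entity *se, uint64_t now)
{
	(void)cfs_rq;
	se->exec_start = now;
}

/* After: the parameter is gone; the helper reads the per-runqueue
 * clock that the caller is expected to have refreshed already. */
static void touch_entity_new(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
	se->exec_start = cfs_rq->clock;
}

int main(void)
{
	struct cfs_rq rq = { .clock = 1000 };
	struct sched_entity se = { 0 };

	touch_entity_old(&rq, &se, rq.clock);	/* old call site */
	touch_entity_new(&rq, &se);		/* new call site */
	return 0;
}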
-
Ingo Molnar authored
remove the 'u64 now' parameter from set_next_entity(). ( identity transformation that causes no change in functionality. ) Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
remove the 'u64 now' parameter from dequeue_entity(). ( identity transformation that causes no change in functionality. ) Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
remove the 'u64 now' parameter from enqueue_entity(). ( identity transformation that causes no change in functionality. ) Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
remove the 'u64 now' parameter from enqueue_sleeper(). ( identity transformation that causes no change in functionality. ) Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
remove the 'u64 now' parameter from __enqueue_sleeper(). ( identity transformation that causes no change in functionality. ) Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
remove the 'u64 now' parameter from update_stats_curr_end(). ( identity transformation that causes no change in functionality. ) Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
remove the 'u64 now' parameter from update_stats_dequeue(). ( identity transformation that causes no change in functionality. ) Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
remove the 'u64 now' parameter from update_stats_curr_start(). ( identity transformation that causes no change in functionality. ) Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
remove the 'u64 now' parameter from update_stats_wait_end(). ( identity transformation that causes no change in functionality. ) Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
remove the 'u64 now' parameter from __update_stats_wait_end(). ( identity transformation that causes no change in functionality. ) Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
remove the 'u64 now' parameter from update_stats_enqueue(). ( identity transformation that causes no change in functionality. ) Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
remove the 'u64 now' parameter from update_stats_wait_start(). ( identity transformation that causes no change in functionality. ) Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
remove the 'u64 now' parameter from update_curr(). ( identity transformation that causes no change in functionality. ) Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
remove the 'u64 now' parameter from print_cfs_rq(). ( identity transformation that causes no change in functionality. ) Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
change all 'now' timestamp uses in assignments to rq->clock. ( this is an identity transformation that causes no functionality change: every such new rq->clock use is necessarily preceded by an update_rq_clock() call. ) Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
remove the (now unused) __rq_clock() function. Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
eliminate __rq_clock() use by changing it to "__update_rq_clock(rq); now = rq->clock;". Identity transformation - no change in behavior. Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
remove the now unused rq_clock() function. Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
eliminate rq_clock() use by changing it to "update_rq_clock(rq); now = rq->clock;". Identity transformation - no change in behavior. Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
add the [__]update_rq_clock(rq) functions. (No change in functionality, just reorganization to prepare for elimination of the heavy 64-bit timestamp-passing in the scheduler.) Signed-off-by: Ingo Molnar <mingo@elte.hu>
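A hedged sketch of the pattern described by this entry and the related ones above: a per-runqueue clock field maintained by a pair of helpers, with former "now = rq_clock(rq)" call sites becoming an explicit update followed by a read of rq->clock. The field layout and bodies are simplified assumptions, not the kernel's actual implementation:

/* Simplified sketch; sched_clock() is stubbed and the real helpers
 * carry extra bookkeeping this version omits. */
#include <stdint.h>

struct rq {
	uint64_t clock;		/* cached timestamp, nanoseconds */
	uint64_t prev_clock_raw;
};

static uint64_t sched_clock(void)	/* stand-in for the kernel clock */
{
	static uint64_t t;
	return t += 1000;
}

static void __update_rq_clock(struct rq *rq)
{
	uint64_t now = sched_clock();

	rq->clock += now - rq->prev_clock_raw;	/* advance cached clock */
	rq->prev_clock_raw = now;
}

static void update_rq_clock(struct rq *rq)
{
	/* the real helper adds checks this sketch omits */
	__update_rq_clock(rq);
}

int main(void)
{
	struct rq rq = { 0 };
	uint64_t now;

	/* old style (removed):  now = rq_clock(&rq);       */
	/* new style: update the clock, then read rq->clock */
	update_rq_clock(&rq);
	now = rq.clock;

	return now == 0;
}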
-
Peter Williams authored
There are two problems with balance_tasks() and how it is used:

1. The variables best_prio and best_prio_seen (inherited from the old move_tasks()) were only required to handle problems caused by the active/expired arrays, the order in which they were processed and the possibility that the task with the highest priority could be on either. These issues are no longer present and the extra overhead associated with their use is unnecessary (and possibly wrong).

2. In the absence of CONFIG_FAIR_GROUP_SCHED being set, the same this_best_prio variable needs to be used by all scheduling classes or there is a risk of moving too much load. E.g. if the highest priority task on this_rq at the beginning is a fairly low priority task and the rt class migrates a task (during its turn) then that moved task becomes the new highest priority task on this_rq, but when the sched_fair class initializes its copy of this_best_prio it will get the priority of the original highest priority task because, due to the run queue locks being held, the reschedule triggered by pull_task() will not have taken place. This could result in inappropriate overriding of skip_for_load and excessive load being moved.

The attached patch addresses these problems by deleting all references to best_prio and best_prio_seen and making this_best_prio a reference parameter to the various functions involved. load_balance_fair() has also been modified so that this_best_prio is only reset (in the loop) if CONFIG_FAIR_GROUP_SCHED is set. This should preserve the effect of helping spread groups' higher priority tasks around the available CPUs while improving system performance when CONFIG_FAIR_GROUP_SCHED isn't set.

Signed-off-by: Peter Williams <pwil3058@bigpond.net.au> Signed-off-by: Ingo Molnar <mingo@elte.hu>
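A hedged sketch of the "reference parameter" idea: every class's balancer reads and updates one shared this_best_prio through a pointer, so a pull done by one class is visible to the next. Function names and bodies are simplified illustrations, not the patch itself:

/* Sketch of sharing one this_best_prio across scheduling classes by
 * passing it by reference; types and signatures are simplified. */
#include <stdio.h>

struct rq { int dummy; };

static unsigned long load_balance_rt(struct rq *this_rq, int *this_best_prio)
{
	(void)this_rq;
	/* if an RT task of priority 10 was pulled, record it */
	if (*this_best_prio > 10)
		*this_best_prio = 10;
	return 0;
}

static unsigned long load_balance_fair(struct rq *this_rq, int *this_best_prio)
{
	(void)this_rq;
	/* sees the value left by the RT class instead of a stale
	 * "highest priority" captured before the pull */
	return (unsigned long)*this_best_prio;
}

int main(void)
{
	struct rq rq = { 0 };
	int this_best_prio = 100;	/* priority of the current best task */

	load_balance_rt(&rq, &this_best_prio);
	printf("fair class sees prio %lu\n",
	       load_balance_fair(&rq, &this_best_prio));
	return 0;
}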
-
Ingo Molnar authored
Document the design thinking behind nice levels. Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Alexey Dobriyan authored
kernel.sched_domain hierarchy is under CTL_UNNUMBERED and thus unreachable via sysctl(2). Generating .ctl_number's in such a situation is not useful. Signed-off-by: Alexey Dobriyan <adobriyan@sw.ru> Signed-off-by: Ingo Molnar <mingo@elte.hu>
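For context, a trimmed sketch of what "under CTL_UNNUMBERED" means: entries below such a parent are only reachable by name through /proc/sys, so a generated binary id buys nothing. The struct below is a cut-down stand-in for the 2.6.23-era ctl_table, not the actual patch:

#define CTL_UNNUMBERED	-2	/* sentinel used by the kernel at the time */

struct ctl_table {
	int ctl_name;		/* binary sysctl(2) id */
	const char *procname;	/* /proc/sys entry name */
};

static struct ctl_table sd_ctl_dir[] = {
	{
		.ctl_name = CTL_UNNUMBERED,	/* not reachable via sysctl(2) */
		.procname = "sched_domain",	/* reachable via /proc/sys only */
	},
	{ 0, 0 }	/* terminator */
};

int main(void)
{
	return sd_ctl_dir[0].ctl_name == CTL_UNNUMBERED ? 0 : 1;
}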
-
Ingo Molnar authored
small delta_exec accounting fix: increase delta_exec and increase sum_exec_runtime even if the task is not on the runqueue anymore. Signed-off-by: Ingo Molnar <mingo@elte.hu>
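A minimal sketch of the accounting point being made, with simplified types and a hypothetical helper name (not the actual update_curr() code): the exec-time delta keeps accruing into sum_exec_runtime even when the entity has already left the runqueue.

#include <stdint.h>

struct sched_entity {
	uint64_t exec_start;
	uint64_t sum_exec_runtime;
	int on_rq;
};

static void account_exec(struct sched_entity *se, uint64_t now)
{
	uint64_t delta_exec = now - se->exec_start;

	/* accrued unconditionally: no early return when !se->on_rq,
	 * so the final slice of runtime is not lost */
	se->sum_exec_runtime += delta_exec;
	se->exec_start = now;
}

int main(void)
{
	struct sched_entity se = { .exec_start = 100, .on_rq = 0 };

	account_exec(&se, 250);
	return se.sum_exec_runtime == 150 ? 0 : 1;
}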
-
Ingo Molnar authored
cleanup: delta_mine is an unsigned value. No code impact:

   text    data     bss     dec     hex filename
  27823    2726      16   30565    7765 sched.o.before
  27823    2726      16   30565    7765 sched.o.after

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
speed up schedule(): share the 'now' parameter that deactivate_task() was calculating internally. ( this also fixes the small accounting window between the deactivate call and the pick_next_task() call. ) Signed-off-by: Ingo Molnar <mingo@elte.hu>
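A hedged sketch of the idea: the timestamp is taken once in schedule() and handed to deactivate_task() instead of being recomputed inside it, which also closes the small accounting gap before pick_next_task(). Names and bodies below are simplified stand-ins:

#include <stdint.h>

struct rq { uint64_t clock; };
struct task_struct { uint64_t last_seen; };

static void deactivate_task(struct rq *rq, struct task_struct *p, uint64_t now)
{
	(void)rq;
	p->last_seen = now;	/* previously derived its own 'now' internally */
}

static struct task_struct *pick_next_task(struct rq *rq, uint64_t now)
{
	(void)rq; (void)now;
	return 0;		/* placeholder */
}

static void schedule_sketch(struct rq *rq, struct task_struct *prev)
{
	uint64_t now = rq->clock;	/* computed once ... */

	deactivate_task(rq, prev, now);	/* ... shared with the callee ... */
	pick_next_task(rq, now);	/* ... same instant, no window between the two */
}

int main(void)
{
	struct rq rq = { .clock = 42 };
	struct task_struct prev = { 0 };

	schedule_sketch(&rq, &prev);
	return prev.last_seen == 42 ? 0 : 1;
}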
-
Ingo Molnar authored
uninline rq_clock() to save 263 bytes of code:

   text    data     bss     dec     hex filename
  39561    3642      24   43227    a8db sched.o.before
  39298    3642      24   42964    a7d4 sched.o.after

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Josh Triplett authored
sched_fair.c defines print_cfs_stats, and sched_debug.c uses it, but sched.c includes both sched_fair.c and sched_debug.c, so all the references to print_cfs_stats occur in the same compilation unit. Thus, mark print_cfs_stats static. Eliminates a sparse warning: "warning: symbol 'print_cfs_stats' was not declared. Should it be static?" Signed-off-by: Josh Triplett <josh@kernel.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
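A tiny sketch of why static is safe here: when one .c file textually includes the others, everything lands in a single translation unit, so a file-local symbol defined in the "first" file is still visible where the "second" file calls it. The file boundaries and bodies below are illustrative only:

/* as if from sched_fair.c */
static void print_cfs_stats(void)
{
	/* ... dump CFS statistics ... */
}

/* as if from sched_debug.c (which sched.c #includes after sched_fair.c) */
static void sched_debug_show(void)
{
	print_cfs_stats();	/* same translation unit, so this links */
}

/* as if from sched.c */
int main(void)
{
	sched_debug_show();
	return 0;
}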
-
Ulrich Drepper authored
here's another tiny cleanup. The generated code is not affected (gcc is smart enough) but for people looking over the code it is just irritating to have the extra conditional. Signed-off-by: Ulrich Drepper <drepper@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Thomas Voegtle authored
a little hint to switch on CONFIG_SCHED_DEBUG should be given. Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Peter Williams authored
The move_tasks() function is currently multiplexed with two distinct capabilities:

1. attempt to move a specified amount of weighted load from one run queue to another; and
2. attempt to move a specified number of tasks from one run queue to another.

The first of these capabilities is used in two places, load_balance() and load_balance_idle(), and in both of these cases the return value of move_tasks() is used purely to decide if tasks/load were moved and no notice of the actual number of tasks moved is taken. The second capability is used in exactly one place, active_load_balance(), to attempt to move exactly one task and, as before, the return value is only used as an indicator of success or failure.

This multiplexing of move_tasks() was introduced, by me, as part of the smpnice patches and was motivated by the fact that the alternative, one function to move specified load and one to move a single task, would have led to two functions of roughly the same complexity as the old move_tasks() (or the new balance_tasks()). However, the new modular design of the new CFS scheduler allows a simpler solution to be adopted and this patch addresses that solution by:

1. adding a new function, move_one_task(), to be used by active_load_balance(); and
2. making move_tasks() a single purpose function that tries to move a specified weighted load and returns 1 for success and 0 for failure.

One of the consequences of these changes is that neither move_one_task() nor the new move_tasks() cares how many tasks sched_class.load_balance() moves, and this enables its interface to be simplified by returning the amount of load moved as its result and removing the load_moved pointer from the argument list. This helps simplify the new move_tasks() and slightly reduces the amount of work done in each of sched_class.load_balance()'s implementations.

Further simplification, e.g. changes to balance_tasks(), is possible but (slightly) complicated by the special needs of load_balance_fair(), so I've left them to a later patch (if this one gets accepted).

NB Since move_tasks() gets called with two run queue locks held even small reductions in overhead are worthwhile.

[ mingo@elte.hu ] this change also reduces code size nicely:

   text    data     bss     dec     hex filename
  39216    3618      24   42858    a76a sched.o.before
  39173    3618      24   42815    a73f sched.o.after

Signed-off-by: Peter Williams <pwil3058@bigpond.net.au> Signed-off-by: Ingo Molnar <mingo@elte.hu>
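A hedged sketch of the reshaped interfaces described above: move_tasks() becomes a success/failure test for moving a weighted load, move_one_task() moves at most one task, and the class hook returns the load actually moved instead of filling a pointer. Types and bodies are simplified stand-ins, not the real scheduler code:

struct rq { unsigned long weighted_load; };

/* class hook: returns the amount of load moved (no load_moved pointer) */
static unsigned long class_load_balance(struct rq *this_rq, struct rq *busiest,
					unsigned long max_load_move)
{
	unsigned long moved = max_load_move < busiest->weighted_load
				? max_load_move : busiest->weighted_load;

	busiest->weighted_load -= moved;
	this_rq->weighted_load += moved;
	return moved;
}

/* single purpose: try to move max_load_move worth of load, 1/0 result */
static int move_tasks(struct rq *this_rq, struct rq *busiest,
		      unsigned long max_load_move)
{
	return class_load_balance(this_rq, busiest, max_load_move) > 0;
}

/* used by active_load_balance(): move (at most) one task, 1/0 result */
static int move_one_task(struct rq *this_rq, struct rq *busiest)
{
	return class_load_balance(this_rq, busiest, 1UL) > 0;
}

int main(void)
{
	struct rq a = { 0 }, b = { 10 };

	return (move_tasks(&a, &b, 5) && move_one_task(&a, &b)) ? 0 : 1;
}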
-
Ingo Molnar authored
Peter Williams suggested flipping the order of the update_cpu_load(rq) and ->task_tick() calls. This is a NOP for the current scheduler (the two functions are independent of each other), but ->task_tick() might create some state for update_cpu_load() in the future (or in PlugSched). Signed-off-by: Ingo Molnar <mingo@elte.hu>
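A small sketch of the reordering as I read it: ->task_tick() runs first so that any state it may someday produce is already in place when the load is sampled. The stub bodies and the exact tick-handler shape are assumptions:

struct rq { int dummy; };

static void update_cpu_load(struct rq *rq) { (void)rq; }
static void task_tick(struct rq *rq)       { (void)rq; }	/* stands in for curr->sched_class->task_tick() */

static void scheduler_tick_sketch(struct rq *rq)
{
	/* before: update_cpu_load(rq); then task_tick(rq); */
	task_tick(rq);		/* class hook runs first ... */
	update_cpu_load(rq);	/* ... so it can consume any state the hook produced */
}

int main(void)
{
	struct rq rq = { 0 };

	scheduler_tick_sketch(&rq);
	return 0;
}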
-
Ingo Molnar authored
batch up the sleeper bonus sum a bit more. Anything below sched-granularity is too small to make a practical difference anyway. This optimization reduces the math in high-frequency scheduling scenarios. Signed-off-by: Ingo Molnar <mingo@elte.hu>
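A loose sketch of the batching idea: bonus amounts smaller than the granularity are only accumulated, and the arithmetic happens once the pending sum is worth acting on. The field names, threshold and policy here are assumptions for illustration, not the CFS code:

#include <stdint.h>

struct cfs_rq_sketch {
	uint64_t sleeper_bonus;		/* pending, not yet applied */
	uint64_t granularity;		/* batching threshold, in ns */
	uint64_t applied;		/* where the bonus eventually goes */
};

static void add_sleeper_bonus(struct cfs_rq_sketch *cfs, uint64_t delta)
{
	cfs->sleeper_bonus += delta;

	/* below granularity: too small to matter, defer the math */
	if (cfs->sleeper_bonus < cfs->granularity)
		return;

	cfs->applied += cfs->sleeper_bonus;
	cfs->sleeper_bonus = 0;
}

int main(void)
{
	struct cfs_rq_sketch cfs = { .granularity = 4000000 };

	add_sleeper_bonus(&cfs, 1000);		/* batched, no work yet */
	add_sleeper_bonus(&cfs, 5000000);	/* crosses threshold, applied */
	return cfs.sleeper_bonus == 0 ? 0 : 1;
}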
-
- 07 Aug, 2007 6 commits
-
Linus Torvalds authored
* master.kernel.org:/home/rmk/linux-2.6-arm: [ARM] rpc: update defconfig [ARM] pata_icside: fix the FIXMEs [ARM] 4542/1: AT91: include atmel_lcdc.h in at91sam926{1,3}_devices.c [ARM] 4541/1: iop: defconfig updates [ARM] 4531/1: remove is_in_rom() prototype
-
Linus Torvalds authored
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: [CRYPTO] api: fix writing into unallocated memory in setkey_aligned
-
Al Viro authored
C99 6.10.3[11]: preprocessing directive within the argument list of macro invocation => undefined behaviour. Don't do that... Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
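A small illustration of the undefined construct being removed and a well-defined rewrite that hoists the conditional outside the macro invocation. The macro and symbol names are invented for this example:

/* C99 6.10.3p11: a preprocessing directive inside a macro invocation's
 * argument list is undefined behaviour. */
#include <stdio.h>

#define REPORT(fmt, ...) printf(fmt, __VA_ARGS__)

/*
 * Broken pattern (undefined behaviour), shown only inside this comment:
 *
 *	REPORT("mode: %s\n",
 *	#ifdef EXTENDED_MODE
 *		"extended"
 *	#else
 *		"basic"
 *	#endif
 *	);
 */

int main(void)
{
	/* well-defined rewrite: decide before the macro invocation */
#ifdef EXTENDED_MODE
	const char *mode = "extended";
#else
	const char *mode = "basic";
#endif
	REPORT("mode: %s\n", mode);
	return 0;
}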
-
Rusty Russell authored
Lguest drivers need to default to "Y" otherwise they're never selected for new builds. (We don't bother prompting, because they're less than 4k combined, and implied by selecting lguest support). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Avi Kivity authored
More fallout from the writeback fixes: debug register transfer instructions do their own writeback and thus need to disable the general writeback mechanism. This fixes oopses and some guest failures on AMD machines (the Intel variant decodes the instruction in hardware and thus does not need emulation). Cc: Alistair John Strachan <alistair@devzero.co.uk> Signed-off-by: Avi Kivity <avi@qumranet.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
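A loose sketch of the shape of such a fix as described: instructions that store their result themselves (the debug-register moves here) set a flag so the common path skips its generic writeback. The struct, flag name and dispatch below are invented for illustration and are not the kvm x86 emulator's actual code:

struct decode_sketch {
	int opcode;
	int no_writeback;	/* set when the handler writes back itself */
	unsigned long dest;
	unsigned long result;
};

static void emulate_sketch(struct decode_sketch *c)
{
	switch (c->opcode) {
	case 0x21:	/* mov from debug register */
	case 0x23:	/* mov to debug register */
		c->dest = c->result;	/* instruction-specific writeback ... */
		c->no_writeback = 1;	/* ... so skip the generic one below */
		break;
	default:
		break;
	}

	if (!c->no_writeback)
		c->dest = c->result;	/* generic writeback path */
}

int main(void)
{
	struct decode_sketch c = { .opcode = 0x21, .result = 7 };

	emulate_sketch(&c);
	return c.dest == 7 ? 0 : 1;
}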
-
Linus Torvalds authored
* 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6: [NETFILTER]: Add xt_statistic.h to the header list for usermode programs [BNX2]: Fix suspend/resume problem. [TG3]: Fix suspend/resume problem.
-