Commit bced76ae authored by Peter Zijlstra, committed by Ingo Molnar

sched: Fix lockup by limiting load-balance retries on lock-break

Eric and David reported dead machines and traced it to commit
a195f004 ("sched: Fix load-balance lock-breaking"); it turns out
there's still a scenario where we can end up re-trying forever.

Since there is no strict forward-progress guarantee in the
load-balance iteration, we can get stuck re-trying the same
task-set over and over.

Creating a forward-progress guarantee with the existing
structure is somewhat non-trivial; for now, simply terminate the
retry loop after a few tries.
Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
Tested-by: Eric Dumazet <eric.dumazet@gmail.com>
Reported-by: David Ahern <dsahern@gmail.com>
[ logic cleanup as suggested by Eric ]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/1326297936.2442.157.camel@twins
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent 6db9dc15
@@ -3130,8 +3130,10 @@ task_hot(struct task_struct *p, u64 now, struct sched_domain *sd)
 }
 
 #define LBF_ALL_PINNED	0x01
-#define LBF_NEED_BREAK	0x02
-#define LBF_ABORT	0x04
+#define LBF_NEED_BREAK	0x02	/* clears into HAD_BREAK */
+#define LBF_HAD_BREAK	0x04
+#define LBF_HAD_BREAKS	0x0C	/* count HAD_BREAKs overflows into ABORT */
+#define LBF_ABORT	0x10
 
 /*
  * can_migrate_task - may task p from runqueue rq be migrated to this_cpu?
@@ -4508,7 +4510,9 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 			goto out_balanced;
 
 		if (lb_flags & LBF_NEED_BREAK) {
-			lb_flags &= ~LBF_NEED_BREAK;
+			lb_flags += LBF_HAD_BREAK - LBF_NEED_BREAK;
+			if (lb_flags & LBF_ABORT)
+				goto out_balanced;
 			goto redo;
 		}
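
The counting trick above is compact enough to deserve a note: LBF_NEED_BREAK
(0x02) is known to be set when the addition runs, so
lb_flags += LBF_HAD_BREAK - LBF_NEED_BREAK both clears NEED_BREAK and
increments the two-bit counter held in the LBF_HAD_BREAKS mask (0x04 | 0x08);
the fourth increment carries into LBF_ABORT (0x10), and the ABORT check then
ends the retry loop. Below is a minimal userspace sketch, not part of the
patch, that traces the flag values (the defines are copied from the hunk
above):

#include <stdio.h>

/* Same layout as the patch: NEED_BREAK is bit 0x02, HAD_BREAKS is a
 * two-bit counter in 0x04|0x08, and its carry lands on ABORT (0x10). */
#define LBF_NEED_BREAK	0x02
#define LBF_HAD_BREAK	0x04
#define LBF_HAD_BREAKS	0x0C
#define LBF_ABORT	0x10

int main(void)
{
	unsigned int lb_flags = 0;
	int i;

	for (i = 1; i <= 5; i++) {
		/* the balance iteration requests a lock-break ... */
		lb_flags |= LBF_NEED_BREAK;
		/* ... and the retry path converts it into one count */
		lb_flags += LBF_HAD_BREAK - LBF_NEED_BREAK;
		printf("break %d: lb_flags=0x%02x abort=%d\n",
		       i, lb_flags, !!(lb_flags & LBF_ABORT));
	}
	return 0;
}

The fourth break carries into LBF_ABORT, so load_balance() bails out to
out_balanced instead of redoing the same task-set indefinitely.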