Commit 6ecee34d authored by Ingo Molnar, committed by Linus Torvalds

[PATCH] sched: net: fix scheduling latencies in netstat

The attached patch fixes long scheduling latencies caused by access to the
/proc/net/tcp file.  The seqfile functions keep softirqs disabled for a
very long time (I've seen reports of 20+ msecs, if there are enough sockets
in the system).  With the attached patch it's below 100 usecs.

The cond_resched_softirq() call relies on the implicit knowledge that this
code executes in process context and runs with softirqs disabled.
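
The operation can be sketched roughly as follows; this is a simplified
illustration of the pattern, not the exact kernel implementation.  It is
only safe because the caller is guaranteed to be in process context with
softirqs disabled, so softirqs can legally be re-enabled around the
reschedule point:

	/*
	 * Simplified sketch of what cond_resched_softirq() does:
	 * re-enable softirqs, yield the CPU if a reschedule is
	 * pending, then restore the softirqs-off state the caller
	 * expects to still hold.
	 */
	static int cond_resched_softirq_sketch(void)
	{
		if (need_resched()) {
			local_bh_enable();	/* softirqs back on */
			cond_resched();		/* voluntary preemption point */
			local_bh_disable();	/* softirqs off again */
			return 1;
		}
		return 0;
	}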

Potentially enabling softirqs means that the socket list might change
between buckets - but this is not an issue since seqfiles have a 4K
iteration granularity anyway and /proc/net/tcp is often (much) larger than
that.
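
To make the granularity argument concrete: a seq_file emits at most about
one page (4K) of output per read() syscall, dropping any locks in ->stop()
and re-entering via ->start() on the next read, so the walk is already
restarted (and the hash chains free to mutate) between reads.  The iterator
has roughly this shape; the names follow net/ipv4/tcp_ipv4.c of this era,
but the snippet is illustrative, not a verbatim quote:

	/*
	 * Each read() of /proc/net/tcp loops ->show()/->next() until
	 * the ~4K buffer fills, then ->stop() drops the locks; the
	 * next read() re-enters ->start() at the saved position.
	 */
	static struct seq_operations tcp_seq_ops = {
		.start = tcp_seq_start,	/* find position, take locks */
		.next  = tcp_seq_next,	/* advance to the next socket */
		.stop  = tcp_seq_stop,	/* drop locks */
		.show  = tcp4_seq_show,	/* format one socket line */
	};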
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 116194f2
@@ -2220,7 +2220,10 @@ static void *established_get_first(struct seq_file *seq)
 		struct sock *sk;
 		struct hlist_node *node;
 		struct tcp_tw_bucket *tw;
+
+		/* We can reschedule _before_ having picked the target: */
+		cond_resched_softirq();
 		read_lock(&tcp_ehash[st->bucket].lock);
 		sk_for_each(sk, node, &tcp_ehash[st->bucket].chain) {
 			if (sk->sk_family != st->family) {
@@ -2267,6 +2270,10 @@ static void *established_get_next(struct seq_file *seq, void *cur)
 		}
 		read_unlock(&tcp_ehash[st->bucket].lock);
 		st->state = TCP_SEQ_STATE_ESTABLISHED;
+
+		/* We can reschedule between buckets: */
+		cond_resched_softirq();
+
 		if (++st->bucket < tcp_ehash_size) {
			read_lock(&tcp_ehash[st->bucket].lock);
			sk = sk_head(&tcp_ehash[st->bucket].chain);