Commit 4715883b authored by Marcelo Ricardo Leitner, committed by Ben Hutchings

ipv4: disable bh while doing route gc

Further tests revealed that moving the garbage collector to a work
queue and protecting it with a spinlock may leave the system prone to
soft lockups if the bottom half gets very busy.

It was reproduced with a set of firewall rules that REJECTed packets. If
the NIC bottom-half handler ends up running on the same CPU that is
running the garbage collector on a very large cache, the garbage
collector will not be able to do its job, because the CPU is consumed
handling the REJECTs, and it won't reschedule either.

The fix is to disable bottom halves while garbage collecting, as it
effectively was in the first place (most calls to it came from softirqs).
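
For illustration, a minimal sketch of the locking pattern the patch
applies (the lock name matches the diff below; the wrapper function and
the elided GC body are hypothetical):

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(rt_gc_lock);

	/* Illustrative only: spin_lock() does not keep softirqs off the
	 * local CPU, so a busy NIC bottom half can keep interrupting the
	 * critical section and starve the GC. spin_lock_bh() disables
	 * bottom halves until the matching spin_unlock_bh(), which also
	 * runs any softirqs that became pending in the meantime.
	 */
	static void rt_gc_critical_section(void)
	{
		spin_lock_bh(&rt_gc_lock);
		/* ... scan and expire route cache entries ... */
		spin_unlock_bh(&rt_gc_lock);
	}

The _bh variant fits here: per the message above, the GC is also invoked
from softirqs, so the lock can be taken in bottom-half context, while
hard IRQs never touch it. Disabling bottom halves both prevents the
starvation described above and avoids BH-reentry deadlock, without the
heavier cost of spin_lock_irqsave().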
Signed-off-by: Marcelo Ricardo Leitner <mleitner@redhat.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
parent ad5ca98f
...
@@ -1000,7 +1000,7 @@ static void __do_rt_garbage_collect(int elasticity, int min_interval)
 	 * do not make it too frequently.
 	 */
-	spin_lock(&rt_gc_lock);
+	spin_lock_bh(&rt_gc_lock);
 	RT_CACHE_STAT_INC(gc_total);
@@ -1103,7 +1103,7 @@ static void __do_rt_garbage_collect(int elasticity, int min_interval)
 	    dst_entries_get_slow(&ipv4_dst_ops) < ipv4_dst_ops.gc_thresh)
 		expire = ip_rt_gc_timeout;
 out:
-	spin_unlock(&rt_gc_lock);
+	spin_unlock_bh(&rt_gc_lock);
 }
 
 static void __rt_garbage_collect(struct work_struct *w)
...