1. 10 Oct, 2017 14 commits
    • sched/fair: Search a task from the tail of the queue · 93824900
      Uladzislau Rezki authored
      As a first step, this patch turns the cfs_tasks list into an MRU
      list: when a task is picked to run on a physical CPU, it is moved
      to the front of the list.

      As a result, the cfs_tasks list is more or less sorted (except for
      woken tasks), from tasks recently given CPU time at the front toward
      tasks with the longest wait time in the run-queue at the tail.
      
      Second, as part of the load balance operation, this approach
      starts detach_tasks()/detach_one_task() from the tail of the
      queue instead of the head, which has some advantages:

       - it tends to pick the task with the highest wait time;

       - tasks located at the tail are less likely to be cache-hot,
         so can_migrate_task() is more likely to approve the migration.
      
      hackbench shows slightly better performance. For example,
      with 1000 samples and 40 groups on an i5-3320M CPU, it produces
      the following figures:
      
       default: 0.657 avg
       patched: 0.646 avg
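
      A minimal kernel-style sketch of the idea (simplified, not the actual
      diff; it assumes the scheduler-internal struct rq and the existing
      cfs_tasks/se.group_node members, and the helper names are made up):

        /* When a task is picked to run, move it to the head of cfs_tasks,
         * so the head holds recently-run tasks and the tail the longest
         * waiters. */
        static void cfs_tasks_touch(struct rq *rq, struct task_struct *p)
        {
                list_move(&p->se.group_node, &rq->cfs_tasks);
        }

        /* Load balancing then scans from the tail, where tasks have waited
         * the longest and are least likely to be cache-hot. */
        static struct task_struct *cfs_tasks_coldest(struct rq *rq)
        {
                if (list_empty(&rq->cfs_tasks))
                        return NULL;
                return list_last_entry(&rq->cfs_tasks, struct task_struct,
                                       se.group_node);
        }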
      Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Kirill Tkhai <tkhai@yandex.ru>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
      Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>
      Cc: Paul Turner <pjt@google.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Link: http://lkml.kernel.org/r/20170913102430.8985-2-urezki@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      93824900
    • sched/topology: Introduce NUMA identity node sched domain · 051f3ca0
      Suravee Suthikulpanit authored
      On AMD Family 17h-based (EPYC) systems, a logical NUMA node can contain
      up to 8 cores (16 threads) with the following topology.
      
                   ----------------------------
               C0  | T0 T1 |    ||    | T0 T1 | C4
                   --------|    ||    |--------
               C1  | T0 T1 | L3 || L3 | T0 T1 | C5
                   --------|    ||    |--------
               C2  | T0 T1 | #0 || #1 | T0 T1 | C6
                   --------|    ||    |--------
               C3  | T0 T1 |    ||    | T0 T1 | C7
                   ----------------------------
      
      Here, there are 2 last-level (L3) caches per logical NUMA node.
      A socket can contain up to 4 NUMA nodes, and a system can support
      up to 2 sockets. With a fully populated system, the current scheduler
      creates 4 sched domains:
      
        domain0 SMT       (span a core)
        domain1 MC        (span a last-level-cache)
        domain2 NUMA      (span a socket: 4 nodes)
        domain3 NUMA      (span a system: 8 nodes)
      
      Note that there is no domain to represent the CPUs spanning a logical
      NUMA node.  With this hierarchy of sched domains, the scheduler does
      not balance properly in the following cases:
      
      Case1:
      
       When running 8 tasks, a properly balanced system should
       schedule a task per logical NUMA node. This is not the case for
       the current scheduler.
      
      Case2:
      
       In some cases, threads are scheduled on the same cpu, while other
       cpus are idle. This results in run-to-run inconsistency. For example:
      
        taskset -c 0-7 sysbench --num-threads=8 --test=cpu \
                                --cpu-max-prime=100000 run
      
      Total execution time ranges from 25.1s to 33.5s depending on threads
      placement, where 25.1s is when all 8 threads are balanced properly
      on 8 cpus.
      
      Introduce a NUMA identity node sched domain, based on how the
      SRAT/SLIT tables define a logical NUMA node. This results in the
      following hierarchy of sched domains on the same system described above.
      
        domain0 SMT       (span a core)
        domain1 MC        (span a last-level-cache)
        domain2 NODE      (span a logical NUMA node)
        domain3 NUMA      (span a socket: 4 nodes)
        domain4 NUMA      (span a system: 8 nodes)
      
      This fixes the improper load balancing cases mentioned above.
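
      A hedged sketch of how such a NODE level can be slotted into the NUMA
      topology table built in sched_init_numa(); the exact initializers are
      an assumption based on this description, not the verbatim patch:

        /* Identity level: each logical NUMA node spans only itself, giving
         * the CPUs of one node their own sched domain between MC and NUMA. */
        tl[i++] = (struct sched_domain_topology_level){
                .mask = sd_numa_mask,
                .numa_level = 0,
                SD_INIT_NAME(NODE)
        };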
      Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: bp@suse.de
      Link: http://lkml.kernel.org/r/1504768805-46716-1-git-send-email-suravee.suthikulpanit@amd.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      051f3ca0
    • sched/topology: Restore SD_PREFER_SIBLING on MC domains · ed4ad1ca
      Peter Zijlstra authored
      The normal x86_topology on NHM+ machines degenerates because the MC
      and CPU domains are of the same size, therefore MC inherits
      SD_PREFER_SIBLING from CPU (which then gets taken out). The result is
      that we'll spread tasks across the first NUMA level in order to
      maximize cache utilization.
      
      However, for the x86_numa_in_package_topology we lose the CPU domain,
      and we won't have SD_PREFER_SIBLING set anywhere, giving a distinct
      difference in behaviour.
      
      Commit:
      
        8e7fbcbc ("sched: Remove stale power aware scheduling remnants and dysfunctional knobs")
      
      failed to preserve SD_PREFER_SIBLING for the !power_saving
      case on both CPU and MC.
      
      Then commit:
      
        6956dc56 ("sched/numa: Add SD_PERFER_SIBLING to CPU domain")
      
      adds it back to the CPU but not MC.
      
      Restore that now, such that we get consistent spreading behaviour wrt
      L3 and NUMA.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ed4ad1ca
    • sched/deadline: Use C bitfields for the state flags · 799ba82d
      luca abeni authored
      Ask the compiler to use a single bit for storing true / false values,
      instead of wasting the size of a whole int value.
      Tested with gcc 5.4.0 on x86_64; the compiler produces the expected
      assembly (similar to the assembly code generated when the bits are
      accessed explicitly with bitmasks, "&" and "|").
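
      Illustrative sketch of the technique (the exact set of converted
      sched_dl_entity flags is assumed from the description):

        struct dl_flags_example {
                /* before: int dl_throttled; int dl_boosted; ... */

                /* after: one bit per true/false flag */
                unsigned int dl_throttled      : 1;
                unsigned int dl_boosted        : 1;
                unsigned int dl_yielded        : 1;
                unsigned int dl_non_contending : 1;
        };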
      Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Daniel Bristot de Oliveira <bristot@redhat.com>
      Cc: Juri Lelli <juri.lelli@arm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1504778971-13573-5-git-send-email-luca.abeni@santannapisa.it
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      799ba82d
    • sched/deadline: Rename __dl_clear() to __dl_sub() · 8c0944ce
      Peter Zijlstra authored
      __dl_sub() is more meaningful as a name, and is more consistent
      with the naming of the dual function (__dl_add()).
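
      For reference, a sketch of the renamed helper (the body is assumed
      from the description of the existing code, not copied from the patch):

        /* Remove a task's bandwidth from the pool; dual of __dl_add(). */
        static inline void __dl_sub(struct dl_bw *dl_b, u64 tsk_bw, int cpus)
        {
                dl_b->total_bw -= tsk_bw;
                __dl_update(dl_b, (s32)tsk_bw / cpus);
        }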
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Luca Abeni <luca.abeni@santannapisa.it>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Daniel Bristot de Oliveira <bristot@redhat.com>
      Cc: Juri Lelli <juri.lelli@arm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1504778971-13573-4-git-send-email-luca.abeni@santannapisa.it
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8c0944ce
    • sched/deadline: Fix switching to -deadline · 295d6d5e
      Luca Abeni authored
      Fix a bug introduced in:
      
        72f9f3fd ("sched/deadline: Remove dl_new from struct sched_dl_entity")
      
      After that commit, when switching to -deadline, if the scheduling
      deadline of a task is in the past then switched_to_dl() calls
      setup_new_entity() to properly initialize the scheduling deadline
      and runtime.
      
      The problem is that the task is enqueued _before_ having its parameters
      initialized by setup_new_entity(), and this can cause problems.
      For example, a task with its out-of-date deadline in the past will
      potentially be enqueued as the highest priority one; however, its
      adjusted deadline may not be the earliest one.
      
      This patch fixes the problem by initializing the task's parameters before
      enqueuing it.
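
      A hedged sketch of the intended ordering (pseudo-shape only, not the
      verbatim diff):

        static void switched_to_dl(struct rq *rq, struct task_struct *p)
        {
                /* Refresh a stale deadline/runtime *before* the task is
                 * enqueued, so EDF ordering and push/pull decisions see
                 * sane parameters. */
                if (dl_time_before(p->dl.deadline, rq_clock(rq)))
                        setup_new_dl_entity(&p->dl);

                /* ... only then enqueue / resched ... */
        }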
      Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Daniel Bristot de Oliveira <bristot@redhat.com>
      Cc: Juri Lelli <juri.lelli@arm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1504778971-13573-3-git-send-email-luca.abeni@santannapisa.it
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      295d6d5e
    • sched/headers: Remove duplicate prototype of __dl_clear_params() · e964d350
      luca abeni authored
      Signed-off-by: luca abeni <luca.abeni@santannapisa.it>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Daniel Bristot de Oliveira <bristot@redhat.com>
      Cc: Juri Lelli <juri.lelli@arm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mathieu Poirier <mathieu.poirier@linaro.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1504778971-13573-2-git-send-email-luca.abeni@santannapisa.it
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e964d350
    • sched/debug: Rename task-state printing helpers · 1d48b080
      Peter Zijlstra authored
      Steve requested better names for the new task-state helper functions.
      
      So introduce the concept of task-state index for the printing and
      rename __get_task_state() to task_state_index() and
      __task_state_to_char() to task_index_to_char().
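
      Call sites then read along these lines (a usage sketch, assuming the
      renamed helpers described above):

        unsigned int idx = task_state_index(p);   /* compact state index */
        char state = task_index_to_char(idx);     /* 'R', 'S', 'D', ...  */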
      Requested-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20170929115016.pzlqc7ss3ccystyg@hirez.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1d48b080
    • sched/idle: Move quiet_vmstate() into the NOHZ code · 62cb1188
      Peter Zijlstra authored
      quiet_vmstat() is an expensive function that only makes sense when we
      go into NOHZ.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: aubrey.li@linux.intel.com
      Cc: cl@linux.com
      Cc: fweisbec@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      62cb1188
    • sched/core: Ensure load_balance() respects the active_mask · 024c9d2f
      Peter Zijlstra authored
      While load_balance() masks the source CPUs against active_mask, it had
      a hole against the destination CPU. Ensure the destination CPU is also
      part of the 'domain-mask & active-mask' set.
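
      A sketch of the resulting restriction (simplified; 'cpus' is
      load_balance()'s local candidate mask):

        /* Consider only CPUs that are both in this sched domain and active;
         * this now covers the destination CPU as well as the sources. */
        cpumask_and(cpus, sched_domain_span(sd), cpu_active_mask);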
      Reported-by: Levin, Alexander (Sasha Levin) <alexander.levin@verizon.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: 77d1dfda ("sched/topology, cpuset: Avoid spurious/wrong domain rebuilds")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      024c9d2f
    • sched/core: Address more wake_affine() regressions · f2cdd9cc
      Peter Zijlstra authored
      The trivial wake_affine_idle() implementation is very good for a
      number of workloads, but it comes apart the moment there are no
      idle CPUs left, IOW the overloaded case.
      
      hackbench:
      
      		NO_WA_WEIGHT		WA_WEIGHT
      
      hackbench-20  : 7.362717561 seconds	6.450509391 seconds
      
      (win)
      
      netperf:
      
      		  NO_WA_WEIGHT		WA_WEIGHT
      
      TCP_SENDFILE-1	: Avg: 54524.6		Avg: 52224.3
      TCP_SENDFILE-10	: Avg: 48185.2          Avg: 46504.3
      TCP_SENDFILE-20	: Avg: 29031.2          Avg: 28610.3
      TCP_SENDFILE-40	: Avg: 9819.72          Avg: 9253.12
      TCP_SENDFILE-80	: Avg: 5355.3           Avg: 4687.4
      
      TCP_STREAM-1	: Avg: 41448.3          Avg: 42254
      TCP_STREAM-10	: Avg: 24123.2          Avg: 25847.9
      TCP_STREAM-20	: Avg: 15834.5          Avg: 18374.4
      TCP_STREAM-40	: Avg: 5583.91          Avg: 5599.57
      TCP_STREAM-80	: Avg: 2329.66          Avg: 2726.41
      
      TCP_RR-1	: Avg: 80473.5          Avg: 82638.8
      TCP_RR-10	: Avg: 72660.5          Avg: 73265.1
      TCP_RR-20	: Avg: 52607.1          Avg: 52634.5
      TCP_RR-40	: Avg: 57199.2          Avg: 56302.3
      TCP_RR-80	: Avg: 25330.3          Avg: 26867.9
      
      UDP_RR-1	: Avg: 108266           Avg: 107844
      UDP_RR-10	: Avg: 95480            Avg: 95245.2
      UDP_RR-20	: Avg: 68770.8          Avg: 68673.7
      UDP_RR-40	: Avg: 76231            Avg: 75419.1
      UDP_RR-80	: Avg: 34578.3          Avg: 35639.1
      
      UDP_STREAM-1	: Avg: 64684.3          Avg: 66606
      UDP_STREAM-10	: Avg: 52701.2          Avg: 52959.5
      UDP_STREAM-20	: Avg: 30376.4          Avg: 29704
      UDP_STREAM-40	: Avg: 15685.8          Avg: 15266.5
      UDP_STREAM-80	: Avg: 8415.13          Avg: 7388.97
      
      (wins and losses)
      
      sysbench:
      
      		    NO_WA_WEIGHT		WA_WEIGHT
      
      sysbench-mysql-2  :  2135.17 per sec.		 2142.51 per sec.
      sysbench-mysql-5  :  4809.68 per sec.            4800.19 per sec.
      sysbench-mysql-10 :  9158.59 per sec.            9157.05 per sec.
      sysbench-mysql-20 : 14570.70 per sec.           14543.55 per sec.
      sysbench-mysql-40 : 22130.56 per sec.           22184.82 per sec.
      sysbench-mysql-80 : 20995.56 per sec.           21904.18 per sec.
      
      sysbench-psql-2   :  1679.58 per sec.            1705.06 per sec.
      sysbench-psql-5   :  3797.69 per sec.            3879.93 per sec.
      sysbench-psql-10  :  7253.22 per sec.            7258.06 per sec.
      sysbench-psql-20  : 11166.75 per sec.           11220.00 per sec.
      sysbench-psql-40  : 17277.28 per sec.           17359.78 per sec.
      sysbench-psql-80  : 17112.44 per sec.           17221.16 per sec.
      
      (increase on the top end)
      
      tbench:
      
      NO_WA_WEIGHT
      
      Throughput 685.211 MB/sec   2 clients   2 procs  max_latency=0.123 ms
      Throughput 1596.64 MB/sec   5 clients   5 procs  max_latency=0.119 ms
      Throughput 2985.47 MB/sec  10 clients  10 procs  max_latency=0.262 ms
      Throughput 4521.15 MB/sec  20 clients  20 procs  max_latency=0.506 ms
      Throughput 9438.1  MB/sec  40 clients  40 procs  max_latency=2.052 ms
      Throughput 8210.5  MB/sec  80 clients  80 procs  max_latency=8.310 ms
      
      WA_WEIGHT
      
      Throughput 697.292 MB/sec   2 clients   2 procs  max_latency=0.127 ms
      Throughput 1596.48 MB/sec   5 clients   5 procs  max_latency=0.080 ms
      Throughput 2975.22 MB/sec  10 clients  10 procs  max_latency=0.254 ms
      Throughput 4575.14 MB/sec  20 clients  20 procs  max_latency=0.502 ms
      Throughput 9468.65 MB/sec  40 clients  40 procs  max_latency=2.069 ms
      Throughput 8631.73 MB/sec  80 clients  80 procs  max_latency=8.605 ms
      
      (increase on the top end)
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f2cdd9cc
    • sched/core: Fix wake_affine() performance regression · d153b153
      Peter Zijlstra authored
      Eric reported a sysbench regression against commit:
      
        3fed382b ("sched/numa: Implement NUMA node level wake_affine()")
      
      Similarly, Rik was looking at the NAS-lu.C benchmark, which regressed
      against his v3.10 enterprise kernel.
      
      PRE (current tip/master):
      
       ivb-ep sysbench:
      
         2: [30 secs]     transactions:                        64110  (2136.94 per sec.)
         5: [30 secs]     transactions:                        143644 (4787.99 per sec.)
        10: [30 secs]     transactions:                        274298 (9142.93 per sec.)
        20: [30 secs]     transactions:                        418683 (13955.45 per sec.)
        40: [30 secs]     transactions:                        320731 (10690.15 per sec.)
        80: [30 secs]     transactions:                        355096 (11834.28 per sec.)
      
       hsw-ex NAS:
      
       OMP_PROC_BIND/lu.C.x_threads_144_run_1.log: Time in seconds =                    18.01
       OMP_PROC_BIND/lu.C.x_threads_144_run_2.log: Time in seconds =                    17.89
       OMP_PROC_BIND/lu.C.x_threads_144_run_3.log: Time in seconds =                    17.93
       lu.C.x_threads_144_run_1.log: Time in seconds =                   434.68
       lu.C.x_threads_144_run_2.log: Time in seconds =                   405.36
       lu.C.x_threads_144_run_3.log: Time in seconds =                   433.83
      
      POST (+patch):
      
       ivb-ep sysbench:
      
         2: [30 secs]     transactions:                        64494  (2149.75 per sec.)
         5: [30 secs]     transactions:                        145114 (4836.99 per sec.)
        10: [30 secs]     transactions:                        278311 (9276.69 per sec.)
        20: [30 secs]     transactions:                        437169 (14571.60 per sec.)
        40: [30 secs]     transactions:                        669837 (22326.73 per sec.)
        80: [30 secs]     transactions:                        631739 (21055.88 per sec.)
      
       hsw-ex NAS:
      
       lu.C.x_threads_144_run_1.log: Time in seconds =                    23.36
       lu.C.x_threads_144_run_2.log: Time in seconds =                    22.96
       lu.C.x_threads_144_run_3.log: Time in seconds =                    22.52
      
      This patch takes out all the shiny wake_affine() stuff and goes back to
      utter basics. Between the two CPUs involved with the wakeup (the CPU
      doing the wakeup and the CPU we ran on previously) pick the CPU we can
      run on _now_.
      
      This recovers much of the regression against the older kernels,
      but leaves some ground in the overloaded case. The default-enabled
      WA_WEIGHT (which will be introduced in the next patch) is an attempt
      to address the overloaded situation.
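
      In rough pseudo-C, the "utter basics" amount to something like this
      (a sketch of the described policy, with a made-up helper name):

        static int wake_affine_pick(int this_cpu, int prev_cpu)
        {
                /* Prefer the waking CPU only if we could run there right
                 * now; otherwise stay where the task ran previously. */
                if (idle_cpu(this_cpu))
                        return this_cpu;

                return prev_cpu;
        }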
      Reported-by: Eric Farman <farman@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: jinpuwang@gmail.com
      Cc: vcaputo@pengaru.com
      Fixes: 3fed382b ("sched/numa: Implement NUMA node level wake_affine()")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d153b153
    • Merge branch 'ppc-bundle' (bundle from Michael Ellerman) · 529a86e0
      Linus Torvalds authored
      Merge powerpc transactional memory fixes from Michael Ellerman:
       "I figured I'd still send you the commits using a bundle to make sure
        it works in case I need to do it again in future"
      
      This fixes transactional memory state restore for powerpc.
      
      * bundle'd patches from Michael Ellerman:
        powerpc/tm: Fix illegal TM state in signal handler
        powerpc/64s: Use emergency stack for kernel TM Bad Thing program checks
      529a86e0
  2. 09 Oct, 2017 22 commits
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net · ff33952e
      Linus Torvalds authored
      Pull networking fixes from David Miller:
      
       1) Fix object leak on IPSEC offload failure, from Steffen Klassert.
      
       2) Fix range checks in ipset address range addition operations, from
          Jozsef Kadlecsik.
      
       3) Fix pernet ops unregistration order in ipset, from Florian Westphal.
      
       4) Add missing netlink attribute policy for nl80211 packet pattern
          attrs, from Peng Xu.
      
       5) Fix PPP device destruction race, from Guillaume Nault.
      
       6) Write marks get lost when BPF verifier processes R1=R2 register
          assignments, causing incorrect liveness information and less state
          pruning. Fix from Alexei Starovoitov.
      
       7) Fix blackhole routes so that they are marked dead and therefore not
          cached in sockets, otherwise IPSEC stops working. From Steffen
          Klassert.
      
       8) Fix broadcast handling of UDP socket early demux, from Paolo Abeni.
      
      * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (37 commits)
        cdc_ether: flag the u-blox TOBY-L2 and SARA-U2 as wwan
        net: thunderx: mark expected switch fall-throughs in nicvf_main()
        udp: fix bcast packet reception
        netlink: do not set cb_running if dump's start() errs
        ipv4: Fix traffic triggered IPsec connections.
        ipv6: Fix traffic triggered IPsec connections.
        ixgbe: incorrect XDP ring accounting in ethtool tx_frame param
        net: ixgbe: Use new PCI_DEV_FLAGS_NO_RELAXED_ORDERING flag
        Revert commit 1a8b6d76 ("net:add one common config...")
        ixgbe: fix masking of bits read from IXGBE_VXLANCTRL register
        ixgbe: Return error when getting PHY address if PHY access is not supported
        netfilter: xt_bpf: Fix XT_BPF_MODE_FD_PINNED mode of 'xt_bpf_info_v1'
        netfilter: SYNPROXY: skip non-tcp packet in {ipv4, ipv6}_synproxy_hook
        tipc: Unclone message at secondary destination lookup
        tipc: correct initialization of skb list
        gso: fix payload length when gso_size is zero
        mlxsw: spectrum_router: Avoid expensive lookup during route removal
        bpf: fix liveness marking
        doc: Fix typo "8023.ad" in bonding documentation
        ipv6: fix net.ipv6.conf.all.accept_dad behaviour for real
        ...
      ff33952e
    • cdc_ether: flag the u-blox TOBY-L2 and SARA-U2 as wwan · fdfbad32
      Aleksander Morgado authored
      The u-blox TOBY-L2 is an LTE Cat 4 module with HSPA+ and 2G fallback.
      This module allows switching between different USB profiles with the
      'AT+UUSBCONF' command, and provides an ECM network interface when the
      'AT+UUSBCONF=2' profile is selected.

      The u-blox SARA-U2 is an HSPA module with 2G fallback. The default USB
      configuration includes an ECM network interface.
      
      Both these modules are controlled via AT commands through one of the
      TTYs exposed. Connecting these modules may be done just by activating
      the desired PDP context with 'AT+CGACT=1,<cid>' and then running DHCP
      on the ECM interface.
      Signed-off-by: Aleksander Morgado <aleksander@aleksander.es>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fdfbad32
    • Merge tag 'nfs-for-4.14-3' of git://git.linux-nfs.org/projects/trondmy/linux-nfs · 68ebe3cb
      Linus Torvalds authored
      Pull NFS client bugfixes from Trond Myklebust:
       "Hightlights include:
      
        stable fixes:
         - nfs/filelayout: fix oops when freeing filelayout segment
         - NFS: Fix uninitialized rpc_wait_queue
      
        bugfixes:
         - NFSv4/pnfs: Fix an infinite layoutget loop
         - nfs: RPC_MAX_AUTH_SIZE is in bytes"
      
      * tag 'nfs-for-4.14-3' of git://git.linux-nfs.org/projects/trondmy/linux-nfs:
        NFSv4/pnfs: Fix an infinite layoutget loop
        nfs/filelayout: fix oops when freeing filelayout segment
        sunrpc: remove redundant initialization of sock
        NFS: Fix uninitialized rpc_wait_queue
        NFS: Cleanup error handling in nfs_idmap_request_key()
        nfs: RPC_MAX_AUTH_SIZE is in bytes
      68ebe3cb
    • net: thunderx: mark expected switch fall-throughs in nicvf_main() · 1a2ace56
      Gustavo A. R. Silva authored
      In preparation to enabling -Wimplicit-fallthrough, mark switch cases
      where we are expecting to fall through.
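
      The marking itself is just a comment that -Wimplicit-fallthrough
      recognizes; an illustrative example with made-up case labels:

        static void prepare(void) { /* ... */ }
        static void run(void)     { /* ... */ }

        static void handle(int cmd)
        {
                switch (cmd) {
                case 0:
                        prepare();
                        /* fall through */
                case 1:
                        run();
                        break;
                default:
                        break;
                }
        }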
      
      Cc: Sunil Goutham <sgoutham@cavium.com>
      Cc: Robert Richter <rric@kernel.org>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: netdev@vger.kernel.org
      Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1a2ace56
    • Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf · fb60bccc
      David S. Miller authored
      Pablo Neira Ayuso says:
      
      ====================
      Netfilter/IPVS fixes for net
      
      The following patchset contains Netfilter/IPVS fixes for your net tree,
      they are:
      
      1) Fix packet drops due to incorrect ECN handling in IPVS, from Vadim
         Fedorenko.
      
      2) Fix splat with mark restoration in xt_socket with non-full-sock,
         patch from Subash Abhinov Kasiviswanathan.
      
      3) ipset bogusly bails out when adding IPv4 range containing more than
         2^31 addresses, from Jozsef Kadlecsik.
      
      4) Incorrect pernet unregistration order in ipset, from Florian Westphal.
      
      5) Races between dump and swap in ipset results in BUG_ON splats, from
         Ross Lagerwall.
      
      6) Fix chain renames in nf_tables, from JingPiao Chen.
      
      7) Fix race in pernet codepath with ebtables table registration, from
         Artem Savkov.
      
      8) Memory leak in error path in set name allocation in nf_tables, patch
         from Arvind Yadav.
      
      9) Don't dump chain counters if they are not available; this fixes a
         crash when listing the ruleset.
      
      10) Fix out of bound memory read in strlcpy() in x_tables compat code,
          from Eric Dumazet.
      
      11) Make sure we only process TCP packets in SYNPROXY hooks, patch from
          Lin Zhang.
      
      12) Cannot load rules incrementally anymore after xt_bpf with pinned
          objects, added in revision 1. From Shmulik Ladkani.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fb60bccc
    • Merge branch '10GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-queue · 5766cd68
      David S. Miller authored
      Jeff Kirsher says:
      
      ====================
      Intel Wired LAN Driver Updates 2017-10-09
      
      This series contains updates to ixgbe and arch/Kconfig.
      
      Mark fixes a case where PHY register access is not supported and we were
      returning a PHY address, when we should have been returning -EOPNOTSUPP.
      
      Sabrina Dubroca fixes the use of a logical "and" when it should have been
      the bitwise "and" operator.
      
      Ding Tianhong reverts the commit that added the Kconfig bool option
      ARCH_WANT_RELAX_ORDER, since there is now a new flag
      PCI_DEV_FLAGS_NO_RELAXED_ORDERING that has been added to indicate that
      Relaxed Ordering Attributes should not be used for Transaction Layer
      Packets.  Then follows up with making the needed changes to ixgbe to
      use the new PCI_DEV_FLAGS_NO_RELAXED_ORDERING flag.
      
      John Fastabend fixes an issue in the ring accounting when the transmit
      ring parameters are changed via ethtool when an XDP program is attached.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      5766cd68
    • udp: fix bcast packet reception · 996b44fc
      Paolo Abeni authored
      The commit bc044e8d ("udp: perform source validation for
      mcast early demux") does not take into account that broadcast packets
      land in the same code path and need different checks for the
      source address - notably, a zero source address is valid for bcast
      and invalid for mcast.

      As a result, the 2nd and later broadcast packets with a 0 source address
      landing on the same socket are dropped. This breaks DHCP servers.

      Since we don't have stringent performance requirements for ingress
      broadcast traffic, fix it by disabling UDP early demux for such traffic.
      Reported-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Fixes: bc044e8d ("udp: perform source validation for mcast early demux")
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      996b44fc
    • netlink: do not set cb_running if dump's start() errs · 41c87425
      Jason A. Donenfeld authored
      It turns out that multiple places can call netlink_dump(), which means
      it's still possible to dereference partially initialized values in
      dump() that were the result of a faulty returned start().
      
      This fixes the issue by calling start() _before_ setting cb_running to
      true, so that there's no chance at all of hitting the dump() function
      through any indirect paths.
      
      It also moves the call to start() to be when the mutex is held. This has
      the nice side effect of serializing invocations to start(), which is
      likely desirable anyway. It also prevents any possible other races that
      might come out of this logic.
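
      A hedged sketch of the resulting ordering in __netlink_dump_start()-style
      code (simplified; the error label is illustrative):

        mutex_lock(nlk->cb_mutex);
        /* ... set up the callback ... */

        if (cb->start) {
                ret = cb->start(cb);
                if (ret)
                        goto error_unlock;  /* cb_running never set on error */
        }

        nlk->cb_running = true;
        mutex_unlock(nlk->cb_mutex);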
      
      In testing this with several different pieces of tricky code to trigger
      these issues, this commit fixes all avenues that I'm aware of.
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
      Cc: Johannes Berg <johannes@sipsolutions.net>
      Reviewed-by: Johannes Berg <johannes@sipsolutions.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      41c87425
    • Merge tag 'mac80211-for-davem-2017-10-09' of... · 6df4d17c
      David S. Miller authored
      Merge tag 'mac80211-for-davem-2017-10-09' of git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211
      
      Johannes Berg says:
      
      ====================
      pull-request: mac80211 2017-10-09
      
      The QCA folks found another netlink problem - we were missing validation
      of some attributes. It's not super problematic since one can only read a
      few bytes beyond the message (and that memory must exist), but here's the
      fix for it.
      
      I thought perhaps we can make nla_parse_nested() require a policy, but
      given the two-stage validation/parsing in regular netlink that won't work.
      
      Please pull and let me know if there's any problem.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6df4d17c
    • Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec · 93b03193
      David S. Miller authored
      Steffen Klassert says:
      
      ====================
      pull request (net): ipsec 2017-10-09
      
      1) Fix some error paths of the IPsec offloading API.
      
      2) Fix a NULL pointer dereference when IPsec is used
         with vti. From Alexey Kodanev.
      
      3) Don't call xfrm_policy_cache_flush under xfrm_state_lock,
         it triggers several locking warnings. From Artem Savkov.
      
      Please pull or let me know if there are problems.
      ====================
      Signed-off-by: David S. Miller <davem@davemloft.net>
      93b03193
    • ipv4: Fix traffic triggered IPsec connections. · 6c0e7284
      Steffen Klassert authored
      A recent patch removed the dst_free() on the allocated
      dst_entry in ipv4_blackhole_route(). The dst_free() marked the
      dst_entry as dead and added it to the gc list, i.e. it was set up
      for one-time usage. As a result we may now have a blackhole
      route cached at a socket in some IPsec scenarios. This makes the
      connection unusable.
      
      Fix this by marking the dst_entry directly at allocation time
      as 'dead', so it is used only once.
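
      A sketch of the idea (assumed from the description; other arguments
      unchanged):

        /* Allocate the blackhole route already marked obsolete/dead, so a
         * socket cannot keep it cached as its output route. */
        rt = dst_alloc(&ipv4_dst_blackhole_ops, NULL, 1, DST_OBSOLETE_DEAD, 0);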
      
      Fixes: b838d5e1 ("ipv4: mark DST_NOGC and remove the operation of dst_free()")
      Reported-by: Tobias Brunner <tobias@strongswan.org>
      Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6c0e7284
    • ipv6: Fix traffic triggered IPsec connections. · 62cf27e5
      Steffen Klassert authored
      A recent patch removed the dst_free() on the allocated
      dst_entry in ipv6_blackhole_route(). The dst_free() marked
      the dst_entry as dead and added it to the gc list, i.e. it
      was set up for one-time usage. As a result we may now have
      a blackhole route cached at a socket in some IPsec scenarios.
      This makes the connection unusable.
      
      Fix this by marking the dst_entry directly at allocation time
      as 'dead', so it is used only once.
      
      Fixes: 587fea74 ("ipv6: mark DST_NOGC and remove the operation of dst_free()")
      Reported-by: Tobias Brunner <tobias@strongswan.org>
      Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      62cf27e5
    • ixgbe: incorrect XDP ring accounting in ethtool tx_frame param · 8e679021
      John Fastabend authored
      Changing the TX ring parameters with an XDP program attached may
      cause the XDP queues to be cleared and the TX rings to be incorrectly
      configured.
      
      Fix by doing correct ring accounting in setup call.
      
      Fixes: 33fdc82f ("ixgbe: add support for XDP_TX action")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      8e679021
    • net: ixgbe: Use new PCI_DEV_FLAGS_NO_RELAXED_ORDERING flag · 5e0fac63
      Ding Tianhong authored
      The ixgbe driver used a compile-time check to determine whether it can
      send TLPs to the Root Port with the Relaxed Ordering Attribute set, which
      is too inconvenient. Now that the new flag PCI_DEV_FLAGS_NO_RELAXED_ORDERING
      has been added to the kernel, we can check bit 4 in the PCIe Device
      Control register to determine whether the Relaxed Ordering Attributes
      should be used, so switch the ixgbe driver over to this new way.
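
      A hedged sketch of the run-time check (pcie_relaxed_ordering_enabled()
      is the PCI core helper built on this flag; the register update itself
      is elided):

        if (!pcie_relaxed_ordering_enabled(adapter->pdev)) {
                /* clear the TX/RX relaxed-ordering enable bits in the
                 * DCA control registers */
        }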
      Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
      Acked-by: Emil Tantilov <emil.s.tantilov@intel.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      5e0fac63
    • Revert commit 1a8b6d76 ("net:add one common config...") · f4986d25
      Ding Tianhong authored
      The new flag PCI_DEV_FLAGS_NO_RELAXED_ORDERING has been added
      to indicate that Relaxed Ordering Attributes (RO) should not
      be used for Transaction Layer Packets (TLP) targeted toward
      the affected Root Ports; it will clear bit 4 in the PCIe
      Device Control register, so PCIe device drivers can
      query the PCIe configuration space to determine whether they can
      send TLPs to the Root Port with the Relaxed Ordering Attributes set.

      With this new flag we no longer need the ARCH_WANT_RELAX_ORDER config
      option to control the Relaxed Ordering Attributes for the ixgbe driver,
      as the commit 1a8b6d76 ("net:add one common config...") did,
      so revert that commit.
      Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      f4986d25
    • ixgbe: fix masking of bits read from IXGBE_VXLANCTRL register · a39221ce
      Sabrina Dubroca authored
      In ixgbe_clear_udp_tunnel_port(), we read the IXGBE_VXLANCTRL register
      and then try to mask some bits out of the value, using the logical AND
      operator instead of the bitwise AND operator.
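
      The bug class in miniature (illustrative values, not the driver's
      exact code):

        u32 reg  = 0x0000ff00;          /* value read from the register  */
        u32 mask = 0x00000f00;          /* bits we want to clear         */

        u32 wrong = reg && ~mask;       /* logical AND: result is 0 or 1 */
        u32 right = reg &  ~mask;       /* bitwise AND: clears only mask */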
      
      Fixes: a21d0822 ("ixgbe: add support for geneve Rx offload")
      Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
      Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      a39221ce
    • ixgbe: Return error when getting PHY address if PHY access is not supported · e0f06bba
      Mark D Rustad authored
      In cases where PHY register access is not supported, don't mislead
      a caller into thinking that it is supported by returning a PHY
      address. Instead, return -EOPNOTSUPP when PHY access is not
      supported.
      Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      e0f06bba
    • netfilter: xt_bpf: Fix XT_BPF_MODE_FD_PINNED mode of 'xt_bpf_info_v1' · 98589a09
      Shmulik Ladkani authored
      Commit 2c16d603 ("netfilter: xt_bpf: support ebpf") introduced
      support for attaching an eBPF object by an fd, with the
      'bpf_mt_check_v1' ABI expecting the '.fd' to be specified upon each
      IPT_SO_SET_REPLACE call.
      
      However this breaks subsequent iptables calls:
      
       # iptables -A INPUT -m bpf --object-pinned /sys/fs/bpf/xxx -j ACCEPT
       # iptables -A INPUT -s 5.6.7.8 -j ACCEPT
       iptables: Invalid argument. Run `dmesg' for more information.
      
      That's because iptables works by loading existing rules using
      IPT_SO_GET_ENTRIES to userspace, then issuing IPT_SO_SET_REPLACE with
      the replacement set.
      
      However, the loaded 'xt_bpf_info_v1' has an arbitrary '.fd' number
      (from the initial "iptables -m bpf" invocation) - so when 2nd invocation
      occurs, userspace passes a bogus fd number, which leads to
      'bpf_mt_check_v1' to fail.
      
      One suggested solution [1] was to hack iptables userspace to perform an
      "entries fixup" immediately after IPT_SO_GET_ENTRIES, by opening a new,
      process-local fd for every 'xt_bpf_info_v1' entry seen.

      However, in [2] both Pablo Neira Ayuso and Willem de Bruijn suggested
      deprecating the xt_bpf_info_v1 ABI dealing with pinned eBPF objects.
      
      This fix changes the XT_BPF_MODE_FD_PINNED behavior to ignore the given
      '.fd' and instead perform an in-kernel lookup for the bpf object given
      the provided '.path'.
      
      It also defines an alias for the XT_BPF_MODE_FD_PINNED mode, named
      XT_BPF_MODE_PATH_PINNED, to better reflect the fact that the user is
      expected to provide the path of the pinned object.
      
      Existing XT_BPF_MODE_FD_ELF behavior (non-pinned fd mode) is preserved.
      
      References: [1] https://marc.info/?l=netfilter-devel&m=150564724607440&w=2
                  [2] https://marc.info/?l=netfilter-devel&m=150575727129880&w=2
      Reported-by: Rafael Buchbinder <rafi@rbk.ms>
      Signed-off-by: Shmulik Ladkani <shmulik.ladkani@gmail.com>
      Acked-by: Willem de Bruijn <willemb@google.com>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      98589a09
    • netfilter: SYNPROXY: skip non-tcp packet in {ipv4, ipv6}_synproxy_hook · 49f817d7
      Lin Zhang authored
      In the functions {ipv4,ipv6}_synproxy_hook we expect a normal TCP packet,
      but the real server may reply with an ICMP error packet related to the
      existing TCP conntrack, so we would access the wrong TCP data.

      Fix it by checking the protocol field and only processing TCP traffic.
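
      The guard amounts to an early protocol check, roughly (IPv4 side shown;
      a sketch based on the description):

        /* Only TCP is meaningful to SYNPROXY; let anything else, e.g. a
         * related ICMP error, pass through untouched. */
        if (ip_hdr(skb)->protocol != IPPROTO_TCP)
                return NF_ACCEPT;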
      Signed-off-by: Lin Zhang <xiaolou4617@gmail.com>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      49f817d7
    • tipc: Unclone message at secondary destination lookup · a9e2971b
      Jon Maloy authored
      When a bundling message is received, the function tipc_link_input()
      calls tipc_msg_extract() to unbundle all inner messages of
      the bundling message before adding them to the input queue.

      The function tipc_msg_extract() just clones the inner skbs for all
      inner messages from the bundling skb. This means that the skb
      headroom of an inner message overlaps with the data part of the
      preceding message in the bundle.
      
      If the message in question is a name addressed message, it may be
      subject to a secondary destination lookup, and eventually be sent out
      on one of the interfaces again. But, since what is perceived as headroom
      by the device driver in reality is the last bytes of the preceding
      message in the bundle, the latter will be overwritten by the MAC
      addresses of the L2 header. If the preceding message has not yet been
      consumed by the user, it will eventually be delivered with corrupted
      contents.
      
      This commit fixes this by uncloning all messages passing through the
      function tipc_msg_lookup_dest(), hence ensuring that the headroom
      is always valid when the message is passed on.
      Signed-off-by: Tung Nguyen <tung.q.nguyen@dektech.com.au>
      Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a9e2971b
    • tipc: correct initialization of skb list · 3382605f
      Jon Maloy authored
      We change the initialization of the skb transmit buffer queues
      in the functions tipc_bcast_xmit() and tipc_rcast_xmit() to also
      initialize their spinlocks. This is needed because we may, during
      error conditions, need to call skb_queue_purge() on those queues
      further down the stack.
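
      The distinction in question (both initializers exist in
      <linux/skbuff.h>):

        struct sk_buff_head xmitq;

        /* __skb_queue_head_init(&xmitq) sets up only the list head; the
         * queue's spinlock stays uninitialized, so skb_queue_purge()
         * (which takes the lock) would be unsafe on error paths. */
        skb_queue_head_init(&xmitq);    /* list *and* lock initialized */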
      Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3382605f
    • Linux 4.14-rc4 · 8a5776a5
      Linus Torvalds authored
      8a5776a5
  3. 08 Oct, 2017 2 commits
  4. 07 Oct, 2017 2 commits
    • bpf: fix liveness marking · 8fe2d6cc
      Alexei Starovoitov authored
      While processing an Rx = Ry instruction the verifier does
      regs[insn->dst_reg] = regs[insn->src_reg]
      which often clears the write mark (when Ry doesn't have it)
      that was just set by check_reg_arg(Rx) prior to the assignment.
      That causes mark_reg_read() to keep marking Rx in this block as
      REG_LIVE_READ (since the logic incorrectly misses that it's
      screened by the write) and in many of its parents (until a lucky
      write into the same Rx or the beginning of the program).
      That causes the is_state_visited() logic to miss many pruning opportunities.
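
      A hedged sketch of the first half of the fix (field and flag names as
      in the verifier of that era; simplified):

        /* After copying Ry's state into Rx, re-mark Rx as written in this
         * block so later reads are screened and don't propagate a spurious
         * REG_LIVE_READ into parent states. */
        regs[insn->dst_reg] = regs[insn->src_reg];
        regs[insn->dst_reg].live |= REG_LIVE_WRITTEN;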
      
      Furthermore, the mark_reg_read() logic propagates the read mark
      for BPF_REG_FP as well (though it's read-only), which causes
      harmless but unnecessary work during is_state_visited().
      Note that do_propagate_liveness() skips FP correctly,
      so do the same in mark_reg_read() as well.
      It saves 0.2 seconds for the test below
      
      program               before  after
      bpf_lb-DLB_L3.o       2604    2304
      bpf_lb-DLB_L4.o       11159   3723
      bpf_lb-DUNKNOWN.o     1116    1110
      bpf_lxc-DDROP_ALL.o   34566   28004
      bpf_lxc-DUNKNOWN.o    53267   39026
      bpf_netdev.o          17843   16943
      bpf_overlay.o         8672    7929
      time                  ~11 sec  ~4 sec
      
      Fixes: dc503a8a ("bpf/verifier: track liveness for pruning")
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Edward Cree <ecree@solarflare.com>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8fe2d6cc
    • doc: Fix typo "8023.ad" in bonding documentation · 00a534e5
      Axel Beckert authored
      Should be "802.3ad" like everywhere else in the document.
      Signed-off-by: Axel Beckert <abe@deuxchevaux.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      00a534e5