- 11 Oct, 2011 1 commit
-
-
Stephen Rothwell authored
Commit 1230db8e ("llist: Make some llist functions inline") has deleted the definitions, causing problems for (not upstream yet) code that tries to make use of them.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Huang Ying <ying.huang@intel.com>
Cc: David Miller <davem@davemloft.net>
Link: http://lkml.kernel.org/r/20111005172528.0d0a8afc65acef7ace22a24e@canb.auug.org.au
Signed-off-by: Ingo Molnar <mingo@elte.hu>
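For reference, a hedged reconstruction of the declarations this fix plausibly restores, going by the <linux/llist.h> of that era; the exact signatures are an assumption, not quoted from the patch:

  /* Hedged reconstruction -- exact signatures are an assumption. */
  extern bool llist_add_batch(struct llist_node *new_first,
                              struct llist_node *new_last,
                              struct llist_head *head);
  extern struct llist_node *llist_del_first(struct llist_head *head);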
-
- 06 Oct, 2011 7 commits
-
-
Thomas Gleixner authored
Avoid taking locks from debug prints; this avoids latencies on -rt and improves the reliability of the debug code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Thomas Gleixner authored
The default rt-throttling is a source of never-ending questions. Warn once when we go into throttling so folks have that info in dmesg.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1110051331480.18778@ionos
Signed-off-by: Ingo Molnar <mingo@elte.hu>
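A minimal sketch of the warn-once pattern this describes; the callsite, condition, and message text below are assumptions for illustration, not the actual patch:

  /* Hedged sketch: inside the throttling decision, warn exactly once. */
  if (rt_rq->rt_time > runtime) {
          rt_rq->rt_throttled = 1;
          printk_once(KERN_WARNING "sched: RT throttling activated\n");
  }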
-
Peter Zijlstra authored
Currently every sched_class::set_cpus_allowed() implementation has to copy the cpumask into task_struct::cpus_allowed. This is pointless; do this copy in the generic code instead.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/n/tip-jhl5s9fckd9ptw1fzbqqlrd3@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
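A hedged sketch of the resulting shape; the helper name and the rt weight update are assumptions drawn from the kernel of that era, not quoted from this commit. The class hook keeps only class-specific bookkeeping, while the copy happens once, generically:

  /* Hedged sketch -- helper name and details are assumptions. */
  void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
  {
          if (p->sched_class && p->sched_class->set_cpus_allowed)
                  p->sched_class->set_cpus_allowed(p, new_mask);

          /* The copy every class used to duplicate, now done once: */
          cpumask_copy(&p->cpus_allowed, new_mask);
          p->rt.nr_cpus_allowed = cpumask_weight(new_mask);
  }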
-
Peter Zijlstra authored
This patch is preparatory for the migrate_disable() implementation, but stands on its own and provides a cleanup. It currently only converts those sites required for task placement. Kosaki-san once mentioned replacing cpus_allowed with a proper cpumask_t instead of the NR_CPUS-sized array it currently is; that would also require something like this.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Link: http://lkml.kernel.org/n/tip-e42skvaddos99psip0vce41o@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Suresh Siddha authored
rq's idle_at_tick is set to idle/busy during the timer tick depending on whether the cpu was idle or not. This will be used later in the load balance that is done in softirq context (which is a process context in -RT kernels).

For nohz kernels, on the cpu doing the nohz idle load balance on behalf of all the idle cpus, rq->idle_at_tick might have a stale value (recorded when it got the timer tick, presumably while it was busy). As the nohz idle load balancing is done at the same place as the regular load balancing, nohz idle load balancing was bailing out when it saw the rq's idle_at_tick not set, leading to poor system utilization.

Rename rq's idle_at_tick to idle_balance and set it when someone requests a nohz idle balance on an idle cpu.

Reported-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20111003220934.892350549@sbsiddha-desk.sc.intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Suresh Siddha authored
Current use of the smp call function to kick the nohz idle balance can deadlock in this scenario:

1. cpu-A did a generic_exec_single() to cpu-B and, after queuing its call single data (csd) to the call single queue, cpu-A took a timer interrupt. The actual IPI to cpu-B to process the call single queue is not yet sent.

2. As part of the timer interrupt handler, cpu-A decided to kick cpu-B for the idle load balancing (sets cpu-B's rq->nohz_balance_kick to 1), and __smp_call_function_single() with nowait will queue the csd to cpu-B's queue. But generic_exec_single() won't send an IPI to cpu-B, as the call single queue was not empty.

3. cpu-A is busy with lots of interrupts.

4. Meanwhile cpu-B is entering and exiting idle and notices that it has its rq->nohz_balance_kick set to '1'. So it goes ahead, does the idle load balancing, and clears its rq->nohz_balance_kick.

5. At this point, the csd queued as part of step 2 above is still locked and waiting to be serviced on cpu-B.

6. cpu-A is still busy with interrupt load and now gets another timer interrupt; as part of it, it decides to kick cpu-B for another idle load balancing (as it finds cpu-B's rq->nohz_balance_kick cleared in step 4 above) and does __smp_call_function_single() with the same csd that is still locked.

7. And we get a deadlock waiting for the csd_lock() in __smp_call_function_single().

The main issue here is that cpu-B can service the idle load balancer kick request from cpu-A even without receiving the IPI, and this leads to doing multiple __smp_call_function_single() calls on the same csd, leading to a deadlock.

To kick a cpu, the scheduler already has the reschedule vector reserved. Use that mechanism (kick_process()) instead of the generic smp call function mechanism to kick off the nohz idle load balancing, and avoid the deadlock.

[ This issue is present from 2.6.35+ kernels, but marking it -stable only from v3.0+ as the proposed fix depends on the scheduler_ipi() that was introduced recently. ]

Reported-by: Prarit Bhargava <prarit@redhat.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: stable@kernel.org # v3.0+
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20111003220934.834943260@sbsiddha-desk.sc.intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
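A hedged sketch of the fixed kick path the commit describes: reuse the always-safe reschedule IPI instead of a csd that deadlocks if reused while still locked. The function body below is a simplification, not the exact patch:

  /* Hedged sketch -- simplified from the commit description. */
  static void nohz_balancer_kick(int ilb_cpu)
  {
          struct rq *rq = cpu_rq(ilb_cpu);

          if (xchg(&rq->nohz_balance_kick, 1))
                  return;                 /* a kick is already pending */

          /* Reschedule vector: no csd, safe to send any number of times. */
          smp_send_reschedule(ilb_cpu);
  }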
-
Ingo Molnar authored
Merge reason: pick up latest fixes.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 05 Oct, 2011 1 commit
-
-
Linus Torvalds authored
-
- 04 Oct, 2011 22 commits
-
-
Linus Torvalds authored
* git://github.com/davem330/net:
  pch_gbe: Fixed the issue on which a network freezes
  pch_gbe: Fixed the issue on which PC was frozen when link was downed.
  make PACKET_STATISTICS getsockopt report consistently between ring and non-ring
  net: xen-netback: correctly restart Tx after a VM restore/migrate
  bonding: properly stop queuing work when requested
  can bcm: fix incomplete tx_setup fix
  RDSRDMA: Fix cleanup of rds_iw_mr_pool
  net: Documentation: Fix type of variables
  ibmveth: Fix oops on request_irq failure
  ipv6: nullify ipv6_ac_list and ipv6_fl_list when creating new socket
  cxgb4: Fix EEH on IBM P7IOC
  can bcm: fix tx_setup off-by-one errors
  MAINTAINERS: tehuti: Alexander Indenbaum's address bounces
  dp83640: reduce driver noise
  ptp: fix L2 event message recognition
-
Linus Torvalds authored
* 'fix/asoc' of git://github.com/tiwai/sound:
  ASoC: omap_mcpdm_remove cannot be __devexit
  ASoC: Fix setting update bits for WM8753_LADC and WM8753_RADC
  ASoC: use a valid device for dev_err() in Zylonite
-
Linus Torvalds authored
* 'drm-fixes' of git://people.freedesktop.org/~airlied/linux:
  drm/radeon/kms: fix channel_remap setup (v2)
  drm/radeon: Set cursor x/y to 0 when x/yorigin > 0.
  drm/radeon: Update AVIVO cursor coordinate origin before x/yorigin calculation.
  drm/radeon: Simplify cursor x/yorigin calculation.
  drm/radeon/kms: fix cursor image off-by-one error
  drm/radeon/kms: Fix logic error in DP HPD handler
  drm/radeon/kms: add retry limits for native DP aux defer
  drm/radeon/kms: fix regression in DP aux defer handling
-
Linus Torvalds authored
* 'spi/merge' of git://git.secretlab.ca/git/linux-2.6:
  spi-topcliff-pch: Fix overrun issue
  spi-topcliff-pch: Add recovery processing in case FIFO overrun error occurs
  spi-topcliff-pch: Fix CPU read complete condition issue
  spi-topcliff-pch: Fix SSN Control issue
  spi-topcliff-pch: add tx-memory clear after complete transmitting
-
Jon Mason authored
Add the ability to disable PCI-E MPS tuning and use the BIOS-configured MPS defaults. Due to the number of issues recently discovered on some x86 chipsets, make this the default behavior.

Also, add the option for peer-to-peer DMA MPS configuration. Peer-to-peer DMA is outside the scope of this patch, but MPS configuration could prevent it from working by having the MPS on one root port differ from the MPS on another. To work around this, simply make the system-wide MPS the smallest possible value (128B).

Signed-off-by: Jon Mason <mason@myri.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
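For background on why 128B is the floor: PCIe encodes Max Payload Size as a 3-bit field, bytes = 128 << code, so the smallest programmable value is code 0, and programming it everywhere makes any two root ports trivially agree. A small self-contained illustration (not from the patch):

  #include <stdio.h>

  /* PCIe MPS encoding: bytes = 128 << code; codes 0..5 are defined
   * (128B..4096B), 6 and 7 are reserved. */
  static unsigned int mps_bytes(unsigned int code)
  {
          return 128u << code;
  }

  int main(void)
  {
          for (unsigned int code = 0; code <= 5; code++)
                  printf("MPS code %u -> %4u bytes\n", code, mps_bytes(code));
          return 0;
  }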
-
Alex Deucher authored
Most asics just use the hw default value, which requires no explicit programming. For those that need a different value, the vbios will program it properly. As such, there's no need to program these registers explicitly in the driver. Changing MC_SHARED_CHREMAP requires a reload of all data in vram, otherwise its contents will be scrambled.

Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=40103

v2: drop now unused channel_remap functions.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Cc: stable@kernel.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
-
Tomoya MORINAGA authored
We found that under added load, Rx data sometimes drops (with DMA transfer mode). The cause is that Tx-DMA processing starts before Rx-DMA processing starts; this causes a FIFO overrun.

This patch fixes the issue by modifying the FIFO tx-threshold and the DMA descriptor size like below.

                  Current               This patch
  Rx-descriptor   4Byte+12Byte*341 -->  12Byte*340-4Byte-12Byte
  Rx-threshold    (Not modified)
  Tx-descriptor   4Byte+12Byte*341 -->  16Byte-12Byte*340
  Tx-threshold    12Byte           -->  2Byte

Signed-off-by: Tomoya MORINAGA <tomoya-linux@dsn.okisemi.com>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
-
Tomoya MORINAGA authored
Add recovery processing in case a FIFO overrun error occurs with DMA transfer mode.

Signed-off-by: Tomoya MORINAGA <tomoya-linux@dsn.okisemi.com>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
-
Tomoya MORINAGA authored
We found that Rx data sometimes drops (with non-DMA transfer mode). The cause is that the read complete condition is never true. This patch fixes the issue.

Signed-off-by: Tomoya MORINAGA <tomoya-linux@dsn.okisemi.com>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
-
Tomoya MORINAGA authored
During processing of one command/data series, SSN should stay LOW. However, currently, SSN goes HIGH. This patch fixes the issue.

Signed-off-by: Tomoya MORINAGA <tomoya-linux@dsn.okisemi.com>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
-
Tomoya MORINAGA authored
Currently, in case of reading data from SPI flash, the command is sent twice. The cause is that the tx-memory clear processing is missing. This patch adds the tx-memory clear processing.

Signed-off-by: Tomoya MORINAGA <tomoya-linux@dsn.okisemi.com>
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
-
Thomas Gleixner authored
On -rt we observed hackbench waking all 400 tasks to a single cpu. This is because of select_idle_sibling()'s interaction with the new ipi-based wakeup scheme.

The existing idle_cpu() test only checks to see if the current task on that cpu is the idle task; it does not take already queued tasks into account, nor does it take queued-to-be-woken tasks into account. If the remote wakeup IPIs come hard enough, there won't be time to schedule away from the idle task, and we would thus keep thinking the cpu was in fact idle, regardless of the fact that there were already several hundred tasks runnable.

We couldn't reproduce this on mainline, but there's no reason it couldn't happen there.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-3o30p18b2paswpc9ohy2gltp@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>
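A hedged sketch of the stricter test the commit implies: besides checking the current task, also reject cpus with queued runnable tasks or pending queued wakeups. Field names follow the kernel of that era and should be treated as assumptions:

  /* Hedged sketch -- field names are assumptions. */
  int idle_cpu(int cpu)
  {
          struct rq *rq = cpu_rq(cpu);

          if (rq->curr != rq->idle)
                  return 0;       /* something other than the idle task runs */
          if (rq->nr_running)
                  return 0;       /* runnable tasks are already queued */
  #ifdef CONFIG_SMP
          if (!llist_empty(&rq->wake_list))
                  return 0;       /* queued-to-be-woken tasks are pending */
  #endif
          return 1;
  }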
-
Peter Zijlstra authored
Initial benchmarks show they're a net loss:

  $ for i in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor ; do echo performance > $i; done
  $ echo 4096 32000 64 128 > /proc/sys/kernel/sem
  $ ./sembench -t 2048 -w 1900 -o 0

Pre:

  run time 30 seconds 778936 worker burns per second
  run time 30 seconds 912190 worker burns per second
  run time 30 seconds 817506 worker burns per second
  run time 30 seconds 830870 worker burns per second
  run time 30 seconds 845056 worker burns per second

Post:

  run time 30 seconds 905920 worker burns per second
  run time 30 seconds 849046 worker burns per second
  run time 30 seconds 886286 worker burns per second
  run time 30 seconds 822320 worker burns per second
  run time 30 seconds 900283 worker burns per second

So about 4% faster. (!)

cpu_relax() stalls the pipeline; therefore, when used in a tight loop it has the following benefits:

  - allows SMT siblings to have a go;
  - reduces pressure on the CPU interconnect.

However, cmpxchg loops are unfair and thus have unbounded completion time, so we should avoid getting into such heavily contended situations where the above benefits make any difference. A typical cmpxchg loop should not go round more than a handful of times at worst; therefore, adding extra delays just slows things down.

Since the llist primitives are new, there aren't any bad users yet, and we should avoid growing them. Heavily contended sites should generally be better off using the ticket locks for serialization, since they provide bounded completion times (fifo-fair over the cpus).

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/1315836358.26517.43.camel@twins
Signed-off-by: Ingo Molnar <mingo@elte.hu>
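To make the resulting loop shape concrete, here is a self-contained userspace C11 analogue (not the kernel source; the function name and the atomics are stand-ins for the kernel's cmpxchg()):

  #include <stdatomic.h>

  struct lnode { struct lnode *next; };

  /* The shape this commit leaves behind: a bare cmpxchg loop. In the
   * expected almost-uncontended case it goes round once or twice, so
   * any pause instruction is pure overhead. */
  static void lpush(_Atomic(struct lnode *) *head, struct lnode *n)
  {
          struct lnode *old = atomic_load_explicit(head, memory_order_relaxed);

          do {
                  n->next = old;  /* 'old' is reloaded by a failed CAS */
          } while (!atomic_compare_exchange_weak(head, &old, n));
  }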
-
Peter Zijlstra authored
Use the generic llist primitives. We had a private lockless list implementation in the scheduler, in the wake-list code; now that we have a generic llist implementation that provides all the required operations, switch to it. This patch is not expected to change any behavior.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/1315836353.26517.42.camel@twins
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Peter Zijlstra authored
So we don't have to expose the struct llist_node member.

Cc: Huang Ying <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1315836348.26517.41.camel@twins
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Huang Ying authored
Use llist in irq_work, instead of irq_work's own lock-less linked list implementation, to avoid the code duplication.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1315461646-1379-6-git-send-email-ying.huang@intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
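A hedged sketch of the direction of the change; the exact struct layout is an assumption, not quoted from the patch:

  /* Hedged sketch -- exact layout is an assumption. */
  struct irq_work {
          unsigned long flags;            /* claim/pending state bits */
          struct llist_node llnode;       /* was: private next pointer with tag bits */
          void (*func)(struct irq_work *);
  };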
-
Huang Ying authored
Extend the llist_add*() functions to return a success indicator; this allows us, in the scheduler code, to send an IPI if the queue was empty.

( There's no effect on existing users, because the llist_add*() functions are inline, thus this will be optimized out by the compiler if not used by callers. )

Signed-off-by: Huang Ying <ying.huang@intel.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1315461646-1379-5-git-send-email-ying.huang@intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
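The pattern this enables, as a hedged two-line sketch; the names are taken from the surrounding commits, and the exact callsite is an assumption:

  /* Hedged sketch -- only the enqueuer who found the list empty pays
   * for the IPI. */
  if (llist_add(&p->wake_entry, &cpu_rq(cpu)->wake_list))
          smp_send_reschedule(cpu);   /* list was empty: poke the remote cpu */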
-
Huang Ying authored
In llist_add() and the other llist functions, if the first cmpxchg() call succeeds, the cpu_relax() executed before it was wasted. So cpu_relax() in a busy loop involving cmpxchg() should go after the cmpxchg() instead of before it. This patch fixes this for all the involved llist functions.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1315461646-1379-4-git-send-email-ying.huang@intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
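Complementing the bare-loop sketch further up the page, here is the intermediate pause-after-CAS ordering this commit introduces, again as a self-contained userspace C11 analogue (the pause builtin is an x86-only stand-in for the kernel's cpu_relax(); an assumption for illustration):

  #include <stdatomic.h>

  struct rnode { struct rnode *next; };

  /* Stand-in for the kernel's cpu_relax(); x86-only builtin. */
  #define cpu_pause() __builtin_ia32_pause()

  /* Hedged sketch of the fixed ordering: the pause runs only after a
   * failed cmpxchg, so the common first-try-succeeds case pays nothing. */
  static void rpush(_Atomic(struct rnode *) *head, struct rnode *n)
  {
          struct rnode *old = atomic_load_explicit(head, memory_order_relaxed);

          for (;;) {
                  n->next = old;          /* 'old' is reloaded by a failed CAS */
                  if (atomic_compare_exchange_weak(head, &old, n))
                          break;
                  cpu_pause();            /* only reached on contention */
          }
  }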
-
Ingo Molnar authored
Remove the in_nmi() checks spread around the code. in_nmi() is not available on every architecture, and it's a pretty obscure and ugly check in any case.

Cc: Huang Ying <ying.huang@intel.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1315461646-1379-3-git-send-email-ying.huang@intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Huang Ying authored
Because the llist code will be used in a performance-critical scheduler code path, make llist_add() and llist_del_all() inline to avoid function call overhead and the related 'glue' overhead.

Signed-off-by: Huang Ying <ying.huang@intel.com>
Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1315461646-1379-2-git-send-email-ying.huang@intel.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Ingo Molnar authored
Merge reason: pick up the latest fixes.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Takashi Iwai authored
Commit 2a7fade7 ("hwmon: lis3: Power on corrections") caused a regression on HP laptops with the 8-bit chip. Writing the CTRL2_BOOT_8B bit seems to clear the BIOS setup, and no proper interrupt for DriveGuard will be triggered any more. Since the init code there is basically only for embedded devices, add a pdata check so that the problematic initialization will be skipped for the hp_accel stuff.

Signed-off-by: Takashi Iwai <tiwai@suse.de>
Cc: Eric Piel <eric.piel@tremplin-utc.net>
Cc: Samu Onkalo <samu.p.onkalo@nokia.com>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 03 Oct, 2011 9 commits
-
-
Linus Torvalds authored
* 'hwmon-for-linus' of git://github.com/groeck/linux:
  hwmon: (coretemp) Avoid leaving around dangling pointer
  hwmon: (coretemp) Fixup platform device ID change
-
Linus Torvalds authored
* git://github.com/davem330/ide:
  ide-disk: Fix request requeuing
-
Linus Torvalds authored
* 'btrfs-3.0' of git://github.com/chrismason/linux:
  Btrfs: force a page fault if we have a shorty copy on a page boundary
-
Borislav Petkov authored
Simon Kirby reported that on his RAID setup with ide-disk underneath, the box OOMs after a couple of days of runtime. Running with CONFIG_DEBUG_KMEMLEAK pointed to idedisk_prep_fn(), which unconditionally allocates an ide_cmd struct. However, ide_requeue_and_plug() can be called more than once per request, either from the request issue path or the IRQ handler path, and each blk_peek_request() ends up in idedisk_prep_fn() repeatedly, allocating a struct ide_cmd every time and "forgetting" the previous pointer.

Make sure the code reuses the old allocated chunk.

Reported-and-tested-by: Simon Kirby <sim@hostway.ca>
Cc: <stable@kernel.org> [ 39.x, 3.0.x ]
Link: http://marc.info/?l=linux-kernel&m=131667641517919
Link: http://lkml.kernel.org/r/20110922072643.GA27232@hostway.ca
Signed-off-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
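A hedged sketch of the fix shape; it is simplified, and rq->special as the stash location is an assumption based on how block-layer prep functions of that era kept per-request data:

  /* Hedged sketch -- simplified, not the verbatim patch. */
  static int idedisk_prep_fn(struct request_queue *q, struct request *rq)
  {
          struct ide_cmd *cmd;

          if (rq->special) {
                  /* Requeued request: reuse the chunk allocated last time. */
                  cmd = rq->special;
                  memset(cmd, 0, sizeof(*cmd));
          } else {
                  cmd = kzalloc(sizeof(*cmd), GFP_ATOMIC);
                  if (!cmd)
                          return BLKPREP_DEFER;
          }

          /* ... build the taskfile, then attach it to the request ... */
          rq->special = cmd;
          return BLKPREP_OK;
  }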
-
Toshiharu Okada authored
The pch_gbe driver has an issue where the network stops when receive traffic is high. In that case, a link down and up is necessary to bring the network back. This patch fixes this issue.

Signed-off-by: Toshiharu Okada <toshiharu-linux@dsn.okisemi.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Toshiharu Okada authored
When the link is downed during network use, there is an issue where the PC freezes. This patch fixes this issue.

Signed-off-by: Toshiharu Okada <toshiharu-linux@dsn.okisemi.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Willem de Bruijn authored
This is a minor change. Up until kernel 2.6.32, getsockopt(fd, SOL_PACKET, PACKET_STATISTICS, ...) would return the total and dropped packets since its last invocation. The introduction of socket queue overflow reporting [1] changed the drop rate calculation in the normal packet socket path, but not when using a packet ring. As a result, the getsockopt now returns different statistics depending on the reception method used. With a ring, it still returns the count since the last call, as counts are incremented in tpacket_rcv and reset in getsockopt. Without a ring, it returns 0 if no drops occurred since the last getsockopt, and the total drops over the lifespan of the socket otherwise.

The culprit is this line in packet_rcv, executed on a drop:

  drop_n_acct:
          po->stats.tp_drops = atomic_inc_return(&sk->sk_drops);

As it shows, the new drop number is taken from the socket drop counter, which is not reset at getsockopt. I put together a small example that demonstrates the issue [2]. It runs for 10 seconds and overflows the queue/ring on every odd second. The reported drop rates are:

  ring:     16, 0, 16, 0, 16, ...
  non-ring: 0, 15, 0, 30, 0, 46, 0, 60, 0, 74

Note how the non-ring counts monotonically increase. Because the getsockopt adds tp_drops to tp_packets, total counts are similarly reported cumulatively.

Long story short, reinstating the original code, as the below patch does, fixes the issue at the cost of additional per-packet cycles. Another solution that does not introduce per-packet overhead would be to keep the current data path, record the value of sk_drops at getsockopt() call N in a new field in struct packet_sock, and subtract that when reporting at call N+1. I'll be happy to code that instead; it's just more messy.

[1] http://patchwork.ozlabs.org/patch/35665/
[2] http://kernel.googlecode.com/files/test-packetsock-getstatistics.c

Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
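For reference, a minimal self-contained reader for these counters (userspace C; with the fix, tp_drops means "drops since the previous call" for both ring and non-ring sockets):

  #include <stdio.h>
  #include <sys/socket.h>
  #include <linux/if_packet.h>

  /* Read (and thereby reset) the packet/drop counters of a packet socket. */
  static void print_packet_stats(int fd)
  {
          struct tpacket_stats st;
          socklen_t len = sizeof(st);

          if (getsockopt(fd, SOL_PACKET, PACKET_STATISTICS, &st, &len) == 0)
                  printf("received: %u dropped: %u\n",
                         st.tp_packets, st.tp_drops);
  }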
-
David Vrabel authored
If a VM is saved and restored (or migrated), the netback driver will no longer process any Tx packets from the frontend. xenvif_up() does not schedule the processing of any pending Tx requests from the frontend because the carrier is off. Without this initial kick, the frontend just adds Tx requests to the ring without raising an event (until the ring is full).

This was caused by 47103041 (net: xen-netback: convert to hw_features), which reordered the calls to xenvif_up() and netif_carrier_on() in xenvif_connect().

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
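A hedged sketch of the corrected ordering in xenvif_connect(); this is simplified, and the netif_running() guard is an assumption:

  /* Hedged sketch -- raise the carrier before xenvif_up() so the
   * initial kick of pending Tx requests is not skipped. */
  netif_carrier_on(vif->dev);
  if (netif_running(vif->dev))
          xenvif_up(vif);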
-
Andy Gospodarek authored
During a test where a pair of bonding interfaces using ARP monitoring were both brought up and torn down (with an rmmod) repeatedly, a panic in the timer code was noticed. I tracked this down and determined that any of the bonding functions that ran as workqueue handlers and requeued more work might not properly exit when the module was removed.

There is a flag protected by the bond lock called kill_timers that is set when the interface goes down or the module is removed, but many of the functions that monitor link status now drop the bond lock to take rtnl first. There is a chance that another CPU running the rmmod could get the lock and set kill_timers after the first check has passed.

This patch does not allow any function to queue work that will make itself run unless kill_timers is not set. I also noticed while doing this work that bond_resend_igmp_join_requests() did not have a check for kill_timers, so I added the needed call there as well.

Signed-off-by: Andy Gospodarek <andy@greyhouse.net>
Reported-by: Liang Zheng <lzheng@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
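A hedged sketch of the recheck-before-requeue pattern the commit describes; the function and field names follow the bonding driver of that era, but the body is a simplification, not the actual patch:

  /* Hedged sketch -- recheck kill_timers before re-arming the work. */
  static void bond_mii_monitor(struct work_struct *work)
  {
          struct bonding *bond = container_of(work, struct bonding,
                                              mii_work.work);

          read_lock(&bond->lock);
          if (bond->kill_timers)
                  goto out;

          /* ... monitor link state; may drop bond->lock to take rtnl ... */

          if (!bond->kill_timers)     /* recheck: rmmod may have raced in */
                  queue_delayed_work(bond->wq, &bond->mii_work,
                                     msecs_to_jiffies(bond->params.miimon));
  out:
          read_unlock(&bond->lock);
  }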
-