1. 15 May, 2015 14 commits
    • wl18xx: show rx_frames_per_rates as an array as it really is · dbbf764b
      Nicolas Iooss authored
      commit a3fa71c4 upstream.
      
      In struct wl18xx_acx_rx_rate_stat, rx_frames_per_rates field is an
      array, not a number.  This means WL18XX_DEBUGFS_FWSTATS_FILE can't be
      used to display this field in debugfs (it would display a pointer, not
      the actual data).  Use WL18XX_DEBUGFS_FWSTATS_FILE_ARRAY instead.
      
      This bug has been found by adding a __printf attribute to
      wl1271_format_buffer.  gcc complained about "format '%u' expects
      argument of type 'unsigned int', but argument 5 has type 'u32 *'".
      
      Fixes: c5d94169 ("wl18xx: use new fw stats structures")
      Signed-off-by: Nicolas Iooss <nicolas.iooss_linux@m4x.org>
      Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      dbbf764b
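
      A standalone illustration (not driver code) of the warning class that exposed
      this bug: a helper carrying a __printf() attribute, like wl1271_format_buffer,
      lets gcc see that a "%u" conversion does not match an array argument. The
      array size and the format_buffer() helper below are assumptions for the sketch.

        /* Intentionally wrong call to trigger the format warning. */
        #include <stdarg.h>
        #include <stdio.h>

        #define __printf(a, b) __attribute__((format(printf, a, b)))

        static __printf(1, 2) void format_buffer(const char *fmt, ...)
        {
                va_list args;

                va_start(args, fmt);
                vprintf(fmt, args);
                va_end(args);
        }

        int main(void)
        {
                unsigned int rx_frames_per_rates[50] = { 0 };  /* size is arbitrary here */

                /* gcc: format '%u' expects argument of type 'unsigned int', but the
                 * argument has type 'unsigned int *': the array decays to a pointer,
                 * so only an address, not the data, would ever be printed. */
                format_buffer("%u", rx_frames_per_rates);
                return 0;
        }
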
    • lib: memzero_explicit: use barrier instead of OPTIMIZER_HIDE_VAR · bf0e4b9a
      mancha security authored
      commit 0b053c95 upstream.
      
      OPTIMIZER_HIDE_VAR(), as defined when using gcc, is insufficient to
      ensure protection from dead store optimization.
      
      For the random driver and crypto drivers, calls are emitted ...
      
        $ gdb vmlinux
        (gdb) disassemble memzero_explicit
        Dump of assembler code for function memzero_explicit:
          0xffffffff813a18b0 <+0>:	push   %rbp
          0xffffffff813a18b1 <+1>:	mov    %rsi,%rdx
          0xffffffff813a18b4 <+4>:	xor    %esi,%esi
          0xffffffff813a18b6 <+6>:	mov    %rsp,%rbp
          0xffffffff813a18b9 <+9>:	callq  0xffffffff813a7120 <memset>
          0xffffffff813a18be <+14>:	pop    %rbp
          0xffffffff813a18bf <+15>:	retq
        End of assembler dump.
      
        (gdb) disassemble extract_entropy
        [...]
          0xffffffff814a5009 <+313>:	mov    %r12,%rdi
          0xffffffff814a500c <+316>:	mov    $0xa,%esi
          0xffffffff814a5011 <+321>:	callq  0xffffffff813a18b0 <memzero_explicit>
          0xffffffff814a5016 <+326>:	mov    -0x48(%rbp),%rax
        [...]
      
      ... but should we use facilities such as LTO in the future, then
      OPTIMIZER_HIDE_VAR() is not sufficient to prevent gcc from possibly
      eliminating the memset(). We have to use a compiler barrier instead.
      
      Minimal test example, assuming memzero_explicit() would *not* be a
      call but would instead have been *inlined*:
      
        static inline void memzero_explicit(void *s, size_t count)
        {
          memset(s, 0, count);
          <foo>
        }
      
        int main(void)
        {
          char buff[20];
      
          snprintf(buff, sizeof(buff) - 1, "test");
          printf("%s", buff);
      
          memzero_explicit(buff, sizeof(buff));
          return 0;
        }
      
      With <foo> := OPTIMIZER_HIDE_VAR():
      
        (gdb) disassemble main
        Dump of assembler code for function main:
        [...]
         0x0000000000400464 <+36>:	callq  0x400410 <printf@plt>
         0x0000000000400469 <+41>:	xor    %eax,%eax
         0x000000000040046b <+43>:	add    $0x28,%rsp
         0x000000000040046f <+47>:	retq
        End of assembler dump.
      
      With <foo> := barrier():
      
        (gdb) disassemble main
        Dump of assembler code for function main:
        [...]
         0x0000000000400464 <+36>:	callq  0x400410 <printf@plt>
         0x0000000000400469 <+41>:	movq   $0x0,(%rsp)
         0x0000000000400471 <+49>:	movq   $0x0,0x8(%rsp)
         0x000000000040047a <+58>:	movl   $0x0,0x10(%rsp)
         0x0000000000400482 <+66>:	xor    %eax,%eax
         0x0000000000400484 <+68>:	add    $0x28,%rsp
         0x0000000000400488 <+72>:	retq
        End of assembler dump.
      
      As can be seen, with barrier() the movq, movq, movl stores from the
      inlined memset() are actually emitted.
      
      Reference: http://thread.gmane.org/gmane.linux.kernel.cryptoapi/13764/
      Fixes: d4c5efdb ("random: add and use memzero_explicit() for clearing data")
      Cc: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: mancha security <mancha1@zoho.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Acked-by: Stephan Mueller <smueller@chronox.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      bf0e4b9a
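
      For reference, a minimal sketch of the resulting memzero_explicit() in
      lib/string.c, assuming the fix is exactly the barrier() swap described above:

        /* Sketch of the fixed helper: the compiler barrier after memset() keeps
         * gcc from treating the zeroing as a dead store and eliding it. */
        void memzero_explicit(void *s, size_t count)
        {
                memset(s, 0, count);
                barrier();   /* was: OPTIMIZER_HIDE_VAR(s) */
        }
        EXPORT_SYMBOL(memzero_explicit);
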
    • e1000: add dummy allocator to fix race condition between mtu change and netpoll · 47cd2dc5
      Sabrina Dubroca authored
      commit 08e83316 upstream.
      
      There is a race condition between e1000_change_mtu's cleanups and
      netpoll, when we change the MTU across jumbo size:
      
      Changing MTU frees all the rx buffers:
          e1000_change_mtu -> e1000_down -> e1000_clean_all_rx_rings ->
              e1000_clean_rx_ring
      
      Then, close to the end of e1000_change_mtu:
          pr_info -> ... -> netpoll_poll_dev -> e1000_clean ->
              e1000_clean_rx_irq -> e1000_alloc_rx_buffers -> e1000_alloc_frag
      
      And when we come back to do the rest of the MTU change:
          e1000_up -> e1000_configure -> e1000_configure_rx ->
              e1000_alloc_jumbo_rx_buffers
      
      e1000_alloc_jumbo_rx_buffers finds the buffers already non-NULL, since
      data (shared with page in e1000_rx_buffer->rxbuf) has been re-allocated,
      but it is garbage, or at least not what is expected in the jumbo state.
      
      This results in an unusable adapter (packets don't get through), and a
      NULL pointer dereference on the next call to e1000_clean_rx_ring
      (other mtu change, link down, shutdown):
      
      BUG: unable to handle kernel NULL pointer dereference at           (null)
      IP: [<ffffffff81194d6e>] put_compound_page+0x7e/0x330
      
          [...]
      
      Call Trace:
       [<ffffffff81195445>] put_page+0x55/0x60
       [<ffffffff815d9f44>] e1000_clean_rx_ring+0x134/0x200
       [<ffffffff815da055>] e1000_clean_all_rx_rings+0x45/0x60
       [<ffffffff815df5e0>] e1000_down+0x1c0/0x1d0
       [<ffffffff811e2260>] ? deactivate_slab+0x7f0/0x840
       [<ffffffff815e21bc>] e1000_change_mtu+0xdc/0x170
       [<ffffffff81647050>] dev_set_mtu+0xa0/0x140
       [<ffffffff81664218>] do_setlink+0x218/0xac0
       [<ffffffff814459e9>] ? nla_parse+0xb9/0x120
       [<ffffffff816652d0>] rtnl_newlink+0x6d0/0x890
       [<ffffffff8104f000>] ? kvm_clock_read+0x20/0x40
       [<ffffffff810a2068>] ? sched_clock_cpu+0xa8/0x100
       [<ffffffff81663802>] rtnetlink_rcv_msg+0x92/0x260
      
      By setting the allocator to a dummy version, netpoll can't mess up our
      rx buffers.  The allocator is set back to a sane value in
      e1000_configure_rx.
      
      Fixes: edbbb3ca ("e1000: implement jumbo receive with partial descriptors")
      Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
      Tested-by: Aaron Brown <aaron.f.brown@intel.com>
      Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      47cd2dc5
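
      A minimal sketch of the approach described above; the dummy allocator and the
      alloc_rx_buf hook follow the e1000 driver's naming, but the exact placement of
      the assignment is an assumption here.

        /* Sketch: a no-op allocator so netpoll cannot refill rx buffers while
         * e1000_change_mtu() has torn them down. */
        static void e1000_alloc_dummy_rx_buffers(struct e1000_adapter *adapter,
                                                 struct e1000_rx_ring *rx_ring,
                                                 int cleaned_count)
        {
        }

        /* in e1000_change_mtu(), before tearing the rings down: */
        adapter->alloc_rx_buf = e1000_alloc_dummy_rx_buffers;
        /* ... e1000_down(); adjust the buffer length; e1000_up() ...
         * e1000_configure_rx() later restores the real allocator. */
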
    • ksoftirqd: Enable IRQs and call cond_resched() before poking RCU · 4e75ff4f
      Calvin Owens authored
      commit 28423ad2 upstream.
      
      While debugging an issue with excessive softirq usage, I encountered the
      following note in commit 3e339b5d ("softirq: Use hotplug thread
      infrastructure"):
      
          [ paulmck: Call rcu_note_context_switch() with interrupts enabled. ]
      
      ...but despite this note, the patch still calls RCU with IRQs disabled.
      
      This seemingly innocuous change caused a significant regression in softirq
      CPU usage on the sending side of a large TCP transfer (~1 GB/s): when
      introducing 0.01% packet loss, the softirq usage would jump to around 25%,
      spiking as high as 50%. Before the change, the usage would never exceed 5%.
      
      Moving the call to rcu_note_context_switch() after the cond_resched()
      call, as it was before the hotplug patch, completely eliminated this
      problem.
      Signed-off-by: Calvin Owens <calvinowens@fb.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Mike Galbraith <mgalbraith@suse.de>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      4e75ff4f
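
      A sketch of the reordering described in the message, in the shape of
      run_ksoftirqd(); whether the 3.12 backport calls cond_resched() followed by
      rcu_note_context_switch() or folds both into a newer helper is an assumption
      here.

        static void run_ksoftirqd(unsigned int cpu)
        {
                local_irq_disable();
                if (local_softirq_pending()) {
                        __do_softirq();
                        /* Enable IRQs and reschedule *before* poking RCU, as the
                         * note in commit 3e339b5d intended. */
                        local_irq_enable();
                        cond_resched();
                        rcu_note_context_switch(cpu);
                        return;
                }
                local_irq_enable();
        }
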
    • RCU pathwalk breakage when running into a symlink overmounting something · 6e310c82
      Al Viro authored
      commit 3cab989a upstream.
      
      Calling unlazy_walk() in walk_component() and do_last() when we find
      a symlink that needs to be followed doesn't acquire a reference to vfsmount.
      That's fine when the symlink is on the same vfsmount as the parent directory
      (which is almost always the case), but it's not always true - one _can_
      manage to bind a symlink on top of something.  And in such cases we end up
      with excessive mntput().
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      6e310c82
    • drm/i915: cope with large i2c transfers · 378e22f6
      Dmitry Torokhov authored
      commit 9535c475 upstream.
      
      The hardware, according to the specs, is limited to 256-byte transfers,
      and the current driver has no protection in case users attempt larger
      transfers. The code will just stomp over the status register and mayhem
      ensues.
      
      Let's split larger transfers into digestible chunks. Doing this allows
      the Atmel MXT driver on the Pixel 1 to function properly (it hasn't
      since commit 9d8dc3e5 "Input: atmel_mxt_ts -
      implement T44 message handling", which tries to consume multiple
      touchscreen/touchpad reports in a single transaction).
      Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Signed-off-by: Jani Nikula <jani.nikula@intel.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      378e22f6
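
      The general shape of such a fix is a loop over the buffer in chunks the
      hardware can actually handle. This is illustrative only: GMBUS_BYTE_COUNT_MAX
      and do_one_xfer() below stand in for the real i915 gmbus helpers.

        #define GMBUS_BYTE_COUNT_MAX    256     /* hardware limit per transfer */

        static int xfer_in_chunks(u8 *buf, unsigned int len)
        {
                while (len) {
                        unsigned int chunk = min_t(unsigned int, len,
                                                   GMBUS_BYTE_COUNT_MAX);
                        int ret = do_one_xfer(buf, chunk);  /* hypothetical helper */

                        if (ret)
                                return ret;
                        buf += chunk;
                        len -= chunk;
                }
                return 0;
        }
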
    • drm/radeon: fix doublescan modes (v2) · b1a930b8
      Alex Deucher authored
      commit fd99a094 upstream.
      
      Use the correct flags for atom.
      
      v2: handle DRM_MODE_FLAG_DBLCLK
      Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      b1a930b8
    • i2c: core: Export bus recovery functions · 5d916df8
      Mark Brown authored
      commit c1c21f4e upstream.
      
      Current -next fails to link an ARM allmodconfig because drivers that use
      the core recovery functions can be built as modules but those functions
      are not exported:
      
      ERROR: "i2c_generic_gpio_recovery" [drivers/i2c/busses/i2c-davinci.ko] undefined!
      ERROR: "i2c_generic_scl_recovery" [drivers/i2c/busses/i2c-davinci.ko] undefined!
      ERROR: "i2c_recover_bus" [drivers/i2c/busses/i2c-davinci.ko] undefined!
      
      Add exports to fix this.
      
      Fixes: 5f9296ba (i2c: Add bus recovery infrastructure)
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      5d916df8
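
      The fix is presumably just export annotations on the three symbols from the
      link errors, along these lines (whether plain EXPORT_SYMBOL or
      EXPORT_SYMBOL_GPL is used is an assumption here):

        /* drivers/i2c/i2c-core.c: make the recovery helpers visible to modules */
        EXPORT_SYMBOL_GPL(i2c_generic_gpio_recovery);
        EXPORT_SYMBOL_GPL(i2c_generic_scl_recovery);
        EXPORT_SYMBOL_GPL(i2c_recover_bus);
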
    • IB/mlx4: Fix WQE LSO segment calculation · ef7c2618
      Erez Shitrit authored
      commit ca9b590c upstream.
      
      The current code subtracts the size of the packet headers from the MSS
      (which is the gso_size from the kernel skb).

      It shouldn't do that, because the MSS that comes from the stack
      (e.g. IPoIB) covers only the TCP payload, without the headers.

      The result is that the HW is told each packet it sends is smaller than
      it could be, so too many packets are sent for big messages.

      An easy way to demonstrate another aspect of the problem is to configure
      the IPoIB MTU to be less than 2*hlen (2*56) and then run an application
      sending big TCP messages. This tells the HW to send packets with a giant
      length (a negative value which, under unsigned arithmetic, becomes a huge
      positive one), and the QP moves to the SQE state.
      
      Fixes: b832be1e ('IB/mlx4: Add IPoIB LSO support')
      Reported-by: Matthew Finlay <matt@mellanox.com>
      Signed-off-by: Erez Shitrit <erezsh@mellanox.com>
      Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      ef7c2618
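
      A sketch of what the LSO segment construction looks like before and after, as
      far as the mlx4 WQE layout is recalled here; the field and work-request names
      are assumptions.

        /* build_lso_seg() sketch: stop subtracting the header length from the MSS,
         * since the stack's gso_size already excludes the headers. */

        /* before: under-reports the segment size to the HW */
        wqe->mss_hdr_size = cpu_to_be32(((wr->wr.ud.mss - wr->wr.ud.hlen) << 16) |
                                        wr->wr.ud.hlen);

        /* after: program the MSS as provided by the stack */
        wqe->mss_hdr_size = cpu_to_be32((wr->wr.ud.mss << 16) | wr->wr.ud.hlen);
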
    • IB/core: don't disallow registering region starting at 0x0 · 06c7b3ec
      Yann Droneaud authored
      commit 66578b0b upstream.
      
      In a call to ib_umem_get(), if the address is 0x0 and the size is
      already page aligned, the check added in commit 8494057a
      ("IB/uverbs: Prevent integer overflow in ib_umem_get address
      arithmetic") will refuse to register a memory region that
      could otherwise be valid (provided the vm.mmap_min_addr sysctl
      and mmap_low_allowed SELinux knobs allow userspace to map
      something at address 0x0).

      This patch allows such registration again: ib_umem_get()
      should probably not care about the base address, provided the
      memory can be pinned with get_user_pages().

      There are two possible overflows, in (addr + size) and in
      PAGE_ALIGN(addr + size); this patch keeps ensuring that neither
      of them happens while allowing memory at address 0x0 to be
      pinned. The case of size equal to 0 is no longer (partially)
      handled here, as 0-length memory regions are disallowed by an
      earlier check.
      
      Link: http://mid.gmane.org/cover.1428929103.git.ydroneaud@opteya.com
      Cc: Shachar Raindel <raindel@mellanox.com>
      Cc: Jack Morgenstein <jackm@mellanox.com>
      Cc: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
      Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
      Reviewed-by: Haggai Eran <haggaie@mellanox.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      06c7b3ec
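
      A sketch of the reworked overflow check in ib_umem_get(), matching the two
      overflows named above while no longer rejecting a page-aligned region at
      address 0x0; the exact form is an assumption.

        /* Reject only genuine wrap-arounds of addr + size or of its page-aligned
         * value; addr == 0 with a page-aligned size is now accepted. */
        if (((addr + size) < addr) ||
            (PAGE_ALIGN(addr + size) < (addr + size)))
                return ERR_PTR(-EINVAL);
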
    • IB/core: disallow registering 0-sized memory region · 345726e4
      Yann Droneaud authored
      commit 8abaae62 upstream.
      
      If ib_umem_get() is called with a size equal to 0 and a
      non-page-aligned address, one page will be pinned and a
      0-sized umem will be returned to the caller.
      
      This should not be allowed: it's not expected for a memory
      region to have a size equal to 0.
      
      This patch adds a check to explicitly refuse to register
      a 0-sized region.
      
      Link: http://mid.gmane.org/cover.1428929103.git.ydroneaud@opteya.com
      Cc: Shachar Raindel <raindel@mellanox.com>
      Cc: Jack Morgenstein <jackm@mellanox.com>
      Cc: Or Gerlitz <ogerlitz@mellanox.com>
      Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      345726e4
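
      The check itself is presumably a one-liner early in ib_umem_get(), something
      like:

        /* Refuse 0-sized regions before any page gets pinned. */
        if (!size)
                return ERR_PTR(-EINVAL);
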
    • stk1160: Make sure current buffer is released · 19b7fcb1
      Ezequiel Garcia authored
      commit aeff0927 upstream.
      
      The available (i.e. not used) buffers are returned by stk1160_clear_queue(),
      on the stop_streaming() path. However, this is insufficient and the current
      buffer must be released as well. Fix it.
      Signed-off-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
      Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
      Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      19b7fcb1
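
      A sketch of releasing the in-flight buffer in stk1160_clear_queue(); the
      isoc_ctl.buf field and buf_lock follow the stk1160 driver's naming as recalled
      here, so treat them as assumptions.

        unsigned long flags;

        /* Also hand back the buffer currently owned by the ISOC machinery,
         * under the lock that protects it against the URB completion path. */
        spin_lock_irqsave(&dev->buf_lock, flags);
        if (dev->isoc_ctl.buf) {
                vb2_buffer_done(&dev->isoc_ctl.buf->vb, VB2_BUF_STATE_ERROR);
                dev->isoc_ctl.buf = NULL;
        }
        spin_unlock_irqrestore(&dev->buf_lock, flags);
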
    • mvsas: fix panic on expander attached SATA devices · e0344d11
      James Bottomley authored
      commit 56cbd0cc upstream.
      
      mvsas is giving a general protection fault when it encounters an expander
      attached ATA device.  Analysis of mvs_task_prep_ata() shows that the driver is
      assuming all ATA devices are locally attached and obtaining the phy mask by
      indexing the local phy table (in the HBA structure) with the phy id.  Since
      expanders have many more phys than the HBA, this causes the index into the
      HBA phy table to overflow and return rubbish as the pointer.

      mvs_task_prep_ssp(), by contrast, derives the phy mask from the port
      properties.  Mirror this in mvs_task_prep_ata() to fix the panic.
      Reported-by: Adam Talbot <ajtalbot1@gmail.com>
      Tested-by: Adam Talbot <ajtalbot1@gmail.com>
      Signed-off-by: James Bottomley <JBottomley@Odin.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      e0344d11
    • Drivers: hv: vmbus: Fix a bug in the error path in vmbus_open() · ec091a7e
      K. Y. Srinivasan authored
      commit 40384e4b upstream.
      
      Correctly roll back state if the failure occurs after we have handed over
      the ownership of the buffer to the host.
      Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      ec091a7e
  2. 04 May, 2015 26 commits