1. 25 Jun, 2019 27 commits
  2. 22 Jun, 2019 13 commits
    • Linux 5.1.14 · 5f0a74b4
      Greg Kroah-Hartman authored
    • tcp: refine memory limit test in tcp_fragment() · b27f2c88
      Eric Dumazet authored
      commit b6653b36 upstream.
      
      tcp_fragment() might be called for skbs in the write queue.
      
      Memory limits might have been exceeded because tcp_sendmsg() only
      checks limits at full skb (64KB) boundaries.
      
      Therefore, we need to make sure tcp_fragment() won't punish applications
      that might have set up very low SO_SNDBUF values.
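
      A minimal sketch of the refined test, assuming the tcp_queue parameter
      and TCP_FRAG_IN_WRITE_QUEUE value used by tcp_fragment(); this is an
      approximation of the idea, not the verbatim upstream diff:

              /* Only fail splits of skbs sitting in the retransmit queue;
               * skbs still in the write queue may legitimately push
               * sk_wmem_queued past a tiny SO_SNDBUF between the limit
               * checks done by tcp_sendmsg().
               */
              if (unlikely((sk->sk_wmem_queued >> 1) > limit &&
                           tcp_queue != TCP_FRAG_IN_WRITE_QUEUE)) {
                      NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPWQUEUETOOBIG);
                      return -ENOMEM;
              }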
      
      Fixes: f070ef2a ("tcp: tcp_fragment() should apply sane memory limits")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: Christoph Paasch <cpaasch@apple.com>
      Tested-by: Christoph Paasch <cpaasch@apple.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • Linux 5.1.13 · 7da3d9e6
      Greg Kroah-Hartman authored
    • coredump: fix race condition between collapse_huge_page() and core dumping · d7dcfd25
      Andrea Arcangeli authored
      commit 59ea6d06 upstream.
      
      When fixing the race conditions between the coredump and the mmap_sem
      holders outside the context of the process, we focused on
      mmget_not_zero()/get_task_mm() callers in 04f5866e ("coredump: fix
      race condition between mmget_not_zero()/get_task_mm() and core
      dumping"), but those aren't the only cases where the mmap_sem can be
      taken outside of the context of the process as Michal Hocko noticed
      while backporting that commit to older -stable kernels.
      
      If mmgrab() is called in the context of the process, but then the
      mm_count reference is transferred outside the context of the process,
      that can also be a problem if the mmap_sem has to be taken for writing
      through that mm_count reference.
      
      khugepaged registration calls mmgrab() in the context of the process,
      but the mmap_sem for writing is taken later in the context of the
      khugepaged kernel thread.
      
      collapse_huge_page() after taking the mmap_sem for writing doesn't
      modify any vma, so it's not obvious that it could cause a problem to the
      coredump, but it happens to modify the pmd in a way that breaks an
      invariant that pmd_trans_huge_lock() relies upon.  collapse_huge_page()
      needs the mmap_sem for writing just to block concurrent page faults that
      call pmd_trans_huge_lock().
      
      Specifically the invariant that "!pmd_trans_huge()" cannot become a
      "pmd_trans_huge()" doesn't hold while collapse_huge_page() runs.
      
      The coredump will call __get_user_pages() without mmap_sem for reading,
      which eventually can invoke a lockless page fault which will need a
      functional pmd_trans_huge_lock().
      
      So collapse_huge_page() needs to use mmget_still_valid() to check it's
      not running concurrently with the coredump...  as long as the coredump
      can invoke page faults without holding the mmap_sem for reading.
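
      A rough sketch of that check in the khugepaged path (an approximation,
      not the verbatim patch; the result code is illustrative):

              /* collapse_huge_page(), after taking the mmap_sem for writing */
              down_write(&mm->mmap_sem);
              if (!mmget_still_valid(mm)) {
                      /* a coredump is in flight; don't touch the pmd under it */
                      result = SCAN_ANY_PROCESS;
                      goto out;
              }
              result = hugepage_vma_revalidate(mm, address, &vma);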
      
      This has "Fixes: khugepaged" to facilitate backporting, but in my view
      it's more a bug in the coredump code that will eventually have to be
      rewritten to stop invoking page faults without the mmap_sem for reading.
      So the long term plan is still to drop all mmget_still_valid().
      
      Link: http://lkml.kernel.org/r/20190607161558.32104-1-aarcange@redhat.com
      Fixes: ba76149f ("thp: khugepaged")
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Reported-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Jann Horn <jannh@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Peter Xu <peterx@redhat.com>
      Cc: Jason Gunthorpe <jgg@mellanox.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • nvme-tcp: fix queue mapping when queue count is limited · 3111e132
      Sagi Grimberg authored
      commit 64861993 upstream.
      
      When the controller supports fewer queues than requested, we should
      make sure that queue mapping does the right thing and not assume that
      all queues are available. This fixes a crash in that case.
      
      The rules are:
      1. if no write queues are requested, we assign the available queues
         to the default queue map. The default and read queue maps share the
         existing queues.
      2. if write queues are requested:
        - first make sure that read queue map gets the requested
          nr_io_queues count
        - then grant the default queue map the minimum between the requested
          nr_write_queues and the remaining queues. If there are no available
          queues to dedicate to the default queue map, fallback to (1) and
          share all the queues in the existing queue map.
      
      Also, provide a log indication on how we constructed the different
      queue maps.
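
      A hedged sketch of how the requested counts can be split according to
      the rules above (field and option names follow the nvme-tcp driver's
      io_queues[]/opts layout, but this is an approximation, not the verbatim
      upstream code):

              if (opts->nr_write_queues && opts->nr_io_queues < nr_io_queues) {
                      /* separate maps: read queues first, then dedicate
                       * whatever is left to the default (write) map */
                      ctrl->io_queues[HCTX_TYPE_READ] = opts->nr_io_queues;
                      nr_io_queues -= ctrl->io_queues[HCTX_TYPE_READ];
                      ctrl->io_queues[HCTX_TYPE_DEFAULT] =
                              min(opts->nr_write_queues, nr_io_queues);
              } else {
                      /* no dedicated queues available: default and read
                       * maps share the existing queues */
                      ctrl->io_queues[HCTX_TYPE_DEFAULT] =
                              min(opts->nr_io_queues, nr_io_queues);
              }
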
      Reported-by: Harris, James R <james.r.harris@intel.com>
      Tested-by: Jim Harris <james.r.harris@intel.com>
      Cc: <stable@vger.kernel.org> # v5.0+
      Suggested-by: Roy Shterman <roys@lightbitslabs.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • nvme-tcp: fix possible null deref on a timed out io queue connect · d28c4813
      Sagi Grimberg authored
      commit f34e2589 upstream.
      
      If the I/O queue connect times out, we might have already freed the
      queue socket, so check for that on the error path in
      nvme_tcp_start_queue().
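
      A short sketch of the idea on the nvme_tcp_start_queue() error path
      (the NVME_TCP_Q_ALLOCATED check approximates the fix; not the verbatim
      diff):

              ret = nvmf_connect_io_queue(nctrl, idx, false);
              if (!ret) {
                      set_bit(NVME_TCP_Q_LIVE, &queue->flags);
              } else {
                      /* the socket may already be gone if the connect timed out */
                      if (test_bit(NVME_TCP_Q_ALLOCATED, &queue->flags))
                              __nvme_tcp_stop_queue(queue);
                      dev_err(nctrl->device,
                              "failed to connect queue: %d ret=%d\n", idx, ret);
              }
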
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • nvme-tcp: rename function to have nvme_tcp prefix · bd445693
      Sagi Grimberg authored
      commit efb973b1 upstream.
      
      Usually the nvme_ prefix is for core functions. While we're cleaning
      up, also remove redundant empty lines.
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Minwoo Im <minwoo.im@samsung.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • mm: mmu_gather: remove __tlb_reset_range() for force flush · b936bac3
      Yang Shi authored
      commit 7a30df49 upstream.
      
      A few new fields were added to mmu_gather to make TLB flush smarter for
      huge pages by telling which level of the page tables has changed.
      
      __tlb_reset_range() is used to reset all these page table state to
      unchanged, which is called by TLB flush for parallel mapping changes for
      the same range under non-exclusive lock (i.e.  read mmap_sem).
      
      Before commit dd2283f2 ("mm: mmap: zap pages with read mmap_sem in
      munmap"), the syscalls (e.g.  MADV_DONTNEED, MADV_FREE) which may update
      PTEs in parallel don't remove page tables.  But the aforementioned
      commit may do munmap() under read mmap_sem and free page tables.  This
      may result in a program hang on aarch64, as reported by Jan Stancek.
      The problem can be reproduced with his test program, slightly modified,
      below.
      
      ---8<---

      #include <pthread.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/mman.h>

      /* placement-hint area size; the value here is illustrative, the
       * original test defines it elsewhere */
      #define DISTANT_MMAP_SIZE (64 * 1024 * 1024)

      static int map_size = 4096;
      static int num_iter = 500;
      static long threads_total;
      
      static void *distant_area;
      
      void *map_write_unmap(void *ptr)
      {
      	int *fd = ptr;
      	unsigned char *map_address;
      	int i, j = 0;
      
      	for (i = 0; i < num_iter; i++) {
      		map_address = mmap(distant_area, (size_t) map_size, PROT_WRITE | PROT_READ,
      			MAP_SHARED | MAP_ANONYMOUS, -1, 0);
      		if (map_address == MAP_FAILED) {
      			perror("mmap");
      			exit(1);
      		}
      
      		for (j = 0; j < map_size; j++)
      			map_address[j] = 'b';
      
      		if (munmap(map_address, map_size) == -1) {
      			perror("munmap");
      			exit(1);
      		}
      	}
      
      	return NULL;
      }
      
      void *dummy(void *ptr)
      {
      	return NULL;
      }
      
      int main(void)
      {
      	pthread_t thid[2];
      
      	/* hint for mmap in map_write_unmap() */
      	distant_area = mmap(0, DISTANT_MMAP_SIZE, PROT_WRITE | PROT_READ,
      			MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
      	munmap(distant_area, (size_t)DISTANT_MMAP_SIZE);
      	distant_area += DISTANT_MMAP_SIZE / 2;
      
      	while (1) {
      		pthread_create(&thid[0], NULL, map_write_unmap, NULL);
      		pthread_create(&thid[1], NULL, dummy, NULL);
      
      		pthread_join(thid[0], NULL);
      		pthread_join(thid[1], NULL);
      	}
      }
      ---8<---
      
      The program may trigger parallel execution like below:
      
              t1                                        t2
      munmap(map_address)
        downgrade_write(&mm->mmap_sem);
        unmap_region()
        tlb_gather_mmu()
          inc_tlb_flush_pending(tlb->mm);
        free_pgtables()
          tlb->freed_tables = 1
          tlb->cleared_pmds = 1
      
                                              pthread_exit()
                                              madvise(thread_stack, 8M, MADV_DONTNEED)
                                                zap_page_range()
                                                  tlb_gather_mmu()
                                                    inc_tlb_flush_pending(tlb->mm);
      
        tlb_finish_mmu()
          if (mm_tlb_flush_nested(tlb->mm))
            __tlb_reset_range()
      
      __tlb_reset_range() would reset the freed_tables and cleared_* bits, but
      this causes inconsistency for munmap(), which does free page tables.  As
      a result, some architectures, e.g.  aarch64, may not flush the TLB
      completely as expected, leaving stale TLB entries behind.

      Use a fullmm flush, since it yields much better performance on aarch64
      and non-fullmm doesn't yield a significant difference on x86.
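
      A rough sketch of the change in tlb_finish_mmu() (an approximation of
      the idea, not the exact upstream diff):

              if (mm_tlb_flush_nested(tlb->mm)) {
                      /*
                       * A concurrent unmapper running under a downgraded
                       * mmap_sem may have freed page tables; don't shrink
                       * the recorded range back to "nothing changed",
                       * force a fullmm flush instead.
                       */
                      tlb->fullmm = 1;
                      __tlb_reset_range(tlb);
                      tlb->freed_tables = 1;
              }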
      
      The originally proposed fix came from Jan Stancek, who mainly debugged
      this issue; I just wrapped everything up together.
      
      Jan's testing results:
      
      v5.2-rc2-24-gbec7550c
      --------------------------
               mean     stddev
      real    37.382   2.780
      user     1.420   0.078
      sys     54.658   1.855
      
      v5.2-rc2-24-gbec7550c + "mm: mmu_gather: remove __tlb_reset_range() for force flush"
      ---------------------------------------------------------------------------------------
               mean     stddev
      real    37.119   2.105
      user     1.548   0.087
      sys     55.698   1.357
      
      [akpm@linux-foundation.org: coding-style fixes]
      Link: http://lkml.kernel.org/r/1558322252-113575-1-git-send-email-yang.shi@linux.alibaba.com
      Fixes: dd2283f2 ("mm: mmap: zap pages with read mmap_sem in munmap")
      Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
      Signed-off-by: Jan Stancek <jstancek@redhat.com>
      Reported-by: Jan Stancek <jstancek@redhat.com>
      Tested-by: Jan Stancek <jstancek@redhat.com>
      Suggested-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Nick Piggin <npiggin@gmail.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
      Cc: Nadav Amit <namit@vmware.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: <stable@vger.kernel.org>	[4.20+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • ocfs2: fix error path kobject memory leak · bb1313f8
      Tobin C. Harding authored
      [ Upstream commit b9fba67b ]
      
      If a call to kobject_init_and_add() fails we should call kobject_put()
      otherwise we leak memory.
      
      Add a call to kobject_put() in the error path of the call to
      kobject_init_and_add().  Please note, this has the side effect that the
      release method is called if kobject_init_and_add() fails.
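
      The general pattern looks like the sketch below (the ktype, parent and
      name identifiers are placeholders, not the actual ocfs2 ones):

              ret = kobject_init_and_add(kobj, &example_ktype, parent, "%s", name);
              if (ret) {
                      /*
                       * kobject_init_and_add() leaves the kobject with a
                       * reference even on failure; drop it here or the
                       * containing object leaks.  Note that this invokes
                       * the ktype's release method.
                       */
                      kobject_put(kobj);
                      return ret;
              }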
      
      Link: http://lkml.kernel.org/r/20190513033458.2824-1-tobin@kernel.org
      Signed-off-by: Tobin C. Harding <tobin@kernel.org>
      Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Cc: Mark Fasheh <mark@fasheh.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Junxiao Bi <junxiao.bi@oracle.com>
      Cc: Changwei Ge <gechangwei@live.cn>
      Cc: Gang He <ghe@suse.com>
      Cc: Jun Piao <piaojun@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • mlxsw: spectrum: Prevent force of 56G · f8d78918
      Amit Cohen authored
      [ Upstream commit 275e928f ]
      
      Forcing 56G is not supported by the hardware on Ethernet devices. This
      configuration fails with a bad parameter error from the firmware.

      Add a check for this case. Instead of trying to set 56G with autoneg
      off, return a meaningful error.
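
      A hedged sketch of the check in the ethtool link_ksettings path
      (identifier names approximate the driver; not the verbatim patch):

              /* forcing 56G (autoneg off) is rejected by the firmware */
              if (!autoneg && cmd->base.speed == SPEED_56000) {
                      netdev_err(dev, "56G not supported with autoneg off\n");
                      return -EINVAL;
              }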
      
      Fixes: 56ade8fe ("mlxsw: spectrum: Add initial support for Spectrum ASIC")
      Signed-off-by: Amit Cohen <amitc@mellanox.com>
      Acked-by: Jiri Pirko <jiri@mellanox.com>
      Signed-off-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • scsi: libsas: delete sas port if expander discover failed · 16044c98
      Jason Yan authored
      [ Upstream commit 3b054179 ]
      
      The sas_port (phy->port) allocated in sas_ex_discover_expander() is not
      deleted when the expander fails to discover. This causes a resource leak
      and, further, a kernel BUG like the one below:
      
      [159785.843156]  port-2:17:29: trying to add phy phy-2:17:29 fails: it's
      already part of another port
      [159785.852144] ------------[ cut here  ]------------
      [159785.856833] kernel BUG at drivers/scsi/scsi_transport_sas.c:1086!
      [159785.863000] Internal error: Oops - BUG: 0 [#1] SMP
      [159785.867866] CPU: 39 PID: 16993 Comm: kworker/u96:2 Tainted: G
      W  OE     4.19.25-vhulk1901.1.0.h111.aarch64 #1
      [159785.878458] Hardware name: Huawei Technologies Co., Ltd.
      Hi1620EVBCS/Hi1620EVBCS, BIOS Hi1620 CS B070 1P TA 03/21/2019
      [159785.889231] Workqueue: 0000:74:02.0_disco_q sas_discover_domain
      [159785.895224] pstate: 40c00009 (nZcv daif +PAN +UAO)
      [159785.900094] pc : sas_port_add_phy+0x188/0x1b8
      [159785.904524] lr : sas_port_add_phy+0x188/0x1b8
      [159785.908952] sp : ffff0001120e3b80
      [159785.912341] x29: ffff0001120e3b80 x28: 0000000000000000
      [159785.917727] x27: ffff802ade8f5400 x26: ffff0000681b7560
      [159785.923111] x25: ffff802adf11a800 x24: ffff0000680e8000
      [159785.928496] x23: ffff802ade8f5728 x22: ffff802ade8f5708
      [159785.933880] x21: ffff802adea2db40 x20: ffff802ade8f5400
      [159785.939264] x19: ffff802adea2d800 x18: 0000000000000010
      [159785.944649] x17: 00000000821bf734 x16: ffff00006714faa0
      [159785.950033] x15: ffff0000e8ab4ecf x14: 7261702079646165
      [159785.955417] x13: 726c612073277469 x12: ffff00006887b830
      [159785.960802] x11: ffff00006773eaa0 x10: 7968702079687020
      [159785.966186] x9 : 0000000000002453 x8 : 726f702072656874
      [159785.971570] x7 : 6f6e6120666f2074 x6 : ffff802bcfb21290
      [159785.976955] x5 : ffff802bcfb21290 x4 : 0000000000000000
      [159785.982339] x3 : ffff802bcfb298c8 x2 : 337752b234c2ab00
      [159785.987723] x1 : 337752b234c2ab00 x0 : 0000000000000000
      [159785.993108] Process kworker/u96:2 (pid: 16993, stack limit =
      0x0000000072dae094)
      [159786.000576] Call trace:
      [159786.003097]  sas_port_add_phy+0x188/0x1b8
      [159786.007179]  sas_ex_get_linkrate.isra.5+0x134/0x140
      [159786.012130]  sas_ex_discover_expander+0x128/0x408
      [159786.016906]  sas_ex_discover_dev+0x218/0x4c8
      [159786.021249]  sas_ex_discover_devices+0x9c/0x1a8
      [159786.025852]  sas_discover_root_expander+0x134/0x160
      [159786.030802]  sas_discover_domain+0x1b8/0x1e8
      [159786.035148]  process_one_work+0x1b4/0x3f8
      [159786.039230]  worker_thread+0x54/0x470
      [159786.042967]  kthread+0x134/0x138
      [159786.046269]  ret_from_fork+0x10/0x18
      [159786.049918] Code: 91322300 f0004402 91178042 97fe4c9b (d4210000)
      [159786.056083] Modules linked in: hns3_enet_ut(OE) hclge(OE) hnae3(OE)
      hisi_sas_test_hw(OE) hisi_sas_test_main(OE) serdes(OE)
      [159786.067202] ---[ end trace 03622b9e2d99e196  ]---
      [159786.071893] Kernel panic - not syncing: Fatal exception
      [159786.077190] SMP: stopping secondary CPUs
      [159786.081192] Kernel Offset: disabled
      [159786.084753] CPU features: 0x2,a2a00a38
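
      A minimal sketch of the idea on the sas_ex_discover_expander() error
      path (not the verbatim patch; the existing device cleanup is elided):

              res = sas_discover_expander(child);
              if (res) {
                      /* in addition to freeing the child device, tear down
                       * the sas_port allocated for this phy so the phy is
                       * not left "already part of another port" on the next
                       * discovery attempt */
                      sas_port_delete(phy->port);
                      phy->port = NULL;
                      return NULL;
              }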
      
      Fixes: 2908d778 ("[SCSI] aic94xx: new driver")
      Reported-by: Jian Luo <luojian5@huawei.com>
      Signed-off-by: Jason Yan <yanaijie@huawei.com>
      CC: John Garry <john.garry@huawei.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • scsi: scsi_dh_alua: Fix possible null-ptr-deref · 5a524603
      YueHaibing authored
      [ Upstream commit 12e750bc ]
      
      If alloc_workqueue() fails in alua_init(), it should return -ENOMEM;
      otherwise unloading the module triggers a null-ptr-deref, because
      destroy_workqueue() dereferences wq->lock, like this:
      
      BUG: KASAN: null-ptr-deref in __lock_acquire+0x6b4/0x1ee0
      Read of size 8 at addr 0000000000000080 by task syz-executor.0/7045
      
      CPU: 0 PID: 7045 Comm: syz-executor.0 Tainted: G         C        5.1.0+ #28
      Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1
      Call Trace:
       dump_stack+0xa9/0x10e
       __kasan_report+0x171/0x18d
       ? __lock_acquire+0x6b4/0x1ee0
       kasan_report+0xe/0x20
       __lock_acquire+0x6b4/0x1ee0
       lock_acquire+0xb4/0x1b0
       __mutex_lock+0xd8/0xb90
       drain_workqueue+0x25/0x290
       destroy_workqueue+0x1f/0x3f0
       __x64_sys_delete_module+0x244/0x330
       do_syscall_64+0x72/0x2a0
       entry_SYSCALL_64_after_hwframe+0x49/0xbe
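
      A minimal sketch of the fix in alua_init() (an approximation, not the
      verbatim diff):

              kaluad_wq = alloc_workqueue("kaluad", WQ_MEM_RECLAIM, 0);
              if (!kaluad_wq)
                      /* fail module init cleanly instead of leaving a NULL
                       * workqueue around for destroy_workqueue() at exit */
                      return -ENOMEM;
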
      Reported-by: Hulk Robot <hulkci@huawei.com>
      Fixes: 03197b61 ("scsi_dh_alua: Use workqueue for RTPG")
      Signed-off-by: YueHaibing <yuehaibing@huawei.com>
      Reviewed-by: Bart Van Assche <bvanassche@acm.org>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
    • scsi: smartpqi: properly set both the DMA mask and the coherent DMA mask · cc58a0b1
      Lianbo Jiang authored
      [ Upstream commit 1d94f06e ]
      
      When SME is enabled, the smartpqi driver doesn't work on the HP DL385
      G10 machine: the kernel fails to boot because the driver cannot allocate
      the PQI error buffer. Please refer to the kernel log:
      ....
      [    9.431749] usbcore: registered new interface driver uas
      [    9.441524] Microsemi PQI Driver (v1.1.4-130)
      [    9.442956] i40e 0000:04:00.0: fw 6.70.48768 api 1.7 nvm 10.2.5
      [    9.447237] smartpqi 0000:23:00.0: Microsemi Smart Family Controller found
               Starting dracut initqueue hook...
      [  OK  ] Started Show Plymouth Boot Scre[    9.471654] Broadcom NetXtreme-C/E driver bnxt_en v1.9.1
      en.
      [  OK  ] Started Forward Password Requests to Plymouth Directory Watch.
      [[0;[    9.487108] smartpqi 0000:23:00.0: failed to allocate PQI error buffer
      ....
      [  139.050544] dracut-initqueue[949]: Warning: dracut-initqueue timeout - starting timeout scripts
      [  139.589779] dracut-initqueue[949]: Warning: dracut-initqueue timeout - starting timeout scripts
      
      Basically, the fact that the coherent DMA mask value wasn't set caused the
      driver to fall back to SWIOTLB when SME is active.
      
      For correct operation, let's call dma_set_mask_and_coherent() to
      properly set the mask for both streaming and coherent mappings, in
      order to inform the kernel about the device's DMA addressing
      capabilities.
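
      A minimal sketch of that call during PCI probe (the cleanup label is
      illustrative):

              rc = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(64));
              if (rc) {
                      dev_err(&pci_dev->dev, "failed to set DMA mask\n");
                      goto disable_device;    /* illustrative cleanup label */
              }
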
      Signed-off-by: Lianbo Jiang <lijiang@redhat.com>
      Acked-by: Don Brace <don.brace@microsemi.com>
      Tested-by: Don Brace <don.brace@microsemi.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>