1. 12 Jan, 2016 1 commit
    • block: split bios to max possible length · e36f6204
      Keith Busch authored
      This splits a bio in the middle of a vector to form the largest possible
      bio at the h/w's desired alignment, and guarantees that the bio being
      split will have some data.
      
      The criterion for splitting is changed from the max sectors to the h/w's
      optimal sector alignment, if it is provided. For h/w that advertises its
      block storage's underlying chunk size, it is a big performance win to not
      submit commands that cross chunk boundaries. If sector alignment is not
      provided, this patch uses the max sectors as before.
      
      This addresses the performance issue that commit d3805611 attempted to
      fix, but which was reverted due to a splitting logic error.
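      
      As a rough illustration of the alignment arithmetic described above (a
      self-contained user-space sketch with made-up names, not the kernel
      helpers):
      
          #include <stdio.h>
      
          /*
           * Largest number of sectors that can be submitted starting at
           * start_sector without crossing a chunk boundary, assuming
           * chunk_sectors is a power of two (illustrative only).
           */
          static unsigned int sectors_to_chunk_boundary(unsigned long long start_sector,
                                                        unsigned int chunk_sectors)
          {
              return chunk_sectors - (start_sector & (chunk_sectors - 1));
          }
      
          int main(void)
          {
              /* e.g. a bio starting at sector 1000 on h/w with 256-sector chunks */
              printf("%u\n", sectors_to_chunk_boundary(1000, 256)); /* prints 24 */
              return 0;
          }
      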
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Ming Lei <tom.leiming@gmail.com>
      Cc: Kent Overstreet <kent.overstreet@gmail.com>
      Cc: <stable@vger.kernel.org> # 4.4.x-
      Signed-off-by: Jens Axboe <axboe@fb.com>
      e36f6204
  2. 03 Dec, 2015 3 commits
  3. 01 Dec, 2015 1 commit
  4. 25 Nov, 2015 1 commit
    • Revert "blk-flush: Queue through IO scheduler when flush not required" · d7cf931d
      Jens Axboe authored
      This reverts commit 1b2ff19e.
      
      Jan writes:
      
      --
      
      Thanks for the report! After some investigation I found out we allocate
      elevator specific data in __get_request() only for non-flush requests. And
      this is actually required since the flush machinery uses the space in
      struct request for something else. Doh. So my patch is just wrong and not
      easy to fix since at the time __get_request() is called we are not sure
      whether the flush machinery will be used in the end. Jens, please revert
      1b2ff19e. Thanks!
      
      I'm somewhat surprised that you can reliably hit the race where flushing
      gets disabled for the device just while the request is in flight. But I
      guess during boot it makes some sense.
      
      --
      
      So let's just revert it; we can fix the queue run manually after the
      fact. This race is rare enough that it didn't trigger in testing; it
      requires the specific disable-while-in-flight scenario to trigger.
      d7cf931d
  5. 24 Nov, 2015 14 commits
    • block: clarify blk_add_timer() use case for blk-mq · 3b627a3f
      Jens Axboe authored
      Just a comment update on not needing queue_lock, and that we aren't
      really adding the request to a timeout list for !mq.
      Signed-off-by: Jens Axboe <axboe@fb.com>
      3b627a3f
    • bio: use offset_in_page macro · bd5cecea
      Geliang Tang authored
      Use offset_in_page macro instead of (addr & ~PAGE_MASK).
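      
      For reference, offset_in_page() expands to essentially
      ((unsigned long)(p) & ~PAGE_MASK); a tiny stand-alone sketch of the
      equivalence (PAGE_SIZE hard-coded here for illustration):
      
          #include <stdio.h>
      
          #define PAGE_SIZE 4096UL
          #define PAGE_MASK (~(PAGE_SIZE - 1))
          #define offset_in_page(p) ((unsigned long)(p) & ~PAGE_MASK)
      
          int main(void)
          {
              char buf[8192];
              void *addr = buf + 5000;
      
              /* the open-coded form and the macro give the same offset */
              printf("%lu %lu\n",
                     (unsigned long)addr & ~PAGE_MASK,
                     offset_in_page(addr));
              return 0;
          }
      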
      Signed-off-by: Geliang Tang <geliangtang@163.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      bd5cecea
    • block: do not initialise statics to 0 or NULL · 1fe8f348
      Wei Tang authored
      This patch fixes the checkpatch.pl error in genhd.c:
      
      ERROR: do not initialise statics to 0 or NULL
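      
      The change checkpatch asks for is just dropping the redundant initializer,
      since statics and globals are zero-initialized by the C language anyway;
      for example (hypothetical variable name):
      
          /* before, flagged by checkpatch:  static int disk_probe_done = 0; */
      
          /* after: .bss storage already starts out zero */
          static int disk_probe_done;
      
          int main(void) { return disk_probe_done; }
      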
      Signed-off-by: Wei Tang <tangwei@cmss.chinamobile.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      1fe8f348
    • block: do not initialise globals to 0 or NULL · d674d414
      Wei Tang authored
      This patch fixes the checkpatch.pl error in blk-exec.c:
      
      ERROR: do not initialise globals to 0 or NULL
      Signed-off-by: Wei Tang <tangwei@cmss.chinamobile.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      d674d414
    • block: rename request_queue slab cache · c2789bd4
      Ilya Dryomov authored
      Name the cache after the actual name of the struct.
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      c2789bd4
    • block: fix blk_abort_request for blk-mq drivers · 55ce0da1
      Christoph Hellwig authored
      We only added the request to the request list for the !blk-mq case,
      so we should only delete it in that case as well.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      55ce0da1
    • nvme: add missing unmaps in nvme_queue_rq · bf508e91
      Christoph Hellwig authored
      When we fail various metadata related operations in nvme_queue_rq we
      need to unmap the data SGL.
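      
      The shape of the fix is the usual unwind-on-error pattern; a generic,
      hedged sketch (not the actual nvme_queue_rq code, the allocations merely
      stand in for the data SGL and metadata mappings):
      
          #include <stdlib.h>
      
          static int submit_request(void)
          {
              void *data_map = malloc(64);        /* stands in for the data SGL mapping */
              if (!data_map)
                  return -1;
      
              void *meta_map = malloc(64);        /* stands in for the metadata setup */
              if (!meta_map)
                  goto out_unmap_data;            /* the unmap that was missing */
      
              /* ... hand the request to hardware ... */
              free(meta_map);
              free(data_map);
              return 0;
      
          out_unmap_data:
              free(data_map);
              return -1;
          }
      
          int main(void)
          {
              return submit_request() ? 1 : 0;
          }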
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Keith Busch <keith.busch@intel.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      bf508e91
    • NVMe: default to 4k device page size · c5c9f25b
      Nishanth Aravamudan authored
      We received a bug report recently when DDW (64-bit direct DMA on Power)
      is not enabled for NVMe devices. In that case, we fall back to 32-bit
      DMA via the IOMMU, which is always done via 4K TCEs (Translation Control
      Entries).
      
      The NVMe device driver, though, assumes that the DMA alignment for the
      PRP entries will match the device's page size, and that the DMA alignment
      matches the kernel's page alignment. On Power, the IOMMU page size,
      as mentioned above, can be 4K, while the device can have a page size of
      8K and the kernel has a page size of 64K. This eventually trips the
      BUG_ON in nvme_setup_prps(), as we have a 'dma_len' that is a multiple
      of 4K but not 8K (e.g., 0xF000).
      
      In this particular case of page sizes, we clearly want to use the
      IOMMU's page size in the driver. And generally, the NVMe driver in this
      function should be using the IOMMU's page size for the default device
      page size, rather than the kernel's page size. There is not currently an
      API to obtain the IOMMU's page size across all architectures and in the
      interest of a stop-gap fix to this functional issue, default the NVMe
      device page size to 4K, with the intent of adding such an API and
      implementation across all architectures in the next merge window.
      
      With the functionally equivalent v3 of this patch, our hardware test
      exerciser survives when using 32-bit DMA; without the patch, the kernel
      will BUG within a few minutes.
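      
      The mismatch is easy to check numerically; a small sketch using the
      values quoted above (the BUG_ON condition is paraphrased, not copied):
      
          #include <stdio.h>
      
          int main(void)
          {
              unsigned long dma_len = 0xF000;        /* length built from 4K IOMMU mappings */
              unsigned long iommu_page_size = 4096;  /* TCE size on Power without DDW */
              unsigned long dev_page_size = 8192;    /* example NVMe device page size */
      
              /* a multiple of the IOMMU page size need not be a multiple of
                 the device page size, which is what trips the PRP setup */
              printf("aligned to 4K: %s\n", (dma_len % iommu_page_size) ? "no" : "yes");
              printf("aligned to 8K: %s\n", (dma_len % dev_page_size) ? "no" : "yes");
              return 0;
          }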
      
      Signed-off-by: Nishanth Aravamudan <nacc at linux.vnet.ibm.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      c5c9f25b
    • Merge tag 'dm-4.4-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm · 6ffeba96
      Linus Torvalds authored
      Pull device mapper fixes from Mike Snitzer:
       "Two fixes for 4.4-rc1's DM ioctl changes that introduced the potential
        for infinite recursion on ioctl (with DM multipath).
      
        And four stable fixes:
      
         - A DM thin-provisioning fix to restore 'error_if_no_space' setting
           when a thin-pool is made writable again (after having been out of
           space).
      
         - A DM thin-provisioning fix to properly advertise discard support
           for thin volumes that are stacked on a thin-pool whose underlying
           data device doesn't support discards.
      
         - A DM ioctl fix to allow ctrl-c to break out of an ioctl retry loop
           when DM multipath is configured to 'queue_if_no_path'.
      
         - A DM crypt fix for a possible hang on dm-crypt device removal"
      
      * tag 'dm-4.4-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
        dm thin: fix regression in advertised discard limits
        dm crypt: fix a possible hang due to race condition on exit
        dm mpath: fix infinite recursion in ioctl when no paths and !queue_if_no_path
        dm: do not reuse dm_blk_ioctl block_device input as local variable
        dm: fix ioctl retry termination with signal
        dm thin: restore requested 'error_if_no_space' setting on OODS to WRITE transition
      6ffeba96
    • pidns: fix NULL dereference in __task_pid_nr_ns() · 81b1a832
      Eric Dumazet authored
      I got a crash during a "perf top" session that was caused by a race in
      __task_pid_nr_ns():
      
      pid_nr_ns() was inlined, but apparently the compiler chose to read
      task->pids[type].pid twice, and the pid->level dereference crashed
      because we got a NULL pointer at the second read:
      
          if (pid && ns->level <= pid->level) { // CRASH
      
      Just use the RCU API properly to solve this race, and not worry about
      "perf top" crashing hosts :(
      
      get_task_pid() can benefit from the same fix.
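      
      The structural shape of the fix, reduced to a stand-alone sketch (types
      and helpers are simplified stand-ins; in the kernel the single load is
      done with rcu_dereference() under rcu_read_lock()):
      
          #include <stddef.h>
      
          struct pid  { int level; };
          struct task { struct pid *pid; };   /* stands in for task->pids[type].pid */
      
          /* racy shape: the pointer is read twice, and the second read may
           * observe NULL written by another CPU */
          static int level_racy(struct task *task, int ns_level)
          {
              if (task->pid && ns_level <= task->pid->level)
                  return task->pid->level;
              return -1;
          }
      
          /* fixed shape: read the pointer exactly once into a local and only
           * dereference that local copy */
          static int level_fixed(struct task *task, int ns_level)
          {
              struct pid *pid = task->pid;    /* kernel: rcu_dereference(...) */
      
              if (pid && ns_level <= pid->level)
                  return pid->level;
              return -1;
          }
      
          int main(void)
          {
              struct pid p = { .level = 1 };
              struct task t = { .pid = &p };
      
              return level_fixed(&t, 0) == level_racy(&t, 0) ? 0 : 1;
          }
      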
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      81b1a832
    • Merge branch 'for-linus' of git://git.kernel.dk/linux-block · 4ce01c51
      Linus Torvalds authored
      Pull block layer fixes from Jens Axboe:
       "A round of fixes/updates for the current series.
      
        This looks a little bigger than it is, but that's mainly because we
        pushed the lightnvm enabled null_blk change out of the merge window so
        it could be updated a bit.  The rest of the volume is also mostly
        lightnvm.  In particular:
      
         - Lightnvm.  Various fixes, additions, updates from Matias and
           Javier, as well as from Wenwei Tao.
      
         - NVMe:
              - Fix for potential arithmetic overflow from Keith.
              - Also from Keith, ensure that we reap pending completions from
                a completion queue before deleting it.  Fixes kernel crashes
                when resetting a device with IO pending.
              - Various little lightnvm related tweaks from Matias.
      
         - Fixup flushes to go through the IO scheduler, for the cases where a
           flush is not required.  Fixes a case in CFQ where we would be
           idling and not see this request, hence not break the idling.  From
           Jan Kara.
      
         - Use list_{first,prev,next} in elevator.c for cleaner code.  From
            Geliang Tang.
      
         - Fix for a warning trigger on btrfs and raid on single queue blk-mq
           devices, where we would flush plug callbacks with preemption
           disabled.  From me.
      
         - A mac partition validation fix from Kees Cook.
      
         - Two merge fixes from Ming, marked stable.  A third part is adding a
           new warning so we'll notice this quicker in the future, if we screw
           up the accounting.
      
         - Cleanup of thread name/creation in mtip32xx from Rasmus Villemoes"
      
      * 'for-linus' of git://git.kernel.dk/linux-block: (32 commits)
        blk-merge: warn if figured out segment number is bigger than nr_phys_segments
        blk-merge: fix blk_bio_segment_split
        block: fix segment split
        blk-mq: fix calling unplug callbacks with preempt disabled
        mac: validate mac_partition is within sector
        mtip32xx: use formatting capability of kthread_create_on_node
        NVMe: reap completion entries when deleting queue
        lightnvm: add free and bad lun info to show luns
        lightnvm: keep track of block counts
        nvme: lightnvm: use admin queues for admin cmds
        lightnvm: missing free on init error
        lightnvm: wrong return value and redundant free
        null_blk: do not del gendisk with lightnvm
        null_blk: use device addressing mode
        null_blk: use ppa_cache pool
        NVMe: Fix possible arithmetic overflow for max segments
        blk-flush: Queue through IO scheduler when flush not required
        null_blk: register as a LightNVM device
        elevator: use list_{first,prev,next}_entry
        lightnvm: cleanup queue before target removal
        ...
      4ce01c51
    • blk-merge: warn if figured out segment number is bigger than nr_phys_segments · 12e57f59
      Ming Lei authored
      We have seen lots of reports of this kind of issue, so add a warning
      in blk-merge so that it can be triggered easily, instead of depending
      on warnings/bugs from drivers.
      Signed-off-by: Ming Lei <ming.lei@canonical.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      12e57f59
    • blk-merge: fix blk_bio_segment_split · 02e70742
      Ming Lei authored
      Commit bdced438 ("block: setup bi_phys_segments after
      splitting") introduced the computation of bio->bi_phys_segments
      during bio splitting.
      
      Unfortunately, neither bio->bi_seg_front_size nor bio->bi_seg_back_size
      is computed, so too many physical segments may be obtained for one
      request, since both values are used to check whether one segment can
      span two bios.
      
      This patch fixes the issue by computing the two variables in
      blk_bio_segment_split().
      
      Fixes: bdced438 ("block: setup bi_phys_segments after splitting")
      Reported-by: Michael Ellerman <mpe@ellerman.id.au>
      Reported-by: Mark Salter <msalter@redhat.com>
      Tested-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
      Tested-by: Mark Salter <msalter@redhat.com>
      Signed-off-by: Ming Lei <ming.lei@canonical.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      02e70742
    • block: fix segment split · 578270bf
      Ming Lei authored
      Inside blk_bio_segment_split(), the previous bvec pointer (bvprvp)
      always points to the iterator's local variable, which is obviously
      wrong, so fix it by pointing it to the local variable 'bvprv'.
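      
      The bug class, reduced to a generic stand-alone example (names made up;
      the point is that a "previous" pointer must not alias the variable that
      is overwritten each iteration):
      
          #include <stdio.h>
      
          int main(void)
          {
              int vals[] = { 10, 20, 30 };
              int cur, prev = 0;
              int *prevp = NULL;      /* meant to refer to the previous element */
      
              for (int i = 0; i < 3; i++) {
                  cur = vals[i];
      
                  /* buggy form: prevp = &cur;  then *prevp always equals cur */
                  if (prevp)
                      printf("prev=%d cur=%d\n", *prevp, cur);
      
                  prev = cur;         /* fixed: snapshot into a dedicated variable */
                  prevp = &prev;
              }
              return 0;
          }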
      
      Fixes: 5014c311 ("block: fix bogus compiler warnings in blk-merge.c")
      Cc: stable@kernel.org #4.3
      Reported-by: Michael Ellerman <mpe@ellerman.id.au>
      Reported-by: Mark Salter <msalter@redhat.com>
      Tested-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
      Tested-by: Mark Salter <msalter@redhat.com>
      Signed-off-by: Ming Lei <ming.lei@canonical.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      578270bf
  6. 23 Nov, 2015 3 commits
    • Merge tag 'linux-kselftest-4.4-rc3' of... · a2931547
      Linus Torvalds authored
      Merge tag 'linux-kselftest-4.4-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest
      
      Pull kselftest fixes from Shuah Khan:
       "This update consists of one minor documentation fix and a fix to an
        existing test"
      
      * tag 'linux-kselftest-4.4-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
        selftests/seccomp: Get page size from sysconf
        tools:testing/selftests: fix typo in futex/README
      a2931547
    • dm thin: fix regression in advertised discard limits · 0fcb04d5
      Mike Snitzer authored
      When establishing a thin device's discard limits we cannot rely on the
      underlying thin-pool device's discard capabilities (which are inherited
      from the thin-pool's underlying data device) given that DM thin devices
      must provide discard support even when the thin-pool's underlying data
      device doesn't support discards.
      
      Users were exposed to this thin device discard limits regression if
      their thin-pool's underlying data device does _not_ support discards.
      This regression caused all upper-layers that called the
      blkdev_issue_discard() interface to not be able to issue discards to
      thin devices (because discard_granularity was 0).  This regression
      wasn't caught earlier because the device-mapper-test-suite's extensive
      'thin-provisioning' discard tests are only ever performed against
      thin-pools with data devices that support discards.
      
      The fix is to have thin_io_hints() test the pool's 'discard_enabled' feature
      rather than inferring whether or not a thin device's discard support
      should be enabled by looking at the thin-pool's discard_granularity.
      
      Fixes: 21607670 ("dm thin: disable discard support for thin devices if pool's is disabled")
      Reported-by: Mike Gerber <mike@sprachgewalt.de>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@vger.kernel.org # 4.1+
      0fcb04d5
    • Linux 4.4-rc2 · 1ec21837
      Linus Torvalds authored
      1ec21837
  7. 22 Nov, 2015 17 commits
    • Merge branch 'akpm' (patches from Andrew) · 104e2a6f
      Linus Torvalds authored
      Merge slub bulk allocator updates from Andrew Morton:
       "This missed the merge window because I was waiting for some repairs to
        come in.  Nothing actually uses the bulk allocator yet and the changes
        to other code paths are pretty small.  And the net guys are waiting
        for this so they can start merging the client code"
      
      More comments from Jesper Dangaard Brouer:
       "The kmem_cache_alloc_bulk() call, in mm/slub.c, were included in
        previous kernel.  The present version contains a bug.  Vladimir
        Davydov noticed it contained a bug, when kernel is compiled with
        CONFIG_MEMCG_KMEM (see commit 03ec0ed5: "slub: fix kmem cgroup
        bug in kmem_cache_alloc_bulk").  Plus the mem cgroup counterpart in
        kmem_cache_free_bulk() were missing (see commit 03374518 "slub:
        add missing kmem cgroup support to kmem_cache_free_bulk").
      
        I don't consider the fix stable-material because there are no in-tree
        users of the API.
      
        But with known bugs (for memcg) I cannot start using the API in the
        net-tree"
      
      * emailed patches from Andrew Morton <akpm@linux-foundation.org>:
        slab/slub: adjust kmem_cache_alloc_bulk API
        slub: add missing kmem cgroup support to kmem_cache_free_bulk
        slub: fix kmem cgroup bug in kmem_cache_alloc_bulk
        slub: optimize bulk slowpath free by detached freelist
        slub: support for bulk free with SLUB freelists
      104e2a6f
    • Merge tag 'tty-4.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty · dcfeda9d
      Linus Torvalds authored
      Pull tty/serial fixes from Greg KH:
       "Here are a few small tty/serial driver fixes for 4.4-rc2 that resolve
        some reported problems.
      
        All have been in linux-next, full details are in the shortlog below"
      
      * tag 'tty-4.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty:
        serial: export fsl8250_handle_irq
        serial: 8250_mid: Add missing dependency
        tty: audit: Fix audit source
        serial: etraxfs-uart: Fix crash
        serial: fsl_lpuart: Fix earlycon support
        bcm63xx_uart: Use the device name when registering an interrupt
        tty: Fix direct use of tty buffer work
        tty: Fix tty_send_xchar() lock order inversion
      dcfeda9d
    • Merge tag 'staging-4.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging · 7f217393
      Linus Torvalds authored
      Pull staging/IIO fixes from Greg KH:
       "Here are some staging and iio driver fixes for 4.4-rc2.  All of these
        are in response to issues that have been reported and have been in
        linux-next for a while"
      
      * tag 'staging-4.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging:
        Revert "Staging: wilc1000: coreconfigurator: Drop unneeded wrapper functions"
        iio: adc: xilinx: Fix VREFN scale
        iio: si7020: Swap data byte order
        iio: adc: vf610_adc: Fix division by zero error
        iio:ad7793: Fix ad7785 product ID
        iio: ad5064: Fix ad5629/ad5669 shift
        iio:ad5064: Make sure ad5064_i2c_write() returns 0 on success
        iio: lpc32xx_adc: fix warnings caused by enabling unprepared clock
        staging: iio: select IRQ_WORK for IIO_DUMMY_EVGEN
        vf610_adc: Fix internal temperature calculation
      7f217393
    • Merge tag 'usb-4.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb · 6d2d91b3
      Linus Torvalds authored
      Pull USB fixes from Greg KH:
       "Here are a number of USB fixes and new device ids for 4.4-rc2.  All
        have been in linux-next and the details are in the shortlog"
      
      * tag 'usb-4.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (28 commits)
        usblp: do not set TASK_INTERRUPTIBLE before lock
        USB: MAINTAINERS: cxacru
        usb: kconfig: fix warning of select USB_OTG
        USB: option: add XS Stick W100-2 from 4G Systems
        xhci: Fix a race in usb2 LPM resume, blocking U3 for usb2 devices
        usb: xhci: fix checking ep busy for CFC
        xhci: Workaround to get Intel xHCI reset working more reliably
        usb: chipidea: imx: fix a possible NULL dereference
        usb: chipidea: usbmisc_imx: fix a possible NULL dereference
        usb: chipidea: otg: gadget module load and unload support
        usb: chipidea: debug: disable usb irq while role switch
        ARM: dts: imx27.dtsi: change the clock information for usb
        usb: chipidea: imx: refine clock operations to adapt for all platforms
        usb: gadget: atmel_usba_udc: Expose correct device speed
        usb: musb: enable usb_dma parameter
        usb: phy: phy-mxs-usb: fix a possible NULL dereference
        usb: dwc3: gadget: let us set lower max_speed
        usb: musb: fix tx fifo flush handling
        usb: gadget: f_loopback: fix the warning during the enumeration
        usb: dwc2: host: Fix remote wakeup when not in DWC2_L2
        ...
      6d2d91b3
    • Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus · 0ec7dc8d
      Linus Torvalds authored
      Pull MIPS fixes from Ralf Baechle:
      
       - Fix a flood of annoying build warnings
      
       - A number of fixes for Atheros 79xx platforms
      
      * 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus:
        MIPS: ath79: Add a machine entry for booting OF machines
        MIPS: ath79: Fix the size of the MISC INTC registers in ar9132.dtsi
        MIPS: ath79: Fix the DDR control initialization on ar71xx and ar934x
        MIPS: Fix flood of warnings about comparison being always true.
      0ec7dc8d
    • Merge branch 'parisc-4.4-2' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux · 94521b2f
      Linus Torvalds authored
      Pull parisc update from Helge Deller:
       "This patchset adds Huge Page and HUGETLBFS support for parisc"
      
      Honestly, the hugepage support should have gone through in the merge
      window, and is not really an rc-time fix.  But it only touches
      arch/parisc, and I cannot find it in myself to care.  If one of the
      three parisc users notices a breakage, I will point at Helge and make
      rude farting noises.
      
      * 'parisc-4.4-2' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
        parisc: Map kernel text and data on huge pages
        parisc: Add Huge Page and HUGETLBFS support
        parisc: Use long branch to do_syscall_trace_exit
        parisc: Increase initial kernel mapping to 32MB on 64bit kernel
        parisc: Initialize the fault vector earlier in the boot process.
        parisc: Add defines for Huge page support
        parisc: Drop unused MADV_xxxK_PAGES flags from asm/mman.h
        parisc: Drop definition of start_thread_som for HP-UX SOM binaries
        parisc: Fix wrong comment regarding first pmd entry flags
      94521b2f
    • Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip · 727cde6c
      Linus Torvalds authored
      Pull perf tool fixes from Thomas Gleixner:
       "A couple of fixes for perf tools:
      
         - Build system updates
      
         - Plug a memory leak in an error path of perf probe
      
         - Tear down probes correctly when adding fails
      
         - Fixes to the perf symbol handling
      
         - Fix ordering of event processing in buildid-list
      
         - Fix per DSO filtering in the histogram browser"
      
      * 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        perf probe: Clear probe_trace_event when add_probe_trace_event() fails
        perf probe: Fix memory leaking on failure by clearing all probe_trace_events
        perf inject: Also re-pipe lost_samples event
        perf buildid-list: Requires ordered events
        perf symbols: Fix dso lookup by long name and missing buildids
        perf symbols: Allow forcing reading of non-root owned files by root
        perf hists browser: The dso can be obtained from popup_action->ms.map->dso
        perf hists browser: Fix 'd' hotkey action to filter by DSO
        perf symbols: Rebuild rbtree when adjusting symbols for kcore
        tools: Add a "make all" rule
        tools: Actually install tmon in the install rule
      727cde6c
    • Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip · 069ec229
      Linus Torvalds authored
      Pull x86 fixes from Thomas Gleixner:
       "This update contains:
      
         - MPX updates for handling 32bit processes
      
         - A fix for a long standing bug in 32bit signal frame handling
           related to FPU/XSAVE state
      
         - Handle get_xsave_addr() correctly in KVM
      
         - Fix SMAP check under paravirtualization
      
         - Add a comment to the static function trace entry to avoid further
           confusion about the difference to dynamic tracing"
      
      * 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        x86/cpu: Fix SMAP check in PVOPS environments
        x86/ftrace: Add comment on static function tracing
        x86/fpu: Fix get_xsave_addr() behavior under virtualization
        x86/fpu: Fix 32-bit signal frame handling
        x86/mpx: Fix 32-bit address space calculation
        x86/mpx: Do proper get_user() when running 32-bit binaries on 64-bit kernels
      069ec229
    • slab/slub: adjust kmem_cache_alloc_bulk API · 865762a8
      Jesper Dangaard Brouer authored
      Adjust the kmem_cache_alloc_bulk API before we have any real users.
      
      Adjust the API to return type 'int' instead of the previous type 'bool'.
      This is done to allow future extension of the bulk alloc API.
      
      A future extension could be to allow SLUB to stop at a page boundary, when
      specified by a flag, and then return the number of objects.
      
      The advantage of this approach is that it would make it easier to run bulk
      alloc without local IRQs disabled, with an approach of cmpxchg "stealing"
      the entire c->freelist or page->freelist.  To avoid overshooting we would
      stop processing at a slab-page boundary; else we always end up returning
      some objects at the cost of another cmpxchg.
      
      To stay compatible with future users of this API that link against an
      older kernel when using the new flag, we need to return the number of
      allocated objects as part of this API change.
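      
      A user-space stand-in showing why a count-returning bulk allocator is
      more flexible for callers than a bool (this mirrors the rationale above;
      it is not the kernel implementation):
      
          #include <stdio.h>
          #include <stdlib.h>
      
          /* returns how many objects were actually allocated */
          static int cache_alloc_bulk(size_t objsize, size_t nr, void **p)
          {
              size_t i;
      
              for (i = 0; i < nr; i++) {
                  p[i] = malloc(objsize);
                  if (!p[i])
                      return (int)i;  /* a future flag could stop early like this */
              }
              return (int)nr;
          }
      
          int main(void)
          {
              void *objs[16];
              int got = cache_alloc_bulk(256, 16, objs);
      
              printf("allocated %d objects\n", got);
              for (int i = 0; i < got; i++)
                  free(objs[i]);
              return 0;
          }
      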
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Cc: Vladimir Davydov <vdavydov@virtuozzo.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      865762a8
    • slub: add missing kmem cgroup support to kmem_cache_free_bulk · 03374518
      Jesper Dangaard Brouer authored
      The initial implementation missed kmem cgroup support in the
      kmem_cache_free_bulk() call; add this.
      
      If CONFIG_MEMCG_KMEM is not enabled, the compiler should be smart enough
      to not add any asm code.
      
      Incoming bulk free objects can belong to different kmem cgroups, and the
      object free call can happen at a later point outside the memcg context.
      Thus, we need to keep the original kmem_cache to correctly verify whether a
      memcg object matches against its "root_cache" (s->memcg_params.root_cache).
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Reviewed-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      03374518
    • slub: fix kmem cgroup bug in kmem_cache_alloc_bulk · 03ec0ed5
      Jesper Dangaard Brouer authored
      The call slab_pre_alloc_hook() interacts with kmemcg and is not allowed to
      be called several times inside the bulk alloc for loop, due to the call to
      memcg_kmem_get_cache().
      
      This would result in hitting the VM_BUG_ON in __memcg_kmem_get_cache.
      
      As suggested by Vladimir Davydov, change slab_post_alloc_hook() to be able
      to handle an array of objects.
      
      A subtle detail is that the loop iterator "i" in slab_post_alloc_hook() must
      have the same type (size_t) as the size argument.  This helps the compiler
      realize more easily that it can remove the loop when all debug statements
      inside the loop evaluate to nothing.  Note, this is only an issue because
      the kernel is compiled with the GCC option -fno-strict-overflow.
      
      In slab_alloc_node() the compiler inlines and optimizes the invocation of
      slab_post_alloc_hook(s, flags, 1, &object) by removing the loop and
      accessing the object directly.
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Reported-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Suggested-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Reviewed-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      03ec0ed5
    • slub: optimize bulk slowpath free by detached freelist · d0ecd894
      Jesper Dangaard Brouer authored
      This change focuses on improving the speed of object freeing in the
      "slowpath" of kmem_cache_free_bulk.
      
      The calls slab_free (fastpath) and __slab_free (slowpath) have been
      extended with support for bulk free, which amortizes the overhead of
      the (locked) cmpxchg_double.
      
      To use the new bulking feature, we build what I call a detached
      freelist.  The detached freelist takes advantage of three properties:
      
       1) the free function call owns the object that is about to be freed,
          thus writing into this memory is synchronization-free.
      
       2) many freelists can co-exist side by side in the same slab-page,
          each with a separate head pointer.
      
       3) it is the visibility of the head pointer that needs synchronization.
      
      Given these properties, the brilliant part is that the detached
      freelist can be constructed without any need for synchronization.  The
      freelist is constructed directly in the page objects, without any
      synchronization needed.  The detached freelist is allocated on the
      stack of the function call kmem_cache_free_bulk.  Thus, the freelist
      head pointer is not visible to other CPUs.
      
      All objects in a SLUB freelist must belong to the same slab-page.
      Thus, constructing the detached freelist is about matching objects
      that belong to the same slab-page.  The bulk free array is scanned in
      a progressive manner with a limited look-ahead facility.
      
      Kmem debug support is handled in the call to slab_free().
      
      Notice kmem_cache_free_bulk no longer needs to disable IRQs; this
      only slowed down single-object bulk free by approx 3 cycles.
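      
      A rough user-space sketch of the detached-freelist idea: group a bulk
      free array by owning "page" (approximated here by masking the address),
      chain those objects through their own memory with the head kept on the
      stack, and count them.  Names and the page-grouping shortcut are
      illustrative, not the SLUB code:
      
          #include <stdio.h>
          #include <stdlib.h>
          #include <stdint.h>
      
          #define FAKE_PAGE_MASK (~(uintptr_t)4095)
      
          struct detached_list {
              uintptr_t page;     /* "slab-page" all chained objects belong to */
              void     *head;     /* stack-local head pointer, invisible to others */
              size_t    cnt;
          };
      
          static void build_detached_list(void **objs, size_t nr, struct detached_list *dl)
          {
              dl->page = (uintptr_t)objs[0] & FAKE_PAGE_MASK;
              dl->head = NULL;
              dl->cnt  = 0;
      
              for (size_t i = 0; i < nr; i++) {
                  if (((uintptr_t)objs[i] & FAKE_PAGE_MASK) != dl->page)
                      continue;                   /* other pages: a later scan */
                  /* the caller owns the object, so its memory can hold the link */
                  *(void **)objs[i] = dl->head;
                  dl->head = objs[i];
                  dl->cnt++;
              }
          }
      
          int main(void)
          {
              void *objs[4];
              struct detached_list dl;
      
              for (int i = 0; i < 4; i++)
                  objs[i] = malloc(32);
      
              build_detached_list(objs, 4, &dl);
              printf("objects grouped on %#lx: %zu\n", (unsigned long)dl.page, dl.cnt);
      
              for (int i = 0; i < 4; i++)
                  free(objs[i]);
              return 0;
          }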
      
      Performance data:
       Benchmarked[1] obj size 256 bytes on CPU i7-4790K @ 4.00GHz
      
      SLUB fastpath single object quick reuse: 47 cycles(tsc) 11.931 ns
      
      To get stable and comparable numbers, the kernel has been booted with
      "slab_merge" (this also improves performance for larger bulk sizes).
      
      Performance data, compared against fallback bulking:
      
      bulk -  fallback bulk            - improvement with this patch
         1 -  62 cycles(tsc) 15.662 ns - 49 cycles(tsc) 12.407 ns- improved 21.0%
         2 -  55 cycles(tsc) 13.935 ns - 30 cycles(tsc) 7.506 ns - improved 45.5%
         3 -  53 cycles(tsc) 13.341 ns - 23 cycles(tsc) 5.865 ns - improved 56.6%
         4 -  52 cycles(tsc) 13.081 ns - 20 cycles(tsc) 5.048 ns - improved 61.5%
         8 -  50 cycles(tsc) 12.627 ns - 18 cycles(tsc) 4.659 ns - improved 64.0%
        16 -  49 cycles(tsc) 12.412 ns - 17 cycles(tsc) 4.495 ns - improved 65.3%
        30 -  49 cycles(tsc) 12.484 ns - 18 cycles(tsc) 4.533 ns - improved 63.3%
        32 -  50 cycles(tsc) 12.627 ns - 18 cycles(tsc) 4.707 ns - improved 64.0%
        34 -  96 cycles(tsc) 24.243 ns - 23 cycles(tsc) 5.976 ns - improved 76.0%
        48 -  83 cycles(tsc) 20.818 ns - 21 cycles(tsc) 5.329 ns - improved 74.7%
        64 -  74 cycles(tsc) 18.700 ns - 20 cycles(tsc) 5.127 ns - improved 73.0%
       128 -  90 cycles(tsc) 22.734 ns - 27 cycles(tsc) 6.833 ns - improved 70.0%
       158 -  99 cycles(tsc) 24.776 ns - 30 cycles(tsc) 7.583 ns - improved 69.7%
       250 - 104 cycles(tsc) 26.089 ns - 37 cycles(tsc) 9.280 ns - improved 64.4%
      
      Performance data, compared current in-kernel bulking:
      
      bulk - curr in-kernel  - improvement with this patch
         1 -  46 cycles(tsc) - 49 cycles(tsc) - improved (cycles:-3) -6.5%
         2 -  27 cycles(tsc) - 30 cycles(tsc) - improved (cycles:-3) -11.1%
         3 -  21 cycles(tsc) - 23 cycles(tsc) - improved (cycles:-2) -9.5%
         4 -  18 cycles(tsc) - 20 cycles(tsc) - improved (cycles:-2) -11.1%
         8 -  17 cycles(tsc) - 18 cycles(tsc) - improved (cycles:-1) -5.9%
        16 -  18 cycles(tsc) - 17 cycles(tsc) - improved (cycles: 1)  5.6%
        30 -  18 cycles(tsc) - 18 cycles(tsc) - improved (cycles: 0)  0.0%
        32 -  18 cycles(tsc) - 18 cycles(tsc) - improved (cycles: 0)  0.0%
        34 -  78 cycles(tsc) - 23 cycles(tsc) - improved (cycles:55) 70.5%
        48 -  60 cycles(tsc) - 21 cycles(tsc) - improved (cycles:39) 65.0%
        64 -  49 cycles(tsc) - 20 cycles(tsc) - improved (cycles:29) 59.2%
       128 -  69 cycles(tsc) - 27 cycles(tsc) - improved (cycles:42) 60.9%
       158 -  79 cycles(tsc) - 30 cycles(tsc) - improved (cycles:49) 62.0%
       250 -  86 cycles(tsc) - 37 cycles(tsc) - improved (cycles:49) 57.0%
      
      Performance with normal SLUB merging is significantly slower for
      larger bulking.  This is believed to (primarily) be an effect of not
      having to share the per-CPU data-structures, as tuning per-CPU size
      can achieve similar performance.
      
      bulk - slab_nomerge   -  normal SLUB merge
         1 -  49 cycles(tsc) - 49 cycles(tsc) - merge slower with cycles:0
         2 -  30 cycles(tsc) - 30 cycles(tsc) - merge slower with cycles:0
         3 -  23 cycles(tsc) - 23 cycles(tsc) - merge slower with cycles:0
         4 -  20 cycles(tsc) - 20 cycles(tsc) - merge slower with cycles:0
         8 -  18 cycles(tsc) - 18 cycles(tsc) - merge slower with cycles:0
        16 -  17 cycles(tsc) - 17 cycles(tsc) - merge slower with cycles:0
        30 -  18 cycles(tsc) - 23 cycles(tsc) - merge slower with cycles:5
        32 -  18 cycles(tsc) - 22 cycles(tsc) - merge slower with cycles:4
        34 -  23 cycles(tsc) - 22 cycles(tsc) - merge slower with cycles:-1
        48 -  21 cycles(tsc) - 22 cycles(tsc) - merge slower with cycles:1
        64 -  20 cycles(tsc) - 48 cycles(tsc) - merge slower with cycles:28
       128 -  27 cycles(tsc) - 57 cycles(tsc) - merge slower with cycles:30
       158 -  30 cycles(tsc) - 59 cycles(tsc) - merge slower with cycles:29
       250 -  37 cycles(tsc) - 56 cycles(tsc) - merge slower with cycles:19
      
      Joint work with Alexander Duyck.
      
      [1] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/mm/slab_bulk_test01.c
      
      [akpm@linux-foundation.org: BUG_ON -> WARN_ON;return]
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d0ecd894
    • slub: support for bulk free with SLUB freelists · 81084651
      Jesper Dangaard Brouer authored
      Make it possible to free a freelist with several objects by adjusting the
      API of slab_free() and __slab_free() to take a head, a tail and an object
      counter (cnt).
      
      Tail being NULL indicates a single-object free of the head object.  This
      allows compiler inline constant propagation in slab_free() and
      slab_free_freelist_hook() to avoid adding any overhead in the case of a
      single object free.
      
      This allows a freelist with several objects (all within the same
      slab-page) to be freed using a single locked cmpxchg_double in
      __slab_free() and with an unlocked cmpxchg_double in slab_free().
      
      Object debugging on the free path is also extended to handle these
      freelists.  When CONFIG_SLUB_DEBUG is enabled it will also detect if
      objects don't belong to the same slab-page.
      
      These changes are needed for the next patch to bulk free the detached
      freelists it introduces and constructs.
      
      Micro benchmarking showed no performance reduction due to this change,
      when debugging is turned off (compiled with CONFIG_SLUB_DEBUG).
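      
      A minimal sketch of the head/tail/cnt calling convention and why a NULL
      tail keeps the single-object path cheap (illustrative only, not the SLUB
      functions):
      
          #include <stdio.h>
      
          /* tail == NULL means "free just head"; call sites that pass the
           * constants (obj, NULL, 1) let the compiler constant-propagate and
           * drop the multi-object handling entirely */
          static inline void fake_slab_free(void *head, void *tail, int cnt)
          {
              if (!tail) {
                  printf("single free of %p\n", head);
                  return;
              }
              printf("bulk free of %d objects, head=%p tail=%p\n", cnt, head, tail);
          }
      
          int main(void)
          {
              int a, b;
      
              fake_slab_free(&a, NULL, 1);    /* single-object fast case */
              fake_slab_free(&a, &b, 2);      /* several objects at once */
              return 0;
          }
      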
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      81084651
    • parisc: Map kernel text and data on huge pages · 41b85a11
      Helge Deller authored
      Adjust the linker script and map_pages() to map kernel text and data on
      physical 1MB huge/large pages.
      Signed-off-by: Helge Deller <deller@gmx.de>
      41b85a11
    • parisc: Add Huge Page and HUGETLBFS support · 736d2169
      Helge Deller authored
      This patch adds huge page support to allow userspace to allocate huge
      pages and to use the hugetlbfs filesystem on 32- and 64-bit Linux kernels.
      A later patch will add kernel support to map kernel text and data on
      huge pages.
      
      The only requirement is that the kernel needs to be compiled for a
      PA8X00 CPU (PA2.0 architecture). Older PA1.X CPUs do not support
      variable page sizes. 64bit kernels are compiled for PA2.0 by default.
      
      Technically on parisc multiple physical huge pages may be needed to
      emulate standard 2MB huge pages.
      Signed-off-by: Helge Deller <deller@gmx.de>
      736d2169
    • parisc: Use long branch to do_syscall_trace_exit · 337685e5
      Helge Deller authored
      Use the 22bit instead of the 17bit branch instruction on a 64bit kernel
      to reach the do_syscall_trace_exit function from the gateway page.
      A huge page enabled kernel may need the additional branch distance bits.
      Signed-off-by: Helge Deller <deller@gmx.de>
      337685e5
    • parisc: Increase initial kernel mapping to 32MB on 64bit kernel · 332b42e4
      Helge Deller authored
      For the 64bit kernel the initial 16 MB of kernel memory might become too
      small if you build a kernel with many modules built-in and with kernel
      text and data areas mapped on huge pages.
      
      This patch increases the initial mapping to 32MB for 64bit kernels and
      keeps 16MB for 32bit kernels.
      Signed-off-by: Helge Deller <deller@gmx.de>
      332b42e4