1. 30 Aug, 2017 1 commit
    • perf report: Calculate the average cycles of iterations · c4ee0625
      Jin Yao authored
      The branch history code has a loop detection function. With this, we can
      get the number of iterations by calculating the removed loops.
      
      It would also be useful to know the average number of cycles per
      iteration. This patch adds up the cycles in the branch entries of the
      removed loops and saves the result to the next branch entry (e.g.
      branch entry A).

      Finally, it displays the iteration count and the average cycles at the
      "from" side of branch entry A.
      
      For example:
      perf record -g -j any,save_type ./div
      perf report --branch-history --no-children --stdio
      
      --22.63%--main div.c:42 (RET CROSS_2M)
                compute_flag div.c:28 (cycles:2 iter:173115 avg_cycles:2)
                |
                 --10.73%--compute_flag div.c:27 (RET CROSS_2M)
                           rand rand.c:28 (cycles:1)
                           rand rand.c:28 (RET CROSS_2M)
                           __random random.c:298 (cycles:1)
                           __random random.c:297 (COND_BWD CROSS_2M)
                           __random random.c:295 (cycles:1)
                           __random random.c:295 (COND_BWD CROSS_2M)
                           __random random.c:295 (cycles:1)
                           __random random.c:295 (RET CROSS_2M)
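
      The "avg_cycles" shown above is simply the cycles accumulated over the
      removed loop iterations divided by the iteration count. A minimal sketch
      of that calculation, with hypothetical field names (the actual
      branch-entry layout in the patch may differ):

        #include <stdint.h>

        /* Illustrative only: iter_cycles/iter_count are hypothetical names. */
        struct iter_stats {
                uint64_t iter_cycles;   /* cycles summed over removed loop iterations */
                uint64_t iter_count;    /* iterations found by loop detection */
        };

        static uint64_t iter_avg_cycles(const struct iter_stats *st)
        {
                return st->iter_count ? st->iter_cycles / st->iter_count : 0;
        }
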
      Signed-off-by: Yao Jin <yao.jin@linux.intel.com>
      Reviewed-by: Andi Kleen <ak@linux.intel.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@kernel.org>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/1502111115-18305-1-git-send-email-yao.jin@linux.intel.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  2. 29 Aug, 2017 9 commits
    • Merge tag 'perf-core-for-mingo-4.14-20170829' of... · 1b2f76d7
      Ingo Molnar authored
      Merge tag 'perf-core-for-mingo-4.14-20170829' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core
      
      Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
      
       - Fix remote HITM detection for Skylake in 'perf c2c' (Jiri Olsa)
      
       - Fixes for the handling of PERF_RECORD_READ records (Jiri Olsa)
      
       - Fix kprobes blacklist symbol lookup in 'perf probe' (Li Bin)
      
       - The PLT header and entry sizes are not the same in !x86, fix it for ARM and
         AARCH64 (Li Bin)
      
       - Beautify pkey_{alloc,free,mprotect} arguments in 'perf trace' (Arnaldo Carvalho de Melo)
      
       - Fix CC, AR, LD external definition, allow flex and bison to be
         externally defined and other related Makefile fixes (David Carrillo-Cisneros)
      
       - Sync CPU features kernel ABI headers with tooling headers (Arnaldo Carvalho de Melo)
      
       - Fix path to PMU formats in 'perf stat' documentation (Jack Henschel)
      
       - Fix static build with newer toolchains (Jiri Olsa)
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf symbols: Fix plt entry calculation for ARM and AARCH64 · b2f76050
      Li Bin authored
      On x86, the plt header size is the same as the plt entry size, and both
      can be identified from the sh_entsize of the plt section header (shdr).

      But we can't assume that the sh_entsize of the plt shdr is always the
      plt entry size on every architecture, and the plt header size may differ
      from the plt entry size on some architectures.

      On ARM, the plt header size is 20 bytes and the plt entry size is 12
      bytes (not considering the FOUR_WORD_PLT case), per the binutils
      implementation. The plt section is as follows:
      
      Disassembly of section .plt:
      000004a0 <__cxa_finalize@plt-0x14>:
       4a0:   e52de004        push    {lr}            ; (str lr, [sp, #-4]!)
       4a4:   e59fe004        ldr     lr, [pc, #4]    ; 4b0 <_init+0x1c>
       4a8:   e08fe00e        add     lr, pc, lr
       4ac:   e5bef008        ldr     pc, [lr, #8]!
       4b0:   00008424        .word   0x00008424
      
      000004b4 <__cxa_finalize@plt>:
       4b4:   e28fc600        add     ip, pc, #0, 12
       4b8:   e28cca08        add     ip, ip, #8, 20  ; 0x8000
       4bc:   e5bcf424        ldr     pc, [ip, #1060]!        ; 0x424
      
      000004c0 <printf@plt>:
       4c0:   e28fc600        add     ip, pc, #0, 12
       4c4:   e28cca08        add     ip, ip, #8, 20  ; 0x8000
       4c8:   e5bcf41c        ldr     pc, [ip, #1052]!        ; 0x41c
      
      On AARCH64, the plt header size is 32 bytes and the plt entry size is 16
      bytes.  The plt section is as follows:
      
      Disassembly of section .plt:
      0000000000000560 <__cxa_finalize@plt-0x20>:
       560:   a9bf7bf0        stp     x16, x30, [sp,#-16]!
       564:   90000090        adrp    x16, 10000 <__FRAME_END__+0xf8a8>
       568:   f944be11        ldr     x17, [x16,#2424]
       56c:   9125e210        add     x16, x16, #0x978
       570:   d61f0220        br      x17
       574:   d503201f        nop
       578:   d503201f        nop
       57c:   d503201f        nop
      
      0000000000000580 <__cxa_finalize@plt>:
       580:   90000090        adrp    x16, 10000 <__FRAME_END__+0xf8a8>
       584:   f944c211        ldr     x17, [x16,#2432]
       588:   91260210        add     x16, x16, #0x980
       58c:   d61f0220        br      x17
      
      0000000000000590 <__gmon_start__@plt>:
       590:   90000090        adrp    x16, 10000 <__FRAME_END__+0xf8a8>
       594:   f944c611        ldr     x17, [x16,#2440]
       598:   91262210        add     x16, x16, #0x988
       59c:   d61f0220        br      x17
      
      NOTES:
      
      In addition to ARM and AARCH64, other architectures, such as
      s390/alpha/mips/parisc/powerpc/sh/sparc/xtensa, also need to consider
      this issue.
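
      The practical consequence for synthesizing <sym>@plt symbols: the first
      entry starts right after the plt header, and every following entry
      advances by the plt entry size. A rough sketch of the per-architecture
      size selection (illustrative, not the exact tools/perf code):

        #include <stdint.h>
        #include <string.h>

        /* Sizes taken from the disassembly above. */
        static void plt_sizes(const char *arch, uint64_t sh_entsize,
                              uint64_t *header, uint64_t *entry)
        {
                if (!strcmp(arch, "arm")) {
                        *header = 20;   /* push/ldr/add/ldr + literal word */
                        *entry  = 12;   /* add/add/ldr */
                } else if (!strcmp(arch, "arm64")) {
                        *header = 32;   /* stp/adrp/ldr/add/br + padding nops */
                        *entry  = 16;   /* adrp/ldr/add/br */
                } else {
                        /* e.g. x86: header and entries share sh_entsize */
                        *header = sh_entsize;
                        *entry  = sh_entsize;
                }
        }
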
      Signed-off-by: Li Bin <huawei.libin@huawei.com>
      Acked-by: Namhyung Kim <namhyung@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Alexis Berlemont <alexis.berlemont@gmail.com>
      Cc: David Tolnay <dtolnay@gmail.com>
      Cc: Hanjun Guo <guohanjun@huawei.com>
      Cc: Hemant Kumar <hemant@linux.vnet.ibm.com>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Milian Wolff <milian.wolff@kdab.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Cc: zhangmengting@huawei.com
      Link: http://lkml.kernel.org/r/1496622849-21877-1-git-send-email-huawei.libin@huawei.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf probe: Fix kprobe blacklist checking condition · 2c29461e
      Li Bin authored
      Since commit 9aaf5a5f ("perf probe: Check kprobes blacklist when
      adding new events"), 'perf probe' has checked the blacklist of functions
      which cannot be probed. But the checking condition is wrong: the
      end_addr of a symbol, which is the start_addr of the next symbol, must
      not be included in the range.
      
      Committer notes:
      
      IOW make it match its kernel counterpart in kernel/kprobes.c:
      
        bool within_kprobe_blacklist(unsigned long addr)
      
      Each entry has as its end address not the symbol's last address, but the
      first address _outside_ that symbol, which, for related functions, is the
      first address of the next symbol, like these from kernel/trace/trace_probe.c:
      
      0xffffffffbd198df0-0xffffffffbd198e40	print_type_u8
      0xffffffffbd198e40-0xffffffffbd198e90	print_type_u16
      0xffffffffbd198e90-0xffffffffbd198ee0	print_type_u32
      0xffffffffbd198ee0-0xffffffffbd198f30	print_type_u64
      0xffffffffbd198f30-0xffffffffbd198f80	print_type_s8
      0xffffffffbd198f80-0xffffffffbd198fd0	print_type_s16
      0xffffffffbd198fd0-0xffffffffbd199020	print_type_s32
      0xffffffffbd199020-0xffffffffbd199070	print_type_s64
      0xffffffffbd199070-0xffffffffbd1990c0	print_type_x8
      0xffffffffbd1990c0-0xffffffffbd199110	print_type_x16
      0xffffffffbd199110-0xffffffffbd199160	print_type_x32
      0xffffffffbd199160-0xffffffffbd1991b0	print_type_x64
      
      But not always:
      
      0xffffffffbd1997b0-0xffffffffbd1997c0	fetch_kernel_stack_address (kernel/trace/trace_probe.c)
      0xffffffffbd1c57f0-0xffffffffbd1c58b0	__context_tracking_enter   (kernel/context_tracking.c)
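
      In code terms the end address is exclusive, so the check has to use a
      half-open interval. A schematic of the corrected condition (names are
      illustrative, not the exact perf probe code):

        #include <stdbool.h>
        #include <stdint.h>

        struct blacklist_range {
                uint64_t start;         /* first address of the symbol */
                uint64_t end;           /* first address *outside* the symbol */
        };

        static bool in_blacklist(const struct blacklist_range *r, uint64_t addr)
        {
                /* 'end' is exclusive: it is usually the next symbol's start */
                return addr >= r->start && addr < r->end;
        }
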
      Signed-off-by: Li Bin <huawei.libin@huawei.com>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Wang Nan <wangnan0@huawei.com>
      Cc: zhangmengting@huawei.com
      Fixes: 9aaf5a5f ("perf probe: Check kprobes blacklist when adding new events")
      Link: http://lkml.kernel.org/r/1504011443-7269-1-git-send-email-huawei.libin@huawei.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf/x86: Fix caps/ for !Intel · 5da382eb
      Peter Zijlstra authored
      Move the 'max_precise' capability into generic x86 code where it
      belongs. This fixes a sysfs splat on !Intel systems where we fail to set
      x86_pmu_caps_group.attrs.
      Reported-and-tested-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Andi Kleen <ak@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: hpa@zytor.com
      Fixes: 22688d1c20f5 ("x86/perf: Export some PMU attributes in caps/ directory")
      Link: http://lkml.kernel.org/r/20170828104650.2u3rsim4jafyjzv2@hirez.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/core, x86: Add PERF_SAMPLE_PHYS_ADDR · fc7ce9c7
      Kan Liang authored
      To understand how a workload maps to memory channels and hardware
      behavior, it's very important to collect address maps with physical
      addresses. For example, 3D XPoint accesses can only be found by
      filtering on the physical address.
      
      Add a new sample type for physical address.
      
      perf already has a facility to collect data virtual addresses. This patch
      introduces a function to convert a virtual address to a physical address.
      The function is quite generic and can be extended to any architecture as
      long as a virtual address is provided.

       - For kernel direct-mapping addresses, virt_to_phys is used to convert
         the virtual addresses to physical addresses.

       - For user virtual addresses, __get_user_pages_fast is used to walk the
         page tables and obtain the user physical address.
      
       - This does not work for vmalloc addresses right now. These are not
         resolved, but code to do that could be added.
      
      The new sample type requires collecting the virtual address. The
      virtual address will not be output unless SAMPLE_ADDR is applied.
      
      For security, the physical address can only be exposed to root or a
      privileged user.
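
      A simplified sketch of the conversion logic described above (the actual
      kernel implementation differs in details and error handling; kernel
      context is assumed for the helpers used):

        static u64 sample_virt_to_phys(u64 virt)
        {
                u64 phys = 0;
                struct page *page = NULL;

                if (virt >= TASK_SIZE) {
                        /* kernel direct-mapping address */
                        if (virt_addr_valid((void *)(uintptr_t)virt))
                                phys = (u64)virt_to_phys((void *)(uintptr_t)virt);
                } else {
                        /* user address: walk the page tables without sleeping */
                        if (__get_user_pages_fast(virt, 1, 0, &page) == 1)
                                phys = page_to_phys(page) + virt % PAGE_SIZE;
                }

                if (page)
                        put_page(page);
                return phys;
        }
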
      Tested-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Cc: mpe@ellerman.id.au
      Link: http://lkml.kernel.org/r/1503967969-48278-1-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/core, pt, bts: Get rid of itrace_started · 8d4e6c4c
      Alexander Shishkin authored
      I just noticed that hw.itrace_started and hw.config are aliased to the
      same location. Now, the PT driver happens to use both, which works out
      fine by sheer luck:
      
       - STORE(hw.itrace_started) is ordered before STORE(hw.config) in
         program order, although there are no compiler barriers to ensure that,

       - to perf_log_itrace_start(), hw.itrace_started looks set at the same
         time as when it is intended to be set, because both stores happen in
         the same path,
      
       - hw.config is never reset to zero in the PT driver.
      
      Now, the use of hw.config by the PT driver makes more sense (it being a
      HW PMU) than messing around with itrace_started, which is an awkward API
      to begin with.
      
      This patch replaces hw.itrace_started with an attach_state bit and an
      API call for the PMU drivers to use to communicate the condition.
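
      A schematic of the resulting interface (simplified; the exact flag value
      and the call sites are defined by the patch itself):

        /* Sketch: a dedicated attach_state bit replaces hw.itrace_started. */
        #define PERF_ATTACH_ITRACE      0x10    /* illustrative bit value */

        static inline void perf_event_itrace_started(struct perf_event *event)
        {
                event->attach_state |= PERF_ATTACH_ITRACE;
        }

        /* PMU drivers (PT, BTS) call this when tracing actually starts, and
         * perf_log_itrace_start() tests the bit instead of hw.itrace_started. */
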
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: vince@deater.net
      Link: http://lkml.kernel.org/r/20170330153956.25994-1-alexander.shishkin@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • Ingo Molnar · e0563e04
    • perf/ftrace: Fix double traces of perf on ftrace:function · 75e83876
      Zhou Chengming authored
      When running perf on the ftrace:function tracepoint, there is a bug
      which can be reproduced by:
      
        perf record -e ftrace:function -a sleep 20 &
        perf record -e ftrace:function ls
        perf script
      
                    ls 10304 [005]   171.853235: ftrace:function: perf_output_begin
                    ls 10304 [005]   171.853237: ftrace:function: perf_output_begin
                    ls 10304 [005]   171.853239: ftrace:function: task_tgid_nr_ns
                    ls 10304 [005]   171.853240: ftrace:function: task_tgid_nr_ns
                    ls 10304 [005]   171.853242: ftrace:function: __task_pid_nr_ns
                    ls 10304 [005]   171.853244: ftrace:function: __task_pid_nr_ns
      
      We can see that all the function traces are doubled.
      
      The problem is caused by an inconsistency between the register
      function perf_ftrace_event_register() and the probe function
      perf_ftrace_function_call(). The former registers one probe for every
      perf_event, while the latter handles all perf_events on the current
      CPU. So when there are two perf_events on the current CPU, their traces
      are doubled.

      So this patch adds an extra parameter "event" to perf_tp_event(), and
      sample data is only sent to this event when it is not NULL.
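
      Conceptually, the ftrace probe passes its own event down so that the
      tracepoint path delivers the sample only there. A simplified sketch of
      the dispatch (helper names follow the existing swevent machinery, but
      the surrounding code is elided):

        static void tp_deliver(struct perf_event *event, u64 count,
                               struct perf_sample_data *data,
                               struct pt_regs *regs,
                               struct hlist_head *head)
        {
                if (event) {
                        /* ftrace:function case: only the probe's own event */
                        if (perf_tp_event_match(event, data, regs))
                                perf_swevent_event(event, count, data, regs);
                        return;
                }
                /* regular tracepoint case: every event hashed on this head */
                hlist_for_each_entry_rcu(event, head, hlist_entry) {
                        if (perf_tp_event_match(event, data, regs))
                                perf_swevent_event(event, count, data, regs);
                }
        }
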
      Signed-off-by: Zhou Chengming <zhouchengming1@huawei.com>
      Reviewed-by: Jiri Olsa <jolsa@kernel.org>
      Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@kernel.org
      Cc: alexander.shishkin@linux.intel.com
      Cc: huawei.libin@huawei.com
      Link: http://lkml.kernel.org/r/1503668977-12526-1-git-send-email-zhouchengming1@huawei.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/core: Fix potential double-fetch bug · f12f42ac
      Meng Xu authored
      While examining the kernel source code, I found a dangerous operation that
      could turn into a double-fetch situation (a race condition bug) where the
      same userspace memory region is fetched twice into the kernel, with sanity
      checks after the first fetch but missing checks after the second fetch.
      
        1. The first fetch happens in line 9573 get_user(size, &uattr->size).
      
        2. Subsequently the 'size' variable undergoes a few sanity checks and
           transformations (line 9577 to 9584).
      
        3. The second fetch happens in line 9610 copy_from_user(attr, uattr, size)
      
        4. Given that 'uattr' can be fully controlled in userspace, an attacker can
           race to override 'uattr->size' with an arbitrary value (say, 0xFFFFFFFF)
           after the first fetch but before the second fetch. The changed value will
           be copied to 'attr->size'.

        5. There are no further checks on 'attr->size' until the end of this function,
           and once the function returns, we lose the context to verify that 'attr->size'
           conforms to the sanity checks performed in step 2 (line 9577 to 9584).

        6. My manual analysis shows that 'attr->size' is not used elsewhere later,
           so there is no working exploit against it right now. However, this could
           easily turn into an exploitable one if careless developers start to use
           'attr->size' later.

      To fix this, override 'attr->size' after the second fetch with the value from
      the first fetch, regardless of what is actually copied in.
      
      In this way, it is assured that 'attr->size' is consistent with the checks
      performed after the first fetch.
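
      A condensed sketch of the resulting flow in perf_copy_attr()-style code
      (the line numbers above refer to the kernel source, not to this sketch):

        u32 size;

        if (get_user(size, &uattr->size))
                return -EFAULT;
        /* ... sanity-check and normalize 'size' (step 2 above) ... */
        if (copy_from_user(attr, uattr, size))
                return -EFAULT;
        /* The second fetch may have raced with userspace rewriting uattr->size;
         * keep the value that already passed the sanity checks. */
        attr->size = size;
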
      Signed-off-by: Meng Xu <mengxu.gatech@gmail.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@kernel.org
      Cc: alexander.shishkin@linux.intel.com
      Cc: meng.xu@gatech.edu
      Cc: sanidhya@gatech.edu
      Cc: taesoo@gatech.edu
      Link: http://lkml.kernel.org/r/1503522470-35531-1-git-send-email-meng.xu@gatech.edu
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  3. 28 Aug, 2017 26 commits
  4. 27 Aug, 2017 3 commits
    • Avoid page waitqueue race leaving possible page locker waiting · a8b169af
      Linus Torvalds authored
      The "lock_page_killable()" function waits for exclusive access to the
      page lock bit using the WQ_FLAG_EXCLUSIVE bit in the waitqueue entry
      set.
      
      That means that if it gets woken up, other waiters may have been
      skipped.
      
      That, in turn, means that if it sees the page being unlocked, it *must*
      take that lock and return success, even if a lethal signal is also
      pending.
      
      So instead of checking for lethal signals first, we need to check for
      them after we've checked the actual bit that we were waiting for.  Even
      if that might then delay the killing of the process.
      
      This matches the order of the old "wait_on_bit_lock()" infrastructure
      that the page locking used to use (and is still used in a few other
      areas).
      
      Note that if we still return an error after having unsuccessfully tried
      to acquire the page lock, that is ok: that means that some other thread
      was able to get ahead of us and lock the page, and when that other
      thread then unlocks the page, the wakeup event will be repeated.  So any
      other pending waiters will now get properly woken up.
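
      The ordering matters inside the wait loop: the attempt to take the bit
      has to come before the signal check. A heavily simplified sketch of that
      ordering (waitqueue setup and task-state handling omitted; this is not
      the actual page-wait implementation):

        for (;;) {
                /* First: did we get the bit we were exclusively woken for? */
                if (!test_and_set_bit_lock(PG_locked, &page->flags))
                        break;          /* we own the lock; must report success */

                /* Only then give up on a lethal signal. */
                if (signal_pending_state(state, current)) {
                        ret = -EINTR;
                        break;
                }

                io_schedule();
        }
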
      
      Fixes: 62906027 ("mm: add PageWaiters indicating tasks are waiting for a page bit")
      Cc: Nick Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Minor page waitqueue cleanups · 3510ca20
      Linus Torvalds authored
      Tim Chen and Kan Liang have been battling a customer load that shows
      extremely long page wakeup lists.  The cause seems to be constant NUMA
      migration of a hot page that is shared across a lot of threads, but the
      actual root cause for the exact behavior has not been found.
      
      Tim has a patch that batches the wait list traversal at wakeup time, so
      that we at least don't get long uninterruptible cases where we traverse
      and wake up thousands of processes and get nasty latency spikes.  That
      is likely 4.14 material, but we're still discussing the page waitqueue
      specific parts of it.
      
      In the meantime, I've tried to look at making the page wait queues less
      expensive, and failing miserably.  If you have thousands of threads
      waiting for the same page, it will be painful.  We'll need to try to
      figure out the NUMA balancing issue some day, in addition to avoiding
      the excessive spinlock hold times.
      
      That said, having tried to rewrite the page wait queues, I can at least
      fix up some of the braindamage in the current situation. In particular:
      
       (a) we don't want to continue walking the page wait list if the bit
           we're waiting for already got set again (which seems to be one of
           the patterns of the bad load).  That makes no progress and just
           causes pointless cache pollution chasing the pointers.
      
       (b) we don't want to put the non-locking waiters always on the front of
           the queue, and the locking waiters always on the back.  Not only is
           that unfair, it means that we wake up thousands of reading threads
           that will just end up being blocked by the writer later anyway.
      
      Also add a comment about the layout of 'struct wait_page_key' - there is
      an external user of it in the cachefiles code that means that it has to
      match the layout of 'struct wait_bit_key' in the two first members.  It
      so happens to match, because 'struct page *' and 'unsigned long *' end
      up having the same values simply because the page flags are the first
      member in struct page.
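
      The layout constraint is just "the first two members must line up". A
      sketch of the two structures as described (fields beyond the first two
      are illustrative):

        struct wait_bit_key {
                void            *flags;
                int             bit_nr;
                /* ... */
        };

        struct wait_page_key {
                struct page     *page;          /* must alias wait_bit_key.flags  */
                int             bit_nr;         /* must alias wait_bit_key.bit_nr */
                int             page_match;
        };
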
      
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Clarify (and fix) MAX_LFS_FILESIZE macros · 0cc3b0ec
      Linus Torvalds authored
      We have a MAX_LFS_FILESIZE macro that is meant to be filled in by
      filesystems (and other IO targets) that know they are 64-bit clean and
      don't have any 32-bit limits in their IO path.
      
      It turns out that our 32-bit value for that limit was bogus.  On 32-bit,
      the VM layer is limited by the page cache to only 32-bit index values,
      but our logic for that was confusing and actually wrong.  We used to
      define that value to
      
      	(((loff_t)PAGE_SIZE << (BITS_PER_LONG-1))-1)
      
      which is actually odd in several ways: it limits the index to 31 bits,
      and then it limits files so that they can't have data in that last byte
      of a page that has the highest 31-bit index (ie page index 0x7fffffff).
      
      Neither of those limitations makes sense.  The index is actually the full
      32 bit unsigned value, and we can use that whole full page.  So the
      maximum size of the file would logically be "PAGE_SIZE << BITS_PER_LONG".
      
      However, we do want to avoid the maximum index, because we have code
      that iterates over the page indexes, and we don't want that code to
      overflow.  So the maximum size of a file on a 32-bit host should
      actually be one page less than the full 32-bit index.
      
      So the actual limit is ULONG_MAX << PAGE_SHIFT.  That means that we will
      not actually be using the page of that last index (ULONG_MAX), but we
      can grow a file up to that limit.
      
      The wrong value of MAX_LFS_FILESIZE actually caused problems for Doug
      Nazar, who was still using a 32-bit host, but with a 9.7TB 2 x RAID5
      volume.  It turns out that our old MAX_LFS_FILESIZE was 8TiB (well, one
      byte less), but the actual true VM limit is one page less than 16TiB.
      
      This was invisible until commit c2a9737f ("vfs,mm: fix a dead loop
      in truncate_inode_pages_range()"), which started applying that
      MAX_LFS_FILESIZE limit to block devices too.
      
      NOTE! On 64-bit, the page index isn't a limiter at all, and the limit is
      actually just the offset type itself (loff_t), which is signed.  But for
      clarity, on 64-bit, just use the maximum signed value, and don't make
      people have to count the number of 'f' characters in the hex constant.
      
      So just use LLONG_MAX for the 64-bit case.  That was what the value had
      been before too, just written out as a hex constant.
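
      Putting the two cases together, the corrected macro pair looks like this
      (per the description above: the 32-bit limit stops one page short of the
      full index space, and 64-bit just uses the signed offset limit):

        #if BITS_PER_LONG == 32
        #define MAX_LFS_FILESIZE        ((loff_t)ULONG_MAX << PAGE_SHIFT)
        #else
        #define MAX_LFS_FILESIZE        ((loff_t)LLONG_MAX)
        #endif
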
      
      Fixes: c2a9737f ("vfs,mm: fix a dead loop in truncate_inode_pages_range()")
      Reported-and-tested-by: Doug Nazar <nazard@nazar.ca>
      Cc: Andreas Dilger <adilger@dilger.ca>
      Cc: Mark Fasheh <mfasheh@versity.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Dave Kleikamp <shaggy@kernel.org>
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  5. 26 Aug, 2017 1 commit
    • Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input · bab97524
      Linus Torvalds authored
      Pull input fixes from Dmitry Torokhov:
      
       - a tweak to the IBM Trackpoint driver that helps recognize trackpoints
         on newer Lenovo Carbons

       - a fix to the ALPS driver solving scroll issues on some Dells

       - yet another ACPI ID has been added to the Elan I2C touchpad driver

       - a quieted diagnostic message in the soc_button_array driver
      
      * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input:
        Input: ALPS - fix two-finger scroll breakage in right side on ALPS touchpad
        Input: soc_button_array - silence -ENOENT error on Dell XPS13 9365
        Input: trackpoint - add new trackpoint firmware ID
        Input: elan_i2c - add ELAN0602 ACPI ID to support Lenovo Yoga310