1. 07 Oct, 2019 16 commits
    • mm, memcg: make scan aggression always exclude protection · 1bc63fb1
      Chris Down authored
      This patch is an incremental improvement on the existing
      memory.{low,min} relative reclaim work to base its scan pressure
      calculations on how much protection is available compared to the current
      usage, rather than how much the current usage is over some protection
      threshold.
      
      This change doesn't change the experience for the user in the normal
      case too much.  One benefit is that it replaces the (somewhat arbitrary)
      100% cutoff with an indefinite slope, which makes it easier to ballpark
      a memory.low value.
      
      As well as this, the old methodology doesn't quite apply generically to
      machines with varying amounts of physical memory.  Let's say we have a
      top level cgroup, workload.slice, and another top level cgroup,
      system-management.slice.  We want to roughly give 12G to
      system-management.slice, so on a 32GB machine we set memory.low to 20GB
      in workload.slice, and on a 64GB machine we set memory.low to 52GB.
      However, because these are relative amounts to the total machine size,
      while the amount of memory we want to generally be willing to yield to
      system.slice is absolute (12G), we end up putting more pressure on
      system.slice just because we have a larger machine and a larger workload
      to fill it, which seems fairly unintuitive.  With this new behaviour, we
      don't end up with this unintended side effect.
      
      Previously, the way that memory.low protection worked was that if you
      were 50% over a certain baseline, you got 50% of your normal scan
      pressure.
      This is certainly better than the previous cliff-edge behaviour, but it
      can be improved even further by always considering memory under the
      currently enforced protection threshold to be out of bounds.  This means
      that we can set relatively low memory.low thresholds for variable or
      bursty workloads while still getting a reasonable level of protection,
      whereas with the previous version we may still trivially hit the 100%
      clamp.  The previous 100% clamp is also somewhat arbitrary, whereas this
      one is more concretely based on the currently enforced protection
      threshold, which is likely easier to reason about.
      
      There is also a subtle issue with the way that proportional reclaim
      worked previously -- it promotes having no memory.low, since it makes
      pressure higher during low reclaim.  This happens because we base our
      scan pressure modulation on how far memory.current is between memory.min
      and memory.low, but if memory.low is unset, we only use the overage
      method.  In most cromulent configurations, this then means that we end
      up with *more* pressure than with no memory.low at all when we're in low
      reclaim, which is not really very usable or expected.
      
      With this patch, memory.low and memory.min affect reclaim pressure in a
      more understandable and composable way.  For example, from a user
      standpoint, "protected" memory now remains untouchable from a reclaim
      aggression standpoint, and users can also have more confidence that
      bursty workloads will still receive some amount of guaranteed
      protection.
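
      As a rough illustration (a sketch based on the behaviour described
      above, not the literal diff against mm/vmscan.c), the old and new
      scaling differ roughly as follows, where lruvec_size is the number of
      scannable pages:

        /* Illustrative only -- simplified from the description above. */
        static unsigned long scan_target(unsigned long lruvec_size,
                                         unsigned long usage,
                                         unsigned long protection)
        {
                if (usage <= protection)
                        return 0;       /* protected memory is out of bounds */

                /*
                 * Old scheme: pressure grows with how far usage is over the
                 * protection threshold, clamped at 100%:
                 *
                 *     min(usage - protection, protection) / protection
                 *
                 * New scheme: pressure grows with the unprotected share of
                 * the current usage, so protection is always excluded:
                 */
                return lruvec_size * (usage - protection) / usage;
        }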
      
      Link: http://lkml.kernel.org/r/20190322160307.GA3316@chrisdown.name
      Signed-off-by: Chris Down <chris@chrisdown.name>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1bc63fb1
    • mm, memcg: make memory.emin the baseline for utilisation determination · 9de7ca46
      Chris Down authored
      Roman points out that when we do the low reclaim pass, we scale the
      reclaim pressure relative to position between 0 and the maximum
      protection threshold.
      
      However, if the maximum protection is based on memory.elow, and
      memory.emin is above zero, this means we still may get binary behaviour
      on second-pass low reclaim.  This is because we scale starting at 0, not
      starting at memory.emin, and since we don't scan at all below emin, we
      end up with cliff behaviour.
      
      This should be a fairly uncommon case since usually we don't go into the
      second pass, but it makes sense to scale our low reclaim pressure
      starting at emin.
      
      You can test this by catting two large sparse files, one in a cgroup
      with emin set to some moderate size compared to physical RAM, and
      another cgroup without any emin.  In both cgroups, set an elow larger
      than 50% of physical RAM.  The one with emin will have less page
      scanning, as reclaim pressure is lower.
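
      A minimal sketch of that change (assumed names, not the kernel code):
      the low reclaim pressure now ramps from zero at emin up to full
      pressure at the maximum protection, instead of ramping up from zero
      usage:

        /* Illustrative only: percent of normal scan pressure in low reclaim. */
        static unsigned long low_pressure_pct(unsigned long usage,
                                              unsigned long emin,
                                              unsigned long max_protection)
        {
                if (usage <= emin)
                        return 0;       /* below emin we never scan at all */
                if (max_protection <= emin || usage >= max_protection)
                        return 100;

                /* previously this ramp started at 0 rather than at emin */
                return (usage - emin) * 100 / (max_protection - emin);
        }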
      
      Rebase on top of and apply the same idea as what was applied to handle
      cgroup_memory=disable properly for the original proportional patch
      http://lkml.kernel.org/r/20190201045711.GA18302@chrisdown.name ("mm,
      memcg: Handle cgroup_disable=memory when getting memcg protection").
      
      Link: http://lkml.kernel.org/r/20190201051810.GA18895@chrisdown.name
      Signed-off-by: Chris Down <chris@chrisdown.name>
      Suggested-by: Roman Gushchin <guro@fb.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Dennis Zhou <dennis@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9de7ca46
    • mm, memcg: proportional memory.{low,min} reclaim · 9783aa99
      Chris Down authored
      cgroup v2 introduces two memory protection thresholds: memory.low
      (best-effort) and memory.min (hard protection).  While they generally do
      what they say on the tin, there is a limitation in their implementation
      that makes them difficult to use effectively: that cliff behaviour often
      manifests when they become eligible for reclaim.  This patch implements
      more intuitive and usable behaviour, where we gradually mount more
      reclaim pressure as cgroups further and further exceed their protection
      thresholds.
      
      This cliff edge behaviour happens because we only choose whether or not
      to reclaim based on whether the memcg is within its protection limits
      (see the use of mem_cgroup_protected in shrink_node), but we don't vary
      our reclaim behaviour based on this information.  Imagine the following
      timeline, where the numbers are the lruvec size in this zone:
      
      1. memory.low=1000000, memory.current=999999. 0 pages may be scanned.
      2. memory.low=1000000, memory.current=1000000. 0 pages may be scanned.
      3. memory.low=1000000, memory.current=1000001. 1000001* pages may be
         scanned. (?!)
      
      * Of course, we won't usually scan all available pages in the zone even
        without this patch because of scan control priority, over-reclaim
        protection, etc.  However, as shown by the tests at the end, these
        techniques don't sufficiently throttle such an extreme change in input,
        so cliff-like behaviour isn't really averted by their existence alone.
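
      In code terms, the pre-patch decision is essentially binary; a rough
      illustrative sketch of the cliff (not the shrink_node() source):

        /* Illustrative only: the old all-or-nothing behaviour. */
        static unsigned long pages_to_scan(unsigned long lruvec_size,
                                           unsigned long usage,
                                           unsigned long protection)
        {
                /* steps 1 and 2 above: within protection, nothing is scanned */
                if (usage <= protection)
                        return 0;

                /* step 3: one page over, and everything is eligible again */
                return lruvec_size;
        }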
      
      Here's an example of how this plays out in practice.  At Facebook, we are
      trying to protect various workloads from "system" software, like
      configuration management tools, metric collectors, etc (see this[0] case
      study).  In order to find a suitable memory.low value, we start by
      determining the expected memory range within which the workload will be
      comfortable operating.  This isn't an exact science -- memory usage deemed
      "comfortable" will vary over time due to user behaviour, differences in
      composition of work, etc, etc.  As such we need to ballpark memory.low,
      but doing this is currently problematic:
      
      1. If we end up setting it too low for the workload, it won't have
         *any* effect (see discussion above).  The group will receive the full
         weight of reclaim and won't have any priority while competing with the
         less important system software, as if we had no memory.low configured
         at all.
      
      2. Because of this behaviour, we end up erring on the side of setting
         it too high, such that the comfort range is reliably covered.  However,
         protected memory is completely unavailable to the rest of the system,
         so we might cause undue memory and IO pressure there when we *know* we
         have some elasticity in the workload.
      
      3. Even if we get the value totally right, smack in the middle of the
         comfort zone, we get extreme jumps between no pressure and full
         pressure that cause unpredictable pressure spikes in the workload due
         to the current binary reclaim behaviour.
      
      With this patch, we can set it to our ballpark estimation without too much
      worry.  Any undesirable behaviour, such as too much or too little reclaim
      pressure on the workload or system will be proportional to how far our
      estimation is off.  This means we can set memory.low much more
      conservatively and thus waste less resources *without* the risk of the
      workload falling off a cliff if we overshoot.
      
      As a more abstract technical description, this unintuitive behaviour
      results in having to give high-priority workloads a large protection
      buffer on top of their expected usage to function reliably, as otherwise
      we have abrupt periods of dramatically increased memory pressure which
      hamper performance.  Having to set these thresholds so high wastes
      resources and generally works against the principle of work conservation.
      In addition, having proportional memory reclaim behaviour has other
      benefits.  Most notably, before this patch it's basically mandatory to set
      memory.low to a higher than desirable value because otherwise as soon as
      you exceed memory.low, all protection is lost, and all pages are eligible
      to scan again.  By contrast, having a gradual ramp in reclaim pressure
      means that you now still get some protection when thresholds are exceeded,
      which means that one can now be more comfortable setting memory.low to
      lower values without worrying that all protection will be lost.  This is
      important because workingset size is really hard to know exactly,
      especially with variable workloads, so at least getting *some* protection
      if your workingset size grows larger than you expect increases user
      confidence in setting memory.low without a huge buffer on top being
      needed.
      
      Thanks a lot to Johannes Weiner and Tejun Heo for their advice and
      assistance in thinking about how to make this work better.
      
      In testing these changes, I intended to verify that:
      
      1. Changes in page scanning become gradual and proportional instead of
         binary.
      
         To test this, I experimented stepping further and further down
         memory.low protection on a workload that floats around 19G workingset
         when under memory.low protection, watching page scan rates for the
         workload cgroup:
      
         +------------+-----------------+--------------------+--------------+
         | memory.low | test (pgscan/s) | control (pgscan/s) | % of control |
         +------------+-----------------+--------------------+--------------+
         |        21G |               0 |                  0 | N/A          |
         |        17G |             867 |               3799 | 23%          |
         |        12G |            1203 |               3543 | 34%          |
         |         8G |            2534 |               3979 | 64%          |
         |         4G |            3980 |               4147 | 96%          |
         |          0 |            3799 |               3980 | 95%          |
         +------------+-----------------+--------------------+--------------+
      
         As you can see, the test kernel (with a kernel containing this
         patch) ramps up page scanning significantly more gradually than the
         control kernel (without this patch).
      
      2. More gradual ramp up in reclaim aggression doesn't result in
         premature OOMs.
      
         To test this, I wrote a script that slowly increments the number of
         pages held by stress(1)'s --vm-keep mode until a production system
         entered severe overall memory contention.  This script runs in a highly
         protected slice taking up the majority of available system memory.
         Watching vmstat revealed that page scanning continued essentially
         nominally between test and control, without causing forward reclaim
         progress to become arrested.
      
      [0]: https://facebookmicrosites.github.io/cgroup2/docs/overview.html#case-study-the-fbtax2-project
      
      [akpm@linux-foundation.org: reflow block comments to fit in 80 cols]
      [chris@chrisdown.name: handle cgroup_disable=memory when getting memcg protection]
        Link: http://lkml.kernel.org/r/20190201045711.GA18302@chrisdown.name
      Link: http://lkml.kernel.org/r/20190124014455.GA6396@chrisdown.name
      Signed-off-by: Chris Down <chris@chrisdown.name>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Roman Gushchin <guro@fb.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9783aa99
    • mm/vmpressure.c: fix a signedness bug in vmpressure_register_event() · 518a8671
      Dan Carpenter authored
      The "mode" and "level" variables are enums and in this context GCC will
      treat them as unsigned ints so the error handling is never triggered.
      
      I also removed the bogus initializer because it isn't required any more
      and it's sort of confusing.
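
      A minimal illustration of the bug pattern (hypothetical names, not the
      vmpressure source): match_string() returns a negative errno on failure,
      but storing the result straight into an enum that GCC treats as
      unsigned makes the error check dead code:

        enum sample_level { LEVEL_LOW, LEVEL_MEDIUM, LEVEL_CRITICAL } level;
        int ret;

        /* Broken: "level" is effectively unsigned, so "< 0" never fires. */
        level = match_string(level_names, ARRAY_SIZE(level_names), arg);
        if (level < 0)
                return -EINVAL;

        /* Fixed: keep the return value in a plain int, then assign. */
        ret = match_string(level_names, ARRAY_SIZE(level_names), arg);
        if (ret < 0)
                return ret;
        level = ret;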
      
      [akpm@linux-foundation.org: reduce implicit and explicit typecasting]
      [akpm@linux-foundation.org: fix return value, add comment, per Matthew]
      Link: http://lkml.kernel.org/r/20190925110449.GO3264@mwanda
      Fixes: 3cadfa2b ("mm/vmpressure.c: convert to use match_string() helper")
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Reviewed-by: Matthew Wilcox <willy@infradead.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Enrico Weigelt <info@metux.net>
      Cc: Kate Stewart <kstewart@linuxfoundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      518a8671
    • mm/page_alloc.c: fix a crash in free_pages_prepare() · 234fdce8
      Qian Cai authored
      On architectures like s390, arch_free_page() could mark the page unused
      (set_page_unused()) and any access later would trigger a kernel panic.
      Fix it by moving arch_free_page() after all possible accessing calls.
      
       Hardware name: IBM 2964 N96 400 (z/VM 6.4.0)
       Krnl PSW : 0404e00180000000 0000000026c2b96e (__free_pages_ok+0x34e/0x5d8)
                  R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:3 CC:2 PM:0 RI:0 EA:3
       Krnl GPRS: 0000000088d43af7 0000000000484000 000000000000007c 000000000000000f
                  000003d080012100 000003d080013fc0 0000000000000000 0000000000100000
                  00000000275cca48 0000000000000100 0000000000000008 000003d080010000
                  00000000000001d0 000003d000000000 0000000026c2b78a 000000002717fdb0
       Krnl Code: 0000000026c2b95c: ec1100b30659 risbgn %r1,%r1,0,179,6
                  0000000026c2b962: e32014000036 pfd 2,1024(%r1)
                 #0000000026c2b968: d7ff10001000 xc 0(256,%r1),0(%r1)
                 >0000000026c2b96e: 41101100  la %r1,256(%r1)
                  0000000026c2b972: a737fff8  brctg %r3,26c2b962
                  0000000026c2b976: d7ff10001000 xc 0(256,%r1),0(%r1)
                  0000000026c2b97c: e31003400004 lg %r1,832
                  0000000026c2b982: ebff1430016a asi 5168(%r1),-1
       Call Trace:
       __free_pages_ok+0x16a/0x5d8)
       memblock_free_all+0x206/0x290
       mem_init+0x58/0x120
       start_kernel+0x2b0/0x570
       startup_continue+0x6a/0xc0
       INFO: lockdep is turned off.
       Last Breaking-Event-Address:
       __free_pages_ok+0x372/0x5d8
       Kernel panic - not syncing: Fatal exception: panic_on_oops
       00: HCPGIR450W CP entered; disabled wait PSW 00020001 80000000 00000000 26A2379C
      
      In the past, only kernel_poison_pages() would trigger this, but it
      needs the "page_poison=on" kernel cmdline and I suspect nobody tested
      that on s390.  Recently, kernel_init_free_pages() (commit 6471384a
      ("mm: security: introduce init_on_alloc=1 and init_on_free=1 boot
      options")) was added and could trigger this as well.
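
      A rough sketch of the reordering (illustrative only, using the
      page/order arguments of the surrounding function, not the literal
      free_pages_prepare() diff):

        /*
         * arch_free_page() may make the page inaccessible (s390
         * set_page_unused()), so it must come after anything that still
         * writes to the page.
         */
        kernel_poison_pages(page, 1 << order, 0);   /* touches page memory */
        kernel_init_free_pages(page, 1 << order);   /* touches page memory */
        arch_free_page(page, order);                /* last access is done */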
      
      [akpm@linux-foundation.org: add comment]
      Link: http://lkml.kernel.org/r/1569613623-16820-1-git-send-email-cai@lca.pw
      Fixes: 8823b1db ("mm/page_poison.c: enable PAGE_POISONING as a separate option")
      Fixes: 6471384a ("mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options")
      Signed-off-by: Qian Cai <cai@lca.pw>
      Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Alexander Duyck <alexander.duyck@gmail.com>
      Cc: <stable@vger.kernel.org>	[5.3+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      234fdce8
    • mm/z3fold.c: claim page in the beginning of free · 5b6807de
      Vitaly Wool authored
      There's a really hard to reproduce race in z3fold between z3fold_free()
      and z3fold_reclaim_page().  z3fold_reclaim_page() can claim the page
      after z3fold_free() has checked if the page was claimed and
      z3fold_free() will then schedule this page for compaction which may in
      turn lead to random page faults (since that page would have been
      reclaimed by then).
      
      Fix that by claiming the page at the beginning of z3fold_free(), and
      not forgetting to clear the claim at the end.
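
      A rough sketch of that flow, with simplified and partly hypothetical
      names (the real code uses z3fold's page-claim bit; pool, page and the
      helpers here are illustrative):

        /* Illustrative only: claim up front, clear it if the page lives on. */
        if (test_and_set_bit(PAGE_CLAIMED, &page->private))
                return;         /* reclaim already owns this page */

        /* ... free the object; reclaim can no longer steal the page ... */

        if (!page_fully_free) {
                clear_bit(PAGE_CLAIMED, &page->private);    /* don't forget */
                queue_for_compaction(pool, page);
        }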
      
      [vitalywool@gmail.com: v2]
        Link: http://lkml.kernel.org/r/20190928113456.152742cf@bigdell
      Link: http://lkml.kernel.org/r/20190926104844.4f0c6efa1366b8f5741eaba9@gmail.com
      Signed-off-by: Vitaly Wool <vitalywool@gmail.com>
      Reported-by: Markus Linnala <markus.linnala@gmail.com>
      Cc: Dan Streetman <ddstreet@ieee.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Henry Burns <henrywolfeburns@gmail.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Markus Linnala <markus.linnala@gmail.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5b6807de
    • kernel/sysctl.c: do not override max_threads provided by userspace · b0f53dbc
      Michal Hocko authored
      Partially revert 16db3d3f ("kernel/sysctl.c: threads-max observe
      limits") because the patch is causing a regression for any workload
      which needs to override the auto-tuning of the limit provided by the
      kernel.
      
      set_max_threads is implementing a boot-time guesstimate to provide a
      sensible limit on the number of concurrently running threads so that
      runaways will not deplete all the memory.  This is a good thing in
      general but there are workloads which might need to increase this limit
      for an application to run (reportedly WebSphere MQ is affected) and
      that is simply not possible after the mentioned change.  It is also
      very dubious to override an admin decision by an estimation that
      doesn't have any direct relation to correctness of the kernel
      operation.
      
      Fix this by dropping set_max_threads from sysctl_max_threads so any
      value is accepted as long as it fits into MAX_THREADS, which is
      important to check because allowing more threads could break the
      internal robust futex restriction.  While at it, do not use MIN_THREADS
      as the lower boundary, because it is also only a heuristic for the
      automatic estimation and an admin might have a good reason to stop new
      threads from being created even when below this limit.
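
      A sketch of the resulting handler behaviour (simplified; it assumes the
      standard sysctl handler parameters table/write/buffer/lenp/ppos and the
      proc_dointvec_minmax() helper rather than quoting the patch):

        /* Illustrative: accept any admin-provided value in [1, MAX_THREADS]
         * instead of re-running the set_max_threads() boot-time heuristic. */
        struct ctl_table t = *table;
        int threads = max_threads;
        int min = 1;                    /* no longer MIN_THREADS */
        int max = MAX_THREADS;          /* still needed for robust futexes */
        int ret;

        t.data = &threads;
        t.extra1 = &min;
        t.extra2 = &max;

        ret = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
        if (!ret && write)
                max_threads = threads;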
      
      This became more severe when x86 moved to larger kernel stacks: since
      6538b8ea ("x86_64: expand kernel stack to 16K") (3.16) we use
      THREAD_SIZE_ORDER = 2, and that halved the auto-tuned value.
      
      In the particular case
      
        3.12
        kernel.threads-max = 515561
      
        4.4
        kernel.threads-max = 200000
      
      Neither of the two values is really insane on a 32GB machine.
      
      I am not sure we want/need to tune the max_thread value further.  If
      anything the tuning should be removed altogether if proven not useful in
      general.  But we definitely need a way to override this auto-tuning.
      
      Link: http://lkml.kernel.org/r/20190922065801.GB18814@dhcp22.suse.cz
      Fixes: 16db3d3f ("kernel/sysctl.c: threads-max observe limits")
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Reviewed-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Heinrich Schuchardt <xypron.glpk@gmx.de>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b0f53dbc
    • memcg: only record foreign writebacks with dirty pages when memcg is not disabled · 08d1d0e6
      Baoquan He authored
      In the kdump kernel, memcg is usually disabled with
      'cgroup_disable=memory' to save memory.  Now the kdump kernel will
      always panic when dumping vmcore to a local disk:
      
        BUG: kernel NULL pointer dereference, address: 0000000000000ab8
        Oops: 0000 [#1] SMP NOPTI
        CPU: 0 PID: 598 Comm: makedumpfile Not tainted 5.3.0+ #26
        Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 10/02/2018
        RIP: 0010:mem_cgroup_track_foreign_dirty_slowpath+0x38/0x140
        Call Trace:
         __set_page_dirty+0x52/0xc0
         iomap_set_page_dirty+0x50/0x90
         iomap_write_end+0x6e/0x270
         iomap_write_actor+0xce/0x170
         iomap_apply+0xba/0x11e
         iomap_file_buffered_write+0x62/0x90
         xfs_file_buffered_aio_write+0xca/0x320 [xfs]
         new_sync_write+0x12d/0x1d0
         vfs_write+0xa5/0x1a0
         ksys_write+0x59/0xd0
         do_syscall_64+0x59/0x1e0
         entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      And this will corrupt the 1st kernel too with 'cgroup_disable=memory'.
      
      The trace, plus some debugging, points to commit 97b27821
      ("writeback, memcg: Implement foreign dirty flushing"), which
      introduced this regression.  With memcg disabled, the null pointer
      dereference happens on uninitialized data in
      mem_cgroup_track_foreign_dirty_slowpath().

      Fix it by returning directly when memcg is disabled, instead of trying
      to record the foreign writebacks with dirty pages.
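
      In other words, something along these lines (a sketch assuming the
      existing mem_cgroup_disabled() helper, not the literal hunk):

        /* Illustrative: bail out early when 'cgroup_disable=memory' is set. */
        if (mem_cgroup_disabled())
                return;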
      
      Link: http://lkml.kernel.org/r/20190924141928.GD31919@MiWiFi-R3L-srv
      Fixes: 97b27821 ("writeback, memcg: Implement foreign dirty flushing")
      Signed-off-by: Baoquan He <bhe@redhat.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      08d1d0e6
    • mm: fix -Wmissing-prototypes warnings · 758b8db4
      Yi Wang authored
      We get two warnings when building the kernel with W=1:
      
        mm/shuffle.c:36:12: warning: no previous prototype for `shuffle_show' [-Wmissing-prototypes]
        mm/sparse.c:220:6: warning: no previous prototype for `subsection_mask_set' [-Wmissing-prototypes]
      
      Make the functions static to fix this.
      
      Link: http://lkml.kernel.org/r/1566978161-7293-1-git-send-email-wang.yi59@zte.com.cn
      Signed-off-by: Yi Wang <wang.yi59@zte.com.cn>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      758b8db4
    • writeback: fix use-after-free in finish_writeback_work() · 8e00c4e9
      Tejun Heo authored
      finish_writeback_work() reads @done->waitq after decrementing
      @done->cnt.  However, once @done->cnt reaches zero, @done may be freed
      (from stack) at any moment and @done->waitq can contain something
      unrelated by the time finish_writeback_work() tries to read it.  This
      led to the following crash.
      
        "BUG: kernel NULL pointer dereference, address: 0000000000000002"
        #PF: supervisor write access in kernel mode
        #PF: error_code(0x0002) - not-present page
        PGD 0 P4D 0
        Oops: 0002 [#1] SMP DEBUG_PAGEALLOC
        CPU: 40 PID: 555153 Comm: kworker/u98:50 Kdump: loaded Not tainted
        ...
        Workqueue: writeback wb_workfn (flush-btrfs-1)
        RIP: 0010:_raw_spin_lock_irqsave+0x10/0x30
        Code: 48 89 d8 5b c3 e8 50 db 6b ff eb f4 0f 1f 40 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 53 9c 5b fa 31 c0 ba 01 00 00 00 <f0> 0f b1 17 75 05 48 89 d8 5b c3 89 c6 e8 fe ca 6b ff eb f2 66 90
        RSP: 0018:ffffc90049b27d98 EFLAGS: 00010046
        RAX: 0000000000000000 RBX: 0000000000000246 RCX: 0000000000000000
        RDX: 0000000000000001 RSI: 0000000000000003 RDI: 0000000000000002
        RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000001
        R10: ffff889fff407600 R11: ffff88ba9395d740 R12: 000000000000e300
        R13: 0000000000000003 R14: 0000000000000000 R15: 0000000000000000
        FS:  0000000000000000(0000) GS:ffff88bfdfa00000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        CR2: 0000000000000002 CR3: 0000000002409005 CR4: 00000000001606e0
        Call Trace:
         __wake_up_common_lock+0x63/0xc0
         wb_workfn+0xd2/0x3e0
         process_one_work+0x1f5/0x3f0
         worker_thread+0x2d/0x3d0
         kthread+0x111/0x130
         ret_from_fork+0x1f/0x30
      
      Fix it by reading and caching @done->waitq before decrementing
      @done->cnt.
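
      A minimal sketch of that fix (simplified from the wb_completion
      handling described above, not the exact patch):

        static void finish_writeback_work_sketch(struct wb_completion *done)
        {
                wait_queue_head_t *waitq = done->waitq;

                /* once the count hits zero, @done (on the waiter's stack)
                 * may already be gone, so only the cached waitq is used */
                if (atomic_dec_and_test(&done->cnt))
                        wake_up_all(waitq);
        }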
      
      Link: http://lkml.kernel.org/r/20190924010631.GH2233839@devbig004.ftw2.facebook.com
      Fixes: 5b9cce4c ("writeback: Generalize and expose wb_completion")
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Debugged-by: Chris Mason <clm@fb.com>
      Reviewed-by: Jens Axboe <axboe@kernel.dk>
      Cc: Jan Kara <jack@suse.cz>
      Cc: <stable@vger.kernel.org>	[5.2+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8e00c4e9
    • mm/memremap: drop unused SECTION_SIZE and SECTION_MASK · 6d0e9849
      Anshuman Khandual authored
      The SECTION_SIZE and SECTION_MASK macros are no longer used, but they
      conflict with existing definitions on the arm64 platform, causing the
      following warnings during build.  Let's drop these unused macros.
      
        mm/memremap.c:16: warning: "SECTION_MASK" redefined
         #define SECTION_MASK ~((1UL << PA_SECTION_SHIFT) - 1)
        arch/arm64/include/asm/pgtable-hwdef.h:79: note: this is the location of the previous definition
         #define SECTION_MASK  (~(SECTION_SIZE-1))
      
        mm/memremap.c:17: warning: "SECTION_SIZE" redefined
         #define SECTION_SIZE (1UL << PA_SECTION_SHIFT)
        arch/arm64/include/asm/pgtable-hwdef.h:78: note: this is the location of the previous definition
         #define SECTION_SIZE  (_AC(1, UL) << SECTION_SHIFT)
      
      Link: http://lkml.kernel.org/r/1569312010-31313-1-git-send-email-anshuman.khandual@arm.com
      Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
      Reported-by: kbuild test robot <lkp@intel.com>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Jason Gunthorpe <jgg@ziepe.ca>
      Cc: Logan Gunthorpe <logang@deltatee.com>
      Cc: Ira Weiny <ira.weiny@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6d0e9849
    • panic: ensure preemption is disabled during panic() · 20bb759a
      Will Deacon authored
      Calling 'panic()' on a kernel with CONFIG_PREEMPT=y can leave the
      calling CPU in an infinite loop, but with interrupts and preemption
      enabled.  From this state, userspace can continue to be scheduled,
      despite the system being "dead" as far as the kernel is concerned.
      
      This is easily reproducible on arm64 when booting with "nosmp" on the
      command line; a couple of shell scripts print out a periodic "Ping"
      message whilst another triggers a crash by writing to
      /proc/sysrq-trigger:
      
        | sysrq: Trigger a crash
        | Kernel panic - not syncing: sysrq triggered crash
        | CPU: 0 PID: 1 Comm: init Not tainted 5.2.15 #1
        | Hardware name: linux,dummy-virt (DT)
        | Call trace:
        |  dump_backtrace+0x0/0x148
        |  show_stack+0x14/0x20
        |  dump_stack+0xa0/0xc4
        |  panic+0x140/0x32c
        |  sysrq_handle_reboot+0x0/0x20
        |  __handle_sysrq+0x124/0x190
        |  write_sysrq_trigger+0x64/0x88
        |  proc_reg_write+0x60/0xa8
        |  __vfs_write+0x18/0x40
        |  vfs_write+0xa4/0x1b8
        |  ksys_write+0x64/0xf0
        |  __arm64_sys_write+0x14/0x20
        |  el0_svc_common.constprop.0+0xb0/0x168
        |  el0_svc_handler+0x28/0x78
        |  el0_svc+0x8/0xc
        | Kernel Offset: disabled
        | CPU features: 0x0002,24002004
        | Memory Limit: none
        | ---[ end Kernel panic - not syncing: sysrq triggered crash ]---
        |  Ping 2!
        |  Ping 1!
        |  Ping 1!
        |  Ping 2!
      
      The issue can also be triggered on x86 kernels if CONFIG_SMP=n,
      otherwise local interrupts are disabled in 'smp_send_stop()'.
      
      Disable preemption in 'panic()' before re-enabling interrupts.
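
      A sketch of the change (illustrative placement; the exact call site
      inside panic() may differ):

        /* Keep the panicking CPU from ever being preempted again, even once
         * interrupts are re-enabled further down in panic(). */
        local_irq_disable();
        preempt_disable_notrace();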
      
      Link: http://lkml.kernel.org/r/20191002123538.22609-1-will@kernel.org
      Link: https://lore.kernel.org/r/BX1W47JXPMR8.58IYW53H6M5N@dragonstone
      Signed-off-by: Will Deacon <will@kernel.org>
      Reported-by: Xogium <contact@xogium.me>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Petr Mladek <pmladek@suse.com>
      Cc: Feng Tang <feng.tang@intel.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      20bb759a
    • fs: ocfs2: fix a possible null-pointer dereference in ocfs2_info_scan_inode_alloc() · 2abb7d3b
      Jia-Ju Bai authored
      In ocfs2_info_scan_inode_alloc(), there is an if statement on line 283
      to check whether inode_alloc is NULL:
      
          if (inode_alloc)
      
      When inode_alloc is NULL, it is used on line 287:
      
          ocfs2_inode_lock(inode_alloc, &bh, 0);
              ocfs2_inode_lock_full_nested(inode, ...)
                  struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
      
      Thus, a possible null-pointer dereference may occur.
      
      To fix this bug, inode_alloc is checked on line 286.
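
      i.e., roughly (an illustrative sketch of that guard, with the
      surrounding context of the function omitted):

          /* Only take the cluster lock when there is an inode to lock. */
          if (inode_alloc)
                  ocfs2_inode_lock(inode_alloc, &bh, 0);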
      
      This bug is found by a static analysis tool STCheck written by us.
      
      Link: http://lkml.kernel.org/r/20190726033717.32359-1-baijiaju1990@gmail.com
      Signed-off-by: Jia-Ju Bai <baijiaju1990@gmail.com>
      Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Cc: Mark Fasheh <mark@fasheh.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Junxiao Bi <junxiao.bi@oracle.com>
      Cc: Changwei Ge <gechangwei@live.cn>
      Cc: Gang He <ghe@suse.com>
      Cc: Jun Piao <piaojun@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2abb7d3b
    • fs: ocfs2: fix a possible null-pointer dereference in ocfs2_write_end_nolock() · 583fee3e
      Jia-Ju Bai authored
      In ocfs2_write_end_nolock(), there are if statements on lines 1976,
      2047 and 2058 to check whether handle is NULL:
      
          if (handle)
      
      When handle is NULL, it is used on line 2045:
      
      	ocfs2_update_inode_fsync_trans(handle, inode, 1);
              oi->i_sync_tid = handle->h_transaction->t_tid;
      
      Thus, a possible null-pointer dereference may occur.
      
      To fix this bug, handle is checked before calling
      ocfs2_update_inode_fsync_trans().
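
      i.e., roughly (an illustrative sketch of that guard):

          /* Only bump the fsync transaction when a journal handle exists. */
          if (handle)
                  ocfs2_update_inode_fsync_trans(handle, inode, 1);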
      
      This bug is found by a static analysis tool STCheck written by us.
      
      Link: http://lkml.kernel.org/r/20190726033705.32307-1-baijiaju1990@gmail.com
      Signed-off-by: Jia-Ju Bai <baijiaju1990@gmail.com>
      Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Cc: Mark Fasheh <mark@fasheh.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Junxiao Bi <junxiao.bi@oracle.com>
      Cc: Changwei Ge <gechangwei@live.cn>
      Cc: Gang He <ghe@suse.com>
      Cc: Jun Piao <piaojun@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      583fee3e
    • fs: ocfs2: fix possible null-pointer dereferences in ocfs2_xa_prepare_entry() · 56e94ea1
      Jia-Ju Bai authored
      In ocfs2_xa_prepare_entry(), there is an if statement on line 2136 to
      check whether loc->xl_entry is NULL:
      
          if (loc->xl_entry)
      
      When loc->xl_entry is NULL, it is used on line 2158:
      
          ocfs2_xa_add_entry(loc, name_hash);
              loc->xl_entry->xe_name_hash = cpu_to_le32(name_hash);
              loc->xl_entry->xe_name_offset = cpu_to_le16(loc->xl_size);
      
      and line 2164:
      
          ocfs2_xa_add_namevalue(loc, xi);
              loc->xl_entry->xe_value_size = cpu_to_le64(xi->xi_value_len);
              loc->xl_entry->xe_name_len = xi->xi_name_len;
      
      Thus, possible null-pointer dereferences may occur.
      
      To fix these bugs, ocfs2_xa_prepare_entry() now returns -EINVAL when
      loc->xl_entry is NULL.
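
      i.e., roughly (an illustrative sketch of the early return):

          /* Bail out rather than dereferencing a missing entry later on. */
          if (!loc->xl_entry)
                  return -EINVAL;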
      
      These bugs are found by a static analysis tool STCheck written by us.
      
      [akpm@linux-foundation.org: remove now-unused ocfs2_xa_add_entry()]
      Link: http://lkml.kernel.org/r/20190726101447.9153-1-baijiaju1990@gmail.com
      Signed-off-by: Jia-Ju Bai <baijiaju1990@gmail.com>
      Reviewed-by: Joseph Qi <joseph.qi@linux.alibaba.com>
      Cc: Mark Fasheh <mark@fasheh.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Junxiao Bi <junxiao.bi@oracle.com>
      Cc: Changwei Ge <gechangwei@live.cn>
      Cc: Gang He <ghe@suse.com>
      Cc: Jun Piao <piaojun@huawei.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      56e94ea1
    • ocfs2: clear zero in unaligned direct IO · 7a243c82
      Jia Guo authored
      The unused portion of a part-written fs-block-sized block is not set to
      zero in an unaligned append direct write.  This can lead to serious
      data inconsistencies.

      Ocfs2 manages the disk in cluster-sized units (for example, 1M).  A
      partial write within one cluster changes the cluster state from
      UNWRITTEN to WRITTEN, but the VFS (dio_zero_block()) doesn't do the
      zeroing because the bh's state is not set to NEW in
      ocfs2_dio_wr_get_block() when we write to an already WRITTEN cluster.
      For example, if the cluster size is 1M, the file size is 8k and we
      direct write from 14k to 15k, then 12k~14k and 15k~16k will contain
      dirty data.
      
      We have to deal with two cases:
       1. The starting position of the direct write is outside the file.
       2. The starting position of the direct write is located in the file.

      We need to set the bh's state to NEW in the first case.  In the second
      case, we need to map twice, because the bh state for the area outside
      the file should be set to NEW while that for the area inside the file
      should not.
      
      [akpm@linux-foundation.org: coding style fixes]
      Link: http://lkml.kernel.org/r/5292e287-8f1a-fd4a-1a14-661e555e0bed@huawei.com
      Signed-off-by: Jia Guo <guojia12@huawei.com>
      Reviewed-by: Yiwen Jiang <jiangyiwen@huawei.com>
      Cc: Mark Fasheh <mark@fasheh.com>
      Cc: Joel Becker <jlbec@evilplan.org>
      Cc: Junxiao Bi <junxiao.bi@oracle.com>
      Cc: Joseph Qi <joseph.qi@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7a243c82
  2. 04 Oct, 2019 14 commits
    • Merge tag 'mips_fixes_5.4_1' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux · 4ea65534
      Linus Torvalds authored
      Pull MIPS fixes from Paul Burton:
      
       - Build fixes for Cavium Octeon & PMC-Sierra MSP systems, as well as
         all pre-MIPSr6 configurations built with binutils < 2.25.
      
       - Boot fixes for 64-bit Loongson systems & SGI IP28 systems.
      
       - Wire up the new clone3 syscall.
      
       - Clean ups for a few build-time warnings.
      
      * tag 'mips_fixes_5.4_1' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
        MIPS: fw/arc: Remove unused addr variable
        MIPS: pmcs-msp71xx: Remove unused addr variable
        MIPS: pmcs-msp71xx: Add missing MAX_PROM_MEM definition
        mips: Loongson: Fix the link time qualifier of 'serial_exit()'
        MIPS: init: Prevent adding memory before PHYS_OFFSET
        MIPS: init: Fix reservation of memory between PHYS_OFFSET and mem start
        MIPS: VDSO: Fix build for binutils < 2.25
        MIPS: VDSO: Remove unused gettimeofday.c
        MIPS: Wire up clone3 syscall
        MIPS: octeon: Include required header; fix octeon ethernet build
        MIPS: cpu-bugs64: Mark inline functions as __always_inline
        MIPS: dts: ar9331: fix interrupt-controller size
        MIPS: Loongson64: Fix boot failure after dropping boot_mem_map
      4ea65534
    • Merge tag 'riscv/for-v5.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux · 812ad49d
      Linus Torvalds authored
      Pull RISC-V fixes from Paul Walmsley:
      
       - Ensure that exclusive-load reservations are terminated after system
         call or exception handling. This primarily affects QEMU, which does
         not expire load reservations.
      
       - Fix an issue primarily affecting RV32 platforms that can cause the DT
         header to be corrupted, causing boot failures.
      
      * tag 'riscv/for-v5.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
        riscv: Fix memblock reservation for device tree blob
        RISC-V: Clear load reservations while restoring hart contexts
      812ad49d
    • Merge tag 'devicetree-fixes-for-5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux · a4ad51e9
      Linus Torvalds authored
      Pull DeviceTree fixes from Rob Herring:
       "Fix several 'dt_binding_check' build failures"
      
      * tag 'devicetree-fixes-for-5.4' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux:
        dt-bindings: phy: lantiq: Fix Property Name
        dt-bindings: iio: ad7192: Fix DTC warning in the example
        dt-bindings: iio: ad7192: Fix Regulator Properties
        dt-bindings: media: rc: Fix redundant string
        dt-bindings: dsp: Fix fsl,dsp example
      a4ad51e9
    • MIPS: fw/arc: Remove unused addr variable · 6822c29d
      Paul Burton authored
      The addr variable in prom_free_prom_memory() has been unused since
      commit 0df10076 ("MIPS: fw: Record prom memory"), leading to a
      compiler warning:
      
        arch/mips/fw/arc/memory.c:163:16:
          warning: unused variable 'addr' [-Wunused-variable]
      
      Fix this by removing the unused variable.
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Fixes: 0df10076 ("MIPS: fw: Record prom memory")
      Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
      Cc: linux-mips@vger.kernel.org
      6822c29d
    • Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm · b145b0eb
      Linus Torvalds authored
      Pull KVM fixes from Paolo Bonzini:
       "ARM and x86 bugfixes of all kinds.
      
        The most visible one is that migrating a nested hypervisor has always
        been busted on Broadwell and newer processors, and that has finally
        been fixed"
      
      * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (22 commits)
        KVM: x86: omit "impossible" pmu MSRs from MSR list
        KVM: nVMX: Fix consistency check on injected exception error code
        KVM: x86: omit absent pmu MSRs from MSR list
        selftests: kvm: Fix libkvm build error
        kvm: vmx: Limit guest PMCs to those supported on the host
        kvm: x86, powerpc: do not allow clearing largepages debugfs entry
        KVM: selftests: x86: clarify what is reported on KVM_GET_MSRS failure
        KVM: VMX: Set VMENTER_L1D_FLUSH_NOT_REQUIRED if !X86_BUG_L1TF
        selftests: kvm: add test for dirty logging inside nested guests
        KVM: x86: fix nested guest live migration with PML
        KVM: x86: assign two bits to track SPTE kinds
        KVM: x86: Expose XSAVEERPTR to the guest
        kvm: x86: Enumerate support for CLZERO instruction
        kvm: x86: Use AMD CPUID semantics for AMD vCPUs
        kvm: x86: Improve emulation of CPUID leaves 0BH and 1FH
        KVM: X86: Fix userspace set invalid CR4
        kvm: x86: Fix a spurious -E2BIG in __do_cpuid_func
        KVM: LAPIC: Loosen filter for adaptive tuning of lapic_timer_advance_ns
        KVM: arm/arm64: vgic: Use the appropriate TRACE_INCLUDE_PATH
        arm64: KVM: Kill hyp_alternate_select()
        ...
      b145b0eb
    • Merge tag 'for-linus-5.4-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip · 50dfd03d
      Linus Torvalds authored
      Pull xen fixes and cleanups from Juergen Gross:
      
       - a fix in the Xen balloon driver avoiding hitting a BUG_ON() in some
         cases, plus a follow-on cleanup series for that driver
      
       - a patch for introducing non-blocking EFI callbacks in Xen's EFI
          driver, plus a cleanup patch for Xen EFI handling merging the x86 and
         ARM arch specific initialization into the Xen EFI driver
      
       - a fix of the Xen xenbus driver avoiding a self-deadlock when cleaning
         up after a user process has died
      
       - a fix for Xen on ARM after removal of ZONE_DMA
      
       - a cleanup patch for avoiding build warnings for Xen on ARM
      
      * tag 'for-linus-5.4-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
        xen/xenbus: fix self-deadlock after killing user process
        xen/efi: have a common runtime setup function
        arm: xen: mm: use __GPF_DMA32 for arm64
        xen/balloon: Clear PG_offline in balloon_retrieve()
        xen/balloon: Mark pages PG_offline in balloon_append()
        xen/balloon: Drop __balloon_append()
        xen/balloon: Set pages PageOffline() in balloon_add_region()
        ARM: xen: unexport HYPERVISOR_platform_op function
        xen/efi: Set nonblocking callbacks
      50dfd03d
    • Merge tag 'copy-struct-from-user-v5.4-rc2' of... · e524d16e
      Linus Torvalds authored
      Merge tag 'copy-struct-from-user-v5.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux
      
      Pull copy_struct_from_user() helper from Christian Brauner:
       "This contains the copy_struct_from_user() helper which got split out
        from the openat2() patchset. It is a generic interface designed to
        copy a struct from userspace.
      
        The helper will be especially useful for structs versioned by size of
        which we have quite a few. This allows for backwards compatibility,
        i.e. an extended struct can be passed to an older kernel, or a legacy
        struct can be passed to a newer kernel. For the first case (extended
        struct, older kernel) the new fields in an extended struct can be set
        to zero and the struct safely passed to an older kernel.
      
        The most obvious benefit is that this helper lets us get rid of
        duplicate code present in at least sched_setattr(), perf_event_open(),
        and clone3(). More importantly it will also help to ensure that users
        implementing versioning-by-size end up with the same core semantics.
      
        This point is especially crucial since we have at least one case where
         versioning-by-size is used but with slightly different semantics:
         sched_setattr(), perf_event_open(), and clone3() all do similar
        checks to copy_struct_from_user() while rt_sigprocmask(2) always
        rejects differently-sized struct arguments.
      
        With this pull request we also switch over sched_setattr(),
        perf_event_open(), and clone3() to use the new helper"
      
      * tag 'copy-struct-from-user-v5.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux:
        usercopy: Add parentheses around assignment in test_copy_struct_from_user
        perf_event_open: switch to copy_struct_from_user()
        sched_setattr: switch to copy_struct_from_user()
        clone3: switch to copy_struct_from_user()
        lib: introduce copy_struct_from_user() helper
      e524d16e
    • Merge tag 'for-linus-20191003' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux · af0622f6
      Linus Torvalds authored
      Pull clone3/pidfd fixes from Christian Brauner:
       "This contains a couple of fixes:
      
          - Fix pidfd selftest compilation (Shuah Khan)
      
            Due to a false linking instruction in the Makefile, compilation of
            the pidfd selftests would fail on some systems.
      
         - Fix compilation for glibc on RISC-V systems (Seth Forshee)
      
           In some scenarios linux/uapi/linux/sched.h is included where
           __ASSEMBLY__ is defined causing a build failure because struct
           clone_args was not guarded by an #ifndef __ASSEMBLY__.
      
         - Add missing clone3() and struct clone_args kernel-doc (Christian Brauner)
      
           clone3() and struct clone_args were missing kernel-docs. (The goal
           is to use kernel-doc for any function or type where it's worth it.)
           For struct clone_args this also contains a comment about the fact
           that it's versioned by size"
      
      * tag 'for-linus-20191003' of git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux:
        sched: add kernel-doc for struct clone_args
        fork: add kernel-doc for clone3
        selftests: pidfd: Fix undefined reference to pthread_create()
        sched: Add __ASSEMBLY__ guards around struct clone_args
      af0622f6
    • Merge tag 'drm-fixes-2019-10-04' of git://anongit.freedesktop.org/drm/drm · 768b47b7
      Linus Torvalds authored
      Pull drm fixes from Dave Airlie:
       "Been offline for 3 days, got back and had some fixes queued up.
      
        Nothing too major, the i915 dp-mst fix is important, and amdgpu has a
        bulk move speedup fix and some regressions, but nothing too insane for
        an rc2 pull. The intel fixes are also 2 weeks worth, they missed the
        boat last week.
      
        core:
         - writeback fixes
      
        i915:
         - Fix DP-MST crtc_mask
         - Fix dsc dpp calculations
         - Fix g4x sprite scaling stride check with GTT remapping
          - Fix concurrency in cases where requests were getting retired at the
            same time as being resubmitted to HW
         - Fix gen9 display resolutions by setting the right max plane width
         - Fix GPU hang on preemption
         - Mark contents as dirty on a write fault. This was breaking cursor
           sprite with dumb buffers.
      
        komeda:
         - memory leak fix
      
        tilcdc:
         - include fix
      
        amdgpu:
         - Enable bulk moves
         - Power metrics fixes for Navi
         - Fix S4 regression
         - Add query for tcc disabled mask
         - Fix several leaks in error paths
         - randconfig fixes
         - clang fixes"
      
      * tag 'drm-fixes-2019-10-04' of git://anongit.freedesktop.org/drm/drm: (21 commits)
        Revert "drm/i915: Fix DP-MST crtc_mask"
        drm/omap: fix max fclk divider for omap36xx
        drm/i915: Fix g4x sprite scaling stride check with GTT remapping
        drm/i915/dp: Fix dsc bpp calculations, v5.
        drm/amd/display: fix dcn21 Makefile for clang
        drm/amd/display: hide an unused variable
        drm/amdgpu: display_mode_vba_21: remove uint typedef
        drm/amdgpu: hide another #warning
        drm/amdgpu: make pmu support optional, again
        drm/amd/display: memory leak
        drm/amdgpu: fix multiple memory leaks in acp_hw_init
        drm/amdgpu: return tcc_disabled_mask to userspace
        drm/amdgpu: don't increment vram lost if we are in hibernation
        Revert "drm/amdgpu: disable stutter mode for renoir"
        drm/amd/powerplay: add sensor lock support for smu
        drm/amd/powerplay: change metrics update period from 1ms to 100ms
        drm/amdgpu: revert "disable bulk moves for now"
        drm/tilcdc: include linux/pinctrl/consumer.h again
        drm/komeda: prevent memory leak in komeda_wb_connector_add
        drm: Clear the fence pointer when writeback job signaled
        ...
      768b47b7
    • Merge tag 'for-linus-2019-10-03' of git://git.kernel.dk/linux-block · c4bd70e8
      Linus Torvalds authored
      Pull block fixes from Jens Axboe:
      
       - Mandate timespec64 for the io_uring timeout ABI (Arnd)
      
       - Set of NVMe changes via Sagi:
           - controller removal race fix from Balbir
           - quirk additions from Gabriel and Jian-Hong
           - nvme-pci power state save fix from Mario
           - Add 64bit user commands (for 64bit registers) from Marta
           - nvme-rdma/nvme-tcp fixes from Max, Mark and Me
           - Minor cleanups and nits from James, Dan and John
      
       - Two s390 dasd fixes (Jan, Stefan)
      
       - Have loop change block size in DIO mode (Martijn)
      
       - paride pg header ifdef guard (Masahiro)
      
       - Two blk-mq queue scheduler tweaks, fixing an ordering issue on zoned
         devices and suboptimal performance on others (Ming)
      
      * tag 'for-linus-2019-10-03' of git://git.kernel.dk/linux-block: (22 commits)
        block: sed-opal: fix sparse warning: convert __be64 data
        block: sed-opal: fix sparse warning: obsolete array init.
        block: pg: add header include guard
        Revert "s390/dasd: Add discard support for ESE volumes"
        s390/dasd: Fix error handling during online processing
        io_uring: use __kernel_timespec in timeout ABI
        loop: change queue block size to match when using DIO
        blk-mq: apply normal plugging for HDD
        blk-mq: honor IO scheduler for multiqueue devices
        nvme-rdma: fix possible use-after-free in connect timeout
        nvme: Move ctrl sqsize to generic space
        nvme: Add ctrl attributes for queue_count and sqsize
        nvme: allow 64-bit results in passthru commands
        nvme: Add quirk for Kingston NVME SSD running FW E8FK11.T
        nvmet-tcp: remove superflous check on request sgl
        Added QUIRKs for ADATA XPG SX8200 Pro 512GB
        nvme-rdma: Fix max_hw_sectors calculation
        nvme: fix an error code in nvme_init_subsystem()
        nvme-pci: Save PCI state before putting drive into deepest state
        nvme-tcp: fix wrong stop condition in io_work
        ...
      c4bd70e8
    • KVM: x86: omit "impossible" pmu MSRs from MSR list · cf05a67b
      Paolo Bonzini authored
      INTEL_PMC_MAX_GENERIC is currently 32, which exceeds the 18
      contiguous MSR indices reserved by Intel for event selectors.
      Since some machines actually have MSRs past the reserved range,
      filtering them against x86_pmu.num_counters_gp may have false
      positives.  Cut the list to 18 entries to avoid this.
      Reported-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Suggested-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Jim Mattson <jmattson@google.com>
      Fixes: e2ada66e ("kvm: x86: Add Intel PMU MSRs to msrs_to_save[]", 2019-08-21)
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      cf05a67b
    • Merge tag 'drm-intel-fixes-2019-10-03-1' of... · 07bba341
      Dave Airlie authored
      Merge tag 'drm-intel-fixes-2019-10-03-1' of git://anongit.freedesktop.org/drm/drm-intel into drm-fixes
      
      - Fix DP-MST crtc_mask
      - Fix dsc dpp calculations
      - Fix g4x sprite scaling stride check with GTT remapping
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      
      From: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191003193051.GA26421@intel.com
      07bba341
    • Merge tag 'drm-misc-fixes-2019-10-03' of git://anongit.freedesktop.org/drm/drm-misc into drm-fixes · 63c4cec7
      Dave Airlie authored
       - One include fix for tilcdc
       - A clock fix for OMAP
       - A memory leak fix for Komeda
       - Some fixes for resources cleanups with writeback
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      
      From: Maxime Ripard <mripard@kernel.org>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191003081031.oykms5fg4tijvdri@gilmour
      63c4cec7
    • Merge tag 'drm-fixes-5.4-2019-10-02' of git://people.freedesktop.org/~agd5f/linux into drm-fixes · 0f83eb88
      Dave Airlie authored
      drm-fixes-5.4-2019-10-02:
      
      amdgpu:
      - Enable bulk moves
      - Power metrics fixes for Navi
      - Fix S4 regression
      - Add query for tcc disabled mask
      - Fix several leaks in error paths
      - randconfig fixes
      - clang fixes
      Signed-off-by: Dave Airlie <airlied@redhat.com>
      From: Alex Deucher <alexdeucher@gmail.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191002204909.3519-1-alexander.deucher@amd.com
      0f83eb88
  3. 03 Oct, 2019 10 commits