20 Apr, 2016 (28 commits)
12 Apr, 2016 (12 commits)
    • Linux 4.5.1 · d5441f92
      Greg Kroah-Hartman authored
    • perf/x86/intel: Fix PEBS data source interpretation on Nehalem/Westmere · 73d45efd
      Andi Kleen authored
      commit e17dc653 upstream.
      
      Jiri reported some time ago that some entries in the PEBS data source table
      in perf do not agree with the SDM. We investigated and the bits
      changed for Sandy Bridge, but the SDM was not updated.
      
      perf already implements the bits correctly for Sandy Bridge
      and later. This patch fixes them up for Nehalem and Westmere.
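
      The upstream fix overrides the affected entries of the PEBS data
      source table during Nehalem/Westmere PMU initialization. A minimal
      sketch of that shape (the helper name follows the upstream commit,
      but the indices and encodings here are illustrative, not transcribed
      from the SDM):

        /* Remap the data source entries whose meaning changed on Sandy
         * Bridge; Nehalem/Westmere keep the older encoding. */
        void intel_pmu_pebs_data_source_nhm(void)
        {
                pebs_data_source[0x05] = OP_LH | P(LVL, L3) | P(SNOOP, HIT);
                pebs_data_source[0x06] = OP_LH | P(LVL, L3) | P(SNOOP, HITM);
                pebs_data_source[0x07] = OP_LH | P(LVL, L3) | P(SNOOP, HITM);
        }
        /* called from the Nehalem and Westmere branches of intel_pmu_init() */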
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: jolsa@kernel.org
      Link: http://lkml.kernel.org/r/1456871124-15985-1-git-send-email-andi@firstfloor.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • perf/x86/intel: Use PAGE_SIZE for PEBS buffer size on Core2 · 1b3b7e5e
      Jiri Olsa authored
      commit e72daf3f upstream.
      
      Using PEBS buffers larger than PAGE_SIZE makes the WRMSR to
      PERF_GLOBAL_CTRL in intel_pmu_enable_all() mysteriously hang on
      Core2. As a workaround, we don't do this and cap the PEBS buffer at
      PAGE_SIZE on Core2.
      
      The hard lockup is easily triggered by running 'perf test attr'
      repeatedly. Most of the time it gets stuck on a sample session with
      small periods.
      
        # perf test attr -vv
        14: struct perf_event_attr setup                             :
        --- start ---
        ...
          'PERF_TEST_ATTR=/tmp/tmpuEKz3B /usr/bin/perf record -o /tmp/tmpuEKz3B/perf.data -c 123 kill >/dev/null 2>&1' ret 1
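
      A minimal sketch of the workaround's shape, with field and symbol
      names quoted from memory (treat them as illustrative): the PEBS
      buffer size becomes a per-PMU parameter instead of a compile-time
      constant, and the Core2 init path clamps it to one page.

        #define PEBS_BUFFER_SIZE        (PAGE_SIZE << 4)        /* default */

        static int alloc_pebs_buffer(int cpu)
        {
                int node = cpu_to_node(cpu);
                void *buffer;

                /* size now comes from the PMU description, not a constant */
                buffer = kzalloc_node(x86_pmu.pebs_buffer_size, GFP_KERNEL, node);
                if (unlikely(!buffer))
                        return -ENOMEM;
                /* ... hook the buffer into the debug store as before ... */
                return 0;
        }

        /* in the Core2 branch of intel_pmu_init(): */
        x86_pmu.pebs_buffer_size = PAGE_SIZE;   /* bigger buffers hang Core2 */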
      Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Andi Kleen <ak@linux.intel.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: http://lkml.kernel.org/r/20160301190352.GA8355@krava.redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • perf/x86/intel: Fix PEBS warning by only restoring active PMU in pmi · ff8f5b41
      Kan Liang authored
      commit c3d266c8 upstream.
      
      This patch tries to fix a PEBS warning found in my stress test. The
      following perf command can easily trigger the PEBS warning or a
      spurious NMI error on Skylake/Broadwell/Haswell platforms:
      
        sudo perf record -e 'cpu/umask=0x04,event=0xc4/pp,cycles,branches,ref-cycles,cache-misses,cache-references' --call-graph fp -b -c1000 -a
      
      The NMI watchdog must also be enabled.

      In this case, there are more events than counters, so perf has to
      multiplex them.

      In perf_mux_hrtimer_handler(), it does perf_pmu_disable(), schedules
      out the old events, rotates the context, schedules in the new events
      and finally does perf_pmu_enable().

      If the old events include a precise event, MSR_IA32_PEBS_ENABLE
      should be cleared by perf_pmu_disable() and should stay 0 until
      perf_pmu_enable() is called and the new event is a precise event.

      However, there is a corner case which can restore PEBS_ENABLE to a
      stale value during this window. In perf_pmu_disable(), GLOBAL_CTRL is
      set to 0 to stop overflows and subsequent PMIs. But there may be a
      pending PMI from an earlier overflow, which cannot be stopped. So
      even with GLOBAL_CTRL cleared, the kernel can still take a PMI. At
      the end of the PMI handler, __intel_pmu_enable_all() is called, which
      restores the stale values if the old events have not been scheduled
      out yet.

      Once the stale PEBS value is set, it cannot be corrected if the new
      events are non-precise, because pebs_enabled will be 0 and
      x86_pmu.enable_all() will ignore the MSR_IA32_PEBS_ENABLE setting. As
      a result, the next NMI with the stale PEBS_ENABLE triggers the PEBS
      warning.

      A PMI pending after enabled=0 becomes harmless as long as the NMI
      handler does not change the state. This patch checks cpuc->enabled in
      the PMI handler and only restores the state when the PMU is active.
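
      A minimal sketch of the fix's shape in intel_pmu_handle_irq(), with
      names quoted from memory:

        static int intel_pmu_handle_irq(struct pt_regs *regs)
        {
                struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
                /* snapshot whether the PMU was active when the PMI arrived */
                int pmu_enabled = cpuc->enabled;
                int handled = 0;

                __intel_pmu_disable_all();
                /* ... drain PEBS, process counter overflows ... */

                /* Only restore the PMU state if it was active on entry; a
                 * PMI landing inside a perf_pmu_disable()/enable() section
                 * must not write back stale PEBS_ENABLE/GLOBAL_CTRL values. */
                if (pmu_enabled)
                        __intel_pmu_enable_all(0, true);
                return handled;
        }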
      
      Here is the dump:
      
        Call Trace:
         <NMI>  [<ffffffff813c3a2e>] dump_stack+0x63/0x85
         [<ffffffff810a46f2>] warn_slowpath_common+0x82/0xc0
         [<ffffffff810a483a>] warn_slowpath_null+0x1a/0x20
         [<ffffffff8100fe2e>] intel_pmu_drain_pebs_nhm+0x2be/0x320
         [<ffffffff8100caa9>] intel_pmu_handle_irq+0x279/0x460
         [<ffffffff810639b6>] ? native_write_msr_safe+0x6/0x40
         [<ffffffff811f290d>] ? vunmap_page_range+0x20d/0x330
         [<ffffffff811f2f11>] ?  unmap_kernel_range_noflush+0x11/0x20
         [<ffffffff8148379f>] ? ghes_copy_tofrom_phys+0x10f/0x2a0
         [<ffffffff814839c8>] ? ghes_read_estatus+0x98/0x170
         [<ffffffff81005a7d>] perf_event_nmi_handler+0x2d/0x50
         [<ffffffff810310b9>] nmi_handle+0x69/0x120
         [<ffffffff810316f6>] default_do_nmi+0xe6/0x100
         [<ffffffff810317f2>] do_nmi+0xe2/0x130
         [<ffffffff817aea71>] end_repeat_nmi+0x1a/0x1e
         [<ffffffff810639b6>] ? native_write_msr_safe+0x6/0x40
         [<ffffffff810639b6>] ? native_write_msr_safe+0x6/0x40
         [<ffffffff810639b6>] ? native_write_msr_safe+0x6/0x40
         <<EOE>>  <IRQ>  [<ffffffff81006df8>] ?  x86_perf_event_set_period+0xd8/0x180
         [<ffffffff81006eec>] x86_pmu_start+0x4c/0x100
         [<ffffffff8100722d>] x86_pmu_enable+0x28d/0x300
         [<ffffffff811994d7>] perf_pmu_enable.part.81+0x7/0x10
         [<ffffffff8119cb70>] perf_mux_hrtimer_handler+0x200/0x280
         [<ffffffff8119c970>] ?  __perf_install_in_context+0xc0/0xc0
         [<ffffffff8110f92d>] __hrtimer_run_queues+0xfd/0x280
         [<ffffffff811100d8>] hrtimer_interrupt+0xa8/0x190
         [<ffffffff81199080>] ?  __perf_read_group_add.part.61+0x1a0/0x1a0
         [<ffffffff81051bd8>] local_apic_timer_interrupt+0x38/0x60
         [<ffffffff817af01d>] smp_apic_timer_interrupt+0x3d/0x50
         [<ffffffff817ad15c>] apic_timer_interrupt+0x8c/0xa0
         <EOI>  [<ffffffff81199080>] ?  __perf_read_group_add.part.61+0x1a0/0x1a0
         [<ffffffff81123de5>] ?  smp_call_function_single+0xd5/0x130
         [<ffffffff81123ddb>] ?  smp_call_function_single+0xcb/0x130
         [<ffffffff81199080>] ?  __perf_read_group_add.part.61+0x1a0/0x1a0
         [<ffffffff8119765a>] event_function_call+0x10a/0x120
         [<ffffffff8119c660>] ? ctx_resched+0x90/0x90
         [<ffffffff811971e0>] ? cpu_clock_event_read+0x30/0x30
         [<ffffffff811976d0>] ? _perf_event_disable+0x60/0x60
         [<ffffffff8119772b>] _perf_event_enable+0x5b/0x70
         [<ffffffff81197388>] perf_event_for_each_child+0x38/0xa0
         [<ffffffff811976d0>] ? _perf_event_disable+0x60/0x60
         [<ffffffff811a0ffd>] perf_ioctl+0x12d/0x3c0
         [<ffffffff8134d855>] ? selinux_file_ioctl+0x95/0x1e0
         [<ffffffff8124a3a1>] do_vfs_ioctl+0xa1/0x5a0
         [<ffffffff81036d29>] ? sched_clock+0x9/0x10
         [<ffffffff8124a919>] SyS_ioctl+0x79/0x90
         [<ffffffff817ac4b2>] entry_SYSCALL_64_fastpath+0x1a/0xa4
        ---[ end trace aef202839fe9a71d ]---
        Uhhuh. NMI received for unknown reason 2d on CPU 2.
        Do you have a strange power saving mode enabled?
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Link: http://lkml.kernel.org/r/1457046448-6184-1-git-send-email-kan.liang@intel.com
      [ Fixed various typos and other small details. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • perf/x86/intel/uncore: Remove SBOX support for BDX-DE · e5b6971c
      Kan Liang authored
      commit 6cb2f1d9 upstream.
      
      BDX-DE and BDX-EP share the same uncore code path, but there is no
      SBOX on BDX-DE. This patch removes SBOX support for BDX-DE.
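
      A minimal sketch of the fix's shape (the model check and array names
      follow the upstream commit but are quoted from memory, so treat them
      as illustrative):

        void bdx_uncore_cpu_init(void)
        {
                if (bdx_uncore_cbox.num_boxes > boot_cpu_data.x86_max_cores)
                        bdx_uncore_cbox.num_boxes = boot_cpu_data.x86_max_cores;
                uncore_msr_uncores = bdx_msr_uncores;

                /* BDX-DE (model 86) has no SBOX, so drop it from the list */
                if (boot_cpu_data.x86_model == 86)
                        uncore_msr_uncores[BDX_MSR_UNCORE_SBOX] = NULL;
        }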
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: <tonyb@cybernetics.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Battersby <tonyb@cybernetics.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Link: http://lkml.kernel.org/r/37D7C6CF3E00A74B8858931C1DB2F0770589D336@SHSMSX103.ccr.corp.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • perf/x86/pebs: Add workaround for broken OVFL status on HSW+ · 41585808
      Stephane Eranian authored
      commit 8077eca0 upstream.
      
      This patch fixes an issue with the GLOBAL_OVERFLOW_STATUS bits on
      Haswell, Broadwell and Skylake processors when using PEBS.
      
      The SDM stipulates that when the PEBS interrupt threshold is crossed,
      an interrupt is posted and the kernel is interrupted. The kernel will
      find the GLOBAL_OVF_STATUS bit 62 set, indicating there are PEBS
      records to drain. But the bits corresponding to the actual counters
      should NOT be set. The kernel follows the SDM and assumes that all
      PEBS events are processed in the drain_pebs() callback. The kernel
      then checks for remaining overflows on any other (non-PEBS) events
      and processes these in the for_each_bit_set(&status) loop.
      
      As it turns out, under certain conditions on HSW and later processors,
      on PEBS buffer interrupt, bit 62 is set but the counter bits may be
      set as well. In that case, the kernel drains PEBS and generates
      SAMPLES with the EXACT tag, then it processes the counter bits, and
      generates normal (non-EXACT) SAMPLES.
      
      I ran into this problem while trying to understand why, on HSW,
      sampling on a PEBS event was sometimes returning SAMPLES without the
      EXACT tag. This should not happen for user-level code, because HSW
      has the eventing_ip, which always points to the instruction that
      caused the event.
      
      The workaround in this patch simply ensures that the bits for the
      counters used for PEBS events are cleared after the PEBS buffer has
      been drained. With this fix 100% of the PEBS samples on my user code
      report the EXACT tag.
      
      Before:
        $ perf record -e cpu/event=0xd0,umask=0x81/upp ./multichase
        $ perf report -D | fgrep SAMPLES
        PERF_RECORD_SAMPLE(IP, 0x2): 11775/11775: 0x406de5 period: 73469 addr: 0 exact=Y
                                 \--- EXACT tag is missing
      
      After:
        $ perf record -e cpu/event=0xd0,umask=0x81/upp ./multichase
        $ perf report -D | fgrep SAMPLES
        PERF_RECORD_SAMPLE(IP, 0x4002): 11775/11775: 0x406de5 period: 73469 addr: 0 exact=Y
                                 \--- EXACT tag is set
      
      The problem tends to appear more often when multiple PEBS events are used.
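
      A minimal sketch of the workaround's shape in the PEBS branch of
      intel_pmu_handle_irq(), quoted from memory:

        if (__test_and_clear_bit(62, (unsigned long *)&status)) {
                handled++;
                x86_pmu.drain_pebs(regs);
                /* The PEBS counters' own overflow bits may also be set.
                 * Their records were already processed as EXACT samples
                 * by drain_pebs(), so clear the bits to keep the generic
                 * loop below from emitting duplicate non-EXACT samples. */
                status &= ~cpuc->pebs_enabled;
        }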
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: adrian.hunter@intel.com
      Cc: kan.liang@intel.com
      Cc: namhyung@kernel.org
      Link: http://lkml.kernel.org/r/1457034642-21837-3-git-send-email-eranian@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • sched/cputime: Fix steal time accounting vs. CPU hotplug · a9a3cef5
      Thomas Gleixner authored
      commit e9532e69 upstream.
      
      On CPU hotplug the steal time accounting can keep a stale
      rq->prev_steal_time value over CPU down and up. So after the CPU
      comes up again, the delta calculation in steal_account_process_tick()
      wrecks itself due to the unsigned math:
      
      	 u64 steal = paravirt_steal_clock(smp_processor_id());
      
      	 steal -= this_rq()->prev_steal_time;
      
      So if steal is smaller than rq->prev_steal_time, we end up with an
      insanely large value which then gets added to rq->prev_steal_time,
      resulting in permanent wreckage of the accounting. As a consequence
      the per-CPU stats in /proc/stat become stale.

      Nice trick to tell the world how idle the system is (100%) while the
      CPU is 100% busy running tasks. Though we prefer realistic numbers.
      
      None of the accounting values which use a previous value to account
      for fractions are reset at CPU hotplug time. update_rq_clock_task()
      has a sanity check for prev_irq_time and prev_steal_time_rq, but that
      check solely deals with clock warps and limits the wreckage visible
      in /proc/stat. The prev_time values are still wrong.
      
      Solution is simple: Reset rq->prev_*_time when the CPU is plugged in again.
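
      A minimal sketch of the fix's shape (the helper name and its call
      site follow the upstream commit but are quoted from memory): clear
      the per-rq "previous" snapshots when a CPU comes up, so the next
      delta starts from a fresh baseline.

        static inline void account_reset_rq(struct rq *rq)
        {
        #ifdef CONFIG_IRQ_TIME_ACCOUNTING
                rq->prev_irq_time = 0;
        #endif
        #ifdef CONFIG_PARAVIRT
                rq->prev_steal_time = 0;
        #endif
        #ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
                rq->prev_steal_time_rq = 0;
        #endif
        }
        /* called from the CPU_UP_PREPARE leg of the scheduler's hotplug
         * notifier, next to the calc_load_update reset */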
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Glauber Costa <glommer@parallels.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Fixes: commit 095c0aa8 "sched: adjust scheduler cpu power for stolen time"
      Fixes: commit aa483808 "sched: Remove irq time from available CPU power"
      Fixes: commit e6e6685a "KVM guest: Steal time accounting"
      Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1603041539490.3686@nanos
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • scsi_common: do not clobber fixed sense information · be7cc5a6
      Hannes Reinecke authored
      commit ba083116 upstream.
      
      For fixed sense the information field is 32 bits, so we need to
      truncate the information field to avoid clobbering the sense code.
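
      A minimal sketch of the fixed-format branch of
      scsi_set_sense_information(), quoted from memory:

        } else if ((buf[0] & 0x7f) == 0x70) {
                /* Fixed format: the information field is bytes 3..6,
                 * i.e. only 32 bits. The old put_unaligned_be64() spilled
                 * past it and clobbered the sense code, so truncate and
                 * only flag the field valid when the value fits. */
                if (info <= 0xffffffff)
                        buf[0] |= 0x80;         /* VALID bit */
                put_unaligned_be32((u32)info, &buf[3]);
        }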
      
      Fixes: a1524f22 ("libata-eh: Set 'information' field for autosense")
      Signed-off-by: Hannes Reinecke <hare@suse.com>
      Reviewed-by: Lee Duncan <lduncan@suse.com>
      Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
      Reviewed-by: Ewan D. Milne <emilne@redhat.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • PM / sleep: Clear pm_suspend_global_flags upon hibernate · ab8a4365
      Lukas Wunner authored
      commit 27614273 upstream.
      
      When suspending to RAM, waking up and later suspending to disk,
      we gratuitously runtime resume devices after the thaw phase.
      This does not occur if we always suspend to RAM or always to disk.
      
      pm_complete_with_resume_check(), which gets called from
      pci_pm_complete() among others, schedules a runtime resume
      if PM_SUSPEND_FLAG_FW_RESUME is set. The flag is set during
      a suspend-to-RAM cycle. It is cleared at the beginning of
      the suspend-to-RAM cycle but not afterwards and it is not
      cleared during a suspend-to-disk cycle at all. Fix it.
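
      A minimal sketch of the fix's shape (pm_suspend_clear_flags() is the
      existing helper that zeroes pm_suspend_global_flags; the exact call
      site is quoted from memory):

        /* kernel/power/hibernate.c */
        int hibernation_snapshot(int platform_mode)
        {
                /* Clear flags left over from an earlier suspend-to-RAM
                 * cycle, so a stale PM_SUSPEND_FLAG_FW_RESUME cannot make
                 * pm_complete_with_resume_check() schedule a gratuitous
                 * runtime resume after thaw. */
                pm_suspend_clear_flags();
                /* ... existing snapshot sequence ... */
        }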
      
      Fixes: ef25ba04 (PM / sleep: Add flags to indicate platform firmware involvement)
      Signed-off-by: Lukas Wunner <lukas@wunner.de>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • intel_idle: prevent SKL-H boot failure when C8+C9+C10 enabled · 87afa6a7
      Len Brown authored
      commit d70e28f5 upstream.
      
      Some SKL-H configurations require "intel_idle.max_cstate=7" to boot.
      While that is an effective workaround, it disables C10.
      
      This patch detects the problematic configuration and disables
      C8 and C9, keeping C10 enabled.
      
      Note that enabling SGX in BIOS SETUP can also prevent this issue,
      if the system BIOS provides that option.
      
      https://bugzilla.kernel.org/show_bug.cgi?id=109081
      "Freezes with Intel i7 6700HQ (Skylake), unless intel_idle.max_cstate=7"
      Signed-off-by: Len Brown <len.brown@intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • mtd: onenand: fix deadlock in onenand_block_markbad · 5bb18668
      Aaro Koskinen authored
      commit 5e64c29e upstream.
      
      Commit 5942ddbc ("mtd: introduce mtd_block_markbad interface")
      incorrectly changed onenand_block_markbad() to call mtd_block_markbad()
      instead of onenand_chip's block_markbad function. As a result, the
      function now recurses and deadlocks. Fix by reverting the change.
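
      A minimal sketch of the fixed function, quoted from memory: the
      chip's own hook is called directly, since going through
      mtd_block_markbad() re-enters onenand_block_markbad() via
      mtd->_block_markbad and deadlocks.

        static int onenand_block_markbad(struct mtd_info *mtd, loff_t ofs)
        {
                struct onenand_chip *this = mtd->priv;
                int ret;

                ret = onenand_block_isbad(mtd, ofs);
                if (ret) {
                        /* already marked bad: nothing to do */
                        if (ret > 0)
                                return 0;
                        return ret;
                }

                onenand_get_device(mtd, FL_WRITING);
                ret = this->block_markbad(mtd, ofs);    /* was mtd_block_markbad() */
                onenand_release_device(mtd);
                return ret;
        }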
      
      Fixes: 5942ddbc ("mtd: introduce mtd_block_markbad interface")
      Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi>
      Acked-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
      Signed-off-by: Brian Norris <computersforpeace@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • mm/page_alloc: prevent merging between isolated and other pageblocks · cbb6e048
      Vlastimil Babka authored
      commit d9dddbf5 upstream.
      
      Hanjun Guo has reported that a CMA stress test causes broken accounting of
      CMA and free pages:
      
      > Before the test, I got:
      > -bash-4.3# cat /proc/meminfo | grep Cma
      > CmaTotal:         204800 kB
      > CmaFree:          195044 kB
      >
      >
      > After running the test:
      > -bash-4.3# cat /proc/meminfo | grep Cma
      > CmaTotal:         204800 kB
      > CmaFree:         6602584 kB
      >
      > So the freed CMA memory is more than total..
      >
      > Also the MemFree is more than mem total:
      >
      > -bash-4.3# cat /proc/meminfo
      > MemTotal:       16342016 kB
      > MemFree:        22367268 kB
      > MemAvailable:   22370528 kB
      
      Laura Abbott has confirmed the issue and suspected the freepage accounting
      rewrite around 3.18/4.0 by Joonsoo Kim.  Joonsoo had a theory that this is
      caused by unexpected merging between MIGRATE_ISOLATE and MIGRATE_CMA
      pageblocks:
      
      > CMA isolates MAX_ORDER aligned blocks, but, during the process,
      > partially isolated blocks exist. If MAX_ORDER is 11 and
      > pageblock_order is 9, two pageblocks make up a MAX_ORDER
      > aligned block and I can think of the following scenario, because
      > pageblock (un)isolation would be done one by one.
      >
      > (Each character means one pageblock. 'C' and 'I' mean MIGRATE_CMA
      > and MIGRATE_ISOLATE, respectively.)
      >
      > CC -> IC -> II (Isolation)
      > II -> CI -> CC (Un-isolation)
      >
      > If some pages are freed at this intermediate state such as IC or CI,
      > that page could be merged to another page that is resident on a
      > different type of pageblock, and it will cause a wrong freepage count.
      
      This was supposed to be prevented by CMA operating on MAX_ORDER blocks,
      but since it doesn't hold the zone->lock between pageblocks, a race
      window does exist.
      
      It's also likely that unexpected merging can occur between
      MIGRATE_ISOLATE and non-CMA pageblocks.  This should be prevented in
      __free_one_page() since commit 3c605096 ("mm/page_alloc: restrict
      max order of merging on isolated pageblock").  However, we only check
      the migratetype of the pageblock where buddy merging has been initiated,
      not the migratetype of the buddy pageblock (or group of pageblocks)
      which can be MIGRATE_ISOLATE.
      
      Joonsoo has suggested checking for buddy migratetype as part of
      page_is_buddy(), but that would add extra checks in allocator hotpath
      and bloat-o-meter has shown significant code bloat (the function is
      inline).
      
      This patch reduces the bloat at the expense of more complicated code.
      The buddy-merging while-loop in __free_one_page() is initially bounded
      to the pageblock boundary and runs without any migratetype checks. The
      checks are placed outside it, bumping the max_order if merging is
      allowed, and returning to the while-loop with a statement which can't
      possibly be considered harmful.
      
      This fixes the accounting bug and also removes the arguably weird state
      in the original commit 3c605096 where buddies could be left
      unmerged.
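
      A minimal sketch of the resulting control flow in __free_one_page()
      (labels and helper names quoted from memory):

        max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);

        continue_merging:
        while (order < max_order - 1) {
                /* ... merge with the buddy, bump order ... */
        }
        if (max_order < MAX_ORDER) {
                /* order >= pageblock_order here: only merge across the
                 * pageblock boundary if neither side is isolated */
                if (unlikely(has_isolate_pageblock(zone))) {
                        int buddy_mt;

                        buddy_idx = __find_buddy_index(page_idx, order);
                        buddy = page + (buddy_idx - page_idx);
                        buddy_mt = get_pageblock_migratetype(buddy);

                        if (migratetype != buddy_mt &&
                            (is_migrate_isolate(migratetype) ||
                             is_migrate_isolate(buddy_mt)))
                                goto done_merging;
                }
                max_order++;
                goto continue_merging;  /* the statement in question */
        }

        done_merging:
        /* ... record the final order and link the page into the free list ... */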
      
      Fixes: 3c605096 ("mm/page_alloc: restrict max order of merging on isolated pageblock")
      Link: https://lkml.org/lkml/2016/3/2/280
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reported-by: Hanjun Guo <guohanjun@huawei.com>
      Tested-by: Hanjun Guo <guohanjun@huawei.com>
      Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Debugged-by: Laura Abbott <labbott@redhat.com>
      Debugged-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
      Cc: Michal Nazarewicz <mina86@mina86.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>