- 19 Oct, 2015 1 commit
-
-
Chuck Ebbert authored
Fix obvious mistake: FS/GS should be DS/ES. Signed-off-by: Chuck Ebbert <cebbert.lkml@gmail.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20151014143119.78858eeb@r5 Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 06 Oct, 2015 18 commits
-
-
xiaofeng.yan authored
The parameter "int next_cpu" in the following function is unused: migrate_task_rq(struct task_struct *p, int next_cpu) Remove it. Signed-off-by: xiaofeng.yan <yanxiaofeng@inspur.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Link: http://lkml.kernel.org/r/1442991360-31945-1-git-send-email-yanxiaofeng@inspur.comSigned-off-by: Ingo Molnar <mingo@kernel.org>
-
Geliang Tang authored
(1) For !CONFIG_BUG cases, the bug call is a no-op, so we couldn't care less and the change is ok.

(2) PPC and MIPS, which HAVE_ARCH_BUG_ON, do not rely on branch predictions as it seems to be pointless [1] and thus callers should not be trying to push an optimization in the first place.

(3) For CONFIG_BUG and !HAVE_ARCH_BUG_ON cases, BUG_ON() contains an unlikely compiler flag already. Hence, we can drop unlikely behind BUG_ON().

[1] http://lkml.iu.edu/hypermail/linux/kernel/1101.3/02289.html

Signed-off-by: Geliang Tang <geliangtang@163.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Link: http://lkml.kernel.org/r/6fa7125979f98bbeac26e268271769b6ca935c8d.1444051018.git.geliangtang@163.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
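For reference on point (3), a minimal sketch of the generic form (per include/asm-generic/bug.h for the !HAVE_ARCH_BUG_ON case; shown here for illustration, not as the exact tree state) -- the branch hint is already built into the macro, so wrapping the condition again buys nothing:

    #define BUG_ON(condition) do { if (unlikely(condition)) BUG(); } while (0)

    /* so a caller can simply write: */
    BUG_ON(some_condition);        /* rather than BUG_ON(unlikely(some_condition)) */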
-
Peter Zijlstra authored
Mike Meyer reported the following bug:

> During evaluation of some performance data, it was discovered thread
> and run queue run_delay accounting data was inconsistent with the other
> accounting data that was collected. Further investigation found under
> certain circumstances execution time was leaking into the task and
> run queue accounting of run_delay.
>
> Consider the following sequence:
>
>   a. thread is running.
>   b. thread moves between cgroups, changes scheduling class or priority.
>   c. thread sleeps OR
>   d. thread involuntarily gives up cpu.
>
> a. implies:
>
>   thread->sched_info.last_queued = 0
>
> a. and b. results in the following:
>
>   1. dequeue_task(rq, thread)
>
>        sched_info_dequeued(rq, thread)
>          delta = 0
>
>          sched_info_reset_dequeued(thread)
>            thread->sched_info.last_queued = 0
>
>          thread->sched_info.run_delay += delta
>
>   2. enqueue_task(rq, thread)
>
>        sched_info_queued(rq, thread)
>
>          /* thread is still on cpu at this point. */
>          thread->sched_info.last_queued = task_rq(thread)->clock;
>
> c. results in:
>
>   dequeue_task(rq, thread)
>
>     sched_info_dequeued(rq, thread)
>
>       /* delta is execution time not run_delay. */
>       delta = task_rq(thread)->clock - thread->sched_info.last_queued
>
>     sched_info_reset_dequeued(thread)
>       thread->sched_info.last_queued = 0
>
>     thread->sched_info.run_delay += delta
>
> Since thread was running between enqueue_task(rq, thread) and
> dequeue_task(rq, thread), the delta above is really execution
> time and not run_delay.
>
> d. results in:
>
>   __sched_info_switch(thread, next_thread)
>
>     sched_info_depart(rq, thread)
>
>       sched_info_queued(rq, thread)
>
>         /* last_queued not updated due to being non-zero */
>         return
>
> Since thread was running between enqueue_task(rq, thread) and
> __sched_info_switch(thread, next_thread), the execution time
> between enqueue_task(rq, thread) and
> __sched_info_switch(thread, next_thread) now will become
> associated with run_delay due to when last_queued was last updated.

This alternative patch solves the problem by not calling sched_info_{de,}queued() in {de,en}queue_task(). Therefore the sched_info state is preserved and things work as expected.

By inlining the {de,en}queue_task() functions the new condition becomes (mostly) a compile-time constant and we'll not emit any new branch instructions.

It even shrinks the code (due to inlining {en,de}queue_task()):

  $ size defconfig-build/kernel/sched/core.o defconfig-build/kernel/sched/core.o.orig
     text    data     bss     dec     hex filename
    64019   23378    2344   89741   15e8d defconfig-build/kernel/sched/core.o
    64149   23378    2344   89871   15f0f defconfig-build/kernel/sched/core.o.orig

Reported-by: Mike Meyer <Mike.Meyer@Teradata.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Link: http://lkml.kernel.org/r/20150930154413.GO3604@twins.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
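For context, a simplified sketch of the dequeue-side bookkeeping being described (loosely based on kernel/sched/stats.h, trimmed for illustration): the delta is charged to run_delay whenever last_queued is non-zero, which is exactly what goes wrong when the task has in fact been running since the stamp was taken.

    static inline void sched_info_dequeued(struct rq *rq, struct task_struct *t)
    {
            unsigned long long delta = 0;

            if (t->sched_info.last_queued)
                    delta = rq_clock(rq) - t->sched_info.last_queued;

            t->sched_info.last_queued = 0;
            /* bogus if t was actually running since last_queued was stamped: */
            t->sched_info.run_delay += delta;
    }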
-
Srikar Dronamraju authored
If static branch 'sched_numa_balancing' is enabled, it should kickstart NUMA balancing through task_tick_numa(). However the following commit: 2a595721 ("sched/numa: Convert sched_numa_balancing to a static_branch") erroneously disables this. Fix this anomaly by enabling task_tick_numa() when the static branch 'sched_numa_balancing' is enabled. Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mel Gorman <mgorman@suse.de> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Link: http://lkml.kernel.org/r/1443752305-27413-1-git-send-email-srikar@linux.vnet.ibm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
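The intended gate in the tick path then reduces to the static branch test itself; a sketch of the fixed condition in task_tick_fair() (kernel/sched/fair.c), assuming the static_branch_unlikely() helper:

    if (static_branch_unlikely(&sched_numa_balancing))
            task_tick_numa(rq, curr);

With the key disabled this compiles down to a NOP; enabling it patches the jump in, so NUMA balancing gets kicked without adding a load/compare to the hot path.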
-
Peter Zijlstra authored
Ingo requested I keep my debug check for the preempt_count invariant. Requested-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Peter Zijlstra authored
preempt_schedule_common() is marked notrace, but it does not use _notrace() preempt_count functions and __schedule() is also not marked notrace, which means that it's perfectly possible to end up in the tracer from preempt_schedule_common().

Steve says:

| Yep, there's some history to this. This was originally the issue that
| caused function tracing to go into infinite recursion. But now we have
| preempt_schedule_notrace(), which is used by the function tracer, and
| that function must not be traced till preemption is disabled.
|
| Now if function tracing is running and we take an interrupt when
| NEED_RESCHED is set, it calls
|
|   preempt_schedule_common() (not traced)
|
| But then that calls preempt_disable() (traced)
|
| function tracer calls preempt_disable_notrace() followed by
| preempt_enable_notrace() which will see NEED_RESCHED set, and it will
| call preempt_schedule_notrace(), which stops the recursion, but
| still calls __schedule() here, and that means when we return, we call
| the __schedule() from preempt_schedule_common().
|
| That said, I prefer this patch. Preemption is disabled before calling
| __schedule(), and we get rid of a one round recursion with the
| scheduler.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Steven Rostedt <rostedt@goodmis.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
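A sketch of the resulting function, assuming the _notrace() helpers are used throughout so the function tracer is never re-entered from this path (simplified; the __schedule() argument follows the preemption-count series below):

    static void __sched notrace preempt_schedule_common(void)
    {
            do {
                    preempt_disable_notrace();
                    __schedule(true);               /* preemption point */
                    preempt_enable_no_resched_notrace();
            } while (need_resched());
    }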
-
Peter Zijlstra authored
It's unused; kill the definition. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Steven Rostedt <rostedt@goodmis.org> Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Peter Zijlstra authored
With the introduction of the context switch preempt_count invariant, and the demise of PREEMPT_ACTIVE, it's pointless to save/restore the per-cpu preemption count; it must always be 2. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Peter Zijlstra authored
Since we stopped setting PREEMPT_ACTIVE, there is no need to mask it out of preempt_count() tests. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Steven Rostedt <rostedt@goodmis.org> Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Peter Zijlstra authored
When we warn about a preempt_count leak, reset the preempt_count to the known good value such that the problem does not ripple forward. This is most important on x86, which has a per-cpu preempt_count that is not saved/restored (after this series). So if you schedule with an invalid (!2*PREEMPT_DISABLE_OFFSET) preempt_count, the next task is messed up too. Enforcing this invariant limits the borkage to just the one task. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Steven Rostedt <rostedt@goodmis.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
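A sketch of the idea in schedule_debug() (kernel/sched/core.c): warn once, then force the count back to the known-good value so only the offending task is affected. Exact symbols may differ slightly from the tree.

    if (unlikely(in_atomic_preempt_off())) {
            __schedule_bug(prev);                   /* report the leak */
            preempt_count_set(PREEMPT_DISABLED);    /* restore the invariant */
    }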
-
Peter Zijlstra authored
Now that nothing tests for PREEMPT_ACTIVE anymore, stop setting it. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Steven Rostedt <rostedt@goodmis.org> Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Peter Zijlstra authored
__trace_sched_switch_state() is the last remaining PREEMPT_ACTIVE user. Move trace_sched_switch() from prepare_task_switch() to __schedule() and propagate the @preempt argument. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Steven Rostedt <rostedt@goodmis.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Peter Zijlstra authored
There is only a single PREEMPT_ACTIVE use in the regular __schedule() path and that is to circumvent the task->state check. Since the code setting PREEMPT_ACTIVE is the immediate caller of __schedule() we can replace this with a function argument. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com> Reviewed-by: Steven Rostedt <rostedt@goodmis.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
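A sketch of what the replacement looks like inside __schedule(), under the assumption described above (simplified from kernel/sched/core.c):

    /* inside __schedule(bool preempt), simplified: */
    if (!preempt && prev->state) {
            /* voluntary schedule: honour the task state */
            if (unlikely(signal_pending_state(prev->state, prev)))
                    prev->state = TASK_RUNNING;
            else
                    deactivate_task(rq, prev, DEQUEUE_SLEEP);
    }
    /* preempt_schedule*() call __schedule(true) and skip the block above. */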
-
Peter Zijlstra authored
Assuming units of PREEMPT_DISABLE_OFFSET for preempt_count() numbers. Now that TASK_DEAD no longer results in preempt_count() == 3 during scheduling, we will always call context_switch() with preempt_count() == 2. However, we don't always end up with preempt_count() == 2 in finish_task_switch() because new tasks get created with preempt_count() == 1. Create FORK_PREEMPT_COUNT and set it to 2 and use that in the right places. Note that we cannot use INIT_PREEMPT_COUNT as that serves another purpose (boot). After this, preempt_count() is invariant across the context switch, with exception of PREEMPT_ACTIVE. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
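A sketch of the invariant this sets up, with the constant and the check expressed roughly as described (illustrative, not the verbatim diff):

    /* New tasks start life as if they had just been context-switched to: */
    #define FORK_PREEMPT_COUNT      (2*PREEMPT_DISABLE_OFFSET + PREEMPT_ENABLED)

    /* finish_task_switch() can then assume/assert: */
    WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
              "corrupted preempt_count: %s/%d/0x%x\n",
              current->comm, current->pid, preempt_count());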
-
Peter Zijlstra authored
TASK_DEAD is special in that the final schedule call from do_exit() must be done with preemption disabled. This means we end up scheduling with a preempt_count() higher than usual (3 instead of the 'expected' 2). Since future patches will want to rely on an invariant preempt_count() value during schedule, fix this up. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com> Reviewed-by: Steven Rostedt <rostedt@goodmis.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Peter Zijlstra authored
As per the following commit: d86ee480 ("sched: optimize cond_resched()") we need PREEMPT_ACTIVE to prevent cond_resched() from working before the scheduler is set up. However, keeping preemption disabled should do the same thing already, making the PREEMPT_ACTIVE part entirely redundant. The only complication is !PREEMPT_COUNT kernels, where PREEMPT_DISABLED ends up being 0. Instead we use an unconditional PREEMPT_OFFSET to set preempt_count() even on !PREEMPT_COUNT kernels. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Steven Rostedt <rostedt@goodmis.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Ingo Molnar authored
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Peter Zijlstra authored
So the problem this patch is trying to address is as follows:

        CPU0                            CPU1

        context_switch(A, B)
                                        ttwu(A)
                                          LOCK A->pi_lock
                                          A->on_cpu == 0
        finish_task_switch(A)
          prev_state = A->state  <-.
          WMB                      |
          A->on_cpu = 0;           |
          UNLOCK rq0->lock         |
                                   |    context_switch(C, A)
                                   `--  A->state = TASK_DEAD
                                        prev_state == TASK_DEAD
                                          put_task_struct(A)
        context_switch(A, C)
        finish_task_switch(A)
          A->state == TASK_DEAD
            put_task_struct(A)

The argument being that the WMB will allow the load of A->state on CPU0 to cross over and observe CPU1's store of A->state, which will then result in a double-drop and use-after-free.

Now the comment states (and this was true once upon a long time ago) that we need to observe A->state while holding rq->lock because that will order us against the wakeup; however the wakeup will not in fact acquire (that) rq->lock; it takes A->pi_lock these days.

We can obviously fix this by upgrading the WMB to an MB, but that is expensive, so we'd rather avoid that.

The alternative this patch takes is: smp_store_release(&A->on_cpu, 0), which avoids the MB on some archs, but not important ones like ARM.

Reported-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: <stable@vger.kernel.org> # v3.1+ Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Cc: manfred@colorfullife.com Cc: will.deacon@arm.com Fixes: e4a52bcb ("sched: Remove rq->lock from the first half of ttwu()") Link: http://lkml.kernel.org/r/20150929124509.GG3816@twins.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
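A sketch of the change in finish_lock_switch() (kernel/sched/sched.h), with surrounding detail trimmed: the release store guarantees the earlier prev->state load is complete before a waker can observe on_cpu == 0.

    static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
    {
    #ifdef CONFIG_SMP
            /*
             * Was: smp_wmb(); prev->on_cpu = 0;
             * The release ensures our read of prev->state cannot be reordered
             * past the store that lets ttwu() take over the task.
             */
            smp_store_release(&prev->on_cpu, 0);
    #endif
            raw_spin_unlock_irq(&rq->lock);
    }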
-
- 04 Oct, 2015 6 commits
-
-
Linus Torvalds authored
-
git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile
Linus Torvalds authored
Pull strscpy string copy function implementation from Chris Metcalf.

Chris sent this during the merge window, but I waffled back and forth on the pull request, which is why it's going in only now.

The new "strscpy()" function is definitely easier to use and more secure than either strncpy() or strlcpy(), both of which are horrible nasty interfaces that have serious and irredeemable problems.

strncpy() has a useless return value, and doesn't NUL-terminate an overlong result. To make matters worse, it pads a short result with zeroes, which is a performance disaster if you have big buffers.

strlcpy(), by contrast, is a mis-designed "fix" for strncpy(), lacking the insane NUL padding, but having a differently broken return value which returns the original length of the source string. Which means that it will read characters past the count from the source buffer, and you have to trust the source to be properly terminated. It also makes error handling fragile, since the test for overflow is unnecessarily subtle.

strscpy() avoids both these problems, guaranteeing the NUL termination (but not excessive padding) if the destination size wasn't zero, and making the overflow condition very obvious by returning -E2BIG. It also doesn't read past the size of the source, and can thus be used for untrusted source data too.

So why did I waffle about this for so long?

Every time we introduce a new-and-improved interface, people start doing these interminable series of trivial conversion patches. And every time that happens, somebody does some silly mistake, and the conversion patch to the improved interface actually makes things worse. Because the patch is mindnumbing and trivial, nobody has the attention span to look at it carefully, and it's usually done over large swatches of source code which means that not every conversion gets tested.

So I'm pulling the strscpy() support because it *is* a better interface. But I will refuse to pull mindless conversion patches. Use this in places where it makes sense, but don't do trivial patches to fix things that aren't actually known to be broken.

* 'strscpy' of git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile:
  tile: use global strscpy() rather than private copy
  string: provide strscpy()
  Make asm/word-at-a-time.h available on all architectures
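A short usage sketch of the new interface (illustrative, outside any particular caller): the copy is always NUL-terminated for a non-zero destination size, and overflow is an explicit error rather than a silently truncated-but-valid-looking string.

    char name[16];
    ssize_t len = strscpy(name, src, sizeof(name));

    if (len == -E2BIG)
            pr_warn("input truncated\n");   /* name still holds a NUL-terminated prefix */
    else
            pr_info("copied %zd bytes\n", len);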
-
git://neil.brown.name/md
Linus Torvalds authored
Pull md fixes from Neil Brown:
 "Assorted fixes for md in 4.3-rc. Two tagged for -stable, and one is
  really a cleanup to match and improve kmemcache interface."

* tag 'md/4.3-fixes' of git://neil.brown.name/md:
  md/bitmap: don't pass -1 to bitmap_storage_alloc.
  md/raid1: Avoid raid1 resync getting stuck
  md: drop null test before destroy functions
  md: clear CHANGE_PENDING in readonly array
  md/raid0: apply base queue limits *before* disk_stack_limits
  md/raid5: don't index beyond end of array in need_this_block().
  raid5: update analysis state for failed stripe
  md: wait for pending superblock updates before switching to read-only
-
git://git.linux-mips.org/pub/scm/ralf/upstream-linus
Linus Torvalds authored
Pull MIPS updates from Ralf Baechle:
 "This week's round of MIPS fixes:
   - Fix JZ4740 build
   - Fix fallback to GFP_DMA
   - FP seccomp in case of ENOSYS
   - Fix bootmem panic
   - A number of FP and CPS fixes
   - Wire up new syscalls
   - Make sure BPF assembler objects can properly be disassembled
   - Fix BPF assembler code for MIPS I"

* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus:
  MIPS: scall: Always run the seccomp syscall filters
  MIPS: Octeon: Fix kernel panic on startup from memory corruption
  MIPS: Fix R2300 FP context switch handling
  MIPS: Fix octeon FP context switch handling
  MIPS: BPF: Fix load delay slots.
  MIPS: BPF: Do all exports of symbols with FEXPORT().
  MIPS: Fix the build on jz4740 after removing the custom gpio.h
  MIPS: CPS: #ifdef on CONFIG_MIPS_MT_SMP rather than CONFIG_MIPS_MT
  MIPS: CPS: Don't include MT code in non-MT kernels.
  MIPS: CPS: Stop dangling delay slot from has_mt.
  MIPS: dma-default: Fix 32-bit fall back to GFP_DMA
  MIPS: Wire up userfaultfd and membarrier syscalls.
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull irq fixes from Thomas Gleixner:
 "This update contains:
   - Fix for a long standing race affecting /proc/irq/NNN
   - One line fix for ARM GICV3-ITS counting the wrong data
   - Warning silencing in ARM GICV3-ITS. Another GCC trying to be
     overly clever issue"

* 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  irqchip/gic-v3-its: Count additional LPIs for the aliased devices
  irqchip/gic-v3-its: Silence warning when its_lpi_alloc_chunks gets inlined
  genirq: Fix race in register_irq_proc()
-
Markos Chandras authored
The MIPS syscall handler code used to return -ENOSYS on invalid syscalls. Whilst this is expected, it caused problems for seccomp filters because the said filters never had the chance to run, since the code returned -ENOSYS before triggering them. This caused problems on the chromium testsuite for filters looking for invalid syscalls. This has now changed and the seccomp filters are always run, even if the syscall is invalid. We return -ENOSYS once we return from the seccomp filters. Moreover, similar codepaths have been merged in the process, which somewhat simplifies the overall syscall code. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/11236/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
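In rough pseudo-C (the real change lives in the scall*.S assembler paths, and the helper names here are only illustrative), the resulting order is: run the tracing/seccomp hooks first, and only then reject an out-of-range syscall number.

    /* illustrative pseudo-C of the reworked flow, not the actual MIPS code */
    if (thread_flags & (_TIF_SYSCALL_TRACE | _TIF_SECCOMP)) {
            nr = syscall_trace_enter(regs, nr);     /* seccomp filters run here */
            if (nr < 0)
                    return nr;                      /* filter rejected/killed it */
    }
    if (nr < 0 || nr >= __NR_syscalls)
            return -ENOSYS;                         /* only now: invalid syscall */
    return sys_call_table[nr](regs);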
-
- 03 Oct, 2015 4 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull x86 fixes from Ingo Molnar:
 "Fixes all around the map: W+X kernel mapping fix, WCHAN fixes, two
  build failure fixes for corner case configs, x32 header fix and a
  speling fix"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/headers/uapi: Fix __BITS_PER_LONG value for x32 builds
  x86/mm: Set NX on gap between __ex_table and rodata
  x86/kexec: Fix kexec crash in syscall kexec_file_load()
  x86/process: Unify 32bit and 64bit implementations of get_wchan()
  x86/process: Add proper bound checks in 64bit get_wchan()
  x86, efi, kasan: Fix build failure on !KASAN && KMEMCHECK=y kernels
  x86/hyperv: Fix the build in the !CONFIG_KEXEC_CORE case
  x86/cpufeatures: Correct spelling of the HWP_NOTIFY flag
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull timer fixes from Ingo Molnar:
 "An abs64() fix in the watchdog driver, and two clocksource driver
  NO_IRQ assumption fixes"

* 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  clocksource: Fix abs() usage w/ 64bit values
  clocksource/drivers/keystone: Fix bad NO_IRQ usage
  clocksource/drivers/rockchip: Fix bad NO_IRQ usage
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull EFI fixes from Ingo Molnar:
 "Two EFI fixes: one for x86, one for ARM, fixing a boot crash bug that
  can trigger under newer EFI firmware"

* 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  arm64/efi: Fix boot crash by not padding between EFI_MEMORY_RUNTIME regions
  x86/efi: Fix boot crash by mapping EFI memmap entries bottom-up at runtime, instead of top-down
-
git://people.freedesktop.org/~airlied/linux
Linus Torvalds authored
Pull drm fixes from Dave Airlie:
 "Bunch of fixes all over the place, all pretty small: amdgpu, i915,
  exynos, one qxl and one vmwgfx.

  There is also a bunch of mst fixes, I left some cleanups in the
  series as I didn't think it was worth splitting up the tested series"

* 'drm-fixes' of git://people.freedesktop.org/~airlied/linux: (37 commits)
  drm/dp/mst: add some defines for logical/physical ports
  drm/dp/mst: drop cancel work sync in the mstb destroy path (v2)
  drm/dp/mst: split connector registration into two parts (v2)
  drm/dp/mst: update the link_address_sent before sending the link address (v3)
  drm/dp/mst: fixup handling hotplug on port removal.
  drm/dp/mst: don't pass port into the path builder function
  drm/radeon: drop radeon_fb_helper_set_par
  drm: handle cursor_set2 in restore_fbdev_mode
  drm/exynos: Staticize local function in exynos_drm_gem.c
  drm/exynos: fimd: actually disable dp clock
  drm/exynos: dp: remove suspend/resume functions
  drm/qxl: recreate the primary surface when the bo is not primary
  drm/amdgpu: only print meaningful VM faults
  drm/amdgpu/cgs: remove import_gpu_mem
  drm/i915: Call non-locking version of drm_kms_helper_poll_enable(), v2
  drm: Add a non-locking version of drm_kms_helper_poll_enable(), v2
  drm/vmwgfx: Fix a command submission hang regression
  drm/exynos: remove unused mode_fixup() code
  drm/exynos: remove decon_mode_fixup()
  drm/exynos: remove fimd_mode_fixup()
  ...
-
- 02 Oct, 2015 11 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input
Linus Torvalds authored
Pull input layer fixes from Dmitry Torokhov:
 "Fixes for two recent regressions (in Synaptics PS/2 and uinput
  drivers) and some more driver fixups"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input:
  Revert "Input: synaptics - fix handling of disabling gesture mode"
  Input: psmouse - fix data race in __ps2_command
  Input: elan_i2c - add all valid ic type for i2c/smbus
  Input: zhenhua - ensure we have BITREVERSE
  Input: omap4-keypad - fix memory leak
  Input: serio - fix blocking of parport
  Input: uinput - fix crash when using ABS events
  Input: elan_i2c - expand maximum product_id form 0xFF to 0xFFFF
  Input: elan_i2c - add ic type 0x03
  Input: elan_i2c - don't require known iap version
  Input: imx6ul_tsc - fix controller name
  Input: imx6ul_tsc - use the preferred method for kzalloc()
  Input: imx6ul_tsc - check for negative return value
  Input: imx6ul_tsc - propagate the errors
  Input: walkera0701 - fix abs() calculations on 64 bit values
  Input: mms114 - remove unneded semicolons
  Input: pm8941-pwrkey - remove unneded semicolon
  Input: fix typo in MT documentation
  Input: cyapa - fix address of Gen3 devices in device tree documentation
-
John Stultz authored
This patch fixes one case where abs() was being used with 64-bit nanosecond values, where the result may be capped at 32 bits. This potentially could cause watchdog false negatives on 32-bit systems, so this patch addresses the issue by using abs64(). Signed-off-by: John Stultz <john.stultz@linaro.org> Cc: Prarit Bhargava <prarit@redhat.com> Cc: Richard Cochran <richardcochran@gmail.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/1442279124-7309-2-git-send-email-john.stultz@linaro.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
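The pattern being fixed, sketched (names loosely follow the clocksource watchdog in kernel/time/clocksource.c, simplified for illustration): abs() operates on int/long, so on a 32-bit system a 64-bit nanosecond delta can have its high bits discarded before the threshold comparison.

    s64 delta = cs_nsec - wd_nsec;          /* 64-bit nanoseconds */

    /* was: abs(delta) -- truncates to 32 bits on 32-bit systems */
    if (abs64(delta) > WATCHDOG_THRESHOLD)
            mark_clocksource_unstable(cs);  /* illustrative helper name */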
-
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Linus Torvalds authored
Pull arm64 fixes from Catalin Marinas:

 - Fix for transparent huge page change_protection() logic which was
   inadvertently changing a huge pmd page into a pmd table entry.

 - Function graph tracer panic fix caused by the return_to_handler code
   corrupting the multi-regs function return value (composite types).

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64: ftrace: fix function_graph tracer panic
  arm64: Fix THP protection change logic
-
git://git.kernel.org/pub/scm/linux/kernel/git/geert/linux-m68k
Linus Torvalds authored
Pull m68k updates from Geert Uytterhoeven:
 "Summary:
   - Fix for accidental modification of arguments of syscall functions
   - Wire up new syscalls
   - Update defconfigs"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/geert/linux-m68k:
  m68k/defconfig: Update defconfigs for v4.3-rc1
  m68k: Define asmlinkage_protect
  m68k: Wire up membarrier
  m68k: Wire up userfaultfd
  m68k: Wire up direct socket calls
-
Marc Zyngier authored
When configuring the interrupt mapping for a new device, we iterate over all the possible aliases to account for their maximum MSI allocation. This was introduced by e8137f4f ("irqchip: gicv3-its: Iterate over PCI aliases to generate ITS configuration"). Turns out that the code doing that is a bit braindead, and repeatedly accounts for the same device over and over. Fix this by counting the actual alias that is passed to us by the core code. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: Alex Shi <alex.shi@linaro.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: David Daney <ddaney.cavm@gmail.com> Cc: Jason Cooper <jason@lakedaemon.net> Link: http://lkml.kernel.org/r/1443800646-8074-3-git-send-email-marc.zyngier@arm.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Marc Zyngier authored
More aggressive inlining in recent versions of GCC has uncovered a new set of warnings:

  drivers/irqchip/irq-gic-v3-its.c: In function its_msi_prepare:
  drivers/irqchip/irq-gic-v3-its.c:1148:26: warning: lpi_base may be used uninitialized in this function [-Wmaybe-uninitialized]
     dev->event_map.lpi_base = lpi_base;
                             ^
  drivers/irqchip/irq-gic-v3-its.c:1116:6: note: lpi_base was declared here
     int lpi_base;
         ^
  drivers/irqchip/irq-gic-v3-its.c:1149:25: warning: nr_lpis may be used uninitialized in this function [-Wmaybe-uninitialized]
     dev->event_map.nr_lpis = nr_lpis;
                            ^
  drivers/irqchip/irq-gic-v3-its.c:1117:6: note: nr_lpis was declared here
     int nr_lpis;
         ^

The warning is fairly benign (there is no code path that could actually use uninitialized variables), but let's silence it anyway by zeroing the variables on the error path.

Reported-by: Alex Shi <alex.shi@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: David Daney <ddaney.cavm@gmail.com> Cc: Jason Cooper <jason@lakedaemon.net> Link: http://lkml.kernel.org/r/1443800646-8074-2-git-send-email-marc.zyngier@arm.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
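The shape of the fix, sketched under the assumption that the allocation helper simply gives its out-parameters a defined value on the failure path, so the caller's stores are never fed uninitialized data and GCC can see that (names follow the warning above; details trimmed):

    static unsigned long *its_lpi_alloc_chunks(int nr_irqs, int *base, int *nr_ids)
    {
            unsigned long *bitmap = NULL;
            int lpi_base = 0, nr_lpis = 0;  /* defined even if we bail out early */

            /* ... chunk allocation; on failure bitmap stays NULL ... */

            *base = lpi_base;               /* 0 on the error path */
            *nr_ids = nr_lpis;              /* 0 on the error path */
            return bitmap;
    }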
-
git://git.infradead.org/users/vkoul/slave-dma
Linus Torvalds authored
Pull dmaengine fixes from Vinod Koul:
 "This contains fixes spread throughout the drivers, and also fixes one
  more instance of privatecnt in dmaengine.

  Driver fixes summary:
   - bunch of pxa_dma fixes for reuse of descriptor issue, residue and
     no-requestor
   - odd fixes in xgene, idma, sun4i and zxdma
   - at_xdmac fixes for cleaning descriptor and block addr mode"

* tag 'dmaengine-fix-4.3-rc4' of git://git.infradead.org/users/vkoul/slave-dma:
  dmaengine: pxa_dma: fix residue corner case
  dmaengine: pxa_dma: fix the no-requestor case
  dmaengine: zxdma: Fix off-by-one for testing valid pchan request
  dmaengine: at_xdmac: clean used descriptor
  dmaengine: at_xdmac: change block increment addressing mode
  dmaengine: dw: properly read DWC_PARAMS register
  dmaengine: xgene-dma: Fix overwritting DMA tx ring
  dmaengine: fix balance of privatecnt
  dmaengine: sun4i: fix unsafe list iteration
  dmaengine: idma64: improve residue estimation
  dmaengine: xgene-dma: fix handling xgene_dma_get_ring_size result
  dmaengine: pxa_dma: fix initial list move
-
git://git.kernel.dk/linux-block
Linus Torvalds authored
Pull block fixes from Jens Axboe:
 "Another week, another round of fixes.

  These have been brewing for a bit and in various iterations, but I
  feel pretty comfortable about the quality of them. They fix real
  issues. The pull request is mostly blk-mq related, and the only one
  not fixing a real bug, is the tag iterator abstraction from
  Christoph. But it's pretty trivial, and we'll need it for another fix
  soon.

  Apart from the blk-mq fixes, there's an NVMe affinity fix from Keith,
  and a single fix for xen-blkback from Roger fixing failure to free
  requests on disconnect"

* 'for-linus' of git://git.kernel.dk/linux-block:
  blk-mq: factor out a helper to iterate all tags for a request_queue
  blk-mq: fix racy updates of rq->errors
  blk-mq: fix deadlock when reading cpu_list
  blk-mq: avoid inserting requests before establishing new mapping
  blk-mq: fix q->mq_usage_counter access race
  blk-mq: Fix use after of free q->mq_map
  blk-mq: fix sysfs registration/unregistration race
  blk-mq: avoid setting hctx->tags->cpumask before allocation
  NVMe: Set affinity after allocating request queues
  xen/blkback: free requests on disconnection
-
Dmitry Torokhov authored
This reverts commit e51e3849: we actually do want the device to work in extended W mode, as this is the mode that allows us to receive multiple contact information. Cc: stable@vger.kernel.org
-
Matt Bennett authored
During development it was found that a number of builds would panic during the kernel init process, more specifically in 'delayed_fput()'. The panic showed the kernel trying to access a memory address of '0xb7fdc00' while traversing the 'delayed_fput_list' structure. Comparing this memory address to the value of the pointer used on builds that did not panic confirmed that the pointer on crashing builds must have been corrupted at some stage earlier in the init process.

By traversing the list earlier and earlier in the code it was found that 'plat_mem_setup()' was responsible for corrupting the list. Specifically the line:

    memory = cvmx_bootmem_phy_alloc(mem_alloc_size,
                                    __pa_symbol(&__init_end), -1,
                                    0x100000,
                                    CVMX_BOOTMEM_FLAG_NO_LOCKING);

Which would eventually call:

    cvmx_bootmem_phy_set_size(new_ent_addr,
                              cvmx_bootmem_phy_get_size(ent_addr) -
                              (desired_min_addr - ent_addr));

Where 'new_ent_addr'=0x4800000 (the address of 'delayed_fput_list') and the second argument (size)=0xb7fdc00 (the address causing the kernel panic).

The job of this part of 'plat_mem_setup()' is to allocate chunks of memory for the kernel to use. At the start of each chunk of memory the size of the chunk is written, hence the value 0xb7fdc00 is written onto memory at 0x4800000, therefore the kernel panics when it goes back to access 'delayed_fput_list' later on in the initialisation process.

On builds that were not crashing it was found that the compiler had placed 'delayed_fput_list' at 0x4800008, meaning it wasn't corrupted (but something else in memory was overwritten).

As can be seen in the first function call above, the code begins to allocate chunks of memory beginning from the symbol '__init_end'. The MIPS linker script (vmlinux.lds.S) however defines the .bss section to begin after '__init_end'. Therefore memory within the .bss section is allocated to the kernel to use (System.map shows 'delayed_fput_list' and other kernel structures to be in .bss). To stop the kernel panic (and the .bss section being corrupted) memory should begin being allocated from the symbol '_end'.

Signed-off-by: Matt Bennett <matt.bennett@alliedtelesis.co.nz> Acked-by: David Daney <david.daney@cavium.com> Cc: linux-mips@linux-mips.org Cc: aleksey.makarov@auriga.com Patchwork: https://patchwork.linux-mips.org/patch/11251/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
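In diff form, the fix described above amounts to starting the bootmem allocation after '_end' instead of '__init_end' (a sketch of the change in arch/mips/cavium-octeon/setup.c, shown for illustration):

    memory = cvmx_bootmem_phy_alloc(mem_alloc_size,
   -                                __pa_symbol(&__init_end), -1,
   +                                __pa_symbol(&_end), -1,
                                    0x100000,
                                    CVMX_BOOTMEM_FLAG_NO_LOCKING);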
-
Paul Burton authored
Commit 1a3d5957 ("MIPS: Tidy up FPU context switching") removed FP context saving from the asm-written resume function in favour of reusing existing code to perform the same task. However it only removed the FP context saving code from the r4k_switch.S implementation of resume. Remove it from the r2300_switch.S implementation too in order to prevent attempting to save the FP context twice, which would likely lead to an exception from the second save because the FPU had already been disabled by the first save. This patch has only been build tested, using rbtx49xx_defconfig. Fixes: 1a3d5957 ("MIPS: Tidy up FPU context switching") Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Cc: Maciej W. Rozycki <macro@linux-mips.org> Cc: linux-kernel@vger.kernel.org Cc: Manuel Lauss <manuel.lauss@gmail.com> Patchwork: https://patchwork.linux-mips.org/patch/11167/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-