Commit 076f14be authored by Linus Torvalds

Merge tag 'x86-entry-2020-06-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 entry updates from Thomas Gleixner:
 "The x86 entry, exception and interrupt code rework

  This all started about six months ago with the attempt to move the
  POSIX CPU timer heavy lifting out of the timer interrupt code and just
  have lockless quick checks in that code path. Five trivial patches.

  This unearthed an inconsistency in the KVM handling of task work, and
  the review requested moving all of this into generic code so other
  architectures can share it.

  A valid request, and solved with another 25 patches, but those
  unearthed inconsistencies vs. RCU and instrumentation.

  Digging into this made it obvious that there are quite a few
  inconsistencies vs. instrumentation in general. The int3 text poke
  handling in particular was completely unprotected, and with the
  batched update of trace events even more likely to end up in endless
  int3 recursion.

  In parallel the RCU implications of instrumenting fragile entry code
  came up in several discussions.

  The conclusion of the x86 maintainer team was to go all the way and
  make the protection against any form of instrumentation of fragile and
  dangerous code paths enforceable and verifiable by tooling.

  A first batch of preparatory work hit mainline with commit
  d5f744f9 ("Pull x86 entry code updates from Thomas Gleixner")

  That (almost) full solution introduced a new code section
  '.noinstr.text' into which all code that needs to be protected from
  instrumentation of any sort goes. Any call into instrumentable code
  out of this section has to be annotated, and objtool can validate
  this.
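
  As a rough illustration (not code from this series), the pattern looks
  like this: the function itself is placed in .noinstr.text via the
  'noinstr' attribute, and any excursion into instrumentable code is
  bracketed with instrumentation_begin()/instrumentation_end() so that
  objtool can check the transitions. The function names are made up.

      #include <linux/compiler.h>      /* noinstr, instrumentation_*() */
      #include <linux/ptrace.h>        /* struct pt_regs               */

      static void instrumentable_work(struct pt_regs *regs)
      {
              /* Ordinary kernel code: may be traced, probed, sanitized. */
              (void)regs;
      }

      /* Hypothetical handler; 'noinstr' places it in .noinstr.text. */
      noinstr void hypothetical_entry(struct pt_regs *regs)
      {
              /* No tracing, kprobes or sanitizer calls may land here. */
              instrumentation_begin();
              instrumentable_work(regs);
              instrumentation_end();
      }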

  Kprobes now excludes this section fully, which also prevents BPF from
  fiddling with it, and all 'noinstr' annotated functions keep ftrace
  off as well. The section, kprobes and objtool changes are already
  merged.

  The major changes coming with this are:

    - Preparatory cleanups

    - Annotating relevant functions to move them into the noinstr.text
      section, or enforcing inlining by marking them __always_inline so
      the compiler cannot misplace or instrument them.

    - Splitting and simplifying the idtentry macro maze so that it is
      now clearly separated into simple exception entries and the more
      interesting ones which use interrupt stacks and have the paranoid
      handling vs. CR3 and GS.

    - Move a good chunk of the low level ASM functionality into C code:

       - enter_from and exit to user space handling. The ASM code now
         calls into C after doing the really necessary ASM handling and
         the return path goes back out without bells and whistles in
         ASM.

       - exception entry/exit got the equivalent treatment

       - move all IRQ tracepoints from ASM to C so they can be placed as
         appropriate, which is especially important for the int3
         recursion issue.

    - Consolidate the declaration and definition of entry points between
      32 and 64 bit. They share a common header and macros now.

    - Remove the extra device interrupt entry maze and just use the
      regular exception entry code.

    - All ASM entry points except NMI are now generated from the shared
      header file and the corresponding macros in the 32 and 64 bit
      entry ASM.

    - The C code entry points are consolidated as well with the help of
      DEFINE_IDTENTRY*() macros. This makes it possible to ensure at one
      central point that all corresponding entry points share the same
      semantics. The actual function body for most entry points is in an
      instrumentable and sane state (a rough sketch of the pattern
      follows after this list).

      There are special macros for the more sensitive entry points, e.g.
      INT3 and of course the nasty paranoid #NMI, #MCE, #DB and #DF.
      They allow putting the whole entry instrumentation and RCU
      handling into safe places instead of the previous "pray that it is
      correct" approach.

    - The INT3 text poke handling is now completely isolated and the
      recursion issue banned. Aside from the entry rework this required
      other isolation work, e.g. the ability to force-inline bsearch().

    - Prevent #DB on fragile entry code and entry-relevant memory, and
      disable it on NMI and #MC entry, which made it possible to get rid
      of the nested #DB IST stack shifting hackery (see the second
      sketch after this list).

    - A few other cleanups and enhancements which have been made
      possible through this and already merged changes, e.g.
      consolidating and further restricting the IDT code so the IDT
      table becomes RO after init, which removes yet another popular
      attack vector.

    - About 680 lines of ASM maze are gone.
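
  As referenced above, here is a rough, hypothetical sketch of what
  writing a handler with the consolidated macros looks like;
  exc_hypothetical, the trap number and the message are made up and only
  illustrate the shape of the API:

      #include <asm/idtentry.h>        /* DEFINE_IDTENTRY*()           */
      #include <linux/printk.h>

      /*
       * The matching DECLARE_IDTENTRY(X86_TRAP_xx, exc_hypothetical)
       * would live in asm/idtentry.h, from which both the C prototype
       * and the ASM stub are generated.
       */
      DEFINE_IDTENTRY(exc_hypothetical)
      {
              /*
               * The macro emits the noinstr enter/exit glue (RCU,
               * lockdep, tracing state); the body below already runs
               * in an instrumentable and sane state with 'regs'
               * available.
               */
              pr_info("hypothetical exception at %lx\n", regs->ip);
      }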

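  And, equally hypothetical, the second sketch: the shape of the new #DB
  suppression helpers used on the NMI-like paths. The function name is
  made up; local_db_save()/local_db_restore() are the helpers added by
  this series:

      #include <asm/debugreg.h>        /* local_db_save/restore()      */
      #include <linux/compiler.h>      /* noinstr                      */

      static noinstr void hypothetical_nmi_like_work(void)
      {
              unsigned long dr7;

              /* Kill hardware breakpoints so no #DB can nest in here. */
              dr7 = local_db_save();

              /* ... fragile, non-instrumentable work goes here ... */

              local_db_restore(dr7);
      }
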
  There are a few open issues:

   - An escape out of the noinstr section in the MCE handler, which
     needs some more thought. But given that MCE is a complete
     trainwreck by design and the probability of surviving it is low,
     this was not high on the priority list.

   - Paravirtualization

     When PV is enabled then objtool complains about a bunch of indirect
     calls out of the noinstr section. There are a few straightforward
     ways to fix this, but the other issues vs. general correctness were
     more pressing than the paravirt one.

   - KVM

     KVM is inconsistent as well. Patches have been posted, but they
     have not yet been commented on or picked up by the KVM folks.

   - IDLE

     Pretty much the same problems can be found in the low level idle
     code, especially the parts where RCU stopped watching. This was
     beyond the scope of the more obvious and exposable problems and is
     on the todo list.

  The lesson learned from this brain-melting exercise to morph the
  evolved code base into something which can be validated and understood
  is that once again the violation of the most important engineering
  principle "correctness first" has caused quite a few people to spend
  valuable time on problems which could have been avoided in the first
  place. The "features first" tinkering mindset really has to stop.

  With that I want to say thanks to everyone involved in contributing to
  this effort. Special thanks go to the following people (alphabetical
  order): Alexandre Chartre, Andy Lutomirski, Borislav Petkov, Brian
  Gerst, Frederic Weisbecker, Josh Poimboeuf, Juergen Gross, Lai
  Jiangshan, Marco Elver, Paolo Bonzini, Paul McKenney, Peter Zijlstra,
  Vitaly Kuznetsov, and Will Deacon"

* tag 'x86-entry-2020-06-12' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (142 commits)
  x86/entry: Force rcu_irq_enter() when in idle task
  x86/entry: Make NMI use IDTENTRY_RAW
  x86/entry: Treat BUG/WARN as NMI-like entries
  x86/entry: Unbreak __irqentry_text_start/end magic
  x86/entry: __always_inline CR2 for noinstr
  lockdep: __always_inline more for noinstr
  x86/entry: Re-order #DB handler to avoid *SAN instrumentation
  x86/entry: __always_inline arch_atomic_* for noinstr
  x86/entry: __always_inline irqflags for noinstr
  x86/entry: __always_inline debugreg for noinstr
  x86/idt: Consolidate idt functionality
  x86/idt: Cleanup trap_init()
  x86/idt: Use proper constants for table size
  x86/idt: Add comments about early #PF handling
  x86/idt: Mark init only functions __init
  x86/entry: Rename trace_hardirqs_off_prepare()
  x86/entry: Clarify irq_{enter,exit}_rcu()
  x86/entry: Remove DBn stacks
  x86/entry: Remove debug IDT frobbing
  x86/entry: Optimize local_db_save() for virt
  ...
parents 6c329784 0bf3924b
...@@ -181,7 +181,6 @@ config X86 ...@@ -181,7 +181,6 @@ config X86
select HAVE_HW_BREAKPOINT select HAVE_HW_BREAKPOINT
select HAVE_IDE select HAVE_IDE
select HAVE_IOREMAP_PROT select HAVE_IOREMAP_PROT
select HAVE_IRQ_EXIT_ON_IRQ_STACK if X86_64
select HAVE_IRQ_TIME_ACCOUNTING select HAVE_IRQ_TIME_ACCOUNTING
select HAVE_KERNEL_BZIP2 select HAVE_KERNEL_BZIP2
select HAVE_KERNEL_GZIP select HAVE_KERNEL_GZIP
......
...@@ -3,7 +3,13 @@ ...@@ -3,7 +3,13 @@
# Makefile for the x86 low level entry code # Makefile for the x86 low level entry code
# #
OBJECT_FILES_NON_STANDARD_entry_64_compat.o := y KASAN_SANITIZE := n
UBSAN_SANITIZE := n
KCOV_INSTRUMENT := n
CFLAGS_REMOVE_common.o = $(CC_FLAGS_FTRACE) -fstack-protector -fstack-protector-strong
CFLAGS_REMOVE_syscall_32.o = $(CC_FLAGS_FTRACE) -fstack-protector -fstack-protector-strong
CFLAGS_REMOVE_syscall_64.o = $(CC_FLAGS_FTRACE) -fstack-protector -fstack-protector-strong
CFLAGS_syscall_64.o += $(call cc-option,-Wno-override-init,) CFLAGS_syscall_64.o += $(call cc-option,-Wno-override-init,)
CFLAGS_syscall_32.o += $(call cc-option,-Wno-override-init,) CFLAGS_syscall_32.o += $(call cc-option,-Wno-override-init,)
......
...@@ -341,30 +341,13 @@ For 32-bit we have the following conventions - kernel is built with ...@@ -341,30 +341,13 @@ For 32-bit we have the following conventions - kernel is built with
#endif #endif
.endm .endm
#endif /* CONFIG_X86_64 */ #else /* CONFIG_X86_64 */
# undef UNWIND_HINT_IRET_REGS
# define UNWIND_HINT_IRET_REGS
#endif /* !CONFIG_X86_64 */
.macro STACKLEAK_ERASE .macro STACKLEAK_ERASE
#ifdef CONFIG_GCC_PLUGIN_STACKLEAK #ifdef CONFIG_GCC_PLUGIN_STACKLEAK
call stackleak_erase call stackleak_erase
#endif #endif
.endm .endm
/*
* This does 'call enter_from_user_mode' unless we can avoid it based on
* kernel config or using the static jump infrastructure.
*/
.macro CALL_enter_from_user_mode
#ifdef CONFIG_CONTEXT_TRACKING
#ifdef CONFIG_JUMP_LABEL
STATIC_JUMP_IF_FALSE .Lafter_call_\@, context_tracking_key, def=0
#endif
call enter_from_user_mode
.Lafter_call_\@:
#endif
.endm
#ifdef CONFIG_PARAVIRT_XXL
#define GET_CR2_INTO(reg) GET_CR2_INTO_AX ; _ASM_MOV %_ASM_AX, reg
#else
#define GET_CR2_INTO(reg) _ASM_MOV %cr2, reg
#endif
...@@ -46,12 +46,14 @@ ...@@ -46,12 +46,14 @@
* ebp user stack * ebp user stack
* 0(%ebp) arg6 * 0(%ebp) arg6
*/ */
SYM_FUNC_START(entry_SYSENTER_compat) SYM_CODE_START(entry_SYSENTER_compat)
UNWIND_HINT_EMPTY
/* Interrupts are off on entry. */ /* Interrupts are off on entry. */
SWAPGS SWAPGS
/* We are about to clobber %rsp anyway, clobbering here is OK */ pushq %rax
SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
popq %rax
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
...@@ -104,6 +106,9 @@ SYM_FUNC_START(entry_SYSENTER_compat) ...@@ -104,6 +106,9 @@ SYM_FUNC_START(entry_SYSENTER_compat)
xorl %r14d, %r14d /* nospec r14 */ xorl %r14d, %r14d /* nospec r14 */
pushq $0 /* pt_regs->r15 = 0 */ pushq $0 /* pt_regs->r15 = 0 */
xorl %r15d, %r15d /* nospec r15 */ xorl %r15d, %r15d /* nospec r15 */
UNWIND_HINT_REGS
cld cld
/* /*
...@@ -129,17 +134,11 @@ SYM_FUNC_START(entry_SYSENTER_compat) ...@@ -129,17 +134,11 @@ SYM_FUNC_START(entry_SYSENTER_compat)
jnz .Lsysenter_fix_flags jnz .Lsysenter_fix_flags
.Lsysenter_flags_fixed: .Lsysenter_flags_fixed:
/*
* User mode is traced as though IRQs are on, and SYSENTER
* turned them off.
*/
TRACE_IRQS_OFF
movq %rsp, %rdi movq %rsp, %rdi
call do_fast_syscall_32 call do_fast_syscall_32
/* XEN PV guests always use IRET path */ /* XEN PV guests always use IRET path */
ALTERNATIVE "testl %eax, %eax; jz .Lsyscall_32_done", \ ALTERNATIVE "testl %eax, %eax; jz swapgs_restore_regs_and_return_to_usermode", \
"jmp .Lsyscall_32_done", X86_FEATURE_XENPV "jmp swapgs_restore_regs_and_return_to_usermode", X86_FEATURE_XENPV
jmp sysret32_from_system_call jmp sysret32_from_system_call
.Lsysenter_fix_flags: .Lsysenter_fix_flags:
...@@ -147,7 +146,7 @@ SYM_FUNC_START(entry_SYSENTER_compat) ...@@ -147,7 +146,7 @@ SYM_FUNC_START(entry_SYSENTER_compat)
popfq popfq
jmp .Lsysenter_flags_fixed jmp .Lsysenter_flags_fixed
SYM_INNER_LABEL(__end_entry_SYSENTER_compat, SYM_L_GLOBAL) SYM_INNER_LABEL(__end_entry_SYSENTER_compat, SYM_L_GLOBAL)
SYM_FUNC_END(entry_SYSENTER_compat) SYM_CODE_END(entry_SYSENTER_compat)
/* /*
* 32-bit SYSCALL entry. * 32-bit SYSCALL entry.
...@@ -197,6 +196,7 @@ SYM_FUNC_END(entry_SYSENTER_compat) ...@@ -197,6 +196,7 @@ SYM_FUNC_END(entry_SYSENTER_compat)
* 0(%esp) arg6 * 0(%esp) arg6
*/ */
SYM_CODE_START(entry_SYSCALL_compat) SYM_CODE_START(entry_SYSCALL_compat)
UNWIND_HINT_EMPTY
/* Interrupts are off on entry. */ /* Interrupts are off on entry. */
swapgs swapgs
...@@ -247,17 +247,13 @@ SYM_INNER_LABEL(entry_SYSCALL_compat_after_hwframe, SYM_L_GLOBAL) ...@@ -247,17 +247,13 @@ SYM_INNER_LABEL(entry_SYSCALL_compat_after_hwframe, SYM_L_GLOBAL)
pushq $0 /* pt_regs->r15 = 0 */ pushq $0 /* pt_regs->r15 = 0 */
xorl %r15d, %r15d /* nospec r15 */ xorl %r15d, %r15d /* nospec r15 */
/* UNWIND_HINT_REGS
* User mode is traced as though IRQs are on, and SYSENTER
* turned them off.
*/
TRACE_IRQS_OFF
movq %rsp, %rdi movq %rsp, %rdi
call do_fast_syscall_32 call do_fast_syscall_32
/* XEN PV guests always use IRET path */ /* XEN PV guests always use IRET path */
ALTERNATIVE "testl %eax, %eax; jz .Lsyscall_32_done", \ ALTERNATIVE "testl %eax, %eax; jz swapgs_restore_regs_and_return_to_usermode", \
"jmp .Lsyscall_32_done", X86_FEATURE_XENPV "jmp swapgs_restore_regs_and_return_to_usermode", X86_FEATURE_XENPV
/* Opportunistic SYSRET */ /* Opportunistic SYSRET */
sysret32_from_system_call: sysret32_from_system_call:
...@@ -266,7 +262,7 @@ sysret32_from_system_call: ...@@ -266,7 +262,7 @@ sysret32_from_system_call:
* stack. So let's erase the thread stack right now. * stack. So let's erase the thread stack right now.
*/ */
STACKLEAK_ERASE STACKLEAK_ERASE
TRACE_IRQS_ON /* User mode traces as IRQs on. */
movq RBX(%rsp), %rbx /* pt_regs->rbx */ movq RBX(%rsp), %rbx /* pt_regs->rbx */
movq RBP(%rsp), %rbp /* pt_regs->rbp */ movq RBP(%rsp), %rbp /* pt_regs->rbp */
movq EFLAGS(%rsp), %r11 /* pt_regs->flags (in r11) */ movq EFLAGS(%rsp), %r11 /* pt_regs->flags (in r11) */
...@@ -340,6 +336,7 @@ SYM_CODE_END(entry_SYSCALL_compat) ...@@ -340,6 +336,7 @@ SYM_CODE_END(entry_SYSCALL_compat)
* ebp arg6 * ebp arg6
*/ */
SYM_CODE_START(entry_INT80_compat) SYM_CODE_START(entry_INT80_compat)
UNWIND_HINT_EMPTY
/* /*
* Interrupts are off on entry. * Interrupts are off on entry.
*/ */
...@@ -361,8 +358,11 @@ SYM_CODE_START(entry_INT80_compat) ...@@ -361,8 +358,11 @@ SYM_CODE_START(entry_INT80_compat)
/* Need to switch before accessing the thread stack. */ /* Need to switch before accessing the thread stack. */
SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi
/* In the Xen PV case we already run on the thread stack. */ /* In the Xen PV case we already run on the thread stack. */
ALTERNATIVE "movq %rsp, %rdi", "jmp .Lint80_keep_stack", X86_FEATURE_XENPV ALTERNATIVE "", "jmp .Lint80_keep_stack", X86_FEATURE_XENPV
movq %rsp, %rdi
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
pushq 6*8(%rdi) /* regs->ss */ pushq 6*8(%rdi) /* regs->ss */
...@@ -401,19 +401,12 @@ SYM_CODE_START(entry_INT80_compat) ...@@ -401,19 +401,12 @@ SYM_CODE_START(entry_INT80_compat)
xorl %r14d, %r14d /* nospec r14 */ xorl %r14d, %r14d /* nospec r14 */
pushq %r15 /* pt_regs->r15 */ pushq %r15 /* pt_regs->r15 */
xorl %r15d, %r15d /* nospec r15 */ xorl %r15d, %r15d /* nospec r15 */
cld
/* UNWIND_HINT_REGS
* User mode is traced as though IRQs are on, and the interrupt
* gate turned them off. cld
*/
TRACE_IRQS_OFF
movq %rsp, %rdi movq %rsp, %rdi
call do_int80_syscall_32 call do_int80_syscall_32
.Lsyscall_32_done:
/* Go back to user mode. */
TRACE_IRQS_ON
jmp swapgs_restore_regs_and_return_to_usermode jmp swapgs_restore_regs_and_return_to_usermode
SYM_CODE_END(entry_INT80_compat) SYM_CODE_END(entry_INT80_compat)
...@@ -3,7 +3,6 @@ ...@@ -3,7 +3,6 @@
* Save registers before calling assembly functions. This avoids * Save registers before calling assembly functions. This avoids
* disturbance of register allocation in some inline assembly constructs. * disturbance of register allocation in some inline assembly constructs.
* Copyright 2001,2002 by Andi Kleen, SuSE Labs. * Copyright 2001,2002 by Andi Kleen, SuSE Labs.
* Added trace_hardirqs callers - Copyright 2007 Steven Rostedt, Red Hat, Inc.
*/ */
#include <linux/linkage.h> #include <linux/linkage.h>
#include "calling.h" #include "calling.h"
...@@ -37,15 +36,6 @@ SYM_FUNC_END(\name) ...@@ -37,15 +36,6 @@ SYM_FUNC_END(\name)
_ASM_NOKPROBE(\name) _ASM_NOKPROBE(\name)
.endm .endm
#ifdef CONFIG_TRACE_IRQFLAGS
THUNK trace_hardirqs_on_thunk,trace_hardirqs_on_caller,1
THUNK trace_hardirqs_off_thunk,trace_hardirqs_off_caller,1
#endif
#ifdef CONFIG_DEBUG_LOCK_ALLOC
THUNK lockdep_sys_exit_thunk,lockdep_sys_exit
#endif
#ifdef CONFIG_PREEMPTION #ifdef CONFIG_PREEMPTION
THUNK preempt_schedule_thunk, preempt_schedule THUNK preempt_schedule_thunk, preempt_schedule
THUNK preempt_schedule_notrace_thunk, preempt_schedule_notrace THUNK preempt_schedule_notrace_thunk, preempt_schedule_notrace
...@@ -53,9 +43,7 @@ SYM_FUNC_END(\name) ...@@ -53,9 +43,7 @@ SYM_FUNC_END(\name)
EXPORT_SYMBOL(preempt_schedule_notrace_thunk) EXPORT_SYMBOL(preempt_schedule_notrace_thunk)
#endif #endif
#if defined(CONFIG_TRACE_IRQFLAGS) \ #ifdef CONFIG_PREEMPTION
|| defined(CONFIG_DEBUG_LOCK_ALLOC) \
|| defined(CONFIG_PREEMPTION)
SYM_CODE_START_LOCAL_NOALIGN(.L_restore) SYM_CODE_START_LOCAL_NOALIGN(.L_restore)
popq %r11 popq %r11
popq %r10 popq %r10
......
...@@ -15,6 +15,7 @@ ...@@ -15,6 +15,7 @@
#include <asm/hypervisor.h> #include <asm/hypervisor.h>
#include <asm/hyperv-tlfs.h> #include <asm/hyperv-tlfs.h>
#include <asm/mshyperv.h> #include <asm/mshyperv.h>
#include <asm/idtentry.h>
#include <linux/version.h> #include <linux/version.h>
#include <linux/vmalloc.h> #include <linux/vmalloc.h>
#include <linux/mm.h> #include <linux/mm.h>
...@@ -152,15 +153,11 @@ static inline bool hv_reenlightenment_available(void) ...@@ -152,15 +153,11 @@ static inline bool hv_reenlightenment_available(void)
ms_hyperv.features & HV_X64_ACCESS_REENLIGHTENMENT; ms_hyperv.features & HV_X64_ACCESS_REENLIGHTENMENT;
} }
__visible void __irq_entry hyperv_reenlightenment_intr(struct pt_regs *regs) DEFINE_IDTENTRY_SYSVEC(sysvec_hyperv_reenlightenment)
{ {
entering_ack_irq(); ack_APIC_irq();
inc_irq_stat(irq_hv_reenlightenment_count); inc_irq_stat(irq_hv_reenlightenment_count);
schedule_delayed_work(&hv_reenlightenment_work, HZ/10); schedule_delayed_work(&hv_reenlightenment_work, HZ/10);
exiting_irq();
} }
void set_hv_tscchange_cb(void (*cb)(void)) void set_hv_tscchange_cb(void (*cb)(void))
......
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_X86_ACRN_H
#define _ASM_X86_ACRN_H
extern void acrn_hv_callback_vector(void);
#ifdef CONFIG_TRACING
#define trace_acrn_hv_callback_vector acrn_hv_callback_vector
#endif
extern void acrn_hv_vector_handler(struct pt_regs *regs);
#endif /* _ASM_X86_ACRN_H */
...@@ -519,39 +519,6 @@ static inline bool apic_id_is_primary_thread(unsigned int id) { return false; } ...@@ -519,39 +519,6 @@ static inline bool apic_id_is_primary_thread(unsigned int id) { return false; }
static inline void apic_smt_update(void) { } static inline void apic_smt_update(void) { }
#endif #endif
extern void irq_enter(void);
extern void irq_exit(void);
static inline void entering_irq(void)
{
irq_enter();
kvm_set_cpu_l1tf_flush_l1d();
}
static inline void entering_ack_irq(void)
{
entering_irq();
ack_APIC_irq();
}
static inline void ipi_entering_ack_irq(void)
{
irq_enter();
ack_APIC_irq();
kvm_set_cpu_l1tf_flush_l1d();
}
static inline void exiting_irq(void)
{
irq_exit();
}
static inline void exiting_ack_irq(void)
{
ack_APIC_irq();
irq_exit();
}
extern void ioapic_zap_locks(void); extern void ioapic_zap_locks(void);
#endif /* _ASM_X86_APIC_H */ #endif /* _ASM_X86_APIC_H */
...@@ -205,13 +205,13 @@ static __always_inline bool arch_atomic_try_cmpxchg(atomic_t *v, int *old, int n ...@@ -205,13 +205,13 @@ static __always_inline bool arch_atomic_try_cmpxchg(atomic_t *v, int *old, int n
} }
#define arch_atomic_try_cmpxchg arch_atomic_try_cmpxchg #define arch_atomic_try_cmpxchg arch_atomic_try_cmpxchg
static inline int arch_atomic_xchg(atomic_t *v, int new) static __always_inline int arch_atomic_xchg(atomic_t *v, int new)
{ {
return arch_xchg(&v->counter, new); return arch_xchg(&v->counter, new);
} }
#define arch_atomic_xchg arch_atomic_xchg #define arch_atomic_xchg arch_atomic_xchg
static inline void arch_atomic_and(int i, atomic_t *v) static __always_inline void arch_atomic_and(int i, atomic_t *v)
{ {
asm volatile(LOCK_PREFIX "andl %1,%0" asm volatile(LOCK_PREFIX "andl %1,%0"
: "+m" (v->counter) : "+m" (v->counter)
...@@ -219,7 +219,7 @@ static inline void arch_atomic_and(int i, atomic_t *v) ...@@ -219,7 +219,7 @@ static inline void arch_atomic_and(int i, atomic_t *v)
: "memory"); : "memory");
} }
static inline int arch_atomic_fetch_and(int i, atomic_t *v) static __always_inline int arch_atomic_fetch_and(int i, atomic_t *v)
{ {
int val = arch_atomic_read(v); int val = arch_atomic_read(v);
...@@ -229,7 +229,7 @@ static inline int arch_atomic_fetch_and(int i, atomic_t *v) ...@@ -229,7 +229,7 @@ static inline int arch_atomic_fetch_and(int i, atomic_t *v)
} }
#define arch_atomic_fetch_and arch_atomic_fetch_and #define arch_atomic_fetch_and arch_atomic_fetch_and
static inline void arch_atomic_or(int i, atomic_t *v) static __always_inline void arch_atomic_or(int i, atomic_t *v)
{ {
asm volatile(LOCK_PREFIX "orl %1,%0" asm volatile(LOCK_PREFIX "orl %1,%0"
: "+m" (v->counter) : "+m" (v->counter)
...@@ -237,7 +237,7 @@ static inline void arch_atomic_or(int i, atomic_t *v) ...@@ -237,7 +237,7 @@ static inline void arch_atomic_or(int i, atomic_t *v)
: "memory"); : "memory");
} }
static inline int arch_atomic_fetch_or(int i, atomic_t *v) static __always_inline int arch_atomic_fetch_or(int i, atomic_t *v)
{ {
int val = arch_atomic_read(v); int val = arch_atomic_read(v);
...@@ -247,7 +247,7 @@ static inline int arch_atomic_fetch_or(int i, atomic_t *v) ...@@ -247,7 +247,7 @@ static inline int arch_atomic_fetch_or(int i, atomic_t *v)
} }
#define arch_atomic_fetch_or arch_atomic_fetch_or #define arch_atomic_fetch_or arch_atomic_fetch_or
static inline void arch_atomic_xor(int i, atomic_t *v) static __always_inline void arch_atomic_xor(int i, atomic_t *v)
{ {
asm volatile(LOCK_PREFIX "xorl %1,%0" asm volatile(LOCK_PREFIX "xorl %1,%0"
: "+m" (v->counter) : "+m" (v->counter)
...@@ -255,7 +255,7 @@ static inline void arch_atomic_xor(int i, atomic_t *v) ...@@ -255,7 +255,7 @@ static inline void arch_atomic_xor(int i, atomic_t *v)
: "memory"); : "memory");
} }
static inline int arch_atomic_fetch_xor(int i, atomic_t *v) static __always_inline int arch_atomic_fetch_xor(int i, atomic_t *v)
{ {
int val = arch_atomic_read(v); int val = arch_atomic_read(v);
......
...@@ -70,14 +70,17 @@ do { \ ...@@ -70,14 +70,17 @@ do { \
#define HAVE_ARCH_BUG #define HAVE_ARCH_BUG
#define BUG() \ #define BUG() \
do { \ do { \
instrumentation_begin(); \
_BUG_FLAGS(ASM_UD2, 0); \ _BUG_FLAGS(ASM_UD2, 0); \
unreachable(); \ unreachable(); \
} while (0) } while (0)
#define __WARN_FLAGS(flags) \ #define __WARN_FLAGS(flags) \
do { \ do { \
instrumentation_begin(); \
_BUG_FLAGS(ASM_UD2, BUGFLAG_WARNING|(flags)); \ _BUG_FLAGS(ASM_UD2, BUGFLAG_WARNING|(flags)); \
annotate_reachable(); \ annotate_reachable(); \
instrumentation_end(); \
} while (0) } while (0)
#include <asm-generic/bug.h> #include <asm-generic/bug.h>
......
...@@ -11,15 +11,11 @@ ...@@ -11,15 +11,11 @@
#ifdef CONFIG_X86_64 #ifdef CONFIG_X86_64
/* Macro to enforce the same ordering and stack sizes */ /* Macro to enforce the same ordering and stack sizes */
#define ESTACKS_MEMBERS(guardsize, db2_holesize)\ #define ESTACKS_MEMBERS(guardsize) \
char DF_stack_guard[guardsize]; \ char DF_stack_guard[guardsize]; \
char DF_stack[EXCEPTION_STKSZ]; \ char DF_stack[EXCEPTION_STKSZ]; \
char NMI_stack_guard[guardsize]; \ char NMI_stack_guard[guardsize]; \
char NMI_stack[EXCEPTION_STKSZ]; \ char NMI_stack[EXCEPTION_STKSZ]; \
char DB2_stack_guard[guardsize]; \
char DB2_stack[db2_holesize]; \
char DB1_stack_guard[guardsize]; \
char DB1_stack[EXCEPTION_STKSZ]; \
char DB_stack_guard[guardsize]; \ char DB_stack_guard[guardsize]; \
char DB_stack[EXCEPTION_STKSZ]; \ char DB_stack[EXCEPTION_STKSZ]; \
char MCE_stack_guard[guardsize]; \ char MCE_stack_guard[guardsize]; \
...@@ -28,12 +24,12 @@ ...@@ -28,12 +24,12 @@
/* The exception stacks' physical storage. No guard pages required */ /* The exception stacks' physical storage. No guard pages required */
struct exception_stacks { struct exception_stacks {
ESTACKS_MEMBERS(0, 0) ESTACKS_MEMBERS(0)
}; };
/* The effective cpu entry area mapping with guard pages. */ /* The effective cpu entry area mapping with guard pages. */
struct cea_exception_stacks { struct cea_exception_stacks {
ESTACKS_MEMBERS(PAGE_SIZE, EXCEPTION_STKSZ) ESTACKS_MEMBERS(PAGE_SIZE)
}; };
/* /*
...@@ -42,8 +38,6 @@ struct cea_exception_stacks { ...@@ -42,8 +38,6 @@ struct cea_exception_stacks {
enum exception_stack_ordering { enum exception_stack_ordering {
ESTACK_DF, ESTACK_DF,
ESTACK_NMI, ESTACK_NMI,
ESTACK_DB2,
ESTACK_DB1,
ESTACK_DB, ESTACK_DB,
ESTACK_MCE, ESTACK_MCE,
N_EXCEPTION_STACKS N_EXCEPTION_STACKS
......
...@@ -18,7 +18,7 @@ DECLARE_PER_CPU(unsigned long, cpu_dr7); ...@@ -18,7 +18,7 @@ DECLARE_PER_CPU(unsigned long, cpu_dr7);
native_set_debugreg(register, value) native_set_debugreg(register, value)
#endif #endif
static inline unsigned long native_get_debugreg(int regno) static __always_inline unsigned long native_get_debugreg(int regno)
{ {
unsigned long val = 0; /* Damn you, gcc! */ unsigned long val = 0; /* Damn you, gcc! */
...@@ -47,7 +47,7 @@ static inline unsigned long native_get_debugreg(int regno) ...@@ -47,7 +47,7 @@ static inline unsigned long native_get_debugreg(int regno)
return val; return val;
} }
static inline void native_set_debugreg(int regno, unsigned long value) static __always_inline void native_set_debugreg(int regno, unsigned long value)
{ {
switch (regno) { switch (regno) {
case 0: case 0:
...@@ -85,7 +85,7 @@ static inline void hw_breakpoint_disable(void) ...@@ -85,7 +85,7 @@ static inline void hw_breakpoint_disable(void)
set_debugreg(0UL, 3); set_debugreg(0UL, 3);
} }
static inline int hw_breakpoint_active(void) static __always_inline bool hw_breakpoint_active(void)
{ {
return __this_cpu_read(cpu_dr7) & DR_GLOBAL_ENABLE_MASK; return __this_cpu_read(cpu_dr7) & DR_GLOBAL_ENABLE_MASK;
} }
...@@ -94,24 +94,38 @@ extern void aout_dump_debugregs(struct user *dump); ...@@ -94,24 +94,38 @@ extern void aout_dump_debugregs(struct user *dump);
extern void hw_breakpoint_restore(void); extern void hw_breakpoint_restore(void);
#ifdef CONFIG_X86_64 static __always_inline unsigned long local_db_save(void)
DECLARE_PER_CPU(int, debug_stack_usage);
static inline void debug_stack_usage_inc(void)
{ {
__this_cpu_inc(debug_stack_usage); unsigned long dr7;
if (static_cpu_has(X86_FEATURE_HYPERVISOR) && !hw_breakpoint_active())
return 0;
get_debugreg(dr7, 7);
dr7 &= ~0x400; /* architecturally set bit */
if (dr7)
set_debugreg(0, 7);
/*
* Ensure the compiler doesn't lower the above statements into
* the critical section; disabling breakpoints late would not
* be good.
*/
barrier();
return dr7;
} }
static inline void debug_stack_usage_dec(void)
static __always_inline void local_db_restore(unsigned long dr7)
{ {
__this_cpu_dec(debug_stack_usage); /*
* Ensure the compiler doesn't raise this statement into
* the critical section; enabling breakpoints early would
* not be good.
*/
barrier();
if (dr7)
set_debugreg(dr7, 7);
} }
void debug_stack_set_zero(void);
void debug_stack_reset(void);
#else /* !X86_64 */
static inline void debug_stack_set_zero(void) { }
static inline void debug_stack_reset(void) { }
static inline void debug_stack_usage_inc(void) { }
static inline void debug_stack_usage_dec(void) { }
#endif /* X86_64 */
#ifdef CONFIG_CPU_SUP_AMD #ifdef CONFIG_CPU_SUP_AMD
extern void set_dr_addr_mask(unsigned long mask, int dr); extern void set_dr_addr_mask(unsigned long mask, int dr);
......
...@@ -40,11 +40,6 @@ static inline void fill_ldt(struct desc_struct *desc, const struct user_desc *in ...@@ -40,11 +40,6 @@ static inline void fill_ldt(struct desc_struct *desc, const struct user_desc *in
desc->l = 0; desc->l = 0;
} }
extern struct desc_ptr idt_descr;
extern gate_desc idt_table[];
extern const struct desc_ptr debug_idt_descr;
extern gate_desc debug_idt_table[];
struct gdt_page { struct gdt_page {
struct desc_struct gdt[GDT_ENTRIES]; struct desc_struct gdt[GDT_ENTRIES];
} __attribute__((aligned(PAGE_SIZE))); } __attribute__((aligned(PAGE_SIZE)));
...@@ -214,7 +209,7 @@ static inline void native_load_gdt(const struct desc_ptr *dtr) ...@@ -214,7 +209,7 @@ static inline void native_load_gdt(const struct desc_ptr *dtr)
asm volatile("lgdt %0"::"m" (*dtr)); asm volatile("lgdt %0"::"m" (*dtr));
} }
static inline void native_load_idt(const struct desc_ptr *dtr) static __always_inline void native_load_idt(const struct desc_ptr *dtr)
{ {
asm volatile("lidt %0"::"m" (*dtr)); asm volatile("lidt %0"::"m" (*dtr));
} }
...@@ -386,64 +381,23 @@ static inline void set_desc_limit(struct desc_struct *desc, unsigned long limit) ...@@ -386,64 +381,23 @@ static inline void set_desc_limit(struct desc_struct *desc, unsigned long limit)
desc->limit1 = (limit >> 16) & 0xf; desc->limit1 = (limit >> 16) & 0xf;
} }
void update_intr_gate(unsigned int n, const void *addr);
void alloc_intr_gate(unsigned int n, const void *addr); void alloc_intr_gate(unsigned int n, const void *addr);
extern unsigned long system_vectors[]; extern unsigned long system_vectors[];
#ifdef CONFIG_X86_64 extern void load_current_idt(void);
DECLARE_PER_CPU(u32, debug_idt_ctr);
static inline bool is_debug_idt_enabled(void)
{
if (this_cpu_read(debug_idt_ctr))
return true;
return false;
}
static inline void load_debug_idt(void)
{
load_idt((const struct desc_ptr *)&debug_idt_descr);
}
#else
static inline bool is_debug_idt_enabled(void)
{
return false;
}
static inline void load_debug_idt(void)
{
}
#endif
/*
* The load_current_idt() must be called with interrupts disabled
* to avoid races. That way the IDT will always be set back to the expected
* descriptor. It's also called when a CPU is being initialized, and
* that doesn't need to disable interrupts, as nothing should be
* bothering the CPU then.
*/
static inline void load_current_idt(void)
{
if (is_debug_idt_enabled())
load_debug_idt();
else
load_idt((const struct desc_ptr *)&idt_descr);
}
extern void idt_setup_early_handler(void); extern void idt_setup_early_handler(void);
extern void idt_setup_early_traps(void); extern void idt_setup_early_traps(void);
extern void idt_setup_traps(void); extern void idt_setup_traps(void);
extern void idt_setup_apic_and_irq_gates(void); extern void idt_setup_apic_and_irq_gates(void);
extern bool idt_is_f00f_address(unsigned long address);
#ifdef CONFIG_X86_64 #ifdef CONFIG_X86_64
extern void idt_setup_early_pf(void); extern void idt_setup_early_pf(void);
extern void idt_setup_ist_traps(void); extern void idt_setup_ist_traps(void);
extern void idt_setup_debugidt_traps(void);
#else #else
static inline void idt_setup_early_pf(void) { } static inline void idt_setup_early_pf(void) { }
static inline void idt_setup_ist_traps(void) { } static inline void idt_setup_ist_traps(void) { }
static inline void idt_setup_debugidt_traps(void) { }
#endif #endif
extern void idt_invalidate(void *addr); extern void idt_invalidate(void *addr);
......
/* SPDX-License-Identifier: GPL-2.0 */
/*
* This file is designed to contain the BUILD_INTERRUPT specifications for
* all of the extra named interrupt vectors used by the architecture.
* Usually this is the Inter Process Interrupts (IPIs)
*/
/*
* The following vectors are part of the Linux architecture, there
* is no hardware IRQ pin equivalent for them, they are triggered
* through the ICC by us (IPIs)
*/
#ifdef CONFIG_SMP
BUILD_INTERRUPT(reschedule_interrupt,RESCHEDULE_VECTOR)
BUILD_INTERRUPT(call_function_interrupt,CALL_FUNCTION_VECTOR)
BUILD_INTERRUPT(call_function_single_interrupt,CALL_FUNCTION_SINGLE_VECTOR)
BUILD_INTERRUPT(irq_move_cleanup_interrupt, IRQ_MOVE_CLEANUP_VECTOR)
BUILD_INTERRUPT(reboot_interrupt, REBOOT_VECTOR)
#endif
#ifdef CONFIG_HAVE_KVM
BUILD_INTERRUPT(kvm_posted_intr_ipi, POSTED_INTR_VECTOR)
BUILD_INTERRUPT(kvm_posted_intr_wakeup_ipi, POSTED_INTR_WAKEUP_VECTOR)
BUILD_INTERRUPT(kvm_posted_intr_nested_ipi, POSTED_INTR_NESTED_VECTOR)
#endif
/*
* every pentium local APIC has two 'local interrupts', with a
* soft-definable vector attached to both interrupts, one of
* which is a timer interrupt, the other one is error counter
* overflow. Linux uses the local APIC timer interrupt to get
* a much simpler SMP time architecture:
*/
#ifdef CONFIG_X86_LOCAL_APIC
BUILD_INTERRUPT(apic_timer_interrupt,LOCAL_TIMER_VECTOR)
BUILD_INTERRUPT(error_interrupt,ERROR_APIC_VECTOR)
BUILD_INTERRUPT(spurious_interrupt,SPURIOUS_APIC_VECTOR)
BUILD_INTERRUPT(x86_platform_ipi, X86_PLATFORM_IPI_VECTOR)
#ifdef CONFIG_IRQ_WORK
BUILD_INTERRUPT(irq_work_interrupt, IRQ_WORK_VECTOR)
#endif
#ifdef CONFIG_X86_THERMAL_VECTOR
BUILD_INTERRUPT(thermal_interrupt,THERMAL_APIC_VECTOR)
#endif
#ifdef CONFIG_X86_MCE_THRESHOLD
BUILD_INTERRUPT(threshold_interrupt,THRESHOLD_APIC_VECTOR)
#endif
#ifdef CONFIG_X86_MCE_AMD
BUILD_INTERRUPT(deferred_error_interrupt, DEFERRED_ERROR_VECTOR)
#endif
#endif
...@@ -28,28 +28,6 @@ ...@@ -28,28 +28,6 @@
#include <asm/irq.h> #include <asm/irq.h>
#include <asm/sections.h> #include <asm/sections.h>
/* Interrupt handlers registered during init_IRQ */
extern asmlinkage void apic_timer_interrupt(void);
extern asmlinkage void x86_platform_ipi(void);
extern asmlinkage void kvm_posted_intr_ipi(void);
extern asmlinkage void kvm_posted_intr_wakeup_ipi(void);
extern asmlinkage void kvm_posted_intr_nested_ipi(void);
extern asmlinkage void error_interrupt(void);
extern asmlinkage void irq_work_interrupt(void);
extern asmlinkage void uv_bau_message_intr1(void);
extern asmlinkage void spurious_interrupt(void);
extern asmlinkage void thermal_interrupt(void);
extern asmlinkage void reschedule_interrupt(void);
extern asmlinkage void irq_move_cleanup_interrupt(void);
extern asmlinkage void reboot_interrupt(void);
extern asmlinkage void threshold_interrupt(void);
extern asmlinkage void deferred_error_interrupt(void);
extern asmlinkage void call_function_interrupt(void);
extern asmlinkage void call_function_single_interrupt(void);
#ifdef CONFIG_X86_LOCAL_APIC #ifdef CONFIG_X86_LOCAL_APIC
struct irq_data; struct irq_data;
struct pci_dev; struct pci_dev;
......
...@@ -11,6 +11,13 @@ ...@@ -11,6 +11,13 @@
#include <asm/apicdef.h> #include <asm/apicdef.h>
#include <asm/irq_vectors.h> #include <asm/irq_vectors.h>
/*
* The irq entry code is in the noinstr section and the start/end of
* __irqentry_text is emitted via labels. Make the build fail if
* something moves a C function into the __irq_entry section.
*/
#define __irq_entry __invalid_section
static inline int irq_canonicalize(int irq) static inline int irq_canonicalize(int irq)
{ {
return ((irq == 2) ? 9 : irq); return ((irq == 2) ? 9 : irq);
...@@ -26,17 +33,14 @@ extern void fixup_irqs(void); ...@@ -26,17 +33,14 @@ extern void fixup_irqs(void);
#ifdef CONFIG_HAVE_KVM #ifdef CONFIG_HAVE_KVM
extern void kvm_set_posted_intr_wakeup_handler(void (*handler)(void)); extern void kvm_set_posted_intr_wakeup_handler(void (*handler)(void));
extern __visible void smp_kvm_posted_intr_ipi(struct pt_regs *regs);
extern __visible void smp_kvm_posted_intr_wakeup_ipi(struct pt_regs *regs);
extern __visible void smp_kvm_posted_intr_nested_ipi(struct pt_regs *regs);
#endif #endif
extern void (*x86_platform_ipi_callback)(void); extern void (*x86_platform_ipi_callback)(void);
extern void native_init_IRQ(void); extern void native_init_IRQ(void);
extern void handle_irq(struct irq_desc *desc, struct pt_regs *regs); extern void __handle_irq(struct irq_desc *desc, struct pt_regs *regs);
extern __visible void do_IRQ(struct pt_regs *regs); extern __visible void do_IRQ(struct pt_regs *regs, unsigned long vector);
extern void init_ISA_irqs(void); extern void init_ISA_irqs(void);
...@@ -46,7 +50,6 @@ extern void __init init_IRQ(void); ...@@ -46,7 +50,6 @@ extern void __init init_IRQ(void);
void arch_trigger_cpumask_backtrace(const struct cpumask *mask, void arch_trigger_cpumask_backtrace(const struct cpumask *mask,
bool exclude_self); bool exclude_self);
extern __visible void smp_x86_platform_ipi(struct pt_regs *regs);
#define arch_trigger_cpumask_backtrace arch_trigger_cpumask_backtrace #define arch_trigger_cpumask_backtrace arch_trigger_cpumask_backtrace
#endif #endif
......
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Per-cpu current frame pointer - the location of the last exception frame on
* the stack, stored in the per-cpu area.
*
* Jeremy Fitzhardinge <jeremy@goop.org>
*/
#ifndef _ASM_X86_IRQ_REGS_H
#define _ASM_X86_IRQ_REGS_H
#include <asm/percpu.h>
#define ARCH_HAS_OWN_IRQ_REGS
DECLARE_PER_CPU(struct pt_regs *, irq_regs);
static inline struct pt_regs *get_irq_regs(void)
{
return __this_cpu_read(irq_regs);
}
static inline struct pt_regs *set_irq_regs(struct pt_regs *new_regs)
{
struct pt_regs *old_regs;
old_regs = get_irq_regs();
__this_cpu_write(irq_regs, new_regs);
return old_regs;
}
#endif /* _ASM_X86_IRQ_REGS_32_H */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_X86_IRQ_STACK_H
#define _ASM_X86_IRQ_STACK_H
#include <linux/ptrace.h>
#include <asm/processor.h>
#ifdef CONFIG_X86_64
static __always_inline bool irqstack_active(void)
{
return __this_cpu_read(irq_count) != -1;
}
void asm_call_on_stack(void *sp, void *func, void *arg);
static __always_inline void __run_on_irqstack(void *func, void *arg)
{
void *tos = __this_cpu_read(hardirq_stack_ptr);
__this_cpu_add(irq_count, 1);
asm_call_on_stack(tos - 8, func, arg);
__this_cpu_sub(irq_count, 1);
}
#else /* CONFIG_X86_64 */
static inline bool irqstack_active(void) { return false; }
static inline void __run_on_irqstack(void *func, void *arg) { }
#endif /* !CONFIG_X86_64 */
static __always_inline bool irq_needs_irq_stack(struct pt_regs *regs)
{
if (IS_ENABLED(CONFIG_X86_32))
return false;
if (!regs)
return !irqstack_active();
return !user_mode(regs) && !irqstack_active();
}
static __always_inline void run_on_irqstack_cond(void *func, void *arg,
struct pt_regs *regs)
{
void (*__func)(void *arg) = func;
lockdep_assert_irqs_disabled();
if (irq_needs_irq_stack(regs))
__run_on_irqstack(__func, arg);
else
__func(arg);
}
#endif
...@@ -10,7 +10,6 @@ static inline bool arch_irq_work_has_interrupt(void) ...@@ -10,7 +10,6 @@ static inline bool arch_irq_work_has_interrupt(void)
return boot_cpu_has(X86_FEATURE_APIC); return boot_cpu_has(X86_FEATURE_APIC);
} }
extern void arch_irq_work_raise(void); extern void arch_irq_work_raise(void);
extern __visible void smp_irq_work_interrupt(struct pt_regs *regs);
#else #else
static inline bool arch_irq_work_has_interrupt(void) static inline bool arch_irq_work_has_interrupt(void)
{ {
......
...@@ -17,7 +17,7 @@ ...@@ -17,7 +17,7 @@
/* Declaration required for gcc < 4.9 to prevent -Werror=missing-prototypes */ /* Declaration required for gcc < 4.9 to prevent -Werror=missing-prototypes */
extern inline unsigned long native_save_fl(void); extern inline unsigned long native_save_fl(void);
extern inline unsigned long native_save_fl(void) extern __always_inline unsigned long native_save_fl(void)
{ {
unsigned long flags; unsigned long flags;
...@@ -44,12 +44,12 @@ extern inline void native_restore_fl(unsigned long flags) ...@@ -44,12 +44,12 @@ extern inline void native_restore_fl(unsigned long flags)
:"memory", "cc"); :"memory", "cc");
} }
static inline void native_irq_disable(void) static __always_inline void native_irq_disable(void)
{ {
asm volatile("cli": : :"memory"); asm volatile("cli": : :"memory");
} }
static inline void native_irq_enable(void) static __always_inline void native_irq_enable(void)
{ {
asm volatile("sti": : :"memory"); asm volatile("sti": : :"memory");
} }
...@@ -74,22 +74,22 @@ static inline __cpuidle void native_halt(void) ...@@ -74,22 +74,22 @@ static inline __cpuidle void native_halt(void)
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
#include <linux/types.h> #include <linux/types.h>
static inline notrace unsigned long arch_local_save_flags(void) static __always_inline unsigned long arch_local_save_flags(void)
{ {
return native_save_fl(); return native_save_fl();
} }
static inline notrace void arch_local_irq_restore(unsigned long flags) static __always_inline void arch_local_irq_restore(unsigned long flags)
{ {
native_restore_fl(flags); native_restore_fl(flags);
} }
static inline notrace void arch_local_irq_disable(void) static __always_inline void arch_local_irq_disable(void)
{ {
native_irq_disable(); native_irq_disable();
} }
static inline notrace void arch_local_irq_enable(void) static __always_inline void arch_local_irq_enable(void)
{ {
native_irq_enable(); native_irq_enable();
} }
...@@ -115,7 +115,7 @@ static inline __cpuidle void halt(void) ...@@ -115,7 +115,7 @@ static inline __cpuidle void halt(void)
/* /*
* For spinlocks, etc: * For spinlocks, etc:
*/ */
static inline notrace unsigned long arch_local_irq_save(void) static __always_inline unsigned long arch_local_irq_save(void)
{ {
unsigned long flags = arch_local_save_flags(); unsigned long flags = arch_local_save_flags();
arch_local_irq_disable(); arch_local_irq_disable();
...@@ -159,12 +159,12 @@ static inline notrace unsigned long arch_local_irq_save(void) ...@@ -159,12 +159,12 @@ static inline notrace unsigned long arch_local_irq_save(void)
#endif /* CONFIG_PARAVIRT_XXL */ #endif /* CONFIG_PARAVIRT_XXL */
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
static inline int arch_irqs_disabled_flags(unsigned long flags) static __always_inline int arch_irqs_disabled_flags(unsigned long flags)
{ {
return !(flags & X86_EFLAGS_IF); return !(flags & X86_EFLAGS_IF);
} }
static inline int arch_irqs_disabled(void) static __always_inline int arch_irqs_disabled(void)
{ {
unsigned long flags = arch_local_save_flags(); unsigned long flags = arch_local_save_flags();
...@@ -172,38 +172,4 @@ static inline int arch_irqs_disabled(void) ...@@ -172,38 +172,4 @@ static inline int arch_irqs_disabled(void)
} }
#endif /* !__ASSEMBLY__ */ #endif /* !__ASSEMBLY__ */
#ifdef __ASSEMBLY__
#ifdef CONFIG_TRACE_IRQFLAGS
# define TRACE_IRQS_ON call trace_hardirqs_on_thunk;
# define TRACE_IRQS_OFF call trace_hardirqs_off_thunk;
#else
# define TRACE_IRQS_ON
# define TRACE_IRQS_OFF
#endif
#ifdef CONFIG_DEBUG_LOCK_ALLOC
# ifdef CONFIG_X86_64
# define LOCKDEP_SYS_EXIT call lockdep_sys_exit_thunk
# define LOCKDEP_SYS_EXIT_IRQ \
TRACE_IRQS_ON; \
sti; \
call lockdep_sys_exit_thunk; \
cli; \
TRACE_IRQS_OFF;
# else
# define LOCKDEP_SYS_EXIT \
pushl %eax; \
pushl %ecx; \
pushl %edx; \
call lockdep_sys_exit; \
popl %edx; \
popl %ecx; \
popl %eax;
# define LOCKDEP_SYS_EXIT_IRQ
# endif
#else
# define LOCKDEP_SYS_EXIT
# define LOCKDEP_SYS_EXIT_IRQ
#endif
#endif /* __ASSEMBLY__ */
#endif #endif
...@@ -141,7 +141,7 @@ static inline void kvm_disable_steal_time(void) ...@@ -141,7 +141,7 @@ static inline void kvm_disable_steal_time(void)
return; return;
} }
static inline bool kvm_handle_async_pf(struct pt_regs *regs, u32 token) static __always_inline bool kvm_handle_async_pf(struct pt_regs *regs, u32 token)
{ {
return false; return false;
} }
......
...@@ -238,7 +238,7 @@ extern void mce_disable_bank(int bank); ...@@ -238,7 +238,7 @@ extern void mce_disable_bank(int bank);
/* /*
* Exception handler * Exception handler
*/ */
void do_machine_check(struct pt_regs *, long); void do_machine_check(struct pt_regs *pt_regs);
/* /*
* Threshold handler * Threshold handler
......
...@@ -54,20 +54,8 @@ typedef int (*hyperv_fill_flush_list_func)( ...@@ -54,20 +54,8 @@ typedef int (*hyperv_fill_flush_list_func)(
vclocks_set_used(VDSO_CLOCKMODE_HVCLOCK); vclocks_set_used(VDSO_CLOCKMODE_HVCLOCK);
#define hv_get_raw_timer() rdtsc_ordered() #define hv_get_raw_timer() rdtsc_ordered()
void hyperv_callback_vector(void);
void hyperv_reenlightenment_vector(void);
#ifdef CONFIG_TRACING
#define trace_hyperv_callback_vector hyperv_callback_vector
#endif
void hyperv_vector_handler(struct pt_regs *regs); void hyperv_vector_handler(struct pt_regs *regs);
/*
* Routines for stimer0 Direct Mode handling.
* On x86/x64, there are no percpu actions to take.
*/
void hv_stimer0_vector_handler(struct pt_regs *regs);
void hv_stimer0_callback_vector(void);
static inline void hv_enable_stimer0_percpu_irq(int irq) {} static inline void hv_enable_stimer0_percpu_irq(int irq) {}
static inline void hv_disable_stimer0_percpu_irq(int irq) {} static inline void hv_disable_stimer0_percpu_irq(int irq) {}
...@@ -226,7 +214,6 @@ void hyperv_setup_mmu_ops(void); ...@@ -226,7 +214,6 @@ void hyperv_setup_mmu_ops(void);
void *hv_alloc_hyperv_page(void); void *hv_alloc_hyperv_page(void);
void *hv_alloc_hyperv_zeroed_page(void); void *hv_alloc_hyperv_zeroed_page(void);
void hv_free_hyperv_page(unsigned long addr); void hv_free_hyperv_page(unsigned long addr);
void hyperv_reenlightenment_intr(struct pt_regs *regs);
void set_hv_tscchange_cb(void (*cb)(void)); void set_hv_tscchange_cb(void (*cb)(void));
void clear_hv_tscchange_cb(void); void clear_hv_tscchange_cb(void);
void hyperv_stop_tsc_emulation(void); void hyperv_stop_tsc_emulation(void);
......
...@@ -262,7 +262,7 @@ DECLARE_STATIC_KEY_FALSE(mds_idle_clear); ...@@ -262,7 +262,7 @@ DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
* combination with microcode which triggers a CPU buffer flush when the * combination with microcode which triggers a CPU buffer flush when the
* instruction is executed. * instruction is executed.
*/ */
static inline void mds_clear_cpu_buffers(void) static __always_inline void mds_clear_cpu_buffers(void)
{ {
static const u16 ds = __KERNEL_DS; static const u16 ds = __KERNEL_DS;
...@@ -283,7 +283,7 @@ static inline void mds_clear_cpu_buffers(void) ...@@ -283,7 +283,7 @@ static inline void mds_clear_cpu_buffers(void)
* *
* Clear CPU buffers if the corresponding static key is enabled * Clear CPU buffers if the corresponding static key is enabled
*/ */
static inline void mds_user_clear_cpu_buffers(void) static __always_inline void mds_user_clear_cpu_buffers(void)
{ {
if (static_branch_likely(&mds_user_clear)) if (static_branch_likely(&mds_user_clear))
mds_clear_cpu_buffers(); mds_clear_cpu_buffers();
......
...@@ -823,7 +823,7 @@ static inline void prefetch(const void *x) ...@@ -823,7 +823,7 @@ static inline void prefetch(const void *x)
* Useful for spinlocks to avoid one state transition in the * Useful for spinlocks to avoid one state transition in the
* cache coherency protocol: * cache coherency protocol:
*/ */
static inline void prefetchw(const void *x) static __always_inline void prefetchw(const void *x)
{ {
alternative_input(BASE_PREFETCH, "prefetchw %P1", alternative_input(BASE_PREFETCH, "prefetchw %P1",
X86_FEATURE_3DNOWPREFETCH, X86_FEATURE_3DNOWPREFETCH,
......
...@@ -123,7 +123,7 @@ static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc) ...@@ -123,7 +123,7 @@ static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
* On x86_64, vm86 mode is mercifully nonexistent, and we don't need * On x86_64, vm86 mode is mercifully nonexistent, and we don't need
* the extra check. * the extra check.
*/ */
static inline int user_mode(struct pt_regs *regs) static __always_inline int user_mode(struct pt_regs *regs)
{ {
#ifdef CONFIG_X86_32 #ifdef CONFIG_X86_32
return ((regs->cs & SEGMENT_RPL_MASK) | (regs->flags & X86_VM_MASK)) >= USER_RPL; return ((regs->cs & SEGMENT_RPL_MASK) | (regs->flags & X86_VM_MASK)) >= USER_RPL;
......
...@@ -7,6 +7,7 @@ ...@@ -7,6 +7,7 @@
#include <asm/nops.h> #include <asm/nops.h>
#include <asm/processor-flags.h> #include <asm/processor-flags.h>
#include <linux/irqflags.h>
#include <linux/jump_label.h> #include <linux/jump_label.h>
/* /*
...@@ -27,14 +28,14 @@ static inline unsigned long native_read_cr0(void) ...@@ -27,14 +28,14 @@ static inline unsigned long native_read_cr0(void)
return val; return val;
} }
static inline unsigned long native_read_cr2(void) static __always_inline unsigned long native_read_cr2(void)
{ {
unsigned long val; unsigned long val;
asm volatile("mov %%cr2,%0\n\t" : "=r" (val), "=m" (__force_order)); asm volatile("mov %%cr2,%0\n\t" : "=r" (val), "=m" (__force_order));
return val; return val;
} }
static inline void native_write_cr2(unsigned long val) static __always_inline void native_write_cr2(unsigned long val)
{ {
asm volatile("mov %0,%%cr2": : "r" (val), "m" (__force_order)); asm volatile("mov %0,%%cr2": : "r" (val), "m" (__force_order));
} }
...@@ -129,7 +130,16 @@ static inline void native_wbinvd(void) ...@@ -129,7 +130,16 @@ static inline void native_wbinvd(void)
asm volatile("wbinvd": : :"memory"); asm volatile("wbinvd": : :"memory");
} }
extern asmlinkage void native_load_gs_index(unsigned); extern asmlinkage void asm_load_gs_index(unsigned int selector);
static inline void native_load_gs_index(unsigned int selector)
{
unsigned long flags;
local_irq_save(flags);
asm_load_gs_index(selector);
local_irq_restore(flags);
}
static inline unsigned long __read_cr4(void) static inline unsigned long __read_cr4(void)
{ {
...@@ -150,12 +160,12 @@ static inline void write_cr0(unsigned long x) ...@@ -150,12 +160,12 @@ static inline void write_cr0(unsigned long x)
native_write_cr0(x); native_write_cr0(x);
} }
static inline unsigned long read_cr2(void) static __always_inline unsigned long read_cr2(void)
{ {
return native_read_cr2(); return native_read_cr2();
} }
static inline void write_cr2(unsigned long x) static __always_inline void write_cr2(unsigned long x)
{ {
native_write_cr2(x); native_write_cr2(x);
} }
...@@ -186,7 +196,7 @@ static inline void wbinvd(void) ...@@ -186,7 +196,7 @@ static inline void wbinvd(void)
#ifdef CONFIG_X86_64 #ifdef CONFIG_X86_64
static inline void load_gs_index(unsigned selector) static inline void load_gs_index(unsigned int selector)
{ {
native_load_gs_index(selector); native_load_gs_index(selector);
} }
......
...@@ -64,7 +64,7 @@ extern void text_poke_finish(void); ...@@ -64,7 +64,7 @@ extern void text_poke_finish(void);
#define DISP32_SIZE 4 #define DISP32_SIZE 4
static inline int text_opcode_size(u8 opcode) static __always_inline int text_opcode_size(u8 opcode)
{ {
int size = 0; int size = 0;
...@@ -118,12 +118,14 @@ extern __ro_after_init struct mm_struct *poking_mm; ...@@ -118,12 +118,14 @@ extern __ro_after_init struct mm_struct *poking_mm;
extern __ro_after_init unsigned long poking_addr; extern __ro_after_init unsigned long poking_addr;
#ifndef CONFIG_UML_X86 #ifndef CONFIG_UML_X86
static inline void int3_emulate_jmp(struct pt_regs *regs, unsigned long ip) static __always_inline
void int3_emulate_jmp(struct pt_regs *regs, unsigned long ip)
{ {
regs->ip = ip; regs->ip = ip;
} }
static inline void int3_emulate_push(struct pt_regs *regs, unsigned long val) static __always_inline
void int3_emulate_push(struct pt_regs *regs, unsigned long val)
{ {
/* /*
* The int3 handler in entry_64.S adds a gap between the * The int3 handler in entry_64.S adds a gap between the
...@@ -138,7 +140,8 @@ static inline void int3_emulate_push(struct pt_regs *regs, unsigned long val) ...@@ -138,7 +140,8 @@ static inline void int3_emulate_push(struct pt_regs *regs, unsigned long val)
*(unsigned long *)regs->sp = val; *(unsigned long *)regs->sp = val;
} }
static inline void int3_emulate_call(struct pt_regs *regs, unsigned long func) static __always_inline
void int3_emulate_call(struct pt_regs *regs, unsigned long func)
{ {
int3_emulate_push(regs, regs->ip - INT3_INSN_SIZE + CALL_INSN_SIZE); int3_emulate_push(regs, regs->ip - INT3_INSN_SIZE + CALL_INSN_SIZE);
int3_emulate_jmp(regs, func); int3_emulate_jmp(regs, func);
......
...@@ -5,12 +5,8 @@ ...@@ -5,12 +5,8 @@
DECLARE_STATIC_KEY_FALSE(trace_pagefault_key); DECLARE_STATIC_KEY_FALSE(trace_pagefault_key);
#define trace_pagefault_enabled() \ #define trace_pagefault_enabled() \
static_branch_unlikely(&trace_pagefault_key) static_branch_unlikely(&trace_pagefault_key)
DECLARE_STATIC_KEY_FALSE(trace_resched_ipi_key);
#define trace_resched_ipi_enabled() \
static_branch_unlikely(&trace_resched_ipi_key)
#else #else
static inline bool trace_pagefault_enabled(void) { return false; } static inline bool trace_pagefault_enabled(void) { return false; }
static inline bool trace_resched_ipi_enabled(void) { return false; }
#endif #endif
#endif #endif
...@@ -10,9 +10,6 @@ ...@@ -10,9 +10,6 @@
#ifdef CONFIG_X86_LOCAL_APIC #ifdef CONFIG_X86_LOCAL_APIC
extern int trace_resched_ipi_reg(void);
extern void trace_resched_ipi_unreg(void);
DECLARE_EVENT_CLASS(x86_irq_vector, DECLARE_EVENT_CLASS(x86_irq_vector,
TP_PROTO(int vector), TP_PROTO(int vector),
...@@ -37,18 +34,6 @@ DEFINE_EVENT_FN(x86_irq_vector, name##_exit, \ ...@@ -37,18 +34,6 @@ DEFINE_EVENT_FN(x86_irq_vector, name##_exit, \
TP_PROTO(int vector), \ TP_PROTO(int vector), \
TP_ARGS(vector), NULL, NULL); TP_ARGS(vector), NULL, NULL);
#define DEFINE_RESCHED_IPI_EVENT(name) \
DEFINE_EVENT_FN(x86_irq_vector, name##_entry, \
TP_PROTO(int vector), \
TP_ARGS(vector), \
trace_resched_ipi_reg, \
trace_resched_ipi_unreg); \
DEFINE_EVENT_FN(x86_irq_vector, name##_exit, \
TP_PROTO(int vector), \
TP_ARGS(vector), \
trace_resched_ipi_reg, \
trace_resched_ipi_unreg);
/* /*
* local_timer - called when entering/exiting a local timer interrupt * local_timer - called when entering/exiting a local timer interrupt
* vector handler * vector handler
...@@ -99,7 +84,7 @@ TRACE_EVENT_PERF_PERM(irq_work_exit, is_sampling_event(p_event) ? -EPERM : 0); ...@@ -99,7 +84,7 @@ TRACE_EVENT_PERF_PERM(irq_work_exit, is_sampling_event(p_event) ? -EPERM : 0);
/* /*
* reschedule - called when entering/exiting a reschedule vector handler * reschedule - called when entering/exiting a reschedule vector handler
*/ */
DEFINE_RESCHED_IPI_EVENT(reschedule); DEFINE_IRQ_VECTOR_EVENT(reschedule);
/* /*
* call_function - called when entering/exiting a call function interrupt * call_function - called when entering/exiting a call function interrupt
......
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_X86_TRAPNR_H
#define _ASM_X86_TRAPNR_H
/* Interrupts/Exceptions */
#define X86_TRAP_DE 0 /* Divide-by-zero */
#define X86_TRAP_DB 1 /* Debug */
#define X86_TRAP_NMI 2 /* Non-maskable Interrupt */
#define X86_TRAP_BP 3 /* Breakpoint */
#define X86_TRAP_OF 4 /* Overflow */
#define X86_TRAP_BR 5 /* Bound Range Exceeded */
#define X86_TRAP_UD 6 /* Invalid Opcode */
#define X86_TRAP_NM 7 /* Device Not Available */
#define X86_TRAP_DF 8 /* Double Fault */
#define X86_TRAP_OLD_MF 9 /* Coprocessor Segment Overrun */
#define X86_TRAP_TS 10 /* Invalid TSS */
#define X86_TRAP_NP 11 /* Segment Not Present */
#define X86_TRAP_SS 12 /* Stack Segment Fault */
#define X86_TRAP_GP 13 /* General Protection Fault */
#define X86_TRAP_PF 14 /* Page Fault */
#define X86_TRAP_SPURIOUS 15 /* Spurious Interrupt */
#define X86_TRAP_MF 16 /* x87 Floating-Point Exception */
#define X86_TRAP_AC 17 /* Alignment Check */
#define X86_TRAP_MC 18 /* Machine Check */
#define X86_TRAP_XF 19 /* SIMD Floating-Point Exception */
#define X86_TRAP_VE 20 /* Virtualization Exception */
#define X86_TRAP_CP 21 /* Control Protection Exception */
#define X86_TRAP_IRET 32 /* IRET Exception */
#endif
...@@ -6,85 +6,9 @@ ...@@ -6,85 +6,9 @@
#include <linux/kprobes.h> #include <linux/kprobes.h>
#include <asm/debugreg.h> #include <asm/debugreg.h>
#include <asm/idtentry.h>
#include <asm/siginfo.h> /* TRAP_TRACE, ... */ #include <asm/siginfo.h> /* TRAP_TRACE, ... */
#define dotraplinkage __visible
asmlinkage void divide_error(void);
asmlinkage void debug(void);
asmlinkage void nmi(void);
asmlinkage void int3(void);
asmlinkage void overflow(void);
asmlinkage void bounds(void);
asmlinkage void invalid_op(void);
asmlinkage void device_not_available(void);
#ifdef CONFIG_X86_64
asmlinkage void double_fault(void);
#endif
asmlinkage void coprocessor_segment_overrun(void);
asmlinkage void invalid_TSS(void);
asmlinkage void segment_not_present(void);
asmlinkage void stack_segment(void);
asmlinkage void general_protection(void);
asmlinkage void page_fault(void);
asmlinkage void async_page_fault(void);
asmlinkage void spurious_interrupt_bug(void);
asmlinkage void coprocessor_error(void);
asmlinkage void alignment_check(void);
#ifdef CONFIG_X86_MCE
asmlinkage void machine_check(void);
#endif /* CONFIG_X86_MCE */
asmlinkage void simd_coprocessor_error(void);
#if defined(CONFIG_X86_64) && defined(CONFIG_XEN_PV)
asmlinkage void xen_divide_error(void);
asmlinkage void xen_xennmi(void);
asmlinkage void xen_xendebug(void);
asmlinkage void xen_int3(void);
asmlinkage void xen_overflow(void);
asmlinkage void xen_bounds(void);
asmlinkage void xen_invalid_op(void);
asmlinkage void xen_device_not_available(void);
asmlinkage void xen_double_fault(void);
asmlinkage void xen_coprocessor_segment_overrun(void);
asmlinkage void xen_invalid_TSS(void);
asmlinkage void xen_segment_not_present(void);
asmlinkage void xen_stack_segment(void);
asmlinkage void xen_general_protection(void);
asmlinkage void xen_page_fault(void);
asmlinkage void xen_spurious_interrupt_bug(void);
asmlinkage void xen_coprocessor_error(void);
asmlinkage void xen_alignment_check(void);
#ifdef CONFIG_X86_MCE
asmlinkage void xen_machine_check(void);
#endif /* CONFIG_X86_MCE */
asmlinkage void xen_simd_coprocessor_error(void);
#endif
dotraplinkage void do_divide_error(struct pt_regs *regs, long error_code);
dotraplinkage void do_debug(struct pt_regs *regs, long error_code);
dotraplinkage void do_nmi(struct pt_regs *regs, long error_code);
dotraplinkage void do_int3(struct pt_regs *regs, long error_code);
dotraplinkage void do_overflow(struct pt_regs *regs, long error_code);
dotraplinkage void do_bounds(struct pt_regs *regs, long error_code);
dotraplinkage void do_invalid_op(struct pt_regs *regs, long error_code);
dotraplinkage void do_device_not_available(struct pt_regs *regs, long error_code);
dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code, unsigned long cr2);
dotraplinkage void do_coprocessor_segment_overrun(struct pt_regs *regs, long error_code);
dotraplinkage void do_invalid_TSS(struct pt_regs *regs, long error_code);
dotraplinkage void do_segment_not_present(struct pt_regs *regs, long error_code);
dotraplinkage void do_stack_segment(struct pt_regs *regs, long error_code);
dotraplinkage void do_general_protection(struct pt_regs *regs, long error_code);
dotraplinkage void do_page_fault(struct pt_regs *regs, unsigned long error_code, unsigned long address);
dotraplinkage void do_spurious_interrupt_bug(struct pt_regs *regs, long error_code);
dotraplinkage void do_coprocessor_error(struct pt_regs *regs, long error_code);
dotraplinkage void do_alignment_check(struct pt_regs *regs, long error_code);
dotraplinkage void do_simd_coprocessor_error(struct pt_regs *regs, long error_code);
#ifdef CONFIG_X86_32
dotraplinkage void do_iret_error(struct pt_regs *regs, long error_code);
#endif
dotraplinkage void do_mce(struct pt_regs *regs, long error_code);
#ifdef CONFIG_X86_64 #ifdef CONFIG_X86_64
asmlinkage __visible notrace struct pt_regs *sync_regs(struct pt_regs *eregs); asmlinkage __visible notrace struct pt_regs *sync_regs(struct pt_regs *eregs);
asmlinkage __visible notrace asmlinkage __visible notrace
...@@ -92,6 +16,11 @@ struct bad_iret_stack *fixup_bad_iret(struct bad_iret_stack *s); ...@@ -92,6 +16,11 @@ struct bad_iret_stack *fixup_bad_iret(struct bad_iret_stack *s);
void __init trap_init(void); void __init trap_init(void);
#endif #endif
#ifdef CONFIG_X86_F00F_BUG
/* For handling the F00F bug */
void handle_invalid_op(struct pt_regs *regs);
#endif
static inline int get_si_code(unsigned long condition) static inline int get_si_code(unsigned long condition)
{ {
if (condition & DR_STEP) if (condition & DR_STEP)
...@@ -105,16 +34,6 @@ static inline int get_si_code(unsigned long condition) ...@@ -105,16 +34,6 @@ static inline int get_si_code(unsigned long condition)
extern int panic_on_unrecovered_nmi; extern int panic_on_unrecovered_nmi;
void math_emulate(struct math_emu_info *); void math_emulate(struct math_emu_info *);
#ifndef CONFIG_X86_32
asmlinkage void smp_thermal_interrupt(struct pt_regs *regs);
asmlinkage void smp_threshold_interrupt(struct pt_regs *regs);
asmlinkage void smp_deferred_error_interrupt(struct pt_regs *regs);
#endif
void smp_apic_timer_interrupt(struct pt_regs *regs);
void smp_spurious_interrupt(struct pt_regs *regs);
void smp_error_interrupt(struct pt_regs *regs);
asmlinkage void smp_irq_move_cleanup_interrupt(void);
#ifdef CONFIG_VMAP_STACK #ifdef CONFIG_VMAP_STACK
void __noreturn handle_stack_overflow(const char *message, void __noreturn handle_stack_overflow(const char *message,
...@@ -122,31 +41,6 @@ void __noreturn handle_stack_overflow(const char *message, ...@@ -122,31 +41,6 @@ void __noreturn handle_stack_overflow(const char *message,
unsigned long fault_address); unsigned long fault_address);
#endif #endif
/* Interrupts/Exceptions */
enum {
X86_TRAP_DE = 0, /* 0, Divide-by-zero */
X86_TRAP_DB, /* 1, Debug */
X86_TRAP_NMI, /* 2, Non-maskable Interrupt */
X86_TRAP_BP, /* 3, Breakpoint */
X86_TRAP_OF, /* 4, Overflow */
X86_TRAP_BR, /* 5, Bound Range Exceeded */
X86_TRAP_UD, /* 6, Invalid Opcode */
X86_TRAP_NM, /* 7, Device Not Available */
X86_TRAP_DF, /* 8, Double Fault */
X86_TRAP_OLD_MF, /* 9, Coprocessor Segment Overrun */
X86_TRAP_TS, /* 10, Invalid TSS */
X86_TRAP_NP, /* 11, Segment Not Present */
X86_TRAP_SS, /* 12, Stack Segment Fault */
X86_TRAP_GP, /* 13, General Protection Fault */
X86_TRAP_PF, /* 14, Page Fault */
X86_TRAP_SPURIOUS, /* 15, Spurious Interrupt */
X86_TRAP_MF, /* 16, x87 Floating-Point Exception */
X86_TRAP_AC, /* 17, Alignment Check */
X86_TRAP_MC, /* 18, Machine Check */
X86_TRAP_XF, /* 19, SIMD Floating-Point Exception */
X86_TRAP_IRET = 32, /* 32, IRET Exception */
};
/* /*
* Page fault error code bits: * Page fault error code bits:
* *
......
...@@ -12,6 +12,8 @@ ...@@ -12,6 +12,8 @@
#define _ASM_X86_UV_UV_BAU_H #define _ASM_X86_UV_UV_BAU_H
#include <linux/bitmap.h> #include <linux/bitmap.h>
#include <asm/idtentry.h>
#define BITSPERBYTE 8 #define BITSPERBYTE 8
/* /*
...@@ -799,12 +801,6 @@ static inline void bau_cpubits_clear(struct bau_local_cpumask *dstp, int nbits) ...@@ -799,12 +801,6 @@ static inline void bau_cpubits_clear(struct bau_local_cpumask *dstp, int nbits)
bitmap_zero(&dstp->bits, nbits); bitmap_zero(&dstp->bits, nbits);
} }
extern void uv_bau_message_intr1(void);
#ifdef CONFIG_TRACING
#define trace_uv_bau_message_intr1 uv_bau_message_intr1
#endif
extern void uv_bau_timeout_intr1(void);
struct atomic_short { struct atomic_short {
short counter; short counter;
}; };
......
...@@ -1011,28 +1011,29 @@ struct bp_patching_desc { ...@@ -1011,28 +1011,29 @@ struct bp_patching_desc {
static struct bp_patching_desc *bp_desc; static struct bp_patching_desc *bp_desc;
static inline struct bp_patching_desc *try_get_desc(struct bp_patching_desc **descp) static __always_inline
struct bp_patching_desc *try_get_desc(struct bp_patching_desc **descp)
{ {
struct bp_patching_desc *desc = READ_ONCE(*descp); /* rcu_dereference */ struct bp_patching_desc *desc = __READ_ONCE(*descp); /* rcu_dereference */
if (!desc || !atomic_inc_not_zero(&desc->refs)) if (!desc || !arch_atomic_inc_not_zero(&desc->refs))
return NULL; return NULL;
return desc; return desc;
} }
static inline void put_desc(struct bp_patching_desc *desc) static __always_inline void put_desc(struct bp_patching_desc *desc)
{ {
smp_mb__before_atomic(); smp_mb__before_atomic();
atomic_dec(&desc->refs); arch_atomic_dec(&desc->refs);
} }
static inline void *text_poke_addr(struct text_poke_loc *tp) static __always_inline void *text_poke_addr(struct text_poke_loc *tp)
{ {
return _stext + tp->rel_addr; return _stext + tp->rel_addr;
} }
static int notrace patch_cmp(const void *key, const void *elt) static __always_inline int patch_cmp(const void *key, const void *elt)
{ {
struct text_poke_loc *tp = (struct text_poke_loc *) elt; struct text_poke_loc *tp = (struct text_poke_loc *) elt;
...@@ -1042,9 +1043,8 @@ static int notrace patch_cmp(const void *key, const void *elt) ...@@ -1042,9 +1043,8 @@ static int notrace patch_cmp(const void *key, const void *elt)
return 1; return 1;
return 0; return 0;
} }
NOKPROBE_SYMBOL(patch_cmp);
int notrace poke_int3_handler(struct pt_regs *regs) int noinstr poke_int3_handler(struct pt_regs *regs)
{ {
struct bp_patching_desc *desc; struct bp_patching_desc *desc;
struct text_poke_loc *tp; struct text_poke_loc *tp;
...@@ -1077,7 +1077,7 @@ int notrace poke_int3_handler(struct pt_regs *regs) ...@@ -1077,7 +1077,7 @@ int notrace poke_int3_handler(struct pt_regs *regs)
* Skip the binary search if there is a single member in the vector. * Skip the binary search if there is a single member in the vector.
*/ */
if (unlikely(desc->nr_entries > 1)) { if (unlikely(desc->nr_entries > 1)) {
tp = bsearch(ip, desc->vec, desc->nr_entries, tp = __inline_bsearch(ip, desc->vec, desc->nr_entries,
sizeof(struct text_poke_loc), sizeof(struct text_poke_loc),
patch_cmp); patch_cmp);
if (!tp) if (!tp)
...@@ -1118,7 +1118,6 @@ int notrace poke_int3_handler(struct pt_regs *regs) ...@@ -1118,7 +1118,6 @@ int notrace poke_int3_handler(struct pt_regs *regs)
put_desc(desc); put_desc(desc);
return ret; return ret;
} }
NOKPROBE_SYMBOL(poke_int3_handler);
#define TP_VEC_MAX (PAGE_SIZE / sizeof(struct text_poke_loc)) #define TP_VEC_MAX (PAGE_SIZE / sizeof(struct text_poke_loc))
static struct text_poke_loc tp_vec[TP_VEC_MAX]; static struct text_poke_loc tp_vec[TP_VEC_MAX];
......
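The substitutions in this hunk all follow from poke_int3_handler() becoming noinstr: everything reachable from the int3 entry path has to stick to non-instrumented primitives, hence arch_atomic_*() and __READ_ONCE() instead of their instrumented wrappers, __inline_bsearch() instead of the out-of-line bsearch(), and __always_inline instead of trusting the inliner. A minimal sketch of that pattern (the helper name is illustrative, not kernel code):

/*
 * Sketch only: in a noinstr path, take a reference with the raw
 * arch_atomic_*() operation so no KASAN/KCSAN/tracing hooks get
 * pulled into the non-instrumentable section.
 */
static __always_inline bool noinstr_try_get_ref(atomic_t *refs)
{
	return arch_atomic_inc_not_zero(refs);
}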
...@@ -1088,23 +1088,14 @@ static void local_apic_timer_interrupt(void) ...@@ -1088,23 +1088,14 @@ static void local_apic_timer_interrupt(void)
* [ if a single-CPU system runs an SMP kernel then we call the local * [ if a single-CPU system runs an SMP kernel then we call the local
* interrupt as well. Thus we cannot inline the local irq ... ] * interrupt as well. Thus we cannot inline the local irq ... ]
*/ */
__visible void __irq_entry smp_apic_timer_interrupt(struct pt_regs *regs) DEFINE_IDTENTRY_SYSVEC(sysvec_apic_timer_interrupt)
{ {
struct pt_regs *old_regs = set_irq_regs(regs); struct pt_regs *old_regs = set_irq_regs(regs);
/* ack_APIC_irq();
* NOTE! We'd better ACK the irq immediately,
* because timer handling can be slow.
*
* update_process_times() expects us to have done irq_enter().
* Besides, if we don't timer interrupts ignore the global
* interrupt lock, which is the WrongThing (tm) to do.
*/
entering_ack_irq();
trace_local_timer_entry(LOCAL_TIMER_VECTOR); trace_local_timer_entry(LOCAL_TIMER_VECTOR);
local_apic_timer_interrupt(); local_apic_timer_interrupt();
trace_local_timer_exit(LOCAL_TIMER_VECTOR); trace_local_timer_exit(LOCAL_TIMER_VECTOR);
exiting_irq();
set_irq_regs(old_regs); set_irq_regs(old_regs);
} }
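The open-coded entering_ack_irq()/exiting_irq() bracketing can go away because DEFINE_IDTENTRY_SYSVEC() now emits the entry/exit work around the handler body. A simplified model of the shape it expands to; the enter/exit helpers below stand in for the real state tracking and this is not the verbatim macro:

/* Simplified model of DEFINE_IDTENTRY_SYSVEC(func) -- illustrative only */
#define DEFINE_IDTENTRY_SYSVEC(func)					\
static void __##func(struct pt_regs *regs);				\
									\
__visible noinstr void func(struct pt_regs *regs)			\
{									\
	idtentry_enter(regs);		/* RCU, lockdep, tracing, ... */\
	instrumentation_begin();					\
	irq_enter_rcu();						\
	__##func(regs);			/* the body written below */	\
	irq_exit_rcu();							\
	instrumentation_end();						\
	idtentry_exit(regs);						\
}									\
									\
static noinline void __##func(struct pt_regs *regs)

With that in place the handler body only has to ack the APIC and do its own work, as the hunk above shows.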
...@@ -2120,15 +2111,21 @@ void __init register_lapic_address(unsigned long address) ...@@ -2120,15 +2111,21 @@ void __init register_lapic_address(unsigned long address)
* Local APIC interrupts * Local APIC interrupts
*/ */
/* /**
* This interrupt should _never_ happen with our APIC/SMP architecture * spurious_interrupt - Catch all for interrupts raised on unused vectors
* @regs: Pointer to pt_regs on stack
* @vector: The vector number
*
* This is invoked from ASM entry code to catch all interrupts which
* trigger on an entry which is routed to the common_spurious idtentry
* point.
*
* Also called from sysvec_spurious_apic_interrupt().
*/ */
__visible void __irq_entry smp_spurious_interrupt(struct pt_regs *regs) DEFINE_IDTENTRY_IRQ(spurious_interrupt)
{ {
u8 vector = ~regs->orig_ax;
u32 v; u32 v;
entering_irq();
trace_spurious_apic_entry(vector); trace_spurious_apic_entry(vector);
inc_irq_stat(irq_spurious_count); inc_irq_stat(irq_spurious_count);
...@@ -2158,13 +2155,17 @@ __visible void __irq_entry smp_spurious_interrupt(struct pt_regs *regs) ...@@ -2158,13 +2155,17 @@ __visible void __irq_entry smp_spurious_interrupt(struct pt_regs *regs)
} }
out: out:
trace_spurious_apic_exit(vector); trace_spurious_apic_exit(vector);
exiting_irq(); }
DEFINE_IDTENTRY_SYSVEC(sysvec_spurious_apic_interrupt)
{
__spurious_interrupt(regs, SPURIOUS_APIC_VECTOR);
} }
/* /*
* This interrupt should never happen with our APIC/SMP architecture * This interrupt should never happen with our APIC/SMP architecture
*/ */
__visible void __irq_entry smp_error_interrupt(struct pt_regs *regs) DEFINE_IDTENTRY_SYSVEC(sysvec_error_interrupt)
{ {
static const char * const error_interrupt_reason[] = { static const char * const error_interrupt_reason[] = {
"Send CS error", /* APIC Error Bit 0 */ "Send CS error", /* APIC Error Bit 0 */
...@@ -2178,7 +2179,6 @@ __visible void __irq_entry smp_error_interrupt(struct pt_regs *regs) ...@@ -2178,7 +2179,6 @@ __visible void __irq_entry smp_error_interrupt(struct pt_regs *regs)
}; };
u32 v, i = 0; u32 v, i = 0;
entering_irq();
trace_error_apic_entry(ERROR_APIC_VECTOR); trace_error_apic_entry(ERROR_APIC_VECTOR);
/* First tickle the hardware, only then report what went on. -- REW */ /* First tickle the hardware, only then report what went on. -- REW */
...@@ -2202,7 +2202,6 @@ __visible void __irq_entry smp_error_interrupt(struct pt_regs *regs) ...@@ -2202,7 +2202,6 @@ __visible void __irq_entry smp_error_interrupt(struct pt_regs *regs)
apic_printk(APIC_DEBUG, KERN_CONT "\n"); apic_printk(APIC_DEBUG, KERN_CONT "\n");
trace_error_apic_exit(ERROR_APIC_VECTOR); trace_error_apic_exit(ERROR_APIC_VECTOR);
exiting_irq();
} }
/** /**
......
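The old u8 vector = ~regs->orig_ax extraction moves into generated code as well: DEFINE_IDTENTRY_IRQ(spurious_interrupt) emits a wrapper that recovers the vector pushed by the ASM stub and passes it to an inner __spurious_interrupt(regs, vector) worker, which is why sysvec_spurious_apic_interrupt() above can call __spurious_interrupt(regs, SPURIOUS_APIC_VECTOR) directly. A simplified model (again, the enter/exit helpers are stand-ins, not the verbatim macro):

/* Simplified model of DEFINE_IDTENTRY_IRQ(func) -- illustrative only */
#define DEFINE_IDTENTRY_IRQ(func)					\
static void __##func(struct pt_regs *regs, u32 vector);			\
									\
__visible noinstr void func(struct pt_regs *regs,			\
			    unsigned long error_code)			\
{									\
	u32 vector = (u32)(u8)error_code;  /* vector from the ASM stub */ \
									\
	idtentry_enter(regs);						\
	instrumentation_begin();					\
	irq_enter_rcu();						\
	__##func(regs, vector);						\
	irq_exit_rcu();							\
	instrumentation_end();						\
	idtentry_exit(regs);						\
}									\
									\
static noinline void __##func(struct pt_regs *regs, u32 vector)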
...@@ -115,7 +115,8 @@ msi_set_affinity(struct irq_data *irqd, const struct cpumask *mask, bool force) ...@@ -115,7 +115,8 @@ msi_set_affinity(struct irq_data *irqd, const struct cpumask *mask, bool force)
* denote it as spurious which is no harm as this is a rare event * denote it as spurious which is no harm as this is a rare event
* and interrupt handlers have to cope with spurious interrupts * and interrupt handlers have to cope with spurious interrupts
* anyway. If the vector is unused, then it is marked so it won't * anyway. If the vector is unused, then it is marked so it won't
* trigger the 'No irq handler for vector' warning in do_IRQ(). * trigger the 'No irq handler for vector' warning in
* common_interrupt().
* *
* This requires to hold vector lock to prevent concurrent updates to * This requires to hold vector lock to prevent concurrent updates to
* the affected vector. * the affected vector.
......
...@@ -861,13 +861,13 @@ static void free_moved_vector(struct apic_chip_data *apicd) ...@@ -861,13 +861,13 @@ static void free_moved_vector(struct apic_chip_data *apicd)
apicd->move_in_progress = 0; apicd->move_in_progress = 0;
} }
asmlinkage __visible void __irq_entry smp_irq_move_cleanup_interrupt(void) DEFINE_IDTENTRY_SYSVEC(sysvec_irq_move_cleanup)
{ {
struct hlist_head *clhead = this_cpu_ptr(&cleanup_list); struct hlist_head *clhead = this_cpu_ptr(&cleanup_list);
struct apic_chip_data *apicd; struct apic_chip_data *apicd;
struct hlist_node *tmp; struct hlist_node *tmp;
entering_ack_irq(); ack_APIC_irq();
/* Prevent vectors vanishing under us */ /* Prevent vectors vanishing under us */
raw_spin_lock(&vector_lock); raw_spin_lock(&vector_lock);
...@@ -892,7 +892,6 @@ asmlinkage __visible void __irq_entry smp_irq_move_cleanup_interrupt(void) ...@@ -892,7 +892,6 @@ asmlinkage __visible void __irq_entry smp_irq_move_cleanup_interrupt(void)
} }
raw_spin_unlock(&vector_lock); raw_spin_unlock(&vector_lock);
exiting_irq();
} }
static void __send_cleanup_vector(struct apic_chip_data *apicd) static void __send_cleanup_vector(struct apic_chip_data *apicd)
......
...@@ -57,9 +57,6 @@ int main(void) ...@@ -57,9 +57,6 @@ int main(void)
BLANK(); BLANK();
#undef ENTRY #undef ENTRY
OFFSET(TSS_ist, tss_struct, x86_tss.ist);
DEFINE(DB_STACK_OFFSET, offsetof(struct cea_exception_stacks, DB_stack) -
offsetof(struct cea_exception_stacks, DB1_stack));
BLANK(); BLANK();
#ifdef CONFIG_STACKPROTECTOR #ifdef CONFIG_STACKPROTECTOR
......
...@@ -10,10 +10,10 @@ ...@@ -10,10 +10,10 @@
*/ */
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <asm/acrn.h>
#include <asm/apic.h> #include <asm/apic.h>
#include <asm/desc.h> #include <asm/desc.h>
#include <asm/hypervisor.h> #include <asm/hypervisor.h>
#include <asm/idtentry.h>
#include <asm/irq_regs.h> #include <asm/irq_regs.h>
static uint32_t __init acrn_detect(void) static uint32_t __init acrn_detect(void)
...@@ -24,7 +24,7 @@ static uint32_t __init acrn_detect(void) ...@@ -24,7 +24,7 @@ static uint32_t __init acrn_detect(void)
static void __init acrn_init_platform(void) static void __init acrn_init_platform(void)
{ {
/* Setup the IDT for ACRN hypervisor callback */ /* Setup the IDT for ACRN hypervisor callback */
alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR, acrn_hv_callback_vector); alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR, asm_sysvec_acrn_hv_callback);
} }
static bool acrn_x2apic_available(void) static bool acrn_x2apic_available(void)
...@@ -39,7 +39,7 @@ static bool acrn_x2apic_available(void) ...@@ -39,7 +39,7 @@ static bool acrn_x2apic_available(void)
static void (*acrn_intr_handler)(void); static void (*acrn_intr_handler)(void);
__visible void __irq_entry acrn_hv_vector_handler(struct pt_regs *regs) DEFINE_IDTENTRY_SYSVEC(sysvec_acrn_hv_callback)
{ {
struct pt_regs *old_regs = set_irq_regs(regs); struct pt_regs *old_regs = set_irq_regs(regs);
...@@ -50,13 +50,12 @@ __visible void __irq_entry acrn_hv_vector_handler(struct pt_regs *regs) ...@@ -50,13 +50,12 @@ __visible void __irq_entry acrn_hv_vector_handler(struct pt_regs *regs)
* will block the interrupt whose vector is lower than * will block the interrupt whose vector is lower than
* HYPERVISOR_CALLBACK_VECTOR. * HYPERVISOR_CALLBACK_VECTOR.
*/ */
entering_ack_irq(); ack_APIC_irq();
inc_irq_stat(irq_hv_callback_count); inc_irq_stat(irq_hv_callback_count);
if (acrn_intr_handler) if (acrn_intr_handler)
acrn_intr_handler(); acrn_intr_handler();
exiting_irq();
set_irq_regs(old_regs); set_irq_regs(old_regs);
} }
......
...@@ -1706,25 +1706,6 @@ void syscall_init(void) ...@@ -1706,25 +1706,6 @@ void syscall_init(void)
X86_EFLAGS_IOPL|X86_EFLAGS_AC|X86_EFLAGS_NT); X86_EFLAGS_IOPL|X86_EFLAGS_AC|X86_EFLAGS_NT);
} }
DEFINE_PER_CPU(int, debug_stack_usage);
DEFINE_PER_CPU(u32, debug_idt_ctr);
void debug_stack_set_zero(void)
{
this_cpu_inc(debug_idt_ctr);
load_current_idt();
}
NOKPROBE_SYMBOL(debug_stack_set_zero);
void debug_stack_reset(void)
{
if (WARN_ON(!this_cpu_read(debug_idt_ctr)))
return;
if (this_cpu_dec_return(debug_idt_ctr) == 0)
load_current_idt();
}
NOKPROBE_SYMBOL(debug_stack_reset);
#else /* CONFIG_X86_64 */ #else /* CONFIG_X86_64 */
DEFINE_PER_CPU(struct task_struct *, current_task) = &init_task; DEFINE_PER_CPU(struct task_struct *, current_task) = &init_task;
......
...@@ -907,14 +907,13 @@ static void __log_error(unsigned int bank, u64 status, u64 addr, u64 misc) ...@@ -907,14 +907,13 @@ static void __log_error(unsigned int bank, u64 status, u64 addr, u64 misc)
mce_log(&m); mce_log(&m);
} }
asmlinkage __visible void __irq_entry smp_deferred_error_interrupt(struct pt_regs *regs) DEFINE_IDTENTRY_SYSVEC(sysvec_deferred_error)
{ {
entering_irq();
trace_deferred_error_apic_entry(DEFERRED_ERROR_VECTOR); trace_deferred_error_apic_entry(DEFERRED_ERROR_VECTOR);
inc_irq_stat(irq_deferred_error_count); inc_irq_stat(irq_deferred_error_count);
deferred_error_int_vector(); deferred_error_int_vector();
trace_deferred_error_apic_exit(DEFERRED_ERROR_VECTOR); trace_deferred_error_apic_exit(DEFERRED_ERROR_VECTOR);
exiting_ack_irq(); ack_APIC_irq();
} }
/* /*
......
...@@ -130,7 +130,7 @@ static void (*quirk_no_way_out)(int bank, struct mce *m, struct pt_regs *regs); ...@@ -130,7 +130,7 @@ static void (*quirk_no_way_out)(int bank, struct mce *m, struct pt_regs *regs);
BLOCKING_NOTIFIER_HEAD(x86_mce_decoder_chain); BLOCKING_NOTIFIER_HEAD(x86_mce_decoder_chain);
/* Do initial initialization of a struct mce */ /* Do initial initialization of a struct mce */
void mce_setup(struct mce *m) noinstr void mce_setup(struct mce *m)
{ {
memset(m, 0, sizeof(struct mce)); memset(m, 0, sizeof(struct mce));
m->cpu = m->extcpu = smp_processor_id(); m->cpu = m->extcpu = smp_processor_id();
...@@ -140,12 +140,12 @@ void mce_setup(struct mce *m) ...@@ -140,12 +140,12 @@ void mce_setup(struct mce *m)
m->cpuid = cpuid_eax(1); m->cpuid = cpuid_eax(1);
m->socketid = cpu_data(m->extcpu).phys_proc_id; m->socketid = cpu_data(m->extcpu).phys_proc_id;
m->apicid = cpu_data(m->extcpu).initial_apicid; m->apicid = cpu_data(m->extcpu).initial_apicid;
rdmsrl(MSR_IA32_MCG_CAP, m->mcgcap); m->mcgcap = __rdmsr(MSR_IA32_MCG_CAP);
if (this_cpu_has(X86_FEATURE_INTEL_PPIN)) if (this_cpu_has(X86_FEATURE_INTEL_PPIN))
rdmsrl(MSR_PPIN, m->ppin); m->ppin = __rdmsr(MSR_PPIN);
else if (this_cpu_has(X86_FEATURE_AMD_PPIN)) else if (this_cpu_has(X86_FEATURE_AMD_PPIN))
rdmsrl(MSR_AMD_PPIN, m->ppin); m->ppin = __rdmsr(MSR_AMD_PPIN);
m->microcode = boot_cpu_data.microcode; m->microcode = boot_cpu_data.microcode;
} }
...@@ -1100,13 +1100,15 @@ static void mce_clear_state(unsigned long *toclear) ...@@ -1100,13 +1100,15 @@ static void mce_clear_state(unsigned long *toclear)
* kdump kernel establishing a new #MC handler where a broadcasted MCE * kdump kernel establishing a new #MC handler where a broadcasted MCE
* might not get handled properly. * might not get handled properly.
*/ */
static bool __mc_check_crashing_cpu(int cpu) static noinstr bool mce_check_crashing_cpu(void)
{ {
unsigned int cpu = smp_processor_id();
if (cpu_is_offline(cpu) || if (cpu_is_offline(cpu) ||
(crashing_cpu != -1 && crashing_cpu != cpu)) { (crashing_cpu != -1 && crashing_cpu != cpu)) {
u64 mcgstatus; u64 mcgstatus;
mcgstatus = mce_rdmsrl(MSR_IA32_MCG_STATUS); mcgstatus = __rdmsr(MSR_IA32_MCG_STATUS);
if (boot_cpu_data.x86_vendor == X86_VENDOR_ZHAOXIN) { if (boot_cpu_data.x86_vendor == X86_VENDOR_ZHAOXIN) {
if (mcgstatus & MCG_STATUS_LMCES) if (mcgstatus & MCG_STATUS_LMCES)
...@@ -1114,7 +1116,7 @@ static bool __mc_check_crashing_cpu(int cpu) ...@@ -1114,7 +1116,7 @@ static bool __mc_check_crashing_cpu(int cpu)
} }
if (mcgstatus & MCG_STATUS_RIPV) { if (mcgstatus & MCG_STATUS_RIPV) {
mce_wrmsrl(MSR_IA32_MCG_STATUS, 0); __wrmsr(MSR_IA32_MCG_STATUS, 0, 0);
return true; return true;
} }
} }
...@@ -1230,12 +1232,11 @@ static void kill_me_maybe(struct callback_head *cb) ...@@ -1230,12 +1232,11 @@ static void kill_me_maybe(struct callback_head *cb)
* backing the user stack, tracing that reads the user stack will cause * backing the user stack, tracing that reads the user stack will cause
* potentially infinite recursion. * potentially infinite recursion.
*/ */
void noinstr do_machine_check(struct pt_regs *regs, long error_code) void noinstr do_machine_check(struct pt_regs *regs)
{ {
DECLARE_BITMAP(valid_banks, MAX_NR_BANKS); DECLARE_BITMAP(valid_banks, MAX_NR_BANKS);
DECLARE_BITMAP(toclear, MAX_NR_BANKS); DECLARE_BITMAP(toclear, MAX_NR_BANKS);
struct mca_config *cfg = &mca_cfg; struct mca_config *cfg = &mca_cfg;
int cpu = smp_processor_id();
struct mce m, *final; struct mce m, *final;
char *msg = NULL; char *msg = NULL;
int worst = 0; int worst = 0;
...@@ -1264,11 +1265,6 @@ void noinstr do_machine_check(struct pt_regs *regs, long error_code) ...@@ -1264,11 +1265,6 @@ void noinstr do_machine_check(struct pt_regs *regs, long error_code)
*/ */
int lmce = 1; int lmce = 1;
if (__mc_check_crashing_cpu(cpu))
return;
nmi_enter();
this_cpu_inc(mce_exception_count); this_cpu_inc(mce_exception_count);
mce_gather_info(&m, regs); mce_gather_info(&m, regs);
...@@ -1356,7 +1352,7 @@ void noinstr do_machine_check(struct pt_regs *regs, long error_code) ...@@ -1356,7 +1352,7 @@ void noinstr do_machine_check(struct pt_regs *regs, long error_code)
sync_core(); sync_core();
if (worst != MCE_AR_SEVERITY && !kill_it) if (worst != MCE_AR_SEVERITY && !kill_it)
goto out_ist; return;
/* Fault was in user mode and we need to take some action */ /* Fault was in user mode and we need to take some action */
if ((m.cs & 3) == 3) { if ((m.cs & 3) == 3) {
...@@ -1370,12 +1366,9 @@ void noinstr do_machine_check(struct pt_regs *regs, long error_code) ...@@ -1370,12 +1366,9 @@ void noinstr do_machine_check(struct pt_regs *regs, long error_code)
current->mce_kill_me.func = kill_me_now; current->mce_kill_me.func = kill_me_now;
task_work_add(current, &current->mce_kill_me, true); task_work_add(current, &current->mce_kill_me, true);
} else { } else {
if (!fixup_exception(regs, X86_TRAP_MC, error_code, 0)) if (!fixup_exception(regs, X86_TRAP_MC, 0, 0))
mce_panic("Failed kernel mode recovery", &m, msg); mce_panic("Failed kernel mode recovery", &m, msg);
} }
out_ist:
nmi_exit();
} }
EXPORT_SYMBOL_GPL(do_machine_check); EXPORT_SYMBOL_GPL(do_machine_check);
...@@ -1902,21 +1895,84 @@ bool filter_mce(struct mce *m) ...@@ -1902,21 +1895,84 @@ bool filter_mce(struct mce *m)
} }
/* Handle unconfigured int18 (should never happen) */ /* Handle unconfigured int18 (should never happen) */
static void unexpected_machine_check(struct pt_regs *regs, long error_code) static noinstr void unexpected_machine_check(struct pt_regs *regs)
{ {
instrumentation_begin();
pr_err("CPU#%d: Unexpected int18 (Machine Check)\n", pr_err("CPU#%d: Unexpected int18 (Machine Check)\n",
smp_processor_id()); smp_processor_id());
instrumentation_end();
} }
/* Call the installed machine check handler for this CPU setup. */ /* Call the installed machine check handler for this CPU setup. */
void (*machine_check_vector)(struct pt_regs *, long error_code) = void (*machine_check_vector)(struct pt_regs *) = unexpected_machine_check;
unexpected_machine_check;
dotraplinkage notrace void do_mce(struct pt_regs *regs, long error_code) static __always_inline void exc_machine_check_kernel(struct pt_regs *regs)
{ {
machine_check_vector(regs, error_code); /*
* Only required when from kernel mode. See
* mce_check_crashing_cpu() for details.
*/
if (machine_check_vector == do_machine_check &&
mce_check_crashing_cpu())
return;
nmi_enter();
/*
* The call targets are marked noinstr, but objtool can't figure
* that out because it's an indirect call. Annotate it.
*/
instrumentation_begin();
trace_hardirqs_off_finish();
machine_check_vector(regs);
if (regs->flags & X86_EFLAGS_IF)
trace_hardirqs_on_prepare();
instrumentation_end();
nmi_exit();
} }
NOKPROBE_SYMBOL(do_mce);
static __always_inline void exc_machine_check_user(struct pt_regs *regs)
{
idtentry_enter_user(regs);
instrumentation_begin();
machine_check_vector(regs);
instrumentation_end();
idtentry_exit_user(regs);
}
#ifdef CONFIG_X86_64
/* MCE hit kernel mode */
DEFINE_IDTENTRY_MCE(exc_machine_check)
{
unsigned long dr7;
dr7 = local_db_save();
exc_machine_check_kernel(regs);
local_db_restore(dr7);
}
/* The user mode variant. */
DEFINE_IDTENTRY_MCE_USER(exc_machine_check)
{
unsigned long dr7;
dr7 = local_db_save();
exc_machine_check_user(regs);
local_db_restore(dr7);
}
#else
/* 32bit unified entry point */
DEFINE_IDTENTRY_MCE(exc_machine_check)
{
unsigned long dr7;
dr7 = local_db_save();
if (user_mode(regs))
exc_machine_check_user(regs);
else
exc_machine_check_kernel(regs);
local_db_restore(dr7);
}
#endif
/* /*
* Called for each booted CPU to set up machine checks. * Called for each booted CPU to set up machine checks.
......
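Both entry points bracket the handler with local_db_save()/local_db_restore() so a hardware breakpoint (#DB) cannot fire and recurse before the #MC entry state is consistent. A conceptual sketch of those helpers, assuming they simply disarm DR7 for the duration (the real implementation in <asm/debugreg.h> handles a few more cases):

/* Conceptual sketch only -- an assumption, not the verbatim implementation */
static __always_inline unsigned long local_db_save(void)
{
	unsigned long dr7;

	get_debugreg(dr7, 7);
	if (dr7)
		set_debugreg(0, 7);	/* disarm all hardware breakpoints */

	return dr7;
}

static __always_inline void local_db_restore(unsigned long dr7)
{
	if (dr7)
		set_debugreg(dr7, 7);	/* re-arm what was active before */
}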
...@@ -146,9 +146,9 @@ static void raise_exception(struct mce *m, struct pt_regs *pregs) ...@@ -146,9 +146,9 @@ static void raise_exception(struct mce *m, struct pt_regs *pregs)
regs.cs = m->cs; regs.cs = m->cs;
pregs = &regs; pregs = &regs;
} }
/* in mcheck exeception handler, irq will be disabled */ /* do_machine_check() expects interrupts disabled -- at least */
local_irq_save(flags); local_irq_save(flags);
do_machine_check(pregs, 0); do_machine_check(pregs);
local_irq_restore(flags); local_irq_restore(flags);
m->finished = 0; m->finished = 0;
} }
......
...@@ -9,7 +9,7 @@ ...@@ -9,7 +9,7 @@
#include <asm/mce.h> #include <asm/mce.h>
/* Pointer to the installed machine check handler for this CPU setup. */ /* Pointer to the installed machine check handler for this CPU setup. */
extern void (*machine_check_vector)(struct pt_regs *, long error_code); extern void (*machine_check_vector)(struct pt_regs *);
enum severity_level { enum severity_level {
MCE_NO_SEVERITY, MCE_NO_SEVERITY,
......
...@@ -21,12 +21,11 @@ ...@@ -21,12 +21,11 @@
int mce_p5_enabled __read_mostly; int mce_p5_enabled __read_mostly;
/* Machine check handler for Pentium class Intel CPUs: */ /* Machine check handler for Pentium class Intel CPUs: */
static void pentium_machine_check(struct pt_regs *regs, long error_code) static noinstr void pentium_machine_check(struct pt_regs *regs)
{ {
u32 loaddr, hi, lotype; u32 loaddr, hi, lotype;
nmi_enter(); instrumentation_begin();
rdmsr(MSR_IA32_P5_MC_ADDR, loaddr, hi); rdmsr(MSR_IA32_P5_MC_ADDR, loaddr, hi);
rdmsr(MSR_IA32_P5_MC_TYPE, lotype, hi); rdmsr(MSR_IA32_P5_MC_TYPE, lotype, hi);
...@@ -39,8 +38,7 @@ static void pentium_machine_check(struct pt_regs *regs, long error_code) ...@@ -39,8 +38,7 @@ static void pentium_machine_check(struct pt_regs *regs, long error_code)
} }
add_taint(TAINT_MACHINE_CHECK, LOCKDEP_NOW_UNRELIABLE); add_taint(TAINT_MACHINE_CHECK, LOCKDEP_NOW_UNRELIABLE);
instrumentation_end();
nmi_exit();
} }
/* Set up machine check reporting for processors with Intel style MCE: */ /* Set up machine check reporting for processors with Intel style MCE: */
......
...@@ -614,14 +614,13 @@ static void unexpected_thermal_interrupt(void) ...@@ -614,14 +614,13 @@ static void unexpected_thermal_interrupt(void)
static void (*smp_thermal_vector)(void) = unexpected_thermal_interrupt; static void (*smp_thermal_vector)(void) = unexpected_thermal_interrupt;
asmlinkage __visible void __irq_entry smp_thermal_interrupt(struct pt_regs *regs) DEFINE_IDTENTRY_SYSVEC(sysvec_thermal)
{ {
entering_irq();
trace_thermal_apic_entry(THERMAL_APIC_VECTOR); trace_thermal_apic_entry(THERMAL_APIC_VECTOR);
inc_irq_stat(irq_thermal_count); inc_irq_stat(irq_thermal_count);
smp_thermal_vector(); smp_thermal_vector();
trace_thermal_apic_exit(THERMAL_APIC_VECTOR); trace_thermal_apic_exit(THERMAL_APIC_VECTOR);
exiting_ack_irq(); ack_APIC_irq();
} }
/* Thermal monitoring depends on APIC, ACPI and clock modulation */ /* Thermal monitoring depends on APIC, ACPI and clock modulation */
......
...@@ -21,12 +21,11 @@ static void default_threshold_interrupt(void) ...@@ -21,12 +21,11 @@ static void default_threshold_interrupt(void)
void (*mce_threshold_vector)(void) = default_threshold_interrupt; void (*mce_threshold_vector)(void) = default_threshold_interrupt;
asmlinkage __visible void __irq_entry smp_threshold_interrupt(struct pt_regs *regs) DEFINE_IDTENTRY_SYSVEC(sysvec_threshold)
{ {
entering_irq();
trace_threshold_apic_entry(THRESHOLD_APIC_VECTOR); trace_threshold_apic_entry(THRESHOLD_APIC_VECTOR);
inc_irq_stat(irq_threshold_count); inc_irq_stat(irq_threshold_count);
mce_threshold_vector(); mce_threshold_vector();
trace_threshold_apic_exit(THRESHOLD_APIC_VECTOR); trace_threshold_apic_exit(THRESHOLD_APIC_VECTOR);
exiting_ack_irq(); ack_APIC_irq();
} }
...@@ -17,14 +17,12 @@ ...@@ -17,14 +17,12 @@
#include "internal.h" #include "internal.h"
/* Machine check handler for WinChip C6: */ /* Machine check handler for WinChip C6: */
static void winchip_machine_check(struct pt_regs *regs, long error_code) static noinstr void winchip_machine_check(struct pt_regs *regs)
{ {
nmi_enter(); instrumentation_begin();
pr_emerg("CPU0: Machine Check Exception.\n"); pr_emerg("CPU0: Machine Check Exception.\n");
add_taint(TAINT_MACHINE_CHECK, LOCKDEP_NOW_UNRELIABLE); add_taint(TAINT_MACHINE_CHECK, LOCKDEP_NOW_UNRELIABLE);
instrumentation_end();
nmi_exit();
} }
/* Set up machine check reporting on the Winchip C6 series */ /* Set up machine check reporting on the Winchip C6 series */
......
...@@ -23,6 +23,7 @@ ...@@ -23,6 +23,7 @@
#include <asm/hyperv-tlfs.h> #include <asm/hyperv-tlfs.h>
#include <asm/mshyperv.h> #include <asm/mshyperv.h>
#include <asm/desc.h> #include <asm/desc.h>
#include <asm/idtentry.h>
#include <asm/irq_regs.h> #include <asm/irq_regs.h>
#include <asm/i8259.h> #include <asm/i8259.h>
#include <asm/apic.h> #include <asm/apic.h>
...@@ -40,11 +41,10 @@ static void (*hv_stimer0_handler)(void); ...@@ -40,11 +41,10 @@ static void (*hv_stimer0_handler)(void);
static void (*hv_kexec_handler)(void); static void (*hv_kexec_handler)(void);
static void (*hv_crash_handler)(struct pt_regs *regs); static void (*hv_crash_handler)(struct pt_regs *regs);
__visible void __irq_entry hyperv_vector_handler(struct pt_regs *regs) DEFINE_IDTENTRY_SYSVEC(sysvec_hyperv_callback)
{ {
struct pt_regs *old_regs = set_irq_regs(regs); struct pt_regs *old_regs = set_irq_regs(regs);
entering_irq();
inc_irq_stat(irq_hv_callback_count); inc_irq_stat(irq_hv_callback_count);
if (vmbus_handler) if (vmbus_handler)
vmbus_handler(); vmbus_handler();
...@@ -52,7 +52,6 @@ __visible void __irq_entry hyperv_vector_handler(struct pt_regs *regs) ...@@ -52,7 +52,6 @@ __visible void __irq_entry hyperv_vector_handler(struct pt_regs *regs)
if (ms_hyperv.hints & HV_DEPRECATING_AEOI_RECOMMENDED) if (ms_hyperv.hints & HV_DEPRECATING_AEOI_RECOMMENDED)
ack_APIC_irq(); ack_APIC_irq();
exiting_irq();
set_irq_regs(old_regs); set_irq_regs(old_regs);
} }
...@@ -73,19 +72,16 @@ EXPORT_SYMBOL_GPL(hv_remove_vmbus_irq); ...@@ -73,19 +72,16 @@ EXPORT_SYMBOL_GPL(hv_remove_vmbus_irq);
* Routines to do per-architecture handling of stimer0 * Routines to do per-architecture handling of stimer0
* interrupts when in Direct Mode * interrupts when in Direct Mode
*/ */
DEFINE_IDTENTRY_SYSVEC(sysvec_hyperv_stimer0)
__visible void __irq_entry hv_stimer0_vector_handler(struct pt_regs *regs)
{ {
struct pt_regs *old_regs = set_irq_regs(regs); struct pt_regs *old_regs = set_irq_regs(regs);
entering_irq();
inc_irq_stat(hyperv_stimer0_count); inc_irq_stat(hyperv_stimer0_count);
if (hv_stimer0_handler) if (hv_stimer0_handler)
hv_stimer0_handler(); hv_stimer0_handler();
add_interrupt_randomness(HYPERV_STIMER0_VECTOR, 0); add_interrupt_randomness(HYPERV_STIMER0_VECTOR, 0);
ack_APIC_irq(); ack_APIC_irq();
exiting_irq();
set_irq_regs(old_regs); set_irq_regs(old_regs);
} }
...@@ -331,17 +327,19 @@ static void __init ms_hyperv_init_platform(void) ...@@ -331,17 +327,19 @@ static void __init ms_hyperv_init_platform(void)
x86_platform.apic_post_init = hyperv_init; x86_platform.apic_post_init = hyperv_init;
hyperv_setup_mmu_ops(); hyperv_setup_mmu_ops();
/* Setup the IDT for hypervisor callback */ /* Setup the IDT for hypervisor callback */
alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR, hyperv_callback_vector); alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR, asm_sysvec_hyperv_callback);
/* Setup the IDT for reenlightenment notifications */ /* Setup the IDT for reenlightenment notifications */
if (ms_hyperv.features & HV_X64_ACCESS_REENLIGHTENMENT) if (ms_hyperv.features & HV_X64_ACCESS_REENLIGHTENMENT) {
alloc_intr_gate(HYPERV_REENLIGHTENMENT_VECTOR, alloc_intr_gate(HYPERV_REENLIGHTENMENT_VECTOR,
hyperv_reenlightenment_vector); asm_sysvec_hyperv_reenlightenment);
}
/* Setup the IDT for stimer0 */ /* Setup the IDT for stimer0 */
if (ms_hyperv.misc_features & HV_STIMER_DIRECT_MODE_AVAILABLE) if (ms_hyperv.misc_features & HV_STIMER_DIRECT_MODE_AVAILABLE) {
alloc_intr_gate(HYPERV_STIMER0_VECTOR, alloc_intr_gate(HYPERV_STIMER0_VECTOR,
hv_stimer0_callback_vector); asm_sysvec_hyperv_stimer0);
}
# ifdef CONFIG_SMP # ifdef CONFIG_SMP
smp_ops.smp_prepare_boot_cpu = hv_smp_prepare_boot_cpu; smp_ops.smp_prepare_boot_cpu = hv_smp_prepare_boot_cpu;
......
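The asm_sysvec_* names handed to alloc_intr_gate() are the ASM stubs generated from the shared header; each pairs with a sysvec_* C body written with DEFINE_IDTENTRY_SYSVEC(). The declarations for the vectors used here presumably look like this (illustrative; the real ones live in <asm/idtentry.h>):

/* Illustrative only -- shows the asm_sysvec_* / sysvec_* pairing */
DECLARE_IDTENTRY_SYSVEC(HYPERVISOR_CALLBACK_VECTOR,	sysvec_hyperv_callback);
DECLARE_IDTENTRY_SYSVEC(HYPERV_REENLIGHTENMENT_VECTOR,	sysvec_hyperv_reenlightenment);
DECLARE_IDTENTRY_SYSVEC(HYPERV_STIMER0_VECTOR,		sysvec_hyperv_stimer0);

/*
 * Each declaration provides:
 *   asm_sysvec_hyperv_callback()  - ASM stub, installed via alloc_intr_gate()
 *   sysvec_hyperv_callback()      - C handler, defined with DEFINE_IDTENTRY_SYSVEC()
 */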
...@@ -10,7 +10,6 @@ ...@@ -10,7 +10,6 @@
#include <asm/desc.h> #include <asm/desc.h>
#include <asm/traps.h> #include <asm/traps.h>
extern void double_fault(void);
#define ptr_ok(x) ((x) > PAGE_OFFSET && (x) < PAGE_OFFSET + MAXMEM) #define ptr_ok(x) ((x) > PAGE_OFFSET && (x) < PAGE_OFFSET + MAXMEM)
#define TSS(x) this_cpu_read(cpu_tss_rw.x86_tss.x) #define TSS(x) this_cpu_read(cpu_tss_rw.x86_tss.x)
...@@ -21,7 +20,7 @@ static void set_df_gdt_entry(unsigned int cpu); ...@@ -21,7 +20,7 @@ static void set_df_gdt_entry(unsigned int cpu);
* Called by double_fault with CR0.TS and EFLAGS.NT cleared. The CPU thinks * Called by double_fault with CR0.TS and EFLAGS.NT cleared. The CPU thinks
* we're running the doublefault task. Cannot return. * we're running the doublefault task. Cannot return.
*/ */
asmlinkage notrace void __noreturn doublefault_shim(void) asmlinkage noinstr void __noreturn doublefault_shim(void)
{ {
unsigned long cr2; unsigned long cr2;
struct pt_regs regs; struct pt_regs regs;
...@@ -40,7 +39,7 @@ asmlinkage notrace void __noreturn doublefault_shim(void) ...@@ -40,7 +39,7 @@ asmlinkage notrace void __noreturn doublefault_shim(void)
* Fill in pt_regs. A downside of doing this in C is that the unwinder * Fill in pt_regs. A downside of doing this in C is that the unwinder
* won't see it (no ENCODE_FRAME_POINTER), so a nested stack dump * won't see it (no ENCODE_FRAME_POINTER), so a nested stack dump
* won't successfully unwind to the source of the double fault. * won't successfully unwind to the source of the double fault.
* The main dump from do_double_fault() is fine, though, since it * The main dump from exc_double_fault() is fine, though, since it
* uses these regs directly. * uses these regs directly.
* *
* If anyone ever cares, this could be moved to asm. * If anyone ever cares, this could be moved to asm.
...@@ -70,7 +69,7 @@ asmlinkage notrace void __noreturn doublefault_shim(void) ...@@ -70,7 +69,7 @@ asmlinkage notrace void __noreturn doublefault_shim(void)
regs.cx = TSS(cx); regs.cx = TSS(cx);
regs.bx = TSS(bx); regs.bx = TSS(bx);
do_double_fault(&regs, 0, cr2); exc_double_fault(&regs, 0, cr2);
/* /*
* x86_32 does not save the original CR3 anywhere on a task switch. * x86_32 does not save the original CR3 anywhere on a task switch.
...@@ -84,7 +83,6 @@ asmlinkage notrace void __noreturn doublefault_shim(void) ...@@ -84,7 +83,6 @@ asmlinkage notrace void __noreturn doublefault_shim(void)
*/ */
panic("cannot return from double fault\n"); panic("cannot return from double fault\n");
} }
NOKPROBE_SYMBOL(doublefault_shim);
DEFINE_PER_CPU_PAGE_ALIGNED(struct doublefault_stack, doublefault_stack) = { DEFINE_PER_CPU_PAGE_ALIGNED(struct doublefault_stack, doublefault_stack) = {
.tss = { .tss = {
...@@ -95,7 +93,7 @@ DEFINE_PER_CPU_PAGE_ALIGNED(struct doublefault_stack, doublefault_stack) = { ...@@ -95,7 +93,7 @@ DEFINE_PER_CPU_PAGE_ALIGNED(struct doublefault_stack, doublefault_stack) = {
.ldt = 0, .ldt = 0,
.io_bitmap_base = IO_BITMAP_OFFSET_INVALID, .io_bitmap_base = IO_BITMAP_OFFSET_INVALID,
.ip = (unsigned long) double_fault, .ip = (unsigned long) asm_exc_double_fault,
.flags = X86_EFLAGS_FIXED, .flags = X86_EFLAGS_FIXED,
.es = __USER_DS, .es = __USER_DS,
.cs = __KERNEL_CS, .cs = __KERNEL_CS,
......
...@@ -22,15 +22,13 @@ ...@@ -22,15 +22,13 @@
static const char * const exception_stack_names[] = { static const char * const exception_stack_names[] = {
[ ESTACK_DF ] = "#DF", [ ESTACK_DF ] = "#DF",
[ ESTACK_NMI ] = "NMI", [ ESTACK_NMI ] = "NMI",
[ ESTACK_DB2 ] = "#DB2",
[ ESTACK_DB1 ] = "#DB1",
[ ESTACK_DB ] = "#DB", [ ESTACK_DB ] = "#DB",
[ ESTACK_MCE ] = "#MC", [ ESTACK_MCE ] = "#MC",
}; };
const char *stack_type_name(enum stack_type type) const char *stack_type_name(enum stack_type type)
{ {
BUILD_BUG_ON(N_EXCEPTION_STACKS != 6); BUILD_BUG_ON(N_EXCEPTION_STACKS != 4);
if (type == STACK_TYPE_IRQ) if (type == STACK_TYPE_IRQ)
return "IRQ"; return "IRQ";
...@@ -79,7 +77,6 @@ static const ...@@ -79,7 +77,6 @@ static const
struct estack_pages estack_pages[CEA_ESTACK_PAGES] ____cacheline_aligned = { struct estack_pages estack_pages[CEA_ESTACK_PAGES] ____cacheline_aligned = {
EPAGERANGE(DF), EPAGERANGE(DF),
EPAGERANGE(NMI), EPAGERANGE(NMI),
EPAGERANGE(DB1),
EPAGERANGE(DB), EPAGERANGE(DB),
EPAGERANGE(MCE), EPAGERANGE(MCE),
}; };
...@@ -91,7 +88,7 @@ static bool in_exception_stack(unsigned long *stack, struct stack_info *info) ...@@ -91,7 +88,7 @@ static bool in_exception_stack(unsigned long *stack, struct stack_info *info)
struct pt_regs *regs; struct pt_regs *regs;
unsigned int k; unsigned int k;
BUILD_BUG_ON(N_EXCEPTION_STACKS != 6); BUILD_BUG_ON(N_EXCEPTION_STACKS != 4);
begin = (unsigned long)__this_cpu_read(cea_exception_stacks); begin = (unsigned long)__this_cpu_read(cea_exception_stacks);
/* /*
......
...@@ -12,7 +12,7 @@ ...@@ -12,7 +12,7 @@
#include <asm/frame.h> #include <asm/frame.h>
.code64 .code64
.section .entry.text, "ax" .section .text, "ax"
#ifdef CONFIG_FRAME_POINTER #ifdef CONFIG_FRAME_POINTER
/* Save parent and function stack frames (rip and rbp) */ /* Save parent and function stack frames (rip and rbp) */
......
...@@ -29,15 +29,16 @@ ...@@ -29,15 +29,16 @@
#ifdef CONFIG_PARAVIRT_XXL #ifdef CONFIG_PARAVIRT_XXL
#include <asm/asm-offsets.h> #include <asm/asm-offsets.h>
#include <asm/paravirt.h> #include <asm/paravirt.h>
#define GET_CR2_INTO(reg) GET_CR2_INTO_AX ; _ASM_MOV %_ASM_AX, reg
#else #else
#define INTERRUPT_RETURN iretq #define INTERRUPT_RETURN iretq
#define GET_CR2_INTO(reg) _ASM_MOV %cr2, reg
#endif #endif
/* we are not able to switch in one step to the final KERNEL ADDRESS SPACE /*
* We are not able to switch in one step to the final KERNEL ADDRESS SPACE
* because we need identity-mapped pages. * because we need identity-mapped pages.
*
*/ */
#define l4_index(x) (((x) >> 39) & 511) #define l4_index(x) (((x) >> 39) & 511)
#define pud_index(x) (((x) >> PUD_SHIFT) & (PTRS_PER_PUD-1)) #define pud_index(x) (((x) >> PUD_SHIFT) & (PTRS_PER_PUD-1))
......
This diff is collapsed.
...@@ -148,7 +148,7 @@ void do_softirq_own_stack(void) ...@@ -148,7 +148,7 @@ void do_softirq_own_stack(void)
call_on_stack(__do_softirq, isp); call_on_stack(__do_softirq, isp);
} }
void handle_irq(struct irq_desc *desc, struct pt_regs *regs) void __handle_irq(struct irq_desc *desc, struct pt_regs *regs)
{ {
int overflow = check_stack_overflow(); int overflow = check_stack_overflow();
......
...@@ -20,6 +20,7 @@ ...@@ -20,6 +20,7 @@
#include <linux/sched/task_stack.h> #include <linux/sched/task_stack.h>
#include <asm/cpu_entry_area.h> #include <asm/cpu_entry_area.h>
#include <asm/irq_stack.h>
#include <asm/io_apic.h> #include <asm/io_apic.h>
#include <asm/apic.h> #include <asm/apic.h>
...@@ -70,3 +71,8 @@ int irq_init_percpu_irqstack(unsigned int cpu) ...@@ -70,3 +71,8 @@ int irq_init_percpu_irqstack(unsigned int cpu)
return 0; return 0;
return map_irq_stack(cpu); return map_irq_stack(cpu);
} }
void do_softirq_own_stack(void)
{
run_on_irqstack_cond(__do_softirq, NULL, NULL);
}
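run_on_irqstack_cond() replaces the stack switching that 64-bit softirq processing used to do in the entry ASM: it runs the function on the per-CPU hard IRQ stack unless the CPU is already on it. A conceptual model, with made-up names used purely for illustration:

/*
 * Conceptual model only -- the function and helper names here are
 * illustrative placeholders, not the kernel's implementation.
 */
static bool model_on_irqstack(void);			/* placeholder */
static void model_call_on_irqstack(void (*fn)(void));	/* placeholder */

static void model_run_on_irqstack_cond(void (*fn)(void))
{
	if (model_on_irqstack())
		fn();				/* already there: just call */
	else
		model_call_on_irqstack(fn);	/* switch RSP, call, switch back */
}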
...@@ -9,18 +9,18 @@ ...@@ -9,18 +9,18 @@
#include <linux/irq_work.h> #include <linux/irq_work.h>
#include <linux/hardirq.h> #include <linux/hardirq.h>
#include <asm/apic.h> #include <asm/apic.h>
#include <asm/idtentry.h>
#include <asm/trace/irq_vectors.h> #include <asm/trace/irq_vectors.h>
#include <linux/interrupt.h> #include <linux/interrupt.h>
#ifdef CONFIG_X86_LOCAL_APIC #ifdef CONFIG_X86_LOCAL_APIC
__visible void __irq_entry smp_irq_work_interrupt(struct pt_regs *regs) DEFINE_IDTENTRY_SYSVEC(sysvec_irq_work)
{ {
ipi_entering_ack_irq(); ack_APIC_irq();
trace_irq_work_entry(IRQ_WORK_VECTOR); trace_irq_work_entry(IRQ_WORK_VECTOR);
inc_irq_stat(apic_irq_work_irqs); inc_irq_stat(apic_irq_work_irqs);
irq_work_run(); irq_work_run();
trace_irq_work_exit(IRQ_WORK_VECTOR); trace_irq_work_exit(IRQ_WORK_VECTOR);
exiting_irq();
} }
void arch_irq_work_raise(void) void arch_irq_work_raise(void)
......
This diff is collapsed.