Commit 8e4d7a78 authored by Linus Torvalds

Merge tag 'x86-cleanups-2021-06-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 cleanups from Ingo Molnar:
 "Misc cleanups & removal of obsolete code"

* tag 'x86-cleanups-2021-06-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/sgx: Correct kernel-doc's arg name in sgx_encl_release()
  doc: Remove references to IBM Calgary
  x86/setup: Document that Windows reserves the first MiB
  x86/crash: Remove crash_reserve_low_1M()
  x86/setup: Remove CONFIG_X86_RESERVE_LOW and reservelow= options
  x86/alternative: Align insn bytes vertically
  x86: Fix leftover comment typos
  x86/asm: Simplify __smp_mb() definition
  x86/alternatives: Make the x86nops[] symbol static
parents 98e62da8 1d315639
@@ -4775,11 +4775,6 @@
Reserves a hole at the top of the kernel virtual
address space.
-reservelow= [X86]
-Format: nn[K]
-Set the amount of memory to reserve for BIOS at
-the bottom of the address space.
reset_devices [KNL] Force drivers to reset the underlying device
during initialization.
@@ -247,16 +247,11 @@ Multiple x86-64 PCI-DMA mapping implementations exist, for example:
Kernel boot message: "PCI-DMA: Using software bounce buffering
for IO (SWIOTLB)"
-4. <arch/x86_64/pci-calgary.c> : IBM Calgary hardware IOMMU. Used in IBM
-pSeries and xSeries servers. This hardware IOMMU supports DMA address
-mapping with memory protection, etc.
-Kernel boot message: "PCI-DMA: Using Calgary IOMMU"
::
iommu=[<size>][,noagp][,off][,force][,noforce]
[,memaper[=<order>]][,merge][,fullflush][,nomerge]
-[,noaperture][,calgary]
+[,noaperture]
General iommu options:
@@ -295,8 +290,6 @@ iommu options only relevant to the AMD GART hardware IOMMU:
Don't initialize the AGP driver and use full aperture.
panic
Always panic when IOMMU overflows.
-calgary
-Use the Calgary IOMMU if it is available
iommu options only relevant to the software bounce buffering (SWIOTLB) IOMMU
implementation:
@@ -307,28 +300,6 @@ implementation:
force
Force all IO through the software TLB.
-Settings for the IBM Calgary hardware IOMMU currently found in IBM
-pSeries and xSeries machines
-calgary=[64k,128k,256k,512k,1M,2M,4M,8M]
-Set the size of each PCI slot's translation table when using the
-Calgary IOMMU. This is the size of the translation table itself
-in main memory. The smallest table, 64k, covers an IO space of
-32MB; the largest, 8MB table, can cover an IO space of 4GB.
-Normally the kernel will make the right choice by itself.
-calgary=[translate_empty_slots]
-Enable translation even on slots that have no devices attached to
-them, in case a device will be hotplugged in the future.
-calgary=[disable=<PCI bus number>]
-Disable translation on a given PHB. For
-example, the built-in graphics adapter resides on the first bridge
-(PCI bus number 0); if translation (isolation) is enabled on this
-bridge, X servers that access the hardware directly from user
-space might stop working. Use this option if you have devices that
-are accessed from userspace directly on some PCI host bridge.
-panic
-Always panic when IOMMU overflows
Miscellaneous
=============
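For context: these were kernel command-line options, and the Calgary driver itself was removed in an earlier cycle, which left these references stale. A hypothetical boot line for a Calgary-equipped xSeries server, with illustrative values only, might have combined them as:

  iommu=calgary calgary=256k,translate_empty_slots,disable=0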
@@ -1693,35 +1693,6 @@ config X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK
Set whether the default state of memory_corruption_check is
on or off.
-config X86_RESERVE_LOW
-        int "Amount of low memory, in kilobytes, to reserve for the BIOS"
-        default 64
-        range 4 640
-        help
-          Specify the amount of low memory to reserve for the BIOS.
-          The first page contains BIOS data structures that the kernel
-          must not use, so that page must always be reserved.
-          By default we reserve the first 64K of physical RAM, as a
-          number of BIOSes are known to corrupt that memory range
-          during events such as suspend/resume or monitor cable
-          insertion, so it must not be used by the kernel.
-          You can set this to 4 if you are absolutely sure that you
-          trust the BIOS to get all its memory reservations and usages
-          right. If you know your BIOS have problems beyond the
-          default 64K area, you can set this to 640 to avoid using the
-          entire low memory range.
-          If you have doubts about the BIOS (e.g. suspend/resume does
-          not work or there's kernel crashes after certain hardware
-          hotplug events) then you might want to enable
-          X86_CHECK_BIOS_CORRUPTION=y to allow the kernel to check
-          typical corruption patterns.
-          Leave this to the default value of 64 if you are unsure.
config MATH_EMULATION
bool
depends on MODIFY_LDT_SYSCALL
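For reference, X86_RESERVE_LOW was an integer Kconfig symbol, so a build pinned it in .config, e.g.:

  CONFIG_X86_RESERVE_LOW=640

setup.c converted the kilobyte value to bytes with "CONFIG_X86_RESERVE_LOW << 10" (see the setup.c hunk further down); with the whole first 1M now reserved unconditionally, the knob no longer had any effect.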
@@ -623,7 +623,7 @@ bool hv_query_ext_cap(u64 cap_query)
* output parameter to the hypercall below and so it should be
* compatible with 'virt_to_phys'. Which means, it's address should be
* directly mapped. Use 'static' to keep it compatible; stack variables
-* can be virtually mapped, making them imcompatible with
+* can be virtually mapped, making them incompatible with
* 'virt_to_phys'.
* Hypercall input/output addresses should also be 8-byte aligned.
*/
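The comment being fixed documents a real constraint. A minimal kernel-style sketch of the pattern, using assumed names rather than the actual Hyper-V code (the hunk also notes such buffers must be 8-byte aligned):

  #include <linux/io.h>
  #include <linux/types.h>

  /* A static variable lives in the direct mapping, so virt_to_phys()
   * yields a valid physical address for it. With CONFIG_VMAP_STACK a
   * stack variable may be vmalloc()-mapped, where virt_to_phys() must
   * not be used -- hence 'static' for hypercall I/O buffers.
   */
  static u64 hypercall_output;

  static phys_addr_t hypercall_output_pa(void)
  {
          return virt_to_phys(&hypercall_output);
  }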
@@ -54,11 +54,8 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
#define dma_rmb() barrier()
#define dma_wmb() barrier()
-#ifdef CONFIG_X86_32
-#define __smp_mb() asm volatile("lock; addl $0,-4(%%esp)" ::: "memory", "cc")
-#else
-#define __smp_mb() asm volatile("lock; addl $0,-4(%%rsp)" ::: "memory", "cc")
-#endif
+#define __smp_mb() asm volatile("lock; addl $0,-4(%%" _ASM_SP ")" ::: "memory", "cc")
#define __smp_rmb() dma_rmb()
#define __smp_wmb() barrier()
#define __smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)
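The consolidated definition relies on _ASM_SP from <asm/asm.h>, which expands to the stack-pointer register name for the build (esp on 32-bit, rsp on 64-bit), so one line replaces the #ifdef pair. A rough user-space illustration of the same idea, where MY_ASM_SP is a stand-in rather than the kernel macro:

  #include <stdio.h>

  #ifdef __x86_64__
  #define MY_ASM_SP "rsp"
  #else
  #define MY_ASM_SP "esp"
  #endif

  /* A locked add of zero just below the stack pointer rewrites the same
   * value, so it is data-neutral, yet as a locked RMW it orders earlier
   * loads/stores against later ones -- a full barrier on x86. */
  #define my_smp_mb() \
          asm volatile("lock; addl $0,-4(%%" MY_ASM_SP ")" ::: "memory", "cc")

  int main(void)
  {
          my_smp_mb();
          puts("full barrier executed");
          return 0;
  }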
@@ -9,10 +9,4 @@ int crash_setup_memmap_entries(struct kimage *image,
struct boot_params *params);
void crash_smp_send_stop(void);
-#ifdef CONFIG_KEXEC_CORE
-void __init crash_reserve_low_1M(void);
-#else
-static inline void __init crash_reserve_low_1M(void) { }
-#endif
#endif /* _ASM_X86_CRASH_H */
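The removed declarations used the standard kernel-header pattern for optional features: a real prototype under the config symbol, an empty static inline stub otherwise, so callers need no #ifdef. Sketched with hypothetical names:

  #ifdef CONFIG_MY_FEATURE
  void my_feature_setup(void);                    /* real version */
  #else
  static inline void my_feature_setup(void) { }   /* compiles away */
  #endif

With the low 1M now reserved unconditionally (see the setup.c hunk below), crash_reserve_low_1M() had no remaining purpose, so both halves could go.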
@@ -13,7 +13,7 @@
/*
* This file contains both data structures defined by SGX architecture and Linux
* defined software data structures and functions. The two should not be mixed
-* together for better readibility. The architectural definitions come first.
+* together for better readability. The architectural definitions come first.
*/
/* The SGX specific CPUID function. */
@@ -11,7 +11,7 @@
* The same segment is shared by percpu area and stack canary. On
* x86_64, percpu symbols are zero based and %gs (64-bit) points to the
* base of percpu area. The first occupant of the percpu area is always
-* fixed_percpu_data which contains stack_canary at the approproate
+* fixed_percpu_data which contains stack_canary at the appropriate
* offset. On x86_32, the stack canary is just a regular percpu
* variable.
*
@@ -75,7 +75,7 @@ do { \
} \
} while (0)
-const unsigned char x86nops[] =
+static const unsigned char x86nops[] =
{
BYTES_NOP1,
BYTES_NOP2,
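The one-word change above is purely about linkage; nothing about the bytes changes. For reference, with hypothetical names:

  /* External linkage: any other file can declare and use this symbol. */
  const unsigned char my_nops[] = { 0x90 };

  /* Internal linkage: private to this translation unit, appropriate for
   * a lookup table with no users outside alternative.c. */
  static const unsigned char my_other_nops[] = { 0x90 };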
@@ -383,7 +383,7 @@ const struct vm_operations_struct sgx_vm_ops = {
/**
* sgx_encl_release - Destroy an enclave instance
-* @kref: address of a kref inside &sgx_encl
+* @ref: address of a kref inside &sgx_encl
*
* Used together with kref_put(). Frees all the resources associated with the
* enclave and the instance itself.
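The rename matters because kernel-doc requires each @name to match a parameter in the function signature exactly; scripts/kernel-doc warns otherwise. A self-contained sketch of the same release pattern, with hypothetical names:

  #include <linux/kernel.h>
  #include <linux/kref.h>
  #include <linux/slab.h>

  struct widget {
          struct kref refcount;
  };

  /**
   * widget_release - Destroy a widget instance
   * @ref: address of a kref inside &struct widget
   *
   * Used together with kref_put(). Frees the instance once the last
   * reference is dropped.
   */
  static void widget_release(struct kref *ref)
  {
          struct widget *w = container_of(ref, struct widget, refcount);

          kfree(w);
  }

A holder drops its reference with kref_put(&w->refcount, widget_release); the callback runs only when the refcount reaches zero.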
@@ -70,19 +70,6 @@ static inline void cpu_crash_vmclear_loaded_vmcss(void)
rcu_read_unlock();
}
-/*
- * When the crashkernel option is specified, only use the low
- * 1M for the real mode trampoline.
- */
-void __init crash_reserve_low_1M(void)
-{
-        if (cmdline_find_option(boot_command_line, "crashkernel", NULL, 0) < 0)
-                return;
-
-        memblock_reserve(0, 1<<20);
-        pr_info("Reserving the low 1M of memory for crashkernel\n");
-}
#if defined(CONFIG_SMP) && defined(CONFIG_X86_LOCAL_APIC)
static void kdump_nmi_callback(int cpu, struct pt_regs *regs)
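For reference, a user-space analog of the deleted logic; strstr() is a crude stand-in for the kernel's cmdline_find_option(), and printf() for memblock_reserve(). After this series the first 1M is reserved unconditionally in setup.c, so the conditional check became redundant:

  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
          const char *boot_command_line = "root=/dev/sda2 crashkernel=256M";

          /* Old behaviour: reserve the low 1M only when crashkernel= is set. */
          if (strstr(boot_command_line, "crashkernel"))
                  printf("reserving 0x0-0x%x for the real mode trampoline\n",
                         1 << 20);
          return 0;
  }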
@@ -674,7 +674,7 @@ static int prepare_emulation(struct kprobe *p, struct insn *insn)
break;
if (insn->addr_bytes != sizeof(unsigned long))
-return -EOPNOTSUPP; /* Don't support differnt size */
+return -EOPNOTSUPP; /* Don't support different size */
if (X86_MODRM_MOD(opcode) != 3)
return -EOPNOTSUPP; /* TODO: support memory addressing */
@@ -695,30 +695,6 @@ static void __init e820_add_kernel_range(void)
e820__range_add(start, size, E820_TYPE_RAM);
}
-static unsigned reserve_low = CONFIG_X86_RESERVE_LOW << 10;
-
-static int __init parse_reservelow(char *p)
-{
-        unsigned long long size;
-
-        if (!p)
-                return -EINVAL;
-
-        size = memparse(p, &p);
-
-        if (size < 4096)
-                size = 4096;
-
-        if (size > 640*1024)
-                size = 640*1024;
-
-        reserve_low = size;
-
-        return 0;
-}
-early_param("reservelow", parse_reservelow);
static void __init early_reserve_memory(void)
{
/*
@@ -1084,17 +1060,18 @@ void __init setup_arch(char **cmdline_p)
#endif
/*
- * Find free memory for the real mode trampoline and place it
- * there.
- * If there is not enough free memory under 1M, on EFI-enabled
- * systems there will be additional attempt to reclaim the memory
- * for the real mode trampoline at efi_free_boot_services().
+ * Find free memory for the real mode trampoline and place it there. If
+ * there is not enough free memory under 1M, on EFI-enabled systems
+ * there will be additional attempt to reclaim the memory for the real
+ * mode trampoline at efi_free_boot_services().
 *
- * Unconditionally reserve the entire first 1M of RAM because
- * BIOSes are know to corrupt low memory and several
- * hundred kilobytes are not worth complex detection what memory gets
- * clobbered. Moreover, on machines with SandyBridge graphics or in
- * setups that use crashkernel the entire 1M is reserved anyway.
+ * Unconditionally reserve the entire first 1M of RAM because BIOSes
+ * are known to corrupt low memory and several hundred kilobytes are not
+ * worth complex detection what memory gets clobbered. Windows does the
+ * same thing for very similar reasons.
+ *
+ * Moreover, on machines with SandyBridge graphics or in setups that use
+ * crashkernel the entire 1M is reserved anyway.
 */
reserve_real_mode();
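The deleted parse_reservelow() is a textbook early_param() handler: memparse() accepts a size with an optional K/M/G suffix and the result is clamped to [4K, 640K]. A user-space approximation of the parsing and clamping, where my_memparse() is a simplified stand-in for the kernel helper:

  #include <stdio.h>
  #include <stdlib.h>

  static unsigned long long my_memparse(const char *s)
  {
          char *end;
          unsigned long long v = strtoull(s, &end, 0);

          switch (*end) {
          case 'G': case 'g': v <<= 10; /* fall through */
          case 'M': case 'm': v <<= 10; /* fall through */
          case 'K': case 'k': v <<= 10;
          }
          return v;
  }

  int main(void)
  {
          unsigned long long size = my_memparse("64K");   /* "reservelow=64K" */

          if (size < 4096)
                  size = 4096;
          if (size > 640 * 1024)
                  size = 640 * 1024;
          printf("reserve_low = %llu bytes\n", size);
          return 0;
  }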
@@ -2374,7 +2374,7 @@ static int make_mmu_pages_available(struct kvm_vcpu *vcpu)
* page is available, while the caller may end up allocating as many as
* four pages, e.g. for PAE roots or for 5-level paging. Temporarily
* exceeding the (arbitrary by default) limit will not harm the host,
-* being too agressive may unnecessarily kill the guest, and getting an
+* being too aggressive may unnecessarily kill the guest, and getting an
* exact count is far more trouble than it's worth, especially in the
* page fault paths.
*/
@@ -1017,7 +1017,7 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
if (!is_shadow_present_pte(iter.old_spte)) {
/*
-* If SPTE has been forzen by another thread, just
+* If SPTE has been frozen by another thread, just
* give up and retry, avoiding unnecessary page table
* allocation and free.
*/
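A "frozen" SPTE is one a concurrent thread has transiently replaced with a special non-present marker while it rearranges the paging structures; the mapper notices its compare-and-exchange lost and retries the walk rather than allocating page tables it would immediately free. A user-space sketch of that detect-and-retry idea, with illustrative names rather than KVM's:

  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdio.h>

  static _Atomic unsigned long spte;

  /* Install new_val only if the entry still holds expected; if another
   * thread changed (froze) it meanwhile, the CAS fails and the caller
   * gives up and retries. */
  static bool try_set_spte(unsigned long expected, unsigned long new_val)
  {
          return atomic_compare_exchange_strong(&spte, &expected, new_val);
  }

  int main(void)
  {
          if (try_set_spte(0, 0x1000))
                  puts("mapped");
          else
                  puts("lost the race: retry the walk");
          return 0;
  }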