Commit 4608f064 authored by Linus Torvalds

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-next

Pull sparc updates from David Miller:

 1) Add support for ADI (Application Data Integrity) found in more
    recent sparc64 cpus. Essentially this is key-based access to
    virtual memory, and if the key encoded in the virtual address is
    wrong you get a trap.

    The mm changes were reviewed by Andrew Morton and others.

    Work by Khalid Aziz.

 2) Validate DAX completion index range properly, from Rob Gardner.

 3) Add proper Kconfig deps for DAX driver. From Guenter Roeck.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-next:
  sparc64: Make atomic_xchg() an inline function rather than a macro.
  sparc64: Properly range check DAX completion index
  sparc: Make auxiliary vectors for ADI available on 32-bit as well
  sparc64: Oracle DAX driver depends on SPARC64
  sparc64: Update signal delivery to use new helper functions
  sparc64: Add support for ADI (Application Data Integrity)
  mm: Allow arch code to override copy_highpage()
  mm: Clear arch specific VM flags on protection change
  mm: Add address parameter to arch_validate_prot()
  sparc64: Add auxiliary vectors to report platform ADI properties
  sparc64: Add handler for "Memory Corruption Detected" trap
  sparc64: Add HV fault type handlers for ADI related faults
  sparc64: Add support for ADI register fields, ASIs and traps
  mm, swap: Add infrastructure for saving page metadata on swap
  signals, sparc: Add signal codes for ADI violations
parents 5bb053be d13864b6
Application Data Integrity (ADI)
================================

The SPARC M7 processor adds the Application Data Integrity (ADI)
feature. ADI allows a task to set version tags on any subset of its
address space. Once ADI is enabled and version tags are set for ranges
of a task's address space, the processor compares the tag encoded in a
pointer into these ranges against the version previously set by the
application. Access to the memory is granted only if the tag in the
given pointer matches the tag set by the application. On a mismatch,
the processor raises an exception.

The following steps must be taken by a task to enable ADI fully:

1. Set the user mode PSTATE.mcde bit. This acts as the master switch
   that enables/disables ADI for the task's entire address space.

2. Set the TTE.mcd bit on any TLB entries that correspond to the range
   of addresses ADI is being enabled on. The MMU checks the version tag
   only on pages that have the TTE.mcd bit set.

3. Set the version tag for virtual addresses using the stxa instruction
   and one of the MCD specific ASIs. Each stxa instruction sets the
   given tag for one ADI block size's worth of bytes. This step must be
   repeated for the entire page to set tags for the entire page; a
   sketch follows this list.
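
As a concrete sketch of step 3 (mirroring the full sample program at
the end of this document, and assuming an ADI capable cpu, a writable
page with PROT_ADI already enabled, and the block size obtained from
the auxiliary vector), the tag-setting loop looks like:

	/* Set 'version' on one page, one ADI block at a time.
	 * 0x90 is ASI_MCD_PRIMARY.
	 */
	static void set_page_tag(char *page, unsigned long page_size,
				 unsigned long adi_blksz,
				 unsigned long version)
	{
		char *addr;

		for (addr = page; addr < page + page_size; addr += adi_blksz)
			asm volatile("stxa %0, [%1]0x90"
				     : : "r" (version), "r" (addr) : "memory");
		asm volatile("membar #Sync" : : : "memory");
	}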

The ADI block size for the platform is provided by the hypervisor to
the kernel in its machine description tables. The hypervisor also
provides the number of top bits in the virtual address that specify the
version tag. Once a version tag has been set for a memory location, the
tag is stored in the physical memory and the same tag must be present
in the ADI version tag bits of the virtual address being presented to
the MMU. For example, on the SPARC M7 processor the MMU uses bits 63-60
for version tags, and the ADI block size is the same as the cacheline
size, which is 64 bytes. A task that sets the ADI version to, say, 10
on a range of memory must access that memory using virtual addresses
that contain 0xa in bits 63-60.
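
For instance, a small helper (hypothetical, but mirroring the pointer
arithmetic in the sample program at the end of this document) can build
such a versioned address from a normal one:

	/* Embed 'version' in the upper 'adi_nbits' bits of 'addr'.
	 * adi_nbits comes from the AT_ADI_NBITS auxiliary vector
	 * entry; it is 4 on SPARC M7 (bits 63-60).
	 */
	static void *adi_tagged_addr(void *addr, unsigned long version,
				     unsigned long adi_nbits)
	{
		unsigned long a = (unsigned long)addr;

		/* drop any tag bits already present, then insert the tag */
		a = (a << adi_nbits) >> adi_nbits;
		return (void *)(a | (version << (64 - adi_nbits)));
	}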

ADI is enabled on a set of pages using mprotect() with the PROT_ADI
flag. When ADI is enabled on a set of pages by a task for the first
time, the kernel sets the PSTATE.mcde bit for the task. Version tags
for memory addresses are set with an stxa instruction on the addresses
using ASI_MCD_PRIMARY or ASI_MCD_ST_BLKINIT_PRIMARY. The ADI block size
is provided by the hypervisor to the kernel. The kernel returns the
value of the ADI block size to userspace using the auxiliary vector,
along with other ADI info (a sketch of reading these follows the list
below). The following auxiliary vectors are provided by the kernel:

	AT_ADI_BLKSZ	ADI block size. This is the granularity and
			alignment, in bytes, of ADI versioning.
	AT_ADI_NBITS	Number of ADI version bits in the VA
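
On systems with glibc, these entries can also be read with getauxval(3)
instead of walking past envp the way the sample program below does. A
sketch (the AT_ADI_* constants may need fallback definitions if the
libc headers do not carry them yet):

	#include <sys/auxv.h>

	#ifndef AT_ADI_BLKSZ
	#define AT_ADI_BLKSZ 48
	#endif
	#ifndef AT_ADI_NBITS
	#define AT_ADI_NBITS 49
	#endif

	/* Returns 1 if the kernel reported ADI capabilities, 0
	 * otherwise. getauxval() returns 0 for auxiliary vector
	 * entries that are absent, which is how lack of ADI support
	 * shows up here.
	 */
	static int adi_available(unsigned long *blksz, unsigned long *nbits)
	{
		*blksz = getauxval(AT_ADI_BLKSZ);
		*nbits = getauxval(AT_ADI_NBITS);
		return *blksz != 0 && *nbits != 0;
	}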

IMPORTANT NOTES:

- Version tag values of 0x0 and 0xf are reserved. These values match
  any tag in the virtual address and never generate a mismatch
  exception.

- Version tags are set on virtual addresses from userspace even though
  tags are stored in physical memory. Tags are set on a physical page
  after it has been allocated to a task and a pte has been created for
  it.

- When a task frees a memory page it had set version tags on, the page
  goes back to the free page pool. When this page is re-allocated to a
  task, the kernel clears the page using the block-initialization ASI,
  which clears the version tags for the page as well. Hence if a page
  allocated to a task is freed and allocated back to the same task, old
  version tags set by the task on that page will no longer be present.

- ADI tag mismatches are not detected for non-faulting loads.

- The kernel does not set any tags for user pages; it is entirely a
  task's responsibility to set any version tags. The kernel does ensure
  that version tags are preserved if a page is swapped out to disk and
  swapped back in. It also preserves the version tags if a page is
  migrated.

- ADI works with any page size. A userspace task need not be aware of
  the page size when using ADI. It can simply select a virtual address
  range, enable ADI on the range using mprotect(), and set version tags
  for the entire range; mprotect() ensures the range is aligned to the
  page size and is a multiple of the page size (see the sketch after
  these notes).

- ADI tags can only be set on writable memory. For example, ADI tags
  cannot be set on read-only mappings.
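
Putting the enable step together, a minimal sketch of turning ADI on
for an existing writable mapping (with a fallback PROT_ADI definition,
exactly as the sample program below uses) is:

	#include <sys/mman.h>
	#include <stdio.h>

	#ifndef PROT_ADI
	#define PROT_ADI 0x10
	#endif

	/* addr/len describe a writable mapping; mprotect() itself
	 * enforces page alignment and page-size multiples.
	 */
	static int enable_adi(void *addr, size_t len)
	{
		if (mprotect(addr, len, PROT_READ | PROT_WRITE | PROT_ADI)) {
			perror("mprotect(PROT_ADI)");
			return -1;
		}
		return 0;
	}
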
ADI related traps
-----------------

With ADI enabled, the following new traps may occur:

Disrupting memory corruption

	When a store accesses a memory location that has TTE.mcd=1,
	the task is running with ADI enabled (PSTATE.mcde=1), and the
	ADI tag in the address used (bits 63:60) does not match the tag
	set on the corresponding cacheline, a memory corruption trap
	occurs. By default, it is a disrupting trap and is sent to the
	hypervisor first. The hypervisor creates a sun4v error report
	and sends a resumable error (TT=0x7e) trap to the kernel. The
	kernel sends a SIGSEGV to the task that resulted in this trap
	with the following info:

		siginfo.si_signo = SIGSEGV;
		siginfo.errno = 0;
		siginfo.si_code = SEGV_ADIDERR;
		siginfo.si_addr = addr; /* PC where first mismatch occurred */
		siginfo.si_trapno = 0;

Precise memory corruption

	When a store accesses a memory location that has TTE.mcd=1,
	the task is running with ADI enabled (PSTATE.mcde=1), and the
	ADI tag in the address used (bits 63:60) does not match the tag
	set on the corresponding cacheline, a memory corruption trap
	occurs. If the MCD precise exception is enabled (MCDPERR=1), a
	precise exception is sent to the kernel with TT=0x1a. The
	kernel sends a SIGSEGV to the task that resulted in this trap
	with the following info:

		siginfo.si_signo = SIGSEGV;
		siginfo.errno = 0;
		siginfo.si_code = SEGV_ADIPERR;
		siginfo.si_addr = addr; /* address that caused trap */
		siginfo.si_trapno = 0;

	NOTE: An ADI tag mismatch on a load always results in a
	precise trap.

MCD disabled

	When a task has not enabled ADI and attempts to set an ADI
	version on a memory address, the processor sends an MCD
	disabled trap. This trap is handled by the hypervisor first,
	and the hypervisor vectors this trap through to the kernel as a
	Data Access Exception trap with fault type set to 0xa (invalid
	ASI). When this occurs, the kernel sends the task a SIGSEGV
	signal with the following info:

		siginfo.si_signo = SIGSEGV;
		siginfo.errno = 0;
		siginfo.si_code = SEGV_ACCADI;
		siginfo.si_addr = addr; /* address that caused trap */
		siginfo.si_trapno = 0;
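
A task that prefers to catch these faults rather than die can install a
SIGSEGV handler and switch on si_code. A sketch (the SEGV_ADI* codes
are defined by this patch series in the kernel's siginfo uapi headers;
the fallback values below are assumptions for older libc headers):

	#include <signal.h>
	#include <stdio.h>
	#include <unistd.h>

	#ifndef SEGV_ACCADI
	#define SEGV_ACCADI  5	/* ADI not enabled for mapped object */
	#define SEGV_ADIDERR 6	/* disrupting MCD error */
	#define SEGV_ADIPERR 7	/* precise MCD exception */
	#endif

	static void segv_handler(int sig, siginfo_t *si, void *unused)
	{
		switch (si->si_code) {
		case SEGV_ACCADI:
			fprintf(stderr, "ADI not enabled at %p\n",
				si->si_addr);
			break;
		case SEGV_ADIDERR:
			fprintf(stderr, "disrupting ADI mismatch near %p\n",
				si->si_addr);
			break;
		case SEGV_ADIPERR:
			fprintf(stderr, "precise ADI mismatch at %p\n",
				si->si_addr);
			break;
		}
		_exit(1);
	}

	/* installed early in main():
	 *	struct sigaction sa = { .sa_sigaction = segv_handler,
	 *				.sa_flags = SA_SIGINFO };
	 *	sigaction(SIGSEGV, &sa, NULL);
	 */
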
Sample program to use ADI
-------------------------

The following sample program illustrates how to use the ADI
functionality.

#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <elf.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/mman.h>
#include <asm/asi.h>
#ifndef AT_ADI_BLKSZ
#define AT_ADI_BLKSZ 48
#endif
#ifndef AT_ADI_NBITS
#define AT_ADI_NBITS 49
#endif
#ifndef PROT_ADI
#define PROT_ADI 0x10
#endif
#define BUFFER_SIZE 32*1024*1024UL

int main(int argc, char *argv[], char *envp[])
{
	unsigned long i, mcde, adi_blksz, adi_nbits;
	char *shmaddr, *tmp_addr, *end, *veraddr, *clraddr;
	int shmid, version;
	Elf64_auxv_t *auxv;

	adi_blksz = 0;

	while (*envp++ != NULL);
	for (auxv = (Elf64_auxv_t *)envp; auxv->a_type != AT_NULL; auxv++) {
		switch (auxv->a_type) {
		case AT_ADI_BLKSZ:
			adi_blksz = auxv->a_un.a_val;
			break;
		case AT_ADI_NBITS:
			adi_nbits = auxv->a_un.a_val;
			break;
		}
	}
	if (adi_blksz == 0) {
		fprintf(stderr, "Oops! ADI is not supported\n");
		exit(1);
	}

	printf("ADI capabilities:\n");
	printf("\tBlock size = %ld\n", adi_blksz);
	printf("\tNumber of bits = %ld\n", adi_nbits);

	if ((shmid = shmget(2, BUFFER_SIZE,
			    IPC_CREAT | SHM_R | SHM_W)) < 0) {
		perror("shmget failed");
		exit(1);
	}

	shmaddr = shmat(shmid, NULL, 0);
	if (shmaddr == (char *)-1) {
		perror("shm attach failed");
		shmctl(shmid, IPC_RMID, NULL);
		exit(1);
	}

	if (mprotect(shmaddr, BUFFER_SIZE, PROT_READ|PROT_WRITE|PROT_ADI)) {
		perror("mprotect failed");
		goto err_out;
	}

	/* Set the ADI version tag on the shm segment
	 */
	version = 10;
	tmp_addr = shmaddr;
	end = shmaddr + BUFFER_SIZE;
	while (tmp_addr < end) {
		asm volatile(
			"stxa %1, [%0]0x90\n\t"
			:
			: "r" (tmp_addr), "r" (version));
		tmp_addr += adi_blksz;
	}
	asm volatile("membar #Sync\n\t");

	/* Create a versioned address from the normal address by placing
	 * version tag in the upper adi_nbits bits
	 */
	tmp_addr = (void *) ((unsigned long)shmaddr << adi_nbits);
	tmp_addr = (void *) ((unsigned long)tmp_addr >> adi_nbits);
	veraddr = (void *) (((unsigned long)version << (64-adi_nbits))
			    | (unsigned long)tmp_addr);

	printf("Starting the writes:\n");
	for (i = 0; i < BUFFER_SIZE; i++) {
		veraddr[i] = (char)(i);
		if (!(i % (1024 * 1024)))
			printf(".");
	}
	printf("\n");

	printf("Verifying data...");
	fflush(stdout);
	for (i = 0; i < BUFFER_SIZE; i++)
		if (veraddr[i] != (char)i)
			printf("\nIndex %lu mismatched\n", i);
	printf("Done.\n");

	/* Disable ADI and clean up
	 */
	if (mprotect(shmaddr, BUFFER_SIZE, PROT_READ|PROT_WRITE)) {
		perror("mprotect failed");
		goto err_out;
	}

	if (shmdt((const void *)shmaddr) != 0)
		perror("Detach failure");
	shmctl(shmid, IPC_RMID, NULL);

	exit(0);

err_out:
	if (shmdt((const void *)shmaddr) != 0)
		perror("Detach failure");
	shmctl(shmid, IPC_RMID, NULL);
	exit(1);
}
@@ -43,7 +43,7 @@ static inline pgprot_t arch_vm_get_page_prot(unsigned long vm_flags)
 }
 #define arch_vm_get_page_prot(vm_flags) arch_vm_get_page_prot(vm_flags)
 
-static inline bool arch_validate_prot(unsigned long prot)
+static inline bool arch_validate_prot(unsigned long prot, unsigned long addr)
 {
 	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_SAO))
 		return false;
@@ -51,7 +51,7 @@ static inline bool arch_validate_prot(unsigned long prot)
 		return false;
 	return true;
 }
-#define arch_validate_prot(prot) arch_validate_prot(prot)
+#define arch_validate_prot arch_validate_prot
 
 #endif /* CONFIG_PPC64 */
 #endif /* _ASM_POWERPC_MMAN_H */
@@ -48,7 +48,7 @@ static inline long do_mmap2(unsigned long addr, size_t len,
 {
 	long ret = -EINVAL;
 
-	if (!arch_validate_prot(prot))
+	if (!arch_validate_prot(prot, addr))
 		goto out;
 
 	if (shift) {
...
#ifndef ___ASM_SPARC_ADI_H
#define ___ASM_SPARC_ADI_H
#if defined(__sparc__) && defined(__arch64__)
#include <asm/adi_64.h>
#endif
#endif
/* adi_64.h: ADI related data structures
*
* Copyright (c) 2016 Oracle and/or its affiliates. All rights reserved.
* Author: Khalid Aziz (khalid.aziz@oracle.com)
*
* This work is licensed under the terms of the GNU GPL, version 2.
*/
#ifndef __ASM_SPARC64_ADI_H
#define __ASM_SPARC64_ADI_H
#include <linux/types.h>
#ifndef __ASSEMBLY__
struct adi_caps {
__u64 blksz;
__u64 nbits;
__u64 ue_on_adi;
};
struct adi_config {
bool enabled;
struct adi_caps caps;
};
extern struct adi_config adi_state;
extern void mdesc_adi_init(void);
static inline bool adi_capable(void)
{
return adi_state.enabled;
}
static inline unsigned long adi_blksize(void)
{
return adi_state.caps.blksz;
}
static inline unsigned long adi_nbits(void)
{
return adi_state.caps.nbits;
}
#endif /* __ASSEMBLY__ */
#endif /* !(__ASM_SPARC64_ADI_H) */
@@ -83,7 +83,11 @@ ATOMIC_OPS(xor)
 #define atomic64_add_negative(i, v) (atomic64_add_return(i, v) < 0)
 
 #define atomic_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n)))
-#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+
+static inline int atomic_xchg(atomic_t *v, int new)
+{
+	return xchg(&v->counter, new);
+}
 
 static inline int __atomic_add_unless(atomic_t *v, int a, int u)
 {
...
@@ -10,6 +10,7 @@
 #include <asm/processor.h>
 #include <asm/extable_64.h>
 #include <asm/spitfire.h>
+#include <asm/adi.h>
 
 /*
  * Sparc section types
@@ -215,9 +216,13 @@ extern unsigned int vdso_enabled;
 
 #define ARCH_DLINFO							\
 do {									\
+	extern struct adi_config adi_state;				\
 	if (vdso_enabled)						\
 		NEW_AUX_ENT(AT_SYSINFO_EHDR,				\
 		    (unsigned long)current->mm->context.vdso);		\
+	NEW_AUX_ENT(AT_ADI_BLKSZ, adi_state.caps.blksz);		\
+	NEW_AUX_ENT(AT_ADI_NBITS, adi_state.caps.nbits);		\
+	NEW_AUX_ENT(AT_ADI_UEONADI, adi_state.caps.ue_on_adi);		\
} while (0)
 
 struct linux_binprm;
...
@@ -570,6 +570,8 @@ struct hv_fault_status {
 #define HV_FAULT_TYPE_RESV1	13
 #define HV_FAULT_TYPE_UNALIGNED	14
 #define HV_FAULT_TYPE_INV_PGSZ	15
+#define HV_FAULT_TYPE_MCD	17
+#define HV_FAULT_TYPE_MCD_DIS	18
 /* Values 16 --> -2 are reserved.  */
 #define HV_FAULT_TYPE_MULTIPLE	-1
...
@@ -7,5 +7,87 @@
 #ifndef __ASSEMBLY__
 #define arch_mmap_check(addr,len,flags)	sparc_mmap_check(addr,len)
 int sparc_mmap_check(unsigned long addr, unsigned long len);
-#endif
+
+#ifdef CONFIG_SPARC64
+#include <asm/adi_64.h>
+
+static inline void ipi_set_tstate_mcde(void *arg)
+{
+	struct mm_struct *mm = arg;
+
+	/* Set TSTATE_MCDE for the task using address map that ADI has been
+	 * enabled on if the task is running. If not, it will be set
+	 * automatically at the next context switch
+	 */
+	if (current->mm == mm) {
+		struct pt_regs *regs;
+
+		regs = task_pt_regs(current);
+		regs->tstate |= TSTATE_MCDE;
+	}
+}
+
+#define arch_calc_vm_prot_bits(prot, pkey) sparc_calc_vm_prot_bits(prot)
+static inline unsigned long sparc_calc_vm_prot_bits(unsigned long prot)
+{
+	if (adi_capable() && (prot & PROT_ADI)) {
+		struct pt_regs *regs;
+
+		if (!current->mm->context.adi) {
+			regs = task_pt_regs(current);
+			regs->tstate |= TSTATE_MCDE;
+			current->mm->context.adi = true;
+			on_each_cpu_mask(mm_cpumask(current->mm),
+					 ipi_set_tstate_mcde, current->mm, 0);
+		}
+		return VM_SPARC_ADI;
+	} else {
+		return 0;
+	}
+}
+
+#define arch_vm_get_page_prot(vm_flags) sparc_vm_get_page_prot(vm_flags)
+static inline pgprot_t sparc_vm_get_page_prot(unsigned long vm_flags)
+{
+	return (vm_flags & VM_SPARC_ADI) ? __pgprot(_PAGE_MCD_4V) : __pgprot(0);
+}
+
+#define arch_validate_prot(prot, addr) sparc_validate_prot(prot, addr)
+static inline int sparc_validate_prot(unsigned long prot, unsigned long addr)
+{
+	if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_ADI))
+		return 0;
+	if (prot & PROT_ADI) {
+		if (!adi_capable())
+			return 0;
+
+		if (addr) {
+			struct vm_area_struct *vma;
+
+			vma = find_vma(current->mm, addr);
+			if (vma) {
+				/* ADI can not be enabled on PFN
+				 * mapped pages
+				 */
+				if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
+					return 0;
+
+				/* Mergeable pages can become unmergeable
+				 * if ADI is enabled on them even if they
+				 * have identical data on them. This can be
+				 * because ADI enabled pages with identical
+				 * data may still not have identical ADI
+				 * tags on them. Disallow ADI on mergeable
+				 * pages.
+				 */
+				if (vma->vm_flags & VM_MERGEABLE)
+					return 0;
+			}
+		}
+	}
+	return 1;
+}
+#endif /* CONFIG_SPARC64 */
+#endif /* __ASSEMBLY__ */
 #endif /* __SPARC_MMAN_H__ */
@@ -90,6 +90,20 @@ struct tsb_config {
 #define MM_NUM_TSBS	1
 #endif
 
+/* ADI tags are stored when a page is swapped out and the storage for
+ * tags is allocated dynamically. There is a tag storage descriptor
+ * associated with each set of tag storage pages. Tag storage descriptors
+ * are allocated dynamically. Since kernel will allocate a full page for
+ * each tag storage descriptor, we can store up to
+ * PAGE_SIZE/sizeof(tag storage descriptor) descriptors on that page.
+ */
+typedef struct {
+	unsigned long	start;		/* Start address for this tag storage */
+	unsigned long	end;		/* Last address for tag storage */
+	unsigned char	*tags;		/* Where the tags are */
+	unsigned long	tag_users;	/* number of references to descriptor */
+} tag_storage_desc_t;
+
 typedef struct {
 	spinlock_t		lock;
 	unsigned long		sparc64_ctx_val;
@@ -98,6 +112,9 @@ typedef struct {
 	struct tsb_config	tsb_block[MM_NUM_TSBS];
 	struct hv_tsb_descr	tsb_descr[MM_NUM_TSBS];
 	void			*vdso;
+	bool			adi;
+	tag_storage_desc_t	*tag_store;
+	spinlock_t		tag_lock;
 } mm_context_t;
 
 #endif /* !__ASSEMBLY__ */
...
@@ -9,8 +9,10 @@
 #include <linux/spinlock.h>
 #include <linux/mm_types.h>
 #include <linux/smp.h>
+#include <linux/sched.h>
 
 #include <asm/spitfire.h>
+#include <asm/adi_64.h>
 #include <asm-generic/mm_hooks.h>
 #include <asm/percpu.h>
 
@@ -136,6 +138,55 @@ static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, str
 #define deactivate_mm(tsk,mm)	do { } while (0)
 #define activate_mm(active_mm, mm) switch_mm(active_mm, mm, NULL)
+
+#define __HAVE_ARCH_START_CONTEXT_SWITCH
+static inline void arch_start_context_switch(struct task_struct *prev)
+{
+	/* Save the current state of MCDPER register for the process
+	 * we are switching from
+	 */
+	if (adi_capable()) {
+		register unsigned long tmp_mcdper;
+
+		__asm__ __volatile__(
+			".word 0x83438000\n\t"	/* rd %mcdper, %g1 */
+			"mov %%g1, %0\n\t"
+			: "=r" (tmp_mcdper)
+			:
+			: "g1");
+		if (tmp_mcdper)
+			set_tsk_thread_flag(prev, TIF_MCDPER);
+		else
+			clear_tsk_thread_flag(prev, TIF_MCDPER);
+	}
+}
+
+#define finish_arch_post_lock_switch	finish_arch_post_lock_switch
+static inline void finish_arch_post_lock_switch(void)
+{
+	/* Restore the state of MCDPER register for the new process
+	 * just switched to.
+	 */
+	if (adi_capable()) {
+		register unsigned long tmp_mcdper;
+
+		tmp_mcdper = test_thread_flag(TIF_MCDPER);
+		__asm__ __volatile__(
+			"mov %0, %%g1\n\t"
+			".word 0x9d800001\n\t"	/* wr %g0, %g1, %mcdper */
+			".word 0xaf902001\n\t"	/* wrpr %g0, 1, %pmcdper */
+			:
+			: "ir" (tmp_mcdper)
+			: "g1");
+		if (current && current->mm && current->mm->context.adi) {
+			struct pt_regs *regs;
+
+			regs = task_pt_regs(current);
+			regs->tstate |= TSTATE_MCDE;
+		}
+	}
+}
 
 #endif /* !(__ASSEMBLY__) */
 #endif /* !(__SPARC64_MMU_CONTEXT_H) */
@@ -48,6 +48,12 @@ struct page;
 void clear_user_page(void *addr, unsigned long vaddr, struct page *page);
 #define copy_page(X,Y)	memcpy((void *)(X), (void *)(Y), PAGE_SIZE)
 void copy_user_page(void *to, void *from, unsigned long vaddr, struct page *topage);
+#define __HAVE_ARCH_COPY_USER_HIGHPAGE
+struct vm_area_struct;
+void copy_user_highpage(struct page *to, struct page *from,
+			unsigned long vaddr, struct vm_area_struct *vma);
+#define __HAVE_ARCH_COPY_HIGHPAGE
+void copy_highpage(struct page *to, struct page *from);
 
 /* Unlike sparc32, sparc64's parameter passing API is more
  * sane in that structures which as small enough are passed
...
@@ -19,6 +19,7 @@
 #include <asm/types.h>
 #include <asm/spitfire.h>
 #include <asm/asi.h>
+#include <asm/adi.h>
 #include <asm/page.h>
 #include <asm/processor.h>
 
@@ -164,6 +165,8 @@ bool kern_addr_valid(unsigned long addr);
 #define _PAGE_E_4V	  _AC(0x0000000000000800,UL) /* side-Effect          */
 #define _PAGE_CP_4V	  _AC(0x0000000000000400,UL) /* Cacheable in P-Cache */
 #define _PAGE_CV_4V	  _AC(0x0000000000000200,UL) /* Cacheable in V-Cache */
+/* Bit 9 is used to enable MCD corruption detection instead on M7 */
+#define _PAGE_MCD_4V	  _AC(0x0000000000000200,UL) /* Memory Corruption    */
 #define _PAGE_P_4V	  _AC(0x0000000000000100,UL) /* Privileged Page      */
 #define _PAGE_EXEC_4V	  _AC(0x0000000000000080,UL) /* Executable Page      */
 #define _PAGE_W_4V	  _AC(0x0000000000000040,UL) /* Writable             */
@@ -604,6 +607,18 @@ static inline pte_t pte_mkspecial(pte_t pte)
 	return pte;
 }
 
+static inline pte_t pte_mkmcd(pte_t pte)
+{
+	pte_val(pte) |= _PAGE_MCD_4V;
+	return pte;
+}
+
+static inline pte_t pte_mknotmcd(pte_t pte)
+{
+	pte_val(pte) &= ~_PAGE_MCD_4V;
+	return pte;
+}
+
 static inline unsigned long pte_young(pte_t pte)
 {
 	unsigned long mask;
@@ -1046,6 +1061,39 @@ int page_in_phys_avail(unsigned long paddr);
 int remap_pfn_range(struct vm_area_struct *, unsigned long, unsigned long,
 		    unsigned long, pgprot_t);
 
+void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
+		      unsigned long addr, pte_t pte);
+
+int adi_save_tags(struct mm_struct *mm, struct vm_area_struct *vma,
+		  unsigned long addr, pte_t oldpte);
+
+#define __HAVE_ARCH_DO_SWAP_PAGE
+static inline void arch_do_swap_page(struct mm_struct *mm,
+				     struct vm_area_struct *vma,
+				     unsigned long addr,
+				     pte_t pte, pte_t oldpte)
+{
+	/* If this is a new page being mapped in, there can be no
+	 * ADI tags stored away for this page. Skip looking for
+	 * stored tags
+	 */
+	if (pte_none(oldpte))
+		return;
+
+	if (adi_state.enabled && (pte_val(pte) & _PAGE_MCD_4V))
+		adi_restore_tags(mm, vma, addr, pte);
+}
+
+#define __HAVE_ARCH_UNMAP_ONE
+static inline int arch_unmap_one(struct mm_struct *mm,
+				 struct vm_area_struct *vma,
+				 unsigned long addr, pte_t oldpte)
+{
+	if (adi_state.enabled && (pte_val(oldpte) & _PAGE_MCD_4V))
+		return adi_save_tags(mm, vma, addr, oldpte);
+	return 0;
+}
+
 static inline int io_remap_pfn_range(struct vm_area_struct *vma,
 				     unsigned long from, unsigned long pfn,
 				     unsigned long size, pgprot_t prot)
...
@@ -188,7 +188,7 @@ register struct thread_info *current_thread_info_reg asm("g6");
  *	 in using in assembly, else we can't use the mask as
  *	 an immediate value in instructions such as andcc.
  */
-/* flag bit 12 is available */
+#define TIF_MCDPER	12	/* Precise MCD exception */
 #define TIF_MEMDIE	13	/* is terminating due to OOM killer */
 #define TIF_POLLING_NRFLAG	14
...
@@ -76,6 +76,8 @@ extern struct sun4v_1insn_patch_entry __sun4v_1insn_patch,
 	__sun4v_1insn_patch_end;
 extern struct sun4v_1insn_patch_entry __fast_win_ctrl_1insn_patch,
 	__fast_win_ctrl_1insn_patch_end;
+extern struct sun4v_1insn_patch_entry __sun_m7_1insn_patch,
+	__sun_m7_1insn_patch_end;
 
 struct sun4v_2insn_patch_entry {
 	unsigned int	addr;
...
@@ -219,6 +219,16 @@
 	nop;						\
 	nop;
 
+#define SUN4V_MCD_PRECISE				\
+	ldxa	[%g0] ASI_SCRATCHPAD, %g2;		\
+	ldx	[%g2 + HV_FAULT_D_ADDR_OFFSET], %g4;	\
+	ldx	[%g2 + HV_FAULT_D_CTX_OFFSET], %g5;	\
+	ba,pt	%xcc, etrap;				\
+	 rd	%pc, %g7;				\
+	ba,pt	%xcc, sun4v_mcd_detect_precise;		\
+	 nop;						\
+	 nop;
+
 /* Before touching these macros, you owe it to yourself to go and
  * see how arch/sparc64/kernel/winfixup.S works... -DaveM
  *
...
@@ -145,6 +145,8 @@
  * ASIs, "(4V)" designates SUN4V specific ASIs.  "(NG4)" designates SPARC-T4
  * and later ASIs.
  */
+#define ASI_MCD_PRIV_PRIMARY	0x02 /* (NG7) Privileged MCD version VA	*/
+#define ASI_MCD_REAL		0x05 /* (NG7) Privileged MCD version PA	*/
 #define ASI_PHYS_USE_EC		0x14 /* PADDR, E-cachable		*/
 #define ASI_PHYS_BYPASS_EC_E	0x15 /* PADDR, E-bit			*/
 #define ASI_BLK_AIUP_4V		0x16 /* (4V) Prim, user, block ld/st	*/
@@ -245,6 +247,9 @@
 #define ASI_UDBL_CONTROL_R	0x7f /* External UDB control regs rd low*/
 #define ASI_INTR_R		0x7f /* IRQ vector dispatch read	*/
 #define ASI_INTR_DATAN_R	0x7f /* (III) In irq vector data reg N	*/
+#define ASI_MCD_PRIMARY		0x90 /* (NG7) MCD version load/store	*/
+#define ASI_MCD_ST_BLKINIT_PRIMARY	\
+				0x92 /* (NG7) MCD store BLKINIT primary	*/
 #define ASI_PIC			0xb0 /* (NG4) PIC registers		*/
 #define ASI_PST8_P		0xc0 /* Primary, 8 8-bit, partial	*/
 #define ASI_PST8_S		0xc1 /* Secondary, 8 8-bit, partial	*/
...
@@ -3,6 +3,13 @@
 
 #define AT_SYSINFO_EHDR		33
 
-#define AT_VECTOR_SIZE_ARCH	1
+/* Avoid overlap with other AT_* values since they are consolidated in
+ * glibc and any overlaps can cause problems
+ */
+#define AT_ADI_BLKSZ	48
+#define AT_ADI_NBITS	49
+#define AT_ADI_UEONADI	50
+
+#define AT_VECTOR_SIZE_ARCH	4
 
 #endif /* !(__ASMSPARC_AUXVEC_H) */
@@ -6,6 +6,8 @@
 
 /* SunOS'ified... */
 
+#define PROT_ADI	0x10		/* ADI enabled */
+
 #define MAP_RENAME      MAP_ANONYMOUS   /* In SunOS terminology */
 #define MAP_NORESERVE   0x40            /* don't reserve swap pages */
 #define MAP_INHERIT     0x80            /* SunOS doesn't do this, but... */
...
@@ -11,7 +11,12 @@
  * -----------------------------------------------------------------------
  *  63  12  11   10    9     8    7   6   5     4     3     2     1    0
  */
+/* IG on V9 conflicts with MCDE on M7. PSTATE_MCDE will only be used on
+ * processors that support ADI which do not use IG, hence there is no
+ * functional conflict
+ */
 #define PSTATE_IG   _AC(0x0000000000000800,UL) /* Interrupt Globals.	*/
+#define PSTATE_MCDE _AC(0x0000000000000800,UL) /* MCD Enable		*/
 #define PSTATE_MG   _AC(0x0000000000000400,UL) /* MMU Globals.		*/
 #define PSTATE_CLE  _AC(0x0000000000000200,UL) /* Current Little Endian.*/
 #define PSTATE_TLE  _AC(0x0000000000000100,UL) /* Trap Little Endian.	*/
@@ -48,7 +53,12 @@
 #define TSTATE_ASI	_AC(0x00000000ff000000,UL) /* AddrSpace ID.	*/
 #define TSTATE_PIL	_AC(0x0000000000f00000,UL) /* %pil (Linux traps)*/
 #define TSTATE_PSTATE	_AC(0x00000000000fff00,UL) /* PSTATE.		*/
+/* IG on V9 conflicts with MCDE on M7. TSTATE_MCDE will only be used on
+ * processors that support ADI which do not support IG, hence there is
+ * no functional conflict
+ */
 #define TSTATE_IG	_AC(0x0000000000080000,UL) /* Interrupt Globals.*/
+#define TSTATE_MCDE	_AC(0x0000000000080000,UL) /* MCD enable.	*/
 #define TSTATE_MG	_AC(0x0000000000040000,UL) /* MMU Globals.	*/
 #define TSTATE_CLE	_AC(0x0000000000020000,UL) /* CurrLittleEndian.	*/
 #define TSTATE_TLE	_AC(0x0000000000010000,UL) /* TrapLittleEndian.	*/
...
@@ -69,6 +69,7 @@ obj-$(CONFIG_SPARC64)	+= visemul.o
 obj-$(CONFIG_SPARC64)	+= hvapi.o
 obj-$(CONFIG_SPARC64)	+= sstate.o
 obj-$(CONFIG_SPARC64)	+= mdesc.o
+obj-$(CONFIG_SPARC64)	+= adi_64.o
 obj-$(CONFIG_SPARC64)	+= pcr.o
 obj-$(CONFIG_SPARC64)	+= nmi.o
 obj-$(CONFIG_SPARC64_SMP) += cpumap.o
...
/* adi_64.c: support for ADI (Application Data Integrity) feature on
* sparc m7 and newer processors. This feature is also known as
* SSM (Silicon Secured Memory).
*
* Copyright (C) 2016 Oracle and/or its affiliates. All rights reserved.
* Author: Khalid Aziz (khalid.aziz@oracle.com)
*
* This work is licensed under the terms of the GNU GPL, version 2.
*/
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/mm_types.h>
#include <asm/mdesc.h>
#include <asm/adi_64.h>
#include <asm/mmu_64.h>
#include <asm/pgtable_64.h>
/* Each page of storage for ADI tags can accommodate tags for 128
* pages. When ADI enabled pages are being swapped out, it would be
* prudent to allocate at least enough tag storage space to accommodate
* SWAPFILE_CLUSTER number of pages. Allocate enough tag storage to
* store tags for four SWAPFILE_CLUSTER pages to reduce need for
* further allocations for same vma.
*/
#define TAG_STORAGE_PAGES 8
struct adi_config adi_state;
EXPORT_SYMBOL(adi_state);
/* mdesc_adi_init() : Parse machine description provided by the
* hypervisor to detect ADI capabilities
*
* Hypervisor reports ADI capabilities of platform in "hwcap-list" property
* for "cpu" node. If the platform supports ADI, "hwcap-list" property
* contains the keyword "adp". If the platform supports ADI, "platform"
* node will contain "adp-blksz", "adp-nbits" and "ue-on-adp" properties
* to describe the ADI capabilities.
*/
void __init mdesc_adi_init(void)
{
struct mdesc_handle *hp = mdesc_grab();
const char *prop;
u64 pn, *val;
int len;
if (!hp)
goto adi_not_found;
pn = mdesc_node_by_name(hp, MDESC_NODE_NULL, "cpu");
if (pn == MDESC_NODE_NULL)
goto adi_not_found;
prop = mdesc_get_property(hp, pn, "hwcap-list", &len);
if (!prop)
goto adi_not_found;
/*
* Look for "adp" keyword in hwcap-list which would indicate
* ADI support
*/
adi_state.enabled = false;
while (len) {
int plen;
if (!strcmp(prop, "adp")) {
adi_state.enabled = true;
break;
}
plen = strlen(prop) + 1;
prop += plen;
len -= plen;
}
if (!adi_state.enabled)
goto adi_not_found;
/* Find the ADI properties in "platform" node. If all ADI
* properties are not found, ADI support is incomplete and
* do not enable ADI in the kernel.
*/
pn = mdesc_node_by_name(hp, MDESC_NODE_NULL, "platform");
if (pn == MDESC_NODE_NULL)
goto adi_not_found;
val = (u64 *) mdesc_get_property(hp, pn, "adp-blksz", &len);
if (!val)
goto adi_not_found;
adi_state.caps.blksz = *val;
val = (u64 *) mdesc_get_property(hp, pn, "adp-nbits", &len);
if (!val)
goto adi_not_found;
adi_state.caps.nbits = *val;
val = (u64 *) mdesc_get_property(hp, pn, "ue-on-adp", &len);
if (!val)
goto adi_not_found;
adi_state.caps.ue_on_adi = *val;
/* Some of the code to support swapping ADI tags is written with the
 * assumption that two ADI tags can fit inside one byte. If this
 * assumption is broken by a future architecture change, that code
 * will have to be revisited. If that were to happen, disable ADI
 * support so we do not get unpredictable results with programs
 * trying to use ADI and their pages getting swapped out
 */
if (adi_state.caps.nbits > 4) {
pr_warn("WARNING: ADI tag size >4 on this platform. Disabling ADI support\n");
adi_state.enabled = false;
}
mdesc_release(hp);
return;
adi_not_found:
adi_state.enabled = false;
adi_state.caps.blksz = 0;
adi_state.caps.nbits = 0;
if (hp)
mdesc_release(hp);
}
tag_storage_desc_t *find_tag_store(struct mm_struct *mm,
struct vm_area_struct *vma,
unsigned long addr)
{
tag_storage_desc_t *tag_desc = NULL;
unsigned long i, max_desc, flags;
/* Check if this vma already has tag storage descriptor
* allocated for it.
*/
max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
if (mm->context.tag_store) {
tag_desc = mm->context.tag_store;
spin_lock_irqsave(&mm->context.tag_lock, flags);
for (i = 0; i < max_desc; i++) {
if ((addr >= tag_desc->start) &&
((addr + PAGE_SIZE - 1) <= tag_desc->end))
break;
tag_desc++;
}
spin_unlock_irqrestore(&mm->context.tag_lock, flags);
/* If no matching entries were found, this must be a
* freshly allocated page
*/
if (i >= max_desc)
tag_desc = NULL;
}
return tag_desc;
}
tag_storage_desc_t *alloc_tag_store(struct mm_struct *mm,
struct vm_area_struct *vma,
unsigned long addr)
{
unsigned char *tags;
unsigned long i, size, max_desc, flags;
tag_storage_desc_t *tag_desc, *open_desc;
unsigned long end_addr, hole_start, hole_end;
max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
open_desc = NULL;
hole_start = 0;
hole_end = ULONG_MAX;
end_addr = addr + PAGE_SIZE - 1;
/* Check if this vma already has tag storage descriptor
* allocated for it.
*/
spin_lock_irqsave(&mm->context.tag_lock, flags);
if (mm->context.tag_store) {
tag_desc = mm->context.tag_store;
/* Look for a matching entry for this address. While doing
* that, look for the first open slot as well and find
* the hole in already allocated range where this request
* will fit in.
*/
for (i = 0; i < max_desc; i++) {
if (tag_desc->tag_users == 0) {
if (open_desc == NULL)
open_desc = tag_desc;
} else {
if ((addr >= tag_desc->start) &&
(tag_desc->end >= (addr + PAGE_SIZE - 1))) {
tag_desc->tag_users++;
goto out;
}
}
if ((tag_desc->start > end_addr) &&
(tag_desc->start < hole_end))
hole_end = tag_desc->start;
if ((tag_desc->end < addr) &&
(tag_desc->end > hole_start))
hole_start = tag_desc->end;
tag_desc++;
}
} else {
size = sizeof(tag_storage_desc_t)*max_desc;
mm->context.tag_store = kzalloc(size, GFP_NOWAIT|__GFP_NOWARN);
if (mm->context.tag_store == NULL) {
tag_desc = NULL;
goto out;
}
tag_desc = mm->context.tag_store;
for (i = 0; i < max_desc; i++, tag_desc++)
tag_desc->tag_users = 0;
open_desc = mm->context.tag_store;
i = 0;
}
/* Check if we ran out of tag storage descriptors */
if (open_desc == NULL) {
tag_desc = NULL;
goto out;
}
/* Mark this tag descriptor slot in use and then initialize it */
tag_desc = open_desc;
tag_desc->tag_users = 1;
/* Tag storage has not been allocated for this vma and space
* is available in tag storage descriptor. Since this page is
* being swapped out, there is high probability subsequent pages
* in the VMA will be swapped out as well. Allocate pages to
* store tags for as many pages in this vma as possible but not
* more than TAG_STORAGE_PAGES. Each byte in tag space holds
* two ADI tags since each ADI tag is 4 bits. Each ADI tag
* covers adi_blksize() worth of addresses. Check if the hole is
* big enough to accommodate full address range for using
* TAG_STORAGE_PAGES number of tag pages.
*/
size = TAG_STORAGE_PAGES * PAGE_SIZE;
end_addr = addr + (size*2*adi_blksize()) - 1;
/* Check for overflow. If overflow occurs, allocate only one page */
if (end_addr < addr) {
size = PAGE_SIZE;
end_addr = addr + (size*2*adi_blksize()) - 1;
/* If overflow happens with the minimum tag storage
* allocation as well, adjust ending address for this
* tag storage.
*/
if (end_addr < addr)
end_addr = ULONG_MAX;
}
if (hole_end < end_addr) {
/* Available hole is too small on the upper end of
* address. Can we expand the range towards the lower
* address and maximize use of this slot?
*/
unsigned long tmp_addr;
end_addr = hole_end - 1;
tmp_addr = end_addr - (size*2*adi_blksize()) + 1;
/* Check for underflow. If underflow occurs, allocate
* only one page for storing ADI tags
*/
if (tmp_addr > addr) {
size = PAGE_SIZE;
tmp_addr = end_addr - (size*2*adi_blksize()) - 1;
/* If underflow happens with the minimum tag storage
* allocation as well, adjust starting address for
* this tag storage.
*/
if (tmp_addr > addr)
tmp_addr = 0;
}
if (tmp_addr < hole_start) {
/* Available hole is restricted on lower address
* end as well
*/
tmp_addr = hole_start + 1;
}
addr = tmp_addr;
size = (end_addr + 1 - addr)/(2*adi_blksize());
size = (size + (PAGE_SIZE-adi_blksize()))/PAGE_SIZE;
size = size * PAGE_SIZE;
}
tags = kzalloc(size, GFP_NOWAIT|__GFP_NOWARN);
if (tags == NULL) {
tag_desc->tag_users = 0;
tag_desc = NULL;
goto out;
}
tag_desc->start = addr;
tag_desc->tags = tags;
tag_desc->end = end_addr;
out:
spin_unlock_irqrestore(&mm->context.tag_lock, flags);
return tag_desc;
}
void del_tag_store(tag_storage_desc_t *tag_desc, struct mm_struct *mm)
{
unsigned long flags;
unsigned char *tags = NULL;
spin_lock_irqsave(&mm->context.tag_lock, flags);
tag_desc->tag_users--;
if (tag_desc->tag_users == 0) {
tag_desc->start = tag_desc->end = 0;
/* Do not free up the tag storage space allocated
* by the first descriptor. This is persistent
* emergency tag storage space for the task.
*/
if (tag_desc != mm->context.tag_store) {
tags = tag_desc->tags;
tag_desc->tags = NULL;
}
}
spin_unlock_irqrestore(&mm->context.tag_lock, flags);
kfree(tags);
}
#define tag_start(addr, tag_desc) \
((tag_desc)->tags + ((addr - (tag_desc)->start)/(2*adi_blksize())))
/* Retrieve any saved ADI tags for the page being swapped back in and
* restore these tags to the newly allocated physical page.
*/
void adi_restore_tags(struct mm_struct *mm, struct vm_area_struct *vma,
unsigned long addr, pte_t pte)
{
unsigned char *tag;
tag_storage_desc_t *tag_desc;
unsigned long paddr, tmp, version1, version2;
/* Check if the swapped out page has an ADI version
* saved. If yes, restore version tag to the newly
* allocated page.
*/
tag_desc = find_tag_store(mm, vma, addr);
if (tag_desc == NULL)
return;
tag = tag_start(addr, tag_desc);
paddr = pte_val(pte) & _PAGE_PADDR_4V;
for (tmp = paddr; tmp < (paddr+PAGE_SIZE); tmp += adi_blksize()) {
version1 = (*tag) >> 4;
version2 = (*tag) & 0x0f;
*tag++ = 0;
asm volatile("stxa %0, [%1] %2\n\t"
:
: "r" (version1), "r" (tmp),
"i" (ASI_MCD_REAL));
tmp += adi_blksize();
asm volatile("stxa %0, [%1] %2\n\t"
:
: "r" (version2), "r" (tmp),
"i" (ASI_MCD_REAL));
}
asm volatile("membar #Sync\n\t");
/* Check and mark this tag space for release later if
* the swapped in page was the last user of tag space
*/
del_tag_store(tag_desc, mm);
}
/* A page is about to be swapped out. Save any ADI tags associated with
* this physical page so they can be restored later when the page is swapped
* back in.
*/
int adi_save_tags(struct mm_struct *mm, struct vm_area_struct *vma,
unsigned long addr, pte_t oldpte)
{
unsigned char *tag;
tag_storage_desc_t *tag_desc;
unsigned long version1, version2, paddr, tmp;
tag_desc = alloc_tag_store(mm, vma, addr);
if (tag_desc == NULL)
return -1;
tag = tag_start(addr, tag_desc);
paddr = pte_val(oldpte) & _PAGE_PADDR_4V;
for (tmp = paddr; tmp < (paddr+PAGE_SIZE); tmp += adi_blksize()) {
asm volatile("ldxa [%1] %2, %0\n\t"
: "=r" (version1)
: "r" (tmp), "i" (ASI_MCD_REAL));
tmp += adi_blksize();
asm volatile("ldxa [%1] %2, %0\n\t"
: "=r" (version2)
: "r" (tmp), "i" (ASI_MCD_REAL));
*tag = (version1 << 4) | version2;
tag++;
}
return 0;
}
@@ -160,6 +160,9 @@ void sun4v_resum_overflow(struct pt_regs *regs);
 void sun4v_nonresum_error(struct pt_regs *regs,
 			  unsigned long offset);
 void sun4v_nonresum_overflow(struct pt_regs *regs);
+void sun4v_mem_corrupt_detect_precise(struct pt_regs *regs,
+				      unsigned long addr,
+				      unsigned long context);
 
 extern unsigned long sun4v_err_itlb_vaddr;
 extern unsigned long sun4v_err_itlb_ctx;
...
@@ -151,7 +151,32 @@ etrap_save:	save	%g2, -STACK_BIAS, %sp
 		stx	%g6, [%sp + PTREGS_OFF + PT_V9_G6]
 		stx	%g7, [%sp + PTREGS_OFF + PT_V9_G7]
 		or	%l7, %l0, %l7
-		sethi	%hi(TSTATE_TSO | TSTATE_PEF), %l0
+661:		sethi	%hi(TSTATE_TSO | TSTATE_PEF), %l0
+		/* If userspace is using ADI, it could potentially pass
+		 * a pointer with version tag embedded in it. To maintain
+		 * the ADI security, we must enable PSTATE.mcde. Userspace
+		 * would have already set TTE.mcd in an earlier call to
+		 * kernel and set the version tag for the address being
+		 * dereferenced. Setting PSTATE.mcde would ensure any
+		 * access to userspace data through a system call honors
+		 * ADI and does not allow a rogue app to bypass ADI by
+		 * using system calls. Setting PSTATE.mcde only affects
+		 * accesses to virtual addresses that have TTE.mcd set.
+		 * Set PMCDPER to ensure any exceptions caused by ADI
+		 * version tag mismatch are exposed before system call
+		 * returns to userspace. Setting PMCDPER affects only
+		 * writes to virtual addresses that have TTE.mcd set and
+		 * have a version tag set as well.
+		 */
+		.section .sun_m7_1insn_patch, "ax"
+		.word	661b
+		sethi	%hi(TSTATE_TSO | TSTATE_PEF | TSTATE_MCDE), %l0
+		.previous
+661:		nop
+		.section .sun_m7_1insn_patch, "ax"
+		.word	661b
+		.word	0xaf902001	/* wrpr %g0, 1, %pmcdper */
+		.previous
 		or	%l7, %l0, %l7
 		wrpr	%l2, %tnpc
 		wrpr	%l7, (TSTATE_PRIV | TSTATE_IE), %tstate
...
@@ -897,6 +897,7 @@ sparc64_boot_end:
 #include "syscalls.S"
 #include "helpers.S"
 #include "sun4v_tlb_miss.S"
+#include "sun4v_mcd.S"
 #include "sun4v_ivec.S"
 #include "ktlb.S"
 #include "tsb.S"
...
@@ -22,6 +22,7 @@
 #include <linux/uaccess.h>
 #include <asm/oplib.h>
 #include <asm/smp.h>
+#include <asm/adi.h>
 
 /* Unlike the OBP device tree, the machine description is a full-on
  * DAG.  An arbitrary number of ARCs are possible from one
@@ -1345,5 +1346,6 @@ void __init sun4v_mdesc_init(void)
 
 	cur_mdesc = hp;
 
+	mdesc_adi_init();
 	report_platform_properties();
 }
@@ -670,6 +670,31 @@ int copy_thread(unsigned long clone_flags, unsigned long sp,
 	return 0;
 }
 
+/* TIF_MCDPER in thread info flags for current task is updated lazily upon
+ * a context switch. Update this flag in current task's thread flags
+ * before dup so the dup'd task will inherit the current TIF_MCDPER flag.
+ */
+int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
+{
+	if (adi_capable()) {
+		register unsigned long tmp_mcdper;
+
+		__asm__ __volatile__(
+			".word 0x83438000\n\t"	/* rd %mcdper, %g1 */
+			"mov %%g1, %0\n\t"
+			: "=r" (tmp_mcdper)
+			:
+			: "g1");
+		if (tmp_mcdper)
+			set_thread_flag(TIF_MCDPER);
+		else
+			clear_thread_flag(TIF_MCDPER);
+	}
+
+	*dst = *src;
+	return 0;
+}
+
 typedef struct {
 	union {
 		unsigned int	pr_regs[32];
...
@@ -25,13 +25,31 @@
 		.align			32
 __handle_preemption:
 		call			SCHEDULE_USER
-		 wrpr			%g0, RTRAP_PSTATE, %pstate
+661:		 wrpr			%g0, RTRAP_PSTATE, %pstate
+		/* If userspace is using ADI, it could potentially pass
+		 * a pointer with version tag embedded in it. To maintain
+		 * the ADI security, we must re-enable PSTATE.mcde before
+		 * we continue execution in the kernel for another thread.
+		 */
+		.section .sun_m7_1insn_patch, "ax"
+		.word	661b
+		 wrpr	%g0, RTRAP_PSTATE|PSTATE_MCDE, %pstate
+		.previous
 		ba,pt			%xcc, __handle_preemption_continue
 		 wrpr			%g0, RTRAP_PSTATE_IRQOFF, %pstate
 
 __handle_user_windows:
 		call			fault_in_user_windows
-		 wrpr			%g0, RTRAP_PSTATE, %pstate
+661:		 wrpr			%g0, RTRAP_PSTATE, %pstate
+		/* If userspace is using ADI, it could potentially pass
+		 * a pointer with version tag embedded in it. To maintain
+		 * the ADI security, we must re-enable PSTATE.mcde before
+		 * we continue execution in the kernel for another thread.
+		 */
+		.section .sun_m7_1insn_patch, "ax"
+		.word	661b
+		 wrpr	%g0, RTRAP_PSTATE|PSTATE_MCDE, %pstate
+		.previous
 		ba,pt			%xcc, __handle_preemption_continue
 		 wrpr			%g0, RTRAP_PSTATE_IRQOFF, %pstate
@@ -48,7 +66,16 @@ __handle_signal:
 		add			%sp, PTREGS_OFF, %o0
 		mov			%l0, %o2
 		call			do_notify_resume
-		 wrpr			%g0, RTRAP_PSTATE, %pstate
+661:		 wrpr			%g0, RTRAP_PSTATE, %pstate
+		/* If userspace is using ADI, it could potentially pass
+		 * a pointer with version tag embedded in it. To maintain
+		 * the ADI security, we must re-enable PSTATE.mcde before
+		 * we continue execution in the kernel for another thread.
+		 */
+		.section .sun_m7_1insn_patch, "ax"
+		.word	661b
+		 wrpr	%g0, RTRAP_PSTATE|PSTATE_MCDE, %pstate
+		.previous
 		 wrpr			%g0, RTRAP_PSTATE_IRQOFF, %pstate
 
 		/* Signal delivery can modify pt_regs tstate, so we must
...
@@ -294,6 +294,8 @@ static void __init sun4v_patch(void)
 	case SUN4V_CHIP_SPARC_M7:
 	case SUN4V_CHIP_SPARC_M8:
 	case SUN4V_CHIP_SPARC_SN:
+		sun4v_patch_1insn_range(&__sun_m7_1insn_patch,
+					&__sun_m7_1insn_patch_end);
 		sun_m7_patch_2insn_range(&__sun_m7_2insn_patch,
 					 &__sun_m7_2insn_patch_end);
 		break;
...
/* sun4v_mcd.S: Sun4v memory corruption detected precise exception handler
 *
 * Copyright (c) 2015 Oracle and/or its affiliates. All rights reserved.
 * Authors: Bob Picco <bob.picco@oracle.com>,
 *	    Khalid Aziz <khalid.aziz@oracle.com>
 *
 * This work is licensed under the terms of the GNU GPL, version 2.
 */
	.text
	.align	32

sun4v_mcd_detect_precise:
	mov	%l4, %o1
	mov	%l5, %o2
	call	sun4v_mem_corrupt_detect_precise
	 add	%sp, PTREGS_OFF, %o0
	ba,a,pt	%xcc, rtrap
	 nop
@@ -362,7 +362,6 @@ void sun4v_data_access_exception(struct pt_regs *regs, unsigned long addr, unsig
 {
 	unsigned short type = (type_ctx >> 16);
 	unsigned short ctx  = (type_ctx & 0xffff);
-	siginfo_t info;
 
 	if (notify_die(DIE_TRAP, "data access exception", regs,
 		       0, 0x8, SIGTRAP) == NOTIFY_STOP)
@@ -397,12 +396,29 @@ void sun4v_data_access_exception(struct pt_regs *regs, unsigned long addr, unsig
 	if (is_no_fault_exception(regs))
 		return;
 
-	info.si_signo = SIGSEGV;
-	info.si_errno = 0;
-	info.si_code = SEGV_MAPERR;
-	info.si_addr = (void __user *) addr;
-	info.si_trapno = 0;
-	force_sig_info(SIGSEGV, &info, current);
+	/* MCD (Memory Corruption Detection) disabled trap (TT=0x19) in HV
+	 * is vectored through data access exception trap with fault type
+	 * set to HV_FAULT_TYPE_MCD_DIS. Check for MCD disabled trap.
+	 * Accessing an address with invalid ASI for the address, for
+	 * example setting an ADI tag on an address with ASI_MCD_PRIMARY
+	 * when TTE.mcd is not set for the VA, is also vectored into
+	 * kernel by HV as data access exception with fault type set to
+	 * HV_FAULT_TYPE_INV_ASI.
+	 */
+	switch (type) {
+	case HV_FAULT_TYPE_INV_ASI:
+		force_sig_fault(SIGILL, ILL_ILLADR, (void __user *)addr, 0,
+				current);
+		break;
+	case HV_FAULT_TYPE_MCD_DIS:
+		force_sig_fault(SIGSEGV, SEGV_ACCADI, (void __user *)addr, 0,
+				current);
+		break;
+	default:
+		force_sig_fault(SIGSEGV, SEGV_MAPERR, (void __user *)addr, 0,
+				current);
+		break;
+	}
 }
 
 void sun4v_data_access_exception_tl1(struct pt_regs *regs, unsigned long addr, unsigned long type_ctx)
...@@ -1847,6 +1863,7 @@ struct sun4v_error_entry { ...@@ -1847,6 +1863,7 @@ struct sun4v_error_entry {
#define SUN4V_ERR_ATTRS_ASI 0x00000080 #define SUN4V_ERR_ATTRS_ASI 0x00000080
#define SUN4V_ERR_ATTRS_PRIV_REG 0x00000100 #define SUN4V_ERR_ATTRS_PRIV_REG 0x00000100
#define SUN4V_ERR_ATTRS_SPSTATE_MSK 0x00000600 #define SUN4V_ERR_ATTRS_SPSTATE_MSK 0x00000600
#define SUN4V_ERR_ATTRS_MCD 0x00000800
#define SUN4V_ERR_ATTRS_SPSTATE_SHFT 9 #define SUN4V_ERR_ATTRS_SPSTATE_SHFT 9
#define SUN4V_ERR_ATTRS_MODE_MSK 0x03000000 #define SUN4V_ERR_ATTRS_MODE_MSK 0x03000000
#define SUN4V_ERR_ATTRS_MODE_SHFT 24 #define SUN4V_ERR_ATTRS_MODE_SHFT 24
...@@ -2044,6 +2061,50 @@ static void sun4v_log_error(struct pt_regs *regs, struct sun4v_error_entry *ent, ...@@ -2044,6 +2061,50 @@ static void sun4v_log_error(struct pt_regs *regs, struct sun4v_error_entry *ent,
} }
} }
/* Handle memory corruption detected error which is vectored in
* through resumable error trap.
*/
void do_mcd_err(struct pt_regs *regs, struct sun4v_error_entry ent)
{
if (notify_die(DIE_TRAP, "MCD error", regs, 0, 0x34,
SIGSEGV) == NOTIFY_STOP)
return;
if (regs->tstate & TSTATE_PRIV) {
/* MCD exception could happen because the task was
* running a system call with MCD enabled and passed a
* non-versioned pointer or pointer with bad version
* tag to the system call. In such cases, hypervisor
* places the address of offending instruction in the
* resumable error report. This is a deferred error,
* so the read/write that caused the trap was potentially
* retired long time back and we may have no choice
* but to send SIGSEGV to the process.
*/
const struct exception_table_entry *entry;
entry = search_exception_tables(regs->tpc);
if (entry) {
/* Looks like a bad syscall parameter */
#ifdef DEBUG_EXCEPTIONS
pr_emerg("Exception: PC<%016lx> faddr<UNKNOWN>\n",
regs->tpc);
pr_emerg("EX_TABLE: insn<%016lx> fixup<%016lx>\n",
ent.err_raddr, entry->fixup);
#endif
regs->tpc = entry->fixup;
regs->tnpc = regs->tpc + 4;
return;
}
}
/* Send SIGSEGV to the userspace process with the right signal
* code
*/
force_sig_fault(SIGSEGV, SEGV_ADIDERR, (void __user *)ent.err_raddr,
0, current);
}
/* We run with %pil set to PIL_NORMAL_MAX and PSTATE_IE enabled in %pstate. /* We run with %pil set to PIL_NORMAL_MAX and PSTATE_IE enabled in %pstate.
* Log the event and clear the first word of the entry. * Log the event and clear the first word of the entry.
*/ */
...@@ -2081,6 +2142,14 @@ void sun4v_resum_error(struct pt_regs *regs, unsigned long offset) ...@@ -2081,6 +2142,14 @@ void sun4v_resum_error(struct pt_regs *regs, unsigned long offset)
goto out; goto out;
} }
/* If this is a memory corruption detected error vectored in
* by HV through resumable error trap, call the handler
*/
if (local_copy.err_attrs & SUN4V_ERR_ATTRS_MCD) {
do_mcd_err(regs, local_copy);
return;
}
sun4v_log_error(regs, &local_copy, cpu, sun4v_log_error(regs, &local_copy, cpu,
KERN_ERR "RESUMABLE ERROR", KERN_ERR "RESUMABLE ERROR",
&sun4v_resum_oflow_cnt); &sun4v_resum_oflow_cnt);
...@@ -2656,6 +2725,53 @@ void sun4v_do_mna(struct pt_regs *regs, unsigned long addr, unsigned long type_c ...@@ -2656,6 +2725,53 @@ void sun4v_do_mna(struct pt_regs *regs, unsigned long addr, unsigned long type_c
force_sig_info(SIGBUS, &info, current); force_sig_info(SIGBUS, &info, current);
} }
/* sun4v_mem_corrupt_detect_precise() - Handle precise exception on an ADI
* tag mismatch.
*
* ADI version tag mismatch on a load from memory always results in a
* precise exception. Tag mismatch on a store to memory will result in
* precise exception if MCDPER or PMCDPER is set to 1.
*/
void sun4v_mem_corrupt_detect_precise(struct pt_regs *regs, unsigned long addr,
unsigned long context)
{
if (notify_die(DIE_TRAP, "memory corruption precise exception", regs,
0, 0x8, SIGSEGV) == NOTIFY_STOP)
return;
if (regs->tstate & TSTATE_PRIV) {
/* MCD exception could happen because the task was running
* a system call with MCD enabled and passed a non-versioned
* pointer or pointer with bad version tag to the system
* call.
*/
const struct exception_table_entry *entry;
entry = search_exception_tables(regs->tpc);
if (entry) {
/* Looks like a bad syscall parameter */
#ifdef DEBUG_EXCEPTIONS
pr_emerg("Exception: PC<%016lx> faddr<UNKNOWN>\n",
regs->tpc);
pr_emerg("EX_TABLE: insn<%016lx> fixup<%016lx>\n",
regs->tpc, entry->fixup);
#endif
regs->tpc = entry->fixup;
regs->tnpc = regs->tpc + 4;
return;
}
pr_emerg("%s: ADDR[%016lx] CTX[%lx], going.\n",
__func__, addr, context);
die_if_kernel("MCD precise", regs);
}
if (test_thread_flag(TIF_32BIT)) {
regs->tpc &= 0xffffffff;
regs->tnpc &= 0xffffffff;
}
force_sig_fault(SIGSEGV, SEGV_ADIPERR, (void __user *)addr, 0, current);
}
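On the receiving end, userspace can tell the two delivery paths apart by si_code: SEGV_ADIPERR arrives from the precise handler above, SEGV_ADIDERR from the disrupting-error path. A hedged sketch of a handler; the fallback numeric values mirror the definitions added to siginfo.h later in this series:

        #include <signal.h>
        #include <stdio.h>
        #include <unistd.h>

        #ifndef SEGV_ADIDERR
        #define SEGV_ADIDERR 6          /* disrupting MCD error (this series) */
        #endif
        #ifndef SEGV_ADIPERR
        #define SEGV_ADIPERR 7          /* precise MCD exception (this series) */
        #endif

        static void segv_handler(int sig, siginfo_t *info, void *uc)
        {
                if (info->si_code == SEGV_ADIPERR)
                        fprintf(stderr, "precise ADI mismatch at %p\n",
                                info->si_addr);
                else if (info->si_code == SEGV_ADIDERR)
                        fprintf(stderr, "disrupting ADI error at %p\n",
                                info->si_addr);
                _exit(1);
        }

        int main(void)
        {
                struct sigaction sa = { .sa_flags = SA_SIGINFO };

                sa.sa_sigaction = segv_handler;
                sigemptyset(&sa.sa_mask);
                sigaction(SIGSEGV, &sa, NULL);
                /* ... access ADI-tagged memory via a mis-tagged pointer ... */
                return 0;
        }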
void do_privop(struct pt_regs *regs)
{
enum ctx_state prev_state = exception_enter();
...
@@ -26,8 +26,10 @@ tl0_ill: membar #Sync
TRAP_7INSNS(do_illegal_instruction)
tl0_privop: TRAP(do_privop)
tl0_resv012: BTRAP(0x12) BTRAP(0x13) BTRAP(0x14) BTRAP(0x15) BTRAP(0x16) BTRAP(0x17)
tl0_resv018: BTRAP(0x18) BTRAP(0x19)
tl0_mcd: SUN4V_MCD_PRECISE
tl0_resv01b: BTRAP(0x1b)
tl0_resv01c: BTRAP(0x1c) BTRAP(0x1d) BTRAP(0x1e) BTRAP(0x1f)
tl0_fpdis: TRAP_NOSAVE(do_fpdis)
tl0_fpieee: TRAP_SAVEFPU(do_fpieee)
tl0_fpother: TRAP_NOSAVE(do_fpother_check_fitos)
...
@@ -50,7 +50,12 @@ user_rtt_fill_fixup_common:
SET_GL(0)
.previous
661: wrpr %g0, RTRAP_PSTATE, %pstate
.section .sun_m7_1insn_patch, "ax"
.word 661b
/* Re-enable PSTATE.mcde to maintain ADI security */
wrpr %g0, RTRAP_PSTATE|PSTATE_MCDE, %pstate
.previous
mov %l1, %g6
ldx [%g6 + TI_TASK], %g4
...
@@ -145,6 +145,11 @@ SECTIONS
*(.pause_3insn_patch)
__pause_3insn_patch_end = .;
}
.sun_m7_1insn_patch : {
__sun_m7_1insn_patch = .;
*(.sun_m7_1insn_patch)
__sun_m7_1insn_patch_end = .;
}
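Taken together with the 661: label above, this linker section implements a small boot-time alternatives mechanism: each entry pairs the address of a patch site with the M7 replacement instruction, and early setup code rewrites the site when it detects an M7-class cpu, so PSTATE.mcde is re-enabled on return to userspace only where ADI exists. A sketch of the patch walk, modeled on sparc's existing one-instruction patch loops (kernel context; treat the exact function name as an assumption):

        struct sun4v_1insn_patch_entry {
                unsigned int    addr;   /* address of the instruction to patch */
                unsigned int    insn;   /* replacement instruction */
        };

        /* Walk the entries collected in .sun_m7_1insn_patch, overwrite each
         * recorded instruction and flush it from the instruction cache.
         */
        static void sun_m7_patch_1insn_range(struct sun4v_1insn_patch_entry *start,
                                             struct sun4v_1insn_patch_entry *end)
        {
                while (start < end) {
                        unsigned long addr = start->addr;

                        *(unsigned int *)addr = start->insn;
                        wmb();
                        __asm__ __volatile__("flush     %0" : : "r" (addr));
                        start++;
                }
        }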
.sun_m7_2insn_patch : {
__sun_m7_2insn_patch = .;
*(.sun_m7_2insn_patch)
...
@@ -12,6 +12,7 @@
#include <linux/pagemap.h>
#include <linux/rwsem.h>
#include <asm/pgtable.h>
#include <asm/adi.h>
/*
* The performance critical leaf functions are made noinline otherwise gcc
@@ -201,6 +202,24 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
pgd_t *pgdp;
int nr = 0;
#ifdef CONFIG_SPARC64
if (adi_capable()) {
long addr = start;
/* If userspace has passed a versioned address, the kernel
* will not find it in the VMAs since it does not store
* the version tags in the list of VMAs. Storing version
* tags in the list of VMAs is impractical since they can be
* changed any time from userspace without dropping into
* the kernel. Any address search in the VMAs will be done
* with non-versioned addresses. Ensure the ADI version bits
* are dropped here by sign extending the last bit before
* the ADI bits. The IOMMU does not implement version tags.
*/
addr = (addr << (long)adi_nbits()) >> (long)adi_nbits();
start = addr;
}
#endif
start &= PAGE_MASK;
addr = start;
len = (unsigned long) nr_pages << PAGE_SHIFT;
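The shift pair above is plain sign extension from the bit just below the tag field: shifting the tag out of the top and arithmetic-shifting back replaces bits 63-60 with copies of bit 59. A standalone demonstration, assuming the M7 layout where adi_nbits() is 4:

        #include <stdio.h>

        #define ADI_NBITS 4     /* assumption: M7 keeps the tag in bits 63-60 */

        static long untag(long addr)
        {
                /* Same trick as the kernel code above; the left shift is done
                 * unsigned to sidestep signed-overflow UB, the right shift
                 * relies on arithmetic shift of signed values (as the kernel
                 * does).
                 */
                return (long)((unsigned long)addr << ADI_NBITS) >> ADI_NBITS;
        }

        int main(void)
        {
                long tagged = (long)0xa00007f012345678UL; /* version 0xa */

                /* prints: a00007f012345678 -> 7f012345678 */
                printf("%lx -> %lx\n", tagged, untag(tagged));
                return 0;
        }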
@@ -231,6 +250,24 @@ int get_user_pages_fast(unsigned long start, int nr_pages, int write,
pgd_t *pgdp;
int nr = 0;
#ifdef CONFIG_SPARC64
if (adi_capable()) {
long addr = start;
/* If userspace has passed a versioned address, the kernel
* will not find it in the VMAs since it does not store
* the version tags in the list of VMAs. Storing version
* tags in the list of VMAs is impractical since they can be
* changed any time from userspace without dropping into
* the kernel. Any address search in the VMAs will be done
* with non-versioned addresses. Ensure the ADI version bits
* are dropped here by sign extending the last bit before
* the ADI bits. The IOMMU does not implement version tags.
*/
addr = (addr << (long)adi_nbits()) >> (long)adi_nbits();
start = addr;
}
#endif
start &= PAGE_MASK;
addr = start;
len = (unsigned long) nr_pages << PAGE_SHIFT;
...
@@ -182,8 +182,20 @@ pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
struct page *page, int writeable)
{
unsigned int shift = huge_page_shift(hstate_vma(vma));
pte_t pte;
pte = hugepage_shift_to_tte(entry, shift);
#ifdef CONFIG_SPARC64
/* If this vma has ADI enabled on it, turn on TTE.mcd
*/
if (vma->vm_flags & VM_SPARC_ADI)
return pte_mkmcd(pte);
else
return pte_mknotmcd(pte);
#else
return pte;
#endif
}
static unsigned int sun4v_huge_tte_to_shift(pte_t entry)
...
@@ -3160,3 +3160,72 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
do_flush_tlb_kernel_range(start, end);
}
}
void copy_user_highpage(struct page *to, struct page *from,
unsigned long vaddr, struct vm_area_struct *vma)
{
char *vfrom, *vto;
vfrom = kmap_atomic(from);
vto = kmap_atomic(to);
copy_user_page(vto, vfrom, vaddr, to);
kunmap_atomic(vto);
kunmap_atomic(vfrom);
/* If this page has ADI enabled, copy over any ADI tags
* as well
*/
if (vma->vm_flags & VM_SPARC_ADI) {
unsigned long pfrom, pto, i, adi_tag;
pfrom = page_to_phys(from);
pto = page_to_phys(to);
for (i = pfrom; i < (pfrom + PAGE_SIZE); i += adi_blksize()) {
asm volatile("ldxa [%1] %2, %0\n\t"
: "=r" (adi_tag)
: "r" (i), "i" (ASI_MCD_REAL));
asm volatile("stxa %0, [%1] %2\n\t"
:
: "r" (adi_tag), "r" (pto),
"i" (ASI_MCD_REAL));
pto += adi_blksize();
}
asm volatile("membar #Sync\n\t");
}
}
EXPORT_SYMBOL(copy_user_highpage);
void copy_highpage(struct page *to, struct page *from)
{
char *vfrom, *vto;
vfrom = kmap_atomic(from);
vto = kmap_atomic(to);
copy_page(vto, vfrom);
kunmap_atomic(vto);
kunmap_atomic(vfrom);
/* If this platform is ADI enabled, copy any ADI tags
* as well
*/
if (adi_capable()) {
unsigned long pfrom, pto, i, adi_tag;
pfrom = page_to_phys(from);
pto = page_to_phys(to);
for (i = pfrom; i < (pfrom + PAGE_SIZE); i += adi_blksize()) {
asm volatile("ldxa [%1] %2, %0\n\t"
: "=r" (adi_tag)
: "r" (i), "i" (ASI_MCD_REAL));
asm volatile("stxa %0, [%1] %2\n\t"
:
: "r" (adi_tag), "r" (pto),
"i" (ASI_MCD_REAL));
pto += adi_blksize();
}
asm volatile("membar #Sync\n\t");
}
}
EXPORT_SYMBOL(copy_highpage);
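Both copy routines end with the same tag walk: read the tag of every ADI block of the source page through ASI_MCD_REAL (tags are addressed by real/physical address), write it to the destination, and order all tag stores with membar #Sync. A hypothetical helper, not part of the patch, that factors that loop out:

        /* Hypothetical helper (not in the patch): copy the ADI tags of one
         * page from physical address pfrom to pto, one ADI block at a time.
         */
        static void adi_copy_page_tags(unsigned long pfrom, unsigned long pto)
        {
                unsigned long i, adi_tag;

                for (i = pfrom; i < (pfrom + PAGE_SIZE); i += adi_blksize()) {
                        asm volatile("ldxa [%1] %2, %0\n\t"
                                     : "=r" (adi_tag)
                                     : "r" (i), "i" (ASI_MCD_REAL));
                        asm volatile("stxa %0, [%1] %2\n\t"
                                     : : "r" (adi_tag), "r" (pto),
                                         "i" (ASI_MCD_REAL));
                        pto += adi_blksize();
                }
                asm volatile("membar #Sync\n\t");
        }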
@@ -546,6 +546,9 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
mm->context.sparc64_ctx_val = 0UL;
mm->context.tag_store = NULL;
spin_lock_init(&mm->context.tag_lock);
#if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
/* We reset them to zero because the fork() page copying
* will re-increment the counters as the parent PTEs are
@@ -611,4 +614,22 @@ void destroy_context(struct mm_struct *mm)
}
spin_unlock_irqrestore(&ctx_alloc_lock, flags);
/* If ADI tag storage was allocated for this task, free it */
if (mm->context.tag_store) {
tag_storage_desc_t *tag_desc;
unsigned long max_desc;
unsigned char *tags;
tag_desc = mm->context.tag_store;
max_desc = PAGE_SIZE/sizeof(tag_storage_desc_t);
for (i = 0; i < max_desc; i++) {
tags = tag_desc->tags;
tag_desc->tags = NULL;
kfree(tags);
tag_desc++;
}
kfree(mm->context.tag_store);
mm->context.tag_store = NULL;
}
}
@@ -27,7 +27,7 @@ static inline void signal_compat_build_tests(void)
*/
BUILD_BUG_ON(NSIGILL != 11);
BUILD_BUG_ON(NSIGFPE != 13);
BUILD_BUG_ON(NSIGSEGV != 7);
BUILD_BUG_ON(NSIGBUS != 5);
BUILD_BUG_ON(NSIGTRAP != 4);
BUILD_BUG_ON(NSIGCHLD != 6);
...
@@ -72,7 +72,8 @@ config DISPLAY7SEG
config ORACLE_DAX
tristate "Oracle Data Analytics Accelerator"
depends on SPARC64
default m
help
Driver for Oracle Data Analytics Accelerator, which is
a coprocessor that performs database operations in hardware.
...
@@ -880,7 +880,7 @@ static int dax_ccb_exec(struct dax_ctx *ctx, const char __user *buf,
dax_dbg("args: ccb_buf_len=%ld, idx=%d", count, idx);
/* for given index and length, verify ca_buf range exists */
if (idx < 0 || idx > (DAX_CA_ELEMS - nccbs)) {
ctx->result.exec.status = DAX_SUBMIT_ERR_NO_CA_AVAIL;
return 0;
}
...
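The rewritten bound above is the sign-safe form of the range check: the old `idx + nccbs >= DAX_CA_ELEMS` accepts a negative index, since the sum can still land below the limit. A quick userspace demonstration of the difference (the DAX_CA_ELEMS value here is a stand-in, not the driver's real constant):

        #include <stdio.h>

        #define DAX_CA_ELEMS 128        /* stand-in value for illustration */

        static int old_ok(int idx, int nccbs)
        {
                return !(idx + nccbs >= DAX_CA_ELEMS);
        }

        static int new_ok(int idx, int nccbs)
        {
                return !(idx < 0 || idx > (DAX_CA_ELEMS - nccbs));
        }

        int main(void)
        {
                /* A negative completion index slips past the old check. */
                printf("idx=-1: old accepts=%d, new accepts=%d\n",
                       old_ok(-1, 1), new_ok(-1, 1));   /* old 1, new 0 */
                return 0;
        }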
@@ -400,6 +400,42 @@ static inline int pud_same(pud_t pud_a, pud_t pud_b)
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
#endif
#ifndef __HAVE_ARCH_DO_SWAP_PAGE
/*
* Some architectures support metadata associated with a page. When a
* page is being swapped out, this metadata must be saved so it can be
* restored when the page is swapped back in. SPARC M7 and newer
* processors support an ADI (Application Data Integrity) tag as
* metadata for the page. arch_do_swap_page() can restore this
* metadata when a page is swapped back in.
*/
static inline void arch_do_swap_page(struct mm_struct *mm,
struct vm_area_struct *vma,
unsigned long addr,
pte_t pte, pte_t oldpte)
{
}
#endif
#ifndef __HAVE_ARCH_UNMAP_ONE
/*
* Some architectures support metadata associated with a page. When a
* page is being swapped out, this metadata must be saved so it can be
* restored when the page is swapped back in. SPARC M7 and newer
* processors support an ADI (Application Data Integrity) tag as
* metadata for the page. arch_unmap_one() can save this
* metadata when a page is swapped out.
*/
static inline int arch_unmap_one(struct mm_struct *mm,
struct vm_area_struct *vma,
unsigned long addr,
pte_t orig_pte)
{
return 0;
}
#endif
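These default no-ops are the override points the sparc64 patches hook into: arch_unmap_one() saves a page's tags as it is about to be swapped out (and can veto the unmap by returning a negative value, as the rmap changes below show), while arch_do_swap_page() restores them at swap-in. A hedged sketch of the shape of an arch override; adi_state_enabled(), adi_save_tags() and adi_restore_tags() are illustrative stand-ins for the real sparc64 helpers:

        /* Sketch of an override in an arch's <asm/pgtable.h>; helper names
         * are assumptions, not the exact code in the tree.
         */
        #define __HAVE_ARCH_DO_SWAP_PAGE
        static inline void arch_do_swap_page(struct mm_struct *mm,
                                             struct vm_area_struct *vma,
                                             unsigned long addr,
                                             pte_t pte, pte_t oldpte)
        {
                /* Restore saved ADI tags once the new PTE is in place */
                if (adi_state_enabled(vma))
                        adi_restore_tags(mm, vma, addr, pte);
        }

        #define __HAVE_ARCH_UNMAP_ONE
        static inline int arch_unmap_one(struct mm_struct *mm,
                                         struct vm_area_struct *vma,
                                         unsigned long addr, pte_t orig_pte)
        {
                /* Save tags before the mapping goes away; a negative return
                 * makes try_to_unmap_one() restore the PTE and give up.
                 */
                if (adi_state_enabled(vma))
                        return adi_save_tags(mm, vma, addr, orig_pte);
                return 0;
        }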
#ifndef __HAVE_ARCH_PGD_OFFSET_GATE
#define pgd_offset_gate(mm, addr) pgd_offset(mm, addr)
#endif
...
@@ -237,6 +237,8 @@ static inline void copy_user_highpage(struct page *to, struct page *from,
#endif
#ifndef __HAVE_ARCH_COPY_HIGHPAGE
static inline void copy_highpage(struct page *to, struct page *from)
{
char *vfrom, *vto;
@@ -248,4 +250,6 @@ static inline void copy_highpage(struct page *to, struct page *from)
kunmap_atomic(vfrom);
}
#endif
#endif /* _LINUX_HIGHMEM_H */
@@ -243,6 +243,9 @@ extern unsigned int kobjsize(const void *objp);
# define VM_GROWSUP VM_ARCH_1
#elif defined(CONFIG_IA64)
# define VM_GROWSUP VM_ARCH_1
#elif defined(CONFIG_SPARC64)
# define VM_SPARC_ADI VM_ARCH_1 /* Uses ADI tag for access control */
# define VM_ARCH_CLEAR VM_SPARC_ADI
#elif !defined(CONFIG_MMU)
# define VM_MAPPED_COPY VM_ARCH_1 /* T if mapped copy of data (nommu mmap) */
#endif
@@ -285,6 +288,12 @@ extern unsigned int kobjsize(const void *objp);
/* This mask is used to clear all the VMA flags used by mlock */
#define VM_LOCKED_CLEAR_MASK (~(VM_LOCKED | VM_LOCKONFAULT))
/* Arch-specific flags to clear when updating VM flags on protection change */
#ifndef VM_ARCH_CLEAR
# define VM_ARCH_CLEAR VM_NONE
#endif
#define VM_FLAGS_CLEAR (ARCH_VM_PKEY_FLAGS | VM_ARCH_CLEAR)
/*
* mapping from the currently active vm_flags protection bits (the
* low four bits) to a page protection mask..
...
@@ -92,7 +92,7 @@ static inline void vm_unacct_memory(long pages)
*
* Returns true if the prot flags are valid
*/
static inline bool arch_validate_prot(unsigned long prot, unsigned long addr)
{
return (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM)) == 0;
}
...
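The new addr parameter exists so an architecture can make this decision per-mapping rather than globally. A hedged sketch of an override in the spirit of the sparc64 one added by this series; PROT_ADI and adi_capable() come from these patches, and the body is a simplified assumption, not the exact code in the tree:

        #define arch_validate_prot(prot, addr) sparc_validate_prot(prot, addr)

        /* Simplified sketch: allow PROT_ADI only on ADI-capable cpus; the
         * real implementation can also veto it per-VMA based on addr.
         */
        static inline bool sparc_validate_prot(unsigned long prot,
                                               unsigned long addr)
        {
                if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM |
                             PROT_ADI))
                        return false;
                if ((prot & PROT_ADI) && !adi_capable())
                        return false;
                return true;
        }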
@@ -220,7 +220,10 @@ typedef struct siginfo {
#else
# define SEGV_PKUERR 4 /* failed protection key checks */
#endif
#define SEGV_ACCADI 5 /* ADI not enabled for mapped object */
#define SEGV_ADIDERR 6 /* Disrupting MCD error */
#define SEGV_ADIPERR 7 /* Precise MCD exception */
#define NSIGSEGV 7
/*
* SIGBUS si_codes
...
@@ -2369,6 +2369,10 @@ int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
if (*vm_flags & VM_SAO)
return 0;
#endif
#ifdef VM_SPARC_ADI
if (*vm_flags & VM_SPARC_ADI)
return 0;
#endif
if (!test_bit(MMF_VM_MERGEABLE, &mm->flags)) {
err = __ksm_enter(mm);
...
@@ -3053,6 +3053,7 @@ int do_swap_page(struct vm_fault *vmf)
if (pte_swp_soft_dirty(vmf->orig_pte))
pte = pte_mksoft_dirty(pte);
set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
vmf->orig_pte = pte;
/* ksm created a completely new copy */
...
@@ -417,7 +417,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
end = start + len;
if (end <= start)
return -ENOMEM;
if (!arch_validate_prot(prot, start))
return -EINVAL;
reqprot = prot;
@@ -475,7 +475,7 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
* cleared from the VMA.
*/
mask_off_old_flags = VM_READ | VM_WRITE | VM_EXEC |
VM_FLAGS_CLEAR;
new_vma_pkey = arch_override_mprotect_pkey(vma, prot, pkey);
newflags = calc_vm_prot_bits(prot, new_vma_pkey);
...
@@ -1497,6 +1497,14 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
(flags & (TTU_MIGRATION|TTU_SPLIT_FREEZE))) {
swp_entry_t entry;
pte_t swp_pte;
if (arch_unmap_one(mm, vma, address, pteval) < 0) {
set_pte_at(mm, address, pvmw.pte, pteval);
ret = false;
page_vma_mapped_walk_done(&pvmw);
break;
}
/*
* Store the pfn of the page in a special migration
* pte. do_swap_page() will wait until the migration
@@ -1556,6 +1564,12 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
page_vma_mapped_walk_done(&pvmw);
break;
}
if (arch_unmap_one(mm, vma, address, pteval) < 0) {
set_pte_at(mm, address, pvmw.pte, pteval);
ret = false;
page_vma_mapped_walk_done(&pvmw);
break;
}
if (list_empty(&mm->mmlist)) {
spin_lock(&mmlist_lock);
if (list_empty(&mm->mmlist))
...