Commit ac435075 authored by Linus Torvalds

Merge tag 'csky-for-linus-4.20' of https://github.com/c-sky/csky-linux

Pull C-SKY architecture port from Guo Ren:
 "This contains the Linux port for C-SKY(csky) based on linux-4.19
  Release, which has been through 10 rounds of review on mailing list.

  More information:

    http://en.c-sky.com

  The development repo:

    https://github.com/c-sky/csky-linux

  ABI Documentation:

    https://github.com/c-sky/csky-doc

  Here is the pre-built cross compiler from our CI, for quick testing:

    https://gitlab.com/c-sky/buildroot/-/jobs/101608095/artifacts/file/output/images/csky_toolchain_qemu_csky_ck807f_4.18_glibc_defconfig_482b221e52908be1c9b2ccb444255e1562bb7025.tar.xz

  We use buildroot as our CI test environment; "LTP, Lmbench ..." are run
  for every commit. See here for more details:

    https://gitlab.com/c-sky/buildroot/pipelines

  We'll continuously improve the csky subsystem in the future"

Arnd acks, and adds the following notes:
 "I did a thorough review of the ABI, which as usual mainly consists of
  spotting any files that don't use the asm-generic ABI itself, and
  having it changed to it matches exactly what we do on other new
  architectures.

  I also looked at every other patch and commented on maybe half of them
  where I saw something that did not quite seem right. Others have
  reviewed specific patches in greater depth. I'm sure that one could
  find more of the minor details, but as long as they are not ABI
  relevant, they can be fixed later.

  The only patch that is part of the ABI and that nobody reviewed is the
  signal handling. This is one of the areas I never worked on in much
  detail. I did not see anything wrong with it, but I also don't know
  what the problems with the other architectures are here, and we seem
  to be hitting issues occasionally, and we never managed to generalize
  this enough for new architectures to have a trivial implementation.

  I was originally hoping that we could have the 64-bit time_t
  interfaces ready in time to completely drop the 32-bit ones, but that
  did not happen. We might still remove them in the next merge window
  depending on whether the libc upstream people prefer to keep them or
  not.

  One more general comment: I think this may well be the last new CPU
  architecture we ever add to the kernel. Both nds32 and c-sky are made
  by companies that also work on risc-v, and generally speaking risc-v
  seems to be killing off any of the minor licensable instruction set
  projects, just like ARM has mostly killed off the custom
  vendor-specific instruction sets already.

  If we add another architecture in the future, it may instead be
  something like the LLVM bitcode or WebAssembly, who knows?"

To which Geert Uytterhoeven pipes in about another architecture still in
the pipeline: Kalray MPPA.

* tag 'csky-for-linus-4.20' of https://github.com/c-sky/csky-linux: (24 commits)
  dt-bindings: interrupt-controller: C-SKY APB intc
  irqchip: add C-SKY APB bus interrupt controller
  dt-bindings: interrupt-controller: C-SKY SMP intc
  irqchip: add C-SKY SMP interrupt controller
  MAINTAINERS: Add csky
  dt-bindings: Add vendor prefix for csky
  dt-bindings: csky CPU Bindings
  csky: Misc headers
  csky: SMP support
  csky: Debug and Ptrace GDB
  csky: User access
  csky: Library functions
  csky: ELF and module probe
  csky: Atomic operations
  csky: IRQ handling
  csky: VDSO and rt_sigreturn
  csky: Process management and Signal
  csky: MMU and page table management
  csky: Cache and TLB routines
  csky: System Call
  ...
==================
C-SKY CPU Bindings
==================
The device tree allows the layout of CPUs in a system to be described
through the "cpus" node, which in turn contains a number of subnodes
(i.e. "cpu") defining properties for every CPU.
Only SMP systems need to care about the cpus node; a single-processor
system need not define a cpus node at all.
=====================================
cpus and cpu node bindings definition
=====================================
- cpus node
Description: Container of cpu nodes
The node name must be "cpus".
A cpus node must define the following properties:
- #address-cells
Usage: required
Value type: <u32>
Definition: must be set to 1
- #size-cells
Usage: required
Value type: <u32>
Definition: must be set to 0
- cpu node
Description: Describes one of the SMP cores
PROPERTIES
- device_type
Usage: required
Value type: <string>
Definition: must be "cpu"
- reg
Usage: required
Value type: <u32>
Definition: CPU index
- compatible:
Usage: required
Value type: <string>
Definition: must contain "csky", eg:
"csky,610"
"csky,807"
"csky,810"
"csky,860"
Example:
--------
cpus {
#address-cells = <1>;
#size-cells = <0>;
cpu@0 {
device_type = "cpu";
reg = <0>;
status = "ok";
};
cpu@1 {
device_type = "cpu";
reg = <1>;
status = "ok";
};
};
==============================
C-SKY APB Interrupt Controller
==============================
The C-SKY APB Interrupt Controller is a simple SoC interrupt controller
on the APB bus, and it is only used as the root irq controller.
- csky,apb-intc is used in many C-SKY FPGAs and SoCs; it supports 64 interrupts.
- csky,dual-apb-intc consists of two apb-intc blocks and supports 128 interrupts.
- csky,gx6605s-intc is the internal interrupt controller of the gx6605s SoC, with 64 interrupts.
=============================
intc node bindings definition
=============================
Description: Describes APB interrupt controller
PROPERTIES
- compatible
Usage: required
Value type: <string>
Definition: must be "csky,apb-intc"
"csky,dual-apb-intc"
"csky,gx6605s-intc"
- #interrupt-cells
Usage: required
Value type: <u32>
Definition: must be <1>
- reg
Usage: required
Value type: <u32 u32>
Definition: <phys-addr size> of the controller registers, as seen from the CPU
- interrupt-controller:
Usage: required
- csky,support-pulse-signal:
Usage: optional
Description: flag to enable support for pulse-signal interrupts
Examples:
---------
intc: interrupt-controller@500000 {
compatible = "csky,apb-intc";
#interrupt-cells = <1>;
reg = <0x00500000 0x400>;
interrupt-controller;
};
intc: interrupt-controller@500000 {
compatible = "csky,dual-apb-intc";
#interrupt-cells = <1>;
reg = <0x00500000 0x400>;
interrupt-controller;
};
intc: interrupt-controller@500000 {
compatible = "csky,gx6605s-intc";
#interrupt-cells = <1>;
reg = <0x00500000 0x400>;
interrupt-controller;
};
===========================================
C-SKY Multi-processors Interrupt Controller
===========================================
The C-SKY Multi-processor Interrupt Controller is designed for the
ck807/ck810/ck860 SMP SoCs, and it can also be used in non-SMP systems.
Interrupt number definition:
0-15 : software irq, and we use 15 as our IPI_IRQ.
16-31 : private irq, and we use 16 as the co-processor timer.
32-1024: common irq for soc ip.
=============================
intc node bindings definition
=============================
Description: Describes SMP interrupt controller
PROPERTIES
- compatible
Usage: required
Value type: <string>
Definition: must be "csky,mpintc"
- #interrupt-cells
Usage: required
Value type: <u32>
Definition: must be <1>
- interrupt-controller:
Usage: required
Examples:
---------
intc: interrupt-controller {
compatible = "csky,mpintc";
#interrupt-cells = <1>;
interrupt-controller;
};
@@ -84,6 +84,7 @@ cosmic Cosmic Circuits
crane Crane Connectivity Solutions
creative Creative Technology Ltd
crystalfontz Crystalfontz America, Inc.
csky Hangzhou C-SKY Microsystems Co., Ltd
cubietech Cubietech, Ltd.
cypress Cypress Semiconductor Corporation
cznic CZ.NIC, z.s.p.o.
@@ -3229,6 +3229,15 @@ T: git git://git.alsa-project.org/alsa-kernel.git
S: Maintained
F: sound/pci/oxygen/
C-SKY ARCHITECTURE
M: Guo Ren <ren_guo@c-sky.com>
T: git https://github.com/c-sky/csky-linux.git
S: Supported
F: arch/csky/
F: Documentation/devicetree/bindings/csky/
K: csky
N: csky
C6X ARCHITECTURE
M: Mark Salter <msalter@redhat.com>
M: Aurelien Jacquiot <jacquiot.aurelien@gmail.com>
config CSKY
def_bool y
select ARCH_HAS_SYNC_DMA_FOR_CPU
select ARCH_HAS_SYNC_DMA_FOR_DEVICE
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_QUEUED_RWLOCKS if NR_CPUS>2
select COMMON_CLK
select CLKSRC_MMIO
select CLKSRC_OF
select DMA_DIRECT_OPS
select DMA_NONCOHERENT_OPS
select IRQ_DOMAIN
select HANDLE_DOMAIN_IRQ
select DW_APB_TIMER_OF
select GENERIC_LIB_ASHLDI3
select GENERIC_LIB_ASHRDI3
select GENERIC_LIB_LSHRDI3
select GENERIC_LIB_MULDI3
select GENERIC_LIB_CMPDI2
select GENERIC_LIB_UCMPDI2
select GENERIC_ALLOCATOR
select GENERIC_ATOMIC64
select GENERIC_CLOCKEVENTS
select GENERIC_CPU_DEVICES
select GENERIC_IRQ_CHIP
select GENERIC_IRQ_PROBE
select GENERIC_IRQ_SHOW
select GENERIC_IRQ_MULTI_HANDLER
select GENERIC_SCHED_CLOCK
select GENERIC_SMP_IDLE_THREAD
select HAVE_ARCH_TRACEHOOK
select HAVE_GENERIC_DMA_COHERENT
select HAVE_KERNEL_GZIP
select HAVE_KERNEL_LZO
select HAVE_KERNEL_LZMA
select HAVE_C_RECORDMCOUNT
select HAVE_DMA_API_DEBUG
select HAVE_DMA_CONTIGUOUS
select HAVE_MEMBLOCK
select MAY_HAVE_SPARSE_IRQ
select MODULES_USE_ELF_RELA if MODULES
select NO_BOOTMEM
select OF
select OF_EARLY_FLATTREE
select OF_RESERVED_MEM
select PERF_USE_VMALLOC
select RTC_LIB
select TIMER_OF
select USB_ARCH_HAS_EHCI
select USB_ARCH_HAS_OHCI
config CPU_HAS_CACHEV2
bool
config CPU_HAS_FPUV2
bool
config CPU_HAS_HILO
bool
config CPU_HAS_TLBI
bool
config CPU_HAS_LDSTEX
bool
help
For SMP, the CPU needs the "ldex" and "stex" instructions for atomic operations.
config CPU_NEED_TLBSYNC
bool
config CPU_NEED_SOFTALIGN
bool
config CPU_NO_USER_BKPT
bool
help
For abiv2 we cannot use "trap 1" as the user space breakpoint in gdbserver,
because abiv2 is a 16/32-bit instruction set and "trap 1" is a 32-bit
instruction. So we need a 16-bit instruction as the user space breakpoint,
one that raises an illegal instruction exception.
In the kernel we parse the instruction at *regs->pc to determine whether to send SIGTRAP or not.
config GENERIC_CALIBRATE_DELAY
def_bool y
config GENERIC_CSUM
def_bool y
config GENERIC_HWEIGHT
def_bool y
config MMU
def_bool y
config RWSEM_GENERIC_SPINLOCK
def_bool y
config TIME_LOW_RES
def_bool y
config TRACE_IRQFLAGS_SUPPORT
def_bool y
config CPU_TLB_SIZE
int
default "128" if (CPU_CK610 || CPU_CK807 || CPU_CK810)
default "1024" if (CPU_CK860)
config CPU_ASID_BITS
int
default "8" if (CPU_CK610 || CPU_CK807 || CPU_CK810)
default "12" if (CPU_CK860)
config L1_CACHE_SHIFT
int
default "4" if (CPU_CK610)
default "5" if (CPU_CK807 || CPU_CK810)
default "6" if (CPU_CK860)
menu "Processor type and features"
choice
prompt "CPU MODEL"
default CPU_CK807
config CPU_CK610
bool "CSKY CPU ck610"
select CPU_NEED_TLBSYNC
select CPU_NEED_SOFTALIGN
select CPU_NO_USER_BKPT
config CPU_CK810
bool "CSKY CPU ck810"
select CPU_HAS_HILO
select CPU_NEED_TLBSYNC
config CPU_CK807
bool "CSKY CPU ck807"
select CPU_HAS_HILO
config CPU_CK860
bool "CSKY CPU ck860"
select CPU_HAS_TLBI
select CPU_HAS_CACHEV2
select CPU_HAS_LDSTEX
select CPU_HAS_FPUV2
endchoice
choice
prompt "Power Manager Instruction (wait/doze/stop)"
default CPU_PM_NONE
config CPU_PM_NONE
bool "None"
config CPU_PM_WAIT
bool "wait"
config CPU_PM_DOZE
bool "doze"
config CPU_PM_STOP
bool "stop"
endchoice
config CPU_HAS_VDSP
bool "CPU has VDSP coprocessor"
depends on CPU_HAS_FPU && CPU_HAS_FPUV2
config CPU_HAS_FPU
bool "CPU has FPU coprocessor"
depends on CPU_CK807 || CPU_CK810 || CPU_CK860
config CPU_HAS_TEE
bool "CPU has Trusted Execution Environment"
depends on CPU_CK810
config SMP
bool "Symmetric Multi-Processing (SMP) support for C-SKY"
depends on CPU_CK860
default n
config NR_CPUS
int "Maximum number of CPUs (2-32)"
range 2 32
depends on SMP
default "2"
config HIGHMEM
bool "High Memory Support"
depends on !CPU_CK610
default y
config FORCE_MAX_ZONEORDER
int "Maximum zone order"
default "11"
config RAM_BASE
hex "DRAM start addr (the same with memory-section in dts)"
default 0x0
endmenu
source "kernel/Kconfig.hz"
menu "C-SKY Debug Options"
config CSKY_BUILTIN_DTB
string "Use kernel builtin dtb"
help
The user can specify a dtb to be built into the kernel instead of the
one passed from the bootloader.
Sometimes, for debugging, we want a built-in dtb so that the bootloader
need not be modified at all.
endmenu
OBJCOPYFLAGS :=-O binary
GZFLAGS :=-9
KBUILD_DEFCONFIG := defconfig
ifdef CONFIG_CPU_HAS_FPU
FPUEXT = f
endif
ifdef CONFIG_CPU_HAS_VDSP
VDSPEXT = v
endif
ifdef CONFIG_CPU_HAS_TEE
TEEEXT = t
endif
ifdef CONFIG_CPU_CK610
CPUTYPE = ck610
CSKYABI = abiv1
endif
ifdef CONFIG_CPU_CK810
CPUTYPE = ck810
CSKYABI = abiv2
endif
ifdef CONFIG_CPU_CK807
CPUTYPE = ck807
CSKYABI = abiv2
endif
ifdef CONFIG_CPU_CK860
CPUTYPE = ck860
CSKYABI = abiv2
endif
ifneq ($(CSKYABI),)
MCPU_STR = $(CPUTYPE)$(FPUEXT)$(VDSPEXT)$(TEEEXT)
KBUILD_CFLAGS += -mcpu=$(MCPU_STR)
KBUILD_CFLAGS += -DCSKYCPU_DEF_NAME=\"$(MCPU_STR)\"
KBUILD_CFLAGS += -msoft-float -mdiv
KBUILD_CFLAGS += -fno-tree-vectorize
endif
KBUILD_CFLAGS += -pipe
ifeq ($(CSKYABI),abiv2)
KBUILD_CFLAGS += -mno-stack-size
endif
abidirs := $(patsubst %,arch/csky/%/,$(CSKYABI))
KBUILD_CFLAGS += $(patsubst %,-I$(srctree)/%inc,$(abidirs))
KBUILD_CPPFLAGS += -mlittle-endian
LDFLAGS += -EL
KBUILD_AFLAGS += $(KBUILD_CFLAGS)
head-y := arch/csky/kernel/head.o
core-y += arch/csky/kernel/
core-y += arch/csky/mm/
core-y += arch/csky/$(CSKYABI)/
libs-y += arch/csky/lib/ \
$(shell $(CC) $(KBUILD_CFLAGS) $(KCFLAGS) -print-libgcc-file-name)
boot := arch/csky/boot
ifneq '$(CONFIG_CSKY_BUILTIN_DTB)' '""'
core-y += $(boot)/dts/
endif
all: zImage
dtbs: scripts
$(Q)$(MAKE) $(build)=$(boot)/dts
%.dtb %.dtb.S %.dtb.o: scripts
$(Q)$(MAKE) $(build)=$(boot)/dts $(boot)/dts/$@
zImage Image uImage: vmlinux dtbs
$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
archclean:
$(Q)$(MAKE) $(clean)=$(boot)
$(Q)$(MAKE) $(clean)=$(boot)/dts
rm -rf arch/csky/include/generated
define archhelp
echo '* zImage - Compressed kernel image (arch/$(ARCH)/boot/zImage)'
echo ' Image - Uncompressed kernel image (arch/$(ARCH)/boot/Image)'
echo ' uImage - U-Boot wrapped zImage'
endef
obj-$(CONFIG_CPU_NEED_SOFTALIGN) += alignment.o
obj-y += bswapdi.o
obj-y += bswapsi.o
obj-y += cacheflush.o
obj-y += mmap.o
obj-y += memcpy.o
obj-y += memset.o
obj-y += strksyms.o
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#include <linux/kernel.h>
#include <linux/uaccess.h>
#include <linux/ptrace.h>
static int align_enable = 1;
static int align_count;
/*
* abiv1 maps a0-a3 onto r2-r5, so &regs->a0 - 2 + rx indexes the saved
* general register rx; r15 is kept in regs->lr.
*/
static inline uint32_t get_ptreg(struct pt_regs *regs, uint32_t rx)
{
return rx == 15 ? regs->lr : *((uint32_t *)&(regs->a0) - 2 + rx);
}
static inline void put_ptreg(struct pt_regs *regs, uint32_t rx, uint32_t val)
{
if (rx == 15)
regs->lr = val;
else
*((uint32_t *)&(regs->a0) - 2 + rx) = val;
}
/*
* Get byte-value from addr and set it to *valp.
*
* The load at label 1: is registered in __ex_table with the fixup at
* label 2:, so a faulting user access returns failure instead of
* oopsing.
*
* Success: return 0
* Failure: return 1
*/
static int ldb_asm(uint32_t addr, uint32_t *valp)
{
uint32_t val;
int err;
if (!access_ok(VERIFY_READ, (void *)addr, 1))
return 1;
asm volatile (
"movi %0, 0\n"
"1:\n"
"ldb %1, (%2)\n"
"br 3f\n"
"2:\n"
"movi %0, 1\n"
"br 3f\n"
".section __ex_table,\"a\"\n"
".align 2\n"
".long 1b, 2b\n"
".previous\n"
"3:\n"
: "=&r"(err), "=r"(val)
: "r" (addr)
);
*valp = val;
return err;
}
/*
* Put byte-value to addr.
*
* Success: return 0
* Failure: return 1
*/
static int stb_asm(uint32_t addr, uint32_t val)
{
int err;
if (!access_ok(VERIFY_WRITE, (void *)addr, 1))
return 1;
asm volatile (
"movi %0, 0\n"
"1:\n"
"stb %1, (%2)\n"
"br 3f\n"
"2:\n"
"movi %0, 1\n"
"br 3f\n"
".section __ex_table,\"a\"\n"
".align 2\n"
".long 1b, 2b\n"
".previous\n"
"3:\n"
: "=&r"(err)
: "r"(val), "r" (addr)
);
return err;
}
/*
* Get half-word from [rx + imm]
*
* Success: return 0
* Failure: return 1
*/
static int ldh_c(struct pt_regs *regs, uint32_t rz, uint32_t addr)
{
uint32_t byte0, byte1;
if (ldb_asm(addr, &byte0))
return 1;
addr += 1;
if (ldb_asm(addr, &byte1))
return 1;
byte0 |= byte1 << 8;
put_ptreg(regs, rz, byte0);
return 0;
}
/*
* Store half-word to [rx + imm]
*
* Success: return 0
* Failure: return 1
*/
static int sth_c(struct pt_regs *regs, uint32_t rz, uint32_t addr)
{
uint32_t byte0, byte1;
byte0 = byte1 = get_ptreg(regs, rz);
byte0 &= 0xff;
if (stb_asm(addr, byte0))
return 1;
addr += 1;
byte1 = (byte1 >> 8) & 0xff;
if (stb_asm(addr, byte1))
return 1;
return 0;
}
/*
* Get word from [rx + imm]
*
* Success: return 0
* Failure: return 1
*/
static int ldw_c(struct pt_regs *regs, uint32_t rz, uint32_t addr)
{
uint32_t byte0, byte1, byte2, byte3;
if (ldb_asm(addr, &byte0))
return 1;
addr += 1;
if (ldb_asm(addr, &byte1))
return 1;
addr += 1;
if (ldb_asm(addr, &byte2))
return 1;
addr += 1;
if (ldb_asm(addr, &byte3))
return 1;
byte0 |= byte1 << 8;
byte0 |= byte2 << 16;
byte0 |= byte3 << 24;
put_ptreg(regs, rz, byte0);
return 0;
}
/*
* Store word to [rx + imm]
*
* Success: return 0
* Failure: return 1
*/
static int stw_c(struct pt_regs *regs, uint32_t rz, uint32_t addr)
{
uint32_t byte0, byte1, byte2, byte3;
byte0 = byte1 = byte2 = byte3 = get_ptreg(regs, rz);
byte0 &= 0xff;
if (stb_asm(addr, byte0))
return 1;
addr += 1;
byte1 = (byte1 >> 8) & 0xff;
if (stb_asm(addr, byte1))
return 1;
addr += 1;
byte2 = (byte2 >> 16) & 0xff;
if (stb_asm(addr, byte2))
return 1;
addr += 1;
byte3 = (byte3 >> 24) & 0xff;
if (stb_asm(addr, byte3))
return 1;
align_count++;
return 0;
}
extern int fixup_exception(struct pt_regs *regs);
#define OP_LDH 0xc000
#define OP_STH 0xd000
#define OP_LDW 0x8000
#define OP_STW 0x9000
void csky_alignment(struct pt_regs *regs)
{
int ret;
uint16_t tmp;
uint32_t opcode = 0;
uint32_t rx = 0;
uint32_t rz = 0;
uint32_t imm = 0;
uint32_t addr = 0;
if (!user_mode(regs))
goto bad_area;
ret = get_user(tmp, (uint16_t *)instruction_pointer(regs));
if (ret) {
pr_err("%s get_user failed.\n", __func__);
goto bad_area;
}
opcode = (uint32_t)tmp;
rx = opcode & 0xf;
imm = (opcode >> 4) & 0xf;
rz = (opcode >> 8) & 0xf;
opcode &= 0xf000;
if (rx == 0 || rx == 1 || rz == 0 || rz == 1)
goto bad_area;
switch (opcode) {
case OP_LDH:
addr = get_ptreg(regs, rx) + (imm << 1);
ret = ldh_c(regs, rz, addr);
break;
case OP_LDW:
addr = get_ptreg(regs, rx) + (imm << 2);
ret = ldw_c(regs, rz, addr);
break;
case OP_STH:
addr = get_ptreg(regs, rx) + (imm << 1);
ret = sth_c(regs, rz, addr);
break;
case OP_STW:
addr = get_ptreg(regs, rx) + (imm << 2);
ret = stw_c(regs, rz, addr);
break;
default:
goto bad_area;
}
if (ret)
goto bad_area;
regs->pc += 2;
return;
bad_area:
if (!user_mode(regs)) {
if (fixup_exception(regs))
return;
bust_spinlocks(1);
pr_alert("%s opcode: %x, rz: %d, rx: %d, imm: %d, addr: %x.\n",
__func__, opcode, rz, rx, imm, addr);
show_regs(regs);
bust_spinlocks(0);
do_exit(SIGKILL);
}
force_sig_fault(SIGBUS, BUS_ADRALN, (void __user *)addr, current);
}
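/*
* Worked example of the decode above (the instruction word is
* illustrative; the field split is taken from csky_alignment()):
*
* abiv1 16-bit ld/st: [15:12] opcode, [11:8] rz, [7:4] imm, [3:0] rx
*
* tmp = 0xc532 -> opcode = OP_LDH, rz = 5, imm = 3, rx = 2, i.e. a
* half-word load from r2 + (3 << 1) into r5; pc advances by 2 on success.
*/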
static struct ctl_table alignment_tbl[4] = {
{
.procname = "enable",
.data = &align_enable,
.maxlen = sizeof(align_enable),
.mode = 0666,
.proc_handler = &proc_dointvec
},
{
.procname = "count",
.data = &align_count,
.maxlen = sizeof(align_count),
.mode = 0666,
.proc_handler = &proc_dointvec
},
{}
};
static struct ctl_table sysctl_table[2] = {
{
.procname = "csky_alignment",
.mode = 0555,
.child = alignment_tbl},
{}
};
static struct ctl_path sysctl_path[2] = {
{.procname = "csky"},
{}
};
static int __init csky_alignment_init(void)
{
register_sysctl_paths(sysctl_path, sysctl_table);
return 0;
}
arch_initcall(csky_alignment_init);
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#include <linux/export.h>
#include <linux/compiler.h>
#include <uapi/linux/swab.h>
unsigned long long notrace __bswapdi2(unsigned long long u)
{
return ___constant_swab64(u);
}
EXPORT_SYMBOL(__bswapdi2);
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#include <linux/export.h>
#include <linux/compiler.h>
#include <uapi/linux/swab.h>
unsigned int notrace __bswapsi2(unsigned int u)
{
return ___constant_swab32(u);
}
EXPORT_SYMBOL(__bswapsi2);
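/*
* __bswapsi2/__bswapdi2 above are the standard libgcc helper names; the
* compiler may emit calls to them when it cannot inline a byte swap.
* A quick sketch of the transformation (values illustrative):
*
* __bswapsi2(0x11223344) == 0x44332211
* __bswapdi2(0x1122334455667788ULL) == 0x8877665544332211ULL
*/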
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/fs.h>
#include <linux/syscalls.h>
#include <linux/spinlock.h>
#include <asm/page.h>
#include <asm/cache.h>
#include <asm/cacheflush.h>
#include <asm/cachectl.h>
void flush_dcache_page(struct page *page)
{
struct address_space *mapping = page_mapping(page);
unsigned long addr;
if (mapping && !mapping_mapped(mapping)) {
set_bit(PG_arch_1, &(page)->flags);
return;
}
/*
* We could delay the flush for the !page_mapping case too. But that
* case is for exec env/arg pages and those are 99% certain to
* get faulted into the tlb (and thus flushed) anyway.
*/
addr = (unsigned long) page_address(page);
dcache_wb_range(addr, addr + PAGE_SIZE);
}
void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
pte_t *pte)
{
unsigned long addr;
struct page *page;
unsigned long pfn;
pfn = pte_pfn(*pte);
if (unlikely(!pfn_valid(pfn)))
return;
page = pfn_to_page(pfn);
addr = (unsigned long) page_address(page);
if (vma->vm_flags & VM_EXEC ||
pages_do_alias(addr, address & PAGE_MASK))
cache_wbinv_all();
clear_bit(PG_arch_1, &(page)->flags);
}
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ABI_CSKY_CACHEFLUSH_H
#define __ABI_CSKY_CACHEFLUSH_H
#include <linux/compiler.h>
#include <asm/string.h>
#include <asm/cache.h>
#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
extern void flush_dcache_page(struct page *);
#define flush_cache_mm(mm) cache_wbinv_all()
#define flush_cache_page(vma, page, pfn) cache_wbinv_all()
#define flush_cache_dup_mm(mm) cache_wbinv_all()
/*
* cache_wbinv_range(start, end) would be broken if current's mm is not
* vma->vm_mm, so use cache_wbinv_all() here; this should be improved in
* the future.
*/
#define flush_cache_range(vma, start, end) cache_wbinv_all()
#define flush_cache_vmap(start, end) cache_wbinv_range(start, end)
#define flush_cache_vunmap(start, end) cache_wbinv_range(start, end)
#define flush_icache_page(vma, page) cache_wbinv_all()
#define flush_icache_range(start, end) cache_wbinv_range(start, end)
#define flush_icache_user_range(vma, pg, adr, len) \
cache_wbinv_range(adr, adr + len)
#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
do { \
cache_wbinv_all(); \
memcpy(dst, src, len); \
cache_wbinv_all(); \
} while (0)
#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
do { \
cache_wbinv_all(); \
memcpy(dst, src, len); \
cache_wbinv_all(); \
} while (0)
#define flush_dcache_mmap_lock(mapping) do {} while (0)
#define flush_dcache_mmap_unlock(mapping) do {} while (0)
#endif /* __ABI_CSKY_CACHEFLUSH_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_CKMMUV1_H
#define __ASM_CSKY_CKMMUV1_H
#include <abi/reg_ops.h>
static inline int read_mmu_index(void)
{
return cprcr("cpcr0");
}
static inline void write_mmu_index(int value)
{
cpwcr("cpcr0", value);
}
static inline int read_mmu_entrylo0(void)
{
return cprcr("cpcr2") << 6;
}
static inline int read_mmu_entrylo1(void)
{
return cprcr("cpcr3") << 6;
}
static inline void write_mmu_pagemask(int value)
{
cpwcr("cpcr6", value);
}
static inline int read_mmu_entryhi(void)
{
return cprcr("cpcr4");
}
static inline void write_mmu_entryhi(int value)
{
cpwcr("cpcr4", value);
}
/*
* TLB operations.
*/
static inline void tlb_probe(void)
{
cpwcr("cpcr8", 0x80000000);
}
static inline void tlb_read(void)
{
cpwcr("cpcr8", 0x40000000);
}
static inline void tlb_invalid_all(void)
{
cpwcr("cpcr8", 0x04000000);
}
static inline void tlb_invalid_indexed(void)
{
cpwcr("cpcr8", 0x02000000);
}
static inline void setup_pgd(unsigned long pgd, bool kernel)
{
cpwcr("cpcr29", pgd);
}
static inline unsigned long get_pgd(void)
{
return cprcr("cpcr29");
}
#endif /* __ASM_CSKY_CKMMUV1_H */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ABI_CSKY_ELF_H
#define __ABI_CSKY_ELF_H
#define ELF_CORE_COPY_REGS(pr_reg, regs) do { \
pr_reg[0] = regs->pc; \
pr_reg[1] = regs->regs[9]; \
pr_reg[2] = regs->usp; \
pr_reg[3] = regs->sr; \
pr_reg[4] = regs->a0; \
pr_reg[5] = regs->a1; \
pr_reg[6] = regs->a2; \
pr_reg[7] = regs->a3; \
pr_reg[8] = regs->regs[0]; \
pr_reg[9] = regs->regs[1]; \
pr_reg[10] = regs->regs[2]; \
pr_reg[11] = regs->regs[3]; \
pr_reg[12] = regs->regs[4]; \
pr_reg[13] = regs->regs[5]; \
pr_reg[14] = regs->regs[6]; \
pr_reg[15] = regs->regs[7]; \
pr_reg[16] = regs->regs[8]; \
pr_reg[17] = regs->lr; \
} while (0);
#endif /* __ABI_CSKY_ELF_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_ENTRY_H
#define __ASM_CSKY_ENTRY_H
#include <asm/setup.h>
#include <abi/regdef.h>
#define LSAVE_PC 8
#define LSAVE_PSR 12
#define LSAVE_A0 24
#define LSAVE_A1 28
#define LSAVE_A2 32
#define LSAVE_A3 36
#define LSAVE_A4 40
#define LSAVE_A5 44
#define EPC_INCREASE 2
#define EPC_KEEP 0
.macro USPTOKSP
mtcr sp, ss1
mfcr sp, ss0
.endm
.macro KSPTOUSP
mtcr sp, ss0
mfcr sp, ss1
.endm
.macro INCTRAP rx
addi \rx, EPC_INCREASE
.endm
.macro SAVE_ALL epc_inc
mtcr r13, ss2
mfcr r13, epsr
btsti r13, 31
bt 1f
USPTOKSP
1:
subi sp, 32
subi sp, 32
subi sp, 16
stw r13, (sp, 12)
stw lr, (sp, 4)
mfcr lr, epc
movi r13, \epc_inc
add lr, r13
stw lr, (sp, 8)
mfcr lr, ss1
stw lr, (sp, 16)
stw a0, (sp, 20)
stw a0, (sp, 24)
stw a1, (sp, 28)
stw a2, (sp, 32)
stw a3, (sp, 36)
addi sp, 32
addi sp, 8
mfcr r13, ss2
stw r6, (sp)
stw r7, (sp, 4)
stw r8, (sp, 8)
stw r9, (sp, 12)
stw r10, (sp, 16)
stw r11, (sp, 20)
stw r12, (sp, 24)
stw r13, (sp, 28)
stw r14, (sp, 32)
stw r1, (sp, 36)
subi sp, 32
subi sp, 8
.endm
.macro RESTORE_ALL
psrclr ie
ldw lr, (sp, 4)
ldw a0, (sp, 8)
mtcr a0, epc
ldw a0, (sp, 12)
mtcr a0, epsr
btsti a0, 31
ldw a0, (sp, 16)
mtcr a0, ss1
ldw a0, (sp, 24)
ldw a1, (sp, 28)
ldw a2, (sp, 32)
ldw a3, (sp, 36)
addi sp, 32
addi sp, 8
ldw r6, (sp)
ldw r7, (sp, 4)
ldw r8, (sp, 8)
ldw r9, (sp, 12)
ldw r10, (sp, 16)
ldw r11, (sp, 20)
ldw r12, (sp, 24)
ldw r13, (sp, 28)
ldw r14, (sp, 32)
ldw r1, (sp, 36)
addi sp, 32
addi sp, 8
bt 1f
KSPTOUSP
1:
rte
.endm
.macro SAVE_SWITCH_STACK
subi sp, 32
stm r8-r15, (sp)
.endm
.macro RESTORE_SWITCH_STACK
ldm r8-r15, (sp)
addi sp, 32
.endm
/* MMU registers operators. */
.macro RD_MIR rx
cprcr \rx, cpcr0
.endm
.macro RD_MEH rx
cprcr \rx, cpcr4
.endm
.macro RD_MCIR rx
cprcr \rx, cpcr8
.endm
.macro RD_PGDR rx
cprcr \rx, cpcr29
.endm
.macro WR_MEH rx
cpwcr \rx, cpcr4
.endm
.macro WR_MCIR rx
cpwcr \rx, cpcr8
.endm
.macro SETUP_MMU rx
lrw \rx, PHYS_OFFSET | 0xe
cpwcr \rx, cpcr30
lrw \rx, (PHYS_OFFSET + 0x20000000) | 0xe
cpwcr \rx, cpcr31
.endm
#endif /* __ASM_CSKY_ENTRY_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
extern unsigned long shm_align_mask;
extern void flush_dcache_page(struct page *page);
static inline unsigned long pages_do_alias(unsigned long addr1,
unsigned long addr2)
{
return (addr1 ^ addr2) & shm_align_mask;
}
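/*
* Sketch, assuming the default shm_align_mask of 0x1fff set in
* abiv1/mmap.c (two 4 KiB cache colours):
*
* pages_do_alias(0x10001000, 0x20003000) == 0 (same colour, safe)
* pages_do_alias(0x10001000, 0x10002000) != 0 (colour conflict, flush needed)
*/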
static inline void clear_user_page(void *addr, unsigned long vaddr,
struct page *page)
{
clear_page(addr);
if (pages_do_alias((unsigned long) addr, vaddr & PAGE_MASK))
flush_dcache_page(page);
}
static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
struct page *page)
{
copy_page(to, from);
if (pages_do_alias((unsigned long) to, vaddr & PAGE_MASK))
flush_dcache_page(page);
}
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_PGTABLE_BITS_H
#define __ASM_CSKY_PGTABLE_BITS_H
/* implemented in software */
#define _PAGE_ACCESSED (1<<3)
#define PAGE_ACCESSED_BIT (3)
#define _PAGE_READ (1<<1)
#define _PAGE_WRITE (1<<2)
#define _PAGE_PRESENT (1<<0)
#define _PAGE_MODIFIED (1<<4)
#define PAGE_MODIFIED_BIT (4)
/* implemented in hardware */
#define _PAGE_GLOBAL (1<<6)
#define _PAGE_VALID (1<<7)
#define PAGE_VALID_BIT (7)
#define _PAGE_DIRTY (1<<8)
#define PAGE_DIRTY_BIT (8)
#define _PAGE_CACHE (3<<9)
#define _PAGE_UNCACHE (2<<9)
#define _CACHE_MASK (7<<9)
#define _CACHE_CACHED (_PAGE_VALID | _PAGE_CACHE)
#define _CACHE_UNCACHED (_PAGE_VALID | _PAGE_UNCACHE)
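/*
* The composite cache attributes above work out to (arithmetic on this
* header's definitions):
*
* _CACHE_CACHED = (1<<7) | (3<<9) = 0x0680
* _CACHE_UNCACHED = (1<<7) | (2<<9) = 0x0480
*/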
#define HAVE_ARCH_UNMAPPED_AREA
#endif /* __ASM_CSKY_PGTABLE_BITS_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ABI_REG_OPS_H
#define __ABI_REG_OPS_H
#include <asm/reg_ops.h>
#define cprcr(reg) \
({ \
unsigned int tmp; \
asm volatile("cprcr %0, "reg"\n":"=b"(tmp)); \
tmp; \
})
#define cpwcr(reg, val) \
({ \
asm volatile("cpwcr %0, "reg"\n"::"b"(val)); \
})
static inline unsigned int mfcr_hint(void)
{
return mfcr("cr30");
}
static inline unsigned int mfcr_ccr2(void) { return 0; }
#endif /* __ABI_REG_OPS_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_REGDEF_H
#define __ASM_CSKY_REGDEF_H
#define syscallid r1
#define r11_sig r11
#define regs_syscallid(regs) regs->regs[9]
/*
* PSR format:
* | 31 | 30-24 | 23-16 | 15 14 | 13-0 |
*    S    CPID     VEC     TM
*
* S: Super Mode
* CPID: Coprocessor id, only 15 for MMU
* VEC: Exception Number
* TM: Trace Mode
*/
#define DEFAULT_PSR_VALUE 0x8f000000
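/*
* Decoding DEFAULT_PSR_VALUE (0x8f000000) against the format above:
* S = 1 (super mode), CPID = 0xf (coprocessor 15, the MMU),
* VEC = 0, TM = 0.
*/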
#define SYSTRACE_SAVENUM 2
#endif /* __ASM_CSKY_REGDEF_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ABI_CSKY_STRING_H
#define __ABI_CSKY_STRING_H
#define __HAVE_ARCH_MEMCPY
extern void *memcpy(void *, const void *, __kernel_size_t);
#define __HAVE_ARCH_MEMSET
extern void *memset(void *, int, __kernel_size_t);
#endif /* __ABI_CSKY_STRING_H */
/* SPDX-License-Identifier: GPL-2.0 */
#include <linux/uaccess.h>
static inline int setup_vdso_page(unsigned short *ptr)
{
int err = 0;
/* movi r1, 127 */
err |= __put_user(0x67f1, ptr + 0);
/* addi r1, (139 - 127): r1 = 139 = __NR_rt_sigreturn */
err |= __put_user(0x20b1, ptr + 1);
/* trap 0 */
err |= __put_user(0x0008, ptr + 2);
return err;
}
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#include <linux/linkage.h>
.macro GET_FRONT_BITS rx y
#ifdef __cskyLE__
lsri \rx, \y
#else
lsli \rx, \y
#endif
.endm
.macro GET_AFTER_BITS rx y
#ifdef __cskyLE__
lsli \rx, \y
#else
lsri \rx, \y
#endif
.endm
/* void *memcpy(void *dest, const void *src, size_t n); */
ENTRY(memcpy)
mov r7, r2
cmplti r4, 4
bt .L_copy_by_byte
mov r6, r2
andi r6, 3
cmpnei r6, 0
jbt .L_dest_not_aligned
mov r6, r3
andi r6, 3
cmpnei r6, 0
jbt .L_dest_aligned_but_src_not_aligned
.L0:
cmplti r4, 16
jbt .L_aligned_and_len_less_16bytes
subi sp, 8
stw r8, (sp, 0)
.L_aligned_and_len_larger_16bytes:
ldw r1, (r3, 0)
ldw r5, (r3, 4)
ldw r8, (r3, 8)
stw r1, (r7, 0)
ldw r1, (r3, 12)
stw r5, (r7, 4)
stw r8, (r7, 8)
stw r1, (r7, 12)
subi r4, 16
addi r3, 16
addi r7, 16
cmplti r4, 16
jbf .L_aligned_and_len_larger_16bytes
ldw r8, (sp, 0)
addi sp, 8
cmpnei r4, 0
jbf .L_return
.L_aligned_and_len_less_16bytes:
cmplti r4, 4
bt .L_copy_by_byte
.L1:
ldw r1, (r3, 0)
stw r1, (r7, 0)
subi r4, 4
addi r3, 4
addi r7, 4
cmplti r4, 4
jbf .L1
br .L_copy_by_byte
.L_return:
rts
.L_copy_by_byte: /* len less than 4 bytes */
cmpnei r4, 0
jbf .L_return
.L4:
ldb r1, (r3, 0)
stb r1, (r7, 0)
addi r3, 1
addi r7, 1
decne r4
jbt .L4
rts
/*
* If dest is not aligned, just copying some bytes makes the dest align.
* After that, we check whether the src is aligned.
*/
.L_dest_not_aligned:
mov r5, r3
rsub r5, r5, r7
abs r5, r5
cmplt r5, r4
bt .L_copy_by_byte
mov r5, r7
sub r5, r3
cmphs r5, r4
bf .L_copy_by_byte
mov r5, r6
.L5:
ldb r1, (r3, 0) /* makes the dest align. */
stb r1, (r7, 0)
addi r5, 1
subi r4, 1
addi r3, 1
addi r7, 1
cmpnei r5, 4
jbt .L5
cmplti r4, 4
jbt .L_copy_by_byte
mov r6, r3 /* judge whether the src is aligned. */
andi r6, 3
cmpnei r6, 0
jbf .L0
/* Determine the misalignment: 1, 2 or 3 bytes? */
.L_dest_aligned_but_src_not_aligned:
mov r5, r3
rsub r5, r5, r7
abs r5, r5
cmplt r5, r4
bt .L_copy_by_byte
bclri r3, 0
bclri r3, 1
ldw r1, (r3, 0)
addi r3, 4
cmpnei r6, 2
bf .L_dest_aligned_but_src_not_aligned_2bytes
cmpnei r6, 3
bf .L_dest_aligned_but_src_not_aligned_3bytes
.L_dest_aligned_but_src_not_aligned_1byte:
mov r5, r7
sub r5, r3
cmphs r5, r4
bf .L_copy_by_byte
cmplti r4, 16
bf .L11
.L10: /* If the len is less than 16 bytes */
GET_FRONT_BITS r1 8
mov r5, r1
ldw r6, (r3, 0)
mov r1, r6
GET_AFTER_BITS r6 24
or r5, r6
stw r5, (r7, 0)
subi r4, 4
addi r3, 4
addi r7, 4
cmplti r4, 4
bf .L10
subi r3, 3
br .L_copy_by_byte
.L11:
subi sp, 16
stw r8, (sp, 0)
stw r9, (sp, 4)
stw r10, (sp, 8)
stw r11, (sp, 12)
.L12:
ldw r5, (r3, 0)
ldw r11, (r3, 4)
ldw r8, (r3, 8)
ldw r9, (r3, 12)
GET_FRONT_BITS r1 8 /* little or big endian? */
mov r10, r5
GET_AFTER_BITS r5 24
or r5, r1
GET_FRONT_BITS r10 8
mov r1, r11
GET_AFTER_BITS r11 24
or r11, r10
GET_FRONT_BITS r1 8
mov r10, r8
GET_AFTER_BITS r8 24
or r8, r1
GET_FRONT_BITS r10 8
mov r1, r9
GET_AFTER_BITS r9 24
or r9, r10
stw r5, (r7, 0)
stw r11, (r7, 4)
stw r8, (r7, 8)
stw r9, (r7, 12)
subi r4, 16
addi r3, 16
addi r7, 16
cmplti r4, 16
jbf .L12
ldw r8, (sp, 0)
ldw r9, (sp, 4)
ldw r10, (sp, 8)
ldw r11, (sp, 12)
addi sp , 16
cmplti r4, 4
bf .L10
subi r3, 3
br .L_copy_by_byte
.L_dest_aligned_but_src_not_aligned_2bytes:
cmplti r4, 16
bf .L21
.L20:
GET_FRONT_BITS r1 16
mov r5, r1
ldw r6, (r3, 0)
mov r1, r6
GET_AFTER_BITS r6 16
or r5, r6
stw r5, (r7, 0)
subi r4, 4
addi r3, 4
addi r7, 4
cmplti r4, 4
bf .L20
subi r3, 2
br .L_copy_by_byte
rts
.L21: /* n > 16 */
subi sp, 16
stw r8, (sp, 0)
stw r9, (sp, 4)
stw r10, (sp, 8)
stw r11, (sp, 12)
.L22:
ldw r5, (r3, 0)
ldw r11, (r3, 4)
ldw r8, (r3, 8)
ldw r9, (r3, 12)
GET_FRONT_BITS r1 16
mov r10, r5
GET_AFTER_BITS r5 16
or r5, r1
GET_FRONT_BITS r10 16
mov r1, r11
GET_AFTER_BITS r11 16
or r11, r10
GET_FRONT_BITS r1 16
mov r10, r8
GET_AFTER_BITS r8 16
or r8, r1
GET_FRONT_BITS r10 16
mov r1, r9
GET_AFTER_BITS r9 16
or r9, r10
stw r5, (r7, 0)
stw r11, (r7, 4)
stw r8, (r7, 8)
stw r9, (r7, 12)
subi r4, 16
addi r3, 16
addi r7, 16
cmplti r4, 16
jbf .L22
ldw r8, (sp, 0)
ldw r9, (sp, 4)
ldw r10, (sp, 8)
ldw r11, (sp, 12)
addi sp, 16
cmplti r4, 4
bf .L20
subi r3, 2
br .L_copy_by_byte
.L_dest_aligned_but_src_not_aligned_3bytes:
cmplti r4, 16
bf .L31
.L30:
GET_FRONT_BITS r1 24
mov r5, r1
ldw r6, (r3, 0)
mov r1, r6
GET_AFTER_BITS r6 8
or r5, r6
stw r5, (r7, 0)
subi r4, 4
addi r3, 4
addi r7, 4
cmplti r4, 4
bf .L30
subi r3, 1
br .L_copy_by_byte
.L31:
subi sp, 16
stw r8, (sp, 0)
stw r9, (sp, 4)
stw r10, (sp, 8)
stw r11, (sp, 12)
.L32:
ldw r5, (r3, 0)
ldw r11, (r3, 4)
ldw r8, (r3, 8)
ldw r9, (r3, 12)
GET_FRONT_BITS r1 24
mov r10, r5
GET_AFTER_BITS r5 8
or r5, r1
GET_FRONT_BITS r10 24
mov r1, r11
GET_AFTER_BITS r11 8
or r11, r10
GET_FRONT_BITS r1 24
mov r10, r8
GET_AFTER_BITS r8 8
or r8, r1
GET_FRONT_BITS r10 24
mov r1, r9
GET_AFTER_BITS r9 8
or r9, r10
stw r5, (r7, 0)
stw r11, (r7, 4)
stw r8, (r7, 8)
stw r9, (r7, 12)
subi r4, 16
addi r3, 16
addi r7, 16
cmplti r4, 16
jbf .L32
ldw r8, (sp, 0)
ldw r9, (sp, 4)
ldw r10, (sp, 8)
ldw r11, (sp, 12)
addi sp, 16
cmplti r4, 4
bf .L30
subi r3, 1
br .L_copy_by_byte
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#include <linux/types.h>
void *memset(void *dest, int c, size_t l)
{
char *d = dest;
u32 ch = c & 0xff;
u32 tmp = ch | ch << 8 | ch << 16 | ch << 24;
/* Align d first; check l before decrementing so a zero length cannot
* wrap l around to SIZE_MAX. */
while (l && ((uintptr_t)d & 0x3)) {
*d++ = ch;
l--;
}
while (l >= 16) {
*(((u32 *)d)) = tmp;
*(((u32 *)d)+1) = tmp;
*(((u32 *)d)+2) = tmp;
*(((u32 *)d)+3) = tmp;
l -= 16;
d += 16;
}
while (l > 3) {
*(((u32 *)d)) = tmp;
l -= 4;
d += 4;
}
while (l) {
*d = ch;
l--;
d++;
}
return dest;
}
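/*
* Sketch of the word-fill trick above: the fill byte is replicated into
* every byte lane of tmp, so the aligned loops store 4 or 16 bytes per
* iteration instead of 1:
*
* c = 0xab -> tmp = 0xabababab
*/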
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/mman.h>
#include <linux/shm.h>
#include <linux/sched.h>
#include <linux/random.h>
#include <linux/io.h>
unsigned long shm_align_mask = (0x4000 >> 1) - 1; /* Sane caches */
#define COLOUR_ALIGN(addr, pgoff) \
((((addr) + shm_align_mask) & ~shm_align_mask) + \
(((pgoff) << PAGE_SHIFT) & shm_align_mask))
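/*
* Sketch, assuming 4 KiB pages (PAGE_SHIFT == 12): COLOUR_ALIGN() first
* rounds addr up to a colour boundary (shm_align_mask + 1 = 0x2000),
* then adds the colour demanded by pgoff:
*
* COLOUR_ALIGN(0x10001234, 1)
* = ((0x10001234 + 0x1fff) & ~0x1fff) + ((1 << 12) & 0x1fff)
* = 0x10002000 + 0x1000 = 0x10003000
*/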
unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
unsigned long len, unsigned long pgoff, unsigned long flags)
{
struct vm_area_struct *vmm;
int do_color_align;
if (flags & MAP_FIXED) {
/*
* We do not accept a shared mapping if it would violate
* cache aliasing constraints.
*/
if ((flags & MAP_SHARED) &&
((addr - (pgoff << PAGE_SHIFT)) & shm_align_mask))
return -EINVAL;
return addr;
}
if (len > TASK_SIZE)
return -ENOMEM;
do_color_align = 0;
if (filp || (flags & MAP_SHARED))
do_color_align = 1;
if (addr) {
if (do_color_align)
addr = COLOUR_ALIGN(addr, pgoff);
else
addr = PAGE_ALIGN(addr);
vmm = find_vma(current->mm, addr);
if (TASK_SIZE - len >= addr &&
(!vmm || addr + len <= vmm->vm_start))
return addr;
}
addr = TASK_UNMAPPED_BASE;
if (do_color_align)
addr = COLOUR_ALIGN(addr, pgoff);
else
addr = PAGE_ALIGN(addr);
for (vmm = find_vma(current->mm, addr); ; vmm = vmm->vm_next) {
/* At this point: (!vmm || addr < vmm->vm_end). */
if (TASK_SIZE - len < addr)
return -ENOMEM;
if (!vmm || addr + len <= vmm->vm_start)
return addr;
addr = vmm->vm_end;
if (do_color_align)
addr = COLOUR_ALIGN(addr, pgoff);
}
}
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#include <linux/module.h>
EXPORT_SYMBOL(memcpy);
EXPORT_SYMBOL(memset);
obj-y += cacheflush.o
obj-$(CONFIG_CPU_HAS_FPU) += fpu.o
obj-y += memcmp.o
obj-y += memcpy.o
obj-y += memmove.o
obj-y += memset.o
obj-y += strcmp.o
obj-y += strcpy.o
obj-y += strlen.o
obj-y += strksyms.o
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#include <linux/cache.h>
#include <linux/highmem.h>
#include <linux/mm.h>
#include <asm/cache.h>
void flush_icache_page(struct vm_area_struct *vma, struct page *page)
{
unsigned long start;
start = (unsigned long) kmap_atomic(page);
cache_wbinv_range(start, start + PAGE_SIZE);
kunmap_atomic((void *)start);
}
void flush_icache_user_range(struct vm_area_struct *vma, struct page *page,
unsigned long vaddr, int len)
{
unsigned long kaddr;
kaddr = (unsigned long) kmap_atomic(page) + (vaddr & ~PAGE_MASK);
cache_wbinv_range(kaddr, kaddr + len);
kunmap_atomic((void *)kaddr);
}
void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
pte_t *pte)
{
unsigned long addr, pfn;
struct page *page;
void *va;
if (!(vma->vm_flags & VM_EXEC))
return;
pfn = pte_pfn(*pte);
if (unlikely(!pfn_valid(pfn)))
return;
page = pfn_to_page(pfn);
if (page == ZERO_PAGE(0))
return;
va = page_address(page);
addr = (unsigned long) va;
if (va == NULL && PageHighMem(page))
addr = (unsigned long) kmap_atomic(page);
cache_wbinv_range(addr, addr + PAGE_SIZE);
if (va == NULL && PageHighMem(page))
kunmap_atomic((void *) addr);
}
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#include <linux/ptrace.h>
#include <linux/uaccess.h>
#include <abi/reg_ops.h>
#define MTCR_MASK 0xFC00FFE0
#define MFCR_MASK 0xFC00FFE0
#define MTCR_DIST 0xC0006420
#define MFCR_DIST 0xC0006020
void __init init_fpu(void)
{
mtcr("cr<1, 2>", 0);
}
/*
* fpu_libc_helper() helps libc execute:
* - mfcr %a, cr<1, 2>
* - mfcr %a, cr<2, 2>
* - mtcr %a, cr<1, 2>
* - mtcr %a, cr<2, 2>
*/
int fpu_libc_helper(struct pt_regs *regs)
{
int fault;
unsigned long instrptr, regx = 0;
unsigned long index = 0, tmp = 0;
unsigned long tinstr = 0;
u16 instr_hi, instr_low;
instrptr = instruction_pointer(regs);
if (instrptr & 1)
return 0;
fault = __get_user(instr_low, (u16 *)instrptr);
if (fault)
return 0;
fault = __get_user(instr_hi, (u16 *)(instrptr + 2));
if (fault)
return 0;
tinstr = instr_hi | ((unsigned long)instr_low << 16);
if (((tinstr >> 21) & 0x1F) != 2)
return 0;
if ((tinstr & MTCR_MASK) == MTCR_DIST) {
index = (tinstr >> 16) & 0x1F;
if (index > 13)
return 0;
tmp = tinstr & 0x1F;
if (tmp > 2)
return 0;
regx = *(&regs->a0 + index);
if (tmp == 1)
mtcr("cr<1, 2>", regx);
else if (tmp == 2)
mtcr("cr<2, 2>", regx);
else
return 0;
regs->pc += 4;
return 1;
}
if ((tinstr & MFCR_MASK) == MFCR_DIST) {
index = tinstr & 0x1F;
if (index > 13)
return 0;
tmp = ((tinstr >> 16) & 0x1F);
if (tmp > 2)
return 0;
if (tmp == 1)
regx = mfcr("cr<1, 2>");
else if (tmp == 2)
regx = mfcr("cr<2, 2>");
else
return 0;
*(&regs->a0 + index) = regx;
regs->pc += 4;
return 1;
}
return 0;
}
void fpu_fpe(struct pt_regs *regs)
{
int sig, code;
unsigned int fesr;
fesr = mfcr("cr<2, 2>");
sig = SIGFPE;
code = FPE_FLTUNK;
if (fesr & FPE_ILLE) {
sig = SIGILL;
code = ILL_ILLOPC;
} else if (fesr & FPE_IDC) {
sig = SIGILL;
code = ILL_ILLOPN;
} else if (fesr & FPE_FEC) {
sig = SIGFPE;
if (fesr & FPE_IOC)
code = FPE_FLTINV;
else if (fesr & FPE_DZC)
code = FPE_FLTDIV;
else if (fesr & FPE_UFC)
code = FPE_FLTUND;
else if (fesr & FPE_OFC)
code = FPE_FLTOVF;
else if (fesr & FPE_IXC)
code = FPE_FLTRES;
}
force_sig_fault(sig, code, (void __user *)regs->pc, current);
}
#define FMFVR_FPU_REGS(vrx, vry) \
"fmfvrl %0, "#vrx"\n" \
"fmfvrh %1, "#vrx"\n" \
"fmfvrl %2, "#vry"\n" \
"fmfvrh %3, "#vry"\n"
#define FMTVR_FPU_REGS(vrx, vry) \
"fmtvrl "#vrx", %0\n" \
"fmtvrh "#vrx", %1\n" \
"fmtvrl "#vry", %2\n" \
"fmtvrh "#vry", %3\n"
#define STW_FPU_REGS(a, b, c, d) \
"stw %0, (%4, "#a")\n" \
"stw %1, (%4, "#b")\n" \
"stw %2, (%4, "#c")\n" \
"stw %3, (%4, "#d")\n"
#define LDW_FPU_REGS(a, b, c, d) \
"ldw %0, (%4, "#a")\n" \
"ldw %1, (%4, "#b")\n" \
"ldw %2, (%4, "#c")\n" \
"ldw %3, (%4, "#d")\n"
void save_to_user_fp(struct user_fp *user_fp)
{
unsigned long flg;
unsigned long tmp1, tmp2;
unsigned long *fpregs;
local_irq_save(flg);
tmp1 = mfcr("cr<1, 2>");
tmp2 = mfcr("cr<2, 2>");
user_fp->fcr = tmp1;
user_fp->fesr = tmp2;
fpregs = &user_fp->vr[0];
#ifdef CONFIG_CPU_HAS_FPUV2
#ifdef CONFIG_CPU_HAS_VDSP
asm volatile(
"vstmu.32 vr0-vr3, (%0)\n"
"vstmu.32 vr4-vr7, (%0)\n"
"vstmu.32 vr8-vr11, (%0)\n"
"vstmu.32 vr12-vr15, (%0)\n"
"fstmu.64 vr16-vr31, (%0)\n"
: "+a"(fpregs)
::"memory");
#else
asm volatile(
"fstmu.64 vr0-vr31, (%0)\n"
: "+a"(fpregs)
::"memory");
#endif
#else
{
unsigned long tmp3, tmp4;
asm volatile(
FMFVR_FPU_REGS(vr0, vr1)
STW_FPU_REGS(0, 4, 16, 20)
FMFVR_FPU_REGS(vr2, vr3)
STW_FPU_REGS(32, 36, 48, 52)
FMFVR_FPU_REGS(vr4, vr5)
STW_FPU_REGS(64, 68, 80, 84)
FMFVR_FPU_REGS(vr6, vr7)
STW_FPU_REGS(96, 100, 112, 116)
"addi %4, 128\n"
FMFVR_FPU_REGS(vr8, vr9)
STW_FPU_REGS(0, 4, 16, 20)
FMFVR_FPU_REGS(vr10, vr11)
STW_FPU_REGS(32, 36, 48, 52)
FMFVR_FPU_REGS(vr12, vr13)
STW_FPU_REGS(64, 68, 80, 84)
FMFVR_FPU_REGS(vr14, vr15)
STW_FPU_REGS(96, 100, 112, 116)
: "=a"(tmp1), "=a"(tmp2), "=a"(tmp3),
"=a"(tmp4), "+a"(fpregs)
::"memory");
}
#endif
local_irq_restore(flg);
}
void restore_from_user_fp(struct user_fp *user_fp)
{
unsigned long flg;
unsigned long tmp1, tmp2;
unsigned long *fpregs;
local_irq_save(flg);
tmp1 = user_fp->fcr;
tmp2 = user_fp->fesr;
mtcr("cr<1, 2>", tmp1);
mtcr("cr<2, 2>", tmp2);
fpregs = &user_fp->vr[0];
#ifdef CONFIG_CPU_HAS_FPUV2
#ifdef CONFIG_CPU_HAS_VDSP
asm volatile(
"vldmu.32 vr0-vr3, (%0)\n"
"vldmu.32 vr4-vr7, (%0)\n"
"vldmu.32 vr8-vr11, (%0)\n"
"vldmu.32 vr12-vr15, (%0)\n"
"fldmu.64 vr16-vr31, (%0)\n"
: "+a"(fpregs)
::"memory");
#else
asm volatile(
"fldmu.64 vr0-vr31, (%0)\n"
: "+a"(fpregs)
::"memory");
#endif
#else
{
unsigned long tmp3, tmp4;
asm volatile(
LDW_FPU_REGS(0, 4, 16, 20)
FMTVR_FPU_REGS(vr0, vr1)
LDW_FPU_REGS(32, 36, 48, 52)
FMTVR_FPU_REGS(vr2, vr3)
LDW_FPU_REGS(64, 68, 80, 84)
FMTVR_FPU_REGS(vr4, vr5)
LDW_FPU_REGS(96, 100, 112, 116)
FMTVR_FPU_REGS(vr6, vr7)
"addi %4, 128\n"
LDW_FPU_REGS(0, 4, 16, 20)
FMTVR_FPU_REGS(vr8, vr9)
LDW_FPU_REGS(32, 36, 48, 52)
FMTVR_FPU_REGS(vr10, vr11)
LDW_FPU_REGS(64, 68, 80, 84)
FMTVR_FPU_REGS(vr12, vr13)
LDW_FPU_REGS(96, 100, 112, 116)
FMTVR_FPU_REGS(vr14, vr15)
: "=a"(tmp1), "=a"(tmp2), "=a"(tmp3),
"=a"(tmp4), "+a"(fpregs)
::"memory");
}
#endif
local_irq_restore(flg);
}
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ABI_CSKY_CACHEFLUSH_H
#define __ABI_CSKY_CACHEFLUSH_H
/* Keep includes the same across arches. */
#include <linux/mm.h>
/*
* The cache does not need to be flushed when TLB entries change,
* because the cache is indexed by physical rather than virtual memory.
*/
#define flush_cache_all() do { } while (0)
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_dup_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) \
do { \
if (vma->vm_flags & VM_EXEC) \
icache_inv_all(); \
} while (0)
#define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 0
#define flush_dcache_page(page) do { } while (0)
#define flush_dcache_mmap_lock(mapping) do { } while (0)
#define flush_dcache_mmap_unlock(mapping) do { } while (0)
#define flush_icache_range(start, end) cache_wbinv_range(start, end)
void flush_icache_page(struct vm_area_struct *vma, struct page *page);
void flush_icache_user_range(struct vm_area_struct *vma, struct page *page,
unsigned long vaddr, int len);
#define flush_cache_vmap(start, end) do { } while (0)
#define flush_cache_vunmap(start, end) do { } while (0)
#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
do { \
memcpy(dst, src, len); \
cache_wbinv_range((unsigned long)dst, (unsigned long)dst + len); \
} while (0)
#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
memcpy(dst, src, len)
#endif /* __ABI_CSKY_CACHEFLUSH_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_CKMMUV2_H
#define __ASM_CSKY_CKMMUV2_H
#include <abi/reg_ops.h>
#include <asm/barrier.h>
static inline int read_mmu_index(void)
{
return mfcr("cr<0, 15>");
}
static inline void write_mmu_index(int value)
{
mtcr("cr<0, 15>", value);
}
static inline int read_mmu_entrylo0(void)
{
return mfcr("cr<2, 15>");
}
static inline int read_mmu_entrylo1(void)
{
return mfcr("cr<3, 15>");
}
static inline void write_mmu_pagemask(int value)
{
mtcr("cr<6, 15>", value);
}
static inline int read_mmu_entryhi(void)
{
return mfcr("cr<4, 15>");
}
static inline void write_mmu_entryhi(int value)
{
mtcr("cr<4, 15>", value);
}
/*
* TLB operations.
*/
static inline void tlb_probe(void)
{
mtcr("cr<8, 15>", 0x80000000);
}
static inline void tlb_read(void)
{
mtcr("cr<8, 15>", 0x40000000);
}
static inline void tlb_invalid_all(void)
{
#ifdef CONFIG_CPU_HAS_TLBI
asm volatile("tlbi.alls\n":::"memory");
sync_is();
#else
mtcr("cr<8, 15>", 0x04000000);
#endif
}
static inline void tlb_invalid_indexed(void)
{
mtcr("cr<8, 15>", 0x02000000);
}
/* setup hardrefil pgd */
static inline unsigned long get_pgd(void)
{
return mfcr("cr<29, 15>");
}
static inline void setup_pgd(unsigned long pgd, bool kernel)
{
if (kernel)
mtcr("cr<28, 15>", pgd);
else
mtcr("cr<29, 15>", pgd);
}
#endif /* __ASM_CSKY_CKMMUV2_H */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ABI_CSKY_ELF_H
#define __ABI_CSKY_ELF_H
/* The ordering of the members in pr_reg[] is defined by GDB. */
#define ELF_CORE_COPY_REGS(pr_reg, regs) do { \
pr_reg[0] = regs->pc; \
pr_reg[1] = regs->a1; \
pr_reg[2] = regs->a0; \
pr_reg[3] = regs->sr; \
pr_reg[4] = regs->a2; \
pr_reg[5] = regs->a3; \
pr_reg[6] = regs->regs[0]; \
pr_reg[7] = regs->regs[1]; \
pr_reg[8] = regs->regs[2]; \
pr_reg[9] = regs->regs[3]; \
pr_reg[10] = regs->regs[4]; \
pr_reg[11] = regs->regs[5]; \
pr_reg[12] = regs->regs[6]; \
pr_reg[13] = regs->regs[7]; \
pr_reg[14] = regs->regs[8]; \
pr_reg[15] = regs->regs[9]; \
pr_reg[16] = regs->usp; \
pr_reg[17] = regs->lr; \
pr_reg[18] = regs->exregs[0]; \
pr_reg[19] = regs->exregs[1]; \
pr_reg[20] = regs->exregs[2]; \
pr_reg[21] = regs->exregs[3]; \
pr_reg[22] = regs->exregs[4]; \
pr_reg[23] = regs->exregs[5]; \
pr_reg[24] = regs->exregs[6]; \
pr_reg[25] = regs->exregs[7]; \
pr_reg[26] = regs->exregs[8]; \
pr_reg[27] = regs->exregs[9]; \
pr_reg[28] = regs->exregs[10]; \
pr_reg[29] = regs->exregs[11]; \
pr_reg[30] = regs->exregs[12]; \
pr_reg[31] = regs->exregs[13]; \
pr_reg[32] = regs->exregs[14]; \
pr_reg[33] = regs->tls; \
} while (0);
#endif /* __ABI_CSKY_ELF_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_ENTRY_H
#define __ASM_CSKY_ENTRY_H
#include <asm/setup.h>
#include <abi/regdef.h>
#define LSAVE_PC 8
#define LSAVE_PSR 12
#define LSAVE_A0 24
#define LSAVE_A1 28
#define LSAVE_A2 32
#define LSAVE_A3 36
#define EPC_INCREASE 4
#define EPC_KEEP 0
#define KSPTOUSP
#define USPTOKSP
#define usp cr<14, 1>
.macro INCTRAP rx
addi \rx, EPC_INCREASE
.endm
.macro SAVE_ALL epc_inc
subi sp, 152
stw tls, (sp, 0)
stw lr, (sp, 4)
mfcr lr, epc
movi tls, \epc_inc
add lr, tls
stw lr, (sp, 8)
mfcr lr, epsr
stw lr, (sp, 12)
mfcr lr, usp
stw lr, (sp, 16)
stw a0, (sp, 20)
stw a0, (sp, 24)
stw a1, (sp, 28)
stw a2, (sp, 32)
stw a3, (sp, 36)
addi sp, 40
stm r4-r13, (sp)
addi sp, 40
stm r16-r30, (sp)
#ifdef CONFIG_CPU_HAS_HILO
mfhi lr
stw lr, (sp, 60)
mflo lr
stw lr, (sp, 64)
#endif
subi sp, 80
.endm
.macro RESTORE_ALL
psrclr ie
ldw tls, (sp, 0)
ldw lr, (sp, 4)
ldw a0, (sp, 8)
mtcr a0, epc
ldw a0, (sp, 12)
mtcr a0, epsr
ldw a0, (sp, 16)
mtcr a0, usp
#ifdef CONFIG_CPU_HAS_HILO
ldw a0, (sp, 140)
mthi a0
ldw a0, (sp, 144)
mtlo a0
#endif
ldw a0, (sp, 24)
ldw a1, (sp, 28)
ldw a2, (sp, 32)
ldw a3, (sp, 36)
addi sp, 40
ldm r4-r13, (sp)
addi sp, 40
ldm r16-r30, (sp)
addi sp, 72
rte
.endm
.macro SAVE_SWITCH_STACK
subi sp, 64
stm r4-r11, (sp)
stw r15, (sp, 32)
stw r16, (sp, 36)
stw r17, (sp, 40)
stw r26, (sp, 44)
stw r27, (sp, 48)
stw r28, (sp, 52)
stw r29, (sp, 56)
stw r30, (sp, 60)
.endm
.macro RESTORE_SWITCH_STACK
ldm r4-r11, (sp)
ldw r15, (sp, 32)
ldw r16, (sp, 36)
ldw r17, (sp, 40)
ldw r26, (sp, 44)
ldw r27, (sp, 48)
ldw r28, (sp, 52)
ldw r29, (sp, 56)
ldw r30, (sp, 60)
addi sp, 64
.endm
/* MMU registers operators. */
.macro RD_MIR rx
mfcr \rx, cr<0, 15>
.endm
.macro RD_MEH rx
mfcr \rx, cr<4, 15>
.endm
.macro RD_MCIR rx
mfcr \rx, cr<8, 15>
.endm
.macro RD_PGDR rx
mfcr \rx, cr<29, 15>
.endm
.macro RD_PGDR_K rx
mfcr \rx, cr<28, 15>
.endm
.macro WR_MEH rx
mtcr \rx, cr<4, 15>
.endm
.macro WR_MCIR rx
mtcr \rx, cr<8, 15>
.endm
.macro SETUP_MMU rx
lrw \rx, PHYS_OFFSET | 0xe
mtcr \rx, cr<30, 15>
lrw \rx, (PHYS_OFFSET + 0x20000000) | 0xe
mtcr \rx, cr<31, 15>
.endm
#endif /* __ASM_CSKY_ENTRY_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_FPU_H
#define __ASM_CSKY_FPU_H
#include <asm/sigcontext.h>
#include <asm/ptrace.h>
int fpu_libc_helper(struct pt_regs *regs);
void fpu_fpe(struct pt_regs *regs);
void __init init_fpu(void);
void save_to_user_fp(struct user_fp *user_fp);
void restore_from_user_fp(struct user_fp *user_fp);
/*
* Define the fesr bit for fpe handle.
*/
#define FPE_ILLE (1 << 16) /* Illegal instruction */
#define FPE_FEC (1 << 7) /* Input float-point arithmetic exception */
#define FPE_IDC (1 << 5) /* Input denormalized exception */
#define FPE_IXC (1 << 4) /* Inexact exception */
#define FPE_UFC (1 << 3) /* Underflow exception */
#define FPE_OFC (1 << 2) /* Overflow exception */
#define FPE_DZC (1 << 1) /* Divide by zero exception */
#define FPE_IOC (1 << 0) /* Invalid operation exception */
#define FPE_REGULAR_EXCEPTION (FPE_IXC | FPE_UFC | FPE_OFC | FPE_DZC | FPE_IOC)
#ifdef CONFIG_OPEN_FPU_IDE
#define IDE_STAT (1 << 5)
#else
#define IDE_STAT 0
#endif
#ifdef CONFIG_OPEN_FPU_IXE
#define IXE_STAT (1 << 4)
#else
#define IXE_STAT 0
#endif
#ifdef CONFIG_OPEN_FPU_UFE
#define UFE_STAT (1 << 3)
#else
#define UFE_STAT 0
#endif
#ifdef CONFIG_OPEN_FPU_OFE
#define OFE_STAT (1 << 2)
#else
#define OFE_STAT 0
#endif
#ifdef CONFIG_OPEN_FPU_DZE
#define DZE_STAT (1 << 1)
#else
#define DZE_STAT 0
#endif
#ifdef CONFIG_OPEN_FPU_IOE
#define IOE_STAT (1 << 0)
#else
#define IOE_STAT 0
#endif
#endif /* __ASM_CSKY_FPU_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
static inline void clear_user_page(void *addr, unsigned long vaddr,
struct page *page)
{
clear_page(addr);
}
static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
struct page *page)
{
copy_page(to, from);
}
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_PGTABLE_BITS_H
#define __ASM_CSKY_PGTABLE_BITS_H
/* implemented in software */
#define _PAGE_ACCESSED (1<<7)
#define PAGE_ACCESSED_BIT (7)
#define _PAGE_READ (1<<8)
#define _PAGE_WRITE (1<<9)
#define _PAGE_PRESENT (1<<10)
#define _PAGE_MODIFIED (1<<11)
#define PAGE_MODIFIED_BIT (11)
/* implemented in hardware */
#define _PAGE_GLOBAL (1<<0)
#define _PAGE_VALID (1<<1)
#define PAGE_VALID_BIT (1)
#define _PAGE_DIRTY (1<<2)
#define PAGE_DIRTY_BIT (2)
#define _PAGE_SO (1<<5)
#define _PAGE_BUF (1<<6)
#define _PAGE_CACHE (1<<3)
#define _CACHE_MASK _PAGE_CACHE
#define _CACHE_CACHED (_PAGE_VALID | _PAGE_CACHE | _PAGE_BUF)
#define _CACHE_UNCACHED (_PAGE_VALID | _PAGE_SO)
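/*
* The composite cache attributes above work out to (arithmetic on this
* header's definitions):
*
* _CACHE_CACHED = (1<<1) | (1<<3) | (1<<6) = 0x4a
* _CACHE_UNCACHED = (1<<1) | (1<<5) = 0x22
*/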
#endif /* __ASM_CSKY_PGTABLE_BITS_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ABI_REG_OPS_H
#define __ABI_REG_OPS_H
#include <asm/reg_ops.h>
static inline unsigned int mfcr_hint(void)
{
return mfcr("cr31");
}
static inline unsigned int mfcr_ccr2(void)
{
return mfcr("cr23");
}
#endif /* __ABI_REG_OPS_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_REGDEF_H
#define __ASM_CSKY_REGDEF_H
#define syscallid r7
#define r11_sig r11
#define regs_syscallid(regs) regs->regs[3]
/*
* PSR format:
* | 31 | 30-24 | 23-16 | 15 14 | 13-10 | 9 | 8-0 |
*    S            VEC     TM            MM
*
* S: Super Mode
* VEC: Exception Number
* TM: Trace Mode
* MM: Memory unaligned addr access
*/
#define DEFAULT_PSR_VALUE 0x80000200
#define SYSTRACE_SAVENUM 5
#endif /* __ASM_CSKY_REGDEF_H */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ABI_CSKY_STRING_H
#define __ABI_CSKY_STRING_H
#define __HAVE_ARCH_MEMCMP
extern int memcmp(const void *, const void *, __kernel_size_t);
#define __HAVE_ARCH_MEMCPY
extern void *memcpy(void *, const void *, __kernel_size_t);
#define __HAVE_ARCH_MEMMOVE
extern void *memmove(void *, const void *, __kernel_size_t);
#define __HAVE_ARCH_MEMSET
extern void *memset(void *, int, __kernel_size_t);
#define __HAVE_ARCH_STRCMP
extern int strcmp(const char *, const char *);
#define __HAVE_ARCH_STRCPY
extern char *strcpy(char *, const char *);
#define __HAVE_ARCH_STRLEN
extern __kernel_size_t strlen(const char *);
#endif /* __ABI_CSKY_STRING_H */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ABI_CSKY_VDSO_H
#define __ABI_CSKY_VDSO_H
#include <linux/uaccess.h>
static inline int setup_vdso_page(unsigned short *ptr)
{
int err = 0;
/* movi r7, 139 (__NR_rt_sigreturn; imm16 = 0x8b) */
err |= __put_user(0xea07, ptr);
err |= __put_user(0x008b, ptr+1);
/* trap 0 */
err |= __put_user(0xc000, ptr+2);
err |= __put_user(0x2020, ptr+3);
return err;
}
#endif /* __ABI_CSKY_VDSO_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#include <linux/linkage.h>
#include "sysdep.h"
ENTRY(memcmp)
/* Test if len less than 4 bytes. */
mov r3, r0
movi r0, 0
mov r12, r4
cmplti r2, 4
bt .L_compare_by_byte
andi r13, r0, 3
movi r19, 4
/* Test if s1 is not 4 bytes aligned. */
bnez r13, .L_s1_not_aligned
LABLE_ALIGN
.L_s1_aligned:
/* If s1 is aligned, compare word by word. */
zext r18, r2, 31, 4
/* Test if len less than 16 bytes. */
bez r18, .L_compare_by_word
.L_compare_by_4word:
/* If aligned, load word each time. */
ldw r20, (r3, 0)
ldw r21, (r1, 0)
/* If s1[i] != s2[i], goto .L_byte_check. */
cmpne r20, r21
bt .L_byte_check
ldw r20, (r3, 4)
ldw r21, (r1, 4)
cmpne r20, r21
bt .L_byte_check
ldw r20, (r3, 8)
ldw r21, (r1, 8)
cmpne r20, r21
bt .L_byte_check
ldw r20, (r3, 12)
ldw r21, (r1, 12)
cmpne r20, r21
bt .L_byte_check
PRE_BNEZAD (r18)
addi a3, 16
addi a1, 16
BNEZAD (r18, .L_compare_by_4word)
.L_compare_by_word:
zext r18, r2, 3, 2
bez r18, .L_compare_by_byte
.L_compare_by_word_loop:
ldw r20, (r3, 0)
ldw r21, (r1, 0)
addi r3, 4
PRE_BNEZAD (r18)
cmpne r20, r21
addi r1, 4
bt .L_byte_check
BNEZAD (r18, .L_compare_by_word_loop)
.L_compare_by_byte:
zext r18, r2, 1, 0
bez r18, .L_return
.L_compare_by_byte_loop:
ldb r0, (r3, 0)
ldb r4, (r1, 0)
addi r3, 1
subu r0, r4
PRE_BNEZAD (r18)
addi r1, 1
bnez r0, .L_return
BNEZAD (r18, .L_compare_by_byte_loop)
.L_return:
mov r4, r12
rts
# ifdef __CSKYBE__
/* d[i] != s[i] in word, so we check byte 0. */
.L_byte_check:
xtrb0 r0, r20
xtrb0 r2, r21
subu r0, r2
bnez r0, .L_return
/* check byte 1 */
xtrb1 r0, r20
xtrb1 r2, r21
subu r0, r2
bnez r0, .L_return
/* check byte 2 */
xtrb2 r0, r20
xtrb2 r2, r21
subu r0, r2
bnez r0, .L_return
/* check byte 3 */
xtrb3 r0, r20
xtrb3 r2, r21
subu r0, r2
# else
/* s1[i] != s2[i] in word, so we check byte 3. */
.L_byte_check:
xtrb3 r0, r20
xtrb3 r2, r21
subu r0, r2
bnez r0, .L_return
/* check byte 2 */
xtrb2 r0, r20
xtrb2 r2, r21
subu r0, r2
bnez r0, .L_return
/* check byte 1 */
xtrb1 r0, r20
xtrb1 r2, r21
subu r0, r2
bnez r0, .L_return
/* check byte 0 */
xtrb0 r0, r20
xtrb0 r2, r21
subu r0, r2
br .L_return
# endif /* !__CSKYBE__ */
/* Compare when s1 is not aligned. */
.L_s1_not_aligned:
sub r13, r19, r13
sub r2, r13
.L_s1_not_aligned_loop:
ldb r0, (r3, 0)
ldb r4, (r1, 0)
addi r3, 1
subu r0, r4
PRE_BNEZAD (r13)
addi r1, 1
bnez r0, .L_return
BNEZAD (r13, .L_s1_not_aligned_loop)
br .L_s1_aligned
ENDPROC(memcmp)
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#include <linux/linkage.h>
#include "sysdep.h"
ENTRY(__memcpy)
ENTRY(memcpy)
/* Test if len is less than 4 bytes. */
mov r12, r0
cmplti r2, 4
bt .L_copy_by_byte
andi r13, r0, 3
movi r19, 4
/* Test if dest is not 4 bytes aligned. */
bnez r13, .L_dest_not_aligned
/* Hardware can handle unaligned access directly. */
.L_dest_aligned:
/* If dest is aligned, then copy. */
zext r18, r2, 31, 4
/* Test if len is less than 16 bytes. */
bez r18, .L_len_less_16bytes
movi r19, 0
LABLE_ALIGN
.L_len_larger_16bytes:
#if defined(__CSKY_VDSPV2__)
vldx.8 vr0, (r1), r19
PRE_BNEZAD (r18)
addi r1, 16
vstx.8 vr0, (r0), r19
addi r0, 16
#elif defined(__CK860__)
ldw r3, (r1, 0)
stw r3, (r0, 0)
ldw r3, (r1, 4)
stw r3, (r0, 4)
ldw r3, (r1, 8)
stw r3, (r0, 8)
ldw r3, (r1, 12)
addi r1, 16
stw r3, (r0, 12)
addi r0, 16
#else
ldw r20, (r1, 0)
ldw r21, (r1, 4)
ldw r22, (r1, 8)
ldw r23, (r1, 12)
stw r20, (r0, 0)
stw r21, (r0, 4)
stw r22, (r0, 8)
stw r23, (r0, 12)
PRE_BNEZAD (r18)
addi r1, 16
addi r0, 16
#endif
BNEZAD (r18, .L_len_larger_16bytes)
.L_len_less_16bytes:
zext r18, r2, 3, 2
bez r18, .L_copy_by_byte
.L_len_less_16bytes_loop:
ldw r3, (r1, 0)
PRE_BNEZAD (r18)
addi r1, 4
stw r3, (r0, 0)
addi r0, 4
BNEZAD (r18, .L_len_less_16bytes_loop)
/* Test if len is less than 4 bytes. */
.L_copy_by_byte:
zext r18, r2, 1, 0
bez r18, .L_return
.L_copy_by_byte_loop:
ldb r3, (r1, 0)
PRE_BNEZAD (r18)
addi r1, 1
stb r3, (r0, 0)
addi r0, 1
BNEZAD (r18, .L_copy_by_byte_loop)
.L_return:
mov r0, r12
rts
/*
 * If dest is not aligned, copy a few bytes first to make
 * it aligned.
 */
.L_dest_not_aligned:
sub r13, r19, r13
sub r2, r13
/* Align the dest byte by byte. */
.L_dest_not_aligned_loop:
ldb r3, (r1, 0)
PRE_BNEZAD (r13)
addi r1, 1
stb r3, (r0, 0)
addi r0, 1
BNEZAD (r13, .L_dest_not_aligned_loop)
cmplti r2, 4
bt .L_copy_by_byte
/* Dest is now aligned; go back to the word-copy path. */
jbr .L_dest_aligned
ENDPROC(__memcpy)
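The same strategy in portable C, for reference (a sketch under the assumption, stated in the assembly, that the hardware tolerates unaligned word loads from src; memcpy_ref is a hypothetical name, not the kernel routine):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

static void *memcpy_ref(void *dst, const void *src, size_t n)
{
	unsigned char *d = dst;
	const unsigned char *s = src;
	uint32_t w;

	while (n && ((uintptr_t)d & 3)) {	/* .L_dest_not_aligned */
		*d++ = *s++;
		n--;
	}
	/* The assembly unrolls this 4x per pass (.L_len_larger_16bytes). */
	while (n >= 4) {
		memcpy(&w, s, 4);	/* ldw: src may be unaligned */
		memcpy(d, &w, 4);	/* stw: dest is aligned      */
		d += 4;
		s += 4;
		n -= 4;
	}
	while (n--)			/* .L_copy_by_byte */
		*d++ = *s++;
	return dst;
}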
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#include <linux/linkage.h>
#include "sysdep.h"
.weak memmove
ENTRY(__memmove)
ENTRY(memmove)
subu r3, r0, r1
cmphs r3, r2
bt memcpy
mov r12, r0
addu r0, r0, r2
addu r1, r1, r2
/* Test if len is less than 4 bytes. */
cmplti r2, 4
bt .L_copy_by_byte
andi r13, r0, 3
/* Test if dest is not 4 bytes aligned. */
bnez r13, .L_dest_not_aligned
/* Hardware can handle unaligned access directly. */
.L_dest_aligned:
/* If dest is aligned, then copy. */
zext r18, r2, 31, 4
/* Test if len is less than 16 bytes. */
bez r18, .L_len_less_16bytes
movi r19, 0
/* len > 16 bytes */
LABLE_ALIGN
.L_len_larger_16bytes:
subi r1, 16
subi r0, 16
#if defined(__CSKY_VDSPV2__)
vldx.8 vr0, (r1), r19
PRE_BNEZAD (r18)
vstx.8 vr0, (r0), r19
#elif defined(__CK860__)
ldw r3, (r1, 12)
stw r3, (r0, 12)
ldw r3, (r1, 8)
stw r3, (r0, 8)
ldw r3, (r1, 4)
stw r3, (r0, 4)
ldw r3, (r1, 0)
stw r3, (r0, 0)
#else
ldw r20, (r1, 0)
ldw r21, (r1, 4)
ldw r22, (r1, 8)
ldw r23, (r1, 12)
stw r20, (r0, 0)
stw r21, (r0, 4)
stw r22, (r0, 8)
stw r23, (r0, 12)
PRE_BNEZAD (r18)
#endif
BNEZAD (r18, .L_len_larger_16bytes)
.L_len_less_16bytes:
zext r18, r2, 3, 2
bez r18, .L_copy_by_byte
.L_len_less_16bytes_loop:
subi r1, 4
subi r0, 4
ldw r3, (r1, 0)
PRE_BNEZAD (r18)
stw r3, (r0, 0)
BNEZAD (r18, .L_len_less_16bytes_loop)
/* Test if len is less than 4 bytes. */
.L_copy_by_byte:
zext r18, r2, 1, 0
bez r18, .L_return
.L_copy_by_byte_loop:
subi r1, 1
subi r0, 1
ldb r3, (r1, 0)
PRE_BNEZAD (r18)
stb r3, (r0, 0)
BNEZAD (r18, .L_copy_by_byte_loop)
.L_return:
mov r0, r12
rts
/* If dest is not aligned, copy a few bytes first to make
   it aligned. */
.L_dest_not_aligned:
sub r2, r13
.L_dest_not_aligned_loop:
subi r1, 1
subi r0, 1
/* Align the dest byte by byte. */
ldb r3, (r1, 0)
PRE_BNEZAD (r13)
stb r3, (r0, 0)
BNEZAD (r13, .L_dest_not_aligned_loop)
cmplti r2, 4
bt .L_copy_by_byte
/* Dest is now aligned; go back to the word-copy path. */
jbr .L_dest_aligned
ENDPROC(memmove)
ENDPROC(__memmove)
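The three instructions at the top are the whole dispatch: subu/cmphs computes the unsigned test (dst - src) >= len, which is true exactly when a forward copy cannot clobber unread source bytes, so memmove can tail-call memcpy; otherwise it copies backward from the end. In C (a sketch; memmove_ref is a hypothetical name):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

static void *memmove_ref(void *dst, const void *src, size_t n)
{
	unsigned char *d = dst;
	const unsigned char *s = src;

	/* The subu/cmphs/bt memcpy sequence above, in one comparison:
	 * covers both dst < src and non-overlapping dst > src. */
	if ((uintptr_t)d - (uintptr_t)s >= n)
		return memcpy(dst, src, n);

	d += n;				/* overlap: copy from the tail down */
	s += n;
	while (n--)
		*--d = *--s;
	return dst;
}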
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#include <linux/linkage.h>
#include "sysdep.h"
.weak memset
ENTRY(__memset)
ENTRY(memset)
/* Test if len is less than 8 bytes. */
mov r12, r0
cmplti r2, 8
bt .L_set_by_byte
andi r13, r0, 3
movi r19, 4
/* Test if dest is not 4 bytes aligned. */
bnez r13, .L_dest_not_aligned
/* Hardware can handle unaligned access directly. */
.L_dest_aligned:
zextb r3, r1
lsli r1, 8
or r1, r3
lsli r3, r1, 16
or r3, r1
/* If dest is aligned, then copy. */
zext r18, r2, 31, 4
/* Test if len is less than 16 bytes. */
bez r18, .L_len_less_16bytes
LABLE_ALIGN
.L_len_larger_16bytes:
stw r3, (r0, 0)
stw r3, (r0, 4)
stw r3, (r0, 8)
stw r3, (r0, 12)
PRE_BNEZAD (r18)
addi r0, 16
BNEZAD (r18, .L_len_larger_16bytes)
.L_len_less_16bytes:
zext r18, r2, 3, 2
andi r2, 3
bez r18, .L_set_by_byte
.L_len_less_16bytes_loop:
stw r3, (r0, 0)
PRE_BNEZAD (r18)
addi r0, 4
BNEZAD (r18, .L_len_less_16bytes_loop)
/* Set the remaining bytes (len < 8). */
.L_set_by_byte:
zext r18, r2, 2, 0
bez r18, .L_return
.L_set_by_byte_loop:
stb r1, (r0, 0)
PRE_BNEZAD (r18)
addi r0, 1
BNEZAD (r18, .L_set_by_byte_loop)
.L_return:
mov r0, r12
rts
/* If dest is not aligned, set a few bytes first to make
   it aligned. */
.L_dest_not_aligned:
sub r13, r19, r13
sub r2, r13
.L_dest_not_aligned_loop:
/* Align the dest byte by byte. */
stb r1, (r0, 0)
PRE_BNEZAD (r13)
addi r0, 1
BNEZAD (r13, .L_dest_not_aligned_loop)
cmplti r2, 8
bt .L_set_by_byte
/* Dest is now aligned; go back to the word-set path. */
jbr .L_dest_aligned
ENDPROC(memset)
ENDPROC(__memset)
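The zextb/lsli/or sequence at .L_dest_aligned broadcasts the fill byte into all four lanes of a word before the store loops run. Equivalent C (illustrative; fill_word is a hypothetical name):

#include <stdint.h>

static uint32_t fill_word(int c)
{
	uint32_t b = (uint8_t)c;	/* zextb */

	b |= b << 8;	/* lsli 8  + or: 0x000000bb -> 0x0000bbbb */
	b |= b << 16;	/* lsli 16 + or: 0x0000bbbb -> 0xbbbbbbbb */
	return b;
}

The resulting word is then stored four times (16 bytes) per pass of .L_len_larger_16bytes.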
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#include <linux/linkage.h>
#include "sysdep.h"
ENTRY(strcmp)
mov a3, a0
/* Check if the s1 addr is aligned. */
xor a2, a3, a1
andi a2, 0x3
bnez a2, 7f
andi t1, a0, 0x3
bnez t1, 5f
1:
/* If aligned, load word each time. */
ldw t0, (a3, 0)
ldw t1, (a1, 0)
/* If s1[i] != s2[i], goto 2f. */
cmpne t0, t1
bt 2f
/* If s1[i] == s2[i], check if s1 or s2 is at the end. */
tstnbz t0
/* If at the end, goto 3f (finish comparing). */
bf 3f
ldw t0, (a3, 4)
ldw t1, (a1, 4)
cmpne t0, t1
bt 2f
tstnbz t0
bf 3f
ldw t0, (a3, 8)
ldw t1, (a1, 8)
cmpne t0, t1
bt 2f
tstnbz t0
bf 3f
ldw t0, (a3, 12)
ldw t1, (a1, 12)
cmpne t0, t1
bt 2f
tstnbz t0
bf 3f
ldw t0, (a3, 16)
ldw t1, (a1, 16)
cmpne t0, t1
bt 2f
tstnbz t0
bf 3f
ldw t0, (a3, 20)
ldw t1, (a1, 20)
cmpne t0, t1
bt 2f
tstnbz t0
bf 3f
ldw t0, (a3, 24)
ldw t1, (a1, 24)
cmpne t0, t1
bt 2f
tstnbz t0
bf 3f
ldw t0, (a3, 28)
ldw t1, (a1, 28)
cmpne t0, t1
bt 2f
tstnbz t0
bf 3f
addi a3, 32
addi a1, 32
br 1b
# ifdef __CSKYBE__
/* s1[i] != s2[i] in the word, so check byte 0 first. */
2:
xtrb0 a0, t0
xtrb0 a2, t1
subu a0, a2
bez a2, 4f
bnez a0, 4f
/* check byte 1 */
xtrb1 a0, t0
xtrb1 a2, t1
subu a0, a2
bez a2, 4f
bnez a0, 4f
/* check byte 2 */
xtrb2 a0, t0
xtrb2 a2, t1
subu a0, a2
bez a2, 4f
bnez a0, 4f
/* check byte 3 */
xtrb3 a0, t0
xtrb3 a2, t1
subu a0, a2
# else
/* s1[i] != s2[i] in the word, so check byte 3 first. */
2:
xtrb3 a0, t0
xtrb3 a2, t1
subu a0, a2
bez a2, 4f
bnez a0, 4f
/* check byte 2 */
xtrb2 a0, t0
xtrb2 a2, t1
subu a0, a2
bez a2, 4f
bnez a0, 4f
/* check byte 1 */
xtrb1 a0, t0
xtrb1 a2, t1
subu a0, a2
bez a2, 4f
bnez a0, 4f
/* check byte 0 */
xtrb0 a0, t0
xtrb0 a2, t1
subu a0, a2
# endif /* !__CSKYBE__ */
jmp lr
3:
movi a0, 0
4:
jmp lr
/* Compare when s1 or s2 is not aligned. */
5:
subi t1, 4
6:
ldb a0, (a3, 0)
ldb a2, (a1, 0)
subu a0, a2
bez a2, 4b
bnez a0, 4b
addi t1, 1
addi a1, 1
addi a3, 1
bnez t1, 6b
br 1b
7:
ldb a0, (a3, 0)
addi a3, 1
ldb a2, (a1, 0)
addi a1, 1
subu a0, a2
bnez a0, 4b
bnez a2, 7b
jmp lr
ENDPROC(strcmp)
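The loop above leans on two instructions: cmpne to catch the first differing word, and tstnbz, which tests whether all bytes of a word are non-zero, to catch the terminating NUL without a per-byte scan. The classic bit-trick equivalent of the tstnbz predicate (reference only; word_has_zero_byte is a hypothetical name):

#include <stdint.h>

/* True iff some byte of w is zero; tstnbz computes the negation
 * of this in a single instruction. */
static int word_has_zero_byte(uint32_t w)
{
	return ((w - 0x01010101u) & ~w & 0x80808080u) != 0;
}

Labels 2-4 resolve a mismatching word byte by byte in endian order, and labels 5-7 fall back to byte compares when the two pointers are not mutually aligned.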
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#include <linux/linkage.h>
#include "sysdep.h"
ENTRY(strcpy)
mov a3, a0
/* Check if the src addr is aligned. */
andi t0, a1, 3
bnez t0, 11f
1:
/* Check if all the bytes in the word are not zero. */
ldw a2, (a1)
tstnbz a2
bf 9f
stw a2, (a3)
ldw a2, (a1, 4)
tstnbz a2
bf 2f
stw a2, (a3, 4)
ldw a2, (a1, 8)
tstnbz a2
bf 3f
stw a2, (a3, 8)
ldw a2, (a1, 12)
tstnbz a2
bf 4f
stw a2, (a3, 12)
ldw a2, (a1, 16)
tstnbz a2
bf 5f
stw a2, (a3, 16)
ldw a2, (a1, 20)
tstnbz a2
bf 6f
stw a2, (a3, 20)
ldw a2, (a1, 24)
tstnbz a2
bf 7f
stw a2, (a3, 24)
ldw a2, (a1, 28)
tstnbz a2
bf 8f
stw a2, (a3, 28)
addi a3, 32
addi a1, 32
br 1b
2:
addi a3, 4
br 9f
3:
addi a3, 8
br 9f
4:
addi a3, 12
br 9f
5:
addi a3, 16
br 9f
6:
addi a3, 20
br 9f
7:
addi a3, 24
br 9f
8:
addi a3, 28
9:
# ifdef __CSKYBE__
xtrb0 t0, a2
st.b t0, (a3)
bez t0, 10f
xtrb1 t0, a2
st.b t0, (a3, 1)
bez t0, 10f
xtrb2 t0, a2
st.b t0, (a3, 2)
bez t0, 10f
stw a2, (a3)
# else
xtrb3 t0, a2
st.b t0, (a3)
bez t0, 10f
xtrb2 t0, a2
st.b t0, (a3, 1)
bez t0, 10f
xtrb1 t0, a2
st.b t0, (a3, 2)
bez t0, 10f
stw a2, (a3)
# endif /* !__CSKYBE__ */
10:
jmp lr
11:
subi t0, 4
12:
ld.b a2, (a1)
st.b a2, (a3)
bez a2, 10b
addi t0, 1
addi a1, a1, 1
addi a3, a3, 1
bnez t0, 12b
jbr 1b
ENDPROC(strcpy)
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#include <linux/module.h>
EXPORT_SYMBOL(memcpy);
EXPORT_SYMBOL(memset);
EXPORT_SYMBOL(memcmp);
EXPORT_SYMBOL(memmove);
EXPORT_SYMBOL(strcmp);
EXPORT_SYMBOL(strcpy);
EXPORT_SYMBOL(strlen);
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#include <linux/linkage.h>
#include "sysdep.h"
ENTRY(strlen)
/* Check if the start addr is aligned. */
mov r3, r0
andi r1, r0, 3
movi r2, 4
movi r0, 0
bnez r1, .L_start_not_aligned
LABLE_ALIGN
.L_start_addr_aligned:
/* Check if all the bytes in the word are not zero. */
ldw r1, (r3)
tstnbz r1
bf .L_string_tail
ldw r1, (r3, 4)
addi r0, 4
tstnbz r1
bf .L_string_tail
ldw r1, (r3, 8)
addi r0, 4
tstnbz r1
bf .L_string_tail
ldw r1, (r3, 12)
addi r0, 4
tstnbz r1
bf .L_string_tail
ldw r1, (r3, 16)
addi r0, 4
tstnbz r1
bf .L_string_tail
ldw r1, (r3, 20)
addi r0, 4
tstnbz r1
bf .L_string_tail
ldw r1, (r3, 24)
addi r0, 4
tstnbz r1
bf .L_string_tail
ldw r1, (r3, 28)
addi r0, 4
tstnbz r1
bf .L_string_tail
addi r0, 4
addi r3, 32
br .L_start_addr_aligned
.L_string_tail:
# ifdef __CSKYBE__
xtrb0 r3, r1
bez r3, .L_return
addi r0, 1
xtrb1 r3, r1
bez r3, .L_return
addi r0, 1
xtrb2 r3, r1
bez r3, .L_return
addi r0, 1
# else
xtrb3 r3, r1
bez r3, .L_return
addi r0, 1
xtrb2 r3, r1
bez r3, .L_return
addi r0, 1
xtrb1 r3, r1
bez r3, .L_return
addi r0, 1
# endif /* !__CSKYBE__ */
.L_return:
rts
.L_start_not_aligned:
sub r2, r2, r1
.L_start_not_aligned_loop:
ldb r1, (r3)
PRE_BNEZAD (r2)
addi r3, 1
bez r1, .L_return
addi r0, 1
BNEZAD (r2, .L_start_not_aligned_loop)
br .L_start_addr_aligned
ENDPROC(strlen)
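In portable C the same structure looks like the sketch below (strlen_ref is a hypothetical name; the has-zero bit trick stands in for tstnbz). Note the aligned over-read: a 4-byte aligned load may fetch bytes past the NUL but can never cross a page boundary, which is why the assembly is safe; in strict C this is technically out of bounds, so treat the sketch as illustrative:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

static size_t strlen_ref(const char *s)
{
	const char *p = s;
	uint32_t w;

	while ((uintptr_t)p & 3) {	/* .L_start_not_aligned */
		if (*p == '\0')
			return p - s;
		p++;
	}
	for (;;) {			/* .L_start_addr_aligned */
		memcpy(&w, p, 4);
		if ((w - 0x01010101u) & ~w & 0x80808080u)
			break;		/* a zero byte: .L_string_tail */
		p += 4;
	}
	while (*p)			/* locate the exact NUL */
		p++;
	return p - s;
}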
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __SYSDEP_H
#define __SYSDEP_H
#ifdef __ASSEMBLER__
#if defined(__CK860__)
#define LABLE_ALIGN \
.balignw 16, 0x6c03
#define PRE_BNEZAD(R)
#define BNEZAD(R, L) \
bnezad R, L
#else
#define LABLE_ALIGN \
.balignw 8, 0x6c03
#define PRE_BNEZAD(R) \
subi R, 1
#define BNEZAD(R, L) \
bnez R, L
#endif
#endif
#endif
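These macros abstract the loop-close sequence used throughout the string routines above: CK860 has a fused decrement-and-branch instruction (bnezad), so PRE_BNEZAD is empty there, while older cores emulate it with an explicit subi followed by a plain bnez; LABLE_ALIGN pads loop heads to an 8- or 16-byte boundary with the 0x6c03 filler pattern. Every BNEZAD loop is therefore a guarded do-while, as in this C sketch (hypothetical word-copy body):

static void copy_words(unsigned long *dst, const unsigned long *src,
		       unsigned long count)
{
	if (!count)		/* the bez guard before each loop */
		return;
	do {
		*dst++ = *src++;
	} while (--count);	/* subi + bnez, or one bnezad on CK860 */
}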
targets := Image zImage uImage
targets += $(dtb-y)
$(obj)/Image: vmlinux FORCE
$(call if_changed,objcopy)
@echo ' Kernel: $@ is ready'
compress-$(CONFIG_KERNEL_GZIP) = gzip
compress-$(CONFIG_KERNEL_LZO) = lzo
compress-$(CONFIG_KERNEL_LZMA) = lzma
compress-$(CONFIG_KERNEL_XZ) = xzkern
compress-$(CONFIG_KERNEL_LZ4) = lz4
$(obj)/zImage: $(obj)/Image FORCE
$(call if_changed,$(compress-y))
@echo ' Kernel: $@ is ready'
UIMAGE_ARCH = sandbox
UIMAGE_COMPRESSION = $(compress-y)
UIMAGE_LOADADDR = $(shell $(NM) vmlinux | awk '$$NF == "_start" {print $$1}')
$(obj)/uImage: $(obj)/zImage
$(call if_changed,uimage)
@echo ' Kernel: $@ is ready'
dtstree := $(srctree)/$(src)
ifneq '$(CONFIG_CSKY_BUILTIN_DTB)' '""'
builtindtb-y := $(patsubst "%",%,$(CONFIG_CSKY_BUILTIN_DTB))
dtb-y += $(builtindtb-y).dtb
obj-y += $(builtindtb-y).dtb.o
.SECONDARY: $(obj)/$(builtindtb-y).dtb.S
else
dtb-y := $(patsubst $(dtstree)/%.dts,%.dtb, $(wildcard $(dtstree)/*.dts))
endif
always += $(dtb-y)
clean-files += *.dtb *.dtb.S
../../../../../include/dt-bindings
# CONFIG_LOCALVERSION_AUTO is not set
CONFIG_DEFAULT_HOSTNAME="csky"
# CONFIG_SWAP is not set
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_AUDIT=y
CONFIG_NO_HZ_IDLE=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_DEFAULT_DEADLINE=y
CONFIG_CPU_CK807=y
CONFIG_CPU_HAS_FPU=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_SIZE=65536
CONFIG_VT_HW_CONSOLE_BINDING=y
CONFIG_SERIAL_NONSTANDARD=y
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_OF_PLATFORM=y
CONFIG_TTY_PRINTK=y
# CONFIG_VGA_CONSOLE is not set
CONFIG_CSKY_MPTIMER=y
CONFIG_GX6605S_TIMER=y
CONFIG_PM_DEVFREQ=y
CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND=y
CONFIG_DEVFREQ_GOV_PERFORMANCE=y
CONFIG_DEVFREQ_GOV_POWERSAVE=y
CONFIG_DEVFREQ_GOV_USERSPACE=y
CONFIG_GENERIC_PHY=y
CONFIG_EXT4_FS=y
CONFIG_FANOTIFY=y
CONFIG_QUOTA=y
CONFIG_FSCACHE=m
CONFIG_FSCACHE_STATS=y
CONFIG_CACHEFILES=m
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_FAT_DEFAULT_UTF8=y
CONFIG_NTFS_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_CHILDREN=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_CONFIGFS_FS=y
CONFIG_CRAMFS=y
CONFIG_ROMFS_FS=y
CONFIG_NFS_FS=y
CONFIG_PRINTK_TIME=y
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_FS=y
CONFIG_MAGIC_SYSRQ=y
generic-y += asm-offsets.h
generic-y += bugs.h
generic-y += clkdev.h
generic-y += compat.h
generic-y += current.h
generic-y += delay.h
generic-y += device.h
generic-y += div64.h
generic-y += dma.h
generic-y += dma-contiguous.h
generic-y += dma-mapping.h
generic-y += emergency-restart.h
generic-y += exec.h
generic-y += fb.h
generic-y += ftrace.h
generic-y += futex.h
generic-y += gpio.h
generic-y += hardirq.h
generic-y += hw_irq.h
generic-y += irq.h
generic-y += irq_regs.h
generic-y += irq_work.h
generic-y += kdebug.h
generic-y += kmap_types.h
generic-y += kprobes.h
generic-y += kvm_para.h
generic-y += linkage.h
generic-y += local.h
generic-y += local64.h
generic-y += mm-arch-hooks.h
generic-y += module.h
generic-y += mutex.h
generic-y += pci.h
generic-y += percpu.h
generic-y += preempt.h
generic-y += qrwlock.h
generic-y += scatterlist.h
generic-y += sections.h
generic-y += serial.h
generic-y += shm.h
generic-y += timex.h
generic-y += topology.h
generic-y += trace_clock.h
generic-y += unaligned.h
generic-y += user.h
generic-y += vga.h
generic-y += vmlinux.lds.h
generic-y += word-at-a-time.h
generic-y += xor.h
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_ADDRSPACE_H
#define __ASM_CSKY_ADDRSPACE_H
#define KSEG0 0x80000000ul
#define KSEG0ADDR(a) (((unsigned long)a & 0x1fffffff) | KSEG0)
#endif /* __ASM_CSKY_ADDRSPACE_H */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_CSKY_ATOMIC_H
#define __ASM_CSKY_ATOMIC_H
#include <linux/version.h>
#include <asm/cmpxchg.h>
#include <asm/barrier.h>
#ifdef CONFIG_CPU_HAS_LDSTEX
#define __atomic_add_unless __atomic_add_unless
static inline int __atomic_add_unless(atomic_t *v, int a, int u)
{
unsigned long tmp, ret;
smp_mb();
asm volatile (
"1: ldex.w %0, (%3) \n"
" mov %1, %0 \n"
" cmpne %0, %4 \n"
" bf 2f \n"
" add %0, %2 \n"
" stex.w %0, (%3) \n"
" bez %0, 1b \n"
"2: \n"
: "=&r" (tmp), "=&r" (ret)
: "r" (a), "r"(&v->counter), "r"(u)
: "memory");
if (ret != u)
smp_mb();
return ret;
}
#define ATOMIC_OP(op, c_op) \
static inline void atomic_##op(int i, atomic_t *v) \
{ \
unsigned long tmp; \
\
asm volatile ( \
"1: ldex.w %0, (%2) \n" \
" " #op " %0, %1 \n" \
" stex.w %0, (%2) \n" \
" bez %0, 1b \n" \
: "=&r" (tmp) \
: "r" (i), "r"(&v->counter) \
: "memory"); \
}
#define ATOMIC_OP_RETURN(op, c_op) \
static inline int atomic_##op##_return(int i, atomic_t *v) \
{ \
unsigned long tmp, ret; \
\
smp_mb(); \
asm volatile ( \
"1: ldex.w %0, (%3) \n" \
" " #op " %0, %2 \n" \
" mov %1, %0 \n" \
" stex.w %0, (%3) \n" \
" bez %0, 1b \n" \
: "=&r" (tmp), "=&r" (ret) \
: "r" (i), "r"(&v->counter) \
: "memory"); \
smp_mb(); \
\
return ret; \
}
#define ATOMIC_FETCH_OP(op, c_op) \
static inline int atomic_fetch_##op(int i, atomic_t *v) \
{ \
unsigned long tmp, ret; \
\
smp_mb(); \
asm volatile ( \
"1: ldex.w %0, (%3) \n" \
" mov %1, %0 \n" \
" " #op " %0, %2 \n" \
" stex.w %0, (%3) \n" \
" bez %0, 1b \n" \
: "=&r" (tmp), "=&r" (ret) \
: "r" (i), "r"(&v->counter) \
: "memory"); \
smp_mb(); \
\
return ret; \
}
#else /* CONFIG_CPU_HAS_LDSTEX */
#include <linux/irqflags.h>
#define __atomic_add_unless __atomic_add_unless
static inline int __atomic_add_unless(atomic_t *v, int a, int u)
{
unsigned long tmp, ret, flags;
raw_local_irq_save(flags);
asm volatile (
" ldw %0, (%3) \n"
" mov %1, %0 \n"
" cmpne %0, %4 \n"
" bf 2f \n"
" add %0, %2 \n"
" stw %0, (%3) \n"
"2: \n"
: "=&r" (tmp), "=&r" (ret)
: "r" (a), "r"(&v->counter), "r"(u)
: "memory");
raw_local_irq_restore(flags);
return ret;
}
#define ATOMIC_OP(op, c_op) \
static inline void atomic_##op(int i, atomic_t *v) \
{ \
unsigned long tmp, flags; \
\
raw_local_irq_save(flags); \
\
asm volatile ( \
" ldw %0, (%2) \n" \
" " #op " %0, %1 \n" \
" stw %0, (%2) \n" \
: "=&r" (tmp) \
: "r" (i), "r"(&v->counter) \
: "memory"); \
\
raw_local_irq_restore(flags); \
}
#define ATOMIC_OP_RETURN(op, c_op) \
static inline int atomic_##op##_return(int i, atomic_t *v) \
{ \
unsigned long tmp, ret, flags; \
\
raw_local_irq_save(flags); \
\
asm volatile ( \
" ldw %0, (%3) \n" \
" " #op " %0, %2 \n" \
" stw %0, (%3) \n" \
" mov %1, %0 \n" \
: "=&r" (tmp), "=&r" (ret) \
: "r" (i), "r"(&v->counter) \
: "memory"); \
\
raw_local_irq_restore(flags); \
\
return ret; \
}
#define ATOMIC_FETCH_OP(op, c_op) \
static inline int atomic_fetch_##op(int i, atomic_t *v) \
{ \
unsigned long tmp, ret, flags; \
\
raw_local_irq_save(flags); \
\
asm volatile ( \
" ldw %0, (%3) \n" \
" mov %1, %0 \n" \
" " #op " %0, %2 \n" \
" stw %0, (%3) \n" \
: "=&r" (tmp), "=&r" (ret) \
: "r" (i), "r"(&v->counter) \
: "memory"); \
\
raw_local_irq_restore(flags); \
\
return ret; \
}
#endif /* CONFIG_CPU_HAS_LDSTEX */
#define atomic_add_return atomic_add_return
ATOMIC_OP_RETURN(add, +)
#define atomic_sub_return atomic_sub_return
ATOMIC_OP_RETURN(sub, -)
#define atomic_fetch_add atomic_fetch_add
ATOMIC_FETCH_OP(add, +)
#define atomic_fetch_sub atomic_fetch_sub
ATOMIC_FETCH_OP(sub, -)
#define atomic_fetch_and atomic_fetch_and
ATOMIC_FETCH_OP(and, &)
#define atomic_fetch_or atomic_fetch_or
ATOMIC_FETCH_OP(or, |)
#define atomic_fetch_xor atomic_fetch_xor
ATOMIC_FETCH_OP(xor, ^)
#define atomic_and atomic_and
ATOMIC_OP(and, &)
#define atomic_or atomic_or
ATOMIC_OP(or, |)
#define atomic_xor atomic_xor
ATOMIC_OP(xor, ^)
#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP_RETURN
#undef ATOMIC_OP
#include <asm-generic/atomic.h>
#endif /* __ASM_CSKY_ATOMIC_H */
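All of the ldex.w/stex.w blocks above are load-locked/store-conditional retry loops: stex.w stores only if the reservation taken by the matching ldex.w is still intact and writes a success flag back into its source register, so `bez %0, 1b` retries on failure. A host-side analogue of __atomic_add_unless() in C11 atomics (illustrative, not the kernel API; atomic_add_unless_ref is a hypothetical name):

#include <stdatomic.h>

static int atomic_add_unless_ref(atomic_int *v, int a, int u)
{
	int old = atomic_load_explicit(v, memory_order_relaxed);

	do {
		if (old == u)	/* cmpne/bf: leave the value alone */
			break;
	} while (!atomic_compare_exchange_weak(v, &old, old + a));
	return old;		/* the pre-update value, like %1 above */
}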
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_BARRIER_H
#define __ASM_CSKY_BARRIER_H
#ifndef __ASSEMBLY__
#define nop() asm volatile ("nop\n":::"memory")
/*
* sync: completion barrier
* sync.s: completion barrier and shareable to other cores
* sync.i: completion barrier with flush cpu pipeline
* sync.is: completion barrier with flush cpu pipeline and shareable to
* other cores
*
* bar.brwarw: ordering barrier for all load/store instructions before it
* bar.brwarws: ordering barrier for all load/store instructions before it
* and shareable to other cores
* bar.brar: ordering barrier for all load instructions before it
* bar.brars: ordering barrier for all load instructions before it
* and shareable to other cores
* bar.bwaw: ordering barrier for all store instructions before it
* bar.bwaws: ordering barrier for all store instructions before it
* and shareable to other cores
*/
#ifdef CONFIG_CPU_HAS_CACHEV2
#define mb() asm volatile ("bar.brwarw\n":::"memory")
#define rmb() asm volatile ("bar.brar\n":::"memory")
#define wmb() asm volatile ("bar.bwaw\n":::"memory")
#ifdef CONFIG_SMP
#define __smp_mb() asm volatile ("bar.brwarws\n":::"memory")
#define __smp_rmb() asm volatile ("bar.brars\n":::"memory")
#define __smp_wmb() asm volatile ("bar.bwaws\n":::"memory")
#endif /* CONFIG_SMP */
#define sync_is() asm volatile ("sync.is\n":::"memory")
#else /* !CONFIG_CPU_HAS_CACHEV2 */
#define mb() asm volatile ("sync\n":::"memory")
#endif
#include <asm-generic/barrier.h>
#endif /* __ASSEMBLY__ */
#endif /* __ASM_CSKY_BARRIER_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_BITOPS_H
#define __ASM_CSKY_BITOPS_H
#include <linux/compiler.h>
#include <asm/barrier.h>
/*
* asm-generic/bitops/ffs.h
*/
static inline int ffs(int x)
{
if (!x)
return 0;
asm volatile (
"brev %0\n"
"ff1 %0\n"
"addi %0, 1\n"
: "=&r"(x)
: "0"(x));
return x;
}
/*
* asm-generic/bitops/__ffs.h
*/
static __always_inline unsigned long __ffs(unsigned long x)
{
asm volatile (
"brev %0\n"
"ff1 %0\n"
: "=&r"(x)
: "0"(x));
return x;
}
/*
* asm-generic/bitops/fls.h
*/
static __always_inline int fls(int x)
{
asm volatile(
"ff1 %0\n"
: "=&r"(x)
: "0"(x));
return (32 - x);
}
/*
* asm-generic/bitops/__fls.h
*/
static __always_inline unsigned long __fls(unsigned long x)
{
return fls(x) - 1;
}
#include <asm-generic/bitops/ffz.h>
#include <asm-generic/bitops/fls64.h>
#include <asm-generic/bitops/find.h>
#ifndef _LINUX_BITOPS_H
#error only <linux/bitops.h> can be included directly
#endif
#include <asm-generic/bitops/sched.h>
#include <asm-generic/bitops/hweight.h>
#include <asm-generic/bitops/lock.h>
#include <asm-generic/bitops/atomic.h>
/*
 * FIXME: only the atomic bitops behave correctly here, so the
 * non-atomic __clear_bit is aliased to the atomic clear_bit below.
 */
#include <asm-generic/bitops/non-atomic.h>
#define __clear_bit(nr, vaddr) clear_bit(nr, vaddr)
#include <asm-generic/bitops/le.h>
#include <asm-generic/bitops/ext2-atomic.h>
#endif /* __ASM_CSKY_BITOPS_H */
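ff1 locates the first set bit from the most-significant end, so fls() above is 32 - ff1(x), and reversing the bits first (brev) turns the same instruction into a trailing-zero count for __ffs() and ffs(). Builtin-based reference (an analogue; it guards x == 0 explicitly, where the kernel's fls() relies on ff1(0) yielding 32):

static int ffs_ref(int x)
{
	if (x == 0)
		return 0;
	return __builtin_ctz((unsigned int)x) + 1;	/* brev + ff1 + addi 1 */
}

static int fls_ref(int x)
{
	if (x == 0)
		return 0;	/* __builtin_clz(0) is undefined */
	return 32 - __builtin_clz((unsigned int)x);	/* 32 - ff1 */
}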
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_BUG_H
#define __ASM_CSKY_BUG_H
#include <linux/compiler.h>
#include <linux/const.h>
#include <linux/types.h>
#define BUG() \
do { \
asm volatile ("bkpt\n"); \
unreachable(); \
} while (0)
#define HAVE_ARCH_BUG
#include <asm-generic/bug.h>
struct pt_regs;
void die_if_kernel(char *str, struct pt_regs *regs, int nr);
void show_regs(struct pt_regs *regs);
#endif /* __ASM_CSKY_BUG_H */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_CSKY_CACHE_H
#define __ASM_CSKY_CACHE_H
/* bytes per L1 cache line */
#define L1_CACHE_SHIFT CONFIG_L1_CACHE_SHIFT
#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
#define ARCH_DMA_MINALIGN L1_CACHE_BYTES
#ifndef __ASSEMBLY__
void dcache_wb_line(unsigned long start);
void icache_inv_range(unsigned long start, unsigned long end);
void icache_inv_all(void);
void dcache_wb_range(unsigned long start, unsigned long end);
void dcache_wbinv_all(void);
void cache_wbinv_range(unsigned long start, unsigned long end);
void cache_wbinv_all(void);
void dma_wbinv_range(unsigned long start, unsigned long end);
void dma_wb_range(unsigned long start, unsigned long end);
#endif
#endif /* __ASM_CSKY_CACHE_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_CACHEFLUSH_H
#define __ASM_CSKY_CACHEFLUSH_H
#include <abi/cacheflush.h>
#endif /* __ASM_CSKY_CACHEFLUSH_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_CHECKSUM_H
#define __ASM_CSKY_CHECKSUM_H
#include <linux/in6.h>
#include <asm/byteorder.h>
static inline __sum16 csum_fold(__wsum csum)
{
u32 tmp;
asm volatile(
"mov %1, %0\n"
"rori %0, 16\n"
"addu %0, %1\n"
"lsri %0, 16\n"
: "=r"(csum), "=r"(tmp)
: "0"(csum));
return (__force __sum16) ~csum;
}
#define csum_fold csum_fold
static inline __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
unsigned short len, unsigned short proto, __wsum sum)
{
asm volatile(
"clrc\n"
"addc %0, %1\n"
"addc %0, %2\n"
"addc %0, %3\n"
"inct %0\n"
: "=r"(sum)
: "r"((__force u32)saddr), "r"((__force u32)daddr),
#ifdef __BIG_ENDIAN
"r"(proto + len),
#else
"r"((proto + len) << 8),
#endif
"0" ((__force unsigned long)sum)
: "cc");
return sum;
}
#define csum_tcpudp_nofold csum_tcpudp_nofold
#include <asm-generic/checksum.h>
#endif /* __ASM_CSKY_CHECKSUM_H */
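csum_fold() above squeezes a 32-bit partial checksum into 16 bits: the rori-by-16 plus add leaves (high half + low half + carry) in the upper 16 bits, which the final lsri extracts before complementing. The conventional two-step C fold is equivalent (reference sketch; csum_fold_ref is a hypothetical name):

#include <stdint.h>

static uint16_t csum_fold_ref(uint32_t csum)
{
	csum = (csum & 0xffff) + (csum >> 16);	/* fold the halves  */
	csum += csum >> 16;			/* absorb the carry */
	return (uint16_t)~csum;
}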
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_CSKY_CMPXCHG_H
#define __ASM_CSKY_CMPXCHG_H
#ifdef CONFIG_CPU_HAS_LDSTEX
#include <asm/barrier.h>
extern void __bad_xchg(void);
#define __xchg(new, ptr, size) \
({ \
__typeof__(ptr) __ptr = (ptr); \
__typeof__(new) __new = (new); \
__typeof__(*(ptr)) __ret; \
unsigned long tmp; \
switch (size) { \
case 4: \
smp_mb(); \
asm volatile ( \
"1: ldex.w %0, (%3) \n" \
" mov %1, %2 \n" \
" stex.w %1, (%3) \n" \
" bez %1, 1b \n" \
: "=&r" (__ret), "=&r" (tmp) \
: "r" (__new), "r"(__ptr) \
:); \
smp_mb(); \
break; \
default: \
__bad_xchg(); \
} \
__ret; \
})
#define xchg(ptr, x) (__xchg((x), (ptr), sizeof(*(ptr))))
#define __cmpxchg(ptr, old, new, size) \
({ \
__typeof__(ptr) __ptr = (ptr); \
__typeof__(new) __new = (new); \
__typeof__(new) __tmp; \
__typeof__(old) __old = (old); \
__typeof__(*(ptr)) __ret; \
switch (size) { \
case 4: \
smp_mb(); \
asm volatile ( \
"1: ldex.w %0, (%3) \n" \
" cmpne %0, %4 \n" \
" bt 2f \n" \
" mov %1, %2 \n" \
" stex.w %1, (%3) \n" \
" bez %1, 1b \n" \
"2: \n" \
: "=&r" (__ret), "=&r" (__tmp) \
: "r" (__new), "r"(__ptr), "r"(__old) \
:); \
smp_mb(); \
break; \
default: \
__bad_xchg(); \
} \
__ret; \
})
#define cmpxchg(ptr, o, n) \
(__cmpxchg((ptr), (o), (n), sizeof(*(ptr))))
#else
#include <asm-generic/cmpxchg.h>
#endif
#endif /* __ASM_CSKY_CMPXCHG_H */
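Only 4-byte operands are supported; any other size resolves to the undeclared __bad_xchg() and fails at link time. A typical caller of the cmpxchg above (hypothetical example):

/* Atomically take a lock word from 0 to 1. */
static int try_lock(unsigned int *lock)
{
	return cmpxchg(lock, 0, 1) == 0;	/* saw 0 => we own it */
}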
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_ELF_H
#define __ASM_CSKY_ELF_H
#include <asm/ptrace.h>
#include <abi/regdef.h>
#define ELF_ARCH 252
/* CSKY Relocations */
#define R_CSKY_NONE 0
#define R_CSKY_32 1
#define R_CSKY_PCIMM8BY4 2
#define R_CSKY_PCIMM11BY2 3
#define R_CSKY_PCIMM4BY2 4
#define R_CSKY_PC32 5
#define R_CSKY_PCRELJSR_IMM11BY2 6
#define R_CSKY_GNU_VTINHERIT 7
#define R_CSKY_GNU_VTENTRY 8
#define R_CSKY_RELATIVE 9
#define R_CSKY_COPY 10
#define R_CSKY_GLOB_DAT 11
#define R_CSKY_JUMP_SLOT 12
#define R_CSKY_ADDR_HI16 24
#define R_CSKY_ADDR_LO16 25
#define R_CSKY_PCRELJSR_IMM26BY2 40
typedef unsigned long elf_greg_t;
typedef struct user_fp elf_fpregset_t;
#define ELF_NGREG (sizeof(struct pt_regs) / sizeof(elf_greg_t))
typedef elf_greg_t elf_gregset_t[ELF_NGREG];
/*
* This is used to ensure we don't load something for the wrong architecture.
*/
#define elf_check_arch(x) ((x)->e_machine == ELF_ARCH)
/*
* These are used to set parameters in the core dumps.
*/
#define USE_ELF_CORE_DUMP
#define ELF_EXEC_PAGESIZE 4096
#define ELF_CLASS ELFCLASS32
#define ELF_PLAT_INIT(_r, load_addr) { _r->a0 = 0; }
#ifdef __cskyBE__
#define ELF_DATA ELFDATA2MSB
#else
#define ELF_DATA ELFDATA2LSB
#endif
/*
* This is the location that an ET_DYN program is loaded if exec'ed. Typical
* use of this is to invoke "./ld.so someprog" to test out a new version of
* the loader. We need to make sure that it is out of the way of the program
* that it will "exec", and that there is sufficient room for the brk.
*/
#define ELF_ET_DYN_BASE 0x0UL
#include <abi/elf.h>
/* Similar, but for a thread other than current. */
struct task_struct;
extern int dump_task_regs(struct task_struct *tsk, elf_gregset_t *elf_regs);
#define ELF_CORE_COPY_TASK_REGS(tsk, elf_regs) dump_task_regs(tsk, elf_regs)
#define ELF_HWCAP (0)
/*
* This yields a string that ld.so will use to load implementation specific
* libraries for optimization. This is more specific in intent than poking
* at uname or /proc/cpuinfo.
*/
#define ELF_PLATFORM (NULL)
#define SET_PERSONALITY(ex) set_personality(PER_LINUX)
#define ARCH_HAS_SETUP_ADDITIONAL_PAGES 1
struct linux_binprm;
extern int arch_setup_additional_pages(struct linux_binprm *bprm,
int uses_interp);
#endif /* __ASM_CSKY_ELF_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_FIXMAP_H
#define __ASM_CSKY_FIXMAP_H
#include <asm/page.h>
#ifdef CONFIG_HIGHMEM
#include <linux/threads.h>
#include <asm/kmap_types.h>
#endif
enum fixed_addresses {
#ifdef CONFIG_HIGHMEM
FIX_KMAP_BEGIN,
FIX_KMAP_END = FIX_KMAP_BEGIN + (KM_TYPE_NR * NR_CPUS) - 1,
#endif
__end_of_fixed_addresses
};
#define FIXADDR_TOP 0xffffc000
#define FIXADDR_SIZE (__end_of_fixed_addresses << PAGE_SHIFT)
#define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE)
#include <asm-generic/fixmap.h>
#endif /* __ASM_CSKY_FIXMAP_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_HIGHMEM_H
#define __ASM_CSKY_HIGHMEM_H
#ifdef __KERNEL__
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/uaccess.h>
#include <asm/kmap_types.h>
#include <asm/cache.h>
/* undef for production */
#define HIGHMEM_DEBUG 1
/* declarations for highmem.c */
extern unsigned long highstart_pfn, highend_pfn;
extern pte_t *pkmap_page_table;
/*
* Right now we initialize only a single pte table. It can be extended
* easily, subsequent pte tables have to be allocated in one physical
* chunk of RAM.
*/
#define LAST_PKMAP 1024
#define LAST_PKMAP_MASK (LAST_PKMAP-1)
#define PKMAP_NR(virt) ((virt-PKMAP_BASE) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))
extern void *kmap_high(struct page *page);
extern void kunmap_high(struct page *page);
extern void *kmap(struct page *page);
extern void kunmap(struct page *page);
extern void *kmap_atomic(struct page *page);
extern void __kunmap_atomic(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);
extern struct page *kmap_atomic_to_page(void *ptr);
#define flush_cache_kmaps() do {} while (0)
extern void kmap_init(void);
#define kmap_prot PAGE_KERNEL
#endif /* __KERNEL__ */
#endif /* __ASM_CSKY_HIGHMEM_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_IO_H
#define __ASM_CSKY_IO_H
#include <abi/pgtable-bits.h>
#include <linux/types.h>
#include <linux/version.h>
extern void __iomem *ioremap(phys_addr_t offset, size_t size);
extern void iounmap(void *addr);
extern int remap_area_pages(unsigned long address, phys_addr_t phys_addr,
size_t size, unsigned long flags);
#define ioremap_nocache(phy, sz) ioremap(phy, sz)
#define ioremap_wc ioremap_nocache
#define ioremap_wt ioremap_nocache
#include <asm-generic/io.h>
#endif /* __ASM_CSKY_IO_H */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_CSKY_IRQFLAGS_H
#define __ASM_CSKY_IRQFLAGS_H
#include <abi/reg_ops.h>
static inline unsigned long arch_local_irq_save(void)
{
unsigned long flags;
flags = mfcr("psr");
asm volatile("psrclr ie\n":::"memory");
return flags;
}
#define arch_local_irq_save arch_local_irq_save
static inline void arch_local_irq_enable(void)
{
asm volatile("psrset ee, ie\n":::"memory");
}
#define arch_local_irq_enable arch_local_irq_enable
static inline void arch_local_irq_disable(void)
{
asm volatile("psrclr ie\n":::"memory");
}
#define arch_local_irq_disable arch_local_irq_disable
static inline unsigned long arch_local_save_flags(void)
{
return mfcr("psr");
}
#define arch_local_save_flags arch_local_save_flags
static inline void arch_local_irq_restore(unsigned long flags)
{
mtcr("psr", flags);
}
#define arch_local_irq_restore arch_local_irq_restore
static inline int arch_irqs_disabled_flags(unsigned long flags)
{
return !(flags & (1<<6));
}
#define arch_irqs_disabled_flags arch_irqs_disabled_flags
#include <asm-generic/irqflags.h>
#endif /* __ASM_CSKY_IRQFLAGS_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_MMU_H
#define __ASM_CSKY_MMU_H
typedef struct {
unsigned long asid[NR_CPUS];
void *vdso;
} mm_context_t;
#endif /* __ASM_CSKY_MMU_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_MMU_CONTEXT_H
#define __ASM_CSKY_MMU_CONTEXT_H
#include <asm-generic/mm_hooks.h>
#include <asm/setup.h>
#include <asm/page.h>
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
#include <linux/errno.h>
#include <linux/sched.h>
#include <abi/ckmmu.h>
static inline void tlbmiss_handler_setup_pgd(unsigned long pgd, bool kernel)
{
pgd &= ~(1<<31);
pgd += PHYS_OFFSET;
pgd |= 1;
setup_pgd(pgd, kernel);
}
#define TLBMISS_HANDLER_SETUP_PGD(pgd) \
tlbmiss_handler_setup_pgd((unsigned long)pgd, 0)
#define TLBMISS_HANDLER_SETUP_PGD_KERNEL(pgd) \
tlbmiss_handler_setup_pgd((unsigned long)pgd, 1)
static inline unsigned long tlb_get_pgd(void)
{
return ((get_pgd()|(1<<31)) - PHYS_OFFSET) & ~1;
}
#define cpu_context(cpu, mm) ((mm)->context.asid[cpu])
#define cpu_asid(cpu, mm) (cpu_context((cpu), (mm)) & ASID_MASK)
#define asid_cache(cpu) (cpu_data[cpu].asid_cache)
#define ASID_FIRST_VERSION (1 << CONFIG_CPU_ASID_BITS)
#define ASID_INC 0x1
#define ASID_MASK (ASID_FIRST_VERSION - 1)
#define ASID_VERSION_MASK ~ASID_MASK
#define destroy_context(mm) do {} while (0)
#define enter_lazy_tlb(mm, tsk) do {} while (0)
#define deactivate_mm(tsk, mm) do {} while (0)
/*
 * All upper bits unused by the hardware ASID are treated as a
 * software version extension.
 */
static inline void
get_new_mmu_context(struct mm_struct *mm, unsigned long cpu)
{
unsigned long asid = asid_cache(cpu);
asid += ASID_INC;
if (!(asid & ASID_MASK)) {
flush_tlb_all(); /* start new asid cycle */
if (!asid) /* fix version if needed */
asid = ASID_FIRST_VERSION;
}
cpu_context(cpu, mm) = asid_cache(cpu) = asid;
}
/*
* Initialize the context related info for a new mm_struct
* instance.
*/
static inline int
init_new_context(struct task_struct *tsk, struct mm_struct *mm)
{
int i;
for_each_online_cpu(i)
cpu_context(i, mm) = 0;
return 0;
}
static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
struct task_struct *tsk)
{
unsigned int cpu = smp_processor_id();
unsigned long flags;
local_irq_save(flags);
/* Check if our ASID is of an older version and thus invalid */
if ((cpu_context(cpu, next) ^ asid_cache(cpu)) & ASID_VERSION_MASK)
get_new_mmu_context(next, cpu);
write_mmu_entryhi(cpu_asid(cpu, next));
TLBMISS_HANDLER_SETUP_PGD(next->pgd);
/*
* Mark current->active_mm as not "active" anymore.
* We don't want to mislead possible IPI tlb flush routines.
*/
cpumask_clear_cpu(cpu, mm_cpumask(prev));
cpumask_set_cpu(cpu, mm_cpumask(next));
local_irq_restore(flags);
}
/*
* After we have set current->mm to a new value, this activates
* the context for the new mm so we see the new mappings.
*/
static inline void
activate_mm(struct mm_struct *prev, struct mm_struct *next)
{
unsigned long flags;
int cpu = smp_processor_id();
local_irq_save(flags);
/* Unconditionally get a new ASID. */
get_new_mmu_context(next, cpu);
write_mmu_entryhi(cpu_asid(cpu, next));
TLBMISS_HANDLER_SETUP_PGD(next->pgd);
/* mark mmu ownership change */
cpumask_clear_cpu(cpu, mm_cpumask(prev));
cpumask_set_cpu(cpu, mm_cpumask(next));
local_irq_restore(flags);
}
/*
* If mm is currently active_mm, we can't really drop it. Instead,
* we will get a new one for it.
*/
static inline void
drop_mmu_context(struct mm_struct *mm, unsigned int cpu)
{
unsigned long flags;
local_irq_save(flags);
if (cpumask_test_cpu(cpu, mm_cpumask(mm))) {
get_new_mmu_context(mm, cpu);
write_mmu_entryhi(cpu_asid(cpu, mm));
} else {
/* will get a new context next time */
cpu_context(cpu, mm) = 0;
}
local_irq_restore(flags);
}
#endif /* __ASM_CSKY_MMU_CONTEXT_H */
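The bits above ASID_MASK act as a generation counter: when the hardware ASID field wraps, get_new_mmu_context() flushes the TLB, and the bumped version makes every stale per-mm ASID fail the ASID_VERSION_MASK compare in switch_mm(). A tiny host-side simulation of the rollover (hypothetical names, 8 ASID bits assumed):

#include <stdio.h>

#define ASID_BITS		8
#define ASID_MASK		((1u << ASID_BITS) - 1)
#define ASID_FIRST_VERSION	(1u << ASID_BITS)

static unsigned int asid_cache_sim = 0x1fe;

static unsigned int new_asid(void)
{
	unsigned int asid = asid_cache_sim + 1;	/* ASID_INC */

	if (!(asid & ASID_MASK)) {		/* hardware field wrapped */
		printf("flush_tlb_all(), new version %#x\n", asid);
		if (!asid)			/* 32-bit counter wrapped */
			asid = ASID_FIRST_VERSION;
	}
	return asid_cache_sim = asid;
}

int main(void)
{
	/* Prints 0x1ff, then flushes and continues with 0x200, 0x201. */
	for (int i = 0; i < 3; i++)
		printf("asid %#x\n", new_asid());
	return 0;
}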
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_CSKY_PAGE_H
#define __ASM_CSKY_PAGE_H
#include <asm/setup.h>
#include <asm/cache.h>
#include <linux/const.h>
/*
* PAGE_SHIFT determines the page size
*/
#define PAGE_SHIFT 12
#define PAGE_SIZE (_AC(1, UL) << PAGE_SHIFT)
#define PAGE_MASK (~(PAGE_SIZE - 1))
#define THREAD_SIZE (PAGE_SIZE * 2)
#define THREAD_MASK (~(THREAD_SIZE - 1))
#define THREAD_SHIFT (PAGE_SHIFT + 1)
/*
* NOTE: virtual isn't really correct, actually it should be the offset into the
* memory node, but we have no highmem, so that works for now.
* TODO: implement (fast) pfn<->pgdat_idx conversion functions, this makes lots
* of the shifts unnecessary.
*/
#ifndef __ASSEMBLY__
#include <linux/pfn.h>
#define virt_to_pfn(kaddr) (__pa(kaddr) >> PAGE_SHIFT)
#define pfn_to_virt(pfn) __va((pfn) << PAGE_SHIFT)
#define virt_addr_valid(kaddr) ((void *)(kaddr) >= (void *)PAGE_OFFSET && \
(void *)(kaddr) < high_memory)
#define pfn_valid(pfn) ((pfn) >= ARCH_PFN_OFFSET && ((pfn) - ARCH_PFN_OFFSET) < max_mapnr)
extern void *memset(void *dest, int c, size_t l);
extern void *memcpy(void *to, const void *from, size_t l);
#define clear_page(page) memset((page), 0, PAGE_SIZE)
#define copy_page(to, from) memcpy((to), (from), PAGE_SIZE)
#define page_to_phys(page) (page_to_pfn(page) << PAGE_SHIFT)
#define phys_to_page(paddr) (pfn_to_page(PFN_DOWN(paddr)))
struct page;
#include <abi/page.h>
struct vm_area_struct;
/*
* These are used to make use of C type-checking..
*/
typedef struct { unsigned long pte_low; } pte_t;
#define pte_val(x) ((x).pte_low)
typedef struct { unsigned long pgd; } pgd_t;
typedef struct { unsigned long pgprot; } pgprot_t;
typedef struct page *pgtable_t;
#define pgd_val(x) ((x).pgd)
#define pgprot_val(x) ((x).pgprot)
#define ptep_buddy(x) ((pte_t *)((unsigned long)(x) ^ sizeof(pte_t)))
#define __pte(x) ((pte_t) { (x) })
#define __pgd(x) ((pgd_t) { (x) })
#define __pgprot(x) ((pgprot_t) { (x) })
#endif /* !__ASSEMBLY__ */
#define PHYS_OFFSET (CONFIG_RAM_BASE & ~(LOWMEM_LIMIT - 1))
#define PHYS_OFFSET_OFFSET (CONFIG_RAM_BASE & (LOWMEM_LIMIT - 1))
#define ARCH_PFN_OFFSET PFN_DOWN(CONFIG_RAM_BASE)
#define PAGE_OFFSET 0x80000000
#define LOWMEM_LIMIT 0x40000000
#define __pa(x) ((unsigned long)(x) - PAGE_OFFSET + PHYS_OFFSET)
#define __va(x) ((void *)((unsigned long)(x) + PAGE_OFFSET - \
PHYS_OFFSET))
#define __pa_symbol(x) __pa(RELOC_HIDE((unsigned long)(x), 0))
#define MAP_NR(x) PFN_DOWN((unsigned long)(x) - PAGE_OFFSET - \
PHYS_OFFSET_OFFSET)
#define virt_to_page(x) (mem_map + MAP_NR(x))
#define VM_DATA_DEFAULT_FLAGS (VM_READ | VM_WRITE | VM_EXEC | \
VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
/*
* main RAM and kernel working space are coincident at 0x80000000, but to make
* life more interesting, there's also an uncached virtual shadow at 0xb0000000
* - these mappings are fixed in the MMU
*/
#define pfn_to_kaddr(x) __va(PFN_PHYS(x))
#include <asm-generic/memory_model.h>
#include <asm-generic/getorder.h>
#endif /* __ASM_CSKY_PAGE_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_PGALLOC_H
#define __ASM_CSKY_PGALLOC_H
#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/sched.h>
static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
pte_t *pte)
{
set_pmd(pmd, __pmd(__pa(pte)));
}
static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
pgtable_t pte)
{
set_pmd(pmd, __pmd(__pa(page_address(pte))));
}
#define pmd_pgtable(pmd) pmd_page(pmd)
extern void pgd_init(unsigned long *p);
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
unsigned long address)
{
pte_t *pte;
unsigned long *kaddr, i;
pte = (pte_t *) __get_free_pages(GFP_KERNEL | __GFP_RETRY_MAYFAIL,
PTE_ORDER);
kaddr = (unsigned long *)pte;
if (address & 0x80000000)
for (i = 0; i < (PAGE_SIZE/4); i++)
*(kaddr + i) = 0x1;
else
clear_page(kaddr);
return pte;
}
static inline struct page *pte_alloc_one(struct mm_struct *mm,
unsigned long address)
{
struct page *pte;
unsigned long *kaddr, i;
pte = alloc_pages(GFP_KERNEL | __GFP_RETRY_MAYFAIL, PTE_ORDER);
if (pte) {
kaddr = kmap_atomic(pte);
if (address & 0x80000000) {
for (i = 0; i < (PAGE_SIZE/4); i++)
*(kaddr + i) = 0x1;
} else
clear_page(kaddr);
kunmap_atomic(kaddr);
pgtable_page_ctor(pte);
}
return pte;
}
static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
{
free_pages((unsigned long)pte, PTE_ORDER);
}
static inline void pte_free(struct mm_struct *mm, pgtable_t pte)
{
pgtable_page_dtor(pte);
__free_pages(pte, PTE_ORDER);
}
static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
{
free_pages((unsigned long)pgd, PGD_ORDER);
}
static inline pgd_t *pgd_alloc(struct mm_struct *mm)
{
pgd_t *ret;
pgd_t *init;
ret = (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_ORDER);
if (ret) {
init = pgd_offset(&init_mm, 0UL);
pgd_init((unsigned long *)ret);
memcpy(ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD,
(PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
/* prevent out-of-order execution */
smp_mb();
#ifdef CONFIG_CPU_NEED_TLBSYNC
dcache_wb_range((unsigned int)ret,
(unsigned int)(ret + PTRS_PER_PGD));
#endif
}
return ret;
}
#define __pte_free_tlb(tlb, pte, address) \
do { \
pgtable_page_dtor(pte); \
tlb_remove_page(tlb, pte); \
} while (0)
#define check_pgt_cache() do {} while (0)
extern void pagetable_init(void);
extern void pre_mmu_init(void);
extern void pre_trap_init(void);
#endif /* __ASM_CSKY_PGALLOC_H */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_REGS_OPS_H
#define __ASM_REGS_OPS_H
#define mfcr(reg) \
({ \
unsigned int tmp; \
asm volatile( \
"mfcr %0, "reg"\n" \
: "=r"(tmp) \
: \
: "memory"); \
tmp; \
})
#define mtcr(reg, val) \
({ \
asm volatile( \
"mtcr %0, "reg"\n" \
: \
: "r"(val) \
: "memory"); \
})
#endif /* __ASM_REGS_OPS_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_SEGMENT_H
#define __ASM_CSKY_SEGMENT_H
typedef struct {
unsigned long seg;
} mm_segment_t;
#define KERNEL_DS ((mm_segment_t) { 0xFFFFFFFF })
#define get_ds() KERNEL_DS
#define USER_DS ((mm_segment_t) { 0x80000000UL })
#define get_fs() (current_thread_info()->addr_limit)
#define set_fs(x) (current_thread_info()->addr_limit = (x))
#define segment_eq(a, b) ((a).seg == (b).seg)
#endif /* __ASM_CSKY_SEGMENT_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef __ASM_CSKY_SHMPARAM_H
#define __ASM_CSKY_SHMPARAM_H
#define SHMLBA (4 * PAGE_SIZE)
#define __ARCH_FORCE_SHMLBA
#endif /* __ASM_CSKY_SHMPARAM_H */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_CSKY_SMP_H
#define __ASM_CSKY_SMP_H
#include <linux/cpumask.h>
#include <linux/irqreturn.h>
#include <linux/threads.h>
#ifdef CONFIG_SMP
void __init setup_smp(void);
void __init setup_smp_ipi(void);
void arch_send_call_function_ipi_mask(struct cpumask *mask);
void arch_send_call_function_single_ipi(int cpu);
void __init set_send_ipi(void (*func)(const struct cpumask *mask), int irq);
#define raw_smp_processor_id() (current_thread_info()->cpu)
#endif /* CONFIG_SMP */
#endif /* __ASM_CSKY_SMP_H */
/* SPDX-License-Identifier: GPL-2.0 */
// Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
#ifndef _CSKY_STRING_MM_H_
#define _CSKY_STRING_MM_H_
#ifndef __ASSEMBLY__
#include <linux/types.h>
#include <linux/compiler.h>
#include <abi/string.h>
#endif
#endif /* _CSKY_STRING_MM_H_ */