Commit 3bd6e585 authored by Linus Torvalds

Merge tag 'asm-generic-6.0' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic

Pull asm-generic updates from Arnd Bergmann:
 "There are three independent sets of changes:

   - Sai Prakash Ranjan adds tracing support to the asm-generic version
     of the MMIO accessors, which is intended to help understand
     problems with device drivers and has been part of Qualcomm's vendor
     kernels for many years

   - A patch from Sebastian Siewior to rework the handling of IRQ stacks
     in softirqs across architectures, which is needed for enabling
     PREEMPT_RT

   - The last patch to remove the CONFIG_VIRT_TO_BUS option and some of
     the code behind that, after the last users of this old interface
     made it in through the netdev, scsi, media and staging trees"

* tag 'asm-generic-6.0' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic:
  uapi: asm-generic: fcntl: Fix typo 'the the' in comment
  arch/*/: remove CONFIG_VIRT_TO_BUS
  soc: qcom: geni: Disable MMIO tracing for GENI SE
  serial: qcom_geni_serial: Disable MMIO tracing for geni serial
  asm-generic/io: Add logging support for MMIO accessors
  KVM: arm64: Add a flag to disable MMIO trace for nVHE KVM
  lib: Add register read/write tracing support
  drm/meson: Fix overflow implicit truncation warnings
  irqchip/tegra: Fix overflow implicit truncation warnings
  coresight: etm4x: Use asm-generic IO memory barriers
  arm64: io: Use asm-generic high level MMIO accessors
  arch/*: Disable softirq stacks on PREEMPT_RT.
parents fad235ed 6f05e014
==========================================================
How to access I/O mapped memory from within device drivers
==========================================================
:Author: Linus
.. warning::

    The virt_to_bus() and bus_to_virt() functions have been
    superseded by the functionality provided by the PCI DMA interface
    (see Documentation/core-api/dma-api-howto.rst). They continue
    to be documented below for historical purposes, but new code
    must not use them. --davidm 00/12/12
::

    [ This is a mail message in response to a query on IO mapping, thus the
      strange format for a "document" ]
The AHA-1542 is a bus-master device, and your patch makes the driver give the
controller the physical address of the buffers, which is correct on x86
(because all bus master devices see the physical memory mappings directly).
However, on many setups, there are actually **three** different ways of looking
at memory addresses, and in this case we actually want the third, the
so-called "bus address".
Essentially, the three ways of addressing memory are (this is "real memory",
that is, normal RAM--see later about other details):
- CPU untranslated. This is the "physical" address. Physical address
0 is what the CPU sees when it drives zeroes on the memory bus.
- CPU translated address. This is the "virtual" address, and is
completely internal to the CPU itself with the CPU doing the appropriate
translations into "CPU untranslated".
- bus address. This is the address of memory as seen by OTHER devices,
not the CPU. Now, in theory there could be many different bus
addresses, with each device seeing memory in some device-specific way, but
happily most hardware designers aren't actively trying to make
things more complex than necessary, so you can assume that all
external hardware sees the memory the same way.
Now, on normal PCs the bus address is exactly the same as the physical
address, and things are very simple indeed. However, they are that simple
because the memory and the devices share the same address space, and that is
not necessarily true on other PCI/ISA setups.
Now, just as an example, on the PReP (PowerPC Reference Platform), the
CPU sees a memory map something like this (this is from memory)::
    0-2 GB     "real memory"
    2 GB-3 GB  "system IO" (inb/out and similar accesses on x86)
    3 GB-4 GB  "IO memory" (shared memory over the IO bus)
Now, that looks simple enough. However, when you look at the same thing from
the viewpoint of the devices, you have the reverse, and the physical memory
address 0 actually shows up as address 2 GB for any IO master.
So when the CPU wants any bus master to write to physical memory 0, it
has to give the master address 0x80000000 as the memory address.
So, for example, depending on how the kernel is actually mapped on the
PPC, you can end up with a setup like this::
    physical address:  0
    virtual address:   0xC0000000
    bus address:       0x80000000
where all the addresses actually point to the same thing. It's just seen
through different translations.
Similarly, on the Alpha, the normal translation is::
    physical address:  0
    virtual address:   0xfffffc0000000000
    bus address:       0x40000000
(but there are also Alphas where the physical address and the bus address
are the same).
Anyway, the way to look up all these translations is::

    #include <asm/io.h>

    phys_addr = virt_to_phys(virt_addr);
    virt_addr = phys_to_virt(phys_addr);
    bus_addr = virt_to_bus(virt_addr);
    virt_addr = bus_to_virt(bus_addr);
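
To make the three views concrete, here is a minimal sketch using the
historical helpers this document describes (the addresses in the comments
assume the PReP-style mapping above and are purely illustrative)::

    #include <linux/slab.h>
    #include <asm/io.h>

    void *virt = kmalloc(PAGE_SIZE, GFP_KERNEL);  /* e.g. 0xC0000000 + x */
    unsigned long phys = virt_to_phys(virt);      /* e.g. 0x00000000 + x */
    unsigned long bus = virt_to_bus(virt);        /* e.g. 0x80000000 + x */

    /* the translations are invertible */
    BUG_ON(phys_to_virt(phys) != virt);
    BUG_ON(bus_to_virt(bus) != virt);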
Now, when do you need these?
You want the **virtual** address when you are actually going to access that
pointer from the kernel. So you can have something like this::
    /*
     * this is the hardware "mailbox" we use to communicate with
     * the controller. The controller sees this directly.
     */
    struct mailbox {
        __u32 status;
        __u32 bufstart;
        __u32 buflen;
        ..
    } mbox;

    unsigned char *retbuffer;

    /* get the address from the controller */
    retbuffer = bus_to_virt(mbox.bufstart);
    switch (retbuffer[0]) {
    case STATUS_OK:
        ...
on the other hand, you want the bus address when you have a buffer that
you want to give to the controller::
    /* ask the controller to read the sense status into "sense_buffer" */
    mbox.bufstart = virt_to_bus(&sense_buffer);
    mbox.buflen = sizeof(sense_buffer);
    mbox.status = 0;
    notify_controller(&mbox);
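
For comparison, a hedged sketch of the same operation using the streaming
DMA API that superseded virt_to_bus() (see the warning at the top of this
document; "dev" here is the controller's struct device and is an assumption
of the example)::

    dma_addr_t dma_handle;

    /* map the buffer for the device and hand it the DMA address
     * instead of a virt_to_bus() result
     */
    dma_handle = dma_map_single(dev, &sense_buffer, sizeof(sense_buffer),
                                DMA_FROM_DEVICE);
    if (dma_mapping_error(dev, dma_handle))
        return -ENOMEM;

    mbox.bufstart = dma_handle;
    mbox.buflen = sizeof(sense_buffer);
    mbox.status = 0;
    notify_controller(&mbox);

    /* ... and unmap once the controller has completed the transfer */
    dma_unmap_single(dev, dma_handle, sizeof(sense_buffer), DMA_FROM_DEVICE);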
And you generally **never** want to use the physical address, because you can't
use that from the CPU (the CPU only uses translated virtual addresses), and
you can't use it from the bus master.
So why do we care about the physical address at all? We do need the physical
address in some cases; it's just not very often in normal code. The physical
address is needed if you use memory mappings, for example, because the
"remap_pfn_range()" mm function wants the physical address of the memory to
be remapped as measured in units of pages, a.k.a. the pfn (the memory
management layer doesn't know about devices outside the CPU, so it
shouldn't need to know about "bus addresses" etc).
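
As an illustration, a hedged sketch of an mmap handler built on
remap_pfn_range(); "my_buffer" and the handler name are assumptions of the
example::

    /* map a kernel buffer into user space; remap_pfn_range() wants the
     * physical address converted into a page frame number
     */
    static int my_mmap(struct file *file, struct vm_area_struct *vma)
    {
        unsigned long pfn = virt_to_phys(my_buffer) >> PAGE_SHIFT;

        return remap_pfn_range(vma, vma->vm_start, pfn,
                               vma->vm_end - vma->vm_start,
                               vma->vm_page_prot);
    }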
.. note::

    The above is only one part of the whole equation. It only talks
    about "real memory", that is, CPU memory (RAM).
There is a completely different type of memory too, and that's the "shared
memory" on the PCI or ISA bus. That's generally not RAM (although in the case
of a video graphics card it can be normal DRAM that is just used for a frame
buffer), but can be things like a packet buffer in a network card etc.
This memory is called "PCI memory" or "shared memory" or "IO memory" or
whatever, and there is only one way to access it: the readb/writeb and
related functions. You should never take the address of such memory, because
there is really nothing you can do with such an address: it's not
conceptually in the same memory space as "real memory" at all, so you cannot
just dereference a pointer. (Sadly, on x86 it **is** in the same memory space,
so on x86 it actually works to just dereference a pointer, but it's not
portable).
For such memory, you can do things like:
- reading::
    /*
     * read first 32 bits from ISA memory at 0xC0000, aka
     * C000:0000 in DOS terms
     */
    unsigned int signature = isa_readl(0xC0000);
- remapping and writing::
    /*
     * remap framebuffer PCI memory area at 0xFC000000,
     * size 1MB, so that we can access it: We can directly
     * access only the 640k-1MB area, so anything else
     * has to be remapped.
     */
    void __iomem *baseptr = ioremap(0xFC000000, 1024*1024);

    /* write an 'A' to offset 10 of the area */
    writeb('A', baseptr + 10);

    /* unmap when we unload the driver */
    iounmap(baseptr);
- copying and clearing::
    /* get the 6-byte Ethernet address at ISA address E000:0040 */
    memcpy_fromio(kernel_buffer, 0xE0040, 6);

    /* write a packet to the driver */
    memcpy_toio(0xE1000, skb->data, skb->len);

    /* clear the frame buffer */
    memset_io(0xA0000, 0, 0x10000);
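
Note that on current kernels these helpers take an __iomem cookie obtained
from ioremap() rather than a raw bus address, so a hedged modern rendering
of the first example would be::

    void __iomem *p = ioremap(0xE0000, 0x1000); /* illustrative ISA window */

    if (p) {
        /* the 6-byte Ethernet address at offset 0x40 */
        memcpy_fromio(kernel_buffer, p + 0x40, 6);
        iounmap(p);
    }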
OK, that just about covers the basics of accessing IO portably. Questions?
Comments? You may think that all the above is overly complex, but one day you
might find yourself with a 500 MHz Alpha in front of you, and then you'll be
happy that your driver works ;)
Note that kernel versions 2.0.x (and earlier) mistakenly called the
ioremap() function "vremap()". ioremap() is the proper name, but I
didn't think straight when I wrote it originally. People who have to
support both can do something like::
    /* support old naming silliness */
    #if LINUX_VERSION_CODE < 0x020100
    #define ioremap vremap
    #define iounmap vfree
    #endif
at the top of their source files, and then they can use the right names
even on 2.0.x systems.
And the above sounds worse than it really is. Most real drivers don't
do anything all that complex (or rather: the complexity is not so
much in the actual IO accesses as in error handling and timeouts etc).
It's generally not hard to fix drivers, and in many cases the code
actually looks better afterwards::
    unsigned long signature = *(unsigned int *) 0xC0000;

vs::

    unsigned long signature = readl(0xC0000);
I think the second version actually is more readable, no?
......@@ -707,20 +707,6 @@ to use the dma_sync_*() interfaces::
}
}
Drivers converted fully to this interface should not use virt_to_bus() any
longer, nor should they use bus_to_virt(). Some drivers have to be changed a
little bit, because there is no longer an equivalent to bus_to_virt() in the
dynamic DMA mapping scheme - you always have to store the DMA addresses
returned by the dma_alloc_coherent(), dma_pool_alloc(), and dma_map_single()
calls (dma_map_sg() stores them in the scatterlist itself if the platform
supports dynamic DMA mapping in hardware) in your driver structures and/or
in the card registers.
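
Concretely, that bookkeeping usually amounts to something like the following
hedged sketch (the structure, field names and register offset are assumptions
of the example)::

    struct my_card {
        void        *rx_buf;  /* CPU (virtual) address    */
        dma_addr_t  rx_dma;   /* what the device is given */
    };

    card->rx_buf = dma_alloc_coherent(dev, RX_BUF_SIZE,
                                      &card->rx_dma, GFP_KERNEL);
    if (!card->rx_buf)
        return -ENOMEM;

    /* program the device with the DMA address, never virt_to_bus() */
    writel(lower_32_bits(card->rx_dma), card->regs + RX_RING_BASE);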
All drivers should be using these interfaces with no exceptions. It
is planned to completely remove virt_to_bus() and bus_to_virt() as
they are entirely deprecated. Some ports already do not provide these
as it is impossible to correctly support them.
Handling Errors
===============
......
......@@ -41,7 +41,6 @@ Library functionality that is used throughout the kernel.
rbtree
generic-radix-tree
packing
bus-virt-phys-mapping
this_cpu_ops
timekeeping
errseq
......
......@@ -53,7 +53,6 @@ Todolist:
circular-buffers
generic-radix-tree
packing
bus-virt-phys-mapping
this_cpu_ops
timekeeping
errseq
......
......@@ -1406,6 +1406,9 @@ config ARCH_HAS_ELFCORE_COMPAT
config ARCH_HAS_PARANOID_L1D_FLUSH
bool
config ARCH_HAVE_TRACE_MMIO_ACCESS
bool
config DYNAMIC_SIGFRAME
bool
......
......@@ -17,7 +17,6 @@ config ALPHA
select HAVE_PERF_EVENTS
select NEED_DMA_MAP_STATE
select NEED_SG_DMA_LENGTH
select VIRT_TO_BUS
select GENERIC_IRQ_PROBE
select GENERIC_PCI_IOMAP
select AUTO_IRQ_AFFINITY if SMP
......
......@@ -20,7 +20,7 @@
#define fd_free_dma() free_dma(FLOPPY_DMA)
#define fd_clear_dma_ff() clear_dma_ff(FLOPPY_DMA)
#define fd_set_dma_mode(mode) set_dma_mode(FLOPPY_DMA,mode)
#define fd_set_dma_addr(addr) set_dma_addr(FLOPPY_DMA,virt_to_bus(addr))
#define fd_set_dma_addr(addr) set_dma_addr(FLOPPY_DMA,isa_virt_to_bus(addr))
#define fd_set_dma_count(count) set_dma_count(FLOPPY_DMA,count)
#define fd_enable_irq() enable_irq(FLOPPY_IRQ)
#define fd_disable_irq() disable_irq(FLOPPY_IRQ)
......
......@@ -106,15 +106,15 @@ static inline void * phys_to_virt(unsigned long address)
extern unsigned long __direct_map_base;
extern unsigned long __direct_map_size;
static inline unsigned long __deprecated virt_to_bus(volatile void *address)
static inline unsigned long __deprecated isa_virt_to_bus(volatile void *address)
{
unsigned long phys = virt_to_phys(address);
unsigned long bus = phys + __direct_map_base;
return phys <= __direct_map_size ? bus : 0;
}
#define isa_virt_to_bus virt_to_bus
#define isa_virt_to_bus isa_virt_to_bus
static inline void * __deprecated bus_to_virt(unsigned long address)
static inline void * __deprecated isa_bus_to_virt(unsigned long address)
{
void *virt;
......@@ -125,7 +125,7 @@ static inline void * __deprecated bus_to_virt(unsigned long address)
virt = phys_to_virt(address);
return (long)address <= 0 ? NULL : virt;
}
#define isa_bus_to_virt bus_to_virt
#define isa_bus_to_virt isa_bus_to_virt
/*
* There are different chipsets to interface the Alpha CPUs to the world.
......
......@@ -70,6 +70,7 @@ static void __init init_irq_stacks(void)
}
}
#ifndef CONFIG_PREEMPT_RT
static void ____do_softirq(void *arg)
{
__do_softirq();
......@@ -80,7 +81,7 @@ void do_softirq_own_stack(void)
call_with_stack(____do_softirq, NULL,
__this_cpu_read(irq_stack_ptr));
}
#endif
#endif
int arch_show_interrupts(struct seq_file *p, int prec)
......
......@@ -49,6 +49,7 @@ config ARM64
select ARCH_HAS_ZONE_DMA_SET if EXPERT
select ARCH_HAVE_ELF_PROT
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select ARCH_HAVE_TRACE_MMIO_ACCESS
select ARCH_INLINE_READ_LOCK if !PREEMPTION
select ARCH_INLINE_READ_LOCK_BH if !PREEMPTION
select ARCH_INLINE_READ_LOCK_IRQ if !PREEMPTION
......
......@@ -91,7 +91,7 @@ static inline u64 __raw_readq(const volatile void __iomem *addr)
}
/* IO barriers */
#define __iormb(v) \
#define __io_ar(v) \
({ \
unsigned long tmp; \
\
......@@ -108,39 +108,14 @@ static inline u64 __raw_readq(const volatile void __iomem *addr)
: "memory"); \
})
#define __io_par(v) __iormb(v)
#define __iowmb() dma_wmb()
#define __iomb() dma_mb()
/*
* Relaxed I/O memory access primitives. These follow the Device memory
* ordering rules but do not guarantee any ordering relative to Normal memory
* accesses.
*/
#define readb_relaxed(c) ({ u8 __r = __raw_readb(c); __r; })
#define readw_relaxed(c) ({ u16 __r = le16_to_cpu((__force __le16)__raw_readw(c)); __r; })
#define readl_relaxed(c) ({ u32 __r = le32_to_cpu((__force __le32)__raw_readl(c)); __r; })
#define readq_relaxed(c) ({ u64 __r = le64_to_cpu((__force __le64)__raw_readq(c)); __r; })
#define __io_bw() dma_wmb()
#define __io_br(v)
#define __io_aw(v)
#define writeb_relaxed(v,c) ((void)__raw_writeb((v),(c)))
#define writew_relaxed(v,c) ((void)__raw_writew((__force u16)cpu_to_le16(v),(c)))
#define writel_relaxed(v,c) ((void)__raw_writel((__force u32)cpu_to_le32(v),(c)))
#define writeq_relaxed(v,c) ((void)__raw_writeq((__force u64)cpu_to_le64(v),(c)))
/*
* I/O memory access primitives. Reads are ordered relative to any
* following Normal memory access. Writes are ordered relative to any prior
* Normal memory access.
*/
#define readb(c) ({ u8 __v = readb_relaxed(c); __iormb(__v); __v; })
#define readw(c) ({ u16 __v = readw_relaxed(c); __iormb(__v); __v; })
#define readl(c) ({ u32 __v = readl_relaxed(c); __iormb(__v); __v; })
#define readq(c) ({ u64 __v = readq_relaxed(c); __iormb(__v); __v; })
#define writeb(v,c) ({ __iowmb(); writeb_relaxed((v),(c)); })
#define writew(v,c) ({ __iowmb(); writew_relaxed((v),(c)); })
#define writel(v,c) ({ __iowmb(); writel_relaxed((v),(c)); })
#define writeq(v,c) ({ __iowmb(); writeq_relaxed((v),(c)); })
/* arm64-specific, don't use in portable drivers */
#define __iormb(v) __io_ar(v)
#define __iowmb() __io_bw()
#define __iomb() dma_mb()
/*
* I/O port access primitives.
......
......@@ -4,7 +4,12 @@
#
asflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS
ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS
# Tracepoint and MMIO logging symbols should not be visible at nVHE KVM as
# there is no way to execute them and any such MMIO access from nVHE KVM
# will explode instantly (Words of Marc Zyngier). So introduce a generic flag
# __DISABLE_TRACE_MMIO__ to disable MMIO tracing for nVHE KVM.
ccflags-y := -D__KVM_NVHE_HYPERVISOR__ -D__DISABLE_EXPORTS -D__DISABLE_TRACE_MMIO__
hostprogs := gen-hyprel
HOST_EXTRACFLAGS += -I$(objtree)/include
......
......@@ -39,7 +39,6 @@ config IA64
select HAVE_FUNCTION_DESCRIPTORS
select HAVE_VIRT_CPU_ACCOUNTING
select HUGETLB_PAGE_SIZE_VARIABLE if HUGETLB_PAGE
select VIRT_TO_BUS
select GENERIC_IRQ_PROBE
select GENERIC_PENDING_IRQ if SMP
select GENERIC_IRQ_SHOW
......
......@@ -96,14 +96,6 @@ extern u64 kern_mem_attribute (unsigned long phys_addr, unsigned long size);
extern int valid_phys_addr_range (phys_addr_t addr, size_t count); /* efi.c */
extern int valid_mmap_phys_addr_range (unsigned long pfn, size_t count);
/*
* The following two macros are deprecated and scheduled for removal.
* Please use the PCI-DMA interface defined in <asm/pci.h> instead.
*/
#define bus_to_virt phys_to_virt
#define virt_to_bus virt_to_phys
#define page_to_bus page_to_phys
# endif /* KERNEL */
/*
......
......@@ -30,7 +30,6 @@ config M68K
select OLD_SIGACTION
select OLD_SIGSUSPEND3
select UACCESS_MEMCPY if !MMU
select VIRT_TO_BUS
select ZONE_DMA
config CPU_BIG_ENDIAN
......
......@@ -33,9 +33,11 @@ static inline void *phys_to_virt(unsigned long address)
/*
* IO bus memory addresses are 1:1 with the physical address,
* deprecated globally but still used on two machines.
*/
#if defined(CONFIG_AMIGA) || defined(CONFIG_VME)
#define virt_to_bus virt_to_phys
#define bus_to_virt phys_to_virt
#endif
#endif
#endif
......@@ -38,7 +38,6 @@ config MICROBLAZE
select OF_EARLY_FLATTREE
select PCI_DOMAINS_GENERIC if PCI
select PCI_SYSCALL if PCI
select VIRT_TO_BUS
select CPU_NO_EFFICIENT_FFS
select MMU_GATHER_NO_RANGE
select SPARSE_IRQ
......
......@@ -30,8 +30,6 @@ extern resource_size_t isa_mem_base;
#define PCI_IOBASE ((void __iomem *)_IO_BASE)
#define IO_SPACE_LIMIT (0xFFFFFFFF)
#define page_to_bus(page) (page_to_phys(page))
extern void iounmap(volatile void __iomem *addr);
extern void __iomem *ioremap(phys_addr_t address, unsigned long size);
......
......@@ -100,7 +100,6 @@ config MIPS
select RTC_LIB
select SYSCTL_EXCEPTION_TRACE
select TRACE_IRQFLAGS_SUPPORT
select VIRT_TO_BUS
select ARCH_HAS_ELFCORE_COMPAT
select HAVE_ARCH_KCSAN if 64BIT
......
......@@ -147,15 +147,6 @@ static inline void *isa_bus_to_virt(unsigned long address)
return phys_to_virt(address);
}
/*
* However PCI ones are not necessarily 1:1 and therefore these interfaces
* are forbidden in portable PCI drivers.
*
* Allow them for x86 for legacy drivers, though.
*/
#define virt_to_bus virt_to_phys
#define bus_to_virt phys_to_virt
/*
* Change "struct page" to physical address.
*/
......
......@@ -44,7 +44,6 @@ config PARISC
select SYSCTL_ARCH_UNALIGN_ALLOW
select SYSCTL_EXCEPTION_TRACE
select HAVE_MOD_ARCH_SPECIFIC
select VIRT_TO_BUS
select MODULES_USE_ELF_RELA
select CLONE_BACKWARDS
select TTY # Needed for pdc_cons.c
......
......@@ -179,7 +179,7 @@ static void _fd_chose_dma_mode(char *addr, unsigned long size)
{
if(can_use_virtual_dma == 2) {
if((unsigned int) addr >= (unsigned int) high_memory ||
virt_to_bus(addr) >= 0x1000000 ||
virt_to_phys(addr) >= 0x1000000 ||
_CROSS_64KB(addr, size, 0))
use_virtual_dma = 1;
else
......@@ -215,7 +215,7 @@ static int hard_dma_setup(char *addr, unsigned long size, int mode, int io)
doing_pdma = 0;
clear_dma_ff(FLOPPY_DMA);
set_dma_mode(FLOPPY_DMA,mode);
set_dma_addr(FLOPPY_DMA,virt_to_bus(addr));
set_dma_addr(FLOPPY_DMA,virt_to_phys(addr));
set_dma_count(FLOPPY_DMA,size);
enable_dma(FLOPPY_DMA);
return 0;
......
......@@ -7,8 +7,6 @@
#define virt_to_phys(a) ((unsigned long)__pa(a))
#define phys_to_virt(a) __va(a)
#define virt_to_bus virt_to_phys
#define bus_to_virt phys_to_virt
static inline unsigned long isa_bus_to_virt(unsigned long addr) {
BUG();
......
......@@ -480,10 +480,12 @@ static void execute_on_irq_stack(void *func, unsigned long param1)
*irq_stack_in_use = 1;
}
#ifndef CONFIG_PREEMPT_RT
void do_softirq_own_stack(void)
{
execute_on_irq_stack(__do_softirq, 0);
}
#endif
#endif /* CONFIG_IRQSTACKS */
/* ONLY called from entry.S:intr_extint() */
......
......@@ -277,7 +277,6 @@ config PPC
select SYSCTL_EXCEPTION_TRACE
select THREAD_INFO_IN_TASK
select TRACE_IRQFLAGS_SUPPORT
select VIRT_TO_BUS if !PPC64
#
# Please keep this list sorted alphabetically.
#
......
......@@ -985,8 +985,6 @@ static inline void * bus_to_virt(unsigned long address)
}
#define bus_to_virt bus_to_virt
#define page_to_bus(page) (page_to_phys(page) + PCI_DRAM_OFFSET)
#endif /* CONFIG_PPC32 */
/* access ports */
......
......@@ -611,6 +611,7 @@ static inline void check_stack_overflow(void)
}
}
#ifndef CONFIG_PREEMPT_RT
static __always_inline void call_do_softirq(const void *sp)
{
/* Temporarily switch r1 to sp, call __do_softirq() then restore r1. */
......@@ -629,6 +630,7 @@ static __always_inline void call_do_softirq(const void *sp)
"r11", "r12"
);
}
#endif
static __always_inline void call_do_irq(struct pt_regs *regs, void *sp)
{
......@@ -747,10 +749,12 @@ void *mcheckirq_ctx[NR_CPUS] __read_mostly;
void *softirq_ctx[NR_CPUS] __read_mostly;
void *hardirq_ctx[NR_CPUS] __read_mostly;
#ifndef CONFIG_PREEMPT_RT
void do_softirq_own_stack(void)
{
call_do_softirq(softirq_ctx[smp_processor_id()]);
}
#endif
irq_hw_number_t virq_to_hw(unsigned int virq)
{
......
......@@ -167,7 +167,6 @@ extern phys_addr_t __phys_addr_symbol(unsigned long x);
#define page_to_virt(page) (pfn_to_virt(page_to_pfn(page)))
#define page_to_phys(page) (pfn_to_phys(page_to_pfn(page)))
#define page_to_bus(page) (page_to_phys(page))
#define phys_to_page(paddr) (pfn_to_page(phys_to_pfn(paddr)))
#define sym_to_pfn(x) __phys_to_pfn(__pa_symbol(x))
......
......@@ -5,9 +5,10 @@
#include <asm/lowcore.h>
#include <asm/stacktrace.h>
#ifndef CONFIG_PREEMPT_RT
static inline void do_softirq_own_stack(void)
{
call_on_stack(0, S390_lowcore.async_stack, void, __do_softirq);
}
#endif
#endif /* __ASM_S390_SOFTIRQ_STACK_H */
......@@ -149,6 +149,7 @@ void irq_ctx_exit(int cpu)
hardirq_ctx[cpu] = NULL;
}
#ifndef CONFIG_PREEMPT_RT
void do_softirq_own_stack(void)
{
struct thread_info *curctx;
......@@ -176,6 +177,7 @@ void do_softirq_own_stack(void)
"r5", "r6", "r7", "r8", "r9", "r15", "t", "pr"
);
}
#endif
#else
static inline void handle_one_irq(unsigned int irq)
{
......
......@@ -855,6 +855,7 @@ void __irq_entry handler_irq(int pil, struct pt_regs *regs)
set_irq_regs(old_regs);
}
#ifndef CONFIG_PREEMPT_RT
void do_softirq_own_stack(void)
{
void *orig_sp, *sp = softirq_stack[smp_processor_id()];
......@@ -869,6 +870,7 @@ void do_softirq_own_stack(void)
__asm__ __volatile__("mov %0, %%sp"
: : "r" (orig_sp));
}
#endif
#ifdef CONFIG_HOTPLUG_CPU
void fixup_irqs(void)
......
......@@ -280,7 +280,6 @@ config X86
select TRACE_IRQFLAGS_SUPPORT
select TRACE_IRQFLAGS_NMI_SUPPORT
select USER_STACKTRACE_SUPPORT
select VIRT_TO_BUS
select HAVE_ARCH_KCSAN if X86_64
select X86_FEATURE_NAMES if PROC_FS
select PROC_PID_ARCH_STATUS if PROC_FS
......
......@@ -169,15 +169,6 @@ static inline unsigned int isa_virt_to_bus(volatile void *address)
}
#define isa_bus_to_virt phys_to_virt
/*
* However PCI ones are not necessarily 1:1 and therefore these interfaces
* are forbidden in portable PCI drivers.
*
* Allow them on x86 for legacy drivers, though.
*/
#define virt_to_bus virt_to_phys
#define bus_to_virt phys_to_virt
/*
* The default ioremap() behavior is non-cached; if you need something
* else, you probably want one of the following.
......
......@@ -52,7 +52,6 @@ config XTENSA
select MODULES_USE_ELF_RELA
select PERF_USE_VMALLOC
select TRACE_IRQFLAGS_SUPPORT
select VIRT_TO_BUS
help
Xtensa processors are 32-bit RISC machines designed by Tensilica
primarily for embedded systems. These processors are both
......
......@@ -63,9 +63,6 @@ static inline void iounmap(volatile void __iomem *addr)
xtensa_iounmap(addr);
}
#define virt_to_bus virt_to_phys
#define bus_to_virt phys_to_virt
#endif /* CONFIG_MMU */
#include <asm-generic/io.h>
......
......@@ -469,17 +469,17 @@ void meson_viu_init(struct meson_drm *priv)
priv->io_base + _REG(VD2_IF0_LUMA_FIFO_SIZE));
if (meson_vpu_is_compatible(priv, VPU_COMPATIBLE_G12A)) {
writel_relaxed(VIU_OSD_BLEND_REORDER(0, 1) |
VIU_OSD_BLEND_REORDER(1, 0) |
VIU_OSD_BLEND_REORDER(2, 0) |
VIU_OSD_BLEND_REORDER(3, 0) |
VIU_OSD_BLEND_DIN_EN(1) |
VIU_OSD_BLEND1_DIN3_BYPASS_TO_DOUT1 |
VIU_OSD_BLEND1_DOUT_BYPASS_TO_BLEND2 |
VIU_OSD_BLEND_DIN0_BYPASS_TO_DOUT0 |
VIU_OSD_BLEND_BLEN2_PREMULT_EN(1) |
VIU_OSD_BLEND_HOLD_LINES(4),
priv->io_base + _REG(VIU_OSD_BLEND_CTRL));
u32 val = (u32)VIU_OSD_BLEND_REORDER(0, 1) |
(u32)VIU_OSD_BLEND_REORDER(1, 0) |
(u32)VIU_OSD_BLEND_REORDER(2, 0) |
(u32)VIU_OSD_BLEND_REORDER(3, 0) |
(u32)VIU_OSD_BLEND_DIN_EN(1) |
(u32)VIU_OSD_BLEND1_DIN3_BYPASS_TO_DOUT1 |
(u32)VIU_OSD_BLEND1_DOUT_BYPASS_TO_BLEND2 |
(u32)VIU_OSD_BLEND_DIN0_BYPASS_TO_DOUT0 |
(u32)VIU_OSD_BLEND_BLEN2_PREMULT_EN(1) |
(u32)VIU_OSD_BLEND_HOLD_LINES(4);
writel_relaxed(val, priv->io_base + _REG(VIU_OSD_BLEND_CTRL));
writel_relaxed(OSD_BLEND_PATH_SEL_ENABLE,
priv->io_base + _REG(OSD1_BLEND_SRC_CTRL));
......
......@@ -98,7 +98,7 @@ u64 etm4x_sysreg_read(u32 offset, bool _relaxed, bool _64bit)
}
if (!_relaxed)
__iormb(res); /* Imitate the !relaxed I/O helpers */
__io_ar(res); /* Imitate the !relaxed I/O helpers */
return res;
}
......@@ -106,7 +106,7 @@ u64 etm4x_sysreg_read(u32 offset, bool _relaxed, bool _64bit)
void etm4x_sysreg_write(u64 val, u32 offset, bool _relaxed, bool _64bit)
{
if (!_relaxed)
__iowmb(); /* Imitate the !relaxed I/O helpers */
__io_bw(); /* Imitate the !relaxed I/O helpers */
if (!_64bit)
val &= GENMASK(31, 0);
......@@ -130,7 +130,7 @@ static u64 ete_sysreg_read(u32 offset, bool _relaxed, bool _64bit)
}
if (!_relaxed)
__iormb(res); /* Imitate the !relaxed I/O helpers */
__io_ar(res); /* Imitate the !relaxed I/O helpers */
return res;
}
......@@ -138,7 +138,7 @@ static u64 ete_sysreg_read(u32 offset, bool _relaxed, bool _64bit)
static void ete_sysreg_write(u64 val, u32 offset, bool _relaxed, bool _64bit)
{
if (!_relaxed)
__iowmb(); /* Imitate the !relaxed I/O helpers */
__io_bw(); /* Imitate the !relaxed I/O helpers */
if (!_64bit)
val &= GENMASK(31, 0);
......
......@@ -547,14 +547,14 @@
#define etm4x_read32(csa, offset) \
({ \
u32 __val = etm4x_relaxed_read32((csa), (offset)); \
__iormb(__val); \
__io_ar(__val); \
__val; \
})
#define etm4x_read64(csa, offset) \
({ \
u64 __val = etm4x_relaxed_read64((csa), (offset)); \
__iormb(__val); \
__io_ar(__val); \
__val; \
})
......@@ -578,13 +578,13 @@
#define etm4x_write32(csa, val, offset) \
do { \
__iowmb(); \
__io_bw(); \
etm4x_relaxed_write32((csa), (val), (offset)); \
} while (0)
#define etm4x_write64(csa, val, offset) \
do { \
__iowmb(); \
__io_bw(); \
etm4x_relaxed_write64((csa), (val), (offset)); \
} while (0)
......
......@@ -148,10 +148,10 @@ static int tegra_ictlr_suspend(void)
lic->cop_iep[i] = readl_relaxed(ictlr + ICTLR_COP_IEP_CLASS);
/* Disable COP interrupts */
writel_relaxed(~0ul, ictlr + ICTLR_COP_IER_CLR);
writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_COP_IER_CLR);
/* Disable CPU interrupts */
writel_relaxed(~0ul, ictlr + ICTLR_CPU_IER_CLR);
writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_CPU_IER_CLR);
/* Enable the wakeup sources of ictlr */
writel_relaxed(lic->ictlr_wake_mask[i], ictlr + ICTLR_CPU_IER_SET);
......@@ -172,12 +172,12 @@ static void tegra_ictlr_resume(void)
writel_relaxed(lic->cpu_iep[i],
ictlr + ICTLR_CPU_IEP_CLASS);
writel_relaxed(~0ul, ictlr + ICTLR_CPU_IER_CLR);
writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_CPU_IER_CLR);
writel_relaxed(lic->cpu_ier[i],
ictlr + ICTLR_CPU_IER_SET);
writel_relaxed(lic->cop_iep[i],
ictlr + ICTLR_COP_IEP_CLASS);
writel_relaxed(~0ul, ictlr + ICTLR_COP_IER_CLR);
writel_relaxed(GENMASK(31, 0), ictlr + ICTLR_COP_IER_CLR);
writel_relaxed(lic->cop_ier[i],
ictlr + ICTLR_COP_IER_SET);
}
......@@ -312,7 +312,7 @@ static int __init tegra_ictlr_init(struct device_node *node,
lic->base[i] = base;
/* Disable all interrupts */
writel_relaxed(~0UL, base + ICTLR_CPU_IER_CLR);
writel_relaxed(GENMASK(31, 0), base + ICTLR_CPU_IER_CLR);
/* All interrupts target IRQ */
writel_relaxed(0, base + ICTLR_CPU_IEP_CLASS);
......
// SPDX-License-Identifier: GPL-2.0
// Copyright (c) 2017-2018, The Linux Foundation. All rights reserved.
/* Disable MMIO tracing to prevent excessive logging of unwanted MMIO traces */
#define __DISABLE_TRACE_MMIO__
#include <linux/acpi.h>
#include <linux/clk.h>
#include <linux/slab.h>
......
// SPDX-License-Identifier: GPL-2.0
// Copyright (c) 2017-2018, The Linux foundation. All rights reserved.
/* Disable MMIO tracing to prevent excessive logging of unwanted MMIO traces */
#define __DISABLE_TRACE_MMIO__
#include <linux/clk.h>
#include <linux/console.h>
#include <linux/io.h>
......
......@@ -10,6 +10,7 @@
#include <asm/page.h> /* I/O is all done through memory accesses */
#include <linux/string.h> /* for memset() and memcpy() */
#include <linux/types.h>
#include <linux/instruction_pointer.h>
#ifdef CONFIG_GENERIC_IOMAP
#include <asm-generic/iomap.h>
......@@ -61,6 +62,44 @@
#define __io_par(v) __io_ar(v)
#endif
/*
* "__DISABLE_TRACE_MMIO__" flag can be used to disable MMIO tracing for
* specific kernel drivers in case of excessive/unwanted logging.
*
* Usage: Add a #define flag at the beginning of the driver file.
* Ex: #define __DISABLE_TRACE_MMIO__
* #include <...>
* ...
*/
#if IS_ENABLED(CONFIG_TRACE_MMIO_ACCESS) && !(defined(__DISABLE_TRACE_MMIO__))
#include <linux/tracepoint-defs.h>
DECLARE_TRACEPOINT(rwmmio_write);
DECLARE_TRACEPOINT(rwmmio_post_write);
DECLARE_TRACEPOINT(rwmmio_read);
DECLARE_TRACEPOINT(rwmmio_post_read);
void log_write_mmio(u64 val, u8 width, volatile void __iomem *addr,
unsigned long caller_addr);
void log_post_write_mmio(u64 val, u8 width, volatile void __iomem *addr,
unsigned long caller_addr);
void log_read_mmio(u8 width, const volatile void __iomem *addr,
unsigned long caller_addr);
void log_post_read_mmio(u64 val, u8 width, const volatile void __iomem *addr,
unsigned long caller_addr);
#else
static inline void log_write_mmio(u64 val, u8 width, volatile void __iomem *addr,
unsigned long caller_addr) {}
static inline void log_post_write_mmio(u64 val, u8 width, volatile void __iomem *addr,
unsigned long caller_addr) {}
static inline void log_read_mmio(u8 width, const volatile void __iomem *addr,
unsigned long caller_addr) {}
static inline void log_post_read_mmio(u64 val, u8 width, const volatile void __iomem *addr,
unsigned long caller_addr) {}
#endif /* CONFIG_TRACE_MMIO_ACCESS */
/*
* __raw_{read,write}{b,w,l,q}() access memory in native endianness.
......@@ -149,9 +188,11 @@ static inline u8 readb(const volatile void __iomem *addr)
{
u8 val;
log_read_mmio(8, addr, _THIS_IP_);
__io_br();
val = __raw_readb(addr);
__io_ar(val);
log_post_read_mmio(val, 8, addr, _THIS_IP_);
return val;
}
#endif
......@@ -162,9 +203,11 @@ static inline u16 readw(const volatile void __iomem *addr)
{
u16 val;
log_read_mmio(16, addr, _THIS_IP_);
__io_br();
val = __le16_to_cpu((__le16 __force)__raw_readw(addr));
__io_ar(val);
log_post_read_mmio(val, 16, addr, _THIS_IP_);
return val;
}
#endif
......@@ -175,9 +218,11 @@ static inline u32 readl(const volatile void __iomem *addr)
{
u32 val;
log_read_mmio(32, addr, _THIS_IP_);
__io_br();
val = __le32_to_cpu((__le32 __force)__raw_readl(addr));
__io_ar(val);
log_post_read_mmio(val, 32, addr, _THIS_IP_);
return val;
}
#endif
......@@ -189,9 +234,11 @@ static inline u64 readq(const volatile void __iomem *addr)
{
u64 val;
log_read_mmio(64, addr, _THIS_IP_);
__io_br();
val = __le64_to_cpu(__raw_readq(addr));
__io_ar(val);
log_post_read_mmio(val, 64, addr, _THIS_IP_);
return val;
}
#endif
......@@ -201,9 +248,11 @@ static inline u64 readq(const volatile void __iomem *addr)
#define writeb writeb
static inline void writeb(u8 value, volatile void __iomem *addr)
{
log_write_mmio(value, 8, addr, _THIS_IP_);
__io_bw();
__raw_writeb(value, addr);
__io_aw();
log_post_write_mmio(value, 8, addr, _THIS_IP_);
}
#endif
......@@ -211,9 +260,11 @@ static inline void writeb(u8 value, volatile void __iomem *addr)
#define writew writew
static inline void writew(u16 value, volatile void __iomem *addr)
{
log_write_mmio(value, 16, addr, _THIS_IP_);
__io_bw();
__raw_writew((u16 __force)cpu_to_le16(value), addr);
__io_aw();
log_post_write_mmio(value, 16, addr, _THIS_IP_);
}
#endif
......@@ -221,9 +272,11 @@ static inline void writew(u16 value, volatile void __iomem *addr)
#define writel writel
static inline void writel(u32 value, volatile void __iomem *addr)
{
log_write_mmio(value, 32, addr, _THIS_IP_);
__io_bw();
__raw_writel((u32 __force)__cpu_to_le32(value), addr);
__io_aw();
log_post_write_mmio(value, 32, addr, _THIS_IP_);
}
#endif
......@@ -232,9 +285,11 @@ static inline void writel(u32 value, volatile void __iomem *addr)
#define writeq writeq
static inline void writeq(u64 value, volatile void __iomem *addr)
{
log_write_mmio(value, 64, addr, _THIS_IP_);
__io_bw();
__raw_writeq(__cpu_to_le64(value), addr);
__io_aw();
log_post_write_mmio(value, 64, addr, _THIS_IP_);
}
#endif
#endif /* CONFIG_64BIT */
......@@ -248,7 +303,12 @@ static inline void writeq(u64 value, volatile void __iomem *addr)
#define readb_relaxed readb_relaxed
static inline u8 readb_relaxed(const volatile void __iomem *addr)
{
return __raw_readb(addr);
u8 val;
log_read_mmio(8, addr, _THIS_IP_);
val = __raw_readb(addr);
log_post_read_mmio(val, 8, addr, _THIS_IP_);
return val;
}
#endif
......@@ -256,7 +316,12 @@ static inline u8 readb_relaxed(const volatile void __iomem *addr)
#define readw_relaxed readw_relaxed
static inline u16 readw_relaxed(const volatile void __iomem *addr)
{
return __le16_to_cpu(__raw_readw(addr));
u16 val;
log_read_mmio(16, addr, _THIS_IP_);
val = __le16_to_cpu(__raw_readw(addr));
log_post_read_mmio(val, 16, addr, _THIS_IP_);
return val;
}
#endif
......@@ -264,7 +329,12 @@ static inline u16 readw_relaxed(const volatile void __iomem *addr)
#define readl_relaxed readl_relaxed
static inline u32 readl_relaxed(const volatile void __iomem *addr)
{
return __le32_to_cpu(__raw_readl(addr));
u32 val;
log_read_mmio(32, addr, _THIS_IP_);
val = __le32_to_cpu(__raw_readl(addr));
log_post_read_mmio(val, 32, addr, _THIS_IP_);
return val;
}
#endif
......@@ -272,7 +342,12 @@ static inline u32 readl_relaxed(const volatile void __iomem *addr)
#define readq_relaxed readq_relaxed
static inline u64 readq_relaxed(const volatile void __iomem *addr)
{
return __le64_to_cpu(__raw_readq(addr));
u64 val;
log_read_mmio(64, addr, _THIS_IP_);
val = __le64_to_cpu(__raw_readq(addr));
log_post_read_mmio(val, 64, addr, _THIS_IP_);
return val;
}
#endif
......@@ -280,7 +355,9 @@ static inline u64 readq_relaxed(const volatile void __iomem *addr)
#define writeb_relaxed writeb_relaxed
static inline void writeb_relaxed(u8 value, volatile void __iomem *addr)
{
log_write_mmio(value, 8, addr, _THIS_IP_);
__raw_writeb(value, addr);
log_post_write_mmio(value, 8, addr, _THIS_IP_);
}
#endif
......@@ -288,7 +365,9 @@ static inline void writeb_relaxed(u8 value, volatile void __iomem *addr)
#define writew_relaxed writew_relaxed
static inline void writew_relaxed(u16 value, volatile void __iomem *addr)
{
log_write_mmio(value, 16, addr, _THIS_IP_);
__raw_writew(cpu_to_le16(value), addr);
log_post_write_mmio(value, 16, addr, _THIS_IP_);
}
#endif
......@@ -296,7 +375,9 @@ static inline void writew_relaxed(u16 value, volatile void __iomem *addr)
#define writel_relaxed writel_relaxed
static inline void writel_relaxed(u32 value, volatile void __iomem *addr)
{
log_write_mmio(value, 32, addr, _THIS_IP_);
__raw_writel(__cpu_to_le32(value), addr);
log_post_write_mmio(value, 32, addr, _THIS_IP_);
}
#endif
......@@ -304,7 +385,9 @@ static inline void writel_relaxed(u32 value, volatile void __iomem *addr)
#define writeq_relaxed writeq_relaxed
static inline void writeq_relaxed(u64 value, volatile void __iomem *addr)
{
log_write_mmio(value, 64, addr, _THIS_IP_);
__raw_writeq(__cpu_to_le64(value), addr);
log_post_write_mmio(value, 64, addr, _THIS_IP_);
}
#endif
......@@ -1086,20 +1169,6 @@ static inline void unxlate_dev_mem_ptr(phys_addr_t phys, void *addr)
}
#endif
#ifdef CONFIG_VIRT_TO_BUS
#ifndef virt_to_bus
static inline unsigned long virt_to_bus(void *address)
{
return (unsigned long)address;
}
static inline void *bus_to_virt(unsigned long address)
{
return (void *)address;
}
#endif
#endif
#ifndef memset_io
#define memset_io memset_io
/**
......
......@@ -2,7 +2,7 @@
#ifndef __ASM_GENERIC_SOFTIRQ_STACK_H
#define __ASM_GENERIC_SOFTIRQ_STACK_H
#ifdef CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK
#if defined(CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK) && !defined(CONFIG_PREEMPT_RT)
void do_softirq_own_stack(void);
#else
static inline void do_softirq_own_stack(void)
......
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#undef TRACE_SYSTEM
#define TRACE_SYSTEM rwmmio
#if !defined(_TRACE_RWMMIO_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_RWMMIO_H
#include <linux/tracepoint.h>
DECLARE_EVENT_CLASS(rwmmio_rw_template,
TP_PROTO(unsigned long caller, u64 val, u8 width, volatile void __iomem *addr),
TP_ARGS(caller, val, width, addr),
TP_STRUCT__entry(
__field(unsigned long, caller)
__field(unsigned long, addr)
__field(u64, val)
__field(u8, width)
),
TP_fast_assign(
__entry->caller = caller;
__entry->val = val;
__entry->addr = (unsigned long)addr;
__entry->width = width;
),
TP_printk("%pS width=%d val=%#llx addr=%#lx",
(void *)__entry->caller, __entry->width,
__entry->val, __entry->addr)
);
DEFINE_EVENT(rwmmio_rw_template, rwmmio_write,
TP_PROTO(unsigned long caller, u64 val, u8 width, volatile void __iomem *addr),
TP_ARGS(caller, val, width, addr)
);
DEFINE_EVENT(rwmmio_rw_template, rwmmio_post_write,
TP_PROTO(unsigned long caller, u64 val, u8 width, volatile void __iomem *addr),
TP_ARGS(caller, val, width, addr)
);
TRACE_EVENT(rwmmio_read,
TP_PROTO(unsigned long caller, u8 width, const volatile void __iomem *addr),
TP_ARGS(caller, width, addr),
TP_STRUCT__entry(
__field(unsigned long, caller)
__field(unsigned long, addr)
__field(u8, width)
),
TP_fast_assign(
__entry->caller = caller;
__entry->addr = (unsigned long)addr;
__entry->width = width;
),
TP_printk("%pS width=%d addr=%#lx",
(void *)__entry->caller, __entry->width, __entry->addr)
);
TRACE_EVENT(rwmmio_post_read,
TP_PROTO(unsigned long caller, u64 val, u8 width, const volatile void __iomem *addr),
TP_ARGS(caller, val, width, addr),
TP_STRUCT__entry(
__field(unsigned long, caller)
__field(unsigned long, addr)
__field(u64, val)
__field(u8, width)
),
TP_fast_assign(
__entry->caller = caller;
__entry->val = val;
__entry->addr = (unsigned long)addr;
__entry->width = width;
),
TP_printk("%pS width=%d val=%#llx addr=%#lx",
(void *)__entry->caller, __entry->width,
__entry->val, __entry->addr)
);
#endif /* _TRACE_RWMMIO_H */
#include <trace/define_trace.h>
......@@ -118,6 +118,13 @@ config INDIRECT_IOMEM_FALLBACK
mmio accesses when the IO memory address is not a registered
emulated region.
config TRACE_MMIO_ACCESS
bool "Register read/write tracing"
depends on TRACING && ARCH_HAVE_TRACE_MMIO_ACCESS
help
Create tracepoints for MMIO read/write operations. These trace events
can be used for logging all MMIO read/write operations.
source "lib/crypto/Kconfig"
config LIB_MEMNEQ
......
......@@ -151,6 +151,8 @@ lib-y += logic_pio.o
lib-$(CONFIG_INDIRECT_IOMEM) += logic_iomem.o
obj-$(CONFIG_TRACE_MMIO_ACCESS) += trace_readwrite.o
obj-$(CONFIG_GENERIC_HWEIGHT) += hweight.o
obj-$(CONFIG_BTREE) += btree.o
......
// SPDX-License-Identifier: GPL-2.0-only
/*
* Register read and write tracepoints
*
* Copyright (c) 2021-2022 Qualcomm Innovation Center, Inc. All rights reserved.
*/
#include <linux/ftrace.h>
#include <linux/module.h>
#include <asm-generic/io.h>
#define CREATE_TRACE_POINTS
#include <trace/events/rwmmio.h>
#ifdef CONFIG_TRACE_MMIO_ACCESS
void log_write_mmio(u64 val, u8 width, volatile void __iomem *addr,
unsigned long caller_addr)
{
trace_rwmmio_write(caller_addr, val, width, addr);
}
EXPORT_SYMBOL_GPL(log_write_mmio);
EXPORT_TRACEPOINT_SYMBOL_GPL(rwmmio_write);
void log_post_write_mmio(u64 val, u8 width, volatile void __iomem *addr,
unsigned long caller_addr)
{
trace_rwmmio_post_write(caller_addr, val, width, addr);
}
EXPORT_SYMBOL_GPL(log_post_write_mmio);
EXPORT_TRACEPOINT_SYMBOL_GPL(rwmmio_post_write);
void log_read_mmio(u8 width, const volatile void __iomem *addr,
unsigned long caller_addr)
{
trace_rwmmio_read(caller_addr, width, addr);
}
EXPORT_SYMBOL_GPL(log_read_mmio);
EXPORT_TRACEPOINT_SYMBOL_GPL(rwmmio_read);
void log_post_read_mmio(u64 val, u8 width, const volatile void __iomem *addr,
unsigned long caller_addr)
{
trace_rwmmio_post_read(caller_addr, val, width, addr);
}
EXPORT_SYMBOL_GPL(log_post_read_mmio);
EXPORT_TRACEPOINT_SYMBOL_GPL(rwmmio_post_read);
#endif /* CONFIG_TRACE_MMIO_ACCESS */
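
As a usage illustration, a hedged sketch of a module hooking one of these
tracepoints: the probe body and module names are assumptions of the example,
while register_trace_rwmmio_write()/unregister_trace_rwmmio_write() are the
helpers TRACE_EVENT generates for the event declared above.

// SPDX-License-Identifier: GPL-2.0-only
/* Illustrative only: log rwmmio_write events via a tracepoint probe. */
#include <linux/module.h>
#include <linux/tracepoint.h>
#include <trace/events/rwmmio.h>

/* probe signature: void *data first, then the TP_PROTO arguments */
static void my_rwmmio_write_probe(void *data, unsigned long caller, u64 val,
				  u8 width, volatile void __iomem *addr)
{
	pr_info("mmio write from %pS: width=%u val=%#llx\n",
		(void *)caller, width, val);
}

static int __init my_probe_init(void)
{
	return register_trace_rwmmio_write(my_rwmmio_write_probe, NULL);
}

static void __exit my_probe_exit(void)
{
	unregister_trace_rwmmio_write(my_rwmmio_write_probe, NULL);
	tracepoint_synchronize_unregister();
}

module_init(my_probe_init);
module_exit(my_probe_exit);
MODULE_LICENSE("GPL");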
......@@ -639,14 +639,6 @@ config BOUNCE
memory available to the CPU. Enabled by default when HIGHMEM is
selected, but you may say n to override this.
config VIRT_TO_BUS
bool
help
An architecture should select this if it implements the
deprecated interface virt_to_bus(). All new architectures
should probably not select this.
config MMU_NOTIFIER
bool
select SRCU
......
......@@ -143,7 +143,7 @@
* record locks, but are "owned" by the open file description, not the
* process. This means that they are inherited across fork() like BSD (flock)
* locks, and they are only released automatically when the last reference to
* the the open file against which they were acquired is put.
* the open file against which they were acquired is put.
*/
#define F_OFD_GETLK 36
#define F_OFD_SETLK 37
......