Commit eae21770 authored by Linus Torvalds

Merge branch 'akpm' (patches from Andrew)

Merge third patch-bomb from Andrew Morton:
 "I'm pretty much done for -rc1 now:

   - the rest of MM, basically

   - lib/ updates

   - checkpatch, epoll, hfs, fatfs, ptrace, coredump, exit

   - cpu_mask simplifications

   - kexec, rapidio, MAINTAINERS etc, etc.

   - more dma-mapping cleanups/simplifications from hch"

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (109 commits)
  MAINTAINERS: add/fix git URLs for various subsystems
  mm: memcontrol: add "sock" to cgroup2 memory.stat
  mm: memcontrol: basic memory statistics in cgroup2 memory controller
  mm: memcontrol: do not uncharge old page in page cache replacement
  Documentation: cgroup: add memory.swap.{current,max} description
  mm: free swap cache aggressively if memcg swap is full
  mm: vmscan: do not scan anon pages if memcg swap limit is hit
  swap.h: move memcg related stuff to the end of the file
  mm: memcontrol: replace mem_cgroup_lruvec_online with mem_cgroup_online
  mm: vmscan: pass memcg to get_scan_count()
  mm: memcontrol: charge swap to cgroup2
  mm: memcontrol: clean up alloc, online, offline, free functions
  mm: memcontrol: flatten struct cg_proto
  mm: memcontrol: rein in the CONFIG space madness
  net: drop tcp_memcontrol.c
  mm: memcontrol: introduce CONFIG_MEMCG_LEGACY_KMEM
  mm: memcontrol: allow to disable kmem accounting for cgroup2
  mm: memcontrol: account "kmem" consumers in cgroup2 memory controller
  mm: memcontrol: move kmem accounting code to CONFIG_MEMCG
  mm: memcontrol: separate kmem code from legacy tcp accounting code
  ...
parents e9f57ebc 9f273c24
...@@ -1856,6 +1856,16 @@ S: Korte Heul 95
S: 1403 ND BUSSUM
S: The Netherlands
N: Martin Kepplinger
E: martink@posteo.de
E: martin.kepplinger@theobroma-systems.com
W: http://www.martinkepplinger.com
D: mma8452 accelerometers iio driver
D: Kernel cleanups
S: Garnisonstraße 26
S: 4020 Linz
S: Austria
N: Karl Keyte
E: karl@koft.com
D: Disk usage statistics and modifications to line printer driver
......
...@@ -951,16 +951,6 @@ to "Closing".
alignment constraints (e.g. the alignment constraints about 64-bit
objects).
3) Supporting multiple types of IOMMUs
If your architecture needs to support multiple types of IOMMUs, you
can use include/linux/asm-generic/dma-mapping-common.h. It's a
library to support the DMA API with multiple types of IOMMUs. Lots
of architectures (x86, powerpc, sh, alpha, ia64, microblaze and
sparc) use it. Choose one to see how it can be used. If you need to
support multiple types of IOMMUs in a single system, the example of
x86 or powerpc helps.
Closing
This document, and the API itself, would not be in its current
......
...@@ -819,6 +819,78 @@ PAGE_SIZE multiple when read back.
the cgroup. This may not exactly match the number of
processes killed but should generally be close.
memory.stat
A read-only flat-keyed file which exists on non-root cgroups.
This breaks down the cgroup's memory footprint into different
types of memory, type-specific details, and other information
on the state and past events of the memory management system.
All memory amounts are in bytes.
The entries are ordered to be human readable, and new entries
can show up in the middle. Don't rely on items remaining in a
fixed position; use the keys to look up specific values!
anon
Amount of memory used in anonymous mappings such as
brk(), sbrk(), and mmap(MAP_ANONYMOUS)
file
Amount of memory used to cache filesystem data,
including tmpfs and shared memory.
file_mapped
Amount of cached filesystem data mapped with mmap()
file_dirty
Amount of cached filesystem data that was modified but
not yet written back to disk
file_writeback
Amount of cached filesystem data that was modified and
is currently being written back to disk
inactive_anon
active_anon
inactive_file
active_file
unevictable
Amount of memory, swap-backed and filesystem-backed,
on the internal memory management lists used by the
page reclaim algorithm
pgfault
Total number of page faults incurred
pgmajfault
Number of major page faults incurred
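Since the entries are keyed, a reader should parse memory.stat by key
rather than by line number. A minimal user-space sketch (the cgroup
path, the helper name, and the trimmed error handling are illustrative,
not part of the interface):

#include <stdio.h>
#include <string.h>

/* Look up one "<key> <value>" entry in a cgroup2 memory.stat file. */
static long long memstat_lookup(const char *path, const char *key)
{
	char line[256];
	long long val = -1;
	FILE *f = fopen(path, "r");

	if (!f)
		return -1;
	/* Match on the key, never on position: new entries may
	 * appear in the middle of the file. */
	while (fgets(line, sizeof(line), f)) {
		char k[64];
		long long v;

		if (sscanf(line, "%63s %lld", k, &v) == 2 &&
		    !strcmp(k, key)) {
			val = v;
			break;
		}
	}
	fclose(f);
	return val;
}

int main(void)
{
	long long anon = memstat_lookup(
		"/sys/fs/cgroup/mygroup/memory.stat", "anon");

	printf("anon = %lld bytes\n", anon);
	return 0;
}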
memory.swap.current
A read-only single value file which exists on non-root
cgroups.
The total amount of swap currently being used by the cgroup
and its descendants.
memory.swap.max
A read-write single value file which exists on non-root
cgroups. The default is "max".
Swap usage hard limit. If a cgroup's swap usage reaches this
limit, anonymous memory of the cgroup will not be swapped out.
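As a hedged illustration of these two files (the cgroup path is an
example and a real program must check errors), a cap can be written to
memory.swap.max and usage read back from memory.swap.current:

#include <stdio.h>

int main(void)
{
	char buf[64];
	FILE *f;

	/* Hard-limit the group's swap to 1 GiB ("max" removes the limit). */
	f = fopen("/sys/fs/cgroup/mygroup/memory.swap.max", "w");
	if (f) {
		fputs("1073741824\n", f);
		fclose(f);
	}

	/* Read back the group's current swap usage, in bytes. */
	f = fopen("/sys/fs/cgroup/mygroup/memory.swap.current", "r");
	if (f) {
		if (fgets(buf, sizeof(buf), f))
			printf("swap in use: %s", buf);
		fclose(f);
	}
	return 0;
}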
5-2-2. General Usage
...@@ -1291,3 +1363,20 @@ allocation from the slack available in other groups or the rest of the
system than killing the group. Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.
The combined memory+swap accounting and limiting is replaced by real
control over swap space.
The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration. However, untrusted
groups can sabotage swapping by other means - such as referencing their
anonymous memory in a tight loop - and an admin cannot assume full
swappability when overcommitting untrusted jobs.
For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources. Swap space is a resource like all others in the system,
and that's why unified hierarchy allows distributing it separately.
#
# Feature name: dma_map_attrs
# Kconfig: HAVE_DMA_ATTRS
# description: arch provides dma_*map*_attrs() APIs
#
-----------------------
| arch |status|
-----------------------
| alpha: | ok |
| arc: | TODO |
| arm: | ok |
| arm64: | ok |
| avr32: | TODO |
| blackfin: | TODO |
| c6x: | TODO |
| cris: | TODO |
| frv: | TODO |
| h8300: | ok |
| hexagon: | ok |
| ia64: | ok |
| m32r: | TODO |
| m68k: | TODO |
| metag: | TODO |
| microblaze: | ok |
| mips: | ok |
| mn10300: | TODO |
| nios2: | TODO |
| openrisc: | ok |
| parisc: | TODO |
| powerpc: | ok |
| s390: | ok |
| score: | TODO |
| sh: | ok |
| sparc: | ok |
| tile: | ok |
| um: | TODO |
| unicore32: | ok |
| x86: | ok |
| xtensa: | TODO |
-----------------------
...@@ -180,6 +180,16 @@ dos1xfloppy -- If set, use a fallback default BIOS Parameter Block
<bool>: 0,1,yes,no,true,false
LIMITATION
---------------------------------------------------------------------
* The fallocated region of a file is discarded at umount/evict time
  when fallocate is used with FALLOC_FL_KEEP_SIZE.
  So the user should assume that the fallocated region can be
  discarded at the last close if there is memory pressure resulting
  in eviction of the inode from memory. As a result, anyone who
  depends on the fallocated region should recheck/redo the fallocate
  after reopening the file.
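A sketch of the recommended pattern (the mount point and sizes are
illustrative; error handling is abbreviated):

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/falloc.h>
#include <unistd.h>

int main(void)
{
	/* Reserve 1 MiB past EOF without changing i_size. */
	int fd = open("/mnt/fatfs/data.bin", O_RDWR | O_CREAT, 0644);

	if (fd < 0)
		return 1;
	fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, 1024 * 1024);
	close(fd);

	/* The reservation may be gone once the inode was evicted:
	 * redo it after reopening, before depending on it. */
	fd = open("/mnt/fatfs/data.bin", O_RDWR);
	if (fd < 0)
		return 1;
	fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, 1024 * 1024);
	close(fd);
	return 0;
}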
TODO
----------------------------------------------------------------------
* Need to get rid of the raw scanning stuff. Instead, always use
......
...@@ -611,6 +611,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
cgroup.memory= [KNL] Pass options to the cgroup memory controller.
Format: <string>
nosocket -- Disable socket memory accounting.
nokmem -- Disable kernel memory accounting.
checkreqprot [SELINUX] Set initial checkreqprot flag value.
Format: { "0" | "1" }
......
...@@ -825,14 +825,13 @@ via the /proc/sys interface:
Each write syscall must fully contain the sysctl value to be
written, and multiple writes on the same sysctl file descriptor
will rewrite the sysctl value, regardless of file position.
0 - (default) Same behavior as above, but warn about processes that
    perform writes to a sysctl file descriptor when the file position
    is not 0.
1 - Respect file position when writing sysctl strings. Multiple writes
    will append to the sysctl value buffer. Anything past the max length
    of the sysctl value buffer will be ignored. Writes to numeric sysctl
    entries must always be at file position 0 and the value must be
    fully contained in the buffer sent in the write syscall.
0 - Same behavior as above, but warn about processes that perform writes
    to a sysctl file descriptor when the file position is not 0.
1 - (default) Respect file position when writing sysctl strings. Multiple
    writes will append to the sysctl value buffer. Anything past the max
    length of the sysctl value buffer will be ignored. Writes to numeric
    sysctl entries must always be at file position 0 and the value must
    be fully contained in the buffer sent in the write syscall.
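A short sketch of the strict-write semantics (the sysctl chosen here is
only an example and requires the appropriate privileges):

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/proc/sys/kernel/hostname", O_WRONLY);

	if (fd < 0)
		return 1;
	write(fd, "node", 4);	/* file position 0: sets the value */
	write(fd, "-a", 2);	/* position 4: appends under mode 1,
				 * rewrites (with a warning) under mode 0 */
	close(fd);
	return 0;
}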
==============================================================
......
Undefined Behavior Sanitizer - UBSAN
Overview
--------
UBSAN is a runtime undefined behaviour checker.
It uses compile-time instrumentation to catch undefined behavior (UB):
the compiler inserts code that performs certain kinds of checks before
operations that may cause UB. If a check fails (i.e. UB is detected), a
__ubsan_handle_* function is called to print an error message.
GCC has supported this feature since 4.9.x [1] (see the
-fsanitize=undefined option and its suboptions). GCC 5.x implements
more checkers [2].
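For instance, the following function (illustrative only) contains the
class of bug flagged in the report below; shifting a 32-bit value by 32
or more is undefined:

unsigned int shift_ub(unsigned int val, unsigned int bits)
{
	/* UB when bits >= 32 for a 32-bit unsigned int; UBSAN reports
	 * the shift exponent and the type, as shown below. */
	return val << bits;
}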
Report example
---------------
================================================================================
UBSAN: Undefined behaviour in ../include/linux/bitops.h:110:33
shift exponent 32 is too large for 32-bit type 'unsigned int'
CPU: 0 PID: 0 Comm: swapper Not tainted 4.4.0-rc1+ #26
0000000000000000 ffffffff82403cc8 ffffffff815e6cd6 0000000000000001
ffffffff82403cf8 ffffffff82403ce0 ffffffff8163a5ed 0000000000000020
ffffffff82403d78 ffffffff8163ac2b ffffffff815f0001 0000000000000002
Call Trace:
[<ffffffff815e6cd6>] dump_stack+0x45/0x5f
[<ffffffff8163a5ed>] ubsan_epilogue+0xd/0x40
[<ffffffff8163ac2b>] __ubsan_handle_shift_out_of_bounds+0xeb/0x130
[<ffffffff815f0001>] ? radix_tree_gang_lookup_slot+0x51/0x150
[<ffffffff8173c586>] _mix_pool_bytes+0x1e6/0x480
[<ffffffff83105653>] ? dmi_walk_early+0x48/0x5c
[<ffffffff8173c881>] add_device_randomness+0x61/0x130
[<ffffffff83105b35>] ? dmi_save_one_device+0xaa/0xaa
[<ffffffff83105653>] dmi_walk_early+0x48/0x5c
[<ffffffff831066ae>] dmi_scan_machine+0x278/0x4b4
[<ffffffff8111d58a>] ? vprintk_default+0x1a/0x20
[<ffffffff830ad120>] ? early_idt_handler_array+0x120/0x120
[<ffffffff830b2240>] setup_arch+0x405/0xc2c
[<ffffffff830ad120>] ? early_idt_handler_array+0x120/0x120
[<ffffffff830ae053>] start_kernel+0x83/0x49a
[<ffffffff830ad120>] ? early_idt_handler_array+0x120/0x120
[<ffffffff830ad386>] x86_64_start_reservations+0x2a/0x2c
[<ffffffff830ad4f3>] x86_64_start_kernel+0x16b/0x17a
================================================================================
Usage
-----
To enable UBSAN, configure the kernel with:
CONFIG_UBSAN=y
and to check the entire kernel:
CONFIG_UBSAN_SANITIZE_ALL=y
To enable instrumentation for specific files or directories, add a line
similar to the following to the respective kernel Makefile:
For a single file (e.g. main.o):
UBSAN_SANITIZE_main.o := y
For all files in one directory:
UBSAN_SANITIZE := y
To exclude files from being instrumented even if
CONFIG_UBSAN_SANITIZE_ALL=y, use:
UBSAN_SANITIZE_main.o := n
and:
UBSAN_SANITIZE := n
Detection of unaligned accesses is controlled through the separate
option CONFIG_UBSAN_ALIGNMENT. It is off by default on architectures
that support unaligned accesses (CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y).
One can still enable it in the config, but note that it will produce a
lot of UBSAN reports.
References
----------
[1] - https://gcc.gnu.org/onlinedocs/gcc-4.9.0/gcc/Debugging-Options.html
[2] - https://gcc.gnu.org/onlinedocs/gcc/Debugging-Options.html
...@@ -411,7 +411,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN
export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KASAN CFLAGS_UBSAN
export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
...@@ -784,6 +784,7 @@ endif
include scripts/Makefile.kasan
include scripts/Makefile.extrawarn
include scripts/Makefile.ubsan
# Add any arch overrides and user supplied CPPFLAGS, AFLAGS and CFLAGS as the
# last assignments
......
...@@ -205,9 +205,6 @@ config HAVE_NMI_WATCHDOG
config HAVE_ARCH_TRACEHOOK
bool
config HAVE_DMA_ATTRS
bool
config HAVE_DMA_CONTIGUOUS
bool
...@@ -632,4 +629,7 @@ config OLD_SIGACTION
config COMPAT_OLD_SIGACTION
bool
config ARCH_NO_COHERENT_DMA_MMAP
bool
source "kernel/gcov/Kconfig" source "kernel/gcov/Kconfig"
...@@ -9,7 +9,6 @@ config ALPHA
select HAVE_OPROFILE
select HAVE_PCSPKR_PLATFORM
select HAVE_PERF_EVENTS
select HAVE_DMA_ATTRS
select VIRT_TO_BUS
select GENERIC_IRQ_PROBE
select AUTO_IRQ_AFFINITY if SMP
......
...@@ -10,8 +10,6 @@ static inline struct dma_map_ops *get_dma_ops(struct device *dev)
return dma_ops;
}
#include <asm-generic/dma-mapping-common.h>
#define dma_cache_sync(dev, va, size, dir) ((void)0)
#endif /* _ALPHA_DMA_MAPPING_H */
...@@ -47,7 +47,6 @@
#define MADV_WILLNEED 3 /* will need these pages */
#define MADV_SPACEAVAIL 5 /* ensure resources are available */
#define MADV_DONTNEED 6 /* don't need these pages */
#define MADV_FREE 7 /* free pages only if memory pressure */
/* common/generic parameters */
#define MADV_FREE 8 /* free pages only if memory pressure */
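For reference, the flag is used from user space via madvise(); a hedged
sketch, assuming headers that already define MADV_FREE:

#include <stddef.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 16 * 4096;
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;
	buf[0] = 1;			/* touch the pages */
	/* Lazily freeable: reclaimed only under memory pressure, and a
	 * later write to the range cancels the hint. */
	madvise(buf, len, MADV_FREE);
	munmap(buf, len);
	return 0;
}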
......
...@@ -11,192 +11,11 @@
#ifndef ASM_ARC_DMA_MAPPING_H
#define ASM_ARC_DMA_MAPPING_H
#include <asm-generic/dma-coherent.h>
#include <asm/cacheflush.h>
void *dma_alloc_noncoherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp);
void dma_free_noncoherent(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle);
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp);
void dma_free_coherent(struct device *dev, size_t size, void *kvaddr,
dma_addr_t dma_handle);
/* drivers/base/dma-mapping.c */
extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size);
extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr,
size_t size);
#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
/*
* streaming DMA Mapping API...
* CPU accesses page via normal paddr, thus needs to explicitly made
* consistent before each use
*/
static inline void __inline_dma_cache_sync(unsigned long paddr, size_t size,
enum dma_data_direction dir)
{
switch (dir) {
case DMA_FROM_DEVICE:
dma_cache_inv(paddr, size);
break;
case DMA_TO_DEVICE:
dma_cache_wback(paddr, size);
break;
case DMA_BIDIRECTIONAL:
dma_cache_wback_inv(paddr, size);
break;
default:
pr_err("Invalid DMA dir [%d] for OP @ %lx\n", dir, paddr);
}
}
void __arc_dma_cache_sync(unsigned long paddr, size_t size,
enum dma_data_direction dir);
#define _dma_cache_sync(addr, sz, dir) \
do { \
if (__builtin_constant_p(dir)) \
__inline_dma_cache_sync(addr, sz, dir); \
else \
__arc_dma_cache_sync(addr, sz, dir); \
} \
while (0);
static inline dma_addr_t
dma_map_single(struct device *dev, void *cpu_addr, size_t size,
enum dma_data_direction dir)
{
_dma_cache_sync((unsigned long)cpu_addr, size, dir);
return (dma_addr_t)cpu_addr;
}
static inline void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr,
size_t size, enum dma_data_direction dir)
{
}
static inline dma_addr_t
dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction dir)
{
unsigned long paddr = page_to_phys(page) + offset;
return dma_map_single(dev, (void *)paddr, size, dir);
}
static inline void
dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction dir)
{
}
static inline int
dma_map_sg(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir)
{
struct scatterlist *s;
int i;
for_each_sg(sg, s, nents, i)
s->dma_address = dma_map_page(dev, sg_page(s), s->offset,
s->length, dir);
return nents;
}
static inline void
dma_unmap_sg(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir)
{
struct scatterlist *s;
int i;
for_each_sg(sg, s, nents, i)
dma_unmap_page(dev, sg_dma_address(s), sg_dma_len(s), dir);
}
static inline void
dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction dir)
{
_dma_cache_sync(dma_handle, size, DMA_FROM_DEVICE);
}
static inline void
dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction dir)
{
_dma_cache_sync(dma_handle, size, DMA_TO_DEVICE);
}
static inline void
dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
_dma_cache_sync(dma_handle + offset, size, DMA_FROM_DEVICE);
}
static inline void
dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
_dma_cache_sync(dma_handle + offset, size, DMA_TO_DEVICE);
}
static inline void
dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sglist, int nelems,
enum dma_data_direction dir)
{
int i;
struct scatterlist *sg;
for_each_sg(sglist, sg, nelems, i)
_dma_cache_sync((unsigned int)sg_virt(sg), sg->length, dir);
}
static inline void
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sglist,
int nelems, enum dma_data_direction dir)
{
int i;
struct scatterlist *sg;
for_each_sg(sglist, sg, nelems, i)
_dma_cache_sync((unsigned int)sg_virt(sg), sg->length, dir);
}
static inline int dma_supported(struct device *dev, u64 dma_mask)
{
/* Support 32 bit DMA mask exclusively */
return dma_mask == DMA_BIT_MASK(32);
}
static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
return 0;
}
static inline int dma_set_mask(struct device *dev, u64 dma_mask)
{
if (!dev->dma_mask || !dma_supported(dev, dma_mask))
return -EIO;
*dev->dma_mask = dma_mask;
return 0;
}
extern struct dma_map_ops arc_dma_ops;
static inline struct dma_map_ops *get_dma_ops(struct device *dev)
{
return &arc_dma_ops;
}
#endif
...@@ -17,18 +17,14 @@
 */
#include <linux/dma-mapping.h>
#include <linux/dma-debug.h>
#include <linux/export.h>
#include <asm/cache.h>
#include <asm/cacheflush.h>
/*
 * Helpers for Coherent DMA API.
 */
void *dma_alloc_noncoherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp)
{
void *paddr;
static void *arc_dma_alloc(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp, struct dma_attrs *attrs)
{
void *paddr, *kvaddr;
/* This is linear addr (0x8000_0000 based) */
paddr = alloc_pages_exact(size, gfp);
...@@ -38,22 +34,6 @@ void *dma_alloc_noncoherent(struct device *dev, size_t size,
/* This is bus address, platform dependent */
*dma_handle = (dma_addr_t)paddr;
return paddr;
}
EXPORT_SYMBOL(dma_alloc_noncoherent);
void dma_free_noncoherent(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle)
{
free_pages_exact((void *)dma_handle, size);
}
EXPORT_SYMBOL(dma_free_noncoherent);
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp)
{
void *paddr, *kvaddr;
/*
 * IOC relies on all data (even coherent DMA data) being in cache
 * Thus allocate normal cached memory
...@@ -65,22 +45,15 @@ void *dma_alloc_coherent(struct device *dev, size_t size,
 * -For coherent data, Read/Write to buffers terminate early in cache
 *   (vs. always going to memory - thus are faster)
 */
if (is_isa_arcv2() && ioc_exists)
return dma_alloc_noncoherent(dev, size, dma_handle, gfp);
if ((is_isa_arcv2() && ioc_exists) ||
dma_get_attr(DMA_ATTR_NON_CONSISTENT, attrs))
return paddr;
/* This is linear addr (0x8000_0000 based) */
paddr = alloc_pages_exact(size, gfp);
if (!paddr)
return NULL;
/* This is kernel Virtual address (0x7000_0000 based) */
kvaddr = ioremap_nocache((unsigned long)paddr, size);
if (kvaddr == NULL)
return NULL;
/* This is bus address, platform dependent */
*dma_handle = (dma_addr_t)paddr;
/*
 * Evict any existing L1 and/or L2 lines for the backing page
 * in case it was used earlier as a normal "cached" page.
...@@ -95,26 +68,111 @@ void *dma_alloc_coherent(struct device *dev, size_t size,
return kvaddr;
}
EXPORT_SYMBOL(dma_alloc_coherent);
void dma_free_coherent(struct device *dev, size_t size, void *kvaddr,
dma_addr_t dma_handle)
{
if (is_isa_arcv2() && ioc_exists)
return dma_free_noncoherent(dev, size, kvaddr, dma_handle);
iounmap((void __force __iomem *)kvaddr);
free_pages_exact((void *)dma_handle, size);
}
static void arc_dma_free(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle, struct dma_attrs *attrs)
{
if (!dma_get_attr(DMA_ATTR_NON_CONSISTENT, attrs) &&
!(is_isa_arcv2() && ioc_exists))
iounmap((void __force __iomem *)vaddr);
free_pages_exact((void *)dma_handle, size);
}
EXPORT_SYMBOL(dma_free_coherent);
/*
 * Helper for streaming DMA...
 * streaming DMA Mapping API...
 * CPU accesses page via normal paddr, thus needs to explicitly made
 * consistent before each use
 */
void __arc_dma_cache_sync(unsigned long paddr, size_t size,
static void _dma_cache_sync(unsigned long paddr, size_t size,
enum dma_data_direction dir)
{
switch (dir) {
case DMA_FROM_DEVICE:
dma_cache_inv(paddr, size);
break;
case DMA_TO_DEVICE:
dma_cache_wback(paddr, size);
break;
case DMA_BIDIRECTIONAL:
dma_cache_wback_inv(paddr, size);
break;
default:
pr_err("Invalid DMA dir [%d] for OP @ %lx\n", dir, paddr);
}
}
static dma_addr_t arc_dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size, enum dma_data_direction dir,
struct dma_attrs *attrs)
{
unsigned long paddr = page_to_phys(page) + offset;
_dma_cache_sync(paddr, size, dir);
return (dma_addr_t)paddr;
}
static int arc_dma_map_sg(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir, struct dma_attrs *attrs)
{
struct scatterlist *s;
int i;
for_each_sg(sg, s, nents, i)
s->dma_address = dma_map_page(dev, sg_page(s), s->offset,
s->length, dir);
return nents;
}
static void arc_dma_sync_single_for_cpu(struct device *dev,
dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
{
_dma_cache_sync(dma_handle, size, DMA_FROM_DEVICE);
}
static void arc_dma_sync_single_for_device(struct device *dev,
dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
{
_dma_cache_sync(dma_handle, size, DMA_TO_DEVICE);
}
static void arc_dma_sync_sg_for_cpu(struct device *dev,
struct scatterlist *sglist, int nelems,
enum dma_data_direction dir)
{
__inline_dma_cache_sync(paddr, size, dir);
int i;
struct scatterlist *sg;
for_each_sg(sglist, sg, nelems, i)
_dma_cache_sync((unsigned int)sg_virt(sg), sg->length, dir);
}
EXPORT_SYMBOL(__arc_dma_cache_sync);
static void arc_dma_sync_sg_for_device(struct device *dev,
struct scatterlist *sglist, int nelems,
enum dma_data_direction dir)
{
int i;
struct scatterlist *sg;
for_each_sg(sglist, sg, nelems, i)
_dma_cache_sync((unsigned int)sg_virt(sg), sg->length, dir);
}
static int arc_dma_supported(struct device *dev, u64 dma_mask)
{
/* Support 32 bit DMA mask exclusively */
return dma_mask == DMA_BIT_MASK(32);
}
struct dma_map_ops arc_dma_ops = {
.alloc = arc_dma_alloc,
.free = arc_dma_free,
.map_page = arc_dma_map_page,
.map_sg = arc_dma_map_sg,
.sync_single_for_device = arc_dma_sync_single_for_device,
.sync_single_for_cpu = arc_dma_sync_single_for_cpu,
.sync_sg_for_cpu = arc_dma_sync_sg_for_cpu,
.sync_sg_for_device = arc_dma_sync_sg_for_device,
.dma_supported = arc_dma_supported,
};
EXPORT_SYMBOL(arc_dma_ops);
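The same conversion repeats for each architecture below: the per-arch
struct dma_map_ops is reached through get_dma_ops(), while drivers keep
calling the generic DMA API. A hedged driver-side sketch (the device
pointer and buffer size are illustrative):

#include <linux/dma-mapping.h>

static int example_dma_setup(struct device *dev)
{
	dma_addr_t handle;
	void *cpu_addr;

	/* Routed through the arch ops, e.g. arc_dma_ops.alloc above. */
	cpu_addr = dma_alloc_coherent(dev, PAGE_SIZE, &handle, GFP_KERNEL);
	if (!cpu_addr)
		return -ENOMEM;

	/* ... program the device with the bus address 'handle' ... */

	dma_free_coherent(dev, PAGE_SIZE, cpu_addr, handle);
	return 0;
}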
...@@ -47,7 +47,6 @@ config ARM
select HAVE_C_RECORDMCOUNT
select HAVE_DEBUG_KMEMLEAK
select HAVE_DMA_API_DEBUG
select HAVE_DMA_ATTRS
select HAVE_DMA_CONTIGUOUS if MMU
select HAVE_DYNAMIC_FTRACE if (!XIP_KERNEL) && !CPU_ENDIAN_BE32 && MMU
select HAVE_EFFICIENT_UNALIGNED_ACCESS if (CPU_V6 || CPU_V6K || CPU_V7) && MMU
......
...@@ -41,13 +41,6 @@ static inline void set_dma_ops(struct device *dev, struct dma_map_ops *ops)
#define HAVE_ARCH_DMA_SUPPORTED 1
extern int dma_supported(struct device *dev, u64 mask);
/*
* Note that while the generic code provides dummy dma_{alloc,free}_noncoherent
* implementations, we don't provide a dma_cache_sync function so drivers using
* this API are highlighted with build warnings.
*/
#include <asm-generic/dma-mapping-common.h>
#ifdef __arch_page_to_dma
#error Please update to __arch_pfn_to_dma
#endif
......
...@@ -64,7 +64,6 @@ config ARM64
select HAVE_DEBUG_BUGVERBOSE
select HAVE_DEBUG_KMEMLEAK
select HAVE_DMA_API_DEBUG
select HAVE_DMA_ATTRS
select HAVE_DMA_CONTIGUOUS
select HAVE_DYNAMIC_FTRACE
select HAVE_EFFICIENT_UNALIGNED_ACCESS
......
...@@ -64,8 +64,6 @@ static inline bool is_device_dma_coherent(struct device *dev)
return dev->archdata.dma_coherent;
}
#include <asm-generic/dma-mapping-common.h>
static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
{
return (dma_addr_t)paddr;
......
...@@ -9,9 +9,14 @@
#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/export.h>
#include <linux/mm.h>
#include <linux/device.h>
#include <linux/scatterlist.h>
#include <asm/addrspace.h>
#include <asm/processor.h>
#include <asm/cacheflush.h>
#include <asm/io.h>
#include <asm/addrspace.h>
void dma_cache_sync(struct device *dev, void *vaddr, size_t size, int direction)
{
...@@ -93,60 +98,100 @@ static void __dma_free(struct device *dev, size_t size,
__free_page(page++);
}
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *handle, gfp_t gfp)
{
struct page *page;
void *ret = NULL;
page = __dma_alloc(dev, size, handle, gfp);
if (page)
ret = phys_to_uncached(page_to_phys(page));
return ret;
static void *avr32_dma_alloc(struct device *dev, size_t size,
dma_addr_t *handle, gfp_t gfp, struct dma_attrs *attrs)
{
struct page *page;
dma_addr_t phys;
page = __dma_alloc(dev, size, handle, gfp);
if (!page)
return NULL;
phys = page_to_phys(page);
if (dma_get_attr(DMA_ATTR_WRITE_COMBINE, attrs)) {
/* Now, map the page into P3 with write-combining turned on */
*handle = phys;
return __ioremap(phys, size, _PAGE_BUFFER);
} else {
return phys_to_uncached(phys);
}
} }
EXPORT_SYMBOL(dma_alloc_coherent);
void dma_free_coherent(struct device *dev, size_t size,
void *cpu_addr, dma_addr_t handle)
{
void *addr = phys_to_cached(uncached_to_phys(cpu_addr));
struct page *page;
pr_debug("dma_free_coherent addr %p (phys %08lx) size %u\n",
static void avr32_dma_free(struct device *dev, size_t size,
void *cpu_addr, dma_addr_t handle, struct dma_attrs *attrs)
{
struct page *page;
if (dma_get_attr(DMA_ATTR_WRITE_COMBINE, attrs)) {
iounmap(cpu_addr);
page = phys_to_page(handle);
} else {
void *addr = phys_to_cached(uncached_to_phys(cpu_addr));
pr_debug("avr32_dma_free addr %p (phys %08lx) size %u\n",
cpu_addr, (unsigned long)handle, (unsigned)size);
BUG_ON(!virt_addr_valid(addr));
page = virt_to_page(addr);
}
__dma_free(dev, size, page, handle);
}
EXPORT_SYMBOL(dma_free_coherent);
void *dma_alloc_writecombine(struct device *dev, size_t size,
dma_addr_t *handle, gfp_t gfp)
{
struct page *page;
dma_addr_t phys;
page = __dma_alloc(dev, size, handle, gfp);
if (!page)
return NULL;
phys = page_to_phys(page);
*handle = phys;
/* Now, map the page into P3 with write-combining turned on */
return __ioremap(phys, size, _PAGE_BUFFER);
}
static dma_addr_t avr32_dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction direction, struct dma_attrs *attrs)
{
void *cpu_addr = page_address(page) + offset;
dma_cache_sync(dev, cpu_addr, size, direction);
return virt_to_bus(cpu_addr);
}
static int avr32_dma_map_sg(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction direction,
struct dma_attrs *attrs)
{
int i;
struct scatterlist *sg;
for_each_sg(sglist, sg, nents, i) {
char *virt;
sg->dma_address = page_to_bus(sg_page(sg)) + sg->offset;
virt = sg_virt(sg);
dma_cache_sync(dev, virt, sg->length, direction);
}
return nents;
}
EXPORT_SYMBOL(dma_alloc_writecombine);
void dma_free_writecombine(struct device *dev, size_t size,
void *cpu_addr, dma_addr_t handle)
{
struct page *page;
iounmap(cpu_addr);
page = phys_to_page(handle);
__dma_free(dev, size, page, handle);
}
static void avr32_dma_sync_single_for_device(struct device *dev,
dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{
dma_cache_sync(dev, bus_to_virt(dma_handle), size, direction);
}
static void avr32_dma_sync_sg_for_device(struct device *dev,
struct scatterlist *sglist, int nents,
enum dma_data_direction direction)
{
int i;
struct scatterlist *sg;
for_each_sg(sglist, sg, nents, i)
dma_cache_sync(dev, sg_virt(sg), sg->length, direction);
}
EXPORT_SYMBOL(dma_free_writecombine);
struct dma_map_ops avr32_dma_ops = {
.alloc = avr32_dma_alloc,
.free = avr32_dma_free,
.map_page = avr32_dma_map_page,
.map_sg = avr32_dma_map_sg,
.sync_single_for_device = avr32_dma_sync_single_for_device,
.sync_sg_for_device = avr32_dma_sync_sg_for_device,
};
EXPORT_SYMBOL(avr32_dma_ops);
...@@ -8,36 +8,6 @@
#define _BLACKFIN_DMA_MAPPING_H
#include <asm/cacheflush.h>
struct scatterlist;
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp);
void dma_free_coherent(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle);
/*
* Now for the API extensions over the pci_ one
*/
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
#define dma_supported(d, m) (1)
static inline int
dma_set_mask(struct device *dev, u64 dma_mask)
{
if (!dev->dma_mask || !dma_supported(dev, dma_mask))
return -EIO;
*dev->dma_mask = dma_mask;
return 0;
}
static inline int
dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
return 0;
}
extern void
__dma_sync(dma_addr_t addr, size_t size, enum dma_data_direction dir);
...@@ -66,102 +36,11 @@ _dma_sync(dma_addr_t addr, size_t size, enum dma_data_direction dir)
__dma_sync(addr, size, dir);
}
static inline dma_addr_t
dma_map_single(struct device *dev, void *ptr, size_t size,
enum dma_data_direction dir)
{
_dma_sync((dma_addr_t)ptr, size, dir);
return (dma_addr_t) ptr;
}
static inline dma_addr_t
dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction dir)
{
return dma_map_single(dev, page_address(page) + offset, size, dir);
}
static inline void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
enum dma_data_direction dir)
{
BUG_ON(!valid_dma_direction(dir));
}
static inline void
dma_unmap_page(struct device *dev, dma_addr_t dma_addr, size_t size,
enum dma_data_direction dir)
{
dma_unmap_single(dev, dma_addr, size, dir);
}
extern int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
enum dma_data_direction dir);
static inline void
dma_unmap_sg(struct device *dev, struct scatterlist *sg,
int nhwentries, enum dma_data_direction dir)
{
BUG_ON(!valid_dma_direction(dir));
}
static inline void
dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t handle,
unsigned long offset, size_t size,
enum dma_data_direction dir)
{
BUG_ON(!valid_dma_direction(dir));
}
static inline void
dma_sync_single_range_for_device(struct device *dev, dma_addr_t handle,
unsigned long offset, size_t size,
enum dma_data_direction dir)
{
_dma_sync(handle + offset, size, dir);
}
static inline void
dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle, size_t size,
enum dma_data_direction dir)
{
dma_sync_single_range_for_cpu(dev, handle, 0, size, dir);
}
static inline void
dma_sync_single_for_device(struct device *dev, dma_addr_t handle, size_t size,
enum dma_data_direction dir)
{
dma_sync_single_range_for_device(dev, handle, 0, size, dir);
}
static inline void
dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nents,
enum dma_data_direction dir)
{
BUG_ON(!valid_dma_direction(dir));
}
extern void
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir);
static inline void
dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction dir)
{
_dma_sync((dma_addr_t)vaddr, size, dir);
}
/* drivers/base/dma-mapping.c */
extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size);
extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr,
size_t size);
#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
extern struct dma_map_ops bfin_dma_ops;
static inline struct dma_map_ops *get_dma_ops(struct device *dev)
{
return &bfin_dma_ops;
}
#endif /* _BLACKFIN_DMA_MAPPING_H */
...@@ -78,8 +78,8 @@ static void __free_dma_pages(unsigned long addr, unsigned int pages)
spin_unlock_irqrestore(&dma_page_lock, flags);
}
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp)
static void *bfin_dma_alloc(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp, struct dma_attrs *attrs)
{
void *ret;
...@@ -92,15 +92,12 @@ void *dma_alloc_coherent(struct device *dev, size_t size,
return ret;
}
EXPORT_SYMBOL(dma_alloc_coherent);
void
dma_free_coherent(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle)
static void bfin_dma_free(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle, struct dma_attrs *attrs)
{
__free_dma_pages((unsigned long)vaddr, get_pages(size));
}
EXPORT_SYMBOL(dma_free_coherent);
/*
 * Streaming DMA mappings
...@@ -112,9 +109,9 @@ void __dma_sync(dma_addr_t addr, size_t size,
}
EXPORT_SYMBOL(__dma_sync);
int
dma_map_sg(struct device *dev, struct scatterlist *sg_list, int nents,
enum dma_data_direction direction)
static int bfin_dma_map_sg(struct device *dev, struct scatterlist *sg_list,
int nents, enum dma_data_direction direction,
struct dma_attrs *attrs)
{
struct scatterlist *sg;
int i;
...@@ -126,10 +123,10 @@ dma_map_sg(struct device *dev, struct scatterlist *sg_list, int nents,
return nents;
}
EXPORT_SYMBOL(dma_map_sg);
void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg_list,
int nelems, enum dma_data_direction direction)
static void bfin_dma_sync_sg_for_device(struct device *dev,
struct scatterlist *sg_list, int nelems,
enum dma_data_direction direction)
{
struct scatterlist *sg;
int i;
...@@ -139,4 +136,31 @@ void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg_list,
__dma_sync(sg_dma_address(sg), sg_dma_len(sg), direction);
}
}
EXPORT_SYMBOL(dma_sync_sg_for_device);
static dma_addr_t bfin_dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size, enum dma_data_direction dir,
struct dma_attrs *attrs)
{
dma_addr_t handle = (dma_addr_t)(page_address(page) + offset);
_dma_sync(handle, size, dir);
return handle;
}
static inline void bfin_dma_sync_single_for_device(struct device *dev,
dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
_dma_sync(handle, size, dir);
}
struct dma_map_ops bfin_dma_ops = {
.alloc = bfin_dma_alloc,
.free = bfin_dma_free,
.map_page = bfin_dma_map_page,
.map_sg = bfin_dma_map_sg,
.sync_single_for_device = bfin_dma_sync_single_for_device,
.sync_sg_for_device = bfin_dma_sync_sg_for_device,
};
EXPORT_SYMBOL(bfin_dma_ops);
...@@ -17,6 +17,7 @@ config C6X
select OF_EARLY_FLATTREE
select GENERIC_CLOCKEVENTS
select MODULES_USE_ELF_RELA
select ARCH_NO_COHERENT_DMA_MMAP
config MMU
def_bool n
......
...@@ -12,104 +12,22 @@
#ifndef _ASM_C6X_DMA_MAPPING_H
#define _ASM_C6X_DMA_MAPPING_H
#include <linux/dma-debug.h>
#include <asm-generic/dma-coherent.h>
#define dma_supported(d, m) 1
static inline void dma_sync_single_range_for_device(struct device *dev,
dma_addr_t addr,
unsigned long offset,
size_t size,
enum dma_data_direction dir)
{
}
static inline int dma_set_mask(struct device *dev, u64 dma_mask)
{
if (!dev->dma_mask || !dma_supported(dev, dma_mask))
return -EIO;
*dev->dma_mask = dma_mask;
return 0;
}
/*
 * DMA errors are defined by all-bits-set in the DMA address.
 */
static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
debug_dma_mapping_error(dev, dma_addr);
return dma_addr == ~0;
}
extern dma_addr_t dma_map_single(struct device *dev, void *cpu_addr,
size_t size, enum dma_data_direction dir);
extern void dma_unmap_single(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir);
extern int dma_map_sg(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction direction);
extern void dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction direction);
static inline dma_addr_t dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction dir)
{
dma_addr_t handle;
handle = dma_map_single(dev, page_address(page) + offset, size, dir);
debug_dma_map_page(dev, page, offset, size, dir, handle, false);
return handle;
}
static inline void dma_unmap_page(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir)
{
dma_unmap_single(dev, handle, size, dir);
debug_dma_unmap_page(dev, handle, size, dir, false);
}
extern void dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir);
extern void dma_sync_single_for_device(struct device *dev, dma_addr_t handle,
size_t size,
enum dma_data_direction dir);
extern void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir);
extern void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir);
extern void coherent_mem_init(u32 start, u32 size);
extern void *dma_alloc_coherent(struct device *, size_t, dma_addr_t *, gfp_t);
extern void dma_free_coherent(struct device *, size_t, void *, dma_addr_t);
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent((d), (s), (h), (f))
#define dma_free_noncoherent(d, s, v, h) dma_free_coherent((d), (s), (v), (h))
/* Not supported for now */
static inline int dma_mmap_coherent(struct device *dev,
struct vm_area_struct *vma, void *cpu_addr,
dma_addr_t dma_addr, size_t size)
{
return -EINVAL;
}
static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr,
size_t size)
{
return -EINVAL;
}
#define DMA_ERROR_CODE ~0
extern struct dma_map_ops c6x_dma_ops;
static inline struct dma_map_ops *get_dma_ops(struct device *dev)
{
return &c6x_dma_ops;
}
void *c6x_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
gfp_t gfp, struct dma_attrs *attrs);
void c6x_dma_free(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle, struct dma_attrs *attrs);
#endif /* _ASM_C6X_DMA_MAPPING_H */
...@@ -36,110 +36,101 @@ static void c6x_dma_sync(dma_addr_t handle, size_t size,
}
}
dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size,
enum dma_data_direction dir)
{
dma_addr_t addr = virt_to_phys(ptr);
c6x_dma_sync(addr, size, dir);
debug_dma_map_page(dev, virt_to_page(ptr),
(unsigned long)ptr & ~PAGE_MASK, size,
dir, addr, true);
return addr;
}
static dma_addr_t c6x_dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size, enum dma_data_direction dir,
struct dma_attrs *attrs)
{
dma_addr_t handle = virt_to_phys(page_address(page) + offset);
c6x_dma_sync(handle, size, dir);
return handle;
}
EXPORT_SYMBOL(dma_map_single);
void dma_unmap_single(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir)
{
c6x_dma_sync(handle, size, dir);
debug_dma_unmap_page(dev, handle, size, dir, false);
}
static void c6x_dma_unmap_page(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir, struct dma_attrs *attrs)
{
c6x_dma_sync(handle, size, dir);
}
EXPORT_SYMBOL(dma_unmap_single);
int dma_map_sg(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction dir)
{
struct scatterlist *sg;
int i;
for_each_sg(sglist, sg, nents, i)
sg->dma_address = dma_map_single(dev, sg_virt(sg), sg->length,
dir);
debug_dma_map_sg(dev, sglist, nents, nents, dir);
return nents;
}
static int c6x_dma_map_sg(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction dir, struct dma_attrs *attrs)
{
struct scatterlist *sg;
int i;
for_each_sg(sglist, sg, nents, i) {
sg->dma_address = sg_phys(sg);
c6x_dma_sync(sg->dma_address, sg->length, dir);
}
return nents;
}
EXPORT_SYMBOL(dma_map_sg);
void dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction dir)
{
struct scatterlist *sg;
int i;
for_each_sg(sglist, sg, nents, i)
dma_unmap_single(dev, sg_dma_address(sg), sg->length, dir);
debug_dma_unmap_sg(dev, sglist, nents, dir);
}
static void c6x_dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction dir,
struct dma_attrs *attrs)
{
struct scatterlist *sg;
int i;
for_each_sg(sglist, sg, nents, i)
c6x_dma_sync(sg_dma_address(sg), sg->length, dir);
}
EXPORT_SYMBOL(dma_unmap_sg);
void dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir)
{
c6x_dma_sync(handle, size, dir);
debug_dma_sync_single_for_cpu(dev, handle, size, dir);
}
static void c6x_dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir)
{
c6x_dma_sync(handle, size, dir);
}
EXPORT_SYMBOL(dma_sync_single_for_cpu);
void dma_sync_single_for_device(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir)
{
c6x_dma_sync(handle, size, dir);
debug_dma_sync_single_for_device(dev, handle, size, dir);
}
static void c6x_dma_sync_single_for_device(struct device *dev,
dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
c6x_dma_sync(handle, size, dir);
}
EXPORT_SYMBOL(dma_sync_single_for_device);
void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction dir)
{
struct scatterlist *sg;
int i;
for_each_sg(sglist, sg, nents, i)
dma_sync_single_for_cpu(dev, sg_dma_address(sg),
sg->length, dir);
debug_dma_sync_sg_for_cpu(dev, sglist, nents, dir);
}
static void c6x_dma_sync_sg_for_cpu(struct device *dev,
struct scatterlist *sglist, int nents,
enum dma_data_direction dir)
{
struct scatterlist *sg;
int i;
for_each_sg(sglist, sg, nents, i)
c6x_dma_sync_single_for_cpu(dev, sg_dma_address(sg),
sg->length, dir);
}
EXPORT_SYMBOL(dma_sync_sg_for_cpu);
void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sglist,
int nents, enum dma_data_direction dir)
{
struct scatterlist *sg;
int i;
for_each_sg(sglist, sg, nents, i)
dma_sync_single_for_device(dev, sg_dma_address(sg),
sg->length, dir);
debug_dma_sync_sg_for_device(dev, sglist, nents, dir);
}
static void c6x_dma_sync_sg_for_device(struct device *dev,
struct scatterlist *sglist, int nents,
enum dma_data_direction dir)
{
struct scatterlist *sg;
int i;
for_each_sg(sglist, sg, nents, i)
c6x_dma_sync_single_for_device(dev, sg_dma_address(sg),
sg->length, dir);
}
EXPORT_SYMBOL(dma_sync_sg_for_device);
struct dma_map_ops c6x_dma_ops = {
.alloc = c6x_dma_alloc,
.free = c6x_dma_free,
.map_page = c6x_dma_map_page,
.unmap_page = c6x_dma_unmap_page,
.map_sg = c6x_dma_map_sg,
.unmap_sg = c6x_dma_unmap_sg,
.sync_single_for_device = c6x_dma_sync_single_for_device,
.sync_single_for_cpu = c6x_dma_sync_single_for_cpu,
.sync_sg_for_device = c6x_dma_sync_sg_for_device,
.sync_sg_for_cpu = c6x_dma_sync_sg_for_cpu,
};
EXPORT_SYMBOL(c6x_dma_ops);
/* Number of entries preallocated for DMA-API debugging */
#define PREALLOC_DMA_DEBUG_ENTRIES (1 << 16)
......
...@@ -73,8 +73,8 @@ static void __free_dma_pages(u32 addr, int order)
 * Allocate DMA coherent memory space and return both the kernel
 * virtual and DMA address for that space.
 */
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *handle, gfp_t gfp)
void *c6x_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
gfp_t gfp, struct dma_attrs *attrs)
{
u32 paddr;
int order;
...@@ -94,13 +94,12 @@ void *dma_alloc_coherent(struct device *dev, size_t size,
return phys_to_virt(paddr);
}
EXPORT_SYMBOL(dma_alloc_coherent);
/*
 * Free DMA coherent memory as defined by the above mapping.
 */
void dma_free_coherent(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle)
void c6x_dma_free(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle, struct dma_attrs *attrs)
{
int order;
...@@ -111,7 +110,6 @@ void dma_free_coherent(struct device *dev, size_t size, void *vaddr,
__free_dma_pages(virt_to_phys(vaddr), order);
}
EXPORT_SYMBOL(dma_free_coherent);
/*
 * Initialise the coherent DMA memory allocator using the given uncached region.
......
...@@ -16,21 +16,18 @@
#include <linux/gfp.h>
#include <asm/io.h>
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp)
static void *v32_dma_alloc(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp, struct dma_attrs *attrs)
{
void *ret;
int order = get_order(size);
/* ignore region specifiers */
gfp &= ~(__GFP_DMA | __GFP_HIGHMEM);
if (dma_alloc_from_coherent(dev, size, dma_handle, &ret))
return ret;
if (dev == NULL || (dev->coherent_dma_mask < 0xffffffff))
gfp |= GFP_DMA;
ret = (void *)__get_free_pages(gfp, order);
ret = (void *)__get_free_pages(gfp, get_order(size));
if (ret != NULL) {
memset(ret, 0, size);
...@@ -39,12 +36,45 @@ void *dma_alloc_coherent(struct device *dev, size_t size,
return ret;
}
void dma_free_coherent(struct device *dev, size_t size,
void *vaddr, dma_addr_t dma_handle)
{
int order = get_order(size);
if (!dma_release_from_coherent(dev, order, vaddr))
free_pages((unsigned long)vaddr, order);
}
static void v32_dma_free(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle, struct dma_attrs *attrs)
{
free_pages((unsigned long)vaddr, get_order(size));
}
static inline dma_addr_t v32_dma_map_page(struct device *dev,
struct page *page, unsigned long offset, size_t size,
enum dma_data_direction direction,
struct dma_attrs *attrs)
{
return page_to_phys(page) + offset;
}
static inline int v32_dma_map_sg(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction direction,
struct dma_attrs *attrs)
{
printk("Map sg\n");
return nents;
}
static inline int v32_dma_supported(struct device *dev, u64 mask)
{
/*
 * we fall back to GFP_DMA when the mask isn't all 1s,
 * so we can't guarantee allocations that must be
 * within a tighter range than GFP_DMA..
 */
if (mask < 0x00ffffff)
return 0;
return 1;
}
struct dma_map_ops v32_dma_ops = {
.alloc = v32_dma_alloc,
.free = v32_dma_free,
.map_page = v32_dma_map_page,
.map_sg = v32_dma_map_sg,
.dma_supported = v32_dma_supported,
};
EXPORT_SYMBOL(v32_dma_ops);
/* DMA mapping. Nothing tricky here, just virt_to_phys */
#ifndef _ASM_CRIS_DMA_MAPPING_H
#define _ASM_CRIS_DMA_MAPPING_H
#include <linux/mm.h>
#include <linux/kernel.h>
#include <linux/scatterlist.h>
#include <asm/cache.h>
#include <asm/io.h>
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
#ifdef CONFIG_PCI
#include <asm-generic/dma-coherent.h>
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t flag);
void dma_free_coherent(struct device *dev, size_t size,
void *vaddr, dma_addr_t dma_handle);
#else
static inline void *
dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle,
gfp_t flag)
{
BUG();
return NULL;
}
static inline void
dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
dma_addr_t dma_handle)
{
BUG();
}
extern struct dma_map_ops v32_dma_ops;
static inline struct dma_map_ops *get_dma_ops(struct device *dev)
{
return &v32_dma_ops;
}
#else
static inline struct dma_map_ops *get_dma_ops(struct device *dev)
{
BUG();
return NULL;
}
#endif
static inline dma_addr_t
dma_map_single(struct device *dev, void *ptr, size_t size,
enum dma_data_direction direction)
{
BUG_ON(direction == DMA_NONE);
return virt_to_phys(ptr);
}
static inline void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
enum dma_data_direction direction)
{
BUG_ON(direction == DMA_NONE);
}
static inline int
dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
enum dma_data_direction direction)
{
printk("Map sg\n");
return nents;
}
static inline dma_addr_t
dma_map_page(struct device *dev, struct page *page, unsigned long offset,
size_t size, enum dma_data_direction direction)
{
BUG_ON(direction == DMA_NONE);
return page_to_phys(page) + offset;
}
static inline void
dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
enum dma_data_direction direction)
{
BUG_ON(direction == DMA_NONE);
}
static inline void
dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
enum dma_data_direction direction)
{
BUG_ON(direction == DMA_NONE);
}
static inline void
dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{
}
static inline void
dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{
}
static inline void
dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
}
static inline void
dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
}
static inline void
dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
enum dma_data_direction direction)
{
}
static inline void
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
enum dma_data_direction direction)
{
}
static inline int
dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
return 0;
}
static inline int
dma_supported(struct device *dev, u64 mask)
{
/*
* we fall back to GFP_DMA when the mask isn't all 1s,
* so we can't guarantee allocations that must be
* within a tighter range than GFP_DMA..
*/
if(mask < 0x00ffffff)
return 0;
return 1;
}
static inline int
dma_set_mask(struct device *dev, u64 mask)
{
if(!dev->dma_mask || !dma_supported(dev, mask))
return -EIO;
*dev->dma_mask = mask;
return 0;
}
static inline void static inline void
dma_cache_sync(struct device *dev, void *vaddr, size_t size, dma_cache_sync(struct device *dev, void *vaddr, size_t size,
...@@ -158,15 +22,4 @@ dma_cache_sync(struct device *dev, void *vaddr, size_t size, ...@@ -158,15 +22,4 @@ dma_cache_sync(struct device *dev, void *vaddr, size_t size,
{ {
} }
/* drivers/base/dma-mapping.c */
extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size);
extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr,
size_t size);
#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
#endif #endif
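With get_dma_ops() in place, the whole dma_map_single()/dma_unmap_*()/dma_sync_*() family this header used to open-code is supplied by the common dma-mapping code. Roughly (a paraphrase; the real helper also runs debug and kmemcheck hooks), the generic single mapping is built on the .map_page hook:

#include <linux/dma-mapping.h>
#include <linux/mm.h>

/* Sketch of the generic helper, heavily abridged. */
static inline dma_addr_t sketch_map_single(struct device *dev, void *ptr,
		size_t size, enum dma_data_direction dir)
{
	struct dma_map_ops *ops = get_dma_ops(dev);

	/* a linear buffer is mapped as the page/offset pair it lives in */
	return ops->map_page(dev, virt_to_page(ptr),
			     offset_in_page(ptr), size, dir, NULL);
}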
@@ -15,6 +15,7 @@ config FRV
 	select OLD_SIGSUSPEND3
 	select OLD_SIGACTION
 	select HAVE_DEBUG_STACKOVERFLOW
+	select ARCH_NO_COHERENT_DMA_MMAP
 
 config ZONE_DMA
 	bool
...
#ifndef _ASM_DMA_MAPPING_H #ifndef _ASM_DMA_MAPPING_H
#define _ASM_DMA_MAPPING_H #define _ASM_DMA_MAPPING_H
#include <linux/device.h>
#include <linux/scatterlist.h>
#include <asm/cache.h> #include <asm/cache.h>
#include <asm/cacheflush.h> #include <asm/cacheflush.h>
#include <asm/io.h>
/*
* See Documentation/DMA-API.txt for the description of how the
* following DMA API should work.
*/
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
extern unsigned long __nongprelbss dma_coherent_mem_start; extern unsigned long __nongprelbss dma_coherent_mem_start;
extern unsigned long __nongprelbss dma_coherent_mem_end; extern unsigned long __nongprelbss dma_coherent_mem_end;
void *dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t gfp); extern struct dma_map_ops frv_dma_ops;
void dma_free_coherent(struct device *dev, size_t size, void *vaddr, dma_addr_t dma_handle);
extern dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size,
enum dma_data_direction direction);
static inline
void dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
enum dma_data_direction direction)
{
BUG_ON(direction == DMA_NONE);
}
extern int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
enum dma_data_direction direction);
static inline
void dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
enum dma_data_direction direction)
{
BUG_ON(direction == DMA_NONE);
}
extern
dma_addr_t dma_map_page(struct device *dev, struct page *page, unsigned long offset,
size_t size, enum dma_data_direction direction);
static inline
void dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
enum dma_data_direction direction)
{
BUG_ON(direction == DMA_NONE);
}
static inline
void dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{
}
static inline
void dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{
flush_write_buffers();
}
static inline
void dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
}
static inline
void dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
flush_write_buffers();
}
static inline
void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
enum dma_data_direction direction)
{
}
static inline
void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
enum dma_data_direction direction)
{
flush_write_buffers();
}
static inline
int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
return 0;
}
static inline
int dma_supported(struct device *dev, u64 mask)
{
/*
* we fall back to GFP_DMA when the mask isn't all 1s,
* so we can't guarantee allocations that must be
* within a tighter range than GFP_DMA..
*/
if (mask < 0x00ffffff)
return 0;
return 1;
}
static inline static inline struct dma_map_ops *get_dma_ops(struct device *dev)
int dma_set_mask(struct device *dev, u64 mask)
{ {
if (!dev->dma_mask || !dma_supported(dev, mask)) return &frv_dma_ops;
return -EIO;
*dev->dma_mask = mask;
return 0;
} }
static inline static inline
...@@ -132,19 +21,4 @@ void dma_cache_sync(struct device *dev, void *vaddr, size_t size, ...@@ -132,19 +21,4 @@ void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
flush_write_buffers(); flush_write_buffers();
} }
/* Not supported for now */
static inline int dma_mmap_coherent(struct device *dev,
struct vm_area_struct *vma, void *cpu_addr,
dma_addr_t dma_addr, size_t size)
{
return -EINVAL;
}
static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr,
size_t size)
{
return -EINVAL;
}
#endif /* _ASM_DMA_MAPPING_H */ #endif /* _ASM_DMA_MAPPING_H */
@@ -43,9 +43,20 @@ static inline unsigned long _swapl(unsigned long v)
 //#define __iormb() asm volatile("membar")
 //#define __iowmb() asm volatile("membar")
 
-#define __raw_readb __builtin_read8
-#define __raw_readw __builtin_read16
-#define __raw_readl __builtin_read32
+static inline u8 __raw_readb(const volatile void __iomem *addr)
+{
+	return __builtin_read8((volatile void __iomem *)addr);
+}
+
+static inline u16 __raw_readw(const volatile void __iomem *addr)
+{
+	return __builtin_read16((volatile void __iomem *)addr);
+}
+
+static inline u32 __raw_readl(const volatile void __iomem *addr)
+{
+	return __builtin_read32((volatile void __iomem *)addr);
+}
 
 #define __raw_writeb(datum, addr) __builtin_write8(addr, datum)
 #define __raw_writew(datum, addr) __builtin_write16(addr, datum)
...
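Turning the __raw_read*() macros into inline functions gives the accessors a real prototype: the compiler now type-checks the argument, and the const/volatile qualifiers the generic code carries are cast away in exactly one place instead of at every call site. Judging from the new casts, the point is that a caller holding a const-qualified cookie compiles cleanly, e.g. (the register offset here is illustrative):

static u8 sketch_read_status(const volatile void __iomem *regs)
{
	/* the inline accepts the const pointer and casts internally */
	return __raw_readb(regs + 0x04);
}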
@@ -34,7 +34,8 @@ struct dma_alloc_record {
 static DEFINE_SPINLOCK(dma_alloc_lock);
 static LIST_HEAD(dma_alloc_list);
 
-void *dma_alloc_coherent(struct device *hwdev, size_t size, dma_addr_t *dma_handle, gfp_t gfp)
+static void *frv_dma_alloc(struct device *hwdev, size_t size, dma_addr_t *dma_handle,
+		gfp_t gfp, struct dma_attrs *attrs)
 {
 	struct dma_alloc_record *new;
 	struct list_head *this = &dma_alloc_list;
@@ -84,9 +85,8 @@ void *dma_alloc_coherent(struct device *hwdev, size_t size, dma_addr_t *dma_hand
 	return NULL;
 }
 
-EXPORT_SYMBOL(dma_alloc_coherent);
-
-void dma_free_coherent(struct device *hwdev, size_t size, void *vaddr, dma_addr_t dma_handle)
+static void frv_dma_free(struct device *hwdev, size_t size, void *vaddr,
+		dma_addr_t dma_handle, struct dma_attrs *attrs)
 {
 	struct dma_alloc_record *rec;
 	unsigned long flags;
@@ -105,22 +105,9 @@ void dma_free_coherent(struct device *hwdev, size_t size, void *vaddr, dma_addr_
 	BUG();
 }
 
-EXPORT_SYMBOL(dma_free_coherent);
-
-dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size,
-		enum dma_data_direction direction)
-{
-	BUG_ON(direction == DMA_NONE);
-
-	frv_cache_wback_inv((unsigned long) ptr, (unsigned long) ptr + size);
-
-	return virt_to_bus(ptr);
-}
-
-EXPORT_SYMBOL(dma_map_single);
-
-int dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
-		enum dma_data_direction direction)
+static int frv_dma_map_sg(struct device *dev, struct scatterlist *sglist,
+		int nents, enum dma_data_direction direction,
+		struct dma_attrs *attrs)
 {
 	int i;
 	struct scatterlist *sg;
@@ -135,14 +122,49 @@ int dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
 	return nents;
 }
 
-EXPORT_SYMBOL(dma_map_sg);
-
-dma_addr_t dma_map_page(struct device *dev, struct page *page, unsigned long offset,
-		size_t size, enum dma_data_direction direction)
+static dma_addr_t frv_dma_map_page(struct device *dev, struct page *page,
+		unsigned long offset, size_t size,
+		enum dma_data_direction direction, struct dma_attrs *attrs)
 {
 	BUG_ON(direction == DMA_NONE);
 	flush_dcache_page(page);
 	return (dma_addr_t) page_to_phys(page) + offset;
 }
 
-EXPORT_SYMBOL(dma_map_page);
+static void frv_dma_sync_single_for_device(struct device *dev,
+		dma_addr_t dma_handle, size_t size,
+		enum dma_data_direction direction)
+{
+	flush_write_buffers();
+}
+
+static void frv_dma_sync_sg_for_device(struct device *dev,
+		struct scatterlist *sg, int nelems,
+		enum dma_data_direction direction)
+{
+	flush_write_buffers();
+}
+
+static int frv_dma_supported(struct device *dev, u64 mask)
+{
+	/*
+	 * we fall back to GFP_DMA when the mask isn't all 1s,
+	 * so we can't guarantee allocations that must be
+	 * within a tighter range than GFP_DMA..
+	 */
+	if (mask < 0x00ffffff)
+		return 0;
+	return 1;
+}
+
+struct dma_map_ops frv_dma_ops = {
+	.alloc			= frv_dma_alloc,
+	.free			= frv_dma_free,
+	.map_page		= frv_dma_map_page,
+	.map_sg			= frv_dma_map_sg,
+	.sync_single_for_device	= frv_dma_sync_single_for_device,
+	.sync_sg_for_device	= frv_dma_sync_sg_for_device,
+	.dma_supported		= frv_dma_supported,
+};
+EXPORT_SYMBOL(frv_dma_ops);
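Note that frv_dma_ops carries no .unmap_page/.unmap_sg and no .sync_*_for_cpu hooks: the old inlines for those were empty, and the generic wrappers treat an unset hook as a no-op, along these lines (a sketch of the wrapper shape, not the verbatim in-tree code):

#include <linux/dma-mapping.h>

static inline void sketch_unmap_page(struct device *dev, dma_addr_t addr,
		size_t size, enum dma_data_direction dir)
{
	struct dma_map_ops *ops = get_dma_ops(dev);

	/* nothing to do on architectures that leave the hook NULL */
	if (ops->unmap_page)
		ops->unmap_page(dev, addr, size, dir, NULL);
}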
@@ -18,7 +18,9 @@
 #include <linux/scatterlist.h>
 #include <asm/io.h>
 
-void *dma_alloc_coherent(struct device *hwdev, size_t size, dma_addr_t *dma_handle, gfp_t gfp)
+static void *frv_dma_alloc(struct device *hwdev, size_t size,
+		dma_addr_t *dma_handle, gfp_t gfp,
+		struct dma_attrs *attrs)
 {
 	void *ret;
 
@@ -29,29 +31,15 @@ void *dma_alloc_coherent(struct device *hwdev, size_t size, dma_addr_t *dma_hand
 	return ret;
 }
 
-EXPORT_SYMBOL(dma_alloc_coherent);
-
-void dma_free_coherent(struct device *hwdev, size_t size, void *vaddr, dma_addr_t dma_handle)
+static void frv_dma_free(struct device *hwdev, size_t size, void *vaddr,
+		dma_addr_t dma_handle, struct dma_attrs *attrs)
 {
 	consistent_free(vaddr);
 }
 
-EXPORT_SYMBOL(dma_free_coherent);
-
-dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size,
-		enum dma_data_direction direction)
-{
-	BUG_ON(direction == DMA_NONE);
-
-	frv_cache_wback_inv((unsigned long) ptr, (unsigned long) ptr + size);
-
-	return virt_to_bus(ptr);
-}
-
-EXPORT_SYMBOL(dma_map_single);
-
-int dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
-		enum dma_data_direction direction)
+static int frv_dma_map_sg(struct device *dev, struct scatterlist *sglist,
+		int nents, enum dma_data_direction direction,
+		struct dma_attrs *attrs)
 {
 	unsigned long dampr2;
 	void *vaddr;
@@ -79,14 +67,48 @@ int dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
 	return nents;
 }
 
-EXPORT_SYMBOL(dma_map_sg);
-
-dma_addr_t dma_map_page(struct device *dev, struct page *page, unsigned long offset,
-		size_t size, enum dma_data_direction direction)
+static dma_addr_t frv_dma_map_page(struct device *dev, struct page *page,
+		unsigned long offset, size_t size,
+		enum dma_data_direction direction, struct dma_attrs *attrs)
 {
-	BUG_ON(direction == DMA_NONE);
 	flush_dcache_page(page);
 	return (dma_addr_t) page_to_phys(page) + offset;
 }
 
-EXPORT_SYMBOL(dma_map_page);
+static void frv_dma_sync_single_for_device(struct device *dev,
+		dma_addr_t dma_handle, size_t size,
+		enum dma_data_direction direction)
+{
+	flush_write_buffers();
+}
+
+static void frv_dma_sync_sg_for_device(struct device *dev,
+		struct scatterlist *sg, int nelems,
+		enum dma_data_direction direction)
+{
+	flush_write_buffers();
+}
+
+static int frv_dma_supported(struct device *dev, u64 mask)
+{
+	/*
+	 * we fall back to GFP_DMA when the mask isn't all 1s,
+	 * so we can't guarantee allocations that must be
+	 * within a tighter range than GFP_DMA..
+	 */
+	if (mask < 0x00ffffff)
+		return 0;
+	return 1;
+}
+
+struct dma_map_ops frv_dma_ops = {
+	.alloc			= frv_dma_alloc,
+	.free			= frv_dma_free,
+	.map_page		= frv_dma_map_page,
+	.map_sg			= frv_dma_map_sg,
+	.sync_single_for_device	= frv_dma_sync_single_for_device,
+	.sync_sg_for_device	= frv_dma_sync_sg_for_device,
+	.dma_supported		= frv_dma_supported,
+};
+EXPORT_SYMBOL(frv_dma_ops);
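The elided map_sg bodies above walk the scatterlist with for_each_sg(); their general shape, minus the frv-specific DAMPR2 window handling, is the usual one for 1:1-mapped non-coherent machines (an illustrative sketch, not the frv body itself):

#include <linux/scatterlist.h>

static int sketch_map_sg(struct device *dev, struct scatterlist *sglist,
			 int nents, enum dma_data_direction dir)
{
	struct scatterlist *sg;
	int i;

	for_each_sg(sglist, sg, nents, i) {
		sg->dma_address = sg_phys(sg);	/* bus address == physical */
		/* per-segment cache writeback would go here */
	}
	return nents;
}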
@@ -15,7 +15,6 @@ config H8300
 	select OF_IRQ
 	select OF_EARLY_FLATTREE
 	select HAVE_MEMBLOCK
-	select HAVE_DMA_ATTRS
 	select CLKSRC_OF
 	select H8300_TMR8
 	select HAVE_KERNEL_GZIP
...
@@ -8,6 +8,4 @@ static inline struct dma_map_ops *get_dma_ops(struct device *dev)
 	return &h8300_dma_map_ops;
 }
 
-#include <asm-generic/dma-mapping-common.h>
-
 #endif
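These HAVE_DMA_ATTRS deletions are the payoff of the series: once every architecture supplies dma_map_ops, the common wrapper code is pulled in unconditionally by <linux/dma-mapping.h>, so neither the Kconfig symbol nor the per-arch #include of dma-mapping-common.h is needed. Driver code is untouched; the same calls simply resolve through the ops table, e.g.:

#include <linux/dma-mapping.h>

static int sketch_driver_probe(struct device *dev)
{
	dma_addr_t bus;
	void *buf;

	buf = dma_alloc_coherent(dev, 4096, &bus, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
	/* ... program the device with "bus" ... */
	dma_free_coherent(dev, 4096, buf, bus);
	return 0;
}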
@@ -27,7 +27,6 @@ config HEXAGON
 	select GENERIC_CLOCKEVENTS_BROADCAST
 	select MODULES_USE_ELF_RELA
 	select GENERIC_CPU_DEVICES
-	select HAVE_DMA_ATTRS
 	---help---
 	  Qualcomm Hexagon is a processor architecture designed for high
 	  performance and low power across a wide variety of applications.
...
@@ -49,8 +49,6 @@ extern int dma_is_consistent(struct device *dev, dma_addr_t dma_handle);
 extern void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
 		enum dma_data_direction direction);
 
-#include <asm-generic/dma-mapping-common.h>
-
 static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
 {
 	if (!dev->dma_mask)
...
@@ -25,7 +25,6 @@ config IA64
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_DYNAMIC_FTRACE if (!ITANIUM)
 	select HAVE_FUNCTION_TRACER
-	select HAVE_DMA_ATTRS
 	select TTY
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_DMA_API_DEBUG
...
@@ -25,8 +25,6 @@ extern void machvec_dma_sync_sg(struct device *, struct scatterlist *, int,
 
 #define get_dma_ops(dev) platform_dma_get_ops(dev)
 
-#include <asm-generic/dma-mapping-common.h>
-
 static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
 {
 	if (!dev->dma_mask)
...
#ifndef _M68K_DMA_MAPPING_H #ifndef _M68K_DMA_MAPPING_H
#define _M68K_DMA_MAPPING_H #define _M68K_DMA_MAPPING_H
#include <asm/cache.h> extern struct dma_map_ops m68k_dma_ops;
struct scatterlist; static inline struct dma_map_ops *get_dma_ops(struct device *dev)
static inline int dma_supported(struct device *dev, u64 mask)
{
return 1;
}
static inline int dma_set_mask(struct device *dev, u64 mask)
{
return 0;
}
extern void *dma_alloc_coherent(struct device *, size_t,
dma_addr_t *, gfp_t);
extern void dma_free_coherent(struct device *, size_t,
void *, dma_addr_t);
static inline void *dma_alloc_attrs(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t flag,
struct dma_attrs *attrs)
{
/* attrs is not supported and ignored */
return dma_alloc_coherent(dev, size, dma_handle, flag);
}
static inline void dma_free_attrs(struct device *dev, size_t size,
void *cpu_addr, dma_addr_t dma_handle,
struct dma_attrs *attrs)
{ {
/* attrs is not supported and ignored */ return &m68k_dma_ops;
dma_free_coherent(dev, size, cpu_addr, dma_handle);
} }
static inline void *dma_alloc_noncoherent(struct device *dev, size_t size,
dma_addr_t *handle, gfp_t flag)
{
return dma_alloc_coherent(dev, size, handle, flag);
}
static inline void dma_free_noncoherent(struct device *dev, size_t size,
void *addr, dma_addr_t handle)
{
dma_free_coherent(dev, size, addr, handle);
}
static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size, static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction dir) enum dma_data_direction dir)
{ {
/* we use coherent allocation, so not much to do here. */ /* we use coherent allocation, so not much to do here. */
} }
extern dma_addr_t dma_map_single(struct device *, void *, size_t,
enum dma_data_direction);
static inline void dma_unmap_single(struct device *dev, dma_addr_t addr,
size_t size, enum dma_data_direction dir)
{
}
extern dma_addr_t dma_map_page(struct device *, struct page *,
unsigned long, size_t size,
enum dma_data_direction);
static inline void dma_unmap_page(struct device *dev, dma_addr_t address,
size_t size, enum dma_data_direction dir)
{
}
extern int dma_map_sg(struct device *, struct scatterlist *, int,
enum dma_data_direction);
static inline void dma_unmap_sg(struct device *dev, struct scatterlist *sg,
int nhwentries, enum dma_data_direction dir)
{
}
extern void dma_sync_single_for_device(struct device *, dma_addr_t, size_t,
enum dma_data_direction);
extern void dma_sync_sg_for_device(struct device *, struct scatterlist *, int,
enum dma_data_direction);
static inline void dma_sync_single_range_for_device(struct device *dev,
dma_addr_t dma_handle, unsigned long offset, size_t size,
enum dma_data_direction direction)
{
/* just sync everything for now */
dma_sync_single_for_device(dev, dma_handle, offset + size, direction);
}
static inline void dma_sync_single_for_cpu(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir)
{
}
static inline void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir)
{
}
static inline void dma_sync_single_range_for_cpu(struct device *dev,
dma_addr_t dma_handle, unsigned long offset, size_t size,
enum dma_data_direction direction)
{
/* just sync everything for now */
dma_sync_single_for_cpu(dev, dma_handle, offset + size, direction);
}
static inline int dma_mapping_error(struct device *dev, dma_addr_t handle)
{
return 0;
}
/* drivers/base/dma-mapping.c */
extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size);
extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr,
size_t size);
#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
#endif /* _M68K_DMA_MAPPING_H */ #endif /* _M68K_DMA_MAPPING_H */
@@ -18,8 +18,8 @@
 
 #if defined(CONFIG_MMU) && !defined(CONFIG_COLDFIRE)
 
-void *dma_alloc_coherent(struct device *dev, size_t size,
-		dma_addr_t *handle, gfp_t flag)
+static void *m68k_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
+		gfp_t flag, struct dma_attrs *attrs)
 {
 	struct page *page, **map;
 	pgprot_t pgprot;
@@ -61,8 +61,8 @@ void *dma_alloc_coherent(struct device *dev, size_t size,
 	return addr;
 }
 
-void dma_free_coherent(struct device *dev, size_t size,
-		void *addr, dma_addr_t handle)
+static void m68k_dma_free(struct device *dev, size_t size, void *addr,
+		dma_addr_t handle, struct dma_attrs *attrs)
 {
 	pr_debug("dma_free_coherent: %p, %x\n", addr, handle);
 	vfree(addr);
@@ -72,8 +72,8 @@ void dma_free_coherent(struct device *dev, size_t size,
 
 #include <asm/cacheflush.h>
 
-void *dma_alloc_coherent(struct device *dev, size_t size,
-		dma_addr_t *dma_handle, gfp_t gfp)
+static void *m68k_dma_alloc(struct device *dev, size_t size,
+		dma_addr_t *dma_handle, gfp_t gfp, struct dma_attrs *attrs)
 {
 	void *ret;
 	/* ignore region specifiers */
@@ -90,19 +90,16 @@ void *dma_alloc_coherent(struct device *dev, size_t size,
 	return ret;
 }
 
-void dma_free_coherent(struct device *dev, size_t size,
-		void *vaddr, dma_addr_t dma_handle)
+static void m68k_dma_free(struct device *dev, size_t size, void *vaddr,
+		dma_addr_t dma_handle, struct dma_attrs *attrs)
 {
 	free_pages((unsigned long)vaddr, get_order(size));
 }
 
 #endif /* CONFIG_MMU && !CONFIG_COLDFIRE */
 
-EXPORT_SYMBOL(dma_alloc_coherent);
-EXPORT_SYMBOL(dma_free_coherent);
-
-void dma_sync_single_for_device(struct device *dev, dma_addr_t handle,
-		size_t size, enum dma_data_direction dir)
+static void m68k_dma_sync_single_for_device(struct device *dev,
+		dma_addr_t handle, size_t size, enum dma_data_direction dir)
 {
 	switch (dir) {
 	case DMA_BIDIRECTIONAL:
@@ -118,10 +115,9 @@ void dma_sync_single_for_device(struct device *dev, dma_addr_t handle,
 		break;
 	}
 }
-EXPORT_SYMBOL(dma_sync_single_for_device);
 
-void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sglist,
-		int nents, enum dma_data_direction dir)
+static void m68k_dma_sync_sg_for_device(struct device *dev,
+		struct scatterlist *sglist, int nents, enum dma_data_direction dir)
 {
 	int i;
 	struct scatterlist *sg;
@@ -131,31 +127,19 @@ void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sglist,
 			dir);
 	}
 }
-EXPORT_SYMBOL(dma_sync_sg_for_device);
-
-dma_addr_t dma_map_single(struct device *dev, void *addr, size_t size,
-		enum dma_data_direction dir)
-{
-	dma_addr_t handle = virt_to_bus(addr);
-
-	dma_sync_single_for_device(dev, handle, size, dir);
-	return handle;
-}
-EXPORT_SYMBOL(dma_map_single);
 
-dma_addr_t dma_map_page(struct device *dev, struct page *page,
-		unsigned long offset, size_t size,
-		enum dma_data_direction dir)
+static dma_addr_t m68k_dma_map_page(struct device *dev, struct page *page,
		unsigned long offset, size_t size, enum dma_data_direction dir,
+		struct dma_attrs *attrs)
 {
 	dma_addr_t handle = page_to_phys(page) + offset;
 
 	dma_sync_single_for_device(dev, handle, size, dir);
 	return handle;
 }
-EXPORT_SYMBOL(dma_map_page);
 
-int dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
-		enum dma_data_direction dir)
+static int m68k_dma_map_sg(struct device *dev, struct scatterlist *sglist,
+		int nents, enum dma_data_direction dir, struct dma_attrs *attrs)
 {
 	int i;
 	struct scatterlist *sg;
@@ -167,4 +151,13 @@ int dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
 	}
 	return nents;
 }
-EXPORT_SYMBOL(dma_map_sg);
+
+struct dma_map_ops m68k_dma_ops = {
+	.alloc			= m68k_dma_alloc,
+	.free			= m68k_dma_free,
+	.map_page		= m68k_dma_map_page,
+	.map_sg			= m68k_dma_map_sg,
+	.sync_single_for_device	= m68k_dma_sync_single_for_device,
+	.sync_sg_for_device	= m68k_dma_sync_sg_for_device,
+};
+EXPORT_SYMBOL(m68k_dma_ops);
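m68k does its cache maintenance at map time (.map_page and .map_sg go through the sync-for-device path) and needs nothing at unmap, which is why the ops table has no unmap or sync-for-cpu hooks. The streaming usage a driver sees is unchanged (a sketch; dev and page are whatever the driver already holds):

#include <linux/dma-mapping.h>

static int sketch_send(struct device *dev, struct page *page)
{
	dma_addr_t handle = dma_map_page(dev, page, 0, PAGE_SIZE,
					 DMA_TO_DEVICE);

	if (dma_mapping_error(dev, handle))
		return -ENOMEM;
	/* device DMA happens here; the cache was pushed in .map_page */
	dma_unmap_page(dev, handle, PAGE_SIZE, DMA_TO_DEVICE); /* no-op hook */
	return 0;
}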
#ifndef _ASM_METAG_DMA_MAPPING_H #ifndef _ASM_METAG_DMA_MAPPING_H
#define _ASM_METAG_DMA_MAPPING_H #define _ASM_METAG_DMA_MAPPING_H
#include <linux/mm.h> extern struct dma_map_ops metag_dma_ops;
#include <asm/cache.h> static inline struct dma_map_ops *get_dma_ops(struct device *dev)
#include <asm/io.h>
#include <linux/scatterlist.h>
#include <asm/bug.h>
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t flag);
void dma_free_coherent(struct device *dev, size_t size,
void *vaddr, dma_addr_t dma_handle);
void dma_sync_for_device(void *vaddr, size_t size, int dma_direction);
void dma_sync_for_cpu(void *vaddr, size_t size, int dma_direction);
int dma_mmap_coherent(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size);
int dma_mmap_writecombine(struct device *dev, struct vm_area_struct *vma,
void *cpu_addr, dma_addr_t dma_addr, size_t size);
static inline dma_addr_t
dma_map_single(struct device *dev, void *ptr, size_t size,
enum dma_data_direction direction)
{
BUG_ON(!valid_dma_direction(direction));
WARN_ON(size == 0);
dma_sync_for_device(ptr, size, direction);
return virt_to_phys(ptr);
}
static inline void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
enum dma_data_direction direction)
{
BUG_ON(!valid_dma_direction(direction));
dma_sync_for_cpu(phys_to_virt(dma_addr), size, direction);
}
static inline int
dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
enum dma_data_direction direction)
{
struct scatterlist *sg;
int i;
BUG_ON(!valid_dma_direction(direction));
WARN_ON(nents == 0 || sglist[0].length == 0);
for_each_sg(sglist, sg, nents, i) {
BUG_ON(!sg_page(sg));
sg->dma_address = sg_phys(sg);
dma_sync_for_device(sg_virt(sg), sg->length, direction);
}
return nents;
}
static inline dma_addr_t
dma_map_page(struct device *dev, struct page *page, unsigned long offset,
size_t size, enum dma_data_direction direction)
{
BUG_ON(!valid_dma_direction(direction));
dma_sync_for_device((void *)(page_to_phys(page) + offset), size,
direction);
return page_to_phys(page) + offset;
}
static inline void
dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
enum dma_data_direction direction)
{
BUG_ON(!valid_dma_direction(direction));
dma_sync_for_cpu(phys_to_virt(dma_address), size, direction);
}
static inline void
dma_unmap_sg(struct device *dev, struct scatterlist *sglist, int nhwentries,
enum dma_data_direction direction)
{ {
struct scatterlist *sg; return &metag_dma_ops;
int i;
BUG_ON(!valid_dma_direction(direction));
WARN_ON(nhwentries == 0 || sglist[0].length == 0);
for_each_sg(sglist, sg, nhwentries, i) {
BUG_ON(!sg_page(sg));
sg->dma_address = sg_phys(sg);
dma_sync_for_cpu(sg_virt(sg), sg->length, direction);
}
}
static inline void
dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{
dma_sync_for_cpu(phys_to_virt(dma_handle), size, direction);
}
static inline void
dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
size_t size, enum dma_data_direction direction)
{
dma_sync_for_device(phys_to_virt(dma_handle), size, direction);
}
static inline void
dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
dma_sync_for_cpu(phys_to_virt(dma_handle)+offset, size,
direction);
}
static inline void
dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
{
dma_sync_for_device(phys_to_virt(dma_handle)+offset, size,
direction);
}
static inline void
dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sglist, int nelems,
enum dma_data_direction direction)
{
int i;
struct scatterlist *sg;
for_each_sg(sglist, sg, nelems, i)
dma_sync_for_cpu(sg_virt(sg), sg->length, direction);
}
static inline void
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sglist,
int nelems, enum dma_data_direction direction)
{
int i;
struct scatterlist *sg;
for_each_sg(sglist, sg, nelems, i)
dma_sync_for_device(sg_virt(sg), sg->length, direction);
}
static inline int
dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{
return 0;
}
#define dma_supported(dev, mask) (1)
static inline int
dma_set_mask(struct device *dev, u64 mask)
{
if (!dev->dma_mask || !dma_supported(dev, mask))
return -EIO;
*dev->dma_mask = mask;
return 0;
} }
/* /*
...@@ -184,11 +18,4 @@ dma_cache_sync(struct device *dev, void *vaddr, size_t size, ...@@ -184,11 +18,4 @@ dma_cache_sync(struct device *dev, void *vaddr, size_t size,
{ {
} }
/* drivers/base/dma-mapping.c */
extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t dma_addr,
size_t size);
#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
#endif #endif
@@ -171,8 +171,8 @@ static struct metag_vm_region *metag_vm_region_find(struct metag_vm_region
  * Allocate DMA-coherent memory space and return both the kernel remapped
  * virtual and bus address for that space.
  */
-void *dma_alloc_coherent(struct device *dev, size_t size,
-		dma_addr_t *handle, gfp_t gfp)
+static void *metag_dma_alloc(struct device *dev, size_t size,
+		dma_addr_t *handle, gfp_t gfp, struct dma_attrs *attrs)
 {
 	struct page *page;
 	struct metag_vm_region *c;
@@ -263,13 +263,12 @@ void *dma_alloc_coherent(struct device *dev, size_t size,
 no_page:
 	return NULL;
 }
-EXPORT_SYMBOL(dma_alloc_coherent);
 
 /*
  * free a page as defined by the above mapping.
  */
-void dma_free_coherent(struct device *dev, size_t size,
-		void *vaddr, dma_addr_t dma_handle)
+static void metag_dma_free(struct device *dev, size_t size, void *vaddr,
+		dma_addr_t dma_handle, struct dma_attrs *attrs)
 {
 	struct metag_vm_region *c;
 	unsigned long flags, addr;
@@ -329,16 +328,19 @@ void dma_free_coherent(struct device *dev, size_t size,
 		__func__, vaddr);
 	dump_stack();
 }
-EXPORT_SYMBOL(dma_free_coherent);
 
-static int dma_mmap(struct device *dev, struct vm_area_struct *vma,
-		void *cpu_addr, dma_addr_t dma_addr, size_t size)
+static int metag_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+		void *cpu_addr, dma_addr_t dma_addr, size_t size,
+		struct dma_attrs *attrs)
 {
-	int ret = -ENXIO;
-
 	unsigned long flags, user_size, kern_size;
 	struct metag_vm_region *c;
+	int ret = -ENXIO;
+
+	if (dma_get_attr(DMA_ATTR_WRITE_COMBINE, attrs))
+		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+	else
+		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 
 	user_size = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
@@ -364,25 +366,6 @@ static int dma_mmap(struct device *dev, struct vm_area_struct *vma,
 	return ret;
 }
 
-int dma_mmap_coherent(struct device *dev, struct vm_area_struct *vma,
-		void *cpu_addr, dma_addr_t dma_addr, size_t size)
-{
-	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
-	return dma_mmap(dev, vma, cpu_addr, dma_addr, size);
-}
-EXPORT_SYMBOL(dma_mmap_coherent);
-
-int dma_mmap_writecombine(struct device *dev, struct vm_area_struct *vma,
-		void *cpu_addr, dma_addr_t dma_addr, size_t size)
-{
-	vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
-	return dma_mmap(dev, vma, cpu_addr, dma_addr, size);
-}
-EXPORT_SYMBOL(dma_mmap_writecombine);
-
 /*
  * Initialise the consistent memory allocation.
  */
@@ -423,7 +406,7 @@ early_initcall(dma_alloc_init);
 /*
  * make an area consistent to devices.
  */
-void dma_sync_for_device(void *vaddr, size_t size, int dma_direction)
+static void dma_sync_for_device(void *vaddr, size_t size, int dma_direction)
 {
 	/*
 	 * Ensure any writes get through the write combiner. This is necessary
@@ -465,12 +448,11 @@ void dma_sync_for_device(void *vaddr, size_t size, int dma_direction)
 	wmb();
 }
-EXPORT_SYMBOL(dma_sync_for_device);
 
 /*
  * make an area consistent to the core.
 */
-void dma_sync_for_cpu(void *vaddr, size_t size, int dma_direction)
+static void dma_sync_for_cpu(void *vaddr, size_t size, int dma_direction)
 {
 	/*
	 * Hardware L2 cache prefetch doesn't occur across 4K physical
@@ -497,4 +479,100 @@ void dma_sync_for_cpu(void *vaddr, size_t size, int dma_direction)
 	rmb();
 }
-EXPORT_SYMBOL(dma_sync_for_cpu);
+
+static dma_addr_t metag_dma_map_page(struct device *dev, struct page *page,
+		unsigned long offset, size_t size,
+		enum dma_data_direction direction, struct dma_attrs *attrs)
+{
+	dma_sync_for_device((void *)(page_to_phys(page) + offset), size,
+			direction);
+	return page_to_phys(page) + offset;
+}
+
+static void metag_dma_unmap_page(struct device *dev, dma_addr_t dma_address,
+		size_t size, enum dma_data_direction direction,
+		struct dma_attrs *attrs)
+{
+	dma_sync_for_cpu(phys_to_virt(dma_address), size, direction);
+}
+
+static int metag_dma_map_sg(struct device *dev, struct scatterlist *sglist,
+		int nents, enum dma_data_direction direction,
+		struct dma_attrs *attrs)
+{
+	struct scatterlist *sg;
+	int i;
+
+	for_each_sg(sglist, sg, nents, i) {
+		BUG_ON(!sg_page(sg));
+
+		sg->dma_address = sg_phys(sg);
+		dma_sync_for_device(sg_virt(sg), sg->length, direction);
+	}
+
+	return nents;
+}
+
+static void metag_dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
+		int nhwentries, enum dma_data_direction direction,
+		struct dma_attrs *attrs)
+{
+	struct scatterlist *sg;
+	int i;
+
+	for_each_sg(sglist, sg, nhwentries, i) {
+		BUG_ON(!sg_page(sg));
+
+		sg->dma_address = sg_phys(sg);
+		dma_sync_for_cpu(sg_virt(sg), sg->length, direction);
+	}
+}
+
+static void metag_dma_sync_single_for_cpu(struct device *dev,
+		dma_addr_t dma_handle, size_t size,
+		enum dma_data_direction direction)
+{
+	dma_sync_for_cpu(phys_to_virt(dma_handle), size, direction);
+}
+
+static void metag_dma_sync_single_for_device(struct device *dev,
+		dma_addr_t dma_handle, size_t size,
+		enum dma_data_direction direction)
+{
+	dma_sync_for_device(phys_to_virt(dma_handle), size, direction);
+}
+
+static void metag_dma_sync_sg_for_cpu(struct device *dev,
+		struct scatterlist *sglist, int nelems,
+		enum dma_data_direction direction)
+{
+	int i;
+	struct scatterlist *sg;
+
+	for_each_sg(sglist, sg, nelems, i)
+		dma_sync_for_cpu(sg_virt(sg), sg->length, direction);
+}
+
+static void metag_dma_sync_sg_for_device(struct device *dev,
+		struct scatterlist *sglist, int nelems,
+		enum dma_data_direction direction)
+{
+	int i;
+	struct scatterlist *sg;
+
+	for_each_sg(sglist, sg, nelems, i)
+		dma_sync_for_device(sg_virt(sg), sg->length, direction);
+}
+
+struct dma_map_ops metag_dma_ops = {
+	.alloc			= metag_dma_alloc,
+	.free			= metag_dma_free,
+	.map_page		= metag_dma_map_page,
+	.map_sg			= metag_dma_map_sg,
+	.sync_single_for_device	= metag_dma_sync_single_for_device,
+	.sync_single_for_cpu	= metag_dma_sync_single_for_cpu,
+	.sync_sg_for_cpu	= metag_dma_sync_sg_for_cpu,
+	.mmap			= metag_dma_mmap,
+};
+EXPORT_SYMBOL(metag_dma_ops);
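Folding dma_mmap_coherent() and dma_mmap_writecombine() into one .mmap hook works because the generic dma_mmap_attrs() passes the attrs through to the hook. A driver that used dma_mmap_writecombine() now effectively does the following (a sketch using the 4.4-era struct dma_attrs interface):

#include <linux/dma-attrs.h>
#include <linux/dma-mapping.h>

static int sketch_mmap_wc(struct device *dev, struct vm_area_struct *vma,
			  void *cpu_addr, dma_addr_t dma_addr, size_t size)
{
	DEFINE_DMA_ATTRS(attrs);

	/* DMA_ATTR_WRITE_COMBINE selects the pgprot_writecombine() path */
	dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
	return dma_mmap_attrs(dev, vma, cpu_addr, dma_addr, size, &attrs);
}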
@@ -19,7 +19,6 @@ config MICROBLAZE
 	select HAVE_ARCH_KGDB
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DMA_API_DEBUG
-	select HAVE_DMA_ATTRS
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_FTRACE_MCOUNT_RECORD
 	select HAVE_FUNCTION_GRAPH_TRACER
...
@@ -44,8 +44,6 @@ static inline struct dma_map_ops *get_dma_ops(struct device *dev)
 	return &dma_direct_ops;
 }
 
-#include <asm-generic/dma-mapping-common.h>
-
 static inline void __dma_sync(unsigned long paddr,
 		size_t size, enum dma_data_direction direction)
 {
...
@@ -31,7 +31,6 @@ config MIPS
 	select RTC_LIB if !MACH_LOONGSON64
 	select GENERIC_ATOMIC64 if !64BIT
 	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
-	select HAVE_DMA_ATTRS
 	select HAVE_DMA_CONTIGUOUS
 	select HAVE_DMA_API_DEBUG
 	select GENERIC_IRQ_PROBE
...
@@ -29,8 +29,6 @@ static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
 
 static inline void dma_mark_clean(void *addr, size_t size) {}
 
-#include <asm-generic/dma-mapping-common.h>
-
 extern void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
 		enum dma_data_direction direction);
...
@@ -73,7 +73,6 @@
 #define MADV_SEQUENTIAL 2		/* expect sequential page references */
 #define MADV_WILLNEED	3		/* will need these pages */
 #define MADV_DONTNEED	4		/* don't need these pages */
-#define MADV_FREE	5		/* free pages only if memory pressure */
 
 /* common parameters: try to keep these consistent across architectures */
 #define MADV_FREE	8		/* free pages only if memory pressure */
...
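This hunk removes a leftover: MADV_FREE had been added to the MIPS uapi header as 5 before the cross-architecture value settled on 8 elsewhere in this merge, leaving the same macro defined twice. From userspace the new advice is used like any other madvise flag (illustrative; requires 4.5+ uapi headers):

#include <sys/mman.h>

/* Mark a buffer lazily freeable: the kernel may reclaim the pages under
 * memory pressure, but until then the data stays readable and the pages
 * can be reused without a fresh fault-and-zero cycle. */
static void mark_lazy_free(void *buf, size_t len)
{
	madvise(buf, len, MADV_FREE);
}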
@@ -14,6 +14,7 @@ config MN10300
 	select OLD_SIGSUSPEND3
 	select OLD_SIGACTION
 	select HAVE_DEBUG_STACKOVERFLOW
+	select ARCH_NO_COHERENT_DMA_MMAP
 
 config AM33_2
 	def_bool n
...
@@ -11,154 +11,14 @@
 #ifndef _ASM_DMA_MAPPING_H
 #define _ASM_DMA_MAPPING_H
 
-#include <linux/mm.h>
-#include <linux/scatterlist.h>
-
 #include <asm/cache.h>
 #include <asm/io.h>
 
-/*
- * See Documentation/DMA-API.txt for the description of how the
- * following DMA API should work.
- */
-
-extern void *dma_alloc_coherent(struct device *dev, size_t size,
-		dma_addr_t *dma_handle, int flag);
-
-extern void dma_free_coherent(struct device *dev, size_t size,
-		void *vaddr, dma_addr_t dma_handle);
-
-#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent((d), (s), (h), (f))
-#define dma_free_noncoherent(d, s, v, h) dma_free_coherent((d), (s), (v), (h))
+extern struct dma_map_ops mn10300_dma_ops;
 
-static inline
-dma_addr_t dma_map_single(struct device *dev, void *ptr, size_t size,
-		enum dma_data_direction direction)
-{
-	BUG_ON(direction == DMA_NONE);
-	mn10300_dcache_flush_inv();
-	return virt_to_bus(ptr);
-}
-
-static inline
-void dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
-		enum dma_data_direction direction)
-{
-	BUG_ON(direction == DMA_NONE);
-}
-
-static inline
-int dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
-		enum dma_data_direction direction)
-{
-	struct scatterlist *sg;
-	int i;
-
-	BUG_ON(!valid_dma_direction(direction));
-	WARN_ON(nents == 0 || sglist[0].length == 0);
-
-	for_each_sg(sglist, sg, nents, i) {
-		BUG_ON(!sg_page(sg));
-
-		sg->dma_address = sg_phys(sg);
-	}
+static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+{
+	return &mn10300_dma_ops;
+}
 
-	mn10300_dcache_flush_inv();
-	return nents;
-}
-
-static inline
-void dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
-		enum dma_data_direction direction)
-{
-	BUG_ON(!valid_dma_direction(direction));
-}
-
-static inline
-dma_addr_t dma_map_page(struct device *dev, struct page *page,
-		unsigned long offset, size_t size,
-		enum dma_data_direction direction)
-{
-	BUG_ON(direction == DMA_NONE);
-	return page_to_bus(page) + offset;
-}
-
-static inline
-void dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
-		enum dma_data_direction direction)
-{
-	BUG_ON(direction == DMA_NONE);
-}
-
-static inline
-void dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
-		size_t size, enum dma_data_direction direction)
-{
-}
-
-static inline
-void dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
-		size_t size, enum dma_data_direction direction)
-{
-	mn10300_dcache_flush_inv();
-}
-
-static inline
-void dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
-		unsigned long offset, size_t size,
-		enum dma_data_direction direction)
-{
-}
-
-static inline void
-dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
-		unsigned long offset, size_t size,
-		enum dma_data_direction direction)
-{
-	mn10300_dcache_flush_inv();
-}
-
-static inline
-void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
-		int nelems, enum dma_data_direction direction)
-{
-}
-
-static inline
-void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
-		int nelems, enum dma_data_direction direction)
-{
-	mn10300_dcache_flush_inv();
-}
-
-static inline
-int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
-{
-	return 0;
-}
-
-static inline
-int dma_supported(struct device *dev, u64 mask)
-{
-	/*
-	 * we fall back to GFP_DMA when the mask isn't all 1s, so we can't
-	 * guarantee allocations that must be within a tighter range than
-	 * GFP_DMA
-	 */
-	if (mask < 0x00ffffff)
-		return 0;
-	return 1;
-}
-
-static inline
-int dma_set_mask(struct device *dev, u64 mask)
-{
-	if (!dev->dma_mask || !dma_supported(dev, mask))
-		return -EIO;
-
-	*dev->dma_mask = mask;
-	return 0;
-}
 
 static inline
@@ -168,19 +28,4 @@ void dma_cache_sync(void *vaddr, size_t size,
 	mn10300_dcache_flush_inv();
 }
 
-/* Not supported for now */
-static inline int dma_mmap_coherent(struct device *dev,
-		struct vm_area_struct *vma, void *cpu_addr,
-		dma_addr_t dma_addr, size_t size)
-{
-	return -EINVAL;
-}
-
-static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt,
-		void *cpu_addr, dma_addr_t dma_addr,
-		size_t size)
-{
-	return -EINVAL;
-}
-
 #endif
@@ -20,8 +20,8 @@
 
 static unsigned long pci_sram_allocated = 0xbc000000;
 
-void *dma_alloc_coherent(struct device *dev, size_t size,
-		dma_addr_t *dma_handle, int gfp)
+static void *mn10300_dma_alloc(struct device *dev, size_t size,
+		dma_addr_t *dma_handle, gfp_t gfp, struct dma_attrs *attrs)
 {
 	unsigned long addr;
 	void *ret;
@@ -61,10 +61,9 @@ void *dma_alloc_coherent(struct device *dev, size_t size,
 	printk("dma_alloc_coherent() = %p [%x]\n", ret, *dma_handle);
 	return ret;
 }
-EXPORT_SYMBOL(dma_alloc_coherent);
 
-void dma_free_coherent(struct device *dev, size_t size, void *vaddr,
-		dma_addr_t dma_handle)
+static void mn10300_dma_free(struct device *dev, size_t size, void *vaddr,
+		dma_addr_t dma_handle, struct dma_attrs *attrs)
 {
 	unsigned long addr = (unsigned long) vaddr & ~0x20000000;
@@ -73,4 +72,60 @@ void dma_free_coherent(struct device *dev, size_t size, void *vaddr,
 	free_pages(addr, get_order(size));
 }
-EXPORT_SYMBOL(dma_free_coherent);
+
+static int mn10300_dma_map_sg(struct device *dev, struct scatterlist *sglist,
+		int nents, enum dma_data_direction direction,
+		struct dma_attrs *attrs)
+{
+	struct scatterlist *sg;
+	int i;
+
+	for_each_sg(sglist, sg, nents, i) {
+		BUG_ON(!sg_page(sg));
+
+		sg->dma_address = sg_phys(sg);
+	}
+
+	mn10300_dcache_flush_inv();
+	return nents;
+}
+
+static dma_addr_t mn10300_dma_map_page(struct device *dev, struct page *page,
+		unsigned long offset, size_t size,
+		enum dma_data_direction direction, struct dma_attrs *attrs)
+{
+	return page_to_bus(page) + offset;
+}
+
+static void mn10300_dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
+		size_t size, enum dma_data_direction direction)
+{
+	mn10300_dcache_flush_inv();
+}
+
+static void mn10300_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
+		int nelems, enum dma_data_direction direction)
+{
+	mn10300_dcache_flush_inv();
+}
+
+static int mn10300_dma_supported(struct device *dev, u64 mask)
+{
+	/*
+	 * we fall back to GFP_DMA when the mask isn't all 1s, so we can't
+	 * guarantee allocations that must be within a tighter range than
+	 * GFP_DMA
+	 */
+	if (mask < 0x00ffffff)
+		return 0;
+	return 1;
+}
+
+struct dma_map_ops mn10300_dma_ops = {
+	.alloc			= mn10300_dma_alloc,
+	.free			= mn10300_dma_free,
+	.map_page		= mn10300_dma_map_page,
+	.map_sg			= mn10300_dma_map_sg,
+	.sync_single_for_device	= mn10300_dma_sync_single_for_device,
+	.sync_sg_for_device	= mn10300_dma_sync_sg_for_device,
+};
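mn10300 (like frv above) selects ARCH_NO_COHERENT_DMA_MMAP rather than keeping stub dma_mmap_coherent()/dma_get_sgtable() inlines. The coherent-mmap request then falls through to the common helper, whose mapping body is compiled out; note the failure code becomes -ENXIO where the old stubs returned -EINVAL. Paraphrased (abridged) from the common dma-mapping helper:

int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
		    void *cpu_addr, dma_addr_t dma_addr, size_t size)
{
	int ret = -ENXIO;
#if defined(CONFIG_MMU) && !defined(CONFIG_ARCH_NO_COHERENT_DMA_MMAP)
	/* the remap_pfn_range() mapping path lives here on MMU arches */
#endif
	return ret;
}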
@@ -10,131 +10,20 @@
 #ifndef _ASM_NIOS2_DMA_MAPPING_H
 #define _ASM_NIOS2_DMA_MAPPING_H
 
-#include <linux/scatterlist.h>
-#include <linux/cache.h>
-#include <asm/cacheflush.h>
+extern struct dma_map_ops nios2_dma_ops;
 
-static inline void __dma_sync_for_device(void *vaddr, size_t size,
-		enum dma_data_direction direction)
-{
-	switch (direction) {
-	case DMA_FROM_DEVICE:
-		invalidate_dcache_range((unsigned long)vaddr,
-			(unsigned long)(vaddr + size));
-		break;
-	case DMA_TO_DEVICE:
-		/*
-		 * We just need to flush the caches here , but Nios2 flush
-		 * instruction will do both writeback and invalidate.
-		 */
-	case DMA_BIDIRECTIONAL: /* flush and invalidate */
-		flush_dcache_range((unsigned long)vaddr,
-			(unsigned long)(vaddr + size));
-		break;
-	default:
-		BUG();
-	}
-}
-
-static inline void __dma_sync_for_cpu(void *vaddr, size_t size,
-		enum dma_data_direction direction)
-{
-	switch (direction) {
-	case DMA_BIDIRECTIONAL:
-	case DMA_FROM_DEVICE:
-		invalidate_dcache_range((unsigned long)vaddr,
-			(unsigned long)(vaddr + size));
-		break;
-	case DMA_TO_DEVICE:
-		break;
-	default:
-		BUG();
-	}
-}
-
-#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
-#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
-
-void *dma_alloc_coherent(struct device *dev, size_t size,
-		dma_addr_t *dma_handle, gfp_t flag);
-
-void dma_free_coherent(struct device *dev, size_t size,
-		void *vaddr, dma_addr_t dma_handle);
-
-static inline dma_addr_t dma_map_single(struct device *dev, void *ptr,
-		size_t size,
-		enum dma_data_direction direction)
-{
-	BUG_ON(!valid_dma_direction(direction));
-	__dma_sync_for_device(ptr, size, direction);
-	return virt_to_phys(ptr);
-}
-
-static inline void dma_unmap_single(struct device *dev, dma_addr_t dma_addr,
-		size_t size, enum dma_data_direction direction)
-{
-}
+static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+{
+	return &nios2_dma_ops;
+}
 
-extern int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
-		enum dma_data_direction direction);
-extern dma_addr_t dma_map_page(struct device *dev, struct page *page,
-		unsigned long offset, size_t size, enum dma_data_direction direction);
-extern void dma_unmap_page(struct device *dev, dma_addr_t dma_address,
-		size_t size, enum dma_data_direction direction);
-extern void dma_unmap_sg(struct device *dev, struct scatterlist *sg,
-		int nhwentries, enum dma_data_direction direction);
-extern void dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
-		size_t size, enum dma_data_direction direction);
-extern void dma_sync_single_for_device(struct device *dev,
-		dma_addr_t dma_handle, size_t size, enum dma_data_direction direction);
-extern void dma_sync_single_range_for_cpu(struct device *dev,
-		dma_addr_t dma_handle, unsigned long offset, size_t size,
-		enum dma_data_direction direction);
-extern void dma_sync_single_range_for_device(struct device *dev,
-		dma_addr_t dma_handle, unsigned long offset, size_t size,
-		enum dma_data_direction direction);
-extern void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
-		int nelems, enum dma_data_direction direction);
-extern void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
-		int nelems, enum dma_data_direction direction);
-
-static inline int dma_supported(struct device *dev, u64 mask)
-{
-	return 1;
-}
-
-static inline int dma_set_mask(struct device *dev, u64 mask)
-{
-	if (!dev->dma_mask || !dma_supported(dev, mask))
-		return -EIO;
-
-	*dev->dma_mask = mask;
-
-	return 0;
-}
-
-static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
-{
-	return 0;
-}
 
 /*
  * dma_alloc_noncoherent() returns non-cacheable memory, so there's no need to
  * do any flushing here.
  */
 static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
 		enum dma_data_direction direction)
 {
 }
 
-/* drivers/base/dma-mapping.c */
-extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
-		void *cpu_addr, dma_addr_t dma_addr, size_t size);
-extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
-		void *cpu_addr, dma_addr_t dma_addr,
-		size_t size);
-
-#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
-#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
-
 #endif /* _ASM_NIOS2_DMA_MAPPING_H */
@@ -20,9 +20,46 @@
 #include <linux/cache.h>
 #include <asm/cacheflush.h>
 
+static inline void __dma_sync_for_device(void *vaddr, size_t size,
+		enum dma_data_direction direction)
+{
+	switch (direction) {
+	case DMA_FROM_DEVICE:
+		invalidate_dcache_range((unsigned long)vaddr,
+			(unsigned long)(vaddr + size));
+		break;
+	case DMA_TO_DEVICE:
+		/*
+		 * We just need to flush the caches here , but Nios2 flush
+		 * instruction will do both writeback and invalidate.
+		 */
+	case DMA_BIDIRECTIONAL: /* flush and invalidate */
+		flush_dcache_range((unsigned long)vaddr,
+			(unsigned long)(vaddr + size));
+		break;
+	default:
+		BUG();
+	}
+}
+
+static inline void __dma_sync_for_cpu(void *vaddr, size_t size,
+		enum dma_data_direction direction)
+{
+	switch (direction) {
+	case DMA_BIDIRECTIONAL:
+	case DMA_FROM_DEVICE:
+		invalidate_dcache_range((unsigned long)vaddr,
+			(unsigned long)(vaddr + size));
+		break;
+	case DMA_TO_DEVICE:
+		break;
+	default:
+		BUG();
+	}
+}
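Moving __dma_sync_for_device()/__dma_sync_for_cpu() out of the header works because only the ops implementations below use them now. The direction handling is the standard non-coherent pattern: writeback before the device reads (Nios II's flush instruction also invalidates), invalidate before the CPU reads. A receive path exercises the for-cpu side (sketch; rxbuf and len are illustrative):

#include <linux/dma-mapping.h>

static void sketch_receive(struct device *dev, void *rxbuf, size_t len)
{
	dma_addr_t bus = dma_map_single(dev, rxbuf, len, DMA_FROM_DEVICE);

	/* ... device DMAs into rxbuf ... */
	dma_unmap_single(dev, bus, len, DMA_FROM_DEVICE);
	/* unmap invalidated the cached lines; CPU reads see the DMA'd data */
}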
-void *dma_alloc_coherent(struct device *dev, size_t size,
-		dma_addr_t *dma_handle, gfp_t gfp)
+static void *nios2_dma_alloc(struct device *dev, size_t size,
+		dma_addr_t *dma_handle, gfp_t gfp, struct dma_attrs *attrs)
 {
 	void *ret;
@@ -45,24 +82,21 @@ void *dma_alloc_coherent(struct device *dev, size_t size,
 	return ret;
 }
-EXPORT_SYMBOL(dma_alloc_coherent);
 
-void dma_free_coherent(struct device *dev, size_t size, void *vaddr,
-		dma_addr_t dma_handle)
+static void nios2_dma_free(struct device *dev, size_t size, void *vaddr,
+		dma_addr_t dma_handle, struct dma_attrs *attrs)
 {
 	unsigned long addr = (unsigned long) CAC_ADDR((unsigned long) vaddr);
 
 	free_pages(addr, get_order(size));
 }
-EXPORT_SYMBOL(dma_free_coherent);
 
-int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
-		enum dma_data_direction direction)
+static int nios2_dma_map_sg(struct device *dev, struct scatterlist *sg,
+		int nents, enum dma_data_direction direction,
+		struct dma_attrs *attrs)
 {
 	int i;
 
-	BUG_ON(!valid_dma_direction(direction));
-
 	for_each_sg(sg, sg, nents, i) {
 		void *addr;
@@ -75,40 +109,32 @@ int dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
 	return nents;
 }
-EXPORT_SYMBOL(dma_map_sg);
 
-dma_addr_t dma_map_page(struct device *dev, struct page *page,
+static dma_addr_t nios2_dma_map_page(struct device *dev, struct page *page,
 			unsigned long offset, size_t size,
-			enum dma_data_direction direction)
+			enum dma_data_direction direction,
+			struct dma_attrs *attrs)
 {
-	void *addr;
-
-	BUG_ON(!valid_dma_direction(direction));
-
-	addr = page_address(page) + offset;
+	void *addr = page_address(page) + offset;
 
 	__dma_sync_for_device(addr, size, direction);
 
 	return page_to_phys(page) + offset;
 }
-EXPORT_SYMBOL(dma_map_page);
 
-void dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
-		enum dma_data_direction direction)
+static void nios2_dma_unmap_page(struct device *dev, dma_addr_t dma_address,
+		size_t size, enum dma_data_direction direction,
+		struct dma_attrs *attrs)
 {
-	BUG_ON(!valid_dma_direction(direction));
-
 	__dma_sync_for_cpu(phys_to_virt(dma_address), size, direction);
 }
-EXPORT_SYMBOL(dma_unmap_page);
 
-void dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
-		enum dma_data_direction direction)
+static void nios2_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
+		int nhwentries, enum dma_data_direction direction,
+		struct dma_attrs *attrs)
 {
 	void *addr;
 	int i;
 
-	BUG_ON(!valid_dma_direction(direction));
-
 	if (direction == DMA_TO_DEVICE)
 		return;
@@ -118,69 +144,54 @@ void dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
 		__dma_sync_for_cpu(addr, sg->length, direction);
 	}
 }
-EXPORT_SYMBOL(dma_unmap_sg);
 
-void dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
-		size_t size, enum dma_data_direction direction)
-{
-	BUG_ON(!valid_dma_direction(direction));
-
-	__dma_sync_for_cpu(phys_to_virt(dma_handle), size, direction);
-}
-EXPORT_SYMBOL(dma_sync_single_for_cpu);
-
-void dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
-		size_t size, enum dma_data_direction direction)
-{
-	BUG_ON(!valid_dma_direction(direction));
-
-	__dma_sync_for_device(phys_to_virt(dma_handle), size, direction);
-}
-EXPORT_SYMBOL(dma_sync_single_for_device);
-
-void dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
-		unsigned long offset, size_t size,
-		enum dma_data_direction direction)
+static void nios2_dma_sync_single_for_cpu(struct device *dev,
+		dma_addr_t dma_handle, size_t size,
+		enum dma_data_direction direction)
 {
-	BUG_ON(!valid_dma_direction(direction));
-
 	__dma_sync_for_cpu(phys_to_virt(dma_handle), size, direction);
 }
-EXPORT_SYMBOL(dma_sync_single_range_for_cpu);
 
-void dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
-		unsigned long offset, size_t size,
-		enum dma_data_direction direction)
+static void nios2_dma_sync_single_for_device(struct device *dev,
+		dma_addr_t dma_handle, size_t size,
+		enum dma_data_direction direction)
 {
-	BUG_ON(!valid_dma_direction(direction));
-
 	__dma_sync_for_device(phys_to_virt(dma_handle), size, direction);
 }
-EXPORT_SYMBOL(dma_sync_single_range_for_device);
 
-void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
-		enum dma_data_direction direction)
+static void nios2_dma_sync_sg_for_cpu(struct device *dev,
+		struct scatterlist *sg, int nelems,
+		enum dma_data_direction direction)
 {
 	int i;
 
-	BUG_ON(!valid_dma_direction(direction));
-
 	/* Make sure that gcc doesn't leave the empty loop body.  */
 	for_each_sg(sg, sg, nelems, i)
 		__dma_sync_for_cpu(sg_virt(sg), sg->length, direction);
 }
-EXPORT_SYMBOL(dma_sync_sg_for_cpu);
 
-void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
-		int nelems, enum dma_data_direction direction)
+static void nios2_dma_sync_sg_for_device(struct device *dev,
+		struct scatterlist *sg, int nelems,
+		enum dma_data_direction direction)
 {
 	int i;
 
-	BUG_ON(!valid_dma_direction(direction));
-
 	/* Make sure that gcc doesn't leave the empty loop body.  */
 	for_each_sg(sg, sg, nelems, i)
 		__dma_sync_for_device(sg_virt(sg), sg->length, direction);
 }
-EXPORT_SYMBOL(dma_sync_sg_for_device);
+
+struct dma_map_ops nios2_dma_ops = {
+	.alloc			= nios2_dma_alloc,
+	.free			= nios2_dma_free,
+	.map_page		= nios2_dma_map_page,
+	.unmap_page		= nios2_dma_unmap_page,
+	.map_sg			= nios2_dma_map_sg,
+	.unmap_sg		= nios2_dma_unmap_sg,
+	.sync_single_for_device	= nios2_dma_sync_single_for_device,
+	.sync_single_for_cpu	= nios2_dma_sync_single_for_cpu,
+	.sync_sg_for_cpu	= nios2_dma_sync_sg_for_cpu,
+	.sync_sg_for_device	= nios2_dma_sync_sg_for_device,
+};
+EXPORT_SYMBOL(nios2_dma_ops);
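With `nios2_dma_ops` registered, the individual `dma_*` entry points (and their exports) come from common code rather than the arch, which is also why the per-function `BUG_ON(!valid_dma_direction(...))` checks could be dropped above. Roughly what the 4.5-era common wrapper in `asm-generic/dma-mapping-common.h` does — simplified here, with the DMA-debug and kmemcheck hooks omitted, so treat it as a sketch rather than the verbatim header:

```c
/* Simplified sketch of the generic wrapper built on get_dma_ops(). */
static inline dma_addr_t dma_map_page(struct device *dev, struct page *page,
				      size_t offset, size_t size,
				      enum dma_data_direction dir)
{
	struct dma_map_ops *ops = get_dma_ops(dev);

	/* The direction check every arch copy used to duplicate: */
	BUG_ON(!valid_dma_direction(dir));
	return ops->map_page(dev, page, offset, size, dir, NULL);
}
```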
@@ -29,9 +29,6 @@ config OPENRISC
 config MMU
 	def_bool y
 
-config HAVE_DMA_ATTRS
-	def_bool y
-
 config RWSEM_GENERIC_SPINLOCK
 	def_bool y
...
@@ -42,6 +42,4 @@ static inline int dma_supported(struct device *dev, u64 dma_mask)
 	return dma_mask == DMA_BIT_MASK(32);
 }
 
-#include <asm-generic/dma-mapping-common.h>
-
 #endif	/* __ASM_OPENRISC_DMA_MAPPING_H */
@@ -29,6 +29,7 @@ config PARISC
 	select TTY # Needed for pdc_cons.c
 	select HAVE_DEBUG_STACKOVERFLOW
 	select HAVE_ARCH_AUDITSYSCALL
+	select ARCH_NO_COHERENT_DMA_MMAP
 	help
 	  The PA-RISC microprocessor is designed by Hewlett-Packard and used
...
 #ifndef _PARISC_DMA_MAPPING_H
 #define _PARISC_DMA_MAPPING_H
 
-#include <linux/mm.h>
-#include <linux/scatterlist.h>
 #include <asm/cacheflush.h>
 
-/* See Documentation/DMA-API-HOWTO.txt */
-struct hppa_dma_ops {
-	int  (*dma_supported)(struct device *dev, u64 mask);
-	void *(*alloc_consistent)(struct device *dev, size_t size, dma_addr_t *iova, gfp_t flag);
-	void *(*alloc_noncoherent)(struct device *dev, size_t size, dma_addr_t *iova, gfp_t flag);
-	void (*free_consistent)(struct device *dev, size_t size, void *vaddr, dma_addr_t iova);
-	dma_addr_t (*map_single)(struct device *dev, void *addr, size_t size, enum dma_data_direction direction);
-	void (*unmap_single)(struct device *dev, dma_addr_t iova, size_t size, enum dma_data_direction direction);
-	int (*map_sg)(struct device *dev, struct scatterlist *sg, int nents, enum dma_data_direction direction);
-	void (*unmap_sg)(struct device *dev, struct scatterlist *sg, int nhwents, enum dma_data_direction direction);
-	void (*dma_sync_single_for_cpu)(struct device *dev, dma_addr_t iova, unsigned long offset, size_t size, enum dma_data_direction direction);
-	void (*dma_sync_single_for_device)(struct device *dev, dma_addr_t iova, unsigned long offset, size_t size, enum dma_data_direction direction);
-	void (*dma_sync_sg_for_cpu)(struct device *dev, struct scatterlist *sg, int nelems, enum dma_data_direction direction);
-	void (*dma_sync_sg_for_device)(struct device *dev, struct scatterlist *sg, int nelems, enum dma_data_direction direction);
-};
-
 /*
-** We could live without the hppa_dma_ops indirection if we didn't want
-** to support 4 different coherent dma models with one binary (they will
-** someday be loadable modules):
+** We need to support 4 different coherent dma models with one binary:
+**
 **     I/O MMU        consistent method           dma_sync behavior
 **  =============   ======================       =======================
 **  a) PA-7x00LC    uncachable host memory          flush/purge
@@ -40,158 +21,22 @@ struct hppa_dma_ops {
  */
 
 #ifdef CONFIG_PA11
-extern struct hppa_dma_ops pcxl_dma_ops;
-extern struct hppa_dma_ops pcx_dma_ops;
+extern struct dma_map_ops pcxl_dma_ops;
+extern struct dma_map_ops pcx_dma_ops;
 #endif
 
-extern struct hppa_dma_ops *hppa_dma_ops;
-
-#define dma_alloc_attrs(d, s, h, f, a) dma_alloc_coherent(d, s, h, f)
-#define dma_free_attrs(d, s, h, f, a) dma_free_coherent(d, s, h, f)
-
-static inline void *
-dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle,
-		   gfp_t flag)
-{
-	return hppa_dma_ops->alloc_consistent(dev, size, dma_handle, flag);
-}
-
-static inline void *
-dma_alloc_noncoherent(struct device *dev, size_t size, dma_addr_t *dma_handle,
-		      gfp_t flag)
-{
-	return hppa_dma_ops->alloc_noncoherent(dev, size, dma_handle, flag);
-}
-
-static inline void
-dma_free_coherent(struct device *dev, size_t size,
-		  void *vaddr, dma_addr_t dma_handle)
-{
-	hppa_dma_ops->free_consistent(dev, size, vaddr, dma_handle);
-}
-
-static inline void
-dma_free_noncoherent(struct device *dev, size_t size,
-		     void *vaddr, dma_addr_t dma_handle)
-{
-	hppa_dma_ops->free_consistent(dev, size, vaddr, dma_handle);
-}
-
-static inline dma_addr_t
-dma_map_single(struct device *dev, void *ptr, size_t size,
-	       enum dma_data_direction direction)
-{
-	return hppa_dma_ops->map_single(dev, ptr, size, direction);
-}
-
-static inline void
-dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
-		 enum dma_data_direction direction)
-{
-	hppa_dma_ops->unmap_single(dev, dma_addr, size, direction);
-}
-
-static inline int
-dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
-	   enum dma_data_direction direction)
-{
-	return hppa_dma_ops->map_sg(dev, sg, nents, direction);
-}
-
-static inline void
-dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
-	     enum dma_data_direction direction)
-{
-	hppa_dma_ops->unmap_sg(dev, sg, nhwentries, direction);
-}
-
-static inline dma_addr_t
-dma_map_page(struct device *dev, struct page *page, unsigned long offset,
-	     size_t size, enum dma_data_direction direction)
-{
-	return dma_map_single(dev, (page_address(page) + (offset)), size, direction);
-}
-
-static inline void
-dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
-	       enum dma_data_direction direction)
-{
-	dma_unmap_single(dev, dma_address, size, direction);
-}
-
-static inline void
-dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
-		enum dma_data_direction direction)
-{
-	if(hppa_dma_ops->dma_sync_single_for_cpu)
-		hppa_dma_ops->dma_sync_single_for_cpu(dev, dma_handle, 0, size, direction);
-}
-
-static inline void
-dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size,
-		enum dma_data_direction direction)
-{
-	if(hppa_dma_ops->dma_sync_single_for_device)
-		hppa_dma_ops->dma_sync_single_for_device(dev, dma_handle, 0, size, direction);
-}
-
-static inline void
-dma_sync_single_range_for_cpu(struct device *dev, dma_addr_t dma_handle,
-		unsigned long offset, size_t size,
-		enum dma_data_direction direction)
-{
-	if(hppa_dma_ops->dma_sync_single_for_cpu)
-		hppa_dma_ops->dma_sync_single_for_cpu(dev, dma_handle, offset, size, direction);
-}
-
-static inline void
-dma_sync_single_range_for_device(struct device *dev, dma_addr_t dma_handle,
-		unsigned long offset, size_t size,
-		enum dma_data_direction direction)
-{
-	if(hppa_dma_ops->dma_sync_single_for_device)
-		hppa_dma_ops->dma_sync_single_for_device(dev, dma_handle, offset, size, direction);
-}
-
-static inline void
-dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
-		enum dma_data_direction direction)
-{
-	if(hppa_dma_ops->dma_sync_sg_for_cpu)
-		hppa_dma_ops->dma_sync_sg_for_cpu(dev, sg, nelems, direction);
-}
-
-static inline void
-dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
-		enum dma_data_direction direction)
-{
-	if(hppa_dma_ops->dma_sync_sg_for_device)
-		hppa_dma_ops->dma_sync_sg_for_device(dev, sg, nelems, direction);
-}
-
-static inline int
-dma_supported(struct device *dev, u64 mask)
-{
-	return hppa_dma_ops->dma_supported(dev, mask);
-}
-
-static inline int
-dma_set_mask(struct device *dev, u64 mask)
-{
-	if(!dev->dma_mask || !dma_supported(dev, mask))
-		return -EIO;
-
-	*dev->dma_mask = mask;
-
-	return 0;
-}
+extern struct dma_map_ops *hppa_dma_ops;
+
+static inline struct dma_map_ops *get_dma_ops(struct device *dev)
+{
+	return hppa_dma_ops;
+}
 
 static inline void
 dma_cache_sync(struct device *dev, void *vaddr, size_t size,
 	       enum dma_data_direction direction)
 {
-	if(hppa_dma_ops->dma_sync_single_for_cpu)
+	if (hppa_dma_ops->sync_single_for_cpu)
 		flush_kernel_dcache_range((unsigned long)vaddr, size);
 }
 
@@ -238,22 +83,4 @@ struct parisc_device;
 void * sba_get_iommu(struct parisc_device *dev);
 #endif
 
-/* At the moment, we panic on error for IOMMU resource exaustion */
-#define dma_mapping_error(dev, x)	0
-
-/* This API cannot be supported on PA-RISC */
-static inline int dma_mmap_coherent(struct device *dev,
-		struct vm_area_struct *vma, void *cpu_addr,
-		dma_addr_t dma_addr, size_t size)
-{
-	return -EINVAL;
-}
-
-static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt,
-		void *cpu_addr, dma_addr_t dma_addr,
-		size_t size)
-{
-	return -EINVAL;
-}
-
 #endif
@@ -43,7 +43,6 @@
 #define MADV_SPACEAVAIL 5               /* insure that resources are reserved */
 #define MADV_VPS_PURGE  6               /* Purge pages from VM page cache */
 #define MADV_VPS_INHERIT 7              /* Inherit parents page size */
-#define MADV_FREE	8		/* free pages only if memory pressure */
 
 /* common/generic parameters */
 #define MADV_FREE	8		/* free pages only if memory pressure */
...
@@ -40,7 +40,7 @@
 #include <asm/parisc-device.h>
 
 /* See comments in include/asm-parisc/pci.h */
-struct hppa_dma_ops *hppa_dma_ops __read_mostly;
+struct dma_map_ops *hppa_dma_ops __read_mostly;
 EXPORT_SYMBOL(hppa_dma_ops);
 
 static struct device root = {
...
@@ -413,7 +413,8 @@ pcxl_dma_init(void)
 
 __initcall(pcxl_dma_init);
 
-static void * pa11_dma_alloc_consistent (struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t flag)
+static void *pa11_dma_alloc(struct device *dev, size_t size,
+		dma_addr_t *dma_handle, gfp_t flag, struct dma_attrs *attrs)
 {
 	unsigned long vaddr;
 	unsigned long paddr;
@@ -439,7 +440,8 @@ static void * pa11_dma_alloc_consistent (struct device *dev, size_t size, dma_ad
 	return (void *)vaddr;
 }
 
-static void pa11_dma_free_consistent (struct device *dev, size_t size, void *vaddr, dma_addr_t dma_handle)
+static void pa11_dma_free(struct device *dev, size_t size, void *vaddr,
+		dma_addr_t dma_handle, struct dma_attrs *attrs)
 {
 	int order;
@@ -450,15 +452,20 @@ static void pa11_dma_free_consistent (struct device *dev, size_t size, void *vad
 	free_pages((unsigned long)__va(dma_handle), order);
 }
 
-static dma_addr_t pa11_dma_map_single(struct device *dev, void *addr, size_t size, enum dma_data_direction direction)
+static dma_addr_t pa11_dma_map_page(struct device *dev, struct page *page,
+		unsigned long offset, size_t size,
+		enum dma_data_direction direction, struct dma_attrs *attrs)
 {
+	void *addr = page_address(page) + offset;
+
 	BUG_ON(direction == DMA_NONE);
 
 	flush_kernel_dcache_range((unsigned long) addr, size);
 	return virt_to_phys(addr);
 }
 
-static void pa11_dma_unmap_single(struct device *dev, dma_addr_t dma_handle, size_t size, enum dma_data_direction direction)
+static void pa11_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
+		size_t size, enum dma_data_direction direction,
+		struct dma_attrs *attrs)
 {
 	BUG_ON(direction == DMA_NONE);
@@ -475,7 +482,9 @@ static void pa11_dma_unmap_single(struct device *dev, dma_addr_t dma_handle, siz
 	return;
 }
 
-static int pa11_dma_map_sg(struct device *dev, struct scatterlist *sglist, int nents, enum dma_data_direction direction)
+static int pa11_dma_map_sg(struct device *dev, struct scatterlist *sglist,
+		int nents, enum dma_data_direction direction,
+		struct dma_attrs *attrs)
 {
 	int i;
 	struct scatterlist *sg;
@@ -492,7 +501,9 @@ static int pa11_dma_map_sg(struct device *dev, struct scatterlist *sglist, int n
 	return nents;
 }
 
-static void pa11_dma_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents, enum dma_data_direction direction)
+static void pa11_dma_unmap_sg(struct device *dev, struct scatterlist *sglist,
+		int nents, enum dma_data_direction direction,
+		struct dma_attrs *attrs)
 {
 	int i;
 	struct scatterlist *sg;
@@ -509,18 +520,24 @@ static void pa11_dma_unmap_sg(struct device *dev, struct scatterlist *sglist, in
 	return;
 }
 
-static void pa11_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, unsigned long offset, size_t size, enum dma_data_direction direction)
+static void pa11_dma_sync_single_for_cpu(struct device *dev,
+		dma_addr_t dma_handle, size_t size,
+		enum dma_data_direction direction)
 {
 	BUG_ON(direction == DMA_NONE);
 
-	flush_kernel_dcache_range((unsigned long) phys_to_virt(dma_handle) + offset, size);
+	flush_kernel_dcache_range((unsigned long) phys_to_virt(dma_handle),
+			size);
 }
 
-static void pa11_dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, unsigned long offset, size_t size, enum dma_data_direction direction)
+static void pa11_dma_sync_single_for_device(struct device *dev,
+		dma_addr_t dma_handle, size_t size,
+		enum dma_data_direction direction)
 {
 	BUG_ON(direction == DMA_NONE);
 
-	flush_kernel_dcache_range((unsigned long) phys_to_virt(dma_handle) + offset, size);
+	flush_kernel_dcache_range((unsigned long) phys_to_virt(dma_handle),
+			size);
 }
 
 static void pa11_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sglist, int nents, enum dma_data_direction direction)
@@ -545,32 +562,28 @@ static void pa11_dma_sync_sg_for_device(struct device *dev, struct scatterlist *
 		flush_kernel_vmap_range(sg_virt(sg), sg->length);
 }
 
-struct hppa_dma_ops pcxl_dma_ops = {
+struct dma_map_ops pcxl_dma_ops = {
 	.dma_supported =	pa11_dma_supported,
-	.alloc_consistent =	pa11_dma_alloc_consistent,
-	.alloc_noncoherent =	pa11_dma_alloc_consistent,
-	.free_consistent =	pa11_dma_free_consistent,
-	.map_single =		pa11_dma_map_single,
-	.unmap_single =		pa11_dma_unmap_single,
+	.alloc =		pa11_dma_alloc,
+	.free =			pa11_dma_free,
+	.map_page =		pa11_dma_map_page,
+	.unmap_page =		pa11_dma_unmap_page,
 	.map_sg =		pa11_dma_map_sg,
 	.unmap_sg =		pa11_dma_unmap_sg,
-	.dma_sync_single_for_cpu = pa11_dma_sync_single_for_cpu,
-	.dma_sync_single_for_device = pa11_dma_sync_single_for_device,
-	.dma_sync_sg_for_cpu = pa11_dma_sync_sg_for_cpu,
-	.dma_sync_sg_for_device = pa11_dma_sync_sg_for_device,
+	.sync_single_for_cpu =	pa11_dma_sync_single_for_cpu,
+	.sync_single_for_device = pa11_dma_sync_single_for_device,
+	.sync_sg_for_cpu =	pa11_dma_sync_sg_for_cpu,
+	.sync_sg_for_device =	pa11_dma_sync_sg_for_device,
 };
 
-static void *fail_alloc_consistent(struct device *dev, size_t size,
-				   dma_addr_t *dma_handle, gfp_t flag)
-{
-	return NULL;
-}
-
-static void *pa11_dma_alloc_noncoherent(struct device *dev, size_t size,
-		dma_addr_t *dma_handle, gfp_t flag)
+static void *pcx_dma_alloc(struct device *dev, size_t size,
+		dma_addr_t *dma_handle, gfp_t flag, struct dma_attrs *attrs)
 {
 	void *addr;
 
+	if (!dma_get_attr(DMA_ATTR_NON_CONSISTENT, attrs))
+		return NULL;
+
 	addr = (void *)__get_free_pages(flag, get_order(size));
 	if (addr)
 		*dma_handle = (dma_addr_t)virt_to_phys(addr);
@@ -578,24 +591,23 @@ static void *pa11_dma_alloc_noncoherent(struct device *dev, size_t size,
 	return addr;
 }
 
-static void pa11_dma_free_noncoherent(struct device *dev, size_t size,
-		void *vaddr, dma_addr_t iova)
+static void pcx_dma_free(struct device *dev, size_t size, void *vaddr,
+		dma_addr_t iova, struct dma_attrs *attrs)
 {
 	free_pages((unsigned long)vaddr, get_order(size));
 	return;
 }
 
-struct hppa_dma_ops pcx_dma_ops = {
+struct dma_map_ops pcx_dma_ops = {
 	.dma_supported =	pa11_dma_supported,
-	.alloc_consistent =	fail_alloc_consistent,
-	.alloc_noncoherent =	pa11_dma_alloc_noncoherent,
-	.free_consistent =	pa11_dma_free_noncoherent,
-	.map_single =		pa11_dma_map_single,
-	.unmap_single =		pa11_dma_unmap_single,
+	.alloc =		pcx_dma_alloc,
+	.free =			pcx_dma_free,
+	.map_page =		pa11_dma_map_page,
+	.unmap_page =		pa11_dma_unmap_page,
 	.map_sg =		pa11_dma_map_sg,
 	.unmap_sg =		pa11_dma_unmap_sg,
-	.dma_sync_single_for_cpu =	pa11_dma_sync_single_for_cpu,
-	.dma_sync_single_for_device =	pa11_dma_sync_single_for_device,
-	.dma_sync_sg_for_cpu =		pa11_dma_sync_sg_for_cpu,
-	.dma_sync_sg_for_device =	pa11_dma_sync_sg_for_device,
+	.sync_single_for_cpu =		pa11_dma_sync_single_for_cpu,
+	.sync_single_for_device =	pa11_dma_sync_single_for_device,
+	.sync_sg_for_cpu =		pa11_dma_sync_sg_for_cpu,
+	.sync_sg_for_device =		pa11_dma_sync_sg_for_device,
 };
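Note that `pcx_dma_alloc()` above only succeeds when the caller explicitly asked for non-consistent memory, since PCX machines cannot provide coherent allocations; the old `alloc_noncoherent` hook is gone, so that request now travels through the generic attrs mechanism. On the 4.5-era `struct dma_attrs` API, a caller would express that as in this hedged sketch (`dev` and `size` are placeholders):

```c
#include <linux/dma-attrs.h>
#include <linux/dma-mapping.h>

/* Hedged sketch of requesting non-consistent memory (pre-4.8 attrs API). */
static void *alloc_noncoherent(struct device *dev, size_t size,
			       dma_addr_t *handle)
{
	DEFINE_DMA_ATTRS(attrs);

	dma_set_attr(DMA_ATTR_NON_CONSISTENT, &attrs);
	/* On PCX this reaches pcx_dma_alloc(); without the attr it
	 * returns NULL, matching the old fail_alloc_consistent(). */
	return dma_alloc_attrs(dev, size, handle, GFP_KERNEL, &attrs);
}
```

The caller is then responsible for bracketing CPU accesses with `dma_cache_sync()`, exactly as with the old noncoherent allocator.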
@@ -108,7 +108,6 @@ config PPC
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_MEMBLOCK
 	select HAVE_MEMBLOCK_NODE_MAP
-	select HAVE_DMA_ATTRS
 	select HAVE_DMA_API_DEBUG
 	select HAVE_OPROFILE
 	select HAVE_DEBUG_KMEMLEAK
@@ -158,6 +157,7 @@ config PPC
 	select ARCH_HAS_DMA_SET_COHERENT_MASK
 	select ARCH_HAS_DEVMEM_IS_ALLOWED
 	select HAVE_ARCH_SECCOMP_FILTER
+	select ARCH_HAS_UBSAN_SANITIZE_ALL
 
 config GENERIC_CSUM
 	def_bool CPU_LITTLE_ENDIAN
...
@@ -125,8 +125,6 @@ static inline void set_dma_offset(struct device *dev, dma_addr_t off)
 #define HAVE_ARCH_DMA_SET_MASK 1
 extern int dma_set_mask(struct device *dev, u64 dma_mask);
 
-#include <asm-generic/dma-mapping-common.h>
-
 extern int __dma_set_mask(struct device *dev, u64 dma_mask);
 extern u64 __dma_get_required_mask(struct device *dev);
...
@@ -191,7 +191,7 @@ struct fadump_crash_info_header {
 	u64		elfcorehdr_addr;
 	u32		crashing_cpu;
 	struct pt_regs	regs;
-	struct cpumask	cpu_online_mask;
+	struct cpumask	online_mask;
 };
 
 /* Crash memory ranges */
...
@@ -136,12 +136,18 @@ endif
 obj-$(CONFIG_EPAPR_PARAVIRT)	+= epapr_paravirt.o epapr_hcalls.o
 obj-$(CONFIG_KVM_GUEST)		+= kvm.o kvm_emul.o
 
-# Disable GCOV in odd or sensitive code
+# Disable GCOV & sanitizers in odd or sensitive code
 GCOV_PROFILE_prom_init.o := n
+UBSAN_SANITIZE_prom_init.o := n
 GCOV_PROFILE_ftrace.o := n
+UBSAN_SANITIZE_ftrace.o := n
 GCOV_PROFILE_machine_kexec_64.o := n
+UBSAN_SANITIZE_machine_kexec_64.o := n
 GCOV_PROFILE_machine_kexec_32.o := n
+UBSAN_SANITIZE_machine_kexec_32.o := n
 GCOV_PROFILE_kprobes.o := n
+UBSAN_SANITIZE_kprobes.o := n
+UBSAN_SANITIZE_vdso.o := n
 
 extra-$(CONFIG_PPC_FPU)	+= fpu.o
 extra-$(CONFIG_ALTIVEC)	+= vector.o
...
@@ -415,7 +415,7 @@ void crash_fadump(struct pt_regs *regs, const char *str)
 	else
 		ppc_save_regs(&fdh->regs);
 
-	fdh->cpu_online_mask = *cpu_online_mask;
+	fdh->online_mask = *cpu_online_mask;
 
 	/* Call ibm,os-term rtas call to trigger firmware assisted dump */
 	rtas_os_term((char *)str);
@@ -646,7 +646,7 @@ static int __init fadump_build_cpu_notes(const struct fadump_mem_struct *fdm)
 		}
 		/* Lower 4 bytes of reg_value contains logical cpu id */
 		cpu = be64_to_cpu(reg_entry->reg_value) & FADUMP_CPU_ID_MASK;
-		if (fdh && !cpumask_test_cpu(cpu, &fdh->cpu_online_mask)) {
+		if (fdh && !cpumask_test_cpu(cpu, &fdh->online_mask)) {
			SKIP_TO_NEXT_CPU(reg_entry);
			continue;
		}
...
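The fadump rename above is needed because, with this merge's cpumask rework, `cpu_online_mask` becomes a macro over the underlying `__cpu_online_mask` object (see the drivers/base/cpu.c hunk further down), so a struct member with the same name would no longer compile. The crash path still captures the mask by plain struct assignment; a hedged equivalent using the cpumask API would be:

```c
#include <linux/cpumask.h>

/* Sketch only: capture the set of online CPUs at crash time.
 * Equivalent to "fdh->online_mask = *cpu_online_mask" above. */
static void fadump_capture_online_cpus(struct fadump_crash_info_header *fdh)
{
	cpumask_copy(&fdh->online_mask, cpu_online_mask);
}
```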
@@ -15,6 +15,7 @@ targets := $(obj-vdso32) vdso32.so vdso32.so.dbg
 obj-vdso32 := $(addprefix $(obj)/, $(obj-vdso32))
 
 GCOV_PROFILE := n
+UBSAN_SANITIZE := n
 
 ccflags-y := -shared -fno-common -fno-builtin
 ccflags-y += -nostdlib -Wl,-soname=linux-vdso32.so.1 \
...
@@ -8,6 +8,7 @@ targets := $(obj-vdso64) vdso64.so vdso64.so.dbg
 obj-vdso64 := $(addprefix $(obj)/, $(obj-vdso64))
 
 GCOV_PROFILE := n
+UBSAN_SANITIZE := n
 
 ccflags-y := -shared -fno-common -fno-builtin
 ccflags-y += -nostdlib -Wl,-soname=linux-vdso64.so.1 \
...
@@ -3,6 +3,7 @@
 subdir-ccflags-$(CONFIG_PPC_WERROR) := -Werror
 
 GCOV_PROFILE := n
+UBSAN_SANITIZE := n
 
 ccflags-$(CONFIG_PPC64) := $(NO_MINIMAL_TOC)
...
@@ -579,7 +579,6 @@ config QDIO
 
 menuconfig PCI
 	bool "PCI support"
-	select HAVE_DMA_ATTRS
 	select PCI_MSI
 	select IOMMU_SUPPORT
 	help
...
@@ -23,8 +23,6 @@ static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
 {
 }
 
-#include <asm-generic/dma-mapping-common.h>
-
 static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
 {
 	if (!dev->dma_mask)
...
@@ -11,7 +11,6 @@ config SUPERH
 	select HAVE_GENERIC_DMA_COHERENT
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_DMA_API_DEBUG
-	select HAVE_DMA_ATTRS
 	select HAVE_PERF_EVENTS
 	select HAVE_DEBUG_BUGVERBOSE
 	select ARCH_HAVE_CUSTOM_GPIO_H
...
@@ -11,8 +11,6 @@ static inline struct dma_map_ops *get_dma_ops(struct device *dev)
 
 #define DMA_ERROR_CODE 0
 
-#include <asm-generic/dma-mapping-common.h>
-
 void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
 		    enum dma_data_direction dir);
...
@@ -26,7 +26,6 @@ config SPARC
 	select RTC_CLASS
 	select RTC_DRV_M48T59
 	select RTC_SYSTOHC
-	select HAVE_DMA_ATTRS
 	select HAVE_DMA_API_DEBUG
 	select HAVE_ARCH_JUMP_LABEL if SPARC64
 	select GENERIC_IRQ_SHOW
...
@@ -37,21 +37,4 @@ static inline struct dma_map_ops *get_dma_ops(struct device *dev)
 	return dma_ops;
 }
 
-#define HAVE_ARCH_DMA_SET_MASK 1
-
-static inline int dma_set_mask(struct device *dev, u64 mask)
-{
-#ifdef CONFIG_PCI
-	if (dev->bus == &pci_bus_type) {
-		if (!dev->dma_mask || !dma_supported(dev, mask))
-			return -EINVAL;
-		*dev->dma_mask = mask;
-		return 0;
-	}
-#endif
-	return -EINVAL;
-}
-
-#include <asm-generic/dma-mapping-common.h>
-
 #endif
@@ -5,7 +5,6 @@ config TILE
 	def_bool y
 	select HAVE_PERF_EVENTS
 	select USE_PMC if PERF_EVENTS
-	select HAVE_DMA_ATTRS
 	select HAVE_DMA_API_DEBUG
 	select HAVE_KVM if !TILEGX
 	select GENERIC_FIND_FIRST_BIT
...
@@ -73,37 +73,7 @@ static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
 }
 
 #define HAVE_ARCH_DMA_SET_MASK 1
+int dma_set_mask(struct device *dev, u64 mask);
 
-#include <asm-generic/dma-mapping-common.h>
-
-static inline int
-dma_set_mask(struct device *dev, u64 mask)
-{
-	struct dma_map_ops *dma_ops = get_dma_ops(dev);
-
-	/*
-	 * For PCI devices with 64-bit DMA addressing capability, promote
-	 * the dma_ops to hybrid, with the consistent memory DMA space limited
-	 * to 32-bit. For 32-bit capable devices, limit the streaming DMA
-	 * address range to max_direct_dma_addr.
-	 */
-	if (dma_ops == gx_pci_dma_map_ops ||
-	    dma_ops == gx_hybrid_pci_dma_map_ops ||
-	    dma_ops == gx_legacy_pci_dma_map_ops) {
-		if (mask == DMA_BIT_MASK(64) &&
-		    dma_ops == gx_legacy_pci_dma_map_ops)
-			set_dma_ops(dev, gx_hybrid_pci_dma_map_ops);
-		else if (mask > dev->archdata.max_direct_dma_addr)
-			mask = dev->archdata.max_direct_dma_addr;
-	}
-
-	if (!dev->dma_mask || !dma_supported(dev, mask))
-		return -EIO;
-
-	*dev->dma_mask = mask;
-
-	return 0;
-}
 
 /*
  * dma_alloc_noncoherent() is #defined to return coherent memory,
...
@@ -583,6 +583,35 @@ struct dma_map_ops *gx_hybrid_pci_dma_map_ops;
 EXPORT_SYMBOL(gx_legacy_pci_dma_map_ops);
 EXPORT_SYMBOL(gx_hybrid_pci_dma_map_ops);
 
+int dma_set_mask(struct device *dev, u64 mask)
+{
+	struct dma_map_ops *dma_ops = get_dma_ops(dev);
+
+	/*
+	 * For PCI devices with 64-bit DMA addressing capability, promote
+	 * the dma_ops to hybrid, with the consistent memory DMA space limited
+	 * to 32-bit. For 32-bit capable devices, limit the streaming DMA
+	 * address range to max_direct_dma_addr.
+	 */
+	if (dma_ops == gx_pci_dma_map_ops ||
+	    dma_ops == gx_hybrid_pci_dma_map_ops ||
+	    dma_ops == gx_legacy_pci_dma_map_ops) {
+		if (mask == DMA_BIT_MASK(64) &&
+		    dma_ops == gx_legacy_pci_dma_map_ops)
+			set_dma_ops(dev, gx_hybrid_pci_dma_map_ops);
+		else if (mask > dev->archdata.max_direct_dma_addr)
+			mask = dev->archdata.max_direct_dma_addr;
+	}
+
+	if (!dev->dma_mask || !dma_supported(dev, mask))
+		return -EIO;
+
+	*dev->dma_mask = mask;
+
+	return 0;
+}
+EXPORT_SYMBOL(dma_set_mask);
+
 #ifdef CONFIG_ARCH_HAS_DMA_SET_COHERENT_MASK
 int dma_set_coherent_mask(struct device *dev, u64 mask)
 {
...
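Moving tile's `dma_set_mask()` out of line keeps the ops-promotion logic in one translation unit while the header carries only the prototype; drivers are unaffected. For reference, the usual probe-time pattern on the generic API (hedged sketch; "mydrv" is a hypothetical driver and `dma_set_mask()` returns 0 on success):

```c
#include <linux/dma-mapping.h>
#include <linux/pci.h>

/* Hedged sketch: prefer 64-bit DMA, fall back to 32-bit. */
static int mydrv_dma_setup(struct pci_dev *pdev)
{
	if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) == 0)
		return 0;		/* device can address 64 bits */
	return dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
}
```

On tile, the first call is where the legacy ops get promoted to the hybrid ops, exactly as the comment in the moved function describes.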
@@ -5,7 +5,6 @@ config UNICORE32
 	select ARCH_MIGHT_HAVE_PC_SERIO
 	select HAVE_MEMBLOCK
 	select HAVE_GENERIC_DMA_COHERENT
-	select HAVE_DMA_ATTRS
 	select HAVE_KERNEL_GZIP
 	select HAVE_KERNEL_BZIP2
 	select GENERIC_ATOMIC64
...
@@ -28,8 +28,6 @@ static inline struct dma_map_ops *get_dma_ops(struct device *dev)
 	return &swiotlb_dma_map_ops;
 }
 
-#include <asm-generic/dma-mapping-common.h>
-
 static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
 {
 	if (dev && dev->dma_mask)
...
@@ -31,6 +31,7 @@ config X86
 	select ARCH_HAS_PMEM_API		if X86_64
 	select ARCH_HAS_MMIO_FLUSH
 	select ARCH_HAS_SG_CHAIN
+	select ARCH_HAS_UBSAN_SANITIZE_ALL
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
 	select ARCH_MIGHT_HAVE_ACPI_PDC		if ACPI
 	select ARCH_MIGHT_HAVE_PC_PARPORT
@@ -99,7 +100,6 @@ config X86
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DEBUG_STACKOVERFLOW
 	select HAVE_DMA_API_DEBUG
-	select HAVE_DMA_ATTRS
 	select HAVE_DMA_CONTIGUOUS
 	select HAVE_DYNAMIC_FTRACE
 	select HAVE_DYNAMIC_FTRACE_WITH_REGS
...
@@ -60,6 +60,7 @@ clean-files += cpustr.h
 KBUILD_CFLAGS	:= $(USERINCLUDE) $(REALMODE_CFLAGS) -D_SETUP
 KBUILD_AFLAGS	:= $(KBUILD_CFLAGS) -D__ASSEMBLY__
 GCOV_PROFILE := n
+UBSAN_SANITIZE := n
 
 $(obj)/bzImage: asflags-y  := $(SVGA_MODE)
...
@@ -33,6 +33,7 @@ KBUILD_CFLAGS += $(call cc-option,-fno-stack-protector)
 KBUILD_AFLAGS  := $(KBUILD_CFLAGS) -D__ASSEMBLY__
 GCOV_PROFILE := n
+UBSAN_SANITIZE :=n
 
 LDFLAGS := -m elf_$(UTS_MACHINE)
 LDFLAGS_vmlinux := -T
...
@@ -4,6 +4,7 @@
 
 KBUILD_CFLAGS += $(DISABLE_LTO)
 KASAN_SANITIZE			:= n
+UBSAN_SANITIZE			:= n
 
 VDSO64-$(CONFIG_X86_64)		:= y
 VDSOX32-$(CONFIG_X86_X32_ABI)	:= y
...
@@ -46,8 +46,6 @@ bool arch_dma_alloc_attrs(struct device **dev, gfp_t *gfp);
 #define HAVE_ARCH_DMA_SUPPORTED 1
 extern int dma_supported(struct device *hwdev, u64 mask);
 
-#include <asm-generic/dma-mapping-common.h>
-
 extern void *dma_generic_alloc_coherent(struct device *dev, size_t size,
					dma_addr_t *dma_addr, gfp_t flag,
					struct dma_attrs *attrs);
...
@@ -385,6 +385,7 @@ int arch_kimage_file_post_load_cleanup(struct kimage *image)
 	return image->fops->cleanup(image->image_loader_data);
 }
 
+#ifdef CONFIG_KEXEC_VERIFY_SIG
 int arch_kexec_kernel_verify_sig(struct kimage *image, void *kernel,
				 unsigned long kernel_len)
 {
@@ -395,6 +396,7 @@ int arch_kexec_kernel_verify_sig(struct kimage *image, void *kernel,
 	return image->fops->verify_sig(kernel, kernel_len);
 }
+#endif
 
 /*
  * Apply purgatory relocations.
...
@@ -70,3 +70,4 @@ KBUILD_CFLAGS	:= $(LINUXINCLUDE) $(REALMODE_CFLAGS) -D_SETUP -D_WAKEUP \
		   -I$(srctree)/arch/x86/boot
 KBUILD_AFLAGS	:= $(KBUILD_CFLAGS) -D__ASSEMBLY__
 GCOV_PROFILE := n
+UBSAN_SANITIZE := n
@@ -15,7 +15,6 @@ config XTENSA
 	select GENERIC_PCI_IOMAP
 	select GENERIC_SCHED_CLOCK
 	select HAVE_DMA_API_DEBUG
-	select HAVE_DMA_ATTRS
 	select HAVE_FUNCTION_TRACER
 	select HAVE_FUTEX_CMPXCHG if !MMU
 	select HAVE_IRQ_TIME_ACCOUNTING
...
@@ -13,8 +13,6 @@
 #include <asm/cache.h>
 #include <asm/io.h>
 
-#include <asm-generic/dma-coherent.h>
-
 #include <linux/mm.h>
 #include <linux/scatterlist.h>
@@ -30,8 +28,6 @@ static inline struct dma_map_ops *get_dma_ops(struct device *dev)
 	return &xtensa_dma_map_ops;
 }
 
-#include <asm-generic/dma-mapping-common.h>
-
 void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
 		    enum dma_data_direction direction);
...
@@ -86,7 +86,6 @@
 #define MADV_SEQUENTIAL	2		/* expect sequential page references */
 #define MADV_WILLNEED	3		/* will need these pages */
 #define MADV_DONTNEED	4		/* don't need these pages */
-#define MADV_FREE	5		/* free pages only if memory pressure */
 
 /* common parameters: try to keep these consistent across architectures */
 #define MADV_FREE	8		/* free pages only if memory pressure */
...
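Both mman.h hunks retire stray arch-specific `MADV_FREE` values in favour of the common value 8, so userspace sees one number on every architecture. Typical use from a userspace allocator looks like this hedged sketch (`addr` and `length` are placeholders for a previously mapped anonymous region):

```c
#include <stdio.h>
#include <sys/mman.h>

/* Hedged sketch: lazily free a region.  The pages stay mapped; the
 * kernel may reclaim them under memory pressure, and their contents
 * are discarded unless the process writes to them again first. */
static void lazy_free(void *addr, size_t length)
{
	if (madvise(addr, length, MADV_FREE) != 0)
		perror("madvise(MADV_FREE)");
}
```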
@@ -200,7 +200,7 @@ static const struct attribute_group *hotplugable_cpu_attr_groups[] = {
 
 struct cpu_attr {
 	struct device_attribute attr;
-	const struct cpumask *const * const map;
+	const struct cpumask *const map;
 };
 
 static ssize_t show_cpus_attr(struct device *dev,
@@ -209,7 +209,7 @@ static ssize_t show_cpus_attr(struct device *dev,
 {
 	struct cpu_attr *ca = container_of(attr, struct cpu_attr, attr);
 
-	return cpumap_print_to_pagebuf(true, buf, *ca->map);
+	return cpumap_print_to_pagebuf(true, buf, ca->map);
 }
 
 #define _CPU_ATTR(name, map) \
@@ -217,9 +217,9 @@ static ssize_t show_cpus_attr(struct device *dev,
 
 /* Keep in sync with cpu_subsys_attrs */
 static struct cpu_attr cpu_attrs[] = {
-	_CPU_ATTR(online, &cpu_online_mask),
-	_CPU_ATTR(possible, &cpu_possible_mask),
-	_CPU_ATTR(present, &cpu_present_mask),
+	_CPU_ATTR(online, &__cpu_online_mask),
+	_CPU_ATTR(possible, &__cpu_possible_mask),
+	_CPU_ATTR(present, &__cpu_present_mask),
 };
 
 /*
...
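The dropped level of indirection in `struct cpu_attr` existed only because `cpu_online_mask` and friends used to be pointer variables; now that the underlying `struct cpumask` objects (`__cpu_online_mask`, etc.) are directly visible, the attribute can point straight at them. The sysfs show routine is built on `cpumap_print_to_pagebuf()`; a hedged reminder of its contract, in the shape of a hypothetical show function:

```c
#include <linux/cpumask.h>
#include <linux/device.h>

/* Hedged sketch: list=true prints a range list such as "0-3,8";
 * list=false prints a hex bitmap such as "10f". */
static ssize_t show_online(struct device *dev,
			   struct device_attribute *attr, char *buf)
{
	return cpumap_print_to_pagebuf(true, buf, cpu_online_mask);
}
```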
@@ -12,7 +12,6 @@
 #include <linux/gfp.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
-#include <asm-generic/dma-coherent.h>
 
 /*
  * Managed DMA API
@@ -167,7 +166,7 @@ void dmam_free_noncoherent(struct device *dev, size_t size, void *vaddr,
 }
 EXPORT_SYMBOL(dmam_free_noncoherent);
 
-#ifdef ARCH_HAS_DMA_DECLARE_COHERENT_MEMORY
+#ifdef CONFIG_HAVE_GENERIC_DMA_COHERENT
 
 static void dmam_coherent_decl_release(struct device *dev, void *res)
 {
@@ -247,7 +246,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
		    void *cpu_addr, dma_addr_t dma_addr, size_t size)
 {
 	int ret = -ENXIO;
-#ifdef CONFIG_MMU
+#if defined(CONFIG_MMU) && !defined(CONFIG_ARCH_NO_COHERENT_DMA_MMAP)
 	unsigned long user_count = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
 	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
 	unsigned long pfn = page_to_pfn(virt_to_page(cpu_addr));
@@ -264,7 +263,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
				      user_count << PAGE_SHIFT,
				      vma->vm_page_prot);
 	}
-#endif	/* CONFIG_MMU */
+#endif	/* CONFIG_MMU && !CONFIG_ARCH_NO_COHERENT_DMA_MMAP */
 
 	return ret;
 }
...
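With the change above, `dma_common_mmap()` compiles down to the `-ENXIO` stub on any architecture that selects `ARCH_NO_COHERENT_DMA_MMAP` (as parisc now does), instead of every such architecture hand-rolling a failing `dma_mmap_coherent()`. A hedged sketch of the call path from a driver's mmap handler — `struct mydev` and `mydrv_mmap` are hypothetical names for illustration:

```c
#include <linux/dma-mapping.h>
#include <linux/fs.h>
#include <linux/mm.h>

struct mydev {				/* hypothetical driver state */
	struct device	*dev;
	void		*cpu_addr;	/* from dma_alloc_coherent() */
	dma_addr_t	dma_handle;
	size_t		size;
};

static int mydrv_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct mydev *md = file->private_data;

	/* On most architectures this lands in dma_common_mmap(); where
	 * ARCH_NO_COHERENT_DMA_MMAP is set it now fails with -ENXIO. */
	return dma_mmap_coherent(md->dev, vma, md->cpu_addr,
				 md->dma_handle, md->size);
}
```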
@@ -56,9 +56,7 @@ static u32 find_nvram_size(void __iomem *end)
 static int nvram_find_and_copy(void __iomem *iobase, u32 lim)
 {
 	struct nvram_header __iomem *header;
-	int i;
 	u32 off;
-	u32 *src, *dst;
 	u32 size;
 
 	if (nvram_len) {
@@ -95,10 +93,7 @@ static int nvram_find_and_copy(void __iomem *iobase, u32 lim)
 	return -ENXIO;
 
 found:
-	src = (u32 *)header;
-	dst = (u32 *)nvram_buf;
-	for (i = 0; i < sizeof(struct nvram_header); i += 4)
-		*dst++ = __raw_readl(src++);
+	__ioread32_copy(nvram_buf, header, sizeof(*header) / 4);
 	header = (struct nvram_header *)nvram_buf;
 	nvram_len = header->len;
 	if (nvram_len > size) {
@@ -111,8 +106,8 @@ static int nvram_find_and_copy(void __iomem *iobase, u32 lim)
		nvram_len = NVRAM_SPACE - 1;
	}
 	/* proceed reading data after header */
-	for (; i < nvram_len; i += 4)
-		*dst++ = readl(src++);
+	__ioread32_copy(nvram_buf + sizeof(*header), header + 1,
+			DIV_ROUND_UP(nvram_len, 4));
 
 	nvram_buf[NVRAM_SPACE - 1] = '\0';
 
 	return 0;
...
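`__ioread32_copy()` is added in this same merge window (lib/iomap_copy.c) as the MMIO-reading counterpart of `__iowrite32_copy()`, which is what lets the open-coded readl() loops above go away. Note that its count argument is in 32-bit words, not bytes — hence the `/ 4` for the fixed-size header and the `DIV_ROUND_UP` for the variable-length data. Hedged usage sketch:

```c
#include <linux/io.h>
#include <linux/kernel.h>

/* Hedged sketch: copy len bytes of MMIO into a 32-bit-aligned buffer,
 * one aligned 32-bit read at a time (count is in words, rounded up). */
static void copy_from_mmio(void *dst, const void __iomem *src, size_t len)
{
	__ioread32_copy(dst, src, DIV_ROUND_UP(len, 4));
}
```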
@@ -22,6 +22,7 @@ KBUILD_CFLAGS			:= $(cflags-y) -DDISABLE_BRANCH_PROFILING \
 
 GCOV_PROFILE			:= n
 KASAN_SANITIZE			:= n
+UBSAN_SANITIZE			:= n
 
 lib-y				:= efi-stub-helper.o
...
@@ -82,13 +82,13 @@ config DRM_TTM
 
 config DRM_GEM_CMA_HELPER
 	bool
-	depends on DRM && HAVE_DMA_ATTRS
+	depends on DRM
 	help
 	  Choose this if you need the GEM CMA helper functions
 
 config DRM_KMS_CMA_HELPER
 	bool
-	depends on DRM && HAVE_DMA_ATTRS
+	depends on DRM
 	select DRM_GEM_CMA_HELPER
 	select DRM_KMS_FB_HELPER
 	select FB_SYS_FILLRECT
...
@@ -5,7 +5,7 @@ config DRM_IMX
 	select VIDEOMODE_HELPERS
 	select DRM_GEM_CMA_HELPER
 	select DRM_KMS_CMA_HELPER
-	depends on DRM && (ARCH_MXC || ARCH_MULTIPLATFORM) && HAVE_DMA_ATTRS
+	depends on DRM && (ARCH_MXC || ARCH_MULTIPLATFORM)
 	depends on IMX_IPUV3_CORE
 	help
 	  enable i.MX graphics support
...
 config DRM_RCAR_DU
 	tristate "DRM Support for R-Car Display Unit"
-	depends on DRM && ARM && HAVE_DMA_ATTRS && OF
+	depends on DRM && ARM && OF
 	depends on ARCH_SHMOBILE || COMPILE_TEST
 	select DRM_KMS_HELPER
 	select DRM_KMS_CMA_HELPER
...
 config DRM_SHMOBILE
 	tristate "DRM Support for SH Mobile"
-	depends on DRM && ARM && HAVE_DMA_ATTRS
+	depends on DRM && ARM
 	depends on ARCH_SHMOBILE || COMPILE_TEST
 	depends on FB_SH_MOBILE_MERAM || !FB_SH_MOBILE_MERAM
 	select BACKLIGHT_CLASS_DEVICE
...
 config DRM_STI
 	tristate "DRM Support for STMicroelectronics SoC stiH41x Series"
-	depends on DRM && (SOC_STIH415 || SOC_STIH416 || ARCH_MULTIPLATFORM) && HAVE_DMA_ATTRS
+	depends on DRM && (SOC_STIH415 || SOC_STIH416 || ARCH_MULTIPLATFORM)
 	select RESET_CONTROLLER
 	select DRM_KMS_HELPER
 	select DRM_GEM_CMA_HELPER
...
[103 more file diffs in this merge are collapsed and not shown]