Commit b2dd8674 authored by Andrew Morton, committed by Linus Torvalds

[PATCH] memory writeback/invalidation fixes

From: "David S. Miller" <davem@redhat.com>

This attempts to take care of two of the MM TODOs I had on
my backlog:

1) Zap the stupid flush_cache_all() thing with more meaningful
   interfaces.

2) Resolve the ptrace user page access issues, first stage.

The "first stage" mentioned for #2 is simply creating the
user page accesor interfaces.  The next stage needs to be
mucking with get_user_pages() so that we can control when
the flush_dcache_page() occurs.  Then we:

1) For every platform where flush_dcache_page() is a non-nop,
   add a call to the beginning of copy_{from,to}_user_page().
2) Make access_process_vm() set the "no dcache flush" bit in
   its call to get_user_pages().

The idea also was that we'd consolidate the write etc. boolean
args passed to get_user_pages() into flag bits too.

But at least with the below, we can delete that reminder FIXME
comment from kernel/ptrace.c; the platforms now have the necessary
tools and just need to make use of them. :)

As a bonus I noticed that VMALLOC_VMADDR() did absolutely nothing.

After all of this I only have one real TODO left, and that's dealing
with the SMP TLB/pte invalidation stuff; that's very low priority until
someone starts doing more work with sparc32/SMP in 2.6.x :)
parent bd094583
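
For reference, the user-page copy helpers introduced here follow one common
pattern across the architectures touched below.  On a fully coherent
architecture both helpers are plain memcpy()s; on an architecture whose
instruction cache does not snoop stores, copy_to_user_page() additionally
flushes the icache for the user mapping.  A condensed sketch, mirroring the
variants added in the asm-*/cacheflush.h hunks of this patch:

    /* Coherent caches: straight copies. */
    #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
            memcpy(dst, src, len)
    #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
            memcpy(dst, src, len)

    /* Non-snooping icache: push the stores out to the user mapping too. */
    #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
    do { memcpy(dst, src, len); \
         flush_icache_user_range(vma, page, vaddr, len); \
    } while (0)

Callers (currently only access_process_vm() in kernel/ptrace.c) are expected
to kmap() the page and call flush_cache_page() for the user mapping first,
then invoke the helper on the kmapped address.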
@@ -59,9 +59,9 @@ changes occur:
 	address translations from the TLB.  After running, this
 	interface must make sure that any previous page table
 	modifications for the address space 'vma->vm_mm' in the range
-	'start' to 'end' will be visible to the cpu.  That is, after
+	'start' to 'end-1' will be visible to the cpu.  That is, after
 	running, here will be no entries in the TLB for 'mm' for
-	virtual addresses in the range 'start' to 'end'.
+	virtual addresses in the range 'start' to 'end-1'.
 	The "vma" is the backing store being used for the region.
 	Primarily, this is used for munmap() type operations.
@@ -100,7 +100,7 @@ changes occur:
 			unsigned long start, unsigned long end)
 	The software page tables for address space 'mm' for virtual
-	addresses in the range 'start' to 'end' are being torn down.
+	addresses in the range 'start' to 'end-1' are being torn down.
 	Some platforms cache the lowest level of the software page tables
 	in a linear virtually mapped array, to make TLB miss processing
@@ -165,15 +165,7 @@ and have no dependency on translation information.
 Here are the routines, one by one:
-1) void flush_cache_all(void)
-	The most severe flush of all.  After this interface runs,
-	the entire cpu cache is flushed.
-	This is usually invoked when the kernel page tables are
-	changed, since such translations are "global" in nature.
-2) void flush_cache_mm(struct mm_struct *mm)
+1) void flush_cache_mm(struct mm_struct *mm)
 	This interface flushes an entire user address space from
 	the caches.  That is, after running, there will be no cache
@@ -183,13 +175,13 @@ Here are the routines, one by one:
 	page table operations such as what happens during
 	fork, exit, and exec.
-3) void flush_cache_range(struct vm_area_struct *vma,
+2) void flush_cache_range(struct vm_area_struct *vma,
 			unsigned long start, unsigned long end)
 	Here we are flushing a specific range of (user) virtual
 	addresses from the cache.  After running, there will be no
 	entries in the cache for 'vma->vm_mm' for virtual addresses in
-	the range 'start' to 'end'.
+	the range 'start' to 'end-1'.
 	The "vma" is the backing store being used for the region.
 	Primarily, this is used for munmap() type operations.
@@ -200,7 +192,7 @@ Here are the routines, one by one:
 	call flush_cache_page (see below) for each entry which may be
 	modified.
-4) void flush_cache_page(struct vm_area_struct *vma, unsigned long addr)
+3) void flush_cache_page(struct vm_area_struct *vma, unsigned long addr)
 	This time we need to remove a PAGE_SIZE sized range
 	from the cache.  The 'vma' is the backing structure used by
@@ -215,6 +207,30 @@ Here are the routines, one by one:
 	This is used primarily during fault processing.
+4) void flush_cache_kmaps(void)
+	This routine need only be implemented if the platform utilizes
+	highmem.  It will be called right before all of the kmaps
+	are invalidated.
+	After running, there will be no entries in the cache for
+	the kernel virtual address range PKMAP_ADDR(0) to
+	PKMAP_ADDR(LAST_PKMAP).
+	This routine should be implemented in asm/highmem.h
+5) void flush_cache_vmap(unsigned long start, unsigned long end)
+   void flush_cache_vunmap(unsigned long start, unsigned long end)
+	Here in these two interfaces we are flushing a specific range
+	of (kernel) virtual addresses from the cache.  After running,
+	there will be no entries in the cache for the kernel address
+	space for virtual addresses in the range 'start' to 'end-1'.
+	The first of these two routines is invoked after map_vm_area()
+	has installed the page table entries.  The second is invoked
+	before unmap_vm_area() deletes the page table entries.
 There exists another whole class of cpu cache issues which currently
 require a whole different set of interfaces to handle properly.
 The biggest problem is that of virtual aliasing in the data cache
@@ -317,6 +333,26 @@ maps this page at its virtual address.
 	dirty.  Again, see sparc64 for examples of how
 	to deal with this.
+  void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
+			unsigned long user_vaddr,
+			void *dst, void *src, int len)
+  void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
+			unsigned long user_vaddr,
+			void *dst, void *src, int len)
+	When the kernel needs to copy arbitrary data in and out
+	of arbitrary user pages (f.e. for ptrace()) it will use
+	these two routines.
+	The page has been kmap()'d, and flush_cache_page() has
+	just been called for the user mapping of this page (if
+	necessary).
+	Any necessary cache flushing or other coherency operations
+	that need to occur should happen here.  If the processor's
+	instruction cache does not snoop cpu stores, it is very
+	likely that you will need to flush the instruction cache
+	for copy_to_user_page().
  void flush_icache_range(unsigned long start, unsigned long end)
 	When the kernel stores into addresses that it will execute
 	out of (eg when loading modules), this function is called.
@@ -324,17 +360,6 @@ maps this page at its virtual address.
 	If the icache does not snoop stores then this routine will need
 	to flush it.
-  void flush_icache_user_range(struct vm_area_struct *vma,
-			struct page *page, unsigned long addr, int len)
-	This is called when the kernel stores into addresses that are
-	part of the address space of a user process (which may be some
-	other process than the current process).  The addr argument
-	gives the virtual address in that process's address space,
-	page is the page which is being modified, and len indicates
-	how many bytes have been modified.  The modified region must
-	not cross a page boundary.  Currently this is only called from
-	kernel/ptrace.c.
  void flush_icache_page(struct vm_area_struct *vma, struct page *page)
 	All the functionality of flush_icache_page can be implemented in
 	flush_dcache_page and update_mmu_cache.  In 2.7 the hope is to
...
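
The flush_cache_vmap()/flush_cache_vunmap()/flush_cache_kmaps() hooks
documented above replace blanket flush_cache_all() calls in generic code;
the ordering they assume is condensed below from the mm/vmalloc.c and
mm/highmem.c hunks at the end of this patch (page-table teardown loop
elided):

    void unmap_vm_area(struct vm_struct *area)
    {
            unsigned long address = (unsigned long) area->addr;
            unsigned long end = address + area->size;

            flush_cache_vunmap(address, end);       /* flush while the ptes still exist */
            /* ... unmap_area_pmd() loop tears down the page tables ... */
            flush_tlb_kernel_range(address, end);   /* then invalidate stale TLB entries */
    }

    /* map_vm_area() is the mirror image: install the ptes first, then call
       flush_cache_vmap(start, end).  flush_cache_kmaps() is invoked by
       flush_all_zero_pkmaps() right before the kmap ptes are cleared. */

On fully coherent architectures all three hooks compile away to no-ops, so
they are never more expensive than the flush_cache_all() they replace.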
@@ -391,7 +391,7 @@ irongate_ioremap(unsigned long addr, unsigned long size)
 	cur_gatt = phys_to_virt(GET_GATT(baddr) & ~1);
 	pte = cur_gatt[GET_GATT_OFF(baddr)] & ~1;
-	if (__alpha_remap_area_pages(VMALLOC_VMADDR(vaddr),
+	if (__alpha_remap_area_pages(vaddr,
 				pte, PAGE_SIZE, 0)) {
 		printk("AGP ioremap: FAILED to map...\n");
 		vfree(area->addr);
...
@@ -696,7 +696,7 @@ marvel_ioremap(unsigned long addr, unsigned long size)
 	}
 	pfn >>= 1;	/* make it a true pfn */
-	if (__alpha_remap_area_pages(VMALLOC_VMADDR(vaddr),
+	if (__alpha_remap_area_pages(vaddr,
 				pfn << PAGE_SHIFT,
 				PAGE_SIZE, 0)) {
 		printk("FAILED to map...\n");
...
@@ -534,7 +534,7 @@ titan_ioremap(unsigned long addr, unsigned long size)
 	}
 	pfn >>= 1;	/* make it a true pfn */
-	if (__alpha_remap_area_pages(VMALLOC_VMADDR(vaddr),
+	if (__alpha_remap_area_pages(vaddr,
 				pfn << PAGE_SHIFT,
 				PAGE_SIZE, 0)) {
 		printk("FAILED to map...\n");
...
@@ -150,7 +150,7 @@ __ioremap(unsigned long phys_addr, size_t size, unsigned long flags,
 	if (!area)
 		return NULL;
 	addr = area->addr;
-	if (remap_area_pages(VMALLOC_VMADDR(addr), phys_addr, size, flags)) {
+	if (remap_area_pages((unsigned long) addr, phys_addr, size, flags)) {
 		vfree(addr);
 		return NULL;
 	}
...
@@ -157,7 +157,7 @@ void * __ioremap(unsigned long phys_addr, unsigned long size, unsigned long flag
 	if (!area)
 		return NULL;
 	addr = area->addr;
-	if (remap_area_pages(VMALLOC_VMADDR(addr), phys_addr, size, flags)) {
+	if (remap_area_pages((unsigned long) addr, phys_addr, size, flags)) {
 		vfree(addr);
 		return NULL;
 	}
...
@@ -158,7 +158,7 @@ void * __ioremap(unsigned long phys_addr, unsigned long size, unsigned long flag
 		return NULL;
 	area->phys_addr = phys_addr;
 	addr = area->addr;
-	if (remap_area_pages(VMALLOC_VMADDR(addr), phys_addr, size, flags)) {
+	if (remap_area_pages((unsigned long) addr, phys_addr, size, flags)) {
 		vunmap(addr);
 		return NULL;
 	}
...
@@ -162,7 +162,7 @@ void * __ioremap(phys_t phys_addr, phys_t size, unsigned long flags)
 	if (!area)
 		return NULL;
 	addr = area->addr;
-	if (remap_area_pages(VMALLOC_VMADDR(addr), phys_addr, size, flags)) {
+	if (remap_area_pages((unsigned long) addr, phys_addr, size, flags)) {
 		vunmap(addr);
 		return NULL;
 	}
...
@@ -159,7 +159,7 @@ void * __ioremap(unsigned long phys_addr, unsigned long size, unsigned long flag
 	if (!area)
 		return NULL;
 	addr = area->addr;
-	if (remap_area_pages(VMALLOC_VMADDR(addr), phys_addr, size, flags)) {
+	if (remap_area_pages((unsigned long) addr, phys_addr, size, flags)) {
 		vfree(addr);
 		return NULL;
 	}
...
@@ -101,7 +101,7 @@ void *consistent_alloc(int gfp, size_t size, dma_addr_t *dma_handle)
 	if (! area)
 		goto out;
-	va = VMALLOC_VMADDR(area->addr);
+	va = (unsigned long) area->addr;
 	flags = _PAGE_KERNEL | _PAGE_NO_CACHE;
...
@@ -195,7 +195,7 @@ __ioremap(phys_addr_t addr, unsigned long size, unsigned long flags)
 		area = get_vm_area(size, VM_IOREMAP);
 		if (area == 0)
 			return NULL;
-		v = VMALLOC_VMADDR(area->addr);
+		v = (unsigned long) area->addr;
 	} else {
 		v = (ioremap_bot -= size);
 	}
...
@@ -124,7 +124,7 @@ void * __ioremap(unsigned long phys_addr, unsigned long size, unsigned long flag
 	if (!area)
 		return NULL;
 	addr = area->addr;
-	if (remap_area_pages(VMALLOC_VMADDR(addr), phys_addr, size, flags)) {
+	if (remap_area_pages((unsigned long) addr, phys_addr, size, flags)) {
 		vfree(addr);
 		return NULL;
 	}
...
@@ -149,7 +149,7 @@ void * p3_ioremap(unsigned long phys_addr, unsigned long size, unsigned long fla
 	if (!area)
 		return NULL;
 	addr = area->addr;
-	if (remap_area_pages(VMALLOC_VMADDR(addr), phys_addr, size, flags)) {
+	if (remap_area_pages((unsigned long) addr, phys_addr, size, flags)) {
 		vfree(addr);
 		return NULL;
 	}
...
@@ -159,7 +159,7 @@ void * __ioremap(unsigned long phys_addr, unsigned long size, unsigned long flag
 	if (!area)
 		return NULL;
 	addr = area->addr;
-	if (remap_area_pages(VMALLOC_VMADDR(addr), phys_addr, size, flags)) {
+	if (remap_area_pages((unsigned long) addr, phys_addr, size, flags)) {
 		vunmap(addr);
 		return NULL;
 	}
...
@@ -10,6 +10,8 @@
 #define flush_cache_range(vma, start, end) do { } while (0)
 #define flush_cache_page(vma, vmaddr) do { } while (0)
 #define flush_dcache_page(page) do { } while (0)
+#define flush_cache_vmap(start, end) do { } while (0)
+#define flush_cache_vunmap(start, end) do { } while (0)
 /* Note that the following two definitions are _highly_ dependent
    on the contexts in which they are used in the kernel.  I personally
@@ -60,4 +62,11 @@ extern void flush_icache_user_range(struct vm_area_struct *vma,
 #define flush_icache_page(vma, page) \
 	flush_icache_user_range((vma), (page), 0, 0)
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+do { memcpy(dst, src, len); \
+     flush_icache_user_range(vma, page, vaddr, len); \
+} while (0)
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
 #endif /* _ALPHA_CACHEFLUSH_H */
@@ -49,7 +49,6 @@
 #else
 #define VMALLOC_START (-2*PGDIR_SIZE)
 #endif
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END (-PGDIR_SIZE)
 /*
...
@@ -12,7 +12,6 @@
  */
 #define VMALLOC_OFFSET (8*1024*1024)
 #define VMALLOC_START (((unsigned long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END (0xe8000000)
 #define MODULE_START (PAGE_OFFSET - 16*1048576)
...
@@ -19,8 +19,7 @@
  * linux/arch/arm/kernel/traps.c)
  */
 #define VMALLOC_ARCH_OFFSET (8 * 1024 * 1024)
-#define VMALLOC_VMADDR(a) ((unsigned int) (a))
-#define VMALLOC_START ((VMALLOC_VMADDR(high_memory) + VMALLOC_ARCH_OFFSET) & ~(VMALLOC_ARCH_OFFSET - 1))
+#define VMALLOC_START (((unsigned long) (high_memory) + VMALLOC_ARCH_OFFSET) & ~(VMALLOC_ARCH_OFFSET - 1))
 #define VMALLOC_END (PAGE_OFFSET + 0x10000000)
 #define MODULE_START (PAGE_OFFSET - 16*1048576)
...
@@ -12,7 +12,6 @@
  */
 #define VMALLOC_OFFSET (8*1024*1024)
 #define VMALLOC_START (((unsigned long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END (PAGE_OFFSET + 0x1c000000)
 #define MODULE_START (PAGE_OFFSET - 16*1048576)
...
@@ -28,7 +28,6 @@
  */
 #define VMALLOC_OFFSET (8*1024*1024)
 #define VMALLOC_START (((unsigned long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END (PAGE_OFFSET + 0x10000000)
 #define MODULE_START (PAGE_OFFSET - 16*1048576)
...
@@ -18,7 +18,6 @@
  */
 #define VMALLOC_OFFSET (8*1024*1024)
 #define VMALLOC_START (((unsigned long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END (PAGE_OFFSET + 0x1f000000)
 #define MODULE_START (PAGE_OFFSET - 16*1048576)
...
@@ -18,7 +18,6 @@
  */
 #define VMALLOC_OFFSET (8*1024*1024)
 #define VMALLOC_START (((unsigned long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #ifdef CONFIG_ARCH_FOOTBRIDGE
 #define VMALLOC_END (PAGE_OFFSET + 0x30000000)
...
@@ -28,7 +28,6 @@
  */
 #define VMALLOC_OFFSET (8*1024*1024)
 #define VMALLOC_START (((unsigned long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END (PAGE_OFFSET + 0x10000000)
 #define MODULE_START (PAGE_OFFSET - 16*1048576)
...
@@ -28,7 +28,6 @@
  */
 #define VMALLOC_OFFSET (8*1024*1024)
 #define VMALLOC_START (((unsigned long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END (PAGE_OFFSET + 0x10000000)
 #define MODULE_START (PAGE_OFFSET - 16*1048576)
...
@@ -12,7 +12,6 @@
  */
 #define VMALLOC_OFFSET (8*1024*1024)
 #define VMALLOC_START (((unsigned long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END (0xe8000000)
 #define MODULE_START (PAGE_OFFSET - 16*1048576)
...
@@ -12,7 +12,6 @@
  */
 #define VMALLOC_OFFSET (8*1024*1024)
 #define VMALLOC_START (((unsigned long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END (PAGE_OFFSET + 0x10000000)
 #define MODULE_START (PAGE_OFFSET - 16*1048576)
...
@@ -12,7 +12,6 @@
  */
 #define VMALLOC_OFFSET (8*1024*1024)
 #define VMALLOC_START (((unsigned long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END (PAGE_OFFSET + 0x20000000)
 #define MODULE_START (PAGE_OFFSET - 16*1048576)
...
@@ -19,7 +19,6 @@
  */
 #define VMALLOC_OFFSET (8*1024*1024)
 #define VMALLOC_START (((unsigned long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END (0xe8000000)
 #define MODULE_START (PAGE_OFFSET - 16*1048576)
...
@@ -18,7 +18,6 @@
  */
 #define VMALLOC_OFFSET (8*1024*1024)
 #define VMALLOC_START (((unsigned long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END (PAGE_OFFSET + 0x1c000000)
 #define MODULE_START (PAGE_OFFSET - 16*1048576)
...
@@ -12,7 +12,6 @@
  */
 #define VMALLOC_OFFSET (8*1024*1024)
 #define VMALLOC_START (((unsigned long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END (0xe8000000)
 #define MODULE_START (PAGE_OFFSET - 16*1048576)
...
@@ -12,7 +12,6 @@
  */
 #define VMALLOC_OFFSET (8*1024*1024)
 #define VMALLOC_START (((unsigned long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END (PAGE_OFFSET + 0x10000000)
 #define MODULE_START (PAGE_OFFSET - 16*1048576)
...
@@ -12,7 +12,6 @@
  */
 #define VMALLOC_OFFSET (8*1024*1024)
 #define VMALLOC_START (((unsigned long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END (PAGE_OFFSET + 0x10000000)
 #define MODULE_START (PAGE_OFFSET - 16*1048576)
...
@@ -207,6 +207,15 @@ extern void dmac_inv_range(unsigned long, unsigned long);
 extern void dmac_clean_range(unsigned long, unsigned long);
 extern void dmac_flush_range(unsigned long, unsigned long);
+#define flush_cache_vmap(start, end) flush_cache_all()
+#define flush_cache_vunmap(start, end) flush_cache_all()
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+do { memcpy(dst, src, len); \
+     flush_icache_user_range(vma, page, vaddr, len); \
+} while (0)
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
 #endif
 /*
...
@@ -25,6 +25,8 @@
 #define flush_cache_range(vma,start,end) do { } while (0)
 #define flush_cache_page(vma,vmaddr) do { } while (0)
 #define flush_page_to_ram(page) do { } while (0)
+#define flush_cache_vmap(start, end) do { } while (0)
+#define flush_cache_vunmap(start, end) do { } while (0)
 #define invalidate_dcache_range(start,end) do { } while (0)
 #define clean_dcache_range(start,end) do { } while (0)
@@ -37,6 +39,11 @@
 #define flush_icache_range(start,end) do { } while (0)
 #define flush_icache_page(vma,page) do { } while (0)
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
 /* DAG: ARM3 will flush cache on MEMC updates anyway? so don't bother */
 /* IM : Yes, it will, but only if setup to do so (we do this). */
 #define clean_cache_area(_start,_size) do { } while (0)
...
@@ -173,7 +173,6 @@ extern struct page *empty_zero_page;
  * area for the same reason. ;) FIXME: surely 1 page not 4k ?
  */
 #define VMALLOC_START 0x01a00000
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END 0x01c00000
 /* Is pmd_page supposed to return a pointer to a page in some arches? ours seems to
...
@@ -7,11 +7,9 @@
 #ifdef CONFIG_CRIS_LOW_MAP
 #define VMALLOC_START KSEG_7
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END KSEG_8
 #else
 #define VMALLOC_START KSEG_D
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END KSEG_E
 #endif
...
@@ -16,6 +16,13 @@
 #define flush_icache_range(start, end) do { } while (0)
 #define flush_icache_page(vma,pg) do { } while (0)
 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
+#define flush_cache_vmap(start, end) do { } while (0)
+#define flush_cache_vunmap(start, end) do { } while (0)
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
 void global_flush_tlb(void);
 int change_page_attr(struct page *page, int numpages, pgprot_t prot);
...
@@ -11,7 +11,6 @@
  */
 #define flush_cache_all()
-#define flush_cache_all()
 #define flush_cache_mm(mm)
 #define flush_cache_range(vma,a,b)
 #define flush_cache_page(vma,p)
@@ -20,6 +19,8 @@
 #define flush_icache()
 #define flush_icache_page(vma,page)
 #define flush_icache_range(start,len)
+#define flush_cache_vmap(start, end)
+#define flush_cache_vunmap(start, end)
 #define cache_push_v(vaddr,len)
 #define cache_push(paddr,len)
 #define cache_clear(paddr,len)
@@ -28,4 +29,9 @@
 #define flush_icache_user_range(vma,page,addr,len)
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
 #endif /* _ASM_H8300_CACHEFLUSH_H */
@@ -13,6 +13,13 @@
 #define flush_icache_range(start, end) do { } while (0)
 #define flush_icache_page(vma,pg) do { } while (0)
 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
+#define flush_cache_vmap(start, end) do { } while (0)
+#define flush_cache_vunmap(start, end) do { } while (0)
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
 void global_flush_tlb(void);
 int change_page_attr(struct page *page, int numpages, pgprot_t prot);
...
@@ -63,6 +63,8 @@ void *kmap_atomic(struct page *page, enum km_type type);
 void kunmap_atomic(void *kvaddr, enum km_type type);
 struct page *kmap_atomic_to_page(void *ptr);
+#define flush_cache_kmaps() do { } while (0)
 #endif /* __KERNEL__ */
 #endif /* _ASM_HIGHMEM_H */
@@ -85,7 +85,6 @@ void paging_init(void);
 #define VMALLOC_OFFSET (8*1024*1024)
 #define VMALLOC_START (((unsigned long) high_memory + 2*VMALLOC_OFFSET-1) & \
 			~(VMALLOC_OFFSET-1))
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #ifdef CONFIG_HIGHMEM
 # define VMALLOC_END (PKMAP_BASE-2*PAGE_SIZE)
 #else
...
@@ -21,6 +21,8 @@
 #define flush_cache_range(vma, start, end) do { } while (0)
 #define flush_cache_page(vma, vmaddr) do { } while (0)
 #define flush_icache_page(vma,page) do { } while (0)
+#define flush_cache_vmap(start, end) do { } while (0)
+#define flush_cache_vunmap(start, end) do { } while (0)
 #define flush_dcache_page(page) \
 do { \
@@ -35,4 +37,11 @@ do { \
 	flush_icache_range(_addr, _addr + (len)); \
 } while (0)
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+do { memcpy(dst, src, len); \
+     flush_icache_user_range(vma, page, vaddr, len); \
+} while (0)
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
 #endif /* _ASM_IA64_CACHEFLUSH_H */
@@ -207,7 +207,6 @@ ia64_phys_addr_valid (unsigned long addr)
 #define RGN_KERNEL 7
 #define VMALLOC_START 0xa000000200000000
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #ifdef CONFIG_VIRTUAL_MEM_MAP
 # define VMALLOC_END_INIT (0xa000000000000000 + (1UL << (4*PAGE_SHIFT - 9)))
 # define VMALLOC_END vmalloc_end
...
@@ -80,6 +80,9 @@ extern void cache_push_v(unsigned long vaddr, int len);
 #define flush_cache_all() __flush_cache_all()
+#define flush_cache_vmap(start, end) flush_cache_all()
+#define flush_cache_vunmap(start, end) flush_cache_all()
 extern inline void flush_cache_mm(struct mm_struct *mm)
 {
 	if (mm == current->mm)
@@ -127,6 +130,10 @@ extern inline void __flush_page_to_ram(void *vaddr)
 #define flush_dcache_page(page) __flush_page_to_ram(page_address(page))
 #define flush_icache_page(vma, page) __flush_page_to_ram(page_address(page))
 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
 extern void flush_icache_range(unsigned long address, unsigned long endaddr);
...
@@ -79,12 +79,10 @@
  */
 #define VMALLOC_OFFSET (8*1024*1024)
 #define VMALLOC_START (((unsigned long) high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END KMAP_START
 #else
 extern unsigned long vmalloc_end;
 #define VMALLOC_START 0x0f800000
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END vmalloc_end
 #endif /* CONFIG_SUN3 */
...
@@ -15,7 +15,13 @@
 #define flush_icache_range(start,len) __flush_cache_all()
 #define flush_icache_page(vma,pg) do { } while (0)
 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
+#define flush_cache_vmap(start, end) flush_cache_all()
+#define flush_cache_vunmap(start, end) flush_cache_all()
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
 extern inline void __flush_cache_all(void)
 {
...
@@ -43,7 +43,15 @@ extern void (*flush_icache_page)(struct vm_area_struct *vma,
 extern void (*flush_icache_range)(unsigned long start, unsigned long end);
 #define flush_icache_user_range(vma, page, addr, len) \
 	flush_icache_page(vma, page)
+#define flush_cache_vmap(start, end) flush_cache_all()
+#define flush_cache_vunmap(start, end) flush_cache_all()
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+do { memcpy(dst, src, len); \
+     flush_icache_user_range(vma, page, vaddr, len); \
+} while (0)
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
 extern void (*flush_cache_sigtramp)(unsigned long addr);
 extern void (*flush_icache_all)(void);
...
@@ -54,6 +54,8 @@ extern void *kmap_atomic(struct page *page, enum km_type type);
 extern void kunmap_atomic(void *kvaddr, enum km_type type);
 extern struct page *kmap_atomic_to_page(void *ptr);
+#define flush_cache_kmaps() flush_cache_all()
 #endif /* __KERNEL__ */
 #endif /* _ASM_HIGHMEM_H */
@@ -79,7 +79,6 @@ extern int add_temporary_entry(unsigned long entrylo0, unsigned long entrylo1,
 #define FIRST_USER_PGD_NR 0
 #define VMALLOC_START KSEG2
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #if CONFIG_HIGHMEM
 # define VMALLOC_END (PKMAP_BASE-2*PAGE_SIZE)
...
@@ -64,7 +64,6 @@
 #define FIRST_USER_PGD_NR 0
 #define VMALLOC_START XKSEG
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END \
 	(VMALLOC_START + ((1 << PGD_ORDER) * PTRS_PER_PTE * PAGE_SIZE))
...
@@ -30,6 +30,9 @@ static inline void flush_cache_all(void)
 	on_each_cpu(cacheflush_h_tmp_function, NULL, 1, 1);
 }
+#define flush_cache_vmap(start, end) flush_cache_all()
+#define flush_cache_vunmap(start, end) flush_cache_all()
 /* The following value needs to be tuned and probably scaled with the
  * cache size.
  */
@@ -82,6 +85,13 @@ static inline void flush_dcache_page(struct page *page)
 	flush_user_dcache_range(addr, addr + len); \
 	flush_user_icache_range(addr, addr + len); } while (0)
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+do { memcpy(dst, src, len); \
+     flush_icache_user_range(vma, page, vaddr, len); \
+} while (0)
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
 static inline void flush_cache_range(struct vm_area_struct *vma,
 		unsigned long start, unsigned long end)
 {
...
@@ -109,7 +109,6 @@
 extern void *vmalloc_start;
 #define PCXL_DMA_MAP_SIZE (8*1024*1024)
 #define VMALLOC_START ((unsigned long)vmalloc_start)
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 /* this is a fixmap remnant, see fixmap.h */
 #define VMALLOC_END (TMPALIAS_MAP_START)
 #endif
...
@@ -24,12 +24,21 @@
 #define flush_cache_range(vma, a, b) do { } while (0)
 #define flush_cache_page(vma, p) do { } while (0)
 #define flush_icache_page(vma, page) do { } while (0)
+#define flush_cache_vmap(start, end) do { } while (0)
+#define flush_cache_vunmap(start, end) do { } while (0)
 extern void flush_dcache_page(struct page *page);
 extern void flush_icache_range(unsigned long, unsigned long);
 extern void flush_icache_user_range(struct vm_area_struct *vma,
 		struct page *page, unsigned long addr, int len);
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+do { memcpy(dst, src, len); \
+     flush_icache_user_range(vma, page, vaddr, len); \
+} while (0)
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
 extern void __flush_dcache_icache(void *page_va);
 extern void __flush_dcache_icache_phys(unsigned long physaddr);
...
@@ -132,6 +132,8 @@ static inline struct page *kmap_atomic_to_page(void *ptr)
 	return pte_page(kmap_pte[idx]);
 }
+#define flush_cache_kmaps() flush_cache_all()
 #endif /* __KERNEL__ */
 #endif /* _ASM_HIGHMEM_H */
@@ -129,7 +129,6 @@ extern unsigned long ioremap_bot, ioremap_base;
 #else
 #define VMALLOC_START ((((long)high_memory + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1)))
 #endif
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END ioremap_bot
 /*
...
@@ -14,12 +14,22 @@
 #define flush_cache_range(vma, start, end) do { } while (0)
 #define flush_cache_page(vma, vmaddr) do { } while (0)
 #define flush_icache_page(vma, page) do { } while (0)
+#define flush_cache_vmap(start, end) do { } while (0)
+#define flush_cache_vunmap(start, end) do { } while (0)
 extern void flush_dcache_page(struct page *page);
 extern void flush_icache_range(unsigned long, unsigned long);
 extern void flush_icache_user_range(struct vm_area_struct *vma,
 			struct page *page, unsigned long addr,
 			int len);
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+do { memcpy(dst, src, len); \
+     flush_icache_user_range(vma, page, vaddr, len); \
+} while (0)
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
 extern void __flush_dcache_icache(void *page_va);
 #endif /* _PPC64_CACHEFLUSH_H */
@@ -45,7 +45,6 @@
  * Define the address range of the vmalloc VM area.
  */
 #define VMALLOC_START (0xD000000000000000)
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END (VMALLOC_START + VALID_EA_BITS)
 /*
...
@@ -13,5 +13,12 @@
 #define flush_icache_range(start, end) do { } while (0)
 #define flush_icache_page(vma,pg) do { } while (0)
 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
+#define flush_cache_vmap(start, end) do { } while (0)
+#define flush_cache_vunmap(start, end) do { } while (0)
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
 #endif /* _S390_CACHEFLUSH_H */
@@ -117,7 +117,6 @@ extern char empty_zero_page[PAGE_SIZE];
 #define VMALLOC_OFFSET (8*1024*1024)
 #define VMALLOC_START (((unsigned long) high_memory + VMALLOC_OFFSET) \
 			& ~(VMALLOC_OFFSET-1))
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #ifndef __s390x__
 # define VMALLOC_END (0x7fffffffL)
 #else /* __s390x__ */
...
@@ -10,4 +10,14 @@ extern void __flush_purge_region(void *start, int size);
 /* Flush (invalidate only) a region (smaller than a page) */
 extern void __flush_invalidate_region(void *start, int size);
+#define flush_cache_vmap(start, end) flush_cache_all()
+#define flush_cache_vunmap(start, end) flush_cache_all()
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+do { memcpy(dst, src, len); \
+     flush_icache_user_range(vma, page, vaddr, len); \
+} while (0)
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
 #endif /* __ASM_SH_CACHEFLUSH_H */
@@ -51,7 +51,6 @@ extern unsigned long empty_zero_page[1024];
  * Currently only 4-enty (16kB) is used (see arch/sh/mm/cache.c)
  */
 #define VMALLOC_START (P3SEG+0x00100000)
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END P4SEG
 /* 0x001     WT-bit on SH-4, 0 on SH-3 */
...
@@ -56,6 +56,11 @@ BTFIXUPDEF_CALL(void, flush_cache_page, struct vm_area_struct *, unsigned long)
 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
 BTFIXUPDEF_CALL(void, __flush_page_to_ram, unsigned long)
 BTFIXUPDEF_CALL(void, flush_sig_insns, struct mm_struct *, unsigned long)
@@ -66,4 +71,7 @@ extern void sparc_flush_page_to_ram(struct page *page);
 #define flush_dcache_page(page) sparc_flush_page_to_ram(page)
+#define flush_cache_vmap(start, end) flush_cache_all()
+#define flush_cache_vunmap(start, end) flush_cache_all()
 #endif /* _SPARC_CACHEFLUSH_H */
@@ -89,6 +89,8 @@ static inline struct page *kmap_atomic_to_page(void *ptr)
 	return pte_page(*pte);
 }
+#define flush_cache_kmaps() flush_cache_all()
 #endif /* __KERNEL__ */
 #endif /* _ASM_HIGHMEM_H */
@@ -101,8 +101,6 @@ BTFIXUPDEF_SIMM13(ptrs_per_pmd)
 BTFIXUPDEF_SIMM13(ptrs_per_pgd)
 BTFIXUPDEF_SIMM13(user_ptrs_per_pgd)
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define pte_ERROR(e) __builtin_trap()
 #define pmd_ERROR(e) __builtin_trap()
 #define pgd_ERROR(e) __builtin_trap()
...
@@ -48,6 +48,14 @@ extern void smp_flush_cache_all(void);
 #define flush_icache_page(vma, pg) do { } while(0)
 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
 extern void flush_dcache_page(struct page *page);
+#define flush_cache_vmap(start, end) flush_cache_all()
+#define flush_cache_vunmap(start, end) flush_cache_all()
 #endif /* _SPARC64_CACHEFLUSH_H */
@@ -30,7 +30,6 @@
 #define MODULES_LEN 0x000000007e000000
 #define MODULES_END 0x0000000080000000
 #define VMALLOC_START 0x0000000140000000
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define VMALLOC_END 0x0000000200000000
 #define LOW_OBP_ADDRESS 0x00000000f0000000
 #define HI_OBP_ADDRESS 0x0000000100000000
...
@@ -69,7 +69,6 @@ extern unsigned long high_physmem;
 #define VMALLOC_OFFSET (__va_space)
 #define VMALLOC_START (((unsigned long) high_physmem + VMALLOC_OFFSET) & ~(VMALLOC_OFFSET-1))
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #ifdef CONFIG_HIGHMEM
 # define VMALLOC_END (PKMAP_BASE-2*PAGE_SIZE)
...
@@ -27,6 +27,8 @@
 #define flush_cache_range(vma, start, end) ((void)0)
 #define flush_cache_page(vma, vmaddr) ((void)0)
 #define flush_dcache_page(page) ((void)0)
+#define flush_cache_vmap(start, end) ((void)0)
+#define flush_cache_vunmap(start, end) ((void)0)
 #ifdef CONFIG_NO_CACHE
@@ -55,5 +57,11 @@ extern void flush_cache_sigtramp (unsigned long addr);
 #endif /* CONFIG_NO_CACHE */
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+do { memcpy(dst, src, len); \
+     flush_icache_user_range(vma, page, vaddr, len); \
+} while (0)
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
 #endif /* __V850_CACHEFLUSH_H__ */
@@ -13,6 +13,13 @@
 #define flush_icache_range(start, end) do { } while (0)
 #define flush_icache_page(vma,pg) do { } while (0)
 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
+#define flush_cache_vmap(start, end) do { } while (0)
+#define flush_cache_vunmap(start, end) do { } while (0)
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+	memcpy(dst, src, len)
 void global_flush_tlb(void);
 int change_page_attr(struct page *page, int numpages, pgprot_t prot);
...
@@ -126,7 +126,6 @@ static inline void set_pml4(pml4_t *dst, pml4_t val)
 #ifndef __ASSEMBLY__
 #define VMALLOC_START 0xffffff0000000000
 #define VMALLOC_END 0xffffff7fffffffff
-#define VMALLOC_VMADDR(x) ((unsigned long)(x))
 #define MODULES_VADDR 0xffffffffa0000000
 #define MODULES_END 0xffffffffafffffff
 #define MODULES_LEN (MODULES_END - MODULES_VADDR)
...
@@ -179,19 +179,14 @@ int access_process_vm(struct task_struct *tsk, unsigned long addr, void *buf, in
 		flush_cache_page(vma, addr);
-		/*
-		 * FIXME! We used to have flush_page_to_ram() in here, but
-		 * that was wrong.  davem says we need a new per-arch primitive
-		 * to handle this correctly.
-		 */
 		maddr = kmap(page);
 		if (write) {
-			memcpy(maddr + offset, buf, bytes);
-			flush_icache_user_range(vma, page, addr, bytes);
+			copy_to_user_page(vma, page, addr,
+					maddr + offset, buf, bytes);
 			set_page_dirty_lock(page);
 		} else {
-			memcpy(buf, maddr + offset, bytes);
+			copy_from_user_page(vma, page, addr,
+					buf, maddr + offset, bytes);
 		}
 		kunmap(page);
 		page_cache_release(page);
...
@@ -24,6 +24,7 @@
 #include <linux/blkdev.h>
 #include <linux/init.h>
 #include <linux/hash.h>
+#include <linux/highmem.h>
 #include <asm/pgalloc.h>
 #include <asm/tlbflush.h>
@@ -62,7 +63,7 @@ static void flush_all_zero_pkmaps(void)
 {
 	int i;
-	flush_cache_all();
+	flush_cache_kmaps();
 	for (i = 0; i < LAST_PKMAP; i++) {
 		struct page *page;
...
@@ -135,23 +135,23 @@ static int map_area_pmd(pmd_t *pmd, unsigned long address,
 void unmap_vm_area(struct vm_struct *area)
 {
-	unsigned long address = VMALLOC_VMADDR(area->addr);
+	unsigned long address = (unsigned long) area->addr;
 	unsigned long end = (address + area->size);
 	pgd_t *dir;
 	dir = pgd_offset_k(address);
-	flush_cache_all();
+	flush_cache_vunmap(address, end);
 	do {
 		unmap_area_pmd(dir, address, end - address);
 		address = (address + PGDIR_SIZE) & PGDIR_MASK;
 		dir++;
 	} while (address && (address < end));
-	flush_tlb_kernel_range(VMALLOC_VMADDR(area->addr), end);
+	flush_tlb_kernel_range((unsigned long) area->addr, end);
 }
 int map_vm_area(struct vm_struct *area, pgprot_t prot, struct page ***pages)
 {
-	unsigned long address = VMALLOC_VMADDR(area->addr);
+	unsigned long address = (unsigned long) area->addr;
 	unsigned long end = address + (area->size-PAGE_SIZE);
 	pgd_t *dir;
 	int err = 0;
@@ -174,7 +174,7 @@ int map_vm_area(struct vm_struct *area, pgprot_t prot, struct page ***pages)
 	} while (address && (address < end));
 	spin_unlock(&init_mm.page_table_lock);
-	flush_cache_all();
+	flush_cache_vmap((unsigned long) area->addr, end);
 	return err;
 }
...