Commit eb870565 authored by David S. Miller

Merge davem@nuts.ninka.net:/home/davem/src/BK/net-2.5

into kernel.bkbits.net:/home/davem/net-2.5
parents 5c52f39b 7b87c44e
......@@ -75,7 +75,7 @@ changes occur:
Platform developers note that generic code will always
invoke this interface with mm->page_table_lock held.
4) void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
4) void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
This time we need to remove the PAGE_SIZE sized translation
from the TLB. The 'vma' is the backing structure used by
......@@ -87,9 +87,9 @@ changes occur:
After running, this interface must make sure that any previous
page table modification for address space 'vma->vm_mm' for
user virtual address 'page' will be visible to the cpu. That
user virtual address 'addr' will be visible to the cpu. That
is, after running, there will be no entries in the TLB for
'vma->vm_mm' for virtual address 'page'.
'vma->vm_mm' for virtual address 'addr'.
This is used primarily during fault processing.
......@@ -144,9 +144,9 @@ the sequence will be in one of the following forms:
change_range_of_page_tables(mm, start, end);
flush_tlb_range(vma, start, end);
3) flush_cache_page(vma, page);
3) flush_cache_page(vma, addr);
set_pte(pte_pointer, new_pte_val);
flush_tlb_page(vma, page);
flush_tlb_page(vma, addr);
The cache level flush will always be first, because this allows
us to properly handle systems whose caches are strict and require
......@@ -200,7 +200,7 @@ Here are the routines, one by one:
call flush_cache_page (see below) for each entry which may be
modified.
4) void flush_cache_page(struct vm_area_struct *vma, unsigned long page)
4) void flush_cache_page(struct vm_area_struct *vma, unsigned long addr)
This time we need to remove a PAGE_SIZE sized range
from the cache. The 'vma' is the backing structure used by
......@@ -211,7 +211,7 @@ Here are the routines, one by one:
"Harvard" type cache layouts).
After running, there will be no entries in the cache for
'vma->vm_mm' for virtual address 'page'.
'vma->vm_mm' for virtual address 'addr'.
This is used primarily during fault processing.
......@@ -235,7 +235,7 @@ this value.
NOTE: This does not fix shared mmaps, check out the sparc64 port for
one way to solve this (in particular SPARC_FLAG_MMAPSHARED).
Next, you have two methods to solve the D-cache aliasing issue for all
Next, you have to solve the D-cache aliasing issue for all
other cases. Please keep in mind the fact that, for a given page
mapped into some user address space, there is always at least one more
mapping, that of the kernel in its linear mapping starting at
......@@ -244,35 +244,8 @@ physical page into its address space, by implication the D-cache
aliasing problem has the potential to exist since the kernel already
maps this page at its virtual address.
First, I describe the old method for dealing with this problem. I am
describing it for documentation purposes, but it is deprecated; the
new method described next should be used by all new ports, and all
existing ports should move over to it as well.
flush_page_to_ram(struct page *page)
The physical page 'page' is about to be placed into the
user address space of a process. If it is possible for
stores done recently by the kernel into this physical
page to not be visible to an arbitrary mapping in userspace,
you must flush this page from the D-cache.
If the D-cache is writeback in nature, the dirty data (if
any) for this physical page must be written back to main
memory before the cache lines are invalidated.
Admittedly, the author did not think very much when designing this
interface. It does not give the architecture enough information about
what exactly is going on, and there is no context to base a judgment
on about whether an alias is possible at all. The new interfaces to
deal with D-cache aliasing are meant to address this by telling the
architecture specific code exactly what is going on at the proper points
in time.
Here is the new interface:
void copy_user_page(void *to, void *from, unsigned long address)
void clear_user_page(void *to, unsigned long address)
void copy_user_page(void *to, void *from, unsigned long addr, struct page *page)
void clear_user_page(void *to, unsigned long addr, struct page *page)
These two routines store data in user anonymous or COW
pages. They allow a port to efficiently avoid D-cache alias
......@@ -285,8 +258,9 @@ Here is the new interface:
of the same "color" as the user mapping of the page. Sparc64
for example, uses this technique.
The "address" parameter tells the virtual address where the
user will ultimately have this page mapped.
The 'addr' parameter tells the virtual address where the
user will ultimately have this page mapped, and the 'page'
parameter gives a pointer to the struct page of the target.
If D-cache aliasing is not an issue, these two routines may
simply call memcpy/memset directly and do nothing more.
......@@ -363,5 +337,5 @@ Here is the new interface:
void flush_icache_page(struct vm_area_struct *vma, struct page *page)
All the functionality of flush_icache_page can be implemented in
flush_dcache_page and update_mmu_cache. In 2.5 the hope is to
flush_dcache_page and update_mmu_cache. In 2.7 the hope is to
remove this interface completely.
......@@ -362,6 +362,93 @@ IDE devices:
ide-cdrom version 4.53
ide-disk version 1.08
..............................................................................
meminfo:
Provides information about distribution and utilization of memory. This
varies by architecture and compile options. The following is from a
16GB PIII, which has highmem enabled. You may not have all of these fields.
> cat /proc/meminfo
MemTotal: 16344972 kB
MemFree: 13634064 kB
Buffers: 3656 kB
Cached: 1195708 kB
SwapCached: 0 kB
Active: 891636 kB
Inactive: 1077224 kB
HighTotal: 15597528 kB
HighFree: 13629632 kB
LowTotal: 747444 kB
LowFree: 4432 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 968 kB
Writeback: 0 kB
Mapped: 280372 kB
Slab: 684068 kB
Committed_AS: 1576424 kB
PageTables: 24448 kB
ReverseMaps: 1080904
VmallocTotal: 112216 kB
VmallocUsed: 428 kB
VmallocChunk: 111088 kB
MemTotal: Total usable ram (i.e. physical ram minus a few reserved
bits and the kernel binary code)
MemFree: The sum of LowFree+HighFree
Buffers: Relatively temporary storage for raw disk blocks;
shouldn't get tremendously large (20MB or so)
Cached: in-memory cache for files read from the disk (the
pagecache). Doesn't include SwapCached
SwapCached: Memory that was once swapped out and has been swapped back
in, but is still also present in the swapfile. (If that memory
is needed again it does not have to be swapped out a second
time because it is already in the swapfile, saving I/O.)
Active: Memory that has been used more recently and usually not
reclaimed unless absolutely necessary.
Inactive: Memory which has been less recently used. It is more
eligible to be reclaimed for other purposes
HighTotal:
HighFree: Highmem is all memory above ~860MB of physical memory
Highmem areas are for use by userspace programs, or
for the pagecache. The kernel must use tricks to access
this memory, making it slower to access than lowmem.
LowTotal:
LowFree: Lowmem is memory which can be used for everything that
highmem can be used for, but it is also available for the
kernel's use for its own data structures. Among many
other things, it is where everything from the Slab is
allocated. Bad things happen when you're out of lowmem.
SwapTotal: total amount of swap space available
SwapFree: Memory which has been evicted from RAM, and is temporarily
on the disk
Dirty: Memory which is waiting to get written back to the disk
Writeback: Memory which is actively being written back to the disk
Mapped: files which have been mmaped, such as libraries
Slab: in-kernel data structures cache
Committed_AS: An estimate of how much RAM you would need to make a
99.99% guarantee that there never is OOM (out of memory)
for this workload. Normally the kernel will overcommit
memory. That means that, say, a 1GB malloc does nothing,
really. Only when you start using that malloced memory do
you get real memory, on demand, and only as much as you
actually use. So you sort of take out a mortgage and hope
the bank doesn't go bust. Another case is mmaping a file
that is shared between processes: only when you write to it
do you get a private copy of that data, while normally it
is shared. The Committed_AS is a guesstimate of how much
RAM/swap you would need worst-case.
PageTables: amount of memory dedicated to the lowest level of page
tables.
ReverseMaps: number of reverse mappings performed
VmallocTotal: total size of vmalloc memory area
VmallocUsed: amount of vmalloc area which is used
VmallocChunk: largest contiguous block of vmalloc area which is free
More detailed information can be found in the controller specific
subdirectories. These are named ide0, ide1 and so on. Each of these
......
......@@ -32,7 +32,11 @@ SECTIONS
/* Will be freed after init */
. = ALIGN(8192); /* Init code and data */
__init_begin = .;
.init.text : { *(.init.text) }
.init.text : {
_sinittext = .;
*(.init.text)
_einittext = .;
}
.init.data : { *(.init.data) }
. = ALIGN(16);
......
......@@ -14,7 +14,9 @@ SECTIONS
.init : { /* Init code and data */
_stext = .;
__init_begin = .;
_sinittext = .;
*(.init.text)
_einittext = .;
__proc_info_begin = .;
*(.proc.info)
__proc_info_end = .;
......
......@@ -18,7 +18,9 @@ SECTIONS
.init : { /* Init code and data */
_stext = .;
__init_begin = .;
_sinittext = .;
*(.init.text)
_einittext = .;
__proc_info_begin = .;
*(.proc.info)
__proc_info_end = .;
......
......@@ -54,7 +54,11 @@ SECTIONS
/* will be freed after init */
. = ALIGN(4096); /* Init code and data */
__init_begin = .;
.init.text : { *(.init.text) }
.init.text : {
_sinittext = .;
*(.init.text)
_einittext = .;
}
.init.data : { *(.init.data) }
. = ALIGN(16);
__setup_start = .;
......
......@@ -251,7 +251,6 @@ put_gate_page (struct page *page, unsigned long address)
pte_unmap(pte);
goto out;
}
flush_page_to_ram(page);
set_pte(pte, mk_pte(page, PAGE_GATE));
pte_unmap(pte);
}
......
......@@ -96,7 +96,11 @@ SECTIONS
. = ALIGN(PAGE_SIZE);
__init_begin = .;
.init.text : AT(ADDR(.init.text) - PAGE_OFFSET)
{ *(.init.text) }
{
_sinittext = .;
*(.init.text)
_einittext = .;
}
.init.data : AT(ADDR(.init.data) - PAGE_OFFSET)
{ *(.init.data) }
......
......@@ -40,7 +40,11 @@ SECTIONS
/* will be freed after init */
. = ALIGN(4096); /* Init code and data */
__init_begin = .;
.init.text : { *(.init.text) }
.init.text : {
_sinittext = .;
*(.init.text)
_einittext = .;
}
.init.data : { *(.init.data) }
. = ALIGN(16);
__setup_start = .;
......
......@@ -34,7 +34,11 @@ SECTIONS
/* will be freed after init */
. = ALIGN(8192); /* Init code and data */
__init_begin = .;
.init.text : { *(.init.text) }
.init.text : {
_sinittext = .;
*(.init.text)
_einittext = .;
}
.init.data : { *(.init.data) }
. = ALIGN(16);
__setup_start = .;
......
......@@ -282,7 +282,9 @@ SECTIONS {
.init : {
. = ALIGN(4096);
__init_begin = .;
_sinittext = .;
*(.init.text)
_einittext = .;
*(.init.data)
. = ALIGN(16);
__setup_start = .;
......
......@@ -195,7 +195,7 @@ int copy_strings32(int argc, u32 * argv, struct linux_binprm *bprm)
}
err = copy_from_user(kaddr + offset, (char *)A(str),
bytes_to_copy);
flush_page_to_ram(page);
flush_dcache_page(page);
kunmap(page);
if (err)
......
......@@ -183,7 +183,6 @@ static int copy_strings32(int argc, u32 *argv, struct linux_binprm *bprm)
}
err = copy_from_user(kaddr + offset, (char *)A(str), bytes_to_copy);
flush_dcache_page(page);
flush_page_to_ram(page);
kunmap(page);
if (err)
......
......@@ -53,7 +53,11 @@ SECTIONS
. = ALIGN(16384);
__init_begin = .;
.init.text : { *(.init.text) }
.init.text : {
_sinittext = .;
*(.init.text)
_einittext = .;
}
.init.data : { *(.init.data) }
. = ALIGN(16);
__setup_start = .;
......
......@@ -78,7 +78,11 @@ SECTIONS
. = ALIGN(4096);
__init_begin = .;
.init.text : { *(.init.text) }
.init.text : {
_sinittext = .;
*(.init.text)
_einittext = .;
}
.init.data : {
*(.init.data);
__vtop_table_begin = .;
......
......@@ -2077,7 +2077,6 @@ static int copy_strings32(int argc, u32 * argv, struct linux_binprm *bprm)
err = copy_from_user(kaddr + offset, (char *)A(str),
bytes_to_copy);
flush_page_to_ram(page);
kunmap((unsigned long)kaddr);
if (err)
......
......@@ -77,7 +77,11 @@ SECTIONS
/* will be freed after init */
. = ALIGN(4096);
__init_begin = .;
.init.text : { *(.init.text) }
.init.text : {
_sinittext = .;
*(.init.text)
_einittext = .;
}
.init.data : { *(.init.data) }
. = ALIGN(16);
__setup_start = .;
......
......@@ -58,7 +58,11 @@ SECTIONS
/* will be freed after init */
. = ALIGN(4096); /* Init code and data */
__init_begin = .;
.init.text : { *(.init.text) }
.init.text : {
_sinittext = .;
*(.init.text)
_einittext = .;
}
.init.data : { *(.init.data) }
. = ALIGN(256);
__setup_start = .;
......
......@@ -1888,7 +1888,6 @@ static int copy_strings32(int argc, u32 * argv, struct linux_binprm *bprm)
err = copy_from_user(kaddr + offset, (char *)A(str),
bytes_to_copy);
flush_page_to_ram(page);
kunmap(page);
if (err)
......
......@@ -58,7 +58,11 @@ SECTIONS
/* will be freed after init */
. = ALIGN(4096); /* Init code and data */
__init_begin = .;
.init.text : { *(.init.text) }
.init.text : {
_sinittext = .;
*(.init.text)
_einittext = .;
}
.init.data : { *(.init.data) }
. = ALIGN(256);
__setup_start = .;
......
......@@ -34,7 +34,11 @@ SECTIONS
. = ALIGN(4096);
__init_begin = .;
.init.text : { *(.init.text) }
.init.text : {
_sinittext = .;
*(.init.text)
_einittext = .;
}
__init_text_end = .;
.init.data : { *(.init.data) }
. = ALIGN(16);
......
......@@ -41,7 +41,11 @@ SECTIONS
. = ALIGN(8192);
__init_begin = .;
.init.text : { *(.init.text) }
.init.text : {
_sinittext = .;
*(.init.text)
_einittext = .;
}
.init.data : { *(.init.data) }
. = ALIGN(16);
__setup_start = .;
......
......@@ -105,7 +105,9 @@
#define RAMK_INIT_CONTENTS_NO_END \
. = ALIGN (4096) ; \
__init_start = . ; \
_sinittext = .; \
*(.init.text) /* 2.5 convention */ \
_einittext = .; \
*(.init.data) \
*(.text.init) /* 2.4 convention */ \
*(.data.init) \
......@@ -125,7 +127,9 @@
/* The contents of `init' section for a ROM-resident kernel which
should go into ROM. */
#define ROMK_INIT_ROM_CONTENTS \
_sinittext = .; \
*(.init.text) /* 2.5 convention */ \
_einittext = .; \
*(.text.init) /* 2.4 convention */ \
INITCALL_CONTENTS \
INITRAMFS_CONTENTS
......
......@@ -78,7 +78,11 @@ SECTIONS
. = ALIGN(4096); /* Init code and data */
__init_begin = .;
.init.text : { *(.init.text) }
.init.text : {
_sinittext = .;
*(.init.text)
_einittext = .;
}
.init.data : { *(.init.data) }
. = ALIGN(16);
__setup_start = .;
......
......@@ -1874,7 +1874,7 @@ static void __do_SAK(void *arg)
}
task_lock(p);
if (p->files) {
read_lock(&p->files->file_lock);
spin_lock(&p->files->file_lock);
for (i=0; i < p->files->max_fds; i++) {
filp = fcheck_files(p->files, i);
if (filp && (filp->f_op == &tty_fops) &&
......@@ -1886,7 +1886,7 @@ static void __do_SAK(void *arg)
break;
}
}
read_unlock(&p->files->file_lock);
spin_unlock(&p->files->file_lock);
}
task_unlock(p);
}
......
......@@ -238,11 +238,12 @@ static void reschedule_retry(r1bio_t *r1_bio)
* operation and are ready to return a success/failure code to the buffer
* cache layer.
*/
static void raid_end_bio_io(r1bio_t *r1_bio, int uptodate)
static void raid_end_bio_io(r1bio_t *r1_bio)
{
struct bio *bio = r1_bio->master_bio;
bio_endio(bio, bio->bi_size, uptodate ? 0 : -EIO);
bio_endio(bio, bio->bi_size,
test_bit(R1BIO_Uptodate, &r1_bio->state) ? 0 : -EIO);
free_r1bio(r1_bio);
}
......@@ -299,7 +300,7 @@ static int end_request(struct bio *bio, unsigned int bytes_done, int error)
* we have only one bio on the read side
*/
if (uptodate)
raid_end_bio_io(r1_bio, uptodate);
raid_end_bio_io(r1_bio);
else {
/*
* oops, read error:
......@@ -320,7 +321,7 @@ static int end_request(struct bio *bio, unsigned int bytes_done, int error)
*/
if (atomic_dec_and_test(&r1_bio->remaining)) {
md_write_end(r1_bio->mddev);
raid_end_bio_io(r1_bio, uptodate);
raid_end_bio_io(r1_bio);
}
}
atomic_dec(&conf->mirrors[mirror].rdev->nr_pending);
......@@ -542,10 +543,10 @@ static int make_request(request_queue_t *q, struct bio * bio)
* then return an IO error:
*/
md_write_end(mddev);
raid_end_bio_io(r1_bio, 0);
raid_end_bio_io(r1_bio);
return 0;
}
atomic_set(&r1_bio->remaining, sum_bios);
atomic_set(&r1_bio->remaining, sum_bios+1);
/*
* We have to be a bit careful about the semaphore above, that's
......@@ -567,6 +568,12 @@ static int make_request(request_queue_t *q, struct bio * bio)
generic_make_request(mbio);
}
if (atomic_dec_and_test(&r1_bio->remaining)) {
md_write_end(mddev);
raid_end_bio_io(r1_bio);
}
return 0;
}
......@@ -917,7 +924,7 @@ static void raid1d(mddev_t *mddev)
" read error for block %llu\n",
bdev_partition_name(bio->bi_bdev),
(unsigned long long)r1_bio->sector);
raid_end_bio_io(r1_bio, 0);
raid_end_bio_io(r1_bio);
break;
}
printk(KERN_ERR "raid1: %s: redirecting sector %llu to"
......
......@@ -1378,7 +1378,6 @@ static int elf_core_dump(long signr, struct pt_regs * regs, struct file * file)
flush_cache_page(vma, addr);
kaddr = kmap(page);
DUMP_WRITE(kaddr, PAGE_SIZE);
flush_page_to_ram(page);
kunmap(page);
}
page_cache_release(page);
......
......@@ -1754,7 +1754,6 @@ static int __block_write_full_page(struct inode *inode, struct page *page,
* exposing stale data.
* The page is currently locked and not marked for writeback
*/
ClearPageUptodate(page);
bh = head;
/* Recovery: lock and submit the mapped buffers */
do {
......
......@@ -326,7 +326,7 @@ static int vfs_quota_sync(struct super_block *sb, int type)
if (!dquot_dirty(dquot))
continue;
spin_unlock(&dq_list_lock);
commit_dqblk(dquot);
sb->dq_op->sync_dquot(dquot);
goto restart;
}
spin_unlock(&dq_list_lock);
......@@ -1072,9 +1072,16 @@ struct dquot_operations dquot_operations = {
.alloc_inode = dquot_alloc_inode,
.free_space = dquot_free_space,
.free_inode = dquot_free_inode,
.transfer = dquot_transfer
.transfer = dquot_transfer,
.sync_dquot = commit_dqblk
};
/* Function used by filesystems for initializing the dquot_operations structure */
void init_dquot_operations(struct dquot_operations *fsdqops)
{
memcpy(fsdqops, &dquot_operations, sizeof(dquot_operations));
}
static inline void set_enable_flags(struct quota_info *dqopt, int type)
{
switch (type) {
......@@ -1432,3 +1439,4 @@ EXPORT_SYMBOL(unregister_quota_format);
EXPORT_SYMBOL(dqstats);
EXPORT_SYMBOL(dq_list_lock);
EXPORT_SYMBOL(dq_data_lock);
EXPORT_SYMBOL(init_dquot_operations);
......@@ -314,7 +314,6 @@ void put_dirty_page(struct task_struct * tsk, struct page *page, unsigned long a
}
lru_cache_add_active(page);
flush_dcache_page(page);
flush_page_to_ram(page);
set_pte(pte, pte_mkdirty(pte_mkwrite(mk_pte(page, PAGE_COPY))));
pte_chain = page_add_rmap(page, pte, pte_chain);
pte_unmap(pte);
......@@ -407,7 +406,7 @@ int setup_arg_pages(struct linux_binprm *bprm)
mpnt->vm_start = PAGE_MASK & (unsigned long) bprm->p;
mpnt->vm_end = STACK_TOP;
#endif
mpnt->vm_page_prot = PAGE_COPY;
mpnt->vm_page_prot = protection_map[VM_STACK_FLAGS & 0x7];
mpnt->vm_flags = VM_STACK_FLAGS;
mpnt->vm_ops = NULL;
mpnt->vm_pgoff = 0;
......@@ -750,7 +749,7 @@ static inline void flush_old_files(struct files_struct * files)
{
long j = -1;
write_lock(&files->file_lock);
spin_lock(&files->file_lock);
for (;;) {
unsigned long set, i;
......@@ -762,16 +761,16 @@ static inline void flush_old_files(struct files_struct * files)
if (!set)
continue;
files->close_on_exec->fds_bits[j] = 0;
write_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
for ( ; set ; i++,set >>= 1) {
if (set & 1) {
sys_close(i);
}
}
write_lock(&files->file_lock);
spin_lock(&files->file_lock);
}
write_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
}
int flush_old_exec(struct linux_binprm * bprm)
......
......@@ -769,6 +769,10 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
printk ("EXT2-fs: not enough memory\n");
goto failed_mount;
}
percpu_counter_init(&sbi->s_freeblocks_counter);
percpu_counter_init(&sbi->s_freeinodes_counter);
percpu_counter_init(&sbi->s_dirs_counter);
bgl_lock_init(&sbi->s_blockgroup_lock);
sbi->s_debts = kmalloc(sbi->s_groups_count * sizeof(*sbi->s_debts),
GFP_KERNEL);
if (!sbi->s_debts) {
......@@ -792,7 +796,6 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
goto failed_mount2;
}
sbi->s_gdb_count = db_count;
sbi->s_dir_count = ext2_count_dirs(sb);
get_random_bytes(&sbi->s_next_generation, sizeof(u32));
/*
* set up enough so that it can read an inode
......@@ -814,6 +817,12 @@ static int ext2_fill_super(struct super_block *sb, void *data, int silent)
ext2_warning(sb, __FUNCTION__,
"mounting ext3 filesystem as ext2\n");
ext2_setup_super (sb, es, sb->s_flags & MS_RDONLY);
percpu_counter_mod(&sbi->s_freeblocks_counter,
ext2_count_free_blocks(sb));
percpu_counter_mod(&sbi->s_freeinodes_counter,
ext2_count_free_inodes(sb));
percpu_counter_mod(&sbi->s_dirs_counter,
ext2_count_dirs(sb));
return 0;
failed_mount2:
for (i = 0; i < db_count; i++)
......@@ -840,6 +849,8 @@ static void ext2_commit_super (struct super_block * sb,
static void ext2_sync_super(struct super_block *sb, struct ext2_super_block *es)
{
es->s_free_blocks_count = cpu_to_le32(ext2_count_free_blocks(sb));
es->s_free_inodes_count = cpu_to_le32(ext2_count_free_inodes(sb));
es->s_wtime = cpu_to_le32(get_seconds());
mark_buffer_dirty(EXT2_SB(sb)->s_sbh);
sync_dirty_buffer(EXT2_SB(sb)->s_sbh);
......@@ -868,6 +879,8 @@ void ext2_write_super (struct super_block * sb)
ext2_debug ("setting valid to 0\n");
es->s_state = cpu_to_le16(le16_to_cpu(es->s_state) &
~EXT2_VALID_FS);
es->s_free_blocks_count = cpu_to_le32(ext2_count_free_blocks(sb));
es->s_free_inodes_count = cpu_to_le32(ext2_count_free_inodes(sb));
es->s_mtime = cpu_to_le32(get_seconds());
ext2_sync_super(sb, es);
} else
......@@ -965,7 +978,7 @@ static int ext2_statfs (struct super_block * sb, struct statfs * buf)
buf->f_type = EXT2_SUPER_MAGIC;
buf->f_bsize = sb->s_blocksize;
buf->f_blocks = le32_to_cpu(sbi->s_es->s_blocks_count) - overhead;
buf->f_bfree = ext2_count_free_blocks (sb);
buf->f_bfree = ext2_count_free_blocks(sb);
buf->f_bavail = buf->f_bfree - le32_to_cpu(sbi->s_es->s_r_blocks_count);
if (buf->f_bfree < le32_to_cpu(sbi->s_es->s_r_blocks_count))
buf->f_bavail = 0;
......
......@@ -566,6 +566,8 @@ static void ext3_clear_inode(struct inode *inode)
# define ext3_clear_inode NULL
#endif
static struct dquot_operations ext3_qops;
static struct super_operations ext3_sops = {
.alloc_inode = ext3_alloc_inode,
.destroy_inode = ext3_destroy_inode,
......@@ -1337,6 +1339,7 @@ static int ext3_fill_super (struct super_block *sb, void *data, int silent)
*/
sb->s_op = &ext3_sops;
sb->s_export_op = &ext3_export_ops;
sb->dq_op = &ext3_qops;
INIT_LIST_HEAD(&sbi->s_orphan); /* unlinked but open files */
sb->s_root = 0;
......@@ -1977,6 +1980,56 @@ int ext3_statfs (struct super_block * sb, struct statfs * buf)
return 0;
}
/* Helper function for writing quotas on sync - we need to start transaction before quota file
* is locked for write. Otherwise there are possible deadlocks:
* Process 1 Process 2
* ext3_create() quota_sync()
* journal_start() write_dquot()
* DQUOT_INIT() down(dqio_sem)
* down(dqio_sem) journal_start()
*
*/
#ifdef CONFIG_QUOTA
#define EXT3_OLD_QFMT_BLOCKS 2
#define EXT3_V0_QFMT_BLOCKS 6
static int (*old_sync_dquot)(struct dquot *dquot);
static int ext3_sync_dquot(struct dquot *dquot)
{
int nblocks, ret;
handle_t *handle;
struct quota_info *dqops = sb_dqopt(dquot->dq_sb);
struct inode *qinode;
switch (dqops->info[dquot->dq_type].dqi_format->qf_fmt_id) {
case QFMT_VFS_OLD:
nblocks = EXT3_OLD_QFMT_BLOCKS;
break;
case QFMT_VFS_V0:
nblocks = EXT3_V0_QFMT_BLOCKS;
break;
default:
nblocks = EXT3_MAX_TRANS_DATA;
}
lock_kernel();
qinode = dqops->files[dquot->dq_type]->f_dentry->d_inode;
handle = ext3_journal_start(qinode, nblocks);
if (IS_ERR(handle)) {
unlock_kernel();
return PTR_ERR(handle);
}
unlock_kernel();
ret = old_sync_dquot(dquot);
lock_kernel();
ret = ext3_journal_stop(handle);
unlock_kernel();
return ret;
}
#endif
static struct super_block *ext3_get_sb(struct file_system_type *fs_type,
int flags, char *dev_name, void *data)
{
......@@ -1999,6 +2052,11 @@ static int __init init_ext3_fs(void)
err = init_inodecache();
if (err)
goto out1;
#ifdef CONFIG_QUOTA
init_dquot_operations(&ext3_qops);
old_sync_dquot = ext3_qops.sync_dquot;
ext3_qops.sync_dquot = ext3_sync_dquot;
#endif
err = register_filesystem(&ext3_fs_type);
if (err)
goto out;
......
......@@ -23,21 +23,21 @@ extern int fcntl_getlease(struct file *filp);
void set_close_on_exec(unsigned int fd, int flag)
{
struct files_struct *files = current->files;
write_lock(&files->file_lock);
spin_lock(&files->file_lock);
if (flag)
FD_SET(fd, files->close_on_exec);
else
FD_CLR(fd, files->close_on_exec);
write_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
}
static inline int get_close_on_exec(unsigned int fd)
{
struct files_struct *files = current->files;
int res;
read_lock(&files->file_lock);
spin_lock(&files->file_lock);
res = FD_ISSET(fd, files->close_on_exec);
read_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
return res;
}
......@@ -134,15 +134,15 @@ static int dupfd(struct file *file, int start)
struct files_struct * files = current->files;
int fd;
write_lock(&files->file_lock);
spin_lock(&files->file_lock);
fd = locate_fd(files, file, start);
if (fd >= 0) {
FD_SET(fd, files->open_fds);
FD_CLR(fd, files->close_on_exec);
write_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
fd_install(fd, file);
} else {
write_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
fput(file);
}
......@@ -155,7 +155,7 @@ asmlinkage long sys_dup2(unsigned int oldfd, unsigned int newfd)
struct file * file, *tofree;
struct files_struct * files = current->files;
write_lock(&files->file_lock);
spin_lock(&files->file_lock);
if (!(file = fcheck(oldfd)))
goto out_unlock;
err = newfd;
......@@ -186,7 +186,7 @@ asmlinkage long sys_dup2(unsigned int oldfd, unsigned int newfd)
files->fd[newfd] = file;
FD_SET(newfd, files->open_fds);
FD_CLR(newfd, files->close_on_exec);
write_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
if (tofree)
filp_close(tofree, files);
......@@ -194,11 +194,11 @@ asmlinkage long sys_dup2(unsigned int oldfd, unsigned int newfd)
out:
return err;
out_unlock:
write_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
goto out;
out_fput:
write_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
fput(file);
goto out;
}
......
......@@ -65,7 +65,7 @@ int expand_fd_array(struct files_struct *files, int nr)
goto out;
nfds = files->max_fds;
write_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
/*
* Expand to the max in easy steps, and keep expanding it until
......@@ -89,7 +89,7 @@ int expand_fd_array(struct files_struct *files, int nr)
error = -ENOMEM;
new_fds = alloc_fd_array(nfds);
write_lock(&files->file_lock);
spin_lock(&files->file_lock);
if (!new_fds)
goto out;
......@@ -110,15 +110,15 @@ int expand_fd_array(struct files_struct *files, int nr)
memset(&new_fds[i], 0,
(nfds-i) * sizeof(struct file *));
write_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
free_fd_array(old_fds, i);
write_lock(&files->file_lock);
spin_lock(&files->file_lock);
}
} else {
/* Somebody expanded the array while we slept ... */
write_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
free_fd_array(new_fds, nfds);
write_lock(&files->file_lock);
spin_lock(&files->file_lock);
}
error = 0;
out:
......@@ -167,7 +167,7 @@ int expand_fdset(struct files_struct *files, int nr)
goto out;
nfds = files->max_fdset;
write_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
/* Expand to the max in easy steps */
do {
......@@ -183,7 +183,7 @@ int expand_fdset(struct files_struct *files, int nr)
error = -ENOMEM;
new_openset = alloc_fdset(nfds);
new_execset = alloc_fdset(nfds);
write_lock(&files->file_lock);
spin_lock(&files->file_lock);
if (!new_openset || !new_execset)
goto out;
......@@ -208,21 +208,21 @@ int expand_fdset(struct files_struct *files, int nr)
nfds = xchg(&files->max_fdset, nfds);
new_openset = xchg(&files->open_fds, new_openset);
new_execset = xchg(&files->close_on_exec, new_execset);
write_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
free_fdset (new_openset, nfds);
free_fdset (new_execset, nfds);
write_lock(&files->file_lock);
spin_lock(&files->file_lock);
return 0;
}
/* Somebody expanded the array while we slept ... */
out:
write_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
if (new_openset)
free_fdset(new_openset, nfds);
if (new_execset)
free_fdset(new_execset, nfds);
write_lock(&files->file_lock);
spin_lock(&files->file_lock);
return error;
}
......@@ -182,11 +182,11 @@ struct file *fget(unsigned int fd)
struct file *file;
struct files_struct *files = current->files;
read_lock(&files->file_lock);
spin_lock(&files->file_lock);
file = fcheck(fd);
if (file)
get_file(file);
read_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
return file;
}
......
......@@ -7,5 +7,5 @@ obj-$(CONFIG_NFSD) += nfsd.o
nfsd-y := nfssvc.o nfsctl.o nfsproc.o nfsfh.o vfs.o \
export.o auth.o lockd.o nfscache.o nfsxdr.o stats.o
nfsd-$(CONFIG_NFSD_V3) += nfs3proc.o nfs3xdr.o
nfsd-$(CONFIG_NFSD_V4) += nfs4proc.o nfs4xdr.o
nfsd-$(CONFIG_NFSD_V4) += nfs4proc.o nfs4xdr.o nfs4state.o
nfsd-objs := $(nfsd-y)
......@@ -175,7 +175,7 @@ int expkey_parse(struct cache_detail *cd, char *mesg, int mlen)
ek = svc_expkey_lookup(&key, 2);
if (ek)
expkey_put(&ek->h, &svc_expkey_cache);
svc_export_put(&exp->h, &svc_export_cache);
exp_put(exp);
err = 0;
out_nd:
path_release(&nd);
......@@ -648,7 +648,6 @@ exp_export(struct nfsctl_export *nxp)
struct svc_export new;
struct svc_expkey *fsid_key = NULL;
struct nameidata nd;
struct inode *inode = NULL;
int err;
/* Consistency check */
......@@ -674,7 +673,6 @@ exp_export(struct nfsctl_export *nxp)
err = path_lookup(nxp->ex_path, 0, &nd);
if (err)
goto out_unlock;
inode = nd.dentry->d_inode;
err = -EINVAL;
exp = exp_get_by_name(clp, nd.mnt, nd.dentry, NULL);
......@@ -687,7 +685,7 @@ exp_export(struct nfsctl_export *nxp)
fsid_key->ek_export != exp)
goto finish;
if (exp != NULL) {
if (exp) {
/* just a flags/id/fsid update */
exp_fsid_unhash(exp);
......@@ -700,7 +698,7 @@ exp_export(struct nfsctl_export *nxp)
goto finish;
}
err = check_export(inode, nxp->ex_flags);
err = check_export(nd.dentry->d_inode, nxp->ex_flags);
if (err) goto finish;
err = -ENOMEM;
......@@ -838,7 +836,7 @@ exp_rootfh(svc_client *clp, char *path, struct knfsd_fh *f, int maxsize)
err = 0;
memcpy(f, &fh.fh_handle, sizeof(struct knfsd_fh));
fh_put(&fh);
exp_put(exp);
out:
path_release(&nd);
return err;
......
......@@ -173,20 +173,6 @@ nfsd4_renew(clientid_t *clientid)
return nfs_ok;
}
static inline int
nfsd4_setclientid(struct svc_rqst *rqstp, struct nfsd4_setclientid *setclientid)
{
memset(&setclientid->se_clientid, 0, sizeof(clientid_t));
memset(&setclientid->se_confirm, 0, sizeof(nfs4_verifier));
return nfs_ok;
}
static inline int
nfsd4_setclientid_confirm(struct svc_rqst *rqstp, struct nfsd4_setclientid_confirm *setclientid_confirm)
{
return nfs_ok;
}
/*
* filehandle-manipulating ops.
*/
......
......@@ -512,6 +512,7 @@ static int __init init_nfsd(void)
nfsd_cache_init(); /* RPC reply cache */
nfsd_export_init(); /* Exports table */
nfsd_lockd_init(); /* lockd->nfsd callbacks */
nfs4_state_init(); /* NFSv4 State */
if (proc_mkdir("fs/nfs", 0)) {
struct proc_dir_entry *entry;
entry = create_proc_entry("fs/nfs/exports", 0, NULL);
......@@ -530,6 +531,7 @@ static void __exit exit_nfsd(void)
remove_proc_entry("fs/nfs", NULL);
nfsd_stat_shutdown();
nfsd_lockd_shutdown();
nfs4_state_shutdown();
unregister_filesystem(&nfsd_fs_type);
}
......
......@@ -1568,13 +1568,11 @@ nfsd_permission(struct svc_export *exp, struct dentry *dentry, int acc)
inode->i_uid == current->fsuid)
return 0;
acc &= ~ MAY_OWNER_OVERRIDE; /* This bit is no longer needed,
and gets in the way later */
err = permission(inode, acc & (MAY_READ|MAY_WRITE|MAY_EXEC));
/* Allow read access to binaries even when mode 111 */
if (err == -EACCES && S_ISREG(inode->i_mode) && acc == MAY_READ)
if (err == -EACCES && S_ISREG(inode->i_mode) &&
acc == (MAY_READ | MAY_OWNER_OVERRIDE))
err = permission(inode, MAY_EXEC);
return err? nfserrno(err) : 0;
......
......@@ -702,7 +702,7 @@ int get_unused_fd(void)
int fd, error;
error = -EMFILE;
write_lock(&files->file_lock);
spin_lock(&files->file_lock);
repeat:
fd = find_next_zero_bit(files->open_fds->fds_bits,
......@@ -751,7 +751,7 @@ int get_unused_fd(void)
error = fd;
out:
write_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
return error;
}
......@@ -765,9 +765,9 @@ static inline void __put_unused_fd(struct files_struct *files, unsigned int fd)
void put_unused_fd(unsigned int fd)
{
struct files_struct *files = current->files;
write_lock(&files->file_lock);
spin_lock(&files->file_lock);
__put_unused_fd(files, fd);
write_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
}
/*
......@@ -786,11 +786,11 @@ void put_unused_fd(unsigned int fd)
void fd_install(unsigned int fd, struct file * file)
{
struct files_struct *files = current->files;
write_lock(&files->file_lock);
spin_lock(&files->file_lock);
if (unlikely(files->fd[fd] != NULL))
BUG();
files->fd[fd] = file;
write_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
}
asmlinkage long sys_open(const char __user * filename, int flags, int mode)
......@@ -870,7 +870,7 @@ asmlinkage long sys_close(unsigned int fd)
struct file * filp;
struct files_struct *files = current->files;
write_lock(&files->file_lock);
spin_lock(&files->file_lock);
if (fd >= files->max_fds)
goto out_unlock;
filp = files->fd[fd];
......@@ -879,11 +879,11 @@ asmlinkage long sys_close(unsigned int fd)
files->fd[fd] = NULL;
FD_CLR(fd, files->close_on_exec);
__put_unused_fd(files, fd);
write_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
return filp_close(filp, files);
out_unlock:
write_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
return -EBADF;
}
......
......@@ -117,16 +117,16 @@ static int proc_fd_link(struct inode *inode, struct dentry **dentry, struct vfsm
atomic_inc(&files->count);
task_unlock(task);
if (files) {
read_lock(&files->file_lock);
spin_lock(&files->file_lock);
file = fcheck_files(files, fd);
if (file) {
*mnt = mntget(file->f_vfsmnt);
*dentry = dget(file->f_dentry);
read_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
put_files_struct(files);
return 0;
}
read_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
put_files_struct(files);
}
return -ENOENT;
......@@ -655,7 +655,7 @@ static int proc_readfd(struct file * filp, void * dirent, filldir_t filldir)
task_unlock(p);
if (!files)
goto out;
read_lock(&files->file_lock);
spin_lock(&files->file_lock);
for (fd = filp->f_pos-2;
fd < files->max_fds;
fd++, filp->f_pos++) {
......@@ -663,7 +663,7 @@ static int proc_readfd(struct file * filp, void * dirent, filldir_t filldir)
if (!fcheck_files(files, fd))
continue;
read_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
j = NUMBUF;
i = fd;
......@@ -675,12 +675,12 @@ static int proc_readfd(struct file * filp, void * dirent, filldir_t filldir)
ino = fake_ino(pid, PROC_PID_FD_DIR + fd);
if (filldir(dirent, buf+j, NUMBUF-j, fd+2, ino, DT_LNK) < 0) {
read_lock(&files->file_lock);
spin_lock(&files->file_lock);
break;
}
read_lock(&files->file_lock);
spin_lock(&files->file_lock);
}
read_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
put_files_struct(files);
}
out:
......@@ -824,13 +824,13 @@ static int pid_fd_revalidate(struct dentry * dentry, int flags)
atomic_inc(&files->count);
task_unlock(task);
if (files) {
read_lock(&files->file_lock);
spin_lock(&files->file_lock);
if (fcheck_files(files, fd)) {
read_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
put_files_struct(files);
return 1;
}
read_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
put_files_struct(files);
}
d_drop(dentry);
......@@ -920,7 +920,7 @@ static struct dentry *proc_lookupfd(struct inode * dir, struct dentry * dentry)
if (!files)
goto out_unlock;
inode->i_mode = S_IFLNK;
read_lock(&files->file_lock);
spin_lock(&files->file_lock);
file = fcheck_files(files, fd);
if (!file)
goto out_unlock2;
......@@ -928,7 +928,7 @@ static struct dentry *proc_lookupfd(struct inode * dir, struct dentry * dentry)
inode->i_mode |= S_IRUSR | S_IXUSR;
if (file->f_mode & 2)
inode->i_mode |= S_IWUSR | S_IXUSR;
read_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
put_files_struct(files);
inode->i_op = &proc_pid_link_inode_operations;
inode->i_size = 64;
......@@ -940,7 +940,7 @@ static struct dentry *proc_lookupfd(struct inode * dir, struct dentry * dentry)
return NULL;
out_unlock2:
read_unlock(&files->file_lock);
spin_unlock(&files->file_lock);
put_files_struct(files);
out_unlock:
iput(inode);
......
......@@ -43,6 +43,7 @@
#include <linux/hugetlb.h>
#include <linux/jiffies.h>
#include <linux/sysrq.h>
#include <linux/vmalloc.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
#include <asm/io.h>
......@@ -98,6 +99,41 @@ static int loadavg_read_proc(char *page, char **start, off_t off,
return proc_calc_metrics(page, start, off, count, eof, len);
}
struct vmalloc_info {
unsigned long used;
unsigned long largest_chunk;
};
static struct vmalloc_info get_vmalloc_info(void)
{
unsigned long prev_end = VMALLOC_START;
struct vm_struct* vma;
struct vmalloc_info vmi;
vmi.used = 0;
read_lock(&vmlist_lock);
	if (!vmlist)
vmi.largest_chunk = (VMALLOC_END-VMALLOC_START);
else
vmi.largest_chunk = 0;
for (vma = vmlist; vma; vma = vma->next) {
unsigned long free_area_size =
(unsigned long)vma->addr - prev_end;
vmi.used += vma->size;
		if (vmi.largest_chunk < free_area_size)
vmi.largest_chunk = free_area_size;
prev_end = vma->size + (unsigned long)vma->addr;
}
	if (VMALLOC_END - prev_end > vmi.largest_chunk)
vmi.largest_chunk = VMALLOC_END-prev_end;
read_unlock(&vmlist_lock);
return vmi;
}
static int uptime_read_proc(char *page, char **start, off_t off,
int count, int *eof, void *data)
{
......@@ -143,6 +179,8 @@ static int meminfo_read_proc(char *page, char **start, off_t off,
unsigned long inactive;
unsigned long active;
unsigned long free;
unsigned long vmtot;
struct vmalloc_info vmi;
get_page_state(&ps);
get_zone_counts(&active, &inactive, &free);
......@@ -155,6 +193,11 @@ static int meminfo_read_proc(char *page, char **start, off_t off,
si_swapinfo(&i);
committed = atomic_read(&vm_committed_space);
vmtot = (VMALLOC_END-VMALLOC_START)>>10;
vmi = get_vmalloc_info();
vmi.used >>= 10;
vmi.largest_chunk >>= 10;
/*
* Tagged format, for easy grepping and expansion.
*/
......@@ -177,7 +220,10 @@ static int meminfo_read_proc(char *page, char **start, off_t off,
"Mapped: %8lu kB\n"
"Slab: %8lu kB\n"
"Committed_AS: %8u kB\n"
"PageTables: %8lu kB\n",
"PageTables: %8lu kB\n"
"VmallocTotal: %8lu kB\n"
"VmallocUsed: %8lu kB\n"
"VmallocChunk: %8lu kB\n",
K(i.totalram),
K(i.freeram),
K(i.bufferram),
......@@ -196,7 +242,10 @@ static int meminfo_read_proc(char *page, char **start, off_t off,
K(ps.nr_mapped),
K(ps.nr_slab),
K(committed),
K(ps.nr_page_table_pages)
K(ps.nr_page_table_pages),
vmtot,
vmi.used,
vmi.largest_chunk
);
len += hugetlb_report_meminfo(page + len);
......@@ -386,7 +435,7 @@ static int devices_read_proc(char *page, char **start, off_t off,
extern int show_interrupts(struct seq_file *p, void *v);
static int interrupts_open(struct inode *inode, struct file *file)
{
unsigned size = PAGE_SIZE * (1 + NR_CPUS / 8);
unsigned size = 4096 * (1 + num_online_cpus() / 8);
char *buf = kmalloc(size, GFP_KERNEL);
struct seq_file *m;
int res;
......
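The largest-free-chunk walk added in get_vmalloc_info() above can be sketched in plain userspace C. This is a minimal model, not the kernel code: a sorted array of hypothetical regions stands in for the vmlist, and plain unsigned longs stand in for VMALLOC_START/VMALLOC_END.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for struct vm_struct: [addr, addr+size) regions, sorted by
 * address, inside the [start, end) vmalloc window. */
struct region { unsigned long addr; unsigned long size; };

struct vinfo { unsigned long used; unsigned long largest_chunk; };

/* Mirrors the scan in get_vmalloc_info(): track the gap between the
 * end of the previous region and the start of the next, then account
 * for the tail gap up to the end of the window. */
static struct vinfo scan(const struct region *r, size_t n,
			 unsigned long start, unsigned long end)
{
	struct vinfo vi = { 0, 0 };
	unsigned long prev_end = start;
	size_t i;

	if (n == 0) {
		/* Empty list: the whole window is one free chunk. */
		vi.largest_chunk = end - start;
		return vi;
	}
	for (i = 0; i < n; i++) {
		unsigned long gap = r[i].addr - prev_end;

		vi.used += r[i].size;
		if (gap > vi.largest_chunk)
			vi.largest_chunk = gap;
		prev_end = r[i].addr + r[i].size;
	}
	if (end - prev_end > vi.largest_chunk)
		vi.largest_chunk = end - prev_end;
	return vi;
}
```

The kernel version additionally holds vmlist_lock across the walk, since vmlist can change underneath it; the shifts by 10 in meminfo_read_proc() then convert the byte counts to kB for the VmallocUsed/VmallocChunk lines.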
......@@ -179,9 +179,9 @@ int do_select(int n, fd_set_bits *fds, long *timeout)
int retval, i, off;
long __timeout = *timeout;
read_lock(&current->files->file_lock);
spin_lock(&current->files->file_lock);
retval = max_select_fd(n, fds);
read_unlock(&current->files->file_lock);
spin_unlock(&current->files->file_lock);
if (retval < 0)
return retval;
......
......@@ -487,7 +487,9 @@ sched_find_first_bit(unsigned long b[3])
#define ext2_set_bit __test_and_set_bit
#define ext2_set_bit_atomic(l,n,a) test_and_set_bit(n,a)
#define ext2_clear_bit __test_and_clear_bit
#define ext2_clear_bit_atomic(l,n,a) test_and_clear_bit(n,a)
#define ext2_test_bit test_bit
#define ext2_find_first_zero_bit find_first_zero_bit
#define ext2_find_next_zero_bit find_next_zero_bit
......
......@@ -9,7 +9,6 @@
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
/* Note that the following two definitions are _highly_ dependent
......
......@@ -357,8 +357,12 @@ static inline int sched_find_first_bit(unsigned long *b)
*/
#define ext2_set_bit(nr,p) \
__test_and_set_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)(p))
#define ext2_set_bit_atomic(lock,nr,p) \
test_and_set_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)(p))
#define ext2_clear_bit(nr,p) \
__test_and_clear_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)(p))
#define ext2_clear_bit_atomic(lock,nr,p) \
test_and_clear_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)(p))
#define ext2_test_bit(nr,p) \
__test_bit(WORD_BITOFF_TO_LE(nr), (unsigned long *)(p))
#define ext2_find_first_zero_bit(p,sz) \
......
......@@ -13,7 +13,6 @@
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma,start,end) do { } while (0)
#define flush_cache_page(vma,vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define invalidate_dcache_range(start,end) do { } while (0)
#define clean_dcache_range(start,end) do { } while (0)
......
......@@ -70,13 +70,6 @@
cpu_cache_clean_invalidate_range((unsigned long)start, \
((unsigned long)start) + size, 0);
/*
* This is an obsolete interface; the functionality that was provided by this
* function is now merged into our flush_dcache_page, flush_icache_page,
* copy_user_page and clear_user_page functions.
*/
#define flush_page_to_ram(page) do { } while (0)
/*
* flush_dcache_page is used when the kernel has written to the page
* cache page at virtual address page->virtual.
......
......@@ -360,7 +360,9 @@ static inline int find_next_zero_bit (void * addr, int size, int offset)
#define hweight8(x) generic_hweight8(x)
#define ext2_set_bit test_and_set_bit
#define ext2_set_bit_atomic(l,n,a) test_and_set_bit(n,a)
#define ext2_clear_bit test_and_clear_bit
#define ext2_clear_bit_atomic(l,n,a) test_and_clear_bit(n,a)
#define ext2_test_bit test_bit
#define ext2_find_first_zero_bit find_first_zero_bit
#define ext2_find_next_zero_bit find_next_zero_bit
......
......@@ -121,7 +121,6 @@ extern void paging_init(void);
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
#define flush_icache_range(start, end) do { } while (0)
#define flush_icache_page(vma,pg) do { } while (0)
......
......@@ -22,9 +22,8 @@
#define RTC_AIE 0x20 /* alarm interrupt enable */
#define RTC_UIE 0x10 /* update-finished interrupt enable */
extern void gen_rtc_interrupt(unsigned long);
/* some dummy definitions */
#define RTC_BATT_BAD 0x100 /* battery bad */
#define RTC_SQWE 0x08 /* enable square-wave output */
#define RTC_DM_BINARY 0x04 /* all time/date values are BCD if clear */
#define RTC_24H 0x02 /* 24 hour mode - else hours bit 7 means pm */
......@@ -43,7 +42,7 @@ static inline unsigned char rtc_is_updating(void)
return uip;
}
static inline void get_rtc_time(struct rtc_time *time)
static inline unsigned int get_rtc_time(struct rtc_time *time)
{
unsigned long uip_watchdog = jiffies;
unsigned char ctrl;
......@@ -108,6 +107,8 @@ static inline void get_rtc_time(struct rtc_time *time)
time->tm_year += 100;
time->tm_mon--;
return RTC_24H;
}
/* Set the current date and time in the real time clock. */
......
......@@ -479,8 +479,12 @@ static __inline__ int ffs(int x)
#define ext2_set_bit(nr,addr) \
__test_and_set_bit((nr),(unsigned long*)addr)
#define ext2_set_bit_atomic(lock,nr,addr) \
test_and_set_bit((nr),(unsigned long*)addr)
#define ext2_clear_bit(nr, addr) \
__test_and_clear_bit((nr),(unsigned long*)addr)
#define ext2_clear_bit_atomic(lock,nr, addr) \
test_and_clear_bit((nr),(unsigned long*)addr)
#define ext2_test_bit(nr, addr) test_bit((nr),(unsigned long*)addr)
#define ext2_find_first_zero_bit(addr, size) \
find_first_zero_bit((unsigned long*)addr, size)
......
......@@ -9,7 +9,6 @@
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
#define flush_icache_range(start, end) do { } while (0)
#define flush_icache_page(vma,pg) do { } while (0)
......
......@@ -453,7 +453,9 @@ find_next_bit (void *addr, unsigned long size, unsigned long offset)
#define __clear_bit(nr, addr) clear_bit(nr, addr)
#define ext2_set_bit test_and_set_bit
#define ext2_set_atomic(l,n,a) test_and_set_bit(n,a)
#define ext2_clear_bit test_and_clear_bit
#define ext2_clear_atomic(l,n,a) test_and_clear_bit(n,a)
#define ext2_test_bit test_bit
#define ext2_find_first_zero_bit find_first_zero_bit
#define ext2_find_next_zero_bit find_next_zero_bit
......
......@@ -20,7 +20,6 @@
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define flush_icache_page(vma,page) do { } while (0)
#define flush_dcache_page(page) \
......
......@@ -365,6 +365,24 @@ ext2_clear_bit (int nr, volatile void *vaddr)
return retval;
}
#define ext2_set_bit_atomic(lock, nr, addr) \
({ \
int ret; \
spin_lock(lock); \
ret = ext2_set_bit((nr), (addr)); \
spin_unlock(lock); \
ret; \
})
#define ext2_clear_bit_atomic(lock, nr, addr) \
({ \
int ret; \
spin_lock(lock); \
ret = ext2_clear_bit((nr), (addr)); \
spin_unlock(lock); \
ret; \
})
extern __inline__ int
ext2_test_bit (int nr, const volatile void *vaddr)
{
......
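The ext2_set_bit_atomic()/ext2_clear_bit_atomic() wrappers added above all follow one pattern: a GCC statement expression that takes the caller-supplied superblock lock around the non-atomic bit primitive and yields its return value. A userspace sketch of the same shape, with a pthread mutex standing in for the spinlock and a hypothetical test_and_set() standing in for the arch primitive:

```c
#include <assert.h>
#include <pthread.h>

/* Non-atomic test-and-set on a bitmap of unsigned longs, playing the
 * role of the arch's ext2_set_bit(). */
static int test_and_set(int nr, unsigned long *addr)
{
	unsigned long mask = 1UL << (nr % (8 * sizeof(unsigned long)));
	unsigned long *p = addr + nr / (8 * sizeof(unsigned long));
	int old = (*p & mask) != 0;

	*p |= mask;
	return old;
}

/* Same shape as the new ext2_set_bit_atomic(): a GCC statement
 * expression that serializes the non-atomic primitive under the
 * caller's lock and evaluates to its old-bit return value. */
#define set_bit_atomic(lock, nr, addr)		\
({						\
	int __ret;				\
	pthread_mutex_lock(lock);		\
	__ret = test_and_set((nr), (addr));	\
	pthread_mutex_unlock(lock);		\
	__ret;					\
})
```

Statement expressions are a GCC extension, which is why this macro style is fine in kernel headers; architectures whose plain ext2_set_bit() is already atomic (as in several hunks above) can simply alias the atomic variant to it and ignore the lock argument.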
......@@ -106,7 +106,6 @@ extern inline void flush_cache_page(struct vm_area_struct *vma,
/* Push the page at kernel virtual address and clear the icache */
/* RZ: use cpush %bc instead of cpush %dc, cinv %ic */
#define flush_page_to_ram(page) __flush_page_to_ram(page_address(page))
extern inline void __flush_page_to_ram(void *vaddr)
{
if (CPU_IS_040_OR_060) {
......@@ -125,7 +124,7 @@ extern inline void __flush_page_to_ram(void *vaddr)
}
}
#define flush_dcache_page(page) do { } while (0)
#define flush_dcache_page(page) __flush_page_to_ram(page_address(page))
#define flush_icache_page(vma,pg) do { } while (0)
#define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
......
......@@ -79,8 +79,14 @@ static inline void clear_page(void *page)
#define copy_page(to,from) memcpy((to), (from), PAGE_SIZE)
#endif
#define clear_user_page(page, vaddr, pg) clear_page(page)
#define copy_user_page(to, from, vaddr, pg) copy_page(to, from)
#define clear_user_page(addr, vaddr, page) \
do { clear_page(addr); \
flush_dcache_page(page); \
} while (0)
#define copy_user_page(to, from, vaddr, page) \
do { copy_page(to, from); \
flush_dcache_page(page); \
} while (0)
/*
* These are used to make use of C type-checking..
......
......@@ -402,6 +402,24 @@ extern __inline__ int ext2_clear_bit(int nr, volatile void * addr)
return retval;
}
#define ext2_set_bit_atomic(lock, nr, addr) \
({ \
int ret; \
spin_lock(lock); \
ret = ext2_set_bit((nr), (addr)); \
spin_unlock(lock); \
ret; \
})
#define ext2_clear_bit_atomic(lock, nr, addr) \
({ \
int ret; \
spin_lock(lock); \
ret = ext2_clear_bit((nr), (addr)); \
spin_unlock(lock); \
ret; \
})
extern __inline__ int ext2_test_bit(int nr, const volatile void * addr)
{
int mask;
......
......@@ -10,7 +10,6 @@
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define flush_dcache_range(start,len) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
#define flush_icache_range(start,len) __flush_cache_all()
......
......@@ -824,6 +824,24 @@ extern __inline__ int ext2_clear_bit(int nr, void * addr)
return retval;
}
#define ext2_set_bit_atomic(lock, nr, addr) \
({ \
int ret; \
spin_lock(lock); \
ret = ext2_set_bit((nr), (addr)); \
spin_unlock(lock); \
ret; \
})
#define ext2_clear_bit_atomic(lock, nr, addr) \
({ \
int ret; \
spin_lock(lock); \
ret = ext2_clear_bit((nr), (addr)); \
spin_unlock(lock); \
ret; \
})
extern __inline__ int ext2_test_bit(int nr, const void * addr)
{
int mask;
......@@ -890,7 +908,9 @@ extern __inline__ unsigned long ext2_find_next_zero_bit(void *addr, unsigned lon
/* Native ext2 byte ordering, just collapse using defines. */
#define ext2_set_bit(nr, addr) test_and_set_bit((nr), (addr))
#define ext2_set_bit_atomic(lock, nr, addr) test_and_set_bit((nr), (addr))
#define ext2_clear_bit(nr, addr) test_and_clear_bit((nr), (addr))
#define ext2_clear_bit_atomic(lock, nr, addr) test_and_clear_bit((nr), (addr))
#define ext2_test_bit(nr, addr) test_bit((nr), (addr))
#define ext2_find_first_zero_bit(addr, size) find_first_zero_bit((addr), (size))
#define ext2_find_next_zero_bit(addr, size, offset) \
......
......@@ -25,8 +25,15 @@ extern void (*_copy_page)(void * to, void * from);
#define clear_page(page) _clear_page(page)
#define copy_page(to, from) _copy_page(to, from)
#define clear_user_page(page, vaddr) clear_page(page)
#define copy_user_page(to, from, vaddr) copy_page(to, from)
#define clear_user_page(addr, vaddr, page) \
do { clear_page(addr); \
flush_dcache_page(page); \
} while (0)
#define copy_user_page(to, from, vaddr, page) \
do { copy_page(to, from); \
flush_dcache_page(page); \
} while (0)
/*
* These are used to make use of C type-checking..
......
......@@ -24,7 +24,6 @@
* - flush_cache_mm(mm) flushes the specified mm context's cache lines
* - flush_cache_page(mm, vmaddr) flushes a single page
* - flush_cache_range(vma, start, end) flushes a range of pages
* - flush_page_to_ram(page) write back kernel page to ram
* - flush_icache_range(start, end) flush a range of instructions
*/
extern void (*_flush_cache_all)(void);
......@@ -39,15 +38,13 @@ extern void (*_flush_icache_range)(unsigned long start, unsigned long end);
extern void (*_flush_icache_page)(struct vm_area_struct *vma,
struct page *page);
#define flush_dcache_page(page) do { } while (0)
#define flush_cache_all() _flush_cache_all()
#define __flush_cache_all() ___flush_cache_all()
#define flush_cache_mm(mm) _flush_cache_mm(mm)
#define flush_cache_range(vma,start,end) _flush_cache_range(vma,start,end)
#define flush_cache_page(vma,page) _flush_cache_page(vma, page)
#define flush_cache_sigtramp(addr) _flush_cache_sigtramp(addr)
#define flush_page_to_ram(page) _flush_page_to_ram(page)
#define flush_dcache_page(page) _flush_page_to_ram(page)
#define flush_icache_range(start, end) _flush_icache_range(start,end)
#define flush_icache_page(vma, page) _flush_icache_page(vma, page)
......
......@@ -531,6 +531,24 @@ ext2_clear_bit(int nr, void * addr)
return retval;
}
#define ext2_set_bit_atomic(lock, nr, addr) \
({ \
int ret; \
spin_lock(lock); \
ret = ext2_set_bit((nr), (addr)); \
spin_unlock(lock); \
ret; \
})
#define ext2_clear_bit_atomic(lock, nr, addr) \
({ \
int ret; \
spin_lock(lock); \
ret = ext2_clear_bit((nr), (addr)); \
spin_unlock(lock); \
ret; \
})
extern inline int
ext2_test_bit(int nr, const void * addr)
{
......@@ -599,7 +617,9 @@ ext2_find_next_zero_bit(void *addr, unsigned long size, unsigned long offset)
/* Native ext2 byte ordering, just collapse using defines. */
#define ext2_set_bit(nr, addr) test_and_set_bit((nr), (addr))
#define ext2_set_bit_atomic(lock, nr, addr) test_and_set_bit((nr), (addr))
#define ext2_clear_bit(nr, addr) test_and_clear_bit((nr), (addr))
#define ext2_clear_bit_atomic(lock, nr, addr) test_and_clear_bit((nr), (addr))
#define ext2_test_bit(nr, addr) test_bit((nr), (addr))
#define ext2_find_first_zero_bit(addr, size) find_first_zero_bit((addr), (size))
#define ext2_find_next_zero_bit(addr, size, offset) \
......
......@@ -25,8 +25,15 @@ extern void (*_copy_page)(void * to, void * from);
#define clear_page(page) _clear_page(page)
#define copy_page(to, from) _copy_page(to, from)
#define clear_user_page(page, vaddr) clear_page(page)
#define copy_user_page(to, from, vaddr) copy_page(to, from)
#define clear_user_page(addr, vaddr, page) \
do { clear_page(addr); \
flush_dcache_page(page); \
} while (0)
#define copy_user_page(to, from, vaddr, page) \
do { copy_page(to, from); \
flush_dcache_page(page); \
} while (0)
/*
* These are used to make use of C type-checking..
......
......@@ -25,7 +25,6 @@
* - flush_cache_mm(mm) flushes the specified mm context's cache lines
* - flush_cache_page(mm, vmaddr) flushes a single page
* - flush_cache_range(vma, start, end) flushes a range of pages
* - flush_page_to_ram(page) write back kernel page to ram
*/
extern void (*_flush_cache_mm)(struct mm_struct *mm);
extern void (*_flush_cache_range)(struct vm_area_struct *vma, unsigned long start,
......@@ -34,14 +33,12 @@ extern void (*_flush_cache_page)(struct vm_area_struct *vma, unsigned long page)
extern void (*_flush_page_to_ram)(struct page * page);
#define flush_cache_all() do { } while(0)
#define flush_dcache_page(page) do { } while (0)
#ifndef CONFIG_CPU_R10000
#define flush_cache_mm(mm) _flush_cache_mm(mm)
#define flush_cache_range(vma,start,end) _flush_cache_range(vma,start,end)
#define flush_cache_page(vma,page) _flush_cache_page(vma, page)
#define flush_page_to_ram(page) _flush_page_to_ram(page)
#define flush_dcache_page(page) _flush_page_to_ram(page)
#define flush_icache_range(start, end) _flush_cache_l1()
#define flush_icache_user_range(vma, page, addr, len) \
flush_icache_page((vma), (page))
......@@ -66,7 +63,7 @@ extern void andes_flush_icache_page(unsigned long);
#define flush_cache_mm(mm) do { } while(0)
#define flush_cache_range(vma,start,end) do { } while(0)
#define flush_cache_page(vma,page) do { } while(0)
#define flush_page_to_ram(page) do { } while(0)
#define flush_dcache_page(page) do { } while(0)
#define flush_icache_range(start, end) _flush_cache_l1()
#define flush_icache_user_range(vma, page, addr, len) \
flush_icache_page((vma), (page))
......
......@@ -389,10 +389,14 @@ static __inline__ unsigned long find_next_bit(unsigned long *addr, unsigned long
*/
#ifdef __LP64__
#define ext2_set_bit(nr, addr) test_and_set_bit((nr) ^ 0x38, addr)
#define ext2_set_bit_atomic(l,nr,addr) test_and_set_bit((nr) ^ 0x38, addr)
#define ext2_clear_bit(nr, addr) test_and_clear_bit((nr) ^ 0x38, addr)
#define ext2_clear_bit_atomic(l,nr,addr) test_and_clear_bit((nr) ^ 0x38, addr)
#else
#define ext2_set_bit(nr, addr) test_and_set_bit((nr) ^ 0x18, addr)
#define ext2_set_bit_atomic(l,nr,addr) test_and_set_bit((nr) ^ 0x18, addr)
#define ext2_clear_bit(nr, addr) test_and_clear_bit((nr) ^ 0x18, addr)
#define ext2_clear_bit_atomic(l,nr,addr) test_and_clear_bit((nr) ^ 0x18, addr)
#endif
#endif /* __KERNEL__ */
......
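The `(nr) ^ 0x38` and `(nr) ^ 0x18` forms in the parisc hunk above implement the usual big-endian trick for ext2's little-endian on-disk bitmaps: XOR-ing the bit number flips the byte index within the word (bytes 0..7 of a 64-bit word for 0x38, bytes 0..3 of a 32-bit word for 0x18) while leaving the bit-within-byte offset (the low three bits) alone. A small sketch of just that index transform (hypothetical helper names):

```c
#include <assert.h>

/* Convert a little-endian (ext2 on-disk) bit number to the native
 * bit number on a big-endian machine. 0x38 == 0b111000 flips the
 * byte-index bits of a 64-bit word; 0x18 == 0b011000 flips those of
 * a 32-bit word. The low three bits (bit within a byte) pass through. */
static int le_to_native_bit64(int nr) { return nr ^ 0x38; }
static int le_to_native_bit32(int nr) { return nr ^ 0x18; }
```

So on 64-bit parisc, little-endian bit 0 lands in the most significant byte of the native word (bit 56), and the transform is its own inverse, which is why the same XOR works for both set and clear.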
......@@ -18,11 +18,6 @@
#define flush_kernel_dcache_range(start,size) \
flush_kernel_dcache_range_asm((start), (start)+(size));
static inline void
flush_page_to_ram(struct page *page)
{
}
extern void flush_cache_all_local(void);
static inline void cacheflush_h_tmp_function(void *dummy)
......
......@@ -24,7 +24,7 @@
#define RTC_AIE 0x20 /* alarm interrupt enable */
#define RTC_UIE 0x10 /* update-finished interrupt enable */
extern void gen_rtc_interrupt(unsigned long);
#define RTC_BATT_BAD 0x100 /* battery bad */
/* some dummy definitions */
#define RTC_SQWE 0x08 /* enable square-wave output */
......@@ -44,16 +44,16 @@ static const unsigned short int __mon_yday[2][13] =
{ 0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366 }
};
static int get_rtc_time(struct rtc_time *wtime)
static inline unsigned int get_rtc_time(struct rtc_time *wtime)
{
struct pdc_tod tod_data;
long int days, rem, y;
const unsigned short int *ip;
if(pdc_tod_read(&tod_data) < 0)
return -1;
return RTC_24H | RTC_BATT_BAD;
// most of the remainder of this function is:
// Copyright (C) 1991, 1993, 1997, 1998 Free Software Foundation, Inc.
// This was originally a part of the GNU C Library.
......@@ -69,7 +69,7 @@ static int get_rtc_time(struct rtc_time *wtime)
wtime->tm_sec = rem % 60;
y = 1970;
#define DIV(a, b) ((a) / (b) - ((a) % (b) < 0))
#define LEAPS_THRU_END_OF(y) (DIV (y, 4) - DIV (y, 100) + DIV (y, 400))
......@@ -92,8 +92,8 @@ static int get_rtc_time(struct rtc_time *wtime)
days -= ip[y];
wtime->tm_mon = y;
wtime->tm_mday = days + 1;
return 0;
return RTC_24H;
}
static int set_rtc_time(struct rtc_time *wtime)
......
......@@ -392,7 +392,9 @@ static __inline__ unsigned long find_next_zero_bit(unsigned long * addr,
#define ext2_set_bit(nr, addr) __test_and_set_bit((nr) ^ 0x18, (unsigned long *)(addr))
#define ext2_set_bit_atomic(lock, nr, addr) test_and_set_bit((nr) ^ 0x18, (unsigned long *)(addr))
#define ext2_clear_bit(nr, addr) __test_and_clear_bit((nr) ^ 0x18, (unsigned long *)(addr))
#define ext2_clear_bit_atomic(lock, nr, addr) test_and_clear_bit((nr) ^ 0x18, (unsigned long *)(addr))
static __inline__ int ext2_test_bit(int nr, __const__ void * addr)
{
......
......@@ -23,7 +23,6 @@
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma, a, b) do { } while (0)
#define flush_cache_page(vma, p) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define flush_icache_page(vma, page) do { } while (0)
extern void flush_dcache_page(struct page *page);
......
......@@ -338,6 +338,25 @@ static __inline__ int __test_and_clear_le_bit(unsigned long nr, unsigned long *a
__test_and_set_le_bit((nr),(unsigned long*)addr)
#define ext2_clear_bit(nr, addr) \
__test_and_clear_le_bit((nr),(unsigned long*)addr)
#define ext2_set_bit_atomic(lock, nr, addr) \
({ \
int ret; \
spin_lock(lock); \
ret = ext2_set_bit((nr), (addr)); \
spin_unlock(lock); \
ret; \
})
#define ext2_clear_bit_atomic(lock, nr, addr) \
({ \
int ret; \
spin_lock(lock); \
ret = ext2_clear_bit((nr), (addr)); \
spin_unlock(lock); \
ret; \
})
#define ext2_test_bit(nr, addr) test_le_bit((nr),(unsigned long*)addr)
#define ext2_find_first_zero_bit(addr, size) \
find_first_zero_le_bit((unsigned long*)addr, size)
......
......@@ -13,7 +13,6 @@
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define flush_icache_page(vma, page) do { } while (0)
extern void flush_dcache_page(struct page *page);
......
......@@ -805,8 +805,12 @@ extern __inline__ int fls(int x)
#define ext2_set_bit(nr, addr) \
test_and_set_bit((nr)^24, (unsigned long *)addr)
#define ext2_set_bit_atomic(lock, nr, addr) \
test_and_set_bit((nr)^24, (unsigned long *)addr)
#define ext2_clear_bit(nr, addr) \
test_and_clear_bit((nr)^24, (unsigned long *)addr)
#define ext2_clear_bit_atomic(lock, nr, addr) \
test_and_clear_bit((nr)^24, (unsigned long *)addr)
#define ext2_test_bit(nr, addr) \
test_bit((nr)^24, (unsigned long *)addr)
......
......@@ -9,7 +9,6 @@
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
#define flush_icache_range(start, end) do { } while (0)
#define flush_icache_page(vma,pg) do { } while (0)
......
......@@ -838,8 +838,12 @@ extern __inline__ int fls(int x)
#define ext2_set_bit(nr, addr) \
test_and_set_bit((nr)^56, (unsigned long *)addr)
#define ext2_set_bit_atomic(lock, nr, addr) \
test_and_set_bit((nr)^56, (unsigned long *)addr)
#define ext2_clear_bit(nr, addr) \
test_and_clear_bit((nr)^56, (unsigned long *)addr)
#define ext2_clear_bit_atomic(lock, nr, addr) \
test_and_clear_bit((nr)^56, (unsigned long *)addr)
#define ext2_test_bit(nr, addr) \
test_bit((nr)^56, (unsigned long *)addr)
......
......@@ -9,7 +9,6 @@
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
#define flush_icache_range(start, end) do { } while (0)
#define flush_icache_page(vma,pg) do { } while (0)
......
......@@ -344,6 +344,24 @@ static __inline__ unsigned long ext2_find_next_zero_bit(void *addr, unsigned lon
}
#endif
#define ext2_set_bit_atomic(lock, nr, addr) \
({ \
int ret; \
spin_lock(lock); \
ret = ext2_set_bit((nr), (addr)); \
spin_unlock(lock); \
ret; \
})
#define ext2_clear_bit_atomic(lock, nr, addr) \
({ \
int ret; \
spin_lock(lock); \
ret = ext2_clear_bit((nr), (addr)); \
spin_unlock(lock); \
ret; \
})
/* Bitmap functions for the minix filesystem. */
#define minix_test_and_set_bit(nr,addr) test_and_set_bit(nr,addr)
#define minix_set_bit(nr,addr) set_bit(nr,addr)
......
......@@ -26,7 +26,6 @@ extern void paging_init(void);
* - flush_cache_range(vma, start, end) flushes a range of pages
*
* - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache
* - flush_page_to_ram(page) write back kernel page to ram
* - flush_icache_range(start, end) flushes(invalidates) a range for icache
* - flush_icache_page(vma, pg) flushes(invalidates) a page for icache
*
......@@ -37,7 +36,6 @@ extern void paging_init(void);
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
#define flush_icache_range(start, end) do { } while (0)
#define flush_icache_page(vma,pg) do { } while (0)
......@@ -63,7 +61,6 @@ extern void flush_dcache_page(struct page *pg);
extern void flush_icache_range(unsigned long start, unsigned long end);
extern void flush_cache_sigtramp(unsigned long addr);
#define flush_page_to_ram(page) do { } while (0)
#define flush_icache_page(vma,pg) do { } while (0)
#define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
......
......@@ -455,6 +455,25 @@ static __inline__ unsigned long find_next_zero_le_bit(void *addr, unsigned long
#define ext2_set_bit __test_and_set_le_bit
#define ext2_clear_bit __test_and_clear_le_bit
#define ext2_set_bit_atomic(lock, nr, addr) \
({ \
int ret; \
spin_lock(lock); \
ret = ext2_set_bit((nr), (addr)); \
spin_unlock(lock); \
ret; \
})
#define ext2_clear_bit_atomic(lock, nr, addr) \
({ \
int ret; \
spin_lock(lock); \
ret = ext2_clear_bit((nr), (addr)); \
spin_unlock(lock); \
ret; \
})
#define ext2_test_bit test_le_bit
#define ext2_find_first_zero_bit find_first_zero_le_bit
#define ext2_find_next_zero_bit find_next_zero_le_bit
......
......@@ -64,7 +64,6 @@ BTFIXUPDEF_CALL(void, flush_sig_insns, struct mm_struct *, unsigned long)
extern void sparc_flush_page_to_ram(struct page *page);
#define flush_page_to_ram(page) do { } while (0)
#define flush_dcache_page(page) sparc_flush_page_to_ram(page)
#endif /* _SPARC_CACHEFLUSH_H */
......@@ -351,7 +351,9 @@ static __inline__ unsigned long find_next_zero_le_bit(unsigned long *addr, unsig
#ifdef __KERNEL__
#define ext2_set_bit(nr,addr) test_and_set_le_bit((nr),(unsigned long *)(addr))
#define ext2_set_bit_atomic(lock,nr,addr) test_and_set_le_bit((nr),(unsigned long *)(addr))
#define ext2_clear_bit(nr,addr) test_and_clear_le_bit((nr),(unsigned long *)(addr))
#define ext2_clear_bit_atomic(lock,nr,addr) test_and_clear_le_bit((nr),(unsigned long *)(addr))
#define ext2_test_bit(nr,addr) test_le_bit((nr),(unsigned long *)(addr))
#define ext2_find_first_zero_bit(addr, size) \
find_first_zero_le_bit((unsigned long *)(addr), (size))
......
@@ -50,7 +50,4 @@ extern void smp_flush_cache_all(void);
extern void flush_dcache_page(struct page *page);
/* This is unnecessary on the SpitFire since D-CACHE is write-through. */
#define flush_page_to_ram(page) do { } while (0)
#endif /* _SPARC64_CACHEFLUSH_H */
@@ -252,7 +252,9 @@ static inline int sched_find_first_bit(unsigned long *b)
#define hweight8(x) generic_hweight8 (x)
#define ext2_set_bit test_and_set_bit
#define ext2_set_bit_atomic(l,n,a) test_and_set_bit(n,a)
#define ext2_clear_bit test_and_clear_bit
#define ext2_clear_bit_atomic(l,n,a) test_and_clear_bit(n,a)
#define ext2_test_bit test_bit
#define ext2_find_first_zero_bit find_first_zero_bit
#define ext2_find_next_zero_bit find_next_zero_bit
......
@@ -29,7 +29,6 @@
#define flush_cache_mm(mm) ((void)0)
#define flush_cache_range(vma, start, end) ((void)0)
#define flush_cache_page(vma, vmaddr) ((void)0)
#define flush_page_to_ram(page) ((void)0)
#define flush_dcache_page(page) ((void)0)
#define flush_icache() ((void)0)
#define flush_icache_range(start, end) ((void)0)
......
@@ -62,7 +62,6 @@ extern void nb85e_cache_flush_icache_user_range (struct vm_area_struct *vma,
unsigned long adr, int len);
extern void nb85e_cache_flush_sigtramp (unsigned long addr);
#define flush_page_to_ram(x) ((void)0)
#define flush_cache_all nb85e_cache_flush_all
#define flush_cache_mm nb85e_cache_flush_mm
#define flush_cache_range nb85e_cache_flush_range
......
@@ -40,8 +40,14 @@
#define clear_page(page) memset ((void *)(page), 0, PAGE_SIZE)
#define copy_page(to, from) memcpy ((void *)(to), (void *)from, PAGE_SIZE)
#define clear_user_page(page, vaddr, pg) clear_page (page)
#define copy_user_page(to, from, vaddr,pg) copy_page (to, from)
#define clear_user_page(addr, vaddr, page) \
do { clear_page(addr); \
flush_dcache_page(page); \
} while (0)
#define copy_user_page(to, from, vaddr, page) \
do { copy_page(to, from); \
flush_dcache_page(page); \
} while (0)
#ifdef STRICT_MM_TYPECHECKS
/*
......
@@ -487,8 +487,12 @@ static __inline__ int ffs(int x)
#define ext2_set_bit(nr,addr) \
__test_and_set_bit((nr),(unsigned long*)addr)
#define ext2_set_bit_atomic(lock,nr,addr) \
test_and_set_bit((nr),(unsigned long*)addr)
#define ext2_clear_bit(nr, addr) \
__test_and_clear_bit((nr),(unsigned long*)addr)
#define ext2_clear_bit_atomic(lock,nr,addr) \
test_and_clear_bit((nr),(unsigned long*)addr)
#define ext2_test_bit(nr, addr) test_bit((nr),(unsigned long*)addr)
#define ext2_find_first_zero_bit(addr, size) \
find_first_zero_bit((unsigned long*)addr, size)
......
@@ -9,7 +9,6 @@
#define flush_cache_mm(mm) do { } while (0)
#define flush_cache_range(vma, start, end) do { } while (0)
#define flush_cache_page(vma, vmaddr) do { } while (0)
#define flush_page_to_ram(page) do { } while (0)
#define flush_dcache_page(page) do { } while (0)
#define flush_icache_range(start, end) do { } while (0)
#define flush_icache_page(vma,pg) do { } while (0)
......
/*
* Per-blockgroup locking for ext2 and ext3.
*
* Simple hashed spinlocking.
*/
#include <linux/config.h>
#include <linux/spinlock.h>
#include <linux/cache.h>
#ifdef CONFIG_SMP
/*
* We want a power-of-two. Is there a better way than this?
*/
#if NR_CPUS >= 32
#define NR_BG_LOCKS 128
#elif NR_CPUS >= 16
#define NR_BG_LOCKS 64
#elif NR_CPUS >= 8
#define NR_BG_LOCKS 32
#elif NR_CPUS >= 4
#define NR_BG_LOCKS 16
#elif NR_CPUS >= 2
#define NR_BG_LOCKS 8
#else
#define NR_BG_LOCKS 4
#endif
#else /* CONFIG_SMP */
#define NR_BG_LOCKS 1
#endif /* CONFIG_SMP */
struct bgl_lock {
spinlock_t lock;
} ____cacheline_aligned_in_smp;
struct blockgroup_lock {
struct bgl_lock locks[NR_BG_LOCKS];
};
static inline void bgl_lock_init(struct blockgroup_lock *bgl)
{
int i;
for (i = 0; i < NR_BG_LOCKS; i++)
spin_lock_init(&bgl->locks[i].lock);
}
/*
* The accessor is a macro so we can embed a blockgroup_lock into different
* superblock types
*/
#define sb_bgl_lock(sb, block_group) \
(&(sb)->s_blockgroup_lock.locks[(block_group) & (NR_BG_LOCKS-1)].lock)
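The `sb_bgl_lock()` accessor above hashes a block group number onto one of `NR_BG_LOCKS` spinlocks by masking with `NR_BG_LOCKS - 1`, which is why `NR_BG_LOCKS` must be a power of two. A userspace sketch of the same hashing, with pthread mutexes standing in for the cacheline-aligned kernel spinlocks (a sketch under those substitutions, not the kernel implementation):

```c
#include <assert.h>
#include <pthread.h>

#define NR_BG_LOCKS 16	/* must be a power of two, as in the header */

struct toy_blockgroup_lock {
	pthread_mutex_t locks[NR_BG_LOCKS];
};

static void toy_bgl_lock_init(struct toy_blockgroup_lock *bgl)
{
	for (int i = 0; i < NR_BG_LOCKS; i++)
		pthread_mutex_init(&bgl->locks[i], NULL);
}

/* Same hash as sb_bgl_lock(): mask the group number down to a lock slot,
 * so many block groups share a fixed, small set of locks. */
static pthread_mutex_t *toy_bgl_lock_ptr(struct toy_blockgroup_lock *bgl,
					 unsigned int block_group)
{
	return &bgl->locks[block_group & (NR_BG_LOCKS - 1)];
}
```

With `NR_BG_LOCKS` at 16, groups 3 and 19 hash to the same lock (`19 & 15 == 3`) while group 4 gets a different one; contention is only possible between groups that collide in the hash.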
@@ -32,6 +32,8 @@ typedef struct bootmem_data {
void *node_bootmem_map;
unsigned long last_offset;
unsigned long last_pos;
unsigned long last_success; /* Previous allocation point. To speed
* up searching */
} bootmem_data_t;
extern unsigned long __init bootmem_bootmap_pages (unsigned long);
......
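The new `last_success` field caches where the previous bootmem allocation ended, so the next linear scan of the bootmem bitmap can start there instead of re-walking the already-allocated prefix from bit 0. A toy single-page allocator illustrating the idea (all names are illustrative; the real allocator also handles alignment, goal addresses, and multi-page runs):

```c
#include <assert.h>
#include <string.h>

#define MAP_BITS 1024

struct toy_bootmem {
	unsigned char map[MAP_BITS / 8];	/* 1 bit per page: 1 = allocated */
	unsigned long last_success;		/* previous allocation point */
};

static int toy_test_and_set(struct toy_bootmem *bd, unsigned long i)
{
	int old = (bd->map[i / 8] >> (i % 8)) & 1;

	bd->map[i / 8] |= 1 << (i % 8);
	return old;
}

/* Find one free page, scanning from the cached hint first. */
static long toy_alloc(struct toy_bootmem *bd)
{
	for (unsigned long n = 0; n < MAP_BITS; n++) {
		unsigned long i = (bd->last_success + n) % MAP_BITS;

		if (!toy_test_and_set(bd, i)) {
			bd->last_success = i + 1;	/* next scan starts here */
			return i;
		}
	}
	return -1;	/* out of memory */
}
```

Starting each scan at `last_success` keeps a run of N allocations roughly linear in N overall, rather than rescanning the full allocated prefix every time.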
@@ -16,6 +16,9 @@
#ifndef _LINUX_EXT2_FS_SB
#define _LINUX_EXT2_FS_SB
#include <linux/blockgroup_lock.h>
#include <linux/percpu_counter.h>
/*
* second extended-fs super-block data in memory
*/
@@ -45,6 +48,10 @@ struct ext2_sb_info {
u32 s_next_generation;
unsigned long s_dir_count;
u8 *s_debts;
struct percpu_counter s_freeblocks_counter;
struct percpu_counter s_freeinodes_counter;
struct percpu_counter s_dirs_counter;
struct blockgroup_lock s_blockgroup_lock;
};
#endif /* _LINUX_EXT2_FS_SB */
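The three `percpu_counter` fields added above replace plain superblock-wide counters: each CPU batches small deltas in a local counter and folds them into the shared count only when the batch crosses a threshold, so the shared cacheline is touched rarely. A minimal single-threaded sketch of that batching scheme (the names and batch size are illustrative, not the kernel's):

```c
#include <assert.h>

#define NR_CPUS	4
#define BATCH	32	/* fold threshold; illustrative value */

struct toy_percpu_counter {
	long count;		/* shared, approximately up to date */
	long local[NR_CPUS];	/* per-cpu pending deltas */
};

static void toy_counter_add(struct toy_percpu_counter *c, int cpu, long amount)
{
	long v = c->local[cpu] + amount;

	if (v >= BATCH || v <= -BATCH) {
		c->count += v;		/* fold the batch into the shared count */
		c->local[cpu] = 0;
	} else {
		c->local[cpu] = v;	/* cheap, cpu-local update */
	}
}

/* Exact value: shared count plus every cpu's pending delta. */
static long toy_counter_sum(struct toy_percpu_counter *c)
{
	long sum = c->count;

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		sum += c->local[cpu];
	return sum;
}
```

`toy_counter_sum()` is the expensive exact read; fast paths that tolerate slack can read `count` directly, which is the same trade-off the kernel's `percpu_counter` interface makes.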
@@ -21,7 +21,7 @@
*/
struct files_struct {
atomic_t count;
rwlock_t file_lock; /* Protects all the below members. Nests inside tsk->alloc_lock */
spinlock_t file_lock; /* Protects all the below members. Nests inside tsk->alloc_lock */
int max_fds;
int max_fdset;
int next_fd;
......
@@ -67,7 +67,6 @@ static inline void memclear_highpage_flush(struct page *page, unsigned int offse
kaddr = kmap_atomic(page, KM_USER0);
memset((char *)kaddr + offset, 0, size);
flush_dcache_page(page);
flush_page_to_ram(page);
kunmap_atomic(kaddr, KM_USER0);
}
......
@@ -6,7 +6,7 @@
#define INIT_FILES \
{ \
.count = ATOMIC_INIT(1), \
.file_lock = RW_LOCK_UNLOCKED, \
.file_lock = SPIN_LOCK_UNLOCKED, \
.max_fds = NR_OPEN_DEFAULT, \
.max_fdset = __FD_SETSIZE, \
.next_fd = 0, \
......
@@ -486,6 +486,8 @@ extern void free_area_init(unsigned long * zones_size);
extern void free_area_init_node(int nid, pg_data_t *pgdat, struct page *pmap,
unsigned long * zones_size, unsigned long zone_start_pfn,
unsigned long *zholes_size);
extern void memmap_init_zone(struct page *, unsigned long, int,
unsigned long, unsigned long);
extern void mem_init(void);
extern void show_mem(void);
extern void si_meminfo(struct sysinfo * val);
......