Commit aab008db authored by Linus Torvalds

Merge tag 'stable/for-linus-3.4' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/mm

Pull cleancache changes from Konrad Rzeszutek Wilk:
 "This has some patches for the cleancache API that should have been
  submitted a _long_ time ago.  They are basically cleanups:

   - rename of flush to invalidate

   - moving reporting of statistics into debugfs

   - use __read_mostly as necessary.

  Oh, and also the MAINTAINERS file change.  The files (except the
  MAINTAINERS file) have been in #linux-next for months now.  The late
  addition of MAINTAINERS file is a brain-fart on my side - didn't
  realize I needed that just until I was typing this up - and I based
  that patch on v3.3 - so the tree is on top of v3.3."

* tag 'stable/for-linus-3.4' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/mm:
  MAINTAINERS: Adding cleancache API to the list.
  mm: cleancache: Use __read_mostly as appropiate.
  mm: cleancache: report statistics via debugfs instead of sysfs.
  mm: zcache/tmem/cleancache: s/flush/invalidate/
  mm: cleancache: s/flush/invalidate/
parents 4f5b1aff 16c0cfa4
-What:		/sys/kernel/mm/cleancache/
-Date:		April 2011
-Contact:	Dan Magenheimer <dan.magenheimer@oracle.com>
-Description:
-		/sys/kernel/mm/cleancache/ contains a number of files which
-		record a count of various cleancache operations
-		(sum across all filesystems):
-			succ_gets
-			failed_gets
-			puts
-			flushes
@@ -46,10 +46,11 @@ a negative return value indicates failure. A "put_page" will copy a
 the pool id, a file key, and a page index into the file. (The combination
 of a pool id, a file key, and an index is sometimes called a "handle".)
 A "get_page" will copy the page, if found, from cleancache into kernel memory.
-A "flush_page" will ensure the page no longer is present in cleancache;
-a "flush_inode" will flush all pages associated with the specified file;
-and, when a filesystem is unmounted, a "flush_fs" will flush all pages in
-all files specified by the given pool id and also surrender the pool id.
+An "invalidate_page" will ensure the page no longer is present in cleancache;
+an "invalidate_inode" will invalidate all pages associated with the specified
+file; and, when a filesystem is unmounted, an "invalidate_fs" will invalidate
+all pages in all files specified by the given pool id and also surrender
+the pool id.
 An "init_shared_fs", like init_fs, obtains a pool id but tells cleancache
 to treat the pool as shared using a 128-bit UUID as a key. On systems
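To make the renamed operations concrete: below is a minimal sketch of a backend against this API, assuming the v3.4-era signatures visible in the include/linux/cleancache.h hunk further down. The noop_* names are invented for illustration, and this backend simply declines to cache anything; a real one (compare the zcache and Xen tmem ops tables later in this diff) stores and retrieves page contents.

    /*
     * Editorial sketch, not part of this series.  Only the field names,
     * signatures, and cleancache_register_ops() are taken from this
     * commit's hunks; the noop_* backend itself is invented.
     */
    #include <linux/module.h>
    #include <linux/mm.h>
    #include <linux/cleancache.h>

    static int noop_init_fs(size_t pagesize)
    {
            return 0;       /* a real backend hands out unique pool ids */
    }

    static int noop_init_shared_fs(char *uuid, size_t pagesize)
    {
            return -1;      /* refuse shared pools */
    }

    static int noop_get_page(int pool_id, struct cleancache_filekey key,
                             pgoff_t index, struct page *page)
    {
            return -1;      /* never holds anything, so never a hit */
    }

    static void noop_put_page(int pool_id, struct cleancache_filekey key,
                              pgoff_t index, struct page *page)
    {
            /* a real backend would copy the page into its own store here */
    }

    static void noop_invalidate_page(int pool_id, struct cleancache_filekey key,
                                     pgoff_t index)
    {
    }

    static void noop_invalidate_inode(int pool_id, struct cleancache_filekey key)
    {
    }

    static void noop_invalidate_fs(int pool_id)
    {
    }

    static struct cleancache_ops noop_cleancache_ops = {
            .init_fs = noop_init_fs,
            .init_shared_fs = noop_init_shared_fs,
            .get_page = noop_get_page,
            .put_page = noop_put_page,
            .invalidate_page = noop_invalidate_page,
            .invalidate_inode = noop_invalidate_inode,
            .invalidate_fs = noop_invalidate_fs,
    };

    static int __init noop_cleancache_init(void)
    {
            /* returns the previously registered ops; a chaining backend
             * would save them instead of discarding the result */
            (void)cleancache_register_ops(&noop_cleancache_ops);
            return 0;
    }
    module_init(noop_cleancache_init);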
@@ -62,12 +63,12 @@ of the kernel (e.g. by "tools" that control cleancache). Or a
 cleancache implementation can simply disable shared_init by always
 returning a negative value.

-If a get_page is successful on a non-shared pool, the page is flushed (thus
-making cleancache an "exclusive" cache). On a shared pool, the page
-is NOT flushed on a successful get_page so that it remains accessible to
+If a get_page is successful on a non-shared pool, the page is invalidated
+(thus making cleancache an "exclusive" cache). On a shared pool, the page
+is NOT invalidated on a successful get_page so that it remains accessible to
 other sharers. The kernel is responsible for ensuring coherency between
 cleancache (shared or not), the page cache, and the filesystem, using
-cleancache flush operations as required.
+cleancache invalidate operations as required.

 Note that cleancache must enforce put-put-get coherency and get-get
 coherency. For the former, if two puts are made to the same handle but
@@ -77,20 +78,20 @@ if a get for a given handle fails, subsequent gets for that handle will
 never succeed unless preceded by a successful put with that handle.

 Last, cleancache provides no SMP serialization guarantees; if two
-different Linux threads are simultaneously putting and flushing a page
+different Linux threads are simultaneously putting and invalidating a page
 with the same handle, the results are indeterminate. Callers must
 lock the page to ensure serial behavior.

 CLEANCACHE PERFORMANCE METRICS

-Cleancache monitoring is done by sysfs files in the
-/sys/kernel/mm/cleancache directory. The effectiveness of cleancache
+If properly configured, monitoring of cleancache is done via debugfs in
+the /sys/kernel/debug/mm/cleancache directory. The effectiveness of cleancache
 can be measured (across all filesystems) with:

 succ_gets	- number of gets that were successful
 failed_gets	- number of gets that failed
 puts		- number of puts attempted (all "succeed")
-flushes		- number of flushes attempted
+invalidates	- number of invalidates attempted

 A backend implementation may provide additional metrics.
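As a usage sketch, not part of the patch set: once debugfs is mounted the counters are plain files, so a hit ratio can be computed from userspace. The helper below is hypothetical and assumes the directory is /sys/kernel/debug/cleancache, matching the debugfs_create_dir("cleancache", NULL) call in the mm/cleancache.c hunk near the end of this diff.

    /* Hypothetical userspace helper: print the cleancache hit ratio from
     * the debugfs counters.  The path is an assumption (see above). */
    #include <stdio.h>

    static unsigned long long read_counter(const char *name)
    {
            char path[256];
            unsigned long long val = 0;
            FILE *f;

            snprintf(path, sizeof(path),
                     "/sys/kernel/debug/cleancache/%s", name);
            f = fopen(path, "r");
            if (!f)
                    return 0;       /* debugfs absent or not mounted */
            if (fscanf(f, "%llu", &val) != 1)
                    val = 0;
            fclose(f);
            return val;
    }

    int main(void)
    {
            unsigned long long ok = read_counter("succ_gets");
            unsigned long long miss = read_counter("failed_gets");
            unsigned long long total = ok + miss;

            printf("gets: %llu hit, %llu miss (%.1f%% hit ratio)\n",
                   ok, miss, total ? 100.0 * ok / total : 0.0);
            return 0;
    }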
@@ -143,7 +144,7 @@ systems.

 The core hooks for cleancache in VFS are in most cases a single line
 and the minimum set are placed precisely where needed to maintain
-coherency (via cleancache_flush operations) between cleancache,
+coherency (via cleancache_invalidate operations) between cleancache,
 the page cache, and disk. All hooks compile into nothingness if
 cleancache is config'ed off and turn into a function-pointer-
 compare-to-NULL if config'ed on but no backend claims the ops
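The hook pattern described above, in simplified sketch form (the real wrappers appear in the include/linux/cleancache.h hunk further down; this is an illustration, not new code from the series):

    #ifdef CONFIG_CLEANCACHE
    static inline void cleancache_invalidate_inode(struct address_space *mapping)
    {
            /* one __read_mostly flag test, then at most an indirect call */
            if (cleancache_enabled && cleancache_fs_enabled_mapping(mapping))
                    __cleancache_invalidate_inode(mapping);
    }
    #else
    static inline void cleancache_invalidate_inode(struct address_space *mapping)
    {
            /* config'ed off: the hook compiles into nothingness */
    }
    #endif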
@@ -184,15 +185,15 @@ or for real kernel-addressable RAM, it makes perfect sense for
 transcendent memory.

 4) Why is non-shared cleancache "exclusive"? And where is the
-   page "flushed" after a "get"? (Minchan Kim)
+   page "invalidated" after a "get"? (Minchan Kim)

 The main reason is to free up space in transcendent memory and
-to avoid unnecessary cleancache_flush calls. If you want inclusive,
+to avoid unnecessary cleancache_invalidate calls. If you want inclusive,
 the page can be "put" immediately following the "get". If
 put-after-get for inclusive becomes common, the interface could
-be easily extended to add a "get_no_flush" call.
+be easily extended to add a "get_no_invalidate" call.

-The flush is done by the cleancache backend implementation.
+The invalidate is done by the cleancache backend implementation.
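The put-after-get idiom mentioned above, as a two-line sketch; this is editorial and assumes the convention that cleancache_get_page() returns 0 on a hit, with the caller holding the page lock per the SMP rules earlier in this document:

    if (cleancache_get_page(page) == 0)     /* exclusive get emptied the slot */
            cleancache_put_page(page);      /* put it back: inclusive behaviour */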
 5) What's the performance impact?
@@ -222,7 +223,7 @@ Some points for a filesystem to consider:
   as tmpfs should not enable cleancache)
 - To ensure coherency/correctness, the FS must ensure that all
   file removal or truncation operations either go through VFS or
-  add hooks to do the equivalent cleancache "flush" operations
+  add hooks to do the equivalent cleancache "invalidate" operations
 - To ensure coherency/correctness, either inode numbers must
   be unique across the lifetime of the on-disk file OR the
   FS must provide an "encode_fh" function.
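Per the considerations above, a filesystem opts in with a single mount-time hook. A hypothetical example follows; the myfs_* scaffolding is invented, while cleancache_init_fs() is the real entry point (its __cleancache_init_fs worker is visible in the header hunk below):

    #include <linux/fs.h>
    #include <linux/cleancache.h>

    static int myfs_fill_super(struct super_block *sb, void *data, int silent)
    {
            /* ... normal superblock setup elided ... */
            cleancache_init_fs(sb);         /* obtain a cleancache pool id */
            return 0;
    }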
@@ -243,11 +244,11 @@ If cleancache would use the inode virtual address instead of
 inode/filehandle, the pool id could be eliminated. But, this
 won't work because cleancache retains pagecache data pages
 persistently even when the inode has been pruned from the
-inode unused list, and only flushes the data page if the file
+inode unused list, and only invalidates the data page if the file
 gets removed/truncated. So if cleancache used the inode kva,
 there would be potential coherency issues if/when the inode
 kva is reused for a different file. Alternately, if cleancache
-flushed the pages when the inode kva was freed, much of the value
+invalidated the pages when the inode kva was freed, much of the value
 of cleancache would be lost because the cache of pages in cleanache
 is potentially much larger than the kernel pagecache and is most
 useful if the pages survive inode cache removal.
......
@@ -1832,6 +1832,13 @@ L:	alsa-devel@alsa-project.org (moderated for non-subscribers)
 S:	Supported
 F:	sound/soc/codecs/cs4270*

+CLEANCACHE API
+M:	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
+L:	linux-kernel@vger.kernel.org
+S:	Maintained
+F:	mm/cleancache.c
+F:	include/linux/cleancache.h
+
 CLK API
 M:	Russell King <linux@arm.linux.org.uk>
 F:	include/linux/clk.h
......
@@ -1811,9 +1811,9 @@ static int zcache_cleancache_init_shared_fs(char *uuid, size_t pagesize)
 static struct cleancache_ops zcache_cleancache_ops = {
 	.put_page = zcache_cleancache_put_page,
 	.get_page = zcache_cleancache_get_page,
-	.flush_page = zcache_cleancache_flush_page,
-	.flush_inode = zcache_cleancache_flush_inode,
-	.flush_fs = zcache_cleancache_flush_fs,
+	.invalidate_page = zcache_cleancache_flush_page,
+	.invalidate_inode = zcache_cleancache_flush_inode,
+	.invalidate_fs = zcache_cleancache_flush_fs,
 	.init_shared_fs = zcache_cleancache_init_shared_fs,
 	.init_fs = zcache_cleancache_init_fs
 };
@@ -1921,8 +1921,8 @@ static void zcache_frontswap_init(unsigned ignored)
 static struct frontswap_ops zcache_frontswap_ops = {
 	.put_page = zcache_frontswap_put_page,
 	.get_page = zcache_frontswap_get_page,
-	.flush_page = zcache_frontswap_flush_page,
-	.flush_area = zcache_frontswap_flush_area,
+	.invalidate_page = zcache_frontswap_flush_page,
+	.invalidate_area = zcache_frontswap_flush_area,
 	.init = zcache_frontswap_init
 };
......
@@ -242,9 +242,9 @@ __setup("nocleancache", no_cleancache);
 static struct cleancache_ops tmem_cleancache_ops = {
 	.put_page = tmem_cleancache_put_page,
 	.get_page = tmem_cleancache_get_page,
-	.flush_page = tmem_cleancache_flush_page,
-	.flush_inode = tmem_cleancache_flush_inode,
-	.flush_fs = tmem_cleancache_flush_fs,
+	.invalidate_page = tmem_cleancache_flush_page,
+	.invalidate_inode = tmem_cleancache_flush_inode,
+	.invalidate_fs = tmem_cleancache_flush_fs,
 	.init_shared_fs = tmem_cleancache_init_shared_fs,
 	.init_fs = tmem_cleancache_init_fs
 };
@@ -369,8 +369,8 @@ __setup("nofrontswap", no_frontswap);
 static struct frontswap_ops tmem_frontswap_ops = {
 	.put_page = tmem_frontswap_put_page,
 	.get_page = tmem_frontswap_get_page,
-	.flush_page = tmem_frontswap_flush_page,
-	.flush_area = tmem_frontswap_flush_area,
+	.invalidate_page = tmem_frontswap_flush_page,
+	.invalidate_area = tmem_frontswap_flush_area,
 	.init = tmem_frontswap_init
 };
 #endif
......
@@ -109,7 +109,7 @@ void invalidate_bdev(struct block_device *bdev)
 	/* 99% of the time, we don't need to flush the cleancache on the bdev.
 	 * But, for the strange corners, lets be cautious
 	 */
-	cleancache_flush_inode(mapping);
+	cleancache_invalidate_inode(mapping);
 }
 EXPORT_SYMBOL(invalidate_bdev);
......
@@ -251,7 +251,7 @@ void deactivate_locked_super(struct super_block *s)
 {
 	struct file_system_type *fs = s->s_type;
 	if (atomic_dec_and_test(&s->s_active)) {
-		cleancache_flush_fs(s);
+		cleancache_invalidate_fs(s);
 		fs->kill_sb(s);
 		/* caches are now gone, we can safely kill the shrinker now */
......
@@ -28,9 +28,9 @@ struct cleancache_ops {
 			pgoff_t, struct page *);
 	void (*put_page)(int, struct cleancache_filekey,
 			pgoff_t, struct page *);
-	void (*flush_page)(int, struct cleancache_filekey, pgoff_t);
-	void (*flush_inode)(int, struct cleancache_filekey);
-	void (*flush_fs)(int);
+	void (*invalidate_page)(int, struct cleancache_filekey, pgoff_t);
+	void (*invalidate_inode)(int, struct cleancache_filekey);
+	void (*invalidate_fs)(int);
 };

 extern struct cleancache_ops
@@ -39,9 +39,9 @@ extern void __cleancache_init_fs(struct super_block *);
 extern void __cleancache_init_shared_fs(char *, struct super_block *);
 extern int __cleancache_get_page(struct page *);
 extern void __cleancache_put_page(struct page *);
-extern void __cleancache_flush_page(struct address_space *, struct page *);
-extern void __cleancache_flush_inode(struct address_space *);
-extern void __cleancache_flush_fs(struct super_block *);
+extern void __cleancache_invalidate_page(struct address_space *, struct page *);
+extern void __cleancache_invalidate_inode(struct address_space *);
+extern void __cleancache_invalidate_fs(struct super_block *);
 extern int cleancache_enabled;

 #ifdef CONFIG_CLEANCACHE
@@ -99,24 +99,24 @@ static inline void cleancache_put_page(struct page *page)
 		__cleancache_put_page(page);
 }

-static inline void cleancache_flush_page(struct address_space *mapping,
-					struct page *page)
+static inline void cleancache_invalidate_page(struct address_space *mapping,
+					struct page *page)
 {
 	/* careful... page->mapping is NULL sometimes when this is called */
 	if (cleancache_enabled && cleancache_fs_enabled_mapping(mapping))
-		__cleancache_flush_page(mapping, page);
+		__cleancache_invalidate_page(mapping, page);
 }

-static inline void cleancache_flush_inode(struct address_space *mapping)
+static inline void cleancache_invalidate_inode(struct address_space *mapping)
 {
 	if (cleancache_enabled && cleancache_fs_enabled_mapping(mapping))
-		__cleancache_flush_inode(mapping);
+		__cleancache_invalidate_inode(mapping);
 }

-static inline void cleancache_flush_fs(struct super_block *sb)
+static inline void cleancache_invalidate_fs(struct super_block *sb)
 {
 	if (cleancache_enabled)
-		__cleancache_flush_fs(sb);
+		__cleancache_invalidate_fs(sb);
 }

 #endif /* _LINUX_CLEANCACHE_H */
@@ -15,29 +15,34 @@
 #include <linux/fs.h>
 #include <linux/exportfs.h>
 #include <linux/mm.h>
+#include <linux/debugfs.h>
 #include <linux/cleancache.h>

 /*
  * This global enablement flag may be read thousands of times per second
- * by cleancache_get/put/flush even on systems where cleancache_ops
+ * by cleancache_get/put/invalidate even on systems where cleancache_ops
  * is not claimed (e.g. cleancache is config'ed on but remains
  * disabled), so is preferred to the slower alternative: a function
  * call that checks a non-global.
  */
-int cleancache_enabled;
+int cleancache_enabled __read_mostly;
 EXPORT_SYMBOL(cleancache_enabled);

 /*
  * cleancache_ops is set by cleancache_ops_register to contain the pointers
  * to the cleancache "backend" implementation functions.
  */
-static struct cleancache_ops cleancache_ops;
+static struct cleancache_ops cleancache_ops __read_mostly;

-/* useful stats available in /sys/kernel/mm/cleancache */
-static unsigned long cleancache_succ_gets;
-static unsigned long cleancache_failed_gets;
-static unsigned long cleancache_puts;
-static unsigned long cleancache_flushes;
+/*
+ * Counters available via /sys/kernel/debug/frontswap (if debugfs is
+ * properly configured.  These are for information only so are not protected
+ * against increment races.
+ */
+static u64 cleancache_succ_gets;
+static u64 cleancache_failed_gets;
+static u64 cleancache_puts;
+static u64 cleancache_invalidates;

 /*
  * register operations for cleancache, returning previous thus allowing
@@ -148,10 +153,11 @@ void __cleancache_put_page(struct page *page)
 EXPORT_SYMBOL(__cleancache_put_page);

 /*
- * Flush any data from cleancache associated with the poolid and the
+ * Invalidate any data from cleancache associated with the poolid and the
  * page's inode and page index so that a subsequent "get" will fail.
  */
-void __cleancache_flush_page(struct address_space *mapping, struct page *page)
+void __cleancache_invalidate_page(struct address_space *mapping,
+					struct page *page)
 {
 	/* careful... page->mapping is NULL sometimes when this is called */
 	int pool_id = mapping->host->i_sb->cleancache_poolid;
@@ -160,85 +166,57 @@ void __cleancache_flush_page(struct address_space *mapping, struct page *page)
 	if (pool_id >= 0) {
 		VM_BUG_ON(!PageLocked(page));
 		if (cleancache_get_key(mapping->host, &key) >= 0) {
-			(*cleancache_ops.flush_page)(pool_id, key, page->index);
-			cleancache_flushes++;
+			(*cleancache_ops.invalidate_page)(pool_id,
+							  key, page->index);
+			cleancache_invalidates++;
 		}
 	}
 }
-EXPORT_SYMBOL(__cleancache_flush_page);
+EXPORT_SYMBOL(__cleancache_invalidate_page);

 /*
- * Flush all data from cleancache associated with the poolid and the
+ * Invalidate all data from cleancache associated with the poolid and the
  * mappings's inode so that all subsequent gets to this poolid/inode
  * will fail.
  */
-void __cleancache_flush_inode(struct address_space *mapping)
+void __cleancache_invalidate_inode(struct address_space *mapping)
 {
 	int pool_id = mapping->host->i_sb->cleancache_poolid;
 	struct cleancache_filekey key = { .u.key = { 0 } };

 	if (pool_id >= 0 && cleancache_get_key(mapping->host, &key) >= 0)
-		(*cleancache_ops.flush_inode)(pool_id, key);
+		(*cleancache_ops.invalidate_inode)(pool_id, key);
 }
-EXPORT_SYMBOL(__cleancache_flush_inode);
+EXPORT_SYMBOL(__cleancache_invalidate_inode);

 /*
  * Called by any cleancache-enabled filesystem at time of unmount;
  * note that pool_id is surrendered and may be reutrned by a subsequent
  * cleancache_init_fs or cleancache_init_shared_fs
  */
-void __cleancache_flush_fs(struct super_block *sb)
+void __cleancache_invalidate_fs(struct super_block *sb)
 {
 	if (sb->cleancache_poolid >= 0) {
 		int old_poolid = sb->cleancache_poolid;
 		sb->cleancache_poolid = -1;
-		(*cleancache_ops.flush_fs)(old_poolid);
+		(*cleancache_ops.invalidate_fs)(old_poolid);
 	}
 }
-EXPORT_SYMBOL(__cleancache_flush_fs);
-
-#ifdef CONFIG_SYSFS
-
-/* see Documentation/ABI/xxx/sysfs-kernel-mm-cleancache */
-
-#define CLEANCACHE_SYSFS_RO(_name) \
-	static ssize_t cleancache_##_name##_show(struct kobject *kobj, \
-				struct kobj_attribute *attr, char *buf) \
-	{ \
-		return sprintf(buf, "%lu\n", cleancache_##_name); \
-	} \
-	static struct kobj_attribute cleancache_##_name##_attr = { \
-		.attr = { .name = __stringify(_name), .mode = 0444 }, \
-		.show = cleancache_##_name##_show, \
-	}
-
-CLEANCACHE_SYSFS_RO(succ_gets);
-CLEANCACHE_SYSFS_RO(failed_gets);
-CLEANCACHE_SYSFS_RO(puts);
-CLEANCACHE_SYSFS_RO(flushes);
-
-static struct attribute *cleancache_attrs[] = {
-	&cleancache_succ_gets_attr.attr,
-	&cleancache_failed_gets_attr.attr,
-	&cleancache_puts_attr.attr,
-	&cleancache_flushes_attr.attr,
-	NULL,
-};
-
-static struct attribute_group cleancache_attr_group = {
-	.attrs = cleancache_attrs,
-	.name = "cleancache",
-};
-
-#endif /* CONFIG_SYSFS */
+EXPORT_SYMBOL(__cleancache_invalidate_fs);

 static int __init init_cleancache(void)
 {
-#ifdef CONFIG_SYSFS
-	int err;
-
-	err = sysfs_create_group(mm_kobj, &cleancache_attr_group);
-#endif /* CONFIG_SYSFS */
+#ifdef CONFIG_DEBUG_FS
+	struct dentry *root = debugfs_create_dir("cleancache", NULL);
+	if (root == NULL)
+		return -ENXIO;
+	debugfs_create_u64("succ_gets", S_IRUGO, root, &cleancache_succ_gets);
+	debugfs_create_u64("failed_gets", S_IRUGO,
+				root, &cleancache_failed_gets);
+	debugfs_create_u64("puts", S_IRUGO, root, &cleancache_puts);
+	debugfs_create_u64("invalidates", S_IRUGO,
+				root, &cleancache_invalidates);
+#endif
 	return 0;
 }
 module_init(init_cleancache)
@@ -122,7 +122,7 @@ void __delete_from_page_cache(struct page *page)
 	if (PageUptodate(page) && PageMappedToDisk(page))
 		cleancache_put_page(page);
 	else
-		cleancache_flush_page(mapping, page);
+		cleancache_invalidate_page(mapping, page);

 	radix_tree_delete(&mapping->page_tree, page->index);
 	page->mapping = NULL;
......
@@ -52,7 +52,7 @@ void do_invalidatepage(struct page *page, unsigned long offset)
 static inline void truncate_partial_page(struct page *page, unsigned partial)
 {
 	zero_user_segment(page, partial, PAGE_CACHE_SIZE);
-	cleancache_flush_page(page->mapping, page);
+	cleancache_invalidate_page(page->mapping, page);
 	if (page_has_private(page))
 		do_invalidatepage(page, partial);
 }
@@ -213,7 +213,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	pgoff_t end;
 	int i;

-	cleancache_flush_inode(mapping);
+	cleancache_invalidate_inode(mapping);
 	if (mapping->nrpages == 0)
 		return;
@@ -292,7 +292,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 		mem_cgroup_uncharge_end();
 		index++;
 	}
-	cleancache_flush_inode(mapping);
+	cleancache_invalidate_inode(mapping);
 }
 EXPORT_SYMBOL(truncate_inode_pages_range);
@@ -444,7 +444,7 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 	int ret2 = 0;
 	int did_range_unmap = 0;

-	cleancache_flush_inode(mapping);
+	cleancache_invalidate_inode(mapping);
 	pagevec_init(&pvec, 0);
 	index = start;
 	while (index <= end && pagevec_lookup(&pvec, mapping, index,
@@ -500,7 +500,7 @@ int invalidate_inode_pages2_range(struct address_space *mapping,
 		cond_resched();
 		index++;
 	}
-	cleancache_flush_inode(mapping);
+	cleancache_invalidate_inode(mapping);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(invalidate_inode_pages2_range);
......