Commit 37cae6ad authored by Linus Torvalds

Merge tag 'dm-3.9-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/agk/linux-dm

Pull device-mapper update from Alasdair G Kergon:
 "The main addition here is a long-desired target framework to allow an
  SSD to be used as a cache in front of a slower device.  Cache tuning
  is delegated to interchangeable policy modules so these can be
  developed independently of the mechanics needed to shuffle the data
  around.

  Other than that, kcopyd users acquire a throttling parameter, ioctl
  buffer usage gets streamlined, more mempool reliance is reduced and
  there are a few other bug fixes and tidy-ups."

* tag 'dm-3.9-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/agk/linux-dm: (30 commits)
  dm cache: add cleaner policy
  dm cache: add mq policy
  dm: add cache target
  dm persistent data: add bitset
  dm persistent data: add transactional array
  dm thin: remove cells from stack
  dm bio prison: pass cell memory in
  dm persistent data: add btree_walk
  dm: add target num_write_bios fn
  dm kcopyd: introduce configurable throttling
  dm ioctl: allow message to return data
  dm ioctl: optimize functions without variable params
  dm ioctl: introduce ioctl_flags
  dm: merge io_pool and tio_pool
  dm: remove unused _rq_bio_info_cache
  dm: fix limits initialization when there are no data devices
  dm snapshot: add missing module aliases
  dm persistent data: set some btree fn parms const
  dm: refactor bio cloning
  dm: rename bio cloning functions
  ...
parents 98624899 8735a813
Guidance for writing policies
=============================

Try to keep transactionality out of it.  The core is careful to
avoid asking about anything that is migrating.  This is a pain, but
makes it easier to write the policies.

Mappings are loaded into the policy at construction time.

Every bio that is mapped by the target is referred to the policy.
The policy can return a simple HIT or MISS or issue a migration.

Currently there's no way for the policy to issue background work,
e.g. to start writing back dirty blocks that are going to be evicted
soon.

Because we map bios, rather than requests, it's easy for the policy
to get fooled by many small bios.  For this reason the core target
issues periodic ticks to the policy.  It's suggested that the policy
doesn't update states (eg, hit counts) for a block more than once
for each tick.  The core ticks by watching bios complete, and in
doing so tries to see when the io scheduler has let the ios run.
Overview of supplied cache replacement policies
===============================================

multiqueue
----------

This policy is the default.

The multiqueue policy has two sets of 16 queues: one set for entries
waiting for the cache and another one for those in the cache.

Cache entries in the queues are aged based on logical time.  Entry into
the cache is based on variable thresholds and queue selection is based
on hit count on entry.  The policy aims to take different cache miss
costs into account and to adjust to varying load patterns automatically.

Message and constructor argument pairs are:
   'sequential_threshold <#nr_sequential_ios>' and
   'random_threshold <#nr_random_ios>'.

The sequential threshold indicates the number of contiguous I/Os
required before a stream is treated as sequential.  The random threshold
is the number of intervening non-contiguous I/Os that must be seen
before the stream is treated as random again.

The sequential and random thresholds default to 512 and 4 respectively.

Large, sequential ios are probably better left on the origin device
since spindles tend to have good bandwidth.  The io_tracker counts
contiguous I/Os to try to spot when the io is in one of these sequential
modes.
cleaner
-------

The cleaner writes back all dirty blocks in a cache to decommission it.
Examples
========

The syntax for a table is:
   cache <metadata dev> <cache dev> <origin dev> <block size>
   <#feature_args> [<feature arg>]*
   <policy> <#policy_args> [<policy arg>]*

The syntax to send a message using the dmsetup command is:
   dmsetup message <mapped device> 0 sequential_threshold 1024
   dmsetup message <mapped device> 0 random_threshold 8

Using dmsetup:
   dmsetup create blah --table "0 268435456 cache /dev/sdb /dev/sdc \
   /dev/sdd 512 0 mq 4 sequential_threshold 1024 random_threshold 8"
creates a 128GB mapped device named 'blah' with the sequential
threshold set to 1024 and the random threshold set to 8.
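(As a quick sanity check on the size in that example: 268435456 sectors
x 512 bytes/sector = 137438953472 bytes = 128 GiB, which is where the
128GB figure comes from.)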
Introduction
============

dm-cache is a device mapper target written by Joe Thornber, Heinz
Mauelshagen, and Mike Snitzer.

It aims to improve performance of a block device (eg, a spindle) by
dynamically migrating some of its data to a faster, smaller device
(eg, an SSD).

This device-mapper solution allows us to insert this caching at
different levels of the dm stack, for instance above the data device for
a thin-provisioning pool.  Caching solutions that are integrated more
closely with the virtual memory system should give better performance.

The target reuses the metadata library used in the thin-provisioning
library.

The decision as to what data to migrate and when is left to a plug-in
policy module.  Several of these have been written as we experiment,
and we hope other people will contribute others for specific io
scenarios (eg. a vm image server).
Glossary
========

  Migration -  Movement of the primary copy of a logical block from one
               device to the other.
  Promotion -  Migration from slow device to fast device.
  Demotion  -  Migration from fast device to slow device.

The origin device always contains a copy of the logical block, which
may be out of date or kept in sync with the copy on the cache device
(depending on policy).
Design
======

Sub-devices
-----------

The target is constructed by passing three devices to it (along with
other parameters detailed later):

1. An origin device - the big, slow one.

2. A cache device - the small, fast one.

3. A small metadata device - records which blocks are in the cache,
   which are dirty, and extra hints for use by the policy object.
   This information could be put on the cache device, but having it
   separate allows the volume manager to configure it differently,
   e.g. as a mirror for extra robustness.
Fixed block size
----------------

The origin is divided up into blocks of a fixed size.  This block size
is configurable when you first create the cache.  Typically we've been
using block sizes of 256k - 1024k.

Having a fixed block size simplifies the target a lot.  But it is
something of a compromise.  For instance, a small part of a block may be
getting hit a lot, yet the whole block will be promoted to the cache.
So large block sizes are bad because they waste cache space.  And small
block sizes are bad because they increase the amount of metadata (both
in core and on disk).
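To make the compromise concrete (illustrative numbers only, not taken
from the target): a 1 TiB origin divided into 256 KiB blocks means
2^40 / 2^18 = 4194304 blocks of state to track, while 1 MiB blocks cut
that to 1048576 entries at the cost of promoting four times as much
data for each hot region.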
Writeback/writethrough
----------------------

The cache has two modes, writeback and writethrough.

If writeback, the default, is selected then a write to a block that is
cached will go only to the cache and the block will be marked dirty in
the metadata.

If writethrough is selected then a write to a cached block will not
complete until it has hit both the origin and cache devices.  Clean
blocks should remain clean.

A simple cleaner policy is provided, which will clean (write back) all
dirty blocks in a cache.  Useful for decommissioning a cache.
Migration throttling
--------------------

Migrating data between the origin and cache device uses bandwidth.
The user can set a throttle to prevent more than a certain amount of
migration occurring at any one time.  Currently we're not taking any
account of normal io traffic going to the devices.  More work needs
doing here to avoid migrating during those peak io moments.

For the time being, a message "migration_threshold <#sectors>"
can be used to set the maximum number of sectors being migrated,
the default being 204800 sectors (or 100MB).
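For example, using the message interface described below, the throttle
on a device named 'my_cache' (the device name here is invented for
illustration) could be raised to 200MB with:

   dmsetup message my_cache 0 migration_threshold 409600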
Updating on-disk metadata
-------------------------

On-disk metadata is committed every time a REQ_SYNC or REQ_FUA bio is
written.  If no such requests are made then commits will occur every
second.  This means the cache behaves like a physical disk that has a
write cache (the same is true of the thin-provisioning target).  If
power is lost you may lose some recent writes.  The metadata should
always be consistent in spite of any crash.

The 'dirty' state for a cache block changes far too frequently for us
to keep updating it on the fly.  So we treat it as a hint.  In normal
operation it will be written when the dm device is suspended.  If the
system crashes all cache blocks will be assumed dirty when restarted.
Per-block policy hints
----------------------

Policy plug-ins can store a chunk of data per cache block.  It's up to
the policy how big this chunk is, but it should be kept small.  Like the
dirty flags this data is lost if there's a crash so a safe fallback
value should always be possible.

For instance, the 'mq' policy, which is currently the default policy,
uses this facility to store the hit count of the cache blocks.  If
there's a crash this information will be lost, which means the cache
may be less efficient until those hit counts are regenerated.

Policy hints affect performance, not correctness.
Policy messaging
----------------

Policies will have different tunables, specific to each one, so we
need a generic way of getting and setting these.  Device-mapper
messages are used.  Refer to cache-policies.txt.
Discard bitset resolution
-------------------------

We can avoid copying data during migration if we know the block has
been discarded.  A prime example of this is when mkfs discards the
whole block device.  We store a bitset tracking the discard state of
blocks.  However, we allow this bitset to have a different block size
from the cache blocks.  This is because we need to track the discard
state for all of the origin device (compare with the dirty bitset
which is just for the smaller cache device).
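(Illustrative sizing, with assumed numbers: tracking a 1 TiB origin at
a 1 MiB discard block size takes 2^40 / 2^20 = 1048576 bits, i.e. a
128 KiB bitset; tracking it at a 256 KiB cache block granularity would
quadruple that.)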
Target interface
================

Constructor
-----------

 cache <metadata dev> <cache dev> <origin dev> <block size>
       <#feature args> [<feature arg>]*
       <policy> <#policy args> [policy args]*

 metadata dev    : fast device holding the persistent metadata
 cache dev       : fast device holding cached data blocks
 origin dev      : slow device holding original data blocks
 block size      : cache unit size in sectors
 #feature args   : number of feature arguments passed
 feature args    : writethrough.  (The default is writeback.)
 policy          : the replacement policy to use
 #policy args    : an even number of arguments corresponding to
                   key/value pairs passed to the policy
 policy args     : key/value pairs passed to the policy
                   E.g. 'sequential_threshold 1024'
                   See cache-policies.txt for details.

Optional feature arguments are:
   writethrough  : write through caching that prohibits cache block
                   content from being different from origin block content.
                   Without this argument, the default behaviour is to write
                   back cache block contents later for performance reasons,
                   so they may differ from the corresponding origin blocks.

A policy called 'default' is always registered.  This is an alias for
the policy we currently think is giving best all round performance.

As the default policy could vary between kernels, if you are relying on
the characteristics of a specific policy, always request it by name.
Status
------

<#used metadata blocks>/<#total metadata blocks> <#read hits> <#read misses>
<#write hits> <#write misses> <#demotions> <#promotions> <#blocks in cache>
<#dirty> <#features> <features>* <#core args> <core args>* <#policy args>
<policy args>*

#used metadata blocks    : Number of metadata blocks used
#total metadata blocks   : Total number of metadata blocks
#read hits               : Number of times a READ bio has been mapped
                           to the cache
#read misses             : Number of times a READ bio has been mapped
                           to the origin
#write hits              : Number of times a WRITE bio has been mapped
                           to the cache
#write misses            : Number of times a WRITE bio has been
                           mapped to the origin
#demotions               : Number of times a block has been removed
                           from the cache
#promotions              : Number of times a block has been moved to
                           the cache
#blocks in cache         : Number of blocks resident in the cache
#dirty                   : Number of blocks in the cache that differ
                           from the origin
#feature args            : Number of feature args to follow
feature args             : 'writethrough' (optional)
#core args               : Number of core arguments (must be even)
core args                : Key/value pairs for tuning the core
                           e.g. migration_threshold
#policy args             : Number of policy arguments to follow (must be even)
policy args              : Key/value pairs
                           e.g. 'sequential_threshold 1024'
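A hypothetical status line, with every value invented purely to show
the field ordering, might look like:

   17/464 2821 24 356 8210 1 183 289 0 1 writethrough 2 migration_threshold 204800 4 random_threshold 4 sequential_threshold 512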
Messages
--------

Policies will have different tunables, specific to each one, so we
need a generic way of getting and setting these.  Device-mapper
messages are used.  (A sysfs interface would also be possible.)

The message format is:
   <key> <value>

E.g.
   dmsetup message my_cache 0 sequential_threshold 1024
Examples
========

The test suite can be found here:

https://github.com/jthornber/thinp-test-suite

dmsetup create my_cache --table '0 41943040 cache /dev/mapper/metadata \
	/dev/mapper/ssd /dev/mapper/origin 512 1 writeback default 0'
dmsetup create my_cache --table '0 41943040 cache /dev/mapper/metadata \
	/dev/mapper/ssd /dev/mapper/origin 1024 1 writeback \
	mq 4 sequential_threshold 1024 random_threshold 8'
--- a/drivers/md/Kconfig
+++ b/drivers/md/Kconfig
@@ -210,7 +210,7 @@ config DM_DEBUG

 config DM_BUFIO
 	tristate
-	depends on BLK_DEV_DM && EXPERIMENTAL
+	depends on BLK_DEV_DM
 	---help---
 	 This interface allows you to do buffered I/O on a device and acts
 	 as a cache, holding recently-read blocks in memory and performing
@@ -218,7 +218,7 @@ config DM_BUFIO

 config DM_BIO_PRISON
 	tristate
-	depends on BLK_DEV_DM && EXPERIMENTAL
+	depends on BLK_DEV_DM
 	---help---
 	 Some bio locking schemes used by other device-mapper targets
 	 including thin provisioning.
@@ -251,8 +251,8 @@ config DM_SNAPSHOT
 	 Allow volume managers to take writable snapshots of a device.

 config DM_THIN_PROVISIONING
-	tristate "Thin provisioning target (EXPERIMENTAL)"
-	depends on BLK_DEV_DM && EXPERIMENTAL
+	tristate "Thin provisioning target"
+	depends on BLK_DEV_DM
 	select DM_PERSISTENT_DATA
 	select DM_BIO_PRISON
 	---help---
@@ -268,6 +268,37 @@ config DM_DEBUG_BLOCK_STACK_TRACING
 	 If unsure, say N.

+config DM_CACHE
+	tristate "Cache target (EXPERIMENTAL)"
+	depends on BLK_DEV_DM
+	default n
+	select DM_PERSISTENT_DATA
+	select DM_BIO_PRISON
+	---help---
+	 dm-cache attempts to improve performance of a block device by
+	 moving frequently used data to a smaller, higher performance
+	 device.  Different 'policy' plugins can be used to change the
+	 algorithms used to select which blocks are promoted, demoted,
+	 cleaned etc.  It supports writeback and writethrough modes.
+
+config DM_CACHE_MQ
+	tristate "MQ Cache Policy (EXPERIMENTAL)"
+	depends on DM_CACHE
+	default y
+	---help---
+	 A cache policy that uses a multiqueue ordered by recent hit
+	 count to select which blocks should be promoted and demoted.
+	 This is meant to be a general purpose policy.  It prioritises
+	 reads over writes.
+
+config DM_CACHE_CLEANER
+	tristate "Cleaner Cache Policy (EXPERIMENTAL)"
+	depends on DM_CACHE
+	default y
+	---help---
+	 A simple cache policy that writes back all data to the
+	 origin.  Used when decommissioning a dm-cache.
+
 config DM_MIRROR
 	tristate "Mirror target"
 	depends on BLK_DEV_DM
@@ -302,8 +333,8 @@ config DM_RAID
 	 in one of the available parity distribution methods.

 config DM_LOG_USERSPACE
-	tristate "Mirror userspace logging (EXPERIMENTAL)"
-	depends on DM_MIRROR && EXPERIMENTAL && NET
+	tristate "Mirror userspace logging"
+	depends on DM_MIRROR && NET
 	select CONNECTOR
 	---help---
 	 The userspace logging module provides a mechanism for
@@ -350,8 +381,8 @@ config DM_MULTIPATH_ST
 	 If unsure, say N.

 config DM_DELAY
-	tristate "I/O delaying target (EXPERIMENTAL)"
-	depends on BLK_DEV_DM && EXPERIMENTAL
+	tristate "I/O delaying target"
+	depends on BLK_DEV_DM
 	---help---
 	 A target that delays reads and/or writes and can send
 	 them to different devices.  Useful for testing.
@@ -365,14 +396,14 @@ config DM_UEVENT
 	 Generate udev events for DM events.

 config DM_FLAKEY
-	tristate "Flakey target (EXPERIMENTAL)"
-	depends on BLK_DEV_DM && EXPERIMENTAL
+	tristate "Flakey target"
+	depends on BLK_DEV_DM
 	---help---
 	 A target that intermittently fails I/O for debugging purposes.

 config DM_VERITY
-	tristate "Verity target support (EXPERIMENTAL)"
-	depends on BLK_DEV_DM && EXPERIMENTAL
+	tristate "Verity target support"
+	depends on BLK_DEV_DM
 	select CRYPTO
 	select CRYPTO_HASH
 	select DM_BUFIO
...
--- a/drivers/md/Makefile
+++ b/drivers/md/Makefile
@@ -11,6 +11,9 @@ dm-mirror-y	+= dm-raid1.o
 dm-log-userspace-y \
 		+= dm-log-userspace-base.o dm-log-userspace-transfer.o
 dm-thin-pool-y	+= dm-thin.o dm-thin-metadata.o
+dm-cache-y	+= dm-cache-target.o dm-cache-metadata.o dm-cache-policy.o
+dm-cache-mq-y	+= dm-cache-policy-mq.o
+dm-cache-cleaner-y += dm-cache-policy-cleaner.o
 md-mod-y	+= md.o bitmap.o
 raid456-y	+= raid5.o
@@ -44,6 +47,9 @@ obj-$(CONFIG_DM_ZERO)		+= dm-zero.o
 obj-$(CONFIG_DM_RAID)	+= dm-raid.o
 obj-$(CONFIG_DM_THIN_PROVISIONING)	+= dm-thin-pool.o
 obj-$(CONFIG_DM_VERITY)		+= dm-verity.o
+obj-$(CONFIG_DM_CACHE)		+= dm-cache.o
+obj-$(CONFIG_DM_CACHE_MQ)	+= dm-cache-mq.o
+obj-$(CONFIG_DM_CACHE_CLEANER)	+= dm-cache-cleaner.o
 ifeq ($(CONFIG_DM_UEVENT),y)
 dm-mod-objs			+= dm-uevent.o
...
--- a/drivers/md/dm-bio-prison.c
+++ b/drivers/md/dm-bio-prison.c
@@ -14,14 +14,6 @@

 /*----------------------------------------------------------------*/

-struct dm_bio_prison_cell {
-	struct hlist_node list;
-	struct dm_bio_prison *prison;
-	struct dm_cell_key key;
-	struct bio *holder;
-	struct bio_list bios;
-};
-
 struct dm_bio_prison {
 	spinlock_t lock;
 	mempool_t *cell_pool;
@@ -87,6 +79,19 @@ void dm_bio_prison_destroy(struct dm_bio_prison *prison)
 }
 EXPORT_SYMBOL_GPL(dm_bio_prison_destroy);

+struct dm_bio_prison_cell *dm_bio_prison_alloc_cell(struct dm_bio_prison *prison, gfp_t gfp)
+{
+	return mempool_alloc(prison->cell_pool, gfp);
+}
+EXPORT_SYMBOL_GPL(dm_bio_prison_alloc_cell);
+
+void dm_bio_prison_free_cell(struct dm_bio_prison *prison,
+			     struct dm_bio_prison_cell *cell)
+{
+	mempool_free(cell, prison->cell_pool);
+}
+EXPORT_SYMBOL_GPL(dm_bio_prison_free_cell);
+
 static uint32_t hash_key(struct dm_bio_prison *prison, struct dm_cell_key *key)
 {
 	const unsigned long BIG_PRIME = 4294967291UL;
@@ -114,91 +119,95 @@ static struct dm_bio_prison_cell *__search_bucket(struct hlist_head *bucket,
 	return NULL;
 }

-/*
- * This may block if a new cell needs allocating.  You must ensure that
- * cells will be unlocked even if the calling thread is blocked.
- *
- * Returns 1 if the cell was already held, 0 if @inmate is the new holder.
- */
-int dm_bio_detain(struct dm_bio_prison *prison, struct dm_cell_key *key,
-		  struct bio *inmate, struct dm_bio_prison_cell **ref)
+static void __setup_new_cell(struct dm_bio_prison *prison,
+			     struct dm_cell_key *key,
+			     struct bio *holder,
+			     uint32_t hash,
+			     struct dm_bio_prison_cell *cell)
 {
-	int r = 1;
-	unsigned long flags;
-	uint32_t hash = hash_key(prison, key);
-	struct dm_bio_prison_cell *cell, *cell2;
-
-	BUG_ON(hash > prison->nr_buckets);
-
-	spin_lock_irqsave(&prison->lock, flags);
-
-	cell = __search_bucket(prison->cells + hash, key);
-	if (cell) {
-		bio_list_add(&cell->bios, inmate);
-		goto out;
-	}
+	memcpy(&cell->key, key, sizeof(cell->key));
+	cell->holder = holder;
+	bio_list_init(&cell->bios);
+	hlist_add_head(&cell->list, prison->cells + hash);
+}

-	/*
-	 * Allocate a new cell
-	 */
-	spin_unlock_irqrestore(&prison->lock, flags);
-	cell2 = mempool_alloc(prison->cell_pool, GFP_NOIO);
-	spin_lock_irqsave(&prison->lock, flags);
+static int __bio_detain(struct dm_bio_prison *prison,
+			struct dm_cell_key *key,
+			struct bio *inmate,
+			struct dm_bio_prison_cell *cell_prealloc,
+			struct dm_bio_prison_cell **cell_result)
+{
+	uint32_t hash = hash_key(prison, key);
+	struct dm_bio_prison_cell *cell;

-	/*
-	 * We've been unlocked, so we have to double check that
-	 * nobody else has inserted this cell in the meantime.
-	 */
 	cell = __search_bucket(prison->cells + hash, key);
 	if (cell) {
-		mempool_free(cell2, prison->cell_pool);
-		bio_list_add(&cell->bios, inmate);
-		goto out;
+		if (inmate)
+			bio_list_add(&cell->bios, inmate);
+		*cell_result = cell;
+		return 1;
 	}

-	/*
-	 * Use new cell.
-	 */
-	cell = cell2;
-
-	cell->prison = prison;
-	memcpy(&cell->key, key, sizeof(cell->key));
-	cell->holder = inmate;
-	bio_list_init(&cell->bios);
-	hlist_add_head(&cell->list, prison->cells + hash);
+	__setup_new_cell(prison, key, inmate, hash, cell_prealloc);
+	*cell_result = cell_prealloc;
+	return 0;
+}

-	r = 0;
+static int bio_detain(struct dm_bio_prison *prison,
+		      struct dm_cell_key *key,
+		      struct bio *inmate,
+		      struct dm_bio_prison_cell *cell_prealloc,
+		      struct dm_bio_prison_cell **cell_result)
+{
+	int r;
+	unsigned long flags;

-out:
+	spin_lock_irqsave(&prison->lock, flags);
+	r = __bio_detain(prison, key, inmate, cell_prealloc, cell_result);
 	spin_unlock_irqrestore(&prison->lock, flags);

-	*ref = cell;
-
 	return r;
 }
+
+int dm_bio_detain(struct dm_bio_prison *prison,
+		  struct dm_cell_key *key,
+		  struct bio *inmate,
+		  struct dm_bio_prison_cell *cell_prealloc,
+		  struct dm_bio_prison_cell **cell_result)
+{
+	return bio_detain(prison, key, inmate, cell_prealloc, cell_result);
+}
 EXPORT_SYMBOL_GPL(dm_bio_detain);

+int dm_get_cell(struct dm_bio_prison *prison,
+		struct dm_cell_key *key,
+		struct dm_bio_prison_cell *cell_prealloc,
+		struct dm_bio_prison_cell **cell_result)
+{
+	return bio_detain(prison, key, NULL, cell_prealloc, cell_result);
+}
+EXPORT_SYMBOL_GPL(dm_get_cell);
+
 /*
  * @inmates must have been initialised prior to this call
  */
-static void __cell_release(struct dm_bio_prison_cell *cell, struct bio_list *inmates)
+static void __cell_release(struct dm_bio_prison_cell *cell,
+			   struct bio_list *inmates)
 {
-	struct dm_bio_prison *prison = cell->prison;
-
 	hlist_del(&cell->list);

 	if (inmates) {
-		bio_list_add(inmates, cell->holder);
+		if (cell->holder)
+			bio_list_add(inmates, cell->holder);
 		bio_list_merge(inmates, &cell->bios);
 	}
-
-	mempool_free(cell, prison->cell_pool);
 }

-void dm_cell_release(struct dm_bio_prison_cell *cell, struct bio_list *bios)
+void dm_cell_release(struct dm_bio_prison *prison,
+		     struct dm_bio_prison_cell *cell,
+		     struct bio_list *bios)
 {
 	unsigned long flags;
-	struct dm_bio_prison *prison = cell->prison;

 	spin_lock_irqsave(&prison->lock, flags);
 	__cell_release(cell, bios);
@@ -209,20 +218,18 @@ EXPORT_SYMBOL_GPL(dm_cell_release);
 /*
  * Sometimes we don't want the holder, just the additional bios.
  */
-static void __cell_release_no_holder(struct dm_bio_prison_cell *cell, struct bio_list *inmates)
+static void __cell_release_no_holder(struct dm_bio_prison_cell *cell,
+				     struct bio_list *inmates)
 {
-	struct dm_bio_prison *prison = cell->prison;
-
 	hlist_del(&cell->list);
 	bio_list_merge(inmates, &cell->bios);
-
-	mempool_free(cell, prison->cell_pool);
 }

-void dm_cell_release_no_holder(struct dm_bio_prison_cell *cell, struct bio_list *inmates)
+void dm_cell_release_no_holder(struct dm_bio_prison *prison,
+			       struct dm_bio_prison_cell *cell,
+			       struct bio_list *inmates)
 {
 	unsigned long flags;
-	struct dm_bio_prison *prison = cell->prison;

 	spin_lock_irqsave(&prison->lock, flags);
 	__cell_release_no_holder(cell, inmates);
@@ -230,9 +237,9 @@ void dm_cell_release_no_holder(struct dm_bio_prison_cell *cell, struct bio_list
 }
 EXPORT_SYMBOL_GPL(dm_cell_release_no_holder);

-void dm_cell_error(struct dm_bio_prison_cell *cell)
+void dm_cell_error(struct dm_bio_prison *prison,
+		   struct dm_bio_prison_cell *cell)
 {
-	struct dm_bio_prison *prison = cell->prison;
 	struct bio_list bios;
 	struct bio *bio;
 	unsigned long flags;
...
--- a/drivers/md/dm-bio-prison.h
+++ b/drivers/md/dm-bio-prison.h
@@ -22,7 +22,6 @@
  * subsequently unlocked the bios become available.
  */
 struct dm_bio_prison;
-struct dm_bio_prison_cell;

 /* FIXME: this needs to be more abstract */
 struct dm_cell_key {
@@ -31,21 +30,62 @@ struct dm_cell_key {
 	dm_block_t block;
 };

+/*
+ * Treat this as opaque, only in header so callers can manage allocation
+ * themselves.
+ */
+struct dm_bio_prison_cell {
+	struct hlist_node list;
+	struct dm_cell_key key;
+	struct bio *holder;
+	struct bio_list bios;
+};
+
 struct dm_bio_prison *dm_bio_prison_create(unsigned nr_cells);
 void dm_bio_prison_destroy(struct dm_bio_prison *prison);

 /*
- * This may block if a new cell needs allocating.  You must ensure that
- * cells will be unlocked even if the calling thread is blocked.
+ * These two functions just wrap a mempool.  This is a transitory step:
+ * Eventually all bio prison clients should manage their own cell memory.
  *
- * Returns 1 if the cell was already held, 0 if @inmate is the new holder.
+ * Like mempool_alloc(), dm_bio_prison_alloc_cell() can only fail if called
+ * in interrupt context or passed GFP_NOWAIT.
  */
-int dm_bio_detain(struct dm_bio_prison *prison, struct dm_cell_key *key,
-		  struct bio *inmate, struct dm_bio_prison_cell **ref);
+struct dm_bio_prison_cell *dm_bio_prison_alloc_cell(struct dm_bio_prison *prison,
+						    gfp_t gfp);
+void dm_bio_prison_free_cell(struct dm_bio_prison *prison,
+			     struct dm_bio_prison_cell *cell);

-void dm_cell_release(struct dm_bio_prison_cell *cell, struct bio_list *bios);
-void dm_cell_release_no_holder(struct dm_bio_prison_cell *cell, struct bio_list *inmates);
-void dm_cell_error(struct dm_bio_prison_cell *cell);
+/*
+ * Creates, or retrieves a cell for the given key.
+ *
+ * Returns 1 if pre-existing cell returned, zero if new cell created using
+ * @cell_prealloc.
+ */
+int dm_get_cell(struct dm_bio_prison *prison,
+		struct dm_cell_key *key,
+		struct dm_bio_prison_cell *cell_prealloc,
+		struct dm_bio_prison_cell **cell_result);
+
+/*
+ * An atomic op that combines retrieving a cell, and adding a bio to it.
+ *
+ * Returns 1 if the cell was already held, 0 if @inmate is the new holder.
+ */
+int dm_bio_detain(struct dm_bio_prison *prison,
+		  struct dm_cell_key *key,
+		  struct bio *inmate,
+		  struct dm_bio_prison_cell *cell_prealloc,
+		  struct dm_bio_prison_cell **cell_result);
+
+void dm_cell_release(struct dm_bio_prison *prison,
+		     struct dm_bio_prison_cell *cell,
+		     struct bio_list *bios);
+void dm_cell_release_no_holder(struct dm_bio_prison *prison,
+			       struct dm_bio_prison_cell *cell,
+			       struct bio_list *inmates);
+void dm_cell_error(struct dm_bio_prison *prison,
+		   struct dm_bio_prison_cell *cell);

 /*----------------------------------------------------------------*/
...
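As an aside, a minimal usage sketch of the reworked bio prison API (not
part of the patch; the key value and the deferred-bio handling are
invented for illustration):

/*
 * Sketch: the caller now owns cell memory, so allocation happens
 * outside the prison's spinlock and unused preallocations are returned.
 */
static void example_detain_and_release(struct dm_bio_prison *prison,
				       struct bio *bio)
{
	struct dm_cell_key key;
	struct dm_bio_prison_cell *prealloc, *cell;
	struct bio_list deferred_bios;

	memset(&key, 0, sizeof(key));
	key.block = 17;				/* arbitrary example key */

	/* May sleep; done before taking any locks. */
	prealloc = dm_bio_prison_alloc_cell(prison, GFP_NOIO);

	if (dm_bio_detain(prison, &key, bio, prealloc, &cell)) {
		/* Cell already held: bio queued there, prealloc unused. */
		dm_bio_prison_free_cell(prison, prealloc);
		return;
	}

	/* We hold the cell; when finished, release everyone it detained. */
	bio_list_init(&deferred_bios);
	dm_cell_release(prison, cell, &deferred_bios);
	dm_bio_prison_free_cell(prison, cell);
	/* ...then reissue the bios on deferred_bios... */
}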
--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -1192,7 +1192,7 @@ EXPORT_SYMBOL_GPL(dm_bufio_write_dirty_buffers);
 int dm_bufio_issue_flush(struct dm_bufio_client *c)
 {
 	struct dm_io_request io_req = {
-		.bi_rw = REQ_FLUSH,
+		.bi_rw = WRITE_FLUSH,
 		.mem.type = DM_IO_KMEM,
 		.mem.ptr.addr = NULL,
 		.client = c->dm_io,
...
/*
* Copyright (C) 2012 Red Hat, Inc.
*
* This file is released under the GPL.
*/
#ifndef DM_CACHE_BLOCK_TYPES_H
#define DM_CACHE_BLOCK_TYPES_H
#include "persistent-data/dm-block-manager.h"
/*----------------------------------------------------------------*/
/*
* It's helpful to get sparse to differentiate between indexes into the
* origin device, indexes into the cache device, and indexes into the
* discard bitset.
*/
typedef dm_block_t __bitwise__ dm_oblock_t;
typedef uint32_t __bitwise__ dm_cblock_t;
typedef dm_block_t __bitwise__ dm_dblock_t;
static inline dm_oblock_t to_oblock(dm_block_t b)
{
return (__force dm_oblock_t) b;
}
static inline dm_block_t from_oblock(dm_oblock_t b)
{
return (__force dm_block_t) b;
}
static inline dm_cblock_t to_cblock(uint32_t b)
{
return (__force dm_cblock_t) b;
}
static inline uint32_t from_cblock(dm_cblock_t b)
{
return (__force uint32_t) b;
}
static inline dm_dblock_t to_dblock(dm_block_t b)
{
return (__force dm_dblock_t) b;
}
static inline dm_block_t from_dblock(dm_dblock_t b)
{
return (__force dm_block_t) b;
}
#endif /* DM_CACHE_BLOCK_TYPES_H */
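A brief aside on why these wrappers exist (the usage below is invented
for illustration): the __bitwise__ typedefs carry no runtime cost, but
they let sparse (make C=1) reject code that mixes the three index
spaces.

static dm_oblock_t next_oblock(dm_oblock_t b)
{
	/* Arithmetic is done in dm_block_t, then wrapped back up. */
	return to_oblock(from_oblock(b) + 1);
}

static void example(void)
{
	dm_oblock_t oblock = to_oblock(1234);
	dm_cblock_t cblock = to_cblock(56);

	/*
	 * An assignment like "oblock = cblock" would be flagged by
	 * sparse even though both are plain integers at runtime.
	 */
	oblock = next_oblock(oblock);
	(void) cblock;
}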
This diff is collapsed.
/*
* Copyright (C) 2012 Red Hat, Inc.
*
* This file is released under the GPL.
*/
#ifndef DM_CACHE_METADATA_H
#define DM_CACHE_METADATA_H
#include "dm-cache-block-types.h"
#include "dm-cache-policy-internal.h"
/*----------------------------------------------------------------*/
#define DM_CACHE_METADATA_BLOCK_SIZE 4096
/* FIXME: remove this restriction */
/*
* The metadata device is currently limited in size.
*
* We have one block of index, which can hold 255 index entries. Each
* index entry contains allocation info about 16k metadata blocks.
*/
#define DM_CACHE_METADATA_MAX_SECTORS (255 * (1 << 14) * (DM_CACHE_METADATA_BLOCK_SIZE / (1 << SECTOR_SHIFT)))
/*
* A metadata device larger than 16GB triggers a warning.
*/
#define DM_CACHE_METADATA_MAX_SECTORS_WARNING (16 * (1024 * 1024 * 1024 >> SECTOR_SHIFT))
/*----------------------------------------------------------------*/
/*
* Ext[234]-style compat feature flags.
*
* A new feature which old metadata will still be compatible with should
* define a DM_CACHE_FEATURE_COMPAT_* flag (rarely useful).
*
* A new feature that is not compatible with old code should define a
* DM_CACHE_FEATURE_INCOMPAT_* flag and guard the relevant code with
* that flag.
*
* A new feature that is not compatible with old code accessing the
* metadata RDWR should define a DM_CACHE_FEATURE_RO_COMPAT_* flag and
* guard the relevant code with that flag.
*
* As these various flags are defined they should be added to the
* following masks.
*/
#define DM_CACHE_FEATURE_COMPAT_SUPP 0UL
#define DM_CACHE_FEATURE_COMPAT_RO_SUPP 0UL
#define DM_CACHE_FEATURE_INCOMPAT_SUPP 0UL
/*
* Reopens or creates a new, empty metadata volume.
* Returns an ERR_PTR on failure.
*/
struct dm_cache_metadata *dm_cache_metadata_open(struct block_device *bdev,
sector_t data_block_size,
bool may_format_device,
size_t policy_hint_size);
void dm_cache_metadata_close(struct dm_cache_metadata *cmd);
/*
* The metadata needs to know how many cache blocks there are. We don't
* care about the origin, assuming the core target is giving us valid
* origin blocks to map to.
*/
int dm_cache_resize(struct dm_cache_metadata *cmd, dm_cblock_t new_cache_size);
dm_cblock_t dm_cache_size(struct dm_cache_metadata *cmd);
int dm_cache_discard_bitset_resize(struct dm_cache_metadata *cmd,
sector_t discard_block_size,
dm_dblock_t new_nr_entries);
typedef int (*load_discard_fn)(void *context, sector_t discard_block_size,
dm_dblock_t dblock, bool discarded);
int dm_cache_load_discards(struct dm_cache_metadata *cmd,
load_discard_fn fn, void *context);
int dm_cache_set_discard(struct dm_cache_metadata *cmd, dm_dblock_t dblock, bool discard);
int dm_cache_remove_mapping(struct dm_cache_metadata *cmd, dm_cblock_t cblock);
int dm_cache_insert_mapping(struct dm_cache_metadata *cmd, dm_cblock_t cblock, dm_oblock_t oblock);
int dm_cache_changed_this_transaction(struct dm_cache_metadata *cmd);
typedef int (*load_mapping_fn)(void *context, dm_oblock_t oblock,
dm_cblock_t cblock, bool dirty,
uint32_t hint, bool hint_valid);
int dm_cache_load_mappings(struct dm_cache_metadata *cmd,
const char *policy_name,
load_mapping_fn fn,
void *context);
int dm_cache_set_dirty(struct dm_cache_metadata *cmd, dm_cblock_t cblock, bool dirty);
struct dm_cache_statistics {
uint32_t read_hits;
uint32_t read_misses;
uint32_t write_hits;
uint32_t write_misses;
};
void dm_cache_metadata_get_stats(struct dm_cache_metadata *cmd,
struct dm_cache_statistics *stats);
void dm_cache_metadata_set_stats(struct dm_cache_metadata *cmd,
struct dm_cache_statistics *stats);
int dm_cache_commit(struct dm_cache_metadata *cmd, bool clean_shutdown);
int dm_cache_get_free_metadata_block_count(struct dm_cache_metadata *cmd,
dm_block_t *result);
int dm_cache_get_metadata_dev_size(struct dm_cache_metadata *cmd,
dm_block_t *result);
void dm_cache_dump(struct dm_cache_metadata *cmd);
/*
* The policy is invited to save a 32bit hint value for every cblock (eg,
* for a hit count). These are stored against the policy name. If
* policies are changed, then hints will be lost. If the machine crashes,
* hints will be lost.
*
* The hints are indexed by the cblock, but many policies will not
 * necessarily have a fast way of accessing them efficiently via cblock. So
* rather than querying the policy for each cblock, we let it walk its data
* structures and fill in the hints in whatever order it wishes.
*/
int dm_cache_begin_hints(struct dm_cache_metadata *cmd, struct dm_cache_policy *p);
/*
 * Requests hints for every cblock and stores them in the metadata device.
*/
int dm_cache_save_hint(struct dm_cache_metadata *cmd,
dm_cblock_t cblock, uint32_t hint);
/*----------------------------------------------------------------*/
#endif /* DM_CACHE_METADATA_H */
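To illustrate the intended hint-saving flow (a sketch built only from
the declarations above, error handling trimmed; policy_walk_mappings()
is the wrapper from dm-cache-policy-internal.h):

/* Callback matching policy_walk_fn: persist one hint per cblock. */
static int save_hint(void *context, dm_cblock_t cblock,
		     dm_oblock_t oblock, uint32_t hint)
{
	struct dm_cache_metadata *cmd = context;

	return dm_cache_save_hint(cmd, cblock, hint);
}

static int example_save_all_hints(struct dm_cache_metadata *cmd,
				  struct dm_cache_policy *policy)
{
	int r;

	/* Start a hints run recorded against this policy's name. */
	r = dm_cache_begin_hints(cmd, policy);
	if (r)
		return r;

	/* Let the policy visit its mappings in whatever order it likes. */
	return policy_walk_mappings(policy, save_hint, cmd);
}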
This diff is collapsed.
/*
* Copyright (C) 2012 Red Hat. All rights reserved.
*
* This file is released under the GPL.
*/
#ifndef DM_CACHE_POLICY_INTERNAL_H
#define DM_CACHE_POLICY_INTERNAL_H
#include "dm-cache-policy.h"
/*----------------------------------------------------------------*/
/*
* Little inline functions that simplify calling the policy methods.
*/
static inline int policy_map(struct dm_cache_policy *p, dm_oblock_t oblock,
bool can_block, bool can_migrate, bool discarded_oblock,
struct bio *bio, struct policy_result *result)
{
return p->map(p, oblock, can_block, can_migrate, discarded_oblock, bio, result);
}
static inline int policy_lookup(struct dm_cache_policy *p, dm_oblock_t oblock, dm_cblock_t *cblock)
{
BUG_ON(!p->lookup);
return p->lookup(p, oblock, cblock);
}
static inline void policy_set_dirty(struct dm_cache_policy *p, dm_oblock_t oblock)
{
if (p->set_dirty)
p->set_dirty(p, oblock);
}
static inline void policy_clear_dirty(struct dm_cache_policy *p, dm_oblock_t oblock)
{
if (p->clear_dirty)
p->clear_dirty(p, oblock);
}
static inline int policy_load_mapping(struct dm_cache_policy *p,
dm_oblock_t oblock, dm_cblock_t cblock,
uint32_t hint, bool hint_valid)
{
return p->load_mapping(p, oblock, cblock, hint, hint_valid);
}
static inline int policy_walk_mappings(struct dm_cache_policy *p,
policy_walk_fn fn, void *context)
{
return p->walk_mappings ? p->walk_mappings(p, fn, context) : 0;
}
static inline int policy_writeback_work(struct dm_cache_policy *p,
dm_oblock_t *oblock,
dm_cblock_t *cblock)
{
return p->writeback_work ? p->writeback_work(p, oblock, cblock) : -ENOENT;
}
static inline void policy_remove_mapping(struct dm_cache_policy *p, dm_oblock_t oblock)
{
return p->remove_mapping(p, oblock);
}
static inline void policy_force_mapping(struct dm_cache_policy *p,
dm_oblock_t current_oblock, dm_oblock_t new_oblock)
{
return p->force_mapping(p, current_oblock, new_oblock);
}
static inline dm_cblock_t policy_residency(struct dm_cache_policy *p)
{
return p->residency(p);
}
static inline void policy_tick(struct dm_cache_policy *p)
{
if (p->tick)
return p->tick(p);
}
static inline int policy_emit_config_values(struct dm_cache_policy *p, char *result, unsigned maxlen)
{
ssize_t sz = 0;
if (p->emit_config_values)
return p->emit_config_values(p, result, maxlen);
DMEMIT("0");
return 0;
}
static inline int policy_set_config_value(struct dm_cache_policy *p,
const char *key, const char *value)
{
return p->set_config_value ? p->set_config_value(p, key, value) : -EINVAL;
}
/*----------------------------------------------------------------*/
/*
* Creates a new cache policy given a policy name, a cache size, an origin size and the block size.
*/
struct dm_cache_policy *dm_cache_policy_create(const char *name, dm_cblock_t cache_size,
sector_t origin_size, sector_t block_size);
/*
* Destroys the policy. This drops references to the policy module as well
 * as calling its destroy method. So always use this rather than calling
* the policy->destroy method directly.
*/
void dm_cache_policy_destroy(struct dm_cache_policy *p);
/*
* In case we've forgotten.
*/
const char *dm_cache_policy_get_name(struct dm_cache_policy *p);
size_t dm_cache_policy_get_hint_size(struct dm_cache_policy *p);
/*----------------------------------------------------------------*/
#endif /* DM_CACHE_POLICY_INTERNAL_H */
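A usage sketch of the create/destroy pair (the geometry numbers are
invented; falling back to 'default' mirrors the documented alias):

static struct dm_cache_policy *example_create_policy(void)
{
	dm_cblock_t cache_size = to_cblock(16384);	/* cache blocks */
	sector_t origin_size = 1 << 21;	/* 1 GiB expressed in sectors */
	sector_t block_size = 512;	/* 256 KiB expressed in sectors */
	struct dm_cache_policy *p;

	p = dm_cache_policy_create("mq", cache_size, origin_size, block_size);
	if (!p)
		p = dm_cache_policy_create("default", cache_size,
					   origin_size, block_size);

	/* Pair with dm_cache_policy_destroy(p), never p->destroy(p). */
	return p;
}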
This diff is collapsed.
/*
* Copyright (C) 2012 Red Hat. All rights reserved.
*
* This file is released under the GPL.
*/
#include "dm-cache-policy-internal.h"
#include "dm.h"
#include <linux/module.h>
#include <linux/slab.h>
/*----------------------------------------------------------------*/
#define DM_MSG_PREFIX "cache-policy"
static DEFINE_SPINLOCK(register_lock);
static LIST_HEAD(register_list);
static struct dm_cache_policy_type *__find_policy(const char *name)
{
struct dm_cache_policy_type *t;
list_for_each_entry(t, &register_list, list)
if (!strcmp(t->name, name))
return t;
return NULL;
}
static struct dm_cache_policy_type *__get_policy_once(const char *name)
{
struct dm_cache_policy_type *t = __find_policy(name);
if (t && !try_module_get(t->owner)) {
DMWARN("couldn't get module %s", name);
t = ERR_PTR(-EINVAL);
}
return t;
}
static struct dm_cache_policy_type *get_policy_once(const char *name)
{
struct dm_cache_policy_type *t;
spin_lock(&register_lock);
t = __get_policy_once(name);
spin_unlock(&register_lock);
return t;
}
static struct dm_cache_policy_type *get_policy(const char *name)
{
struct dm_cache_policy_type *t;
t = get_policy_once(name);
if (IS_ERR(t))
return NULL;
if (t)
return t;
request_module("dm-cache-%s", name);
t = get_policy_once(name);
if (IS_ERR(t))
return NULL;
return t;
}
static void put_policy(struct dm_cache_policy_type *t)
{
module_put(t->owner);
}
int dm_cache_policy_register(struct dm_cache_policy_type *type)
{
int r;
/* One size fits all for now */
if (type->hint_size != 0 && type->hint_size != 4) {
DMWARN("hint size must be 0 or 4 but %llu supplied.", (unsigned long long) type->hint_size);
return -EINVAL;
}
spin_lock(&register_lock);
if (__find_policy(type->name)) {
DMWARN("attempt to register policy under duplicate name %s", type->name);
r = -EINVAL;
} else {
list_add(&type->list, &register_list);
r = 0;
}
spin_unlock(&register_lock);
return r;
}
EXPORT_SYMBOL_GPL(dm_cache_policy_register);
void dm_cache_policy_unregister(struct dm_cache_policy_type *type)
{
spin_lock(&register_lock);
list_del_init(&type->list);
spin_unlock(&register_lock);
}
EXPORT_SYMBOL_GPL(dm_cache_policy_unregister);
struct dm_cache_policy *dm_cache_policy_create(const char *name,
dm_cblock_t cache_size,
sector_t origin_size,
sector_t cache_block_size)
{
struct dm_cache_policy *p = NULL;
struct dm_cache_policy_type *type;
type = get_policy(name);
if (!type) {
DMWARN("unknown policy type");
return NULL;
}
p = type->create(cache_size, origin_size, cache_block_size);
if (!p) {
put_policy(type);
return NULL;
}
p->private = type;
return p;
}
EXPORT_SYMBOL_GPL(dm_cache_policy_create);
void dm_cache_policy_destroy(struct dm_cache_policy *p)
{
struct dm_cache_policy_type *t = p->private;
p->destroy(p);
put_policy(t);
}
EXPORT_SYMBOL_GPL(dm_cache_policy_destroy);
const char *dm_cache_policy_get_name(struct dm_cache_policy *p)
{
struct dm_cache_policy_type *t = p->private;
return t->name;
}
EXPORT_SYMBOL_GPL(dm_cache_policy_get_name);
size_t dm_cache_policy_get_hint_size(struct dm_cache_policy *p)
{
struct dm_cache_policy_type *t = p->private;
return t->hint_size;
}
EXPORT_SYMBOL_GPL(dm_cache_policy_get_hint_size);
/*----------------------------------------------------------------*/
/*
* Copyright (C) 2012 Red Hat. All rights reserved.
*
* This file is released under the GPL.
*/
#ifndef DM_CACHE_POLICY_H
#define DM_CACHE_POLICY_H
#include "dm-cache-block-types.h"
#include <linux/device-mapper.h>
/*----------------------------------------------------------------*/
/* FIXME: make it clear which methods are optional. Get debug policy to
* double check this at start.
*/
/*
* The cache policy makes the important decisions about which blocks get to
* live on the faster cache device.
*
* When the core target has to remap a bio it calls the 'map' method of the
* policy. This returns an instruction telling the core target what to do.
*
* POLICY_HIT:
* That block is in the cache. Remap to the cache and carry on.
*
* POLICY_MISS:
* This block is on the origin device. Remap and carry on.
*
* POLICY_NEW:
* This block is currently on the origin device, but the policy wants to
* move it. The core should:
*
* - hold any further io to this origin block
* - copy the origin to the given cache block
* - release all the held blocks
* - remap the original block to the cache
*
* POLICY_REPLACE:
* This block is currently on the origin device. The policy wants to
* move it to the cache, with the added complication that the destination
* cache block needs a writeback first. The core should:
*
* - hold any further io to this origin block
* - hold any further io to the origin block that's being written back
* - writeback
* - copy new block to cache
* - release held blocks
* - remap bio to cache and reissue.
*
* Should the core run into trouble while processing a POLICY_NEW or
 * POLICY_REPLACE instruction it will roll back the policy's mapping using
* remove_mapping() or force_mapping(). These methods must not fail. This
* approach avoids having transactional semantics in the policy (ie, the
* core informing the policy when a migration is complete), and hence makes
* it easier to write new policies.
*
* In general policy methods should never block, except in the case of the
* map function when can_migrate is set. So be careful to implement using
* bounded, preallocated memory.
*/
enum policy_operation {
POLICY_HIT,
POLICY_MISS,
POLICY_NEW,
POLICY_REPLACE
};
/*
* This is the instruction passed back to the core target.
*/
struct policy_result {
enum policy_operation op;
dm_oblock_t old_oblock; /* POLICY_REPLACE */
dm_cblock_t cblock; /* POLICY_HIT, POLICY_NEW, POLICY_REPLACE */
};
typedef int (*policy_walk_fn)(void *context, dm_cblock_t cblock,
dm_oblock_t oblock, uint32_t hint);
/*
* The cache policy object. Just a bunch of methods. It is envisaged that
* this structure will be embedded in a bigger, policy specific structure
* (ie. use container_of()).
*/
struct dm_cache_policy {
/*
* FIXME: make it clear which methods are optional, and which may
* block.
*/
/*
* Destroys this object.
*/
void (*destroy)(struct dm_cache_policy *p);
/*
* See large comment above.
*
* oblock - the origin block we're interested in.
*
* can_block - indicates whether the current thread is allowed to
* block. -EWOULDBLOCK returned if it can't and would.
*
* can_migrate - gives permission for POLICY_NEW or POLICY_REPLACE
* instructions. If denied and the policy would have
* returned one of these instructions it should
* return -EWOULDBLOCK.
*
* discarded_oblock - indicates whether the whole origin block is
* in a discarded state (FIXME: better to tell the
* policy about this sooner, so it can recycle that
* cache block if it wants.)
* bio - the bio that triggered this call.
* result - gets filled in with the instruction.
*
* May only return 0, or -EWOULDBLOCK (if !can_migrate)
*/
int (*map)(struct dm_cache_policy *p, dm_oblock_t oblock,
bool can_block, bool can_migrate, bool discarded_oblock,
struct bio *bio, struct policy_result *result);
/*
* Sometimes we want to see if a block is in the cache, without
* triggering any update of stats. (ie. it's not a real hit).
*
* Must not block.
*
* Returns 1 iff in cache, 0 iff not, < 0 on error (-EWOULDBLOCK
* would be typical).
*/
int (*lookup)(struct dm_cache_policy *p, dm_oblock_t oblock, dm_cblock_t *cblock);
/*
* oblock must be a mapped block. Must not block.
*/
void (*set_dirty)(struct dm_cache_policy *p, dm_oblock_t oblock);
void (*clear_dirty)(struct dm_cache_policy *p, dm_oblock_t oblock);
/*
* Called when a cache target is first created. Used to load a
* mapping from the metadata device into the policy.
*/
int (*load_mapping)(struct dm_cache_policy *p, dm_oblock_t oblock,
dm_cblock_t cblock, uint32_t hint, bool hint_valid);
int (*walk_mappings)(struct dm_cache_policy *p, policy_walk_fn fn,
void *context);
/*
* Override functions used on the error paths of the core target.
* They must succeed.
*/
void (*remove_mapping)(struct dm_cache_policy *p, dm_oblock_t oblock);
void (*force_mapping)(struct dm_cache_policy *p, dm_oblock_t current_oblock,
dm_oblock_t new_oblock);
int (*writeback_work)(struct dm_cache_policy *p, dm_oblock_t *oblock, dm_cblock_t *cblock);
/*
* How full is the cache?
*/
dm_cblock_t (*residency)(struct dm_cache_policy *p);
/*
* Because of where we sit in the block layer, we can be asked to
* map a lot of little bios that are all in the same block (no
* queue merging has occurred). To stop the policy being fooled by
* these the core target sends regular tick() calls to the policy.
* The policy should only count an entry as hit once per tick.
*/
void (*tick)(struct dm_cache_policy *p);
/*
* Configuration.
*/
int (*emit_config_values)(struct dm_cache_policy *p,
char *result, unsigned maxlen);
int (*set_config_value)(struct dm_cache_policy *p,
const char *key, const char *value);
/*
* Book keeping ptr for the policy register, not for general use.
*/
void *private;
};
/*----------------------------------------------------------------*/
/*
* We maintain a little register of the different policy types.
*/
#define CACHE_POLICY_NAME_SIZE 16
struct dm_cache_policy_type {
/* For use by the register code only. */
struct list_head list;
/*
* Policy writers should fill in these fields. The name field is
* what gets passed on the target line to select your policy.
*/
char name[CACHE_POLICY_NAME_SIZE];
/*
 * Policies may store a hint for each cache block.
* Currently the size of this hint must be 0 or 4 bytes but we
* expect to relax this in future.
*/
size_t hint_size;
struct module *owner;
struct dm_cache_policy *(*create)(dm_cblock_t cache_size,
sector_t origin_size,
sector_t block_size);
};
int dm_cache_policy_register(struct dm_cache_policy_type *type);
void dm_cache_policy_unregister(struct dm_cache_policy_type *type);
/*----------------------------------------------------------------*/
#endif /* DM_CACHE_POLICY_H */
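For completeness, a sketch of how a policy module would register itself
(the name and the empty create function are invented; the in-tree mq
and cleaner policies are the real examples).  Note that get_policy()
above does request_module("dm-cache-<name>"), so the module should be
named accordingly:

static struct dm_cache_policy *example_create(dm_cblock_t cache_size,
					      sector_t origin_size,
					      sector_t block_size)
{
	/* Allocate a structure embedding struct dm_cache_policy here. */
	return NULL;
}

static struct dm_cache_policy_type example_policy_type = {
	.name = "example",
	.hint_size = 4,			/* currently must be 0 or 4 */
	.owner = THIS_MODULE,
	.create = example_create,
};

static int __init example_init(void)
{
	return dm_cache_policy_register(&example_policy_type);
}

static void __exit example_exit(void)
{
	dm_cache_policy_unregister(&example_policy_type);
}

module_init(example_init);
module_exit(example_exit);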
This diff is collapsed.
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -1234,20 +1234,6 @@ static int crypt_decode_key(u8 *key, char *hex, unsigned int size)
 	return 0;
 }

-/*
- * Encode key into its hex representation
- */
-static void crypt_encode_key(char *hex, u8 *key, unsigned int size)
-{
-	unsigned int i;
-
-	for (i = 0; i < size; i++) {
-		sprintf(hex, "%02x", *key);
-		hex += 2;
-		key++;
-	}
-}
-
 static void crypt_free_tfms(struct crypt_config *cc)
 {
 	unsigned i;
@@ -1651,7 +1637,7 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 		if (opt_params == 1 && opt_string &&
 		    !strcasecmp(opt_string, "allow_discards"))
-			ti->num_discard_requests = 1;
+			ti->num_discard_bios = 1;
 		else if (opt_params) {
 			ret = -EINVAL;
 			ti->error = "Invalid feature arguments";
@@ -1679,7 +1665,7 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 		goto bad;
 	}

-	ti->num_flush_requests = 1;
+	ti->num_flush_bios = 1;
 	ti->discard_zeroes_data_unsupported = true;

 	return 0;
@@ -1717,11 +1703,11 @@ static int crypt_map(struct dm_target *ti, struct bio *bio)
 	return DM_MAPIO_SUBMITTED;
 }

-static int crypt_status(struct dm_target *ti, status_type_t type,
-			unsigned status_flags, char *result, unsigned maxlen)
+static void crypt_status(struct dm_target *ti, status_type_t type,
+			 unsigned status_flags, char *result, unsigned maxlen)
 {
 	struct crypt_config *cc = ti->private;
-	unsigned int sz = 0;
+	unsigned i, sz = 0;

 	switch (type) {
 	case STATUSTYPE_INFO:
@@ -1731,27 +1717,20 @@ static int crypt_status(struct dm_target *ti, status_type_t type,
 	case STATUSTYPE_TABLE:
 		DMEMIT("%s ", cc->cipher_string);

-		if (cc->key_size > 0) {
-			if ((maxlen - sz) < ((cc->key_size << 1) + 1))
-				return -ENOMEM;
-
-			crypt_encode_key(result + sz, cc->key, cc->key_size);
-			sz += cc->key_size << 1;
-		} else {
-			if (sz >= maxlen)
-				return -ENOMEM;
-			result[sz++] = '-';
-		}
+		if (cc->key_size > 0)
+			for (i = 0; i < cc->key_size; i++)
+				DMEMIT("%02x", cc->key[i]);
+		else
+			DMEMIT("-");

 		DMEMIT(" %llu %s %llu", (unsigned long long)cc->iv_offset,
 				cc->dev->name, (unsigned long long)cc->start);

-		if (ti->num_discard_requests)
+		if (ti->num_discard_bios)
 			DMEMIT(" 1 allow_discards");

 		break;
 	}
-
-	return 0;
 }

 static void crypt_postsuspend(struct dm_target *ti)
@@ -1845,7 +1824,7 @@ static int crypt_iterate_devices(struct dm_target *ti,
 static struct target_type crypt_target = {
 	.name   = "crypt",
-	.version = {1, 12, 0},
+	.version = {1, 12, 1},
 	.module = THIS_MODULE,
 	.ctr    = crypt_ctr,
 	.dtr    = crypt_dtr,
...
--- a/drivers/md/dm-delay.c
+++ b/drivers/md/dm-delay.c
@@ -198,8 +198,8 @@ static int delay_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 	mutex_init(&dc->timer_lock);
 	atomic_set(&dc->may_delay, 1);

-	ti->num_flush_requests = 1;
-	ti->num_discard_requests = 1;
+	ti->num_flush_bios = 1;
+	ti->num_discard_bios = 1;
 	ti->private = dc;
 	return 0;
@@ -293,7 +293,7 @@ static int delay_map(struct dm_target *ti, struct bio *bio)
 	return delay_bio(dc, dc->read_delay, bio);
 }

-static int delay_status(struct dm_target *ti, status_type_t type,
-			unsigned status_flags, char *result, unsigned maxlen)
+static void delay_status(struct dm_target *ti, status_type_t type,
+			 unsigned status_flags, char *result, unsigned maxlen)
 {
 	struct delay_c *dc = ti->private;
@@ -314,8 +314,6 @@ static int delay_status(struct dm_target *ti, status_type_t type,
 			dc->write_delay);
 		break;
 	}
-
-	return 0;
 }

 static int delay_iterate_devices(struct dm_target *ti,
@@ -337,7 +335,7 @@ static int delay_iterate_devices(struct dm_target *ti,
 static struct target_type delay_target = {
 	.name	     = "delay",
-	.version     = {1, 2, 0},
+	.version     = {1, 2, 1},
 	.module      = THIS_MODULE,
 	.ctr	     = delay_ctr,
 	.dtr	     = delay_dtr,
...
--- a/drivers/md/dm-flakey.c
+++ b/drivers/md/dm-flakey.c
@@ -216,8 +216,8 @@ static int flakey_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 		goto bad;
 	}

-	ti->num_flush_requests = 1;
-	ti->num_discard_requests = 1;
+	ti->num_flush_bios = 1;
+	ti->num_discard_bios = 1;
 	ti->per_bio_data_size = sizeof(struct per_bio_data);
 	ti->private = fc;
 	return 0;
@@ -337,7 +337,7 @@ static int flakey_end_io(struct dm_target *ti, struct bio *bio, int error)
 	return error;
 }

-static int flakey_status(struct dm_target *ti, status_type_t type,
-			 unsigned status_flags, char *result, unsigned maxlen)
+static void flakey_status(struct dm_target *ti, status_type_t type,
+			  unsigned status_flags, char *result, unsigned maxlen)
 {
 	unsigned sz = 0;
@@ -368,7 +368,6 @@ static int flakey_status(struct dm_target *ti, status_type_t type,
 		break;
 	}
-	return 0;
 }

 static int flakey_ioctl(struct dm_target *ti, unsigned int cmd, unsigned long arg)
@@ -411,7 +410,7 @@ static int flakey_iterate_devices(struct dm_target *ti, iterate_devices_callout_
 static struct target_type flakey_target = {
 	.name   = "flakey",
-	.version = {1, 3, 0},
+	.version = {1, 3, 1},
 	.module = THIS_MODULE,
 	.ctr    = flakey_ctr,
 	.dtr    = flakey_dtr,
...
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -1067,6 +1067,7 @@ static void retrieve_status(struct dm_table *table,
 	num_targets = dm_table_get_num_targets(table);
 	for (i = 0; i < num_targets; i++) {
 		struct dm_target *ti = dm_table_get_target(table, i);
+		size_t l;
 
 		remaining = len - (outptr - outbuf);
 		if (remaining <= sizeof(struct dm_target_spec)) {
@@ -1093,14 +1094,17 @@ static void retrieve_status(struct dm_table *table,
 		if (ti->type->status) {
 			if (param->flags & DM_NOFLUSH_FLAG)
 				status_flags |= DM_STATUS_NOFLUSH_FLAG;
-			if (ti->type->status(ti, type, status_flags, outptr, remaining)) {
-				param->flags |= DM_BUFFER_FULL_FLAG;
-				break;
-			}
+			ti->type->status(ti, type, status_flags, outptr, remaining);
 		} else
 			outptr[0] = '\0';
 
-		outptr += strlen(outptr) + 1;
+		l = strlen(outptr) + 1;
+		if (l == remaining) {
+			param->flags |= DM_BUFFER_FULL_FLAG;
+			break;
+		}
+
+		outptr += l;
 		used = param->data_start + (outptr - outbuf);
 
 		outptr = align_ptr(outptr);
@@ -1410,6 +1414,22 @@ static int table_status(struct dm_ioctl *param, size_t param_size)
 	return 0;
 }
 
+static bool buffer_test_overflow(char *result, unsigned maxlen)
+{
+	return !maxlen || strlen(result) + 1 >= maxlen;
+}
+
+/*
+ * Process device-mapper dependent messages.
+ * Returns a number <= 1 if message was processed by device mapper.
+ * Returns 2 if message should be delivered to the target.
+ */
+static int message_for_md(struct mapped_device *md, unsigned argc, char **argv,
+			  char *result, unsigned maxlen)
+{
+	return 2;
+}
+
 /*
  * Pass a message to the target that's at the supplied device offset.
  */
@@ -1421,6 +1441,8 @@ static int target_message(struct dm_ioctl *param, size_t param_size)
 	struct dm_table *table;
 	struct dm_target *ti;
 	struct dm_target_msg *tmsg = (void *) param + param->data_start;
+	size_t maxlen;
+	char *result = get_result_buffer(param, param_size, &maxlen);
 
 	md = find_device(param);
 	if (!md)
@@ -1444,6 +1466,10 @@ static int target_message(struct dm_ioctl *param, size_t param_size)
 		goto out_argv;
 	}
 
+	r = message_for_md(md, argc, argv, result, maxlen);
+	if (r <= 1)
+		goto out_argv;
+
 	table = dm_get_live_table(md);
 	if (!table)
 		goto out_argv;
@@ -1469,44 +1495,68 @@ static int target_message(struct dm_ioctl *param, size_t param_size)
  out_argv:
 	kfree(argv);
  out:
-	param->data_size = 0;
+	if (r >= 0)
+		__dev_status(md, param);
+
+	if (r == 1) {
+		param->flags |= DM_DATA_OUT_FLAG;
+		if (buffer_test_overflow(result, maxlen))
+			param->flags |= DM_BUFFER_FULL_FLAG;
+		else
+			param->data_size = param->data_start + strlen(result) + 1;
+		r = 0;
+	}
+
 	dm_put(md);
 	return r;
 }
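message_for_md() is only a hook at this point: it returns 2 unconditionally, so every message is still routed to the target. A core-handled message would write its reply into result and return 1 (reply data present) or another value <= 1 (handled, no data to return). A hedged sketch of what such a handler could look like; the "@version" message and its reply are invented for illustration, only the return-code contract comes from the comment above:

/*
 * Hypothetical sketch, in the dm-ioctl.c context: wiring up one core
 * message instead of passing everything through to the target.
 */
static int message_for_md(struct mapped_device *md, unsigned argc, char **argv,
			  char *result, unsigned maxlen)
{
	if (argc == 1 && !strcmp(argv[0], "@version")) {
		/*
		 * scnprintf() never overruns; truncation is detected by
		 * buffer_test_overflow() in target_message().
		 */
		scnprintf(result, maxlen, "example 1.0.0");
		return 1;	/* 1 => result buffer holds data to return */
	}

	return 2;		/* 2 => deliver the message to the target */
}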
+/*
+ * The ioctl parameter block consists of two parts, a dm_ioctl struct
+ * followed by a data buffer.  This flag is set if the second part,
+ * which has a variable size, is not used by the function processing
+ * the ioctl.
+ */
+#define IOCTL_FLAGS_NO_PARAMS	1
+
 /*-----------------------------------------------------------------
  * Implementation of open/close/ioctl on the special char
  * device.
  *---------------------------------------------------------------*/
-static ioctl_fn lookup_ioctl(unsigned int cmd)
+static ioctl_fn lookup_ioctl(unsigned int cmd, int *ioctl_flags)
 {
 	static struct {
 		int cmd;
+		int flags;
 		ioctl_fn fn;
 	} _ioctls[] = {
-		{DM_VERSION_CMD, NULL}, /* version is dealt with elsewhere */
-		{DM_REMOVE_ALL_CMD, remove_all},
-		{DM_LIST_DEVICES_CMD, list_devices},
-		{DM_DEV_CREATE_CMD, dev_create},
-		{DM_DEV_REMOVE_CMD, dev_remove},
-		{DM_DEV_RENAME_CMD, dev_rename},
-		{DM_DEV_SUSPEND_CMD, dev_suspend},
-		{DM_DEV_STATUS_CMD, dev_status},
-		{DM_DEV_WAIT_CMD, dev_wait},
-		{DM_TABLE_LOAD_CMD, table_load},
-		{DM_TABLE_CLEAR_CMD, table_clear},
-		{DM_TABLE_DEPS_CMD, table_deps},
-		{DM_TABLE_STATUS_CMD, table_status},
-		{DM_LIST_VERSIONS_CMD, list_versions},
-		{DM_TARGET_MSG_CMD, target_message},
-		{DM_DEV_SET_GEOMETRY_CMD, dev_set_geometry}
+		{DM_VERSION_CMD, 0, NULL}, /* version is dealt with elsewhere */
+		{DM_REMOVE_ALL_CMD, IOCTL_FLAGS_NO_PARAMS, remove_all},
+		{DM_LIST_DEVICES_CMD, 0, list_devices},
+		{DM_DEV_CREATE_CMD, IOCTL_FLAGS_NO_PARAMS, dev_create},
+		{DM_DEV_REMOVE_CMD, IOCTL_FLAGS_NO_PARAMS, dev_remove},
+		{DM_DEV_RENAME_CMD, 0, dev_rename},
+		{DM_DEV_SUSPEND_CMD, IOCTL_FLAGS_NO_PARAMS, dev_suspend},
+		{DM_DEV_STATUS_CMD, IOCTL_FLAGS_NO_PARAMS, dev_status},
+		{DM_DEV_WAIT_CMD, 0, dev_wait},
+		{DM_TABLE_LOAD_CMD, 0, table_load},
+		{DM_TABLE_CLEAR_CMD, IOCTL_FLAGS_NO_PARAMS, table_clear},
+		{DM_TABLE_DEPS_CMD, 0, table_deps},
+		{DM_TABLE_STATUS_CMD, 0, table_status},
+		{DM_LIST_VERSIONS_CMD, 0, list_versions},
+		{DM_TARGET_MSG_CMD, 0, target_message},
+		{DM_DEV_SET_GEOMETRY_CMD, 0, dev_set_geometry}
 	};
 
-	return (cmd >= ARRAY_SIZE(_ioctls)) ? NULL : _ioctls[cmd].fn;
+	if (unlikely(cmd >= ARRAY_SIZE(_ioctls)))
+		return NULL;
+
+	*ioctl_flags = _ioctls[cmd].flags;
+
+	return _ioctls[cmd].fn;
 }
 
@@ -1543,7 +1593,8 @@ static int check_version(unsigned int cmd, struct dm_ioctl __user *user)
 	return r;
 }
 
-#define DM_PARAMS_VMALLOC	0x0001	/* Params alloced with vmalloc not kmalloc */
+#define DM_PARAMS_KMALLOC	0x0001	/* Params alloced with kmalloc */
+#define DM_PARAMS_VMALLOC	0x0002	/* Params alloced with vmalloc */
 #define DM_WIPE_BUFFER		0x0010	/* Wipe input buffer before returning from ioctl */
 
 static void free_params(struct dm_ioctl *param, size_t param_size, int param_flags)
@@ -1551,66 +1602,80 @@ static void free_params(struct dm_ioctl *param, size_t param_size, int param_flags)
 	if (param_flags & DM_WIPE_BUFFER)
 		memset(param, 0, param_size);
 
+	if (param_flags & DM_PARAMS_KMALLOC)
+		kfree(param);
+
 	if (param_flags & DM_PARAMS_VMALLOC)
 		vfree(param);
-	else
-		kfree(param);
 }
 
-static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl **param, int *param_flags)
+static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kernel,
+		       int ioctl_flags,
+		       struct dm_ioctl **param, int *param_flags)
 {
-	struct dm_ioctl tmp, *dmi;
+	struct dm_ioctl *dmi;
 	int secure_data;
+	const size_t minimum_data_size = sizeof(*param_kernel) - sizeof(param_kernel->data);
 
-	if (copy_from_user(&tmp, user, sizeof(tmp) - sizeof(tmp.data)))
+	if (copy_from_user(param_kernel, user, minimum_data_size))
 		return -EFAULT;
 
-	if (tmp.data_size < (sizeof(tmp) - sizeof(tmp.data)))
+	if (param_kernel->data_size < minimum_data_size)
 		return -EINVAL;
 
-	secure_data = tmp.flags & DM_SECURE_DATA_FLAG;
+	secure_data = param_kernel->flags & DM_SECURE_DATA_FLAG;
 
 	*param_flags = secure_data ? DM_WIPE_BUFFER : 0;
 
+	if (ioctl_flags & IOCTL_FLAGS_NO_PARAMS) {
+		dmi = param_kernel;
+		dmi->data_size = minimum_data_size;
+		goto data_copied;
+	}
+
 	/*
 	 * Try to avoid low memory issues when a device is suspended.
 	 * Use kmalloc() rather than vmalloc() when we can.
 	 */
 	dmi = NULL;
-	if (tmp.data_size <= KMALLOC_MAX_SIZE)
-		dmi = kmalloc(tmp.data_size, GFP_NOIO | __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN);
+	if (param_kernel->data_size <= KMALLOC_MAX_SIZE) {
+		dmi = kmalloc(param_kernel->data_size, GFP_NOIO | __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN);
+		if (dmi)
+			*param_flags |= DM_PARAMS_KMALLOC;
+	}
 
 	if (!dmi) {
-		dmi = __vmalloc(tmp.data_size, GFP_NOIO | __GFP_REPEAT | __GFP_HIGH, PAGE_KERNEL);
+		dmi = __vmalloc(param_kernel->data_size, GFP_NOIO | __GFP_REPEAT | __GFP_HIGH, PAGE_KERNEL);
+		if (dmi)
+			*param_flags |= DM_PARAMS_VMALLOC;
 	}
 
 	if (!dmi) {
-		if (secure_data && clear_user(user, tmp.data_size))
+		if (secure_data && clear_user(user, param_kernel->data_size))
 			return -EFAULT;
 		return -ENOMEM;
 	}
 
-	if (copy_from_user(dmi, user, tmp.data_size))
+	if (copy_from_user(dmi, user, param_kernel->data_size))
 		goto bad;
 
+data_copied:
 	/*
 	 * Abort if something changed the ioctl data while it was being copied.
 	 */
-	if (dmi->data_size != tmp.data_size) {
+	if (dmi->data_size != param_kernel->data_size) {
 		DMERR("rejecting ioctl: data size modified while processing parameters");
 		goto bad;
 	}
 
 	/* Wipe the user buffer so we do not return it to userspace */
-	if (secure_data && clear_user(user, tmp.data_size))
+	if (secure_data && clear_user(user, param_kernel->data_size))
 		goto bad;
 
 	*param = dmi;
 	return 0;
 
 bad:
-	free_params(dmi, tmp.data_size, *param_flags);
+	free_params(dmi, param_kernel->data_size, *param_flags);
+
 	return -EFAULT;
 }
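copy_params() keeps the existing allocation strategy: GFP_NOIO prevents writeback recursion while a device is suspended, and kmalloc() is tried first with cheap-failure flags before falling back to __vmalloc(). What is new is that the allocator that succeeded is recorded in *param_flags, so free_params() can pair kfree()/vfree() exactly instead of guessing. The idiom, reduced to a standalone kernel-context sketch; my_alloc/my_free and the MY_* flags are illustrative names, not part of the patch:

#include <linux/slab.h>
#include <linux/vmalloc.h>

#define MY_KMALLOC 0x1
#define MY_VMALLOC 0x2

static void *my_alloc(size_t size, int *flags)
{
	void *p = NULL;

	if (size <= KMALLOC_MAX_SIZE) {
		/*
		 * NOIO: we may run while a device is suspended, so the
		 * allocation must not recurse into I/O.  NORETRY/NOWARN
		 * make failure cheap because we have a fallback.
		 */
		p = kmalloc(size, GFP_NOIO | __GFP_NORETRY | __GFP_NOMEMALLOC | __GFP_NOWARN);
		if (p) {
			*flags |= MY_KMALLOC;
			return p;
		}
	}

	p = __vmalloc(size, GFP_NOIO | __GFP_REPEAT | __GFP_HIGH, PAGE_KERNEL);
	if (p)
		*flags |= MY_VMALLOC;
	return p;
}

static void my_free(void *p, int flags)
{
	/* The flag recorded at allocation time picks the matching free. */
	if (flags & MY_KMALLOC)
		kfree(p);
	if (flags & MY_VMALLOC)
		vfree(p);
}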
@@ -1621,6 +1686,7 @@ static int validate_params(uint cmd, struct dm_ioctl *param)
 	param->flags &= ~DM_BUFFER_FULL_FLAG;
 	param->flags &= ~DM_UEVENT_GENERATED_FLAG;
 	param->flags &= ~DM_SECURE_DATA_FLAG;
+	param->flags &= ~DM_DATA_OUT_FLAG;
 
 	/* Ignores parameters */
 	if (cmd == DM_REMOVE_ALL_CMD ||
@@ -1648,11 +1714,13 @@ static int validate_params(uint cmd, struct dm_ioctl *param)
 static int ctl_ioctl(uint command, struct dm_ioctl __user *user)
 {
 	int r = 0;
+	int ioctl_flags;
 	int param_flags;
 	unsigned int cmd;
 	struct dm_ioctl *uninitialized_var(param);
 	ioctl_fn fn = NULL;
 	size_t input_param_size;
+	struct dm_ioctl param_kernel;
 
 	/* only root can play with this */
 	if (!capable(CAP_SYS_ADMIN))
@@ -1677,7 +1745,7 @@ static int ctl_ioctl(uint command, struct dm_ioctl __user *user)
 	if (cmd == DM_VERSION_CMD)
 		return 0;
 
-	fn = lookup_ioctl(cmd);
+	fn = lookup_ioctl(cmd, &ioctl_flags);
 	if (!fn) {
 		DMWARN("dm_ctl_ioctl: unknown command 0x%x", command);
 		return -ENOTTY;
@@ -1686,7 +1754,7 @@ static int ctl_ioctl(uint command, struct dm_ioctl __user *user)
 	/*
 	 * Copy the parameters into kernel space.
 	 */
-	r = copy_params(user, &param, &param_flags);
+	r = copy_params(user, &param_kernel, ioctl_flags, &param, &param_flags);
 
 	if (r)
 		return r;
@@ -1699,6 +1767,10 @@ static int ctl_ioctl(uint command, struct dm_ioctl __user *user)
 	param->data_size = sizeof(*param);
 	r = fn(param, input_param_size);
 
+	if (unlikely(param->flags & DM_BUFFER_FULL_FLAG) &&
+	    unlikely(ioctl_flags & IOCTL_FLAGS_NO_PARAMS))
+		DMERR("ioctl %d tried to output some data but has IOCTL_FLAGS_NO_PARAMS set", cmd);
+
 	/*
 	 * Copy the results back to userland.
 	 */
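From userspace, a message reply is signalled by the new DM_DATA_OUT_FLAG, with the reply string placed in the data buffer at data_start. A minimal sketch, assuming a mapped device named "test-dev" exists and sends a hypothetical "@example" message; error handling is trimmed:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/dm-ioctl.h>

int main(void)
{
	char buf[16384];
	struct dm_ioctl *dmi = (struct dm_ioctl *)buf;
	struct dm_target_msg *tmsg;
	int fd = open("/dev/mapper/control", O_RDWR);

	memset(buf, 0, sizeof(buf));
	dmi->version[0] = DM_VERSION_MAJOR;
	dmi->version[1] = DM_VERSION_MINOR;
	dmi->version[2] = DM_VERSION_PATCHLEVEL;
	dmi->data_size = sizeof(buf);
	dmi->data_start = sizeof(*dmi);
	strcpy(dmi->name, "test-dev");		/* hypothetical device */

	tmsg = (struct dm_target_msg *)(buf + dmi->data_start);
	tmsg->sector = 0;
	strcpy(tmsg->message, "@example");	/* hypothetical message */

	/* On return, data_start points at the reply written by the kernel. */
	if (ioctl(fd, DM_TARGET_MSG, dmi) == 0 &&
	    (dmi->flags & DM_DATA_OUT_FLAG))
		printf("reply: %s\n", buf + dmi->data_start);

	return 0;
}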
--- a/drivers/md/dm-kcopyd.c
+++ b/drivers/md/dm-kcopyd.c
@@ -22,6 +22,7 @@
 #include <linux/vmalloc.h>
 #include <linux/workqueue.h>
 #include <linux/mutex.h>
+#include <linux/delay.h>
 #include <linux/device-mapper.h>
 #include <linux/dm-kcopyd.h>
 
@@ -51,6 +52,8 @@ struct dm_kcopyd_client {
 	struct workqueue_struct *kcopyd_wq;
 	struct work_struct kcopyd_work;
 
+	struct dm_kcopyd_throttle *throttle;
+
 /*
  * We maintain three lists of jobs:
  *
@@ -68,6 +71,117 @@ struct dm_kcopyd_client {
 
 static struct page_list zero_page_list;
 
+static DEFINE_SPINLOCK(throttle_spinlock);
+
+/*
+ * IO/IDLE accounting slowly decays after (1 << ACCOUNT_INTERVAL_SHIFT) period.
+ * When total_period >= (1 << ACCOUNT_INTERVAL_SHIFT) the counters are divided
+ * by 2.
+ */
+#define ACCOUNT_INTERVAL_SHIFT	SHIFT_HZ
+
+/*
+ * Sleep this number of milliseconds.
+ *
+ * The value was decided experimentally.
+ * Smaller values seem to cause an increased copy rate above the limit.
+ * The reason for this is unknown but possibly due to jiffies rounding errors
+ * or read/write cache inside the disk.
+ */
+#define SLEEP_MSEC	100
+
+/*
+ * Maximum number of sleep events. There is a theoretical livelock if more
+ * kcopyd clients do work simultaneously which this limit avoids.
+ */
+#define MAX_SLEEPS	10
+
+static void io_job_start(struct dm_kcopyd_throttle *t)
+{
+	unsigned throttle, now, difference;
+	int slept = 0, skew;
+
+	if (unlikely(!t))
+		return;
+
+try_again:
+	spin_lock_irq(&throttle_spinlock);
+
+	throttle = ACCESS_ONCE(t->throttle);
+
+	if (likely(throttle >= 100))
+		goto skip_limit;
+
+	now = jiffies;
+	difference = now - t->last_jiffies;
+	t->last_jiffies = now;
+	if (t->num_io_jobs)
+		t->io_period += difference;
+	t->total_period += difference;
+
+	/*
+	 * Maintain sane values if we got a temporary overflow.
+	 */
+	if (unlikely(t->io_period > t->total_period))
+		t->io_period = t->total_period;
+
+	if (unlikely(t->total_period >= (1 << ACCOUNT_INTERVAL_SHIFT))) {
+		int shift = fls(t->total_period >> ACCOUNT_INTERVAL_SHIFT);
+		t->total_period >>= shift;
+		t->io_period >>= shift;
+	}
+
+	skew = t->io_period - throttle * t->total_period / 100;
+
+	if (unlikely(skew > 0) && slept < MAX_SLEEPS) {
+		slept++;
+		spin_unlock_irq(&throttle_spinlock);
+		msleep(SLEEP_MSEC);
+		goto try_again;
+	}
+
+skip_limit:
+	t->num_io_jobs++;
+
+	spin_unlock_irq(&throttle_spinlock);
+}
+
+static void io_job_finish(struct dm_kcopyd_throttle *t)
+{
+	unsigned long flags;
+
+	if (unlikely(!t))
+		return;
+
+	spin_lock_irqsave(&throttle_spinlock, flags);
+
+	t->num_io_jobs--;
+
+	if (likely(ACCESS_ONCE(t->throttle) >= 100))
+		goto skip_limit;
+
+	if (!t->num_io_jobs) {
+		unsigned now, difference;
+
+		now = jiffies;
+		difference = now - t->last_jiffies;
+		t->last_jiffies = now;
+
+		t->io_period += difference;
+		t->total_period += difference;
+
+		/*
+		 * Maintain sane values if we got a temporary overflow.
+		 */
+		if (unlikely(t->io_period > t->total_period))
+			t->io_period = t->total_period;
+	}
+
+skip_limit:
+	spin_unlock_irqrestore(&throttle_spinlock, flags);
+}
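The accounting is easiest to see with numbers. throttle is the percentage of wall-clock time during which kcopyd I/O may be in flight; io_period counts busy jiffies and total_period all jiffies since the (decaying) accounting window started. A standalone rendering of the skew test from io_job_start(), with the values invented for illustration:

#include <stdio.h>

int main(void)
{
	unsigned throttle = 20;       /* percent of time kcopyd may do I/O */
	unsigned io_period = 300;     /* jiffies spent with I/O in flight */
	unsigned total_period = 1000; /* jiffies since accounting started */

	/* skew = 300 - 20 * 1000 / 100 = 100 > 0, so the job must wait */
	int skew = io_period - throttle * total_period / 100;

	printf("skew = %d -> %s\n", skew,
	       skew > 0 ? "sleep and retry" : "start the job");
	return 0;
}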
+
 static void wake(struct dm_kcopyd_client *kc)
 {
 	queue_work(kc->kcopyd_wq, &kc->kcopyd_work);
@@ -348,6 +462,8 @@ static void complete_io(unsigned long error, void *context)
 	struct kcopyd_job *job = (struct kcopyd_job *) context;
 	struct dm_kcopyd_client *kc = job->kc;
 
+	io_job_finish(kc->throttle);
+
 	if (error) {
 		if (job->rw & WRITE)
 			job->write_err |= error;
@@ -389,6 +505,8 @@ static int run_io_job(struct kcopyd_job *job)
 		.client = job->kc->io_client,
 	};
 
+	io_job_start(job->kc->throttle);
+
 	if (job->rw == READ)
 		r = dm_io(&io_req, 1, &job->source, NULL);
 	else
@@ -695,7 +813,7 @@ int kcopyd_cancel(struct kcopyd_job *job, int block)
 /*-----------------------------------------------------------------
  * Client setup
  *---------------------------------------------------------------*/
-struct dm_kcopyd_client *dm_kcopyd_client_create(void)
+struct dm_kcopyd_client *dm_kcopyd_client_create(struct dm_kcopyd_throttle *throttle)
 {
 	int r = -ENOMEM;
 	struct dm_kcopyd_client *kc;
@@ -708,6 +826,7 @@ struct dm_kcopyd_client *dm_kcopyd_client_create(void)
 	INIT_LIST_HEAD(&kc->complete_jobs);
 	INIT_LIST_HEAD(&kc->io_jobs);
 	INIT_LIST_HEAD(&kc->pages_jobs);
+	kc->throttle = throttle;
 
 	kc->job_pool = mempool_create_slab_pool(MIN_JOBS, _job_cache);
 	if (!kc->job_pool)
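Every dm_kcopyd_client_create() caller now chooses a throttle; passing NULL preserves the old unthrottled behaviour. A sketch of a client opting in, using the DECLARE_DM_KCOPYD_THROTTLE_WITH_MODULE_PARM() helper added to include/linux/dm-kcopyd.h elsewhere in this series, which declares a throttle named dm_kcopyd_throttle and ties its percentage to a writable module parameter; example_client_init and the parameter description are illustrative:

#include <linux/module.h>
#include <linux/dm-kcopyd.h>

/* Exposes a 0-100 'copy_throttle' module parameter (default 100 = no limit). */
DECLARE_DM_KCOPYD_THROTTLE_WITH_MODULE_PARM(copy_throttle,
	"A percentage of time allocated for copying");

static struct dm_kcopyd_client *example_client_init(void)
{
	/* NULL here instead would disable throttling for this client. */
	return dm_kcopyd_client_create(&dm_kcopyd_throttle);
}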
--- a/drivers/md/dm-linear.c
+++ b/drivers/md/dm-linear.c
@@ -53,9 +53,9 @@ static int linear_ctr(struct dm_target *ti, unsigned int argc, char **argv)
 		goto bad;
 	}
 
-	ti->num_flush_requests = 1;
-	ti->num_discard_requests = 1;
-	ti->num_write_same_requests = 1;
+	ti->num_flush_bios = 1;
+	ti->num_discard_bios = 1;
+	ti->num_write_same_bios = 1;
 	ti->private = lc;
 	return 0;
 
@@ -95,7 +95,7 @@ static int linear_map(struct dm_target *ti, struct bio *bio)
 	return DM_MAPIO_REMAPPED;
 }
 
-static int linear_status(struct dm_target *ti, status_type_t type,
+static void linear_status(struct dm_target *ti, status_type_t type,
 			 unsigned status_flags, char *result, unsigned maxlen)
 {
 	struct linear_c *lc = (struct linear_c *) ti->private;
@@ -110,7 +110,6 @@ static int linear_status(struct dm_target *ti, status_type_t type,
 			       (unsigned long long)lc->start);
 		break;
 	}
-	return 0;
 }
 
 static int linear_ioctl(struct dm_target *ti, unsigned int cmd,
@@ -155,7 +154,7 @@ static int linear_iterate_devices(struct dm_target *ti,
 static struct target_type linear_target = {
 	.name   = "linear",
-	.version = {1, 2, 0},
+	.version = {1, 2, 1},
 	.module = THIS_MODULE,
 	.ctr    = linear_ctr,
 	.dtr    = linear_dtr,
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -905,8 +905,8 @@ static int multipath_ctr(struct dm_target *ti, unsigned int argc,
 		goto bad;
 	}
 
-	ti->num_flush_requests = 1;
-	ti->num_discard_requests = 1;
+	ti->num_flush_bios = 1;
+	ti->num_discard_bios = 1;
 
 	return 0;
 
@@ -1378,7 +1378,7 @@ static void multipath_resume(struct dm_target *ti)
  *     [priority selector-name num_ps_args [ps_args]*
  *      num_paths num_selector_args [path_dev [selector_args]* ]+ ]+
  */
-static int multipath_status(struct dm_target *ti, status_type_t type,
+static void multipath_status(struct dm_target *ti, status_type_t type,
 			     unsigned status_flags, char *result, unsigned maxlen)
 {
 	int sz = 0;
@@ -1485,8 +1485,6 @@ static int multipath_status(struct dm_target *ti, status_type_t type,
 	}
 
 	spin_unlock_irqrestore(&m->lock, flags);
-
-	return 0;
 }
 
 static int multipath_message(struct dm_target *ti, unsigned argc, char **argv)
@@ -1695,7 +1693,7 @@ static int multipath_busy(struct dm_target *ti)
  *---------------------------------------------------------------*/
 static struct target_type multipath_target = {
 	.name = "multipath",
-	.version = {1, 5, 0},
+	.version = {1, 5, 1},
 	.module = THIS_MODULE,
 	.ctr = multipath_ctr,
 	.dtr = multipath_dtr,
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -1151,7 +1151,7 @@ static int raid_ctr(struct dm_target *ti, unsigned argc, char **argv)
 	INIT_WORK(&rs->md.event_work, do_table_event);
 	ti->private = rs;
-	ti->num_flush_requests = 1;
+	ti->num_flush_bios = 1;
 
 	mutex_lock(&rs->md.reconfig_mutex);
 	ret = md_run(&rs->md);
@@ -1201,7 +1201,7 @@ static int raid_map(struct dm_target *ti, struct bio *bio)
 	return DM_MAPIO_SUBMITTED;
 }
 
-static int raid_status(struct dm_target *ti, status_type_t type,
+static void raid_status(struct dm_target *ti, status_type_t type,
 			unsigned status_flags, char *result, unsigned maxlen)
 {
 	struct raid_set *rs = ti->private;
@@ -1344,8 +1344,6 @@ static int raid_status(struct dm_target *ti, status_type_t type,
 			DMEMIT(" -");
 		}
 	}
-
-	return 0;
 }
 
 static int raid_iterate_devices(struct dm_target *ti, iterate_devices_callout_fn fn, void *data)
@@ -1405,7 +1403,7 @@ static void raid_resume(struct dm_target *ti)
 static struct target_type raid_target = {
 	.name = "raid",
-	.version = {1, 4, 1},
+	.version = {1, 4, 2},
 	.module = THIS_MODULE,
 	.ctr = raid_ctr,
 	.dtr = raid_dtr,
--- a/drivers/md/dm-target.c
+++ b/drivers/md/dm-target.c
@@ -116,7 +116,7 @@ static int io_err_ctr(struct dm_target *tt, unsigned int argc, char **args)
 	/*
 	 * Return error for discards instead of -EOPNOTSUPP
 	 */
-	tt->num_discard_requests = 1;
+	tt->num_discard_bios = 1;
 
 	return 0;
 }