Commit 04535d27 authored by Linus Torvalds

Merge tag 'dm-3.15-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm

Pull device mapper changes from Mike Snitzer:

 - Fix dm-cache corruption caused by discard_block_size > cache_block_size

 - Fix a lock-inversion detected by LOCKDEP in dm-cache

 - Fix a dangling bio bug in the dm-thinp target's process_deferred_bios
   error path

 - Fix corruption due to non-atomic transaction commit which allowed a
   metadata superblock to be written before all other metadata was
   successfully written -- this is common to all targets that use the
   persistent-data library's transaction manager (dm-thinp, dm-cache and
   dm-era).

 - Various small cleanups in the DM core

 - Add the dm-era target which is useful for keeping track of which
   blocks were written within a user defined period of time called an
   'era'.  Use cases include tracking changed blocks for backup
   software, and partially invalidating the contents of a cache to
   restore cache coherency after rolling back a vendor snapshot.

 - Improve the on-disk layout of multithreaded writes to the
   dm-thin-pool by splitting the pool's deferred bio list to be a
   per-thin device list and then sorting that list using an rb_tree.
   The subsequent read throughput of the data written via multiple
   threads improved by ~70%.

 - Simplify the multipath target's handling of queuing IO by pushing
   requests back to the request queue rather than queueing the IO
   internally.

* tag 'dm-3.15-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm: (24 commits)
  dm cache: fix a lock-inversion
  dm thin: sort the per thin deferred bios using an rb_tree
  dm thin: use per thin device deferred bio lists
  dm thin: simplify pool_is_congested
  dm thin: fix dangling bio in process_deferred_bios error path
  dm mpath: print more useful warnings in multipath_message()
  dm-mpath: do not activate failed paths
  dm mpath: remove extra nesting in map function
  dm mpath: remove map_io()
  dm mpath: reduce memory pressure when requeuing
  dm mpath: remove process_queued_ios()
  dm mpath: push back requests instead of queueing
  dm table: add dm_table_run_md_queue_async
  dm mpath: do not call pg_init when it is already running
  dm: use RCU_INIT_POINTER instead of rcu_assign_pointer in __unbind
  dm: stop using bi_private
  dm: remove dm_get_mapinfo
  dm: make dm_table_alloc_md_mempools static
  dm: take care to copy the space map roots before locking the superblock
  dm transaction manager: fix corruption due to non-atomic transaction commit
  ...
parents 3f583bc2 0596661f
Introduction
============

dm-era is a target that behaves similarly to the linear target.  In
addition it keeps track of which blocks were written within a
user-defined period of time called an 'era'.  Each era target instance
maintains the current era as a monotonically increasing 32-bit
counter.

Use cases include tracking changed blocks for backup software, and
partially invalidating the contents of a cache to restore cache
coherency after rolling back a vendor snapshot.

Constructor
===========

 era <metadata dev> <origin dev> <block size>

 metadata dev : fast device holding the persistent metadata
 origin dev   : device holding data blocks that may change
 block size   : block size of origin data device, in sectors; the
                granularity that is tracked by the target
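
For example, a hypothetical table line for a 1 TiB origin tracked at
64 KiB (128 sector) granularity might be loaded with dmsetup like so
(device names are illustrative):

    dmsetup create my-era --table \
        "0 2147483648 era /dev/mapper/era-meta /dev/mapper/era-origin 128"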

Messages
========

None of the dm messages take any arguments.

checkpoint
----------

Possibly move to a new era.  You shouldn't assume the era has
incremented.  After sending this message, you should check the
current era via the status line.

take_metadata_snap
------------------

Create a clone of the metadata, to allow a userland process to read it.

drop_metadata_snap
------------------

Drop the metadata snapshot.
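
For example, assuming a target created as 'my-era' (an illustrative
name), the messages are sent with:

    dmsetup message my-era 0 checkpoint
    dmsetup message my-era 0 take_metadata_snap
    dmsetup message my-era 0 drop_metadata_snap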

Status
======

<metadata block size> <#used metadata blocks>/<#total metadata blocks>
<current era> <held metadata root | '-'>

metadata block size    : Fixed block size for each metadata block in
                         sectors
#used metadata blocks  : Number of metadata blocks used
#total metadata blocks : Total number of metadata blocks
current era            : The current era
held metadata root     : The location, in blocks, of the metadata root
                         that has been 'held' for userspace read
                         access. '-' indicates there is no held root
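
For example, a status line such as the following (all values
illustrative) would indicate 4k (8 sector) metadata blocks, 24 of 4096
metadata blocks in use, a current era of 3, and no held metadata root:

    # dmsetup status my-era
    0 2147483648 era 8 24/4096 3 -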

Detailed use case
=================

The scenario of invalidating a cache when rolling back a vendor
snapshot was the primary use case when developing this target:

Taking a vendor snapshot
------------------------

- Send a checkpoint message to the era target
- Make a note of the current era in its status line
- Take vendor snapshot (the era and snapshot should be forever
  associated now)

Rolling back to a vendor snapshot
---------------------------------

- Cache enters passthrough mode (see: dm-cache's docs in cache.txt)
- Rollback vendor storage
- Take metadata snapshot (see the sketch after this list)
- Ascertain which blocks have been written since the snapshot was taken
  by checking each block's era
- Invalidate those blocks in the caching software
- Cache returns to writeback/writethrough mode
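
As a sketch (tool names are from the thin-provisioning-tools project;
exact invocations may vary), the metadata snapshot steps might look
like:

    dmsetup message my-era 0 take_metadata_snap
    era_invalidate --written-since <era noted at snapshot time> \
        /dev/mapper/era-meta > blocks-to-invalidate.xml
    dmsetup message my-era 0 drop_metadata_snap

The resulting block list is then handed to the caching software for
invalidation.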

Memory usage
============

The target uses a bitset to record writes in the current era.  It also
has a spare bitset ready for switching over to a new era.  Other than
that it uses a few 4k blocks for updating metadata.

   (4 * nr_blocks) bytes + buffers
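
For instance (an illustrative calculation from the formula above), a
1 TiB origin tracked at 64 KiB granularity gives nr_blocks = 2^24,
or roughly 64 MiB for the bitsets, plus buffers.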

Resilience
==========

Metadata is updated on disk before a write to a previously unwritten
block is performed.  As such, dm-era should not be affected by a hard
crash such as power failure.

Userland tools
==============

Userland tools are found in the increasingly poorly named
thin-provisioning-tools project:

    https://github.com/jthornber/thin-provisioning-tools
@@ -285,6 +285,17 @@ config DM_CACHE_CLEANER
 	  A simple cache policy that writes back all data to the
 	  origin.  Used when decommissioning a dm-cache.
 
+config DM_ERA
+       tristate "Era target (EXPERIMENTAL)"
+       depends on BLK_DEV_DM
+       default n
+       select DM_PERSISTENT_DATA
+       select DM_BIO_PRISON
+       ---help---
+         dm-era tracks which parts of a block device are written to
+         over time.  Useful for maintaining cache coherency when using
+         vendor snapshots.
+
 config DM_MIRROR
 	tristate "Mirror target"
 	depends on BLK_DEV_DM
......
@@ -14,6 +14,7 @@ dm-thin-pool-y	+= dm-thin.o dm-thin-metadata.o
 dm-cache-y	+= dm-cache-target.o dm-cache-metadata.o dm-cache-policy.o
 dm-cache-mq-y	+= dm-cache-policy-mq.o
 dm-cache-cleaner-y += dm-cache-policy-cleaner.o
+dm-era-y	+= dm-era-target.o
 md-mod-y	+= md.o bitmap.o
 raid456-y	+= raid5.o
@@ -53,6 +54,7 @@ obj-$(CONFIG_DM_VERITY)	+= dm-verity.o
 obj-$(CONFIG_DM_CACHE)		+= dm-cache.o
 obj-$(CONFIG_DM_CACHE_MQ)	+= dm-cache-mq.o
 obj-$(CONFIG_DM_CACHE_CLEANER)	+= dm-cache-cleaner.o
+obj-$(CONFIG_DM_ERA)		+= dm-era.o
 ifeq ($(CONFIG_DM_UEVENT),y)
 dm-mod-objs			+= dm-uevent.o
......
@@ -19,7 +19,6 @@
 typedef dm_block_t __bitwise__ dm_oblock_t;
 typedef uint32_t __bitwise__ dm_cblock_t;
-typedef dm_block_t __bitwise__ dm_dblock_t;
 
 static inline dm_oblock_t to_oblock(dm_block_t b)
 {
@@ -41,14 +40,4 @@ static inline uint32_t from_cblock(dm_cblock_t b)
 	return (__force uint32_t) b;
 }
 
-static inline dm_dblock_t to_dblock(dm_block_t b)
-{
-	return (__force dm_dblock_t) b;
-}
-
-static inline dm_block_t from_dblock(dm_dblock_t b)
-{
-	return (__force dm_block_t) b;
-}
-
 #endif /* DM_CACHE_BLOCK_TYPES_H */
@@ -109,7 +109,7 @@ struct dm_cache_metadata {
 	dm_block_t discard_root;
 
 	sector_t discard_block_size;
-	dm_dblock_t discard_nr_blocks;
+	dm_oblock_t discard_nr_blocks;
 
 	sector_t data_block_size;
 	dm_cblock_t cache_blocks;
@@ -120,6 +120,12 @@ struct dm_cache_metadata {
 	unsigned policy_version[CACHE_POLICY_VERSION_SIZE];
 	size_t policy_hint_size;
 	struct dm_cache_statistics stats;
+
+	/*
+	 * Reading the space map root can fail, so we read it into this
+	 * buffer before the superblock is locked and updated.
+	 */
+	__u8 metadata_space_map_root[SPACE_MAP_ROOT_SIZE];
 };
 
 /*-------------------------------------------------------------------
@@ -260,11 +266,31 @@ static void __setup_mapping_info(struct dm_cache_metadata *cmd)
 	}
 }
 
+static int __save_sm_root(struct dm_cache_metadata *cmd)
+{
+	int r;
+	size_t metadata_len;
+
+	r = dm_sm_root_size(cmd->metadata_sm, &metadata_len);
+	if (r < 0)
+		return r;
+
+	return dm_sm_copy_root(cmd->metadata_sm, &cmd->metadata_space_map_root,
+			       metadata_len);
+}
+
+static void __copy_sm_root(struct dm_cache_metadata *cmd,
+			   struct cache_disk_superblock *disk_super)
+{
+	memcpy(&disk_super->metadata_space_map_root,
+	       &cmd->metadata_space_map_root,
+	       sizeof(cmd->metadata_space_map_root));
+}
+
 static int __write_initial_superblock(struct dm_cache_metadata *cmd)
 {
 	int r;
 	struct dm_block *sblock;
-	size_t metadata_len;
 	struct cache_disk_superblock *disk_super;
 	sector_t bdev_size = i_size_read(cmd->bdev->bd_inode) >> SECTOR_SHIFT;
 
@@ -272,12 +298,16 @@ static int __write_initial_superblock(struct dm_cache_metadata *cmd)
 	if (bdev_size > DM_CACHE_METADATA_MAX_SECTORS)
 		bdev_size = DM_CACHE_METADATA_MAX_SECTORS;
 
-	r = dm_sm_root_size(cmd->metadata_sm, &metadata_len);
+	r = dm_tm_pre_commit(cmd->tm);
 	if (r < 0)
 		return r;
 
-	r = dm_tm_pre_commit(cmd->tm);
-	if (r < 0)
+	/*
+	 * dm_sm_copy_root() can fail.  So we need to do it before we start
+	 * updating the superblock.
+	 */
+	r = __save_sm_root(cmd);
+	if (r)
 		return r;
 
 	r = superblock_lock_zero(cmd, &sblock);
@@ -293,16 +323,13 @@ static int __write_initial_superblock(struct dm_cache_metadata *cmd)
 	memset(disk_super->policy_version, 0, sizeof(disk_super->policy_version));
 	disk_super->policy_hint_size = 0;
 
-	r = dm_sm_copy_root(cmd->metadata_sm, &disk_super->metadata_space_map_root,
-			    metadata_len);
-	if (r < 0)
-		goto bad_locked;
+	__copy_sm_root(cmd, disk_super);
 
 	disk_super->mapping_root = cpu_to_le64(cmd->root);
 	disk_super->hint_root = cpu_to_le64(cmd->hint_root);
 	disk_super->discard_root = cpu_to_le64(cmd->discard_root);
 	disk_super->discard_block_size = cpu_to_le64(cmd->discard_block_size);
-	disk_super->discard_nr_blocks = cpu_to_le64(from_dblock(cmd->discard_nr_blocks));
+	disk_super->discard_nr_blocks = cpu_to_le64(from_oblock(cmd->discard_nr_blocks));
 	disk_super->metadata_block_size = cpu_to_le32(DM_CACHE_METADATA_BLOCK_SIZE >> SECTOR_SHIFT);
 	disk_super->data_block_size = cpu_to_le32(cmd->data_block_size);
 	disk_super->cache_blocks = cpu_to_le32(0);
@@ -313,10 +340,6 @@ static int __write_initial_superblock(struct dm_cache_metadata *cmd)
 	disk_super->write_misses = cpu_to_le32(0);
 
 	return dm_tm_commit(cmd->tm, sblock);
-
-bad_locked:
-	dm_bm_unlock(sblock);
-	return r;
 }
 
 static int __format_metadata(struct dm_cache_metadata *cmd)
@@ -496,7 +519,7 @@ static void read_superblock_fields(struct dm_cache_metadata *cmd,
 	cmd->hint_root = le64_to_cpu(disk_super->hint_root);
 	cmd->discard_root = le64_to_cpu(disk_super->discard_root);
 	cmd->discard_block_size = le64_to_cpu(disk_super->discard_block_size);
-	cmd->discard_nr_blocks = to_dblock(le64_to_cpu(disk_super->discard_nr_blocks));
+	cmd->discard_nr_blocks = to_oblock(le64_to_cpu(disk_super->discard_nr_blocks));
 	cmd->data_block_size = le32_to_cpu(disk_super->data_block_size);
 	cmd->cache_blocks = to_cblock(le32_to_cpu(disk_super->cache_blocks));
 	strncpy(cmd->policy_name, disk_super->policy_name, sizeof(cmd->policy_name));
@@ -530,8 +553,9 @@ static int __begin_transaction_flags(struct dm_cache_metadata *cmd,
 	disk_super = dm_block_data(sblock);
 	update_flags(disk_super, mutator);
 	read_superblock_fields(cmd, disk_super);
+	dm_bm_unlock(sblock);
 
-	return dm_bm_flush_and_unlock(cmd->bm, sblock);
+	return dm_bm_flush(cmd->bm);
 }
 
 static int __begin_transaction(struct dm_cache_metadata *cmd)
@@ -559,7 +583,6 @@ static int __commit_transaction(struct dm_cache_metadata *cmd,
 				flags_mutator mutator)
 {
 	int r;
-	size_t metadata_len;
 	struct cache_disk_superblock *disk_super;
 	struct dm_block *sblock;
 
@@ -577,8 +600,8 @@ static int __commit_transaction(struct dm_cache_metadata *cmd,
 	if (r < 0)
 		return r;
 
-	r = dm_sm_root_size(cmd->metadata_sm, &metadata_len);
-	if (r < 0)
+	r = __save_sm_root(cmd);
+	if (r)
 		return r;
 
 	r = superblock_lock(cmd, &sblock);
@@ -594,7 +617,7 @@ static int __commit_transaction(struct dm_cache_metadata *cmd,
 	disk_super->hint_root = cpu_to_le64(cmd->hint_root);
 	disk_super->discard_root = cpu_to_le64(cmd->discard_root);
 	disk_super->discard_block_size = cpu_to_le64(cmd->discard_block_size);
-	disk_super->discard_nr_blocks = cpu_to_le64(from_dblock(cmd->discard_nr_blocks));
+	disk_super->discard_nr_blocks = cpu_to_le64(from_oblock(cmd->discard_nr_blocks));
 	disk_super->cache_blocks = cpu_to_le32(from_cblock(cmd->cache_blocks));
 	strncpy(disk_super->policy_name, cmd->policy_name, sizeof(disk_super->policy_name));
 	disk_super->policy_version[0] = cpu_to_le32(cmd->policy_version[0]);
@@ -605,13 +628,7 @@ static int __commit_transaction(struct dm_cache_metadata *cmd,
 	disk_super->read_misses = cpu_to_le32(cmd->stats.read_misses);
 	disk_super->write_hits = cpu_to_le32(cmd->stats.write_hits);
 	disk_super->write_misses = cpu_to_le32(cmd->stats.write_misses);
-
-	r = dm_sm_copy_root(cmd->metadata_sm, &disk_super->metadata_space_map_root,
-			    metadata_len);
-	if (r < 0) {
-		dm_bm_unlock(sblock);
-		return r;
-	}
+	__copy_sm_root(cmd, disk_super);
 
 	return dm_tm_commit(cmd->tm, sblock);
 }
@@ -771,15 +788,15 @@ int dm_cache_resize(struct dm_cache_metadata *cmd, dm_cblock_t new_cache_size)
 
 int dm_cache_discard_bitset_resize(struct dm_cache_metadata *cmd,
 				   sector_t discard_block_size,
-				   dm_dblock_t new_nr_entries)
+				   dm_oblock_t new_nr_entries)
 {
 	int r;
 
 	down_write(&cmd->root_lock);
 	r = dm_bitset_resize(&cmd->discard_info,
 			     cmd->discard_root,
-			     from_dblock(cmd->discard_nr_blocks),
-			     from_dblock(new_nr_entries),
+			     from_oblock(cmd->discard_nr_blocks),
+			     from_oblock(new_nr_entries),
 			     false, &cmd->discard_root);
 	if (!r) {
 		cmd->discard_block_size = discard_block_size;
@@ -792,28 +809,28 @@ int dm_cache_discard_bitset_resize(struct dm_cache_metadata *cmd,
 	return r;
 }
 
-static int __set_discard(struct dm_cache_metadata *cmd, dm_dblock_t b)
+static int __set_discard(struct dm_cache_metadata *cmd, dm_oblock_t b)
 {
 	return dm_bitset_set_bit(&cmd->discard_info, cmd->discard_root,
-				 from_dblock(b), &cmd->discard_root);
+				 from_oblock(b), &cmd->discard_root);
 }
 
-static int __clear_discard(struct dm_cache_metadata *cmd, dm_dblock_t b)
+static int __clear_discard(struct dm_cache_metadata *cmd, dm_oblock_t b)
 {
 	return dm_bitset_clear_bit(&cmd->discard_info, cmd->discard_root,
-				   from_dblock(b), &cmd->discard_root);
+				   from_oblock(b), &cmd->discard_root);
 }
 
-static int __is_discarded(struct dm_cache_metadata *cmd, dm_dblock_t b,
+static int __is_discarded(struct dm_cache_metadata *cmd, dm_oblock_t b,
 			  bool *is_discarded)
 {
 	return dm_bitset_test_bit(&cmd->discard_info, cmd->discard_root,
-				  from_dblock(b), &cmd->discard_root,
+				  from_oblock(b), &cmd->discard_root,
 				  is_discarded);
 }
 
 static int __discard(struct dm_cache_metadata *cmd,
-		     dm_dblock_t dblock, bool discard)
+		     dm_oblock_t dblock, bool discard)
 {
 	int r;
 
@@ -826,7 +843,7 @@ static int __discard(struct dm_cache_metadata *cmd,
 }
 
 int dm_cache_set_discard(struct dm_cache_metadata *cmd,
-			 dm_dblock_t dblock, bool discard)
+			 dm_oblock_t dblock, bool discard)
 {
 	int r;
 
@@ -844,8 +861,8 @@ static int __load_discards(struct dm_cache_metadata *cmd,
 	dm_block_t b;
 	bool discard;
 
-	for (b = 0; b < from_dblock(cmd->discard_nr_blocks); b++) {
-		dm_dblock_t dblock = to_dblock(b);
+	for (b = 0; b < from_oblock(cmd->discard_nr_blocks); b++) {
+		dm_oblock_t dblock = to_oblock(b);
 
 		if (cmd->clean_when_opened) {
 			r = __is_discarded(cmd, dblock, &discard);
@@ -1228,22 +1245,12 @@ static int begin_hints(struct dm_cache_metadata *cmd, struct dm_cache_policy *po
 	return 0;
 }
 
-int dm_cache_begin_hints(struct dm_cache_metadata *cmd, struct dm_cache_policy *policy)
+static int save_hint(void *context, dm_cblock_t cblock, dm_oblock_t oblock, uint32_t hint)
 {
+	struct dm_cache_metadata *cmd = context;
+	__le32 value = cpu_to_le32(hint);
 	int r;
 
-	down_write(&cmd->root_lock);
-	r = begin_hints(cmd, policy);
-	up_write(&cmd->root_lock);
-
-	return r;
-}
-
-static int save_hint(struct dm_cache_metadata *cmd, dm_cblock_t cblock,
-		     uint32_t hint)
-{
-	int r;
-	__le32 value = cpu_to_le32(hint);
-
 	__dm_bless_for_disk(&value);
 
 	r = dm_array_set_value(&cmd->hint_info, cmd->hint_root,
@@ -1253,16 +1260,25 @@ static int save_hint(struct dm_cache_metadata *cmd, dm_cblock_t cblock,
 	return r;
 }
 
-int dm_cache_save_hint(struct dm_cache_metadata *cmd, dm_cblock_t cblock,
-		       uint32_t hint)
+static int write_hints(struct dm_cache_metadata *cmd, struct dm_cache_policy *policy)
 {
 	int r;
 
-	if (!hints_array_initialized(cmd))
-		return 0;
+	r = begin_hints(cmd, policy);
+	if (r) {
+		DMERR("begin_hints failed");
+		return r;
+	}
+
+	return policy_walk_mappings(policy, save_hint, cmd);
+}
+
+int dm_cache_write_hints(struct dm_cache_metadata *cmd, struct dm_cache_policy *policy)
+{
+	int r;
 
 	down_write(&cmd->root_lock);
-	r = save_hint(cmd, cblock, hint);
+	r = write_hints(cmd, policy);
 	up_write(&cmd->root_lock);
 
 	return r;
......
@@ -72,14 +72,14 @@ dm_cblock_t dm_cache_size(struct dm_cache_metadata *cmd);
 
 int dm_cache_discard_bitset_resize(struct dm_cache_metadata *cmd,
 				   sector_t discard_block_size,
-				   dm_dblock_t new_nr_entries);
+				   dm_oblock_t new_nr_entries);
 
 typedef int (*load_discard_fn)(void *context, sector_t discard_block_size,
-			       dm_dblock_t dblock, bool discarded);
+			       dm_oblock_t dblock, bool discarded);
 int dm_cache_load_discards(struct dm_cache_metadata *cmd,
 			   load_discard_fn fn, void *context);
-int dm_cache_set_discard(struct dm_cache_metadata *cmd, dm_dblock_t dblock, bool discard);
+int dm_cache_set_discard(struct dm_cache_metadata *cmd, dm_oblock_t dblock, bool discard);
 
 int dm_cache_remove_mapping(struct dm_cache_metadata *cmd, dm_cblock_t cblock);
 int dm_cache_insert_mapping(struct dm_cache_metadata *cmd, dm_cblock_t cblock, dm_oblock_t oblock);
@@ -128,14 +128,7 @@ void dm_cache_dump(struct dm_cache_metadata *cmd);
  * rather than querying the policy for each cblock, we let it walk its data
  * structures and fill in the hints in whatever order it wishes.
  */
-int dm_cache_begin_hints(struct dm_cache_metadata *cmd, struct dm_cache_policy *p);
-
-/*
- * requests hints for every cblock and stores in the metadata device.
- */
-int dm_cache_save_hint(struct dm_cache_metadata *cmd,
-		       dm_cblock_t cblock, uint32_t hint);
+int dm_cache_write_hints(struct dm_cache_metadata *cmd, struct dm_cache_policy *p);
 
 /*
  * Query method.  Are all the blocks in the cache clean?
......
@@ -237,9 +237,8 @@ struct cache {
 	/*
 	 * origin_blocks entries, discarded if set.
 	 */
-	dm_dblock_t discard_nr_blocks;
+	dm_oblock_t discard_nr_blocks;
 	unsigned long *discard_bitset;
-	uint32_t discard_block_size; /* a power of 2 times sectors per block */
 
 	/*
 	 * Rather than reconstructing the table line for the status we just
@@ -526,48 +525,33 @@ static dm_block_t block_div(dm_block_t b, uint32_t n)
 	return b;
 }
 
-static dm_dblock_t oblock_to_dblock(struct cache *cache, dm_oblock_t oblock)
-{
-	uint32_t discard_blocks = cache->discard_block_size;
-	dm_block_t b = from_oblock(oblock);
-
-	if (!block_size_is_power_of_two(cache))
-		discard_blocks = discard_blocks / cache->sectors_per_block;
-	else
-		discard_blocks >>= cache->sectors_per_block_shift;
-
-	b = block_div(b, discard_blocks);
-
-	return to_dblock(b);
-}
-
-static void set_discard(struct cache *cache, dm_dblock_t b)
+static void set_discard(struct cache *cache, dm_oblock_t b)
 {
 	unsigned long flags;
 
 	atomic_inc(&cache->stats.discard_count);
 
 	spin_lock_irqsave(&cache->lock, flags);
-	set_bit(from_dblock(b), cache->discard_bitset);
+	set_bit(from_oblock(b), cache->discard_bitset);
 	spin_unlock_irqrestore(&cache->lock, flags);
 }
 
-static void clear_discard(struct cache *cache, dm_dblock_t b)
+static void clear_discard(struct cache *cache, dm_oblock_t b)
 {
 	unsigned long flags;
 
 	spin_lock_irqsave(&cache->lock, flags);
-	clear_bit(from_dblock(b), cache->discard_bitset);
+	clear_bit(from_oblock(b), cache->discard_bitset);
 	spin_unlock_irqrestore(&cache->lock, flags);
 }
 
-static bool is_discarded(struct cache *cache, dm_dblock_t b)
+static bool is_discarded(struct cache *cache, dm_oblock_t b)
 {
 	int r;
 	unsigned long flags;
 
 	spin_lock_irqsave(&cache->lock, flags);
-	r = test_bit(from_dblock(b), cache->discard_bitset);
+	r = test_bit(from_oblock(b), cache->discard_bitset);
 	spin_unlock_irqrestore(&cache->lock, flags);
 
 	return r;
@@ -579,8 +563,7 @@ static bool is_discarded_oblock(struct cache *cache, dm_oblock_t b)
 	unsigned long flags;
 
 	spin_lock_irqsave(&cache->lock, flags);
-	r = test_bit(from_dblock(oblock_to_dblock(cache, b)),
-		     cache->discard_bitset);
+	r = test_bit(from_oblock(b), cache->discard_bitset);
 	spin_unlock_irqrestore(&cache->lock, flags);
 
 	return r;
@@ -705,7 +688,7 @@ static void remap_to_origin_clear_discard(struct cache *cache, struct bio *bio,
 	check_if_tick_bio_needed(cache, bio);
 	remap_to_origin(cache, bio);
 	if (bio_data_dir(bio) == WRITE)
-		clear_discard(cache, oblock_to_dblock(cache, oblock));
+		clear_discard(cache, oblock);
 }
 
 static void remap_to_cache_dirty(struct cache *cache, struct bio *bio,
@@ -715,7 +698,7 @@ static void remap_to_cache_dirty(struct cache *cache, struct bio *bio,
 	remap_to_cache(cache, bio, cblock);
 	if (bio_data_dir(bio) == WRITE) {
 		set_dirty(cache, oblock, cblock);
-		clear_discard(cache, oblock_to_dblock(cache, oblock));
+		clear_discard(cache, oblock);
 	}
 }
 
@@ -1288,14 +1271,14 @@ static void process_flush_bio(struct cache *cache, struct bio *bio)
 static void process_discard_bio(struct cache *cache, struct bio *bio)
 {
 	dm_block_t start_block = dm_sector_div_up(bio->bi_iter.bi_sector,
-						  cache->discard_block_size);
+						  cache->sectors_per_block);
 	dm_block_t end_block = bio_end_sector(bio);
 	dm_block_t b;
 
-	end_block = block_div(end_block, cache->discard_block_size);
+	end_block = block_div(end_block, cache->sectors_per_block);
 
 	for (b = start_block; b < end_block; b++)
-		set_discard(cache, to_dblock(b));
+		set_discard(cache, to_oblock(b));
 
 	bio_endio(bio, 0);
 }
@@ -2171,35 +2154,6 @@ static int create_cache_policy(struct cache *cache, struct cache_args *ca,
 	return 0;
 }
 
-/*
- * We want the discard block size to be a power of two, at least the size
- * of the cache block size, and have no more than 2^14 discard blocks
- * across the origin.
- */
-
-#define MAX_DISCARD_BLOCKS (1 << 14)
-
-static bool too_many_discard_blocks(sector_t discard_block_size,
-				    sector_t origin_size)
-{
-	(void) sector_div(origin_size, discard_block_size);
-
-	return origin_size > MAX_DISCARD_BLOCKS;
-}
-
-static sector_t calculate_discard_block_size(sector_t cache_block_size,
-					     sector_t origin_size)
-{
-	sector_t discard_block_size;
-
-	discard_block_size = roundup_pow_of_two(cache_block_size);
-
-	if (origin_size)
-		while (too_many_discard_blocks(discard_block_size, origin_size))
-			discard_block_size *= 2;
-
-	return discard_block_size;
-}
-
 #define DEFAULT_MIGRATION_THRESHOLD 2048
 
 static int cache_create(struct cache_args *ca, struct cache **result)
@@ -2321,16 +2275,13 @@ static int cache_create(struct cache_args *ca, struct cache **result)
 	}
 	clear_bitset(cache->dirty_bitset, from_cblock(cache->cache_size));
 
-	cache->discard_block_size =
-		calculate_discard_block_size(cache->sectors_per_block,
-					     cache->origin_sectors);
-	cache->discard_nr_blocks = oblock_to_dblock(cache, cache->origin_blocks);
-	cache->discard_bitset = alloc_bitset(from_dblock(cache->discard_nr_blocks));
+	cache->discard_nr_blocks = cache->origin_blocks;
+	cache->discard_bitset = alloc_bitset(from_oblock(cache->discard_nr_blocks));
 	if (!cache->discard_bitset) {
 		*error = "could not allocate discard bitset";
 		goto bad;
 	}
-	clear_bitset(cache->discard_bitset, from_dblock(cache->discard_nr_blocks));
+	clear_bitset(cache->discard_bitset, from_oblock(cache->discard_nr_blocks));
 
 	cache->copier = dm_kcopyd_client_create(&dm_kcopyd_throttle);
 	if (IS_ERR(cache->copier)) {
@@ -2614,16 +2565,16 @@ static int write_discard_bitset(struct cache *cache)
 {
 	unsigned i, r;
 
-	r = dm_cache_discard_bitset_resize(cache->cmd, cache->discard_block_size,
-					   cache->discard_nr_blocks);
+	r = dm_cache_discard_bitset_resize(cache->cmd, cache->sectors_per_block,
+					   cache->origin_blocks);
 	if (r) {
 		DMERR("could not resize on-disk discard bitset");
 		return r;
 	}
 
-	for (i = 0; i < from_dblock(cache->discard_nr_blocks); i++) {
-		r = dm_cache_set_discard(cache->cmd, to_dblock(i),
-					 is_discarded(cache, to_dblock(i)));
+	for (i = 0; i < from_oblock(cache->discard_nr_blocks); i++) {
+		r = dm_cache_set_discard(cache->cmd, to_oblock(i),
+					 is_discarded(cache, to_oblock(i)));
 		if (r)
 			return r;
 	}
@@ -2631,30 +2582,6 @@ static int write_discard_bitset(struct cache *cache)
 	return 0;
 }
 
-static int save_hint(void *context, dm_cblock_t cblock, dm_oblock_t oblock,
-		     uint32_t hint)
-{
-	struct cache *cache = context;
-	return dm_cache_save_hint(cache->cmd, cblock, hint);
-}
-
-static int write_hints(struct cache *cache)
-{
-	int r;
-
-	r = dm_cache_begin_hints(cache->cmd, cache->policy);
-	if (r) {
-		DMERR("dm_cache_begin_hints failed");
-		return r;
-	}
-
-	r = policy_walk_mappings(cache->policy, save_hint, cache);
-	if (r)
-		DMERR("policy_walk_mappings failed");
-
-	return r;
-}
-
 /*
  * returns true on success
  */
@@ -2672,7 +2599,7 @@ static bool sync_metadata(struct cache *cache)
 
 	save_stats(cache);
 
-	r3 = write_hints(cache);
+	r3 = dm_cache_write_hints(cache->cmd, cache->policy);
 	if (r3)
 		DMERR("could not write hints");
 
@@ -2720,16 +2647,14 @@ static int load_mapping(void *context, dm_oblock_t oblock, dm_cblock_t cblock,
 }
 
 static int load_discard(void *context, sector_t discard_block_size,
-			dm_dblock_t dblock, bool discard)
+			dm_oblock_t oblock, bool discard)
 {
 	struct cache *cache = context;
 
-	/* FIXME: handle mis-matched block size */
-
 	if (discard)
-		set_discard(cache, dblock);
+		set_discard(cache, oblock);
 	else
-		clear_discard(cache, dblock);
+		clear_discard(cache, oblock);
 
 	return 0;
 }
@@ -3120,8 +3045,8 @@ static void set_discard_limits(struct cache *cache, struct queue_limits *limits)
 	/*
 	 * FIXME: these limits may be incompatible with the cache device
 	 */
-	limits->max_discard_sectors = cache->discard_block_size * 1024;
-	limits->discard_granularity = cache->discard_block_size << SECTOR_SHIFT;
+	limits->max_discard_sectors = cache->sectors_per_block;
+	limits->discard_granularity = cache->sectors_per_block << SECTOR_SHIFT;
 }
 
 static void cache_io_hints(struct dm_target *ti, struct queue_limits *limits)
@@ -3145,7 +3070,7 @@ static void cache_io_hints(struct dm_target *ti, struct queue_limits *limits)
 
 static struct target_type cache_target = {
 	.name = "cache",
-	.version = {1, 3, 0},
+	.version = {1, 4, 0},
 	.module = THIS_MODULE,
 	.ctr = cache_ctr,
 	.dtr = cache_dtr,
......
This diff is collapsed.
This diff is collapsed.
@@ -945,7 +945,7 @@ bool dm_table_request_based(struct dm_table *t)
 	return dm_table_get_type(t) == DM_TYPE_REQUEST_BASED;
 }
 
-int dm_table_alloc_md_mempools(struct dm_table *t)
+static int dm_table_alloc_md_mempools(struct dm_table *t)
 {
 	unsigned type = dm_table_get_type(t);
 	unsigned per_bio_data_size = 0;
@@ -1618,6 +1618,25 @@ struct mapped_device *dm_table_get_md(struct dm_table *t)
 }
 EXPORT_SYMBOL(dm_table_get_md);
 
+void dm_table_run_md_queue_async(struct dm_table *t)
+{
+	struct mapped_device *md;
+	struct request_queue *queue;
+	unsigned long flags;
+
+	if (!dm_table_request_based(t))
+		return;
+
+	md = dm_table_get_md(t);
+	queue = dm_get_md_queue(md);
+	if (queue) {
+		spin_lock_irqsave(queue->queue_lock, flags);
+		blk_run_queue_async(queue);
+		spin_unlock_irqrestore(queue->queue_lock, flags);
+	}
+}
+EXPORT_SYMBOL(dm_table_run_md_queue_async);
+
 static int device_discard_capable(struct dm_target *ti, struct dm_dev *dev,
 				  sector_t start, sector_t len, void *data)
 {
......
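
A minimal sketch of how a request-based target might use this new
helper once it is ready to handle I/O again (the target context
'struct my_target' and the function name are illustrative, not part of
this patch):

	#include <linux/device-mapper.h>

	struct my_target {
		struct dm_target *ti;
		/* ... */
	};

	/*
	 * After a path or resource becomes usable again, kick the
	 * mapped device's request queue so previously requeued
	 * requests get dispatched.  The helper is a no-op for
	 * bio-based tables.
	 */
	static void my_target_io_ready(struct my_target *mt)
	{
		dm_table_run_md_queue_async(mt->ti->table);
	}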
@@ -192,6 +192,13 @@ struct dm_pool_metadata {
 	 * operation possible in this state is the closing of the device.
 	 */
 	bool fail_io:1;
+
+	/*
+	 * Reading the space map roots can fail, so we read it into these
+	 * buffers before the superblock is locked and updated.
+	 */
+	__u8 data_space_map_root[SPACE_MAP_ROOT_SIZE];
+	__u8 metadata_space_map_root[SPACE_MAP_ROOT_SIZE];
 };
 
 struct dm_thin_device {
@@ -431,26 +438,53 @@ static void __setup_btree_details(struct dm_pool_metadata *pmd)
 	pmd->details_info.value_type.equal = NULL;
 }
 
+static int save_sm_roots(struct dm_pool_metadata *pmd)
+{
+	int r;
+	size_t len;
+
+	r = dm_sm_root_size(pmd->metadata_sm, &len);
+	if (r < 0)
+		return r;
+
+	r = dm_sm_copy_root(pmd->metadata_sm, &pmd->metadata_space_map_root, len);
+	if (r < 0)
+		return r;
+
+	r = dm_sm_root_size(pmd->data_sm, &len);
+	if (r < 0)
+		return r;
+
+	return dm_sm_copy_root(pmd->data_sm, &pmd->data_space_map_root, len);
+}
+
+static void copy_sm_roots(struct dm_pool_metadata *pmd,
+			  struct thin_disk_superblock *disk)
+{
+	memcpy(&disk->metadata_space_map_root,
+	       &pmd->metadata_space_map_root,
+	       sizeof(pmd->metadata_space_map_root));
+
+	memcpy(&disk->data_space_map_root,
+	       &pmd->data_space_map_root,
+	       sizeof(pmd->data_space_map_root));
+}
+
 static int __write_initial_superblock(struct dm_pool_metadata *pmd)
 {
 	int r;
 	struct dm_block *sblock;
-	size_t metadata_len, data_len;
 	struct thin_disk_superblock *disk_super;
 	sector_t bdev_size = i_size_read(pmd->bdev->bd_inode) >> SECTOR_SHIFT;
 
 	if (bdev_size > THIN_METADATA_MAX_SECTORS)
 		bdev_size = THIN_METADATA_MAX_SECTORS;
 
-	r = dm_sm_root_size(pmd->metadata_sm, &metadata_len);
-	if (r < 0)
-		return r;
-
-	r = dm_sm_root_size(pmd->data_sm, &data_len);
+	r = dm_sm_commit(pmd->data_sm);
 	if (r < 0)
 		return r;
 
-	r = dm_sm_commit(pmd->data_sm);
+	r = save_sm_roots(pmd);
 	if (r < 0)
 		return r;
 
@@ -471,15 +505,7 @@ static int __write_initial_superblock(struct dm_pool_metadata *pmd)
 	disk_super->trans_id = 0;
 	disk_super->held_root = 0;
 
-	r = dm_sm_copy_root(pmd->metadata_sm, &disk_super->metadata_space_map_root,
-			    metadata_len);
-	if (r < 0)
-		goto bad_locked;
-
-	r = dm_sm_copy_root(pmd->data_sm, &disk_super->data_space_map_root,
-			    data_len);
-	if (r < 0)
-		goto bad_locked;
+	copy_sm_roots(pmd, disk_super);
 
 	disk_super->data_mapping_root = cpu_to_le64(pmd->root);
 	disk_super->device_details_root = cpu_to_le64(pmd->details_root);
@@ -488,10 +514,6 @@ static int __write_initial_superblock(struct dm_pool_metadata *pmd)
 	disk_super->data_block_size = cpu_to_le32(pmd->data_block_size);
 
 	return dm_tm_commit(pmd->tm, sblock);
-
-bad_locked:
-	dm_bm_unlock(sblock);
-	return r;
 }
 
 static int __format_metadata(struct dm_pool_metadata *pmd)
@@ -769,6 +791,10 @@ static int __commit_transaction(struct dm_pool_metadata *pmd)
 	if (r < 0)
 		return r;
 
+	r = save_sm_roots(pmd);
+	if (r < 0)
+		return r;
+
 	r = superblock_lock(pmd, &sblock);
 	if (r)
 		return r;
@@ -780,21 +806,9 @@ static int __commit_transaction(struct dm_pool_metadata *pmd)
 	disk_super->trans_id = cpu_to_le64(pmd->trans_id);
 	disk_super->flags = cpu_to_le32(pmd->flags);
 
-	r = dm_sm_copy_root(pmd->metadata_sm, &disk_super->metadata_space_map_root,
-			    metadata_len);
-	if (r < 0)
-		goto out_locked;
-
-	r = dm_sm_copy_root(pmd->data_sm, &disk_super->data_space_map_root,
-			    data_len);
-	if (r < 0)
-		goto out_locked;
+	copy_sm_roots(pmd, disk_super);
 
 	return dm_tm_commit(pmd->tm, sblock);
-
-out_locked:
-	dm_bm_unlock(sblock);
-	return r;
 }
 
 struct dm_pool_metadata *dm_pool_metadata_open(struct block_device *bdev,
......
This diff is collapsed.
@@ -94,13 +94,6 @@ struct dm_rq_clone_bio_info {
 	struct bio clone;
 };
 
-union map_info *dm_get_mapinfo(struct bio *bio)
-{
-	if (bio && bio->bi_private)
-		return &((struct dm_target_io *)bio->bi_private)->info;
-	return NULL;
-}
-
 union map_info *dm_get_rq_mapinfo(struct request *rq)
 {
 	if (rq && rq->end_io_data)
@@ -475,6 +468,11 @@ sector_t dm_get_size(struct mapped_device *md)
 	return get_capacity(md->disk);
 }
 
+struct request_queue *dm_get_md_queue(struct mapped_device *md)
+{
+	return md->queue;
+}
+
 struct dm_stats *dm_get_stats(struct mapped_device *md)
 {
 	return &md->stats;
@@ -760,7 +758,7 @@ static void dec_pending(struct dm_io *io, int error)
 static void clone_endio(struct bio *bio, int error)
 {
 	int r = 0;
-	struct dm_target_io *tio = bio->bi_private;
+	struct dm_target_io *tio = container_of(bio, struct dm_target_io, clone);
 	struct dm_io *io = tio->io;
 	struct mapped_device *md = tio->io->md;
 	dm_endio_fn endio = tio->ti->type->end_io;
@@ -794,7 +792,8 @@ static void clone_endio(struct bio *bio, int error)
  */
 static void end_clone_bio(struct bio *clone, int error)
 {
-	struct dm_rq_clone_bio_info *info = clone->bi_private;
+	struct dm_rq_clone_bio_info *info =
+		container_of(clone, struct dm_rq_clone_bio_info, clone);
 	struct dm_rq_target_io *tio = info->tio;
 	struct bio *bio = info->orig;
 	unsigned int nr_bytes = info->orig->bi_iter.bi_size;
@@ -1120,7 +1119,6 @@ static void __map_bio(struct dm_target_io *tio)
 	struct dm_target *ti = tio->ti;
 
 	clone->bi_end_io = clone_endio;
-	clone->bi_private = tio;
 
 	/*
 	 * Map the clone.  If r == 0 we don't need to do
@@ -1195,7 +1193,6 @@ static struct dm_target_io *alloc_tio(struct clone_info *ci,
 
 	tio->io = ci->io;
 	tio->ti = ti;
-	memset(&tio->info, 0, sizeof(tio->info));
 	tio->target_bio_nr = target_bio_nr;
 
 	return tio;
@@ -1530,7 +1527,6 @@ static int dm_rq_bio_constructor(struct bio *bio, struct bio *bio_orig,
 	info->orig = bio_orig;
 	info->tio = tio;
 	bio->bi_end_io = end_clone_bio;
-	bio->bi_private = info;
 
 	return 0;
 }
@@ -2172,7 +2168,7 @@ static struct dm_table *__unbind(struct mapped_device *md)
 		return NULL;
 
 	dm_table_event_callback(map, NULL, NULL);
-	rcu_assign_pointer(md->map, NULL);
+	RCU_INIT_POINTER(md->map, NULL);
 	dm_sync_table(md);
 
 	return map;
@@ -2873,8 +2869,6 @@ static const struct block_device_operations dm_blk_dops = {
 	.owner = THIS_MODULE
 };
 
-EXPORT_SYMBOL(dm_get_mapinfo);
-
 /*
  * module hooks
  */
......
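
The bi_private removals above work because the clone bio is embedded
as the final member of its wrapper structure, so the wrapper can be
recovered with container_of().  A minimal sketch of the pattern (the
struct and function names here are illustrative):

	#include <linux/bio.h>
	#include <linux/kernel.h>

	struct my_clone_info {
		void *cookie;
		struct bio clone;	/* must be last: the bio is
					 * followed by inline bvecs */
	};

	static void my_end_io(struct bio *bio, int error)
	{
		/* No bi_private needed: derive the wrapper from the
		 * embedded bio. */
		struct my_clone_info *info =
			container_of(bio, struct my_clone_info, clone);

		/* ... use info->cookie ... */
	}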
@@ -73,7 +73,6 @@ unsigned dm_table_get_type(struct dm_table *t);
 struct target_type *dm_table_get_immutable_target_type(struct dm_table *t);
 bool dm_table_request_based(struct dm_table *t);
 bool dm_table_supports_discards(struct dm_table *t);
-int dm_table_alloc_md_mempools(struct dm_table *t);
 void dm_table_free_md_mempools(struct dm_table *t);
 struct dm_md_mempools *dm_table_get_md_mempools(struct dm_table *t);
 
@@ -189,6 +188,7 @@ int dm_lock_for_deletion(struct mapped_device *md, bool mark_deferred, bool only
 int dm_cancel_deferred_remove(struct mapped_device *md);
 int dm_request_based(struct mapped_device *md);
 sector_t dm_get_size(struct mapped_device *md);
+struct request_queue *dm_get_md_queue(struct mapped_device *md);
 struct dm_stats *dm_get_stats(struct mapped_device *md);
 
 int dm_kobject_uevent(struct mapped_device *md, enum kobject_action action,
......
@@ -65,7 +65,7 @@ int dm_bitset_flush(struct dm_disk_bitset *info, dm_block_t root,
 	int r;
 	__le64 value;
 
-	if (!info->current_index_set)
+	if (!info->current_index_set || !info->dirty)
 		return 0;
 
 	value = cpu_to_le64(info->current_bits);
@@ -77,6 +77,8 @@ int dm_bitset_flush(struct dm_disk_bitset *info, dm_block_t root,
 		return r;
 
 	info->current_index_set = false;
+	info->dirty = false;
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(dm_bitset_flush);
@@ -94,6 +96,8 @@ static int read_bits(struct dm_disk_bitset *info, dm_block_t root,
 	info->current_bits = le64_to_cpu(value);
 	info->current_index_set = true;
 	info->current_index = array_index;
+	info->dirty = false;
+
 	return 0;
 }
 
@@ -126,6 +130,8 @@ int dm_bitset_set_bit(struct dm_disk_bitset *info, dm_block_t root,
 		return r;
 
 	set_bit(b, (unsigned long *) &info->current_bits);
+	info->dirty = true;
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(dm_bitset_set_bit);
@@ -141,6 +147,8 @@ int dm_bitset_clear_bit(struct dm_disk_bitset *info, dm_block_t root,
 		return r;
 
 	clear_bit(b, (unsigned long *) &info->current_bits);
+	info->dirty = true;
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(dm_bitset_clear_bit);
......
@@ -71,6 +71,7 @@ struct dm_disk_bitset {
 	uint64_t current_bits;
 
 	bool current_index_set:1;
+	bool dirty:1;
 };
 
 /*
......
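
A rough sketch of the flushing pattern the new dirty flag optimises
(the helper name and root handling are illustrative; see
persistent-data/dm-bitset.h for the real API):

	/*
	 * Mutate some bits, then flush the cached 64-bit word back to
	 * the on-disk array.  With the dirty flag, dm_bitset_flush()
	 * is now a no-op if nothing changed since the word was read.
	 */
	static int mark_blocks(struct dm_disk_bitset *info,
			       dm_block_t *root,
			       uint32_t begin, uint32_t end)
	{
		int r;
		uint32_t b;

		for (b = begin; b < end; b++) {
			r = dm_bitset_set_bit(info, *root, b, root);
			if (r)
				return r;
		}

		return dm_bitset_flush(info, *root, root);
	}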
@@ -595,25 +595,14 @@ int dm_bm_unlock(struct dm_block *b)
 }
 EXPORT_SYMBOL_GPL(dm_bm_unlock);
 
-int dm_bm_flush_and_unlock(struct dm_block_manager *bm,
-			   struct dm_block *superblock)
+int dm_bm_flush(struct dm_block_manager *bm)
 {
-	int r;
-
 	if (bm->read_only)
 		return -EPERM;
 
-	r = dm_bufio_write_dirty_buffers(bm->bufio);
-	if (unlikely(r)) {
-		dm_bm_unlock(superblock);
-		return r;
-	}
-
-	dm_bm_unlock(superblock);
-
 	return dm_bufio_write_dirty_buffers(bm->bufio);
 }
-EXPORT_SYMBOL_GPL(dm_bm_flush_and_unlock);
+EXPORT_SYMBOL_GPL(dm_bm_flush);
 
 void dm_bm_prefetch(struct dm_block_manager *bm, dm_block_t b)
 {
......
@@ -105,8 +105,7 @@ int dm_bm_unlock(struct dm_block *b);
  *
  * This method always blocks.
  */
-int dm_bm_flush_and_unlock(struct dm_block_manager *bm,
-			   struct dm_block *superblock);
+int dm_bm_flush(struct dm_block_manager *bm);
 
 /*
  * Request data is prefetched into the cache.
......
@@ -154,7 +154,7 @@ int dm_tm_pre_commit(struct dm_transaction_manager *tm)
 	if (r < 0)
 		return r;
 
-	return 0;
+	return dm_bm_flush(tm->bm);
 }
 EXPORT_SYMBOL_GPL(dm_tm_pre_commit);
 
@@ -164,8 +164,9 @@ int dm_tm_commit(struct dm_transaction_manager *tm, struct dm_block *root)
 		return -EWOULDBLOCK;
 
 	wipe_shadow_table(tm);
+	dm_bm_unlock(root);
 
-	return dm_bm_flush_and_unlock(tm->bm, root);
+	return dm_bm_flush(tm->bm);
 }
 EXPORT_SYMBOL_GPL(dm_tm_commit);
......
@@ -38,18 +38,17 @@ struct dm_transaction_manager *dm_tm_create_non_blocking_clone(struct dm_transac
 /*
  * We use a 2-phase commit here.
  *
- * i) In the first phase the block manager is told to start flushing, and
- * the changes to the space map are written to disk.  You should interrogate
- * your particular space map to get detail of its root node etc. to be
- * included in your superblock.
+ * i) Make all changes for the transaction *except* for the superblock.
+ * Then call dm_tm_pre_commit() to flush them to disk.
  *
- * ii) @root will be committed last.  You shouldn't use more than the
- * first 512 bytes of @root if you wish the transaction to survive a power
- * failure.  You *must* have a write lock held on @root for both stage (i)
- * and (ii).  The commit will drop the write lock.
+ * ii) Lock your superblock.  Update.  Then call dm_tm_commit() which will
+ * unlock the superblock and flush it.  No other blocks should be updated
+ * during this period.  Care should be taken to never unlock a partially
+ * updated superblock; perform any operations that could fail *before* you
+ * take the superblock lock.
 */
 int dm_tm_pre_commit(struct dm_transaction_manager *tm);
-int dm_tm_commit(struct dm_transaction_manager *tm, struct dm_block *root);
+int dm_tm_commit(struct dm_transaction_manager *tm, struct dm_block *superblock);
 
 /*
  * These methods are the only way to get hold of a writeable block.
......
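
A condensed sketch of the commit sequence described above, following
the shape of __commit_transaction() in the dm-cache and dm-thin
metadata code (the save_sm_roots, copy_sm_roots and superblock_lock
helpers are illustrative stand-ins for a client's own code):

	static int commit_transaction(struct my_metadata *md)
	{
		int r;
		struct dm_block *sblock;
		struct my_disk_superblock *disk_super;

		/*
		 * Phase i: flush everything except the superblock, and
		 * perform every operation that can fail (such as copying
		 * the space map roots) before the superblock is locked.
		 */
		r = dm_tm_pre_commit(md->tm);
		if (r < 0)
			return r;

		r = save_sm_roots(md);
		if (r < 0)
			return r;

		/*
		 * Phase ii: lock the superblock and update it from cached
		 * state only; nothing here may fail.  dm_tm_commit()
		 * unlocks the superblock and flushes it.
		 */
		r = superblock_lock(md, &sblock);
		if (r)
			return r;

		disk_super = dm_block_data(sblock);
		copy_sm_roots(md, disk_super);	/* plain memcpy of saved roots */

		return dm_tm_commit(md->tm, sblock);
	}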
@@ -23,7 +23,6 @@ typedef enum { STATUSTYPE_INFO, STATUSTYPE_TABLE } status_type_t;
 union map_info {
 	void *ptr;
-	unsigned long long ll;
 };
 
 /*
@@ -291,7 +290,6 @@ struct dm_target_callbacks {
 struct dm_target_io {
 	struct dm_io *io;
 	struct dm_target *ti;
-	union map_info info;
 	unsigned target_bio_nr;
 	struct bio clone;
 };
@@ -403,7 +401,6 @@ int dm_copy_name_and_uuid(struct mapped_device *md, char *name, char *uuid);
 struct gendisk *dm_disk(struct mapped_device *md);
 int dm_suspended(struct dm_target *ti);
 int dm_noflush_suspending(struct dm_target *ti);
-union map_info *dm_get_mapinfo(struct bio *bio);
 union map_info *dm_get_rq_mapinfo(struct request *rq);
 
 struct queue_limits *dm_get_queue_limits(struct mapped_device *md);
@@ -465,6 +462,11 @@ struct mapped_device *dm_table_get_md(struct dm_table *t);
  */
 void dm_table_event(struct dm_table *t);
 
+/*
+ * Run the queue for request-based targets.
+ */
+void dm_table_run_md_queue_async(struct dm_table *t);
+
 /*
  * The device must be suspended before calling this method.
  * Returns the previous table, which the caller must destroy.
......