Commit 8d370595 authored by Linus Torvalds

Merge tag 'xfs-for-linus-4.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs

Pull xfs and iomap updates from Dave Chinner:
 "The main things in this update are the iomap-based DAX infrastructure,
  an XFS delalloc rework, and a chunk of fixes to how log recovery
  schedules writeback to prevent spurious corruption detections when
  recovery of certain items was not required.

  The other main chunk of code is some preparation for the upcoming
  reflink functionality. Most of it is generic and cleanups that stand
  alone, but they were ready and reviewed so are in this pull request.

  Speaking of reflink, I'm currently planning to send you another pull
  request next week containing all the new reflink functionality. I'm
  working through a similar process to the last cycle, where I sent the
  reverse mapping code in a separate request because of how large it
  was. The reflink code merge is even bigger than reverse mapping, so
  I'll be doing the same thing again....

  Summary for this update:

   - change of XFS mailing list to linux-xfs@vger.kernel.org

   - iomap-based DAX infrastructure w/ XFS and ext2 support

   - small iomap fixes and additions

   - more efficient XFS delayed allocation infrastructure based on iomap

   - a rework of log recovery writeback scheduling to ensure we don't
     fail recovery when trying to replay items that are already on disk

   - some preparation patches for upcoming reflink support

   - configurable error handling fixes and documentation

   - aio access time update race fixes for XFS and
     generic_file_read_iter"

* tag 'xfs-for-linus-4.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs: (40 commits)
  fs: update atime before I/O in generic_file_read_iter
  xfs: update atime before I/O in xfs_file_dio_aio_read
  ext2: fix possible integer truncation in ext2_iomap_begin
  xfs: log recovery tracepoints to track current lsn and buffer submission
  xfs: update metadata LSN in buffers during log recovery
  xfs: don't warn on buffers not being recovered due to LSN
  xfs: pass current lsn to log recovery buffer validation
  xfs: rework log recovery to submit buffers on LSN boundaries
  xfs: quiesce the filesystem after recovery on readonly mount
  xfs: remote attribute blocks aren't really userdata
  ext2: use iomap to implement DAX
  ext2: stop passing buffer_head to ext2_get_blocks
  xfs: use iomap to implement DAX
  xfs: refactor xfs_setfilesize
  xfs: take the ilock shared if possible in xfs_file_iomap_begin
  xfs: fix locking for DAX writes
  dax: provide an iomap based fault handler
  dax: provide an iomap based dax read/write path
  dax: don't pass buffer_head to copy_user_dax
  dax: don't pass buffer_head to dax_insert_mapping
  ...
parents d230ec72 155cd433
......@@ -348,3 +348,126 @@ Removed Sysctls
---- -------
fs.xfs.xfsbufd_centisec v4.0
fs.xfs.age_buffer_centisecs v4.0
Error handling
==============
XFS can act differently according to the type of error found during its
operation. The implementation introduces the following concepts to the error
handler:
-failure speed:
Defines how fast XFS should propagate an error upwards when a specific
error is found during the filesystem operation. It can propagate
immediately, after a defined number of retries, after a set time period,
or simply retry forever.
-error classes:
Specifies the subsystem the error configuration will apply to, such as
metadata IO or memory allocation. Different subsystems will have
different error handlers for which behaviour can be configured.
-error handlers:
Defines the behavior for a specific error.
The filesystem behavior during an error can be set via sysfs files. Each
error handler works independently - the first condition met by an error handler
for a specific class will cause the error to be propagated rather than reset and
retried.
The action taken by the filesystem when the error is propagated is context
dependent - it may cause a shut down in the case of an unrecoverable error,
it may be reported back to userspace, or it may even be ignored because
there's nothing useful we can do with the error or anyone we can report it to (e.g.
during unmount).
The configuration files are organized into the following hierarchy for each
mounted filesystem:
/sys/fs/xfs/<dev>/error/<class>/<error>/
Where:
<dev>
The short device name of the mounted filesystem. This is the same device
name that shows up in XFS kernel error messages as "XFS(<dev>): ..."
<class>
The subsystem the error configuration belongs to. As of 4.9, the defined
classes are:
- "metadata": applies metadata buffer write IO
<error>
The individual error handler configurations.
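To make the layout concrete, a tool can read the current setting of one of these
knobs directly. A minimal userspace sketch follows; the device name "sda5" and the
"default" metadata handler (described further below) are illustrative only and must
be replaced with the actual short device name and handler of interest:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Illustrative path: <dev> = sda5, <class> = metadata,
         * <error> = default. Substitute real values as needed. */
        const char *knob =
            "/sys/fs/xfs/sda5/error/metadata/default/max_retries";
        char buf[32];
        ssize_t n;
        int fd = open(knob, O_RDONLY);

        if (fd < 0)
            return 1;
        n = read(fd, buf, sizeof(buf) - 1);
        close(fd);
        if (n <= 0)
            return 1;
        buf[n] = '\0';
        printf("max_retries = %s", buf);
        return 0;
    }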
Each filesystem has "global" error configuration options defined in their top
level directory:
/sys/fs/xfs/<dev>/error/
fail_at_unmount (Min: 0 Default: 1 Max: 1)
Defines the filesystem error behavior at unmount time.
If set to a value of 1, XFS will override all other error configurations
during unmount and replace them with "immediate fail" characteristics.
i.e. no retries, no retry timeout. This will always allow unmount to
succeed when there are persistent errors present.
If set to 0, the configured retry behaviour will continue until all
retries and/or timeouts have been exhausted. This will delay unmount
completion when there are persistent errors, and it may prevent the
filesystem from ever unmounting fully in the case of "retry forever"
handler configurations.
Note: there is no guarantee that fail_at_unmount can be set whilst an
unmount is in progress. It is possible that the sysfs entries are
removed by the unmounting filesystem before a "retry forever" error
handler configuration causes unmount to hang, and hence the filesystem
must be configured appropriately before unmount begins to prevent
unmount hangs.
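As a concrete illustration of the note above, a management tool would force
"immediate fail" semantics just before it unmounts the filesystem. A minimal
sketch, assuming the filesystem is mounted from a device whose short name is
"sda5" (illustrative only):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Illustrative path; use the real short device name. Setting
         * fail_at_unmount to 1 ensures no "retry forever" handler can
         * hang the subsequent unmount. */
        const char *knob = "/sys/fs/xfs/sda5/error/fail_at_unmount";
        int fd = open(knob, O_WRONLY);

        if (fd < 0)
            return 1;
        if (write(fd, "1", 1) != 1) {
            close(fd);
            return 1;
        }
        close(fd);
        /* The filesystem can now be unmounted (e.g. via umount(2))
         * without risk of hanging on persistent errors. */
        return 0;
    }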
Each filesystem has specific error class handlers that define the error
propagation behaviour for specific errors. There is also a "default" error
handler defined, which defines the behaviour for all errors that don't have
specific handlers defined. Where multiple retry constraints are configured for
a single error, the first retry configuration that expires will cause the error
to be propagated. The handler configurations are found in the directory:
/sys/fs/xfs/<dev>/error/<class>/<error>/
max_retries (Min: -1 Default: Varies Max: INTMAX)
Defines the allowed number of retries of a specific error before
the filesystem will propagate the error. The retry count for a given
error context (e.g. a specific metadata buffer) is reset every time
there is a successful completion of the operation.
Setting the value to "-1" will cause XFS to retry forever for this
specific error.
Setting the value to "0" will cause XFS to fail immediately when the
specific error is reported.
Setting the value to "N" (where 0 < N < Max) will make XFS retry the
operation "N" times before propagating the error.
retry_timeout_seconds (Min: -1 Default: Varies Max: 1 day)
Defines the amount of time (in seconds) that the filesystem is
allowed to retry its operations when the specific error is
found.
Setting the value to "-1" will allow XFS to retry forever for this
specific error.
Setting the value to "0" will cause XFS to fail immediately when the
specific error is reported.
Setting the value to "N" (where 0 < N < Max) will allow XFS to retry the
operation for up to "N" seconds before propagating the error.
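For example, the sketch below configures the "default" metadata error handler to
give up after 5 retries or 300 seconds, whichever constraint expires first. The
device name, handler and values are illustrative only:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Write a value string to one error-configuration sysfs file. */
    static int set_knob(const char *path, const char *val)
    {
        int fd = open(path, O_WRONLY);
        ssize_t n;

        if (fd < 0)
            return -1;
        n = write(fd, val, strlen(val));
        close(fd);
        return n < 0 ? -1 : 0;
    }

    int main(void)
    {
        /* Illustrative handler directory and limits. */
        const char *dir = "/sys/fs/xfs/sda5/error/metadata/default";
        char path[256];

        snprintf(path, sizeof(path), "%s/max_retries", dir);
        if (set_knob(path, "5"))
            return 1;
        snprintf(path, sizeof(path), "%s/retry_timeout_seconds", dir);
        if (set_knob(path, "300"))
            return 1;
        return 0;
    }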
Note: The default behaviour for a specific error handler is dependent on both
the class and error context. For example, the default values for
"metadata/ENODEV" are "0" rather than "-1" so that this error handler defaults
to "fail immediately" behaviour. This is done because ENODEV is a fatal,
unrecoverable error no matter how many times the metadata IO is retried.
......@@ -13099,11 +13099,10 @@ F: arch/x86/xen/*swiotlb*
F: drivers/xen/*swiotlb*
XFS FILESYSTEM
P: Silicon Graphics Inc
M: Dave Chinner <david@fromorbit.com>
M: xfs@oss.sgi.com
L: xfs@oss.sgi.com
W: http://oss.sgi.com/projects/xfs
M: linux-xfs@vger.kernel.org
L: linux-xfs@vger.kernel.org
W: http://xfs.org/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs.git
S: Supported
F: Documentation/filesystems/xfs.txt
......
......@@ -31,6 +31,8 @@
#include <linux/vmstat.h>
#include <linux/pfn_t.h>
#include <linux/sizes.h>
#include <linux/iomap.h>
#include "internal.h"
/*
* We use lowest available bit in exceptional entry for locking, other two
......@@ -580,14 +582,13 @@ static int dax_load_hole(struct address_space *mapping, void *entry,
return VM_FAULT_LOCKED;
}
static int copy_user_bh(struct page *to, struct inode *inode,
struct buffer_head *bh, unsigned long vaddr)
static int copy_user_dax(struct block_device *bdev, sector_t sector, size_t size,
struct page *to, unsigned long vaddr)
{
struct blk_dax_ctl dax = {
.sector = to_sector(bh, inode),
.size = bh->b_size,
.sector = sector,
.size = size,
};
struct block_device *bdev = bh->b_bdev;
void *vto;
if (dax_map_atomic(bdev, &dax) < 0)
......@@ -790,14 +791,13 @@ int dax_writeback_mapping_range(struct address_space *mapping,
EXPORT_SYMBOL_GPL(dax_writeback_mapping_range);
static int dax_insert_mapping(struct address_space *mapping,
struct buffer_head *bh, void **entryp,
struct vm_area_struct *vma, struct vm_fault *vmf)
struct block_device *bdev, sector_t sector, size_t size,
void **entryp, struct vm_area_struct *vma, struct vm_fault *vmf)
{
unsigned long vaddr = (unsigned long)vmf->virtual_address;
struct block_device *bdev = bh->b_bdev;
struct blk_dax_ctl dax = {
.sector = to_sector(bh, mapping->host),
.size = bh->b_size,
.sector = sector,
.size = size,
};
void *ret;
void *entry = *entryp;
......@@ -868,7 +868,8 @@ int dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
if (vmf->cow_page) {
struct page *new_page = vmf->cow_page;
if (buffer_written(&bh))
error = copy_user_bh(new_page, inode, &bh, vaddr);
error = copy_user_dax(bh.b_bdev, to_sector(&bh, inode),
bh.b_size, new_page, vaddr);
else
clear_user_highpage(new_page, vaddr);
if (error)
......@@ -898,7 +899,8 @@ int dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
/* Filesystem should not return unwritten buffers to us! */
WARN_ON_ONCE(buffer_unwritten(&bh) || buffer_new(&bh));
error = dax_insert_mapping(mapping, &bh, &entry, vma, vmf);
error = dax_insert_mapping(mapping, bh.b_bdev, to_sector(&bh, inode),
bh.b_size, &entry, vma, vmf);
unlock_entry:
put_locked_mapping_entry(mapping, vmf->pgoff, entry);
out:
......@@ -1241,3 +1243,229 @@ int dax_truncate_page(struct inode *inode, loff_t from, get_block_t get_block)
return dax_zero_page_range(inode, from, length, get_block);
}
EXPORT_SYMBOL_GPL(dax_truncate_page);
#ifdef CONFIG_FS_IOMAP
static loff_t
iomap_dax_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
struct iomap *iomap)
{
struct iov_iter *iter = data;
loff_t end = pos + length, done = 0;
ssize_t ret = 0;
if (iov_iter_rw(iter) == READ) {
end = min(end, i_size_read(inode));
if (pos >= end)
return 0;
if (iomap->type == IOMAP_HOLE || iomap->type == IOMAP_UNWRITTEN)
return iov_iter_zero(min(length, end - pos), iter);
}
if (WARN_ON_ONCE(iomap->type != IOMAP_MAPPED))
return -EIO;
while (pos < end) {
unsigned offset = pos & (PAGE_SIZE - 1);
struct blk_dax_ctl dax = { 0 };
ssize_t map_len;
dax.sector = iomap->blkno +
(((pos & PAGE_MASK) - iomap->offset) >> 9);
dax.size = (length + offset + PAGE_SIZE - 1) & PAGE_MASK;
map_len = dax_map_atomic(iomap->bdev, &dax);
if (map_len < 0) {
ret = map_len;
break;
}
dax.addr += offset;
map_len -= offset;
if (map_len > end - pos)
map_len = end - pos;
if (iov_iter_rw(iter) == WRITE)
map_len = copy_from_iter_pmem(dax.addr, map_len, iter);
else
map_len = copy_to_iter(dax.addr, map_len, iter);
dax_unmap_atomic(iomap->bdev, &dax);
if (map_len <= 0) {
ret = map_len ? map_len : -EFAULT;
break;
}
pos += map_len;
length -= map_len;
done += map_len;
}
return done ? done : ret;
}
/**
* iomap_dax_rw - Perform I/O to a DAX file
* @iocb: The control block for this I/O
* @iter: The addresses to do I/O from or to
* @ops: iomap ops passed from the file system
*
* This function performs read and write operations to directly mapped
* persistent memory. The callers needs to take care of read/write exclusion
* and evicting any page cache pages in the region under I/O.
*/
ssize_t
iomap_dax_rw(struct kiocb *iocb, struct iov_iter *iter,
struct iomap_ops *ops)
{
struct address_space *mapping = iocb->ki_filp->f_mapping;
struct inode *inode = mapping->host;
loff_t pos = iocb->ki_pos, ret = 0, done = 0;
unsigned flags = 0;
if (iov_iter_rw(iter) == WRITE)
flags |= IOMAP_WRITE;
/*
* Yes, even DAX files can have page cache attached to them: A zeroed
* page is inserted into the pagecache when we have to serve a write
* fault on a hole. It should never be dirtied and can simply be
* dropped from the pagecache once we get real data for the page.
*
* XXX: This is racy against mmap, and there's nothing we can do about
* it. We'll eventually need to shift this down even further so that
* we can check if we allocated blocks over a hole first.
*/
if (mapping->nrpages) {
ret = invalidate_inode_pages2_range(mapping,
pos >> PAGE_SHIFT,
(pos + iov_iter_count(iter) - 1) >> PAGE_SHIFT);
WARN_ON_ONCE(ret);
}
while (iov_iter_count(iter)) {
ret = iomap_apply(inode, pos, iov_iter_count(iter), flags, ops,
iter, iomap_dax_actor);
if (ret <= 0)
break;
pos += ret;
done += ret;
}
iocb->ki_pos += done;
return done ? done : ret;
}
EXPORT_SYMBOL_GPL(iomap_dax_rw);
/**
* iomap_dax_fault - handle a page fault on a DAX file
* @vma: The virtual memory area where the fault occurred
* @vmf: The description of the fault
* @ops: iomap ops passed from the file system
*
* When a page fault occurs, filesystems may call this helper in their fault
* or mkwrite handler for DAX files. Assumes the caller has done all the
* necessary locking for the page fault to proceed successfully.
*/
int iomap_dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
struct iomap_ops *ops)
{
struct address_space *mapping = vma->vm_file->f_mapping;
struct inode *inode = mapping->host;
unsigned long vaddr = (unsigned long)vmf->virtual_address;
loff_t pos = (loff_t)vmf->pgoff << PAGE_SHIFT;
sector_t sector;
struct iomap iomap = { 0 };
unsigned flags = 0;
int error, major = 0;
void *entry;
/*
* Check whether offset isn't beyond end of file now. Caller is supposed
* to hold locks serializing us with truncate / punch hole so this is
* a reliable test.
*/
if (pos >= i_size_read(inode))
return VM_FAULT_SIGBUS;
entry = grab_mapping_entry(mapping, vmf->pgoff);
if (IS_ERR(entry)) {
error = PTR_ERR(entry);
goto out;
}
if ((vmf->flags & FAULT_FLAG_WRITE) && !vmf->cow_page)
flags |= IOMAP_WRITE;
/*
* Note that we don't bother to use iomap_apply here: DAX required
* the file system block size to be equal the page size, which means
* that we never have to deal with more than a single extent here.
*/
error = ops->iomap_begin(inode, pos, PAGE_SIZE, flags, &iomap);
if (error)
goto unlock_entry;
if (WARN_ON_ONCE(iomap.offset + iomap.length < pos + PAGE_SIZE)) {
error = -EIO; /* fs corruption? */
goto unlock_entry;
}
sector = iomap.blkno + (((pos & PAGE_MASK) - iomap.offset) >> 9);
if (vmf->cow_page) {
switch (iomap.type) {
case IOMAP_HOLE:
case IOMAP_UNWRITTEN:
clear_user_highpage(vmf->cow_page, vaddr);
break;
case IOMAP_MAPPED:
error = copy_user_dax(iomap.bdev, sector, PAGE_SIZE,
vmf->cow_page, vaddr);
break;
default:
WARN_ON_ONCE(1);
error = -EIO;
break;
}
if (error)
goto unlock_entry;
if (!radix_tree_exceptional_entry(entry)) {
vmf->page = entry;
return VM_FAULT_LOCKED;
}
vmf->entry = entry;
return VM_FAULT_DAX_LOCKED;
}
switch (iomap.type) {
case IOMAP_MAPPED:
if (iomap.flags & IOMAP_F_NEW) {
count_vm_event(PGMAJFAULT);
mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT);
major = VM_FAULT_MAJOR;
}
error = dax_insert_mapping(mapping, iomap.bdev, sector,
PAGE_SIZE, &entry, vma, vmf);
break;
case IOMAP_UNWRITTEN:
case IOMAP_HOLE:
if (!(vmf->flags & FAULT_FLAG_WRITE))
return dax_load_hole(mapping, entry, vmf);
/*FALLTHRU*/
default:
WARN_ON_ONCE(1);
error = -EIO;
break;
}
unlock_entry:
put_locked_mapping_entry(mapping, vmf->pgoff, entry);
out:
if (error == -ENOMEM)
return VM_FAULT_OOM | major;
/* -EBUSY is fine, somebody else faulted on the same PTE */
if (error < 0 && error != -EBUSY)
return VM_FAULT_SIGBUS | major;
return VM_FAULT_NOPAGE | major;
}
EXPORT_SYMBOL_GPL(iomap_dax_fault);
#endif /* CONFIG_FS_IOMAP */
config EXT2_FS
tristate "Second extended fs support"
select FS_IOMAP if FS_DAX
help
Ext2 is a standard Linux file system for hard disks.
......
......@@ -814,6 +814,7 @@ extern const struct file_operations ext2_file_operations;
/* inode.c */
extern const struct address_space_operations ext2_aops;
extern const struct address_space_operations ext2_nobh_aops;
extern struct iomap_ops ext2_iomap_ops;
/* namei.c */
extern const struct inode_operations ext2_dir_inode_operations;
......
......@@ -22,11 +22,59 @@
#include <linux/pagemap.h>
#include <linux/dax.h>
#include <linux/quotaops.h>
#include <linux/iomap.h>
#include <linux/uio.h>
#include "ext2.h"
#include "xattr.h"
#include "acl.h"
#ifdef CONFIG_FS_DAX
static ssize_t ext2_dax_read_iter(struct kiocb *iocb, struct iov_iter *to)
{
struct inode *inode = iocb->ki_filp->f_mapping->host;
ssize_t ret;
if (!iov_iter_count(to))
return 0; /* skip atime */
inode_lock_shared(inode);
ret = iomap_dax_rw(iocb, to, &ext2_iomap_ops);
inode_unlock_shared(inode);
file_accessed(iocb->ki_filp);
return ret;
}
static ssize_t ext2_dax_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
struct file *file = iocb->ki_filp;
struct inode *inode = file->f_mapping->host;
ssize_t ret;
inode_lock(inode);
ret = generic_write_checks(iocb, from);
if (ret <= 0)
goto out_unlock;
ret = file_remove_privs(file);
if (ret)
goto out_unlock;
ret = file_update_time(file);
if (ret)
goto out_unlock;
ret = iomap_dax_rw(iocb, from, &ext2_iomap_ops);
if (ret > 0 && iocb->ki_pos > i_size_read(inode)) {
i_size_write(inode, iocb->ki_pos);
mark_inode_dirty(inode);
}
out_unlock:
inode_unlock(inode);
if (ret > 0)
ret = generic_write_sync(iocb, ret);
return ret;
}
/*
* The lock ordering for ext2 DAX fault paths is:
*
......@@ -51,7 +99,7 @@ static int ext2_dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
}
down_read(&ei->dax_sem);
ret = dax_fault(vma, vmf, ext2_get_block);
ret = iomap_dax_fault(vma, vmf, &ext2_iomap_ops);
up_read(&ei->dax_sem);
if (vmf->flags & FAULT_FLAG_WRITE)
......@@ -156,14 +204,28 @@ int ext2_fsync(struct file *file, loff_t start, loff_t end, int datasync)
return ret;
}
/*
* We have mostly NULL's here: the current defaults are ok for
* the ext2 filesystem.
*/
static ssize_t ext2_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
{
#ifdef CONFIG_FS_DAX
if (IS_DAX(iocb->ki_filp->f_mapping->host))
return ext2_dax_read_iter(iocb, to);
#endif
return generic_file_read_iter(iocb, to);
}
static ssize_t ext2_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
#ifdef CONFIG_FS_DAX
if (IS_DAX(iocb->ki_filp->f_mapping->host))
return ext2_dax_write_iter(iocb, from);
#endif
return generic_file_write_iter(iocb, from);
}
const struct file_operations ext2_file_operations = {
.llseek = generic_file_llseek,
.read_iter = generic_file_read_iter,
.write_iter = generic_file_write_iter,
.read_iter = ext2_file_read_iter,
.write_iter = ext2_file_write_iter,
.unlocked_ioctl = ext2_ioctl,
#ifdef CONFIG_COMPAT
.compat_ioctl = ext2_compat_ioctl,
......
......@@ -32,6 +32,7 @@
#include <linux/buffer_head.h>
#include <linux/mpage.h>
#include <linux/fiemap.h>
#include <linux/iomap.h>
#include <linux/namei.h>
#include <linux/uio.h>
#include "ext2.h"
......@@ -618,7 +619,7 @@ static void ext2_splice_branch(struct inode *inode,
*/
static int ext2_get_blocks(struct inode *inode,
sector_t iblock, unsigned long maxblocks,
struct buffer_head *bh_result,
u32 *bno, bool *new, bool *boundary,
int create)
{
int err = -EIO;
......@@ -644,7 +645,6 @@ static int ext2_get_blocks(struct inode *inode,
/* Simplest case - block found, no allocation needed */
if (!partial) {
first_block = le32_to_cpu(chain[depth - 1].key);
clear_buffer_new(bh_result); /* What's this do? */
count++;
/*map more blocks*/
while (count < maxblocks && count <= blocks_to_boundary) {
......@@ -699,7 +699,6 @@ static int ext2_get_blocks(struct inode *inode,
mutex_unlock(&ei->truncate_mutex);
if (err)
goto cleanup;
clear_buffer_new(bh_result);
goto got_it;
}
}
......@@ -755,15 +754,16 @@ static int ext2_get_blocks(struct inode *inode,
mutex_unlock(&ei->truncate_mutex);
goto cleanup;
}
} else
set_buffer_new(bh_result);
} else {
*new = true;
}
ext2_splice_branch(inode, iblock, partial, indirect_blks, count);
mutex_unlock(&ei->truncate_mutex);
got_it:
map_bh(bh_result, inode->i_sb, le32_to_cpu(chain[depth-1].key));
*bno = le32_to_cpu(chain[depth-1].key);
if (count > blocks_to_boundary)
set_buffer_boundary(bh_result);
*boundary = true;
err = count;
/* Clean up and exit */
partial = chain + depth - 1; /* the whole chain */
......@@ -775,19 +775,82 @@ static int ext2_get_blocks(struct inode *inode,
return err;
}
int ext2_get_block(struct inode *inode, sector_t iblock, struct buffer_head *bh_result, int create)
int ext2_get_block(struct inode *inode, sector_t iblock,
struct buffer_head *bh_result, int create)
{
unsigned max_blocks = bh_result->b_size >> inode->i_blkbits;
int ret = ext2_get_blocks(inode, iblock, max_blocks,
bh_result, create);
if (ret > 0) {
bh_result->b_size = (ret << inode->i_blkbits);
ret = 0;
bool new = false, boundary = false;
u32 bno;
int ret;
ret = ext2_get_blocks(inode, iblock, max_blocks, &bno, &new, &boundary,
create);
if (ret <= 0)
return ret;
map_bh(bh_result, inode->i_sb, bno);
bh_result->b_size = (ret << inode->i_blkbits);
if (new)
set_buffer_new(bh_result);
if (boundary)
set_buffer_boundary(bh_result);
return 0;
}
#ifdef CONFIG_FS_DAX
static int ext2_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
unsigned flags, struct iomap *iomap)
{
unsigned int blkbits = inode->i_blkbits;
unsigned long first_block = offset >> blkbits;
unsigned long max_blocks = (length + (1 << blkbits) - 1) >> blkbits;
bool new = false, boundary = false;
u32 bno;
int ret;
ret = ext2_get_blocks(inode, first_block, max_blocks,
&bno, &new, &boundary, flags & IOMAP_WRITE);
if (ret < 0)
return ret;
iomap->flags = 0;
iomap->bdev = inode->i_sb->s_bdev;
iomap->offset = (u64)first_block << blkbits;
if (ret == 0) {
iomap->type = IOMAP_HOLE;
iomap->blkno = IOMAP_NULL_BLOCK;
iomap->length = 1 << blkbits;
} else {
iomap->type = IOMAP_MAPPED;
iomap->blkno = (sector_t)bno << (blkbits - 9);
iomap->length = (u64)ret << blkbits;
iomap->flags |= IOMAP_F_MERGED;
}
return ret;
if (new)
iomap->flags |= IOMAP_F_NEW;
return 0;
}
static int
ext2_iomap_end(struct inode *inode, loff_t offset, loff_t length,
ssize_t written, unsigned flags, struct iomap *iomap)
{
if (iomap->type == IOMAP_MAPPED &&
written < length &&
(flags & IOMAP_WRITE))
ext2_write_failed(inode->i_mapping, offset + length);
return 0;
}
struct iomap_ops ext2_iomap_ops = {
.iomap_begin = ext2_iomap_begin,
.iomap_end = ext2_iomap_end,
};
#endif /* CONFIG_FS_DAX */
int ext2_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
u64 start, u64 len)
{
......@@ -873,11 +936,10 @@ ext2_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
loff_t offset = iocb->ki_pos;
ssize_t ret;
if (IS_DAX(inode))
ret = dax_do_io(iocb, inode, iter, ext2_get_block, NULL,
DIO_LOCKING);
else
ret = blockdev_direct_IO(iocb, inode, iter, ext2_get_block);
if (WARN_ON_ONCE(IS_DAX(inode)))
return -EIO;
ret = blockdev_direct_IO(iocb, inode, iter, ext2_get_block);
if (ret < 0 && iov_iter_rw(iter) == WRITE)
ext2_write_failed(mapping, offset + count);
return ret;
......
......@@ -12,6 +12,7 @@
struct super_block;
struct file_system_type;
struct iomap;
struct iomap_ops;
struct linux_binprm;
struct path;
struct mount;
......@@ -164,3 +165,13 @@ extern struct dentry_operations ns_dentry_operations;
extern int do_vfs_ioctl(struct file *file, unsigned int fd, unsigned int cmd,
unsigned long arg);
extern long vfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
/*
* iomap support:
*/
typedef loff_t (*iomap_actor_t)(struct inode *inode, loff_t pos, loff_t len,
void *data, struct iomap *iomap);
loff_t iomap_apply(struct inode *inode, loff_t pos, loff_t length,
unsigned flags, struct iomap_ops *ops, void *data,
iomap_actor_t actor);
......@@ -27,9 +27,6 @@
#include <linux/dax.h>
#include "internal.h"
typedef loff_t (*iomap_actor_t)(struct inode *inode, loff_t pos, loff_t len,
void *data, struct iomap *iomap);
/*
* Execute a iomap write on a segment of the mapping that spans a
* contiguous range of pages that have identical block mapping state.
......@@ -41,7 +38,7 @@ typedef loff_t (*iomap_actor_t)(struct inode *inode, loff_t pos, loff_t len,
* resources they require in the iomap_begin call, and release them in the
* iomap_end call.
*/
static loff_t
loff_t
iomap_apply(struct inode *inode, loff_t pos, loff_t length, unsigned flags,
struct iomap_ops *ops, void *data, iomap_actor_t actor)
{
......@@ -252,6 +249,88 @@ iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *iter,
}
EXPORT_SYMBOL_GPL(iomap_file_buffered_write);
static struct page *
__iomap_read_page(struct inode *inode, loff_t offset)
{
struct address_space *mapping = inode->i_mapping;
struct page *page;
page = read_mapping_page(mapping, offset >> PAGE_SHIFT, NULL);
if (IS_ERR(page))
return page;
if (!PageUptodate(page)) {
put_page(page);
return ERR_PTR(-EIO);
}
return page;
}
static loff_t
iomap_dirty_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
struct iomap *iomap)
{
long status = 0;
ssize_t written = 0;
do {
struct page *page, *rpage;
unsigned long offset; /* Offset into pagecache page */
unsigned long bytes; /* Bytes to write to page */
offset = (pos & (PAGE_SIZE - 1));
bytes = min_t(unsigned long, PAGE_SIZE - offset, length);
rpage = __iomap_read_page(inode, pos);
if (IS_ERR(rpage))
return PTR_ERR(rpage);
status = iomap_write_begin(inode, pos, bytes,
AOP_FLAG_NOFS | AOP_FLAG_UNINTERRUPTIBLE,
&page, iomap);
put_page(rpage);
if (unlikely(status))
return status;
WARN_ON_ONCE(!PageUptodate(page));
status = iomap_write_end(inode, pos, bytes, bytes, page);
if (unlikely(status <= 0)) {
if (WARN_ON_ONCE(status == 0))
return -EIO;
return status;
}
cond_resched();
pos += status;
written += status;
length -= status;
balance_dirty_pages_ratelimited(inode->i_mapping);
} while (length);
return written;
}
int
iomap_file_dirty(struct inode *inode, loff_t pos, loff_t len,
struct iomap_ops *ops)
{
loff_t ret;
while (len) {
ret = iomap_apply(inode, pos, len, IOMAP_WRITE, ops, NULL,
iomap_dirty_actor);
if (ret <= 0)
return ret;
pos += ret;
len -= ret;
}
return 0;
}
EXPORT_SYMBOL_GPL(iomap_file_dirty);
static int iomap_zero(struct inode *inode, loff_t pos, unsigned offset,
unsigned bytes, struct iomap *iomap)
{
......@@ -430,6 +509,8 @@ static int iomap_to_fiemap(struct fiemap_extent_info *fi,
if (iomap->flags & IOMAP_F_MERGED)
flags |= FIEMAP_EXTENT_MERGED;
if (iomap->flags & IOMAP_F_SHARED)
flags |= FIEMAP_EXTENT_SHARED;
return fiemap_fill_next_extent(fi, iomap->offset,
iomap->blkno != IOMAP_NULL_BLOCK ? iomap->blkno << 9: 0,
......
......@@ -52,6 +52,7 @@ xfs-y += $(addprefix libxfs/, \
xfs_inode_fork.o \
xfs_inode_buf.o \
xfs_log_rlimit.o \
xfs_ag_resv.o \
xfs_rmap.o \
xfs_rmap_btree.o \
xfs_sb.o \
......
/*
* Copyright (C) 2016 Oracle. All Rights Reserved.
*
* Author: Darrick J. Wong <darrick.wong@oracle.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it would be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include "xfs.h"
#include "xfs_fs.h"
#include "xfs_shared.h"
#include "xfs_format.h"
#include "xfs_log_format.h"
#include "xfs_trans_resv.h"
#include "xfs_sb.h"
#include "xfs_mount.h"
#include "xfs_defer.h"
#include "xfs_alloc.h"
#include "xfs_error.h"
#include "xfs_trace.h"
#include "xfs_cksum.h"
#include "xfs_trans.h"
#include "xfs_bit.h"
#include "xfs_bmap.h"
#include "xfs_bmap_btree.h"
#include "xfs_ag_resv.h"
#include "xfs_trans_space.h"
#include "xfs_rmap_btree.h"
#include "xfs_btree.h"
/*
* Per-AG Block Reservations
*
* For some kinds of allocation group metadata structures, it is advantageous
* to reserve a small number of blocks in each AG so that future expansions of
* that data structure do not encounter ENOSPC because errors during a btree
* split cause the filesystem to go offline.
*
* Prior to the introduction of reflink, this wasn't an issue because the free
* space btrees maintain a reserve of space (the AGFL) to handle any expansion
* that may be necessary; and allocations of other metadata (inodes, BMBT,
* dir/attr) aren't restricted to a single AG. However, with reflink it is
* possible to allocate all the space in an AG, have subsequent reflink/CoW
* activity expand the refcount btree, and discover that there's no space left
* to handle that expansion. Since we can calculate the maximum size of the
* refcount btree, we can reserve space for it and avoid ENOSPC.
*
* Handling per-AG reservations consists of three changes to the allocator's
* behavior: First, because these reservations are always needed, we decrease
* the ag_max_usable counter to reflect the size of the AG after the reserved
* blocks are taken. Second, the reservations must be reflected in the
* fdblocks count to maintain proper accounting. Third, each AG must maintain
* its own reserved block counter so that we can calculate the amount of space
* that must remain free to maintain the reservations. Fourth, the "remaining
* reserved blocks" count must be used when calculating the length of the
* longest free extent in an AG and to clamp maxlen in the per-AG allocation
* functions. In other words, we maintain a virtual allocation via in-core
* accounting tricks so that we don't have to clean up after a crash. :)
*
* Reserved blocks can be managed by passing one of the enum xfs_ag_resv_type
* values via struct xfs_alloc_arg or directly to the xfs_free_extent
* function. It might seem a little funny to maintain a reservoir of blocks
* to feed another reservoir, but the AGFL only holds enough blocks to get
* through the next transaction. The per-AG reservation is to ensure (we
* hope) that each AG never runs out of blocks. Each data structure wanting
* to use the reservation system should update ask/used in xfs_ag_resv_init.
*/
/*
* Are we critically low on blocks? For now we'll define that as the number
* of blocks we can get our hands on being less than 10% of what we reserved
* or less than some arbitrary number (maximum btree height).
*/
bool
xfs_ag_resv_critical(
struct xfs_perag *pag,
enum xfs_ag_resv_type type)
{
xfs_extlen_t avail;
xfs_extlen_t orig;
switch (type) {
case XFS_AG_RESV_METADATA:
avail = pag->pagf_freeblks - pag->pag_agfl_resv.ar_reserved;
orig = pag->pag_meta_resv.ar_asked;
break;
case XFS_AG_RESV_AGFL:
avail = pag->pagf_freeblks + pag->pagf_flcount -
pag->pag_meta_resv.ar_reserved;
orig = pag->pag_agfl_resv.ar_asked;
break;
default:
ASSERT(0);
return false;
}
trace_xfs_ag_resv_critical(pag, type, avail);
/* Critically low if less than 10% or max btree height remains. */
return avail < orig / 10 || avail < XFS_BTREE_MAXLEVELS;
}
/*
* How many blocks are reserved but not used, and therefore must not be
* allocated away?
*/
xfs_extlen_t
xfs_ag_resv_needed(
struct xfs_perag *pag,
enum xfs_ag_resv_type type)
{
xfs_extlen_t len;
len = pag->pag_meta_resv.ar_reserved + pag->pag_agfl_resv.ar_reserved;
switch (type) {
case XFS_AG_RESV_METADATA:
case XFS_AG_RESV_AGFL:
len -= xfs_perag_resv(pag, type)->ar_reserved;
break;
case XFS_AG_RESV_NONE:
/* empty */
break;
default:
ASSERT(0);
}
trace_xfs_ag_resv_needed(pag, type, len);
return len;
}
/* Clean out a reservation */
static int
__xfs_ag_resv_free(
struct xfs_perag *pag,
enum xfs_ag_resv_type type)
{
struct xfs_ag_resv *resv;
xfs_extlen_t oldresv;
int error;
trace_xfs_ag_resv_free(pag, type, 0);
resv = xfs_perag_resv(pag, type);
pag->pag_mount->m_ag_max_usable += resv->ar_asked;
/*
* AGFL blocks are always considered "free", so whatever
* was reserved at mount time must be given back at umount.
*/
if (type == XFS_AG_RESV_AGFL)
oldresv = resv->ar_orig_reserved;
else
oldresv = resv->ar_reserved;
error = xfs_mod_fdblocks(pag->pag_mount, oldresv, true);
resv->ar_reserved = 0;
resv->ar_asked = 0;
if (error)
trace_xfs_ag_resv_free_error(pag->pag_mount, pag->pag_agno,
error, _RET_IP_);
return error;
}
/* Free a per-AG reservation. */
int
xfs_ag_resv_free(
struct xfs_perag *pag)
{
int error;
int err2;
error = __xfs_ag_resv_free(pag, XFS_AG_RESV_AGFL);
err2 = __xfs_ag_resv_free(pag, XFS_AG_RESV_METADATA);
if (err2 && !error)
error = err2;
return error;
}
static int
__xfs_ag_resv_init(
struct xfs_perag *pag,
enum xfs_ag_resv_type type,
xfs_extlen_t ask,
xfs_extlen_t used)
{
struct xfs_mount *mp = pag->pag_mount;
struct xfs_ag_resv *resv;
int error;
resv = xfs_perag_resv(pag, type);
if (used > ask)
ask = used;
resv->ar_asked = ask;
resv->ar_reserved = resv->ar_orig_reserved = ask - used;
mp->m_ag_max_usable -= ask;
trace_xfs_ag_resv_init(pag, type, ask);
error = xfs_mod_fdblocks(mp, -(int64_t)resv->ar_reserved, true);
if (error)
trace_xfs_ag_resv_init_error(pag->pag_mount, pag->pag_agno,
error, _RET_IP_);
return error;
}
/* Create a per-AG block reservation. */
int
xfs_ag_resv_init(
struct xfs_perag *pag)
{
xfs_extlen_t ask;
xfs_extlen_t used;
int error = 0;
/* Create the metadata reservation. */
if (pag->pag_meta_resv.ar_asked == 0) {
ask = used = 0;
error = __xfs_ag_resv_init(pag, XFS_AG_RESV_METADATA,
ask, used);
if (error)
goto out;
}
/* Create the AGFL metadata reservation */
if (pag->pag_agfl_resv.ar_asked == 0) {
ask = used = 0;
error = __xfs_ag_resv_init(pag, XFS_AG_RESV_AGFL, ask, used);
if (error)
goto out;
}
out:
return error;
}
/* Allocate a block from the reservation. */
void
xfs_ag_resv_alloc_extent(
struct xfs_perag *pag,
enum xfs_ag_resv_type type,
struct xfs_alloc_arg *args)
{
struct xfs_ag_resv *resv;
xfs_extlen_t len;
uint field;
trace_xfs_ag_resv_alloc_extent(pag, type, args->len);
switch (type) {
case XFS_AG_RESV_METADATA:
case XFS_AG_RESV_AGFL:
resv = xfs_perag_resv(pag, type);
break;
default:
ASSERT(0);
/* fall through */
case XFS_AG_RESV_NONE:
field = args->wasdel ? XFS_TRANS_SB_RES_FDBLOCKS :
XFS_TRANS_SB_FDBLOCKS;
xfs_trans_mod_sb(args->tp, field, -(int64_t)args->len);
return;
}
len = min_t(xfs_extlen_t, args->len, resv->ar_reserved);
resv->ar_reserved -= len;
if (type == XFS_AG_RESV_AGFL)
return;
/* Allocations of reserved blocks only need on-disk sb updates... */
xfs_trans_mod_sb(args->tp, XFS_TRANS_SB_RES_FDBLOCKS, -(int64_t)len);
/* ...but non-reserved blocks need in-core and on-disk updates. */
if (args->len > len)
xfs_trans_mod_sb(args->tp, XFS_TRANS_SB_FDBLOCKS,
-((int64_t)args->len - len));
}
/* Free a block to the reservation. */
void
xfs_ag_resv_free_extent(
struct xfs_perag *pag,
enum xfs_ag_resv_type type,
struct xfs_trans *tp,
xfs_extlen_t len)
{
xfs_extlen_t leftover;
struct xfs_ag_resv *resv;
trace_xfs_ag_resv_free_extent(pag, type, len);
switch (type) {
case XFS_AG_RESV_METADATA:
case XFS_AG_RESV_AGFL:
resv = xfs_perag_resv(pag, type);
break;
default:
ASSERT(0);
/* fall through */
case XFS_AG_RESV_NONE:
xfs_trans_mod_sb(tp, XFS_TRANS_SB_FDBLOCKS, (int64_t)len);
return;
}
leftover = min_t(xfs_extlen_t, len, resv->ar_asked - resv->ar_reserved);
resv->ar_reserved += leftover;
if (type == XFS_AG_RESV_AGFL)
return;
/* Freeing into the reserved pool only requires on-disk update... */
xfs_trans_mod_sb(tp, XFS_TRANS_SB_RES_FDBLOCKS, len);
/* ...but freeing beyond that requires in-core and on-disk update. */
if (len > leftover)
xfs_trans_mod_sb(tp, XFS_TRANS_SB_FDBLOCKS, len - leftover);
}
/*
* Copyright (C) 2016 Oracle. All Rights Reserved.
*
* Author: Darrick J. Wong <darrick.wong@oracle.com>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it would be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef __XFS_AG_RESV_H__
#define __XFS_AG_RESV_H__
int xfs_ag_resv_free(struct xfs_perag *pag);
int xfs_ag_resv_init(struct xfs_perag *pag);
bool xfs_ag_resv_critical(struct xfs_perag *pag, enum xfs_ag_resv_type type);
xfs_extlen_t xfs_ag_resv_needed(struct xfs_perag *pag,
enum xfs_ag_resv_type type);
void xfs_ag_resv_alloc_extent(struct xfs_perag *pag, enum xfs_ag_resv_type type,
struct xfs_alloc_arg *args);
void xfs_ag_resv_free_extent(struct xfs_perag *pag, enum xfs_ag_resv_type type,
struct xfs_trans *tp, xfs_extlen_t len);
#endif /* __XFS_AG_RESV_H__ */
This diff is collapsed.
......@@ -85,20 +85,33 @@ typedef struct xfs_alloc_arg {
xfs_extlen_t len; /* output: actual size of extent */
xfs_alloctype_t type; /* allocation type XFS_ALLOCTYPE_... */
xfs_alloctype_t otype; /* original allocation type */
int datatype; /* mask defining data type treatment */
char wasdel; /* set if allocation was prev delayed */
char wasfromfl; /* set if allocation is from freelist */
char isfl; /* set if is freelist blocks - !acctg */
char userdata; /* mask defining userdata treatment */
xfs_fsblock_t firstblock; /* io first block allocated */
struct xfs_owner_info oinfo; /* owner of blocks being allocated */
enum xfs_ag_resv_type resv; /* block reservation to use */
} xfs_alloc_arg_t;
/*
* Defines for userdata
* Defines for datatype
*/
#define XFS_ALLOC_USERDATA (1 << 0)/* allocation is for user data*/
#define XFS_ALLOC_INITIAL_USER_DATA (1 << 1)/* special case start of file */
#define XFS_ALLOC_USERDATA_ZERO (1 << 2)/* zero extent on allocation */
#define XFS_ALLOC_NOBUSY (1 << 3)/* Busy extents not allowed */
static inline bool
xfs_alloc_is_userdata(int datatype)
{
return (datatype & ~XFS_ALLOC_NOBUSY) != 0;
}
static inline bool
xfs_alloc_allow_busy_reuse(int datatype)
{
return (datatype & XFS_ALLOC_NOBUSY) == 0;
}
/* freespace limit calculations */
#define XFS_ALLOC_AGFL_RESERVE 4
......@@ -106,7 +119,8 @@ unsigned int xfs_alloc_set_aside(struct xfs_mount *mp);
unsigned int xfs_alloc_ag_max_usable(struct xfs_mount *mp);
xfs_extlen_t xfs_alloc_longest_free_extent(struct xfs_mount *mp,
struct xfs_perag *pag, xfs_extlen_t need);
struct xfs_perag *pag, xfs_extlen_t need,
xfs_extlen_t reserved);
unsigned int xfs_alloc_min_freelist(struct xfs_mount *mp,
struct xfs_perag *pag);
......@@ -184,7 +198,8 @@ xfs_free_extent(
struct xfs_trans *tp, /* transaction pointer */
xfs_fsblock_t bno, /* starting block number of extent */
xfs_extlen_t len, /* length of extent */
struct xfs_owner_info *oinfo);/* extent owner */
struct xfs_owner_info *oinfo, /* extent owner */
enum xfs_ag_resv_type type); /* block reservation type */
int /* error */
xfs_alloc_lookup_ge(
......
......@@ -47,6 +47,7 @@
#include "xfs_attr_leaf.h"
#include "xfs_filestream.h"
#include "xfs_rmap.h"
#include "xfs_ag_resv.h"
kmem_zone_t *xfs_bmap_free_item_zone;
......@@ -1388,7 +1389,7 @@ xfs_bmap_search_multi_extents(
* Else, *lastxp will be set to the index of the found
* entry; *gotp will contain the entry.
*/
STATIC xfs_bmbt_rec_host_t * /* pointer to found extent entry */
xfs_bmbt_rec_host_t * /* pointer to found extent entry */
xfs_bmap_search_extents(
xfs_inode_t *ip, /* incore inode pointer */
xfs_fileoff_t bno, /* block number searched for */
......@@ -3347,7 +3348,8 @@ xfs_bmap_adjacent(
mp = ap->ip->i_mount;
nullfb = *ap->firstblock == NULLFSBLOCK;
rt = XFS_IS_REALTIME_INODE(ap->ip) && ap->userdata;
rt = XFS_IS_REALTIME_INODE(ap->ip) &&
xfs_alloc_is_userdata(ap->datatype);
fb_agno = nullfb ? NULLAGNUMBER : XFS_FSB_TO_AGNO(mp, *ap->firstblock);
/*
* If allocating at eof, and there's a previous real block,
......@@ -3501,7 +3503,8 @@ xfs_bmap_longest_free_extent(
}
longest = xfs_alloc_longest_free_extent(mp, pag,
xfs_alloc_min_freelist(mp, pag));
xfs_alloc_min_freelist(mp, pag),
xfs_ag_resv_needed(pag, XFS_AG_RESV_NONE));
if (*blen < longest)
*blen = longest;
......@@ -3622,7 +3625,7 @@ xfs_bmap_btalloc(
{
xfs_mount_t *mp; /* mount point structure */
xfs_alloctype_t atype = 0; /* type for allocation routines */
xfs_extlen_t align; /* minimum allocation alignment */
xfs_extlen_t align = 0; /* minimum allocation alignment */
xfs_agnumber_t fb_agno; /* ag number of ap->firstblock */
xfs_agnumber_t ag;
xfs_alloc_arg_t args;
......@@ -3645,7 +3648,8 @@ xfs_bmap_btalloc(
else if (mp->m_dalign)
stripe_align = mp->m_dalign;
align = ap->userdata ? xfs_get_extsz_hint(ap->ip) : 0;
if (xfs_alloc_is_userdata(ap->datatype))
align = xfs_get_extsz_hint(ap->ip);
if (unlikely(align)) {
error = xfs_bmap_extsize_align(mp, &ap->got, &ap->prev,
align, 0, ap->eof, 0, ap->conv,
......@@ -3658,7 +3662,8 @@ xfs_bmap_btalloc(
nullfb = *ap->firstblock == NULLFSBLOCK;
fb_agno = nullfb ? NULLAGNUMBER : XFS_FSB_TO_AGNO(mp, *ap->firstblock);
if (nullfb) {
if (ap->userdata && xfs_inode_is_filestream(ap->ip)) {
if (xfs_alloc_is_userdata(ap->datatype) &&
xfs_inode_is_filestream(ap->ip)) {
ag = xfs_filestream_lookup_ag(ap->ip);
ag = (ag != NULLAGNUMBER) ? ag : 0;
ap->blkno = XFS_AGB_TO_FSB(mp, ag, 0);
......@@ -3698,7 +3703,8 @@ xfs_bmap_btalloc(
* enough for the request. If one isn't found, then adjust
* the minimum allocation size to the largest space found.
*/
if (ap->userdata && xfs_inode_is_filestream(ap->ip))
if (xfs_alloc_is_userdata(ap->datatype) &&
xfs_inode_is_filestream(ap->ip))
error = xfs_bmap_btalloc_filestreams(ap, &args, &blen);
else
error = xfs_bmap_btalloc_nullfb(ap, &args, &blen);
......@@ -3781,9 +3787,9 @@ xfs_bmap_btalloc(
}
args.minleft = ap->minleft;
args.wasdel = ap->wasdel;
args.isfl = 0;
args.userdata = ap->userdata;
if (ap->userdata & XFS_ALLOC_USERDATA_ZERO)
args.resv = XFS_AG_RESV_NONE;
args.datatype = ap->datatype;
if (ap->datatype & XFS_ALLOC_USERDATA_ZERO)
args.ip = ap->ip;
error = xfs_alloc_vextent(&args);
......@@ -3877,7 +3883,8 @@ STATIC int
xfs_bmap_alloc(
struct xfs_bmalloca *ap) /* bmap alloc argument struct */
{
if (XFS_IS_REALTIME_INODE(ap->ip) && ap->userdata)
if (XFS_IS_REALTIME_INODE(ap->ip) &&
xfs_alloc_is_userdata(ap->datatype))
return xfs_bmap_rtalloc(ap);
return xfs_bmap_btalloc(ap);
}
......@@ -4074,7 +4081,7 @@ xfs_bmapi_read(
return 0;
}
STATIC int
int
xfs_bmapi_reserve_delalloc(
struct xfs_inode *ip,
xfs_fileoff_t aoff,
......@@ -4170,91 +4177,6 @@ xfs_bmapi_reserve_delalloc(
return error;
}
/*
* Map file blocks to filesystem blocks, adding delayed allocations as needed.
*/
int
xfs_bmapi_delay(
struct xfs_inode *ip, /* incore inode */
xfs_fileoff_t bno, /* starting file offs. mapped */
xfs_filblks_t len, /* length to map in file */
struct xfs_bmbt_irec *mval, /* output: map values */
int *nmap, /* i/o: mval size/count */
int flags) /* XFS_BMAPI_... */
{
struct xfs_mount *mp = ip->i_mount;
struct xfs_ifork *ifp = XFS_IFORK_PTR(ip, XFS_DATA_FORK);
struct xfs_bmbt_irec got; /* current file extent record */
struct xfs_bmbt_irec prev; /* previous file extent record */
xfs_fileoff_t obno; /* old block number (offset) */
xfs_fileoff_t end; /* end of mapped file region */
xfs_extnum_t lastx; /* last useful extent number */
int eof; /* we've hit the end of extents */
int n = 0; /* current extent index */
int error = 0;
ASSERT(*nmap >= 1);
ASSERT(*nmap <= XFS_BMAP_MAX_NMAP);
ASSERT(!(flags & ~XFS_BMAPI_ENTIRE));
ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
if (unlikely(XFS_TEST_ERROR(
(XFS_IFORK_FORMAT(ip, XFS_DATA_FORK) != XFS_DINODE_FMT_EXTENTS &&
XFS_IFORK_FORMAT(ip, XFS_DATA_FORK) != XFS_DINODE_FMT_BTREE),
mp, XFS_ERRTAG_BMAPIFORMAT, XFS_RANDOM_BMAPIFORMAT))) {
XFS_ERROR_REPORT("xfs_bmapi_delay", XFS_ERRLEVEL_LOW, mp);
return -EFSCORRUPTED;
}
if (XFS_FORCED_SHUTDOWN(mp))
return -EIO;
XFS_STATS_INC(mp, xs_blk_mapw);
if (!(ifp->if_flags & XFS_IFEXTENTS)) {
error = xfs_iread_extents(NULL, ip, XFS_DATA_FORK);
if (error)
return error;
}
xfs_bmap_search_extents(ip, bno, XFS_DATA_FORK, &eof, &lastx, &got, &prev);
end = bno + len;
obno = bno;
while (bno < end && n < *nmap) {
if (eof || got.br_startoff > bno) {
error = xfs_bmapi_reserve_delalloc(ip, bno, len, &got,
&prev, &lastx, eof);
if (error) {
if (n == 0) {
*nmap = 0;
return error;
}
break;
}
}
/* set up the extent map to return. */
xfs_bmapi_trim_map(mval, &got, &bno, len, obno, end, n, flags);
xfs_bmapi_update_map(&mval, &bno, &len, obno, end, &n, flags);
/* If we're done, stop now. */
if (bno >= end || n >= *nmap)
break;
/* Else go on to the next record. */
prev = got;
if (++lastx < ifp->if_bytes / sizeof(xfs_bmbt_rec_t))
xfs_bmbt_get_all(xfs_iext_get_ext(ifp, lastx), &got);
else
eof = 1;
}
*nmap = n;
return 0;
}
static int
xfs_bmapi_allocate(
struct xfs_bmalloca *bma)
......@@ -4287,15 +4209,21 @@ xfs_bmapi_allocate(
}
/*
* Indicate if this is the first user data in the file, or just any
* user data. And if it is userdata, indicate whether it needs to
* be initialised to zero during allocation.
* Set the data type being allocated. For the data fork, the first data
* in the file is treated differently to all other allocations. For the
* attribute fork, we only need to ensure the allocated range is not on
* the busy list.
*/
if (!(bma->flags & XFS_BMAPI_METADATA)) {
bma->userdata = (bma->offset == 0) ?
XFS_ALLOC_INITIAL_USER_DATA : XFS_ALLOC_USERDATA;
bma->datatype = XFS_ALLOC_NOBUSY;
if (whichfork == XFS_DATA_FORK) {
if (bma->offset == 0)
bma->datatype |= XFS_ALLOC_INITIAL_USER_DATA;
else
bma->datatype |= XFS_ALLOC_USERDATA;
}
if (bma->flags & XFS_BMAPI_ZERO)
bma->userdata |= XFS_ALLOC_USERDATA_ZERO;
bma->datatype |= XFS_ALLOC_USERDATA_ZERO;
}
bma->minlen = (bma->flags & XFS_BMAPI_CONTIG) ? bma->length : 1;
......@@ -4565,7 +4493,7 @@ xfs_bmapi_write(
bma.tp = tp;
bma.ip = ip;
bma.total = total;
bma.userdata = 0;
bma.datatype = 0;
bma.dfops = dfops;
bma.firstblock = firstblock;
......
......@@ -54,7 +54,7 @@ struct xfs_bmalloca {
bool wasdel; /* replacing a delayed allocation */
bool aeof; /* allocated space at eof */
bool conv; /* overwriting unwritten extents */
char userdata;/* userdata mask */
int datatype;/* data type being allocated */
int flags;
};
......@@ -181,9 +181,6 @@ int xfs_bmap_read_extents(struct xfs_trans *tp, struct xfs_inode *ip,
int xfs_bmapi_read(struct xfs_inode *ip, xfs_fileoff_t bno,
xfs_filblks_t len, struct xfs_bmbt_irec *mval,
int *nmap, int flags);
int xfs_bmapi_delay(struct xfs_inode *ip, xfs_fileoff_t bno,
xfs_filblks_t len, struct xfs_bmbt_irec *mval,
int *nmap, int flags);
int xfs_bmapi_write(struct xfs_trans *tp, struct xfs_inode *ip,
xfs_fileoff_t bno, xfs_filblks_t len, int flags,
xfs_fsblock_t *firstblock, xfs_extlen_t total,
......@@ -202,5 +199,12 @@ int xfs_bmap_shift_extents(struct xfs_trans *tp, struct xfs_inode *ip,
struct xfs_defer_ops *dfops, enum shift_direction direction,
int num_exts);
int xfs_bmap_split_extent(struct xfs_inode *ip, xfs_fileoff_t split_offset);
struct xfs_bmbt_rec_host *
xfs_bmap_search_extents(struct xfs_inode *ip, xfs_fileoff_t bno,
int fork, int *eofp, xfs_extnum_t *lastxp,
struct xfs_bmbt_irec *gotp, struct xfs_bmbt_irec *prevp);
int xfs_bmapi_reserve_delalloc(struct xfs_inode *ip, xfs_fileoff_t aoff,
xfs_filblks_t len, struct xfs_bmbt_irec *got,
struct xfs_bmbt_irec *prev, xfs_extnum_t *lastx, int eof);
#endif /* __XFS_BMAP_H__ */
......@@ -2070,7 +2070,7 @@ __xfs_btree_updkeys(
struct xfs_buf *bp0,
bool force_all)
{
union xfs_btree_bigkey key; /* keys from current level */
union xfs_btree_key key; /* keys from current level */
union xfs_btree_key *lkey; /* keys from the next level up */
union xfs_btree_key *hkey;
union xfs_btree_key *nlkey; /* keys from the next level up */
......@@ -2086,7 +2086,7 @@ __xfs_btree_updkeys(
trace_xfs_btree_updkeys(cur, level, bp0);
lkey = (union xfs_btree_key *)&key;
lkey = &key;
hkey = xfs_btree_high_key_from_key(cur, lkey);
xfs_btree_get_keys(cur, block, lkey);
for (level++; level < cur->bc_nlevels; level++) {
......@@ -3226,7 +3226,7 @@ xfs_btree_insrec(
struct xfs_buf *bp; /* buffer for block */
union xfs_btree_ptr nptr; /* new block ptr */
struct xfs_btree_cur *ncur; /* new btree cursor */
union xfs_btree_bigkey nkey; /* new block key */
union xfs_btree_key nkey; /* new block key */
union xfs_btree_key *lkey;
int optr; /* old key/record index */
int ptr; /* key/record index */
......@@ -3241,7 +3241,7 @@ xfs_btree_insrec(
XFS_BTREE_TRACE_ARGIPR(cur, level, *ptrp, &rec);
ncur = NULL;
lkey = (union xfs_btree_key *)&nkey;
lkey = &nkey;
/*
* If we have an external root pointer, and we've made it to the
......@@ -3444,14 +3444,14 @@ xfs_btree_insert(
union xfs_btree_ptr nptr; /* new block number (split result) */
struct xfs_btree_cur *ncur; /* new cursor (split result) */
struct xfs_btree_cur *pcur; /* previous level's cursor */
union xfs_btree_bigkey bkey; /* key of block to insert */
union xfs_btree_key bkey; /* key of block to insert */
union xfs_btree_key *key;
union xfs_btree_rec rec; /* record to insert */
level = 0;
ncur = NULL;
pcur = cur;
key = (union xfs_btree_key *)&bkey;
key = &bkey;
xfs_btree_set_ptr_null(cur, &nptr);
......@@ -4797,3 +4797,50 @@ xfs_btree_query_range(
return xfs_btree_overlapped_query_range(cur, &low_key, &high_key,
fn, priv);
}
/*
* Calculate the number of blocks needed to store a given number of records
* in a short-format (per-AG metadata) btree.
*/
xfs_extlen_t
xfs_btree_calc_size(
struct xfs_mount *mp,
uint *limits,
unsigned long long len)
{
int level;
int maxrecs;
xfs_extlen_t rval;
maxrecs = limits[0];
for (level = 0, rval = 0; len > 1; level++) {
len += maxrecs - 1;
do_div(len, maxrecs);
maxrecs = limits[1];
rval += len;
}
return rval;
}
int
xfs_btree_count_blocks_helper(
struct xfs_btree_cur *cur,
int level,
void *data)
{
xfs_extlen_t *blocks = data;
(*blocks)++;
return 0;
}
/* Count the blocks in a btree and return the result in *blocks. */
int
xfs_btree_count_blocks(
struct xfs_btree_cur *cur,
xfs_extlen_t *blocks)
{
*blocks = 0;
return xfs_btree_visit_blocks(cur, xfs_btree_count_blocks_helper,
blocks);
}
......@@ -37,30 +37,18 @@ union xfs_btree_ptr {
__be64 l; /* long form ptr */
};
union xfs_btree_key {
struct xfs_bmbt_key bmbt;
xfs_bmdr_key_t bmbr; /* bmbt root block */
xfs_alloc_key_t alloc;
struct xfs_inobt_key inobt;
struct xfs_rmap_key rmap;
};
/*
* In-core key that holds both low and high keys for overlapped btrees.
* The two keys are packed next to each other on disk, so do the same
* in memory. Preserve the existing xfs_btree_key as a single key to
* avoid the mental model breakage that would happen if we passed a
* bigkey into a function that operates on a single key.
* The in-core btree key. Overlapping btrees actually store two keys
* per pointer, so we reserve enough memory to hold both. The __*bigkey
* items should never be accessed directly.
*/
union xfs_btree_bigkey {
union xfs_btree_key {
struct xfs_bmbt_key bmbt;
xfs_bmdr_key_t bmbr; /* bmbt root block */
xfs_alloc_key_t alloc;
struct xfs_inobt_key inobt;
struct {
struct xfs_rmap_key rmap;
struct xfs_rmap_key rmap_hi;
};
struct xfs_rmap_key rmap;
struct xfs_rmap_key __rmap_bigkey[2];
};
union xfs_btree_rec {
......@@ -513,6 +501,8 @@ bool xfs_btree_sblock_v5hdr_verify(struct xfs_buf *bp);
bool xfs_btree_sblock_verify(struct xfs_buf *bp, unsigned int max_recs);
uint xfs_btree_compute_maxlevels(struct xfs_mount *mp, uint *limits,
unsigned long len);
xfs_extlen_t xfs_btree_calc_size(struct xfs_mount *mp, uint *limits,
unsigned long long len);
/* return codes */
#define XFS_BTREE_QUERY_RANGE_CONTINUE 0 /* keep iterating */
......@@ -529,4 +519,6 @@ typedef int (*xfs_btree_visit_blocks_fn)(struct xfs_btree_cur *cur, int level,
int xfs_btree_visit_blocks(struct xfs_btree_cur *cur,
xfs_btree_visit_blocks_fn fn, void *data);
int xfs_btree_count_blocks(struct xfs_btree_cur *cur, xfs_extlen_t *blocks);
#endif /* __XFS_BTREE_H__ */
......@@ -81,6 +81,10 @@
* - For each work item attached to the log intent item,
* * Perform the described action.
* * Attach the work item to the log done item.
* * If the result of doing the work was -EAGAIN, ->finish work
* wants a new transaction. See the "Requesting a Fresh
* Transaction while Finishing Deferred Work" section below for
* details.
*
* The key here is that we must log an intent item for all pending
* work items every time we roll the transaction, and that we must log
......@@ -88,6 +92,34 @@
* we can perform complex remapping operations, chaining intent items
* as needed.
*
* Requesting a Fresh Transaction while Finishing Deferred Work
*
* If ->finish_item decides that it needs a fresh transaction to
* finish the work, it must ask its caller (xfs_defer_finish) for a
* continuation. The most likely cause of this circumstance are the
* refcount adjust functions deciding that they've logged enough items
* to be at risk of exceeding the transaction reservation.
*
* To get a fresh transaction, we want to log the existing log done
* item to prevent the log intent item from replaying, immediately log
* a new log intent item with the unfinished work items, roll the
* transaction, and re-call ->finish_item wherever it left off. The
* log done item and the new log intent item must be in the same
* transaction or atomicity cannot be guaranteed; defer_finish ensures
* that this happens.
*
* This requires some coordination between ->finish_item and
* defer_finish. Upon deciding to request a new transaction,
* ->finish_item should update the current work item to reflect the
* unfinished work. Next, it should reset the log done item's list
* count to the number of items finished, and return -EAGAIN.
* defer_finish sees the -EAGAIN, logs the new log intent item
* with the remaining work items, and leaves the xfs_defer_pending
* item at the head of the dop_work queue. Then it rolls the
* transaction and picks up processing where it left off. It is
* required that ->finish_item must be careful to leave enough
* transaction reservation to fit the new log intent item.
*
* This is an example of remapping the extent (E, E+B) into file X at
* offset A and dealing with the extent (C, C+B) already being mapped
* there:
......@@ -104,21 +136,26 @@
* | Intent to add rmap (X, E, A, B) |
* +-------------------------------------------------+
* | Reduce refcount for extent (C, B) | t2
* | Done reducing refcount for extent (C, B) |
* | Done reducing refcount for extent (C, 9) |
* | Intent to reduce refcount for extent (C+9, B-9) |
* | (ran out of space after 9 refcount updates) |
* +-------------------------------------------------+
* | Reduce refcount for extent (C+9, B-9) | t3
* | Done reducing refcount for extent (C+9, B-9) |
* | Increase refcount for extent (E, B) |
* | Done increasing refcount for extent (E, B) |
* | Intent to free extent (C, B) |
* | Intent to free extent (F, 1) (refcountbt block) |
* | Intent to remove rmap (F, 1, REFC) |
* +-------------------------------------------------+
* | Remove rmap (X, C, A, B) | t3
* | Remove rmap (X, C, A, B) | t4
* | Done removing rmap (X, C, A, B) |
* | Add rmap (X, E, A, B) |
* | Done adding rmap (X, E, A, B) |
* | Remove rmap (F, 1, REFC) |
* | Done removing rmap (F, 1, REFC) |
* +-------------------------------------------------+
* | Free extent (C, B) | t4
* | Free extent (C, B) | t5
* | Done freeing extent (C, B) |
* | Free extent (D, 1) |
* | Done freeing extent (D, 1) |
......@@ -141,6 +178,9 @@
* - Intent to free extent (C, B)
* - Intent to free extent (F, 1) (refcountbt block)
* - Intent to remove rmap (F, 1, REFC)
*
* Note that the continuation requested between t2 and t3 is likely to
* reoccur.
*/
static const struct xfs_defer_op_type *defer_op_types[XFS_DEFER_OPS_TYPE_MAX];
......@@ -323,7 +363,16 @@ xfs_defer_finish(
dfp->dfp_count--;
error = dfp->dfp_type->finish_item(*tp, dop, li,
dfp->dfp_done, &state);
if (error) {
if (error == -EAGAIN) {
/*
* Caller wants a fresh transaction;
* put the work item back on the list
* and jump out.
*/
list_add(li, &dfp->dfp_work);
dfp->dfp_count++;
break;
} else if (error) {
/*
* Clean up after ourselves and jump out.
* xfs_defer_cancel will take care of freeing
......@@ -335,9 +384,25 @@ xfs_defer_finish(
goto out;
}
}
/* Done with the dfp, free it. */
list_del(&dfp->dfp_list);
kmem_free(dfp);
if (error == -EAGAIN) {
/*
* Caller wants a fresh transaction, so log a
* new log intent item to replace the old one
* and roll the transaction. See "Requesting
* a Fresh Transaction while Finishing
* Deferred Work" above.
*/
dfp->dfp_intent = dfp->dfp_type->create_intent(*tp,
dfp->dfp_count);
dfp->dfp_done = NULL;
list_for_each(li, &dfp->dfp_work)
dfp->dfp_type->log_item(*tp, dfp->dfp_intent,
li);
} else {
/* Done with the dfp, free it. */
list_del(&dfp->dfp_list);
kmem_free(dfp);
}
if (cleanup_fn)
cleanup_fn(*tp, state, error);
......
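The -EAGAIN handling above is the defer_finish half of the protocol described in the "Requesting a Fresh Transaction while Finishing Deferred Work" comment. As a purely illustrative sketch (not part of this commit), a ->finish_item implementation that requests a continuation might look like the following; the item type, fields and xfs_foo_* helpers are assumptions, only the calling convention and the -EAGAIN contract come from the patch:

/*
 * Hypothetical ->finish_item (illustrative only; the item type, fields
 * and xfs_foo_* helpers are assumptions, not part of this commit).
 */
STATIC int
xfs_foo_finish_item(
	struct xfs_trans	*tp,
	struct xfs_defer_ops	*dop,
	struct list_head	*item,
	void			*done_item,
	void			**state)
{
	struct xfs_foo_intent	*foo;
	xfs_extlen_t		done;

	foo = container_of(item, struct xfs_foo_intent, fi_list);

	/* Adjust as much as the current reservation safely allows. */
	done = xfs_foo_adjust(tp, foo->fi_startblock, foo->fi_blockcount);

	if (done < foo->fi_blockcount) {
		/* Record the unfinished tail in the work item... */
		foo->fi_startblock += done;
		foo->fi_blockcount -= done;
		/* ...trim the done item's count to what was completed... */
		xfs_foo_done_trim(done_item, done);
		/* ...and ask xfs_defer_finish for a fresh transaction. */
		return -EAGAIN;
	}
	return 0;
}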
......@@ -132,7 +132,7 @@ xfs_inobt_free_block(
xfs_rmap_ag_owner(&oinfo, XFS_RMAP_OWN_INOBT);
return xfs_free_extent(cur->bc_tp,
XFS_DADDR_TO_FSB(cur->bc_mp, XFS_BUF_ADDR(bp)), 1,
&oinfo);
&oinfo, XFS_AG_RESV_NONE);
}
STATIC int
......
......@@ -647,9 +647,17 @@ struct xfs_rui_log_format {
__uint16_t rui_size; /* size of this item */
__uint32_t rui_nextents; /* # extents to free */
__uint64_t rui_id; /* rui identifier */
struct xfs_map_extent rui_extents[1]; /* array of extents to rmap */
struct xfs_map_extent rui_extents[]; /* array of extents to rmap */
};
static inline size_t
xfs_rui_log_format_sizeof(
unsigned int nr)
{
return sizeof(struct xfs_rui_log_format) +
nr * sizeof(struct xfs_map_extent);
}
/*
* This is the structure used to lay out an rud log item in the
* log. The rud_extents array is a variable size array whose
......
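For reference (not part of the patch text), the new helper produces the same byte count as the old open-coded formula, because the flexible array no longer embeds one extent in the structure:

	old layout:  sizeof(format with rui_extents[1]) + (nr - 1) * sizeof(struct xfs_map_extent)
	new layout:  sizeof(format with rui_extents[])  +  nr * sizeof(struct xfs_map_extent)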
......@@ -200,7 +200,7 @@ xfs_setfilesize_trans_alloc(
* Update on-disk file size now that data has been written to disk.
*/
STATIC int
xfs_setfilesize(
__xfs_setfilesize(
struct xfs_inode *ip,
struct xfs_trans *tp,
xfs_off_t offset,
......@@ -225,6 +225,23 @@ xfs_setfilesize(
return xfs_trans_commit(tp);
}
int
xfs_setfilesize(
struct xfs_inode *ip,
xfs_off_t offset,
size_t size)
{
struct xfs_mount *mp = ip->i_mount;
struct xfs_trans *tp;
int error;
error = xfs_trans_alloc(mp, &M_RES(mp)->tr_fsyncts, 0, 0, 0, &tp);
if (error)
return error;
return __xfs_setfilesize(ip, tp, offset, size);
}
STATIC int
xfs_setfilesize_ioend(
struct xfs_ioend *ioend,
......@@ -247,7 +264,7 @@ xfs_setfilesize_ioend(
return error;
}
return xfs_setfilesize(ip, tp, ioend->io_offset, ioend->io_size);
return __xfs_setfilesize(ip, tp, ioend->io_offset, ioend->io_size);
}
/*
......@@ -1336,13 +1353,12 @@ xfs_end_io_direct_write(
{
struct inode *inode = file_inode(iocb->ki_filp);
struct xfs_inode *ip = XFS_I(inode);
struct xfs_mount *mp = ip->i_mount;
uintptr_t flags = (uintptr_t)private;
int error = 0;
trace_xfs_end_io_direct_write(ip, offset, size);
if (XFS_FORCED_SHUTDOWN(mp))
if (XFS_FORCED_SHUTDOWN(ip->i_mount))
return -EIO;
if (size <= 0)
......@@ -1380,14 +1396,9 @@ xfs_end_io_direct_write(
error = xfs_iomap_write_unwritten(ip, offset, size);
} else if (flags & XFS_DIO_FLAG_APPEND) {
struct xfs_trans *tp;
trace_xfs_end_io_direct_write_append(ip, offset, size);
error = xfs_trans_alloc(mp, &M_RES(mp)->tr_fsyncts, 0, 0, 0,
&tp);
if (!error)
error = xfs_setfilesize(ip, tp, offset, size);
error = xfs_setfilesize(ip, offset, size);
}
return error;
......
......@@ -62,6 +62,7 @@ int xfs_get_blocks_dax_fault(struct inode *inode, sector_t offset,
int xfs_end_io_direct_write(struct kiocb *iocb, loff_t offset,
ssize_t size, void *private);
int xfs_setfilesize(struct xfs_inode *ip, xfs_off_t offset, size_t size);
extern void xfs_count_page_state(struct page *, int *, int *);
extern struct block_device *xfs_find_bdev_for_inode(struct inode *);
......
......@@ -182,7 +182,7 @@ xfs_bmap_rtalloc(
XFS_TRANS_DQ_RTBCOUNT, (long) ralen);
/* Zero the extent if we were asked to do so */
if (ap->userdata & XFS_ALLOC_USERDATA_ZERO) {
if (ap->datatype & XFS_ALLOC_USERDATA_ZERO) {
error = xfs_zero_extent(ap->ip, ap->blkno, ap->length);
if (error)
return error;
......
......@@ -865,7 +865,7 @@ xfs_buf_item_log_segment(
*/
if (bit) {
end_bit = MIN(bit + bits_to_set, (uint)NBWORD);
mask = ((1 << (end_bit - bit)) - 1) << bit;
mask = ((1U << (end_bit - bit)) - 1) << bit;
*wordp |= mask;
wordp++;
bits_set = end_bit - bit;
......@@ -888,7 +888,7 @@ xfs_buf_item_log_segment(
*/
end_bit = bits_to_set - bits_set;
if (end_bit) {
mask = (1 << end_bit) - 1;
mask = (1U << end_bit) - 1;
*wordp |= mask;
}
}
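The switch from 1 to 1U in the two mask computations above matters once the shift count reaches the top bit of a 32-bit word. A minimal illustration (not taken from the commit):

	/* With a signed literal, shifting into bit 31 and the subsequent
	 * subtraction overflow a signed int, which is undefined behaviour;
	 * the unsigned literal keeps the arithmetic well defined.
	 */
	unsigned int ub_mask   = (1  << 31) - 1;   /* undefined behaviour */
	unsigned int safe_mask = (1U << 31) - 1;   /* 0x7fffffff, well defined */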
......@@ -1095,7 +1095,8 @@ xfs_buf_iodone_callback_error(
bp->b_last_error != bp->b_error) {
bp->b_flags |= (XBF_WRITE | XBF_DONE | XBF_WRITE_FAIL);
bp->b_last_error = bp->b_error;
if (cfg->retry_timeout && !bp->b_first_retry_time)
if (cfg->retry_timeout != XFS_ERR_RETRY_FOREVER &&
!bp->b_first_retry_time)
bp->b_first_retry_time = jiffies;
xfs_buf_ioerror(bp, 0);
......@@ -1111,7 +1112,7 @@ xfs_buf_iodone_callback_error(
if (cfg->max_retries != XFS_ERR_RETRY_FOREVER &&
++bp->b_retries > cfg->max_retries)
goto permanent_error;
if (cfg->retry_timeout &&
if (cfg->retry_timeout != XFS_ERR_RETRY_FOREVER &&
time_after(jiffies, cfg->retry_timeout + bp->b_first_retry_time))
goto permanent_error;
......
......@@ -384,7 +384,7 @@ xfs_extent_busy_trim(
* If this is a metadata allocation, try to reuse the busy
* extent instead of trimming the allocation.
*/
if (!args->userdata &&
if (!xfs_alloc_is_userdata(args->datatype) &&
!(busyp->flags & XFS_EXTENT_BUSY_DISCARDED)) {
if (!xfs_extent_busy_update_extent(args->mp, args->pag,
busyp, fbno, flen,
......
......@@ -269,6 +269,8 @@ xfs_file_dio_aio_read(
return -EINVAL;
}
file_accessed(iocb->ki_filp);
/*
* Locking is a bit tricky here. If we take an exclusive lock for direct
* IO, we effectively serialise all new concurrent read IO to this file
......@@ -323,7 +325,6 @@ xfs_file_dio_aio_read(
}
xfs_rw_iunlock(ip, XFS_IOLOCK_SHARED);
file_accessed(iocb->ki_filp);
return ret;
}
......@@ -332,10 +333,7 @@ xfs_file_dax_read(
struct kiocb *iocb,
struct iov_iter *to)
{
struct address_space *mapping = iocb->ki_filp->f_mapping;
struct inode *inode = mapping->host;
struct xfs_inode *ip = XFS_I(inode);
struct iov_iter data = *to;
struct xfs_inode *ip = XFS_I(iocb->ki_filp->f_mapping->host);
size_t count = iov_iter_count(to);
ssize_t ret = 0;
......@@ -345,11 +343,7 @@ xfs_file_dax_read(
return 0; /* skip atime */
xfs_rw_ilock(ip, XFS_IOLOCK_SHARED);
ret = dax_do_io(iocb, inode, &data, xfs_get_blocks_direct, NULL, 0);
if (ret > 0) {
iocb->ki_pos += ret;
iov_iter_advance(to, ret);
}
ret = iomap_dax_rw(iocb, to, &xfs_iomap_ops);
xfs_rw_iunlock(ip, XFS_IOLOCK_SHARED);
file_accessed(iocb->ki_filp);
......@@ -711,70 +705,32 @@ xfs_file_dax_write(
struct kiocb *iocb,
struct iov_iter *from)
{
struct address_space *mapping = iocb->ki_filp->f_mapping;
struct inode *inode = mapping->host;
struct inode *inode = iocb->ki_filp->f_mapping->host;
struct xfs_inode *ip = XFS_I(inode);
struct xfs_mount *mp = ip->i_mount;
ssize_t ret = 0;
int unaligned_io = 0;
int iolock;
struct iov_iter data;
int iolock = XFS_IOLOCK_EXCL;
ssize_t ret, error = 0;
size_t count;
loff_t pos;
/* "unaligned" here means not aligned to a filesystem block */
if ((iocb->ki_pos & mp->m_blockmask) ||
((iocb->ki_pos + iov_iter_count(from)) & mp->m_blockmask)) {
unaligned_io = 1;
iolock = XFS_IOLOCK_EXCL;
} else if (mapping->nrpages) {
iolock = XFS_IOLOCK_EXCL;
} else {
iolock = XFS_IOLOCK_SHARED;
}
xfs_rw_ilock(ip, iolock);
ret = xfs_file_aio_write_checks(iocb, from, &iolock);
if (ret)
goto out;
/*
* Yes, even DAX files can have page cache attached to them: A zeroed
* page is inserted into the pagecache when we have to serve a write
* fault on a hole. It should never be dirtied and can simply be
* dropped from the pagecache once we get real data for the page.
*
* XXX: This is racy against mmap, and there's nothing we can do about
* it. dax_do_io() should really do this invalidation internally as
* it will know if we've allocated over a hole for this specific IO and
* if so it needs to update the mapping tree and invalidate existing
* PTEs over the newly allocated range. Remove this invalidation when
* dax_do_io() is fixed up.
*/
if (mapping->nrpages) {
loff_t end = iocb->ki_pos + iov_iter_count(from) - 1;
pos = iocb->ki_pos;
count = iov_iter_count(from);
ret = invalidate_inode_pages2_range(mapping,
iocb->ki_pos >> PAGE_SHIFT,
end >> PAGE_SHIFT);
WARN_ON_ONCE(ret);
}
trace_xfs_file_dax_write(ip, count, pos);
if (iolock == XFS_IOLOCK_EXCL && !unaligned_io) {
xfs_rw_ilock_demote(ip, XFS_IOLOCK_EXCL);
iolock = XFS_IOLOCK_SHARED;
ret = iomap_dax_rw(iocb, from, &xfs_iomap_ops);
if (ret > 0 && iocb->ki_pos > i_size_read(inode)) {
i_size_write(inode, iocb->ki_pos);
error = xfs_setfilesize(ip, pos, ret);
}
trace_xfs_file_dax_write(ip, iov_iter_count(from), iocb->ki_pos);
data = *from;
ret = dax_do_io(iocb, inode, &data, xfs_get_blocks_direct,
xfs_end_io_direct_write, 0);
if (ret > 0) {
iocb->ki_pos += ret;
iov_iter_advance(from, ret);
}
out:
xfs_rw_iunlock(ip, iolock);
return ret;
return error ? error : ret;
}
STATIC ssize_t
......@@ -1513,7 +1469,7 @@ xfs_filemap_page_mkwrite(
xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
if (IS_DAX(inode)) {
ret = dax_mkwrite(vma, vmf, xfs_get_blocks_dax_fault);
ret = iomap_dax_fault(vma, vmf, &xfs_iomap_ops);
} else {
ret = iomap_page_mkwrite(vma, vmf, &xfs_iomap_ops);
ret = block_page_mkwrite_return(ret);
......@@ -1547,7 +1503,7 @@ xfs_filemap_fault(
* changes to xfs_get_blocks_direct() to map unwritten extent
* ioend for conversion on read-only mappings.
*/
ret = dax_fault(vma, vmf, xfs_get_blocks_dax_fault);
ret = iomap_dax_fault(vma, vmf, &xfs_iomap_ops);
} else
ret = filemap_fault(vma, vmf);
xfs_iunlock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
......
......@@ -30,6 +30,7 @@
#include "xfs_mru_cache.h"
#include "xfs_filestream.h"
#include "xfs_trace.h"
#include "xfs_ag_resv.h"
struct xfs_fstrm_item {
struct xfs_mru_cache_elem mru;
......@@ -198,7 +199,8 @@ xfs_filestream_pick_ag(
}
longest = xfs_alloc_longest_free_extent(mp, pag,
xfs_alloc_min_freelist(mp, pag));
xfs_alloc_min_freelist(mp, pag),
xfs_ag_resv_needed(pag, XFS_AG_RESV_NONE));
if (((minlen && longest >= minlen) ||
(!minlen && pag->pagf_freeblks >= minfree)) &&
(!pag->pagf_metadata || !(flags & XFS_PICK_USERDATA) ||
......@@ -369,7 +371,8 @@ xfs_filestream_new_ag(
struct xfs_mount *mp = ip->i_mount;
xfs_extlen_t minlen = ap->length;
xfs_agnumber_t startag = 0;
int flags, err = 0;
int flags = 0;
int err = 0;
struct xfs_mru_cache_elem *mru;
*agp = NULLAGNUMBER;
......@@ -385,8 +388,10 @@ xfs_filestream_new_ag(
startag = (item->ag + 1) % mp->m_sb.sb_agcount;
}
flags = (ap->userdata ? XFS_PICK_USERDATA : 0) |
(ap->dfops->dop_low ? XFS_PICK_LOWSPACE : 0);
if (xfs_alloc_is_userdata(ap->datatype))
flags |= XFS_PICK_USERDATA;
if (ap->dfops->dop_low)
flags |= XFS_PICK_LOWSPACE;
err = xfs_filestream_pick_ag(pip, startag, agp, flags, minlen);
......
......@@ -553,7 +553,7 @@ xfs_growfs_data_private(
error = xfs_free_extent(tp,
XFS_AGB_TO_FSB(mp, agno,
be32_to_cpu(agf->agf_length) - new),
new, &oinfo);
new, &oinfo, XFS_AG_RESV_NONE);
if (error)
goto error0;
}
......
......@@ -1414,6 +1414,16 @@ xfs_inode_set_eofblocks_tag(
struct xfs_perag *pag;
int tagged;
/*
* Don't bother locking the AG and looking up in the radix trees
* if we already know that we have the tag set.
*/
if (ip->i_flags & XFS_IEOFBLOCKS)
return;
spin_lock(&ip->i_flags_lock);
ip->i_flags |= XFS_IEOFBLOCKS;
spin_unlock(&ip->i_flags_lock);
pag = xfs_perag_get(mp, XFS_INO_TO_AGNO(mp, ip->i_ino));
spin_lock(&pag->pag_ici_lock);
trace_xfs_inode_set_eofblocks_tag(ip);
......@@ -1449,6 +1459,10 @@ xfs_inode_clear_eofblocks_tag(
struct xfs_mount *mp = ip->i_mount;
struct xfs_perag *pag;
spin_lock(&ip->i_flags_lock);
ip->i_flags &= ~XFS_IEOFBLOCKS;
spin_unlock(&ip->i_flags_lock);
pag = xfs_perag_get(mp, XFS_INO_TO_AGNO(mp, ip->i_ino));
spin_lock(&pag->pag_ici_lock);
trace_xfs_inode_clear_eofblocks_tag(ip);
......
......@@ -216,6 +216,7 @@ xfs_get_initial_prid(struct xfs_inode *dp)
#define __XFS_IPINNED_BIT 8 /* wakeup key for zero pin count */
#define XFS_IPINNED (1 << __XFS_IPINNED_BIT)
#define XFS_IDONTCACHE (1 << 9) /* don't cache the inode long term */
#define XFS_IEOFBLOCKS (1 << 10)/* has the preallocblocks tag set */
/*
* Per-lifetime flags need to be reset when re-using a reclaimable inode during
......
This diff is collapsed.
......@@ -25,8 +25,6 @@ struct xfs_bmbt_irec;
int xfs_iomap_write_direct(struct xfs_inode *, xfs_off_t, size_t,
struct xfs_bmbt_irec *, int);
int xfs_iomap_write_delay(struct xfs_inode *, xfs_off_t, size_t,
struct xfs_bmbt_irec *);
int xfs_iomap_write_allocate(struct xfs_inode *, xfs_off_t,
struct xfs_bmbt_irec *);
int xfs_iomap_write_unwritten(struct xfs_inode *, xfs_off_t, xfs_off_t);
......
......@@ -413,7 +413,8 @@ struct xlog {
/* log record crc error injection factor */
uint32_t l_badcrc_factor;
#endif
/* log recovery lsn tracking (for buffer submission) */
xfs_lsn_t l_recovery_lsn;
};
#define XLOG_BUF_CANCEL_BUCKET(log, blkno) \
......
This diff is collapsed.
......@@ -933,6 +933,20 @@ xfs_mountfs(
goto out_rtunmount;
}
/*
* Now the log is fully replayed, we can transition to full read-only
* mode for read-only mounts. This will sync all the metadata and clean
* the log so that the recovery we just performed does not have to be
* replayed again on the next mount.
*
* We use the same quiesce mechanism as the rw->ro remount, as they are
* semantically identical operations.
*/
if ((mp->m_flags & (XFS_MOUNT_RDONLY|XFS_MOUNT_NORECOVERY)) ==
XFS_MOUNT_RDONLY) {
xfs_quiesce_attr(mp);
}
/*
* Complete the quota initialisation, post-log-replay component.
*/
......
......@@ -57,10 +57,16 @@ enum {
#define XFS_ERR_RETRY_FOREVER -1
/*
* Although retry_timeout is in jiffies which is normally an unsigned long,
* we limit the retry timeout to 86400 seconds, or one day. So even a
* signed 32-bit long is sufficient for a HZ value up to 24855. Making it
* signed lets us store the special "-1" value, meaning retry forever.
*/
struct xfs_error_cfg {
struct xfs_kobj kobj;
int max_retries;
unsigned long retry_timeout; /* in jiffies, 0 = no timeout */
long retry_timeout; /* in jiffies, -1 = infinite */
};
typedef struct xfs_mount {
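As a cross-check on that comment (arithmetic only, not part of the source): 86400 seconds * 24855 jiffies/second = 2,147,472,000 jiffies, which is just below the 2,147,483,647 maximum of a signed 32-bit long, so the one-day cap keeps even HZ=24855 representable while leaving -1 free as the "retry forever" sentinel.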
......@@ -325,6 +331,22 @@ xfs_mp_fail_writes(struct xfs_mount *mp)
}
#endif
/* per-AG block reservation data structures */
enum xfs_ag_resv_type {
XFS_AG_RESV_NONE = 0,
XFS_AG_RESV_METADATA,
XFS_AG_RESV_AGFL,
};
struct xfs_ag_resv {
/* number of blocks originally reserved here */
xfs_extlen_t ar_orig_reserved;
/* number of blocks reserved here */
xfs_extlen_t ar_reserved;
/* number of blocks originally asked for */
xfs_extlen_t ar_asked;
};
/*
* Per-ag incore structure, copies of information in agf and agi, to improve the
* performance of allocation group selection.
......@@ -372,8 +394,28 @@ typedef struct xfs_perag {
/* for rcu-safe freeing */
struct rcu_head rcu_head;
int pagb_count; /* pagb slots in use */
/* Blocks reserved for all kinds of metadata. */
struct xfs_ag_resv pag_meta_resv;
/* Blocks reserved for just AGFL-based metadata. */
struct xfs_ag_resv pag_agfl_resv;
} xfs_perag_t;
static inline struct xfs_ag_resv *
xfs_perag_resv(
struct xfs_perag *pag,
enum xfs_ag_resv_type type)
{
switch (type) {
case XFS_AG_RESV_METADATA:
return &pag->pag_meta_resv;
case XFS_AG_RESV_AGFL:
return &pag->pag_agfl_resv;
default:
return NULL;
}
}
extern void xfs_uuid_table_free(void);
extern int xfs_log_sbcount(xfs_mount_t *);
extern __uint64_t xfs_default_resblks(xfs_mount_t *mp);
......
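As a purely illustrative sketch (not part of the commit), a caller interested in how many reserved blocks remain in an AG might build on the new helper like this; the function name is an assumption:

/* Hypothetical helper built on xfs_perag_resv(). */
static xfs_extlen_t
xfs_ag_resv_remaining(
	struct xfs_perag	*pag,
	enum xfs_ag_resv_type	type)
{
	struct xfs_ag_resv	*resv = xfs_perag_resv(pag, type);

	/* XFS_AG_RESV_NONE (or an unknown type) has no reservation. */
	if (!resv)
		return 0;
	return resv->ar_reserved;
}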
......@@ -51,28 +51,16 @@ xfs_rui_item_free(
kmem_zone_free(xfs_rui_zone, ruip);
}
/*
* This returns the number of iovecs needed to log the given rui item.
* We only need 1 iovec for an rui item. It just logs the rui_log_format
* structure.
*/
static inline int
xfs_rui_item_sizeof(
struct xfs_rui_log_item *ruip)
{
return sizeof(struct xfs_rui_log_format) +
(ruip->rui_format.rui_nextents - 1) *
sizeof(struct xfs_map_extent);
}
STATIC void
xfs_rui_item_size(
struct xfs_log_item *lip,
int *nvecs,
int *nbytes)
{
struct xfs_rui_log_item *ruip = RUI_ITEM(lip);
*nvecs += 1;
*nbytes += xfs_rui_item_sizeof(RUI_ITEM(lip));
*nbytes += xfs_rui_log_format_sizeof(ruip->rui_format.rui_nextents);
}
/*
......@@ -97,7 +85,7 @@ xfs_rui_item_format(
ruip->rui_format.rui_size = 1;
xlog_copy_iovec(lv, &vecp, XLOG_REG_TYPE_RUI_FORMAT, &ruip->rui_format,
xfs_rui_item_sizeof(ruip));
xfs_rui_log_format_sizeof(ruip->rui_format.rui_nextents));
}
/*
......@@ -205,16 +193,12 @@ xfs_rui_init(
{
struct xfs_rui_log_item *ruip;
uint size;
ASSERT(nextents > 0);
if (nextents > XFS_RUI_MAX_FAST_EXTENTS) {
size = (uint)(sizeof(struct xfs_rui_log_item) +
((nextents - 1) * sizeof(struct xfs_map_extent)));
ruip = kmem_zalloc(size, KM_SLEEP);
} else {
if (nextents > XFS_RUI_MAX_FAST_EXTENTS)
ruip = kmem_zalloc(xfs_rui_log_item_sizeof(nextents), KM_SLEEP);
else
ruip = kmem_zone_zalloc(xfs_rui_zone, KM_SLEEP);
}
xfs_log_item_init(mp, &ruip->rui_item, XFS_LI_RUI, &xfs_rui_item_ops);
ruip->rui_format.rui_nextents = nextents;
......@@ -239,14 +223,12 @@ xfs_rui_copy_format(
uint len;
src_rui_fmt = buf->i_addr;
len = sizeof(struct xfs_rui_log_format) +
(src_rui_fmt->rui_nextents - 1) *
sizeof(struct xfs_map_extent);
len = xfs_rui_log_format_sizeof(src_rui_fmt->rui_nextents);
if (buf->i_len != len)
return -EFSCORRUPTED;
memcpy((char *)dst_rui_fmt, (char *)src_rui_fmt, len);
memcpy(dst_rui_fmt, src_rui_fmt, len);
return 0;
}
......
......@@ -70,6 +70,14 @@ struct xfs_rui_log_item {
struct xfs_rui_log_format rui_format;
};
static inline size_t
xfs_rui_log_item_sizeof(
unsigned int nr)
{
return offsetof(struct xfs_rui_log_item, rui_format) +
xfs_rui_log_format_sizeof(nr);
}
/*
* This is the "rmap update done" log item. It is used to log the fact that
* some rmapbt updates mentioned in an earlier rui item have been performed.
......
......@@ -1137,7 +1137,7 @@ xfs_restore_resvblks(struct xfs_mount *mp)
* Note: xfs_log_quiesce() stops background log work - the callers must ensure
* it is started again when appropriate.
*/
static void
void
xfs_quiesce_attr(
struct xfs_mount *mp)
{
......@@ -1782,9 +1782,8 @@ xfs_init_zones(void)
if (!xfs_rud_zone)
goto out_destroy_icreate_zone;
xfs_rui_zone = kmem_zone_init((sizeof(struct xfs_rui_log_item) +
((XFS_RUI_MAX_FAST_EXTENTS - 1) *
sizeof(struct xfs_map_extent))),
xfs_rui_zone = kmem_zone_init(
xfs_rui_log_item_sizeof(XFS_RUI_MAX_FAST_EXTENTS),
"xfs_rui_item");
if (!xfs_rui_zone)
goto out_destroy_rud_zone;
......
......@@ -61,6 +61,7 @@ struct xfs_mount;
struct xfs_buftarg;
struct block_device;
extern void xfs_quiesce_attr(struct xfs_mount *mp);
extern void xfs_flush_inodes(struct xfs_mount *mp);
extern void xfs_blkdev_issue_flush(struct xfs_buftarg *);
extern xfs_agnumber_t xfs_set_inode_alloc(struct xfs_mount *,
......
......@@ -393,9 +393,15 @@ max_retries_show(
struct kobject *kobject,
char *buf)
{
int retries;
struct xfs_error_cfg *cfg = to_error_cfg(kobject);
return snprintf(buf, PAGE_SIZE, "%d\n", cfg->max_retries);
if (cfg->max_retries == XFS_ERR_RETRY_FOREVER)
retries = -1;
else
retries = cfg->max_retries;
return snprintf(buf, PAGE_SIZE, "%d\n", retries);
}
static ssize_t
......@@ -415,7 +421,10 @@ max_retries_store(
if (val < -1)
return -EINVAL;
cfg->max_retries = val;
if (val == -1)
cfg->max_retries = XFS_ERR_RETRY_FOREVER;
else
cfg->max_retries = val;
return count;
}
XFS_SYSFS_ATTR_RW(max_retries);
......@@ -425,10 +434,15 @@ retry_timeout_seconds_show(
struct kobject *kobject,
char *buf)
{
int timeout;
struct xfs_error_cfg *cfg = to_error_cfg(kobject);
return snprintf(buf, PAGE_SIZE, "%ld\n",
jiffies_to_msecs(cfg->retry_timeout) / MSEC_PER_SEC);
if (cfg->retry_timeout == XFS_ERR_RETRY_FOREVER)
timeout = -1;
else
timeout = jiffies_to_msecs(cfg->retry_timeout) / MSEC_PER_SEC;
return snprintf(buf, PAGE_SIZE, "%d\n", timeout);
}
static ssize_t
......@@ -445,11 +459,16 @@ retry_timeout_seconds_store(
if (ret)
return ret;
/* 1 day timeout maximum */
if (val < 0 || val > 86400)
/* 1 day timeout maximum, -1 means infinite */
if (val < -1 || val > 86400)
return -EINVAL;
cfg->retry_timeout = msecs_to_jiffies(val * MSEC_PER_SEC);
if (val == -1)
cfg->retry_timeout = XFS_ERR_RETRY_FOREVER;
else {
cfg->retry_timeout = msecs_to_jiffies(val * MSEC_PER_SEC);
ASSERT(msecs_to_jiffies(val * MSEC_PER_SEC) < LONG_MAX);
}
return count;
}
XFS_SYSFS_ATTR_RW(retry_timeout_seconds);
......@@ -519,18 +538,19 @@ struct xfs_error_init {
static const struct xfs_error_init xfs_error_meta_init[XFS_ERR_ERRNO_MAX] = {
{ .name = "default",
.max_retries = XFS_ERR_RETRY_FOREVER,
.retry_timeout = 0,
.retry_timeout = XFS_ERR_RETRY_FOREVER,
},
{ .name = "EIO",
.max_retries = XFS_ERR_RETRY_FOREVER,
.retry_timeout = 0,
.retry_timeout = XFS_ERR_RETRY_FOREVER,
},
{ .name = "ENOSPC",
.max_retries = XFS_ERR_RETRY_FOREVER,
.retry_timeout = 0,
.retry_timeout = XFS_ERR_RETRY_FOREVER,
},
{ .name = "ENODEV",
.max_retries = 0,
.max_retries = 0, /* We can't recover from devices disappearing */
.retry_timeout = 0,
},
};
......@@ -561,7 +581,10 @@ xfs_error_sysfs_init_class(
goto out_error;
cfg->max_retries = init[i].max_retries;
cfg->retry_timeout = msecs_to_jiffies(
if (init[i].retry_timeout == XFS_ERR_RETRY_FOREVER)
cfg->retry_timeout = XFS_ERR_RETRY_FOREVER;
else
cfg->retry_timeout = msecs_to_jiffies(
init[i].retry_timeout * MSEC_PER_SEC);
}
return 0;
......
......@@ -1570,14 +1570,15 @@ TRACE_EVENT(xfs_agf,
TRACE_EVENT(xfs_free_extent,
TP_PROTO(struct xfs_mount *mp, xfs_agnumber_t agno, xfs_agblock_t agbno,
xfs_extlen_t len, bool isfl, int haveleft, int haveright),
TP_ARGS(mp, agno, agbno, len, isfl, haveleft, haveright),
xfs_extlen_t len, enum xfs_ag_resv_type resv, int haveleft,
int haveright),
TP_ARGS(mp, agno, agbno, len, resv, haveleft, haveright),
TP_STRUCT__entry(
__field(dev_t, dev)
__field(xfs_agnumber_t, agno)
__field(xfs_agblock_t, agbno)
__field(xfs_extlen_t, len)
__field(int, isfl)
__field(int, resv)
__field(int, haveleft)
__field(int, haveright)
),
......@@ -1586,16 +1587,16 @@ TRACE_EVENT(xfs_free_extent,
__entry->agno = agno;
__entry->agbno = agbno;
__entry->len = len;
__entry->isfl = isfl;
__entry->resv = resv;
__entry->haveleft = haveleft;
__entry->haveright = haveright;
),
TP_printk("dev %d:%d agno %u agbno %u len %u isfl %d %s",
TP_printk("dev %d:%d agno %u agbno %u len %u resv %d %s",
MAJOR(__entry->dev), MINOR(__entry->dev),
__entry->agno,
__entry->agbno,
__entry->len,
__entry->isfl,
__entry->resv,
__entry->haveleft ?
(__entry->haveright ? "both" : "left") :
(__entry->haveright ? "right" : "none"))
......@@ -1622,8 +1623,8 @@ DECLARE_EVENT_CLASS(xfs_alloc_class,
__field(short, otype)
__field(char, wasdel)
__field(char, wasfromfl)
__field(char, isfl)
__field(char, userdata)
__field(int, resv)
__field(int, datatype)
__field(xfs_fsblock_t, firstblock)
),
TP_fast_assign(
......@@ -1643,14 +1644,14 @@ DECLARE_EVENT_CLASS(xfs_alloc_class,
__entry->otype = args->otype;
__entry->wasdel = args->wasdel;
__entry->wasfromfl = args->wasfromfl;
__entry->isfl = args->isfl;
__entry->userdata = args->userdata;
__entry->resv = args->resv;
__entry->datatype = args->datatype;
__entry->firstblock = args->firstblock;
),
TP_printk("dev %d:%d agno %u agbno %u minlen %u maxlen %u mod %u "
"prod %u minleft %u total %u alignment %u minalignslop %u "
"len %u type %s otype %s wasdel %d wasfromfl %d isfl %d "
"userdata %d firstblock 0x%llx",
"len %u type %s otype %s wasdel %d wasfromfl %d resv %d "
"datatype 0x%x firstblock 0x%llx",
MAJOR(__entry->dev), MINOR(__entry->dev),
__entry->agno,
__entry->agbno,
......@@ -1667,8 +1668,8 @@ DECLARE_EVENT_CLASS(xfs_alloc_class,
__print_symbolic(__entry->otype, XFS_ALLOC_TYPES),
__entry->wasdel,
__entry->wasfromfl,
__entry->isfl,
__entry->userdata,
__entry->resv,
__entry->datatype,
(unsigned long long)__entry->firstblock)
)
......@@ -1984,6 +1985,29 @@ DEFINE_EVENT(xfs_swap_extent_class, name, \
DEFINE_SWAPEXT_EVENT(xfs_swap_extent_before);
DEFINE_SWAPEXT_EVENT(xfs_swap_extent_after);
TRACE_EVENT(xfs_log_recover_record,
TP_PROTO(struct xlog *log, struct xlog_rec_header *rhead, int pass),
TP_ARGS(log, rhead, pass),
TP_STRUCT__entry(
__field(dev_t, dev)
__field(xfs_lsn_t, lsn)
__field(int, len)
__field(int, num_logops)
__field(int, pass)
),
TP_fast_assign(
__entry->dev = log->l_mp->m_super->s_dev;
__entry->lsn = be64_to_cpu(rhead->h_lsn);
__entry->len = be32_to_cpu(rhead->h_len);
__entry->num_logops = be32_to_cpu(rhead->h_num_logops);
__entry->pass = pass;
),
TP_printk("dev %d:%d lsn 0x%llx len 0x%x num_logops 0x%x pass %d",
MAJOR(__entry->dev), MINOR(__entry->dev),
__entry->lsn, __entry->len, __entry->num_logops,
__entry->pass)
)
DECLARE_EVENT_CLASS(xfs_log_recover_item_class,
TP_PROTO(struct xlog *log, struct xlog_recover *trans,
struct xlog_recover_item *item, int pass),
......@@ -1992,6 +2016,7 @@ DECLARE_EVENT_CLASS(xfs_log_recover_item_class,
__field(dev_t, dev)
__field(unsigned long, item)
__field(xlog_tid_t, tid)
__field(xfs_lsn_t, lsn)
__field(int, type)
__field(int, pass)
__field(int, count)
......@@ -2001,15 +2026,17 @@ DECLARE_EVENT_CLASS(xfs_log_recover_item_class,
__entry->dev = log->l_mp->m_super->s_dev;
__entry->item = (unsigned long)item;
__entry->tid = trans->r_log_tid;
__entry->lsn = trans->r_lsn;
__entry->type = ITEM_TYPE(item);
__entry->pass = pass;
__entry->count = item->ri_cnt;
__entry->total = item->ri_total;
),
TP_printk("dev %d:%d trans 0x%x, pass %d, item 0x%p, item type %s "
"item region count/total %d/%d",
TP_printk("dev %d:%d tid 0x%x lsn 0x%llx, pass %d, item 0x%p, "
"item type %s item region count/total %d/%d",
MAJOR(__entry->dev), MINOR(__entry->dev),
__entry->tid,
__entry->lsn,
__entry->pass,
(void *)__entry->item,
__print_symbolic(__entry->type, XFS_LI_TYPE_DESC),
......@@ -2068,6 +2095,7 @@ DEFINE_LOG_RECOVER_BUF_ITEM(xfs_log_recover_buf_cancel);
DEFINE_LOG_RECOVER_BUF_ITEM(xfs_log_recover_buf_cancel_add);
DEFINE_LOG_RECOVER_BUF_ITEM(xfs_log_recover_buf_cancel_ref_inc);
DEFINE_LOG_RECOVER_BUF_ITEM(xfs_log_recover_buf_recover);
DEFINE_LOG_RECOVER_BUF_ITEM(xfs_log_recover_buf_skip);
DEFINE_LOG_RECOVER_BUF_ITEM(xfs_log_recover_buf_inode_buf);
DEFINE_LOG_RECOVER_BUF_ITEM(xfs_log_recover_buf_reg_buf);
DEFINE_LOG_RECOVER_BUF_ITEM(xfs_log_recover_buf_dquot_buf);
......@@ -2558,6 +2586,60 @@ DEFINE_RMAPBT_EVENT(xfs_rmap_lookup_le_range_result);
DEFINE_RMAPBT_EVENT(xfs_rmap_find_right_neighbor_result);
DEFINE_RMAPBT_EVENT(xfs_rmap_find_left_neighbor_result);
/* per-AG reservation */
DECLARE_EVENT_CLASS(xfs_ag_resv_class,
TP_PROTO(struct xfs_perag *pag, enum xfs_ag_resv_type resv,
xfs_extlen_t len),
TP_ARGS(pag, resv, len),
TP_STRUCT__entry(
__field(dev_t, dev)
__field(xfs_agnumber_t, agno)
__field(int, resv)
__field(xfs_extlen_t, freeblks)
__field(xfs_extlen_t, flcount)
__field(xfs_extlen_t, reserved)
__field(xfs_extlen_t, asked)
__field(xfs_extlen_t, len)
),
TP_fast_assign(
struct xfs_ag_resv *r = xfs_perag_resv(pag, resv);
__entry->dev = pag->pag_mount->m_super->s_dev;
__entry->agno = pag->pag_agno;
__entry->resv = resv;
__entry->freeblks = pag->pagf_freeblks;
__entry->flcount = pag->pagf_flcount;
__entry->reserved = r ? r->ar_reserved : 0;
__entry->asked = r ? r->ar_asked : 0;
__entry->len = len;
),
TP_printk("dev %d:%d agno %u resv %d freeblks %u flcount %u resv %u ask %u len %u\n",
MAJOR(__entry->dev), MINOR(__entry->dev),
__entry->agno,
__entry->resv,
__entry->freeblks,
__entry->flcount,
__entry->reserved,
__entry->asked,
__entry->len)
)
#define DEFINE_AG_RESV_EVENT(name) \
DEFINE_EVENT(xfs_ag_resv_class, name, \
TP_PROTO(struct xfs_perag *pag, enum xfs_ag_resv_type type, \
xfs_extlen_t len), \
TP_ARGS(pag, type, len))
/* per-AG reservation tracepoints */
DEFINE_AG_RESV_EVENT(xfs_ag_resv_init);
DEFINE_AG_RESV_EVENT(xfs_ag_resv_free);
DEFINE_AG_RESV_EVENT(xfs_ag_resv_alloc_extent);
DEFINE_AG_RESV_EVENT(xfs_ag_resv_free_extent);
DEFINE_AG_RESV_EVENT(xfs_ag_resv_critical);
DEFINE_AG_RESV_EVENT(xfs_ag_resv_needed);
DEFINE_AG_ERROR_EVENT(xfs_ag_resv_free_error);
DEFINE_AG_ERROR_EVENT(xfs_ag_resv_init_error);
#endif /* _TRACE_XFS_H */
#undef TRACE_INCLUDE_PATH
......
......@@ -217,7 +217,7 @@ xfs_trans_reserve(
undo_blocks:
if (blocks > 0) {
xfs_mod_fdblocks(tp->t_mountp, -((int64_t)blocks), rsvd);
xfs_mod_fdblocks(tp->t_mountp, (int64_t)blocks, rsvd);
tp->t_blk_res = 0;
}
......@@ -318,7 +318,6 @@ xfs_trans_mod_sb(
* in-core superblock's counter. This should only
* be applied to the on-disk superblock.
*/
ASSERT(delta < 0);
tp->t_res_fdblocks_delta += delta;
if (xfs_sb_version_haslazysbcount(&mp->m_sb))
flags &= ~XFS_TRANS_SB_DIRTY;
......
......@@ -79,7 +79,8 @@ xfs_trans_free_extent(
trace_xfs_bmap_free_deferred(tp->t_mountp, agno, 0, agbno, ext_len);
error = xfs_free_extent(tp, start_block, ext_len, oinfo);
error = xfs_free_extent(tp, start_block, ext_len, oinfo,
XFS_AG_RESV_NONE);
/*
* Mark the transaction dirty, even on error. This ensures the
......
......@@ -147,6 +147,7 @@ __xfs_xattr_put_listent(
arraytop = context->count + prefix_len + namelen + 1;
if (arraytop > context->firstu) {
context->count = -1; /* insufficient space */
context->seen_enough = 1;
return 0;
}
offset = (char *)context->alist + context->count;
......
......@@ -6,13 +6,19 @@
#include <linux/radix-tree.h>
#include <asm/pgtable.h>
struct iomap_ops;
/* We use lowest available exceptional entry bit for locking */
#define RADIX_DAX_ENTRY_LOCK (1 << RADIX_TREE_EXCEPTIONAL_SHIFT)
ssize_t iomap_dax_rw(struct kiocb *iocb, struct iov_iter *iter,
struct iomap_ops *ops);
ssize_t dax_do_io(struct kiocb *, struct inode *, struct iov_iter *,
get_block_t, dio_iodone_t, int flags);
int dax_zero_page_range(struct inode *, loff_t from, unsigned len, get_block_t);
int dax_truncate_page(struct inode *, loff_t from, get_block_t);
int iomap_dax_fault(struct vm_area_struct *vma, struct vm_fault *vmf,
struct iomap_ops *ops);
int dax_fault(struct vm_area_struct *, struct vm_fault *, get_block_t);
int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index);
void dax_wake_mapping_entry_waiter(struct address_space *mapping,
......
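The new prototypes above are the iomap-based replacements for dax_do_io() and dax_fault(). A minimal sketch of how a filesystem read path might wire them up, modelled on the xfs_file_dax_read() change earlier in this commit (the filesystem name and foo_iomap_ops are assumptions; foo_iomap_ops is presumed to be a struct iomap_ops defined elsewhere):

/* Hypothetical DAX read path using the new iomap interface. */
static ssize_t
foo_file_dax_read(
	struct kiocb		*iocb,
	struct iov_iter		*to)
{
	struct inode		*inode = iocb->ki_filp->f_mapping->host;
	ssize_t			ret;

	if (!iov_iter_count(to))
		return 0;	/* skip atime */

	inode_lock_shared(inode);
	ret = iomap_dax_rw(iocb, to, &foo_iomap_ops);
	inode_unlock_shared(inode);

	file_accessed(iocb->ki_filp);
	return ret;
}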
......@@ -22,6 +22,8 @@ struct vm_fault;
* Flags for iomap mappings:
*/
#define IOMAP_F_MERGED 0x01 /* contains multiple blocks/extents */
#define IOMAP_F_SHARED 0x02 /* block shared with another file */
#define IOMAP_F_NEW 0x04 /* blocks have been newly allocated */
/*
* Magic value for blkno:
......@@ -64,6 +66,8 @@ struct iomap_ops {
ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from,
struct iomap_ops *ops);
int iomap_file_dirty(struct inode *inode, loff_t pos, loff_t len,
struct iomap_ops *ops);
int iomap_zero_range(struct inode *inode, loff_t pos, loff_t len,
bool *did_zero, struct iomap_ops *ops);
int iomap_truncate_page(struct inode *inode, loff_t pos, bool *did_zero,
......
......@@ -1923,16 +1923,18 @@ generic_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
if (iocb->ki_flags & IOCB_DIRECT) {
struct address_space *mapping = file->f_mapping;
struct inode *inode = mapping->host;
struct iov_iter data = *iter;
loff_t size;
size = i_size_read(inode);
retval = filemap_write_and_wait_range(mapping, iocb->ki_pos,
iocb->ki_pos + count - 1);
if (!retval) {
struct iov_iter data = *iter;
retval = mapping->a_ops->direct_IO(iocb, &data);
}
if (retval < 0)
goto out;
file_accessed(file);
retval = mapping->a_ops->direct_IO(iocb, &data);
if (retval > 0) {
iocb->ki_pos += retval;
iov_iter_advance(iter, retval);
......@@ -1948,10 +1950,8 @@ generic_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
* DAX files, so don't bother trying.
*/
if (retval < 0 || !iov_iter_count(iter) || iocb->ki_pos >= size ||
IS_DAX(inode)) {
file_accessed(file);
IS_DAX(inode))
goto out;
}
}
retval = do_generic_file_read(file, &iocb->ki_pos, iter, retval);
......