Commit fd048088 authored by Linus Torvalds

Merge branch 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4

* 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4: (43 commits)
  ext4: Rename ext4dev to ext4
  ext4: Avoid double dirtying of super block in ext4_put_super()
  Update ext4 MAINTAINERS file
  Hook ext4 to the vfs fiemap interface.
  generic block based fiemap implementation
  ocfs2: fiemap support
  vfs: vfs-level fiemap interface
  ext4: fix xattr deadlock
  jbd2: Fix buffer head leak when writing the commit block
  ext4: Add debugging markers that can be used by systemtap
  jbd2: abort instead of waiting for nonexistent transaction
  ext4: fix initialization of UNINIT bitmap blocks
  ext4: Remove old legacy block allocator
  ext4: Use readahead when reading an inode from the inode table
  ext4: Improve the documentation for ext4's /proc tunables
  ext4: Combine proc file handling into a single set of functions
  ext4: move /proc setup and teardown out of mballoc.c
  ext4: Don't use 'struct dentry' for internal lookups
  ext4/jbd2: Avoid WARN() messages when failing to write to the superblock
  ext4: use percpu data structures for lg_prealloc_list
  ...
parents 5c3c4d9b 03010a33
......@@ -32,9 +32,9 @@ Mailing list: linux-ext4@vger.kernel.org
you will need to merge your changes with the version from e2fsprogs
1.41.x.
- Create a new filesystem using the ext4dev filesystem type:
- Create a new filesystem using the ext4 filesystem type:
# mke2fs -t ext4dev /dev/hda1
# mke2fs -t ext4 /dev/hda1
Or configure an existing ext3 filesystem to support extents and set
the test_fs flag to indicate that it's ok for an in-development
......@@ -47,13 +47,13 @@ Mailing list: linux-ext4@vger.kernel.org
# tune2fs -I 256 /dev/hda1
(Note: we currently do not have tools to convert an ext4dev
(Note: we currently do not have tools to convert an ext4
filesystem back to ext3; so please do not try this on production
filesystems.)
- Mounting:
# mount -t ext4dev /dev/hda1 /wherever
# mount -t ext4 /dev/hda1 /wherever
- When comparing performance with other filesystems, remember that
ext3/4 by default offers higher data integrity guarantees than most.
......@@ -177,6 +177,11 @@ barrier=<0|1(*)> This enables/disables the use of write barriers in
your disks are battery-backed in one way or another,
disabling barriers may safely improve performance.
inode_readahead=n This tuning parameter controls the maximum
number of inode table blocks that ext4's inode
table readahead algorithm will pre-read into
the buffer cache. The default value is 32 blocks.
orlov (*) This enables the new Orlov block allocator. It is
enabled by default.
......@@ -252,6 +257,7 @@ stripe=n Number of filesystem blocks that mballoc will try
delalloc (*) Deferring block allocation until write-out time.
nodelalloc Disable delayed allocation. Blocks are allocated
when data is copied from user to page cache.
Data Mode
=========
There are 3 different data modes:
......
============
Fiemap Ioctl
============
The fiemap ioctl is an efficient method for userspace to get file
extent mappings. Instead of block-by-block mapping (such as bmap), fiemap
returns a list of extents.
Request Basics
--------------
A fiemap request is encoded within struct fiemap:
struct fiemap {
__u64 fm_start; /* logical offset (inclusive) at
* which to start mapping (in) */
__u64 fm_length; /* logical length of mapping which
* userspace cares about (in) */
__u32 fm_flags; /* FIEMAP_FLAG_* flags for request (in/out) */
__u32 fm_mapped_extents; /* number of extents that were
* mapped (out) */
__u32 fm_extent_count; /* size of fm_extents array (in) */
__u32 fm_reserved;
struct fiemap_extent fm_extents[0]; /* array of mapped extents (out) */
};
fm_start and fm_length specify the logical range within the file
which the process would like mappings for. Extents returned mirror
those on disk - that is, the logical offset of the 1st returned extent
may start before fm_start, and the range covered by the last returned
extent may end after fm_length. All offsets and lengths are in bytes.
Certain flags to modify the way in which mappings are looked up can be
set in fm_flags. If the kernel doesn't understand some particular
flags, it will return EBADR and the contents of fm_flags will contain
the set of flags which caused the error. If the kernel is compatible
with all flags passed, the contents of fm_flags will be unmodified.
It is up to userspace to determine whether rejection of a particular
flag is fatal to its operation. This scheme is intended to allow the
fiemap interface to grow in the future but without losing
compatibility with old software.
fm_extent_count specifies the number of elements in the fm_extents[] array
that can be used to return extents. If fm_extent_count is zero, then the
fm_extents[] array is ignored (no extents will be returned), and the
fm_mapped_extents count will hold the number of extents needed in
fm_extents[] to hold the file's current mapping. Note that there is
nothing to prevent the file from changing between calls to FIEMAP.
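As an illustrative (non-normative) sketch, a userspace program can use this
two-call pattern with the FS_IOC_FIEMAP ioctl from <linux/fs.h>: first probe
with fm_extent_count set to zero to learn how many extents are needed, then
allocate the array and issue the real request. Error handling is abbreviated,
and an EBADR error from the first call would indicate an unsupported flag:

/*
 * Minimal sketch of the two-call pattern: probe for the extent count,
 * then allocate and fetch the extents.  The file may change between the
 * two calls, so fm_mapped_extents from the probe is only a hint.
 */
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

static struct fiemap *map_file(int fd)
{
	struct fiemap probe;
	struct fiemap *fm;
	size_t size;

	memset(&probe, 0, sizeof(probe));
	probe.fm_start = 0;
	probe.fm_length = FIEMAP_MAX_OFFSET;	/* map the whole file */
	probe.fm_extent_count = 0;		/* just count the extents */

	if (ioctl(fd, FS_IOC_FIEMAP, &probe) < 0)
		return NULL;			/* e.g. EBADR, EOPNOTSUPP */

	size = sizeof(*fm) +
	       probe.fm_mapped_extents * sizeof(struct fiemap_extent);
	fm = calloc(1, size);
	if (!fm)
		return NULL;

	fm->fm_start = 0;
	fm->fm_length = FIEMAP_MAX_OFFSET;
	fm->fm_extent_count = probe.fm_mapped_extents;

	if (ioctl(fd, FS_IOC_FIEMAP, fm) < 0) {
		free(fm);
		return NULL;
	}
	return fm;				/* caller frees */
}
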
The following flags can be set in fm_flags:
* FIEMAP_FLAG_SYNC
If this flag is set, the kernel will sync the file before mapping extents.
* FIEMAP_FLAG_XATTR
If this flag is set, the extents returned will describe the inode's
extended attribute lookup tree, instead of its data tree.
Extent Mapping
--------------
Extent information is returned within the embedded fm_extents array,
which userspace must allocate along with the fiemap structure. The
number of elements in the fm_extents[] array should be passed via
fm_extent_count. The number of extents mapped by the kernel will be
returned via fm_mapped_extents. If the number of fiemap_extent
structures allocated is less than would be required to map the
requested range, the maximum number of extents that can be mapped in
the fm_extents[] array will be returned and fm_mapped_extents will be
equal to fm_extent_count. In that case, the last extent in the array
will not complete the requested range and will not have the
FIEMAP_EXTENT_LAST flag set (see the next section on extent flags).
Each extent is described by a single fiemap_extent structure as
returned in fm_extents.
struct fiemap_extent {
__u64 fe_logical; /* logical offset in bytes for the start of
* the extent */
__u64 fe_physical; /* physical offset in bytes for the start
* of the extent */
__u64 fe_length; /* length in bytes for the extent */
__u64 fe_reserved64[2];
__u32 fe_flags; /* FIEMAP_EXTENT_* flags for this extent */
__u32 fe_reserved[3];
};
All offsets and lengths are in bytes and mirror those on disk. It is valid
for an extent's logical offset to start before the request or its logical
length to extend past the request. Unless FIEMAP_EXTENT_NOT_ALIGNED is
returned, fe_logical, fe_physical, and fe_length will be aligned to the
block size of the file system. With the exception of extents flagged as
FIEMAP_EXTENT_MERGED, adjacent extents will not be merged.
The fe_flags field contains flags which describe the extent returned.
A special flag, FIEMAP_EXTENT_LAST, is always set on the last extent in
the file so that the process making fiemap calls can determine when no
more extents are available, without having to call the ioctl again.
Some flags are intentionally vague and will always be set in the
presence of other more specific flags. This way a program looking for
a general property does not have to know all existing and future flags
which imply that property.
For example, if FIEMAP_EXTENT_DATA_INLINE or FIEMAP_EXTENT_DATA_TAIL
are set, FIEMAP_EXTENT_NOT_ALIGNED will also be set. A program looking
for inline or tail-packed data can key on the specific flag. Software
which simply wants to avoid operating on non-aligned extents,
however, can just key on FIEMAP_EXTENT_NOT_ALIGNED, and not have to
worry about all present and future flags which might imply unaligned
data. Note that the opposite is not true - it would be valid for
FIEMAP_EXTENT_NOT_ALIGNED to appear alone.
* FIEMAP_EXTENT_LAST
This is the last extent in the file. A mapping attempt past this
extent will return nothing.
* FIEMAP_EXTENT_UNKNOWN
The location of this extent is currently unknown. This may indicate
the data is stored on an inaccessible volume or that no storage has
been allocated for the file yet.
* FIEMAP_EXTENT_DELALLOC
- This will also set FIEMAP_EXTENT_UNKNOWN.
Delayed allocation - while there is data for this extent, its
physical location has not been allocated yet.
* FIEMAP_EXTENT_ENCODED
This extent does not consist of plain filesystem blocks but is
encoded (e.g. encrypted or compressed). Reading the data in this
extent via I/O to the block device will have undefined results.
Note that it is *always* undefined to try to update the data
in-place by writing to the indicated location without the
assistance of the filesystem, or to access the data using the
information returned by the FIEMAP interface while the filesystem
is mounted. In other words, user applications may only read the
extent data via I/O to the block device while the filesystem is
unmounted, and then only if the FIEMAP_EXTENT_ENCODED flag is
clear; user applications must not try reading or writing to the
filesystem via the block device under any other circumstances.
* FIEMAP_EXTENT_DATA_ENCRYPTED
- This will also set FIEMAP_EXTENT_ENCODED
The data in this extent has been encrypted by the file system.
* FIEMAP_EXTENT_NOT_ALIGNED
Extent offsets and length are not guaranteed to be block aligned.
* FIEMAP_EXTENT_DATA_INLINE
- This will also set FIEMAP_EXTENT_NOT_ALIGNED
Data is located within a metadata block.
* FIEMAP_EXTENT_DATA_TAIL
- This will also set FIEMAP_EXTENT_NOT_ALIGNED
Data is packed into a block with data from other files.
* FIEMAP_EXTENT_UNWRITTEN
Unwritten extent - the extent is allocated but its data has not been
initialized. This indicates the extent's data will be all zero if read
through the filesystem but the contents are undefined if read directly from
the device.
* FIEMAP_EXTENT_MERGED
This will be set when a file does not support extents, i.e., it uses a block
based addressing scheme. Since returning an extent for each block back to
userspace would be highly inefficient, the kernel will try to merge most
adjacent blocks into 'extents'.
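A short sketch of how a program might walk a filled-in fiemap buffer and key
on these flags (assuming the structure was populated as in the earlier
example); it relies on the general FIEMAP_EXTENT_NOT_ALIGNED flag rather than
the more specific inline/tail flags, and stops at FIEMAP_EXTENT_LAST:

/*
 * Walk a filled-in struct fiemap and report each extent.
 */
#include <stdio.h>
#include <linux/fiemap.h>

static void dump_extents(const struct fiemap *fm)
{
	unsigned int i;

	for (i = 0; i < fm->fm_mapped_extents; i++) {
		const struct fiemap_extent *fe = &fm->fm_extents[i];

		printf("logical %llu physical %llu length %llu flags 0x%x\n",
		       (unsigned long long)fe->fe_logical,
		       (unsigned long long)fe->fe_physical,
		       (unsigned long long)fe->fe_length,
		       fe->fe_flags);

		if (fe->fe_flags & FIEMAP_EXTENT_UNKNOWN)
			printf("  no physical location yet%s\n",
			       (fe->fe_flags & FIEMAP_EXTENT_DELALLOC) ?
			       " (delayed allocation)" : "");

		if (fe->fe_flags & FIEMAP_EXTENT_NOT_ALIGNED)
			printf("  not block aligned - avoid raw device I/O\n");

		if (fe->fe_flags & FIEMAP_EXTENT_LAST)
			break;		/* nothing mapped past this extent */
	}
}
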
VFS -> File System Implementation
---------------------------------
File systems wishing to support fiemap must implement a ->fiemap callback on
their inode_operations structure. The fs ->fiemap call is responsible for
defining its set of supported fiemap flags, and calling a helper function on
each discovered extent:
struct inode_operations {
...
int (*fiemap)(struct inode *, struct fiemap_extent_info *, u64 start,
u64 len);
->fiemap is passed struct fiemap_extent_info which describes the
fiemap request:
struct fiemap_extent_info {
unsigned int fi_flags; /* Flags as passed from user */
unsigned int fi_extents_mapped; /* Number of mapped extents */
unsigned int fi_extents_max; /* Size of fiemap_extent array */
struct fiemap_extent *fi_extents_start; /* Start of fiemap_extent array */
};
It is intended that the file system should not need to access any of this
structure directly.
Flag checking should be done at the beginning of the ->fiemap callback via the
fiemap_check_flags() helper:
int fiemap_check_flags(struct fiemap_extent_info *fieinfo, u32 fs_flags);
The struct fieinfo should be passed in as received from ioctl_fiemap(). The
set of fiemap flags which the fs understands should be passed via fs_flags. If
fiemap_check_flags finds invalid user flags, it will place the bad values in
fieinfo->fi_flags and return -EBADR. If the file system gets -EBADR from
fiemap_check_flags(), it should immediately exit, returning that error back to
ioctl_fiemap().
For each extent in the request range, the file system should call
the helper function, fiemap_fill_next_extent():
int fiemap_fill_next_extent(struct fiemap_extent_info *info, u64 logical,
u64 phys, u64 len, u32 flags, u32 dev);
fiemap_fill_next_extent() will use the passed values to populate the
next free extent in the fm_extents array. 'General' extent flags will
automatically be set from specific flags on behalf of the calling file
system so that the userspace API is not broken.
fiemap_fill_next_extent() returns 0 on success, and 1 when the
user-supplied fm_extents array is full. If an error is encountered
while copying the extent to user memory, -EFAULT will be returned.
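To tie the helpers together, a simplified ->fiemap implementation for a
hypothetical filesystem might look roughly like the sketch below. The
myfs_next_extent() helper is a placeholder, not a real kernel function, and
fiemap_fill_next_extent() is called here with the five-argument form used by
the ext4 code later in this patch. Filesystems that already have a get_block
routine can usually just call generic_block_fiemap() instead, as the ext2 and
ext3 changes below do:

/*
 * Hypothetical skeleton of a ->fiemap method.  myfs_next_extent() stands
 * in for whatever the filesystem uses to walk its own mapping metadata.
 */
#include <linux/fs.h>
#include <linux/fiemap.h>

/* Placeholder: look up the next mapped extent at or after 'start'. */
static int myfs_next_extent(struct inode *inode, u64 start, u64 len,
			    u64 *logical, u64 *phys, u64 *elen, u32 *flags);

static int myfs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
		       u64 start, u64 len)
{
	u64 end = start + len;
	u64 logical, phys, elen;
	u32 flags;
	int ret;

	/* Reject any fiemap flags this filesystem does not understand. */
	ret = fiemap_check_flags(fieinfo, FIEMAP_FLAG_SYNC);
	if (ret)
		return ret;	/* -EBADR is passed back to ioctl_fiemap() */

	while (start < end &&
	       myfs_next_extent(inode, start, end - start,
				&logical, &phys, &elen, &flags) == 0) {
		ret = fiemap_fill_next_extent(fieinfo, logical, phys,
					      elen, flags);
		if (ret)
			return ret < 0 ? ret : 0;  /* 1: user array is full */
		start = logical + elen;		/* advance past this extent */
	}
	return 0;
}
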
......@@ -923,45 +923,44 @@ CPUs.
The "procs_blocked" line gives the number of processes currently blocked,
waiting for I/O to complete.
1.9 Ext4 file system parameters
------------------------------
Ext4 file system has one directory per partition under /proc/fs/ext4/
# ls /proc/fs/ext4/hdc/
group_prealloc max_to_scan mb_groups mb_history min_to_scan order2_req
stats stream_req
mb_groups:
This file gives the details of multiblock allocator buddy cache of free blocks
mb_history:
Multiblock allocation history.
stats:
This file indicates whether the multiblock allocator should start collecting
statistics. The statistics are shown during unmount.
group_prealloc:
The multiblock allocator normalizes the block allocation request to
group_prealloc filesystem blocks if the stripe value is not set.
The stripe value can be specified at mount time or during mke2fs.
max_to_scan:
How long the multiblock allocator can look for the best extent (in found extents)
min_to_scan:
How long the multiblock allocator must look for the best extent
order2_req:
The multiblock allocator uses a 2^N buddy search only for requests greater
than or equal to order2_req. The request size is specified in file system
blocks. A value of 2 means the buddy search is used only for requests of
4 or more blocks.
stream_req:
Files smaller than stream_req are served by the stream allocator, whose
purpose is to pack requests as close to each other as possible to
produce smooth I/O traffic. A value of 16 indicates that files smaller than
16 filesystem blocks will use group-based preallocation.
Information about mounted ext4 file systems can be found in
/proc/fs/ext4. Each mounted filesystem will have a directory in
/proc/fs/ext4 based on its device name (i.e., /proc/fs/ext4/hdc or
/proc/fs/ext4/dm-0). The files in each per-device directory are shown
in Table 1-10, below.
Table 1-10: Files in /proc/fs/ext4/<devname>
..............................................................................
File Content
mb_groups details of multiblock allocator buddy cache of free blocks
mb_history multiblock allocation history
stats controls whether the multiblock allocator should start
collecting statistics, which are shown during the unmount
group_prealloc the multiblock allocator will round up allocation
requests to a multiple of this tuning parameter if the
stripe size is not set in the ext4 superblock
max_to_scan The maximum number of extents the multiblock allocator
will search to find the best extent
min_to_scan The minimum number of extents the multiblock allocator
will search to find the best extent
order2_req Tuning parameter which controls the minimum size for
requests (as a power of 2) where the buddy cache is
used
stream_req Files which have fewer blocks than this tunable
parameter will have their blocks allocated out of a
block group specific preallocation pool, so that small
files are packed closely together. Each large file
will have its blocks allocated out of its own unique
preallocation pool.
inode_readahead Tuning parameter which controls the maximum number of
inode table blocks that ext4's inode table readahead
algorithm will pre-read into the buffer cache
..............................................................................
------------------------------------------------------------------------------
Summary
......
......@@ -1659,9 +1659,10 @@ L: linux-ext4@vger.kernel.org
S: Maintained
EXT4 FILE SYSTEM
P: Stephen Tweedie, Andrew Morton
M: sct@redhat.com, akpm@linux-foundation.org, adilger@sun.com
P: Theodore Ts'o
M: tytso@mit.edu, adilger@sun.com
L: linux-ext4@vger.kernel.org
W: http://ext4.wiki.kernel.org
S: Maintained
F71805F HARDWARE MONITORING DRIVER
......
......@@ -136,37 +136,51 @@ config EXT3_FS_SECURITY
If you are not using a security module that requires using
extended attributes for file security labels, say N.
config EXT4DEV_FS
tristate "Ext4dev/ext4 extended fs support development (EXPERIMENTAL)"
depends on EXPERIMENTAL
config EXT4_FS
tristate "The Extended 4 (ext4) filesystem"
select JBD2
select CRC16
help
Ext4dev is a predecessor filesystem of the next generation
extended fs ext4, based on ext3 filesystem code. It will be
renamed ext4 fs later, once ext4dev is mature and stabilized.
This is the next generation of the ext3 filesystem.
Unlike the change from ext2 filesystem to ext3 filesystem,
the on-disk format of ext4dev is not the same as ext3 any more:
it is based on extent maps and it supports 48-bit physical block
numbers. These combined on-disk format changes will allow
ext4dev/ext4 to handle more than 16 TB filesystem volumes --
a hard limit that ext3 cannot overcome without changing the
on-disk format.
Other than extent maps and 48-bit block numbers, ext4dev also is
likely to have other new features such as persistent preallocation,
high resolution time stamps, and larger file support etc. These
features will be added to ext4dev gradually.
the on-disk format of ext4 is not forwards compatible with
ext3; it is based on extent maps and it supports 48-bit
physical block numbers. The ext4 filesystem also supports delayed
allocation, persistent preallocation, high resolution time stamps,
and a number of other features to improve performance and speed
up fsck time. For more information, please see the web pages at
http://ext4.wiki.kernel.org.
The ext4 filesystem will support mounting an ext3
filesystem; while there will be some performance gains from
the delayed allocation and inode table readahead, the best
performance gains will require enabling ext4 features in the
filesystem, or formatting a new filesystem as an ext4
filesystem initially.
To compile this file system support as a module, choose M here. The
module will be called ext4dev.
If unsure, say N.
config EXT4DEV_FS_XATTR
bool "Ext4dev extended attributes"
depends on EXT4DEV_FS
config EXT4DEV_COMPAT
bool "Enable ext4dev compatibility"
depends on EXT4_FS
help
Starting with 2.6.28, the ext4 filesystem was renamed from
ext4dev to ext4. Unfortunately there are some legacy userspace
programs (such as klibc's fstype) that have "ext4dev"
hardcoded.
To enable backwards compatibility, so that systems which still
expect to mount ext4 filesystems using the ext4dev name can do
so, choose Y here. This feature will go away by 2.6.31, so
please arrange to get your userspace programs fixed!
config EXT4_FS_XATTR
bool "Ext4 extended attributes"
depends on EXT4_FS
default y
help
Extended attributes are name:value pairs associated with inodes by
......@@ -175,11 +189,11 @@ config EXT4DEV_FS_XATTR
If unsure, say N.
You need this for POSIX ACL support on ext4dev/ext4.
You need this for POSIX ACL support on ext4.
config EXT4DEV_FS_POSIX_ACL
bool "Ext4dev POSIX Access Control Lists"
depends on EXT4DEV_FS_XATTR
config EXT4_FS_POSIX_ACL
bool "Ext4 POSIX Access Control Lists"
depends on EXT4_FS_XATTR
select FS_POSIX_ACL
help
POSIX Access Control Lists (ACLs) support permissions for users and
......@@ -190,14 +204,14 @@ config EXT4DEV_FS_POSIX_ACL
If you don't know what Access Control Lists are, say N
config EXT4DEV_FS_SECURITY
bool "Ext4dev Security Labels"
depends on EXT4DEV_FS_XATTR
config EXT4_FS_SECURITY
bool "Ext4 Security Labels"
depends on EXT4_FS_XATTR
help
Security labels support alternative access control models
implemented by security modules like SELinux. This option
enables an extended attribute handler for file security
labels in the ext4dev/ext4 filesystem.
labels in the ext4 filesystem.
If you are not using a security module that requires using
extended attributes for file security labels, say N.
......@@ -240,22 +254,22 @@ config JBD2
help
This is a generic journaling layer for block devices that support
both 32-bit and 64-bit block numbers. It is currently used by
the ext4dev/ext4 filesystem, but it could also be used to add
the ext4 filesystem, but it could also be used to add
journal support to other file systems or block devices such
as RAID or LVM.
If you are using ext4dev/ext4, you need to say Y here. If you are not
using ext4dev/ext4 then you will probably want to say N.
If you are using ext4, you need to say Y here. If you are not
using ext4 then you will probably want to say N.
To compile this device as a module, choose M here. The module will be
called jbd2. If you are compiling ext4dev/ext4 into the kernel,
called jbd2. If you are compiling ext4 into the kernel,
you cannot compile this code as a module.
config JBD2_DEBUG
bool "JBD2 (ext4dev/ext4) debugging support"
bool "JBD2 (ext4) debugging support"
depends on JBD2 && DEBUG_FS
help
If you are using the ext4dev/ext4 journaled file system (or
If you are using the ext4 journaled file system (or
potentially any other filesystem/device using JBD2), this option
allows you to enable debugging output while the system is running,
in order to help track down any problems you are having.
......@@ -270,9 +284,9 @@ config JBD2_DEBUG
config FS_MBCACHE
# Meta block cache for Extended Attributes (ext2/ext3/ext4)
tristate
depends on EXT2_FS_XATTR || EXT3_FS_XATTR || EXT4DEV_FS_XATTR
default y if EXT2_FS=y || EXT3_FS=y || EXT4DEV_FS=y
default m if EXT2_FS=m || EXT3_FS=m || EXT4DEV_FS=m
depends on EXT2_FS_XATTR || EXT3_FS_XATTR || EXT4_FS_XATTR
default y if EXT2_FS=y || EXT3_FS=y || EXT4_FS=y
default m if EXT2_FS=m || EXT3_FS=m || EXT4_FS=m
config REISERFS_FS
tristate "Reiserfs support"
......
......@@ -69,7 +69,7 @@ obj-$(CONFIG_DLM) += dlm/
# Do not add any filesystems before this line
obj-$(CONFIG_REISERFS_FS) += reiserfs/
obj-$(CONFIG_EXT3_FS) += ext3/ # Before ext2 so root fs can be ext3
obj-$(CONFIG_EXT4DEV_FS) += ext4/ # Before ext2 so root fs can be ext4dev
obj-$(CONFIG_EXT4_FS) += ext4/ # Before ext2 so root fs can be ext4dev
obj-$(CONFIG_JBD) += jbd/
obj-$(CONFIG_JBD2) += jbd2/
obj-$(CONFIG_EXT2_FS) += ext2/
......
......@@ -133,6 +133,8 @@ extern void ext2_truncate (struct inode *);
extern int ext2_setattr (struct dentry *, struct iattr *);
extern void ext2_set_inode_flags(struct inode *inode);
extern void ext2_get_inode_flags(struct ext2_inode_info *);
extern int ext2_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
u64 start, u64 len);
int __ext2_write_begin(struct file *file, struct address_space *mapping,
loff_t pos, unsigned len, unsigned flags,
struct page **pagep, void **fsdata);
......
......@@ -86,4 +86,5 @@ const struct inode_operations ext2_file_inode_operations = {
#endif
.setattr = ext2_setattr,
.permission = ext2_permission,
.fiemap = ext2_fiemap,
};
......@@ -31,6 +31,7 @@
#include <linux/writeback.h>
#include <linux/buffer_head.h>
#include <linux/mpage.h>
#include <linux/fiemap.h>
#include "ext2.h"
#include "acl.h"
#include "xip.h"
......@@ -704,6 +705,13 @@ int ext2_get_block(struct inode *inode, sector_t iblock, struct buffer_head *bh_
}
int ext2_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
u64 start, u64 len)
{
return generic_block_fiemap(inode, fieinfo, start, len,
ext2_get_block);
}
static int ext2_writepage(struct page *page, struct writeback_control *wbc)
{
return block_write_full_page(page, ext2_get_block, wbc);
......
......@@ -134,5 +134,6 @@ const struct inode_operations ext3_file_inode_operations = {
.removexattr = generic_removexattr,
#endif
.permission = ext3_permission,
.fiemap = ext3_fiemap,
};
......@@ -36,6 +36,7 @@
#include <linux/mpage.h>
#include <linux/uio.h>
#include <linux/bio.h>
#include <linux/fiemap.h>
#include "xattr.h"
#include "acl.h"
......@@ -981,6 +982,13 @@ static int ext3_get_block(struct inode *inode, sector_t iblock,
return ret;
}
int ext3_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
u64 start, u64 len)
{
return generic_block_fiemap(inode, fieinfo, start, len,
ext3_get_block);
}
/*
* `handle' can be NULL if create is zero
*/
......
......@@ -2,12 +2,12 @@
# Makefile for the linux ext4-filesystem routines.
#
obj-$(CONFIG_EXT4DEV_FS) += ext4dev.o
obj-$(CONFIG_EXT4_FS) += ext4.o
ext4dev-y := balloc.o bitmap.o dir.o file.o fsync.o ialloc.o inode.o \
ext4-y := balloc.o bitmap.o dir.o file.o fsync.o ialloc.o inode.o \
ioctl.o namei.o super.o symlink.o hash.o resize.o extents.o \
ext4_jbd2.o migrate.o mballoc.o
ext4dev-$(CONFIG_EXT4DEV_FS_XATTR) += xattr.o xattr_user.o xattr_trusted.o
ext4dev-$(CONFIG_EXT4DEV_FS_POSIX_ACL) += acl.o
ext4dev-$(CONFIG_EXT4DEV_FS_SECURITY) += xattr_security.o
ext4-$(CONFIG_EXT4_FS_XATTR) += xattr.o xattr_user.o xattr_trusted.o
ext4-$(CONFIG_EXT4_FS_POSIX_ACL) += acl.o
ext4-$(CONFIG_EXT4_FS_SECURITY) += xattr_security.o
......@@ -51,18 +51,18 @@ static inline int ext4_acl_count(size_t size)
}
}
#ifdef CONFIG_EXT4DEV_FS_POSIX_ACL
#ifdef CONFIG_EXT4_FS_POSIX_ACL
/* Value for inode->u.ext4_i.i_acl and inode->u.ext4_i.i_default_acl
if the ACL has not been cached */
#define EXT4_ACL_NOT_CACHED ((void *)-1)
/* acl.c */
extern int ext4_permission (struct inode *, int);
extern int ext4_acl_chmod (struct inode *);
extern int ext4_init_acl (handle_t *, struct inode *, struct inode *);
extern int ext4_permission(struct inode *, int);
extern int ext4_acl_chmod(struct inode *);
extern int ext4_init_acl(handle_t *, struct inode *, struct inode *);
#else /* CONFIG_EXT4DEV_FS_POSIX_ACL */
#else /* CONFIG_EXT4_FS_POSIX_ACL */
#include <linux/sched.h>
#define ext4_permission NULL
......@@ -77,5 +77,5 @@ ext4_init_acl(handle_t *handle, struct inode *inode, struct inode *dir)
{
return 0;
}
#endif /* CONFIG_EXT4DEV_FS_POSIX_ACL */
#endif /* CONFIG_EXT4_FS_POSIX_ACL */
......@@ -15,17 +15,17 @@
static const int nibblemap[] = {4, 3, 3, 2, 3, 2, 2, 1, 3, 2, 2, 1, 2, 1, 1, 0};
unsigned long ext4_count_free (struct buffer_head * map, unsigned int numchars)
unsigned long ext4_count_free(struct buffer_head *map, unsigned int numchars)
{
unsigned int i;
unsigned long sum = 0;
if (!map)
return (0);
return 0;
for (i = 0; i < numchars; i++)
sum += nibblemap[map->b_data[i] & 0xf] +
nibblemap[(map->b_data[i] >> 4) & 0xf];
return (sum);
return sum;
}
#endif /* EXT4FS_DEBUG */
......
......@@ -33,10 +33,10 @@ static unsigned char ext4_filetype_table[] = {
};
static int ext4_readdir(struct file *, void *, filldir_t);
static int ext4_dx_readdir(struct file * filp,
void * dirent, filldir_t filldir);
static int ext4_release_dir (struct inode * inode,
struct file * filp);
static int ext4_dx_readdir(struct file *filp,
void *dirent, filldir_t filldir);
static int ext4_release_dir(struct inode *inode,
struct file *filp);
const struct file_operations ext4_dir_operations = {
.llseek = generic_file_llseek,
......@@ -61,12 +61,12 @@ static unsigned char get_dtype(struct super_block *sb, int filetype)
}
int ext4_check_dir_entry (const char * function, struct inode * dir,
struct ext4_dir_entry_2 * de,
struct buffer_head * bh,
int ext4_check_dir_entry(const char *function, struct inode *dir,
struct ext4_dir_entry_2 *de,
struct buffer_head *bh,
unsigned long offset)
{
const char * error_msg = NULL;
const char *error_msg = NULL;
const int rlen = ext4_rec_len_from_disk(de->rec_len);
if (rlen < EXT4_DIR_REC_LEN(1))
......@@ -82,7 +82,7 @@ int ext4_check_dir_entry (const char * function, struct inode * dir,
error_msg = "inode out of bounds";
if (error_msg != NULL)
ext4_error (dir->i_sb, function,
ext4_error(dir->i_sb, function,
"bad entry in directory #%lu: %s - "
"offset=%lu, inode=%lu, rec_len=%d, name_len=%d",
dir->i_ino, error_msg, offset,
......@@ -91,8 +91,8 @@ int ext4_check_dir_entry (const char * function, struct inode * dir,
return error_msg == NULL ? 1 : 0;
}
static int ext4_readdir(struct file * filp,
void * dirent, filldir_t filldir)
static int ext4_readdir(struct file *filp,
void *dirent, filldir_t filldir)
{
int error = 0;
unsigned long offset;
......@@ -102,6 +102,7 @@ static int ext4_readdir(struct file * filp,
int err;
struct inode *inode = filp->f_path.dentry->d_inode;
int ret = 0;
int dir_has_error = 0;
sb = inode->i_sb;
......@@ -148,9 +149,13 @@ static int ext4_readdir(struct file * filp,
* of recovering data when there's a bad sector
*/
if (!bh) {
ext4_error (sb, "ext4_readdir",
"directory #%lu contains a hole at offset %lu",
inode->i_ino, (unsigned long)filp->f_pos);
if (!dir_has_error) {
ext4_error(sb, __func__, "directory #%lu "
"contains a hole at offset %Lu",
inode->i_ino,
(unsigned long long) filp->f_pos);
dir_has_error = 1;
}
/* corrupt size? Maybe no more blocks to read */
if (filp->f_pos > inode->i_blocks << 9)
break;
......@@ -187,14 +192,14 @@ static int ext4_readdir(struct file * filp,
while (!error && filp->f_pos < inode->i_size
&& offset < sb->s_blocksize) {
de = (struct ext4_dir_entry_2 *) (bh->b_data + offset);
if (!ext4_check_dir_entry ("ext4_readdir", inode, de,
if (!ext4_check_dir_entry("ext4_readdir", inode, de,
bh, offset)) {
/*
* On error, skip the f_pos to the next block
*/
filp->f_pos = (filp->f_pos |
(sb->s_blocksize - 1)) + 1;
brelse (bh);
brelse(bh);
ret = stored;
goto out;
}
......@@ -218,12 +223,12 @@ static int ext4_readdir(struct file * filp,
break;
if (version != filp->f_version)
goto revalidate;
stored ++;
stored++;
}
filp->f_pos += ext4_rec_len_from_disk(de->rec_len);
}
offset = 0;
brelse (bh);
brelse(bh);
}
out:
return ret;
......@@ -290,9 +295,9 @@ static void free_rb_tree_fname(struct rb_root *root)
parent = rb_parent(n);
fname = rb_entry(n, struct fname, rb_hash);
while (fname) {
struct fname * old = fname;
struct fname *old = fname;
fname = fname->next;
kfree (old);
kfree(old);
}
if (!parent)
root->rb_node = NULL;
......@@ -331,7 +336,7 @@ int ext4_htree_store_dirent(struct file *dir_file, __u32 hash,
struct ext4_dir_entry_2 *dirent)
{
struct rb_node **p, *parent = NULL;
struct fname * fname, *new_fn;
struct fname *fname, *new_fn;
struct dir_private_info *info;
int len;
......@@ -388,19 +393,20 @@ int ext4_htree_store_dirent(struct file *dir_file, __u32 hash,
* for all entries on the fname linked list. (Normally there is only
* one entry on the linked list, unless there are 62 bit hash collisions.)
*/
static int call_filldir(struct file * filp, void * dirent,
static int call_filldir(struct file *filp, void *dirent,
filldir_t filldir, struct fname *fname)
{
struct dir_private_info *info = filp->private_data;
loff_t curr_pos;
struct inode *inode = filp->f_path.dentry->d_inode;
struct super_block * sb;
struct super_block *sb;
int error;
sb = inode->i_sb;
if (!fname) {
printk("call_filldir: called with null fname?!?\n");
printk(KERN_ERR "ext4: call_filldir: called with "
"null fname?!?\n");
return 0;
}
curr_pos = hash2pos(fname->hash, fname->minor_hash);
......@@ -419,8 +425,8 @@ static int call_filldir(struct file * filp, void * dirent,
return 0;
}
static int ext4_dx_readdir(struct file * filp,
void * dirent, filldir_t filldir)
static int ext4_dx_readdir(struct file *filp,
void *dirent, filldir_t filldir)
{
struct dir_private_info *info = filp->private_data;
struct inode *inode = filp->f_path.dentry->d_inode;
......@@ -511,7 +517,7 @@ static int ext4_dx_readdir(struct file * filp,
return 0;
}
static int ext4_release_dir (struct inode * inode, struct file * filp)
static int ext4_release_dir(struct inode *inode, struct file *filp)
{
if (filp->private_data)
ext4_htree_free_dir_info(filp->private_data);
......
......@@ -124,6 +124,19 @@ struct ext4_ext_path {
#define EXT4_EXT_CACHE_GAP 1
#define EXT4_EXT_CACHE_EXTENT 2
/*
* to be called by ext4_ext_walk_space()
* negative retcode - error
* positive retcode - signal for ext4_ext_walk_space(), see below
* callback must return valid extent (passed or newly created)
*/
typedef int (*ext_prepare_callback)(struct inode *, struct ext4_ext_path *,
struct ext4_ext_cache *,
struct ext4_extent *, void *);
#define EXT_CONTINUE 0
#define EXT_BREAK 1
#define EXT_REPEAT 2
#define EXT_MAX_BLOCK 0xffffffff
......@@ -224,6 +237,8 @@ extern int ext4_ext_try_to_merge(struct inode *inode,
struct ext4_extent *);
extern unsigned int ext4_ext_check_overlap(struct inode *, struct ext4_extent *, struct ext4_ext_path *);
extern int ext4_ext_insert_extent(handle_t *, struct inode *, struct ext4_ext_path *, struct ext4_extent *);
extern int ext4_ext_walk_space(struct inode *, ext4_lblk_t, ext4_lblk_t,
ext_prepare_callback, void *);
extern struct ext4_ext_path *ext4_ext_find_extent(struct inode *, ext4_lblk_t,
struct ext4_ext_path *);
extern int ext4_ext_search_left(struct inode *, struct ext4_ext_path *,
......
......@@ -33,38 +33,6 @@ typedef __u32 ext4_lblk_t;
/* data type for block group number */
typedef unsigned long ext4_group_t;
struct ext4_reserve_window {
ext4_fsblk_t _rsv_start; /* First byte reserved */
ext4_fsblk_t _rsv_end; /* Last byte reserved or 0 */
};
struct ext4_reserve_window_node {
struct rb_node rsv_node;
__u32 rsv_goal_size;
__u32 rsv_alloc_hit;
struct ext4_reserve_window rsv_window;
};
struct ext4_block_alloc_info {
/* information about reservation window */
struct ext4_reserve_window_node rsv_window_node;
/*
* was i_next_alloc_block in ext4_inode_info
* is the logical (file-relative) number of the
* most-recently-allocated block in this file.
* We use this for detecting linearly ascending allocation requests.
*/
ext4_lblk_t last_alloc_logical_block;
/*
* Was i_next_alloc_goal in ext4_inode_info
* is the *physical* companion to i_next_alloc_block.
* it is the physical block number of the block which was most recently
* allocated to this file. This gives us the goal (target) for the next
* allocation when we detect linearly ascending requests.
*/
ext4_fsblk_t last_alloc_physical_block;
};
#define rsv_start rsv_window._rsv_start
#define rsv_end rsv_window._rsv_end
......@@ -97,11 +65,8 @@ struct ext4_inode_info {
ext4_group_t i_block_group;
__u32 i_state; /* Dynamic state flags for ext4 */
/* block reservation info */
struct ext4_block_alloc_info *i_block_alloc_info;
ext4_lblk_t i_dir_start_lookup;
#ifdef CONFIG_EXT4DEV_FS_XATTR
#ifdef CONFIG_EXT4_FS_XATTR
/*
* Extended attributes can be read independently of the main file
* data. Taking i_mutex even when reading would cause contention
......@@ -111,7 +76,7 @@ struct ext4_inode_info {
*/
struct rw_semaphore xattr_sem;
#endif
#ifdef CONFIG_EXT4DEV_FS_POSIX_ACL
#ifdef CONFIG_EXT4_FS_POSIX_ACL
struct posix_acl *i_acl;
struct posix_acl *i_default_acl;
#endif
......
......@@ -40,8 +40,8 @@ struct ext4_sb_info {
unsigned long s_blocks_last; /* Last seen block count */
loff_t s_bitmap_maxbytes; /* max bytes for bitmap files */
struct buffer_head * s_sbh; /* Buffer containing the super block */
struct ext4_super_block * s_es; /* Pointer to the super block in the buffer */
struct buffer_head ** s_group_desc;
struct ext4_super_block *s_es; /* Pointer to the super block in the buffer */
struct buffer_head **s_group_desc;
unsigned long s_mount_opt;
ext4_fsblk_t s_sb_block;
uid_t s_resuid;
......@@ -52,6 +52,7 @@ struct ext4_sb_info {
int s_desc_per_block_bits;
int s_inode_size;
int s_first_ino;
unsigned int s_inode_readahead_blks;
spinlock_t s_next_gen_lock;
u32 s_next_generation;
u32 s_hash_seed[4];
......@@ -59,16 +60,17 @@ struct ext4_sb_info {
struct percpu_counter s_freeblocks_counter;
struct percpu_counter s_freeinodes_counter;
struct percpu_counter s_dirs_counter;
struct percpu_counter s_dirtyblocks_counter;
struct blockgroup_lock s_blockgroup_lock;
struct proc_dir_entry *s_proc;
/* root of the per fs reservation window tree */
spinlock_t s_rsv_window_lock;
struct rb_root s_rsv_window_root;
struct ext4_reserve_window_node s_rsv_window_head;
/* Journaling */
struct inode * s_journal_inode;
struct journal_s * s_journal;
struct inode *s_journal_inode;
struct journal_s *s_journal;
struct list_head s_orphan;
unsigned long s_commit_interval;
struct block_device *journal_bdev;
......@@ -106,12 +108,12 @@ struct ext4_sb_info {
/* tunables */
unsigned long s_stripe;
unsigned long s_mb_stream_request;
unsigned long s_mb_max_to_scan;
unsigned long s_mb_min_to_scan;
unsigned long s_mb_stats;
unsigned long s_mb_order2_reqs;
unsigned long s_mb_group_prealloc;
unsigned int s_mb_stream_request;
unsigned int s_mb_max_to_scan;
unsigned int s_mb_min_to_scan;
unsigned int s_mb_stats;
unsigned int s_mb_order2_reqs;
unsigned int s_mb_group_prealloc;
/* where last allocation was done - for stream allocation */
unsigned long s_mb_last_group;
unsigned long s_mb_last_start;
......@@ -121,7 +123,6 @@ struct ext4_sb_info {
int s_mb_history_cur;
int s_mb_history_max;
int s_mb_history_num;
struct proc_dir_entry *s_mb_proc;
spinlock_t s_mb_history_lock;
int s_mb_history_filter;
......
......@@ -40,6 +40,7 @@
#include <linux/slab.h>
#include <linux/falloc.h>
#include <asm/uaccess.h>
#include <linux/fiemap.h>
#include "ext4_jbd2.h"
#include "ext4_extents.h"
......@@ -383,8 +384,8 @@ static void ext4_ext_show_leaf(struct inode *inode, struct ext4_ext_path *path)
ext_debug("\n");
}
#else
#define ext4_ext_show_path(inode,path)
#define ext4_ext_show_leaf(inode,path)
#define ext4_ext_show_path(inode, path)
#define ext4_ext_show_leaf(inode, path)
#endif
void ext4_ext_drop_refs(struct ext4_ext_path *path)
......@@ -440,9 +441,10 @@ ext4_ext_binsearch_idx(struct inode *inode,
for (k = 0; k < le16_to_cpu(eh->eh_entries); k++, ix++) {
if (k != 0 &&
le32_to_cpu(ix->ei_block) <= le32_to_cpu(ix[-1].ei_block)) {
printk("k=%d, ix=0x%p, first=0x%p\n", k,
printk(KERN_DEBUG "k=%d, ix=0x%p, "
"first=0x%p\n", k,
ix, EXT_FIRST_INDEX(eh));
printk("%u <= %u\n",
printk(KERN_DEBUG "%u <= %u\n",
le32_to_cpu(ix->ei_block),
le32_to_cpu(ix[-1].ei_block));
}
......@@ -1475,7 +1477,7 @@ int ext4_ext_insert_extent(handle_t *handle, struct inode *inode,
struct ext4_ext_path *path,
struct ext4_extent *newext)
{
struct ext4_extent_header * eh;
struct ext4_extent_header *eh;
struct ext4_extent *ex, *fex;
struct ext4_extent *nearex; /* nearest extent */
struct ext4_ext_path *npath = NULL;
......@@ -1625,6 +1627,113 @@ int ext4_ext_insert_extent(handle_t *handle, struct inode *inode,
return err;
}
int ext4_ext_walk_space(struct inode *inode, ext4_lblk_t block,
ext4_lblk_t num, ext_prepare_callback func,
void *cbdata)
{
struct ext4_ext_path *path = NULL;
struct ext4_ext_cache cbex;
struct ext4_extent *ex;
ext4_lblk_t next, start = 0, end = 0;
ext4_lblk_t last = block + num;
int depth, exists, err = 0;
BUG_ON(func == NULL);
BUG_ON(inode == NULL);
while (block < last && block != EXT_MAX_BLOCK) {
num = last - block;
/* find extent for this block */
path = ext4_ext_find_extent(inode, block, path);
if (IS_ERR(path)) {
err = PTR_ERR(path);
path = NULL;
break;
}
depth = ext_depth(inode);
BUG_ON(path[depth].p_hdr == NULL);
ex = path[depth].p_ext;
next = ext4_ext_next_allocated_block(path);
exists = 0;
if (!ex) {
/* there is no extent yet, so try to allocate
* all requested space */
start = block;
end = block + num;
} else if (le32_to_cpu(ex->ee_block) > block) {
/* need to allocate space before found extent */
start = block;
end = le32_to_cpu(ex->ee_block);
if (block + num < end)
end = block + num;
} else if (block >= le32_to_cpu(ex->ee_block)
+ ext4_ext_get_actual_len(ex)) {
/* need to allocate space after found extent */
start = block;
end = block + num;
if (end >= next)
end = next;
} else if (block >= le32_to_cpu(ex->ee_block)) {
/*
* some part of requested space is covered
* by found extent
*/
start = block;
end = le32_to_cpu(ex->ee_block)
+ ext4_ext_get_actual_len(ex);
if (block + num < end)
end = block + num;
exists = 1;
} else {
BUG();
}
BUG_ON(end <= start);
if (!exists) {
cbex.ec_block = start;
cbex.ec_len = end - start;
cbex.ec_start = 0;
cbex.ec_type = EXT4_EXT_CACHE_GAP;
} else {
cbex.ec_block = le32_to_cpu(ex->ee_block);
cbex.ec_len = ext4_ext_get_actual_len(ex);
cbex.ec_start = ext_pblock(ex);
cbex.ec_type = EXT4_EXT_CACHE_EXTENT;
}
BUG_ON(cbex.ec_len == 0);
err = func(inode, path, &cbex, ex, cbdata);
ext4_ext_drop_refs(path);
if (err < 0)
break;
if (err == EXT_REPEAT)
continue;
else if (err == EXT_BREAK) {
err = 0;
break;
}
if (ext_depth(inode) != depth) {
/* depth was changed. we have to realloc path */
kfree(path);
path = NULL;
}
block = cbex.ec_block + cbex.ec_len;
}
if (path) {
ext4_ext_drop_refs(path);
kfree(path);
}
return err;
}
static void
ext4_ext_put_in_cache(struct inode *inode, ext4_lblk_t block,
__u32 len, ext4_fsblk_t start, int type)
......@@ -2142,7 +2251,7 @@ void ext4_ext_init(struct super_block *sb)
*/
if (test_opt(sb, EXTENTS)) {
printk("EXT4-fs: file extents enabled");
printk(KERN_INFO "EXT4-fs: file extents enabled");
#ifdef AGGRESSIVE_TEST
printk(", aggressive tests");
#endif
......@@ -2696,11 +2805,8 @@ int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
goto out2;
}
/*
* Okay, we need to do block allocation. Lazily initialize the block
* allocation info here if necessary.
* Okay, we need to do block allocation.
*/
if (S_ISREG(inode->i_mode) && (!EXT4_I(inode)->i_block_alloc_info))
ext4_init_block_alloc_info(inode);
/* find neighbour allocated blocks */
ar.lleft = iblock;
......@@ -2760,7 +2866,7 @@ int ext4_ext_get_blocks(handle_t *handle, struct inode *inode,
/* free data blocks we just allocated */
/* not a good idea to call discard here directly,
* but otherwise we'd need to call it every free() */
ext4_mb_discard_inode_preallocations(inode);
ext4_discard_preallocations(inode);
ext4_free_blocks(handle, inode, ext_pblock(&newex),
ext4_ext_get_actual_len(&newex), 0);
goto out2;
......@@ -2824,7 +2930,7 @@ void ext4_ext_truncate(struct inode *inode)
down_write(&EXT4_I(inode)->i_data_sem);
ext4_ext_invalidate_cache(inode);
ext4_discard_reservation(inode);
ext4_discard_preallocations(inode);
/*
* TODO: optimization is possible here.
......@@ -2877,10 +2983,11 @@ static void ext4_falloc_update_inode(struct inode *inode,
* Update only when preallocation was requested beyond
* the file size.
*/
if (!(mode & FALLOC_FL_KEEP_SIZE) &&
new_size > i_size_read(inode)) {
if (!(mode & FALLOC_FL_KEEP_SIZE)) {
if (new_size > i_size_read(inode))
i_size_write(inode, new_size);
EXT4_I(inode)->i_disksize = new_size;
if (new_size > EXT4_I(inode)->i_disksize)
ext4_update_i_disksize(inode, new_size);
}
}
......@@ -2972,3 +3079,143 @@ long ext4_fallocate(struct inode *inode, int mode, loff_t offset, loff_t len)
mutex_unlock(&inode->i_mutex);
return ret > 0 ? ret2 : ret;
}
/*
* Callback function called for each extent to gather FIEMAP information.
*/
int ext4_ext_fiemap_cb(struct inode *inode, struct ext4_ext_path *path,
struct ext4_ext_cache *newex, struct ext4_extent *ex,
void *data)
{
struct fiemap_extent_info *fieinfo = data;
unsigned long blksize_bits = inode->i_sb->s_blocksize_bits;
__u64 logical;
__u64 physical;
__u64 length;
__u32 flags = 0;
int error;
logical = (__u64)newex->ec_block << blksize_bits;
if (newex->ec_type == EXT4_EXT_CACHE_GAP) {
pgoff_t offset;
struct page *page;
struct buffer_head *bh = NULL;
offset = logical >> PAGE_SHIFT;
page = find_get_page(inode->i_mapping, offset);
if (!page || !page_has_buffers(page))
return EXT_CONTINUE;
bh = page_buffers(page);
if (!bh)
return EXT_CONTINUE;
if (buffer_delay(bh)) {
flags |= FIEMAP_EXTENT_DELALLOC;
page_cache_release(page);
} else {
page_cache_release(page);
return EXT_CONTINUE;
}
}
physical = (__u64)newex->ec_start << blksize_bits;
length = (__u64)newex->ec_len << blksize_bits;
if (ex && ext4_ext_is_uninitialized(ex))
flags |= FIEMAP_EXTENT_UNWRITTEN;
/*
* If this extent reaches EXT_MAX_BLOCK, it must be last.
*
* Or if ext4_ext_next_allocated_block is EXT_MAX_BLOCK,
* this also indicates no more allocated blocks.
*
* XXX this might miss a single-block extent at EXT_MAX_BLOCK
*/
if (logical + length - 1 == EXT_MAX_BLOCK ||
ext4_ext_next_allocated_block(path) == EXT_MAX_BLOCK)
flags |= FIEMAP_EXTENT_LAST;
error = fiemap_fill_next_extent(fieinfo, logical, physical,
length, flags);
if (error < 0)
return error;
if (error == 1)
return EXT_BREAK;
return EXT_CONTINUE;
}
/* fiemap flags we can handle specified here */
#define EXT4_FIEMAP_FLAGS (FIEMAP_FLAG_SYNC|FIEMAP_FLAG_XATTR)
int ext4_xattr_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo)
{
__u64 physical = 0;
__u64 length;
__u32 flags = FIEMAP_EXTENT_LAST;
int blockbits = inode->i_sb->s_blocksize_bits;
int error = 0;
/* in-inode? */
if (EXT4_I(inode)->i_state & EXT4_STATE_XATTR) {
struct ext4_iloc iloc;
int offset; /* offset of xattr in inode */
error = ext4_get_inode_loc(inode, &iloc);
if (error)
return error;
physical = iloc.bh->b_blocknr << blockbits;
offset = EXT4_GOOD_OLD_INODE_SIZE +
EXT4_I(inode)->i_extra_isize;
physical += offset;
length = EXT4_SB(inode->i_sb)->s_inode_size - offset;
flags |= FIEMAP_EXTENT_DATA_INLINE;
} else { /* external block */
physical = EXT4_I(inode)->i_file_acl << blockbits;
length = inode->i_sb->s_blocksize;
}
if (physical)
error = fiemap_fill_next_extent(fieinfo, 0, physical,
length, flags);
return (error < 0 ? error : 0);
}
int ext4_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
__u64 start, __u64 len)
{
ext4_lblk_t start_blk;
ext4_lblk_t len_blks;
int error = 0;
/* fallback to generic here if not in extents fmt */
if (!(EXT4_I(inode)->i_flags & EXT4_EXTENTS_FL))
return generic_block_fiemap(inode, fieinfo, start, len,
ext4_get_block);
if (fiemap_check_flags(fieinfo, EXT4_FIEMAP_FLAGS))
return -EBADR;
if (fieinfo->fi_flags & FIEMAP_FLAG_XATTR) {
error = ext4_xattr_fiemap(inode, fieinfo);
} else {
start_blk = start >> inode->i_sb->s_blocksize_bits;
len_blks = len >> inode->i_sb->s_blocksize_bits;
/*
* Walk the extent tree gathering extent information.
* ext4_ext_fiemap_cb will push extents back to user.
*/
down_write(&EXT4_I(inode)->i_data_sem);
error = ext4_ext_walk_space(inode, start_blk, len_blks,
ext4_ext_fiemap_cb, fieinfo);
up_write(&EXT4_I(inode)->i_data_sem);
}
return error;
}
......@@ -31,14 +31,14 @@
* from ext4_file_open: open gets called at every open, but release
* gets called only when /all/ the files are closed.
*/
static int ext4_release_file (struct inode * inode, struct file * filp)
static int ext4_release_file(struct inode *inode, struct file *filp)
{
/* if we are the last writer on the inode, drop the block reservation */
if ((filp->f_mode & FMODE_WRITE) &&
(atomic_read(&inode->i_writecount) == 1))
{
down_write(&EXT4_I(inode)->i_data_sem);
ext4_discard_reservation(inode);
ext4_discard_preallocations(inode);
up_write(&EXT4_I(inode)->i_data_sem);
}
if (is_dx(inode) && filp->private_data)
......@@ -140,6 +140,9 @@ static int ext4_file_mmap(struct file *file, struct vm_area_struct *vma)
return 0;
}
extern int ext4_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
__u64 start, __u64 len);
const struct file_operations ext4_file_operations = {
.llseek = generic_file_llseek,
.read = do_sync_read,
......@@ -162,7 +165,7 @@ const struct inode_operations ext4_file_inode_operations = {
.truncate = ext4_truncate,
.setattr = ext4_setattr,
.getattr = ext4_getattr,
#ifdef CONFIG_EXT4DEV_FS_XATTR
#ifdef CONFIG_EXT4_FS_XATTR
.setxattr = generic_setxattr,
.getxattr = generic_getxattr,
.listxattr = ext4_listxattr,
......@@ -170,5 +173,6 @@ const struct inode_operations ext4_file_inode_operations = {
#endif
.permission = ext4_permission,
.fallocate = ext4_fallocate,
.fiemap = ext4_fiemap,
};
......@@ -28,6 +28,7 @@
#include <linux/writeback.h>
#include <linux/jbd2.h>
#include <linux/blkdev.h>
#include <linux/marker.h>
#include "ext4.h"
#include "ext4_jbd2.h"
......@@ -43,7 +44,7 @@
* inode to disk.
*/
int ext4_sync_file(struct file * file, struct dentry *dentry, int datasync)
int ext4_sync_file(struct file *file, struct dentry *dentry, int datasync)
{
struct inode *inode = dentry->d_inode;
journal_t *journal = EXT4_SB(inode->i_sb)->s_journal;
......@@ -51,6 +52,10 @@ int ext4_sync_file(struct file * file, struct dentry *dentry, int datasync)
J_ASSERT(ext4_journal_current_handle() == NULL);
trace_mark(ext4_sync_file, "dev %s datasync %d ino %ld parent %ld",
inode->i_sb->s_id, datasync, inode->i_ino,
dentry->d_parent->d_inode->i_ino);
/*
* data=writeback:
* The caller's filemap_fdatawrite()/wait will sync the data.
......
......@@ -27,7 +27,7 @@ static void TEA_transform(__u32 buf[4], __u32 const in[])
sum += DELTA;
b0 += ((b1 << 4)+a) ^ (b1+sum) ^ ((b1 >> 5)+b);
b1 += ((b0 << 4)+c) ^ (b0+sum) ^ ((b0 >> 5)+d);
} while(--n);
} while (--n);
buf[0] += b0;
buf[1] += b1;
......@@ -35,7 +35,7 @@ static void TEA_transform(__u32 buf[4], __u32 const in[])
/* The old legacy hash */
static __u32 dx_hack_hash (const char *name, int len)
static __u32 dx_hack_hash(const char *name, int len)
{
__u32 hash0 = 0x12a3fe2d, hash1 = 0x37abe8f9;
while (len--) {
......@@ -59,7 +59,7 @@ static void str2hashbuf(const char *msg, int len, __u32 *buf, int num)
val = pad;
if (len > num*4)
len = num * 4;
for (i=0; i < len; i++) {
for (i = 0; i < len; i++) {
if ((i % 4) == 0)
val = pad;
val = msg[i] + (val << 8);
......@@ -104,7 +104,7 @@ int ext4fs_dirhash(const char *name, int len, struct dx_hash_info *hinfo)
/* Check to see if the seed is all zero's */
if (hinfo->seed) {
for (i=0; i < 4; i++) {
for (i = 0; i < 4; i++) {
if (hinfo->seed[i])
break;
}
......
......@@ -115,9 +115,11 @@ ext4_read_inode_bitmap(struct super_block *sb, ext4_group_t block_group)
block_group, bitmap_blk);
return NULL;
}
if (bh_uptodate_or_lock(bh))
if (buffer_uptodate(bh) &&
!(desc->bg_flags & cpu_to_le16(EXT4_BG_INODE_UNINIT)))
return bh;
lock_buffer(bh);
spin_lock(sb_bgl_lock(EXT4_SB(sb), block_group));
if (desc->bg_flags & cpu_to_le16(EXT4_BG_INODE_UNINIT)) {
ext4_init_inode_bitmap(sb, bh, block_group, desc);
......@@ -154,39 +156,40 @@ ext4_read_inode_bitmap(struct super_block *sb, ext4_group_t block_group)
* though), and then we'd have two inodes sharing the
* same inode number and space on the harddisk.
*/
void ext4_free_inode (handle_t *handle, struct inode * inode)
void ext4_free_inode(handle_t *handle, struct inode *inode)
{
struct super_block * sb = inode->i_sb;
struct super_block *sb = inode->i_sb;
int is_directory;
unsigned long ino;
struct buffer_head *bitmap_bh = NULL;
struct buffer_head *bh2;
ext4_group_t block_group;
unsigned long bit;
struct ext4_group_desc * gdp;
struct ext4_super_block * es;
struct ext4_group_desc *gdp;
struct ext4_super_block *es;
struct ext4_sb_info *sbi;
int fatal = 0, err;
ext4_group_t flex_group;
if (atomic_read(&inode->i_count) > 1) {
printk ("ext4_free_inode: inode has count=%d\n",
printk(KERN_ERR "ext4_free_inode: inode has count=%d\n",
atomic_read(&inode->i_count));
return;
}
if (inode->i_nlink) {
printk ("ext4_free_inode: inode has nlink=%d\n",
printk(KERN_ERR "ext4_free_inode: inode has nlink=%d\n",
inode->i_nlink);
return;
}
if (!sb) {
printk("ext4_free_inode: inode on nonexistent device\n");
printk(KERN_ERR "ext4_free_inode: inode on "
"nonexistent device\n");
return;
}
sbi = EXT4_SB(sb);
ino = inode->i_ino;
ext4_debug ("freeing inode %lu\n", ino);
ext4_debug("freeing inode %lu\n", ino);
/*
* Note: we must free any quota before locking the superblock,
......@@ -200,11 +203,11 @@ void ext4_free_inode (handle_t *handle, struct inode * inode)
is_directory = S_ISDIR(inode->i_mode);
/* Do this BEFORE marking the inode not in use or returning an error */
clear_inode (inode);
clear_inode(inode);
es = EXT4_SB(sb)->s_es;
if (ino < EXT4_FIRST_INO(sb) || ino > le32_to_cpu(es->s_inodes_count)) {
ext4_error (sb, "ext4_free_inode",
ext4_error(sb, "ext4_free_inode",
"reserved or nonexistent inode %lu", ino);
goto error_return;
}
......@@ -222,10 +225,10 @@ void ext4_free_inode (handle_t *handle, struct inode * inode)
/* Ok, now we can actually update the inode bitmaps.. */
if (!ext4_clear_bit_atomic(sb_bgl_lock(sbi, block_group),
bit, bitmap_bh->b_data))
ext4_error (sb, "ext4_free_inode",
ext4_error(sb, "ext4_free_inode",
"bit already cleared for inode %lu", ino);
else {
gdp = ext4_get_group_desc (sb, block_group, &bh2);
gdp = ext4_get_group_desc(sb, block_group, &bh2);
BUFFER_TRACE(bh2, "get_write_access");
fatal = ext4_journal_get_write_access(handle, bh2);
......@@ -287,7 +290,7 @@ static int find_group_dir(struct super_block *sb, struct inode *parent,
avefreei = freei / ngroups;
for (group = 0; group < ngroups; group++) {
desc = ext4_get_group_desc (sb, group, NULL);
desc = ext4_get_group_desc(sb, group, NULL);
if (!desc || !desc->bg_free_inodes_count)
continue;
if (le16_to_cpu(desc->bg_free_inodes_count) < avefreei)
......@@ -576,16 +579,16 @@ static int find_group_other(struct super_block *sb, struct inode *parent,
* For other inodes, search forward from the parent directory's block
* group to find a free inode.
*/
struct inode *ext4_new_inode(handle_t *handle, struct inode * dir, int mode)
struct inode *ext4_new_inode(handle_t *handle, struct inode *dir, int mode)
{
struct super_block *sb;
struct buffer_head *bitmap_bh = NULL;
struct buffer_head *bh2;
ext4_group_t group = 0;
unsigned long ino = 0;
struct inode * inode;
struct ext4_group_desc * gdp = NULL;
struct ext4_super_block * es;
struct inode *inode;
struct ext4_group_desc *gdp = NULL;
struct ext4_super_block *es;
struct ext4_inode_info *ei;
struct ext4_sb_info *sbi;
int ret2, err = 0;
......@@ -613,7 +616,7 @@ struct inode *ext4_new_inode(handle_t *handle, struct inode * dir, int mode)
}
if (S_ISDIR(mode)) {
if (test_opt (sb, OLDALLOC))
if (test_opt(sb, OLDALLOC))
ret2 = find_group_dir(sb, dir, &group);
else
ret2 = find_group_orlov(sb, dir, &group);
......@@ -783,7 +786,7 @@ struct inode *ext4_new_inode(handle_t *handle, struct inode * dir, int mode)
}
inode->i_uid = current->fsuid;
if (test_opt (sb, GRPID))
if (test_opt(sb, GRPID))
inode->i_gid = dir->i_gid;
else if (dir->i_mode & S_ISGID) {
inode->i_gid = dir->i_gid;
......@@ -816,7 +819,6 @@ struct inode *ext4_new_inode(handle_t *handle, struct inode * dir, int mode)
ei->i_flags &= ~EXT4_DIRSYNC_FL;
ei->i_file_acl = 0;
ei->i_dtime = 0;
ei->i_block_alloc_info = NULL;
ei->i_block_group = group;
ext4_set_inode_flags(inode);
......@@ -832,7 +834,7 @@ struct inode *ext4_new_inode(handle_t *handle, struct inode * dir, int mode)
ei->i_extra_isize = EXT4_SB(sb)->s_want_extra_isize;
ret = inode;
if(DQUOT_ALLOC_INODE(inode)) {
if (DQUOT_ALLOC_INODE(inode)) {
err = -EDQUOT;
goto fail_drop;
}
......@@ -841,7 +843,7 @@ struct inode *ext4_new_inode(handle_t *handle, struct inode * dir, int mode)
if (err)
goto fail_free_drop;
err = ext4_init_security(handle,inode, dir);
err = ext4_init_security(handle, inode, dir);
if (err)
goto fail_free_drop;
......@@ -959,7 +961,7 @@ struct inode *ext4_orphan_get(struct super_block *sb, unsigned long ino)
return ERR_PTR(err);
}
unsigned long ext4_count_free_inodes (struct super_block * sb)
unsigned long ext4_count_free_inodes(struct super_block *sb)
{
unsigned long desc_count;
struct ext4_group_desc *gdp;
......@@ -974,7 +976,7 @@ unsigned long ext4_count_free_inodes (struct super_block * sb)
bitmap_count = 0;
gdp = NULL;
for (i = 0; i < EXT4_SB(sb)->s_groups_count; i++) {
gdp = ext4_get_group_desc (sb, i, NULL);
gdp = ext4_get_group_desc(sb, i, NULL);
if (!gdp)
continue;
desc_count += le16_to_cpu(gdp->bg_free_inodes_count);
......@@ -989,13 +991,14 @@ unsigned long ext4_count_free_inodes (struct super_block * sb)
bitmap_count += x;
}
brelse(bitmap_bh);
printk("ext4_count_free_inodes: stored = %u, computed = %lu, %lu\n",
printk(KERN_DEBUG "ext4_count_free_inodes: "
"stored = %u, computed = %lu, %lu\n",
le32_to_cpu(es->s_free_inodes_count), desc_count, bitmap_count);
return desc_count;
#else
desc_count = 0;
for (i = 0; i < EXT4_SB(sb)->s_groups_count; i++) {
gdp = ext4_get_group_desc (sb, i, NULL);
gdp = ext4_get_group_desc(sb, i, NULL);
if (!gdp)
continue;
desc_count += le16_to_cpu(gdp->bg_free_inodes_count);
......@@ -1006,13 +1009,13 @@ unsigned long ext4_count_free_inodes (struct super_block * sb)
}
/* Called at mount-time, super-block is locked */
unsigned long ext4_count_dirs (struct super_block * sb)
unsigned long ext4_count_dirs(struct super_block * sb)
{
unsigned long count = 0;
ext4_group_t i;
for (i = 0; i < EXT4_SB(sb)->s_groups_count; i++) {
struct ext4_group_desc *gdp = ext4_get_group_desc (sb, i, NULL);
struct ext4_group_desc *gdp = ext4_get_group_desc(sb, i, NULL);
if (!gdp)
continue;
count += le16_to_cpu(gdp->bg_used_dirs_count);
......
......@@ -23,9 +23,8 @@ long ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
struct inode *inode = filp->f_dentry->d_inode;
struct ext4_inode_info *ei = EXT4_I(inode);
unsigned int flags;
unsigned short rsv_window_size;
ext4_debug ("cmd = %u, arg = %lu\n", cmd, arg);
ext4_debug("cmd = %u, arg = %lu\n", cmd, arg);
switch (cmd) {
case EXT4_IOC_GETFLAGS:
......@@ -34,7 +33,7 @@ long ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
return put_user(flags, (int __user *) arg);
case EXT4_IOC_SETFLAGS: {
handle_t *handle = NULL;
int err;
int err, migrate = 0;
struct ext4_iloc iloc;
unsigned int oldflags;
unsigned int jflag;
......@@ -82,6 +81,17 @@ long ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
if (!capable(CAP_SYS_RESOURCE))
goto flags_out;
}
if (oldflags & EXT4_EXTENTS_FL) {
/* We don't support clearing extent flags */
if (!(flags & EXT4_EXTENTS_FL)) {
err = -EOPNOTSUPP;
goto flags_out;
}
} else if (flags & EXT4_EXTENTS_FL) {
/* migrate the file */
migrate = 1;
flags &= ~EXT4_EXTENTS_FL;
}
handle = ext4_journal_start(inode, 1);
if (IS_ERR(handle)) {
......@@ -109,6 +119,10 @@ long ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
if ((jflag ^ oldflags) & (EXT4_JOURNAL_DATA_FL))
err = ext4_change_inode_journal_flag(inode, jflag);
if (err)
goto flags_out;
if (migrate)
err = ext4_ext_migrate(inode);
flags_out:
mutex_unlock(&inode->i_mutex);
mnt_drop_write(filp->f_path.mnt);
......@@ -175,49 +189,6 @@ long ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
return ret;
}
#endif
case EXT4_IOC_GETRSVSZ:
if (test_opt(inode->i_sb, RESERVATION)
&& S_ISREG(inode->i_mode)
&& ei->i_block_alloc_info) {
rsv_window_size = ei->i_block_alloc_info->rsv_window_node.rsv_goal_size;
return put_user(rsv_window_size, (int __user *)arg);
}
return -ENOTTY;
case EXT4_IOC_SETRSVSZ: {
int err;
if (!test_opt(inode->i_sb, RESERVATION) ||!S_ISREG(inode->i_mode))
return -ENOTTY;
if (!is_owner_or_cap(inode))
return -EACCES;
if (get_user(rsv_window_size, (int __user *)arg))
return -EFAULT;
err = mnt_want_write(filp->f_path.mnt);
if (err)
return err;
if (rsv_window_size > EXT4_MAX_RESERVE_BLOCKS)
rsv_window_size = EXT4_MAX_RESERVE_BLOCKS;
/*
* need to allocate reservation structure for this inode
* before set the window size
*/
down_write(&ei->i_data_sem);
if (!ei->i_block_alloc_info)
ext4_init_block_alloc_info(inode);
if (ei->i_block_alloc_info){
struct ext4_reserve_window_node *rsv = &ei->i_block_alloc_info->rsv_window_node;
rsv->rsv_goal_size = rsv_window_size;
}
up_write(&ei->i_data_sem);
mnt_drop_write(filp->f_path.mnt);
return 0;
}
case EXT4_IOC_GROUP_EXTEND: {
ext4_fsblk_t n_blocks_count;
struct super_block *sb = inode->i_sb;
......@@ -267,7 +238,26 @@ long ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
}
case EXT4_IOC_MIGRATE:
return ext4_ext_migrate(inode, filp, cmd, arg);
{
int err;
if (!is_owner_or_cap(inode))
return -EACCES;
err = mnt_want_write(filp->f_path.mnt);
if (err)
return err;
/*
* inode_mutex prevents write and truncate on the file.
* Reads still go through. We take i_data_sem in
* ext4_ext_swap_inode_data before we switch the
* inode format to prevent reads.
*/
mutex_lock(&(inode->i_mutex));
err = ext4_ext_migrate(inode);
mutex_unlock(&(inode->i_mutex));
mnt_drop_write(filp->f_path.mnt);
return err;
}
default:
return -ENOTTY;
......
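The EXT4_IOC_MIGRATE case above now takes only the inode: the permission and mnt_want_write() checks, plus the i_mutex locking, live in ext4_ioctl(), and ext4_ext_migrate() itself was reduced to a single-argument helper (see the migrate.c hunk below). A minimal userspace sketch of driving this ioctl follows; it is not part of this commit, and the EXT4_IOC_MIGRATE value is an assumption believed to match fs/ext4/ext4.h of this era, so verify it against your headers.

/* Hypothetical sketch: convert an indirect-block (ext3-style) inode
 * to extent format via EXT4_IOC_MIGRATE. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>

#ifndef EXT4_IOC_MIGRATE
#define EXT4_IOC_MIGRATE	_IO('f', 9)	/* assumed value; check ext4 headers */
#endif

int main(int argc, char **argv)
{
	int fd, ret;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	/* The kernel path requires the caller to be the owner or have
	 * CAP_FOWNER, and takes mnt_want_write() on the mount. */
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	ret = ioctl(fd, EXT4_IOC_MIGRATE);
	if (ret)
		perror("EXT4_IOC_MIGRATE");
	close(fd);
	return ret ? 1 : 0;
}

The EXT4_IOC_SETFLAGS path above reaches the same ext4_ext_migrate() call when EXT4_EXTENTS_FL is newly requested on a non-extents inode, so a sufficiently new flags-based tool (chattr-style) can trigger the migration as well.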
This diff is collapsed.
......@@ -257,7 +257,6 @@ static void ext4_mb_store_history(struct ext4_allocation_context *ac);
#define in_range(b, first, len) ((b) >= (first) && (b) <= (first) + (len) - 1)
static struct proc_dir_entry *proc_root_ext4;
struct buffer_head *read_block_bitmap(struct super_block *, ext4_group_t);
static void ext4_mb_generate_from_pa(struct super_block *sb, void *bitmap,
......
......@@ -447,8 +447,7 @@ static int free_ext_block(handle_t *handle, struct inode *inode)
}
int ext4_ext_migrate(struct inode *inode, struct file *filp,
unsigned int cmd, unsigned long arg)
int ext4_ext_migrate(struct inode *inode)
{
handle_t *handle;
int retval = 0, i;
......@@ -515,12 +514,6 @@ int ext4_ext_migrate(struct inode *inode, struct file *filp,
* transaction that created the inode. Later, as and
* when we add extents, we extend the journal
*/
/*
* inode_mutex prevent write and truncate on the file. Read still goes
* through. We take i_data_sem in ext4_ext_swap_inode_data before we
* switch the inode format to prevent read.
*/
mutex_lock(&(inode->i_mutex));
/*
* Even though we take i_mutex we can still cause block allocation
* via mmap write to holes. If we have allocated new blocks we fail
......@@ -623,7 +616,6 @@ int ext4_ext_migrate(struct inode *inode, struct file *filp,
tmp_inode->i_nlink = 0;
ext4_journal_stop(handle);
mutex_unlock(&(inode->i_mutex));
if (tmp_inode)
iput(tmp_inode);
......
This diff is collapsed.
......@@ -870,11 +870,10 @@ int ext4_group_add(struct super_block *sb, struct ext4_new_group_data *input)
* We can allocate memory for mb_alloc based on the new group
* descriptor
*/
if (test_opt(sb, MBALLOC)) {
err = ext4_mb_add_more_groupinfo(sb, input->group, gdp);
if (err)
goto exit_journal;
}
/*
* Make the new blocks and inodes valid next. We do this before
* increasing the group count so that once the group is enabled,
......@@ -929,6 +928,15 @@ int ext4_group_add(struct super_block *sb, struct ext4_new_group_data *input)
percpu_counter_add(&sbi->s_freeinodes_counter,
EXT4_INODES_PER_GROUP(sb));
if (EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_FLEX_BG)) {
ext4_group_t flex_group;
flex_group = ext4_flex_group(sbi, input->group);
sbi->s_flex_groups[flex_group].free_blocks +=
input->free_blocks_count;
sbi->s_flex_groups[flex_group].free_inodes +=
EXT4_INODES_PER_GROUP(sb);
}
ext4_journal_dirty_metadata(handle, sbi->s_sbh);
sb->s_dirt = 1;
......@@ -964,7 +972,7 @@ int ext4_group_extend(struct super_block *sb, struct ext4_super_block *es,
ext4_group_t o_groups_count;
ext4_grpblk_t last;
ext4_grpblk_t add;
struct buffer_head * bh;
struct buffer_head *bh;
handle_t *handle;
int err;
unsigned long freed_blocks;
......@@ -1077,8 +1085,15 @@ int ext4_group_extend(struct super_block *sb, struct ext4_super_block *es,
/*
* Mark mballoc pages as not up to date so that they will be updated
* next time they are loaded by ext4_mb_load_buddy.
*
* XXX Bad, Bad, BAD!!! We should not be overloading the
* Uptodate flag, particularly on the bitmap bh, as a way of
* hinting to ext4_mb_load_buddy() that it needs to be
* reloaded. A user could take an LVM snapshot, then do an
* on-line fsck, and clear the uptodate flag, and this would
* not be a bug in userspace, but a bug in the kernel. FIXME!!!
*/
if (test_opt(sb, MBALLOC)) {
{
struct ext4_sb_info *sbi = EXT4_SB(sb);
struct inode *inode = sbi->s_buddy_cache;
int blocks_per_page;
......
This diff is collapsed.
......@@ -23,10 +23,10 @@
#include "ext4.h"
#include "xattr.h"
static void * ext4_follow_link(struct dentry *dentry, struct nameidata *nd)
static void *ext4_follow_link(struct dentry *dentry, struct nameidata *nd)
{
struct ext4_inode_info *ei = EXT4_I(dentry->d_inode);
nd_set_link(nd, (char*)ei->i_data);
nd_set_link(nd, (char *) ei->i_data);
return NULL;
}
......@@ -34,7 +34,7 @@ const struct inode_operations ext4_symlink_inode_operations = {
.readlink = generic_readlink,
.follow_link = page_follow_link_light,
.put_link = page_put_link,
#ifdef CONFIG_EXT4DEV_FS_XATTR
#ifdef CONFIG_EXT4_FS_XATTR
.setxattr = generic_setxattr,
.getxattr = generic_getxattr,
.listxattr = ext4_listxattr,
......@@ -45,7 +45,7 @@ const struct inode_operations ext4_symlink_inode_operations = {
const struct inode_operations ext4_fast_symlink_inode_operations = {
.readlink = generic_readlink,
.follow_link = ext4_follow_link,
#ifdef CONFIG_EXT4DEV_FS_XATTR
#ifdef CONFIG_EXT4_FS_XATTR
.setxattr = generic_setxattr,
.getxattr = generic_getxattr,
.listxattr = ext4_listxattr,
......
......@@ -99,12 +99,12 @@ static struct mb_cache *ext4_xattr_cache;
static struct xattr_handler *ext4_xattr_handler_map[] = {
[EXT4_XATTR_INDEX_USER] = &ext4_xattr_user_handler,
#ifdef CONFIG_EXT4DEV_FS_POSIX_ACL
#ifdef CONFIG_EXT4_FS_POSIX_ACL
[EXT4_XATTR_INDEX_POSIX_ACL_ACCESS] = &ext4_xattr_acl_access_handler,
[EXT4_XATTR_INDEX_POSIX_ACL_DEFAULT] = &ext4_xattr_acl_default_handler,
#endif
[EXT4_XATTR_INDEX_TRUSTED] = &ext4_xattr_trusted_handler,
#ifdef CONFIG_EXT4DEV_FS_SECURITY
#ifdef CONFIG_EXT4_FS_SECURITY
[EXT4_XATTR_INDEX_SECURITY] = &ext4_xattr_security_handler,
#endif
};
......@@ -112,11 +112,11 @@ static struct xattr_handler *ext4_xattr_handler_map[] = {
struct xattr_handler *ext4_xattr_handlers[] = {
&ext4_xattr_user_handler,
&ext4_xattr_trusted_handler,
#ifdef CONFIG_EXT4DEV_FS_POSIX_ACL
#ifdef CONFIG_EXT4_FS_POSIX_ACL
&ext4_xattr_acl_access_handler,
&ext4_xattr_acl_default_handler,
#endif
#ifdef CONFIG_EXT4DEV_FS_SECURITY
#ifdef CONFIG_EXT4_FS_SECURITY
&ext4_xattr_security_handler,
#endif
NULL
......@@ -959,6 +959,7 @@ ext4_xattr_set_handle(handle_t *handle, struct inode *inode, int name_index,
struct ext4_xattr_block_find bs = {
.s = { .not_found = -ENODATA, },
};
unsigned long no_expand;
int error;
if (!name)
......@@ -966,6 +967,9 @@ ext4_xattr_set_handle(handle_t *handle, struct inode *inode, int name_index,
if (strlen(name) > 255)
return -ERANGE;
down_write(&EXT4_I(inode)->xattr_sem);
no_expand = EXT4_I(inode)->i_state & EXT4_STATE_NO_EXPAND;
EXT4_I(inode)->i_state |= EXT4_STATE_NO_EXPAND;
error = ext4_get_inode_loc(inode, &is.iloc);
if (error)
goto cleanup;
......@@ -1042,6 +1046,8 @@ ext4_xattr_set_handle(handle_t *handle, struct inode *inode, int name_index,
cleanup:
brelse(is.iloc.bh);
brelse(bs.bh);
if (no_expand == 0)
EXT4_I(inode)->i_state &= ~EXT4_STATE_NO_EXPAND;
up_write(&EXT4_I(inode)->xattr_sem);
return error;
}
......
......@@ -51,8 +51,8 @@ struct ext4_xattr_entry {
(((name_len) + EXT4_XATTR_ROUND + \
sizeof(struct ext4_xattr_entry)) & ~EXT4_XATTR_ROUND)
#define EXT4_XATTR_NEXT(entry) \
( (struct ext4_xattr_entry *)( \
(char *)(entry) + EXT4_XATTR_LEN((entry)->e_name_len)) )
((struct ext4_xattr_entry *)( \
(char *)(entry) + EXT4_XATTR_LEN((entry)->e_name_len)))
#define EXT4_XATTR_SIZE(size) \
(((size) + EXT4_XATTR_ROUND) & ~EXT4_XATTR_ROUND)
......@@ -63,7 +63,7 @@ struct ext4_xattr_entry {
EXT4_I(inode)->i_extra_isize))
#define IFIRST(hdr) ((struct ext4_xattr_entry *)((hdr)+1))
# ifdef CONFIG_EXT4DEV_FS_XATTR
# ifdef CONFIG_EXT4_FS_XATTR
extern struct xattr_handler ext4_xattr_user_handler;
extern struct xattr_handler ext4_xattr_trusted_handler;
......@@ -88,7 +88,7 @@ extern void exit_ext4_xattr(void);
extern struct xattr_handler *ext4_xattr_handlers[];
# else /* CONFIG_EXT4DEV_FS_XATTR */
# else /* CONFIG_EXT4_FS_XATTR */
static inline int
ext4_xattr_get(struct inode *inode, int name_index, const char *name,
......@@ -141,9 +141,9 @@ ext4_expand_extra_isize_ea(struct inode *inode, int new_extra_isize,
#define ext4_xattr_handlers NULL
# endif /* CONFIG_EXT4DEV_FS_XATTR */
# endif /* CONFIG_EXT4_FS_XATTR */
#ifdef CONFIG_EXT4DEV_FS_SECURITY
#ifdef CONFIG_EXT4_FS_SECURITY
extern int ext4_init_security(handle_t *handle, struct inode *inode,
struct inode *dir);
#else
......
This diff is collapsed.
......@@ -20,6 +20,7 @@
#include <linux/time.h>
#include <linux/fs.h>
#include <linux/jbd2.h>
#include <linux/marker.h>
#include <linux/errno.h>
#include <linux/slab.h>
......@@ -126,14 +127,29 @@ void __jbd2_log_wait_for_space(journal_t *journal)
/*
* Test again, another process may have checkpointed while we
* were waiting for the checkpoint lock
* were waiting for the checkpoint lock. If there are no
* outstanding transactions there is nothing to checkpoint and
* we can't make progress. Abort the journal in this case.
*/
spin_lock(&journal->j_state_lock);
spin_lock(&journal->j_list_lock);
nblocks = jbd_space_needed(journal);
if (__jbd2_log_space_left(journal) < nblocks) {
int chkpt = journal->j_checkpoint_transactions != NULL;
spin_unlock(&journal->j_list_lock);
spin_unlock(&journal->j_state_lock);
if (chkpt) {
jbd2_log_do_checkpoint(journal);
} else {
printk(KERN_ERR "%s: no transactions\n",
__func__);
jbd2_journal_abort(journal, 0);
}
spin_lock(&journal->j_state_lock);
} else {
spin_unlock(&journal->j_list_lock);
}
mutex_unlock(&journal->j_checkpoint_mutex);
}
......@@ -313,6 +329,8 @@ int jbd2_log_do_checkpoint(journal_t *journal)
* journal straight away.
*/
result = jbd2_cleanup_journal_tail(journal);
trace_mark(jbd2_checkpoint, "dev %s need_checkpoint %d",
journal->j_devname, result);
jbd_debug(1, "cleanup_journal_tail returned %d\n", result);
if (result <= 0)
return result;
......
......@@ -16,6 +16,7 @@
#include <linux/time.h>
#include <linux/fs.h>
#include <linux/jbd2.h>
#include <linux/marker.h>
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/mm.h>
......@@ -126,8 +127,7 @@ static int journal_submit_commit_record(journal_t *journal,
JBUFFER_TRACE(descriptor, "submit commit block");
lock_buffer(bh);
get_bh(bh);
set_buffer_dirty(bh);
clear_buffer_dirty(bh);
set_buffer_uptodate(bh);
bh->b_end_io = journal_end_buffer_io_sync;
......@@ -147,12 +147,9 @@ static int journal_submit_commit_record(journal_t *journal,
* to remember if we sent a barrier request
*/
if (ret == -EOPNOTSUPP && barrier_done) {
char b[BDEVNAME_SIZE];
printk(KERN_WARNING
"JBD: barrier-based sync failed on %s - "
"disabling barriers\n",
bdevname(journal->j_dev, b));
"disabling barriers\n", journal->j_devname);
spin_lock(&journal->j_state_lock);
journal->j_flags &= ~JBD2_BARRIER;
spin_unlock(&journal->j_state_lock);
......@@ -160,7 +157,7 @@ static int journal_submit_commit_record(journal_t *journal,
/* And try again, without the barrier */
lock_buffer(bh);
set_buffer_uptodate(bh);
set_buffer_dirty(bh);
clear_buffer_dirty(bh);
ret = submit_bh(WRITE, bh);
}
*cbh = bh;
......@@ -371,6 +368,8 @@ void jbd2_journal_commit_transaction(journal_t *journal)
commit_transaction = journal->j_running_transaction;
J_ASSERT(commit_transaction->t_state == T_RUNNING);
trace_mark(jbd2_start_commit, "dev %s transaction %d",
journal->j_devname, commit_transaction->t_tid);
jbd_debug(1, "JBD: starting commit of transaction %d\n",
commit_transaction->t_tid);
......@@ -681,11 +680,9 @@ void jbd2_journal_commit_transaction(journal_t *journal)
*/
err = journal_finish_inode_data_buffers(journal, commit_transaction);
if (err) {
char b[BDEVNAME_SIZE];
printk(KERN_WARNING
"JBD2: Detected IO errors while flushing file data "
"on %s\n", bdevname(journal->j_fs_dev, b));
"on %s\n", journal->j_devname);
err = 0;
}
......@@ -990,6 +987,9 @@ void jbd2_journal_commit_transaction(journal_t *journal)
}
spin_unlock(&journal->j_list_lock);
trace_mark(jbd2_end_commit, "dev %s transaction %d head %d",
journal->j_devname, commit_transaction->t_tid,
journal->j_tail_sequence);
jbd_debug(1, "JBD: commit %d complete, head %d\n",
journal->j_commit_sequence, journal->j_tail_sequence);
......
This diff is collapsed.
......@@ -989,15 +989,6 @@ static int ocfs2_grow_tree(struct inode *inode, handle_t *handle,
return ret;
}
/*
* This is only valid for leaf nodes, which are the only ones that can
* have empty extents anyway.
*/
static inline int ocfs2_is_empty_extent(struct ocfs2_extent_rec *rec)
{
return !rec->e_leaf_clusters;
}
/*
* This function will discard the rightmost extent record.
*/
......
......@@ -146,4 +146,13 @@ static inline unsigned int ocfs2_rec_clusters(struct ocfs2_extent_list *el,
return le16_to_cpu(rec->e_leaf_clusters);
}
/*
* This is only valid for leaf nodes, which are the only ones that can
* have empty extents anyway.
*/
static inline int ocfs2_is_empty_extent(struct ocfs2_extent_rec *rec)
{
return !rec->e_leaf_clusters;
}
#endif /* OCFS2_ALLOC_H */
This diff is collapsed.
......@@ -50,4 +50,7 @@ int ocfs2_get_clusters(struct inode *inode, u32 v_cluster, u32 *p_cluster,
int ocfs2_extent_map_get_blocks(struct inode *inode, u64 v_blkno, u64 *p_blkno,
u64 *ret_count, unsigned int *extent_flags);
int ocfs2_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
u64 map_start, u64 map_len);
#endif /* _EXTENT_MAP_H */
......@@ -2228,6 +2228,7 @@ const struct inode_operations ocfs2_file_iops = {
.getattr = ocfs2_getattr,
.permission = ocfs2_permission,
.fallocate = ocfs2_fallocate,
.fiemap = ocfs2_fiemap,
};
const struct inode_operations ocfs2_special_file_iops = {
......
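With ocfs2_fiemap wired into ocfs2_file_iops, the new vfs-level fiemap interface becomes reachable from userspace through FS_IOC_FIEMAP. The sketch below is illustrative only and not part of this commit; it assumes the <linux/fs.h> and <linux/fiemap.h> definitions and uses the usual two-call pattern: first ask only for the extent count, then fetch that many extents.

/* Hypothetical sketch: list the on-disk extents of a file. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

int main(int argc, char **argv)
{
	struct fiemap probe, *fm;
	unsigned int i, n;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* With fm_extent_count == 0 the kernel only reports how many
	 * extents would be returned, in fm_mapped_extents. */
	memset(&probe, 0, sizeof(probe));
	probe.fm_length = ~0ULL;
	if (ioctl(fd, FS_IOC_FIEMAP, &probe) < 0) {
		perror("FS_IOC_FIEMAP");
		return 1;
	}
	n = probe.fm_mapped_extents;

	fm = calloc(1, sizeof(*fm) + n * sizeof(struct fiemap_extent));
	if (!fm)
		return 1;
	fm->fm_length = ~0ULL;
	fm->fm_extent_count = n;
	if (ioctl(fd, FS_IOC_FIEMAP, fm) < 0) {
		perror("FS_IOC_FIEMAP");
		return 1;
	}

	for (i = 0; i < fm->fm_mapped_extents; i++)
		printf("logical %llu physical %llu length %llu flags 0x%x\n",
		       (unsigned long long)fm->fm_extents[i].fe_logical,
		       (unsigned long long)fm->fm_extents[i].fe_physical,
		       (unsigned long long)fm->fm_extents[i].fe_length,
		       fm->fm_extents[i].fe_flags);
	free(fm);
	close(fd);
	return 0;
}

Run against a file on a filesystem that implements the hook, this should print one line per extent, with FIEMAP_EXTENT_LAST set in the flags of the final one.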
......@@ -837,6 +837,8 @@ extern void ext3_truncate (struct inode *);
extern void ext3_set_inode_flags(struct inode *);
extern void ext3_get_inode_flags(struct ext3_inode_info *);
extern void ext3_set_aops(struct inode *inode);
extern int ext3_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
u64 start, u64 len);
/* ioctl.c */
extern int ext3_ioctl (struct inode *, struct file *, unsigned int,
......
This diff is collapsed.
This diff is collapsed.
......@@ -851,6 +851,7 @@ struct journal_s
struct block_device *j_dev;
int j_blocksize;
unsigned long long j_blk_offset;
char j_devname[BDEVNAME_SIZE+24];
/*
* Device which holds the client fs. For internal journal this will be
......
This diff is collapsed.
......@@ -52,7 +52,7 @@ EXPORT_SYMBOL(__percpu_counter_add);
* Add up all the per-cpu counts, return the result. This is a more accurate
* but much slower version of percpu_counter_read_positive()
*/
s64 __percpu_counter_sum(struct percpu_counter *fbc, int set)
s64 __percpu_counter_sum(struct percpu_counter *fbc)
{
s64 ret;
int cpu;
......@@ -62,10 +62,8 @@ s64 __percpu_counter_sum(struct percpu_counter *fbc, int set)
for_each_online_cpu(cpu) {
s32 *pcount = per_cpu_ptr(fbc->counters, cpu);
ret += *pcount;
if (set)
*pcount = 0;
}
if (set)
fbc->count = ret;
spin_unlock(&fbc->lock);
......
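With the 'set' argument gone, __percpu_counter_sum() unconditionally folds the per-cpu deltas back into fbc->count while holding fbc->lock, so there is no longer a separate sum-and-set flavour and callers simply use the percpu_counter_sum() wrapper. A minimal, hypothetical module-style sketch under that assumption (demo_counter and the module scaffolding are illustrative, not from this series):

/* Hypothetical sketch: exercise the simplified percpu_counter API. */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/percpu_counter.h>

static struct percpu_counter demo_counter;

static int __init demo_init(void)
{
	int err = percpu_counter_init(&demo_counter, 0);

	if (err)
		return err;
	percpu_counter_add(&demo_counter, 128);
	/* percpu_counter_sum() now always folds the per-cpu deltas back
	 * into fbc->count under fbc->lock and returns the exact total. */
	printk(KERN_INFO "demo_counter sum = %lld\n",
	       (long long)percpu_counter_sum(&demo_counter));
	return 0;
}

static void __exit demo_exit(void)
{
	percpu_counter_destroy(&demo_counter);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");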