Commit 487e2c9f authored by Linus Torvalds

Merge tag 'afs-next-20171113' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs

Pull AFS updates from David Howells:
 "kAFS filesystem driver overhaul.

  The major points of the overhaul are:

   (1) Preliminary groundwork is laid for supporting network-namespacing
       of kAFS. The remainder of the namespacing work requires some way
       to pass namespace information to submounts triggered by an
       automount. This requires something like the mount overhaul that's
       in progress.

   (2) sockaddr_rxrpc is used in preference to in_addr for holding
       addresses internally, and support is added for talking to the YFS
       VL server. With this, kAFS can do everything over IPv6 as well as
       IPv4 if it's talking to servers that support it.

   (3) Callback handling is overhauled to be generally passive rather
       than active. 'Callbacks' are promises by the server to tell us
       about data and metadata changes. Callbacks are now checked when
       we next touch an inode rather than by actively going and looking
       for changes where possible.

   (4) File access permit caching is overhauled to store the caching
       information per-inode rather than per-directory, shared over
       subordinate files. Whilst older AFS servers only allow ACLs on
       directories (shared to the files in that directory), newer AFS
       servers break that restriction.

       To improve memory usage and to make it easier to do mass-key
       removal, permit combinations are cached and shared.

   (5) Cell database management is overhauled to allow lighter locks to
       be used and to make cell records autonomous state machines that
       look after getting their own DNS records and cleaning themselves
       up, in particular preventing races in acquiring and relinquishing
       the fscache token for the cell.

   (6) Volume caching is overhauled. The afs_vlocation record is removed
       to simplify things and the superblock is now keyed on the cell and
       the numeric volume ID only. The volume record is tied to a
       superblock and normal superblock management is used to mediate the
       lifetime of the volume fscache token.

   (7) File server record caching is overhauled to make server records
       independent of cells and volumes. A server can be in multiple
       cells (in such a case, the administrator must make sure that the
       VL services for all cells correctly reflect the volumes shared
       between those cells).

       Server records are now indexed using the UUID of the server
       rather than the address since a server can have multiple
       addresses.

   (8) File server rotation is overhauled to handle VMOVED, VBUSY (and
       similar), VOFFLINE and VNOVOL indications and to handle rotation
       both of servers and addresses of those servers. The rotation will
       also wait and retry if the server says it is busy.

   (9) Data writeback is overhauled. Each inode no longer stores a list
       of modified sections tagged with the key that authorised them;
       instead, the modified region of a page is noted in page->private
       and a list of keys that made modifications is stored in the inode.

       This simplifies things and allows other keys to be used to
       actually write to the server if a key that made a modification
       becomes useless.

  (10) Writable mmap() is implemented. This allows a kernel to be built
       entirely on AFS.

  Note that pre-AFS-3.4 servers are no longer supported, though this can
  be added back if necessary (AFS-3.4 was released in 1998)"
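
Item (8)'s server rotation shows up in the code below as a small retry loop
around each filesystem operation; afs_fetch_data() in the fs/afs/file.c hunk
further down is the canonical example. As a rough sketch of that calling
pattern — demo_fs_do_thing() is an invented stand-in for a real FS RPC
wrapper, while the cursor helpers are the ones added by the rotation rework
and deal with VBUSY/VMOVED/VOFFLINE handling and address cycling internally:

/* Sketch of a rotating fileserver operation after this series. */
static int demo_vnode_op(struct afs_vnode *vnode, struct key *key)
{
	struct afs_fs_cursor fc;
	int ret = -ERESTARTSYS;

	if (afs_begin_vnode_operation(&fc, vnode, key)) {
		while (afs_select_fileserver(&fc)) {
			/* Each pass targets the next candidate server or
			 * address; note the callback break counters so a
			 * callback loss during the call can be detected.
			 */
			fc.cb_break = vnode->cb_break + vnode->cb_s_break;
			demo_fs_do_thing(&fc);
		}

		afs_check_for_remote_deletion(&fc, fc.vnode);
		afs_vnode_commit_status(&fc, vnode, fc.cb_break);
		ret = afs_end_vnode_operation(&fc);
	}

	return ret;
}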

* tag 'afs-next-20171113' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs: (35 commits)
  afs: Protect call->state changes against signals
  afs: Trace page dirty/clean
  afs: Implement shared-writeable mmap
  afs: Get rid of the afs_writeback record
  afs: Introduce a file-private data record
  afs: Use a dynamic port if 7001 is in use
  afs: Fix directory read/modify race
  afs: Trace the sending of pages
  afs: Trace the initiation and completion of client calls
  afs: Fix documentation on # vs % prefix in mount source specification
  afs: Fix total-length calculation for multiple-page send
  afs: Only progress call state at end of Tx phase from rxrpc callback
  afs: Make use of the YFS service upgrade to fully support IPv6
  afs: Overhaul volume and server record caching and fileserver rotation
  afs: Move server rotation code into its own file
  afs: Add an address list concept
  afs: Overhaul cell database management
  afs: Overhaul permit caching
  afs: Overhaul the callback handling
  afs: Rename struct afs_call server member to cm_server
  ...
parents b630a23a 98bf40cd
@@ -91,8 +91,8 @@ Filesystems can be mounted anywhere by commands similar to the following:
 	mount -t afs "#root.cell." /afs/cambridge
 Where the initial character is either a hash or a percent symbol depending on
-whether you definitely want a R/W volume (hash) or whether you'd prefer a R/O
-volume, but are willing to use a R/W volume instead (percent).
+whether you definitely want a R/W volume (percent) or whether you'd prefer a
+R/O volume, but are willing to use a R/W volume instead (hash).
 The name of the volume can be suffixes with ".backup" or ".readonly" to
 specify connection to only volumes of those types.
...
@@ -1233,18 +1233,6 @@ static int default_cu2_call(struct notifier_block *nfb, unsigned long action,
 	return NOTIFY_OK;
 }
-static int wait_on_fp_mode_switch(atomic_t *p)
-{
-	/*
-	 * The FP mode for this task is currently being switched. That may
-	 * involve modifications to the format of this tasks FP context which
-	 * make it unsafe to proceed with execution for the moment. Instead,
-	 * schedule some other task.
-	 */
-	schedule();
-	return 0;
-}
-
 static int enable_restore_fp_context(int msa)
 {
 	int err, was_fpu_owner, prior_msa;
@@ -1254,7 +1242,7 @@ static int enable_restore_fp_context(int msa)
 	 * complete before proceeding.
 	 */
 	wait_on_atomic_t(&current->mm->context.fp_mode_switching,
-			 wait_on_fp_mode_switch, TASK_KILLABLE);
+			 atomic_t_wait, TASK_KILLABLE);
 	if (!used_math()) {
 		/* First time FP context user. */
...
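
These arch/mips lines (and the drm, i915 selftest and venus hunks that
follow) all make the same conversion: a private schedule()-only callback
passed to wait_on_atomic_t() is dropped in favour of the common
atomic_t_wait() action added earlier in this series. A rough sketch of the
resulting pattern — the demo_device structure and functions are invented for
illustration, and the header choices assume the post-series wait_bit API:

/* Sketch: sleep until an atomic_t usage count drops to zero, using the
 * generic atomic_t_wait() action rather than a per-caller wrapper.
 */
#include <linux/atomic.h>
#include <linux/sched.h>
#include <linux/wait_bit.h>

struct demo_device {
	atomic_t	usecount;	/* starts at 1 for the creator */
};

static void demo_put(struct demo_device *dev)
{
	/* Wake any waiter once the count may have reached zero. */
	if (atomic_dec_and_test(&dev->usecount))
		wake_up_atomic_t(&dev->usecount);
}

static void demo_unregister(struct demo_device *dev)
{
	/* Drop our reference, then sleep until all other users are gone.
	 * atomic_t_wait() simply schedules, honouring the sleep mode.
	 */
	demo_put(dev);
	wait_on_atomic_t(&dev->usecount, atomic_t_wait,
			 TASK_UNINTERRUPTIBLE);
}
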
@@ -263,12 +263,6 @@ static struct drm_dp_aux_dev *drm_dp_aux_dev_get_by_aux(struct drm_dp_aux *aux)
 	return aux_dev;
 }
-static int auxdev_wait_atomic_t(atomic_t *p)
-{
-	schedule();
-	return 0;
-}
-
 void drm_dp_aux_unregister_devnode(struct drm_dp_aux *aux)
 {
 	struct drm_dp_aux_dev *aux_dev;
@@ -283,7 +277,7 @@ void drm_dp_aux_unregister_devnode(struct drm_dp_aux *aux)
 	mutex_unlock(&aux_idr_mutex);
 	atomic_dec(&aux_dev->usecount);
-	wait_on_atomic_t(&aux_dev->usecount, auxdev_wait_atomic_t,
+	wait_on_atomic_t(&aux_dev->usecount, atomic_t_wait,
 			 TASK_UNINTERRUPTIBLE);
 	minor = aux_dev->index;
...
@@ -271,13 +271,7 @@ struct igt_wakeup {
 	u32 seqno;
 };
-static int wait_atomic(atomic_t *p)
-{
-	schedule();
-	return 0;
-}
-
-static int wait_atomic_timeout(atomic_t *p)
+static int wait_atomic_timeout(atomic_t *p, unsigned int mode)
 {
 	return schedule_timeout(10 * HZ) ? 0 : -ETIMEDOUT;
 }
@@ -348,7 +342,7 @@ static void igt_wake_all_sync(atomic_t *ready,
 	atomic_set(ready, 0);
 	wake_up_all(wq);
-	wait_on_atomic_t(set, wait_atomic, TASK_UNINTERRUPTIBLE);
+	wait_on_atomic_t(set, atomic_t_wait, TASK_UNINTERRUPTIBLE);
 	atomic_set(ready, count);
 	atomic_set(done, count);
 }
...
@@ -88,12 +88,6 @@ int hfi_core_init(struct venus_core *core)
 	return ret;
 }
-static int core_deinit_wait_atomic_t(atomic_t *p)
-{
-	schedule();
-	return 0;
-}
-
 int hfi_core_deinit(struct venus_core *core, bool blocking)
 {
 	int ret = 0, empty;
@@ -112,7 +106,7 @@ int hfi_core_deinit(struct venus_core *core, bool blocking)
 	if (!empty) {
 		mutex_unlock(&core->lock);
-		wait_on_atomic_t(&core->insts_count, core_deinit_wait_atomic_t,
+		wait_on_atomic_t(&core->insts_count, atomic_t_wait,
 				 TASK_UNINTERRUPTIBLE);
 		mutex_lock(&core->lock);
 	}
...
@@ -7,6 +7,7 @@ afs-cache-$(CONFIG_AFS_FSCACHE) := cache.o
 kafs-objs := \
 	$(afs-cache-y) \
+	addr_list.o \
 	callback.o \
 	cell.o \
 	cmservice.o \
@@ -19,14 +20,14 @@ kafs-objs := \
 	misc.o \
 	mntpt.o \
 	proc.o \
+	rotate.o \
 	rxrpc.o \
 	security.o \
 	server.o \
+	server_list.o \
 	super.o \
 	netdevices.o \
 	vlclient.o \
-	vlocation.o \
-	vnode.o \
 	volume.o \
 	write.o \
 	xattr.o
...
/* Server address list management
*
* Copyright (C) 2017 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public Licence
* as published by the Free Software Foundation; either version
* 2 of the Licence, or (at your option) any later version.
*/
#include <linux/slab.h>
#include <linux/ctype.h>
#include <linux/dns_resolver.h>
#include <linux/inet.h>
#include <keys/rxrpc-type.h>
#include "internal.h"
#include "afs_fs.h"
//#define AFS_MAX_ADDRESSES
// ((unsigned int)((PAGE_SIZE - sizeof(struct afs_addr_list)) /
// sizeof(struct sockaddr_rxrpc)))
#define AFS_MAX_ADDRESSES ((unsigned int)(sizeof(unsigned long) * 8))
/*
* Release an address list.
*/
void afs_put_addrlist(struct afs_addr_list *alist)
{
if (alist && refcount_dec_and_test(&alist->usage))
call_rcu(&alist->rcu, (rcu_callback_t)kfree);
}
/*
* Allocate an address list.
*/
struct afs_addr_list *afs_alloc_addrlist(unsigned int nr,
unsigned short service,
unsigned short port)
{
struct afs_addr_list *alist;
unsigned int i;
_enter("%u,%u,%u", nr, service, port);
alist = kzalloc(sizeof(*alist) + sizeof(alist->addrs[0]) * nr,
GFP_KERNEL);
if (!alist)
return NULL;
refcount_set(&alist->usage, 1);
for (i = 0; i < nr; i++) {
struct sockaddr_rxrpc *srx = &alist->addrs[i];
srx->srx_family = AF_RXRPC;
srx->srx_service = service;
srx->transport_type = SOCK_DGRAM;
srx->transport_len = sizeof(srx->transport.sin6);
srx->transport.sin6.sin6_family = AF_INET6;
srx->transport.sin6.sin6_port = htons(port);
}
return alist;
}
/*
* Parse a text string consisting of delimited addresses.
*/
struct afs_addr_list *afs_parse_text_addrs(const char *text, size_t len,
char delim,
unsigned short service,
unsigned short port)
{
struct afs_addr_list *alist;
const char *p, *end = text + len;
unsigned int nr = 0;
_enter("%*.*s,%c", (int)len, (int)len, text, delim);
if (!len)
return ERR_PTR(-EDESTADDRREQ);
if (delim == ':' && (memchr(text, ',', len) || !memchr(text, '.', len)))
delim = ',';
/* Count the addresses */
p = text;
do {
if (!*p)
return ERR_PTR(-EINVAL);
if (*p == delim)
continue;
nr++;
if (*p == '[') {
p++;
if (p == end)
return ERR_PTR(-EINVAL);
p = memchr(p, ']', end - p);
if (!p)
return ERR_PTR(-EINVAL);
p++;
if (p >= end)
break;
}
p = memchr(p, delim, end - p);
if (!p)
break;
p++;
} while (p < end);
_debug("%u/%u addresses", nr, AFS_MAX_ADDRESSES);
if (nr > AFS_MAX_ADDRESSES)
nr = AFS_MAX_ADDRESSES;
alist = afs_alloc_addrlist(nr, service, port);
if (!alist)
return ERR_PTR(-ENOMEM);
/* Extract the addresses */
p = text;
do {
struct sockaddr_rxrpc *srx = &alist->addrs[alist->nr_addrs];
char tdelim = delim;
if (*p == delim) {
p++;
continue;
}
if (*p == '[') {
p++;
tdelim = ']';
}
if (in4_pton(p, end - p,
(u8 *)&srx->transport.sin6.sin6_addr.s6_addr32[3],
tdelim, &p)) {
srx->transport.sin6.sin6_addr.s6_addr32[0] = 0;
srx->transport.sin6.sin6_addr.s6_addr32[1] = 0;
srx->transport.sin6.sin6_addr.s6_addr32[2] = htonl(0xffff);
} else if (in6_pton(p, end - p,
srx->transport.sin6.sin6_addr.s6_addr,
tdelim, &p)) {
/* Nothing to do */
} else {
goto bad_address;
}
if (tdelim == ']') {
if (p == end || *p != ']')
goto bad_address;
p++;
}
if (p < end) {
if (*p == '+') {
/* Port number specification "+1234" */
unsigned int xport = 0;
p++;
if (p >= end || !isdigit(*p))
goto bad_address;
do {
xport *= 10;
xport += *p - '0';
if (xport > 65535)
goto bad_address;
p++;
} while (p < end && isdigit(*p));
srx->transport.sin6.sin6_port = htons(xport);
} else if (*p == delim) {
p++;
} else {
goto bad_address;
}
}
alist->nr_addrs++;
} while (p < end && alist->nr_addrs < AFS_MAX_ADDRESSES);
_leave(" = [nr %u]", alist->nr_addrs);
return alist;
bad_address:
kfree(alist);
return ERR_PTR(-EINVAL);
}
/*
* Compare old and new address lists to see if there's been any change.
* - How to do this in better than O(Nlog(N)) time?
* - We don't really want to sort the address list, but would rather take the
* list as we got it so as not to undo record rotation by the DNS server.
*/
#if 0
static int afs_cmp_addr_list(const struct afs_addr_list *a1,
const struct afs_addr_list *a2)
{
}
#endif
/*
* Perform a DNS query for VL servers and build a up an address list.
*/
struct afs_addr_list *afs_dns_query(struct afs_cell *cell, time64_t *_expiry)
{
struct afs_addr_list *alist;
char *vllist = NULL;
int ret;
_enter("%s", cell->name);
ret = dns_query("afsdb", cell->name, cell->name_len,
"ipv4", &vllist, _expiry);
if (ret < 0)
return ERR_PTR(ret);
alist = afs_parse_text_addrs(vllist, strlen(vllist), ',',
VL_SERVICE, AFS_VL_PORT);
if (IS_ERR(alist)) {
kfree(vllist);
if (alist != ERR_PTR(-ENOMEM))
pr_err("Failed to parse DNS data\n");
return alist;
}
kfree(vllist);
return alist;
}
/*
* Merge an IPv4 entry into a fileserver address list.
*/
void afs_merge_fs_addr4(struct afs_addr_list *alist, __be32 xdr, u16 port)
{
struct sockaddr_in6 *a;
__be16 xport = htons(port);
int i;
for (i = 0; i < alist->nr_ipv4; i++) {
a = &alist->addrs[i].transport.sin6;
if (xdr == a->sin6_addr.s6_addr32[3] &&
xport == a->sin6_port)
return;
if (xdr == a->sin6_addr.s6_addr32[3] &&
xport < a->sin6_port)
break;
if (xdr < a->sin6_addr.s6_addr32[3])
break;
}
if (i < alist->nr_addrs)
memmove(alist->addrs + i + 1,
alist->addrs + i,
sizeof(alist->addrs[0]) * (alist->nr_addrs - i));
a = &alist->addrs[i].transport.sin6;
a->sin6_port = xport;
a->sin6_addr.s6_addr32[0] = 0;
a->sin6_addr.s6_addr32[1] = 0;
a->sin6_addr.s6_addr32[2] = htonl(0xffff);
a->sin6_addr.s6_addr32[3] = xdr;
alist->nr_ipv4++;
alist->nr_addrs++;
}
/*
* Merge an IPv6 entry into a fileserver address list.
*/
void afs_merge_fs_addr6(struct afs_addr_list *alist, __be32 *xdr, u16 port)
{
struct sockaddr_in6 *a;
__be16 xport = htons(port);
int i, diff;
for (i = alist->nr_ipv4; i < alist->nr_addrs; i++) {
a = &alist->addrs[i].transport.sin6;
diff = memcmp(xdr, &a->sin6_addr, 16);
if (diff == 0 &&
xport == a->sin6_port)
return;
if (diff == 0 &&
xport < a->sin6_port)
break;
if (diff < 0)
break;
}
if (i < alist->nr_addrs)
memmove(alist->addrs + i + 1,
alist->addrs + i,
sizeof(alist->addrs[0]) * (alist->nr_addrs - i));
a = &alist->addrs[i].transport.sin6;
a->sin6_port = xport;
a->sin6_addr.s6_addr32[0] = xdr[0];
a->sin6_addr.s6_addr32[1] = xdr[1];
a->sin6_addr.s6_addr32[2] = xdr[2];
a->sin6_addr.s6_addr32[3] = xdr[3];
alist->nr_addrs++;
}
/*
* Get an address to try.
*/
bool afs_iterate_addresses(struct afs_addr_cursor *ac)
{
_enter("%hu+%hd", ac->start, (short)ac->index);
if (!ac->alist)
return false;
if (ac->begun) {
ac->index++;
if (ac->index == ac->alist->nr_addrs)
ac->index = 0;
if (ac->index == ac->start) {
ac->error = -EDESTADDRREQ;
return false;
}
}
ac->begun = true;
ac->responded = false;
ac->addr = &ac->alist->addrs[ac->index];
return true;
}
/*
* Release an address list cursor.
*/
int afs_end_cursor(struct afs_addr_cursor *ac)
{
if (ac->responded && ac->index != ac->start)
WRITE_ONCE(ac->alist->index, ac->index);
afs_put_addrlist(ac->alist);
ac->alist = NULL;
return ac->error;
}
/*
* Set the address cursor for iterating over VL servers.
*/
int afs_set_vl_cursor(struct afs_addr_cursor *ac, struct afs_cell *cell)
{
struct afs_addr_list *alist;
int ret;
if (!rcu_access_pointer(cell->vl_addrs)) {
ret = wait_on_bit(&cell->flags, AFS_CELL_FL_NO_LOOKUP_YET,
TASK_INTERRUPTIBLE);
if (ret < 0)
return ret;
if (!rcu_access_pointer(cell->vl_addrs) &&
ktime_get_real_seconds() < cell->dns_expiry)
return cell->error;
}
read_lock(&cell->vl_addrs_lock);
alist = rcu_dereference_protected(cell->vl_addrs,
lockdep_is_held(&cell->vl_addrs_lock));
if (alist->nr_addrs > 0)
afs_get_addrlist(alist);
else
alist = NULL;
read_unlock(&cell->vl_addrs_lock);
if (!alist)
return -EDESTADDRREQ;
ac->alist = alist;
ac->addr = NULL;
ac->start = READ_ONCE(alist->index);
ac->index = ac->start;
ac->error = 0;
ac->begun = false;
return 0;
}
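
The parser and cursor above are easiest to see end-to-end in a short usage
sketch. The calling context below is invented (demo_try_one_server() and
demo_probe_vl_servers() are not part of the patch); it simply chains
afs_parse_text_addrs(), afs_iterate_addresses() and afs_end_cursor() as
defined in this file, and notes how an IPv4 server ends up stored as an
IPv4-mapped IPv6 address so that a single sockaddr_rxrpc form covers both
protocol families:

/* Illustrative only: parse a comma-separated VL server list and walk the
 * resulting addresses with the cursor API defined above.
 */
static int demo_probe_vl_servers(struct afs_net *net)
{
	static const char text[] = "203.0.113.1,[2001:db8::2]+7003";
	struct afs_addr_list *alist;
	struct afs_addr_cursor ac;

	alist = afs_parse_text_addrs(text, strlen(text), ',',
				     VL_SERVICE, AFS_VL_PORT);
	if (IS_ERR(alist))
		return PTR_ERR(alist);

	/* 203.0.113.1 is stored as the IPv4-mapped address
	 * ::ffff:203.0.113.1 in a sin6-format transport address.
	 */
	memset(&ac, 0, sizeof(ac));
	ac.alist = alist;
	ac.start = READ_ONCE(alist->index);
	ac.index = ac.start;

	while (afs_iterate_addresses(&ac)) {
		if (demo_try_one_server(net, ac.addr) == 0) {
			ac.responded = true;	/* remember a good address */
			break;
		}
	}

	return afs_end_cursor(&ac);	/* drops the list reference */
}
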
@@ -14,11 +14,14 @@
 #include <linux/in.h>
-#define AFS_MAXCELLNAME	64	/* maximum length of a cell name */
-#define AFS_MAXVOLNAME	64	/* maximum length of a volume name */
-#define AFSNAMEMAX	256	/* maximum length of a filename plus NUL */
-#define AFSPATHMAX	1024	/* maximum length of a pathname plus NUL */
-#define AFSOPAQUEMAX	1024	/* maximum length of an opaque field */
+#define AFS_MAXCELLNAME	64	/* Maximum length of a cell name */
+#define AFS_MAXVOLNAME	64	/* Maximum length of a volume name */
+#define AFS_MAXNSERVERS	8	/* Maximum servers in a basic volume record */
+#define AFS_NMAXNSERVERS	13	/* Maximum servers in a N/U-class volume record */
+#define AFS_MAXTYPES	3	/* Maximum number of volume types */
+#define AFSNAMEMAX	256	/* Maximum length of a filename plus NUL */
+#define AFSPATHMAX	1024	/* Maximum length of a pathname plus NUL */
+#define AFSOPAQUEMAX	1024	/* Maximum length of an opaque field */
 typedef unsigned	afs_volid_t;
 typedef unsigned	afs_vnodeid_t;
@@ -72,6 +75,15 @@ struct afs_callback {
 #define AFSCBMAX 50	/* maximum callbacks transferred per bulk op */
+struct afs_uuid {
+	__be32	time_low;			/* low part of timestamp */
+	__be16	time_mid;			/* mid part of timestamp */
+	__be16	time_hi_and_version;		/* high part of timestamp and version */
+	__s8	clock_seq_hi_and_reserved;	/* clock seq hi and variant */
+	__s8	clock_seq_low;			/* clock seq low */
+	__s8	node[6];			/* spatially unique node ID (MAC addr) */
+};
+
 /*
  * AFS volume information
  */
@@ -124,7 +136,6 @@ struct afs_file_status {
 	afs_access_t	caller_access;	/* access rights for authenticated caller */
 	afs_access_t	anon_access;	/* access rights for unauthenticated caller */
 	umode_t		mode;		/* UNIX mode */
-	struct afs_fid	parent;		/* parent dir ID for non-dirs only */
 	time_t		mtime_client;	/* last time client changed data */
 	time_t		mtime_server;	/* last time server changed data */
 	s32		lock_count;	/* file lock count (0=UNLK -1=WRLCK +ve=#RDLCK */
@@ -167,4 +178,16 @@ struct afs_volume_status {
 #define AFS_BLOCK_SIZE	1024
+/*
+ * XDR encoding of UUID in AFS.
+ */
+struct afs_uuid__xdr {
+	__be32	time_low;
+	__be32	time_mid;
+	__be32	time_hi_and_version;
+	__be32	clock_seq_hi_and_reserved;
+	__be32	clock_seq_low;
+	__be32	node[6];
+};
+
 #endif /* AFS_H */
@@ -37,9 +37,12 @@ enum AFS_FS_Operations {
 	FSLOOKUP		= 161,	/* AFS lookup file in directory */
 	FSFETCHDATA64		= 65537, /* AFS Fetch file data */
 	FSSTOREDATA64		= 65538, /* AFS Store file data */
+	FSGIVEUPALLCALLBACKS	= 65539, /* AFS Give up all outstanding callbacks on a server */
+	FSGETCAPABILITIES	= 65540, /* Probe and get the capabilities of a fileserver */
 };
 enum AFS_FS_Errors {
+	VRESTARTING	= -100,	/* Server is restarting */
 	VSALVAGE	= 101,	/* volume needs salvaging */
 	VNOVNODE	= 102,	/* no such file/dir (vnode) */
 	VNOVOL		= 103,	/* no such volume or volume unavailable */
@@ -51,6 +54,9 @@ enum AFS_FS_Errors {
 	VOVERQUOTA	= 109,	/* volume's maximum quota exceeded */
 	VBUSY		= 110,	/* volume is temporarily unavailable */
 	VMOVED		= 111,	/* volume moved to new server - ask this FS where */
+	VIO		= 112,	/* I/O error in volume */
+	VSALVAGING	= 113,	/* Volume is being salvaged */
+	VRESTRICTED	= 120,	/* Volume is restricted from using */
 };
 #endif /* AFS_FS_H */
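
The new abort codes above only become visible to userspace once they are
mapped to errnos. The switch below is purely illustrative — the errno choices
are assumptions, and the kernel's real table lives in fs/afs/misc.c — but it
shows the kind of translation the fileserver rotation relies on when deciding
whether an abort means "retry elsewhere" or "hard failure":

/* Illustrative only: translate a few AFS_FS_Errors abort codes to errnos.
 * The specific errno choices here are assumptions, not the kernel's table.
 */
static int demo_abort_to_error(u32 abort_code)
{
	switch (abort_code) {
	case VSALVAGE:
	case VSALVAGING:	return -EIO;		/* volume under repair */
	case VNOVOL:		return -ENOMEDIUM;	/* volume not present here */
	case VOVERQUOTA:	return -EDQUOT;
	case VBUSY:
	case VRESTARTING:	return -EBUSY;		/* worth retrying later */
	case VMOVED:		return -ENXIO;		/* re-look-up the volume */
	default:		return -EREMOTEIO;
	}
}
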
@@ -16,11 +16,17 @@
 #define AFS_VL_PORT		7003	/* volume location service port */
 #define VL_SERVICE		52	/* RxRPC service ID for the Volume Location service */
+#define YFS_VL_SERVICE		2503	/* Service ID for AuriStor upgraded VL service */
 enum AFSVL_Operations {
-	VLGETENTRYBYID		= 503,	/* AFS Get Cache Entry By ID operation ID */
-	VLGETENTRYBYNAME	= 504,	/* AFS Get Cache Entry By Name operation ID */
-	VLPROBE			= 514,	/* AFS Probe Volume Location Service operation ID */
+	VLGETENTRYBYID		= 503,	/* AFS Get VLDB entry by ID */
+	VLGETENTRYBYNAME	= 504,	/* AFS Get VLDB entry by name */
+	VLPROBE			= 514,	/* AFS probe VL service */
+	VLGETENTRYBYIDU		= 526,	/* AFS Get VLDB entry by ID (UUID-variant) */
+	VLGETENTRYBYNAMEU	= 527,	/* AFS Get VLDB entry by name (UUID-variant) */
+	VLGETADDRSU		= 533,	/* AFS Get addrs for fileserver */
+	YVLGETENDPOINTS		= 64002, /* YFS Get endpoints for file/volume server */
+	VLGETCAPABILITIES	= 65537, /* AFS Get server capabilities */
 };
 enum AFSVL_Errors {
@@ -54,6 +60,19 @@ enum AFSVL_Errors {
 	AFSVL_NOMEM		= 363547, /* malloc/realloc failed to alloc enough memory */
 };
+enum {
+	YFS_SERVER_INDEX	= 0,
+	YFS_SERVER_UUID		= 1,
+	YFS_SERVER_ENDPOINT	= 2,
+};
+
+enum {
+	YFS_ENDPOINT_IPV4	= 0,
+	YFS_ENDPOINT_IPV6	= 1,
+};
+
+#define YFS_MAXENDPOINTS	16
+
 /*
  * maps to "struct vldbentry" in vvl-spec.pdf
  */
@@ -74,11 +93,57 @@ struct afs_vldbentry {
 		struct in_addr	addr;		/* server address */
 		unsigned	partition;	/* partition ID on this server */
 		unsigned	flags;		/* server specific flags */
-#define AFS_VLSF_NEWREPSITE	0x0001	/* unused */
+#define AFS_VLSF_NEWREPSITE	0x0001	/* Ignore all 'non-new' servers */
 #define AFS_VLSF_ROVOL		0x0002	/* this server holds a R/O instance of the volume */
 #define AFS_VLSF_RWVOL		0x0004	/* this server holds a R/W instance of the volume */
 #define AFS_VLSF_BACKVOL	0x0008	/* this server holds a backup instance of the volume */
+#define AFS_VLSF_UUID		0x0010	/* This server is referred to by its UUID */
+#define AFS_VLSF_DONTUSE	0x0020	/* This server ref should be ignored */
 	} servers[8];
 };
+#define AFS_VLDB_MAXNAMELEN 65
+
+struct afs_ListAddrByAttributes__xdr {
+	__be32			Mask;
+#define AFS_VLADDR_IPADDR	0x1	/* Match by ->ipaddr */
+#define AFS_VLADDR_INDEX	0x2	/* Match by ->index */
+#define AFS_VLADDR_UUID		0x4	/* Match by ->uuid */
+	__be32			ipaddr;
+	__be32			index;
+	__be32			spare;
+	struct afs_uuid__xdr	uuid;
+};
+
+struct afs_uvldbentry__xdr {
+	__be32			name[AFS_VLDB_MAXNAMELEN];
+	__be32			nServers;
+	struct afs_uuid__xdr	serverNumber[AFS_NMAXNSERVERS];
+	__be32			serverUnique[AFS_NMAXNSERVERS];
+	__be32			serverPartition[AFS_NMAXNSERVERS];
+	__be32			serverFlags[AFS_NMAXNSERVERS];
+	__be32			volumeId[AFS_MAXTYPES];
+	__be32			cloneId;
+	__be32			flags;
+	__be32			spares1;
+	__be32			spares2;
+	__be32			spares3;
+	__be32			spares4;
+	__be32			spares5;
+	__be32			spares6;
+	__be32			spares7;
+	__be32			spares8;
+	__be32			spares9;
+};
+
+struct afs_address_list {
+	refcount_t		usage;
+	unsigned int		version;
+	unsigned int		nr_addrs;
+	struct sockaddr_rxrpc	addrs[];
+};
+
+extern void afs_put_address_list(struct afs_address_list *alist);
+
 #endif /* AFS_VL_H */
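
The two UUID representations added by this series — the compact struct
afs_uuid in afs.h and the on-the-wire struct afs_uuid__xdr above, where every
field is widened to a big-endian 32-bit word — are related by a simple
repacking. A minimal sketch (the helper name is invented; the real
conversions live in the VL client code):

/* Sketch: unpack the XDR form of a UUID into the compact host form. */
static void demo_uuid_from_xdr(struct afs_uuid *u,
			       const struct afs_uuid__xdr *x)
{
	int i;

	u->time_low			= x->time_low;	/* already __be32 */
	u->time_mid			= htons(ntohl(x->time_mid));
	u->time_hi_and_version		= htons(ntohl(x->time_hi_and_version));
	u->clock_seq_hi_and_reserved	= (s8)ntohl(x->clock_seq_hi_and_reserved);
	u->clock_seq_low		= (s8)ntohl(x->clock_seq_low);
	for (i = 0; i < 6; i++)
		u->node[i] = (s8)ntohl(x->node[i]);
}
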
...@@ -14,19 +14,6 @@ ...@@ -14,19 +14,6 @@
static uint16_t afs_cell_cache_get_key(const void *cookie_netfs_data, static uint16_t afs_cell_cache_get_key(const void *cookie_netfs_data,
void *buffer, uint16_t buflen); void *buffer, uint16_t buflen);
static uint16_t afs_cell_cache_get_aux(const void *cookie_netfs_data,
void *buffer, uint16_t buflen);
static enum fscache_checkaux afs_cell_cache_check_aux(void *cookie_netfs_data,
const void *buffer,
uint16_t buflen);
static uint16_t afs_vlocation_cache_get_key(const void *cookie_netfs_data,
void *buffer, uint16_t buflen);
static uint16_t afs_vlocation_cache_get_aux(const void *cookie_netfs_data,
void *buffer, uint16_t buflen);
static enum fscache_checkaux afs_vlocation_cache_check_aux(
void *cookie_netfs_data, const void *buffer, uint16_t buflen);
static uint16_t afs_volume_cache_get_key(const void *cookie_netfs_data, static uint16_t afs_volume_cache_get_key(const void *cookie_netfs_data,
void *buffer, uint16_t buflen); void *buffer, uint16_t buflen);
...@@ -42,23 +29,13 @@ static enum fscache_checkaux afs_vnode_cache_check_aux(void *cookie_netfs_data, ...@@ -42,23 +29,13 @@ static enum fscache_checkaux afs_vnode_cache_check_aux(void *cookie_netfs_data,
struct fscache_netfs afs_cache_netfs = { struct fscache_netfs afs_cache_netfs = {
.name = "afs", .name = "afs",
.version = 0, .version = 1,
}; };
struct fscache_cookie_def afs_cell_cache_index_def = { struct fscache_cookie_def afs_cell_cache_index_def = {
.name = "AFS.cell", .name = "AFS.cell",
.type = FSCACHE_COOKIE_TYPE_INDEX, .type = FSCACHE_COOKIE_TYPE_INDEX,
.get_key = afs_cell_cache_get_key, .get_key = afs_cell_cache_get_key,
.get_aux = afs_cell_cache_get_aux,
.check_aux = afs_cell_cache_check_aux,
};
struct fscache_cookie_def afs_vlocation_cache_index_def = {
.name = "AFS.vldb",
.type = FSCACHE_COOKIE_TYPE_INDEX,
.get_key = afs_vlocation_cache_get_key,
.get_aux = afs_vlocation_cache_get_aux,
.check_aux = afs_vlocation_cache_check_aux,
}; };
struct fscache_cookie_def afs_volume_cache_index_def = { struct fscache_cookie_def afs_volume_cache_index_def = {
...@@ -95,150 +72,26 @@ static uint16_t afs_cell_cache_get_key(const void *cookie_netfs_data, ...@@ -95,150 +72,26 @@ static uint16_t afs_cell_cache_get_key(const void *cookie_netfs_data,
return klen; return klen;
} }
/*
* provide new auxiliary cache data
*/
static uint16_t afs_cell_cache_get_aux(const void *cookie_netfs_data,
void *buffer, uint16_t bufmax)
{
const struct afs_cell *cell = cookie_netfs_data;
uint16_t dlen;
_enter("%p,%p,%u", cell, buffer, bufmax);
dlen = cell->vl_naddrs * sizeof(cell->vl_addrs[0]);
dlen = min(dlen, bufmax);
dlen &= ~(sizeof(cell->vl_addrs[0]) - 1);
memcpy(buffer, cell->vl_addrs, dlen);
return dlen;
}
/*
* check that the auxiliary data indicates that the entry is still valid
*/
static enum fscache_checkaux afs_cell_cache_check_aux(void *cookie_netfs_data,
const void *buffer,
uint16_t buflen)
{
_leave(" = OKAY");
return FSCACHE_CHECKAUX_OKAY;
}
/*****************************************************************************/
/*
* set the key for the index entry
*/
static uint16_t afs_vlocation_cache_get_key(const void *cookie_netfs_data,
void *buffer, uint16_t bufmax)
{
const struct afs_vlocation *vlocation = cookie_netfs_data;
uint16_t klen;
_enter("{%s},%p,%u", vlocation->vldb.name, buffer, bufmax);
klen = strnlen(vlocation->vldb.name, sizeof(vlocation->vldb.name));
if (klen > bufmax)
return 0;
memcpy(buffer, vlocation->vldb.name, klen);
_leave(" = %u", klen);
return klen;
}
/*
* provide new auxiliary cache data
*/
static uint16_t afs_vlocation_cache_get_aux(const void *cookie_netfs_data,
void *buffer, uint16_t bufmax)
{
const struct afs_vlocation *vlocation = cookie_netfs_data;
uint16_t dlen;
_enter("{%s},%p,%u", vlocation->vldb.name, buffer, bufmax);
dlen = sizeof(struct afs_cache_vlocation);
dlen -= offsetof(struct afs_cache_vlocation, nservers);
if (dlen > bufmax)
return 0;
memcpy(buffer, (uint8_t *)&vlocation->vldb.nservers, dlen);
_leave(" = %u", dlen);
return dlen;
}
/*
* check that the auxiliary data indicates that the entry is still valid
*/
static
enum fscache_checkaux afs_vlocation_cache_check_aux(void *cookie_netfs_data,
const void *buffer,
uint16_t buflen)
{
const struct afs_cache_vlocation *cvldb;
struct afs_vlocation *vlocation = cookie_netfs_data;
uint16_t dlen;
_enter("{%s},%p,%u", vlocation->vldb.name, buffer, buflen);
/* check the size of the data is what we're expecting */
dlen = sizeof(struct afs_cache_vlocation);
dlen -= offsetof(struct afs_cache_vlocation, nservers);
if (dlen != buflen)
return FSCACHE_CHECKAUX_OBSOLETE;
cvldb = container_of(buffer, struct afs_cache_vlocation, nservers);
/* if what's on disk is more valid than what's in memory, then use the
* VL record from the cache */
if (!vlocation->valid || vlocation->vldb.rtime == cvldb->rtime) {
memcpy((uint8_t *)&vlocation->vldb.nservers, buffer, dlen);
vlocation->valid = 1;
_leave(" = SUCCESS [c->m]");
return FSCACHE_CHECKAUX_OKAY;
}
/* need to update the cache if the cached info differs */
if (memcmp(&vlocation->vldb, buffer, dlen) != 0) {
/* delete if the volume IDs for this name differ */
if (memcmp(&vlocation->vldb.vid, &cvldb->vid,
sizeof(cvldb->vid)) != 0
) {
_leave(" = OBSOLETE");
return FSCACHE_CHECKAUX_OBSOLETE;
}
_leave(" = UPDATE");
return FSCACHE_CHECKAUX_NEEDS_UPDATE;
}
_leave(" = OKAY");
return FSCACHE_CHECKAUX_OKAY;
}
/*****************************************************************************/ /*****************************************************************************/
/* /*
* set the key for the volume index entry * set the key for the volume index entry
*/ */
static uint16_t afs_volume_cache_get_key(const void *cookie_netfs_data, static uint16_t afs_volume_cache_get_key(const void *cookie_netfs_data,
void *buffer, uint16_t bufmax) void *buffer, uint16_t bufmax)
{ {
const struct afs_volume *volume = cookie_netfs_data; const struct afs_volume *volume = cookie_netfs_data;
uint16_t klen; struct {
u64 volid;
} __packed key;
_enter("{%u},%p,%u", volume->type, buffer, bufmax); _enter("{%u},%p,%u", volume->type, buffer, bufmax);
klen = sizeof(volume->type); if (bufmax < sizeof(key))
if (klen > bufmax)
return 0; return 0;
memcpy(buffer, &volume->type, sizeof(volume->type)); key.volid = volume->vid;
memcpy(buffer, &key, sizeof(key));
_leave(" = %u", klen); return sizeof(key);
return klen;
} }
/*****************************************************************************/ /*****************************************************************************/
...@@ -249,20 +102,25 @@ static uint16_t afs_vnode_cache_get_key(const void *cookie_netfs_data, ...@@ -249,20 +102,25 @@ static uint16_t afs_vnode_cache_get_key(const void *cookie_netfs_data,
void *buffer, uint16_t bufmax) void *buffer, uint16_t bufmax)
{ {
const struct afs_vnode *vnode = cookie_netfs_data; const struct afs_vnode *vnode = cookie_netfs_data;
uint16_t klen; struct {
u32 vnode_id[3];
} __packed key;
_enter("{%x,%x,%llx},%p,%u", _enter("{%x,%x,%llx},%p,%u",
vnode->fid.vnode, vnode->fid.unique, vnode->status.data_version, vnode->fid.vnode, vnode->fid.unique, vnode->status.data_version,
buffer, bufmax); buffer, bufmax);
klen = sizeof(vnode->fid.vnode); /* Allow for a 96-bit key */
if (klen > bufmax) memset(&key, 0, sizeof(key));
return 0; key.vnode_id[0] = vnode->fid.vnode;
key.vnode_id[1] = 0;
key.vnode_id[2] = 0;
memcpy(buffer, &vnode->fid.vnode, sizeof(vnode->fid.vnode)); if (sizeof(key) > bufmax)
return 0;
_leave(" = %u", klen); memcpy(buffer, &key, sizeof(key));
return klen; return sizeof(key);
} }
/* /*
...@@ -280,6 +138,11 @@ static void afs_vnode_cache_get_attr(const void *cookie_netfs_data, ...@@ -280,6 +138,11 @@ static void afs_vnode_cache_get_attr(const void *cookie_netfs_data,
*size = vnode->status.size; *size = vnode->status.size;
} }
struct afs_vnode_cache_aux {
u64 data_version;
u32 fid_unique;
} __packed;
/* /*
* provide new auxiliary cache data * provide new auxiliary cache data
*/ */
...@@ -287,23 +150,21 @@ static uint16_t afs_vnode_cache_get_aux(const void *cookie_netfs_data, ...@@ -287,23 +150,21 @@ static uint16_t afs_vnode_cache_get_aux(const void *cookie_netfs_data,
void *buffer, uint16_t bufmax) void *buffer, uint16_t bufmax)
{ {
const struct afs_vnode *vnode = cookie_netfs_data; const struct afs_vnode *vnode = cookie_netfs_data;
uint16_t dlen; struct afs_vnode_cache_aux aux;
_enter("{%x,%x,%Lx},%p,%u", _enter("{%x,%x,%Lx},%p,%u",
vnode->fid.vnode, vnode->fid.unique, vnode->status.data_version, vnode->fid.vnode, vnode->fid.unique, vnode->status.data_version,
buffer, bufmax); buffer, bufmax);
dlen = sizeof(vnode->fid.unique) + sizeof(vnode->status.data_version); memset(&aux, 0, sizeof(aux));
if (dlen > bufmax) aux.data_version = vnode->status.data_version;
return 0; aux.fid_unique = vnode->fid.unique;
memcpy(buffer, &vnode->fid.unique, sizeof(vnode->fid.unique)); if (bufmax < sizeof(aux))
buffer += sizeof(vnode->fid.unique); return 0;
memcpy(buffer, &vnode->status.data_version,
sizeof(vnode->status.data_version));
_leave(" = %u", dlen); memcpy(buffer, &aux, sizeof(aux));
return dlen; return sizeof(aux);
} }
/* /*
...@@ -314,43 +175,29 @@ static enum fscache_checkaux afs_vnode_cache_check_aux(void *cookie_netfs_data, ...@@ -314,43 +175,29 @@ static enum fscache_checkaux afs_vnode_cache_check_aux(void *cookie_netfs_data,
uint16_t buflen) uint16_t buflen)
{ {
struct afs_vnode *vnode = cookie_netfs_data; struct afs_vnode *vnode = cookie_netfs_data;
uint16_t dlen; struct afs_vnode_cache_aux aux;
_enter("{%x,%x,%llx},%p,%u", _enter("{%x,%x,%llx},%p,%u",
vnode->fid.vnode, vnode->fid.unique, vnode->status.data_version, vnode->fid.vnode, vnode->fid.unique, vnode->status.data_version,
buffer, buflen); buffer, buflen);
memcpy(&aux, buffer, sizeof(aux));
/* check the size of the data is what we're expecting */ /* check the size of the data is what we're expecting */
dlen = sizeof(vnode->fid.unique) + sizeof(vnode->status.data_version); if (buflen != sizeof(aux)) {
if (dlen != buflen) { _leave(" = OBSOLETE [len %hx != %zx]", buflen, sizeof(aux));
_leave(" = OBSOLETE [len %hx != %hx]", dlen, buflen);
return FSCACHE_CHECKAUX_OBSOLETE; return FSCACHE_CHECKAUX_OBSOLETE;
} }
if (memcmp(buffer, if (vnode->fid.unique != aux.fid_unique) {
&vnode->fid.unique,
sizeof(vnode->fid.unique)
) != 0) {
unsigned unique;
memcpy(&unique, buffer, sizeof(unique));
_leave(" = OBSOLETE [uniq %x != %x]", _leave(" = OBSOLETE [uniq %x != %x]",
unique, vnode->fid.unique); aux.fid_unique, vnode->fid.unique);
return FSCACHE_CHECKAUX_OBSOLETE; return FSCACHE_CHECKAUX_OBSOLETE;
} }
if (memcmp(buffer + sizeof(vnode->fid.unique), if (vnode->status.data_version != aux.data_version) {
&vnode->status.data_version,
sizeof(vnode->status.data_version)
) != 0) {
afs_dataversion_t version;
memcpy(&version, buffer + sizeof(vnode->fid.unique),
sizeof(version));
_leave(" = OBSOLETE [vers %llx != %llx]", _leave(" = OBSOLETE [vers %llx != %llx]",
version, vnode->status.data_version); aux.data_version, vnode->status.data_version);
return FSCACHE_CHECKAUX_OBSOLETE; return FSCACHE_CHECKAUX_OBSOLETE;
} }
......
...@@ -41,7 +41,6 @@ static CM_NAME(CallBack); ...@@ -41,7 +41,6 @@ static CM_NAME(CallBack);
static const struct afs_call_type afs_SRXCBCallBack = { static const struct afs_call_type afs_SRXCBCallBack = {
.name = afs_SRXCBCallBack_name, .name = afs_SRXCBCallBack_name,
.deliver = afs_deliver_cb_callback, .deliver = afs_deliver_cb_callback,
.abort_to_error = afs_abort_to_error,
.destructor = afs_cm_destructor, .destructor = afs_cm_destructor,
.work = SRXAFSCB_CallBack, .work = SRXAFSCB_CallBack,
}; };
...@@ -53,7 +52,6 @@ static CM_NAME(InitCallBackState); ...@@ -53,7 +52,6 @@ static CM_NAME(InitCallBackState);
static const struct afs_call_type afs_SRXCBInitCallBackState = { static const struct afs_call_type afs_SRXCBInitCallBackState = {
.name = afs_SRXCBInitCallBackState_name, .name = afs_SRXCBInitCallBackState_name,
.deliver = afs_deliver_cb_init_call_back_state, .deliver = afs_deliver_cb_init_call_back_state,
.abort_to_error = afs_abort_to_error,
.destructor = afs_cm_destructor, .destructor = afs_cm_destructor,
.work = SRXAFSCB_InitCallBackState, .work = SRXAFSCB_InitCallBackState,
}; };
...@@ -65,7 +63,6 @@ static CM_NAME(InitCallBackState3); ...@@ -65,7 +63,6 @@ static CM_NAME(InitCallBackState3);
static const struct afs_call_type afs_SRXCBInitCallBackState3 = { static const struct afs_call_type afs_SRXCBInitCallBackState3 = {
.name = afs_SRXCBInitCallBackState3_name, .name = afs_SRXCBInitCallBackState3_name,
.deliver = afs_deliver_cb_init_call_back_state3, .deliver = afs_deliver_cb_init_call_back_state3,
.abort_to_error = afs_abort_to_error,
.destructor = afs_cm_destructor, .destructor = afs_cm_destructor,
.work = SRXAFSCB_InitCallBackState, .work = SRXAFSCB_InitCallBackState,
}; };
...@@ -77,7 +74,6 @@ static CM_NAME(Probe); ...@@ -77,7 +74,6 @@ static CM_NAME(Probe);
static const struct afs_call_type afs_SRXCBProbe = { static const struct afs_call_type afs_SRXCBProbe = {
.name = afs_SRXCBProbe_name, .name = afs_SRXCBProbe_name,
.deliver = afs_deliver_cb_probe, .deliver = afs_deliver_cb_probe,
.abort_to_error = afs_abort_to_error,
.destructor = afs_cm_destructor, .destructor = afs_cm_destructor,
.work = SRXAFSCB_Probe, .work = SRXAFSCB_Probe,
}; };
...@@ -89,7 +85,6 @@ static CM_NAME(ProbeUuid); ...@@ -89,7 +85,6 @@ static CM_NAME(ProbeUuid);
static const struct afs_call_type afs_SRXCBProbeUuid = { static const struct afs_call_type afs_SRXCBProbeUuid = {
.name = afs_SRXCBProbeUuid_name, .name = afs_SRXCBProbeUuid_name,
.deliver = afs_deliver_cb_probe_uuid, .deliver = afs_deliver_cb_probe_uuid,
.abort_to_error = afs_abort_to_error,
.destructor = afs_cm_destructor, .destructor = afs_cm_destructor,
.work = SRXAFSCB_ProbeUuid, .work = SRXAFSCB_ProbeUuid,
}; };
...@@ -101,7 +96,6 @@ static CM_NAME(TellMeAboutYourself); ...@@ -101,7 +96,6 @@ static CM_NAME(TellMeAboutYourself);
static const struct afs_call_type afs_SRXCBTellMeAboutYourself = { static const struct afs_call_type afs_SRXCBTellMeAboutYourself = {
.name = afs_SRXCBTellMeAboutYourself_name, .name = afs_SRXCBTellMeAboutYourself_name,
.deliver = afs_deliver_cb_tell_me_about_yourself, .deliver = afs_deliver_cb_tell_me_about_yourself,
.abort_to_error = afs_abort_to_error,
.destructor = afs_cm_destructor, .destructor = afs_cm_destructor,
.work = SRXAFSCB_TellMeAboutYourself, .work = SRXAFSCB_TellMeAboutYourself,
}; };
...@@ -127,6 +121,9 @@ bool afs_cm_incoming_call(struct afs_call *call) ...@@ -127,6 +121,9 @@ bool afs_cm_incoming_call(struct afs_call *call)
case CBProbe: case CBProbe:
call->type = &afs_SRXCBProbe; call->type = &afs_SRXCBProbe;
return true; return true;
case CBProbeUuid:
call->type = &afs_SRXCBProbeUuid;
return true;
case CBTellMeAboutYourself: case CBTellMeAboutYourself:
call->type = &afs_SRXCBTellMeAboutYourself; call->type = &afs_SRXCBTellMeAboutYourself;
return true; return true;
...@@ -147,18 +144,16 @@ static void afs_cm_destructor(struct afs_call *call) ...@@ -147,18 +144,16 @@ static void afs_cm_destructor(struct afs_call *call)
* afs_deliver_cb_callback(). * afs_deliver_cb_callback().
*/ */
if (call->unmarshall == 5) { if (call->unmarshall == 5) {
ASSERT(call->server && call->count && call->request); ASSERT(call->cm_server && call->count && call->request);
afs_break_callbacks(call->server, call->count, call->request); afs_break_callbacks(call->cm_server, call->count, call->request);
} }
afs_put_server(call->server);
call->server = NULL;
kfree(call->buffer); kfree(call->buffer);
call->buffer = NULL; call->buffer = NULL;
} }
/* /*
* allow the fileserver to see if the cache manager is still alive * The server supplied a list of callbacks that it wanted to break.
*/ */
static void SRXAFSCB_CallBack(struct work_struct *work) static void SRXAFSCB_CallBack(struct work_struct *work)
{ {
...@@ -173,7 +168,7 @@ static void SRXAFSCB_CallBack(struct work_struct *work) ...@@ -173,7 +168,7 @@ static void SRXAFSCB_CallBack(struct work_struct *work)
* yet */ * yet */
afs_send_empty_reply(call); afs_send_empty_reply(call);
afs_break_callbacks(call->server, call->count, call->request); afs_break_callbacks(call->cm_server, call->count, call->request);
afs_put_call(call); afs_put_call(call);
_leave(""); _leave("");
} }
...@@ -193,7 +188,6 @@ static int afs_deliver_cb_callback(struct afs_call *call) ...@@ -193,7 +188,6 @@ static int afs_deliver_cb_callback(struct afs_call *call)
switch (call->unmarshall) { switch (call->unmarshall) {
case 0: case 0:
rxrpc_kernel_get_peer(afs_socket, call->rxcall, &srx);
call->offset = 0; call->offset = 0;
call->unmarshall++; call->unmarshall++;
...@@ -286,14 +280,16 @@ static int afs_deliver_cb_callback(struct afs_call *call) ...@@ -286,14 +280,16 @@ static int afs_deliver_cb_callback(struct afs_call *call)
break; break;
} }
call->state = AFS_CALL_REPLYING; if (!afs_check_call_state(call, AFS_CALL_SV_REPLYING))
return -EIO;
/* we'll need the file server record as that tells us which set of /* we'll need the file server record as that tells us which set of
* vnodes to operate upon */ * vnodes to operate upon */
server = afs_find_server(&srx); rxrpc_kernel_get_peer(call->net->socket, call->rxcall, &srx);
server = afs_find_server(call->net, &srx);
if (!server) if (!server)
return -ENOTCONN; return -ENOTCONN;
call->server = server; call->cm_server = server;
return afs_queue_call_work(call); return afs_queue_call_work(call);
} }
...@@ -305,9 +301,9 @@ static void SRXAFSCB_InitCallBackState(struct work_struct *work) ...@@ -305,9 +301,9 @@ static void SRXAFSCB_InitCallBackState(struct work_struct *work)
{ {
struct afs_call *call = container_of(work, struct afs_call, work); struct afs_call *call = container_of(work, struct afs_call, work);
_enter("{%p}", call->server); _enter("{%p}", call->cm_server);
afs_init_callback_state(call->server); afs_init_callback_state(call->cm_server);
afs_send_empty_reply(call); afs_send_empty_reply(call);
afs_put_call(call); afs_put_call(call);
_leave(""); _leave("");
...@@ -324,21 +320,18 @@ static int afs_deliver_cb_init_call_back_state(struct afs_call *call) ...@@ -324,21 +320,18 @@ static int afs_deliver_cb_init_call_back_state(struct afs_call *call)
_enter(""); _enter("");
rxrpc_kernel_get_peer(afs_socket, call->rxcall, &srx); rxrpc_kernel_get_peer(call->net->socket, call->rxcall, &srx);
ret = afs_extract_data(call, NULL, 0, false); ret = afs_extract_data(call, NULL, 0, false);
if (ret < 0) if (ret < 0)
return ret; return ret;
/* no unmarshalling required */
call->state = AFS_CALL_REPLYING;
/* we'll need the file server record as that tells us which set of /* we'll need the file server record as that tells us which set of
* vnodes to operate upon */ * vnodes to operate upon */
server = afs_find_server(&srx); server = afs_find_server(call->net, &srx);
if (!server) if (!server)
return -ENOTCONN; return -ENOTCONN;
call->server = server; call->cm_server = server;
return afs_queue_call_work(call); return afs_queue_call_work(call);
} }
...@@ -357,8 +350,6 @@ static int afs_deliver_cb_init_call_back_state3(struct afs_call *call) ...@@ -357,8 +350,6 @@ static int afs_deliver_cb_init_call_back_state3(struct afs_call *call)
_enter(""); _enter("");
rxrpc_kernel_get_peer(afs_socket, call->rxcall, &srx);
_enter("{%u}", call->unmarshall); _enter("{%u}", call->unmarshall);
switch (call->unmarshall) { switch (call->unmarshall) {
...@@ -402,15 +393,16 @@ static int afs_deliver_cb_init_call_back_state3(struct afs_call *call) ...@@ -402,15 +393,16 @@ static int afs_deliver_cb_init_call_back_state3(struct afs_call *call)
break; break;
} }
/* no unmarshalling required */ if (!afs_check_call_state(call, AFS_CALL_SV_REPLYING))
call->state = AFS_CALL_REPLYING; return -EIO;
/* we'll need the file server record as that tells us which set of /* we'll need the file server record as that tells us which set of
* vnodes to operate upon */ * vnodes to operate upon */
server = afs_find_server(&srx); rxrpc_kernel_get_peer(call->net->socket, call->rxcall, &srx);
server = afs_find_server(call->net, &srx);
if (!server) if (!server)
return -ENOTCONN; return -ENOTCONN;
call->server = server; call->cm_server = server;
return afs_queue_call_work(call); return afs_queue_call_work(call);
} }
...@@ -441,8 +433,8 @@ static int afs_deliver_cb_probe(struct afs_call *call) ...@@ -441,8 +433,8 @@ static int afs_deliver_cb_probe(struct afs_call *call)
if (ret < 0) if (ret < 0)
return ret; return ret;
/* no unmarshalling required */ if (!afs_check_call_state(call, AFS_CALL_SV_REPLYING))
call->state = AFS_CALL_REPLYING; return -EIO;
return afs_queue_call_work(call); return afs_queue_call_work(call);
} }
...@@ -461,7 +453,7 @@ static void SRXAFSCB_ProbeUuid(struct work_struct *work) ...@@ -461,7 +453,7 @@ static void SRXAFSCB_ProbeUuid(struct work_struct *work)
_enter(""); _enter("");
if (memcmp(r, &afs_uuid, sizeof(afs_uuid)) == 0) if (memcmp(r, &call->net->uuid, sizeof(call->net->uuid)) == 0)
reply.match = htonl(0); reply.match = htonl(0);
else else
reply.match = htonl(1); reply.match = htonl(1);
...@@ -524,7 +516,8 @@ static int afs_deliver_cb_probe_uuid(struct afs_call *call) ...@@ -524,7 +516,8 @@ static int afs_deliver_cb_probe_uuid(struct afs_call *call)
break; break;
} }
call->state = AFS_CALL_REPLYING; if (!afs_check_call_state(call, AFS_CALL_SV_REPLYING))
return -EIO;
return afs_queue_call_work(call); return afs_queue_call_work(call);
} }
...@@ -568,13 +561,13 @@ static void SRXAFSCB_TellMeAboutYourself(struct work_struct *work) ...@@ -568,13 +561,13 @@ static void SRXAFSCB_TellMeAboutYourself(struct work_struct *work)
memset(&reply, 0, sizeof(reply)); memset(&reply, 0, sizeof(reply));
reply.ia.nifs = htonl(nifs); reply.ia.nifs = htonl(nifs);
reply.ia.uuid[0] = afs_uuid.time_low; reply.ia.uuid[0] = call->net->uuid.time_low;
reply.ia.uuid[1] = htonl(ntohs(afs_uuid.time_mid)); reply.ia.uuid[1] = htonl(ntohs(call->net->uuid.time_mid));
reply.ia.uuid[2] = htonl(ntohs(afs_uuid.time_hi_and_version)); reply.ia.uuid[2] = htonl(ntohs(call->net->uuid.time_hi_and_version));
reply.ia.uuid[3] = htonl((s8) afs_uuid.clock_seq_hi_and_reserved); reply.ia.uuid[3] = htonl((s8) call->net->uuid.clock_seq_hi_and_reserved);
reply.ia.uuid[4] = htonl((s8) afs_uuid.clock_seq_low); reply.ia.uuid[4] = htonl((s8) call->net->uuid.clock_seq_low);
for (loop = 0; loop < 6; loop++) for (loop = 0; loop < 6; loop++)
reply.ia.uuid[loop + 5] = htonl((s8) afs_uuid.node[loop]); reply.ia.uuid[loop + 5] = htonl((s8) call->net->uuid.node[loop]);
if (ifs) { if (ifs) {
for (loop = 0; loop < nifs; loop++) { for (loop = 0; loop < nifs; loop++) {
...@@ -605,8 +598,8 @@ static int afs_deliver_cb_tell_me_about_yourself(struct afs_call *call) ...@@ -605,8 +598,8 @@ static int afs_deliver_cb_tell_me_about_yourself(struct afs_call *call)
if (ret < 0) if (ret < 0)
return ret; return ret;
/* no unmarshalling required */ if (!afs_check_call_state(call, AFS_CALL_SV_REPLYING))
call->state = AFS_CALL_REPLYING; return -EIO;
return afs_queue_call_work(call); return afs_queue_call_work(call);
} }
...@@ -19,11 +19,11 @@ ...@@ -19,11 +19,11 @@
#include <linux/task_io_accounting_ops.h> #include <linux/task_io_accounting_ops.h>
#include "internal.h" #include "internal.h"
static int afs_file_mmap(struct file *file, struct vm_area_struct *vma);
static int afs_readpage(struct file *file, struct page *page); static int afs_readpage(struct file *file, struct page *page);
static void afs_invalidatepage(struct page *page, unsigned int offset, static void afs_invalidatepage(struct page *page, unsigned int offset,
unsigned int length); unsigned int length);
static int afs_releasepage(struct page *page, gfp_t gfp_flags); static int afs_releasepage(struct page *page, gfp_t gfp_flags);
static int afs_launder_page(struct page *page);
static int afs_readpages(struct file *filp, struct address_space *mapping, static int afs_readpages(struct file *filp, struct address_space *mapping,
struct list_head *pages, unsigned nr_pages); struct list_head *pages, unsigned nr_pages);
...@@ -35,7 +35,7 @@ const struct file_operations afs_file_operations = { ...@@ -35,7 +35,7 @@ const struct file_operations afs_file_operations = {
.llseek = generic_file_llseek, .llseek = generic_file_llseek,
.read_iter = generic_file_read_iter, .read_iter = generic_file_read_iter,
.write_iter = afs_file_write, .write_iter = afs_file_write,
.mmap = generic_file_readonly_mmap, .mmap = afs_file_mmap,
.splice_read = generic_file_splice_read, .splice_read = generic_file_splice_read,
.fsync = afs_fsync, .fsync = afs_fsync,
.lock = afs_lock, .lock = afs_lock,
...@@ -62,12 +62,63 @@ const struct address_space_operations afs_fs_aops = { ...@@ -62,12 +62,63 @@ const struct address_space_operations afs_fs_aops = {
.writepages = afs_writepages, .writepages = afs_writepages,
}; };
static const struct vm_operations_struct afs_vm_ops = {
.fault = filemap_fault,
.map_pages = filemap_map_pages,
.page_mkwrite = afs_page_mkwrite,
};
/*
* Discard a pin on a writeback key.
*/
void afs_put_wb_key(struct afs_wb_key *wbk)
{
if (refcount_dec_and_test(&wbk->usage)) {
key_put(wbk->key);
kfree(wbk);
}
}
/*
* Cache key for writeback.
*/
int afs_cache_wb_key(struct afs_vnode *vnode, struct afs_file *af)
{
struct afs_wb_key *wbk, *p;
wbk = kzalloc(sizeof(struct afs_wb_key), GFP_KERNEL);
if (!wbk)
return -ENOMEM;
refcount_set(&wbk->usage, 2);
wbk->key = af->key;
spin_lock(&vnode->wb_lock);
list_for_each_entry(p, &vnode->wb_keys, vnode_link) {
if (p->key == wbk->key)
goto found;
}
key_get(wbk->key);
list_add_tail(&wbk->vnode_link, &vnode->wb_keys);
spin_unlock(&vnode->wb_lock);
af->wb = wbk;
return 0;
found:
refcount_inc(&p->usage);
spin_unlock(&vnode->wb_lock);
af->wb = p;
kfree(wbk);
return 0;
}
/* /*
* open an AFS file or directory and attach a key to it * open an AFS file or directory and attach a key to it
*/ */
int afs_open(struct inode *inode, struct file *file) int afs_open(struct inode *inode, struct file *file)
{ {
struct afs_vnode *vnode = AFS_FS_I(inode); struct afs_vnode *vnode = AFS_FS_I(inode);
struct afs_file *af;
struct key *key; struct key *key;
int ret; int ret;
...@@ -75,19 +126,38 @@ int afs_open(struct inode *inode, struct file *file) ...@@ -75,19 +126,38 @@ int afs_open(struct inode *inode, struct file *file)
key = afs_request_key(vnode->volume->cell); key = afs_request_key(vnode->volume->cell);
if (IS_ERR(key)) { if (IS_ERR(key)) {
_leave(" = %ld [key]", PTR_ERR(key)); ret = PTR_ERR(key);
return PTR_ERR(key); goto error;
} }
ret = afs_validate(vnode, key); af = kzalloc(sizeof(*af), GFP_KERNEL);
if (ret < 0) { if (!af) {
_leave(" = %d [val]", ret); ret = -ENOMEM;
return ret; goto error_key;
} }
af->key = key;
ret = afs_validate(vnode, key);
if (ret < 0)
goto error_af;
file->private_data = key; if (file->f_mode & FMODE_WRITE) {
ret = afs_cache_wb_key(vnode, af);
if (ret < 0)
goto error_af;
}
file->private_data = af;
_leave(" = 0"); _leave(" = 0");
return 0; return 0;
error_af:
kfree(af);
error_key:
key_put(key);
error:
_leave(" = %d", ret);
return ret;
} }
/* /*
...@@ -96,10 +166,16 @@ int afs_open(struct inode *inode, struct file *file) ...@@ -96,10 +166,16 @@ int afs_open(struct inode *inode, struct file *file)
int afs_release(struct inode *inode, struct file *file) int afs_release(struct inode *inode, struct file *file)
{ {
struct afs_vnode *vnode = AFS_FS_I(inode); struct afs_vnode *vnode = AFS_FS_I(inode);
struct afs_file *af = file->private_data;
_enter("{%x:%u},", vnode->fid.vid, vnode->fid.vnode); _enter("{%x:%u},", vnode->fid.vid, vnode->fid.vnode);
key_put(file->private_data); file->private_data = NULL;
if (af->wb)
afs_put_wb_key(af->wb);
key_put(af->key);
kfree(af);
afs_prune_wb_keys(vnode);
_leave(" = 0"); _leave(" = 0");
return 0; return 0;
} }
...@@ -137,6 +213,37 @@ static void afs_file_readpage_read_complete(struct page *page, ...@@ -137,6 +213,37 @@ static void afs_file_readpage_read_complete(struct page *page,
} }
#endif #endif
/*
* Fetch file data from the volume.
*/
int afs_fetch_data(struct afs_vnode *vnode, struct key *key, struct afs_read *desc)
{
struct afs_fs_cursor fc;
int ret;
_enter("%s{%x:%u.%u},%x,,,",
vnode->volume->name,
vnode->fid.vid,
vnode->fid.vnode,
vnode->fid.unique,
key_serial(key));
ret = -ERESTARTSYS;
if (afs_begin_vnode_operation(&fc, vnode, key)) {
while (afs_select_fileserver(&fc)) {
fc.cb_break = vnode->cb_break + vnode->cb_s_break;
afs_fs_fetch_data(&fc, desc);
}
afs_check_for_remote_deletion(&fc, fc.vnode);
afs_vnode_commit_status(&fc, vnode, fc.cb_break);
ret = afs_end_vnode_operation(&fc);
}
_leave(" = %d", ret);
return ret;
}
/* /*
* read page from file, directory or symlink, given a key to use * read page from file, directory or symlink, given a key to use
*/ */
...@@ -199,8 +306,13 @@ int afs_page_filler(void *data, struct page *page) ...@@ -199,8 +306,13 @@ int afs_page_filler(void *data, struct page *page)
/* read the contents of the file from the server into the /* read the contents of the file from the server into the
* page */ * page */
ret = afs_vnode_fetch_data(vnode, key, req); ret = afs_fetch_data(vnode, key, req);
afs_put_read(req); afs_put_read(req);
if (ret >= 0 && S_ISDIR(inode->i_mode) &&
!afs_dir_check_page(inode, page))
ret = -EIO;
if (ret < 0) { if (ret < 0) {
if (ret == -ENOENT) { if (ret == -ENOENT) {
_debug("got NOENT from server" _debug("got NOENT from server"
...@@ -259,12 +371,12 @@ static int afs_readpage(struct file *file, struct page *page) ...@@ -259,12 +371,12 @@ static int afs_readpage(struct file *file, struct page *page)
int ret; int ret;
if (file) { if (file) {
key = file->private_data; key = afs_file_key(file);
ASSERT(key != NULL); ASSERT(key != NULL);
ret = afs_page_filler(key, page); ret = afs_page_filler(key, page);
} else { } else {
struct inode *inode = page->mapping->host; struct inode *inode = page->mapping->host;
key = afs_request_key(AFS_FS_S(inode->i_sb)->volume->cell); key = afs_request_key(AFS_FS_S(inode->i_sb)->cell);
if (IS_ERR(key)) { if (IS_ERR(key)) {
ret = PTR_ERR(key); ret = PTR_ERR(key);
} else { } else {
@@ -281,7 +393,7 @@ static int afs_readpage(struct file *file, struct page *page)
static void afs_readpages_page_done(struct afs_call *call, struct afs_read *req)
{
#ifdef CONFIG_AFS_FSCACHE
struct afs_vnode *vnode = call->reply[0];
#endif
struct page *page = req->pages[req->index];
@@ -310,7 +422,7 @@ static int afs_readpages_one(struct file *file, struct address_space *mapping,
struct afs_read *req;
struct list_head *p;
struct page *first, *page;
struct key *key = afs_file_key(file);
pgoff_t index;
int ret, n, i;
@@ -369,7 +481,7 @@ static int afs_readpages_one(struct file *file, struct address_space *mapping,
return 0;
}
ret = afs_fetch_data(vnode, key, req);
if (ret < 0)
goto error;
@@ -406,7 +518,7 @@ static int afs_readpages_one(struct file *file, struct address_space *mapping,
static int afs_readpages(struct file *file, struct address_space *mapping,
struct list_head *pages, unsigned nr_pages)
{
struct key *key = afs_file_key(file);
struct afs_vnode *vnode;
int ret = 0;
@@ -463,16 +575,6 @@ static int afs_readpages(struct file *file, struct address_space *mapping,
return ret;
}
/*
* write back a dirty page
*/
static int afs_launder_page(struct page *page)
{
_enter("{%lu}", page->index);
return 0;
}
/*
* invalidate part or all of a page
* - release a page and clean up its private data if offset is 0 (indicating
@@ -481,7 +583,8 @@ static int afs_launder_page(struct page *page)
static void afs_invalidatepage(struct page *page, unsigned int offset,
unsigned int length)
{
struct afs_vnode *vnode = AFS_FS_I(page->mapping->host);
unsigned long priv;
_enter("{%lu},%u,%u", page->index, offset, length);
@@ -498,13 +601,11 @@ static void afs_invalidatepage(struct page *page, unsigned int offset,
#endif
if (PagePrivate(page)) {
priv = page_private(page);
trace_afs_page_dirty(vnode, tracepoint_string("inval"),
page->index, priv);
set_page_private(page, 0);
ClearPagePrivate(page);
}
}
@@ -517,8 +618,8 @@ static void afs_invalidatepage(struct page *page, unsigned int offset,
*/
static int afs_releasepage(struct page *page, gfp_t gfp_flags)
{
struct afs_vnode *vnode = AFS_FS_I(page->mapping->host);
unsigned long priv;
_enter("{{%x:%u}[%lu],%lx},%x",
vnode->fid.vid, vnode->fid.vnode, page->index, page->flags,
@@ -534,10 +635,10 @@ static int afs_releasepage(struct page *page, gfp_t gfp_flags)
#endif
if (PagePrivate(page)) {
priv = page_private(page);
trace_afs_page_dirty(vnode, tracepoint_string("rel"),
page->index, priv);
set_page_private(page, 0);
ClearPagePrivate(page);
}
@@ -545,3 +646,16 @@ static int afs_releasepage(struct page *page, gfp_t gfp_flags)
_leave(" = T");
return 1;
}
/*
* Handle setting up a memory mapping on an AFS file.
*/
static int afs_file_mmap(struct file *file, struct vm_area_struct *vma)
{
int ret;
ret = generic_file_mmap(file, vma);
if (ret == 0)
vma->vm_ops = &afs_vm_ops;
return ret;
}
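For this handler to take effect it presumably gets hooked into the AFS file operations table; a sketch of that wiring follows (only .mmap is confirmed by this excerpt, the neighbouring members are an assumption):

const struct file_operations afs_file_operations = {
	.open		= afs_open,
	.release	= afs_release,
	.llseek		= generic_file_llseek,
	.read_iter	= generic_file_read_iter,
	.write_iter	= afs_file_write,
	.mmap		= afs_file_mmap,	/* the new handler above */
	/* other members elided */
};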
@@ -14,47 +14,16 @@
#define AFS_LOCK_GRANTED 0
#define AFS_LOCK_PENDING 1
struct workqueue_struct *afs_lock_manager;
static void afs_fl_copy_lock(struct file_lock *new, struct file_lock *fl);
static void afs_fl_release_private(struct file_lock *fl);
static struct workqueue_struct *afs_lock_manager;
static DEFINE_MUTEX(afs_lock_manager_mutex);
static const struct file_lock_operations afs_lock_ops = {
.fl_copy_lock = afs_fl_copy_lock,
.fl_release_private = afs_fl_release_private,
};
/*
* initialise the lock manager thread if it isn't already running
*/
static int afs_init_lock_manager(void)
{
int ret;
ret = 0;
if (!afs_lock_manager) {
mutex_lock(&afs_lock_manager_mutex);
if (!afs_lock_manager) {
afs_lock_manager = alloc_workqueue("kafs_lockd",
WQ_MEM_RECLAIM, 0);
if (!afs_lock_manager)
ret = -ENOMEM;
}
mutex_unlock(&afs_lock_manager_mutex);
}
return ret;
}
/*
* destroy the lock manager thread if it's running
*/
void __exit afs_kill_lock_manager(void)
{
if (afs_lock_manager)
destroy_workqueue(afs_lock_manager);
}
/*
* if the callback is broken on this vnode, then the lock may now be available
*/
@@ -98,6 +67,100 @@ static void afs_grant_locks(struct afs_vnode *vnode, struct file_lock *fl)
}
}
/*
* Get a lock on a file
*/
static int afs_set_lock(struct afs_vnode *vnode, struct key *key,
afs_lock_type_t type)
{
struct afs_fs_cursor fc;
int ret;
_enter("%s{%x:%u.%u},%x,%u",
vnode->volume->name,
vnode->fid.vid,
vnode->fid.vnode,
vnode->fid.unique,
key_serial(key), type);
ret = -ERESTARTSYS;
if (afs_begin_vnode_operation(&fc, vnode, key)) {
while (afs_select_fileserver(&fc)) {
fc.cb_break = vnode->cb_break + vnode->cb_s_break;
afs_fs_set_lock(&fc, type);
}
afs_check_for_remote_deletion(&fc, fc.vnode);
afs_vnode_commit_status(&fc, vnode, fc.cb_break);
ret = afs_end_vnode_operation(&fc);
}
_leave(" = %d", ret);
return ret;
}
/*
* Extend a lock on a file
*/
static int afs_extend_lock(struct afs_vnode *vnode, struct key *key)
{
struct afs_fs_cursor fc;
int ret;
_enter("%s{%x:%u.%u},%x",
vnode->volume->name,
vnode->fid.vid,
vnode->fid.vnode,
vnode->fid.unique,
key_serial(key));
ret = -ERESTARTSYS;
if (afs_begin_vnode_operation(&fc, vnode, key)) {
while (afs_select_current_fileserver(&fc)) {
fc.cb_break = vnode->cb_break + vnode->cb_s_break;
afs_fs_extend_lock(&fc);
}
afs_check_for_remote_deletion(&fc, fc.vnode);
afs_vnode_commit_status(&fc, vnode, fc.cb_break);
ret = afs_end_vnode_operation(&fc);
}
_leave(" = %d", ret);
return ret;
}
/*
* Release a lock on a file
*/
static int afs_release_lock(struct afs_vnode *vnode, struct key *key)
{
struct afs_fs_cursor fc;
int ret;
_enter("%s{%x:%u.%u},%x",
vnode->volume->name,
vnode->fid.vid,
vnode->fid.vnode,
vnode->fid.unique,
key_serial(key));
ret = -ERESTARTSYS;
if (afs_begin_vnode_operation(&fc, vnode, key)) {
while (afs_select_current_fileserver(&fc)) {
fc.cb_break = vnode->cb_break + vnode->cb_s_break;
afs_fs_release_lock(&fc);
}
afs_check_for_remote_deletion(&fc, fc.vnode);
afs_vnode_commit_status(&fc, vnode, fc.cb_break);
ret = afs_end_vnode_operation(&fc);
}
_leave(" = %d", ret);
return ret;
}
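afs_extend_lock() and afs_release_lock() above follow an identical cursor pattern; purely as an illustration of that shape (this wrapper does not exist in the code, and afs_set_lock() differs by rotating with afs_select_fileserver() and passing a lock type):

/* Illustration only: the shared shape of the two current-server lock ops. */
static int afs_example_lock_op(struct afs_vnode *vnode, struct key *key,
			       void (*op)(struct afs_fs_cursor *fc))
{
	struct afs_fs_cursor fc;
	int ret = -ERESTARTSYS;

	if (afs_begin_vnode_operation(&fc, vnode, key)) {
		while (afs_select_current_fileserver(&fc)) {
			/* Snapshot the callback-break counters so a break that
			 * arrives during the RPC is noticed on commit. */
			fc.cb_break = vnode->cb_break + vnode->cb_s_break;
			op(&fc);
		}
		afs_check_for_remote_deletion(&fc, fc.vnode);
		afs_vnode_commit_status(&fc, vnode, fc.cb_break);
		ret = afs_end_vnode_operation(&fc);
	}
	return ret;
}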
/*
* do work for a lock, including:
* - probing for a lock we're waiting on but didn't get immediately
@@ -122,7 +185,7 @@ void afs_lock_work(struct work_struct *work)
/* attempt to release the server lock; if it fails, we just
* wait 5 minutes and it'll time out anyway */
ret = afs_release_lock(vnode, vnode->unlock_key);
if (ret < 0)
printk(KERN_WARNING "AFS:"
" Failed to release lock on {%x:%x} error %d\n",
@@ -143,10 +206,10 @@ void afs_lock_work(struct work_struct *work)
BUG();
fl = list_entry(vnode->granted_locks.next,
struct file_lock, fl_u.afs.link);
key = key_get(afs_file_key(fl->fl_file));
spin_unlock(&vnode->lock);
ret = afs_extend_lock(vnode, key);
clear_bit(AFS_VNODE_LOCKING, &vnode->flags);
key_put(key);
switch (ret) {
@@ -177,12 +240,12 @@ void afs_lock_work(struct work_struct *work)
BUG();
fl = list_entry(vnode->pending_locks.next,
struct file_lock, fl_u.afs.link);
key = key_get(afs_file_key(fl->fl_file));
type = (fl->fl_type == F_RDLCK) ?
AFS_LOCK_READ : AFS_LOCK_WRITE;
spin_unlock(&vnode->lock);
ret = afs_set_lock(vnode, key, type);
clear_bit(AFS_VNODE_LOCKING, &vnode->flags);
switch (ret) {
case -EWOULDBLOCK:
@@ -213,7 +276,7 @@ void afs_lock_work(struct work_struct *work)
clear_bit(AFS_VNODE_READLOCKED, &vnode->flags);
clear_bit(AFS_VNODE_WRITELOCKED, &vnode->flags);
spin_unlock(&vnode->lock);
afs_release_lock(vnode, key);
if (!list_empty(&vnode->pending_locks))
afs_lock_may_be_available(vnode);
}
@@ -255,7 +318,7 @@ static int afs_do_setlk(struct file *file, struct file_lock *fl)
struct inode *inode = file_inode(file);
struct afs_vnode *vnode = AFS_FS_I(inode);
afs_lock_type_t type;
struct key *key = afs_file_key(file);
int ret;
_enter("{%x:%u},%u", vnode->fid.vid, vnode->fid.vnode, fl->fl_type);
@@ -264,10 +327,6 @@ static int afs_do_setlk(struct file *file, struct file_lock *fl)
if (fl->fl_start != 0 || fl->fl_end != OFFSET_MAX)
return -EINVAL;
ret = afs_init_lock_manager();
if (ret < 0)
return ret;
fl->fl_ops = &afs_lock_ops;
INIT_LIST_HEAD(&fl->fl_u.afs.link);
fl->fl_u.afs.state = AFS_LOCK_PENDING;
@@ -278,7 +337,7 @@ static int afs_do_setlk(struct file *file, struct file_lock *fl)
/* make sure we've got a callback on this file and that our view of the
* data version is up to date */
ret = afs_validate(vnode, key);
if (ret < 0)
goto error;
@@ -315,7 +374,7 @@ static int afs_do_setlk(struct file *file, struct file_lock *fl)
set_bit(AFS_VNODE_LOCKING, &vnode->flags);
spin_unlock(&vnode->lock);
ret = afs_set_lock(vnode, key, type);
clear_bit(AFS_VNODE_LOCKING, &vnode->flags);
switch (ret) {
case 0:
@@ -418,7 +477,7 @@ static int afs_do_setlk(struct file *file, struct file_lock *fl)
/* again, make sure we've got a callback on this file and, again, make
* sure that our view of the data version is up to date (we ignore
* errors incurred here and deal with the consequences elsewhere) */
afs_validate(vnode, key);
error:
spin_unlock(&inode->i_lock);
@@ -441,7 +500,7 @@ static int afs_do_setlk(struct file *file, struct file_lock *fl)
static int afs_do_unlk(struct file *file, struct file_lock *fl)
{
struct afs_vnode *vnode = AFS_FS_I(file->f_mapping->host);
struct key *key = afs_file_key(file);
int ret;
_enter("{%x:%u},%u", vnode->fid.vid, vnode->fid.vnode, fl->fl_type);
@@ -476,7 +535,7 @@ static int afs_do_unlk(struct file *file, struct file_lock *fl)
static int afs_do_getlk(struct file *file, struct file_lock *fl)
{
struct afs_vnode *vnode = AFS_FS_I(file->f_mapping->host);
struct key *key = afs_file_key(file);
int ret, lock_count;
_enter("");
@@ -490,7 +549,7 @@ static int afs_do_getlk(struct file *file, struct file_lock *fl)
posix_test_lock(file, fl);
if (fl->fl_type == F_UNLCK) {
/* no local locks; consult the server */
ret = afs_fetch_status(vnode, key);
if (ret < 0)
goto error;
lock_count = vnode->status.lock_count;
...
@@ -31,57 +31,112 @@ static char *rootcell;
module_param(rootcell, charp, 0);
MODULE_PARM_DESC(rootcell, "root AFS cell name and VL server IP addr list");
struct afs_uuid afs_uuid;
struct workqueue_struct *afs_wq;
struct afs_net __afs_net;
/*
* Initialise an AFS network namespace record.
*/
static int __net_init afs_net_init(struct afs_net *net)
{
int ret;
net->live = true;
generate_random_uuid((unsigned char *)&net->uuid);
INIT_WORK(&net->charge_preallocation_work, afs_charge_preallocation);
mutex_init(&net->socket_mutex);
net->cells = RB_ROOT;
seqlock_init(&net->cells_lock);
INIT_WORK(&net->cells_manager, afs_manage_cells);
timer_setup(&net->cells_timer, afs_cells_timer, 0);
spin_lock_init(&net->proc_cells_lock);
INIT_LIST_HEAD(&net->proc_cells);
seqlock_init(&net->fs_lock);
net->fs_servers = RB_ROOT;
INIT_LIST_HEAD(&net->fs_updates);
INIT_HLIST_HEAD(&net->fs_proc);
INIT_HLIST_HEAD(&net->fs_addresses4);
INIT_HLIST_HEAD(&net->fs_addresses6);
seqlock_init(&net->fs_addr_lock);
INIT_WORK(&net->fs_manager, afs_manage_servers);
timer_setup(&net->fs_timer, afs_servers_timer, 0);
/* Register the /proc stuff */
ret = afs_proc_init(net);
if (ret < 0)
goto error_proc;
/* Initialise the cell DB */
ret = afs_cell_init(net, rootcell);
if (ret < 0)
goto error_cell_init;
/* Create the RxRPC transport */
ret = afs_open_socket(net);
if (ret < 0)
goto error_open_socket;
return 0;
error_open_socket:
net->live = false;
afs_cell_purge(net);
afs_purge_servers(net);
error_cell_init:
net->live = false;
afs_proc_cleanup(net);
error_proc:
net->live = false;
return ret;
}
/*
* Clean up and destroy an AFS network namespace record.
*/
static void __net_exit afs_net_exit(struct afs_net *net)
{
net->live = false;
afs_cell_purge(net);
afs_purge_servers(net);
afs_close_socket(net);
afs_proc_cleanup(net);
}
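afs_net_init() and afs_net_exit() carry __net_init/__net_exit annotations but, as the afs_init()/afs_exit() changes below show, they are currently driven directly on the single global __afs_net. A hypothetical sketch of how they could later be attached to the standard pernet machinery (none of this is in the patch; net_get_afs() in particular is an invented accessor):

static int __net_init afs_pernet_init(struct net *net_ns)
{
	/* net_get_afs() is hypothetical: it would map the namespace to its
	 * per-namespace struct afs_net. */
	return afs_net_init(net_get_afs(net_ns));
}

static void __net_exit afs_pernet_exit(struct net *net_ns)
{
	afs_net_exit(net_get_afs(net_ns));
}

static struct pernet_operations afs_net_ops = {
	.init = afs_pernet_init,
	.exit = afs_pernet_exit,
};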
/*
* initialise the AFS client FS module
*/
static int __init afs_init(void)
{
int ret = -ENOMEM;
printk(KERN_INFO "kAFS: Red Hat AFS client v0.1 registering.\n");
afs_wq = alloc_workqueue("afs", 0, 0);
if (!afs_wq)
goto error_afs_wq;
afs_async_calls = alloc_workqueue("kafsd", WQ_MEM_RECLAIM, 0);
if (!afs_async_calls)
goto error_async;
afs_lock_manager = alloc_workqueue("kafs_lockd", WQ_MEM_RECLAIM, 0);
if (!afs_lock_manager)
goto error_lockmgr;
#ifdef CONFIG_AFS_FSCACHE
/* we want to be able to cache */
ret = fscache_register_netfs(&afs_cache_netfs);
if (ret < 0) if (ret < 0)
goto error_callback_update_init; goto error_cache;
#endif
/* create the RxRPC transport */ ret = afs_net_init(&__afs_net);
ret = afs_open_socket();
if (ret < 0) if (ret < 0)
goto error_open_socket; goto error_net;
/* register the filesystems */ /* register the filesystems */
ret = afs_fs_init(); ret = afs_fs_init();
...@@ -91,21 +146,18 @@ static int __init afs_init(void) ...@@ -91,21 +146,18 @@ static int __init afs_init(void)
return ret; return ret;
error_fs: error_fs:
afs_close_socket(); afs_net_exit(&__afs_net);
error_open_socket: error_net:
afs_callback_update_kill();
error_callback_update_init:
afs_vlocation_purge();
error_vl_update_init:
afs_cell_purge();
error_cell_init:
#ifdef CONFIG_AFS_FSCACHE #ifdef CONFIG_AFS_FSCACHE
fscache_unregister_netfs(&afs_cache_netfs); fscache_unregister_netfs(&afs_cache_netfs);
error_cache: error_cache:
#endif #endif
afs_proc_cleanup(); destroy_workqueue(afs_lock_manager);
error_proc: error_lockmgr:
destroy_workqueue(afs_async_calls);
error_async:
destroy_workqueue(afs_wq); destroy_workqueue(afs_wq);
error_afs_wq:
rcu_barrier(); rcu_barrier();
printk(KERN_ERR "kAFS: failed to register: %d\n", ret); printk(KERN_ERR "kAFS: failed to register: %d\n", ret);
return ret; return ret;
@@ -124,17 +176,14 @@ static void __exit afs_exit(void)
printk(KERN_INFO "kAFS: Red Hat AFS client v0.1 unregistering.\n");
afs_fs_exit();
afs_net_exit(&__afs_net);
#ifdef CONFIG_AFS_FSCACHE
fscache_unregister_netfs(&afs_cache_netfs);
#endif
destroy_workqueue(afs_lock_manager);
destroy_workqueue(afs_async_calls);
destroy_workqueue(afs_wq);
afs_clean_up_permit_cache();
rcu_barrier();
}
...
@@ -21,12 +21,12 @@
int afs_abort_to_error(u32 abort_code)
{
switch (abort_code) {
/* Low errno codes inserted into abort namespace */
case 13: return -EACCES;
case 27: return -EFBIG;
case 30: return -EROFS;
/* VICE "special error" codes; 101 - 111 */
case VSALVAGE: return -EIO;
case VNOVNODE: return -ENOENT;
case VNOVOL: return -ENOMEDIUM;
@@ -39,7 +39,37 @@ int afs_abort_to_error(u32 abort_code)
case VBUSY: return -EBUSY;
case VMOVED: return -ENXIO;
/* Volume Location server errors */
case AFSVL_IDEXIST: return -EEXIST;
case AFSVL_IO: return -EREMOTEIO;
case AFSVL_NAMEEXIST: return -EEXIST;
case AFSVL_CREATEFAIL: return -EREMOTEIO;
case AFSVL_NOENT: return -ENOMEDIUM;
case AFSVL_EMPTY: return -ENOMEDIUM;
case AFSVL_ENTDELETED: return -ENOMEDIUM;
case AFSVL_BADNAME: return -EINVAL;
case AFSVL_BADINDEX: return -EINVAL;
case AFSVL_BADVOLTYPE: return -EINVAL;
case AFSVL_BADSERVER: return -EINVAL;
case AFSVL_BADPARTITION: return -EINVAL;
case AFSVL_REPSFULL: return -EFBIG;
case AFSVL_NOREPSERVER: return -ENOENT;
case AFSVL_DUPREPSERVER: return -EEXIST;
case AFSVL_RWNOTFOUND: return -ENOENT;
case AFSVL_BADREFCOUNT: return -EINVAL;
case AFSVL_SIZEEXCEEDED: return -EINVAL;
case AFSVL_BADENTRY: return -EINVAL;
case AFSVL_BADVOLIDBUMP: return -EINVAL;
case AFSVL_IDALREADYHASHED: return -EINVAL;
case AFSVL_ENTRYLOCKED: return -EBUSY;
case AFSVL_BADVOLOPER: return -EBADRQC;
case AFSVL_BADRELLOCKTYPE: return -EINVAL;
case AFSVL_RERELEASE: return -EREMOTEIO;
case AFSVL_BADSERVERFLAG: return -EINVAL;
case AFSVL_PERM: return -EACCES;
case AFSVL_NOMEM: return -EREMOTEIO;
/* Unified AFS error table; ET "uae" == 0x2f6df00 */
case 0x2f6df00: return -EPERM;
case 0x2f6df01: return -ENOENT;
case 0x2f6df04: return -EIO;
@@ -68,7 +98,7 @@ int afs_abort_to_error(u32 abort_code)
case 0x2f6df6c: return -ETIMEDOUT;
case 0x2f6df78: return -EDQUOT;
/* RXKAD abort codes; from include/rxrpc/packet.h. ET "RXK" == 0x1260B00 */
case RXKADINCONSISTENCY: return -EPROTO;
case RXKADPACKETSHORT: return -EPROTO;
case RXKADLEVELFAIL: return -EKEYREJECTED;
...
/* AFS fileserver list management.
*
* Copyright (C) 2017 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/kernel.h>
#include <linux/slab.h>
#include "internal.h"
void afs_put_serverlist(struct afs_net *net, struct afs_server_list *slist)
{
int i;
if (refcount_dec_and_test(&slist->usage)) {
for (i = 0; i < slist->nr_servers; i++) {
afs_put_cb_interest(net, slist->servers[i].cb_interest);
afs_put_server(net, slist->servers[i].server);
}
kfree(slist);
}
}
/*
* Build a server list from a VLDB record.
*/
struct afs_server_list *afs_alloc_server_list(struct afs_cell *cell,
struct key *key,
struct afs_vldb_entry *vldb,
u8 type_mask)
{
struct afs_server_list *slist;
struct afs_server *server;
int ret = -ENOMEM, nr_servers = 0, i, j;
for (i = 0; i < vldb->nr_servers; i++)
if (vldb->fs_mask[i] & type_mask)
nr_servers++;
slist = kzalloc(sizeof(struct afs_server_list) +
sizeof(struct afs_server_entry) * nr_servers,
GFP_KERNEL);
if (!slist)
goto error;
refcount_set(&slist->usage, 1);
/* Make sure a record exists for each server in the list. */
for (i = 0; i < vldb->nr_servers; i++) {
if (!(vldb->fs_mask[i] & type_mask))
continue;
server = afs_lookup_server(cell, key, &vldb->fs_server[i]);
if (IS_ERR(server)) {
ret = PTR_ERR(server);
if (ret == -ENOENT)
continue;
goto error_2;
}
/* Insertion-sort by server pointer */
for (j = 0; j < slist->nr_servers; j++)
if (slist->servers[j].server >= server)
break;
if (j < slist->nr_servers) {
if (slist->servers[j].server == server) {
afs_put_server(cell->net, server);
continue;
}
memmove(slist->servers + j + 1,
slist->servers + j,
(slist->nr_servers - j) * sizeof(struct afs_server_entry));
}
slist->servers[j].server = server;
slist->nr_servers++;
}
if (slist->nr_servers == 0) {
ret = -EDESTADDRREQ;
goto error_2;
}
return slist;
error_2:
afs_put_serverlist(cell->net, slist);
error:
return ERR_PTR(ret);
}
/*
* Copy the annotations from an old server list to its potential replacement.
*/
bool afs_annotate_server_list(struct afs_server_list *new,
struct afs_server_list *old)
{
struct afs_server *cur;
int i, j;
if (old->nr_servers != new->nr_servers)
goto changed;
for (i = 0; i < old->nr_servers; i++)
if (old->servers[i].server != new->servers[i].server)
goto changed;
return false;
changed:
/* Maintain the same current server as before if possible. */
cur = old->servers[old->index].server;
for (j = 0; j < new->nr_servers; j++) {
if (new->servers[j].server == cur) {
new->index = j;
break;
}
}
/* Keep the old callback interest records where possible so that we
* maintain callback interception.
*/
i = 0;
j = 0;
while (i < old->nr_servers && j < new->nr_servers) {
if (new->servers[j].server == old->servers[i].server) {
struct afs_cb_interest *cbi = old->servers[i].cb_interest;
if (cbi) {
new->servers[j].cb_interest = cbi;
refcount_inc(&cbi->usage);
}
i++;
j++;
continue;
}
if (new->servers[j].server < old->servers[i].server) {
j++;
continue;
}
i++;
continue;
}
return true;
}
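A caller of afs_alloc_server_list() would typically compare the result against the list it is replacing and keep whichever still matches; a hedged sketch of that usage follows (the function name and the **slot parameter are illustrative, only the three helpers from this file are real):

static int example_update_servers(struct afs_cell *cell, struct key *key,
				  struct afs_vldb_entry *vldb, u8 type_mask,
				  struct afs_server_list **slot)
{
	struct afs_server_list *new, *old = *slot;

	new = afs_alloc_server_list(cell, key, vldb, type_mask);
	if (IS_ERR(new))
		return PTR_ERR(new);

	if (afs_annotate_server_list(new, old)) {
		/* Membership changed: publish the new list. */
		*slot = new;
		afs_put_serverlist(cell->net, old);
	} else {
		/* Same servers as before: stick with the old list. */
		afs_put_serverlist(cell->net, new);
	}
	return 0;
}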
@@ -45,7 +45,7 @@ static int afs_xattr_get_cell(const struct xattr_handler *handler,
struct afs_cell *cell = vnode->volume->cell;
size_t namelen;
namelen = cell->name_len;
if (size == 0)
return namelen;
if (namelen > size)
@@ -96,7 +96,7 @@ static int afs_xattr_get_volume(const struct xattr_handler *handler,
void *buffer, size_t size)
{
struct afs_vnode *vnode = AFS_FS_I(inode);
const char *volname = vnode->volume->name;
size_t namelen;
namelen = strlen(volname);
...
@@ -3992,16 +3992,9 @@ void btrfs_dec_nocow_writers(struct btrfs_fs_info *fs_info, u64 bytenr)
btrfs_put_block_group(bg);
}
static int btrfs_wait_nocow_writers_atomic_t(atomic_t *a)
{
schedule();
return 0;
}
void btrfs_wait_nocow_writers(struct btrfs_block_group_cache *bg)
{
wait_on_atomic_t(&bg->nocow_writers, atomic_t_wait,
TASK_UNINTERRUPTIBLE);
}
@@ -6530,12 +6523,6 @@ void btrfs_dec_block_group_reservations(struct btrfs_fs_info *fs_info,
btrfs_put_block_group(bg);
}
static int btrfs_wait_bg_reservations_atomic_t(atomic_t *a)
{
schedule();
return 0;
}
void btrfs_wait_block_group_reservations(struct btrfs_block_group_cache *bg)
{
struct btrfs_space_info *space_info = bg->space_info;
@@ -6558,8 +6545,7 @@ void btrfs_wait_block_group_reservations(struct btrfs_block_group_cache *bg)
down_write(&space_info->groups_sem);
up_write(&space_info->groups_sem);
wait_on_atomic_t(&bg->reservations, atomic_t_wait,
TASK_UNINTERRUPTIBLE);
}
@@ -11059,12 +11045,6 @@ int btrfs_start_write_no_snapshotting(struct btrfs_root *root)
return 1;
}
static int wait_snapshotting_atomic_t(atomic_t *a)
{
schedule();
return 0;
}
void btrfs_wait_for_snapshot_creation(struct btrfs_root *root)
{
while (true) {
@@ -11073,8 +11053,7 @@ void btrfs_wait_for_snapshot_creation(struct btrfs_root *root)
ret = btrfs_start_write_no_snapshotting(root);
if (ret)
break;
wait_on_atomic_t(&root->will_be_snapshotted, atomic_t_wait,
TASK_UNINTERRUPTIBLE);
}
}
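The btrfs call sites above, and the fscache one below, replace private wait callbacks that just called schedule() with a shared atomic_t_wait() helper. As a rough sketch of what such a helper amounts to, based on the wrappers being deleted here (the signal handling shown is an assumption, not quoted from this series):

/* Generic wait action for wait_on_atomic_t(): sleep once per wakeup and
 * report a pending signal as -EINTR when the sleep was interruptible.
 * Sketch only; not copied from the patch.
 */
int atomic_t_wait(atomic_t *counter, unsigned int mode)
{
	schedule();
	if (signal_pending_state(mode, current))
		return -EINTR;

	return 0;
}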
@@ -558,7 +558,7 @@ void __fscache_disable_cookie(struct fscache_cookie *cookie, bool invalidate)
* have completed.
*/
if (!atomic_dec_and_test(&cookie->n_active))
wait_on_atomic_t(&cookie->n_active, atomic_t_wait,
TASK_UNINTERRUPTIBLE);
/* Make sure any pending writes are cancelled. */
...
@@ -97,8 +97,6 @@ static inline bool fscache_object_congested(void)
return workqueue_congested(WORK_CPU_UNBOUND, fscache_object_wq);
}
extern int fscache_wait_atomic_t(atomic_t *);
/*
* object.c
*/
...