- 13 Sep, 2002 7 commits
-
-
David S. Miller authored
-
David S. Miller authored
Jens needs to separate out the IN/OUT macros, to distinguish which accesses are to the IDE_DATA register and which are to the rest. On big-endian platforms the IDE_DATA register should be accessed big-endian for it all to work out correctly, or at least be compatible with the behaviour that existed before the IDE platform macro interface changes in 2.5.x.
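A host-side illustration of the endianness point, entirely hypothetical rather than lifted from the 2.5.x IDE macros: taskfile registers hold numeric values that may legitimately be byte-swapped, but the 16-bit data register carries a raw byte stream, and swapping it corrupts the data.

```c
#include <stdio.h>
#include <stdint.h>

static uint16_t swab16(uint16_t v)
{
    return (uint16_t)((v >> 8) | (v << 8));
}

int main(void)
{
    /* Two bytes of file data as they would sit in the drive's
     * 16-bit data register: "hi" in little-endian byte order. */
    uint16_t data_word = ((uint16_t)'i' << 8) | 'h';

    uint16_t swapped = swab16(data_word);
    printf("unswapped: %c%c\n", data_word & 0xff, data_word >> 8);
    printf("swapped:   %c%c  <- corrupted file data\n",
           swapped & 0xff, swapped >> 8);
    return 0;
}
```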
-
David S. Miller authored
-
David S. Miller authored
-
David S. Miller authored
-
David S. Miller authored
-
David S. Miller authored
-
- 12 Sep, 2002 32 commits
-
-
Neil Brown authored
md currently tries to set_capacity() *after* freeing the gendisk structure. It also frees the gendisk even when switching to read-only. This patch open-codes free_mddev (which is only called once) and cleans all this up.
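The bug class in miniature, as a user-space sketch with invented names: the capacity update must come while the gendisk is still allocated, and the fix is simply the ordering shown here.

```c
#include <stdio.h>
#include <stdlib.h>

struct gendisk { long capacity; };

static void set_capacity(struct gendisk *d, long sectors)
{
    d->capacity = sectors;
}

int main(void)
{
    struct gendisk *disk = calloc(1, sizeof *disk);
    if (!disk)
        return 1;

    set_capacity(disk, 0);   /* correct: update first...           */
    free(disk);              /* ...free last (md had it reversed)  */
    printf("done\n");
    return 0;
}
```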
-
Neil Brown authored
This is used: 1/ to iterate all exports when making /proc/fs/nfs/exports, and 2/ to find all exports of a client in order to unexport them. The first can just as easily be done by iterating the export_table hash table. The second is very rarely called, and can be done by iterating the hash table looking for exports for the given client.
-
Neil Brown authored
Instead of a separate hash table per client we now have one hash table which includes the client in the key.
-
Neil Brown authored
Filehandle lookup currently breaks out the interesting pieces of a filehandle and passes them to exp_get or exp_get_fsid, which put the pieces back into a filehandle fragment. We define a new interface "exp_find" which does a lookup based on a filehandle fragment to avoid this double handling. In the process, common code in exp_get_key and exp_get_fsid_key is united into exp_find_key. Also, filehandle composition now uses the mk_fsid_v? inline functions.
-
Neil Brown authored
Currently each entry in the export table has two hash chains going through it, one for hash-by-dev/ino and one for hash-by-fsid. This is contrary to the goal of a simple hash table structure. The two hash tables per client are replaced by one which stores 'exp_key's, which contain the key (as a filehandle fragment) and a pointer to the real export entry. The export entries are then all stored in a single hash table indexed by client+vfsmount+dentry.
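A rough sketch of the layout this describes; every type and field name is illustrative, not nfsd's: both kinds of key are small entries in one hash table, each pointing at the single shared export entry.

```c
#include <stdint.h>
#include <stdio.h>

struct svc_export {
    int placeholder;                  /* the one real export entry */
};

struct svc_expkey {
    struct svc_expkey *hash_next;     /* single hash chain         */
    int                key_type;      /* dev/ino key or fsid key   */
    uint32_t           key[4];        /* filehandle fragment       */
    struct svc_export *export;        /* -> shared export entry    */
};

int main(void)
{
    struct svc_export exp = { 0 };
    struct svc_expkey by_dev_ino = { NULL, 0, { 8, 3, 1234, 0 }, &exp };
    struct svc_expkey by_fsid    = { NULL, 1, { 7, 0, 0, 0 },    &exp };

    /* Both keys resolve to the same export entry. */
    printf("same export: %d\n", by_dev_ino.export == by_fsid.export);
    return 0;
}
```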
-
Neil Brown authored
Currently get_parent (needed to find the export point above a given dentry) walks the hash table of export points, checking each with is_subdir. Now it walks up the d_parent links, checking each ancestor for membership in the hash table. nfsd_lookup currently does that walk too (when crossing a mountpoint backwards), so the code gets unified. This approach makes more sense as we move towards a cache for export information that can be filled on demand, and it assumes less about the hash table (which will change).
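A user-space sketch of the new direction, with a toy string match standing in for the real hash lookup on client+vfsmount+dentry:

```c
#include <stdio.h>
#include <string.h>

struct dentry { const char *name; struct dentry *d_parent; };

static int is_exported(const struct dentry *d)
{
    return strcmp(d->name, "/export") == 0;   /* stand-in for hash lookup */
}

static const struct dentry *find_export_ancestor(const struct dentry *d)
{
    for (;; d = d->d_parent) {
        if (is_exported(d))
            return d;
        if (d->d_parent == d)                 /* reached the root */
            return NULL;
    }
}

int main(void)
{
    struct dentry root = { "/", &root };
    struct dentry exp  = { "/export", &root };
    struct dentry leaf = { "/export/data", &exp };

    const struct dentry *e = find_export_ancestor(&leaf);
    printf("export point: %s\n", e ? e->name : "(none)");
    return 0;
}
```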
-
Neil Brown authored
The nfs server currently doesn't allow you to export both a directory and an ancestor of that directory on the same filesystem. This check is more of a problem than a solution, and can be done in user-space if needed, so it is removed. The potential security problem is that files below the lower directory can be accessed as though they were under either of the export points, so the access control that is applied might not be what a naive admin expects. e.g. export /a read-write and /a/b read-only. Then /a/b/c can be accessed read-write, as it is in /a, which might not be the intent. Alerting the user to this can be done in user-space though. The current restriction also stops exporting / as read-only and /tmp as read-write, which some people want to do. Provided /tmp is also exported with subtree_check (the default), there is no security issue here.
-
Neil Brown authored
They can be deduced from ex_dentry.
-
Neil Brown authored
We currently store the address list with each client and use it only to print out comments in /proc/fs/nfs/exports. While these can be helpful, they are not critical, and could be added back later, after we restructure the exports table.
-
Neil Brown authored
Instead, use d_path to find the path from the dentry/vfsmnt. This requires allocating a buffer at exp_open time and releasing it when closing.
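The buffer lifetime in miniature, with invented names (exports_open/exports_release) and snprintf standing in for d_path:

```c
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

#ifndef PATH_MAX
#define PATH_MAX 4096
#endif

struct exports_state { char *pathbuf; };

static struct exports_state *exports_open(void)
{
    struct exports_state *s = malloc(sizeof *s);
    if (!s)
        return NULL;
    s->pathbuf = malloc(PATH_MAX);  /* one scratch buffer, reused per entry */
    if (!s->pathbuf) {
        free(s);
        return NULL;
    }
    return s;
}

static void exports_release(struct exports_state *s)
{
    free(s->pathbuf);
    free(s);
}

int main(void)
{
    struct exports_state *s = exports_open();
    if (!s)
        return 1;
    snprintf(s->pathbuf, PATH_MAX, "/export/data");  /* stands in for d_path() */
    printf("%s\n", s->pathbuf);
    exports_release(s);
    return 0;
}
```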
-
Neil Brown authored
It is never used.
-
Neil Brown authored
Don't print them if they are the default, which should be "-2" but is currently 65534. We really need a 32-bit uid interface for 2.6.
-
Neil Brown authored
I was never entirely sure what it was for, but it is not used now, only set, so it can go.
-
Neil Brown authored
It is unused and never will be used. uid mapping will be done a different way (if at all).
-
Neil Brown authored
lockd currently asks nfsd for a 'client handle' for each request. This is used as a key for finding (or creating) an 'nlm_host' structure, so that there is only one of these per client... almost. There can currently be up to 4 nlm_hosts for a given client, depending on protocol (udp/tcp) and version (v1 or v4), but this isn't handled very well.

So the question is: is there any advantage in having only one nlm_host per real host, or should we simply have one for each IP address that makes requests, whether they are separate hosts or not? The nlm_host structure is used:

1/ to hold a lockd rpc client for talking to the remote lockd. Having multiple lockd clients cannot hurt, except possibly to waste a little space.

2/ to identify resources to free when we receive notification from statd that a client has restarted. As statd gets a hostname, looks up all its IP addresses, and then sends a notification for each IP for which it has a registration, there is no need to minimise the number of nlm_host structures (each of which registers for monitoring).

3/ to identify resources to free when a client sends a "free_all" request. If a client uses multiple IP addresses to create locks and then sends free_all from just one IP address, we lose here. However, it is not clear that a client would ever want to send a free_all request, and the linux client doesn't seem to, so there is unlikely to be any loss.

This patch does not ask nfsd for a client identifier, but rather finds an nlm_host based on IP address, version, protocol (udp/tcp), and whether we are acting as NFS server or client. All of this information is then placed in the cookie that is passed to statd and returned by statd when the client restarts. Previously only the IP address was passed in the cookie, so possibly not all nlm_host structures would have been found.

Because of these changes, lockd no longer needs to know anything about the nfsd export table, so the interface to nfsd is much narrower. Another consequence is that when nfsd is told to delete a client, it cannot tell lockd to forget all the locks for that client. However, it is not clear that lockd should ever forget any locks unless it is told to shut down (or simulate a shutdown), and in any case the current nfsd admin tools never tell nfsd to delete a client anyway.
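The lookup key described above, sketched as a tiny standalone C program; every field and name here is a guess for illustration, not lockd's actual layout. The point is that the whole key, not just the address, travels in the statd cookie.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Hypothetical shape of the nlm_host lookup key. */
struct nlm_lookup_key {
    uint32_t addr;      /* client IP address (IPv4)              */
    int      version;   /* NLM protocol version (1 or 4)         */
    int      proto;     /* transport: UDP or TCP                 */
    int      server;    /* 1 = acting as NFS server, 0 = client  */
};

int main(void)
{
    struct nlm_lookup_key k = { 0x7f000001u, 4, 6 /* TCP */, 1 };
    unsigned char cookie[sizeof k];

    /* Embed the whole key in the cookie handed to statd, so the
     * notification on client reboot can locate this exact nlm_host;
     * previously only the address went in. */
    memcpy(cookie, &k, sizeof k);
    printf("cookie is %zu bytes\n", sizeof cookie);
    return 0;
}
```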
-
Neil Brown authored
Currently, when lockd wants to invalidate all its clients, it asks nfsd to iterate through them. Now it iterates over them itself.
-
Neil Brown authored
Just the new structure initialisers.
-
Neil Brown authored
Both md.c and raid5.c can be compiled with debugging, but compile errors in this code normally go unnoticed because it isn't even compiled. Now the debugging messages are always compiled but optimised away, so we will always see the errors. Current errors are fixed.
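The usual idiom behind "compiled but optimised out", shown as a minimal user-space sketch (dprintk is an invented stand-in, not the macro md.c uses): routing the disabled case through `if (0)` keeps the arguments visible to the compiler, so broken debug statements fail the build instead of hiding behind an #ifdef.

```c
#include <stdio.h>

/* With DEBUG unset the call still parses and type-checks, but the
 * dead `if (0)` branch is optimised away, unlike an #ifdef'd-out body. */
#ifdef DEBUG
#define dprintk(...) printf(__VA_ARGS__)
#else
#define dprintk(...) do { if (0) printf(__VA_ARGS__); } while (0)
#endif

int main(void)
{
    int stripes = 5;
    /* Always compiled: a typo here now breaks the build even
     * without -DDEBUG, which is the point of the change. */
    dprintk("handling %d stripes\n", stripes);
    return 0;
}
```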
-
Neil Brown authored
That recent bug fix in raid5 just changed the bug; it didn't fix it. I think that the original code was actually wrong, which didn't help. This time the code actually matches the nearby comment, which has been expanded a bit, so I feel somewhat more confident that it is actually right.
-
Neil Brown authored
Since 2.5.33, blk_dev[].queue is called without the device open, so md_queue_proc can no longer assume that the device is open.
-
Linus Torvalds authored
Merge bk://jfs.bkbits.net/linux-2.5 into home.transmeta.com:/home/torvalds/v2.5/linux
-
Dave Kleikamp authored
-
Linus Torvalds authored
Merge http://linux-isdn.bkbits.net/linux-2.5.make into home.transmeta.com:/home/torvalds/v2.5/linux
-
Kai Germaschewski authored
Use the same rule as in Rules.make for preprocessing vmlinux.lds.S, that also gives automatic dependency tracking. This means we should also use the standard AFLAGS_... instead of CPPFLAGS_... to provide specific additional flags.
-
Kai Germaschewski authored
When using cp to copy the shipped file to its actual name, permissions would be preserved; in particular, the copy would be read-only when the original was (BitKeeper) read-only, leading to an error when executing the rule a second time. So now we use cat, which generates a writable file.
-
Kai Germaschewski authored
Just some cosmetic changes to align output in non-verbose mode.
-
Daniel Jacobowitz authored
Linus spotted one cut-n-pasto (the 'tracing' argument) but didn't see the other: we were walking the ptrace_children list via the sibling field, so we got garbage task_structs when this happened. If the list wasn't empty, it would crash. strace detaches from all tasks when it receives a Control-C, so this was only easily seen with enough threads and SMP.
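A small demonstration of why walking a list via the wrong link member yields garbage containers; the struct here is invented, and container_of is the standard offsetof idiom rather than a quote from the kernel:

```c
#include <stdio.h>
#include <stddef.h>

struct link { struct link *next; };

#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct task {
    int pid;
    struct link sibling;       /* links children of a parent */
    struct link ptrace_list;   /* links ptrace_children      */
};

int main(void)
{
    struct task t = { .pid = 42 };
    struct link *pos = &t.ptrace_list;   /* node from ptrace_children */

    struct task *right = container_of(pos, struct task, ptrace_list);
    struct task *wrong = container_of(pos, struct task, sibling);

    printf("right member -> pid %d\n", right->pid);
    /* Dereferencing `wrong` would be the crash; just show it's off. */
    printf("wrong member -> %p vs real %p (misaligned container)\n",
           (void *)wrong, (void *)&t);
    return 0;
}
```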
-
Linus Torvalds authored
Merge bk://linuxusb.bkbits.net/linus-2.5 into home.transmeta.com:/home/torvalds/v2.5/linux
-
Jeff Dike authored
This patch implements UML for 2.5.34.
-
Jose A. Lopez authored
I have changed the name of a local variable "l" to "j", because with some fonts it is difficult to tell whether [1+l+i] means [1+1+i] (i.e. [2+i]) or something else.
-
Oliver Neukum authored
Using init_etherdev(0, 0) in probe is a race: the struct net_device must be allocated and filled before init_etherdev is called, or a network interface is created that isn't usable. This patch fixes kaweth for 2.5.
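The fix follows the general register-last pattern; a toy user-space model, where the 'registry' pointer stands in for the kernel's device list and none of these names come from kaweth:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct ether_dev { char name[16]; void (*open)(void); };

static struct ether_dev *registry;   /* visible to the rest of the system */

static void dev_open(void) { puts("opened"); }

int main(void)
{
    struct ether_dev *dev = calloc(1, sizeof *dev);
    if (!dev)
        return 1;

    /* Fill in everything first... */
    strcpy(dev->name, "kaweth0");
    dev->open = dev_open;

    /* ...then publish; no window where a half-built device is visible. */
    registry = dev;
    registry->open();

    registry = NULL;
    free(dev);
    return 0;
}
```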
-
Adam J. Richter authored
ata_attach in linux-2.5.34/drivers/ide/ide.c builds a list of IDE drives that do not yet have a device driver bound to them, in case ide-disk, ide-scsi, or whatever driver you want to use is not loaded yet. The problem was that ata_attach was adding to the head of the list, so the list was being built in reverse order. So, if you had two IDE disks, and ide-disk was a loadable module, the devfs entries for the disks would be numbered in reverse (the first disk would be /dev/discs/disc1, and the second would be /dev/discs/disc0). This fixes the problem by changing the relevant list_add to list_add_tail. Incidentally, the generic code in drivers/base/ already does it this way.
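A self-contained illustration of the ordering difference, mimicking the effect of list_add vs list_add_tail with a plain singly linked list rather than the kernel's list.h:

```c
#include <stdio.h>

struct node { int id; struct node *next; };

/* Insert at the head: reverses the order of discovery. */
static void add_head(struct node **list, struct node *n)
{
    n->next = *list;
    *list = n;
}

/* Insert at the tail: preserves the order of discovery. */
static void add_tail(struct node **list, struct node *n)
{
    struct node **p = list;
    while (*p)
        p = &(*p)->next;
    n->next = NULL;
    *p = n;
}

int main(void)
{
    struct node d0 = { 0 }, d1 = { 1 }, e0 = { 0 }, e1 = { 1 };
    struct node *head = NULL, *tail = NULL;

    add_head(&head, &d0);   /* disc0 probed first  */
    add_head(&head, &d1);   /* disc1 probed second */
    add_tail(&tail, &e0);
    add_tail(&tail, &e1);

    printf("head-insert: disc%d then disc%d (reversed)\n",
           head->id, head->next->id);
    printf("tail-insert: disc%d then disc%d (probe order)\n",
           tail->id, tail->next->id);
    return 0;
}
```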
-
- 11 Sep, 2002 1 commit
-
-
David Brownell authored
One more patch: this turns off async schedule processing if there are no control or bulk transactions for a while (currently HZ/3). Consequence: no PCI accesses unless there's work to do. (And a FIXME comment is gone!)
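A toy model of the idle heuristic, with all names and the tick-driven structure invented for illustration: remember when the last control/bulk transaction happened, and switch the async schedule off once more than HZ/3 ticks pass without work.

```c
#include <stdio.h>

#define HZ         100
#define IDLE_TICKS (HZ / 3)

static unsigned long last_work;
static int async_enabled = 1;

static void tick(unsigned long now, int had_work)
{
    if (had_work) {
        last_work = now;
        if (!async_enabled) {
            async_enabled = 1;
            puts("async schedule on");
        }
    } else if (async_enabled && now - last_work > IDLE_TICKS) {
        async_enabled = 0;            /* no more PCI polling */
        puts("async schedule off");
    }
}

int main(void)
{
    unsigned long t;
    tick(0, 1);                       /* one bulk transaction */
    for (t = 1; t <= HZ; t++)
        tick(t, 0);                   /* goes idle after HZ/3 ticks */
    return 0;
}
```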
-