- 13 Sep, 2002 5 commits
-
-
Ingo Molnar authored
This implements the 'keep the initial thread around until every thread in the group exits' concept in a different, less intrusive way, along the lines of your suggestions. There is no exit_done completion handling anymore; freeing of the task is still done by wait4(). This has the following side-effect: detached threads/processes can only be started within a thread group, not in a standalone way. (This also fixes the bugs introduced by the ->exit_done code, which made it possible for a zombie task to be reactivated.) I've introduced the p->group_leader pointer, which can/will be used for other purposes in the future as well - since from now on the thread group leader always exists. Right now it's used to notify the parent of the thread group leader from the last non-leader thread that exits [if the thread group leader is a zombie already].
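As a rough illustration of the notification logic described above, here is a minimal userspace C model. Only the group_leader field name comes from the log; the counter, the zombie flag, and notify_parent are hypothetical stand-ins for the real kernel machinery.

```c
#include <stdio.h>
#include <stdbool.h>

/* Simplified model of a task; only group_leader matches the real field. */
struct task {
    const char *comm;
    struct task *group_leader;   /* always valid: the leader stays around */
    struct task *parent;
    int  live_threads_in_group;  /* hypothetical count of non-leader threads */
    bool zombie;
};

/* Hypothetical stand-in for parent notification (SIGCHLD in the kernel). */
static void notify_parent(struct task *p)
{
    printf("notify %s: child %s can be reaped by wait4()\n",
           p->parent->comm, p->comm);
}

/* Called when a non-leader thread exits: if it was the last one and the
 * leader is already a zombie, the leader's parent is notified now. */
static void thread_exit(struct task *t)
{
    struct task *leader = t->group_leader;

    leader->live_threads_in_group--;
    if (leader->live_threads_in_group == 0 && leader->zombie)
        notify_parent(leader);
}

int main(void)
{
    struct task init_task = { .comm = "init" };
    struct task leader = { .comm = "leader", .parent = &init_task,
                           .live_threads_in_group = 1, .zombie = true };
    leader.group_leader = &leader;
    struct task worker = { .comm = "worker", .group_leader = &leader };
    thread_exit(&worker);
    return 0;
}
```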
-
Ingo Molnar authored
I distilled the attached fix-patch from Daniel's bigger patch - it includes all fixes for all currently known ptrace-related breakages, including things like bad behavior (a crash) if the tracer process dies unexpectedly.
-
Linus Torvalds authored
Merge bk://linux-input.bkbits.net/linux-input
into home.transmeta.com:/home/torvalds/v2.5/linux
-
Vojtech Pavlik authored
-
Linus Torvalds authored
Merge http://ppc.bkbits.net/for-linus-ppc64
into home.transmeta.com:/home/torvalds/v2.5/linux
-
- 14 Sep, 2002 11 commits
-
-
Anton Blanchard authored
into samba.org:/scratch/anton/linux-2.5_ppc64drivers
-
Anton Blanchard authored
-
Anton Blanchard authored
into samba.org:/scratch/anton/linux-2.5_ppc64_new
-
Anton Blanchard authored
-
Anton Blanchard authored
-
Anton Blanchard authored
-
Anton Blanchard authored
-
Anton Blanchard authored
-
Anton Blanchard authored
-
Anton Blanchard authored
-
Anton Blanchard authored
-
- 13 Sep, 2002 2 commits
-
-
Vojtech Pavlik authored
-
Franz Sirl authored
Exporting kbd_pt_regs in keyboard.c.
-
- 12 Sep, 2002 22 commits
-
-
Anton Blanchard authored
into samba.org:/scratch/anton/linux-2.5_ppc64_new
-
Brad Hards authored
-
Vojtech Pavlik authored
-
Adam J. Richter authored
of pcspkr.o and another 90 elsewhere in the .o file.
-
Richard Zidlicky authored
-
Vojtech Pavlik authored
-
Neil Brown authored
md currently tries to set_capacity() *after* freeing the gendisk structure. It also frees the gendisk even when switching to read-only. This patch open-codes free_mddev (which is only called once) and cleans all this up.
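A minimal sketch of the ordering problem, assuming a simplified gendisk; everything except the set_capacity and gendisk names is hypothetical, and the real md code is more involved:

```c
#include <stdlib.h>

struct gendisk { long long capacity; };

static void set_capacity(struct gendisk *d, long long sectors)
{
    d->capacity = sectors;
}

/* Buggy order: the gendisk is freed first, so set_capacity() touches
 * freed memory. This is the bug the patch fixes. */
static void stop_array_buggy(struct gendisk *disk)
{
    free(disk);
    set_capacity(disk, 0);   /* use-after-free */
}

/* Fixed order: update the capacity while the structure is still alive,
 * and only free it when the array is really going away, not when merely
 * switching to read-only. */
static void stop_array_fixed(struct gendisk *disk, int read_only)
{
    set_capacity(disk, 0);
    if (!read_only)
        free(disk);
}
```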
-
Neil Brown authored
This is used: 1/ to iterate all exports when making /proc/fs/nfs/exports; 2/ to find all exports of a client to unexport them. The first can just as easily be done by iterating the export_table hash table. The second is very rarely called and can be done by iterating the hash table looking for exports for the given client.
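A hedged sketch of the second case, scanning a hash table instead of a dedicated list; the table layout and names are hypothetical, not the actual nfsd structures:

```c
#include <string.h>

#define EXPORT_HASHBITS 5
#define EXPORT_HASHMAX  (1 << EXPORT_HASHBITS)

struct svc_export {
    const char *client;
    const char *path;
    struct svc_export *hash_next;
};

static struct svc_export *export_table[EXPORT_HASHMAX];

/* Unexport everything belonging to one client by walking every chain. */
static void unexport_client(const char *client,
                            void (*unexport)(struct svc_export *))
{
    for (int i = 0; i < EXPORT_HASHMAX; i++) {
        struct svc_export **pp = &export_table[i];
        while (*pp) {
            struct svc_export *exp = *pp;
            if (strcmp(exp->client, client) == 0) {
                *pp = exp->hash_next;   /* unlink before handing it over */
                unexport(exp);
            } else {
                pp = &exp->hash_next;
            }
        }
    }
}
```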
-
Neil Brown authored
Instead of a separate hash table per client, we now have one hash table which includes the client in the key.
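A minimal sketch of folding the client into the key, assuming a simple string hash; the hash scheme and names are illustrative, not the actual nfsd code:

```c
#include <stdint.h>

#define EXPORT_HASHBITS 5

static uint32_t hash_str(const char *s)
{
    uint32_t h = 0;
    while (*s)
        h = h * 31 + (unsigned char)*s++;
    return h;
}

/* One table for all clients: mix the client name into the bucket index
 * instead of keeping a separate table per client. */
static unsigned export_hash(const char *client, uint32_t dev, uint32_t ino)
{
    uint32_t h = hash_str(client) ^ dev ^ ino;
    return h & ((1u << EXPORT_HASHBITS) - 1);
}
```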
-
Neil Brown authored
Filehandle lookup currently breaks out the interesting pieces of a filehandle and passes them to exp_get or exp_get_fsid, which put the pieces back into a filehandle fragment. We define a new interface "exp_find" which does a lookup based on a filehandle fragment to avoid this double handling. In the process, common code in exp_get_key and exp_get_fsid_key is united into exp_find_key. Also, filehandle composition now uses the mk_fsid_v? inline functions.
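A hedged sketch of an exp_find()-style lookup keyed directly on a filehandle fragment, avoiding the break-apart-and-reassemble double handling; the struct layout and table are hypothetical:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct fh_fragment {
    int      fsid_type;   /* 0: dev/ino style, 1: fsid style */
    uint32_t fsid[2];
};

struct svc_export { struct fh_fragment key; const char *path; };

#define NEXPORTS 2
static struct svc_export exports[NEXPORTS] = {
    { { 0, { 0x0801, 2 } }, "/export" },
    { { 1, { 42, 0 } },     "/data"   },
};

/* Look the export up by the fragment as-is: no unpacking the filehandle
 * into pieces only to pack them back into a fragment. */
static struct svc_export *exp_find(const struct fh_fragment *key)
{
    for (int i = 0; i < NEXPORTS; i++)
        if (exports[i].key.fsid_type == key->fsid_type &&
            memcmp(exports[i].key.fsid, key->fsid, sizeof key->fsid) == 0)
            return &exports[i];
    return NULL;
}
```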
-
Neil Brown authored
Currently each entry in the export table has two hash chains going through it: one for hash-by-dev/ino, one for hash-by-fsid. This is contrary to the goal of a simple hash table structure. The two hash tables per client are replaced by one which stores 'exp_key's which contain the key (as a file handle fragment) and a pointer to the real export entry. The export entries are then all stored in a single hash table indexed by client+vfsmount+dentry.
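A sketch of the two-level arrangement described above: small key entries point at the real export entry, so each export carries no hash chains of its own. The naming follows the 'exp_key' term in the log; the exact fields are illustrative, not the kernel structs.

```c
struct svc_export;                 /* the real export entry */

struct exp_key {
    unsigned char key[20];         /* file handle fragment used as the key */
    struct svc_export *export;     /* points at the single real entry */
    struct exp_key *hash_next;     /* one hash chain instead of two */
};
```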
-
Neil Brown authored
Currently get_parent (needed to find the export point above a given dentry) walks the hash table of export points checking each with is_subdir. Now it walks up the d_parent links checking each for membership in the hash table. nfsd_lookup currently does that walk too (when crossing a mountpoint backwards) so the code gets unified. This approach makes more sense as we move towards a cache for export information that can be filled on demand. It also assumes less about the hash table (which will change).
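A minimal sketch of the reversed direction of search: walk up the d_parent chain and test each ancestor for membership, instead of scanning every export point with is_subdir. The exp_lookup stub here is a hypothetical membership test:

```c
#include <stddef.h>

struct dentry { struct dentry *d_parent; int exported; };

/* Hypothetical membership test against the export hash table. */
static int exp_lookup(const struct dentry *d) { return d->exported; }

/* Walk up d_parent until an exported ancestor is found; the root's
 * d_parent points to itself, which terminates the loop. */
static const struct dentry *find_export_above(const struct dentry *d)
{
    for (;;) {
        if (exp_lookup(d))
            return d;
        if (d == d->d_parent)
            return NULL;
        d = d->d_parent;
    }
}
```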
-
Neil Brown authored
The nfs server currently doesn't allow you to export both a directory and an ancestor of that directory on the same filesystem. This check is more of a problem than a solution and can be done in user-space if needed, so it is removed. The potential security problem is that files below the lower directory could be accessed as though they were under either of the export points, and so the access control that is applied might not be what is expected (by the naive admin). E.g. export /a as read-write and /a/b as read-only. Then /a/b/c can be accessed read-write, as it is in /a, which might not be the intent. Alerting the user to this can be done in userspace though. The current restriction also stops exporting / as read-only and /tmp as read-write, which some people want to do. Provided /tmp is also exported with subtree_check (the default), there is no security issue here.
-
Neil Brown authored
They can be deduced from ex_dentry.
-
Neil Brown authored
We currently store the address list with each client and use it only to print out comments in /proc/fs/nfs/exports. While these can be helpful, they are not critical and could be added back later, after we restructure the exports table.
-
Neil Brown authored
Instead, use d_path to find the path from the dentry/vfsmnt. This requires allocating a buffer at exp_open time and releasing it when closing.
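A userspace model of d_path()-style path construction, building the name backwards from the dentry into the end of a caller-supplied buffer; the real d_path also handles vfsmounts and overflow, which this sketch omits:

```c
#include <stdio.h>
#include <string.h>

struct dentry { const char *d_name; struct dentry *d_parent; };

/* Fill the path in from the end of buf; the root's parent is itself. */
static char *build_path(const struct dentry *d, char *buf, size_t len)
{
    char *p = buf + len;
    *--p = '\0';
    while (d != d->d_parent) {
        size_t n = strlen(d->d_name);
        p -= n;
        memcpy(p, d->d_name, n);
        *--p = '/';
        d = d->d_parent;
    }
    return p == buf + len - 1 ? (char *)"/" : p;   /* bare-root case */
}

int main(void)
{
    struct dentry root = { "", NULL }, a = { "a", &root }, b = { "b", &a };
    root.d_parent = &root;
    char buf[64];
    printf("%s\n", build_path(&b, buf, sizeof buf));  /* prints /a/b */
    return 0;
}
```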
-
Neil Brown authored
It is never used.
-
Neil Brown authored
Don't print it if it is the default, which should be "-2" but is currently 65534. We really need a 32-bit uid interface for 2.6.
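The 65534 comes from -2 wrapping in an unsigned 16-bit uid representation; a small self-contained illustration:

```c
/* -2 stored in a 16-bit unsigned uid wraps to 2^16 - 2 = 65534.
 * With a 32-bit uid it would be 4294967294 instead. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t uid16 = (uint16_t)-2;
    uint32_t uid32 = (uint32_t)-2;
    printf("%u %u\n", uid16, uid32);   /* 65534 4294967294 */
    return 0;
}
```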
-
Neil Brown authored
I was never entirely sure what it was for, but it is not used now, only set, so it can go.
-
Neil Brown authored
It is unused and never will be. uid mapping will be done a different way (if at all).
-
Neil Brown authored
lockd currently asks nfsd for a 'client handle' for each request. This is used as a key for finding (or creating) a 'nlm_host' structure, so that there is only one of these per client... almost. There can currently be up to 4 nlm_hosts for a given client, depending on protocol (udp/tcp) or version (v1 or v4). But this isn't handled very well. So the question is: is there any advantage in having only one nlm_host per real host, or do we simply have one for each IP address that makes requests, whether they are separate hosts or not. The nlm_host structure is used:
1/ to hold a lockd rpc client for talking to the remote lockd. Having multiple lockd clients cannot hurt, except possibly to waste a little space.
2/ to identify resources to free when we receive notification from statd that a client has restarted. As statd gets a hostname and looks up all IP addresses, and then sends a notification for each IP for which it has a registration, there is no need to minimise the number of nlm_host structures (each of which registers for monitoring).
3/ to identify resources to free when a client sends a "free_all" request. If a client uses multiple IP addresses to create locks, and then sends free_all from just one IP address, we will lose here. However it is not clear that a client would ever want to send a free_all request, and the linux client doesn't seem to, so there is unlikely to be any loss here.
This patch does not ask nfsd for a client identifier, but rather finds an nlm_host based on IP, version, protocol (udp/tcp) and whether we are acting as NFS server or client. All of this information is then placed in the cookie that is passed to statd and returned by statd when the client restarts. Previously only the IP address was placed in the cookie, so possibly not all nlm_host structures would have been found. Because of these changes, lockd does not need to know anything about the nfsd export table, so the interface to nfsd is much narrower. Another consequence is that when nfsd is told to delete a client, it cannot tell lockd to forget all the locks for that client. However it is not clear that lockd should ever forget any locks unless it is told to shut down (or simulate a shutdown), and in any case, the current nfsd admin tools never tell nfsd to delete a client anyway.
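A sketch of the widened lookup key described above: IP address, NLM version, transport, and server/client role, all of which also travel in the cookie handed to statd. The struct layout, field names, and cookie encoding are illustrative, not the actual lockd code:

```c
#include <stdint.h>
#include <string.h>

struct nlm_lookup_key {
    uint32_t addr;      /* peer IPv4 address */
    int      version;   /* NLM version (1 or 4) */
    int      proto;     /* transport: UDP or TCP */
    int      server;    /* 1 if we are the NFS server, 0 if client */
};

/* The cookie sent to statd carries the whole key, so the right nlm_host
 * can be found again when statd reports that the client rebooted;
 * previously only the address went in, which could miss some hosts. */
static void make_statd_cookie(const struct nlm_lookup_key *key,
                              unsigned char cookie[16])
{
    memset(cookie, 0, 16);
    memcpy(cookie, key, sizeof *key < 16 ? sizeof *key : 16);
}
```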
-
Neil Brown authored
Currently, when lockd wants to invalidate all its clients, it asks nfsd to iterate through them. Now it iterates itself.
-