- 26 Jun, 2007 1 commit
-
-
Zach Brown authored
sctp_sock_migrate() grabs the socket lock on a newly allocated socket while holding the socket lock on an old socket. lockdep worries that this might be a recursive lock attempt:

    task/3026 is trying to acquire lock:
     (sk_lock-AF_INET){--..}, at: [<ffffffff88105b8c>] sctp_sock_migrate+0x2e3/0x327 [sctp]
    but task is already holding lock:
     (sk_lock-AF_INET){--..}, at: [<ffffffff8810891f>] sctp_accept+0xdf/0x1e3 [sctp]

This patch tells lockdep that this locking is safe by using lock_sock_nested().

Signed-off-by: Zach Brown <zach.brown@oracle.com>
Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
-
- 19 Jun, 2007 5 commits
-
-
Neil Horman authored
This is the piece we agreed I should split out from my last patch. It changes space_left to be computed in the same way the 'to' variable is. I know we talked about changing space_left to an int, but I think size_t is more appropriate, since we should never have negative space in our buffer, and computing with offsetof means space_left should now never drop below zero. Signed-off-by: Neil Horman <nhorman@tuxdriver.com> Acked-by: Sridhar Samudrala <sri@us.ibm.com> Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
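As a rough illustration of the type argument (a userspace sketch with made-up struct and field names, not the SCTP code itself), computing the remaining room from offsetof() and keeping it as a size_t lets the value be clamped at zero instead of ever going negative:

    #include <stddef.h>
    #include <stdio.h>

    struct addr_buf {
            char header[16];
            char addrs[112];
    };

    int main(void)
    {
            /* bytes consumed so far: the fixed header plus 40 bytes of entries */
            size_t used = offsetof(struct addr_buf, addrs) + 40;
            size_t space_left = sizeof(struct addr_buf) > used ?
                                sizeof(struct addr_buf) - used : 0;

            printf("space_left = %zu\n", space_left);
            return 0;
    }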
-
Neil Horman authored
I noted the other day while looking at a bug that was ostensibly in some perl networking library, that we strictly avoid allowing getsockopt operations to complete if we pass in oversized buffers. This seems to make libraries like Perl::NET malfunction since it seems to allocate oversized buffers for use in several operations. It also seems to be out of line with the way udp, tcp and ip getsockopt routines handle buffer input (since the *optlen pointer is both an input and an output, and gets set to the length of the data that we copy into the buffer). This patch brings our getsockopt helpers into line with other protocols, and allows us to accept oversized buffers for our getsockopt operations. Tested by me with good results. Signed-off-by: Neil Horman <nhorman@tuxdriver.com> Acked-by: Sridhar Samudrala <sri@us.ibm.com> Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
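For reference, a minimal userspace sketch of the value-result *optlen behaviour described here, using a plain UDP socket and SO_RCVBUF rather than anything SCTP-specific: the caller hands in an oversized buffer, and the kernel writes back how much it actually filled in:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_DGRAM, 0);
            char buf[128];                  /* deliberately oversized */
            socklen_t optlen = sizeof(buf);

            if (fd < 0)
                    return 1;
            if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, buf, &optlen) == 0)
                    printf("kernel filled %u of %zu bytes\n",
                           (unsigned int)optlen, sizeof(buf));

            close(fd);
            return 0;
    }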
-
David Howells authored
Return the number of bytes buffered in rxrpc_send_data(). Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Neil Horman authored
ip_vs currently fails to reset its ip_vs_sync_state variable if the sync thread fails to start properly. The result is that the kernel will report a running daemon when there actually is none. If you issue the following commands:

    1. ipvsadm --start-daemon master --mcast-interface bla
    2. ipvsadm -L --daemon
    3. ipvsadm --stop-daemon master

Assuming that bla is not an actual interface, step 2 should return no data, but instead returns:

    $ ipvsadm -L --daemon
    master sync daemon (mcast=bla, syncid=0)

Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Patrick McHardy authored
My IPsec MTU optimization patch introduced a regression in MTU calculation for non-ESP SAs: the SA's header_len needs to be subtracted from the MTU if the transform doesn't provide a ->get_mtu() function. Reported-and-tested-by: Marco Berizzi <pupilla@hotmail.com> Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 18 Jun, 2007 5 commits
-
-
Linus Torvalds authored
Miklos Szeredi reported very long pauses (several seconds, sometimes more) on his T60 (with a Core2Duo) which he managed to track down to wait_task_inactive()'s open-coded busy-loop. He observed that an interrupt on one core tries to acquire the runqueue-lock but does not succeed in doing so for a very long time - while wait_task_inactive() on the other core loops waiting for the first core to deschedule a task (which it won't do while spinning in an interrupt handler).

This rewrites wait_task_inactive() to do all its waiting optimistically without any locks taken at all, and then just double-check the end result with the proper runqueue lock held over just a very short section. If there were races in the optimistic wait, or a preemption event scheduled the process away, we simply re-synchronize, and start over.

So the code now looks like this:

    repeat:
            /* Unlocked, optimistic looping! */
            rq = task_rq(p);
            while (task_running(rq, p))
                    cpu_relax();

            /* Get the *real* values */
            rq = task_rq_lock(p, &flags);
            running = task_running(rq, p);
            array = p->array;
            task_rq_unlock(rq, &flags);

            /* Check them.. */
            if (unlikely(running)) {
                    cpu_relax();
                    goto repeat;
            }

            /* Preempted away? Yield if so.. */
            if (unlikely(array)) {
                    yield();
                    goto repeat;
            }

Basically, that first "while()" loop is done entirely without any locking at all (and doesn't check for the case where the target process might have been preempted away), and so it's possibly "incorrect", but we don't really care. Both the runqueue used, and the "task_running()" check might be the wrong tests, but they won't oops - they just mean that we could possibly get the wrong results due to lack of locking and exit the loop early in the case of a race condition.

So once we've exited the loop, we then get the proper (and careful) rq lock, and check the running/runnable state _safely_. And if it turns out that our quick-and-dirty and unsafe loop was wrong after all, we just go back and try it all again.

(The patch also adds a lot of comments, which is the actual bulk of it all, to make it more obvious why we can do these things without holding the locks).

Thanks to Miklos for all the testing and tracking it down.

Tested-by: Miklos Szeredi <miklos@szeredi.hu>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Ingo Molnar authored
Gene Heskett reported the following problem while testing CFS: SysRq-N is not always effective in normalizing tasks back to SCHED_OTHER.

The reason for that turns out to be the following bug:

- normalize_rt_tasks() uses for_each_process() to iterate through all tasks in the system. The problem is, this method does not iterate through all tasks, it iterates through all thread groups.

The proper mechanism to enumerate over all threads is to use a do_each_thread() + while_each_thread() loop.

Reported-by: Gene Heskett <gene.heskett@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Linus Torvalds authored
* master.kernel.org:/pub/scm/linux/kernel/git/jejb/scsi-rc-fixes-2.6:
  [SCSI] ESP: Don't forget to clear ESP_FLAG_RESETTING.
  [SCSI] fusion: fix for BZ 8426 - massive slowdown on SCSI CD/DVD drive
-
Benjamin Herrenschmidt authored
Don't let signalfd dequeue private signals off other threads (in the case of things like SIGILL or SIGSEGV, trying to do so would result in undefined behaviour on who actually gets the signal, since they are force unblocked). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Davide Libenzi <davidel@xmailserver.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Thomas Gleixner authored
This reverts commit d0aa7a70. It not only introduced user-space visible changes to the futex syscall, it is also non-functional and there is no way to fix it properly before the 2.6.22 release.

The breakage report ( http://lkml.org/lkml/2007/5/12/17 ) went unanswered, and unfortunately it turned out that the concept is not feasible at all. It violates the rtmutex semantics badly by introducing a virtual owner, which hacks around the coupling of the user-space pi_futex and the kernel-internal rt_mutex representation.

At the moment the only safe option is to remove it fully, as it contains user-space visible changes to broken kernel code, which we do not want to expose in the 2.6.22 release.

The patch reverts the original patch mostly 1:1, but contains a couple of trivial manual cleanups which were necessary due to patches which touched the same area of code later.

Verified against the glibc tests and my own PI futex tests.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Ulrich Drepper <drepper@redhat.com>
Cc: Pierre Peiffer <pierre.peiffer@bull.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 17 Jun, 2007 1 commit
-
-
Linus Torvalds authored
The manatees, they are dancing! Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 16 Jun, 2007 27 commits
-
-
Eric W. Biederman authored
Some user space tools need to identify SYSV shared memory when examining /proc/<pid>/maps. To do so they look for a block device with major zero, a dentry named SYSV<sysv key>, and a minor number matching that of the internal sysv shared memory kernel mount. To help these tools and to make it easier for people just browsing /proc/<pid>/maps, this patch modifies hugetlb sysv shared memory to use the SYSV<key> dentry naming convention. User space tools will still have to be aware that hugetlb sysv shared memory lives on a different internal kernel mount and so has a different block device minor number from the rest of sysv shared memory. Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Cc: "Serge E. Hallyn" <serge@hallyn.com> Cc: Albert Cahalan <acahalan@gmail.com> Cc: Badari Pulavarty <pbadari@us.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
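As a quick way to see the naming convention in question (an illustrative userspace sketch, unrelated to the patch itself), the following creates an ordinary SysV segment and prints the corresponding /proc/<pid>/maps line, whose dentry name encodes the key, e.g. SYSV00000000 for an IPC_PRIVATE key:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <unistd.h>

    int main(void)
    {
            int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
            void *p = shmat(id, NULL, 0);
            char cmd[64];

            /* show the mapping; the dentry name encodes the SysV key */
            snprintf(cmd, sizeof(cmd), "grep SYSV /proc/%d/maps", getpid());
            system(cmd);

            shmdt(p);
            shmctl(id, IPC_RMID, NULL);
            return 0;
    }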
-
Adam Litke authored
Here's another breakage as a result of shared memory stacked files :(

The NUMA policy for a VMA is determined by checking the following (in the order given):

1) vma->vm_ops->get_policy() (if defined)
2) vma->vm_policy (if defined)
3) task->mempolicy (if defined)
4) Fall back to default_policy

By switching to stacked files for shared memory, get_policy() is now always set to shm_get_policy which is a wrapper function. This causes us to stop at step 1, which yields NULL for hugetlb instead of task->mempolicy which was the previous (and correct) result.

This patch modifies the shm_get_policy() wrapper to maintain steps 1-3 for the wrapped vm_ops.

(akpm: the refcounting of mempolicies is busted and this patch does nothing to improve it)

Signed-off-by: Adam Litke <agl@us.ibm.com>
Acked-by: William Irwin <bill.irwin@oracle.com>
Cc: dean gaudet <dean@arctic.org>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Andi Kleen <ak@suse.de>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Jan Kara authored
We have to take care that when we call udf_discard_prealloc() from udf_clear_inode() we have to write the inode ourselves afterwards (otherwise, some changes might be lost, leading to leakage of blocks, use of free blocks or improperly aligned extents).

Also udf_discard_prealloc() does two different things - it removes preallocated blocks and truncates the last extent to exactly match i_size. We move the latter functionality to udf_truncate_tail_extent(), call udf_discard_prealloc() when the last reference to a file is dropped, and call udf_truncate_tail_extent() when the inode is being removed from the inode cache (udf_clear_inode() call). We cannot call udf_truncate_tail_extent() earlier as a subsequent open+write would find the last block of the file mapped and happily write to the end of it, although the last extent says it's shorter.

[akpm@linux-foundation.org: Make checkpatch.pl happier]
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Eric Sandeen <sandeen@sandeen.net>
Cc: Cyrill Gorcunov <gorcunov@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Christoph Lameter authored
If ARCH_KMALLOC_MINALIGN is set to a value greater than 8 (SLUBs smallest kmalloc cache) then SLUB may generate duplicate slabs in sysfs (yes again) because the object size is padded to reach ARCH_KMALLOC_MINALIGN. Thus the size of the small slabs is all the same. No arch sets ARCH_KMALLOC_MINALIGN larger than 8 though except mips which for some reason wants a 128 byte alignment. This patch increases the size of the smallest cache if ARCH_KMALLOC_MINALIGN is greater than 8. In that case more and more of the smallest caches are disabled. If we do that then the count of the active general caches that is displayed on boot is not correct anymore since we may skip elements of the kmalloc array. So count them separately. This approach was tested by Havard yesterday. Signed-off-by: Christoph Lameter <clameter@sgi.com> Cc: Haavard Skinnemoen <hskinnemoen@atmel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
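To make the arithmetic concrete, here is a toy userspace loop (not SLUB code; the 128-byte figure is just the mips example from above) showing how every power-of-two cache below the minimum alignment would pad out to the same object size, which is why those entries have to be skipped and the boot-time count adjusted:

    #include <stdio.h>

    int main(void)
    {
            unsigned int minalign = 128;    /* e.g. the mips value mentioned above */
            unsigned int size, active = 0;

            for (size = 8; size <= 4096; size <<= 1) {
                    if (size < minalign) {
                            printf("kmalloc-%-4u skipped (would pad to %u)\n",
                                   size, minalign);
                            continue;
                    }
                    active++;
                    printf("kmalloc-%-4u active\n", size);
            }
            printf("%u general caches actually created\n", active);
            return 0;
    }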
-
Benjamin Herrenschmidt authored
Some changes done a while ago to avoid pounding on ptep_set_access_flags and update_mmu_cache in some race situations break sun4c which requires update_mmu_cache() to always be called on minor faults. This patch reworks ptep_set_access_flags() semantics, implementations and callers so that it's now responsible for returning whether an update is necessary or not (basically whether the PTE actually changed). This allow fixing the sparc implementation to always return 1 on sun4c. [akpm@linux-foundation.org: fixes, cleanups] Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Hugh Dickins <hugh@veritas.com> Cc: David Miller <davem@davemloft.net> Cc: Mark Fortescue <mark@mtfhpc.demon.co.uk> Acked-by: William Lee Irwin III <wli@holomorphy.com> Cc: "Luck, Tony" <tony.luck@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Matt Mackall authored
(As reported by linux@horizon.com) Folding is done to minimize the theoretical possibility of systematic weakness in the particular bits of the SHA1 hash output. The result of this bug is that 16 out of 80 bits are un-folded. Without a major new vulnerability being found in SHA1, this is harmless, but still worth fixing. Signed-off-by: Matt Mackall <mpm@selenic.com> Cc: <linux@horizon.com> Cc: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
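For anyone unfamiliar with the folding step, here is a minimal standalone sketch of the idea (byte-wise, with a dummy buffer standing in for real SHA-1 output; the kernel's extract path folds 32-bit words, but the principle is the same): each of the 80 output bits should be the XOR of two different hash bits, and the bug left 16 of them taken straight from the hash instead:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint8_t hash[20];       /* stand-in for a SHA-1 digest */
            uint8_t out[10];
            int i;

            for (i = 0; i < 20; i++)
                    hash[i] = (uint8_t)(i * 37 + 5);        /* dummy content */

            for (i = 0; i < 10; i++)        /* fold 160 bits down to 80 */
                    out[i] = hash[i] ^ hash[i + 10];

            for (i = 0; i < 10; i++)
                    printf("%02x", out[i]);
            printf("\n");
            return 0;
    }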
-
Jeff Dike authored
The x86_64 a.out.h got a definition of STACK_TOP_MAX, which interferes with the UML version. So, just undef it like STACK_TOP. Signed-off-by: Jeff Dike <jdike@linux.intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Jeff Dike authored
Distros seem to be removing PAGE_SIZE from asm/page.h. So, the libc side of UML should stop using it. I replace it with UM_KERN_PAGE_SIZE, which is defined to be the same as PAGE_SIZE on the kernel side of the house. I could also use getpagesize(), but it's more important that UML have the same value of PAGE_SIZE everywhere. It's conceivable that it could be built with a larger PAGE_SIZE, and use of getpagesize() would break that badly. PAGE_MASK got the same treatment, as it is closely tied to PAGE_SIZE. Signed-off-by: Jeff Dike <jdike@linux.intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
David Brownell authored
Update two points in the SPI interface documentation:

- Update description of the "chip stays selected after message ends" mode. In some cases it's required for correctness; it isn't just a performance tweak. (Yes: to use this mode on multi-device busses, another programming interface will be needed. One draft has been circulated already.)

- Clarify spi_setup(), highlighting that callers must ensure that no requests are queued (can't change configuration except between I/Os), and that the device must be deselected when this returns (which is a key part of why it's called during device init).

Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Mike Accetta authored
If raid1/repair (which reads all block and fixes any differences it finds) hits a read error, it doesn't reset the bio for writing before writing correct data back, so the read error isn't fixed, and the device probably gets a zero-length write which it might complain about. Signed-off-by: Neil Brown <neilb@suse.de> Cc: <stable@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
NeilBrown authored
1/ When resyncing a degraded raid10 which has more than 2 copies of each block, garbage can get synced on top of good data.

2/ We round the wrong way in part of the device size calculation, which can cause confusion.

Signed-off-by: Neil Brown <neilb@suse.de>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Alexey Dobriyan authored
fs/fuse/inode.c:658:3: error: Initializer entry defined twice
fs/fuse/inode.c:661:3: also defined here

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Acked-by: Miklos Szeredi <mszeredi@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
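The error above is the kind sparse (and gcc with -Woverride-init) emits for a duplicated designated initializer. A tiny self-contained illustration with made-up field names, not the actual fuse code - the later entry silently wins, which is why it is worth fixing rather than ignoring:

    #include <stdio.h>

    struct opts {
            int flag_a;
            int flag_b;
    };

    static const struct opts o = {
            .flag_a = 1,
            .flag_b = 2,
            .flag_a = 3,    /* "Initializer entry defined twice" */
    };

    int main(void)
    {
            printf("flag_a = %d\n", o.flag_a);      /* prints 3: the later entry wins */
            return 0;
    }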
-
Björn Steinbrink authored
Fix oops triggered during:

    echo 0 > /proc/sys/kernel/nmi_watchdog

The culprit seems to be commit 09198e68 ("[PATCH] i386: Clean up NMI watchdog code"). In two places, the parameters to release_{evntsel,perfctr}_nmi got interchanged during the cleanup. Fix the interchanged parameters.

Signed-off-by: Björn Steinbrink <B.Steinbrink@gmx.de>
Cc: Stephane Eranian <eranian@hpl.hp.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Michal Piotrowski <michal.k.k.piotrowski@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Rafael J. Wysocki authored
Fix oops caused by 'cat /dev/snapshot', reported by Arkadiusz Miskiewicz, and make it impossible to thaw tasks with the help of the swsusp userland interface while there is a snapshot image ready to save. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Acked-by: Pavel Machek <pavel@ucw.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Randy Dunlap authored
Fix section error (allyesconfig). The exit function is called from init, so functions that are called by the exit function cannot be marked __exit.

    WARNING: drivers/built-in.o(.text+0xe5bc6): Section mismatch: reference to .exit.text: (between 'toshiba_acpi_exit' and 'hci_raw')

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Len Brown <len.brown@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Paul Jackson authored
The cpuset code to present a list of tasks using a cpuset to user space could write to an array that it had kmalloc'd, after a kmalloc request of zero size. The problem was that the code didn't check for writes past the allocated end of the array until -after- the first write.

This is a race condition that is likely rare -- it would only show up if a cpuset went from being empty to having a task in it, during the brief time between the allocation and the first write.

Prior to roughly 2.6.22 kernels, this was also a benign problem, because a zero kmalloc returned a few usable bytes anyway, and no harm was done with the bogus write. With the 2.6.22 kernel changes that issue a warning if code tries to write to the location returned from a zero size allocation, this problem is no longer benign. This cpuset code would occasionally trigger that warning.

The fix is trivial -- check before storing into the array, not after, whether the array is big enough to hold the store.

Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: "Serge E. Hallyn" <serue@us.ibm.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: Herbert Poetzl <herbert@13thfloor.at>
Cc: Kirill Korotaev <dev@openvz.org>
Cc: Paul Menage <menage@google.com>
Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
Cc: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Paul Jackson <pj@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
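The "check before storing" pattern is easy to get wrong, so here is a minimal standalone sketch of it (hypothetical names, nothing to do with the actual cpuset code): the bound is tested before the store, so even a zero-length allocation is never written to:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
            size_t npids = 0;               /* the set may be empty */
            int *array = malloc(npids * sizeof(*array));
            size_t n = 0;
            int pid;

            for (pid = 100; pid < 103; pid++) {
                    if (n == npids)         /* check *before* the store */
                            break;
                    array[n++] = pid;
            }

            printf("stored %zu pids\n", n);
            free(array);
            return 0;
    }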
-
Hugh Dickins authored
It is not safe to use pte_update_defer() in ptep_test_and_clear_young(): its only user, /proc/<pid>/clear_refs, drops pte lock before flushing TLB. Use the safe though less efficient pte_update() paravirtop in its place. Likewise in ptep_test_and_clear_dirty(), though that has no current use. These are macros (header file dependency stops them from becoming inline functions), so be more liberal with the underscores and parentheses. Signed-off-by: Hugh Dickins <hugh@veritas.com> Cc: Zachary Amsden <zach@vmware.com> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Badari Pulavarty authored
shmid used to be stored as inode# for shared memory segments. Some of the proc-ps tools use this from /proc/pid/maps. Recent cleanups to newseg() changed it. This patch sets inode number back to shared memory id to fix breakage. Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com> Cc: "Albert Cahalan" <acahalan@gmail.com> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Christoph Lameter authored
The data structure to manage the information gathered about functions allocating and freeing objects is allocated when the list_lock has already been taken. We need to allocate with GFP_ATOMIC instead of GFP_KERNEL. Signed-off-by: Christoph Lameter <clameter@sgi.com> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Andy Whitcroft <apw@shadowen.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Björn Steinbrink authored
When disabled through /proc/sys/kernel/nmi_watchdog, the NMI watchdog uses the stop() method directly, which does not decrement the activity counter, leading to a BUG(). Use the wrapper function instead to fix that. Signed-off-by: Björn Steinbrink <B.Steinbrink@gmx.de> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Björn Steinbrink authored
At system boot time, the NMI watchdog no longer reserved its MSRs, allowing other subsystems to mess with them. Fix that. Signed-off-by: Björn Steinbrink <B.Steinbrink@gmx.de> Cc: Andi Kleen <ak@suse.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Paul Fulghum authored
Restore tty locked ioctl handler which was replaced with an unlocked ioctl handler in hung_up_tty_fops by the patch:

    commit e10cc1df
    Author: Paul Fulghum <paulkf@microgate.com>
    Date:   Thu May 10 22:22:50 2007 -0700

        tty: add compat_ioctl

This was reported in:

    [Bug 8473] New: Oops: 0010 [1] SMP

The bug is caused by switching to hung_up_tty_fops in do_tty_hangup. An ioctl call can be waiting on the BKL after testing for existence of the locked ioctl handler in the normal tty fops, but before calling the locked ioctl handler. If a hangup occurs at that point, the locked ioctl fop is NULL and an oops occurs.

(akpm: we can remove my debugging code from do_ioctl() now, but it'll be OK to do that for 2.6.23)

Signed-off-by: Paul Fulghum <paulkf@microgate.com>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
David Woodhouse authored
Commit 9b01bd5b introduced a compat_ioctl handler for RADEON_SETPARAM, the sole purpose of which was to handle the fact that on i386, alignof(uint64_t)==4. Unfortunately, this handler was installed for _all_ 64-bit architectures, instead of only x86_64 and ia64. And thus it breaks 32-bit compatibility on every other arch, where 64-bit integers are aligned to 8 bytes in 32-bit mode just the same as in 64-bit mode. Arnd has a cunning plan to use 'compat_u64' with appropriate alignment attributes according to the 32-bit ABI, but for now let's just make the compat_radeon_cp_setparam routine entirely disappear on 64-bit machines whose 32-bit compat support isn't for i386. It would be a no-op with compat_u64 anyway. Signed-off-by: David Woodhouse <dwmw2@infradead.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Dave Airlie <airlied@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
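The alignment point is easy to check empirically. A small standalone program (the struct below merely mirrors the shape of a "u32 followed by a u64" ioctl argument; it is not the real drm struct) built with -m32 on x86 typically prints an offset of 4 and size of 12, while 64-bit builds - and 32-bit builds on most other architectures - print 8 and 16:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct setparam_like {
            uint32_t param;
            uint64_t value;
    };

    int main(void)
    {
            /* on i386 the 64-bit member is only 4-byte aligned in structs */
            printf("offsetof(value) = %zu, sizeof = %zu\n",
                   offsetof(struct setparam_like, value),
                   sizeof(struct setparam_like));
            return 0;
    }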
-
Linus Torvalds authored
* master.kernel.org:/pub/scm/linux/kernel/git/bart/ide-2.6:
  ide-scsi: fix OOPS in idescsi_expiry()
  Resume from RAM on HPC nx6325 broken
-
Bartlomiej Zolnierkiewicz authored
drive->driver_data contains pointer to Scsi_Host not idescsi_scsi_t. Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
-
Rafael J. Wysocki authored
generic_ide_resume() should check if dev->driver is not NULL before applying to_ide_driver() to it. Fix that. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
-
Linus Torvalds authored
* 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6:
  [RXRPC] net/rxrpc/ar-connection.c: fix NULL dereference
  [TCP]: Fix logic breakage due to DSACK separation
  [TCP]: Congestion control API RTT sampling fix
-
- 15 Jun, 2007 1 commit
-
-
Paul Mundt authored
When building with memory hotplug enabled and cpu hotplug disabled, we end up with the following section mismatch:

    WARNING: mm/built-in.o(.text+0x4e58): Section mismatch: reference to .init.text:
    (between 'free_area_init_node' and '__build_all_zonelists')

This happens as a result of:

    -> free_area_init_node()
       -> free_area_init_core()
          -> zone_pcp_init() <-- all __meminit up to this point
             -> zone_batchsize() <-- marked as __cpuinit

This happens because CONFIG_HOTPLUG_CPU=n sets __cpuinit to __init, but CONFIG_MEMORY_HOTPLUG=y unsets __meminit.

Changing zone_batchsize() to __devinit fixes this. __devinit is the only thing that is common between CONFIG_HOTPLUG_CPU=y and CONFIG_MEMORY_HOTPLUG=y. In the long run, perhaps this should be moved to another section identifier completely. Without this, memory hot-add of offline nodes (via hotadd_new_pgdat()) will oops if CPU hotplug is not also enabled.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Yasunori Goto <y-goto@jp.fujitsu.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
--
 mm/page_alloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
-