- 15 Jan, 2005 2 commits
-
-
Arnaldo Carvalho de Melo authored
No need for two structs, follow the new inet_sock layout style. Signed-off-by: Arnaldo Carvalho de Melo <acme@conectiva.com.br> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Arnaldo Carvalho de Melo authored
No need for two structs, follow the new inet_sock layout style. Also introduce inet_sk_copy_descendant, to copy just the inet_sock descendant specific area from one sock to another. Signed-off-by: Arnaldo Carvalho de Melo <acme@conectiva.com.br> Signed-off-by: David S. Miller <davem@davemloft.net>
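The layout style and the descendant copy can be pictured with a small sketch; the example struct and helper below are illustrative (inet_sock and sock are the only real types assumed, and this is not the actual inet_sk_copy_descendant definition):

    #include <linux/string.h>

    /* New layout style: the protocol's sock struct embeds inet_sock as its
     * first member instead of keeping two parallel structs. */
    struct example_proto_sock {
            struct inet_sock inet;          /* must come first */
            int              private_state; /* descendant-specific area */
    };

    /* The idea behind inet_sk_copy_descendant: copy only the bytes past the
     * shared inet_sock header, leaving the destination's inet part intact. */
    static void example_copy_descendant(struct sock *to, const struct sock *from,
                                        size_t obj_size)
    {
            memcpy((char *)to + sizeof(struct inet_sock),
                   (const char *)from + sizeof(struct inet_sock),
                   obj_size - sizeof(struct inet_sock));
    }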
-
- 14 Jan, 2005 8 commits
-
-
David S. Miller authored
Merge bk://kernel.bkbits.net/acme/connection_sock-2.6 into nuts.davemloft.net:/disk1/BK/net-2.6
-
Adrian Bunk authored
- make some needlessly global code static
- remove the following unused functions:
  - exthdrs.c: ipv6_build_rthdr
  - exthdrs.c: ipv6_build_exthdr
  - exthdrs.c: ipv6_build_nfrag_opts
  - exthdrs.c: ipv6_build_frag_opts
- remove the following write-only global variables:
  - addrconf.c: inet6_dev_count
  - addrconf.c: inet6_ifa_count
- #if 0 the following unused global variable:
  - addrconf.c: in6addr_any
- remove the following unneeded EXPORT_SYMBOL's:
  - ipv6_syms.c: in6addr_any
  - ipv6_syms.c: in6addr_loopback
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Lennert Buytenhek authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Merge bk://kernel.bkbits.net/acme/connection_sock-2.6 into nuts.davemloft.net:/disk1/BK/net-2.6
-
Jens Axboe authored
noop doesn't follow the instructions on where to insert a request, because it uses q->queue_head instead of the assigned *insert. Clean it up so it's easier to read. Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
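A purely illustrative sketch of the bug pattern; the real elevator hook and its signature differ, only the list handling is the point, and the mini-queue type here is made up:

    #include <linux/list.h>

    /* Hypothetical mini-queue just to show the pattern. */
    struct example_queue {
            struct list_head queue_head;
    };

    /* The bug: the caller tells us where to insert via 'insert', but the old
     * noop code always used q->queue_head instead. */
    static void example_add_request(struct example_queue *q,
                                    struct list_head *rq_node,
                                    struct list_head *insert)
    {
            /* buggy: list_add(rq_node, &q->queue_head); */
            list_add(rq_node, insert);      /* honor the chosen position */
    }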
-
Jens Axboe authored
Doing some raid testing turned up a bug in the scsi mid layer, because the segment counts weren't correct. Initially I worried that we still had problems in this area, but it turns out that it is due to the raid usage of bio clones. Currently you have to hold on to the original bio as well, since the clone only maintains a pointer to the bio_vec inside the original bio. If the original bio is freed first, the clone will have garbage in its bio->bi_io_vec as soon as that memory is scribbled over.

I think the best fix is to maintain flexibility and duplicate the io_vec inside the clone as well. The attached patch does this.

Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
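A minimal sketch of the duplication idea, assuming the 2.6-era struct bio fields bi_io_vec, bi_vcnt and bi_max_vecs; the helper name and GFP choice are illustrative, not the actual bio_clone() change:

    #include <linux/bio.h>
    #include <linux/slab.h>
    #include <linux/string.h>

    /* Illustrative only: give the clone its own copy of the io_vec so it no
     * longer points into the original bio's memory and can outlive it. */
    static int example_clone_iovec(struct bio *clone, struct bio *orig)
    {
            struct bio_vec *bv;

            bv = kmalloc(orig->bi_max_vecs * sizeof(*bv), GFP_NOIO);
            if (!bv)
                    return -ENOMEM;

            memcpy(bv, orig->bi_io_vec, orig->bi_vcnt * sizeof(*bv));
            clone->bi_io_vec = bv;
            return 0;
    }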
-
Linus Torvalds authored
Merge http://linux-sound.bkbits.net/linux-sound into ppc970.osdl.org:/home/torvalds/v2.6/linux
-
Arnaldo Carvalho de Melo authored
No need for two structs, follow the new inet_sock layout style. Signed-off-by: Arnaldo Carvalho de Melo <acme@conectiva.com.br> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 13 Jan, 2005 28 commits
-
-
Linus Torvalds authored
Merge bk://kernel.bkbits.net/davem/sparc-2.6 into ppc970.osdl.org:/home/torvalds/v2.6/linux
-
Jaroslav Kysela authored
into suse.cz:/home/perex/bk/linux-sound/linux-sound
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Grant Grundler authored
Signed-off-by: Grant Grundler <grundler@parisc-linux.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Andrew Morton authored
Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Olaf Kirch authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Neil Horman authored
Signed-off-by: Neil Horman <nhorman@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Adrian Bunk authored
- make needlessly global code static
- dn_fib.c: remove the write-only global variable dn_fib_info_cnt
- dn_fib.c: remove the unused global function dn_fib_rt_message
- dn_neigh.c: remove the unused global function dn_neigh_pointopoint_notify
- dn_timer.c: remove the fast timer code that isn't used
Signed-off-by: David S. Miller <davem@davemloft.net>
-
-
Thomas Gleixner authored
Use the new lock initializers DEFINE_SPIN_LOCK and DEFINE_RW_LOCK. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net>
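For reference, a minimal sketch of what these initializer conversions look like; the macros in the tree are spelled DEFINE_SPINLOCK and DEFINE_RWLOCK, and the lock names here are made up:

    #include <linux/spinlock.h>

    /* Old style: rely on the static initializer values. */
    static spinlock_t old_lock  = SPIN_LOCK_UNLOCKED;
    static rwlock_t   old_rwlck = RW_LOCK_UNLOCKED;

    /* New style: declare and initialize in one step, without depending on
     * the internal layout of the lock types. */
    static DEFINE_SPINLOCK(new_lock);
    static DEFINE_RWLOCK(new_rwlck);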
-
David S. Miller authored
Merge http://linux-mh.bkbits.net/bluetooth-2.6 into nuts.davemloft.net:/disk1/BK/net-2.6
-
Hideaki Yoshifuji authored
Signed-off-by: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Jaroslav Kysela authored
-
David S. Miller authored
Merge bk://212.42.230.204/net-2.6-sched into nuts.davemloft.net:/disk1/BK/tgraf-2.6
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
The netlink_post stuff Arjan removed was the only user of this array. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
do_tcp_sendpages() needs to do skb->truesize et al. accounting just like tcp_sendmsg() does. tcp_sendmsg() works by gradually adjusting these accounting knobs as user data is copied into the packet. do_tcp_sendpages() works differently: when it allocates a new SKB it optimistically adds in tp->mss_cache to these values and then makes no adjustments at all as pages are tacked onto the packet.

This does not work at all if tcp_sendmsg() queues a packet onto the send queue and then do_tcp_sendpages() attaches pages onto the end of that SKB. We are left with a very inaccurate skb->truesize in that case. Consequently, if we were building a TSO frame and it gets partially ACK'd, then since skb->truesize is too small tcp_trim_skb() will potentially underflow its value and all the accounting becomes corrupted. This is usually seen as sk->sk_forward_alloc being negative at socket destroy time, which triggers an assertion check.

Signed-off-by: David S. Miller <davem@davemloft.net>
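A minimal sketch of the per-page accounting the fix moves toward; the helper name is made up and the real change is inline in do_tcp_sendpages(), this only shows which knobs get charged per page:

    /* Illustrative only: charge each attached page against the skb and the
     * socket as it is added, instead of guessing tp->mss_cache up front. */
    static void example_charge_page(struct sock *sk, struct sk_buff *skb, int copy)
    {
            skb->len             += copy;
            skb->data_len        += copy;
            skb->truesize        += copy;
            sk->sk_wmem_queued   += copy;
            sk->sk_forward_alloc -= copy;
    }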
-
David S. Miller authored
into nuts.davemloft.net:/disk1/BK/sparc-2.6
-
David S. Miller authored
into nuts.davemloft.net:/disk1/BK/net-2.6
-
Paul Mackerras authored
This patch fixes a problem I have been seeing since all the preempt changes went in, which is that ppc64 SMP systems would livelock randomly if preempt was enabled. It turns out that what was happening was that one cpu was spinning in spin_lock_irq (the version at line 215 of kernel/spinlock.c) madly doing preempt_enable() and preempt_disable() calls. The other cpu had the lock and was trying to set the TIF_NEED_RESCHED flag for the task running on the first cpu. That is an atomic operation which has to be retried if another cpu writes to the same cacheline between the load and the store, which the other cpu was doing every time it did preempt_enable() or preempt_disable().

I decided to move the thread_info flags field into the next cache line, since it is the only field that would regularly be modified by cpus other than the one running the task that owns the thread_info. (OK, possibly the `cpu' field would be on a rebalance; I don't know the rebalancing code, but that should be pretty infrequent.) Thus, moving the flags field seems like a good idea generally as well as solving the immediate problem.

For the record I am pretty unhappy with the code we use for spin_lock et al. with preemption turned on (the BUILD_LOCK_OPS stuff in spinlock.c). For a start we do the atomic op (_raw_spin_trylock) each time around the loop. That is going to be generating a lot of unnecessary bus (or fabric) traffic. Instead, after we fail to get the lock we should poll it with simple loads until we see that it is clear, and then retry the atomic op. Assuming a reasonable cache design, the loads won't generate any bus traffic until another cpu writes to the cacheline containing the lock.

Secondly we have lost the __spin_yield call that we had on ppc64, which is an important optimization when we are running under the hypervisor. I can't just put that in cpu_relax because I need to know which (virtual) cpu is holding the lock, so that I can tell the hypervisor which virtual cpu to give my time slice to. That information is stored in the lock variable, which is why __spin_yield needs the address of the lock.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
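A minimal sketch of the polling scheme suggested above (spin on plain loads, retry the atomic op only once the lock looks free); the function name is made up, the lock primitives follow the 2.6-era API, and this is not the BUILD_LOCK_OPS code:

    #include <linux/spinlock.h>
    #include <linux/preempt.h>

    /* Illustrative test-and-test-and-set loop: only _raw_spin_trylock() hits
     * the bus; while the lock is held we spin on cheap cached loads. */
    static void example_spin_lock(spinlock_t *lock)
    {
            preempt_disable();
            while (!_raw_spin_trylock(lock)) {
                    preempt_enable();
                    while (spin_is_locked(lock))
                            cpu_relax();    /* no bus traffic until the line changes */
                    preempt_disable();
            }
    }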
-
Paul Mackerras authored
This patch adds the PREEMPT_BKL config option for PPC64, shamelessly stolen from the i386 version. I have this turned on in the kernel on my desktop G5 and it seems to be just fine. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Paul Mackerras authored
This patch enables the DEBUG_PREEMPT config option for PPC64. I have this turned on on my desktop G5 and it isn't finding any problems. (It did find one problem, in flush_tlb_pending(), that I have just sent a patch for.) BTW, do we really need to restrict which architectures the config option is available on? Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Paul Mackerras authored
This patch mirrors the recent changes on x86 to call preempt_schedule rather than schedule in the exception exit path, in the case where the preempt_count is zero and the TIF_NEED_RESCHED bit is set. I'm a little concerned that this means that we have a window where interrupts are enabled and we are on our way into preempt_schedule, but preempt_count is still zero. Ingo's proposed preempt_schedule_irq would fix this, and I think something like that should go in. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
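The check being added lives in the ppc64 assembly exit path; a rough C rendering of the logic, purely for illustration (the function name is made up):

    /* Conceptual only: on the way out of an exception, reschedule through
     * preempt_schedule() when preemption is allowed and a resched is pending. */
    static void example_exception_exit_check(struct thread_info *ti)
    {
            if (preempt_count() == 0 && test_bit(TIF_NEED_RESCHED, &ti->flags))
                    preempt_schedule();
    }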
-
Paul Mackerras authored
The preempt debug stuff found a place where we were using smp_processor_id() without having preemption disabled, in flush_tlb_pending. This patch fixes it by using get_cpu_var and put_cpu_var instead of the __get_cpu_var variant. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
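A sketch of the fix pattern; the per-cpu variable, the index field and the flush helper are assumed from the ppc64 TLB code of that era, and the function body is illustrative rather than the actual diff:

    /* get_cpu_var() disables preemption while the per-cpu batch is in use,
     * where __get_cpu_var() assumes the caller already cannot migrate. */
    static void example_flush_tlb_pending(void)
    {
            struct ppc64_tlb_batch *batch = &get_cpu_var(ppc64_tlb_batch);

            if (batch->index)
                    __flush_tlb_pending(batch);     /* flush what we batched */
            put_cpu_var(ppc64_tlb_batch);
    }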
-
Jens Axboe authored
I stumbled across this the other day. The block layer only uses a single memory pool for request allocation, so it's very possible for, e.g., writes to have allocated them all at any point in time. If that is the case and the machine is low on memory, a reader attempting to allocate a request and failing in blk_alloc_request() can get stuck for a long time, since no one is there to wake it up.

The solution is either to add an extra mempool so both reads and writes have one, or to attempt to handle the situation. I chose the latter, to save the extra memory required for the additional mempool with BLKDEV_MIN_RQ statically allocated requests per queue. If a read allocation fails and we have no readers in flight for this queue, mark us rq-starved so that the next write being freed will wake up the sleeping reader(s). The same situation can of course happen for writes as well, it's just a lot more unlikely.

Signed-off-by: Jens Axboe <axboe@suse.de>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
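A minimal sketch of the starvation handling described above; the struct and helper names are made up, not the actual ll_rw_blk.c change:

    #include <linux/wait.h>

    /* Illustrative only: if an allocation fails with nothing in flight for
     * that direction, mark it starved so freeing any request wakes it up. */
    struct example_rl {
            int               count[2];    /* in-flight requests per direction */
            int               starved[2];  /* waiters with nothing in flight */
            wait_queue_head_t wait[2];
    };

    static void example_alloc_failed(struct example_rl *rl, int rw)
    {
            if (!rl->count[rw])
                    rl->starved[rw] = 1;
    }

    static void example_free_request(struct example_rl *rl, int rw)
    {
            rl->count[rw]--;
            wake_up(&rl->wait[rw]);
            if (rl->starved[rw ^ 1] && !rl->count[rw ^ 1]) {
                    rl->starved[rw ^ 1] = 0;
                    wake_up(&rl->wait[rw ^ 1]);     /* rescue the starved side */
            }
    }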
-
Jens Axboe authored
"ATA over Ethernet support" should not default to 'm', it doesn't make any sense for a special case driver to do so. Signed-off-by: Jens Axboe <axboe@suse.de> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Marcel Holtmann authored
Use the new lock initializers DEFINE_SPIN_LOCK and DEFINE_RW_LOCK. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
-
- 12 Jan, 2005 2 commits
-
-
Jaroslav Kysela authored
Documentation, ATIIXP driver: Added ac97_quirk option like the intel and via drivers. Signed-off-by: Takashi Iwai <tiwai@suse.de>
-
Jaroslav Kysela authored
IOCTL32 emulation: Fixed bugs with ctl_read/write ioctls. The struct size mismatch due to alignment is fixed. The code is also a bit optimized. Signed-off-by: Takashi Iwai <tiwai@suse.de>
-