1. 26 Dec, 2006 1 commit
  2. 18 Dec, 2006 4 commits
  3. 17 Dec, 2006 11 commits
  4. 15 Dec, 2006 5 commits
    • AGP: Allocate AGP pages with GFP_DMA32 by default · dcc6e343
      Linus Torvalds authored
      Not all graphic page remappers support physical addresses over the 4GB
      mark for remapping, so while some do (the AMD64 GART always did, and I
      just fixed the i965 to do so properly), we're safest off just forcing
      GFP_DMA32 allocations to make sure graphics pages get allocated in the
      low 32-bit address space by default.
      
      AGP sub-drivers that really care, and can do better, could just choose
      to implement their own allocator (or we could add another "64-bit safe"
      default allocator for their use), but quite frankly, you're not likely
      to care in practice.
      
      So for now, this trivial change means that we won't be allocating pages
      that we can't map correctly by mistake on x86-64.
      
      [ On traditional 32-bit x86, this could never happen, because GFP_KERNEL
        would never allocate any highmem memory anyway ]
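      
      A minimal sketch of the idea (illustrative, not necessarily the
      exact upstream hunk; the substance of the change is adding
      GFP_DMA32 to the mask used when allocating AGP pages):
      
          #include <linux/gfp.h>
          #include <linux/mm.h>
          
          static struct page *agp_alloc_page_dma32(void)
          {
                  /*
                   * GFP_DMA32 confines the page to the low 32-bit
                   * address space, so remappers that cannot handle
                   * physical addresses above 4GB still map it correctly.
                   */
                  return alloc_page(GFP_KERNEL | GFP_DMA32);
          }
      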
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
    • md: Fix md grow/size code to correctly find the maximum available space · 75ba82c6
      Neil Brown authored
      An md array can be asked to change the amount of each device that it is using,
      and in particular can be asked to use the maximum available space.  This
      currently only works if the first device is not larger than the rest, because
      'size' gets changed inside the loop, so the 'fit' test derived from it becomes
      wrong.  So check whether a 'fit' is required early, before 'size' is modified,
      and don't corrupt it.
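      
      In outline, the fix latches that decision up front (a simplified
      sketch with illustrative details; the mddev/rdev names and the
      ITERATE_RDEV macro follow the md code of this era):
      
          mdk_rdev_t *rdev;
          struct list_head *tmp;
          /* size == 0 means "use the maximum available space"; record
           * that request before the loop below starts rewriting size. */
          int fit = (size == 0);
          
          ITERATE_RDEV(mddev, rdev, tmp) {
                  sector_t avail = rdev->size;
          
                  if (fit && (size == 0 || size > avail))
                          size = avail;   /* fit to the smallest device */
                  if (avail < size)
                          return -ENOSPC; /* this device is too small */
          }
      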
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
    • softirq: remove BUG_ONs which can incorrectly trigger · e89da8fc
      Zachary Amsden authored
      It is possible to have tasklets get scheduled before softirqd has had a chance
      to spawn on all CPUs.  This is totally harmless; after success during action
      CPU_UP_PREPARE, action CPU_ONLINE will be called, which immediately wakes
      softirqd on the appropriate CPU to process the already pending tasklets.  So
      there is no danger of having a missed wakeup for any tasklets that were
      already pending.
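      
      The change itself is just the removal of two assertions from the
      CPU hotplug callback in kernel/softirq.c (a simplified sketch;
      error handling and thread bookkeeping are elided):
      
          case CPU_UP_PREPARE:
                  /*
                   * Removed, because tasklets may legitimately be
                   * pending this early:
                   *
                   *      BUG_ON(per_cpu(tasklet_vec, hotcpu).list);
                   *      BUG_ON(per_cpu(tasklet_hi_vec, hotcpu).list);
                   *
                   * CPU_ONLINE wakes softirqd on this CPU, which then
                   * drains anything already queued.
                   */
                  p = kthread_create(ksoftirqd, hcpu, "ksoftirqd/%d", hotcpu);
                  break;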
      
      In particular, i386 is affected by this during startup, and is visible when
      using a very large initrd; during the time it takes for the initrd to be
      decompressed, a timer IRQ can come in and schedule RCU callbacks.  It is also
      possible that resending of a hardware IRQ via a softirq triggers the same bug.
      
      Because of different timing conditions, this shows up in all emulators and
      virtual machines tested, including Xen, VMware, Virtual PC, and Qemu.  It is
      also possible to trigger on native hardware with a large enough initrd,
      although I don't have a reliable case demonstrating that.
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
    • dm crypt: Fix data corruption with dm-crypt over RAID5 · a26b7719
      Christophe Saout authored
      Fix a corruption issue with dm-crypt on top of software raid5: cancelled
      readahead bios that report no error but merely have BIO_UPTODATE cleared
      were being reported as successful reads to the higher layers, leaving
      random content in the buffer cache. Already fixed in 2.6.19.
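      
      The shape of the fix, as a simplified sketch of a bio completion
      handler (the name and the bi_end_io signature follow the kernels
      of this era; details are illustrative):
      
          static int crypt_endio(struct bio *clone, unsigned int done, int error)
          {
                  /*
                   * A cancelled readahead bio completes with error == 0
                   * but with BIO_UPTODATE cleared, so "no error" alone
                   * must not be treated as a successful read.
                   */
                  if (!error && !bio_flagged(clone, BIO_UPTODATE))
                          error = -EIO;
          
                  /* ... normal completion processing elided ... */
                  return error;
          }
      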
      Signed-off-by: Christophe Saout <christophe@saout.de>
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
    • Fix SUNRPC wakeup/execute race condition · bbb97831
      Christophe Saout authored
      The sunrpc scheduler contains a race condition that can let an RPC
      task end up being neither running nor on any wait queue. The race takes
      place between rpc_make_runnable (called from rpc_wake_up_task) and
      __rpc_execute under the following condition:
      
      First, __rpc_execute calls tk_action, which puts the task on some wait
      queue. The task is dequeued by another process before __rpc_execute
      continues its execution. While rpc_make_runnable is executing, exactly
      after it has set the task's `running' bit and before it has cleared the
      `queued' bit, __rpc_execute picks up execution, clears `running', and
      subsequently both functions fall through, each under the false
      assumption that the other took the job.
      
      Swapping rpc_test_and_set_running with rpc_clear_queued in
      rpc_make_runnable fixes that hole. This introduces another possible
      race condition that can be handled by checking for `queued' after
      setting the `running' bit.
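      
      In outline, the reordering looks like this (a simplified sketch
      built from the description above; wakeup, locking, and the async
      path are elided):
      
          static void rpc_make_runnable(struct rpc_task *task)
          {
                  /* Clear `queued' first, then claim `running', so the
                   * task is never seen as neither queued nor running. */
                  rpc_clear_queued(task);
                  if (rpc_test_and_set_running(task))
                          return; /* the executing thread still owns it */
                  /* ... wake the task up ... */
          }
          
          /* And inside the __rpc_execute loop: after re-taking the
           * `running' bit, re-check `queued' so a wakeup that raced
           * with us is not lost. */
          rpc_set_running(task);
          if (RPC_IS_QUEUED(task))
                  continue;       /* requeued meanwhile: keep going */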
      
      Bug noticed on a 4-way x86_64 system under Xen with an NFSv4 server
      on the same physical machine, apparently one of the few ways to hit
      this race condition at all.
      Signed-off-by: Christophe Saout <christophe@saout.de>
      Acked-by: Trond Myklebust <trond.myklebust@fys.uio.no>
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
  5. 14 Dec, 2006 19 commits