02 Jul, 2003 (6 commits)
    • [PATCH] page unmapping debug · 98eb235b
      Andrew Morton authored
      From: Manfred Spraul <manfred@colorfullife.com>
      
      Manfred's latest page unmapping debug patch.
      
      The patch adds support for a special debug mode to both the page and the slab
      allocator: Unused pages are removed from the kernel linear mapping.  This
      means that now any access to freed memory will cause an immediate exception.
      Right now, read accesses go totally unnoticed, and write accesses may be
      caught by the slab poisoning, but usually far too late for a meaningful
      bug report.
      
      The implementation is based on a new arch-dependent function,
      kernel_map_pages(), that removes the pages from the linear mapping.
      Right now it is implemented only for i386.
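      
      A minimal sketch of the header pattern described here, assuming the stub
      lives in <linux/mm.h> as stated; the exact guards and placement in the
      real patch may differ:
      
          #ifdef CONFIG_DEBUG_PAGEALLOC
          /* Arch-specific (i386 only so far): map (enable=1) or unmap
           * (enable=0) the pages in the kernel linear mapping. */
          extern void kernel_map_pages(struct page *page, int numpages, int enable);
          #else
          static inline void
          kernel_map_pages(struct page *page, int numpages, int enable)
          {
                  /* Empty stub when DEBUG_PAGEALLOC is not set. */
          }
          #endif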
      
      Changelog:
      
      - Add kernel_map_pages() for i386, based on change_page_attr.  If
        DEBUG_PAGEALLOC is not set, then the function is an empty stub.  The stub
        is in <linux/mm.h>, i.e.  it exists for all archs.
      
      - Make change_page_attr() IRQ safe.  Note that it is not fully IRQ safe
        due to the lack of the TLB flush IPI, but it is good enough for
        kernel_map_pages().  Another problem is that kernel_map_pages() is not
        permitted to fail, so PSE is disabled if DEBUG_PAGEALLOC is enabled.
      
      - use kernel_map_pages() for the page allocator (see the sketch after
        this list).
      
      - use kernel_map_pages() for the slab allocator.
      
        I couldn't resist and added additional debugging support into mm/slab.c:
      
        * at kfree time, the complete backtrace of the kfree caller is stored
          in the freed object.
      
        * a ptrinfo() function that dumps all known data about a kernel virtual
          address: the pte value and, if it belongs to a slab cache, the cache
          name and additional info.
      
        * merging of common code: new helper functions obj_dbglen() and
          obj_dbghdr() for converting between the user-visible object
          pointers/lengths and the actual internal addresses and lengths.
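      
      A sketch of the allocator hook-up described in the list above, assuming
      the usual free and allocation paths in mm/page_alloc.c; the exact call
      sites in the patch may differ:
      
          /* On free: drop the pages from the linear mapping so that any
           * later access to them faults immediately. */
          kernel_map_pages(page, 1 << order, 0);
          
          /* On allocation: restore the mapping before handing the pages
           * back to the caller. */
          kernel_map_pages(page, 1 << order, 1);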
    • [PATCH] move_vma() make_pages_present() fix · 17003453
      Andrew Morton authored
      From: Hugh Dickins <hugh@veritas.com>
      
      mremap's move_vma() VM_LOCKED case was still wrong.
      
      If the do_munmap() unmaps a part of new_vma, then its vm_start and
      vm_end from before cannot both be the right addresses for the
      make_pages_present() range, and it may BUG() there.
      
      We need [new_addr, new_addr+new_len) to be locked down; but
      move_page_tables() already transferred the locked pages [new_addr,
      new_addr+old_len), and they are either held in a VM_LOCKED vma
      throughout, or temporarily in no vma: in neither case can they be
      swapped out, so there is no need to run over that range again.
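      
      A sketch of the resulting logic, assuming the variable names used in
      move_vma(); the exact code in the patch may differ:
      
          if (vm_flags & VM_LOCKED) {
                  mm->locked_vm += new_len >> PAGE_SHIFT;
                  /* Only the tail beyond old_len still needs faulting in;
                   * move_page_tables() already carried the rest across. */
                  if (new_len > old_len)
                          make_pages_present(new_addr + old_len,
                                             new_addr + new_len);
          }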
    • [PATCH] Allow modular DM · eeb96479
      Dagfinn Ilmari Mannsåker authored
      With the recent fixes, io_schedule needs to be exported for modular dm
      to work.
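      
      The export itself is a one-liner next to the definition (a sketch;
      io_schedule() is defined in kernel/sched.c in this era):
      
          EXPORT_SYMBOL(io_schedule);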
    • Linux 2.5.74 · 495c3da1
      Linus Torvalds authored
    • [PATCH] dm: remove bogus yields · 8732dde8
      Joe Thornber authored
      Replace a couple of bogus yield() calls with schedule() and
      io_schedule() respectively.
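      
      A hedged sketch of the io_schedule() case, with illustrative names
      ("md", its "pending" count, and its wait queue are not taken from the
      patch): waiting for in-flight I/O should sleep on a wait queue rather
      than spin in yield().
      
          DECLARE_WAITQUEUE(wait, current);
          
          add_wait_queue(&md->wait, &wait);
          for (;;) {
                  set_current_state(TASK_UNINTERRUPTIBLE);
                  if (!atomic_read(&md->pending))
                          break;
                  io_schedule();          /* was: yield(), a busy spin */
          }
          set_current_state(TASK_RUNNING);
          remove_wait_queue(&md->wait, &wait);
          
          /* The I/O completion path is assumed to wake md->wait when the
           * pending count drops to zero. */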
    • [PATCH] dm: fix memory leak · 2ea58325
      Joe Thornber authored