  1. 29 Dec, 2003 1 commit
    • [PATCH] slab reclaim accounting fix · 1cdf0eef
      Andrew Morton authored
      From: Manfred Spraul <manfred@colorfullife.com>
      
      slab_reclaim_pages is increased even if get_free_pages fails.  The attached
      patch moves the update to the correct position.
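
      A sketch of the shape of the fix (paraphrased, not the literal diff;
      the flag and counter names follow the reclaim-accounting entry later
      in this log): the counter is bumped only once the allocation is known
      to have succeeded.

          static void *kmem_getpages(kmem_cache_t *cachep, unsigned long flags)
          {
                  void *addr = (void *)__get_free_pages(flags, cachep->gfporder);

                  /* Update slab_reclaim_pages only on success; the buggy
                   * code incremented it before this check. */
                  if (addr && (cachep->flags & SLAB_RECLAIM_ACCOUNT))
                          atomic_add(1 << cachep->gfporder, &slab_reclaim_pages);
                  return addr;
          }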
  2. 22 Oct, 2003 1 commit
    • [PATCH] fix low-memory BUG in slab · 4136bb64
      Andrew Morton authored
      cache_grow() will call kmem_freepages() if the call to alloc_slabmgmt()
      fails.  But the pages have not been marked PageSlab at this stage, so
      kmem_freepages() goes BUG.
      
      It is more symmetrical to mark the pages as PageSlab in kmem_getpages().
      
      The patch also prunes a bunch of incorrect comments.
      
(PageSlab doesn't actually do anything: its only value is as a debug check.
I think the LKCD patch uses it.)
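
      A sketch of the symmetric arrangement described above (simplified;
      not the literal patch):

          static inline void *kmem_getpages(kmem_cache_t *cachep,
                                            unsigned long flags)
          {
                  void *addr = (void *)__get_free_pages(flags, cachep->gfporder);

                  if (addr) {
                          struct page *page = virt_to_page(addr);
                          int i = 1 << cachep->gfporder;

                          /* Mark the pages here, so every later error path --
                           * including a failed alloc_slabmgmt() -- can hand
                           * them straight to kmem_freepages(). */
                          while (i--)
                                  SetPageSlab(page++);
                  }
                  return addr;
          }

          static inline void kmem_freepages(kmem_cache_t *cachep, void *addr)
          {
                  struct page *page = virt_to_page(addr);
                  int i = 1 << cachep->gfporder;

                  while (i--) {
                          if (!TestClearPageSlab(page))
                                  BUG();  /* the debug check PageSlab exists for */
                          page++;
                  }
                  free_pages((unsigned long)addr, cachep->gfporder);
          }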
  3. 13 Oct, 2003 1 commit
    • [PATCH] avoid crashes due to unaligned stacks · d310cbfc
      Manfred Spraul authored
      This fixes the stack end detection properly, and verifies that the
      stack content printing does not overflow into the next page even
      partially.
      
      This is required especially for x86 BIOSes that misalign the stack,
      together with the page access debugging that unmaps unused kernel
      pages to check for valid accesses.
      
      Architectures with special needs (eg HPPA with stacks that grow up)
      can override the kernel stack end test with __HAVE_ARCH_KSTACK_END
      if they ever enable the anal slab debugging code.
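
      The generic stack-end test looks roughly like the sketch below; an
      architecture defines __HAVE_ARCH_KSTACK_END and supplies its own
      version if this default does not fit:

          #ifndef __HAVE_ARCH_KSTACK_END
          static inline int kstack_end(void *addr)
          {
                  /* Reliable end-of-stack detection even when a BIOS has
                   * misaligned the stack within the THREAD_SIZE area. */
                  return !(((unsigned long)addr + sizeof(void *)) &
                           (THREAD_SIZE - sizeof(void *)));
          }
          #endif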
  4. 12 Oct, 2003 1 commit
  5. 08 Oct, 2003 2 commits
  6. 07 Oct, 2003 1 commit
  7. 05 Oct, 2003 2 commits
    • [PATCH] kernel documentation fixes · 1820a80d
      Andrew Morton authored
      From: Michael Still <mikal@stillhq.com>
      
      The patch squelches build errors in the kernel-doc make targets by adding
      documentation for arguments that were previously undocumented, and
      updating argument names where they have changed.
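
      For reference, kernel-doc wants every parameter documented and the
      names kept in sync with the prototype; a representative (hypothetical)
      example of the expected style:

          /**
           * kmem_cache_alloc - allocate an object from this cache
           * @cachep: the cache to allocate from
           * @flags: see kmalloc()
           *
           * An undocumented or renamed argument here is what makes the
           * kernel-doc make targets emit errors.
           */
          void *kmem_cache_alloc(kmem_cache_t *cachep, int flags);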
    • [PATCH] Clean up MAX_NR_NODES/NUMNODES/etc. [1/5] · c72da22f
      Andrew Morton authored
      From: Matthew Dobson <colpatch@us.ibm.com>
      
      This starts a series of cleanups against the way in which the various
      architectures specify the number of nodes and memory zones.  We end up
      supporting up to 1024 memory zones on ia64, which is a recent requirement.
      
      Has been tested on ia32, ia64 (UMA), ppc64 (UMA) and NUMAQ.
      
      
      Make sure MAX_NUMNODES is defined in one and only one place.  Remove
      superfluous definitions.  Instead of defining MAX_NUMNODES in
      asm/numnodes.h, we define NODES_SHIFT there.  Then in linux/mmzone.h we
      turn that NODES_SHIFT value into MAX_NUMNODES.
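
      The arrangement described above, sketched (the shift value is
      illustrative):

          /* asm/numnodes.h -- per-arch, defines only the shift: */
          #define NODES_SHIFT     6       /* e.g. up to 64 nodes on this arch */

          /* linux/mmzone.h -- the one and only MAX_NUMNODES definition: */
          #define MAX_NUMNODES    (1 << NODES_SHIFT)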
  8. 23 Sep, 2003 1 commit
  9. 22 Sep, 2003 1 commit
    • [PATCH] misc fixes · fd5e34d6
      Andrew Morton authored
      - bio_release_pages() should have file-local scope.
      
      - don't use spaces in slab names in device mapper, enforce this henceforth
        in kmem_cache_create().
      
      - Fix alpha header leftover from cpumask_t conversion
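
      The new check in kmem_cache_create() can be as simple as this sketch
      (/proc/slabinfo is whitespace-separated, so a space in a cache name
      would corrupt the output):

          const char *p;

          for (p = name; *p; p++)
                  BUG_ON(isspace(*p));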
  10. 21 Sep, 2003 1 commit
    • [PATCH] Move slab objects to the end of the real allocation · e0c22e53
      Andrew Morton authored
      From: Manfred Spraul <manfred@colorfullife.com>
      
      The real memory allocation is usually larger than the actual object size:
      either due to L1 cache line padding, or due to page padding with
      CONFIG_DEBUG_PAGEALLOC.  Right now objects are placed at the beginning of
      the real allocation, but to trigger bugs it's better to move them to the
      end: that way accesses past the end of the allocation have a larger
      chance of hitting the (unmapped) next page.  The attached patch aligns
      the objects with the end of the real allocation.
      
      Actually it contains 4 separate changes:
      
      - Do not page-pad allocations that are <= SMP_CACHE_LINE_SIZE:
        page-padding such small allocations crashes.  Right now the limit is
        hardcoded to 128 bytes, but sooner or later an arch will appear with
        256-byte cache lines.
      
      - cleanup: redzone bytes are now accessed with inline helper functions
        instead of magic offsets scattered throughout slab.c.
      
      - main change: move objects to the end of the allocation - trivial after
        the cleanup.
      
      - Print the old redzone value if a redzone mismatch happens: this makes
        it simpler to figure out what happened [single-bit error, wrong
        redzone code, overwritten].
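
      A sketch of the placement change (field names hypothetical):

          /* Before: |object|unused padding...........|
           * After:  |...........unused padding|object|
           * so an overrun past the object leaves the real allocation --
           * and, with CONFIG_DEBUG_PAGEALLOC, hits an unmapped page --
           * much sooner. */
          void *objp = slab_mem + objnr * cachep->real_size
                       + (cachep->real_size - cachep->obj_size);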
  11. 03 Sep, 2003 2 commits
    • [PATCH] might_sleep() improvements · 5eebb6f2
      Andrew Morton authored
      From: Mitchell Blank Jr <mitch@sfgoth.com>
      
      This patch makes the following improvements to might_sleep():
      
       o Add a "might_sleep_if()" macro for when we might sleep only if some
         condition is met.  It's a bit tidier, and has an unlikely() in it.
      
       o Add might_sleep checks to skb_share_check() and skb_unshare() which
         sometimes need to allocate memory.
      
       o Make all architectures call might_sleep() in both down() and
         down_interruptible().  Before, only ppc, ppc64, and i386 did this
         check (sh checked in down() but not in down_interruptible()).
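
      A minimal sketch of the new macro, matching the description above:

          #define might_sleep_if(cond)                    \
                  do {                                    \
                          if (unlikely(cond))             \
                                  might_sleep();          \
                  } while (0)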
    • [PATCH] more slab page checking · 55308a20
      Andrew Morton authored
      Add checks for kfree() of a page which was allocated with __alloc_pages(),
      and for free_pages() of a page which was allocated with kmalloc().
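
      Both checks pivot on the PageSlab bit; roughly:

          /* in kfree(): the page must have come from the slab allocator */
          if (!PageSlab(virt_to_page(objp)))
                  BUG();

          /* in free_pages(): the page must NOT be a slab page */
          if (PageSlab(virt_to_page(addr)))
                  BUG();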
  12. 19 Aug, 2003 1 commit
    • [PATCH] slab: drain_array fix · d08e5b25
      Andrew Morton authored
      From: Philippe Elie <phil.el@wanadoo.fr>
      
      If drain_array_locked() is passed the `force' command, it must go in and
      empty the head array.
  13. 31 Jul, 2003 1 commit
  14. 18 Jul, 2003 2 commits
    • [PATCH] slab: print stuff when the wrong cache is used · 5808e76e
      Andrew Morton authored
      From: Manfred Spraul <manfred@colorfullife.com>
      
      Some extra diagnostics when someone passes the wrong object type into
      kmem_cache_free(), to help track down a bug which Manfred is chasing.
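
      A hypothetical sketch of the kind of diagnostic meant here (the
      page-to-cache lookup name is an assumption):

          kmem_cache_t *actual = GET_PAGE_CACHE(virt_to_page(objp));

          if (unlikely(actual != cachep))
                  printk(KERN_ERR "kmem_cache_free: object %p belongs to"
                         " cache %s, but was freed to cache %s\n",
                         objp, actual->name, cachep->name);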
    • [PATCH] misc fixes · 5acf32ee
      Andrew Morton authored
      - i460-agp linkage fix ("Luck, Tony" <tony.luck@intel.com>)
      
      - Don't reimplement offsetof() in hfs
      
      - NBD warning fix
      
      - Remove unneeded null-pointer test in journal_stop (Andreas Gruenbacher)
      
      - remove debug stuff in journal_dirty_metadata()
      
      - slab.c typo fixes (Lev Makhlis <mlev@despammed.com>)
      
      - In devfs_mk_cdev() error path, don't print `buf' until we've written
        something into it.  (Reported by Gergely Nagy <algernon@gandalph.mad.hu>)
      
      - Two ISA sound drivers had their kmalloc() args reversed (Spotted by Steve
        French)
  15. 02 Jul, 2003 3 commits
    • [PATCH] Per-cpu variable in mm/slab.c · 6f9199b5
      Rusty Russell authored
      Rather trivial conversion.  Tested on SMP.
    • [PATCH] Security hook for vm_enough_memory · bc75ac4f
      Andrew Morton authored
      From: Stephen Smalley <sds@epoch.ncsc.mil>
      
      This patch against 2.5.73 replaces vm_enough_memory with a security hook
      per Alan Cox's suggestion so that security modules can completely replace
      the logic if desired.
      
      Note that the patch changes the interface to follow the convention of the
      other security hooks, i.e.  return 0 if ok or -errno on failure (-ENOMEM in
      this case) rather than returning a boolean.  It also exports various
      variables and functions required for the vm_enough_memory logic.
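
      Call sites change from a boolean test to an errno-style one,
      schematically:

          /* old: boolean result */
          if (!vm_enough_memory(len >> PAGE_SHIFT))
                  return -ENOMEM;

          /* new: 0 on success, -errno (here -ENOMEM) via the security hook */
          if (security_vm_enough_memory(len >> PAGE_SHIFT))
                  return -ENOMEM;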
    • [PATCH] page unmapping debug · 98eb235b
      Andrew Morton authored
      From: Manfred Spraul <manfred@colorfullife.com>
      
      Manfred's latest page unmapping debug patch.
      
      The patch adds support for a special debug mode to both the page and the slab
      allocator: Unused pages are removed from the kernel linear mapping.  This
      means that now any access to freed memory will cause an immediate exception.
      Right now, read accesses remain totally unnoticed, and write accesses may
      be caught by the slab poisoning, but usually far too late for a
      meaningful bug report.
      
      The implementation is based on a new arch-dependent function,
      kernel_map_pages(), that removes the pages from the linear mapping.  It
      is currently implemented only for i386.
      
      Changelog:
      
      - Add kernel_map_pages() for i386, based on change_page_attr.  If
        DEBUG_PAGEALLOC is not set, then the function is an empty stub.  The stub
        is in <linux/mm.h>, i.e.  it exists for all archs.
      
      - Make change_page_attr irq safe.  Note that it's not fully irq safe due
        to the lack of the tlb flush ipi, but it's good enough for
        kernel_map_pages().  Another problem is that kernel_map_pages() is not
        permitted to fail, so PSE is disabled if DEBUG_PAGEALLOC is enabled.
      
      - use kernel_map_pages() for the page allocator.

      - use kernel_map_pages() for the slab allocator.
      
        I couldn't resist and added additional debugging support into mm/slab.c:
      
        * at kfree time, the complete backtrace of the kfree caller is stored
          in the freed object.
      
        * a ptrinfo() function that dumps all known data about a kernel virtual
          address: the pte value and, if it belongs to a slab cache, the cache
          name and additional info.
      
        * merging of common code: new helper functions obj_dbglen and obj_dbghdr
          for the conversion between the user-visible object pointers/lengths
          and the actual, internal addresses and length values.
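
      The stub arrangement described above, sketched:

          #ifdef CONFIG_DEBUG_PAGEALLOC
          /* arch-specific (i386 only so far): map/unmap the pages in the
           * kernel linear mapping */
          extern void kernel_map_pages(struct page *page, int numpages,
                                       int enable);
          #else
          static inline void kernel_map_pages(struct page *page, int numpages,
                                              int enable)
          {
          }
          #endif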
  16. 28 Jun, 2003 1 commit
    • [PATCH] kmem_cache_destroy() forgets to drain all objects · 1b3fa04f
      Andrew Morton authored
      From: Philippe Elie <phil.el@wanadoo.fr>
      
      kmem_cache_destroy() can fail with the error "slab error in
      kmem_cache_destroy(): cache `xxx': Can't free all objects", even though
      the cache user really freed all objects.

      This is because drain_array_locked() only frees 80% of the objects.
      
      Fix it by adding a parameter to drain_array_locked() telling it to drain
      100% of the objects.
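
      The shape of the fix, sketched (the 80%/100% split from the text above):

          static void drain_array_locked(kmem_cache_t *cachep,
                                         struct array_cache *ac, int force)
          {
                  /* normally leave ~20% cached; with `force', drain
                   * everything so kmem_cache_destroy() can succeed */
                  int tofree = force ? ac->avail : (ac->limit + 4) / 5;
                  ...
          }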
  17. 20 Jun, 2003 2 commits
  18. 10 Jun, 2003 1 commit
    • [PATCH] optimize fixed-sized kmalloc calls · 95203fe7
      Andrew Morton authored
      From: Manfred Spraul and Brian Gerst
      
      The patch performs the kmalloc cache lookup for constant kmalloc calls at
      compile time.  The idea is that the loop in kmalloc takes a significant
      amount of time, and for kmalloc(4096,GFP_KERNEL), that lookup can happen
      entirely at compile time.
      
      A problem has been seen with gcc-3.2.2-5 from RedHat.  This code:
      
          if(__builtin_constant_p(size)) {
                if(size < 32) return kmem_cache_alloc(...);
                if(size < 64) return kmem_cache_alloc(...);
                if(size < 96) return kmem_cache_alloc(...);
                if(size < 128) return kmem_cache_alloc(...);
                ...
          }
      
      doesn't work: gcc folds only the first two or three comparisons at
      compile time, and then suddenly emits runtime code for the rest.
      
      But we did it that way anyway.  Apparently it's fixed in later compilers.
  19. 06 Jun, 2003 3 commits
    • [PATCH] Move cpu notifiers et al to cpu.h · 542f238e
      Rusty Russell authored
      Trivial patch: when these were introduced cpu.h didn't exist.
    • [PATCH] misc fixes · 2ad69038
      Andrew Morton authored
      - Add comment about slab ctor behaviour (Ingo Oeser)
      
      - mm/slab.c:fprob() shows up in profiles a lot.  Rename it to something more
        meaningful.
      
      - fatfs printk warning fix (Randy Dunlap)
      
      - give the time interpolator list and lock file-static scope (hch)
    • [PATCH] kmalloc_percpu: interface change · 32028c70
      Andrew Morton authored
      From: Rusty Russell <rusty@rustcorp.com.au>
      
      Several tweaks to the kmalloc_percpu()/kfree_percpu() interface, to
      allow future implementations to be more flexible, and to make it easier
      to use now that we can see how it's actually being used.
      
      1) No flags argument: GFP_ATOMIC doesn't make much sense,
      
      2) Explicit alignment argument, so we don't have to give SMP_CACHE_BYTES
         alignment always,
      
      3) Zeros memory, since most callers want that and it's not entirely
         trivial,
      
      4) Convenient type-safe wrapper which takes a typename, and
      
      5) Rename to alloc_percpu/__alloc_percpu, since usage no longer matches
         kmalloc.
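
      The revised interface, as described above, sketched as declarations
      (the free-side name is an assumption):

          /* no gfp flags; explicit alignment; returns zeroed memory */
          void *__alloc_percpu(size_t size, size_t align);
          void free_percpu(const void *ptr);

          /* type-safe convenience wrapper */
          #define alloc_percpu(type) \
                  ((type *)__alloc_percpu(sizeof(type), __alignof__(type)))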
  20. 02 Jun, 2003 2 commits
    • [PATCH] Additional fields in slabinfo · a7faa309
      Andrew Morton authored
      From: Manfred Spraul <manfred@colorfullife.com>
      
      We need to present more information in /proc/slabinfo now that the
      magazine layer has been added.
      
      The slabinfo version has been updated to 2.0.
    • [PATCH] magazine layer for slab · dd6b3d93
      Andrew Morton authored
      From: Manfred Spraul <manfred@colorfullife.com>
      
      slab.c is not very efficient for passing objects between cpus.  Usually this
      is a rare event, but with network routing and cpu-affine NICs it is possible
      that nearly all allocation operations will occur on one cpu, and nearly all
      free operations on another cpu.
      
      This causes slab memory to be returned to slab's free page list rather than
      being cached on behalf of the particular slab cache.
      
      The attached patch solves that by adding an array of objects that is shared
      by all cpus.  Instead of multiple linked list operations per object, object
      pointers are now passed to/from the shared array (and thus between cpus) with
      memcpy operations.  On uniprocessor, the default array size is 0, because
      the shared array and the per-cpu head array are redundant.
      
      Additionally, the patch exports more statistics in /proc/slabinfo and
      makes the array sizes tunable by writing to /proc/slabinfo.  Both changes
      break backward compatibility: user space scripts must look at the
      slabinfo version and act accordingly.
      
      The size of the new shared array may be altered at runtime by writing to
      /proc/slabinfo.
      
      The new parameters for writing to /proc/slabinfo are:
      
      	#echo "cache-name limit batchcount shared" > /proc/slabinfo
      
      For example "size-4096 120 60 8" improves the slab efficiency for network
      routing, because the default values (24 12 8) are too small for the large
      series generated due to irq mitigation.  Note that only root has write
      permissions to /proc/slabinfo.
      
      These changes provided an overall 12% speedup in Robert Olson's gigE
      packet-forwarding testing on a 2-way system.
  21. 25 May, 2003 1 commit
    • [PATCH] slab: account for reclaimable caches · 8f542f30
      Andrew Morton authored
      We have a problem at present in vm_enough_memory(): it uses
      smoke-n-mirrors to try to work out how much memory can be reclaimed from
      the dcache and icache.  It sometimes gets it quite wrong, especially if
      the slab has internal fragmentation.  And it often does.
      
      So here we take a new approach.  Rather than trying to work out how many
      pages are reclaimable by counting up the number of inodes and dentries, we
      change the slab allocator to keep count of how many pages are currently used
      by slabs which can be shrunk by the VM.
      
      The creator of the slab marks the slab as being reclaimable at
      kmem_cache_create()-time.  Slab keeps a global counter of pages which are
      currently in use by thus-tagged slabs.
      
      Of course, we now slightly overestimate the amount of reclaimable memory,
      because not _all_ of the icache, dcache, mbcache and quota caches are
      reclaimable.
      
      But I think it's better to be a bit permissive rather than bogusly failing
      brk() calls as we do at present.
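
      Usage, sketched: the cache creator passes a flag at creation time, and
      vm_enough_memory() then reads a single counter instead of guessing (the
      dcache call site here is illustrative):

          dentry_cache = kmem_cache_create("dentry_cache",
                                           sizeof(struct dentry), 0,
                                           SLAB_RECLAIM_ACCOUNT, NULL, NULL);

          /* later, in vm_enough_memory(): */
          free += atomic_read(&slab_reclaim_pages);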
  22. 12 May, 2003 2 commits
  23. 07 May, 2003 3 commits
    • [PATCH] slab: additional debug checks · 09f95761
      Andrew Morton authored
      From: Manfred Spraul <manfred@colorfullife.com>
      
      Below is the promised patch for better slab debugging, against 2.5.68-mm4:
      
      Changes:
      
      - enable redzoning and last user accounting even for large objects, if
        that doesn't waste too much memory
      
      - document why FORCED_DEBUG doesn't enable redzoning & last-user
        accounting for some caches.
      
      - check the validity of the bufctl chains in a slab in __free_blocks.
        This detects double-free errors for the caches without redzoning.
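
      The bufctl walk can trap a double free roughly like this sketch (helper
      names as in 2.5-era slab.c; a double-freed object links the free chain
      back onto itself, so the chain ends up longer than the object count):

          int i, entries = 0;

          for (i = slabp->free; i != BUFCTL_END; i = slab_bufctl(slabp)[i]) {
                  entries++;
                  if (entries > cachep->num || i >= cachep->num)
                          goto bad;       /* corrupt chain: report and BUG */
          }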
    • [PATCH] account for slab reclaim in try_to_free_pages() · f31fd780
      Andrew Morton authored
      try_to_free_pages() currently fails to notice that it successfully freed slab
      pages via shrink_slab().  So it can keep looping and eventually call
      out_of_memory(), even though there's a lot of memory now free.
      
      And even if it doesn't do that, it can free too much memory.
      
      The patch changes try_to_free_pages() so that it will notice freed slab pages
      and will return when enough memory has been freed via shrink_slab().
      
      Many options were considered, but most of them were unacceptably
      inaccurate, intrusive or sleazy.  I ended up putting the accounting into
      a stack-local structure which is pointed to by current->reclaim_state.
      
      One reason for this is that we can cleanly resurrect the current->local_pages
      pool by putting it into struct reclaim_state.
      
      (current->local_pages was removed because the per-cpu page pools in the page
      allocator largely duplicate its function.  But it is still possible for
      interrupt-time allocations to steal just-freed pages, so we might want to put
      it back some time.)
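
      The pattern, sketched:

          struct reclaim_state {
                  unsigned long reclaimed_slab;   /* pages freed via shrink_slab() */
          };

          /* in the page allocator, around direct reclaim: */
          struct reclaim_state reclaim_state = { .reclaimed_slab = 0 };

          current->reclaim_state = &reclaim_state;
          did_some_progress = try_to_free_pages(zones, gfp_mask, order);
          current->reclaim_state = NULL;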
    • [PATCH] slab: initialisation cleanup and oops fix · 862fb282
      Andrew Morton authored
      From: Manfred Spraul <manfred@colorfullife.com>
      
      Attached is the promised cleanup/bugfix patch for the slab bootstrap:
      
      - kmem_cache_init & kmem_cache_sizes_init merged into one function,
        called after mem_init().  It's impossible to bring slab to an operational
        state without working gfp, thus the early partial initialization is not
        necessary.
      
      - g_cpucache_up set to FULL at the end of kmem_cache_init instead of the
        module init call.  This is a bugfix: slab was completely initialized,
        just the update of the state was missing.
      
      - some documentation for the bootstrap added.
      
      The minimal fix for the bug is a two-liner: move g_cpucache_up=FULL from
      cpucache_init to kmem_cache_sizes_init, but I want to get rid of
      kmem_cache_sizes_init, too.
  24. 12 Apr, 2003 1 commit
    • [PATCH] Fix kmalloc_sizes[] indexing · 830d6ef2
      Andrew Morton authored
      From: Brian Gerst and David Mosberger
      
      The previous fix to the kmalloc_sizes[] array didn't null-terminate the
      correct array.
      
      Fix that up, and also avoid running ARRAY_SIZE() against an array which is
      really a null-terminated list.
  25. 09 Apr, 2003 1 commit
    • [PATCH] null-terminate the kmalloc tables · 7ffbbaf2
      Andrew Morton authored
      From: David Mosberger <davidm@napali.hpl.hp.com>
      
      The cache_sizes array needs to be NULL terminated, otherwise an oversized
      kmalloc request runs off the end of the table.
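
      The lookup then walks the table to the terminator instead of relying on
      ARRAY_SIZE(); sketched (sizes illustrative):

          static struct cache_sizes {
                  size_t        cs_size;
                  kmem_cache_t *cs_cachep;
          } cache_sizes[] = {
                  {     32, NULL },
                  /* ... intermediate sizes ... */
                  { 131072, NULL },
                  {      0, NULL }        /* terminator */
          };

          struct cache_sizes *csizep;

          for (csizep = cache_sizes; csizep->cs_size; csizep++)
                  if (size <= csizep->cs_size)
                          break;
          if (!csizep->cs_size)
                  return NULL;            /* oversized request stops here */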
  26. 28 Mar, 2003 2 commits
    • [PATCH] slab: cache sizes cleanup · 860b3cb2
      Andrew Morton authored
      From: Brian Gerst <bgerst@didntduck.org>
      
      - Reduce code duplication by putting the kmalloc cache sizes into a header
        file.
      
      - Tidy up kmem_cache_sizes_init().
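
      The header-file approach expands one macro per size, so the cache table
      and every other user stay in sync; a sketch of the idea (file and macro
      names assumed):

          /* kmalloc_sizes.h -- the including file defines CACHE() first: */
          CACHE(32)
          CACHE(64)
          CACHE(128)
          /* ... up to the largest supported size ... */
          CACHE(131072)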
    • [PATCH] slab: fix off-by-one in size calculation · 16d99651
      Andrew Morton authored
      From: Manfred Spraul <manfred@colorfullife.com>
      
      Brian spotted a stupid bug in the slab initialization:
      
      If multiple objects fit into one cacheline, then the allocator ignores
      SLAB_HWCACHE_ALIGN and squeezes the objects into the same cacheline.  The
      implementation contains an off-by-one error and thus doesn't work
      correctly: for Athlon-optimized kernels, the 32-byte slab uses 64 bytes
      of memory.
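
      The intended packing, sketched: keep halving the alignment while two
      objects still fit, which needs <= rather than < at the boundary
      (illustrative, not the literal code):

          /* e.g. 64-byte cache lines, 32-byte objects: */
          align = cache_line_size();      /* 64 */
          while (size <= align / 2)
                  align /= 2;             /* 32: two objects per line */
          size = ALIGN(size, align);      /* 32-byte slab uses 32 bytes again */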