30 Oct, 2002 (40 commits)
    • Ug. Synced up. · d295086c
      James Simmons authored
    • Merge bk://linux.bkbits.net/linux-2.5 · 32882f41
      James Simmons authored
      into maxwell.earthlink.net:/usr/src/linus-2.5
    • Linux v2.5.45. For real this time. · b1b782f7
      Linus Torvalds authored
    • Merge master.kernel.org:/home/davem/BK/net-2.5 · dc85a09d
      Linus Torvalds authored
      into penguin.transmeta.com:/home/penguin/torvalds/repositories/kernel/linux
    • [PATCH] kNFSd: Convert nfsd to use a list of pages instead of one big buffer · a0e7d495
      Neil Brown authored
      This means:
        1/ We don't need an order-4 allocation for each nfsd that starts
        2/ We don't need an order-4 allocation in skb_linearize when
           we receive a 32K write request
        3/ It will be easier to incorporate the zero-copy read changes
      
      The pages are handed around using an xdr_buf (instead of svc_buf)
      much like the NFS client so future crypto code can use the same
      data structure for both client and server.
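
      For reference, the xdr_buf of this era looks roughly like the
      sketch below (reconstructed from the sunrpc headers; field details
      may differ):

      	struct xdr_buf {
      		struct iovec	head[1];   /* RPC header + small data */
      		struct page **	pages;     /* the large 'data' bit */
      		unsigned int	page_base; /* offset into first page */
      		unsigned int	page_len;  /* length of page data */
      		struct iovec	tail[1];   /* trailing data, padding */
      		unsigned int	buflen;    /* total allocated length */
      		unsigned int	len;       /* length of data in use */
      	};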
      
      The code assumes that most requests and replies fit in a single page.
      The exceptions are assumed to have some largish 'data' bit, and the
      rest must fit in a single page.
      The 'data' bits are file data, readdir data, and symlinks.
      There must be only one 'data' bit per request.
      This is all fine for nfs/nlm.
      
      This isn't complete:
        1/ NFSv4 hasn't been converted yet (it won't compile)
        2/ NFSv3 allows symlinks up to 4096 bytes, but the code will only
           support up to about 3800 at the moment
        3/ readdir responses are limited to about 3800 bytes.
      
      but I thought that patch was big enough, and the rest can come
      later.
      
      
      This patch introduces vfs_readv and vfs_writev as parallels to
      vfs_read and vfs_write.  This means there is a fair bit of
      duplication in read_write.c that should probably be tidied up...
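
      The new entry points presumably mirror vfs_read/vfs_write, along
      these lines (a sketch, not the exact prototypes from the patch):

      	ssize_t vfs_readv(struct file *file, const struct iovec *vec,
      			  unsigned long vlen, loff_t *pos);
      	ssize_t vfs_writev(struct file *file, const struct iovec *vec,
      			   unsigned long vlen, loff_t *pos);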
    • [PATCH] kNFSd: nfsd_readdir changes. · 335c5fc7
      Neil Brown authored
      nfsd_readdir - the common readdir code for all versions of nfsd -
      contains a number of version-specific things with appropriate
      checks, and also does some xdr-encoding which rightly belongs
      elsewhere.
      
      This patch simplifies nfsd_readdir to do just the core stuff, and moves
      the version specifics into version specific files, and the xdr encoding
      into xdr encoding files.
    • [PATCH] kNFSd: Fix problem with buffer length with rpc/tcp · f319e5fa
      Neil Brown authored
      I forgot to add '1' for the record-length header in RPC/TCP.
      Thanks to Hirokazu Takahashi <taka@valinux.co.jp>
    • [PATCH] kNFSd: Make sure exports_open cleans up on failure. · 988d8f66
      Neil Brown authored
      Currently if the kmalloc in exports_open fails,
      the seq_file isn't seq_released.
      
      We now do the kmalloc first, and make sure to kfree
      if seq_open fails.
    • [PATCH] kNFSd: Fix nfs shutdown problem. · b9d189e5
      Neil Brown authored
      The 'unexport everything' that happens when the last nfsd thread
      dies was shutting down too much - things that should only be shut
      down on module unload.
    • [PATCH] Remove sole CONFIG_MULTIQUAD in kernel source · 23518c21
      Matthew Dobson authored
      There is one remaining instance of CONFIG_MULTIQUAD in the kernel source.
      
      Fix it to use the proper CONFIG_X86_NUMAQ instead.
    • [PATCH] md: factor out MD superblock handling code · d571b483
      Neil Brown authored
      Define an interface for interpreting and updating superblocks
      so we can more easily define new formats.
      
      With this patch, (almost) all superblock layout information is
      located in a small set of routines dedicated to superblock
      handling.  This will allow us to provide a similar set for
      a different format.
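
      Such an interface is naturally a small ops table, one per superblock
      format; a sketch of the idea (member names are illustrative, not
      necessarily those in the patch):

      	struct super_type {
      		char *name;
      		int  (*load_super)(mdk_rdev_t *rdev, mdk_rdev_t *refdev);
      		int  (*validate_super)(mddev_t *mddev, mdk_rdev_t *rdev);
      		void (*sync_super)(mddev_t *mddev, mdk_rdev_t *rdev);
      	};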
      
      The two exceptions are:
       1/ autostart_array, where the devices listed in the superblock
          are searched for.
       2/ raid5 'knows' the maximum number of devices for
          compute_parity.
      
      These will be addressed in a later patch.
    • Merge · 6932d2d5
      Linus Torvalds authored
    • [PATCH] x86-64 updates for 2.5.44 · d05e5732
      Andi Kleen authored
      A few updates for x86-64 in 2.5.44. Some of the bugs fixed were serious.
      
      - Don't count ACPI mappings in end_pfn. This shrinks mem_map a lot
        on many setups.
      - Fix mem= option. Remove custom mapping support.
      - Revert the per_cpu implementation to the generic version.  The
        optimized one that used %gs directly triggered too many toolkit
        problems and was a constant source of bugs.
      - Make sure pgd_offset_k works correctly for vmalloc mappings. This makes
        modules work again properly.
      - Export pci dma symbols
      - Export other symbols to make more modules work
      - Don't drop physical address bits >32bit on iommu free.
      - Add more prototypes to fix warnings
      - Resync pci subsystem with i386
      - Fix pci dma kernel option parsing.
      - Do PCI peer bus scanning after ACPI in case it missed some busses
        (that's a workaround - 2.5 ACPI seems to have some problems here that
        I need to investigate more closely)
      - Remove the .eh_frame on linking. This saves several hundred KB in the
        bzImage
      - Fix MTRR initialization. It works properly now on SMP again.
      - Fix kernel option parsing, it was broken by section name changes in
        init.h
      - A few other cleanups and fixes.
      - Fix nonatomic warning in ioport.c
    • [PATCH] hot-n-cold pages: free and allocate hints · 8d6282a1
      Andrew Morton authored
      Add a `cold' hint to struct pagevec, and teach truncate and page
      reclaim to use it.
      
      Empirical testing showed that truncate's pages tend to be hot, and
      page reclaim's are certainly cold.
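
      The hint is just an extra field in the pagevec, roughly (a sketch;
      the exact layout may differ):

      	struct pagevec {
      		unsigned nr;
      		unsigned cold;	/* the new hint: pages likely cache-cold */
      		struct page *pages[PAGEVEC_SIZE];
      	};

      Callers state the hint up front, e.g. pagevec_init(&pvec, 1) in the
      reclaim path and pagevec_init(&pvec, 0) in truncate.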
    • [PATCH] hot-n-cold pages: use cold pages for readahead · 5019ce29
      Andrew Morton authored
      It is usually the case that pagecache reads use busmastering hardware
      to transfer the data into pagecache.  This invalidates the CPU cache of
      the pagecache pages.
      
      So use cache-cold pages for pagecache reads, to avoid wasting
      cache-hot pages.
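
      The request for a cold page is made with the new __GFP_COLD
      allocation hint from this series; something like:

      	/* DMA will fill this page, so a cache-cold page costs nothing */
      	page = alloc_page(mapping->gfp_mask | __GFP_COLD);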
    • [PATCH] hot-n-cold pages: page allocator core · a206231b
      Andrew Morton authored
      Hot/Cold pages and zone->lock amortisation
    • [PATCH] hot-n-cold pages: bulk page freeing · 1d2652dd
      Andrew Morton authored
      Patch from Martin Bligh.
      
      Implements __free_pages_bulk().  Release multiple pages of a given
      order into the buddy all within a single acquisition of the zone lock.
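
      The point is a single acquisition of zone->lock for the whole list;
      schematically (a sketch, with the real argument lists abbreviated):

      	spin_lock_irqsave(&zone->lock, flags);
      	while (!list_empty(list)) {
      		page = list_entry(list->prev, struct page, lru);
      		list_del(&page->lru);
      		__free_pages_bulk(page, base, zone, order); /* lock held */
      	}
      	spin_unlock_irqrestore(&zone->lock, flags);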
      
      This also removes current->local_pages - the per-task list of pages
      (which only ever contained one page) that existed to prevent other
      tasks from stealing pages which this task had just freed up.
      
      Given that we're freeing into the per-cpu caches, that those are
      multipage caches, and that the scheduler is cpu-sticky, I think
      current->local_pages is no longer needed.
    • [PATCH] hot-n-cold pages: bulk page allocator · 38e419f5
      Andrew Morton authored
      This is the hot-n-cold-pages series.  It introduces a per-cpu lockless
      LIFO pool in front of the page allocator.  For three reasons:
      
      1: To reduce lock contention on the buddy lock: we allocate and free
         pages in, typically, 16-page chunks.
      
      2: To return cache-warm pages to page allocation requests.
      
      3: As infrastructure for a page reservation API which can be used to
         ensure that the GFP_ATOMIC radix-tree node and pte_chain allocations
         cannot fail.  That code is not complete, and does not absolutely
         require hot-n-cold pages.  It'll work OK though.
      
      We add two queues per CPU.  The "hot" queue contains pages which the
      freeing code thought were likely to be cache-hot.  By default, new
      allocations are satisfied from this queue.
      
      The "cold" queue contains pages which the freeing code expected to be
      cache-cold.  The cold queue is mainly for lock amortisation, although
      it is possible to explicitly allocate cold pages.  The readahead code
      does that.
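
      Structurally, each zone gains a small per-cpu pair of queues;
      roughly (a sketch of the layout):

      	struct per_cpu_pages {
      		int count;		/* pages on the list */
      		int low;		/* refill below this watermark */
      		int high;		/* drain above this watermark */
      		int batch;		/* refill/drain chunk size */
      		struct list_head list;
      	};

      	struct per_cpu_pageset {
      		struct per_cpu_pages pcp[2];	/* 0: hot, 1: cold */
      	} ____cacheline_aligned_in_smp;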
      
      I have been hot and cold on these patches for quite some time - the
      benefit is not great.
      
      - 4% speedup in Randy Hron's benching of the autoconf regression
        tests on a 4-way.  Most of this came from savings in pte_alloc and
        pmd_alloc: the pagetable clearing code liked the warmer pages (some
        architectures still have the pgt_cache, and can perhaps do away with
        them).
      
      - 1% to 2% speedup in kernel compiles on my 4-way and Martin's 32-way.
      
      - 60% speedup in a little test program which writes 80 kbytes to a
        file and ftruncates it to zero again.  Ran four instances of that on
        4-way and it loved the cache warmth.
      
      - 2.5% speedup in Specweb testing on 8-way
      
      - The thing which won me over: an 11% increase in throughput of the
        SDET benchmark on an 8-way PIII:
      
      	with hot & cold:
      
      	RESULT for 8 users is 17971    +12.1%
      	RESULT for 16 users is 17026   +12.0%
      	RESULT for 32 users is 17009   +10.4%
      	RESULT for 64 users is 16911   +10.3%
      
      	without:
      
      	RESULT for 8 users is 16038
      	RESULT for 16 users is 15200
      	RESULT for 32 users is 15406
      	RESULT for 64 users is 15331
      
        SDET is a very old SPEC test which simulates a development
        environment with a large number of users.  Lots of users running a
        mix of shell commands, basically.
      
      
      These patches were written by Martin Bligh and myself.
      
      This one implements rmqueue_bulk() - a function for removing multiple
      pages of a given order from the buddy lists.
      
      This is for lock amortisation: take the highly-contended zone->lock
      with less frequency, do more work once it has been acquired.
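
      The shape of rmqueue_bulk (a sketch; the page list linkage and the
      exact signature are assumptions):

      	static int rmqueue_bulk(struct zone *zone, unsigned int order,
      				unsigned long count, struct list_head *list)
      	{
      		unsigned long flags;
      		int i, allocated = 0;
      		struct page *page;

      		spin_lock_irqsave(&zone->lock, flags);
      		for (i = 0; i < count; i++) {
      			page = __rmqueue(zone, order);	/* buddy removal */
      			if (page == NULL)
      				break;
      			allocated++;
      			list_add_tail(&page->lru, list);
      		}
      		spin_unlock_irqrestore(&zone->lock, flags);
      		return allocated;
      	}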
    • [PATCH] percpu: convert global page accounting · afce7191
      Andrew Morton authored
      Convert global page state accounting to use per-cpu storage
      
      (I think this code remains a little buggy, btw.  Note how I do
      
      	per_cpu(page_states, cpu).member += (delta);
      
      This gets done at interrupt time and hence is assuming that
      the "+=" operation on a ulong is atomic wrt interrupts on
      all architectures. How do we feel about that assumption?)
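
      (The accessor behind that is presumably something like:

      	#define mod_page_state(member, delta)	do {		\
      		preempt_disable();				\
      		per_cpu(page_states, smp_processor_id()).member += (delta); \
      		preempt_enable();				\
      	} while (0)

      which disables preemption but not interrupts - hence the question
      above.)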
    • [PATCH] percpu: create an EXPORT_PER_CPU_SYMBOL() macro · 999eac41
      Andrew Morton authored
      This is needed so that per-cpu information in the core kernel can be
      accessed from modules.
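
      Per-cpu variables of this era are name-mangled with a per_cpu__
      prefix, so the macro can reduce to an ordinary export; roughly:

      	#define EXPORT_PER_CPU_SYMBOL(var) EXPORT_SYMBOL(per_cpu__##var)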
    • [PATCH] percpu: convert buffer.c · e252fb96
      Andrew Morton authored
      Patch from Dipankar Sarma <dipankar@in.ibm.com>
      
      This patch makes per_cpu bh_accounting safe for cpu_possible
      allocation by using cpu notifiers.
    • [PATCH] percpu: convert softirqs · c1bf37e9
      Andrew Morton authored
      Patch from Dipankar Sarma <dipankar@in.ibm.com>
      
      This patch makes per_cpu tasklet vectors safe for cpu_possible
      allocation by using CPU notifiers.
    • [PATCH] percpu: convert timers · cf228cdc
      Andrew Morton authored
      Patch from Dipankar Sarma <dipankar@in.ibm.com>
      
      This patch changes the per-CPU data in timer management (tvec_bases)
      to use the per_cpu data area and makes it safe for cpu_possible
      allocation by using CPU notifiers.  End result - saving space.
      
      Depends on cpu_possible patch.
    • [PATCH] percpu: convert RCU · c12e16e2
      Andrew Morton authored
      Patch from Dipankar Sarma <dipankar@in.ibm.com>
      
      This patch converts RCU per_cpu data to use the per_cpu data area
      and makes it safe for cpu_possible allocation by using CPU
      notifiers.
    • [PATCH] percpu: fix compile warning for UP builds · 0c83f291
      Andrew Morton authored
      A typical construct is:
      
      	int cpu = get_cpu();
      
      	foo = per_cpu(bar, cpu);
      	put_cpu();
      
      but this generates a compiler warning on uniprocessor builds: unused
      variable `cpu'.
      
      Add a dummy ref to `cpu' to per_cpu() to prevent this.
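
      On UP, the macro can evaluate and discard its cpu argument; roughly:

      	/* UP: `cpu' is evaluated (so the variable is used) and ignored */
      	#define per_cpu(var, cpu)	((void)(cpu), var)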
    • [PATCH] percpu: balance_dirty_pages ratelimit counters · f98bf5ff
      Andrew Morton authored
      Convert balance_dirty_pages_ratelimited() to use percpu storage
      for the ratelimiting counters.
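
      A sketch of the pattern, assuming the DEFINE_PER_CPU/get_cpu_var
      helpers of this era (the counter name and threshold are
      illustrative):

      	static DEFINE_PER_CPU(int, ratelimits) = 0;

      	if (get_cpu_var(ratelimits)++ >= ratelimit_pages) {
      		__get_cpu_var(ratelimits) = 0;
      		put_cpu_var(ratelimits);
      		balance_dirty_pages(mapping);
      		return;
      	}
      	put_cpu_var(ratelimits);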
    • [UDP]: Delete buggy assertion. · 4c664ca5
      Alexey Kuznetsov authored
    • [PATCH] slab: Use CPU notifiers · 4524ea04
      Andrew Morton authored
      - allocate memory for cpu buffers in cpu_up_prepare
      
      - start the timer in cpu_online
      
      - free the memory for cpu buffers in cpu_up_cancel.
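
      The usual shape of such a notifier (a sketch; the slab-side work is
      implied, only the generic CPU notifier API is taken as given):

      	static int cpuup_callback(struct notifier_block *nfb,
      				  unsigned long action, void *hcpu)
      	{
      		long cpu = (long)hcpu;

      		switch (action) {
      		case CPU_UP_PREPARE:	/* allocate cpu buffers for `cpu' */
      			break;
      		case CPU_ONLINE:	/* start the reap timer on `cpu' */
      			break;
      		case CPU_UP_CANCELED:	/* undo CPU_UP_PREPARE */
      			break;
      		}
      		return NOTIFY_OK;
      	}

      	static struct notifier_block cpucache_notifier = {
      		.notifier_call = cpuup_callback,
      	};

      registered once at init time with register_cpu_notifier().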
    • [PATCH] slab: additional code cleanup · b464df2e
      Andrew Morton authored
      From Manfred Spraul
      
      - remove all typedefs, except kmem_bufctl_t.  It's a redefine of
        an int, i.e.  qualifies as tiny.
      
      - convert most macros to inline functions.
    • [PATCH] slab: Remove cache_chain_lock · 716b7ab1
      Andrew Morton authored
      Manfred added a new lock to protect the global list of slab caches.
      We already have a semaphore for that, but he needs locking from
      timer context.
      
      So here we remove that lock and just do a down_trylock() on the
      existing semaphore.  If that fails give up - we'll try again next timer
      tick.
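
      The timer-context path then becomes (a sketch):

      	static void cache_reap(void)
      	{
      		if (down_trylock(&cache_chain_sem))
      			return;	/* busy - try again on the next tick */
      		/* ... walk the cache chain, reap stale objects ... */
      		up(&cache_chain_sem);
      	}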
    • [PATCH] slab: Rework the slab timer code to use add_timer_on · bf19f75e
      Andrew Morton authored
      Manfred had all this weird code to schedule a kernel thread onto a
      different CPU just so that we could bind a timer to that CPU.
      
      Convert it all to use the new add_timer_on().
    • [PATCH] slab: reap timers · fd1425d5
      Andrew Morton authored
      - add a reap timer that returns stale objects from the cpu arrays
      - use list_for_each instead of while loops
      - /proc/slabinfo layout change, for a new field about reaping.
      
      Implementation:
      slab contains 2 caches of objects that might be usable to the rest
      of the system:
      - the cpu arrays contain objects that other cpus could use
      - the slabs_free list contains freeable slabs, i.e. pages that
        someone else might want.
      
      The patch now keeps track of accesses to the cpu arrays and to the
      free list.  If there has been no recent activity in one of the
      caches, part of the cache is flushed.
      
      Unlike kernels before 2.5.39, only a small part (~20%) is flushed
      each time.  The older kernel would bounce between refill and drain
      heavily under memory pressure:
      
      - kmem_cache_alloc: notices that there are no objects in the cpu
              cache, loads 120 objects from the slab lists, returns one.
              [assuming batchcount=120]
      - kmem_cache_reap is called due to memory pressure, finds 119
              objects in the cpu array and returns them to the slab lists.
      - repeat.
      
      In addition, the length of the free list is limited based on the free
      list accesses: a fixed "1" limit hurts the large object caches.
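
      (The "small part" is presumably computed along the lines of

      	tofree = (ac->limit + 4) / 5;	/* flush ~20% of the array */

      with `ac' the cpu array being reaped - a sketch, the exact rounding
      is a guess.)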
      
      That's the last part for now, next is: [not yet written]
      - cleanup: BUG_ON instead of if() BUG
      - OOM handling for enable_cpucaches
      - remove the unconditional might_sleep() from
              cache_alloc_debugcheck_before, and make it DEBUG-dependent.
      - initial NUMA support, just to collect some stats:
              Which percentage of the objects are freed on the wrong
              node? 0.1% or 20%?
    • [PATCH] slab: uninline poisoning checks · 1aabbecc
      Andrew Morton authored
      remove inline from the cache poison checks: the functions are not
      performance critical.
    • [PATCH] slab: cleanups and speedups · cad9cd51
      Andrew Morton authored
      - enable the cpu array for all caches
      
      - remove the optimized implementations for quick list access - with
        cpu arrays in all caches, the list access is now rare.
      
      - make the cpu arrays mandatory, this removes 50% of the conditional
        branches from the hot path of kmem_cache_alloc [1]
      
      - poisoning for objects with constructors
      
      Patch got a bit longer...
      
      I forgot to mention this: head arrays mean that some pages can be
      blocked due to objects in the head arrays, and not returned to
      page_alloc.c.  The current kernel never flushes the head arrays, this
      might worsen the behaviour of low memory systems.  The hunk that
      flushes the arrays regularly comes next.
      
      Detailed changelog: [to be read side by side with the patch]
      
      * documentation update
      
      * "growing" is not really needed: races between grow and shrink are
        handled by retrying.  [additionally, the current kernel never
        shrinks]
      
      * move the batchcount into the cpu array:
      	the old code contained a race during cpu cache tuning:
      		update batchcount [in cachep] before or after the IPI?
      	And NUMA will need it anyway.
      
      * bootstrap support: the cpu arrays are really mandatory, nothing
        works without them.  Thus a statically allocated cpu array is
        needed for starting the allocators.
      
      * move the full, partial & free lists into a separate structure, as a
        preparation for NUMA
      
      * structure reorganization: now the cpu arrays are the most important
        part, not the lists.
      
      * dead code elimination: remove "failures", nowhere read.
      
      * dead code elimination: remove "OPTIMIZE": not implemented.  The
        idea is to skip the virt_to_page lookup for caches with on-slab
        slab structures, and use (ptr&PAGE_MASK) instead.  The details
        are in Bonwick's paper.  Not fully implemented.
      
      * remove GROWN: the kernel never shrinks a cache, thus GROWN is
        meaningless.
      
      * bootstrap: starting the slab allocator is now a 3 stage process:
      	- nothing works, use the statically allocated cpu arrays.
      	- the smallest kmalloc allocator works, use it to allocate
      		cpu arrays.
      	- all kmalloc allocators work, use the default cpu array size
      
      * register a cpu notifier callback, and allocate the needed head
        arrays if a new cpu arrives
      
      * always enable head arrays, even for DEBUG builds.  Poisoning and
        red-zoning now happen before an object is added to the arrays.
        Insert enable_all_cpucaches into cpucache_init, there is no need
        for a separate function.
      
      * modifications to the debug checks due to the earlier calls of the
        dtor for caches with poisoning enabled
      
      * poison+ctor is now supported
      
      * squeezing 3 objects into a cacheline is hopeless, the FIXME is not
        solvable and can be removed.
      
      * add additional debug tests: check_irq_off(), check_irq_on(),
        check_spinlock_acquired().
      
      * move do_ccupdate_local nearer to do_tune_cpucache.  Should have
        been part of -04-drain.
      
      * additional object checks.  red-zoning is tricky: it's implemented
        by increasing the object size by 2*BYTES_PER_WORD.  Thus
        BYTES_PER_WORD must be added to objp before calling the
        destructor, constructor or before returning the object from
        alloc.  The poison functions add BYTES_PER_WORD internally.
      
      * create a flagcheck function, right now the tests are duplicated in
        cache_grow [always] and alloc_debugcheck_before [DEBUG only]
      
      * modify slab list updates: all allocs are now bulk allocs that try
        to get multiple objects at once, update the list pointers only at the
        end of a bulk alloc, not once per alloc.
      
      * might_sleep was moved into kmem_flagcheck.
      
      * major hotpath change:
      	- cc always exists, no fallback
      	- cache_alloc_refill is called with disabled interrupts,
      	  and does everything to recover from an empty cpu array.
      	  Far shorter & simpler __cache_alloc [inlined in both
      	  kmalloc and kmem_cache_alloc]; a sketch follows at the
      	  end of this message.
      
      * __free_block, free_block, cache_flusharray: main implementation of
        returning objects to the lists.  no big changes, diff lost track.
      
      * new debug check: too early kmalloc or kmem_cache_alloc
      
      * slightly reduce the sizes of the cpu arrays: keep the size < a
        power of 2, including batchcount, avail and now limit, for optimal
        kmalloc memory efficiency.
      
      That's it.  I even found 2 bugs while reading: dtors and ctors for
      verify were called with wrong parameters, with RED_ZONE enabled, and
      some checks still assumed that POISON and ctor are incompatible.
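
      As referenced in the changelog above, a sketch of the new hot path
      (helper names such as ac_data/ac_entry and the debug-free shape are
      assumptions):

      	static inline void *__cache_alloc(kmem_cache_t *cachep, int flags)
      	{
      		unsigned long save_flags;
      		struct array_cache *ac;
      		void *objp;

      		local_irq_save(save_flags);
      		ac = ac_data(cachep);	/* this cpu's array, always exists */
      		if (likely(ac->avail)) {
      			ac->touched = 1;
      			objp = ac_entry(ac)[--ac->avail];
      		} else {
      			objp = cache_alloc_refill(cachep, flags);
      		}
      		local_irq_restore(save_flags);
      		return objp;
      	}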
    • [PATCH] slab: remove spaces from /proc identifiers · 5bbb9ea6
      Andrew Morton authored
      From Manfred Spraul
      
      remove the space from the names of the DMA caches: the spaces make
      it impossible to tune the caches through /proc/slabinfo, and make
      parsing /proc/slabinfo difficult
    • [PATCH] slab: take the spinlock in the drain function. · fa652753
      Andrew Morton authored
      In 2.5, local_irq_disable() provides protection against
      smp_call_function() on all architectures.  (Or it will, not sure.  But
      davem says this is OK).
      
      So a spin_lock() within the smp_call_function() callback is now
      permitted, and we can remove/cleanup the workaround.
    • [PATCH] slab: reduce internal fragmentation · 69e74939
      Andrew Morton authored
      From Manfred Spraul
      
      If an object is freed from a slab, then move the slab to the tail of
      the partial list - this should increase the probability that the other
      objects from the same page are freed, too, and that a page can be
      returned to gfp later.
      
      In other words: if we just freed an object from this page, then make
      this page the *last* page which is eligible for new allocations,
      under the assumption that other objects in that same page are about
      to be freed up as well.
      
      The cpu arrays are now always in front of the list, i.e.  cache hit
      rates should not matter.
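
      The free-path list fixup then looks roughly like this (list and
      field names assumed):

      	list_del(&slabp->list);
      	if (slabp->inuse == 0)	/* page is now fully free */
      		list_add(&slabp->list, &cachep->lists.slabs_free);
      	else			/* partially used: last in line for allocs */
      		list_add_tail(&slabp->list, &cachep->lists.slabs_partial);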
    • [PATCH] slab: enable the cpu arrays on uniprocessor · 23797198
      Andrew Morton authored
      From Manfred Spraul
      
      Always enable the cpu arrays, even on uniprocessor.
      
      They provide LIFO ordering, which should improve cache hit rates.  And
      the array allocator is slightly faster than the list operations.
    • [PATCH] slab: cleanup: rename static functions · 91767dfd
      Andrew Morton authored
      From Manfred Spraul
      
      remove kmem_ from all static functions that are only used in slab.c,
      except kmem_cache_slabmgmt, which I've renamed to alloc_slabmgmt().
    • [PATCH] slab: add_timer_on: add a timer on a particular CPU · 22331dad
      Andrew Morton authored
      add_timer_on is like add_timer, except it takes a target CPU on which
      to add the timer.
      
      The slab code needs per-cpu timers for shrinking the per-cpu caches.
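
      Usage is as with add_timer, plus the target cpu (a sketch; the
      per-cpu timer variable and interval macro are illustrative):

      	struct timer_list *t = &per_cpu(reap_timers, cpu);

      	init_timer(t);
      	t->expires = jiffies + REAP_INTERVAL;
      	t->function = reap_timer_fnc;
      	add_timer_on(t, cpu);	/* arm on `cpu', not the calling cpu */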