1. 04 Jul, 2002 20 commits
    • [PATCH] fix a writeback race · 2ab9665b
      Andrew Morton authored
      Fixes a bug in generic_writepages() and its cut-n-paste-cousin,
      mpage_writepages().
      
      The code was clearing PageDirty and then bailing out if it discovered
      the page was under writeback, which would cause the dirty bit to be
      lost.
      
      It's a very small window, but reversing the order, so that PageDirty is
      only cleared once we know for sure that IO will be started, fixes it up.
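
      A minimal, self-contained C sketch of the reordering (illustrative
      stand-in types, not the actual generic_writepages() code):

        #include <stdbool.h>
        #include <stdio.h>

        /* Illustrative stand-in for struct page; not the kernel's type. */
        struct fake_page {
                bool dirty;
                bool writeback;
        };

        /* Old order: the dirty bit is cleared even when we then skip the
         * page because it is under writeback, so the dirty state is lost. */
        static void writepage_racy(struct fake_page *page)
        {
                page->dirty = false;            /* ClearPageDirty */
                if (page->writeback)
                        return;                 /* bail out: dirty bit lost */
                /* ...start IO... */
        }

        /* Fixed order: only clear the dirty bit once we know for sure that
         * IO will be started. */
        static void writepage_fixed(struct fake_page *page)
        {
                if (page->writeback)
                        return;                 /* page stays dirty */
                page->dirty = false;            /* ClearPageDirty */
                /* ...start IO... */
        }

        int main(void)
        {
                struct fake_page p = { .dirty = true, .writeback = true };

                writepage_racy(&p);
                printf("old order: dirty=%d (lost)\n", p.dirty);
                p.dirty = true;
                writepage_fixed(&p);
                printf("new order: dirty=%d (kept)\n", p.dirty);
                return 0;
        }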
    • [PATCH] suppress more allocation failure warnings · 193ae036
      Andrew Morton authored
      The `page allocation failure' warning in __alloc_pages() is being a
      pain.  But I'm persisting with it...
      
      The patch renames PF_RADIX_TREE to PF_NOWARN, and uses it in a few
      places where allocation failures are known to happen.  These code
      paths are well-tested now and suppressing the warning is OK.
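
      A rough userspace model of the idea, with made-up names standing in
      for current->flags, PF_NOWARN and the allocator's warning:

        #include <stdio.h>
        #include <stdlib.h>

        #define PF_NOWARN 0x01            /* stand-in for the task flag */

        static unsigned long task_flags;  /* stand-in for current->flags */

        /* Stand-in allocator: warn on failure unless the caller opted out. */
        static void *alloc_pages_model(size_t size)
        {
                void *p = malloc(size);

                if (!p && !(task_flags & PF_NOWARN))
                        fprintf(stderr, "page allocation failure\n");
                return p;
        }

        /* A caller whose allocation failures are known and handled. */
        static void *try_alloc_quietly(size_t size)
        {
                unsigned long old = task_flags;
                void *p;

                task_flags |= PF_NOWARN;
                p = alloc_pages_model(size);
                task_flags = old;
                return p;
        }

        int main(void)
        {
                void *p = try_alloc_quietly(4096);

                free(p);
                return 0;
        }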
    • [PATCH] always update page->flags atomically · a2b41d23
      Andrew Morton authored
      move_from_swap_cache() and move_to_swap_cache() are playing with
      page->flags nonatomically.  The page is on the LRU at the time and
      another CPU could be altering page->flags concurrently.
      
      The patch converts those functions to use atomic operations.
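
      A small C11 sketch of the difference, using stdatomic operations as
      stand-ins for the kernel's set_bit()/clear_bit(): the read-modify-write
      of a shared flags word can overwrite another CPU's concurrent update
      unless it is done atomically.

        #include <stdatomic.h>

        #define PG_dirty      (1u << 0)   /* illustrative bit names */
        #define PG_referenced (1u << 1)

        struct fake_page {
                atomic_uint flags;        /* stand-in for page->flags */
        };

        /* Non-atomic RMW: the load and store are individually fine, but a
         * bit set by another CPU between them is silently overwritten. */
        static void clear_dirty_racy(struct fake_page *page)
        {
                unsigned int f = atomic_load(&page->flags);

                atomic_store(&page->flags, f & ~PG_dirty);
        }

        /* Atomic RMW: only PG_dirty is affected; concurrent updates to the
         * other bits are preserved. */
        static void clear_dirty_atomic(struct fake_page *page)
        {
                atomic_fetch_and(&page->flags, ~PG_dirty);
        }

        int main(void)
        {
                struct fake_page p = { PG_dirty | PG_referenced };

                clear_dirty_racy(&p);
                clear_dirty_atomic(&p);
                return 0;
        }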
      
      It also rationalises the number of bits which are cleared.  It's not
      really clear to me what page flags we really want to set to a known
      state in there.
      
      It had no right to go clearing PG_arch_1.  I'm now clearing PG_arch_1
      inside rmqueue() which is still a bit presumptuous.
      
      btw: shmem uses PAGE_CACHE_SIZE and swapper_space uses PAGE_SIZE.  I've
      been carefully maintaining the distinction, but it looks like shmem
      will break if we ever do make these values different.
      
      
      Also, __add_to_page_cache() was performing a non-atomic RMW against
      page->flags, under the assumption that it was a newly allocated page
      which no other CPU would look at.  Not true - this function is used for
      moving anon pages into swapcache.  Those anon pages are on the LRU -
      other CPUs can be performing operations against page->flags while
      __add_to_swap_cache is stomping on them.  This had me running around in
      circles for two days.
      
      So let's move the initialisation of the page state into rmqueue(),
      where the page really is new (could do it in page_cache_alloc,
      perhaps).
      
      The SetPageLocked() in __add_to_page_cache() is also rather curious.
      Seems OK for both pagecache and swapcache so I covered that with a
      comment.
      
      
      2.4 has the same problem.  Basically, add_to_swap_cache() can stomp on
      another CPU's manipulation of page->flags.  After a quick review of the
      code there, it is barely conceivable that a concurrent refill_inactive()
      could get its PG_referenced and PG_active bits scribbled on.  Rather
      unlikely because swap_out() will probably see PageActive() and bail
      out.
      
      Also, mark_dirty_kiobuf() could have its PG_dirty bit accidentally
      cleared (but try_to_swap_out() sets it again later).
      
      But there may be other code paths.  Really, I think this needs fixing
      in 2.4 - it's horrid.
    • [PATCH] Use __GFP_HIGH in mpage_writepages() · a263b647
      Andrew Morton authored
      In mpage_writepage(), use __GFP_HIGH when allocating the BIO: writeback
      is a memory reclaim function and is entitled to dip into the page
      reserves to get its IO underway.
    • [PATCH] resurrect __GFP_HIGH · 371151c9
      Andrew Morton authored
      This patch reinstates __GFP_HIGH functionality.
      
      __GFP_HIGH means "able to dip into the emergency pools".  However,
      somewhere along the line this got broken.  __GFP_HIGH ceased to do
      anything.  Instead, !__GFP_WAIT is used to tell the page allocator to
      try harder.
      
      __GFP_HIGH makes sense.  The concepts of "unable to sleep" and "should
      try harder" are quite separate, and overloading !__GFP_WAIT to mean
      "should access emergency pools" seems wrong.
      
      This patch fixes a problem in mempool_alloc().  mempool_alloc() tries
      the first allocation with __GFP_WAIT cleared.  If that fails, it tries
      again with __GFP_WAIT enabled (if the caller can support __GFP_WAIT).
      So it is currently performing an atomic allocation first, even though
      the caller said that they're prepared to go in and call the page
      stealer.
      
      I thought this was a mempool bug, but Ingo said:
      
      > no, it's not GFP_ATOMIC. The important difference is __GFP_HIGH, which
      > triggers the intrusive highprio allocation mode. Otherwise gfp_nowait is
      > just a nonblocking allocation of the same type as the original gfp_mask.
      > ...
      > what i've added is a bit more subtle allocation method, with both
      > performance and balancing-correctness in mind:
      >
      > 1. allocate via gfp_mask, but nonblocking
      > 2. if failure => try to get from the pool if the pool is 'full enough'.
      > 3. if failure => allocate with gfp_mask [which might block]
      >
      > there is performance data that this method improves bounce-IO performance
      > significantly, because even under VM pressure (when gfp_mask would block)
      > we can still use up to 50% of the memory pool without blocking (and
      > without endangering deadlock-free allocation). Ie. the memory pool is also
      > a fast 'frontside cache' of memory elements.
      
      Ingo was assuming that __GFP_HIGH was still functional.  It isn't, and the
      mempool design wants it.
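
      A rough userspace model of the three-step allocation strategy quoted
      above (hypothetical types and helpers, not the kernel's mempool API):

        #include <stddef.h>
        #include <stdlib.h>

        /* Minimal stand-in for a mempool: a stack of preallocated elements. */
        struct pool {
                void   **elements;
                size_t   curr;     /* elements currently available */
                size_t   min;      /* size the pool was created with */
        };

        /* Stand-ins for a nonblocking vs. possibly-blocking allocation. */
        static void *alloc_nonblocking(size_t size) { return malloc(size); }
        static void *alloc_maywait(size_t size)     { return malloc(size); }

        static void *pool_alloc(struct pool *pool, size_t size)
        {
                void *p;

                /* 1. Try a plain nonblocking allocation first. */
                p = alloc_nonblocking(size);
                if (p)
                        return p;

                /* 2. On failure, dip into the pool while it is still
                 *    'full enough' (here: more than half full). */
                if (pool->curr > pool->min / 2)
                        return pool->elements[--pool->curr];

                /* 3. Last resort: a possibly-blocking allocation. */
                return alloc_maywait(size);
        }

        int main(void)
        {
                void *slots[4] = { 0 };
                struct pool pool = { slots, 0, 4 };
                void *p = pool_alloc(&pool, 64);

                free(p);
                return 0;
        }

      The detail the patch cares about is that step 1 should be an ordinary
      nonblocking allocation, not a try-harder one: dipping into the
      emergency pools is what __GFP_HIGH, rather than !__GFP_WAIT, is
      supposed to control.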
    • [PATCH] set_page_dirty() in mark_dirty_kiobuf() · 9bd6f86b
      Andrew Morton authored
      Yet another SetPageDirty/set_page_dirty bugfix: mark_dirty_kiobuf needs
      to run set_page_dirty() so the page goes onto its mapping's dirty_pages
      list.
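
      A tiny illustrative model of the distinction, with stand-in types:
      setting only the page flag leaves writeback unaware of the page, while
      set_page_dirty() also puts it on its mapping's dirty list.

        #include <stdbool.h>
        #include <stddef.h>

        struct fake_page;

        /* Stand-in for address_space with a list of dirty pages. */
        struct fake_mapping {
                struct fake_page *dirty_pages;  /* singly linked, for brevity */
        };

        struct fake_page {
                bool dirty;
                struct fake_mapping *mapping;
                struct fake_page *next_dirty;
        };

        /* Like SetPageDirty(): flag only; writeback never finds the page. */
        static void set_dirty_flag_only(struct fake_page *page)
        {
                page->dirty = true;
        }

        /* Like set_page_dirty(): flag plus the mapping's dirty list. */
        static void set_page_dirty_model(struct fake_page *page)
        {
                if (page->dirty)
                        return;
                page->dirty = true;
                page->next_dirty = page->mapping->dirty_pages;
                page->mapping->dirty_pages = page;
        }

        int main(void)
        {
                struct fake_mapping m = { NULL };
                struct fake_page a = { false, &m, NULL };
                struct fake_page b = { false, &m, NULL };

                set_dirty_flag_only(&a);        /* invisible to writeback */
                set_page_dirty_model(&b);       /* now on m.dirty_pages */
                return (m.dirty_pages == &b) ? 0 : 1;
        }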
    • [PATCH] check for O_DIRECT capability in open(), not write() · 6ef5d4bb
      Andrew Morton authored
      For O_DIRECT opens we're currently checking that the fs supports
      O_DIRECT at write(2)-time.
      
      This is a forward-port of Andrea's patch which moves the check to
      open() time.  Seems more sensible.
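
      A rough sketch of the idea with made-up stand-in types (the real check
      looks at whether the filesystem can do direct IO at all): refuse the
      open() up front rather than failing every write().

        #include <errno.h>
        #include <stdbool.h>

        #define FAKE_O_DIRECT 0x1        /* stand-in for the open flag */

        /* Stand-in for the inode/mapping state the check needs. */
        struct fake_inode {
                bool has_direct_IO;      /* fs provides direct IO support? */
        };

        /* Check at open() time; returns 0 or a negative errno. */
        static int open_check_odirect(struct fake_inode *inode, int flags)
        {
                if ((flags & FAKE_O_DIRECT) && !inode->has_direct_IO)
                        return -EINVAL;  /* refuse the open up front */
                return 0;
        }

        int main(void)
        {
                struct fake_inode inode = { .has_direct_IO = false };

                return open_check_odirect(&inode, FAKE_O_DIRECT) == -EINVAL
                        ? 0 : 1;
        }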
    • [PATCH] set TASK_RUNNING in yield() · b5b6fa52
      Andrew Morton authored
      It seems that the yield() macro requires state TASK_RUNNING, but
      practically none of its callers remember to set it.
      
      The patch turns yield() into a real function which sets state
      TASK_RUNNING before scheduling.
    • [PATCH] set TASK_RUNNING in cond_resched() · b2bd3a26
      Andrew Morton authored
      do_select() does set_current_state(TASK_INTERRUPTIBLE), then calls
      __pollwait(), which calls __get_free_page(), and so the cond_resched()
      which I added to the pagecache reclaim code never returns.
      
      The patch makes cond_resched() more useful by setting current->state to
      TASK_RUNNING before scheduling.
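
      A small userspace model of the fix, with sched_yield() standing in for
      schedule() and a plain variable standing in for current->state:

        #include <sched.h>
        #include <stdbool.h>

        enum task_state { TASK_RUNNING, TASK_INTERRUPTIBLE };

        static enum task_state current_state = TASK_RUNNING;
        static bool need_resched = true;  /* pretend a resched is pending */

        /* Make sure the task is runnable again before giving up the CPU;
         * otherwise a caller that is between set_current_state() and its
         * own schedule() (like do_select()) goes to sleep in here. */
        static void cond_resched_model(void)
        {
                if (need_resched) {
                        current_state = TASK_RUNNING;
                        sched_yield();    /* stand-in for schedule() */
                }
        }

        int main(void)
        {
                current_state = TASK_INTERRUPTIBLE;  /* as in do_select() */
                cond_resched_model();
                return current_state == TASK_RUNNING ? 0 : 1;
        }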
    • [PATCH] add new list_splice_init() · f42e6ed8
      Andrew Morton authored
      A little cleanup: Most callers of list_splice() immediately
      reinitialise the source list_head after calling list_splice().
      
      So create a new list_splice_init() which does all that.
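
      A simplified, self-contained version of the pattern (the real helpers
      live in include/linux/list.h and are a little more careful); the new
      function is just the splice followed by the reinitialisation that
      callers used to do by hand.

        struct list_head {
                struct list_head *next, *prev;
        };

        static void INIT_LIST_HEAD(struct list_head *list)
        {
                list->next = list;
                list->prev = list;
        }

        /* Splice @list into @head (simplified).  @list's own head is left
         * stale, which is why callers had to reinitialise it. */
        static void list_splice(struct list_head *list, struct list_head *head)
        {
                struct list_head *first = list->next;
                struct list_head *last  = list->prev;

                if (first == list)
                        return;                 /* source list is empty */
                first->prev = head;
                last->next  = head->next;
                head->next->prev = last;
                head->next = first;
        }

        /* The new helper: splice, then make the source head reusable. */
        static void list_splice_init(struct list_head *list,
                                     struct list_head *head)
        {
                list_splice(list, head);
                INIT_LIST_HEAD(list);
        }

        int main(void)
        {
                struct list_head src, dst;

                INIT_LIST_HEAD(&src);
                INIT_LIST_HEAD(&dst);
                list_splice_init(&src, &dst);   /* src is reusable afterwards */
                return 0;
        }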
    • [PATCH] shmem fixes · e7c89646
      Andrew Morton authored
      A shmem cleanup/bugfix patch from Hugh Dickins.
      
      - Minor: in try_to_unuse(), only wait on writeout if we actually
        started new writeout.  Otherwise, there is no need because a
        wait_on_page_writeback() has already been executed against this page.
        And it's locked, so no new writeback can start.
      
      - Minor: in shmem_unuse_inode(): remove all the
        wait_on_page_writeback() logic.  We already did that in
        try_to_unuse(), and the page is locked so no new writeback can start.
      
      - Less minor: add a missing page_cache_release() to
        shmem_get_page_locked() in the uncommon case where the page was found
        to be under writeout.
    • [PATCH] remove swap_get_block() · b6a7f088
      Andrew Morton authored
      Patch from Christoph Hellwig removes swap_get_block().
      
      I was sort-of hanging onto this function because it is a standard
      get_block function, and perhaps it could be used to make swap use
      the regular filesystem I/O functions.  We don't want to do that, so
      kill it.
    • [PATCH] pdflush cleanup · f0e10c64
      Andrew Morton authored
      Writeback/pdflush cleanup patch from Steven Augart
      
      * Exposes nr_pdflush_threads as /proc/sys/vm/nr_pdflush_threads, read-only.
      
        (I like this - I expect that management of the pdflush thread pool
        will be important for many-spindle machines, and this is a neat way
        of getting at the info).
      
      * Adds minimum and maximum checking to the five writable pdflush
        and fs-writeback parameters.
      
      * Minor indentation fix in sysctl.c
      
      * mm/pdflush.c now includes linux/writeback.h, which prototypes
        pdflush_operation.  This is so that the compiler can
        automatically check that the prototype matches the definition.
      
      * Adds a few comments to existing code.
    • [PATCH] misc cleanups and fixes · 06be3a5e
      Andrew Morton authored
      - Comment and documentation fixlets
      
      - Remove some unneeded fields from swapper_inode (these are a
        leftover from when I had swap using the filesystem IO functions).
      
      - fix a printk bug in pci/pool.c: when dma_addr_t is 64 bit it
        generates a compile warning, and will print out garbage.  Cast it to
        unsigned long long (see the sketch below).
      
      - Convert some writeback #defines into enums (Steven Augart)
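
      The pci/pool.c printk fix follows the usual pattern for printing a
      type whose width varies between configurations: cast to the widest
      type and use the matching format.  A standalone illustration:

        #include <stdio.h>
        #include <stdint.h>

        /* dma_addr_t is 32-bit on some configs and 64-bit on others. */
        typedef uint64_t dma_addr_t;    /* assume the 64-bit case here */

        int main(void)
        {
                dma_addr_t dma = 0xdeadbeefULL;

                /* A "%x"/"%lx" format warns and prints garbage when
                 * dma_addr_t is 64 bit.  Cast to unsigned long long and
                 * use %llx so the format always matches the argument. */
                printf("dma handle = 0x%llx\n", (unsigned long long)dma);
                return 0;
        }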
    • [PATCH] debug check for leaked blockdev buffers · 5226cca6
      Andrew Morton authored
      Having just fiddled with the refcounts of blockdev buffers, I want some
      way of assuring that the code is correct and is not leaking
      buffer_heads.
      
      There's no easy way to do this: if a blockdev page has pinned buffers
      then truncate_complete_page just cuts it loose and we leak memory.
      
      The patch adds a bit of debug code to catch these leaks.  This code,
      PF_RADIX_TREE and buffer_error() need to be removed later on.
    • [PATCH] Remove ext3's buffer_head cache · 34cb9226
      Andrew Morton authored
      Removes ext3's open-coded inode and allocation bitmap LRUs.
      
      This patch includes a cleanup to ext3_new_block().  The local variables
      `bh', `bh2', `i', `j', `k' and `tmp' have been renamed to something
      more palatable.
    • [PATCH] Remove ext2's buffer_head cache · 7ef751c5
      Andrew Morton authored
      Remove ext2's open-coded bitmap LRUs.  Core kernel does this for it now.
    • [PATCH] per-cpu buffer_head cache · e7ae11b6
      Andrew Morton authored
      ext2 and ext3 implement a custom LRU cache of buffer_heads - the eight
      most-recently-used inode bitmap buffers and the eight MRU block bitmap
      buffers.
      
      I don't like them, for a number of reasons:
      
      - The code is duplicated between filesystems
      
      - The functionality is unavailable to other filesystems
      
      - The LRU only applies to bitmap buffers.  And not, say, indirects.
      
      - The LRUs are subtly dependent upon lock_super() for protection:
        without lock_super protection a bitmap could be evicted and freed
        while in use.
      
        And removing this dependence on lock_super() gets us one step on
        the way toward getting that semaphore out of the ext2 block allocator -
        it causes significant contention under some loads and should be a
        spinlock.
      
      - The LRUs pin 64 kbytes per mounted filesystem.
      
      Now, we could just delete those LRUs and rely on the VM to manage the
      memory.  But that would introduce significant lock contention in
      __find_get_block - the blockdev mapping's private_lock and page_lock
      are heavily used.
      
      So this patch introduces a transparent per-CPU bh lru which is hidden
      inside __find_get_block(), __getblk() and __bread().  It is designed to
      shorten code paths and to reduce lock contention.  It uses a seven-slot
      LRU.  It achieves a 99% hit rate in `dbench 64'.  It provides benefit
      to all filesystems.
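
      A simplified, self-contained model of such a small lookup cache (a
      single static array stands in for the per-CPU copies, and a bare block
      number stands in for a buffer_head; the seven slots match the LRU
      size described above):

        #include <stddef.h>

        #define BH_LRU_SIZE 7

        /* Stand-in for a buffer_head: just a block number here. */
        struct fake_bh {
                unsigned long block;
        };

        /* One instance stands in for the per-CPU copies. */
        static struct fake_bh *bh_lru[BH_LRU_SIZE];

        /* Look up a block in the LRU; on a hit, move it to the front. */
        static struct fake_bh *lru_lookup(unsigned long block)
        {
                for (int i = 0; i < BH_LRU_SIZE; i++) {
                        struct fake_bh *bh = bh_lru[i];

                        if (bh && bh->block == block) {
                                /* move-to-front keeps hot buffers cheap */
                                for (; i > 0; i--)
                                        bh_lru[i] = bh_lru[i - 1];
                                bh_lru[0] = bh;
                                return bh;
                        }
                }
                return NULL;    /* caller falls back to the slow path */
        }

        /* Install a buffer at the front, dropping the least recently used. */
        static void lru_install(struct fake_bh *bh)
        {
                for (int i = BH_LRU_SIZE - 1; i > 0; i--)
                        bh_lru[i] = bh_lru[i - 1];
                bh_lru[0] = bh;
        }

        int main(void)
        {
                struct fake_bh a = { 10 }, b = { 20 };

                lru_install(&a);
                lru_install(&b);
                return lru_lookup(10) == &a ? 0 : 1;
        }

      The kernel version additionally has to hold references on the cached
      buffers and keep the per-CPU array consistent while it is being
      touched; that bookkeeping is ignored in this sketch.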
      
      The next patches remove the open-coded LRUs from ext2 and ext3.
      
      Taken together, these patches are a code cleanup (300-400 lines gone),
      and they reduce lock contention.  Anton tested these patches on the
      32-way and demonstrated a throughput improvement of up to 15% on
      RAM-only dbench runs.  See http://samba.org/~anton/linux/2.5.24/dbench/
      
      Most of this benefit is from avoiding find_get_page() on the blockdev
      mapping, because the generic LRU copes with indirect blocks as well as
      bitmaps.
    • [PATCH] Fix 3c59x driver for some 3c566B's · fbaf74c8
      Andrew Morton authored
      Fix from Rahul Karnik and Donald Becker - some new 3c566B mini-PCI NICs
      refuse to power up the transceiver unless we tickle an undocumented bit
      in an undocumented register.  They worked this out by before-and-after
      diffing of the register contents when it was set up by the Windows
      driver.
    • [PATCH] handle BIO allocation failures in swap_writepage() · 27c02b00
      Andrew Morton authored
      If allocation of a BIO for swap writeout fails, mark the page dirty
      again to save it from eviction.
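
      A sketch of that error path with illustrative stand-in types (not the
      kernel's): if the BIO cannot be allocated, redirty and unlock the page
      rather than letting a still-dirty page look clean.

        #include <stdbool.h>
        #include <stdlib.h>

        /* Illustrative stand-ins, not the kernel's types. */
        struct fake_page { bool dirty; bool locked; };
        struct fake_bio  { struct fake_page *page; };

        static void submit_fake_bio(struct fake_bio *bio) { free(bio); }

        static int swap_writepage_model(struct fake_page *page)
        {
                struct fake_bio *bio = malloc(sizeof(*bio));

                if (!bio) {
                        /* Could not start IO: mark the page dirty again so
                         * reclaim does not treat it as clean and evict it. */
                        page->dirty = true;
                        page->locked = false;   /* unlock_page() */
                        return -1;              /* -ENOMEM in the kernel */
                }
                bio->page = page;
                page->dirty = false;            /* IO really will be started */
                submit_fake_bio(bio);
                return 0;
        }

        int main(void)
        {
                struct fake_page page = { .dirty = true, .locked = true };

                return swap_writepage_model(&page);
        }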
  2. 20 Jun, 2002 20 commits