  1. 29 Dec, 2003 1 commit
    • [PATCH] Fix sysenter disabling in vm86 mode · 783faefa
      Andrew Morton authored
      From: Brian Gerst <bgerst@didntduck.org>
      
      The current code disables sysenter when first entering vm86 mode, but does
      not disable it again when coming back to a vm86 task after a task switch.
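
      A minimal sketch of the missing step at task-switch time (the names and
      the vm86 test below are illustrative, not necessarily what the patch
      does): clearing the SYSENTER CS MSR for a vm86 task keeps a stray
      sysenter from entering the kernel with vm86 segment state, and restoring
      it for normal tasks re-enables the fast path.

        /* Hedged sketch, not the actual patch. */
        static inline void sysenter_check_vm86(struct task_struct *next)
        {
                if (next->thread.vm86_info)     /* illustrative vm86-task marker */
                        wrmsr(MSR_IA32_SYSENTER_CS, 0, 0);
                else
                        wrmsr(MSR_IA32_SYSENTER_CS, __KERNEL_CS, 0);
        }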
  2. 15 Oct, 2003 1 commit
    • [PATCH] update to microcode update driver · 2afc8160
      Tigran Aivazian authored
      This contains the following changes:
      
      a) changes from Intel to support the new microcode data format
         (backward compatible of course)
      
      b) changes from me to remove the no longer needed features of the driver,
         namely we don't need to keep a copy of applied microcode in kernel
         memory.
      
         This feature was hardly useful in the days of the regular devfs
         /dev/cpu/microcode file and now it is completely useless, so I removed
         it (after taking into account all the feedback on linux-kernel I
         received since the announcement of the intention to do this)
      
      These are rather critical because otherwise we can't really say Linux
      fully supports the very latest Intel cpus (which require microcode in
      the new format).
  3. 09 Oct, 2003 1 commit
    • [PATCH] Prefetch workaround for Athlon/Opteron · 9b7a76f4
      Andi Kleen authored
      This is the latest iteration of the workaround for the Athlon/Opteron
      prefetch erratum.  Sometimes the CPU would incorrectly report an
      exception on prefetch.
      
      This supersedes the previous dumb workaround of checking for AMD CPUs in
      prefetch().  That one bloated the kernel by several KB and led to lots
      of unnecessary checks in hot paths.
      
      Also, this one handles user space faults too, so the kernel can
      effectively isolate user space from having to care about this erratum.
      
      Instead, the workaround lives in the slow path of the exception handler
      (the check is only done when the kernel would otherwise trigger a
      segfault or crash anyway).
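
      For reference, the core of such a check is just an opcode test done in
      that slow path; a minimal sketch follows (stand-in helper name, and much
      simpler than the real decoder, which also walks instruction prefixes,
      validates segment bases and handles vm86 mode):

        /* Prefetches are the two-byte opcodes 0F 0D (3DNow! PREFETCH/PREFETCHW)
         * and 0F 18 (PREFETCHNTA/T0/T1/T2).  Stand-in helper, not the kernel's. */
        static int looks_like_prefetch(const unsigned char *insn)
        {
                return insn[0] == 0x0f && (insn[1] == 0x0d || insn[1] == 0x18);
        }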
      
      All the serious criticisms of the previous patches have been addressed.
      It checks segment bases now, handles vm86 mode, and avoids deadlocks
      when the prefetch exception happens while mmap_sem is held.
      
      This includes review and fixes from Jamie Lokier and Andrew Morton.
      Opcode decoder based on code from Richard Brunner.
  4. 21 Sep, 2003 1 commit
    • [PATCH] Move EISA_bus · 972b4a74
      Matthew Wilcox authored
      When I change the setting of CONFIG_EISA, everything rebuilds.  This is
      because EISA_bus is declared in <asm/processor.h> which is implicitly
      included by just about everything.  This is a silly place to declare it,
      so this patch moves it to include/linux/eisa.h.
      
      While I'm at it, I also move the variable definition to
      drivers/eisa/eisa-bus.c.  The rest of this patch is fixing up the fallout
      from having to include <linux/eisa.h> if you use EISA_bus.
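
      The end state described above is essentially just the following (a
      sketch of the two sides, assuming the variable keeps its plain int
      type):

        /* include/linux/eisa.h */
        extern int EISA_bus;    /* declaration, no longer in <asm/processor.h> */

        /* drivers/eisa/eisa-bus.c */
        int EISA_bus;           /* the single definition */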
  5. 09 Sep, 2003 1 commit
  6. 31 Aug, 2003 1 commit
    • [PATCH] disable prefetch on athlons · e2d3b22c
      Andrew Morton authored
      K7s (at least) are faulting in the prefetch instruction.  The AMD
      engineers have said they will be getting back to us on it, and the fix is
      looking complex, and nobody seems to be standing up to work on it.
      
      So hum.  The usual manifestation is an oops in hlist_for_each(), down in
      the VFS inode lookup code.  Disrupting our testers in this way is very bad,
      so this patch just disables prefetch on all AMD parts in a rather stupid
      way.
  7. 20 Aug, 2003 1 commit
    • [PATCH] IO port bitmap cleanups, x86-64 oops fix · 0b401e25
      Albert Cahalan authored
      This patch brings x86-64 and i386 closer together, eliminating an oops
      that LTP test ioperm02.c causes on x86-64.  An IO port permission bitmap
      must be followed by an extra 0xff.
      
      (Add comments to that effect, to avoid the problem in the future).
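
      A minimal sketch of the invariant those comments document (structure and
      helper names are illustrative, not the kernel's): whatever copies a
      permission bitmap into the TSS must also keep one extra all-ones byte
      after it.

        #include <string.h>

        #define IO_BITMAP_BYTES (65536 / 8)        /* one bit per I/O port */

        struct io_bitmap_with_tail {
                unsigned char bits[IO_BITMAP_BYTES];
                unsigned char tail;                /* must always remain 0xff */
        };

        static void install_io_bitmap(struct io_bitmap_with_tail *dst,
                                      const unsigned char *src)
        {
                memcpy(dst->bits, src, IO_BITMAP_BYTES);
                dst->tail = 0xff;                  /* the extra 0xff the CPU expects */
        }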
  8. 20 Jun, 2003 1 commit
    • [PATCH] show_stack() portability and cleanup patch · 0d5ff9d0
      Andrew Morton authored
      From: David Mosberger <davidm@napali.hpl.hp.com>
      
      This is an attempt at sanitizing the interface for stack trace dumping
      somewhat.  It's basically the last thing which prevents 2.5.x from working
      out-of-the-box for ia64.  ia64 apparently cannot reasonably implement the
      show_stack interface declared in sched.h.
      
      Here is the rationale: modern calling conventions don't maintain a frame
      pointer and it's not possible to get a reliable stack trace with only a stack
      pointer as the starting point.  You really need more machine state to start
      with.  For a while, I thought the solution was to pass a task pointer to
      show_stack(), but it turns out that this would negatively impact x86 because
      it's sometimes useful to show only portions of a stack trace (e.g., starting
      from the point at which a trap occurred).  Thus, this patch _adds_ the task
      pointer instead:
      
       extern void show_stack(struct task_struct *tsk, unsigned long *sp);
      
      The idea here is that show_stack(tsk, sp) will show the backtrace of task
      "tsk", starting from the stack frame that "sp" is pointing to.  If tsk is
      NULL, the trace will be for the current task.  If "sp" is NULL, all stack
      frames of the task are shown.  If both are NULL, you'll get the full trace of
      the current task.
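
      In other words, the four call-site combinations look like this
      (illustrative calls only):

        show_stack(tsk,  sp);    /* backtrace of "tsk", starting at frame "sp" */
        show_stack(tsk,  NULL);  /* all stack frames of "tsk"                  */
        show_stack(NULL, sp);    /* current task, starting at frame "sp"       */
        show_stack(NULL, NULL);  /* full trace of the current task             */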
      
      I _think_ this should make everyone happy.
      
      The patch also removes the declaration of show_trace() in linux/sched.h (it
      never was a generic function; some platforms, in particular x86, may want to
      update accordingly).
      
      Finally, the patch replaces the one call to show_trace_task() with the
      equivalent call show_stack(task, NULL).
      
      The patch below is for Alpha and i386, since I can (compile-)test those (I'll
      provide the ia64 update through my regular updates).  The other arches will
      break visibly and updating the code should be trivial:
      
      - add a task pointer argument to show_stack() and pass NULL as the first
        argument where needed
      
      - remove show_trace_task()
      
      - declare show_trace() in a platform-specific header file if you really
        want to keep it around
  9. 09 May, 2003 1 commit
  10. 07 May, 2003 1 commit
  11. 30 Apr, 2003 2 commits
  12. 17 Apr, 2003 1 commit
  13. 24 Mar, 2003 1 commit
  14. 20 Mar, 2003 1 commit
  15. 10 Mar, 2003 2 commits
    • Use a fixed per-cpu SYSENTER_MSR_ESP value by having the sysenter · e3db4852
      Linus Torvalds authored
      entry routine load the real ESP0 off that per-cpu stack. Make this
      even faster by putting the sysenter stack in the per-CPU TSS, so
      that we can use the tss->esp0 value directly (which we have to
      update on task switches anyway).
      
      CAREFUL! This needs very subtle code for debug and NMI exceptions,
      to make sure we don't run with the sysenter stack in any real kernel
      code!
    • Move "used FPU status" into new non-atomic thread_info->status field. · 450b4497
      Linus Torvalds authored
      This allows us to avoid having to use atomic updates for the lazy FP
      status setting, since we don't have to worry about other CPUs racing
      on the fields.
      
      Also, fix x86 FP state after fork() by making sure the FP is unlazied
      _before_ we copy the state information. Otherwise, if a process did a
      fork() while holding the FP state lazily in the registers, the child
      would incorrectly unlazy bogus state.
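
      A hedged sketch of that ordering fix (names are illustrative; the real
      change lives in the i386 fork/copy path): flush the live FP registers
      back into the parent's save area before the copy, so the child never
      inherits stale lazy state.

        static void copy_fpu_state(struct task_struct *parent,
                                   struct task_struct *child)
        {
                unlazy_fpu(parent);                        /* write FP regs back first  */
                child->thread.i387 = parent->thread.i387;  /* then copy the saved state */
        }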
  16. 09 Mar, 2003 1 commit
  17. 19 Feb, 2003 1 commit
    • Add doublefault handling with a task gate. · 7f754cf4
      Linus Torvalds authored
      This potentially helps debugging, since otherwise a double fault
      would generate a triple fault and then reboot the machine. Now
      instead it can print out a note about where the problem happened,
      unless all the kernel data structures are truly buggered.
  18. 15 Feb, 2003 1 commit
    • [PATCH] Make the world safe for -Wundef · 7807a1bc
      Andrew Morton authored
      Patch from: Valdis.Kletnieks@vt.edu
      
      This is a patch to clean things up so compiling with -Wundef becomes
      feasible.  (This patch cuts the number of warnings with 'make
      allyesconfig' from around 10,000 to several hundred.)  It was originally
      inspired by the
      discussion of things that include linux/version.h but don't use the contents
      - after doing this, I was able to find that there *WAS* at least one place
      where version.h was missing (see following patch).
  19. 29 Dec, 2002 1 commit
    • [PATCH] INIT_TASK/INIT_TSS cleanup · 6950ee0a
      Andrew Morton authored
      Ingo added saved_fs, saved_gs to thread_struct and didn't add
      corresponding initializers to INIT_THREAD.  We assign NULL to an
      unsigned int and the compiler warns.
      
      The patch converts it to use designated initialisers and fixes the
      io_bitmap initializer in the process.
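
      As a quick illustration of why designated initialisers help here (the
      field set below is trimmed down and partly made up): positional
      initialisers silently shift when a struct grows, while named ones let
      new members default to zero.

        /* Trimmed-down, illustrative thread struct - not the kernel's. */
        struct thread_example {
                unsigned long   esp0;
                unsigned long   io_bitmap[4];
                unsigned int    saved_fs, saved_gs;   /* the newly added fields */
        };

        /* Designated initialiser: saved_fs/saved_gs simply default to 0. */
        #define INIT_THREAD_EXAMPLE { .esp0 = 0, .io_bitmap = { 0 } }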
  20. 28 Dec, 2002 2 commits
  21. 24 Dec, 2002 1 commit
  22. 22 Dec, 2002 1 commit
    • [PATCH] Avoid overwriting boot_cpu_data from trampoline code · dd0f2bdf
      Manfred Spraul authored
      boot_cpu_data should contain the common capabilities of all cpus in the
      system. identify_cpu [arch/i386/kernel/cpu/common.c] tries to enforce
      that. But right now, the SMP trampoline code [arch/i386/kernel/head.S]
      overwrites boot_cpu_data when the secondary cpus are started, i.e.
      boot_cpu_data contains the capabilities from the last cpu that booted :-(
      
      The attached patch adds a new, __initdata variable for the asm code.
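
      The shape of that fix, as a sketch (the variable name is a guess, not
      necessarily the one the patch uses):

        /* Scratch cpuinfo for the asm startup/trampoline path; secondary CPUs
         * write here instead of clobbering boot_cpu_data. */
        struct cpuinfo_x86 new_cpu_data __initdata;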
  23. 21 Dec, 2002 1 commit
  24. 25 Nov, 2002 1 commit
  25. 03 Nov, 2002 1 commit
    • [PATCH] complete the move of the LDT code into · cc1741c0
      Manfred Spraul authored
      The i386 LDT code had its own set of arch hooks (??_segments); I
      replaced most of them with the mmu context hooks in a previous patch.
      The attached patch completes that change: replace release_segments with
      destroy_context.
      
      The patch is part of the -ac kernels in 2.4.  The patch breaks x86-64;
      Andi Kleen promised to send you the corresponding
      s/release_segments/destroy_context/ patch.
  26. 31 Oct, 2002 1 commit
    • [PATCH] speedup heuristic for get_unmapped_area · 631709da
      Andrew Morton authored
      [I was going to send shared pagetables today, but it failed in
       my testing under X :( ]
      
      the first one is an mmap inefficiency that was reported by Saurabh Desai.
      The test_str02 NPTL test-utility does the following: it tests the maximum
      number of threads by creating a new thread, which thread creates a new
      thread itself, etc. It basically creates thousands of parallel threads,
      which means thousands of thread stacks.
      
      NPTL uses mmap() to allocate new default thread stacks - and POSIX
      requires us to install a 'guard page' as well, which is done via
      mprotect(PROT_NONE) on the first page of the stack. This means that tons
      of NPTL threads means 2* tons of vmas per MM, all allocated in a forward
      fashion starting at the virtual address of 1 GB (TASK_UNMAPPED_BASE).
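
      In user-space terms, the allocation pattern being described is roughly
      the following (a minimal sketch, not NPTL's actual code):

        #include <stddef.h>
        #include <sys/mman.h>
        #include <unistd.h>

        /* One mmap() for the stack plus mprotect(PROT_NONE) on its first page
         * as the POSIX guard page: two vmas per thread stack. */
        static void *alloc_thread_stack(size_t size)
        {
                long page = sysconf(_SC_PAGESIZE);
                void *stack = mmap(NULL, size, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (stack == MAP_FAILED)
                        return NULL;
                mprotect(stack, page, PROT_NONE);   /* guard page at the low end */
                return stack;
        }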
      
      Saurabh reported a slowdown after the first couple of thousands of
      threads, which i can reproduce as well. The reason for this slowdown is
      the get_unmapped_area() implementation, which tries to achieve the most
      compact virtual memory allocation, by searching for the vma at
      TASK_UNMAPPED_BASE, and then linearly searching for a hole. With thousands
      of linearly allocated vmas this is an increasingly painful thing to do ...
      
      obviously, high-performance threaded applications will create stacks
      without the guard page, which triggers the anon-vma merging code so we end
      up with one large vma, not tons of small vmas.
      
      it's also possible for userspace to be smarter by setting aside a stack
      space and keeping a bitmap of allocated stacks and using MAP_FIXED (this
      also enables it to do the guard page not via mprotect() but by keeping the
      stacks apart by 1 page - ie. half the number of vmas) - but this also
      decreases flexibility.
      
      So i think that the default behavior nevertheless makes sense as well, so
      IMO we should optimize it in the kernel.
      
      there are various solutions to this problem, none of which solve the
      problem in a 100% sufficient way, so i went for the simplest approach: i
      added code to cache the 'last known hole' address in mm->free_area_cache,
      which is used as a hint to get_unmapped_area().
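
      Simplified, the hint turns the search into something like this (a
      sketch, not the kernel's actual arch_get_unmapped_area()):

        static unsigned long get_unmapped_area_hinted(struct mm_struct *mm,
                                                      unsigned long len)
        {
                struct vm_area_struct *vma;
                unsigned long addr = mm->free_area_cache;     /* last known hole */

                for (vma = find_vma(mm, addr); ; vma = vma->vm_next) {
                        if (!vma || addr + len <= vma->vm_start) {
                                mm->free_area_cache = addr + len; /* cache the next hole */
                                return addr;
                        }
                        addr = vma->vm_end;
                }
        }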
      
      this fixed the test_str02 testcase wonderfully, thread creation
      performance for this testcase is O(1) again, but this simpler solution
      obviously has a number of weak spots, and the (unlikely but possible)
      worst-case is quite close to the current situation. In any case, this
      approach does not sacrifice the perfect VM compactness our mmap()
      implementation achieves, so it's a performance optimization with no
      externally visible consequences.
      
      The most generic and still perfectly-compact VM allocation solution would
      be to have a vma tree for the 'inverse virtual memory space', ie. a tree
      of free virtual memory ranges, which could be searched and iterated like
      the space of allocated vmas. I think we could do this by extending vmas,
      but the drawback is larger vmas. This does not save us from having to scan
      vmas linearly still, because the size constraint is still present, but at
      least most of the anon-mmap activities are constant sized. (both malloc()
      and the thread-stack allocator use mostly fixed sizes.)
      
      This patch contains some fixes from Dave Miller - on some architectures
      it is not possible to evaluate TASK_UNMAPPED_BASE at compile-time.
  27. 08 Oct, 2002 1 commit
  28. 12 Aug, 2002 1 commit
    • [PATCH] tls-2.5.31-D9 · b40c812e
      Ingo Molnar authored
      3 TLS entries, 9 cycles copying and no branches in the context-switch
      path. The patch also adds Christoph's suggestion and renames
      modify_ldt_ldt_s (yuck!) to user_desc.
  29. 28 Jul, 2002 2 commits
  30. 25 Jul, 2002 1 commit
    • [PATCH] Thread-Local Storage (TLS) support · 0bbed3be
      Ingo Molnar authored
      the following patch implements proper x86 TLS support in the Linux kernel,
      via a new system-call, sys_set_thread_area():
      
         http://redhat.com/~mingo/tls-patches/tls-2.5.28-C6
      
      a TLS test utility can be downloaded from:
      
          http://redhat.com/~mingo/tls-patches/tls_test.c
      
      what is TLS? Thread Local Storage is a concept used by threading
      abstractions - a fast and efficient way to store per-thread local (but not
      on-stack local) data. The __thread extension is already supported by gcc.
      
      proper TLS support in compilers (and glibc/pthreads) is a bit problematic
      on the x86 platform. There's only 8 general purpose registers available,
      so on x86 we have to use segments to access the TLS. The approach used by
      glibc so far was to set up a per-thread LDT entry to describe the TLS.
      Besides the generic unrobustness of LDTs, this also introduced a limit:
      the maximum number of LDT entries is 8192, so the maximum number of
      threads per application is 8192.
      
      this patch does it differently - the kernel keeps a specific per-thread
      GDT entry that can be set up and modified by each thread:
      
           asmlinkage int sys_set_thread_area(unsigned int base,
                     unsigned int limit, unsigned int flags)
      
      the kernel, upon context-switch, modifies this GDT entry to match that of
      the thread's TLS setting. This way user-space threaded code can access
      per-thread data via this descriptor - by using the same, constant %gs
      selector. The number of TLS areas is unlimited, and there is no
      additional allocation overhead associated with TLS support.
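
      A rough sketch of that context-switch step (all names below are made up
      for illustration; only the mechanism follows the text): the next task's
      saved TLS descriptor is copied into this CPU's GDT slot, so the constant
      selector user space loaded keeps pointing at that thread's data.

        struct desc_struct { unsigned int a, b; };    /* one 8-byte descriptor      */

        #define GDT_ENTRY_TLS 1                       /* slot 1 in the layout below */
        extern struct desc_struct per_cpu_gdt[][32];  /* per-CPU GDT, illustrative  */

        static inline void switch_tls(const struct desc_struct *next_tls, int cpu)
        {
                per_cpu_gdt[cpu][GDT_ENTRY_TLS] = *next_tls;  /* reload on each switch */
        }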
      
      
      the biggest problem preventing the introduction of this concept was
      Linux's global shared GDT on SMP systems. The patch fixes this by
      implementing a per-CPU GDT, which is also a nice context-switch speedup:
      2-task lat_ctx context-switching got faster by about 5% on a dual Celeron
      testbox. [ Could it be that a shared GDT is fundamentally suboptimal on
      SMP? Perhaps updating the 'accessed' bit in the DS/CS descriptors causes
      some sort of locked memory cycle overhead? ]
      
      the GDT layout got simplified:
      
       *   0 - null
       *   1 - Thread-Local Storage (TLS) segment
       *   2 - kernel code segment
       *   3 - kernel data segment
       *   4 - user code segment              <==== new cacheline
       *   5 - user data segment
       *   6 - TSS
       *   7 - LDT
       *   8 - APM BIOS support               <==== new cacheline
       *   9 - APM BIOS support
       *  10 - APM BIOS support
       *  11 - APM BIOS support
       *  12 - PNPBIOS support                <==== new cacheline
       *  13 - PNPBIOS support
       *  14 - PNPBIOS support
       *  15 - PNPBIOS support
       *  16 - PNPBIOS support                <==== new cacheline
       *  17 - not used
       *  18 - not used
       *  19 - not used
      
      set_thread_area() currently recognizes the following flags:
      
        #define TLS_FLAG_LIMIT_IN_PAGES         0x00000001
        #define TLS_FLAG_WRITABLE               0x00000002
        #define TLS_FLAG_CLEAR                  0x00000004
      
      - in theory we could avoid the 'limit in pages' bit, but i wanted to
        preserve the flexibility to potentially enable the setting of
        byte-granularity stack segments for example. And unlimited segments
        (granularity = pages, limit = 0xfffff) might have a performance
        advantage on some CPUs. We could also automatically figure out the best
        possible granularity for a given limit - but i wanted to avoid this kind
        of guesswork. Some CPUs might have a plus for page-limit segments - who
        knows.
      
      - The 'writable' flag is straightforward and could be useful to some
        applications.
      
      - The 'clear' flag clears the TLS. [note that a base 0 limit 0 TLS is in
        fact legal, it's a single-byte segment at address 0.]
      
      (the system-call does not expose any other segment options to user-space,
      privilege level is 3, the segment is 32-bit, etc. - it's using safe and
      sane defaults.)
      
      NOTE: the interface does not allow the changing of the TLS of another
      thread on purpose - that would just complicate the interface (and
      implementation) unnecessarily. Is there any good reason to allow the
      setting of another thread's TLS?
      
      NOTE2: non-pthreads glibc applications can call set_thread_area() to set
      up a GDT entry just below the end of stack. We could use some sort of
      default TLS area as well, but that would hard-code a given segment.
  31. 13 Jun, 2002 1 commit
    • [PATCH] 2.5.20 x86 iobitmap cleanup · ded80dca
      Benjamin LaHaise authored
      This makes the IO bitmap a separately allocated structure,
      shrinking the default task size.
      
      We allocate it in sys_ioperm() and copy_thread(), and free it in
      exit_thread().  It also gets rid of the IO_BITMAP_SIZE+1 crap, as only
      the tss actually needs the tail long, and we weren't copying it into the
      bitmap anyway.
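
      The lifetime described above, sketched with illustrative names (not the
      patch's exact helpers):

        static int alloc_io_bitmap(struct thread_struct *t)   /* sys_ioperm()/copy_thread() */
        {
                t->io_bitmap_ptr = kmalloc(IO_BITMAP_BYTES, GFP_KERNEL);
                if (!t->io_bitmap_ptr)
                        return -ENOMEM;
                memset(t->io_bitmap_ptr, 0xff, IO_BITMAP_BYTES);  /* all ports denied */
                return 0;
        }

        static void free_io_bitmap(struct thread_struct *t)   /* exit_thread() */
        {
                kfree(t->io_bitmap_ptr);
                t->io_bitmap_ptr = NULL;
        }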
  32. 05 Jun, 2002 1 commit
    • [PATCH] large x86 setup cleanup. · 9fc4eb64
      Dave Jones authored
      Patrick Mochel did a great job here at splitting up some of the larger
      messy parts of arch/i386/kernel/setup.c, and introduced a nice abstraction
      which gives us a much nicer way to ensure we can add workarounds for vendor
      specific bugs / features without polluting other vendor code paths.
      
      Mark Haverkamp also brought this up to date for merging in my tree circa
      2.5.14, and aside from one or two small thinkos (since fixed), there
      haven't been any problems.
      
      This also features a workaround for an errata item on stepping C0 of
      the Intel Pentium 4 Xeon, which isn't in your tree yet, where we must
      disable the hardware prefetcher to ensure sane operation.
  33. 21 May, 2002 2 commits
    • [PATCH] remaining cpu_has cleanups · ec66b46d
      Brian Gerst authored
      This patch cleans up the remaining direct tests against x86_capability.
      It moves the cpu_has_* macros to the more appropriate cpufeature.h.  It
      also introduces the cpu_has() macro to test features for individual
      cpus.
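
      Roughly, the macros being centralised have this shape (a simplified
      sketch, not the exact cpufeature.h contents):

        #define cpu_has(c, bit)   test_bit(bit, (c)->x86_capability)          /* a given cpu */
        #define boot_cpu_has(bit) test_bit(bit, boot_cpu_data.x86_capability) /* boot cpu    */
        #define cpu_has_fpu       boot_cpu_has(X86_FEATURE_FPU)               /* example     */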
    • [PATCH] cpu_has_mmx · 9a3f1f54
      Brian Gerst authored
      This patch takes the cpu_has_mmx macro introduced in the xor.h header
      and puts it in the proper place.  It also converts the ov511 driver to
      use the new macro.
  34. 20 May, 2002 1 commit
  35. 28 Apr, 2002 1 commit
    • [PATCH] Dynamic LDT sizing. · 6a2a196f
      Dave Jones authored
      Originally from Manfred Spraul.
      
      * dynamically grow the LDT
      Every app that's linked against libpthread right now allocates a full 64
      kB LDT, without proper error handling, and always from the vmalloc area.