1. 24 Apr, 2007 1 commit
  2. 23 Apr, 2007 25 commits
  3. 13 Apr, 2007 1 commit
    • [POWERPC] Fix detection of loader-supplied initrd on OF platforms · 390cbb56
      Paul Mackerras authored
      Commit 79c85419 introduced code to move
      the initrd if it was in a place where it would get overwritten by the
      kernel image.  Unfortunately this exposed the fact that the code that
      checks whether the values passed in r3 and r4 are intended to indicate
      the start address and size of an initrd image was not as thorough as the
      kernel's checks.  The symptom is that on OF-based platforms, the
      bootwrapper can cause an exception which causes the system to drop back
      into OF.
      
      Previously it didn't matter so much if the code incorrectly thought that
      there was an initrd, since the values for start and size were just passed
      through to the kernel.  Now the bootwrapper needs to apply the same checks
      as the kernel since it is now using the initrd data itself (in the process
      of copying it if necessary).  This adds the code to do that.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      390cbb56
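The thoroughness gap described above comes down to validating the r3/r4 pair before using it. Below is a minimal sketch of the kind of sanity checks involved; all names and bounds here are invented for illustration and this is not the actual bootwrapper code:

```c
/* Illustrative RAM bound; the real bootwrapper derives it from the
 * device tree / memory map.  256 MB is assumed for this sketch. */
#define RAM_END 0x10000000UL

/* Decide whether a (start, size) register pair plausibly describes
 * an initrd image resident in RAM. */
static int initrd_params_valid(unsigned long start, unsigned long size)
{
	if (start == 0 || size == 0)
		return 0;	/* no initrd supplied */
	if (start >= RAM_END)
		return 0;	/* start outside usable RAM */
	if (size > RAM_END - start)
		return 0;	/* image would run past the end of RAM */
	return 1;
}
```

With checks like these, garbage register values are treated as "no initrd" instead of being dereferenced during the copy, which is what avoids the drop back into OF.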
  4. 12 Apr, 2007 13 commits
    • [POWERPC] Miscellaneous arch/powerpc Kconfig and platform/Kconfig cleanup · 98750261
      Kumar Gala authored
      * Cleaned up some whitespace in arch/powerpc/Kconfig
      * Moved sourcing of platforms/embedded6xx/Kconfig into platform/Kconfig
      * Moved sourcing of platforms/4xx/Kconfig into platform/Kconfig and disabled it
      * Removed EMBEDDEDBOOT since it's not supported in arch/powerpc
      * Removed PC_KEYBOARD since it's not used anywhere
      * Moved a few CONFIG options around in platform/Kconfig
      * Moved interrupt controllers into platform/Kconfig out of bus section
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      98750261
    • [POWERPC] Convert 85xx platform to unified platform Kconfig · db947808
      Kumar Gala authored
      Moved 85xx platform Kconfig over to being sourced by the unified
      arch/powerpc/platforms/Kconfig.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      db947808
    • [POWERPC] Convert 8xx platform to unified platform Kconfig · c8a55f3d
      Kumar Gala authored
      Moved 8xx platform Kconfig over to being sourced by the unified
      arch/powerpc/platforms/Kconfig.  Also, cleaned up whitespace issues in 8xx
      Kconfig.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      c8a55f3d
    • [POWERPC] Convert 82xx platform to unified platform Kconfig · d6071f88
      Kumar Gala authored
      Moved 82xx platform Kconfig over to being sourced by the unified
      arch/powerpc/platforms/Kconfig.  Also, cleaned up whitespace issues in 82xx
      Kconfig.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      d6071f88
    • [POWERPC] Convert 83xx platform to unified platform Kconfig · b5a48346
      Kumar Gala authored
      Moved 83xx platform Kconfig over to being sourced by the unified
      arch/powerpc/platforms/Kconfig.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      b5a48346
    • [POWERPC] Convert 86xx platform to unified platform Kconfig · 4a89f7fa
      Kumar Gala authored
      Moved 86xx platform Kconfig over to being sourced by the unified
      arch/powerpc/platforms/Kconfig.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      4a89f7fa
    • [POWERPC] Ensure platform CONFIG options have correct dependencies · 164a460d
      Kumar Gala authored
      We currently support TAU and CPU frequency scaling only on discrete
      (non-SOC) processors.
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      164a460d
    • [POWERPC] ibmebus: change probe/remove interface from using loc-code to DT path · 0727702a
      Joachim Fenkes authored
      In some cases, multiple OFDT nodes might share the same location code, so
      the location code is not a unique identifier for an OFDT node. Changed the
      ibmebus probe/remove interface to use the DT path of the device node instead
      of the location code.
      
      The DT path must be written to probe/remove exactly as it would appear
      in the "devspec" attribute of the ebus device: relative to the DT root,
      with a leading slash and without a trailing slash. One trailing newline
      will not hurt; multiple newlines will (like perl's chomp()).
      
      Example:
      
       Add a device "/proc/device-tree/foo@12345678" to ibmebus like this:
          echo /foo@12345678 > /sys/bus/ibmebus/probe
      
       Remove the device like this:
          echo /foo@12345678 > /sys/bus/ibmebus/remove
      Signed-off-by: Joachim Fenkes <fenkes@de.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      0727702a
    • [POWERPC] DEBUG_PAGEALLOC for 64-bit · 370a908d
      Benjamin Herrenschmidt authored
      Here's an implementation of DEBUG_PAGEALLOC for 64-bit powerpc.
      It applies on top of the 32-bit patch.
      
      Unlike Anton's previous attempt, I'm not using updatepp. I'm removing
      the hash entries from the bolted mapping (using a map in RAM of all the
      slots). Expensive, but it doesn't really matter, does it? :-)
      
      Hot-added memory doesn't benefit from this unless it is added at an
      address below end_of_DRAM() as calculated at boot time.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      
       arch/powerpc/Kconfig.debug      |    2
       arch/powerpc/mm/hash_utils_64.c |   84 ++++++++++++++++++++++++++++++++++++++--
       2 files changed, 82 insertions(+), 4 deletions(-)
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      370a908d
    • [POWERPC] DEBUG_PAGEALLOC for 32-bit · 88df6e90
      Benjamin Herrenschmidt authored
      Here's an implementation of DEBUG_PAGEALLOC for ppc32. It disables BAT
      mapping and has only been tested on hash-table-based processors, though
      it shouldn't be too hard to adapt to others.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      
       arch/powerpc/Kconfig.debug       |    9 ++++++
       arch/powerpc/mm/init_32.c        |    4 +++
       arch/powerpc/mm/pgtable_32.c     |   52 +++++++++++++++++++++++++++++++++++++++
       arch/powerpc/mm/ppc_mmu_32.c     |    4 ++-
       include/asm-powerpc/cacheflush.h |    6 ++++
       5 files changed, 74 insertions(+), 1 deletion(-)
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      88df6e90
    • [POWERPC] Fix 32-bit mm operations when not using BATs · ee4f2ea4
      Benjamin Herrenschmidt authored
      On hash-table-based 32-bit powerpc, the hash management code runs with
      a big spinlock. It is thus important that it never causes a hash fault
      itself. That code is generally safe (it does memory accesses in real
      mode, among other things) with the exception of the access to the code
      itself: the kernel text needs to be accessible without taking a hash
      miss exception.
      
      This is currently guaranteed by having a BAT register mapping part of the
      linear mapping permanently, which includes the kernel text. But this is
      not true if using the "nobats" kernel command line option (which can be
      useful for debugging) and will not be true when using DEBUG_PAGEALLOC
      implemented in a subsequent patch.
      
      This patch fixes this by pre-faulting in the hash table pages that hit
      the kernel text, and making sure we never evict such a page under hash
      pressure.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      
       arch/powerpc/mm/hash_low_32.S |   22 ++++++++++++++++++++--
       arch/powerpc/mm/mem.c         |    3 ---
       arch/powerpc/mm/mmu_decl.h    |    4 ++++
       arch/powerpc/mm/pgtable_32.c  |   11 +++++++----
       4 files changed, 31 insertions(+), 9 deletions(-)
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      ee4f2ea4
    • [POWERPC] Cleanup 32-bit map_page · 3be4e699
      Benjamin Herrenschmidt authored
      The 32-bit map_page() function is used internally by the mm code
      for early MMU mappings and for ioremap. It should never be called
      for an address that already has a valid PTE or hash entry, so we
      add a BUG_ON for that and remove the now-useless flush_HPTE call.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      
       arch/powerpc/mm/pgtable_32.c |    9 ++++++---
       1 file changed, 6 insertions(+), 3 deletions(-)
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      3be4e699
    • [POWERPC] Make tlb flush batch use lazy MMU mode · a741e679
      Benjamin Herrenschmidt authored
      The current TLB flush code on 64-bit powerpc has a subtle race: since
      we have dropped the page table lock, new PTEs can be faulted in after
      a previous one has been removed but before the corresponding hash
      entry has been evicted, which can lead to all sorts of fatal problems.
      
      This patch reworks the batch code completely. It doesn't use the mmu_gather
      stuff anymore. Instead, we use the lazy mmu hooks that were added by the
      paravirt code. They have the nice property that the enter/leave lazy mmu
      mode pair is always fully contained by the PTE lock for a given range
      of PTEs. Thus we can guarantee that all batches are flushed on a given
      CPU before it drops that lock.
      
      We also generalize batching to any PTE update that requires a flush.
      
      Batching is now enabled on a CPU by arch_enter_lazy_mmu_mode() and
      disabled by arch_leave_lazy_mmu_mode(). The code expects that this
      pair is always contained within a PTE lock section, so no preemption
      can occur and no PTE can be inserted in that range by another CPU.
      When batching is enabled on a CPU, every PTE update that needs a hash
      flush uses the batch for that flush.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      a741e679
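The enter/batch/leave discipline this commit describes can be modelled in a few lines of userspace C. Only arch_enter_lazy_mmu_mode() and arch_leave_lazy_mmu_mode() are real kernel names; every other identifier and the batch size here are invented for this sketch:

```c
#include <stddef.h>

#define BATCH_MAX 8	/* illustrative; not the kernel's actual batch size */

struct flush_batch {
	int active;			/* set between enter and leave */
	size_t count;			/* PTE flushes queued so far */
	unsigned long addrs[BATCH_MAX];
};

static size_t flushes_issued;		/* stands in for the real hash-flush work */

static void flush_now(struct flush_batch *b)
{
	flushes_issued += b->count;	/* one combined flush for all queued PTEs */
	b->count = 0;
}

/* Models arch_enter_lazy_mmu_mode(): start batching on this CPU. */
static void enter_lazy_mmu(struct flush_batch *b)
{
	b->active = 1;
}

/* Models arch_leave_lazy_mmu_mode(): the batch is always drained before
 * the PTE lock is dropped, which is the guarantee the commit relies on. */
static void leave_lazy_mmu(struct flush_batch *b)
{
	flush_now(b);
	b->active = 0;
}

/* A PTE update that needs a hash flush: queued if batching is active. */
static void pte_update_needs_flush(struct flush_batch *b, unsigned long addr)
{
	if (!b->active) {
		flushes_issued++;	/* no batching: flush immediately */
		return;
	}
	b->addrs[b->count++] = addr;
	if (b->count == BATCH_MAX)	/* batch full: flush early */
		flush_now(b);
}
```

Because leave_lazy_mmu() always drains the batch while the (simulated) PTE lock is still held, no queued flush can survive to a point where another CPU could have inserted a conflicting PTE.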