  1. 07 Nov, 2013 2 commits
  2. 30 Oct, 2013 1 commit
  3. 29 Oct, 2013 2 commits
    • ARM: fix misplaced arch_virt_to_idmap() · 5e4432d3
      Russell King authored
      Olof Johansson reported:
      
      In file included from arch/arm/include/asm/page.h:163:0,
                       from include/linux/mm_types.h:16,
                       from include/linux/sched.h:24,
                       from arch/arm/kernel/asm-offsets.c:13:
      arch/arm/include/asm/memory.h: In function '__virt_to_idmap':
      arch/arm/include/asm/memory.h:300:6: error: 'arch_virt_to_idmap' undeclared (first use in this function)
      
      caused by arch_virt_to_idmap being placed inside a different
      preprocessor conditional from its user.  Move it alongside its user.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
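      A minimal sketch of the bug class (the conditionals and names below are
      illustrative, not the actual memory.h layout): the declaration sits under
      one preprocessor conditional while its only user sits under another, so
      some configurations see the use but not the declaration.

          /* illustrative only, not the real arch/arm/include/asm/memory.h */
          #ifdef CONFIG_FOO                       /* hypothetical option A */
          extern unsigned long (*arch_virt_to_idmap)(unsigned long x);
          #endif

          #ifdef CONFIG_BAR                       /* hypothetical option B */
          static inline unsigned long __virt_to_idmap(unsigned long x)
          {
                  if (arch_virt_to_idmap)         /* undeclared when B && !A */
                          x = arch_virt_to_idmap(x);
                  return x;
          }
          #endif

      Moving the extern declaration under the same conditional as
      __virt_to_idmap() removes the configuration-dependent breakage.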
    • ARM: 7870/1: head: Fix the missing underscore in __ARMEB__ macro and .align keyword · 830fd4d6
      Sricharan R authored
      Commit f52bb722 ("ARM: mm: Correct virt_to_phys patching for 64 bit
      physical addresses") introduced a __ARMEB__ macro usage in a new place,
      but missed the second underscore. So correct it here.
      
      Also, an explicit .align directive is needed for the label with .long
      data to be aligned on a 4-byte boundary. Otherwise this can cause
      problems for Thumb-2 builds. So add it here.
      Signed-off-by: Sricharan R <r.sricharan@ti.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
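      A short compile-time sketch of why the typo is silent, assuming GCC,
      which predefines __ARMEB__ (double underscores on both sides) when
      targeting big-endian ARM: a misspelled macro in #ifdef is not an error,
      the conditional is simply false.

          #include <stdio.h>

          #ifdef __ARMEB_                   /* typo: one underscore short */
          #define ENDIAN "big-endian (typo branch, never taken)"
          #elif defined(__ARMEB__)          /* correct predefined macro */
          #define ENDIAN "big-endian"
          #else
          #define ENDIAN "little-endian"
          #endif

          int main(void) { puts(ENDIAN); return 0; }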
  4. 23 Oct, 2013 2 commits
  5. 19 Oct, 2013 23 commits
  6. 18 Oct, 2013 1 commit
  7. 11 Oct, 2013 5 commits
    • ARM: mm: Recreate kernel mappings in early_paging_init() · a77e0c7b
      Santosh Shilimkar authored
      This patch adds a step in the init sequence, in order to recreate the
      kernel code/data page table mappings prior to full paging
      initialization.  This is necessary on LPAE systems that run out of
      (i.e. execute from) physical address space above the 4GB limit.  On
      these systems, this implementation provides a machine descriptor hook
      that allows PHYS_OFFSET to be overridden in a machine-specific fashion.
      
      Cc: Russell King <linux@arm.linux.org.uk>
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: R Sricharan <r.sricharan@ti.com>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
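      A sketch of the shape of such a hook (a kernel-style fragment, not
      standalone code; every name below is illustrative and the exact callback
      name and fields changed across kernel releases): a board file supplies a
      machine-descriptor callback that the early paging code invokes to switch
      to the board's real physical offset before the kernel mappings are
      recreated.

          /* hypothetical board code; names are not a confirmed API */
          static void __init my_board_fixup_mem(void)
          {
                  /* point the v2p translation at RAM above the 4GB line */
                  my_phys_offset = 0x800000000ULL;  /* illustrative address */
          }

          MACHINE_START(MY_BOARD, "my-board")       /* hypothetical machine */
                  /* hypothetical hook slot consumed by early_paging_init() */
                  .init_meminfo = my_board_fixup_mem,
          MACHINE_END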
      a77e0c7b
    • ARM: mm: Correct virt_to_phys patching for 64 bit physical addresses · f52bb722
      Sricharan R authored
      The current phys_to_virt patching mechanism works only for 32 bit
      physical addresses, and this patch extends the idea to 64 bit physical
      addresses.

      The 64 bit v2p patching mechanism patches the higher 8 bits of the
      physical address with a constant using a 'mov' instruction, and the
      lower 32 bits are patched using 'add'. While this is correct, on those
      platforms where the lowmem addressable physical memory spans the 4GB
      boundary, a carry bit can be produced as a result of the addition of
      the lower 32 bits. This has to be taken into account and added into
      the upper bits. The patched __pv_offset and va are added in the lower
      32 bits, where __pv_offset can be in two's complement form when
      PA_START < VA_START, and that can result in a false carry bit.
      
      e.g.:
          1) PA = 0x80000000; VA = 0xC0000000
             __pv_offset = PA - VA = 0xC0000000 (2's complement)
      
          2) PA = 0x2 80000000; VA = 0xC0000000
             __pv_offset = PA - VA = 0x1 C0000000
      
      So adding __pv_offset + VA should never result in a true overflow for
      (1). So, in order to differentiate a true carry, __pv_offset is
      extended to 64 bits, and its upper 32 bits hold 0xffffffff whenever
      __pv_offset is in two's complement form. For the same reason, 'mvn #0'
      is patched in instead of 'mov' in that case. Since the mov, add and
      sub instructions are patched with different constants inside the same
      stub, the rotation field of the opcode is used to differentiate
      between them.
      
      So the above examples for v2p translation become, for VA = 0xC0000000:
          1) PA[63:32] = 0xffffffff
             PA[31:0] = VA + 0xC0000000 --> results in a carry
             PA[63:32] = PA[63:32] + carry
      
             PA[63:0] = 0x0 80000000
      
          2) PA[63:32] = 0x1
             PA[31:0] = VA + 0xC0000000 --> results in a carry
             PA[63:32] = PA[63:32] + carry
      
             PA[63:0] = 0x2 80000000
      
      The above ideas were suggested by Nicolas Pitre <nico@linaro.org> as
      part of the review of first and second versions of the subject patch.
      
      There is no corresponding change on the phys_to_virt() side, because
      computations on the upper 32-bits would be discarded anyway.
      
      Cc: Russell King <linux@arm.linux.org.uk>
      Reviewed-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Sricharan R <r.sricharan@ti.com>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
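      A minimal user-space sketch of the arithmetic described above, with the
      patched 'mov'/'mvn' and 'add' instructions modeled as plain C
      operations (the function name is illustrative):

          #include <stdint.h>
          #include <stdio.h>

          /* model: the low 32 bits of __pv_offset are 'add'-patched, the
           * upper 32 bits are 'mov'-patched (0xffffffff, i.e. 'mvn #0', in
           * the two's complement case), and the carry out of the low
           * addition is propagated into the upper half. */
          static uint64_t v2p(uint32_t va, uint64_t pv_offset)
          {
                  uint32_t lo = va + (uint32_t)pv_offset;
                  uint32_t carry = lo < va;       /* carry out of low 32 bits */
                  uint32_t hi = (uint32_t)(pv_offset >> 32) + carry;
                  return ((uint64_t)hi << 32) | lo;
          }

          int main(void)
          {
                  /* (1) PA_START = 0x80000000: prints 0x80000000 */
                  printf("%#llx\n",
                         (unsigned long long)v2p(0xC0000000u, 0xffffffffc0000000ull));
                  /* (2) PA_START = 0x2 80000000: prints 0x280000000 */
                  printf("%#llx\n",
                         (unsigned long long)v2p(0xC0000000u, 0x1c0000000ull));
                  return 0;
          }

      In case (1) the 0xffffffff upper half wraps to zero when the carry is
      added, which is exactly the "false carry" cancellation described above.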
    • ARM: mm: Move the idmap print to appropriate place in the code · c1a5f4f6
      Santosh Shilimkar authored
      Commit 9e9a367c ("ARM: Section based HYP idmap") moved the address
      conversion inside identity_mapping_add() without the corresponding
      print which carries useful idmap information.

      Move the print inside identity_mapping_add() as well to fix this.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Nicolas Pitre <nico@linaro.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
    • ARM: mm: Introduce virt_to_idmap() with an arch hook · 4dc9a817
      Santosh Shilimkar authored
      On some PAE systems (e.g. TI Keystone), memory is above the
      32-bit addressable limit, and the interconnect provides an
      aliased view of parts of physical memory in the 32-bit addressable
      space.  This alias is strictly for boot time usage, and is not
      otherwise usable because of coherency limitations. On such systems,
      the idmap mechanism needs to take this aliased mapping into account.
      
      This patch introduces virt_to_idmap() and an arch function pointer
      which can be populated by platforms that need it. Also populate the
      necessary idmap spots with the now-available virt_to_idmap(). An
      #ifdef approach was avoided to stay compatible with multi-platform
      builds.

      Most architectures won't touch it, and in that case virt_to_idmap()
      falls back to the existing virt_to_phys() macro.
      
      Cc: Russell King <linux@arm.linux.org.uk>
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
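      A sketch of the pattern (simplified; the real helper lives in
      arch/arm/include/asm/memory.h): an ordinary function pointer that is
      NULL by default, so platforms without a boot-time alias pay only a
      NULL test and fall back to virt_to_phys().

          extern unsigned long (*arch_virt_to_idmap)(unsigned long x);

          static inline unsigned long virt_to_idmap_sketch(unsigned long x)
          {
                  if (arch_virt_to_idmap)
                          return arch_virt_to_idmap(x);  /* platform alias */
                  return virt_to_phys((void *)x);        /* common case */
          }

      A platform such as Keystone can then assign arch_virt_to_idmap to its
      own translation at boot; multi-platform images need no #ifdef.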
    • ARM: mm: use phys_addr_t appropriately in p2v and v2p conversions · ca5a45c0
      Santosh Shilimkar authored
      Fix the remaining types used when converting back and forth between
      physical and virtual addresses.
      
      Cc: Russell King <linux@arm.linux.org.uk>
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
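      A small user-space demonstration of why the type matters; here
      phys_addr_t stands in for the kernel's 64-bit LPAE definition:

          #include <stdint.h>
          #include <stdio.h>

          typedef uint64_t phys_addr_t;   /* as with 64-bit LPAE addresses */

          int main(void)
          {
                  phys_addr_t pa = 0x280000000ULL;   /* RAM above 4GB */
                  /* what a 32-bit unsigned long would keep of it: */
                  uint32_t truncated = (uint32_t)pa;
                  printf("pa=%#llx truncated=%#x\n",
                         (unsigned long long)pa, (unsigned)truncated);
                  return 0;
          }

      The truncated value 0x80000000 silently aliases a different physical
      page, which is why the conversion helpers must carry phys_addr_t end
      to end.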
  8. 09 Oct, 2013 1 commit
    • ARM: perf: fix group validation for mixed software and hardware groups · 2dfcb802
      Will Deacon authored
      Since software events can always be scheduled, perf allows software and
      hardware events to be mixed together in the same event group. There are
      two ways in which this can come about:
      
        (1) A SW event is added to a HW group. This validates using the HW PMU
            of the group leader.
      
        (2) A HW event is added to a SW group. This inserts the SW events and
            the new HW event into a HW context, but the SW event remains the
            group leader.
      
      When validating the latter case, we would ideally compare the PMU of
      each event in the group with the relevant HW PMU. The problem is, in the
      face of potentially multiple HW PMUs, we don't have a handle on the
      relevant structure. Commit 7b9f72c6 ("ARM: perf: clean up event
      group validation") attempting to resolve this issue, but actually made
      things *worse* by comparing with the leader PMU. If the leader is a SW
      event, then we automatically `pass' all the HW events during validation!
      
      This patch removes the check against the leader PMU. Whilst this will
      allow events from multiple HW PMUs to be grouped together, that should
      probably be dealt with in perf core as the result of a later patch.
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
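      A simplified sketch of the hazard, with illustrative types rather than
      the kernel's actual structures: validation should skip only genuine
      software events, not "anything whose PMU differs from the leader's",
      because a software leader then whitelists every hardware sibling.

          struct pmu { int is_software; };          /* illustrative */
          struct perf_event { struct pmu *pmu; };   /* illustrative */

          static int validate_event_buggy(struct perf_event *event,
                                          struct pmu *leader_pmu)
          {
                  /* BUG: with a SW leader, every HW event differs from
                   * leader_pmu and is waved through */
                  if (event->pmu != leader_pmu)
                          return 1;
                  /* ... capacity check against the HW PMU would go here ... */
                  return 0;
          }

          static int validate_event_fixed(struct perf_event *event)
          {
                  /* only true software events can always be scheduled */
                  if (event->pmu->is_software)
                          return 1;
                  /* ... capacity check against the HW PMU would go here ... */
                  return 0;
          }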
  9. 07 Oct, 2013 2 commits
  10. 06 Oct, 2013 1 commit