1. 11 Apr, 2014 1 commit
  2. 09 Apr, 2014 2 commits
  3. 08 Apr, 2014 3 commits
  4. 03 Apr, 2014 1 commit
    • ARM: Better virt_to_page() handling · e26a9e00
      Russell King authored
      virt_to_page() is incredibly inefficient when virt-to-phys patching is
      enabled.  This is because we end up with this calculation:
      
        page = &mem_map[(asm virt_to_phys(addr) >> 12) - (__pv_phys_offset >> 12)]
      
      in assembly.  The asm virt_to_phys() is equivalent to this operation:
      
        addr - PAGE_OFFSET + __pv_phys_offset
      
      and we can see that because this is assembly, the compiler has no chance
      to optimise some of that away.  This should reduce down to:
      
        page = &mem_map[(addr - PAGE_OFFSET) >> 12]
      
      for the common cases.  Permit the compiler to make this optimisation by
      giving it more of the information it needs - do this by providing a
      virt_to_pfn() macro.
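      
      A minimal sketch of the resulting macros, assuming the usual ARM
      definitions of PAGE_OFFSET, PAGE_SHIFT and PHYS_PFN_OFFSET (the
      exact definitions in arch/arm/include/asm/memory.h may differ):
      
        /* Express the conversion in C so the compiler can fold the
         * constant parts away; PHYS_PFN_OFFSET is already a PFN, so
         * no 64-bit arithmetic is involved. */
        #define virt_to_pfn(kaddr) \
                ((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + \
                 PHYS_PFN_OFFSET)
        
        #define virt_to_page(kaddr)     pfn_to_page(virt_to_pfn(kaddr))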
      
      Another issue which makes this more complex is that __pv_phys_offset is
      a 64-bit type on all platforms.  This is needlessly wasteful - if we
      store the physical offset as a PFN, we can save a lot of work having
      to deal with 64-bit values, which sometimes ends up producing incredibly
      horrid code:
      
           a4c:       e3009000        movw    r9, #0
                              a4c: R_ARM_MOVW_ABS_NC  __pv_phys_offset
           a50:       e3409000        movt    r9, #0          ; r9 = &__pv_phys_offset
                              a50: R_ARM_MOVT_ABS     __pv_phys_offset
           a54:       e3002000        movw    r2, #0
                              a54: R_ARM_MOVW_ABS_NC  __pv_phys_offset
           a58:       e3402000        movt    r2, #0          ; r2 = &__pv_phys_offset
                              a58: R_ARM_MOVT_ABS     __pv_phys_offset
           a5c:       e5999004        ldr     r9, [r9, #4]    ; r9 = high word of __pv_phys_offset
           a60:       e3001000        movw    r1, #0
                              a60: R_ARM_MOVW_ABS_NC  mem_map
           a64:       e592c000        ldr     ip, [r2]        ; ip = low word of __pv_phys_offset
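      
      Storing the offset as a page frame number keeps it in a single
      32-bit word; a sketch of that representation, assuming the
      __pv_phys_pfn_offset naming (details may differ from the actual
      patch):
      
        /* One unsigned long instead of a 64-bit value: most users only
         * need the PFN, and the full physical address can still be
         * reconstructed on demand. */
        extern unsigned long __pv_phys_pfn_offset;
        
        #define PHYS_PFN_OFFSET (__pv_phys_pfn_offset)
        #define PHYS_OFFSET     ((phys_addr_t)__pv_phys_pfn_offset << PAGE_SHIFT)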
      Reviewed-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      e26a9e00
  5. 12 Mar, 2014 11 commits
  6. 25 Feb, 2014 9 commits
  7. 18 Feb, 2014 2 commits
    • ARM: 7979/1: mm: Remove hugetlb warning from Coherent DMA allocator · 6ea41c80
      Steven Capper authored
      The Coherent DMA allocator allocates pages of high order and then
      splits them up into smaller pages.
      
      This splitting logic would run into problems if the allocator were
      given compound pages.  Thus the Coherent DMA allocator was
      originally incompatible with compound pages and, by extension,
      with huge pages.  A compile-time #error was put in place whenever
      huge pages were enabled.
      
      Compatibility with compound pages has since been introduced by the
      following commit (which merely excludes GFP_COMP pages from being
      requested by the coherent DMA allocator):
        ea2e7057 ARM: 7172/1: dma: Drop GFP_COMP for DMA memory allocations
      
      When huge page support was introduced to ARM, the compile #error in
      dma-mapping.c was replaced by a #warning when it should have been
      removed instead.
      
      This patch removes the compile #warning in dma-mapping.c when huge
      pages are enabled.
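      
      The fix referenced above amounts to masking the flag off before
      calling the page allocator; a minimal sketch of that approach (the
      helper name is hypothetical - the real change in ea2e7057 edits
      arch/arm/mm/dma-mapping.c directly):
      
        #include <linux/gfp.h>
        
        /* The coherent allocator splits high-order allocations into
         * individual pages, which is invalid for compound pages, so
         * __GFP_COMP must never reach the page allocator. */
        static inline gfp_t coherent_gfp(gfp_t gfp)
        {
                return gfp & ~__GFP_COMP;
        }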
      Signed-off-by: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      6ea41c80
    • ARM: 7962/2: Make all mcpm functions notrace · ea36d2ab
      Dave Martin authored
      The functions in mcpm_entry.c are mostly intended for use during
      scary cache and coherency disabling sequences, or do other things
      which confuse tracing ... like powering a CPU down and never
      returning.  Similarly for the backend code.
      
      For simplicity, this patch just makes whole files notrace.
      There should be more than enough traceable points on the paths to
      these functions, but we can be more fine-grained later if there is
      a need for it.
      
      Jon Medhurst:
      Also added spc.o to the list of files, as it contains functions
      used by the MCPM code which carry comments like: "might be used in
      code paths where normal cacheable locks are not working"
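      
      Whole files are made notrace in kbuild by stripping the profiling
      flag from their objects (CFLAGS_REMOVE_<object>.o = -pg in the
      Makefile); the per-function equivalent looks like this sketch,
      with a hypothetical function standing in for the real MCPM
      entries:
      
        #include <linux/compiler.h>
        
        /* notrace expands to __attribute__((no_instrument_function)),
         * so ftrace never inserts a profiling call here - essential for
         * code that runs with caches and coherency disabled, or that
         * powers the CPU down and never returns. */
        static void notrace example_power_down_sequence(void)
        {
                /* cache/coherency teardown would go here */
        }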
      Signed-off-by: Dave Martin <dave.martin@linaro.org>
      Signed-off-by: Jon Medhurst <tixy@linaro.org>
      Acked-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      ea36d2ab
  8. 10 Feb, 2014 10 commits
  9. 09 Feb, 2014 1 commit
    • fix a kmap leak in virtio_console · c9efe511
      Al Viro authored
      While we are at it, don't do kmap() under kmap_atomic(), *especially*
      for a page we'd allocated with GFP_KERNEL.  It's spelled "page_address",
      and had it been anything more than that, we'd be in real trouble -
      kmap_high() can block, and doing that while holding a kmap_atomic()
      mapping is a Bad Idea(tm).
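      
      A hedged illustration of the distinction (the function below is
      hypothetical, not the virtio_console code itself):
      
        #include <linux/highmem.h>      /* page_address() */
        #include <linux/string.h>
        
        /* A GFP_KERNEL page lives in lowmem and is permanently mapped,
         * so page_address() suffices: there is no kmap() to leak, and
         * no risk of sleeping in kmap_high() while a kmap_atomic()
         * slot is held. */
        static void fill_lowmem_page(struct page *page, int c)
        {
                char *p = page_address(page);   /* never NULL for lowmem */
        
                memset(p, c, PAGE_SIZE);
        }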
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      c9efe511