1. 29 Jul, 2015 1 commit
  2. 28 Jul, 2015 5 commits
    • arm64: pgtable: fix definition of pte_valid · 766ffb69
      Will Deacon authored
      pte_valid should check if the PTE_VALID bit (1 << 0) is set in the pte,
      so fix the macro definition to use bitwise & instead of logical &&.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: spinlock: fix ll/sc unlock on big-endian systems · c1d7cd22
      Will Deacon authored
      When unlocking a spinlock, we perform a read-modify-write on the owner
      ticket in order to increment it and store it back with release
      semantics.
      
      In the LL/SC case, we load the 16-bit ticket using a 32-bit load and
      therefore store back the wrong halfword on a big-endian system,
      corrupting the lock after the first unlock and killing the system dead.
      
      This patch fixes the unlock code to use 16-bit accessors consistently.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Use last level TLBI for user pte changes · 4150e50b
      Catalin Marinas authored
      The flush_tlb_page() function is used on user address ranges when PTEs
      (or PMDs/PUDs for huge pages) were changed (attributes or clearing). For
      such cases, it is more efficient to invalidate only the last level of
      the TLB with the "tlbi vale1is" instruction.
      
      In the TLB shoot-down case, the TLB caching of the intermediate page
      table levels (pmd, pud, pgd) is handled by __flush_tlb_pgtable() via the
      __(pte|pmd|pud)_free_tlb() functions and it is not deferred to
      tlb_finish_mmu() (as of commit 285994a6 - "arm64: Invalidate the TLB
      corresponding to intermediate page table levels"). The tlb_flush()
      function only needs to invalidate the TLB for the last level of page
      tables; the __flush_tlb_range() function gains a fourth argument for
      last level TLBI.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Clean up __flush_tlb(_kernel)_range functions · da4e7330
      Catalin Marinas authored
      This patch moves the MAX_TLB_RANGE check into the
      flush_tlb(_kernel)_range functions directly to avoid the
      underscore-prefixed definitions (and for consistency with a subsequent
      patch).
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: mm: mark create_mapping as __init · c53e0baa
      Mark Rutland authored
      Currently create_mapping is marked with __ref, apparently because it
      refers to early_alloc. However, create_mapping has no logic to prevent
      erroneous use of early_alloc after it has been freed, and is only ever
      called by __init functions anyway. Thus the __ref marker is misleading
      and unnecessary.
      
      Instead, this patch marks create_mapping as __init, resulting in
      warnings if it is used from a non-__init function, and allowing its
      memory to be reclaimed.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  3. 27 Jul, 2015 34 commits