1. 31 Jul, 2020 3 commits
    • Merge branch 'for-next/tlbi' into for-next/core · 18aa3bd5
      Catalin Marinas authored
      * for-next/tlbi:
        : Support for TTL (translation table level) hint in the TLB operations
        arm64: tlb: Use the TLBI RANGE feature in arm64
        arm64: enable tlbi range instructions
        arm64: tlb: Detect the ARMv8.4 TLBI RANGE feature
        arm64: tlb: don't set the ttl value in flush_tlb_page_nosync
        arm64: Shift the __tlbi_level() indentation left
        arm64: tlb: Set the TTL field in flush_*_tlb_range
        arm64: tlb: Set the TTL field in flush_tlb_range
        tlb: mmu_gather: add tlb_flush_*_range APIs
        arm64: Add tlbi_user_level TLB invalidation helper
        arm64: Add level-hinted TLB invalidation helper
        arm64: Document SW reserved PTE/PMD bits in Stage-2 descriptors
        arm64: Detect the ARMv8.4 TTL feature
    • Merge branches 'for-next/misc', 'for-next/vmcoreinfo', 'for-next/cpufeature',... · 4557062d
      Catalin Marinas authored
      Merge branches 'for-next/misc', 'for-next/vmcoreinfo', 'for-next/cpufeature', 'for-next/acpi', 'for-next/perf', 'for-next/timens', 'for-next/msi-iommu' and 'for-next/trivial' into for-next/core
      
      * for-next/misc:
        : Miscellaneous fixes and cleanups
        arm64: use IRQ_STACK_SIZE instead of THREAD_SIZE for irq stack
        arm64/mm: save memory access in check_and_switch_context() fast switch path
        recordmcount: only record relocation of type R_AARCH64_CALL26 on arm64.
        arm64: Reserve HWCAP2_MTE as (1 << 18)
        arm64/entry: deduplicate SW PAN entry/exit routines
        arm64: s/AMEVTYPE/AMEVTYPER
        arm64/hugetlb: Reserve CMA areas for gigantic pages on 16K and 64K configs
        arm64: stacktrace: Move export for save_stack_trace_tsk()
        smccc: Make constants available to assembly
        arm64/mm: Redefine CONT_{PTE, PMD}_SHIFT
        arm64/defconfig: Enable CONFIG_KEXEC_FILE
        arm64: Document sysctls for emulated deprecated instructions
        arm64/panic: Unify all three existing notifier blocks
        arm64/module: Optimize module load time by optimizing PLT counting
      
      * for-next/vmcoreinfo:
        : Export the virtual and physical address sizes in vmcoreinfo
        arm64/crash_core: Export TCR_EL1.T1SZ in vmcoreinfo
        crash_core, vmcoreinfo: Append 'MAX_PHYSMEM_BITS' to vmcoreinfo
      
      * for-next/cpufeature:
        : CPU feature handling cleanups
        arm64/cpufeature: Validate feature bits spacing in arm64_ftr_regs[]
        arm64/cpufeature: Replace all open bits shift encodings with macros
        arm64/cpufeature: Add remaining feature bits in ID_AA64MMFR2 register
        arm64/cpufeature: Add remaining feature bits in ID_AA64MMFR1 register
        arm64/cpufeature: Add remaining feature bits in ID_AA64MMFR0 register
      
      * for-next/acpi:
        : ACPI updates for arm64
        arm64/acpi: disallow writeable AML opregion mapping for EFI code regions
        arm64/acpi: disallow AML memory opregions to access kernel memory
      
      * for-next/perf:
        : perf updates for arm64
        arm64: perf: Expose some new events via sysfs
        tools headers UAPI: Update tools's copy of linux/perf_event.h
        arm64: perf: Add cap_user_time_short
        perf: Add perf_event_mmap_page::cap_user_time_short ABI
        arm64: perf: Only advertise cap_user_time for arch_timer
        arm64: perf: Implement correct cap_user_time
        time/sched_clock: Use raw_read_seqcount_latch()
        sched_clock: Expose struct clock_read_data
        arm64: perf: Correct the event index in sysfs
        perf/smmuv3: To simplify code for ioremap page in pmcg
      
      * for-next/timens:
        : Time namespace support for arm64
        arm64: enable time namespace support
        arm64/vdso: Restrict splitting VVAR VMA
        arm64/vdso: Handle faults on timens page
        arm64/vdso: Add time namespace page
        arm64/vdso: Zap vvar pages when switching to a time namespace
        arm64/vdso: use the fault callback to map vvar pages
      
      * for-next/msi-iommu:
        : Make the MSI/IOMMU input/output ID translation PCI agnostic, augment the
        : MSI/IOMMU ACPI/OF ID mapping APIs to accept an input ID bus-specific parameter
        : and apply the resulting changes to the device ID space provided by the
        : Freescale FSL bus
        bus: fsl-mc: Add ACPI support for fsl-mc
        bus/fsl-mc: Refactor the MSI domain creation in the DPRC driver
        of/irq: Make of_msi_map_rid() PCI bus agnostic
        of/irq: make of_msi_map_get_device_domain() bus agnostic
        dt-bindings: arm: fsl: Add msi-map device-tree binding for fsl-mc bus
        of/device: Add input id to of_dma_configure()
        of/iommu: Make of_map_rid() PCI agnostic
        ACPI/IORT: Add an input ID to acpi_dma_configure()
        ACPI/IORT: Remove useless PCI bus walk
        ACPI/IORT: Make iort_msi_map_rid() PCI agnostic
        ACPI/IORT: Make iort_get_device_domain IRQ domain agnostic
        ACPI/IORT: Make iort_match_node_callback walk the ACPI namespace for NC
      
      * for-next/trivial:
        : Trivial fixes
        arm64: sigcontext.h: delete duplicated word
        arm64: ptrace.h: delete duplicated word
        arm64: pgtable-hwdef.h: delete duplicated words
    • arm64: use IRQ_STACK_SIZE instead of THREAD_SIZE for irq stack · 338c11e9
      Maninder Singh authored
      IRQ_STACK_SIZE can be made different from THREAD_SIZE, and since
      IRQ_STACK_SIZE is used when allocating the irq stack, the same
      define should be used when printing irq stack information.
      Signed-off-by: Maninder Singh <maninder1.s@samsung.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Link: https://lore.kernel.org/r/1596196190-14141-1-git-send-email-maninder1.s@samsung.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  2. 30 Jul, 2020 4 commits
  3. 28 Jul, 2020 12 commits
  4. 24 Jul, 2020 8 commits
  5. 23 Jul, 2020 1 commit
  6. 22 Jul, 2020 1 commit
  7. 21 Jul, 2020 1 commit
  8. 20 Jul, 2020 8 commits
  9. 15 Jul, 2020 2 commits
    • arm64: tlb: Use the TLBI RANGE feature in arm64 · d1d3aa98
      Zhenyu Ye authored
      Add __TLBI_VADDR_RANGE macro and rewrite __flush_tlb_range().
      
      When the CPU supports the TLBI range feature, the minimum range
      granularity is determined by 'scale', so in some cases we cannot
      flush all pages with a single instruction.
      
      For example, suppose pages = 0xe81a. Let's start 'scale' from the
      maximum and find the right 'num' for each 'scale':
      
      1. scale = 3: we can flush no pages because the minimum range is
         2^(5*3 + 1) = 0x10000.
      2. scale = 2: the minimum range is 2^(5*2 + 1) = 0x800, so we can
         flush 0xe800 pages this time; num = 0xe800/0x800 - 1 = 0x1c.
         The remaining pages are 0x1a.
      3. scale = 1: the minimum range is 2^(5*1 + 1) = 0x40, so no pages
         can be flushed.
      4. scale = 0: we flush the remaining 0x1a pages; num =
         0x1a/0x2 - 1 = 0xc.
      
      However, in most scenarios pages = 1 when flush_tlb_range() is
      called. Starting from scale = 3, or from some other computed value
      (such as scale = ilog2(pages)), would incur extra overhead.
      So 'scale' is increased from 0 to the maximum instead; the flush
      order is exactly the opposite of the example above.
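
      The walk described above can be sketched in Python. This is an
      illustration of the arithmetic only, not the kernel code: the
      (num + 1) << (5*scale + 1) page count per operation and the
      plain (non-range) TLBI fallback for an odd leftover page follow
      the TLBI RANGE encoding, where a range operation always covers a
      multiple of two pages.

```python
def split_range(pages):
    """Split a page count into TLBI range operations, walking
    'scale' upward from 0 as the final __flush_tlb_range() does.
    Returns a list of ("range", scale, num) and ("single", 1) ops."""
    ops = []
    scale = 0
    while pages > 0:
        if pages % 2 == 1:
            # Range ops cover multiples of 2 pages, so flush the
            # odd page with one plain TLBI.
            ops.append(("single", 1))
            pages -= 1
            continue
        if scale > 3:
            raise ValueError("too many pages for range operations")
        # num for this scale; -1 means nothing to flush at this level.
        num = ((pages >> (5 * scale + 1)) & 0x1F) - 1
        if num >= 0:
            ops.append(("range", scale, num))
            pages -= (num + 1) << (5 * scale + 1)
        scale += 1
    return ops

# The 0xe81a example above: 0x1a pages at scale 0 (num = 0xc),
# then 0xe800 pages at scale 2 (num = 0x1c).
print(split_range(0xe81a))  # -> [('range', 0, 12), ('range', 2, 28)]
```

      Note the flush order is the reverse of the worked example: small
      chunks first, which makes the common pages = 1 case a single op.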
      Signed-off-by: Zhenyu Ye <yezhenyu2@huawei.com>
      Link: https://lore.kernel.org/r/20200715071945.897-4-yezhenyu2@huawei.com
      [catalin.marinas@arm.com: removed unnecessary masks in __TLBI_VADDR_RANGE]
      [catalin.marinas@arm.com: __TLB_RANGE_NUM subtracts 1]
      [catalin.marinas@arm.com: minor adjustments to the comments]
      [catalin.marinas@arm.com: introduce system_supports_tlb_range()]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: enable tlbi range instructions · 7c78f67e
      Zhenyu Ye authored
      The TLBI RANGE feature introduces new assembly instructions that
      are only supported by binutils >= 2.30.  Add the necessary Kconfig
      logic to allow this to be enabled, and pass '-march=armv8.4-a' to
      KBUILD_CFLAGS.
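
      A sketch of what such a Kconfig gate looks like (the option names
      ARM64_TLB_RANGE and AS_HAS_ARMV8_4 and the help text here are
      illustrative, taken from memory of the upstream tree, not quoted
      from the patch):

```kconfig
# Probe whether the assembler accepts the ARMv8.4 ISA.
config AS_HAS_ARMV8_4
	def_bool $(cc-option,-Wa$(comma)-march=armv8.4-a)

config ARM64_TLB_RANGE
	bool "Enable support for tlbi range feature"
	default y
	depends on AS_HAS_ARMV8_4
	help
	  ARMv8.4-TLBI provides TLBI invalidation instructions that
	  apply to a range of input addresses.
```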
      Signed-off-by: Zhenyu Ye <yezhenyu2@huawei.com>
      Link: https://lore.kernel.org/r/20200715071945.897-3-yezhenyu2@huawei.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>