1. 19 Aug, 2013 4 commits
  2. 13 Aug, 2013 6 commits
    • 
      Merge branch 'security-fixes' into fixes · 2a282247
      Russell King authored
    • 
      ARM: 7807/1: kexec: validate CPU hotplug support · 2103f6cb
      Stephen Warren authored
      Architectures should fully validate whether kexec is possible as part of
      machine_kexec_prepare(), so that user-space's kexec_load() operation can
      report any problems. Performing validation in machine_kexec() itself is
      too late, since it is not allowed to return.
      
      Prior to this patch, ARM's machine_kexec() was testing after-the-fact
      whether machine_kexec_prepare() was able to disable all but one CPU.
      Instead, modify machine_kexec_prepare() to validate all conditions
      necessary for machine_kexec() to succeed, and BUG if the validation
      succeeded yet disabling the CPUs didn't actually work.
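      
      A sketch of the resulting split (assuming the platform_can_cpu_hotplug()
      helper introduced earlier in this series):
      
      	int machine_kexec_prepare(struct kimage *image)
      	{
      		/*
      		 * On SMP hardware, CPU hotplug is needed to take the
      		 * secondaries offline; if it is missing, report the
      		 * failure to user space now, via kexec_load().
      		 */
      		if (num_possible_cpus() > 1 && !platform_can_cpu_hotplug())
      			return -EINVAL;
      
      		/* ... existing segment validation ... */
      		return 0;
      	}
      
      	void machine_kexec(struct kimage *image)
      	{
      		/*
      		 * machine_kexec_prepare() validated that the secondaries
      		 * can be disabled, so any CPU still online here is a bug.
      		 */
      		BUG_ON(num_online_cpus() > 1);
      		/* ... */
      	}
      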
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
      Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • 
      ARM: 7812/1: rwlocks: retry trylock operation if strex fails on free lock · 00efaa02
      Will Deacon authored
      Commit 15e7e5c1 ("ARM: 7749/1: spinlock: retry trylock operation if
      strex fails on free lock") modified our arch_spin_trylock to retry
      the acquisition if the lock appeared uncontended, but the strex failed.
      
      This patch does the same for rwlocks, which were missed by the original
      patch.
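      
      The write-side trylock then becomes, in outline (a sketch modelled on
      the earlier spinlock fix; only the retry loop is the point here):
      
      	static inline int arch_write_trylock(arch_rwlock_t *rw)
      	{
      		unsigned long contended, res;
      
      		do {
      			__asm__ __volatile__(
      			"	ldrex	%0, [%2]\n"
      			"	mov	%1, #0\n"
      			"	teq	%0, #0\n"
      			"	strexeq	%1, %3, [%2]"
      			: "=&r" (contended), "=&r" (res)
      			: "r" (&rw->lock), "r" (0x80000000)
      			: "cc");
      		} while (res);		/* strex failed on a free lock: retry */
      
      		if (!contended) {
      			smp_mb();
      			return 1;
      		}
      		return 0;
      	}
      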
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • 
      ARM: 7811/1: locks: use early clobber in arch_spin_trylock · afa31d8e
      Will Deacon authored
      The res variable is written before we've finished with the input
      operands (namely the lock address), so ensure that we mark it as `early
      clobber' to avoid unintended register sharing.
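      
      A stripped-down illustration of the constraint at issue (not the exact
      kernel asm): the first instruction writes %0 while %2 is still needed
      by the second, so without the `&' the compiler may legitimately assign
      %0 and %2 the same register:
      
      	static inline int example(arch_spinlock_t *lock)
      	{
      		unsigned long val, ok;
      
      		__asm__ __volatile__(
      		"	ldrex	%0, [%2]\n"	/* writes %0 early...           */
      		"	strex	%1, %0, [%2]"	/* ...but %2 is read again here */
      		: "=&r" (val), "=&r" (ok)	/* `&' marks the early clobber  */
      		: "r" (&lock->slock));
      
      		return ok == 0;
      	}
      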
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • 
      ARM: 7810/1: perf: Fix array out of bounds access in armpmu_map_hw_event() · d9f96635
      Stephen Boyd authored
      Vince Weaver reports an oops in the ARM perf event code while
      running his perf_fuzzer tool on a pandaboard running v3.11-rc4.
      
      Unable to handle kernel paging request at virtual address 73fd14cc
      pgd = eca6c000
      [73fd14cc] *pgd=00000000
      Internal error: Oops: 5 [#1] SMP ARM
      Modules linked in: snd_soc_omap_hdmi omapdss snd_soc_omap_abe_twl6040 snd_soc_twl6040 snd_soc_omap snd_soc_omap_hdmi_card snd_soc_omap_mcpdm snd_soc_omap_mcbsp snd_soc_core snd_compress regmap_spi snd_pcm snd_page_alloc snd_timer snd soundcore
      CPU: 1 PID: 2790 Comm: perf_fuzzer Not tainted 3.11.0-rc4 #6
      task: eddcab80 ti: ed892000 task.ti: ed892000
      PC is at armpmu_map_event+0x20/0x88
      LR is at armpmu_event_init+0x38/0x280
      pc : [<c001c3e4>]    lr : [<c001c17c>]    psr: 60000013
      sp : ed893e40  ip : ecececec  fp : edfaec00
      r10: 00000000  r9 : 00000000  r8 : ed8c3ac0
      r7 : ed8c3b5c  r6 : edfaec00  r5 : 00000000  r4 : 00000000
      r3 : 000000ff  r2 : c0496144  r1 : c049611c  r0 : edfaec00
      Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment user
      Control: 10c5387d  Table: aca6c04a  DAC: 00000015
      Process perf_fuzzer (pid: 2790, stack limit = 0xed892240)
      Stack: (0xed893e40 to 0xed894000)
      3e40: 00000800 c001c17c 00000002 c008a748 00000001 00000000 00000000 c00bf078
      3e60: 00000000 edfaee50 00000000 00000000 00000000 edfaec00 ed8c3ac0 edfaec00
      3e80: 00000000 c073ffac ed893f20 c00bf180 00000001 00000000 c00bf078 ed893f20
      3ea0: 00000000 ed8c3ac0 00000000 00000000 00000000 c0cb0818 eddcab80 c00bf440
      3ec0: ed893f20 00000000 eddcab80 eca76800 00000000 eca76800 00000000 00000000
      3ee0: 00000000 ec984c80 eddcab80 c00bfe68 00000000 00000000 00000000 00000080
      3f00: 00000000 ed892000 00000000 ed892030 00000004 ecc7e3c8 ecc7e3c8 00000000
      3f20: 00000000 00000048 ecececec 00000000 00000000 00000000 00000000 00000000
      3f40: 00000000 00000000 00297810 00000000 00000000 00000000 00000000 00000000
      3f60: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
      3f80: 00000002 00000002 000103a4 00000002 0000016c c00128e8 ed892000 00000000
      3fa0: 00090998 c0012700 00000002 000103a4 00090ab8 00000000 00000000 0000000f
      3fc0: 00000002 000103a4 00000002 0000016c 00090ab0 00090ab8 000107a0 00090998
      3fe0: bed92be0 bed92bd0 0000b785 b6e8f6d0 40000010 00090ab8 00000000 00000000
      [<c001c3e4>] (armpmu_map_event+0x20/0x88) from [<c001c17c>] (armpmu_event_init+0x38/0x280)
      [<c001c17c>] (armpmu_event_init+0x38/0x280) from [<c00bf180>] (perf_init_event+0x108/0x180)
      [<c00bf180>] (perf_init_event+0x108/0x180) from [<c00bf440>] (perf_event_alloc+0x248/0x40c)
      [<c00bf440>] (perf_event_alloc+0x248/0x40c) from [<c00bfe68>] (SyS_perf_event_open+0x4f4/0x8fc)
      [<c00bfe68>] (SyS_perf_event_open+0x4f4/0x8fc) from [<c0012700>] (ret_fast_syscall+0x0/0x48)
      Code: 0a000005 e3540004 0a000016 e3540000 (0791010c)
      
      This is because event->attr.config in armpmu_event_init()
      contains a very large number copied directly from userspace and
      is never checked against the size of the array indexed in
      armpmu_map_hw_event(). Fix the problem by checking the value of
      config before indexing the array and rejecting invalid config
      values.
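      
      The fix amounts to a range check before the table lookup, along these
      lines (a sketch of the shape of the change):
      
      	static int
      	armpmu_map_hw_event(const unsigned (*event_map)[PERF_COUNT_HW_MAX],
      			    u64 config)
      	{
      		int mapping;
      
      		if (config >= PERF_COUNT_HW_MAX)
      			return -EINVAL;
      
      		mapping = (*event_map)[config];
      		return mapping == HW_OP_UNSUPPORTED ? -ENOENT : mapping;
      	}
      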
      Reported-by: Vince Weaver <vincent.weaver@maine.edu>
      Tested-by: Vince Weaver <vincent.weaver@maine.edu>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • 
      ARM: 7809/1: perf: fix event validation for software group leaders · c95eb318
      Will Deacon authored
      It is possible to construct an event group with a software event as a
      group leader and then subsequently add a hardware event to the group.
      This results in the event group being validated by adding all members
      of the group to a fake PMU and attempting to allocate each event on
      their respective PMU.
      
      Unfortunately, for software events without a corresponding arm_pmu, this
      results in a kernel crash attempting to dereference the ->get_event_idx
      function pointer.
      
      This patch fixes the problem by checking explicitly for software events
      and ignoring those in event validation (since they can always be
      scheduled). We will probably want to revisit this for 3.12, since the
      validation checks don't appear to work correctly when dealing with
      multiple hardware PMUs anyway.
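      
      The check itself sits at the top of event validation, roughly (a
      sketch; is_software_event() is the generic perf helper):
      
      	static int validate_event(struct pmu_hw_events *hw_events,
      				  struct perf_event *event)
      	{
      		struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
      
      		if (is_software_event(event))
      			return 1;	/* always schedulable: skip the PMU */
      
      		if (event->pmu != event->group_leader->pmu ||
      		    event->state == PERF_EVENT_STATE_OFF)
      			return 1;
      
      		return armpmu->get_event_idx(hw_events, event) >= 0;
      	}
      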
      
      Cc: <stable@vger.kernel.org>
      Reported-by: Vince Weaver <vincent.weaver@maine.edu>
      Tested-by: Vince Weaver <vincent.weaver@maine.edu>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  3. 08 Aug, 2013 1 commit
    • 
      ARM: Fix FIQ code on VIVT CPUs · 2ba85e7a
      Russell King authored
      Aaro Koskinen reports the following oops:
      Installing fiq handler from c001b110, length 0x164
      Unable to handle kernel paging request at virtual address ffff1224
      pgd = c0004000
      [ffff1224] *pgd=00000000, *pte=11fff0cb, *ppte=11fff00a
      ...
      [<c0013154>] (set_fiq_handler+0x0/0x6c) from [<c0365d38>] (ams_delta_init_fiq+0xa8/0x160)
       r6:00000164 r5:c001b110 r4:00000000 r3:fefecb4c
      [<c0365c90>] (ams_delta_init_fiq+0x0/0x160) from [<c0365b14>] (ams_delta_init+0xd4/0x114)
       r6:00000000 r5:fffece10 r4:c037a9e0
      [<c0365a40>] (ams_delta_init+0x0/0x114) from [<c03613b4>] (customize_machine+0x24/0x30)
      
      This is because the vectors page is now write-protected, and to change
      code in there we must write to its original alias.  Make that change,
      and adjust the cache flushing such that the code will become visible
      to the instruction stream on VIVT CPUs.
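      
      In outline, the handler is now copied via the writable alias and both
      mappings are flushed (a sketch; the vectors_page alias and FIQ_OFFSET
      constant are assumed from the surrounding code):
      
      	void set_fiq_handler(void *start, unsigned int length)
      	{
      		void *base = vectors_page;	/* writable alias of 0xffff0000 */
      		unsigned offset = FIQ_OFFSET;
      
      		memcpy(base + offset, start, length);
      
      		/* on aliasing caches, flush the alias we wrote through */
      		if (!cache_is_vipt_nonaliasing())
      			flush_icache_range((unsigned long)base + offset,
      					   (unsigned long)base + offset + length);
      
      		/* make the new handler visible at the live vector address */
      		flush_icache_range(0xffff0000 + offset,
      				   0xffff0000 + offset + length);
      	}
      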
      Reported-by: Aaro Koskinen <aaro.koskinen@iki.fi>
      Tested-by: Aaro Koskinen <aaro.koskinen@iki.fi>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  4. 07 Aug, 2013 2 commits
  5. 03 Aug, 2013 3 commits
  6. 01 Aug, 2013 5 commits
  7. 31 Jul, 2013 8 commits
  8. 26 Jul, 2013 5 commits
    • 
      ARM: Fix sorting of machine initializers · 6eddacae
      Russell King authored
      So, there's a comment I put at the top of this, which people seem to
      fail to read.  So let's fix it for them instead.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • 
      ARM: 7791/1: a.out: remove partial a.out support · acfdd4b1
      Will Deacon authored
      a.out support on ARM requires that argc, argv and envp are passed in
      r0-r2 respectively, which requires hacking load_aout_binary to
      prevent argc being clobbered by the return code. Whilst mainline kernels
      do set the registers up in start_thread, the aout loader has never
      carried the hack in mainline.
      
      Initialising the registers in this way actually goes against the libc
      expectations for ELF binaries, where argc, argv and envp are passed on
      the stack, with r0 being used to hold a pointer to an exit function for
      cleaning up after the dynamic linker if required. If the pointer is
      NULL, then it is ignored. When execing an ELF binary, Linux currently
      zeroes r0, then sets it to argc and then finally clobbers it with the
      return value of the execve syscall, so we actually end up with:
      
      	r0 = 0
      	stack[0] = argc
      	r1 = stack[1] = argv
      	r2 = stack[2] = envp
      
      libc treats r1 and r2 as undefined. The clobbering of r0 by sys_execve
      works for user-spawned threads, but when executing an ELF binary from a
      kernel thread (via call_usermodehelper), the execve is performed on the
      ret_from_fork path, which restores r0 from the saved pt_regs, resulting
      in argc being presented to the C library. This has horrible consequences
      when the application exits, since we have an exit function registered
      using argc, resulting in a jump to hyperspace.
      
      This patch solves the problem by removing the partial a.out support from
      arch/arm/ altogether.
      
      Cc: <stable@vger.kernel.org>
      Cc: Ashish Sangwan <ashishsangwan2@gmail.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • 
      ARM: 7790/1: Fix deferred mm switch on VIVT processors · bdae73cd
      Catalin Marinas authored
      As of commit b9d4d42a (ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on
      pre-ARMv6 CPUs), the mm switching on VIVT processors is done in the
      finish_arch_post_lock_switch() function to avoid whole cache flushing
      with interrupts disabled. The need for deferred mm switch is stored as a
      thread flag (TIF_SWITCH_MM). However, with preemption enabled, we can
      have another thread switch before finish_arch_post_lock_switch(). If the
      new thread has the same mm as the previous 'next' thread, the scheduler
      will not call switch_mm() and the TIF_SWITCH_MM flag won't be set for
      the new thread.
      
      This patch moves the switch-pending flag into the mm_context_t
      structure, since it is specific to the mm rather than to the thread.
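      
      A sketch of the result (simplified; the preemption handling around
      cpu_switch_mm() is elided):
      
      	typedef struct {
      	#ifdef CONFIG_CPU_HAS_ASID
      		atomic64_t	id;
      	#else
      		int		switch_pending;	/* replaces TIF_SWITCH_MM */
      	#endif
      		unsigned int	vmalloc_seq;
      	} mm_context_t;
      
      	static inline void finish_arch_post_lock_switch(void)
      	{
      		struct mm_struct *mm = current->mm;
      
      		if (mm && mm->context.switch_pending) {
      			mm->context.switch_pending = 0;
      			cpu_switch_mm(mm->pgd, mm);
      		}
      	}
      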
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Marc Kleine-Budde <mkl@pengutronix.de>
      Tested-by: Marc Kleine-Budde <mkl@pengutronix.de>
      Cc: <stable@vger.kernel.org> # 3.5+
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • 
      ARM: 7789/1: Do not run dummy_flush_tlb_a15_erratum() on non-Cortex-A15 · 1f49856b
      Fabio Estevam authored
      Commit 93dc6887 (ARM: 7684/1: errata: Workaround for Cortex-A15
      erratum 798181 (TLBI/DSB operations)) causes the following undefined
      instruction error on an mx53 (Cortex-A8):
      
      Internal error: Oops - undefined instruction: 0 [#1] SMP ARM
      CPU: 0 PID: 275 Comm: modprobe Not tainted 3.11.0-rc2-next-20130722-00009-g9b0f371 #881
      task: df46cc00 ti: df48e000 task.ti: df48e000
      PC is at check_and_switch_context+0x17c/0x4d0
      LR is at check_and_switch_context+0xdc/0x4d0
      
      This problem happens because check_and_switch_context() calls
      dummy_flush_tlb_a15_erratum() without first checking whether we are
      actually running on a Cortex-A15.
      
      To avoid this issue, only call dummy_flush_tlb_a15_erratum() inside
      check_and_switch_context() if erratum_a15_798181() returns true, which means that we are really running on a Cortex-A15.
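      
      The guard is straightforward (a sketch; the revision bounds follow the
      erratum description in the original workaround commit):
      
      	static inline int erratum_a15_798181(void)
      	{
      		unsigned int midr = read_cpuid_id();
      
      		/* Cortex-A15 r0p0..r3p2 affected */
      		if ((midr & 0xff0ffff0) != 0x410fc0f0 || midr > 0x413fc0f2)
      			return 0;
      		return 1;
      	}
      
      	/* ... and at the call site in check_and_switch_context(): */
      	if (erratum_a15_798181())
      		dummy_flush_tlb_a15_erratum();
      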
      Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Roger Quadros <rogerq@ti.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • 
      ARM: 7787/1: virt: ensure visibility of __boot_cpu_mode · 8fbac214
      Mark Rutland authored
      Secondary CPUs write to __boot_cpu_mode with caches disabled, and thus a
      cached value of __boot_cpu_mode may be incoherent with that in memory.
      This could lead to a failure to detect mismatched boot modes.
      
      This patch adds flushing to ensure that writes by secondaries to
      __boot_cpu_mode are made visible before we test against it.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Dave Martin <Dave.Martin@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Christoffer Dall <cdall@cs.columbia.edu>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  9. 22 Jul, 2013 5 commits
    • 
      ARM: 7788/1: elf: fix lpae hwcap feature reporting in proc/cpuinfo · ab8d46c0
      Tetsuyuki Kobayashi authored
      Commit a469abd0 ("ARM: elf: add new hwcap for identifying atomic
      ldrd/strd instructions") added a new hwcap to identify LPAE on CPUs
      which support it. Whilst the hwcap data is correct, the string reported
      in /proc/cpuinfo actually matches on HWCAP_VFPD32, which was missing
      an entry in the string table.
      
      This patch fixes this problem by adding a "vfpd32" string at the correct
      offset, preventing us from falsely advertising LPAE on CPUs which do not
      support it.
      
      [will: added commit message]
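      
      The string table must line up with the HWCAP_* bit positions, so the
      fix slots "vfpd32" in immediately before "lpae" (a sketch of the tail
      of the table):
      
      	static const char *hwcap_str[] = {
      		/* ... */
      		"vfpv4",
      		"idiva",
      		"idivt",
      		"vfpd32",	/* was missing, shifting "lpae" one bit early */
      		"lpae",
      		NULL
      	};
      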
      Acked-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Tetsuyuki Kobayashi <koba@kmckk.co.jp>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • 
      ARM: 7786/1: hyp: fix macro parameterisation · b60d5db6
      Mark Rutland authored
      Currently, compare_cpu_mode_with_primary uses a mixture of macro
      arguments and hardcoded registers, and does so incorrectly, as it
      stores (__boot_cpu_mode_offset | BOOT_CPU_MODE_MISMATCH) to
      (__boot_cpu_mode + &__boot_cpu_mode_offset), which could corrupt an
      arbitrary portion of memory.
      
      This patch fixes up compare_cpu_mode_with_primary to use the macro
      arguments, correctly updating __boot_cpu_mode.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Dave Martin <Dave.Martin@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Christoffer Dall <cdall@cs.columbia.edu>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • 
      ARM: 7785/1: mm: restrict early_alloc to section-aligned memory · c65b7e98
      Russell King authored
      When map_lowmem() runs, and processes a memory bank whose start or end
      is not section-aligned, memory must be allocated to store the 2nd-level
      page tables. Those allocations are made by calling memblock_alloc().
      
      At this point, the only memory that is free *and* mapped is memory which
      has already been mapped by map_lowmem() itself. For this reason, we must
      calculate the first point at which map_lowmem() will need to allocate
      memory, and set the memblock allocation limit to a lower address, so that
      memblock_alloc() is guaranteed to return memory that is already mapped.
      
      This patch enhances sanity_check_meminfo() to calculate that memory
      address, and pass it to memblock_set_current_limit(), rather than just
      assuming the limit is arm_lowmem_limit.
      
      The algorithm applied is:
      
      * Default memblock_limit to arm_lowmem_limit in the absence of any other
        limit; arm_lowmem_limit is the highest memory that is mapped by
        map_lowmem().
      
      * While walking the list of memblocks, if the start of a block is not
        aligned, 2nd-level page tables will need to be allocated to map the
        first few pages of the block. Hence, the memblock_limit must be before
        the start of the block.
      
      * Similarly, if the end of any block is not aligned, 2nd-level page
        tables will need to be allocated to map the last few pages of the
        block. Hence, the memblock_limit must point at the end of the block,
        rounded down to section-alignment.
      
      * The memory blocks are assumed to be sorted in address order, so the
        first unaligned block start or end is used to set the limit.
      
      With this algorithm, the start or end of almost any bank can be
      non-section-aligned. The only exception is that the start of bank 0 must
      be section-aligned, since otherwise memory would need to be allocated
      when mapping the start of bank 0, which occurs before any free memory
      is mapped.
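      
      A sketch of the calculation in sanity_check_meminfo() (simplified;
      only the limit computation is shown):
      
      	phys_addr_t memblock_limit = 0;
      	int i;
      
      	for_each_bank(i, &meminfo) {
      		struct membank *bank = &meminfo.bank[i];
      		phys_addr_t bank_end = bank->start + bank->size;
      
      		/* the first unaligned start or end caps the allocator */
      		if (!memblock_limit) {
      			if (!IS_ALIGNED(bank->start, SECTION_SIZE))
      				memblock_limit = bank->start;
      			else if (!IS_ALIGNED(bank_end, SECTION_SIZE))
      				memblock_limit = bank_end;
      		}
      	}
      
      	if (!memblock_limit)
      		memblock_limit = arm_lowmem_limit;
      
      	/* round down so 2nd-level tables come from already-mapped RAM */
      	memblock_limit = round_down(memblock_limit, SECTION_SIZE);
      	memblock_set_current_limit(memblock_limit);
      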
      
      [swarren, wrote commit description, rewrote calculation of memblock_limit]
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • 
      ARM: 7784/1: mm: ensure SMP alternates assemble to exactly 4 bytes with Thumb-2 · bf3f0f33
      Will Deacon authored
      Commit ae8a8b95 ("ARM: 7691/1: mm: kill unused TLB_CAN_READ_FROM_L1_CACHE
      and use ALT_SMP instead") added early function returns for page table
      cache flushing operations on ARMv7 SMP CPUs.
      
      Unfortunately, when targeting Thumb-2, these `mov pc, lr' sequences
      assemble to 2 bytes, which can lead to corruption of the instruction
      stream after code patching.
      
      This patch fixes the alternates to use wide (32-bit) instructions for
      Thumb-2, therefore ensuring that the patching code works correctly.
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • 
      ARM: document DEBUG_UNCOMPRESS Kconfig option · b6992fa9
      Russell King authored
      This non-user-visible option lacked any kind of documentation.  This
      is quite common for non-user-visible options; certain people can't
      understand the point of documenting such options with help text.
      
      However, here we have a case in point: developers don't understand the
      option either, as they were thinking that when the option is not set,
      the decompressor should produce no output whatsoever.  This is
      incorrect, as the purpose of this option is to control whether a
      multiplatform kernel uses the kernel debugging macros to produce
      output or not.
      
      So let's document this via help text rather than commentary, to
      prevent others falling into this misunderstanding.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  10. 21 Jul, 2013 1 commit