Commit 8546dc1d authored by Linus Torvalds

Merge branch 'for-linus' of git://git.linaro.org/people/rmk/linux-arm

Pull ARM updates from Russell King:
 "The major items included in here are:

   - MCPM, multi-cluster power management, part of the infrastructure
      required for ARM's big.LITTLE support.

   - A rework of the ARM KVM code to allow re-use by ARM64.

   - Error handling cleanups of the IS_ERR_OR_NULL() madness and fixes
     of that stuff for arch/arm

   - Preparatory patches for Cortex-M3 support from Uwe Kleine-König.

  There is also a set of three patches in here from Hugh/Catalin to
  address freeing of inappropriate page tables on LPAE.  You already
  have these from akpm, but they were already part of my tree at the
  time he sent them, so unfortunately they'll end up with duplicate
  commits"

* 'for-linus' of git://git.linaro.org/people/rmk/linux-arm: (77 commits)
  ARM: EXYNOS: remove unnecessary use of IS_ERR_VALUE()
  ARM: IMX: remove unnecessary use of IS_ERR_VALUE()
  ARM: OMAP: use consistent error checking
  ARM: cleanup: OMAP hwmod error checking
  ARM: 7709/1: mcpm: Add explicit AFLAGS to support v6/v7 multiplatform kernels
  ARM: 7700/2: Make cpu_init() notrace
  ARM: 7702/1: Set the page table freeing ceiling to TASK_SIZE
  ARM: 7701/1: mm: Allow arch code to control the user page table ceiling
  ARM: 7703/1: Disable preemption in broadcast_tlb*_a15_erratum()
  ARM: mcpm: provide an interface to set the SMP ops at run time
  ARM: mcpm: generic SMP secondary bringup and hotplug support
  ARM: mcpm_head.S: vlock-based first man election
  ARM: mcpm: Add baremetal voting mutexes
  ARM: mcpm: introduce helpers for platform coherency exit/setup
  ARM: mcpm: introduce the CPU/cluster power API
  ARM: multi-cluster PM: secondary kernel entry code
  ARM: cacheflush: add synchronization helpers for mixed cache state accesses
  ARM: cpu hotplug: remove majority of cache flushing from platforms
  ARM: smp: flush L1 cache in cpu_die()
  ARM: tegra: remove tegra specific cpu_disable()
  ...
parents 9992ba72 33b9f582
Cluster-wide Power-up/power-down race avoidance algorithm
=========================================================
This file documents the algorithm which is used to coordinate CPU and
cluster setup and teardown operations and to manage hardware coherency
controls safely.
The section "Rationale" explains what the algorithm is for and why it is
needed. "Basic model" explains general concepts using a simplified view
of the system. The other sections explain the actual details of the
algorithm in use.
Rationale
---------
In a system containing multiple CPUs, it is desirable to have the
ability to turn off individual CPUs when the system is idle, reducing
power consumption and thermal dissipation.
In a system containing multiple clusters of CPUs, it is also desirable
to have the ability to turn off entire clusters.
Turning entire clusters off and on is a risky business, because it
involves performing potentially destructive operations affecting a group
of independently running CPUs, while the OS continues to run. This
means that we need some coordination in order to ensure that critical
cluster-level operations are only performed when it is truly safe to do
so.
Simple locking may not be sufficient to solve this problem, because
mechanisms like Linux spinlocks may rely on coherency mechanisms which
are not immediately enabled when a cluster powers up. Since enabling or
disabling those mechanisms may itself be a non-atomic operation (such as
writing some hardware registers and invalidating large caches), other
methods of coordination are required in order to guarantee safe
power-down and power-up at the cluster level.
The mechanism presented in this document describes a coherent memory
based protocol for performing the needed coordination. It aims to be as
lightweight as possible, while providing the required safety properties.
Basic model
-----------
Each cluster and CPU is assigned a state, as follows:
DOWN
COMING_UP
UP
GOING_DOWN
        +---------> UP ----------+
        |                        v
    COMING_UP            GOING_DOWN
        ^                        |
        +--------- DOWN <--------+
DOWN: The CPU or cluster is not coherent, and is either powered off or
suspended, or is ready to be powered off or suspended.
COMING_UP: The CPU or cluster has committed to moving to the UP state.
It may be part way through the process of initialisation and
enabling coherency.
UP: The CPU or cluster is active and coherent at the hardware
level. A CPU in this state is not necessarily being used
actively by the kernel.
GOING_DOWN: The CPU or cluster has committed to moving to the DOWN
state. It may be part way through the process of teardown and
coherency exit.
Each CPU has one of these states assigned to it at any point in time.
The CPU states are described in the "CPU state" section, below.
Each cluster is also assigned a state, but it is necessary to split the
state value into two parts (the "cluster" state and "inbound" state) and
to introduce additional states in order to avoid races between different
CPUs in the cluster simultaneously modifying the state. The cluster-
level states are described in the "Cluster state" section.
To help distinguish the CPU states from cluster states in this
discussion, the state names are given a CPU_ prefix for the CPU states,
and a CLUSTER_ or INBOUND_ prefix for the cluster states.
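To make this concrete, the states can be kept as small integers in
memory: one byte per CPU plus two bytes per cluster, with each byte
padded out to its own cache-writeback granule so that cached and
non-cached writers cannot clobber one another (the same requirement
motivates the sync_cache_*() helpers later in this series). The
following C sketch is illustrative only; the kernel's real layout is
struct mcpm_sync_struct, and the names and sizes used here are
assumptions:

    #define MAX_CPUS_PER_CLUSTER  4    /* example value */
    #define CACHE_GRANULE        64    /* >= largest writeback granule */

    struct cluster_sync {
        /* one CPU_* state byte per CPU, each on its own cache line */
        struct {
            unsigned char cpu;              /* CPU_* state */
            char pad[CACHE_GRANULE - 1];
        } cpus[MAX_CPUS_PER_CLUSTER];

        unsigned char cluster;              /* CLUSTER_* state */
        char pad1[CACHE_GRANULE - 1];

        unsigned char inbound;              /* INBOUND_* state */
        char pad2[CACHE_GRANULE - 1];
    };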
CPU state
---------
In this algorithm, each individual core in a multi-core processor is
referred to as a "CPU". CPUs are assumed to be single-threaded:
therefore, a CPU can only be doing one thing at a single point in time.
This means that CPUs fit the basic model closely.
The algorithm defines the following states for each CPU in the system:
CPU_DOWN
CPU_COMING_UP
CPU_UP
CPU_GOING_DOWN
         cluster setup and
        CPU setup complete          policy decision
              +-----------> CPU_UP ------------+
              |                                v
        CPU_COMING_UP                   CPU_GOING_DOWN
              ^                                |
              +----------- CPU_DOWN <----------+
             policy decision              CPU teardown complete
          or hardware event
The definitions of the four states correspond closely to the states of
the basic model.
Transitions between states occur as follows.
A trigger event (spontaneous) means that the CPU can transition to the
next state as a result of making local progress only, with no
requirement for any external event to happen.
CPU_DOWN:
A CPU reaches the CPU_DOWN state when it is ready for
power-down. On reaching this state, the CPU will typically
power itself down or suspend itself, via a WFI instruction or a
firmware call.
Next state: CPU_COMING_UP
Conditions: none
Trigger events:
a) an explicit hardware power-up operation, resulting
from a policy decision on another CPU;
b) a hardware event, such as an interrupt.
CPU_COMING_UP:
A CPU cannot start participating in hardware coherency until the
cluster is set up and coherent. If the cluster is not ready,
then the CPU will wait in the CPU_COMING_UP state until the
cluster has been set up.
Next state: CPU_UP
Conditions: The CPU's parent cluster must be in CLUSTER_UP.
Trigger events: Transition of the parent cluster to CLUSTER_UP.
Refer to the "Cluster state" section for a description of the
CLUSTER_UP state.
CPU_UP:
When a CPU reaches the CPU_UP state, it is safe for the CPU to
start participating in local coherency.
This is done by jumping to the kernel's CPU resume code.
Note that the definition of this state is slightly different
from the basic model definition: CPU_UP does not mean that the
CPU is coherent yet, but it does mean that it is safe to resume
the kernel. The kernel handles the rest of the resume
procedure, so the remaining steps are not visible as part of the
race avoidance algorithm.
The CPU remains in this state until an explicit policy decision
is made to shut down or suspend the CPU.
Next state: CPU_GOING_DOWN
Conditions: none
Trigger events: explicit policy decision
CPU_GOING_DOWN:
While in this state, the CPU exits coherency, including any
operations required to achieve this (such as cleaning data
caches).
Next state: CPU_DOWN
Conditions: local CPU teardown complete
Trigger events: (spontaneous)
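Putting the four CPU states together: on the outbound side, a CPU's
power-down path reduces to the short sequence sketched below. The
__mcpm_cpu_*() helpers are the ones introduced later in this series;
the two platform_*_hook() calls are hypothetical placeholders for
platform-specific work:

    extern void __mcpm_cpu_going_down(unsigned int cpu, unsigned int cluster);
    extern void __mcpm_cpu_down(unsigned int cpu, unsigned int cluster);
    extern void platform_exit_coherency_hook(void);  /* hypothetical */
    extern void platform_power_off_hook(void);       /* hypothetical */

    static void cpu_outbound_path(unsigned int cpu, unsigned int cluster)
    {
        __mcpm_cpu_going_down(cpu, cluster); /* CPU_UP -> CPU_GOING_DOWN */
        platform_exit_coherency_hook();      /* clean caches, leave SMP */
        __mcpm_cpu_down(cpu, cluster);       /* CPU_GOING_DOWN -> CPU_DOWN */
        platform_power_off_hook();           /* WFI, power off or suspend */
        /* wake-up re-enters via mcpm_entry_point in CPU_COMING_UP */
    }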
Cluster state
-------------
A cluster is a group of connected CPUs with some common resources.
Because a cluster contains multiple CPUs, it can be doing multiple
things at the same time. This has some implications. In particular, a
CPU can start up while another CPU is tearing the cluster down.
In this discussion, the "outbound side" is the view of the cluster state
as seen by a CPU tearing the cluster down. The "inbound side" is the
view of the cluster state as seen by a CPU setting the cluster up.
In order to enable safe coordination in such situations, it is important
that a CPU which is setting up the cluster can advertise its state
independently of the CPU which is tearing down the cluster. For this
reason, the cluster state is split into two parts:
"cluster" state: The global state of the cluster; or the state
on the outbound side:
CLUSTER_DOWN
CLUSTER_UP
CLUSTER_GOING_DOWN
"inbound" state: The state of the cluster on the inbound side.
INBOUND_NOT_COMING_UP
INBOUND_COMING_UP
The different pairings of these states result in six possible
states for the cluster as a whole:
                       CLUSTER_UP
          +==========> INBOUND_NOT_COMING_UP -------------+
          #                                               |
                                                          |
     CLUSTER_UP     <----+                                |
  INBOUND_COMING_UP      |                                v

          ^             CLUSTER_GOING_DOWN       CLUSTER_GOING_DOWN
          #              INBOUND_COMING_UP <=== INBOUND_NOT_COMING_UP

    CLUSTER_DOWN         |                                |
  INBOUND_COMING_UP <----+                                |
                                                          |
          ^                                               |
          +===========     CLUSTER_DOWN      <------------+
                       INBOUND_NOT_COMING_UP
Transitions -----> can only be made by the outbound CPU, and
only involve changes to the "cluster" state.
Transitions =====> can only be made by the inbound CPU, and only
involve changes to the "inbound" state, except where there is no
further transition possible on the outbound side (i.e., the
outbound CPU has put the cluster into the CLUSTER_DOWN state).
The race avoidance algorithm does not provide a way to determine
which exact CPUs within the cluster play these roles. This must
be decided in advance by some other means. Refer to the section
"Last man and first man selection" for more explanation.
CLUSTER_DOWN/INBOUND_NOT_COMING_UP is the only state where the
cluster can actually be powered down.
The parallelism of the inbound and outbound CPUs is observed by
the existence of two different paths from CLUSTER_GOING_DOWN/
INBOUND_NOT_COMING_UP (corresponding to GOING_DOWN in the basic
model) to CLUSTER_DOWN/INBOUND_COMING_UP (corresponding to
COMING_UP in the basic model). The second path avoids cluster
teardown completely.
CLUSTER_UP/INBOUND_COMING_UP is equivalent to UP in the basic
model. The final transition to CLUSTER_UP/INBOUND_NOT_COMING_UP
is trivial and merely resets the state machine ready for the
next cycle.
Details of the allowable transitions follow.
The next state in each case is notated
<cluster state>/<inbound state> (<transitioner>)
where the <transitioner> is the side on which the transition
can occur; either the inbound or the outbound side.
CLUSTER_DOWN/INBOUND_NOT_COMING_UP:
Next state: CLUSTER_DOWN/INBOUND_COMING_UP (inbound)
Conditions: none
Trigger events:
a) an explicit hardware power-up operation, resulting
from a policy decision on another CPU;
b) a hardware event, such as an interrupt.
CLUSTER_DOWN/INBOUND_COMING_UP:
In this state, an inbound CPU sets up the cluster, including
enabling of hardware coherency at the cluster level and any
other operations (such as cache invalidation) which are required
in order to achieve this.
The purpose of this state is to do sufficient cluster-level
setup to enable other CPUs in the cluster to enter coherency
safely.
Next state: CLUSTER_UP/INBOUND_COMING_UP (inbound)
Conditions: cluster-level setup and hardware coherency complete
Trigger events: (spontaneous)
CLUSTER_UP/INBOUND_COMING_UP:
Cluster-level setup is complete and hardware coherency is
enabled for the cluster. Other CPUs in the cluster can safely
enter coherency.
This is a transient state, leading immediately to
CLUSTER_UP/INBOUND_NOT_COMING_UP. All other CPUs on the cluster
should treat these two states as equivalent.
Next state: CLUSTER_UP/INBOUND_NOT_COMING_UP (inbound)
Conditions: none
Trigger events: (spontaneous)
CLUSTER_UP/INBOUND_NOT_COMING_UP:
Cluster-level setup is complete and hardware coherency is
enabled for the cluster. Other CPUs in the cluster can safely
enter coherency.
The cluster will remain in this state until a policy decision is
made to power the cluster down.
Next state: CLUSTER_GOING_DOWN/INBOUND_NOT_COMING_UP (outbound)
Conditions: none
Trigger events: policy decision to power down the cluster
CLUSTER_GOING_DOWN/INBOUND_NOT_COMING_UP:
An outbound CPU is tearing the cluster down. The selected CPU
must wait in this state until all CPUs in the cluster are in the
CPU_DOWN state.
When all CPUs are in the CPU_DOWN state, the cluster can be torn
down, for example by cleaning data caches and exiting
cluster-level coherency.
To avoid unnecessary teardown operations, the outbound CPU
should check the inbound cluster state for asynchronous
transitions to INBOUND_COMING_UP. Alternatively, individual
CPUs can be checked for entry into CPU_COMING_UP or CPU_UP.
Next states:
CLUSTER_DOWN/INBOUND_NOT_COMING_UP (outbound)
Conditions: cluster torn down and ready to power off
Trigger events: (spontaneous)
CLUSTER_GOING_DOWN/INBOUND_COMING_UP (inbound)
Conditions: none
Trigger events:
a) an explicit hardware power-up operation,
resulting from a policy decision on another
CPU;
b) a hardware event, such as an interrupt.
CLUSTER_GOING_DOWN/INBOUND_COMING_UP:
The cluster is (or was) being torn down, but another CPU has
come online in the meantime and is trying to set up the cluster
again.
If the outbound CPU observes this state, it has two choices:
a) back out of teardown, restoring the cluster to the
CLUSTER_UP state;
b) finish tearing the cluster down and put the cluster
in the CLUSTER_DOWN state; the inbound CPU will
set up the cluster again from there.
Choice (a) permits the removal of some latency by avoiding
unnecessary teardown and setup operations in situations where
the cluster is not really going to be powered down.
Next states:
CLUSTER_UP/INBOUND_COMING_UP (outbound)
Conditions: cluster-level setup and hardware
coherency complete
Trigger events: (spontaneous)
CLUSTER_DOWN/INBOUND_COMING_UP (outbound)
Conditions: cluster torn down and ready to power off
Trigger events: (spontaneous)
Last man and First man selection
--------------------------------
The CPU which performs cluster tear-down operations on the outbound side
is commonly referred to as the "last man".
The CPU which performs cluster setup on the inbound side is commonly
referred to as the "first man".
The race avoidance algorithm documented above does not provide a
mechanism to choose which CPUs should play these roles.
Last man:
When shutting down the cluster, all the CPUs involved are initially
executing Linux and hence coherent. Therefore, ordinary spinlocks can
be used to select a last man safely, before the CPUs become
non-coherent.
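As an illustration only (the lock and counter below are hypothetical
bookkeeping, not the kernel's actual data structures), a last-man pick
might look like this:

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(cluster_usage_lock);
    static unsigned int cpus_up_in_cluster[MAX_NR_CLUSTERS];

    /* Returns true for exactly one caller: the cluster's last man. */
    static bool pick_last_man(unsigned int cluster)
    {
        bool last;

        spin_lock(&cluster_usage_lock);
        last = (--cpus_up_in_cluster[cluster] == 0);
        spin_unlock(&cluster_usage_lock);

        return last;
    }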
First man:
Because CPUs may power up asynchronously in response to external wake-up
events, a dynamic mechanism is needed to make sure that only one CPU
attempts to play the first man role and do the cluster-level
initialisation: any other CPUs must wait for this to complete before
proceeding.
Cluster-level initialisation may involve actions such as configuring
coherency controls in the bus fabric.
The current implementation in mcpm_head.S uses a separate mutual exclusion
mechanism to do this arbitration. This mechanism is documented in
detail in vlocks.txt.
Features and Limitations
------------------------
Implementation:
The current ARM-based implementation is split between
arch/arm/common/mcpm_head.S (low-level inbound CPU operations) and
arch/arm/common/mcpm_entry.c (everything else):
__mcpm_cpu_going_down() signals the transition of a CPU to the
CPU_GOING_DOWN state.
__mcpm_cpu_down() signals the transition of a CPU to the CPU_DOWN
state.
A CPU transitions to CPU_COMING_UP and then to CPU_UP via the
low-level power-up code in mcpm_head.S. This could
involve CPU-specific setup code, but in the current
implementation it does not.
__mcpm_outbound_enter_critical() and __mcpm_outbound_leave_critical()
handle transitions from CLUSTER_UP to CLUSTER_GOING_DOWN
and from there to CLUSTER_DOWN or back to CLUSTER_UP (in
the case of an aborted cluster power-down).
These functions are more complex than the __mcpm_cpu_*()
functions due to the extra inter-CPU coordination which
is needed for safe transitions at the cluster level.
A cluster transitions from CLUSTER_DOWN back to CLUSTER_UP via
the low-level power-up code in mcpm_head.S. This
typically involves platform-specific setup code,
provided by the platform-specific power_up_setup
function registered via mcpm_sync_init.
Deep topologies:
As currently described and implemented, the algorithm does not
support CPU topologies involving more than two levels (i.e.,
clusters of clusters are not supported). The algorithm could be
extended by replicating the cluster-level states for the
additional topological levels, and modifying the transition
rules for the intermediate (non-outermost) cluster levels.
Colophon
--------
Originally created and documented by Dave Martin for Linaro Limited, in
collaboration with Nicolas Pitre and Achin Gupta.
Copyright (C) 2012-2013 Linaro Limited
Distributed under the terms of Version 2 of the GNU General Public
License, as defined in linux/COPYING.
vlocks for Bare-Metal Mutual Exclusion
======================================
Voting Locks, or "vlocks" provide a simple low-level mutual exclusion
mechanism, with reasonable but minimal requirements on the memory
system.
These are intended to be used to coordinate critical activity among CPUs
which are otherwise non-coherent, in situations where the hardware
provides no other mechanism to support this and ordinary spinlocks
cannot be used.
vlocks make use of the atomicity provided by the memory system for
writes to a single memory location. To arbitrate, every CPU "votes for
itself", by storing a unique number to a common memory location. The
final value seen in that memory location when all the votes have been
cast identifies the winner.
In order to make sure that the election produces an unambiguous result
in finite time, a CPU will only enter the election in the first place if
no winner has been chosen and the election does not appear to have
started yet.
Algorithm
---------
The easiest way to explain the vlocks algorithm is with some pseudo-code:
int currently_voting[NR_CPUS] = { 0, };
int last_vote = -1; /* no votes yet */

bool vlock_trylock(int this_cpu)
{
    /* signal our desire to vote */
    currently_voting[this_cpu] = 1;
    if (last_vote != -1) {
        /* someone already volunteered himself */
        currently_voting[this_cpu] = 0;
        return false; /* not ourself */
    }

    /* let's suggest ourself */
    last_vote = this_cpu;
    currently_voting[this_cpu] = 0;

    /* then wait until everyone else is done voting */
    for_each_cpu(i) {
        while (currently_voting[i] != 0)
            /* wait */;
    }

    /* result */
    if (last_vote == this_cpu)
        return true; /* we won */

    return false;
}

void vlock_unlock(void)
{
    last_vote = -1;
}
The currently_voting[] array provides a way for the CPUs to determine
whether an election is in progress, and plays a role analogous to the
"entering" array in Lamport's bakery algorithm [1].
However, once the election has started, the underlying memory system
atomicity is used to pick the winner. This avoids the need for a static
priority rule to act as a tie-breaker, or any counters which could
overflow.
As long as the last_vote variable is globally visible to all CPUs, it
will contain only one value that won't change once every CPU has cleared
its currently_voting flag.
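Typical usage then looks like the sketch below, following the
pseudo-code above; do_cluster_setup() and wait_for_cluster_up() are
hypothetical stand-ins for the winner's one-time setup work and for
whatever condition the losers poll:

    extern bool vlock_trylock(int this_cpu);
    extern void vlock_unlock(void);
    extern void do_cluster_setup(void);     /* hypothetical */
    extern void wait_for_cluster_up(void);  /* hypothetical */

    void first_man_arbitration(int this_cpu)
    {
        if (vlock_trylock(this_cpu)) {
            do_cluster_setup();             /* we won the election */
            vlock_unlock();
        } else {
            wait_for_cluster_up();          /* someone else won */
        }
    }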
Features and limitations
------------------------
* vlocks are not intended to be fair. In the contended case, it is the
_last_ CPU which attempts to get the lock which will be most likely
to win.
vlocks are therefore best suited to situations where it is necessary
to pick a unique winner, but it does not matter which CPU actually
wins.
* Like other similar mechanisms, vlocks will not scale well to a large
number of CPUs.
vlocks can be cascaded in a voting hierarchy to permit better scaling
if necessary, as in the following hypothetical example for 4096 CPUs:
/* first level: local election */
my_town = towns[(this_cpu >> 4) & 0xff];
I_won = vlock_trylock(my_town, this_cpu & 0xf);
if (I_won) {
    /* we won the town election, let's go for the state */
    my_state = states[(this_cpu >> 8) & 0xf];
    I_won = vlock_trylock(my_state, (this_cpu >> 4) & 0xf);
    if (I_won) {
        /* and so on */
        I_won = vlock_trylock(the_whole_country, (this_cpu >> 8) & 0xf);
        if (I_won) {
            /* ... */
        }
        vlock_unlock(the_whole_country);
    }
    vlock_unlock(my_state);
}
vlock_unlock(my_town);
ARM implementation
------------------
The current ARM implementation [2] contains some optimisations beyond
the basic algorithm:
* By packing the members of the currently_voting array close together,
we can read the whole array in one transaction (providing the number
of CPUs potentially contending the lock is small enough). This
reduces the number of round-trips required to external memory.
In the ARM implementation, this means that we can use a single load
and comparison:
LDR Rt, [Rn]
CMP Rt, #0
...in place of code equivalent to:
LDRB Rt, [Rn]
CMP Rt, #0
LDRBEQ Rt, [Rn, #1]
CMPEQ Rt, #0
LDRBEQ Rt, [Rn, #2]
CMPEQ Rt, #0
LDRBEQ Rt, [Rn, #3]
CMPEQ Rt, #0
This cuts down on the fast-path latency, as well as potentially
reducing bus contention in contended cases.
The optimisation relies on the fact that the ARM memory system
guarantees coherency between overlapping memory accesses of
different sizes, similarly to many other architectures. Note that
we do not care which element of currently_voting appears in which
bits of Rt, so there is no need to worry about endianness in this
optimisation.
If there are too many CPUs to read the currently_voting array in
one transaction, then multiple transactions are still required. The
implementation uses a simple loop of word-sized loads for this
case. The number of transactions is still fewer than would be
required if bytes were loaded individually.
In principle, we could aggregate further by using LDRD or LDM, but
to keep the code simple this was not attempted in the initial
implementation.
* vlocks are currently only used to coordinate between CPUs which are
unable to enable their caches yet. This means that the
implementation removes many of the barriers which would be required
when executing the algorithm in cached memory.
The packing of the currently_voting array does not work with cached
memory unless all CPUs contending the lock are cache-coherent, due
to cache writebacks from one CPU clobbering values written by other
CPUs. (Though if all the CPUs are cache-coherent, you should
probably be using proper spinlocks instead anyway.)
* The "no votes yet" value used for the last_vote variable is 0 (not
-1 as in the pseudocode). This allows statically-allocated vlocks
to be implicitly initialised to an unlocked state simply by putting
them in .bss.
An offset is added to each CPU's ID for the purpose of setting this
variable, so that no CPU uses the value 0 for its ID.
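In the ARM implementation shown later, that offset happens to be
VLOCK_VOTING_OFFSET: vlock_trylock adds it to the CPU ID before using
the result both as an array index and as the vote value. A sketch of
the resulting encoding (the helper names are illustrative, not part of
the kernel API):

    #define VLOCK_OWNER_NONE     0
    #define VLOCK_VOTING_OFFSET  4   /* also biases the stored CPU ID */

    static inline int cpu_to_vote(int cpu)  { return cpu + VLOCK_VOTING_OFFSET; }
    static inline int vote_to_cpu(int vote) { return vote - VLOCK_VOTING_OFFSET; }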
Colophon
--------
Originally created and documented by Dave Martin for Linaro Limited, for
use in ARM-based big.LITTLE platforms, with review and input gratefully
received from Nicolas Pitre and Achin Gupta. Thanks to Nicolas for
grabbing most of this text out of the relevant mail thread and writing
up the pseudocode.
Copyright (C) 2012-2013 Linaro Limited
Distributed under the terms of Version 2 of the GNU General Public
License, as defined in linux/COPYING.
References
----------
[1] Lamport, L. "A New Solution of Dijkstra's Concurrent Programming
Problem", Communications of the ACM 17, 8 (August 1974), 453-455.
http://en.wikipedia.org/wiki/Lamport%27s_bakery_algorithm
[2] linux/arch/arm/common/vlock.S, www.kernel.org.
@@ -59,6 +59,7 @@ config ARM
select CLONE_BACKWARDS
select OLD_SIGSUSPEND3
select OLD_SIGACTION
select HAVE_CONTEXT_TRACKING
help
The ARM series is a line of low-power-consumption RISC chip designs
licensed by ARM Ltd and targeted at embedded applications and
@@ -1479,6 +1480,14 @@ config HAVE_ARM_TWD
help
This options enables support for the ARM timer and watchdog unit
config MCPM
bool "Multi-Cluster Power Management"
depends on CPU_V7 && SMP
help
This option provides the common power management infrastructure
for (multi-)cluster based systems, such as big.LITTLE based
systems.
choice
prompt "Memory split"
default VMSPLIT_3G
@@ -1565,8 +1574,9 @@ config SCHED_HRTICK
def_bool HIGH_RES_TIMERS
config THUMB2_KERNEL
bool "Compile the kernel in Thumb-2 mode"
bool "Compile the kernel in Thumb-2 mode" if !CPU_THUMBONLY
depends on CPU_V7 && !CPU_V6 && !CPU_V6K
default y if CPU_THUMBONLY
select AEABI
select ARM_ASM_UNIFIED
select ARM_UNWIND
......
@@ -641,6 +641,17 @@ config DEBUG_LL_INCLUDE
default "debug/zynq.S" if DEBUG_ZYNQ_UART0 || DEBUG_ZYNQ_UART1
default "mach/debug-macro.S"
config DEBUG_UNCOMPRESS
bool
default y if ARCH_MULTIPLATFORM && DEBUG_LL && \
!DEBUG_OMAP2PLUS_UART && \
!DEBUG_TEGRA_UART
config UNCOMPRESS_INCLUDE
string
default "debug/uncompress.h" if ARCH_MULTIPLATFORM
default "mach/uncompress.h"
config EARLY_PRINTK
bool "Early printk"
depends on DEBUG_LL
......
@@ -24,6 +24,9 @@ endif
AFLAGS_head.o += -DTEXT_OFFSET=$(TEXT_OFFSET)
HEAD = head.o
OBJS += misc.o decompress.o
ifeq ($(CONFIG_DEBUG_UNCOMPRESS),y)
OBJS += debug.o
endif
FONTC = $(srctree)/drivers/video/console/font_acorn_8x8.c
# string library code (-Os is enforced to keep it much smaller)
......
#include <linux/linkage.h>
#include <asm/assembler.h>
#include CONFIG_DEBUG_LL_INCLUDE
ENTRY(putc)
addruart r1, r2, r3
waituart r3, r1
senduart r0, r1
busyuart r3, r1
mov pc, lr
ENDPROC(putc)
@@ -25,13 +25,7 @@ unsigned int __machine_arch_type;
static void putstr(const char *ptr);
extern void error(char *x);
#ifdef CONFIG_ARCH_MULTIPLATFORM
static inline void putc(int c) {}
static inline void flush(void) {}
static inline void arch_decomp_setup(void) {}
#else
#include <mach/uncompress.h>
#endif
#include CONFIG_UNCOMPRESS_INCLUDE
#ifdef CONFIG_DEBUG_ICEDCC
......
@@ -11,3 +11,6 @@ obj-$(CONFIG_SHARP_PARAM) += sharpsl_param.o
obj-$(CONFIG_SHARP_SCOOP) += scoop.o
obj-$(CONFIG_PCI_HOST_ITE8152) += it8152.o
obj-$(CONFIG_ARM_TIMER_SP804) += timer-sp.o
obj-$(CONFIG_MCPM) += mcpm_head.o mcpm_entry.o mcpm_platsmp.o vlock.o
AFLAGS_mcpm_head.o := -march=armv7-a
AFLAGS_vlock.o := -march=armv7-a
/*
* arch/arm/common/mcpm_entry.c -- entry point for multi-cluster PM
*
* Created by: Nicolas Pitre, March 2012
* Copyright: (C) 2012-2013 Linaro Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/irqflags.h>
#include <asm/mcpm.h>
#include <asm/cacheflush.h>
#include <asm/idmap.h>
#include <asm/cputype.h>
extern unsigned long mcpm_entry_vectors[MAX_NR_CLUSTERS][MAX_CPUS_PER_CLUSTER];
void mcpm_set_entry_vector(unsigned cpu, unsigned cluster, void *ptr)
{
unsigned long val = ptr ? virt_to_phys(ptr) : 0;
mcpm_entry_vectors[cluster][cpu] = val;
sync_cache_w(&mcpm_entry_vectors[cluster][cpu]);
}
static const struct mcpm_platform_ops *platform_ops;
int __init mcpm_platform_register(const struct mcpm_platform_ops *ops)
{
if (platform_ops)
return -EBUSY;
platform_ops = ops;
return 0;
}
int mcpm_cpu_power_up(unsigned int cpu, unsigned int cluster)
{
if (!platform_ops)
return -EUNATCH; /* try not to shadow power_up errors */
might_sleep();
return platform_ops->power_up(cpu, cluster);
}
typedef void (*phys_reset_t)(unsigned long);
void mcpm_cpu_power_down(void)
{
phys_reset_t phys_reset;
BUG_ON(!platform_ops);
BUG_ON(!irqs_disabled());
/*
* Do this before calling into the power_down method,
* as it might not always be safe to do afterwards.
*/
setup_mm_for_reboot();
platform_ops->power_down();
/*
* It is possible for a power_up request to happen concurrently
* with a power_down request for the same CPU. In this case the
* power_down method might not be able to actually enter a
* powered down state with the WFI instruction if the power_up
* method has removed the required reset condition. The
* power_down method is then allowed to return. We must perform
* a re-entry in the kernel as if the power_up method just had
* deasserted reset on the CPU.
*
* To simplify race issues, the platform specific implementation
* must accommodate for the possibility of unordered calls to
* power_down and power_up with a usage count. Therefore, if a
* call to power_up is issued for a CPU that is not down, then
* the next call to power_down must not attempt a full shutdown
* but only do the minimum (normally disabling L1 cache and CPU
* coherency) and return just as if a concurrent power_up request
* had happened as described above.
*/
phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset);
phys_reset(virt_to_phys(mcpm_entry_point));
/* should never get here */
BUG();
}
void mcpm_cpu_suspend(u64 expected_residency)
{
phys_reset_t phys_reset;
BUG_ON(!platform_ops);
BUG_ON(!irqs_disabled());
/* Very similar to mcpm_cpu_power_down() */
setup_mm_for_reboot();
platform_ops->suspend(expected_residency);
phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset);
phys_reset(virt_to_phys(mcpm_entry_point));
BUG();
}
int mcpm_cpu_powered_up(void)
{
if (!platform_ops)
return -EUNATCH;
if (platform_ops->powered_up)
platform_ops->powered_up();
return 0;
}
struct sync_struct mcpm_sync;
/*
* __mcpm_cpu_going_down: Indicates that the cpu is being torn down.
* This must be called at the point of committing to teardown of a CPU.
* The CPU cache (SCTRL.C bit) is expected to still be active.
*/
void __mcpm_cpu_going_down(unsigned int cpu, unsigned int cluster)
{
mcpm_sync.clusters[cluster].cpus[cpu].cpu = CPU_GOING_DOWN;
sync_cache_w(&mcpm_sync.clusters[cluster].cpus[cpu].cpu);
}
/*
* __mcpm_cpu_down: Indicates that cpu teardown is complete and that the
* cluster can be torn down without disrupting this CPU.
* To avoid deadlocks, this must be called before a CPU is powered down.
* The CPU cache (SCTRL.C bit) is expected to be off.
* However L2 cache might or might not be active.
*/
void __mcpm_cpu_down(unsigned int cpu, unsigned int cluster)
{
dmb();
mcpm_sync.clusters[cluster].cpus[cpu].cpu = CPU_DOWN;
sync_cache_w(&mcpm_sync.clusters[cluster].cpus[cpu].cpu);
dsb_sev();
}
/*
* __mcpm_outbound_leave_critical: Leave the cluster teardown critical section.
* @state: the final state of the cluster:
* CLUSTER_UP: no destructive teardown was done and the cluster has been
* restored to the previous state (CPU cache still active); or
* CLUSTER_DOWN: the cluster has been torn-down, ready for power-off
* (CPU cache disabled, L2 cache either enabled or disabled).
*/
void __mcpm_outbound_leave_critical(unsigned int cluster, int state)
{
dmb();
mcpm_sync.clusters[cluster].cluster = state;
sync_cache_w(&mcpm_sync.clusters[cluster].cluster);
dsb_sev();
}
/*
* __mcpm_outbound_enter_critical: Enter the cluster teardown critical section.
* This function should be called by the last man, after local CPU teardown
* is complete. CPU cache expected to be active.
*
* Returns:
* false: the critical section was not entered because an inbound CPU was
* observed, or the cluster is already being set up;
* true: the critical section was entered: it is now safe to tear down the
* cluster.
*/
bool __mcpm_outbound_enter_critical(unsigned int cpu, unsigned int cluster)
{
unsigned int i;
struct mcpm_sync_struct *c = &mcpm_sync.clusters[cluster];
/* Warn inbound CPUs that the cluster is being torn down: */
c->cluster = CLUSTER_GOING_DOWN;
sync_cache_w(&c->cluster);
/* Back out if the inbound cluster is already in the critical region: */
sync_cache_r(&c->inbound);
if (c->inbound == INBOUND_COMING_UP)
goto abort;
/*
* Wait for all CPUs to get out of the GOING_DOWN state, so that local
* teardown is complete on each CPU before tearing down the cluster.
*
* If any CPU has been woken up again from the DOWN state, then we
* shouldn't be taking the cluster down at all: abort in that case.
*/
sync_cache_r(&c->cpus);
for (i = 0; i < MAX_CPUS_PER_CLUSTER; i++) {
int cpustate;
if (i == cpu)
continue;
while (1) {
cpustate = c->cpus[i].cpu;
if (cpustate != CPU_GOING_DOWN)
break;
wfe();
sync_cache_r(&c->cpus[i].cpu);
}
switch (cpustate) {
case CPU_DOWN:
continue;
default:
goto abort;
}
}
return true;
abort:
__mcpm_outbound_leave_critical(cluster, CLUSTER_UP);
return false;
}
int __mcpm_cluster_state(unsigned int cluster)
{
sync_cache_r(&mcpm_sync.clusters[cluster].cluster);
return mcpm_sync.clusters[cluster].cluster;
}
extern unsigned long mcpm_power_up_setup_phys;
int __init mcpm_sync_init(
void (*power_up_setup)(unsigned int affinity_level))
{
unsigned int i, j, mpidr, this_cluster;
BUILD_BUG_ON(MCPM_SYNC_CLUSTER_SIZE * MAX_NR_CLUSTERS != sizeof mcpm_sync);
BUG_ON((unsigned long)&mcpm_sync & (__CACHE_WRITEBACK_GRANULE - 1));
/*
* Set initial CPU and cluster states.
* Only one cluster is assumed to be active at this point.
*/
for (i = 0; i < MAX_NR_CLUSTERS; i++) {
mcpm_sync.clusters[i].cluster = CLUSTER_DOWN;
mcpm_sync.clusters[i].inbound = INBOUND_NOT_COMING_UP;
for (j = 0; j < MAX_CPUS_PER_CLUSTER; j++)
mcpm_sync.clusters[i].cpus[j].cpu = CPU_DOWN;
}
mpidr = read_cpuid_mpidr();
this_cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
for_each_online_cpu(i)
mcpm_sync.clusters[this_cluster].cpus[i].cpu = CPU_UP;
mcpm_sync.clusters[this_cluster].cluster = CLUSTER_UP;
sync_cache_w(&mcpm_sync);
if (power_up_setup) {
mcpm_power_up_setup_phys = virt_to_phys(power_up_setup);
sync_cache_w(&mcpm_power_up_setup_phys);
}
return 0;
}
/*
* arch/arm/common/mcpm_head.S -- kernel entry point for multi-cluster PM
*
* Created by: Nicolas Pitre, March 2012
* Copyright: (C) 2012-2013 Linaro Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*
* Refer to Documentation/arm/cluster-pm-race-avoidance.txt
* for details of the synchronisation algorithms used here.
*/
#include <linux/linkage.h>
#include <asm/mcpm.h>
#include "vlock.h"
.if MCPM_SYNC_CLUSTER_CPUS
.error "cpus must be the first member of struct mcpm_sync_struct"
.endif
.macro pr_dbg string
#if defined(CONFIG_DEBUG_LL) && defined(DEBUG)
b 1901f
1902: .asciz "CPU"
1903: .asciz " cluster"
1904: .asciz ": \string"
.align
1901: adr r0, 1902b
bl printascii
mov r0, r9
bl printhex8
adr r0, 1903b
bl printascii
mov r0, r10
bl printhex8
adr r0, 1904b
bl printascii
#endif
.endm
.arm
.align
ENTRY(mcpm_entry_point)
THUMB( adr r12, BSYM(1f) )
THUMB( bx r12 )
THUMB( .thumb )
1:
mrc p15, 0, r0, c0, c0, 5 @ MPIDR
ubfx r9, r0, #0, #8 @ r9 = cpu
ubfx r10, r0, #8, #8 @ r10 = cluster
mov r3, #MAX_CPUS_PER_CLUSTER
mla r4, r3, r10, r9 @ r4 = canonical CPU index
cmp r4, #(MAX_CPUS_PER_CLUSTER * MAX_NR_CLUSTERS)
blo 2f
/* We didn't expect this CPU. Try to cheaply make it quiet. */
1: wfi
wfe
b 1b
2: pr_dbg "kernel mcpm_entry_point\n"
/*
* MMU is off so we need to get to various variables in a
* position independent way.
*/
adr r5, 3f
ldmia r5, {r6, r7, r8, r11}
add r6, r5, r6 @ r6 = mcpm_entry_vectors
ldr r7, [r5, r7] @ r7 = mcpm_power_up_setup_phys
add r8, r5, r8 @ r8 = mcpm_sync
add r11, r5, r11 @ r11 = first_man_locks
mov r0, #MCPM_SYNC_CLUSTER_SIZE
mla r8, r0, r10, r8 @ r8 = sync cluster base
@ Signal that this CPU is coming UP:
mov r0, #CPU_COMING_UP
mov r5, #MCPM_SYNC_CPU_SIZE
mla r5, r9, r5, r8 @ r5 = sync cpu address
strb r0, [r5]
@ At this point, the cluster cannot unexpectedly enter the GOING_DOWN
@ state, because there is at least one active CPU (this CPU).
mov r0, #VLOCK_SIZE
mla r11, r0, r10, r11 @ r11 = cluster first man lock
mov r0, r11
mov r1, r9 @ cpu
bl vlock_trylock @ implies DMB
cmp r0, #0 @ failed to get the lock?
bne mcpm_setup_wait @ wait for cluster setup if so
ldrb r0, [r8, #MCPM_SYNC_CLUSTER_CLUSTER]
cmp r0, #CLUSTER_UP @ cluster already up?
bne mcpm_setup @ if not, set up the cluster
@ Otherwise, release the first man lock and skip setup:
mov r0, r11
bl vlock_unlock
b mcpm_setup_complete
mcpm_setup:
@ Control dependency implies strb not observable before previous ldrb.
@ Signal that the cluster is being brought up:
mov r0, #INBOUND_COMING_UP
strb r0, [r8, #MCPM_SYNC_CLUSTER_INBOUND]
dmb
@ Any CPU trying to take the cluster into CLUSTER_GOING_DOWN from this
@ point onwards will observe INBOUND_COMING_UP and abort.
@ Wait for any previously-pending cluster teardown operations to abort
@ or complete:
mcpm_teardown_wait:
ldrb r0, [r8, #MCPM_SYNC_CLUSTER_CLUSTER]
cmp r0, #CLUSTER_GOING_DOWN
bne first_man_setup
wfe
b mcpm_teardown_wait
first_man_setup:
dmb
@ If the outbound gave up before teardown started, skip cluster setup:
cmp r0, #CLUSTER_UP
beq mcpm_setup_leave
@ power_up_setup is now responsible for setting up the cluster:
cmp r7, #0
mov r0, #1 @ second (cluster) affinity level
blxne r7 @ Call power_up_setup if defined
dmb
mov r0, #CLUSTER_UP
strb r0, [r8, #MCPM_SYNC_CLUSTER_CLUSTER]
dmb
mcpm_setup_leave:
@ Leave the cluster setup critical section:
mov r0, #INBOUND_NOT_COMING_UP
strb r0, [r8, #MCPM_SYNC_CLUSTER_INBOUND]
dsb
sev
mov r0, r11
bl vlock_unlock @ implies DMB
b mcpm_setup_complete
@ In the contended case, non-first men wait here for cluster setup
@ to complete:
mcpm_setup_wait:
ldrb r0, [r8, #MCPM_SYNC_CLUSTER_CLUSTER]
cmp r0, #CLUSTER_UP
wfene
bne mcpm_setup_wait
dmb
mcpm_setup_complete:
@ If a platform-specific CPU setup hook is needed, it is
@ called from here.
cmp r7, #0
mov r0, #0 @ first (CPU) affinity level
blxne r7 @ Call power_up_setup if defined
dmb
@ Mark the CPU as up:
mov r0, #CPU_UP
strb r0, [r5]
@ Observability order of CPU_UP and opening of the gate does not matter.
mcpm_entry_gated:
ldr r5, [r6, r4, lsl #2] @ r5 = CPU entry vector
cmp r5, #0
wfeeq
beq mcpm_entry_gated
dmb
pr_dbg "released\n"
bx r5
.align 2
3: .word mcpm_entry_vectors - .
.word mcpm_power_up_setup_phys - 3b
.word mcpm_sync - 3b
.word first_man_locks - 3b
ENDPROC(mcpm_entry_point)
.bss
.align CACHE_WRITEBACK_ORDER
.type first_man_locks, #object
first_man_locks:
.space VLOCK_SIZE * MAX_NR_CLUSTERS
.align CACHE_WRITEBACK_ORDER
.type mcpm_entry_vectors, #object
ENTRY(mcpm_entry_vectors)
.space 4 * MAX_NR_CLUSTERS * MAX_CPUS_PER_CLUSTER
.type mcpm_power_up_setup_phys, #object
ENTRY(mcpm_power_up_setup_phys)
.space 4 @ set by mcpm_sync_init()
/*
* linux/arch/arm/mach-vexpress/mcpm_platsmp.c
*
* Created by: Nicolas Pitre, November 2012
* Copyright: (C) 2012-2013 Linaro Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* Code to handle secondary CPU bringup and hotplug for the cluster power API.
*/
#include <linux/init.h>
#include <linux/smp.h>
#include <linux/spinlock.h>
#include <linux/irqchip/arm-gic.h>
#include <asm/mcpm.h>
#include <asm/smp.h>
#include <asm/smp_plat.h>
static void __init simple_smp_init_cpus(void)
{
}
static int __cpuinit mcpm_boot_secondary(unsigned int cpu, struct task_struct *idle)
{
unsigned int mpidr, pcpu, pcluster, ret;
extern void secondary_startup(void);
mpidr = cpu_logical_map(cpu);
pcpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
pcluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
pr_debug("%s: logical CPU %d is physical CPU %d cluster %d\n",
__func__, cpu, pcpu, pcluster);
mcpm_set_entry_vector(pcpu, pcluster, NULL);
ret = mcpm_cpu_power_up(pcpu, pcluster);
if (ret)
return ret;
mcpm_set_entry_vector(pcpu, pcluster, secondary_startup);
arch_send_wakeup_ipi_mask(cpumask_of(cpu));
dsb_sev();
return 0;
}
static void __cpuinit mcpm_secondary_init(unsigned int cpu)
{
mcpm_cpu_powered_up();
gic_secondary_init(0);
}
#ifdef CONFIG_HOTPLUG_CPU
static int mcpm_cpu_disable(unsigned int cpu)
{
/*
* We assume all CPUs may be shut down.
* This would be the hook to use for eventual Secure
* OS migration requests as described in the PSCI spec.
*/
return 0;
}
static void mcpm_cpu_die(unsigned int cpu)
{
unsigned int mpidr, pcpu, pcluster;
mpidr = read_cpuid_mpidr();
pcpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
pcluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
mcpm_set_entry_vector(pcpu, pcluster, NULL);
mcpm_cpu_power_down();
}
#endif
static struct smp_operations __initdata mcpm_smp_ops = {
.smp_init_cpus = simple_smp_init_cpus,
.smp_boot_secondary = mcpm_boot_secondary,
.smp_secondary_init = mcpm_secondary_init,
#ifdef CONFIG_HOTPLUG_CPU
.cpu_disable = mcpm_cpu_disable,
.cpu_die = mcpm_cpu_die,
#endif
};
void __init mcpm_smp_set_ops(void)
{
smp_set_ops(&mcpm_smp_ops);
}
/*
* vlock.S - simple voting lock implementation for ARM
*
* Created by: Dave Martin, 2012-08-16
* Copyright: (C) 2012-2013 Linaro Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*
* This algorithm is described in more detail in
* Documentation/arm/vlocks.txt.
*/
#include <linux/linkage.h>
#include "vlock.h"
/* Select different code if voting flags can fit in a single word. */
#if VLOCK_VOTING_SIZE > 4
#define FEW(x...)
#define MANY(x...) x
#else
#define FEW(x...) x
#define MANY(x...)
#endif
@ voting lock for first-man coordination
.macro voting_begin rbase:req, rcpu:req, rscratch:req
mov \rscratch, #1
strb \rscratch, [\rbase, \rcpu]
dmb
.endm
.macro voting_end rbase:req, rcpu:req, rscratch:req
dmb
mov \rscratch, #0
strb \rscratch, [\rbase, \rcpu]
dsb
sev
.endm
/*
* The vlock structure must reside in Strongly-Ordered or Device memory.
* This implementation deliberately eliminates most of the barriers which
* would be required for other memory types, and assumes that independent
* writes to neighbouring locations within a cacheline do not interfere
* with one another.
*/
@ r0: lock structure base
@ r1: CPU ID (0-based index within cluster)
ENTRY(vlock_trylock)
add r1, r1, #VLOCK_VOTING_OFFSET
voting_begin r0, r1, r2
ldrb r2, [r0, #VLOCK_OWNER_OFFSET] @ check whether lock is held
cmp r2, #VLOCK_OWNER_NONE
bne trylock_fail @ fail if so
@ Control dependency implies strb not observable before previous ldrb.
strb r1, [r0, #VLOCK_OWNER_OFFSET] @ submit my vote
voting_end r0, r1, r2 @ implies DMB
@ Wait for the current round of voting to finish:
MANY( mov r3, #VLOCK_VOTING_OFFSET )
0:
MANY( ldr r2, [r0, r3] )
FEW( ldr r2, [r0, #VLOCK_VOTING_OFFSET] )
cmp r2, #0
wfene
bne 0b
MANY( add r3, r3, #4 )
MANY( cmp r3, #VLOCK_VOTING_OFFSET + VLOCK_VOTING_SIZE )
MANY( bne 0b )
@ Check who won:
dmb
ldrb r2, [r0, #VLOCK_OWNER_OFFSET]
eor r0, r1, r2 @ zero if I won, else nonzero
bx lr
trylock_fail:
voting_end r0, r1, r2
mov r0, #1 @ nonzero indicates that I lost
bx lr
ENDPROC(vlock_trylock)
@ r0: lock structure base
ENTRY(vlock_unlock)
dmb
mov r1, #VLOCK_OWNER_NONE
strb r1, [r0, #VLOCK_OWNER_OFFSET]
dsb
sev
bx lr
ENDPROC(vlock_unlock)
/*
* vlock.h - simple voting lock implementation
*
* Created by: Dave Martin, 2012-08-16
* Copyright: (C) 2012-2013 Linaro Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef __VLOCK_H
#define __VLOCK_H
#include <asm/mcpm.h>
/* Offsets and sizes are rounded to a word (4 bytes) */
#define VLOCK_OWNER_OFFSET 0
#define VLOCK_VOTING_OFFSET 4
#define VLOCK_VOTING_SIZE ((MAX_CPUS_PER_CLUSTER + 3) / 4 * 4)
#define VLOCK_SIZE (VLOCK_VOTING_OFFSET + VLOCK_VOTING_SIZE)
#define VLOCK_OWNER_NONE 0
#endif /* ! __VLOCK_H */
@@ -243,6 +243,29 @@ typedef struct {
#define ATOMIC64_INIT(i) { (i) }
#ifdef CONFIG_ARM_LPAE
static inline u64 atomic64_read(const atomic64_t *v)
{
u64 result;
__asm__ __volatile__("@ atomic64_read\n"
" ldrd %0, %H0, [%1]"
: "=&r" (result)
: "r" (&v->counter), "Qo" (v->counter)
);
return result;
}
static inline void atomic64_set(atomic64_t *v, u64 i)
{
__asm__ __volatile__("@ atomic64_set\n"
" strd %2, %H2, [%1]"
: "=Qo" (v->counter)
: "r" (&v->counter), "r" (i)
);
}
#else
static inline u64 atomic64_read(const atomic64_t *v)
{
u64 result;
@@ -269,6 +292,7 @@ static inline void atomic64_set(atomic64_t *v, u64 i)
: "r" (&v->counter), "r" (i)
: "cc");
}
#endif
static inline void atomic64_add(u64 i, atomic64_t *v)
{
......
@@ -363,4 +363,79 @@ static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
flush_cache_all();
}
/*
* Memory synchronization helpers for mixed cached vs non cached accesses.
*
* Some synchronization algorithms have to set states in memory with the
* cache enabled or disabled depending on the code path. It is crucial
* to always ensure proper cache maintenance to update main memory right
* away in that case.
*
* Any cached write must be followed by a cache clean operation.
* Any cached read must be preceded by a cache invalidate operation.
* Yet, in the read case, a cache flush i.e. atomic clean+invalidate
* operation is needed to avoid discarding possible concurrent writes to the
* accessed memory.
*
* Also, in order to prevent a cached writer from interfering with an
* adjacent non-cached writer, each state variable must be located to
* a separate cache line.
*/
/*
* This needs to be >= the max cache writeback size of all
* supported platforms included in the current kernel configuration.
* This is used to align state variables to their own cache lines.
*/
#define __CACHE_WRITEBACK_ORDER 6 /* guessed from existing platforms */
#define __CACHE_WRITEBACK_GRANULE (1 << __CACHE_WRITEBACK_ORDER)
/*
* There is no __cpuc_clean_dcache_area but we use it anyway for
* code intent clarity, and alias it to __cpuc_flush_dcache_area.
*/
#define __cpuc_clean_dcache_area __cpuc_flush_dcache_area
/*
* Ensure preceding writes to *p by this CPU are visible to
* subsequent reads by other CPUs:
*/
static inline void __sync_cache_range_w(volatile void *p, size_t size)
{
char *_p = (char *)p;
__cpuc_clean_dcache_area(_p, size);
outer_clean_range(__pa(_p), __pa(_p + size));
}
/*
* Ensure preceding writes to *p by other CPUs are visible to
* subsequent reads by this CPU. We must be careful not to
* discard data simultaneously written by another CPU, hence the
* usage of flush rather than invalidate operations.
*/
static inline void __sync_cache_range_r(volatile void *p, size_t size)
{
char *_p = (char *)p;
#ifdef CONFIG_OUTER_CACHE
if (outer_cache.flush_range) {
/*
* Ensure dirty data migrated from other CPUs into our cache
* are cleaned out safely before the outer cache is cleaned:
*/
__cpuc_clean_dcache_area(_p, size);
/* Clean and invalidate stale data for *p from outer ... */
outer_flush_range(__pa(_p), __pa(_p + size));
}
#endif
/* ... and inner cache: */
__cpuc_flush_dcache_area(_p, size);
}
#define sync_cache_w(ptr) __sync_cache_range_w(ptr, sizeof *(ptr))
#define sync_cache_r(ptr) __sync_cache_range_r(ptr, sizeof *(ptr))
#endif
@@ -42,6 +42,8 @@
#define vectors_high() (0)
#endif
#ifdef CONFIG_CPU_CP15
extern unsigned long cr_no_alignment; /* defined in entry-armv.S */
extern unsigned long cr_alignment; /* defined in entry-armv.S */
@@ -82,6 +84,18 @@ static inline void set_copro_access(unsigned int val)
isb();
}
#endif
#else /* ifdef CONFIG_CPU_CP15 */
/*
* cr_alignment and cr_no_alignment are tightly coupled to cp15 (at least in the
* minds of the developers). Yielding 0 for machines without a cp15 (and making
* it read-only) is fine for most cases and saves quite some #ifdeffery.
*/
#define cr_no_alignment UL(0)
#define cr_alignment UL(0)
#endif /* ifdef CONFIG_CPU_CP15 / else */
#endif /* ifndef __ASSEMBLY__ */
#endif
@@ -38,6 +38,24 @@
#define MPIDR_AFFINITY_LEVEL(mpidr, level) \
((mpidr >> (MPIDR_LEVEL_BITS * level)) & MPIDR_LEVEL_MASK)
#define ARM_CPU_IMP_ARM 0x41
#define ARM_CPU_IMP_INTEL 0x69
#define ARM_CPU_PART_ARM1136 0xB360
#define ARM_CPU_PART_ARM1156 0xB560
#define ARM_CPU_PART_ARM1176 0xB760
#define ARM_CPU_PART_ARM11MPCORE 0xB020
#define ARM_CPU_PART_CORTEX_A8 0xC080
#define ARM_CPU_PART_CORTEX_A9 0xC090
#define ARM_CPU_PART_CORTEX_A5 0xC050
#define ARM_CPU_PART_CORTEX_A15 0xC0F0
#define ARM_CPU_PART_CORTEX_A7 0xC070
#define ARM_CPU_XSCALE_ARCH_MASK 0xe000
#define ARM_CPU_XSCALE_ARCH_V1 0x2000
#define ARM_CPU_XSCALE_ARCH_V2 0x4000
#define ARM_CPU_XSCALE_ARCH_V3 0x6000
extern unsigned int processor_id;
#ifdef CONFIG_CPU_CP15
@@ -50,6 +68,7 @@ extern unsigned int processor_id;
: "cc"); \
__val; \
})
#define read_cpuid_ext(ext_reg) \
({ \
unsigned int __val; \
@@ -59,29 +78,24 @@ extern unsigned int processor_id;
: "cc"); \
__val; \
})
#else
#define read_cpuid(reg) (processor_id)
#define read_cpuid_ext(reg) 0
#endif
#define ARM_CPU_IMP_ARM 0x41
#define ARM_CPU_IMP_INTEL 0x69
#else /* ifdef CONFIG_CPU_CP15 */
#define ARM_CPU_PART_ARM1136 0xB360
#define ARM_CPU_PART_ARM1156 0xB560
#define ARM_CPU_PART_ARM1176 0xB760
#define ARM_CPU_PART_ARM11MPCORE 0xB020
#define ARM_CPU_PART_CORTEX_A8 0xC080
#define ARM_CPU_PART_CORTEX_A9 0xC090
#define ARM_CPU_PART_CORTEX_A5 0xC050
#define ARM_CPU_PART_CORTEX_A15 0xC0F0
#define ARM_CPU_PART_CORTEX_A7 0xC070
/*
* read_cpuid and read_cpuid_ext should only ever be called on machines that
* have cp15 so warn on other usages.
*/
#define read_cpuid(reg) \
({ \
WARN_ON_ONCE(1); \
0; \
})
#define ARM_CPU_XSCALE_ARCH_MASK 0xe000
#define ARM_CPU_XSCALE_ARCH_V1 0x2000
#define ARM_CPU_XSCALE_ARCH_V2 0x4000
#define ARM_CPU_XSCALE_ARCH_V3 0x6000
#define read_cpuid_ext(reg) read_cpuid(reg)
#endif /* ifdef CONFIG_CPU_CP15 / else */
#ifdef CONFIG_CPU_CP15
/*
* The CPU ID never changes at run time, so we might as well tell the
* compiler that it's constant. Use this function to read the CPU ID
@@ -92,6 +106,15 @@ static inline unsigned int __attribute_const__ read_cpuid_id(void)
return read_cpuid(CPUID_ID);
}
#else /* ifdef CONFIG_CPU_CP15 */
static inline unsigned int __attribute_const__ read_cpuid_id(void)
{
return processor_id;
}
#endif /* ifdef CONFIG_CPU_CP15 / else */
static inline unsigned int __attribute_const__ read_cpuid_implementor(void)
{
return (read_cpuid_id() & 0xFF000000) >> 24;
......
@@ -18,12 +18,12 @@
* ================
*
* We have the following to choose from:
* arm6 - ARM6 style
* arm7 - ARM7 style
* v4_early - ARMv4 without Thumb early abort handler
* v4t_late - ARMv4 with Thumb late abort handler
* v4t_early - ARMv4 with Thumb early abort handler
* v5tej_early - ARMv5 with Thumb and Java early abort handler
* v5t_early - ARMv5 with Thumb early abort handler
* v5tj_early - ARMv5 with Thumb and Java early abort handler
* xscale - ARMv5 with Thumb with Xscale extensions
* v6_early - ARMv6 generic early abort handler
* v7_early - ARMv7 generic early abort handler
@@ -39,19 +39,19 @@
# endif
#endif
#ifdef CONFIG_CPU_ABRT_LV4T
#ifdef CONFIG_CPU_ABRT_EV4
# ifdef CPU_DABORT_HANDLER
# define MULTI_DABORT 1
# else
# define CPU_DABORT_HANDLER v4t_late_abort
# define CPU_DABORT_HANDLER v4_early_abort
# endif
#endif
#ifdef CONFIG_CPU_ABRT_EV4
#ifdef CONFIG_CPU_ABRT_LV4T
# ifdef CPU_DABORT_HANDLER
# define MULTI_DABORT 1
# else
# define CPU_DABORT_HANDLER v4_early_abort
# define CPU_DABORT_HANDLER v4t_late_abort
# endif
#endif
@@ -63,19 +63,19 @@
# endif
#endif
#ifdef CONFIG_CPU_ABRT_EV5TJ
#ifdef CONFIG_CPU_ABRT_EV5T
# ifdef CPU_DABORT_HANDLER
# define MULTI_DABORT 1
# else
# define CPU_DABORT_HANDLER v5tj_early_abort
# define CPU_DABORT_HANDLER v5t_early_abort
# endif
#endif
#ifdef CONFIG_CPU_ABRT_EV5T
#ifdef CONFIG_CPU_ABRT_EV5TJ
# ifdef CPU_DABORT_HANDLER
# define MULTI_DABORT 1
# else
# define CPU_DABORT_HANDLER v5t_early_abort
# define CPU_DABORT_HANDLER v5tj_early_abort
# endif
#endif
......
@@ -211,4 +211,8 @@
#define HSR_HVC_IMM_MASK ((1UL << 16) - 1)
#define HSR_DABT_S1PTW (1U << 7)
#define HSR_DABT_CM (1U << 8)
#define HSR_DABT_EA (1U << 9)
#endif /* __ARM_KVM_ARM_H__ */
@@ -75,7 +75,7 @@ extern char __kvm_hyp_code_end[];
extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
extern void __kvm_flush_vm_context(void);
extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
#endif
......
@@ -22,11 +22,12 @@
#include <linux/kvm_host.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_mmio.h>
#include <asm/kvm_arm.h>
u32 *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num);
u32 *vcpu_spsr(struct kvm_vcpu *vcpu);
unsigned long *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num);
unsigned long *vcpu_spsr(struct kvm_vcpu *vcpu);
int kvm_handle_wfi(struct kvm_vcpu *vcpu, struct kvm_run *run);
bool kvm_condition_valid(struct kvm_vcpu *vcpu);
void kvm_skip_instr(struct kvm_vcpu *vcpu, bool is_wide_instr);
void kvm_inject_undefined(struct kvm_vcpu *vcpu);
void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
@@ -37,14 +38,14 @@ static inline bool vcpu_mode_is_32bit(struct kvm_vcpu *vcpu)
return 1;
}
static inline u32 *vcpu_pc(struct kvm_vcpu *vcpu)
static inline unsigned long *vcpu_pc(struct kvm_vcpu *vcpu)
{
return (u32 *)&vcpu->arch.regs.usr_regs.ARM_pc;
return &vcpu->arch.regs.usr_regs.ARM_pc;
}
static inline u32 *vcpu_cpsr(struct kvm_vcpu *vcpu)
static inline unsigned long *vcpu_cpsr(struct kvm_vcpu *vcpu)
{
return (u32 *)&vcpu->arch.regs.usr_regs.ARM_cpsr;
return &vcpu->arch.regs.usr_regs.ARM_cpsr;
}
static inline void vcpu_set_thumb(struct kvm_vcpu *vcpu)
@@ -69,4 +70,96 @@ static inline bool kvm_vcpu_reg_is_pc(struct kvm_vcpu *vcpu, int reg)
return reg == 15;
}
static inline u32 kvm_vcpu_get_hsr(struct kvm_vcpu *vcpu)
{
return vcpu->arch.fault.hsr;
}
static inline unsigned long kvm_vcpu_get_hfar(struct kvm_vcpu *vcpu)
{
return vcpu->arch.fault.hxfar;
}
static inline phys_addr_t kvm_vcpu_get_fault_ipa(struct kvm_vcpu *vcpu)
{
return ((phys_addr_t)vcpu->arch.fault.hpfar & HPFAR_MASK) << 8;
}
static inline unsigned long kvm_vcpu_get_hyp_pc(struct kvm_vcpu *vcpu)
{
return vcpu->arch.fault.hyp_pc;
}
static inline bool kvm_vcpu_dabt_isvalid(struct kvm_vcpu *vcpu)
{
return kvm_vcpu_get_hsr(vcpu) & HSR_ISV;
}
static inline bool kvm_vcpu_dabt_iswrite(struct kvm_vcpu *vcpu)
{
return kvm_vcpu_get_hsr(vcpu) & HSR_WNR;
}
static inline bool kvm_vcpu_dabt_issext(struct kvm_vcpu *vcpu)
{
return kvm_vcpu_get_hsr(vcpu) & HSR_SSE;
}
static inline int kvm_vcpu_dabt_get_rd(struct kvm_vcpu *vcpu)
{
return (kvm_vcpu_get_hsr(vcpu) & HSR_SRT_MASK) >> HSR_SRT_SHIFT;
}
static inline bool kvm_vcpu_dabt_isextabt(struct kvm_vcpu *vcpu)
{
return kvm_vcpu_get_hsr(vcpu) & HSR_DABT_EA;
}
static inline bool kvm_vcpu_dabt_iss1tw(struct kvm_vcpu *vcpu)
{
return kvm_vcpu_get_hsr(vcpu) & HSR_DABT_S1PTW;
}
/* Get Access Size from a data abort */
static inline int kvm_vcpu_dabt_get_as(struct kvm_vcpu *vcpu)
{
switch ((kvm_vcpu_get_hsr(vcpu) >> 22) & 0x3) {
case 0:
return 1;
case 1:
return 2;
case 2:
return 4;
default:
kvm_err("Hardware is weird: SAS 0b11 is reserved\n");
return -EFAULT;
}
}
/* This one is not specific to Data Abort */
static inline bool kvm_vcpu_trap_il_is32bit(struct kvm_vcpu *vcpu)
{
return kvm_vcpu_get_hsr(vcpu) & HSR_IL;
}
static inline u8 kvm_vcpu_trap_get_class(struct kvm_vcpu *vcpu)
{
return kvm_vcpu_get_hsr(vcpu) >> HSR_EC_SHIFT;
}
static inline bool kvm_vcpu_trap_is_iabt(struct kvm_vcpu *vcpu)
{
return kvm_vcpu_trap_get_class(vcpu) == HSR_EC_IABT;
}
static inline u8 kvm_vcpu_trap_get_fault(struct kvm_vcpu *vcpu)
{
return kvm_vcpu_get_hsr(vcpu) & HSR_FSC_TYPE;
}
static inline u32 kvm_vcpu_hvc_get_imm(struct kvm_vcpu *vcpu)
{
return kvm_vcpu_get_hsr(vcpu) & HSR_HVC_IMM_MASK;
}
#endif /* __ARM_KVM_EMULATE_H__ */
@@ -80,6 +80,15 @@ struct kvm_mmu_memory_cache {
void *objects[KVM_NR_MEM_OBJS];
};
struct kvm_vcpu_fault_info {
u32 hsr; /* Hyp Syndrome Register */
u32 hxfar; /* Hyp Data/Inst. Fault Address Register */
u32 hpfar; /* Hyp IPA Fault Address Register */
u32 hyp_pc; /* PC when exception was taken from Hyp mode */
};
typedef struct vfp_hard_struct kvm_kernel_vfp_t;
struct kvm_vcpu_arch {
struct kvm_regs regs;
@@ -93,13 +102,11 @@ struct kvm_vcpu_arch {
u32 midr;
/* Exception Information */
u32 hsr; /* Hyp Syndrome Register */
u32 hxfar; /* Hyp Data/Inst Fault Address Register */
u32 hpfar; /* Hyp IPA Fault Address Register */
struct kvm_vcpu_fault_info fault;
/* Floating point registers (VFP and Advanced SIMD/NEON) */
struct vfp_hard_struct vfp_guest;
struct vfp_hard_struct *vfp_host;
kvm_kernel_vfp_t vfp_guest;
kvm_kernel_vfp_t *vfp_host;
/* VGIC state */
struct vgic_cpu vgic_cpu;
......@@ -122,9 +129,6 @@ struct kvm_vcpu_arch {
/* Interrupt related fields */
u32 irq_lines; /* IRQ and FIQ levels */
/* Hyp exception information */
u32 hyp_pc; /* PC when exception was taken from Hyp mode */
/* Cache some mmu pages needed inside spinlock regions */
struct kvm_mmu_memory_cache mmu_page_cache;
......@@ -181,4 +185,26 @@ struct kvm_one_reg;
int kvm_arm_coproc_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *);
int kvm_arm_coproc_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *);
int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
int exception_index);
static inline void __cpu_init_hyp_mode(unsigned long long pgd_ptr,
unsigned long hyp_stack_ptr,
unsigned long vector_ptr)
{
unsigned long pgd_low, pgd_high;
pgd_low = (pgd_ptr & ((1ULL << 32) - 1));
pgd_high = (pgd_ptr >> 32ULL);
/*
* Call initialization code, and switch to the full-blown
* HYP code. The init code doesn't need to preserve these registers as
* r1-r3 and r12 are caller-saved according to the AAPCS.
* Note that we slightly misuse the prototype by casting pgd_low to
* a void *.
*/
kvm_call_hyp((void *)pgd_low, pgd_high, hyp_stack_ptr, vector_ptr);
}
#endif /* __ARM_KVM_HOST_H__ */
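A worked illustration of the split above, with an invented LPAE address: a pgd at physical 0x123456000 (just above 4GB) yields pgd_low = 0x23456000 and pgd_high = 0x1. Per the AAPCS these land in r0 and r1 of kvm_call_hyp(), with the stack and vector pointers in r2 and r3, which is what the four-argument call amounts to at the register level.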
......@@ -19,6 +19,18 @@
#ifndef __ARM_KVM_MMU_H__
#define __ARM_KVM_MMU_H__
#include <asm/cacheflush.h>
#include <asm/pgalloc.h>
#include <asm/idmap.h>
/*
* We directly use the kernel VA for the HYP, as we can directly share
* the mapping (HTTBR "covers" TTBR1).
*/
#define HYP_PAGE_OFFSET_MASK (~0UL)
#define HYP_PAGE_OFFSET PAGE_OFFSET
#define KERN_TO_HYP(kva) (kva)
int create_hyp_mappings(void *from, void *to);
int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
void free_hyp_pmds(void);
......@@ -36,6 +48,16 @@ phys_addr_t kvm_mmu_get_httbr(void);
int kvm_mmu_init(void);
void kvm_clear_hyp_idmap(void);
static inline void kvm_set_pte(pte_t *pte, pte_t new_pte)
{
pte_val(*pte) = new_pte;
/*
* flush_pmd_entry just takes a void pointer and cleans the necessary
* cache entries, so we can reuse the function for ptes.
*/
flush_pmd_entry(pte);
}
static inline bool kvm_is_write_fault(unsigned long hsr)
{
unsigned long hsr_ec = hsr >> HSR_EC_SHIFT;
......@@ -47,4 +69,49 @@ static inline bool kvm_is_write_fault(unsigned long hsr)
return true;
}
static inline void kvm_clean_pgd(pgd_t *pgd)
{
clean_dcache_area(pgd, PTRS_PER_S2_PGD * sizeof(pgd_t));
}
static inline void kvm_clean_pmd_entry(pmd_t *pmd)
{
clean_pmd_entry(pmd);
}
static inline void kvm_clean_pte(pte_t *pte)
{
clean_pte_table(pte);
}
static inline void kvm_set_s2pte_writable(pte_t *pte)
{
pte_val(*pte) |= L_PTE_S2_RDWR;
}
struct kvm;
static inline void coherent_icache_guest_page(struct kvm *kvm, gfn_t gfn)
{
/*
* If we are going to insert an instruction page and the icache is
* either VIPT or PIPT, there is a potential problem where the host
* (or another VM) may have used the same page as this guest, and we
* read incorrect data from the icache. If we're using a PIPT cache,
* we can invalidate just that page, but if we are using a VIPT cache
* we need to invalidate the entire icache - damn shame - as written
* in the ARM ARM (DDI 0406C.b - Page B3-1393).
*
* VIVT caches are tagged using both the ASID and the VMID and don't
* need any kind of flushing (DDI 0406C.b - Page B3-1392).
*/
if (icache_is_pipt()) {
unsigned long hva = gfn_to_hva(kvm, gfn);
__cpuc_coherent_user_range(hva, hva + PAGE_SIZE);
} else if (!icache_is_vivt_asid_tagged()) {
/* any kind of VIPT cache */
__flush_icache_all();
}
}
#endif /* __ARM_KVM_MMU_H__ */
......@@ -21,7 +21,6 @@
#include <linux/kernel.h>
#include <linux/kvm.h>
#include <linux/kvm_host.h>
#include <linux/irqreturn.h>
#include <linux/spinlock.h>
#include <linux/types.h>
......
......@@ -30,6 +30,11 @@ struct hw_pci {
void (*postinit)(void);
u8 (*swizzle)(struct pci_dev *dev, u8 *pin);
int (*map_irq)(const struct pci_dev *dev, u8 slot, u8 pin);
resource_size_t (*align_resource)(struct pci_dev *dev,
const struct resource *res,
resource_size_t start,
resource_size_t size,
resource_size_t align);
};
/*
......@@ -51,6 +56,12 @@ struct pci_sys_data {
u8 (*swizzle)(struct pci_dev *, u8 *);
/* IRQ mapping */
int (*map_irq)(const struct pci_dev *, u8, u8);
/* Resource alignment requirements */
resource_size_t (*align_resource)(struct pci_dev *dev,
const struct resource *res,
resource_size_t start,
resource_size_t size,
resource_size_t align);
void *private_data; /* platform controller private data */
};
......
/*
* arch/arm/include/asm/mcpm.h
*
* Created by: Nicolas Pitre, April 2012
* Copyright: (C) 2012-2013 Linaro Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef MCPM_H
#define MCPM_H
/*
* Maximum number of possible clusters / CPUs per cluster.
*
* This should be sufficient for quite a while, while keeping the
* (assembly) code simpler. When these start to grow, we'll have
* to consider dynamic allocation.
*/
#define MAX_CPUS_PER_CLUSTER 4
#define MAX_NR_CLUSTERS 2
#ifndef __ASSEMBLY__
#include <linux/types.h>
#include <asm/cacheflush.h>
/*
* Platform specific code should use this symbol to set up secondary
* entry location for processors to use when released from reset.
*/
extern void mcpm_entry_point(void);
/*
* This is used to indicate where the given CPU from the given cluster
* should branch once it is ready to re-enter the kernel: ptr gives the
* branch target, or NULL if the CPU should be gated. A gated CPU is
* held in a WFE loop until its vector becomes non-NULL.
*/
void mcpm_set_entry_vector(unsigned cpu, unsigned cluster, void *ptr);
/*
* CPU/cluster power operations API for higher subsystems to use.
*/
/**
* mcpm_cpu_power_up - make given CPU in given cluster runnable
*
* @cpu: CPU number within given cluster
* @cluster: cluster number for the CPU
*
* The identified CPU is brought out of reset. If the cluster was powered
* down then it is brought up as well, taking care not to let the other CPUs
* in the cluster run, and ensuring appropriate cluster setup.
*
* Caller must ensure the appropriate entry vector is initialized with
* mcpm_set_entry_vector() prior to calling this.
*
* This must be called in a sleepable context. However, the implementation
* is strongly encouraged to return early and let the operation happen
* asynchronously, especially when significant delays are expected.
*
* If the operation cannot be performed then an error code is returned.
*/
int mcpm_cpu_power_up(unsigned int cpu, unsigned int cluster);
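As a hedged sketch of the call sequence this kernel-doc implies (the wrapper name is invented; secondary_startup is the usual kernel secondary entry point, and the hardware boot address is assumed to have been pointed at mcpm_entry_point beforehand):

static int example_boot_secondary(unsigned int cpu, unsigned int cluster)
{
	/* Give the waking CPU somewhere to go once released from reset... */
	mcpm_set_entry_vector(cpu, cluster, secondary_startup);

	/* ...then ask the platform backend to apply power. */
	return mcpm_cpu_power_up(cpu, cluster);
}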
/**
* mcpm_cpu_power_down - power the calling CPU down
*
* The calling CPU is powered down.
*
* If this CPU is found to be the "last man standing" in the cluster
* then the cluster is prepared for power-down too.
*
* This must be called with interrupts disabled.
*
* This does not return. Re-entry into the kernel is expected via
* mcpm_entry_point.
*/
void mcpm_cpu_power_down(void);
/**
* mcpm_cpu_suspend - bring the calling CPU into a suspended state
*
* @expected_residency: duration in microseconds the CPU is expected
* to remain suspended, or 0 if unknown/infinity.
*
* The calling CPU is suspended. The expected residency argument is used
* as a hint by the platform specific backend to implement the appropriate
* sleep state level based on its knowledge of the wake-up latency
* for the given hardware.
*
* If this CPU is found to be the "last man standing" in the cluster
* then the cluster may be prepared for power-down too, if the expected
* residency makes it worthwhile.
*
* This must be called with interrupts disabled.
*
* This does not return. Re-entry into the kernel is expected via
* mcpm_entry_point.
*/
void mcpm_cpu_suspend(u64 expected_residency);
/**
* mcpm_cpu_powered_up - housekeeping work after a CPU has been powered up
*
* This lets the platform specific backend code perform needed housekeeping
* work. This must be called by the newly activated CPU as soon as it is
* fully operational in kernel space, before it enables interrupts.
*
* If the operation cannot be performed then an error code is returned.
*/
int mcpm_cpu_powered_up(void);
/*
* Platform specific methods used in the implementation of the above API.
*/
struct mcpm_platform_ops {
int (*power_up)(unsigned int cpu, unsigned int cluster);
void (*power_down)(void);
void (*suspend)(u64);
void (*powered_up)(void);
};
/**
* mcpm_platform_register - register platform specific power methods
*
* @ops: mcpm_platform_ops structure to register
*
* An error is returned if the registration has been done previously.
*/
int __init mcpm_platform_register(const struct mcpm_platform_ops *ops);
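A minimal registration sketch, assuming hypothetical SoC backend functions with the signatures from struct mcpm_platform_ops above:

static const struct mcpm_platform_ops my_soc_power_ops = {
	.power_up	= my_soc_cpu_power_up,		/* hypothetical */
	.power_down	= my_soc_cpu_power_down,	/* hypothetical */
	.suspend	= my_soc_cpu_suspend,		/* hypothetical */
	.powered_up	= my_soc_cpu_powered_up,	/* hypothetical */
};

static int __init my_soc_mcpm_init(void)
{
	/* Fails with an error if another backend registered first. */
	return mcpm_platform_register(&my_soc_power_ops);
}
early_initcall(my_soc_mcpm_init);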
/* Synchronisation structures for coordinating safe cluster setup/teardown: */
/*
* When modifying this structure, make sure you update the MCPM_SYNC_ defines
* to match.
*/
struct mcpm_sync_struct {
/* individual CPU states */
struct {
s8 cpu __aligned(__CACHE_WRITEBACK_GRANULE);
} cpus[MAX_CPUS_PER_CLUSTER];
/* cluster state */
s8 cluster __aligned(__CACHE_WRITEBACK_GRANULE);
/* inbound-side state */
s8 inbound __aligned(__CACHE_WRITEBACK_GRANULE);
};
struct sync_struct {
struct mcpm_sync_struct clusters[MAX_NR_CLUSTERS];
};
extern unsigned long sync_phys; /* physical address of *mcpm_sync */
void __mcpm_cpu_going_down(unsigned int cpu, unsigned int cluster);
void __mcpm_cpu_down(unsigned int cpu, unsigned int cluster);
void __mcpm_outbound_leave_critical(unsigned int cluster, int state);
bool __mcpm_outbound_enter_critical(unsigned int this_cpu, unsigned int cluster);
int __mcpm_cluster_state(unsigned int cluster);
int __init mcpm_sync_init(
void (*power_up_setup)(unsigned int affinity_level));
void __init mcpm_smp_set_ops(void);
#else
/*
* asm-offsets.h causes trouble when included in .c files, and cacheflush.h
* cannot be included in asm files. Let's work around the conflict like this.
*/
#include <asm/asm-offsets.h>
#define __CACHE_WRITEBACK_GRANULE CACHE_WRITEBACK_GRANULE
#endif /* ! __ASSEMBLY__ */
/* Definitions for mcpm_sync_struct */
#define CPU_DOWN 0x11
#define CPU_COMING_UP 0x12
#define CPU_UP 0x13
#define CPU_GOING_DOWN 0x14
#define CLUSTER_DOWN 0x21
#define CLUSTER_UP 0x22
#define CLUSTER_GOING_DOWN 0x23
#define INBOUND_NOT_COMING_UP 0x31
#define INBOUND_COMING_UP 0x32
/*
* Offsets for the mcpm_sync_struct members, for use in asm.
* We don't want to make them global to the kernel via asm-offsets.c.
*/
#define MCPM_SYNC_CLUSTER_CPUS 0
#define MCPM_SYNC_CPU_SIZE __CACHE_WRITEBACK_GRANULE
#define MCPM_SYNC_CLUSTER_CLUSTER \
(MCPM_SYNC_CLUSTER_CPUS + MCPM_SYNC_CPU_SIZE * MAX_CPUS_PER_CLUSTER)
#define MCPM_SYNC_CLUSTER_INBOUND \
(MCPM_SYNC_CLUSTER_CLUSTER + __CACHE_WRITEBACK_GRANULE)
#define MCPM_SYNC_CLUSTER_SIZE \
(MCPM_SYNC_CLUSTER_INBOUND + __CACHE_WRITEBACK_GRANULE)
#endif
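To make the layout concrete: assuming an illustrative __CACHE_WRITEBACK_GRANULE of 64 bytes (the real value comes from the cache configuration via asm-offsets), the offsets evaluate to:

	MCPM_SYNC_CLUSTER_CPUS    = 0
	MCPM_SYNC_CPU_SIZE        = 64
	MCPM_SYNC_CLUSTER_CLUSTER = 0 + 4 * 64 = 256
	MCPM_SYNC_CLUSTER_INBOUND = 256 + 64   = 320
	MCPM_SYNC_CLUSTER_SIZE    = 320 + 64   = 384

so each per-CPU state byte, the cluster state and the inbound state sit in distinct writeback granules and can be cleaned or invalidated independently, as the coordination algorithm requires.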
......@@ -152,6 +152,7 @@ extern int vfp_restore_user_hwstate(struct user_vfp __user *,
#define TIF_SYSCALL_AUDIT 9
#define TIF_SYSCALL_TRACEPOINT 10
#define TIF_SECCOMP 11 /* seccomp syscall filtering active */
#define TIF_NOHZ 12 /* in adaptive nohz mode */
#define TIF_USING_IWMMXT 17
#define TIF_MEMDIE 18 /* is terminating due to OOM killer */
#define TIF_RESTORE_SIGMASK 20
......
......@@ -166,7 +166,7 @@
# define v6wbi_always_flags (-1UL)
#endif
#define v7wbi_tlb_flags_smp (TLB_WB | TLB_DCLEAN | TLB_BARRIER | \
#define v7wbi_tlb_flags_smp (TLB_WB | TLB_BARRIER | \
TLB_V7_UIS_FULL | TLB_V7_UIS_PAGE | \
TLB_V7_UIS_ASID | TLB_V7_UIS_BP)
#define v7wbi_tlb_flags_up (TLB_WB | TLB_DCLEAN | TLB_BARRIER | \
......
#ifdef CONFIG_DEBUG_UNCOMPRESS
extern void putc(int c);
#else
static inline void putc(int c) {}
#endif
static inline void flush(void) {}
static inline void arch_decomp_setup(void) {}
......@@ -53,12 +53,12 @@
#define KVM_ARM_FIQ_spsr fiq_regs[7]
struct kvm_regs {
struct pt_regs usr_regs;/* R0_usr - R14_usr, PC, CPSR */
__u32 svc_regs[3]; /* SP_svc, LR_svc, SPSR_svc */
__u32 abt_regs[3]; /* SP_abt, LR_abt, SPSR_abt */
__u32 und_regs[3]; /* SP_und, LR_und, SPSR_und */
__u32 irq_regs[3]; /* SP_irq, LR_irq, SPSR_irq */
__u32 fiq_regs[8]; /* R8_fiq - R14_fiq, SPSR_fiq */
struct pt_regs usr_regs; /* R0_usr - R14_usr, PC, CPSR */
unsigned long svc_regs[3]; /* SP_svc, LR_svc, SPSR_svc */
unsigned long abt_regs[3]; /* SP_abt, LR_abt, SPSR_abt */
unsigned long und_regs[3]; /* SP_und, LR_und, SPSR_und */
unsigned long irq_regs[3]; /* SP_irq, LR_irq, SPSR_irq */
unsigned long fiq_regs[8]; /* R8_fiq - R14_fiq, SPSR_fiq */
};
/* Supported Processor Types */
......
......@@ -149,6 +149,10 @@ int main(void)
DEFINE(DMA_BIDIRECTIONAL, DMA_BIDIRECTIONAL);
DEFINE(DMA_TO_DEVICE, DMA_TO_DEVICE);
DEFINE(DMA_FROM_DEVICE, DMA_FROM_DEVICE);
BLANK();
DEFINE(CACHE_WRITEBACK_ORDER, __CACHE_WRITEBACK_ORDER);
DEFINE(CACHE_WRITEBACK_GRANULE, __CACHE_WRITEBACK_GRANULE);
BLANK();
#ifdef CONFIG_KVM_ARM_HOST
DEFINE(VCPU_KVM, offsetof(struct kvm_vcpu, kvm));
DEFINE(VCPU_MIDR, offsetof(struct kvm_vcpu, arch.midr));
......@@ -165,10 +169,10 @@ int main(void)
DEFINE(VCPU_PC, offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_pc));
DEFINE(VCPU_CPSR, offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_cpsr));
DEFINE(VCPU_IRQ_LINES, offsetof(struct kvm_vcpu, arch.irq_lines));
DEFINE(VCPU_HSR, offsetof(struct kvm_vcpu, arch.hsr));
DEFINE(VCPU_HxFAR, offsetof(struct kvm_vcpu, arch.hxfar));
DEFINE(VCPU_HPFAR, offsetof(struct kvm_vcpu, arch.hpfar));
DEFINE(VCPU_HYP_PC, offsetof(struct kvm_vcpu, arch.hyp_pc));
DEFINE(VCPU_HSR, offsetof(struct kvm_vcpu, arch.fault.hsr));
DEFINE(VCPU_HxFAR, offsetof(struct kvm_vcpu, arch.fault.hxfar));
DEFINE(VCPU_HPFAR, offsetof(struct kvm_vcpu, arch.fault.hpfar));
DEFINE(VCPU_HYP_PC, offsetof(struct kvm_vcpu, arch.fault.hyp_pc));
#ifdef CONFIG_KVM_ARM_VGIC
DEFINE(VCPU_VGIC_CPU, offsetof(struct kvm_vcpu, arch.vgic_cpu));
DEFINE(VGIC_CPU_HCR, offsetof(struct vgic_cpu, vgic_hcr));
......
......@@ -462,6 +462,7 @@ static void pcibios_init_hw(struct hw_pci *hw, struct list_head *head)
sys->busnr = busnr;
sys->swizzle = hw->swizzle;
sys->map_irq = hw->map_irq;
sys->align_resource = hw->align_resource;
INIT_LIST_HEAD(&sys->resources);
if (hw->private_data)
......@@ -574,6 +575,8 @@ char * __init pcibios_setup(char *str)
resource_size_t pcibios_align_resource(void *data, const struct resource *res,
resource_size_t size, resource_size_t align)
{
struct pci_dev *dev = data;
struct pci_sys_data *sys = dev->sysdata;
resource_size_t start = res->start;
if (res->flags & IORESOURCE_IO && start & 0x300)
......@@ -581,6 +584,9 @@ resource_size_t pcibios_align_resource(void *data, const struct resource *res,
start = (start + align - 1) & ~(align - 1);
if (sys->align_resource)
return sys->align_resource(dev, res, start, size, align);
return start;
}
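For illustration, a sketch (all names hypothetical) of a host controller using the new hook to force 1KB alignment on I/O BARs:

static resource_size_t my_align_resource(struct pci_dev *dev,
					 const struct resource *res,
					 resource_size_t start,
					 resource_size_t size,
					 resource_size_t align)
{
	/* Round I/O windows up to the next 1KB boundary. */
	if (res->flags & IORESOURCE_IO)
		return (start + 0x3ff) & ~(resource_size_t)0x3ff;
	return start;
}

static struct hw_pci my_pci __initdata = {
	.map_irq	= my_map_irq,		/* hypothetical */
	.align_resource	= my_align_resource,
};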
......
......@@ -192,18 +192,6 @@ __dabt_svc:
svc_entry
mov r2, sp
dabt_helper
@
@ IRQs off again before pulling preserved data off the stack
@
disable_irq_notrace
#ifdef CONFIG_TRACE_IRQFLAGS
tst r5, #PSR_I_BIT
bleq trace_hardirqs_on
tst r5, #PSR_I_BIT
blne trace_hardirqs_off
#endif
svc_exit r5 @ return from exception
UNWIND(.fnend )
ENDPROC(__dabt_svc)
......@@ -223,12 +211,7 @@ __irq_svc:
blne svc_preempt
#endif
#ifdef CONFIG_TRACE_IRQFLAGS
@ The parent context IRQs must have been enabled to get here in
@ the first place, so there's no point checking the PSR I bit.
bl trace_hardirqs_on
#endif
svc_exit r5 @ return from exception
svc_exit r5, irq = 1 @ return from exception
UNWIND(.fnend )
ENDPROC(__irq_svc)
......@@ -295,22 +278,8 @@ __und_svc_fault:
mov r0, sp @ struct pt_regs *regs
bl __und_fault
@
@ IRQs off again before pulling preserved data off the stack
@
__und_svc_finish:
disable_irq_notrace
@
@ restore SPSR and restart the instruction
@
ldr r5, [sp, #S_PSR] @ Get SVC cpsr
#ifdef CONFIG_TRACE_IRQFLAGS
tst r5, #PSR_I_BIT
bleq trace_hardirqs_on
tst r5, #PSR_I_BIT
blne trace_hardirqs_off
#endif
svc_exit r5 @ return from exception
UNWIND(.fnend )
ENDPROC(__und_svc)
......@@ -320,18 +289,6 @@ __pabt_svc:
svc_entry
mov r2, sp @ regs
pabt_helper
@
@ IRQs off again before pulling preserved data off the stack
@
disable_irq_notrace
#ifdef CONFIG_TRACE_IRQFLAGS
tst r5, #PSR_I_BIT
bleq trace_hardirqs_on
tst r5, #PSR_I_BIT
blne trace_hardirqs_off
#endif
svc_exit r5 @ return from exception
UNWIND(.fnend )
ENDPROC(__pabt_svc)
......@@ -396,6 +353,7 @@ ENDPROC(__pabt_svc)
#ifdef CONFIG_IRQSOFF_TRACER
bl trace_hardirqs_off
#endif
ct_user_exit save = 0
.endm
.macro kuser_cmpxchg_check
......@@ -562,21 +520,21 @@ ENDPROC(__und_usr)
@ Fall-through from Thumb-2 __und_usr
@
#ifdef CONFIG_NEON
get_thread_info r10 @ get current thread
adr r6, .LCneon_thumb_opcodes
b 2f
#endif
call_fpe:
get_thread_info r10 @ get current thread
#ifdef CONFIG_NEON
adr r6, .LCneon_arm_opcodes
2:
ldr r7, [r6], #4 @ mask value
cmp r7, #0 @ end mask?
beq 1f
and r8, r0, r7
2: ldr r5, [r6], #4 @ mask value
ldr r7, [r6], #4 @ opcode bits matching in mask
cmp r5, #0 @ end mask?
beq 1f
and r8, r0, r5
cmp r8, r7 @ NEON instruction?
bne 2b
get_thread_info r10
mov r7, #1
strb r7, [r10, #TI_USED_CP + 10] @ mark CP#10 as used
strb r7, [r10, #TI_USED_CP + 11] @ mark CP#11 as used
......@@ -586,7 +544,6 @@ call_fpe:
tst r0, #0x08000000 @ only CDP/CPRT/LDC/STC have bit 27
tstne r0, #0x04000000 @ bit 26 set on both ARM and Thumb-2
moveq pc, lr
get_thread_info r10 @ get current thread
and r8, r0, #0x00000f00 @ mask out CP number
THUMB( lsr r8, r8, #8 )
mov r7, #1
......
......@@ -35,12 +35,11 @@ ret_fast_syscall:
ldr r1, [tsk, #TI_FLAGS]
tst r1, #_TIF_WORK_MASK
bne fast_work_pending
#if defined(CONFIG_IRQSOFF_TRACER)
asm_trace_hardirqs_on
#endif
/* perform architecture specific actions before user return */
arch_ret_to_user r1, lr
ct_user_enter
restore_user_regs fast = 1, offset = S_OFF
UNWIND(.fnend )
......@@ -71,11 +70,11 @@ ENTRY(ret_to_user_from_irq)
tst r1, #_TIF_WORK_MASK
bne work_pending
no_work_pending:
#if defined(CONFIG_IRQSOFF_TRACER)
asm_trace_hardirqs_on
#endif
/* perform architecture specific actions before user return */
arch_ret_to_user r1, lr
ct_user_enter save = 0
restore_user_regs fast = 0, offset = 0
ENDPROC(ret_to_user_from_irq)
......@@ -406,6 +405,7 @@ ENTRY(vector_swi)
mcr p15, 0, ip, c1, c0 @ update control register
#endif
enable_irq
ct_user_exit
get_thread_info tsk
adr tbl, sys_call_table @ load syscall table pointer
......
......@@ -74,7 +74,24 @@
.endm
#ifndef CONFIG_THUMB2_KERNEL
.macro svc_exit, rpsr
.macro svc_exit, rpsr, irq = 0
.if \irq != 0
@ IRQs already off
#ifdef CONFIG_TRACE_IRQFLAGS
@ The parent context IRQs must have been enabled to get here in
@ the first place, so there's no point checking the PSR I bit.
bl trace_hardirqs_on
#endif
.else
@ IRQs off again before pulling preserved data off the stack
disable_irq_notrace
#ifdef CONFIG_TRACE_IRQFLAGS
tst \rpsr, #PSR_I_BIT
bleq trace_hardirqs_on
tst \rpsr, #PSR_I_BIT
blne trace_hardirqs_off
#endif
.endif
msr spsr_cxsf, \rpsr
#if defined(CONFIG_CPU_V6)
ldr r0, [sp]
......@@ -120,7 +137,24 @@
mov pc, \reg
.endm
#else /* CONFIG_THUMB2_KERNEL */
.macro svc_exit, rpsr
.macro svc_exit, rpsr, irq = 0
.if \irq != 0
@ IRQs already off
#ifdef CONFIG_TRACE_IRQFLAGS
@ The parent context IRQs must have been enabled to get here in
@ the first place, so there's no point checking the PSR I bit.
bl trace_hardirqs_on
#endif
.else
@ IRQs off again before pulling preserved data off the stack
disable_irq_notrace
#ifdef CONFIG_TRACE_IRQFLAGS
tst \rpsr, #PSR_I_BIT
bleq trace_hardirqs_on
tst \rpsr, #PSR_I_BIT
blne trace_hardirqs_off
#endif
.endif
ldr lr, [sp, #S_SP] @ top of the stack
ldrd r0, r1, [sp, #S_LR] @ calling lr and pc
clrex @ clear the exclusive monitor
......@@ -163,6 +197,34 @@
.endm
#endif /* !CONFIG_THUMB2_KERNEL */
/*
* Context tracking subsystem. Used to instrument transitions
* between user and kernel mode.
*/
.macro ct_user_exit, save = 1
#ifdef CONFIG_CONTEXT_TRACKING
.if \save
stmdb sp!, {r0-r3, ip, lr}
bl user_exit
ldmia sp!, {r0-r3, ip, lr}
.else
bl user_exit
.endif
#endif
.endm
.macro ct_user_enter, save = 1
#ifdef CONFIG_CONTEXT_TRACKING
.if \save
stmdb sp!, {r0-r3, ip, lr}
bl user_enter
ldmia sp!, {r0-r3, ip, lr}
.else
bl user_enter
.endif
#endif
.endm
/*
* These are the registers used in the syscall handler, and allow us to
* have in theory up to 7 arguments to a function - r0 to r6.
......
......@@ -98,8 +98,9 @@ __mmap_switched:
str r9, [r4] @ Save processor ID
str r1, [r5] @ Save machine type
str r2, [r6] @ Save atags pointer
bic r4, r0, #CR_A @ Clear 'A' bit
stmia r7, {r0, r4} @ Save control register values
cmp r7, #0
bicne r4, r0, #CR_A @ Clear 'A' bit
stmneia r7, {r0, r4} @ Save control register values
b start_kernel
ENDPROC(__mmap_switched)
......@@ -113,7 +114,11 @@ __mmap_switched_data:
.long processor_id @ r4
.long __machine_arch_type @ r5
.long __atags_pointer @ r6
#ifdef CONFIG_CPU_CP15
.long cr_alignment @ r7
#else
.long 0 @ r7
#endif
.long init_thread_union + THREAD_START_SP @ sp
.size __mmap_switched_data, . - __mmap_switched_data
......
......@@ -32,15 +32,21 @@
* numbers for r1.
*
*/
.arm
__HEAD
#ifdef CONFIG_CPU_THUMBONLY
.thumb
ENTRY(stext)
#else
.arm
ENTRY(stext)
THUMB( adr r9, BSYM(1f) ) @ Kernel is always entered in ARM.
THUMB( bx r9 ) @ If this is a Thumb-2 kernel,
THUMB( .thumb ) @ switch to Thumb now.
THUMB(1: )
#endif
setmode PSR_F_BIT | PSR_I_BIT | SVC_MODE, r9 @ ensure svc mode
@ and irqs disabled
......
......@@ -407,15 +407,16 @@ unsigned long arch_randomize_brk(struct mm_struct *mm)
* atomic helpers and the signal restart code. Insert it into the
* gate_vma so that it is visible through ptrace and /proc/<pid>/mem.
*/
static struct vm_area_struct gate_vma;
static struct vm_area_struct gate_vma = {
.vm_start = 0xffff0000,
.vm_end = 0xffff0000 + PAGE_SIZE,
.vm_flags = VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYEXEC,
.vm_mm = &init_mm,
};
static int __init gate_vma_init(void)
{
gate_vma.vm_start = 0xffff0000;
gate_vma.vm_end = 0xffff0000 + PAGE_SIZE;
gate_vma.vm_page_prot = PAGE_READONLY_EXEC;
gate_vma.vm_flags = VM_READ | VM_EXEC |
VM_MAYREAD | VM_MAYEXEC;
gate_vma.vm_page_prot = PAGE_READONLY_EXEC;
return 0;
}
arch_initcall(gate_vma_init);
......
......@@ -26,7 +26,7 @@ static int save_return_addr(struct stackframe *frame, void *d)
struct return_address_data *data = d;
if (!data->level) {
data->addr = (void *)frame->lr;
data->addr = (void *)frame->pc;
return 1;
} else {
......@@ -41,7 +41,8 @@ void *return_address(unsigned int level)
struct stackframe frame;
register unsigned long current_sp asm ("sp");
data.level = level + 1;
data.level = level + 2;
data.addr = NULL;
frame.fp = (unsigned long)__builtin_frame_address(0);
frame.sp = current_sp;
......
......@@ -290,10 +290,10 @@ static int cpu_has_aliasing_icache(unsigned int arch)
static void __init cacheid_init(void)
{
unsigned int cachetype = read_cpuid_cachetype();
unsigned int arch = cpu_architecture();
if (arch >= CPU_ARCH_ARMv6) {
unsigned int cachetype = read_cpuid_cachetype();
if ((cachetype & (7 << 29)) == 4 << 29) {
/* ARMv7 register format */
arch = CPU_ARCH_ARMv7;
......@@ -389,7 +389,7 @@ static void __init feat_v6_fixup(void)
*
* cpu_init sets up the per-CPU stacks.
*/
void cpu_init(void)
void notrace cpu_init(void)
{
unsigned int cpu = smp_processor_id();
struct stack *stk = &stacks[cpu];
......
......@@ -211,6 +211,13 @@ void __cpuinit __cpu_die(unsigned int cpu)
}
printk(KERN_NOTICE "CPU%u: shutdown\n", cpu);
/*
* platform_cpu_kill() is generally expected to do the powering off
* and/or cutting of clocks to the dying CPU. Optionally, this may
* be done by the CPU which is dying in preference to supporting
* this call, but that means there is _no_ synchronisation between
* the requesting CPU and the dying CPU actually losing power.
*/
if (!platform_cpu_kill(cpu))
printk("CPU%u: unable to kill\n", cpu);
}
......@@ -230,14 +237,41 @@ void __ref cpu_die(void)
idle_task_exit();
local_irq_disable();
mb();
/* Tell __cpu_die() that this CPU is now safe to dispose of */
/*
* Flush the data out of the L1 cache for this CPU. This must be
* before the completion to ensure that data is safely written out
* before platform_cpu_kill() gets called - which may disable
* *this* CPU and power down its cache.
*/
flush_cache_louis();
/*
* Tell __cpu_die() that this CPU is now safe to dispose of. Once
* this returns, power and/or clocks can be removed at any point
* from this CPU and its cache by platform_cpu_kill().
*/
RCU_NONIDLE(complete(&cpu_died));
/*
* actual CPU shutdown procedure is at least platform (if not
* CPU) specific.
* Ensure that the cache lines associated with that completion are
* written out. This covers the case where _this_ CPU is doing the
* powering down, to ensure that the completion is visible to the
* CPU waiting for this one.
*/
flush_cache_louis();
/*
* The actual CPU shutdown procedure is at least platform (if not
* CPU) specific. This may remove power, or it may simply spin.
*
* Platforms are generally expected *NOT* to return from this call,
* although there are some which do because they have no way to
* power down the CPU. These platforms are the _only_ reason we
* have a return path which uses the fragment of assembly below.
*
* The return path should not be used for platforms which can
* power off the CPU.
*/
if (smp_ops.cpu_die)
smp_ops.cpu_die(cpu);
......
......@@ -41,7 +41,7 @@ void scu_enable(void __iomem *scu_base)
#ifdef CONFIG_ARM_ERRATA_764369
/* Cortex-A9 only */
if ((read_cpuid(CPUID_ID) & 0xff0ffff0) == 0x410fc090) {
if ((read_cpuid_id() & 0xff0ffff0) == 0x410fc090) {
scu_ctrl = __raw_readl(scu_base + 0x30);
if (!(scu_ctrl & 1))
__raw_writel(scu_ctrl | 0x1, scu_base + 0x30);
......
......@@ -98,21 +98,21 @@ static void broadcast_tlb_a15_erratum(void)
return;
dummy_flush_tlb_a15_erratum();
smp_call_function_many(cpu_online_mask, ipi_flush_tlb_a15_erratum,
NULL, 1);
smp_call_function(ipi_flush_tlb_a15_erratum, NULL, 1);
}
static void broadcast_tlb_mm_a15_erratum(struct mm_struct *mm)
{
int cpu;
int cpu, this_cpu;
cpumask_t mask = { CPU_BITS_NONE };
if (!erratum_a15_798181())
return;
dummy_flush_tlb_a15_erratum();
this_cpu = get_cpu();
for_each_online_cpu(cpu) {
if (cpu == smp_processor_id())
if (cpu == this_cpu)
continue;
/*
* We only need to send an IPI if the other CPUs are running
......@@ -127,6 +127,7 @@ static void broadcast_tlb_mm_a15_erratum(struct mm_struct *mm)
cpumask_set_cpu(cpu, &mask);
}
smp_call_function_many(&mask, ipi_flush_tlb_a15_erratum, NULL, 1);
put_cpu();
}
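The fix above is the standard pattern for reading the current CPU id in a preemptible context; in isolation it looks like this (sketch only):

	int cpu, this_cpu = get_cpu();	/* disables preemption, returns CPU id */

	for_each_online_cpu(cpu) {
		if (cpu == this_cpu)	/* stable: we cannot migrate here */
			continue;
		/* ... decide whether this CPU needs the IPI ... */
	}
	put_cpu();			/* re-enables preemption */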
void flush_tlb_all(void)
......
......@@ -17,7 +17,7 @@ AFLAGS_interrupts.o := -Wa,-march=armv7-a$(plus_virt)
kvm-arm-y = $(addprefix ../../../virt/kvm/, kvm_main.o coalesced_mmio.o)
obj-y += kvm-arm.o init.o interrupts.o
obj-y += arm.o guest.o mmu.o emulate.o reset.o
obj-y += arm.o handle_exit.o guest.o mmu.o emulate.o reset.o
obj-y += coproc.o coproc_a15.o mmio.o psci.o
obj-$(CONFIG_KVM_ARM_VGIC) += vgic.o
obj-$(CONFIG_KVM_ARM_TIMER) += arch_timer.o
......@@ -30,11 +30,9 @@
#define CREATE_TRACE_POINTS
#include "trace.h"
#include <asm/unified.h>
#include <asm/uaccess.h>
#include <asm/ptrace.h>
#include <asm/mman.h>
#include <asm/cputype.h>
#include <asm/tlbflush.h>
#include <asm/cacheflush.h>
#include <asm/virt.h>
......@@ -44,14 +42,13 @@
#include <asm/kvm_emulate.h>
#include <asm/kvm_coproc.h>
#include <asm/kvm_psci.h>
#include <asm/opcodes.h>
#ifdef REQUIRES_VIRT
__asm__(".arch_extension virt");
#endif
static DEFINE_PER_CPU(unsigned long, kvm_arm_hyp_stack_page);
static struct vfp_hard_struct __percpu *kvm_host_vfp_state;
static kvm_kernel_vfp_t __percpu *kvm_host_vfp_state;
static unsigned long hyp_default_vectors;
/* Per-CPU variable containing the currently running vcpu. */
......@@ -304,22 +301,6 @@ int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
return 0;
}
int __attribute_const__ kvm_target_cpu(void)
{
unsigned long implementor = read_cpuid_implementor();
unsigned long part_number = read_cpuid_part_number();
if (implementor != ARM_CPU_IMP_ARM)
return -EINVAL;
switch (part_number) {
case ARM_CPU_PART_CORTEX_A15:
return KVM_ARM_TARGET_CORTEX_A15;
default:
return -EINVAL;
}
}
int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
{
int ret;
......@@ -482,163 +463,6 @@ static void update_vttbr(struct kvm *kvm)
spin_unlock(&kvm_vmid_lock);
}
static int handle_svc_hyp(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
/* SVC called from Hyp mode should never get here */
kvm_debug("SVC called from Hyp mode shouldn't go here\n");
BUG();
return -EINVAL; /* Squash warning */
}
static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
trace_kvm_hvc(*vcpu_pc(vcpu), *vcpu_reg(vcpu, 0),
vcpu->arch.hsr & HSR_HVC_IMM_MASK);
if (kvm_psci_call(vcpu))
return 1;
kvm_inject_undefined(vcpu);
return 1;
}
static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
if (kvm_psci_call(vcpu))
return 1;
kvm_inject_undefined(vcpu);
return 1;
}
static int handle_pabt_hyp(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
/* The hypervisor should never cause aborts */
kvm_err("Prefetch Abort taken from Hyp mode at %#08x (HSR: %#08x)\n",
vcpu->arch.hxfar, vcpu->arch.hsr);
return -EFAULT;
}
static int handle_dabt_hyp(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
/* This is either an error in the ws. code or an external abort */
kvm_err("Data Abort taken from Hyp mode at %#08x (HSR: %#08x)\n",
vcpu->arch.hxfar, vcpu->arch.hsr);
return -EFAULT;
}
typedef int (*exit_handle_fn)(struct kvm_vcpu *, struct kvm_run *);
static exit_handle_fn arm_exit_handlers[] = {
[HSR_EC_WFI] = kvm_handle_wfi,
[HSR_EC_CP15_32] = kvm_handle_cp15_32,
[HSR_EC_CP15_64] = kvm_handle_cp15_64,
[HSR_EC_CP14_MR] = kvm_handle_cp14_access,
[HSR_EC_CP14_LS] = kvm_handle_cp14_load_store,
[HSR_EC_CP14_64] = kvm_handle_cp14_access,
[HSR_EC_CP_0_13] = kvm_handle_cp_0_13_access,
[HSR_EC_CP10_ID] = kvm_handle_cp10_id,
[HSR_EC_SVC_HYP] = handle_svc_hyp,
[HSR_EC_HVC] = handle_hvc,
[HSR_EC_SMC] = handle_smc,
[HSR_EC_IABT] = kvm_handle_guest_abort,
[HSR_EC_IABT_HYP] = handle_pabt_hyp,
[HSR_EC_DABT] = kvm_handle_guest_abort,
[HSR_EC_DABT_HYP] = handle_dabt_hyp,
};
/*
* A conditional instruction is allowed to trap, even though it
* wouldn't be executed. So let's re-implement the hardware, in
* software!
*/
static bool kvm_condition_valid(struct kvm_vcpu *vcpu)
{
unsigned long cpsr, cond, insn;
/*
* Exception Code 0 can only happen if we set HCR.TGE to 1, to
* catch undefined instructions, and then we won't get past
* the arm_exit_handlers test anyway.
*/
BUG_ON(((vcpu->arch.hsr & HSR_EC) >> HSR_EC_SHIFT) == 0);
/* Top two bits non-zero? Unconditional. */
if (vcpu->arch.hsr >> 30)
return true;
cpsr = *vcpu_cpsr(vcpu);
/* Is condition field valid? */
if ((vcpu->arch.hsr & HSR_CV) >> HSR_CV_SHIFT)
cond = (vcpu->arch.hsr & HSR_COND) >> HSR_COND_SHIFT;
else {
/* This can happen in Thumb mode: examine IT state. */
unsigned long it;
it = ((cpsr >> 8) & 0xFC) | ((cpsr >> 25) & 0x3);
/* it == 0 => unconditional. */
if (it == 0)
return true;
/* The cond for this insn works out as the top 4 bits. */
cond = (it >> 4);
}
/* Shift makes it look like an ARM-mode instruction */
insn = cond << 28;
return arm_check_condition(insn, cpsr) != ARM_OPCODE_CONDTEST_FAIL;
}
/*
* Return > 0 to return to guest, < 0 on error, 0 (and set exit_reason) on
* proper exit to QEMU.
*/
static int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
int exception_index)
{
unsigned long hsr_ec;
switch (exception_index) {
case ARM_EXCEPTION_IRQ:
return 1;
case ARM_EXCEPTION_UNDEFINED:
kvm_err("Undefined exception in Hyp mode at: %#08x\n",
vcpu->arch.hyp_pc);
BUG();
panic("KVM: Hypervisor undefined exception!\n");
case ARM_EXCEPTION_DATA_ABORT:
case ARM_EXCEPTION_PREF_ABORT:
case ARM_EXCEPTION_HVC:
hsr_ec = (vcpu->arch.hsr & HSR_EC) >> HSR_EC_SHIFT;
if (hsr_ec >= ARRAY_SIZE(arm_exit_handlers)
|| !arm_exit_handlers[hsr_ec]) {
kvm_err("Unknown exception class: %#08lx, "
"hsr: %#08x\n", hsr_ec,
(unsigned int)vcpu->arch.hsr);
BUG();
}
/*
* See ARM ARM B1.14.1: "Hyp traps on instructions
* that fail their condition code check"
*/
if (!kvm_condition_valid(vcpu)) {
bool is_wide = vcpu->arch.hsr & HSR_IL;
kvm_skip_instr(vcpu, is_wide);
return 1;
}
return arm_exit_handlers[hsr_ec](vcpu, run);
default:
kvm_pr_unimpl("Unsupported exception type: %d",
exception_index);
run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
return 0;
}
}
static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
{
if (likely(vcpu->arch.has_run_once))
......@@ -973,7 +797,6 @@ long kvm_arch_vm_ioctl(struct file *filp,
static void cpu_init_hyp_mode(void *vector)
{
unsigned long long pgd_ptr;
unsigned long pgd_low, pgd_high;
unsigned long hyp_stack_ptr;
unsigned long stack_page;
unsigned long vector_ptr;
......@@ -982,20 +805,11 @@ static void cpu_init_hyp_mode(void *vector)
__hyp_set_vectors((unsigned long)vector);
pgd_ptr = (unsigned long long)kvm_mmu_get_httbr();
pgd_low = (pgd_ptr & ((1ULL << 32) - 1));
pgd_high = (pgd_ptr >> 32ULL);
stack_page = __get_cpu_var(kvm_arm_hyp_stack_page);
hyp_stack_ptr = stack_page + PAGE_SIZE;
vector_ptr = (unsigned long)__kvm_hyp_vector;
/*
* Call initialization code, and switch to the full blown
* HYP code. The init code doesn't need to preserve these registers as
* r1-r3 and r12 are already callee save according to the AAPCS.
* Note that we slightly misuse the prototype by casing the pgd_low to
* a void *.
*/
kvm_call_hyp((void *)pgd_low, pgd_high, hyp_stack_ptr, vector_ptr);
__cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
}
/**
......@@ -1078,7 +892,7 @@ static int init_hyp_mode(void)
/*
* Map the host VFP structures
*/
kvm_host_vfp_state = alloc_percpu(struct vfp_hard_struct);
kvm_host_vfp_state = alloc_percpu(kvm_kernel_vfp_t);
if (!kvm_host_vfp_state) {
err = -ENOMEM;
kvm_err("Cannot allocate host VFP state\n");
......@@ -1086,7 +900,7 @@ static int init_hyp_mode(void)
}
for_each_possible_cpu(cpu) {
struct vfp_hard_struct *vfp;
kvm_kernel_vfp_t *vfp;
vfp = per_cpu_ptr(kvm_host_vfp_state, cpu);
err = create_hyp_mappings(vfp, vfp + 1);
......
......@@ -76,7 +76,7 @@ static bool access_dcsw(struct kvm_vcpu *vcpu,
const struct coproc_params *p,
const struct coproc_reg *r)
{
u32 val;
unsigned long val;
int cpu;
if (!p->is_write)
......@@ -293,12 +293,12 @@ static int emulate_cp15(struct kvm_vcpu *vcpu,
if (likely(r->access(vcpu, params, r))) {
/* Skip instruction, since it was emulated */
kvm_skip_instr(vcpu, (vcpu->arch.hsr >> 25) & 1);
kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
return 1;
}
/* If access function fails, it should complain. */
} else {
kvm_err("Unsupported guest CP15 access at: %08x\n",
kvm_err("Unsupported guest CP15 access at: %08lx\n",
*vcpu_pc(vcpu));
print_cp_instr(params);
}
......@@ -315,14 +315,14 @@ int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
struct coproc_params params;
params.CRm = (vcpu->arch.hsr >> 1) & 0xf;
params.Rt1 = (vcpu->arch.hsr >> 5) & 0xf;
params.is_write = ((vcpu->arch.hsr & 1) == 0);
params.CRm = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
params.Rt1 = (kvm_vcpu_get_hsr(vcpu) >> 5) & 0xf;
params.is_write = ((kvm_vcpu_get_hsr(vcpu) & 1) == 0);
params.is_64bit = true;
params.Op1 = (vcpu->arch.hsr >> 16) & 0xf;
params.Op1 = (kvm_vcpu_get_hsr(vcpu) >> 16) & 0xf;
params.Op2 = 0;
params.Rt2 = (vcpu->arch.hsr >> 10) & 0xf;
params.Rt2 = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
params.CRn = 0;
return emulate_cp15(vcpu, &params);
......@@ -347,14 +347,14 @@ int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
struct coproc_params params;
params.CRm = (vcpu->arch.hsr >> 1) & 0xf;
params.Rt1 = (vcpu->arch.hsr >> 5) & 0xf;
params.is_write = ((vcpu->arch.hsr & 1) == 0);
params.CRm = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
params.Rt1 = (kvm_vcpu_get_hsr(vcpu) >> 5) & 0xf;
params.is_write = ((kvm_vcpu_get_hsr(vcpu) & 1) == 0);
params.is_64bit = false;
params.CRn = (vcpu->arch.hsr >> 10) & 0xf;
params.Op1 = (vcpu->arch.hsr >> 14) & 0x7;
params.Op2 = (vcpu->arch.hsr >> 17) & 0x7;
params.CRn = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
params.Op1 = (kvm_vcpu_get_hsr(vcpu) >> 14) & 0x7;
params.Op2 = (kvm_vcpu_get_hsr(vcpu) >> 17) & 0x7;
params.Rt2 = 0;
return emulate_cp15(vcpu, &params);
......
......@@ -84,7 +84,7 @@ static inline bool read_zero(struct kvm_vcpu *vcpu,
static inline bool write_to_read_only(struct kvm_vcpu *vcpu,
const struct coproc_params *params)
{
kvm_debug("CP15 write to read-only register at: %08x\n",
kvm_debug("CP15 write to read-only register at: %08lx\n",
*vcpu_pc(vcpu));
print_cp_instr(params);
return false;
......@@ -93,7 +93,7 @@ static inline bool write_to_read_only(struct kvm_vcpu *vcpu,
static inline bool read_from_write_only(struct kvm_vcpu *vcpu,
const struct coproc_params *params)
{
kvm_debug("CP15 read to write-only register at: %08x\n",
kvm_debug("CP15 read to write-only register at: %08lx\n",
*vcpu_pc(vcpu));
print_cp_instr(params);
return false;
......
......@@ -20,6 +20,7 @@
#include <linux/kvm_host.h>
#include <asm/kvm_arm.h>
#include <asm/kvm_emulate.h>
#include <asm/opcodes.h>
#include <trace/events/kvm.h>
#include "trace.h"
......@@ -109,10 +110,10 @@ static const unsigned long vcpu_reg_offsets[VCPU_NR_MODES][15] = {
* Return a pointer to the register number valid in the current mode of
* the virtual CPU.
*/
u32 *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num)
unsigned long *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num)
{
u32 *reg_array = (u32 *)&vcpu->arch.regs;
u32 mode = *vcpu_cpsr(vcpu) & MODE_MASK;
unsigned long *reg_array = (unsigned long *)&vcpu->arch.regs;
unsigned long mode = *vcpu_cpsr(vcpu) & MODE_MASK;
switch (mode) {
case USR_MODE...SVC_MODE:
......@@ -141,9 +142,9 @@ u32 *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num)
/*
* Return the SPSR for the current mode of the virtual CPU.
*/
u32 *vcpu_spsr(struct kvm_vcpu *vcpu)
unsigned long *vcpu_spsr(struct kvm_vcpu *vcpu)
{
u32 mode = *vcpu_cpsr(vcpu) & MODE_MASK;
unsigned long mode = *vcpu_cpsr(vcpu) & MODE_MASK;
switch (mode) {
case SVC_MODE:
return &vcpu->arch.regs.KVM_ARM_SVC_spsr;
......@@ -160,20 +161,48 @@ u32 *vcpu_spsr(struct kvm_vcpu *vcpu)
}
}
/**
* kvm_handle_wfi - handle a wait-for-interrupts instruction executed by a guest
* @vcpu: the vcpu pointer
* @run: the kvm_run structure pointer
*
* Simply sets the wait_for_interrupts flag on the vcpu structure, which will
* halt execution of world-switches and schedule other host processes until
* there is an incoming IRQ or FIQ to the VM.
/*
* A conditional instruction is allowed to trap, even though it
* wouldn't be executed. So let's re-implement the hardware, in
* software!
*/
int kvm_handle_wfi(struct kvm_vcpu *vcpu, struct kvm_run *run)
bool kvm_condition_valid(struct kvm_vcpu *vcpu)
{
trace_kvm_wfi(*vcpu_pc(vcpu));
kvm_vcpu_block(vcpu);
return 1;
unsigned long cpsr, cond, insn;
/*
* Exception Code 0 can only happen if we set HCR.TGE to 1, to
* catch undefined instructions, and then we won't get past
* the arm_exit_handlers test anyway.
*/
BUG_ON(!kvm_vcpu_trap_get_class(vcpu));
/* Top two bits non-zero? Unconditional. */
if (kvm_vcpu_get_hsr(vcpu) >> 30)
return true;
cpsr = *vcpu_cpsr(vcpu);
/* Is condition field valid? */
if ((kvm_vcpu_get_hsr(vcpu) & HSR_CV) >> HSR_CV_SHIFT)
cond = (kvm_vcpu_get_hsr(vcpu) & HSR_COND) >> HSR_COND_SHIFT;
else {
/* This can happen in Thumb mode: examine IT state. */
unsigned long it;
it = ((cpsr >> 8) & 0xFC) | ((cpsr >> 25) & 0x3);
/* it == 0 => unconditional. */
if (it == 0)
return true;
/* The cond for this insn works out as the top 4 bits. */
cond = (it >> 4);
}
/* Shift makes it look like an ARM-mode instruction */
insn = cond << 28;
return arm_check_condition(insn, cpsr) != ARM_OPCODE_CONDTEST_FAIL;
}
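A worked example of the Thumb IT-state extraction above (synthetic value; in the CPSR, IT[1:0] live at bits 26:25 and IT[7:2] at bits 15:10, which is what the two shifts reassemble):

	unsigned long cpsr = 0xA800;	/* IT[7:2] = 0b101010, IT[1:0] = 0b00 */
	unsigned long it   = ((cpsr >> 8) & 0xFC) | ((cpsr >> 25) & 0x3);
	/* it == 0xA8, so cond = it >> 4 == 0xA (GE) for the trapped insn */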
/**
......@@ -257,9 +286,9 @@ static u32 exc_vector_base(struct kvm_vcpu *vcpu)
*/
void kvm_inject_undefined(struct kvm_vcpu *vcpu)
{
u32 new_lr_value;
u32 new_spsr_value;
u32 cpsr = *vcpu_cpsr(vcpu);
unsigned long new_lr_value;
unsigned long new_spsr_value;
unsigned long cpsr = *vcpu_cpsr(vcpu);
u32 sctlr = vcpu->arch.cp15[c1_SCTLR];
bool is_thumb = (cpsr & PSR_T_BIT);
u32 vect_offset = 4;
......@@ -291,9 +320,9 @@ void kvm_inject_undefined(struct kvm_vcpu *vcpu)
*/
static void inject_abt(struct kvm_vcpu *vcpu, bool is_pabt, unsigned long addr)
{
u32 new_lr_value;
u32 new_spsr_value;
u32 cpsr = *vcpu_cpsr(vcpu);
unsigned long new_lr_value;
unsigned long new_spsr_value;
unsigned long cpsr = *vcpu_cpsr(vcpu);
u32 sctlr = vcpu->arch.cp15[c1_SCTLR];
bool is_thumb = (cpsr & PSR_T_BIT);
u32 vect_offset;
......
......@@ -22,6 +22,7 @@
#include <linux/module.h>
#include <linux/vmalloc.h>
#include <linux/fs.h>
#include <asm/cputype.h>
#include <asm/uaccess.h>
#include <asm/kvm.h>
#include <asm/kvm_asm.h>
......@@ -180,6 +181,22 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
return -EINVAL;
}
int __attribute_const__ kvm_target_cpu(void)
{
unsigned long implementor = read_cpuid_implementor();
unsigned long part_number = read_cpuid_part_number();
if (implementor != ARM_CPU_IMP_ARM)
return -EINVAL;
switch (part_number) {
case ARM_CPU_PART_CORTEX_A15:
return KVM_ARM_TARGET_CORTEX_A15;
default:
return -EINVAL;
}
}
int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
const struct kvm_vcpu_init *init)
{
......
/*
* Copyright (C) 2012 - Virtual Open Systems and Columbia University
* Author: Christoffer Dall <c.dall@virtualopensystems.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include <linux/kvm.h>
#include <linux/kvm_host.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_coproc.h>
#include <asm/kvm_mmu.h>
#include <asm/kvm_psci.h>
#include <trace/events/kvm.h>
#include "trace.h"
#include "trace.h"
typedef int (*exit_handle_fn)(struct kvm_vcpu *, struct kvm_run *);
static int handle_svc_hyp(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
/* SVC called from Hyp mode should never get here */
kvm_debug("SVC called from Hyp mode shouldn't go here\n");
BUG();
return -EINVAL; /* Squash warning */
}
static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
trace_kvm_hvc(*vcpu_pc(vcpu), *vcpu_reg(vcpu, 0),
kvm_vcpu_hvc_get_imm(vcpu));
if (kvm_psci_call(vcpu))
return 1;
kvm_inject_undefined(vcpu);
return 1;
}
static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
if (kvm_psci_call(vcpu))
return 1;
kvm_inject_undefined(vcpu);
return 1;
}
static int handle_pabt_hyp(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
/* The hypervisor should never cause aborts */
kvm_err("Prefetch Abort taken from Hyp mode at %#08lx (HSR: %#08x)\n",
kvm_vcpu_get_hfar(vcpu), kvm_vcpu_get_hsr(vcpu));
return -EFAULT;
}
static int handle_dabt_hyp(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
/* This is either an error in the world-switch code or an external abort */
kvm_err("Data Abort taken from Hyp mode at %#08lx (HSR: %#08x)\n",
kvm_vcpu_get_hfar(vcpu), kvm_vcpu_get_hsr(vcpu));
return -EFAULT;
}
/**
* kvm_handle_wfi - handle a wait-for-interrupts instruction executed by a guest
* @vcpu: the vcpu pointer
* @run: the kvm_run structure pointer
*
* Simply sets the wait_for_interrupts flag on the vcpu structure, which will
* halt execution of world-switches and schedule other host processes until
* there is an incoming IRQ or FIQ to the VM.
*/
static int kvm_handle_wfi(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
trace_kvm_wfi(*vcpu_pc(vcpu));
kvm_vcpu_block(vcpu);
return 1;
}
static exit_handle_fn arm_exit_handlers[] = {
[HSR_EC_WFI] = kvm_handle_wfi,
[HSR_EC_CP15_32] = kvm_handle_cp15_32,
[HSR_EC_CP15_64] = kvm_handle_cp15_64,
[HSR_EC_CP14_MR] = kvm_handle_cp14_access,
[HSR_EC_CP14_LS] = kvm_handle_cp14_load_store,
[HSR_EC_CP14_64] = kvm_handle_cp14_access,
[HSR_EC_CP_0_13] = kvm_handle_cp_0_13_access,
[HSR_EC_CP10_ID] = kvm_handle_cp10_id,
[HSR_EC_SVC_HYP] = handle_svc_hyp,
[HSR_EC_HVC] = handle_hvc,
[HSR_EC_SMC] = handle_smc,
[HSR_EC_IABT] = kvm_handle_guest_abort,
[HSR_EC_IABT_HYP] = handle_pabt_hyp,
[HSR_EC_DABT] = kvm_handle_guest_abort,
[HSR_EC_DABT_HYP] = handle_dabt_hyp,
};
static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
{
u8 hsr_ec = kvm_vcpu_trap_get_class(vcpu);
if (hsr_ec >= ARRAY_SIZE(arm_exit_handlers) ||
!arm_exit_handlers[hsr_ec]) {
kvm_err("Unknown exception class: hsr: %#08x\n",
(unsigned int)kvm_vcpu_get_hsr(vcpu));
BUG();
}
return arm_exit_handlers[hsr_ec];
}
/*
* Return > 0 to return to guest, < 0 on error, 0 (and set exit_reason) on
* proper exit to userspace.
*/
int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
int exception_index)
{
exit_handle_fn exit_handler;
switch (exception_index) {
case ARM_EXCEPTION_IRQ:
return 1;
case ARM_EXCEPTION_UNDEFINED:
kvm_err("Undefined exception in Hyp mode at: %#08lx\n",
kvm_vcpu_get_hyp_pc(vcpu));
BUG();
panic("KVM: Hypervisor undefined exception!\n");
case ARM_EXCEPTION_DATA_ABORT:
case ARM_EXCEPTION_PREF_ABORT:
case ARM_EXCEPTION_HVC:
/*
* See ARM ARM B1.14.1: "Hyp traps on instructions
* that fail their condition code check"
*/
if (!kvm_condition_valid(vcpu)) {
kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
return 1;
}
exit_handler = kvm_get_exit_handler(vcpu);
return exit_handler(vcpu, run);
default:
kvm_pr_unimpl("Unsupported exception type: %d",
exception_index);
run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
return 0;
}
}
......@@ -35,15 +35,18 @@ __kvm_hyp_code_start:
/********************************************************************
* Flush per-VMID TLBs
*
* void __kvm_tlb_flush_vmid(struct kvm *kvm);
* void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
*
* We rely on the hardware to broadcast the TLB invalidation to all CPUs
* inside the inner-shareable domain (which is the case for all v7
* implementations). If we come across a non-IS SMP implementation, we'll
* have to use an IPI based mechanism. Until then, we stick to the simple
* hardware assisted version.
*
* As v7 does not support flushing per IPA, just nuke the whole TLB
* instead, ignoring the ipa value.
*/
ENTRY(__kvm_tlb_flush_vmid)
ENTRY(__kvm_tlb_flush_vmid_ipa)
push {r2, r3}
add r0, r0, #KVM_VTTBR
......@@ -60,7 +63,7 @@ ENTRY(__kvm_tlb_flush_vmid)
pop {r2, r3}
bx lr
ENDPROC(__kvm_tlb_flush_vmid)
ENDPROC(__kvm_tlb_flush_vmid_ipa)
/********************************************************************
* Flush TLBs and instruction caches of all CPUs inside the inner-shareable
......@@ -235,9 +238,9 @@ ENTRY(kvm_call_hyp)
* instruction is issued since all traps are disabled when running the host
* kernel as per the Hyp-mode initialization at boot time.
*
* HVC instructions cause a trap to the vector page + offset 0x18 (see hyp_hvc
* HVC instructions cause a trap to the vector page + offset 0x14 (see hyp_hvc
* below) when the HVC instruction is called from SVC mode (i.e. a guest or the
* host kernel) and they cause a trap to the vector page + offset 0xc when HVC
* host kernel) and they cause a trap to the vector page + offset 0x8 when HVC
* instructions are called from within Hyp-mode.
*
* Hyp-ABI: Calling HYP-mode functions from host (in SVC mode):
......
......@@ -33,16 +33,16 @@
*/
int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
__u32 *dest;
unsigned long *dest;
unsigned int len;
int mask;
if (!run->mmio.is_write) {
dest = vcpu_reg(vcpu, vcpu->arch.mmio_decode.rt);
memset(dest, 0, sizeof(int));
*dest = 0;
len = run->mmio.len;
if (len > 4)
if (len > sizeof(unsigned long))
return -EINVAL;
memcpy(dest, run->mmio.data, len);
......@@ -50,7 +50,8 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
trace_kvm_mmio(KVM_TRACE_MMIO_READ, len, run->mmio.phys_addr,
*((u64 *)run->mmio.data));
if (vcpu->arch.mmio_decode.sign_extend && len < 4) {
if (vcpu->arch.mmio_decode.sign_extend &&
len < sizeof(unsigned long)) {
mask = 1U << ((len * 8) - 1);
*dest = (*dest ^ mask) - mask;
}
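The XOR/subtract line above is the classic branch-free sign-extension idiom; a worked instance:

/*
 * For a 1-byte read that returned 0x80:
 *
 *   mask = 1U << (1 * 8 - 1)  = 0x80
 *   (0x80 ^ 0x80) - 0x80      = -0x80
 *
 * i.e. *dest becomes 0xffffff80 in a 32-bit register, exactly what a
 * signed byte load (ldrsb) would have produced.
 */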
......@@ -65,40 +66,29 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
unsigned long rt, len;
bool is_write, sign_extend;
if ((vcpu->arch.hsr >> 8) & 1) {
if (kvm_vcpu_dabt_isextabt(vcpu)) {
/* cache operation on I/O addr, tell guest unsupported */
kvm_inject_dabt(vcpu, vcpu->arch.hxfar);
kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
return 1;
}
if ((vcpu->arch.hsr >> 7) & 1) {
if (kvm_vcpu_dabt_iss1tw(vcpu)) {
/* page table accesses IO mem: tell guest to fix its TTBR */
kvm_inject_dabt(vcpu, vcpu->arch.hxfar);
kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
return 1;
}
switch ((vcpu->arch.hsr >> 22) & 0x3) {
case 0:
len = 1;
break;
case 1:
len = 2;
break;
case 2:
len = 4;
break;
default:
kvm_err("Hardware is weird: SAS 0b11 is reserved\n");
return -EFAULT;
}
len = kvm_vcpu_dabt_get_as(vcpu);
if (unlikely(len < 0))
return len;
is_write = vcpu->arch.hsr & HSR_WNR;
sign_extend = vcpu->arch.hsr & HSR_SSE;
rt = (vcpu->arch.hsr & HSR_SRT_MASK) >> HSR_SRT_SHIFT;
is_write = kvm_vcpu_dabt_iswrite(vcpu);
sign_extend = kvm_vcpu_dabt_issext(vcpu);
rt = kvm_vcpu_dabt_get_rd(vcpu);
if (kvm_vcpu_reg_is_pc(vcpu, rt)) {
/* IO memory trying to read/write pc */
kvm_inject_pabt(vcpu, vcpu->arch.hxfar);
kvm_inject_pabt(vcpu, kvm_vcpu_get_hfar(vcpu));
return 1;
}
......@@ -112,7 +102,7 @@ static int decode_hsr(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
* The MMIO instruction is emulated and should not be re-executed
* in the guest.
*/
kvm_skip_instr(vcpu, (vcpu->arch.hsr >> 25) & 1);
kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
return 0;
}
......@@ -130,7 +120,7 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
* space do its magic.
*/
if (vcpu->arch.hsr & HSR_ISV) {
if (kvm_vcpu_dabt_isvalid(vcpu)) {
ret = decode_hsr(vcpu, fault_ipa, &mmio);
if (ret)
return ret;
......
......@@ -20,7 +20,6 @@
#include <linux/kvm_host.h>
#include <linux/io.h>
#include <trace/events/kvm.h>
#include <asm/idmap.h>
#include <asm/pgalloc.h>
#include <asm/cacheflush.h>
#include <asm/kvm_arm.h>
......@@ -28,8 +27,6 @@
#include <asm/kvm_mmio.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_emulate.h>
#include <asm/mach/map.h>
#include <trace/events/kvm.h>
#include "trace.h"
......@@ -37,19 +34,9 @@ extern char __hyp_idmap_text_start[], __hyp_idmap_text_end[];
static DEFINE_MUTEX(kvm_hyp_pgd_mutex);
static void kvm_tlb_flush_vmid(struct kvm *kvm)
static void kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
{
kvm_call_hyp(__kvm_tlb_flush_vmid, kvm);
}
static void kvm_set_pte(pte_t *pte, pte_t new_pte)
{
pte_val(*pte) = new_pte;
/*
* flush_pmd_entry just takes a void pointer and cleans the necessary
* cache entries, so we can reuse the function for ptes.
*/
flush_pmd_entry(pte);
kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, kvm, ipa);
}
static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache,
......@@ -98,33 +85,42 @@ static void free_ptes(pmd_t *pmd, unsigned long addr)
}
}
static void free_hyp_pgd_entry(unsigned long addr)
{
pgd_t *pgd;
pud_t *pud;
pmd_t *pmd;
unsigned long hyp_addr = KERN_TO_HYP(addr);
pgd = hyp_pgd + pgd_index(hyp_addr);
pud = pud_offset(pgd, hyp_addr);
if (pud_none(*pud))
return;
BUG_ON(pud_bad(*pud));
pmd = pmd_offset(pud, hyp_addr);
free_ptes(pmd, addr);
pmd_free(NULL, pmd);
pud_clear(pud);
}
/**
* free_hyp_pmds - free a Hyp-mode level-2 tables and child level-3 tables
*
* Assumes this is a page table used strictly in Hyp-mode and therefore contains
* only mappings in the kernel memory area, which is above PAGE_OFFSET.
* either mappings in the kernel memory area (above PAGE_OFFSET), or
* device mappings in the vmalloc range (from VMALLOC_START to VMALLOC_END).
*/
void free_hyp_pmds(void)
{
pgd_t *pgd;
pud_t *pud;
pmd_t *pmd;
unsigned long addr;
mutex_lock(&kvm_hyp_pgd_mutex);
for (addr = PAGE_OFFSET; addr != 0; addr += PGDIR_SIZE) {
pgd = hyp_pgd + pgd_index(addr);
pud = pud_offset(pgd, addr);
if (pud_none(*pud))
continue;
BUG_ON(pud_bad(*pud));
pmd = pmd_offset(pud, addr);
free_ptes(pmd, addr);
pmd_free(NULL, pmd);
pud_clear(pud);
}
for (addr = PAGE_OFFSET; virt_addr_valid(addr); addr += PGDIR_SIZE)
free_hyp_pgd_entry(addr);
for (addr = VMALLOC_START; is_vmalloc_addr((void*)addr); addr += PGDIR_SIZE)
free_hyp_pgd_entry(addr);
mutex_unlock(&kvm_hyp_pgd_mutex);
}
......@@ -136,7 +132,9 @@ static void create_hyp_pte_mappings(pmd_t *pmd, unsigned long start,
struct page *page;
for (addr = start & PAGE_MASK; addr < end; addr += PAGE_SIZE) {
pte = pte_offset_kernel(pmd, addr);
unsigned long hyp_addr = KERN_TO_HYP(addr);
pte = pte_offset_kernel(pmd, hyp_addr);
BUG_ON(!virt_addr_valid(addr));
page = virt_to_page(addr);
kvm_set_pte(pte, mk_pte(page, PAGE_HYP));
......@@ -151,7 +149,9 @@ static void create_hyp_io_pte_mappings(pmd_t *pmd, unsigned long start,
unsigned long addr;
for (addr = start & PAGE_MASK; addr < end; addr += PAGE_SIZE) {
pte = pte_offset_kernel(pmd, addr);
unsigned long hyp_addr = KERN_TO_HYP(addr);
pte = pte_offset_kernel(pmd, hyp_addr);
BUG_ON(pfn_valid(*pfn_base));
kvm_set_pte(pte, pfn_pte(*pfn_base, PAGE_HYP_DEVICE));
(*pfn_base)++;
......@@ -166,12 +166,13 @@ static int create_hyp_pmd_mappings(pud_t *pud, unsigned long start,
unsigned long addr, next;
for (addr = start; addr < end; addr = next) {
pmd = pmd_offset(pud, addr);
unsigned long hyp_addr = KERN_TO_HYP(addr);
pmd = pmd_offset(pud, hyp_addr);
BUG_ON(pmd_sect(*pmd));
if (pmd_none(*pmd)) {
pte = pte_alloc_one_kernel(NULL, addr);
pte = pte_alloc_one_kernel(NULL, hyp_addr);
if (!pte) {
kvm_err("Cannot allocate Hyp pte\n");
return -ENOMEM;
......@@ -206,17 +207,23 @@ static int __create_hyp_mappings(void *from, void *to, unsigned long *pfn_base)
unsigned long addr, next;
int err = 0;
BUG_ON(start > end);
if (start < PAGE_OFFSET)
if (start >= end)
return -EINVAL;
/* Check for a valid kernel memory mapping */
if (!pfn_base && (!virt_addr_valid(from) || !virt_addr_valid(to - 1)))
return -EINVAL;
/* Check for a valid kernel IO mapping */
if (pfn_base && (!is_vmalloc_addr(from) || !is_vmalloc_addr(to - 1)))
return -EINVAL;
mutex_lock(&kvm_hyp_pgd_mutex);
for (addr = start; addr < end; addr = next) {
pgd = hyp_pgd + pgd_index(addr);
pud = pud_offset(pgd, addr);
unsigned long hyp_addr = KERN_TO_HYP(addr);
pgd = hyp_pgd + pgd_index(hyp_addr);
pud = pud_offset(pgd, hyp_addr);
if (pud_none_or_clear_bad(pud)) {
pmd = pmd_alloc_one(NULL, addr);
pmd = pmd_alloc_one(NULL, hyp_addr);
if (!pmd) {
kvm_err("Cannot allocate Hyp pmd\n");
err = -ENOMEM;
......@@ -236,12 +243,13 @@ static int __create_hyp_mappings(void *from, void *to, unsigned long *pfn_base)
}
/**
* create_hyp_mappings - map a kernel virtual address range in Hyp mode
* create_hyp_mappings - duplicate a kernel virtual address range in Hyp mode
* @from: The virtual kernel start address of the range
* @to: The virtual kernel end address of the range (exclusive)
*
* The same virtual address as the kernel virtual address is also used in
* Hyp-mode mapping to the same underlying physical pages.
* The same virtual address as the kernel virtual address is also used
* in Hyp-mode mapping (modulo HYP_PAGE_OFFSET) to the same underlying
* physical pages.
*
* Note: Wrapping around zero in the "to" address is not supported.
*/
......@@ -251,10 +259,13 @@ int create_hyp_mappings(void *from, void *to)
}
/**
* create_hyp_io_mappings - map a physical IO range in Hyp mode
* @from: The virtual HYP start address of the range
* @to: The virtual HYP end address of the range (exclusive)
* create_hyp_io_mappings - duplicate a kernel IO mapping into Hyp mode
* @from: The kernel start VA of the range
* @to: The kernel end VA of the range (exclusive)
* @addr: The physical start address which gets mapped
*
* The resulting HYP VA is the same as the kernel VA, modulo
* HYP_PAGE_OFFSET.
*/
int create_hyp_io_mappings(void *from, void *to, phys_addr_t addr)
{
......@@ -290,7 +301,7 @@ int kvm_alloc_stage2_pgd(struct kvm *kvm)
VM_BUG_ON((unsigned long)pgd & (S2_PGD_SIZE - 1));
memset(pgd, 0, PTRS_PER_S2_PGD * sizeof(pgd_t));
clean_dcache_area(pgd, PTRS_PER_S2_PGD * sizeof(pgd_t));
kvm_clean_pgd(pgd);
kvm->arch.pgd = pgd;
return 0;
......@@ -422,22 +433,22 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
return 0; /* ignore calls from kvm_set_spte_hva */
pmd = mmu_memory_cache_alloc(cache);
pud_populate(NULL, pud, pmd);
pmd += pmd_index(addr);
get_page(virt_to_page(pud));
} else
pmd = pmd_offset(pud, addr);
}
pmd = pmd_offset(pud, addr);
/* Create 2nd stage page table mapping - Level 2 */
if (pmd_none(*pmd)) {
if (!cache)
return 0; /* ignore calls from kvm_set_spte_hva */
pte = mmu_memory_cache_alloc(cache);
clean_pte_table(pte);
kvm_clean_pte(pte);
pmd_populate_kernel(NULL, pmd, pte);
pte += pte_index(addr);
get_page(virt_to_page(pmd));
} else
pte = pte_offset_kernel(pmd, addr);
}
pte = pte_offset_kernel(pmd, addr);
if (iomap && pte_present(*pte))
return -EFAULT;
......@@ -446,7 +457,7 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
old_pte = *pte;
kvm_set_pte(pte, *new_pte);
if (pte_present(old_pte))
kvm_tlb_flush_vmid(kvm);
kvm_tlb_flush_vmid_ipa(kvm, addr);
else
get_page(virt_to_page(pte));
......@@ -473,7 +484,8 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
pfn = __phys_to_pfn(pa);
for (addr = guest_ipa; addr < end; addr += PAGE_SIZE) {
pte_t pte = pfn_pte(pfn, PAGE_S2_DEVICE | L_PTE_S2_RDWR);
pte_t pte = pfn_pte(pfn, PAGE_S2_DEVICE);
kvm_set_s2pte_writable(&pte);
ret = mmu_topup_memory_cache(&cache, 2, 2);
if (ret)
......@@ -492,29 +504,6 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
return ret;
}
static void coherent_icache_guest_page(struct kvm *kvm, gfn_t gfn)
{
/*
* If we are going to insert an instruction page and the icache is
* either VIPT or PIPT, there is a potential problem where the host
* (or another VM) may have used the same page as this guest, and we
* read incorrect data from the icache. If we're using a PIPT cache,
* we can invalidate just that page, but if we are using a VIPT cache
* we need to invalidate the entire icache - damn shame - as written
* in the ARM ARM (DDI 0406C.b - Page B3-1393).
*
* VIVT caches are tagged using both the ASID and the VMID and don't
* need any kind of flushing (DDI 0406C.b - Page B3-1392).
*/
if (icache_is_pipt()) {
unsigned long hva = gfn_to_hva(kvm, gfn);
__cpuc_coherent_user_range(hva, hva + PAGE_SIZE);
} else if (!icache_is_vivt_asid_tagged()) {
/* any kind of VIPT cache */
__flush_icache_all();
}
}
static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
gfn_t gfn, struct kvm_memory_slot *memslot,
unsigned long fault_status)
......@@ -526,7 +515,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
unsigned long mmu_seq;
struct kvm_mmu_memory_cache *memcache = &vcpu->arch.mmu_page_cache;
write_fault = kvm_is_write_fault(vcpu->arch.hsr);
write_fault = kvm_is_write_fault(kvm_vcpu_get_hsr(vcpu));
if (fault_status == FSC_PERM && !write_fault) {
kvm_err("Unexpected L2 read permission error\n");
return -EFAULT;
......@@ -560,7 +549,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (mmu_notifier_retry(vcpu->kvm, mmu_seq))
goto out_unlock;
if (writable) {
pte_val(new_pte) |= L_PTE_S2_RDWR;
kvm_set_s2pte_writable(&new_pte);
kvm_set_pfn_dirty(pfn);
}
stage2_set_pte(vcpu->kvm, memcache, fault_ipa, &new_pte, false);
......@@ -585,7 +574,6 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
*/
int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
unsigned long hsr_ec;
unsigned long fault_status;
phys_addr_t fault_ipa;
struct kvm_memory_slot *memslot;
......@@ -593,18 +581,17 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
gfn_t gfn;
int ret, idx;
hsr_ec = vcpu->arch.hsr >> HSR_EC_SHIFT;
is_iabt = (hsr_ec == HSR_EC_IABT);
fault_ipa = ((phys_addr_t)vcpu->arch.hpfar & HPFAR_MASK) << 8;
is_iabt = kvm_vcpu_trap_is_iabt(vcpu);
fault_ipa = kvm_vcpu_get_fault_ipa(vcpu);
trace_kvm_guest_fault(*vcpu_pc(vcpu), vcpu->arch.hsr,
vcpu->arch.hxfar, fault_ipa);
trace_kvm_guest_fault(*vcpu_pc(vcpu), kvm_vcpu_get_hsr(vcpu),
kvm_vcpu_get_hfar(vcpu), fault_ipa);
/* Check that the stage-2 fault is a translation fault or a permission fault */
fault_status = (vcpu->arch.hsr & HSR_FSC_TYPE);
fault_status = kvm_vcpu_trap_get_fault(vcpu);
if (fault_status != FSC_FAULT && fault_status != FSC_PERM) {
kvm_err("Unsupported fault status: EC=%#lx DFCS=%#lx\n",
hsr_ec, fault_status);
kvm_err("Unsupported fault status: EC=%#x DFCS=%#lx\n",
kvm_vcpu_trap_get_class(vcpu), fault_status);
return -EFAULT;
}
......@@ -614,7 +601,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
if (!kvm_is_visible_gfn(vcpu->kvm, gfn)) {
if (is_iabt) {
/* Prefetch Abort on I/O address */
kvm_inject_pabt(vcpu, vcpu->arch.hxfar);
kvm_inject_pabt(vcpu, kvm_vcpu_get_hfar(vcpu));
ret = 1;
goto out_unlock;
}
......@@ -626,8 +613,13 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run)
goto out_unlock;
}
/* Adjust page offset */
fault_ipa |= vcpu->arch.hxfar & ~PAGE_MASK;
/*
* The IPA is reported as [MAX:12], so we need to
* complement it with the bottom 12 bits from the
* faulting VA. This is always 12 bits, irrespective
* of the page size.
*/
fault_ipa |= kvm_vcpu_get_hfar(vcpu) & ((1 << 12) - 1);
ret = io_mem_abort(vcpu, run, fault_ipa);
goto out_unlock;
}
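A worked example of the IPA reconstruction described in the comment above, with invented values:

/* HPFAR gives the IPA page frame; HFAR gives the faulting VA. */
phys_addr_t fault_ipa = 0x84512000;		/* bits [MAX:12]        */
unsigned long hfar    = 0xc0a8137c;		/* faulting VA          */

fault_ipa |= hfar & ((1 << 12) - 1);		/* -> 0x8451237c        */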
......@@ -682,7 +674,7 @@ static void handle_hva_to_gpa(struct kvm *kvm,
static void kvm_unmap_hva_handler(struct kvm *kvm, gpa_t gpa, void *data)
{
unmap_stage2_range(kvm, gpa, PAGE_SIZE);
kvm_tlb_flush_vmid(kvm);
kvm_tlb_flush_vmid_ipa(kvm, gpa);
}
int kvm_unmap_hva(struct kvm *kvm, unsigned long hva)
......@@ -776,7 +768,7 @@ void kvm_clear_hyp_idmap(void)
pmd = pmd_offset(pud, addr);
pud_clear(pud);
clean_pmd_entry(pmd);
kvm_clean_pmd_entry(pmd);
pmd_free(NULL, (pmd_t *)((unsigned long)pmd & PAGE_MASK));
} while (pgd++, addr = next, addr < end);
}
......@@ -1477,7 +1477,7 @@ int kvm_vgic_set_addr(struct kvm *kvm, unsigned long type, u64 addr)
if (addr & ~KVM_PHYS_MASK)
return -E2BIG;
if (addr & ~PAGE_MASK)
if (addr & (SZ_4K - 1))
return -EINVAL;
mutex_lock(&kvm->lock);
......
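The switch from PAGE_MASK to an explicit SZ_4K check matters once this code is shared with hosts that can use pages larger than 4K: the guest-visible GIC only requires 4K alignment. A sketch of the difference, with an invented address:

phys_addr_t addr = 0x2c001000;		/* 4K-aligned, not 64K-aligned */

/* With a 64K-page host, ~PAGE_MASK == 0xffff, so:
 *	addr & ~PAGE_MASK   -> 0x1000 (nonzero): wrongly rejected
 *	addr & (SZ_4K - 1)  -> 0: accepted, matching what the guest
 *	actually requires
 */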
......@@ -28,7 +28,6 @@ static inline void cpu_enter_lowpower_a9(void)
{
unsigned int v;
flush_cache_all();
asm volatile(
" mcr p15, 0, %1, c7, c5, 0\n"
" mcr p15, 0, %1, c7, c10, 4\n"
......
......@@ -1252,7 +1252,7 @@ static void __init nuri_camera_init(void)
}
m5mols_board_info.irq = s5p_register_gpio_interrupt(GPIO_CAM_8M_ISP_INT);
if (!IS_ERR_VALUE(m5mols_board_info.irq))
if (m5mols_board_info.irq >= 0)
s3c_gpio_cfgpin(GPIO_CAM_8M_ISP_INT, S3C_GPIO_SFN(0xF));
else
pr_err("%s: Failed to configure 8M_ISP_INT GPIO\n", __func__);
......
......@@ -14,7 +14,6 @@
* this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/kernel.h>
#include <asm/cacheflush.h>
#include "core.h"
......
......@@ -37,7 +37,7 @@ int __init mxc_device_init(void)
int ret;
ret = device_register(&mxc_aips_bus);
if (IS_ERR_VALUE(ret))
if (ret < 0)
goto done;
ret = device_register(&mxc_ahb_bus);
......
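The IS_ERR_VALUE() conversions in this series all follow the same reasoning: that macro is meant for unsigned long values that may encode an ERR_PTR, and applying it to a plain int error code is misleading. A minimal sketch of the preferred form, where do_init() is a hypothetical helper:

int ret = do_init();	/* hypothetical: returns 0 or -errno */

if (ret < 0)		/* says exactly what is meant */
	return ret;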
......@@ -11,7 +11,6 @@
*/
#include <linux/errno.h>
#include <asm/cacheflush.h>
#include <asm/cp15.h>
#include "common.h"
......@@ -20,7 +19,6 @@ static inline void cpu_enter_lowpower(void)
{
unsigned int v;
flush_cache_all();
asm volatile(
"mcr p15, 0, %1, c7, c5, 0\n"
" mcr p15, 0, %1, c7, c10, 4\n"
......
......@@ -536,16 +536,14 @@ static void __init ap_init_of(void)
'A' + (ap_sc_id & 0x0f));
soc_dev = soc_device_register(soc_dev_attr);
if (IS_ERR_OR_NULL(soc_dev)) {
if (IS_ERR(soc_dev)) {
kfree(soc_dev_attr->revision);
kfree(soc_dev_attr);
return;
}
parent = soc_device_to_device(soc_dev);
if (!IS_ERR_OR_NULL(parent))
integrator_init_sysfs(parent, ap_sc_id);
integrator_init_sysfs(parent, ap_sc_id);
of_platform_populate(root, of_default_bus_match_table,
ap_auxdata_lookup, parent);
......
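soc_device_register() reports failure with ERR_PTR(), never NULL, so IS_ERR() is the complete test and the follow-up NULL check on soc_device_to_device() can go. The resulting pattern in isolation:

soc_dev = soc_device_register(soc_dev_attr);
if (IS_ERR(soc_dev))		/* ERR_PTR(-errno) on failure, never NULL */
	return PTR_ERR(soc_dev);

parent = soc_device_to_device(soc_dev);	/* valid whenever soc_dev is */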
......@@ -360,17 +360,14 @@ static void __init intcp_init_of(void)
'A' + (intcp_sc_id & 0x0f));
soc_dev = soc_device_register(soc_dev_attr);
if (IS_ERR_OR_NULL(soc_dev)) {
if (IS_ERR(soc_dev)) {
kfree(soc_dev_attr->revision);
kfree(soc_dev_attr);
return;
}
parent = soc_device_to_device(soc_dev);
if (!IS_ERR_OR_NULL(parent))
integrator_init_sysfs(parent, intcp_sc_id);
integrator_init_sysfs(parent, intcp_sc_id);
of_platform_populate(root, of_default_bus_match_table,
intcp_auxdata_lookup, parent);
}
......
......@@ -10,16 +10,12 @@
#include <linux/errno.h>
#include <linux/smp.h>
#include <asm/cacheflush.h>
#include <asm/smp_plat.h>
#include "common.h"
static inline void cpu_enter_lowpower(void)
{
/* Just flush the cache. Changing the coherency is not yet
* available on msm. */
flush_cache_all();
}
static inline void cpu_leave_lowpower(void)
......
......@@ -479,7 +479,7 @@ static int __init beagle_opp_init(void)
/* Initialize the omap3 opp table if not already created. */
r = omap3_opp_init();
if (IS_ERR_VALUE(r) && (r != -EEXIST)) {
if (r < 0 && (r != -EEXIST)) {
pr_err("%s: opp default init failed\n", __func__);
return r;
}
......
......@@ -611,7 +611,7 @@ int __init omap2_clk_switch_mpurate_at_boot(const char *mpurate_ck_name)
return -ENOENT;
r = clk_set_rate(mpurate_ck, mpurate);
if (IS_ERR_VALUE(r)) {
if (r < 0) {
WARN(1, "clock: %s: unable to set MPU rate to %d: %d\n",
mpurate_ck_name, mpurate, r);
clk_put(mpurate_ck);
......
......@@ -303,7 +303,7 @@ static int omap2_onenand_setup_async(void __iomem *onenand_base)
t = omap2_onenand_calc_async_timings();
ret = gpmc_set_async_mode(gpmc_onenand_data->cs, &t);
if (IS_ERR_VALUE(ret))
if (ret < 0)
return ret;
omap2_onenand_set_async_mode(onenand_base);
......@@ -325,7 +325,7 @@ static int omap2_onenand_setup_sync(void __iomem *onenand_base, int *freq_ptr)
t = omap2_onenand_calc_sync_timings(gpmc_onenand_data, freq);
ret = gpmc_set_sync_mode(gpmc_onenand_data->cs, &t);
if (IS_ERR_VALUE(ret))
if (ret < 0)
return ret;
set_onenand_cfg(onenand_base);
......
......@@ -716,7 +716,7 @@ static int gpmc_setup_irq(void)
return -EINVAL;
gpmc_irq_start = irq_alloc_descs(-1, 0, GPMC_NR_IRQ, 0);
if (IS_ERR_VALUE(gpmc_irq_start)) {
if (gpmc_irq_start < 0) {
pr_err("irq_alloc_descs failed\n");
return gpmc_irq_start;
}
......@@ -801,7 +801,7 @@ static int gpmc_mem_init(void)
continue;
gpmc_cs_get_memconf(cs, &base, &size);
rc = gpmc_cs_insert_mem(cs, base, size);
if (IS_ERR_VALUE(rc)) {
if (rc < 0) {
while (--cs >= 0)
if (gpmc_cs_mem_enabled(cs))
gpmc_cs_delete_mem(cs);
......@@ -1370,14 +1370,14 @@ static int gpmc_probe(struct platform_device *pdev)
GPMC_REVISION_MINOR(l));
rc = gpmc_mem_init();
if (IS_ERR_VALUE(rc)) {
if (rc < 0) {
clk_disable_unprepare(gpmc_l3_clk);
clk_put(gpmc_l3_clk);
dev_err(gpmc_dev, "failed to reserve memory\n");
return rc;
}
if (IS_ERR_VALUE(gpmc_setup_irq()))
if (gpmc_setup_irq() < 0)
dev_warn(gpmc_dev, "gpmc_setup_irq failed\n");
/* Now the GPMC is initialised, unreserve the chip-selects */
......
......@@ -314,7 +314,7 @@ void __init omap3xxx_check_revision(void)
* If the processor type is Cortex-A8 and the revision is 0x0
* it means it's Cortex-A8 r0p0, which is 3430 ES1.0.
*/
cpuid = read_cpuid(CPUID_ID);
cpuid = read_cpuid_id();
if ((((cpuid >> 4) & 0xfff) == 0xc08) && ((cpuid & 0xf) == 0x0)) {
omap_revision = OMAP3430_REV_ES1_0;
cpu_rev = "1.0";
......@@ -475,7 +475,7 @@ void __init omap4xxx_check_revision(void)
* Use ARM register to detect the correct ES version
*/
if (!rev && (hawkeye != 0xb94e) && (hawkeye != 0xb975)) {
idcode = read_cpuid(CPUID_ID);
idcode = read_cpuid_id();
rev = (idcode & 0xf) - 1;
}
......
......@@ -174,7 +174,7 @@ static void __init omap4_smp_init_cpus(void)
unsigned int i = 0, ncores = 1, cpu_id;
/* Use ARM cpuid check here, as SoC detection will not work so early */
cpu_id = read_cpuid(CPUID_ID) & CPU_MASK;
cpu_id = read_cpuid_id() & CPU_MASK;
if (cpu_id == CPU_CORTEX_A9) {
/*
* Currently we can't call ioremap here because
......
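read_cpuid_id() is the dedicated accessor for the Main ID Register, and the field decoding these checks rely on looks like this (primary part numbers per the ARM ARM: 0xc08 is Cortex-A8, 0xc09 is Cortex-A9):

unsigned int midr = read_cpuid_id();
unsigned int part = (midr >> 4) & 0xfff;	/* primary part number */
unsigned int rev  = midr & 0xf;			/* minor revision      */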
......@@ -131,7 +131,7 @@ static int omap_device_build_from_dt(struct platform_device *pdev)
int oh_cnt, i, ret = 0;
oh_cnt = of_property_count_strings(node, "ti,hwmods");
if (!oh_cnt || IS_ERR_VALUE(oh_cnt)) {
if (oh_cnt <= 0) {
dev_dbg(&pdev->dev, "No 'hwmods' to build omap_device\n");
return -ENODEV;
}
......@@ -815,20 +815,17 @@ struct device *omap_device_get_by_hwmod_name(const char *oh_name)
}
oh = omap_hwmod_lookup(oh_name);
if (IS_ERR_OR_NULL(oh)) {
if (!oh) {
WARN(1, "%s: no hwmod for %s\n", __func__,
oh_name);
return ERR_PTR(oh ? PTR_ERR(oh) : -ENODEV);
return ERR_PTR(-ENODEV);
}
if (IS_ERR_OR_NULL(oh->od)) {
if (!oh->od) {
WARN(1, "%s: no omap_device for %s\n", __func__,
oh_name);
return ERR_PTR(oh->od ? PTR_ERR(oh->od) : -ENODEV);
return ERR_PTR(-ENODEV);
}
if (IS_ERR_OR_NULL(oh->od->pdev))
return ERR_PTR(oh->od->pdev ? PTR_ERR(oh->od->pdev) : -ENODEV);
return &oh->od->pdev->dev;
}
......
......@@ -1663,7 +1663,7 @@ static int _deassert_hardreset(struct omap_hwmod *oh, const char *name)
return -ENOSYS;
ret = _lookup_hardreset(oh, name, &ohri);
if (IS_ERR_VALUE(ret))
if (ret < 0)
return ret;
if (oh->clkdm) {
......@@ -2413,7 +2413,7 @@ static int __init _init(struct omap_hwmod *oh, void *data)
_init_mpu_rt_base(oh, NULL);
r = _init_clocks(oh, NULL);
if (IS_ERR_VALUE(r)) {
if (r < 0) {
WARN(1, "omap_hwmod: %s: couldn't init clocks\n", oh->name);
return -EINVAL;
}
......
......@@ -217,7 +217,7 @@ static int __init pwrdms_setup(struct powerdomain *pwrdm, void *dir)
return 0;
d = debugfs_create_dir(pwrdm->name, (struct dentry *)dir);
if (!(IS_ERR_OR_NULL(d)))
if (d)
(void) debugfs_create_file("suspend", S_IRUGO|S_IWUSR, d,
(void *)pwrdm, &pwrdm_suspend_fops);
......@@ -261,8 +261,8 @@ static int __init pm_dbg_init(void)
return 0;
d = debugfs_create_dir("pm_debug", NULL);
if (IS_ERR_OR_NULL(d))
return PTR_ERR(d);
if (!d)
return -EINVAL;
(void) debugfs_create_file("count", S_IRUGO,
d, (void *)DEBUG_FILE_COUNTERS, &debug_fops);
......
......@@ -1180,7 +1180,7 @@ bool pwrdm_can_ever_lose_context(struct powerdomain *pwrdm)
{
int i;
if (IS_ERR_OR_NULL(pwrdm)) {
if (!pwrdm) {
pr_debug("powerdomain: %s: invalid powerdomain pointer\n",
__func__);
return 1;
......
......@@ -288,7 +288,7 @@ static int __init omap_dm_timer_init_one(struct omap_dm_timer *timer,
r = -EINVAL;
} else {
r = clk_set_parent(timer->fclk, src);
if (IS_ERR_VALUE(r))
if (r < 0)
pr_warn("%s: %s cannot set source\n",
__func__, oh->name);
clk_put(src);
......
......@@ -10,13 +10,10 @@
#include <linux/errno.h>
#include <linux/smp.h>
#include <asm/cacheflush.h>
#include <asm/smp_plat.h>
static inline void platform_do_lowpower(unsigned int cpu)
{
flush_cache_all();
/* we put the platform to just WFI */
for (;;) {
__asm__ __volatile__("dsb\n\t" "wfi\n\t"
......
......@@ -12,7 +12,6 @@
#include <linux/errno.h>
#include <linux/smp.h>
#include <asm/cacheflush.h>
#include <asm/cp15.h>
#include <asm/smp_plat.h>
......@@ -20,7 +19,6 @@ static inline void cpu_enter_lowpower(void)
{
unsigned int v;
flush_cache_all();
asm volatile(
" mcr p15, 0, %1, c7, c5, 0\n"
" mcr p15, 0, %1, c7, c10, 4\n"
......
......@@ -104,14 +104,6 @@ static int sh73a0_cpu_kill(unsigned int cpu)
static void sh73a0_cpu_die(unsigned int cpu)
{
/*
* The ARM MPcore does not issue a cache coherency request for the L1
* cache when powering off single CPUs. We must take care of this and
* further caches.
*/
dsb();
flush_cache_all();
/* Set power off mode. This takes the CPU out of the MP cluster */
scu_power_mode(shmobile_scu_base, SCU_PM_POWEROFF);
......
......@@ -13,7 +13,6 @@
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/smp.h>
#include <asm/cacheflush.h>
#include <asm/cp15.h>
#include <asm/smp_plat.h>
......@@ -21,7 +20,6 @@ static inline void cpu_enter_lowpower(void)
{
unsigned int v;
flush_cache_all();
asm volatile(
" mcr p15, 0, %1, c7, c5, 0\n"
" dsb\n"
......
......@@ -56,9 +56,9 @@ int __init harmony_pcie_init(void)
gpio_direction_output(en_vdd_1v05, 1);
regulator = regulator_get(NULL, "vdd_ldo0,vddio_pex_clk");
if (IS_ERR_OR_NULL(regulator)) {
pr_err("%s: regulator_get failed: %d\n", __func__,
(int)PTR_ERR(regulator));
if (IS_ERR(regulator)) {
err = PTR_ERR(regulator);
pr_err("%s: regulator_get failed: %d\n", __func__, err);
goto err_reg;
}
......
......@@ -2,4 +2,3 @@ extern struct smp_operations tegra_smp_ops;
extern int tegra_cpu_kill(unsigned int cpu);
extern void tegra_cpu_die(unsigned int cpu);
extern int tegra_cpu_disable(unsigned int cpu);
......@@ -11,7 +11,6 @@
#include <linux/smp.h>
#include <linux/clk/tegra.h>
#include <asm/cacheflush.h>
#include <asm/smp_plat.h>
#include "fuse.h"
......@@ -47,15 +46,6 @@ void __ref tegra_cpu_die(unsigned int cpu)
BUG();
}
int tegra_cpu_disable(unsigned int cpu)
{
/*
* we don't allow CPU 0 to be shutdown (it is still too special
* e.g. clock tick interrupts)
*/
return cpu == 0 ? -EPERM : 0;
}
void __init tegra_hotplug_init(void)
{
if (!IS_ENABLED(CONFIG_HOTPLUG_CPU))
......
......@@ -173,6 +173,5 @@ struct smp_operations tegra_smp_ops __initdata = {
#ifdef CONFIG_HOTPLUG_CPU
.cpu_kill = tegra_cpu_kill,
.cpu_die = tegra_cpu_die,
.cpu_disable = tegra_cpu_disable,
#endif
};
......@@ -276,7 +276,7 @@ static struct tegra_emc_pdata *tegra_emc_fill_pdata(struct platform_device *pdev
int i;
WARN_ON(pdev->dev.platform_data);
BUG_ON(IS_ERR_OR_NULL(c));
BUG_ON(IS_ERR(c));
pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);
pdata->tables = devm_kzalloc(&pdev->dev, sizeof(*pdata->tables),
......
......@@ -149,14 +149,13 @@ struct device * __init ux500_soc_device_init(const char *soc_id)
soc_info_populate(soc_dev_attr, soc_id);
soc_dev = soc_device_register(soc_dev_attr);
if (IS_ERR_OR_NULL(soc_dev)) {
if (IS_ERR(soc_dev)) {
kfree(soc_dev_attr);
return NULL;
}
parent = soc_device_to_device(soc_dev);
if (!IS_ERR_OR_NULL(parent))
device_create_file(parent, &ux500_soc_attr);
device_create_file(parent, &ux500_soc_attr);
return parent;
}
......@@ -12,7 +12,6 @@
#include <linux/errno.h>
#include <linux/smp.h>
#include <asm/cacheflush.h>
#include <asm/smp_plat.h>
#include "setup.h"
......@@ -24,8 +23,6 @@
*/
void __ref ux500_cpu_die(unsigned int cpu)
{
flush_cache_all();
/* directly enter low power state, skipping secure registers */
for (;;) {
__asm__ __volatile__("dsb\n\t" "wfi\n\t"
......
......@@ -12,7 +12,6 @@
#include <linux/errno.h>
#include <linux/smp.h>
#include <asm/cacheflush.h>
#include <asm/smp_plat.h>
#include <asm/cp15.h>
......@@ -20,7 +19,6 @@ static inline void cpu_enter_lowpower(void)
{
unsigned int v;
flush_cache_all();
asm volatile(
"mcr p15, 0, %1, c7, c5, 0\n"
" mcr p15, 0, %1, c7, c10, 4\n"
......
......@@ -397,6 +397,13 @@ config CPU_V7
select CPU_PABRT_V7
select CPU_TLB_V7 if MMU
config CPU_THUMBONLY
bool
# There are no CPUs available with MMU that don't implement an ARM ISA:
depends on !MMU
help
Select this if your CPU doesn't support the 32-bit ARM instruction set.
# Figure out what processor architecture version we should be using.
# This defines the compiler instruction set which depends on the machine type.
config CPU_32v3
......@@ -605,7 +612,7 @@ config ARCH_DMA_ADDR_T_64BIT
bool
config ARM_THUMB
bool "Support Thumb user binaries"
bool "Support Thumb user binaries" if !CPU_THUMBONLY
depends on CPU_ARM720T || CPU_ARM740T || CPU_ARM920T || CPU_ARM922T || CPU_ARM925T || CPU_ARM926T || CPU_ARM940T || CPU_ARM946E || CPU_ARM1020 || CPU_ARM1020E || CPU_ARM1022 || CPU_ARM1026 || CPU_XSCALE || CPU_XSC3 || CPU_MOHAWK || CPU_V6 || CPU_V6K || CPU_V7 || CPU_FEROCEON
default y
help
......
......@@ -961,12 +961,14 @@ static int __init alignment_init(void)
return -ENOMEM;
#endif
#ifdef CONFIG_CPU_CP15
if (cpu_is_v6_unaligned()) {
cr_alignment &= ~CR_A;
cr_no_alignment &= ~CR_A;
set_cr(cr_alignment);
ai_usermode = safe_usermode(ai_usermode, false);
}
#endif
hook_fault_code(FAULT_CODE_ALIGNMENT, do_alignment, SIGBUS, BUS_ADRALN,
"alignment exception");
......
......@@ -823,16 +823,17 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
if (PageHighMem(page)) {
if (len + offset > PAGE_SIZE)
len = PAGE_SIZE - offset;
vaddr = kmap_high_get(page);
if (vaddr) {
vaddr += offset;
op(vaddr, len, dir);
kunmap_high(page);
} else if (cache_is_vipt()) {
/* unmapped pages might still be cached */
if (cache_is_vipt_nonaliasing()) {
vaddr = kmap_atomic(page);
op(vaddr + offset, len, dir);
kunmap_atomic(vaddr);
} else {
vaddr = kmap_high_get(page);
if (vaddr) {
op(vaddr + offset, len, dir);
kunmap_high(page);
}
}
} else {
vaddr = page_address(page) + offset;
......
......@@ -170,15 +170,18 @@ void __flush_dcache_page(struct address_space *mapping, struct page *page)
if (!PageHighMem(page)) {
__cpuc_flush_dcache_area(page_address(page), PAGE_SIZE);
} else {
void *addr = kmap_high_get(page);
if (addr) {
__cpuc_flush_dcache_area(addr, PAGE_SIZE);
kunmap_high(page);
} else if (cache_is_vipt()) {
/* unmapped pages might still be cached */
void *addr;
if (cache_is_vipt_nonaliasing()) {
addr = kmap_atomic(page);
__cpuc_flush_dcache_area(addr, PAGE_SIZE);
kunmap_atomic(addr);
} else {
addr = kmap_high_get(page);
if (addr) {
__cpuc_flush_dcache_area(addr, PAGE_SIZE);
kunmap_high(page);
}
}
}
......
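Both hunks above encode the same rule for highmem pages: on a non-aliasing VIPT cache any temporary mapping reaches the right cache lines, so kmap_atomic() (which always succeeds) can be used; on an aliasing VIPT cache only the page's existing kmap alias is usable, hence kmap_high_get(). In sketch form:

if (cache_is_vipt_nonaliasing()) {
	void *va = kmap_atomic(page);	/* transient alias is fine */
	/* ... cache op on va ... */
	kunmap_atomic(va);
} else {
	void *va = kmap_high_get(page);	/* pin the real alias, if any */
	if (va) {
		/* ... cache op on va ... */
		kunmap_high(page);
	}
}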
......@@ -113,6 +113,7 @@ static struct cachepolicy cache_policies[] __initdata = {
}
};
#ifdef CONFIG_CPU_CP15
/*
* These are useful for identifying cache coherency
* problems by allowing the cache or the cache and
......@@ -211,6 +212,22 @@ void adjust_cr(unsigned long mask, unsigned long set)
}
#endif
#else /* ifdef CONFIG_CPU_CP15 */
static int __init early_cachepolicy(char *p)
{
pr_warning("cachepolicy kernel parameter not supported without cp15\n");
}
early_param("cachepolicy", early_cachepolicy);
static int __init noalign_setup(char *__unused)
{
pr_warning("noalign kernel parameter not supported without cp15\n");
}
__setup("noalign", noalign_setup);
#endif /* ifdef CONFIG_CPU_CP15 / else */
#define PROT_PTE_DEVICE L_PTE_PRESENT|L_PTE_YOUNG|L_PTE_DIRTY|L_PTE_XN
#define PROT_SECT_DEVICE PMD_TYPE_SECT|PMD_SECT_AP_WRITE
......
......@@ -80,12 +80,10 @@ ENTRY(cpu_v6_do_idle)
mov pc, lr
ENTRY(cpu_v6_dcache_clean_area)
#ifndef TLB_CAN_READ_FROM_L1_CACHE
1: mcr p15, 0, r0, c7, c10, 1 @ clean D entry
add r0, r0, #D_CACHE_LINE_SIZE
subs r1, r1, #D_CACHE_LINE_SIZE
bhi 1b
#endif
mov pc, lr
/*
......
......@@ -110,7 +110,8 @@ ENTRY(cpu_v7_set_pte_ext)
ARM( str r3, [r0, #2048]! )
THUMB( add r0, r0, #2048 )
THUMB( str r3, [r0] )
mcr p15, 0, r0, c7, c10, 1 @ flush_pte
ALT_SMP(mov pc,lr)
ALT_UP (mcr p15, 0, r0, c7, c10, 1) @ flush_pte
#endif
mov pc, lr
ENDPROC(cpu_v7_set_pte_ext)
......
......@@ -73,7 +73,8 @@ ENTRY(cpu_v7_set_pte_ext)
tst r3, #1 << (55 - 32) @ L_PTE_DIRTY
orreq r2, #L_PTE_RDONLY
1: strd r2, r3, [r0]
mcr p15, 0, r0, c7, c10, 1 @ flush_pte
ALT_SMP(mov pc, lr)
ALT_UP (mcr p15, 0, r0, c7, c10, 1) @ flush_pte
#endif
mov pc, lr
ENDPROC(cpu_v7_set_pte_ext)
......
......@@ -75,14 +75,14 @@ ENTRY(cpu_v7_do_idle)
ENDPROC(cpu_v7_do_idle)
ENTRY(cpu_v7_dcache_clean_area)
#ifndef TLB_CAN_READ_FROM_L1_CACHE
ALT_SMP(mov pc, lr) @ MP extensions imply L1 PTW
ALT_UP(W(nop))
dcache_line_size r2, r3
1: mcr p15, 0, r0, c7, c10, 1 @ clean D entry
add r0, r0, r2
subs r1, r1, r2
bhi 1b
dsb
#endif
mov pc, lr
ENDPROC(cpu_v7_dcache_clean_area)
......@@ -402,6 +402,8 @@ __v7_ca9mp_proc_info:
__v7_proc __v7_ca9mp_setup
.size __v7_ca9mp_proc_info, . - __v7_ca9mp_proc_info
#endif /* CONFIG_ARM_LPAE */
/*
* Marvell PJ4B processor.
*/
......@@ -411,7 +413,6 @@ __v7_pj4b_proc_info:
.long 0xfffffff0
__v7_proc __v7_pj4b_setup
.size __v7_pj4b_proc_info, . - __v7_pj4b_proc_info
#endif /* CONFIG_ARM_LPAE */
/*
* ARM Ltd. Cortex A7 processor.
......
......@@ -140,8 +140,7 @@ static int omap_dm_timer_prepare(struct omap_dm_timer *timer)
*/
if (!(timer->capability & OMAP_TIMER_NEEDS_RESET)) {
timer->fclk = clk_get(&timer->pdev->dev, "fck");
if (WARN_ON_ONCE(IS_ERR_OR_NULL(timer->fclk))) {
timer->fclk = NULL;
if (WARN_ON_ONCE(IS_ERR(timer->fclk))) {
dev_err(&timer->pdev->dev, ": No fclk handle.\n");
return -EINVAL;
}
......@@ -373,7 +372,7 @@ EXPORT_SYMBOL_GPL(omap_dm_timer_modify_idlect_mask);
struct clk *omap_dm_timer_get_fclk(struct omap_dm_timer *timer)
{
if (timer)
if (timer && !IS_ERR(timer->fclk))
return timer->fclk;
return NULL;
}
......@@ -482,7 +481,7 @@ int omap_dm_timer_set_source(struct omap_dm_timer *timer, int source)
if (pdata && pdata->set_timer_src)
return pdata->set_timer_src(timer->pdev, source);
if (!timer->fclk)
if (IS_ERR(timer->fclk))
return -EINVAL;
switch (source) {
......@@ -500,13 +499,13 @@ int omap_dm_timer_set_source(struct omap_dm_timer *timer, int source)
}
parent = clk_get(&timer->pdev->dev, parent_name);
if (IS_ERR_OR_NULL(parent)) {
if (IS_ERR(parent)) {
pr_err("%s: %s not found\n", __func__, parent_name);
return -EINVAL;
}
ret = clk_set_parent(timer->fclk, parent);
if (IS_ERR_VALUE(ret))
if (ret < 0)
pr_err("%s: failed to set %s as parent\n", __func__,
parent_name);
......@@ -808,6 +807,7 @@ static int omap_dm_timer_probe(struct platform_device *pdev)
return -ENOMEM;
}
timer->fclk = ERR_PTR(-ENODEV);
timer->io_base = devm_ioremap_resource(dev, mem);
if (IS_ERR(timer->io_base))
return PTR_ERR(timer->io_base);
......
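Initializing timer->fclk to ERR_PTR(-ENODEV) at probe time is what lets the other hunks replace NULL checks with IS_ERR(): the handle sits in a well-defined error state before clk_get() ever runs. The pattern in isolation, where want_clock and dev are hypothetical:

struct clk *fclk = ERR_PTR(-ENODEV);	/* not-yet-acquired marker */

if (want_clock)				/* hypothetical condition */
	fclk = clk_get(dev, "fck");

if (IS_ERR(fclk))			/* covers both cases uniformly */
	return -EINVAL;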
......@@ -16,7 +16,7 @@
# are merged into mainline or have been edited in the machine database
# within the last 12 months. References to machine_is_NAME() do not count!
#
# Last update: Thu Apr 26 08:44:23 2012
# Last update: Fri Mar 22 17:24:50 2013
#
# machine_is_xxx CONFIG_xxxx MACH_TYPE_xxx number
#
......@@ -64,8 +64,8 @@ h7201 ARCH_H7201 H7201 161
h7202 ARCH_H7202 H7202 162
iq80321 ARCH_IQ80321 IQ80321 169
ks8695 ARCH_KS8695 KS8695 180
karo ARCH_KARO KARO 190
smdk2410 ARCH_SMDK2410 SMDK2410 193
ceiva ARCH_CEIVA CEIVA 200
voiceblue MACH_VOICEBLUE VOICEBLUE 218
h5400 ARCH_H5400 H5400 220
omap_innovator MACH_OMAP_INNOVATOR OMAP_INNOVATOR 234
......@@ -95,6 +95,7 @@ lpd7a400 MACH_LPD7A400 LPD7A400 389
lpd7a404 MACH_LPD7A404 LPD7A404 390
csb337 MACH_CSB337 CSB337 399
mainstone MACH_MAINSTONE MAINSTONE 406
lite300 MACH_LITE300 LITE300 408
xcep MACH_XCEP XCEP 413
arcom_vulcan MACH_ARCOM_VULCAN ARCOM_VULCAN 414
nomadik MACH_NOMADIK NOMADIK 420
......@@ -131,12 +132,14 @@ kb9200 MACH_KB9200 KB9200 612
sx1 MACH_SX1 SX1 613
ixdp465 MACH_IXDP465 IXDP465 618
ixdp2351 MACH_IXDP2351 IXDP2351 619
cm4008 MACH_CM4008 CM4008 624
iq80332 MACH_IQ80332 IQ80332 629
gtwx5715 MACH_GTWX5715 GTWX5715 641
csb637 MACH_CSB637 CSB637 648
n30 MACH_N30 N30 656
nec_mp900 MACH_NEC_MP900 NEC_MP900 659
kafa MACH_KAFA KAFA 662
cm41xx MACH_CM41XX CM41XX 672
ts72xx MACH_TS72XX TS72XX 673
otom MACH_OTOM OTOM 680
nexcoder_2440 MACH_NEXCODER_2440 NEXCODER_2440 681
......@@ -149,6 +152,7 @@ colibri MACH_COLIBRI COLIBRI 729
gateway7001 MACH_GATEWAY7001 GATEWAY7001 731
pcm027 MACH_PCM027 PCM027 732
anubis MACH_ANUBIS ANUBIS 734
xboardgp8 MACH_XBOARDGP8 XBOARDGP8 742
akita MACH_AKITA AKITA 744
e330 MACH_E330 E330 753
nokia770 MACH_NOKIA770 NOKIA770 755
......@@ -157,9 +161,11 @@ edb9315a MACH_EDB9315A EDB9315A 772
stargate2 MACH_STARGATE2 STARGATE2 774
intelmote2 MACH_INTELMOTE2 INTELMOTE2 775
trizeps4 MACH_TRIZEPS4 TRIZEPS4 776
pnx4008 MACH_PNX4008 PNX4008 782
cpuat91 MACH_CPUAT91 CPUAT91 787
iq81340sc MACH_IQ81340SC IQ81340SC 799
iq81340mc MACH_IQ81340MC IQ81340MC 801
se4200 MACH_SE4200 SE4200 809
micro9 MACH_MICRO9 MICRO9 811
micro9l MACH_MICRO9L MICRO9L 812
omap_palmte MACH_OMAP_PALMTE OMAP_PALMTE 817
......@@ -178,6 +184,7 @@ mx21ads MACH_MX21ADS MX21ADS 851
ams_delta MACH_AMS_DELTA AMS_DELTA 862
nas100d MACH_NAS100D NAS100D 865
magician MACH_MAGICIAN MAGICIAN 875
cm4002 MACH_CM4002 CM4002 876
nxdkn MACH_NXDKN NXDKN 880
palmtx MACH_PALMTX PALMTX 885
s3c2413 MACH_S3C2413 S3C2413 887
......@@ -203,7 +210,6 @@ omap_fsample MACH_OMAP_FSAMPLE OMAP_FSAMPLE 970
snapper_cl15 MACH_SNAPPER_CL15 SNAPPER_CL15 986
omap_palmz71 MACH_OMAP_PALMZ71 OMAP_PALMZ71 993
smdk2412 MACH_SMDK2412 SMDK2412 1009
bkde303 MACH_BKDE303 BKDE303 1021
smdk2413 MACH_SMDK2413 SMDK2413 1022
aml_m5900 MACH_AML_M5900 AML_M5900 1024
balloon3 MACH_BALLOON3 BALLOON3 1029
......@@ -214,6 +220,7 @@ fsg MACH_FSG FSG 1091
at91sam9260ek MACH_AT91SAM9260EK AT91SAM9260EK 1099
glantank MACH_GLANTANK GLANTANK 1100
n2100 MACH_N2100 N2100 1101
im42xx MACH_IM42XX IM42XX 1105
qt2410 MACH_QT2410 QT2410 1108
kixrp435 MACH_KIXRP435 KIXRP435 1109
cc9p9360dev MACH_CC9P9360DEV CC9P9360DEV 1114
......@@ -247,6 +254,7 @@ csb726 MACH_CSB726 CSB726 1359
davinci_dm6467_evm MACH_DAVINCI_DM6467_EVM DAVINCI_DM6467_EVM 1380
davinci_dm355_evm MACH_DAVINCI_DM355_EVM DAVINCI_DM355_EVM 1381
littleton MACH_LITTLETON LITTLETON 1388
im4004 MACH_IM4004 IM4004 1400
realview_pb11mp MACH_REALVIEW_PB11MP REALVIEW_PB11MP 1407
mx27_3ds MACH_MX27_3DS MX27_3DS 1430
halibut MACH_HALIBUT HALIBUT 1439
......@@ -268,6 +276,7 @@ dns323 MACH_DNS323 DNS323 1542
omap3_beagle MACH_OMAP3_BEAGLE OMAP3_BEAGLE 1546
nokia_n810 MACH_NOKIA_N810 NOKIA_N810 1548
pcm038 MACH_PCM038 PCM038 1551
sg310 MACH_SG310 SG310 1564
ts209 MACH_TS209 TS209 1565
at91cap9adk MACH_AT91CAP9ADK AT91CAP9ADK 1566
mx31moboard MACH_MX31MOBOARD MX31MOBOARD 1574
......@@ -371,7 +380,6 @@ pcm043 MACH_PCM043 PCM043 2072
sheevaplug MACH_SHEEVAPLUG SHEEVAPLUG 2097
avengers_lite MACH_AVENGERS_LITE AVENGERS_LITE 2104
mx51_babbage MACH_MX51_BABBAGE MX51_BABBAGE 2125
tx37 MACH_TX37 TX37 2127
rd78x00_masa MACH_RD78X00_MASA RD78X00_MASA 2135
dm355_leopard MACH_DM355_LEOPARD DM355_LEOPARD 2138
ts219 MACH_TS219 TS219 2139
......@@ -380,12 +388,12 @@ davinci_da850_evm MACH_DAVINCI_DA850_EVM DAVINCI_DA850_EVM 2157
at91sam9g10ek MACH_AT91SAM9G10EK AT91SAM9G10EK 2159
omap_4430sdp MACH_OMAP_4430SDP OMAP_4430SDP 2160
magx_zn5 MACH_MAGX_ZN5 MAGX_ZN5 2162
tx25 MACH_TX25 TX25 2177
omap3_torpedo MACH_OMAP3_TORPEDO OMAP3_TORPEDO 2178
anw6410 MACH_ANW6410 ANW6410 2183
imx27_visstrim_m10 MACH_IMX27_VISSTRIM_M10 IMX27_VISSTRIM_M10 2187
portuxg20 MACH_PORTUXG20 PORTUXG20 2191
smdkc110 MACH_SMDKC110 SMDKC110 2193
cabespresso MACH_CABESPRESSO CABESPRESSO 2194
omap3517evm MACH_OMAP3517EVM OMAP3517EVM 2200
netspace_v2 MACH_NETSPACE_V2 NETSPACE_V2 2201
netspace_max_v2 MACH_NETSPACE_MAX_V2 NETSPACE_MAX_V2 2202
......@@ -404,6 +412,7 @@ bigdisk MACH_BIGDISK BIGDISK 2283
at91sam9g20ek_2mmc MACH_AT91SAM9G20EK_2MMC AT91SAM9G20EK_2MMC 2288
bcmring MACH_BCMRING BCMRING 2289
mahimahi MACH_MAHIMAHI MAHIMAHI 2304
cerebric MACH_CEREBRIC CEREBRIC 2311
smdk6442 MACH_SMDK6442 SMDK6442 2324
openrd_base MACH_OPENRD_BASE OPENRD_BASE 2325
devkit8000 MACH_DEVKIT8000 DEVKIT8000 2330
......@@ -423,10 +432,10 @@ raumfeld_rc MACH_RAUMFELD_RC RAUMFELD_RC 2413
raumfeld_connector MACH_RAUMFELD_CONNECTOR RAUMFELD_CONNECTOR 2414
raumfeld_speaker MACH_RAUMFELD_SPEAKER RAUMFELD_SPEAKER 2415
tnetv107x MACH_TNETV107X TNETV107X 2418
mx51_m2id MACH_MX51_M2ID MX51_M2ID 2428
smdkv210 MACH_SMDKV210 SMDKV210 2456
omap_zoom3 MACH_OMAP_ZOOM3 OMAP_ZOOM3 2464
omap_3630sdp MACH_OMAP_3630SDP OMAP_3630SDP 2465
cybook2440 MACH_CYBOOK2440 CYBOOK2440 2466
smartq7 MACH_SMARTQ7 SMARTQ7 2479
watson_efm_plugin MACH_WATSON_EFM_PLUGIN WATSON_EFM_PLUGIN 2491
g4evm MACH_G4EVM G4EVM 2493
......@@ -434,12 +443,10 @@ omapl138_hawkboard MACH_OMAPL138_HAWKBOARD OMAPL138_HAWKBOARD 2495
ts41x MACH_TS41X TS41X 2502
phy3250 MACH_PHY3250 PHY3250 2511
mini6410 MACH_MINI6410 MINI6410 2520
tx51 MACH_TX51 TX51 2529
mx28evk MACH_MX28EVK MX28EVK 2531
smartq5 MACH_SMARTQ5 SMARTQ5 2534
davinci_dm6467tevm MACH_DAVINCI_DM6467TEVM DAVINCI_DM6467TEVM 2548
mxt_td60 MACH_MXT_TD60 MXT_TD60 2550
pca101 MACH_PCA101 PCA101 2595
capc7117 MACH_CAPC7117 CAPC7117 2612
icontrol MACH_ICONTROL ICONTROL 2624
gplugd MACH_GPLUGD GPLUGD 2625
......@@ -465,6 +472,7 @@ igep0030 MACH_IGEP0030 IGEP0030 2717
sbc3530 MACH_SBC3530 SBC3530 2722
saarb MACH_SAARB SAARB 2727
harmony MACH_HARMONY HARMONY 2731
cybook_orizon MACH_CYBOOK_ORIZON CYBOOK_ORIZON 2733
msm7x30_fluid MACH_MSM7X30_FLUID MSM7X30_FLUID 2741
cm_t3517 MACH_CM_T3517 CM_T3517 2750
wbd222 MACH_WBD222 WBD222 2753
......@@ -480,10 +488,8 @@ eukrea_cpuimx35sd MACH_EUKREA_CPUIMX35SD EUKREA_CPUIMX35SD 2821
eukrea_cpuimx51sd MACH_EUKREA_CPUIMX51SD EUKREA_CPUIMX51SD 2822
eukrea_cpuimx51 MACH_EUKREA_CPUIMX51 EUKREA_CPUIMX51 2823
smdkc210 MACH_SMDKC210 SMDKC210 2838
pcaal1 MACH_PCAAL1 PCAAL1 2843
t5325 MACH_T5325 T5325 2846
income MACH_INCOME INCOME 2849
mx257sx MACH_MX257SX MX257SX 2861
goni MACH_GONI GONI 2862
bv07 MACH_BV07 BV07 2882
openrd_ultimate MACH_OPENRD_ULTIMATE OPENRD_ULTIMATE 2884
......@@ -491,7 +497,6 @@ devixp MACH_DEVIXP DEVIXP 2885
miccpt MACH_MICCPT MICCPT 2886
mic256 MACH_MIC256 MIC256 2887
u5500 MACH_U5500 U5500 2890
pov15hd MACH_POV15HD POV15HD 2910
linkstation_lschl MACH_LINKSTATION_LSCHL LINKSTATION_LSCHL 2913
smdkv310 MACH_SMDKV310 SMDKV310 2925
wm8505_7in_netbook MACH_WM8505_7IN_NETBOOK WM8505_7IN_NETBOOK 2928
......@@ -518,7 +523,6 @@ prima2_evb MACH_PRIMA2_EVB PRIMA2_EVB 3103
paz00 MACH_PAZ00 PAZ00 3128
acmenetusfoxg20 MACH_ACMENETUSFOXG20 ACMENETUSFOXG20 3129
ag5evm MACH_AG5EVM AG5EVM 3189
tsunagi MACH_TSUNAGI TSUNAGI 3197
ics_if_voip MACH_ICS_IF_VOIP ICS_IF_VOIP 3206
wlf_cragg_6410 MACH_WLF_CRAGG_6410 WLF_CRAGG_6410 3207
trimslice MACH_TRIMSLICE TRIMSLICE 3209
......@@ -529,8 +533,6 @@ msm8960_sim MACH_MSM8960_SIM MSM8960_SIM 3230
msm8960_rumi3 MACH_MSM8960_RUMI3 MSM8960_RUMI3 3231
gsia18s MACH_GSIA18S GSIA18S 3234
mx53_loco MACH_MX53_LOCO MX53_LOCO 3273
tx53 MACH_TX53 TX53 3279
encore MACH_ENCORE ENCORE 3284
wario MACH_WARIO WARIO 3288
cm_t3730 MACH_CM_T3730 CM_T3730 3290
hrefv60 MACH_HREFV60 HREFV60 3293
......@@ -538,603 +540,24 @@ armlex4210 MACH_ARMLEX4210 ARMLEX4210 3361
snowball MACH_SNOWBALL SNOWBALL 3363
xilinx_ep107 MACH_XILINX_EP107 XILINX_EP107 3378
nuri MACH_NURI NURI 3379
wtplug MACH_WTPLUG WTPLUG 3412
veridis_a300 MACH_VERIDIS_A300 VERIDIS_A300 3448
origen MACH_ORIGEN ORIGEN 3455
wm8650refboard MACH_WM8650REFBOARD WM8650REFBOARD 3472
xarina MACH_XARINA XARINA 3476
sdvr MACH_SDVR SDVR 3478
acer_maya MACH_ACER_MAYA ACER_MAYA 3479
pico MACH_PICO PICO 3480
cwmx233 MACH_CWMX233 CWMX233 3481
cwam1808 MACH_CWAM1808 CWAM1808 3482
cwdm365 MACH_CWDM365 CWDM365 3483
mx51_moray MACH_MX51_MORAY MX51_MORAY 3484
thales_cbc MACH_THALES_CBC THALES_CBC 3485
bluepoint MACH_BLUEPOINT BLUEPOINT 3486
dir665 MACH_DIR665 DIR665 3487
acmerover1 MACH_ACMEROVER1 ACMEROVER1 3488
shooter_ct MACH_SHOOTER_CT SHOOTER_CT 3489
bliss MACH_BLISS BLISS 3490
blissc MACH_BLISSC BLISSC 3491
thales_adc MACH_THALES_ADC THALES_ADC 3492
ubisys_p9d_evp MACH_UBISYS_P9D_EVP UBISYS_P9D_EVP 3493
atdgp318 MACH_ATDGP318 ATDGP318 3494
dma210u MACH_DMA210U DMA210U 3495
em_t3 MACH_EM_T3 EM_T3 3496
htx3250 MACH_HTX3250 HTX3250 3497
g50 MACH_G50 G50 3498
eco5 MACH_ECO5 ECO5 3499
wintergrasp MACH_WINTERGRASP WINTERGRASP 3500
puro MACH_PURO PURO 3501
shooter_k MACH_SHOOTER_K SHOOTER_K 3502
nspire MACH_NSPIRE NSPIRE 3503
mickxx MACH_MICKXX MICKXX 3504
lxmb MACH_LXMB LXMB 3505
adam MACH_ADAM ADAM 3507
b1004 MACH_B1004 B1004 3508
oboea MACH_OBOEA OBOEA 3509
a1015 MACH_A1015 A1015 3510
robin_vbdt30 MACH_ROBIN_VBDT30 ROBIN_VBDT30 3511
tegra_enterprise MACH_TEGRA_ENTERPRISE TEGRA_ENTERPRISE 3512
rfl108200_mk10 MACH_RFL108200_MK10 RFL108200_MK10 3513
rfl108300_mk16 MACH_RFL108300_MK16 RFL108300_MK16 3514
rover_v7 MACH_ROVER_V7 ROVER_V7 3515
miphone MACH_MIPHONE MIPHONE 3516
femtobts MACH_FEMTOBTS FEMTOBTS 3517
monopoli MACH_MONOPOLI MONOPOLI 3518
boss MACH_BOSS BOSS 3519
davinci_dm368_vtam MACH_DAVINCI_DM368_VTAM DAVINCI_DM368_VTAM 3520
clcon MACH_CLCON CLCON 3521
nokia_rm696 MACH_NOKIA_RM696 NOKIA_RM696 3522
tahiti MACH_TAHITI TAHITI 3523
fighter MACH_FIGHTER FIGHTER 3524
sgh_i710 MACH_SGH_I710 SGH_I710 3525
integreproscb MACH_INTEGREPROSCB INTEGREPROSCB 3526
monza MACH_MONZA MONZA 3527
calimain MACH_CALIMAIN CALIMAIN 3528
mx6q_sabreauto MACH_MX6Q_SABREAUTO MX6Q_SABREAUTO 3529
gma01x MACH_GMA01X GMA01X 3530
sbc51 MACH_SBC51 SBC51 3531
fit MACH_FIT FIT 3532
steelhead MACH_STEELHEAD STEELHEAD 3533
panther MACH_PANTHER PANTHER 3534
msm8960_liquid MACH_MSM8960_LIQUID MSM8960_LIQUID 3535
lexikonct MACH_LEXIKONCT LEXIKONCT 3536
ns2816_stb MACH_NS2816_STB NS2816_STB 3537
sei_mm2_lpc3250 MACH_SEI_MM2_LPC3250 SEI_MM2_LPC3250 3538
cmimx53 MACH_CMIMX53 CMIMX53 3539
sandwich MACH_SANDWICH SANDWICH 3540
chief MACH_CHIEF CHIEF 3541
pogo_e02 MACH_POGO_E02 POGO_E02 3542
mikrap_x168 MACH_MIKRAP_X168 MIKRAP_X168 3543
htcmozart MACH_HTCMOZART HTCMOZART 3544
htcgold MACH_HTCGOLD HTCGOLD 3545
mt72xx MACH_MT72XX MT72XX 3546
mx51_ivy MACH_MX51_IVY MX51_IVY 3547
mx51_lvd MACH_MX51_LVD MX51_LVD 3548
omap3_wiser2 MACH_OMAP3_WISER2 OMAP3_WISER2 3549
dreamplug MACH_DREAMPLUG DREAMPLUG 3550
cobas_c_111 MACH_COBAS_C_111 COBAS_C_111 3551
cobas_u_411 MACH_COBAS_U_411 COBAS_U_411 3552
hssd MACH_HSSD HSSD 3553
iom35x MACH_IOM35X IOM35X 3554
psom_omap MACH_PSOM_OMAP PSOM_OMAP 3555
iphone_2g MACH_IPHONE_2G IPHONE_2G 3556
iphone_3g MACH_IPHONE_3G IPHONE_3G 3557
ipod_touch_1g MACH_IPOD_TOUCH_1G IPOD_TOUCH_1G 3558
pharos_tpc MACH_PHAROS_TPC PHAROS_TPC 3559
mx53_hydra MACH_MX53_HYDRA MX53_HYDRA 3560
ns2816_dev_board MACH_NS2816_DEV_BOARD NS2816_DEV_BOARD 3561
iphone_3gs MACH_IPHONE_3GS IPHONE_3GS 3562
iphone_4 MACH_IPHONE_4 IPHONE_4 3563
ipod_touch_4g MACH_IPOD_TOUCH_4G IPOD_TOUCH_4G 3564
dragon_e1100 MACH_DRAGON_E1100 DRAGON_E1100 3565
topside MACH_TOPSIDE TOPSIDE 3566
irisiii MACH_IRISIII IRISIII 3567
deto_macarm9 MACH_DETO_MACARM9 DETO_MACARM9 3568
eti_d1 MACH_ETI_D1 ETI_D1 3569
som3530sdk MACH_SOM3530SDK SOM3530SDK 3570
oc_engine MACH_OC_ENGINE OC_ENGINE 3571
apq8064_sim MACH_APQ8064_SIM APQ8064_SIM 3572
alps MACH_ALPS ALPS 3575
tny_t3730 MACH_TNY_T3730 TNY_T3730 3576
geryon_nfe MACH_GERYON_NFE GERYON_NFE 3577
ns2816_ref_board MACH_NS2816_REF_BOARD NS2816_REF_BOARD 3578
silverstone MACH_SILVERSTONE SILVERSTONE 3579
mtt2440 MACH_MTT2440 MTT2440 3580
ynicdb MACH_YNICDB YNICDB 3581
bct MACH_BCT BCT 3582
tuscan MACH_TUSCAN TUSCAN 3583
xbt_sam9g45 MACH_XBT_SAM9G45 XBT_SAM9G45 3584
enbw_cmc MACH_ENBW_CMC ENBW_CMC 3585
ch104mx257 MACH_CH104MX257 CH104MX257 3587
openpri MACH_OPENPRI OPENPRI 3588
am335xevm MACH_AM335XEVM AM335XEVM 3589
picodmb MACH_PICODMB PICODMB 3590
waluigi MACH_WALUIGI WALUIGI 3591
punicag7 MACH_PUNICAG7 PUNICAG7 3592
ipad_1g MACH_IPAD_1G IPAD_1G 3593
appletv_2g MACH_APPLETV_2G APPLETV_2G 3594
mach_ecog45 MACH_MACH_ECOG45 MACH_ECOG45 3595
ait_cam_enc_4xx MACH_AIT_CAM_ENC_4XX AIT_CAM_ENC_4XX 3596
runnymede MACH_RUNNYMEDE RUNNYMEDE 3597
play MACH_PLAY PLAY 3598
hw90260 MACH_HW90260 HW90260 3599
tagh MACH_TAGH TAGH 3600
filbert MACH_FILBERT FILBERT 3601
getinge_netcomv3 MACH_GETINGE_NETCOMV3 GETINGE_NETCOMV3 3602
cw20 MACH_CW20 CW20 3603
cinema MACH_CINEMA CINEMA 3604
cinema_tea MACH_CINEMA_TEA CINEMA_TEA 3605
cinema_coffee MACH_CINEMA_COFFEE CINEMA_COFFEE 3606
cinema_juice MACH_CINEMA_JUICE CINEMA_JUICE 3607
mx53_mirage2 MACH_MX53_MIRAGE2 MX53_MIRAGE2 3609
mx53_efikasb MACH_MX53_EFIKASB MX53_EFIKASB 3610
stm_b2000 MACH_STM_B2000 STM_B2000 3612
m28evk MACH_M28EVK M28EVK 3613
pda MACH_PDA PDA 3614
meraki_mr58 MACH_MERAKI_MR58 MERAKI_MR58 3615
kota2 MACH_KOTA2 KOTA2 3616
letcool MACH_LETCOOL LETCOOL 3617
mx27iat MACH_MX27IAT MX27IAT 3618
apollo_td MACH_APOLLO_TD APOLLO_TD 3619
arena MACH_ARENA ARENA 3620
gsngateway MACH_GSNGATEWAY GSNGATEWAY 3621
lf2000 MACH_LF2000 LF2000 3622
bonito MACH_BONITO BONITO 3623
asymptote MACH_ASYMPTOTE ASYMPTOTE 3624
bst2brd MACH_BST2BRD BST2BRD 3625
tx335s MACH_TX335S TX335S 3626
pelco_tesla MACH_PELCO_TESLA PELCO_TESLA 3627
rrhtestplat MACH_RRHTESTPLAT RRHTESTPLAT 3628
vidtonic_pro MACH_VIDTONIC_PRO VIDTONIC_PRO 3629
pl_apollo MACH_PL_APOLLO PL_APOLLO 3630
pl_phoenix MACH_PL_PHOENIX PL_PHOENIX 3631
m28cu3 MACH_M28CU3 M28CU3 3632
vvbox_hd MACH_VVBOX_HD VVBOX_HD 3633
coreware_sam9260_ MACH_COREWARE_SAM9260_ COREWARE_SAM9260_ 3634
marmaduke MACH_MARMADUKE MARMADUKE 3635
amg_xlcore_camera MACH_AMG_XLCORE_CAMERA AMG_XLCORE_CAMERA 3636
omap3_egf MACH_OMAP3_EGF OMAP3_EGF 3637
smdk4212 MACH_SMDK4212 SMDK4212 3638
dnp9200 MACH_DNP9200 DNP9200 3639
tf101 MACH_TF101 TF101 3640
omap3silvio MACH_OMAP3SILVIO OMAP3SILVIO 3641
picasso2 MACH_PICASSO2 PICASSO2 3642
vangogh2 MACH_VANGOGH2 VANGOGH2 3643
olpc_xo_1_75 MACH_OLPC_XO_1_75 OLPC_XO_1_75 3644
gx400 MACH_GX400 GX400 3645
gs300 MACH_GS300 GS300 3646
acer_a9 MACH_ACER_A9 ACER_A9 3647
vivow_evm MACH_VIVOW_EVM VIVOW_EVM 3648
veloce_cxq MACH_VELOCE_CXQ VELOCE_CXQ 3649
veloce_cxm MACH_VELOCE_CXM VELOCE_CXM 3650
p1852 MACH_P1852 P1852 3651
naxy100 MACH_NAXY100 NAXY100 3652
taishan MACH_TAISHAN TAISHAN 3653
touchlink MACH_TOUCHLINK TOUCHLINK 3654
stm32f103ze MACH_STM32F103ZE STM32F103ZE 3655
mcx MACH_MCX MCX 3656
stm_nmhdk_fli7610 MACH_STM_NMHDK_FLI7610 STM_NMHDK_FLI7610 3657
top28x MACH_TOP28X TOP28X 3658
okl4vp_microvisor MACH_OKL4VP_MICROVISOR OKL4VP_MICROVISOR 3659
pop MACH_POP POP 3660
layer MACH_LAYER LAYER 3661
trondheim MACH_TRONDHEIM TRONDHEIM 3662
eva MACH_EVA EVA 3663
trust_taurus MACH_TRUST_TAURUS TRUST_TAURUS 3664
ns2816_huashan MACH_NS2816_HUASHAN NS2816_HUASHAN 3665
ns2816_yangcheng MACH_NS2816_YANGCHENG NS2816_YANGCHENG 3666
p852 MACH_P852 P852 3667
flea3 MACH_FLEA3 FLEA3 3668
bowfin MACH_BOWFIN BOWFIN 3669
mv88de3100 MACH_MV88DE3100 MV88DE3100 3670
pia_am35x MACH_PIA_AM35X PIA_AM35X 3671
cedar MACH_CEDAR CEDAR 3672
picasso_e MACH_PICASSO_E PICASSO_E 3673
samsung_e60 MACH_SAMSUNG_E60 SAMSUNG_E60 3674
sdvr_mini MACH_SDVR_MINI SDVR_MINI 3676
omap3_ij3k MACH_OMAP3_IJ3K OMAP3_IJ3K 3677
modasmc1 MACH_MODASMC1 MODASMC1 3678
apq8064_rumi3 MACH_APQ8064_RUMI3 APQ8064_RUMI3 3679
matrix506 MACH_MATRIX506 MATRIX506 3680
msm9615_mtp MACH_MSM9615_MTP MSM9615_MTP 3681
dm36x_spawndc MACH_DM36X_SPAWNDC DM36X_SPAWNDC 3682
sff792 MACH_SFF792 SFF792 3683
am335xiaevm MACH_AM335XIAEVM AM335XIAEVM 3684
g3c2440 MACH_G3C2440 G3C2440 3685
tion270 MACH_TION270 TION270 3686
w22q7arm02 MACH_W22Q7ARM02 W22Q7ARM02 3687
omap_cat MACH_OMAP_CAT OMAP_CAT 3688
at91sam9n12ek MACH_AT91SAM9N12EK AT91SAM9N12EK 3689
morrison MACH_MORRISON MORRISON 3690
svdu MACH_SVDU SVDU 3691
lpp01 MACH_LPP01 LPP01 3692
ubc283 MACH_UBC283 UBC283 3693
zeppelin MACH_ZEPPELIN ZEPPELIN 3694
motus MACH_MOTUS MOTUS 3695
neomainboard MACH_NEOMAINBOARD NEOMAINBOARD 3696
devkit3250 MACH_DEVKIT3250 DEVKIT3250 3697
devkit7000 MACH_DEVKIT7000 DEVKIT7000 3698
fmc_uic MACH_FMC_UIC FMC_UIC 3699
fmc_dcm MACH_FMC_DCM FMC_DCM 3700
batwm MACH_BATWM BATWM 3701
atlas6cb MACH_ATLAS6CB ATLAS6CB 3702
blue MACH_BLUE BLUE 3705
colorado MACH_COLORADO COLORADO 3706
popc MACH_POPC POPC 3707
promwad_jade MACH_PROMWAD_JADE PROMWAD_JADE 3708
amp MACH_AMP AMP 3709
gnet_amp MACH_GNET_AMP GNET_AMP 3710
toques MACH_TOQUES TOQUES 3711
apx4devkit MACH_APX4DEVKIT APX4DEVKIT 3712
dct_storm MACH_DCT_STORM DCT_STORM 3713
owl MACH_OWL OWL 3715
cogent_csb1741 MACH_COGENT_CSB1741 COGENT_CSB1741 3716
adillustra610 MACH_ADILLUSTRA610 ADILLUSTRA610 3718
ecafe_na04 MACH_ECAFE_NA04 ECAFE_NA04 3719
popct MACH_POPCT POPCT 3720
omap3_helena MACH_OMAP3_HELENA OMAP3_HELENA 3721
ach MACH_ACH ACH 3722
module_dtb MACH_MODULE_DTB MODULE_DTB 3723
oslo_elisabeth MACH_OSLO_ELISABETH OSLO_ELISABETH 3725
tt01 MACH_TT01 TT01 3726
msm8930_cdp MACH_MSM8930_CDP MSM8930_CDP 3727
msm8930_mtp MACH_MSM8930_MTP MSM8930_MTP 3728
msm8930_fluid MACH_MSM8930_FLUID MSM8930_FLUID 3729
ltu11 MACH_LTU11 LTU11 3730
am1808_spawnco MACH_AM1808_SPAWNCO AM1808_SPAWNCO 3731
flx6410 MACH_FLX6410 FLX6410 3732
mx6q_qsb MACH_MX6Q_QSB MX6Q_QSB 3733
mx53_plt424 MACH_MX53_PLT424 MX53_PLT424 3734
jasmine MACH_JASMINE JASMINE 3735
l138_owlboard_plus MACH_L138_OWLBOARD_PLUS L138_OWLBOARD_PLUS 3736
wr21 MACH_WR21 WR21 3737
peaboy MACH_PEABOY PEABOY 3739
mx28_plato MACH_MX28_PLATO MX28_PLATO 3740
kacom2 MACH_KACOM2 KACOM2 3741
slco MACH_SLCO SLCO 3742
imx51pico MACH_IMX51PICO IMX51PICO 3743
glink1 MACH_GLINK1 GLINK1 3744
diamond MACH_DIAMOND DIAMOND 3745
d9000 MACH_D9000 D9000 3746
w5300e01 MACH_W5300E01 W5300E01 3747
im6000 MACH_IM6000 IM6000 3748
mx51_fred51 MACH_MX51_FRED51 MX51_FRED51 3749
stm32f2 MACH_STM32F2 STM32F2 3750
ville MACH_VILLE VILLE 3751
ptip_murnau MACH_PTIP_MURNAU PTIP_MURNAU 3752
ptip_classic MACH_PTIP_CLASSIC PTIP_CLASSIC 3753
mx53grb MACH_MX53GRB MX53GRB 3754
gagarin MACH_GAGARIN GAGARIN 3755
nas2big MACH_NAS2BIG NAS2BIG 3757
superfemto MACH_SUPERFEMTO SUPERFEMTO 3758
teufel MACH_TEUFEL TEUFEL 3759
dinara MACH_DINARA DINARA 3760
vanquish MACH_VANQUISH VANQUISH 3761
zipabox1 MACH_ZIPABOX1 ZIPABOX1 3762
u9540 MACH_U9540 U9540 3763
jet MACH_JET JET 3764
smdk4412 MACH_SMDK4412 SMDK4412 3765
elite MACH_ELITE ELITE 3766
spear320_hmi MACH_SPEAR320_HMI SPEAR320_HMI 3767
ontario MACH_ONTARIO ONTARIO 3768
mx6q_sabrelite MACH_MX6Q_SABRELITE MX6Q_SABRELITE 3769
vc200 MACH_VC200 VC200 3770
msm7625a_ffa MACH_MSM7625A_FFA MSM7625A_FFA 3771
msm7625a_surf MACH_MSM7625A_SURF MSM7625A_SURF 3772
benthossbp MACH_BENTHOSSBP BENTHOSSBP 3773
smdk5210 MACH_SMDK5210 SMDK5210 3774
empq2300 MACH_EMPQ2300 EMPQ2300 3775
minipos MACH_MINIPOS MINIPOS 3776
omap5_sevm MACH_OMAP5_SEVM OMAP5_SEVM 3777
shelter MACH_SHELTER SHELTER 3778
omap3_devkit8500 MACH_OMAP3_DEVKIT8500 OMAP3_DEVKIT8500 3779
edgetd MACH_EDGETD EDGETD 3780
copperyard MACH_COPPERYARD COPPERYARD 3781
edge_u MACH_EDGE_U EDGE_U 3783
edge_td MACH_EDGE_TD EDGE_TD 3784
wdss MACH_WDSS WDSS 3785
dl_pb25 MACH_DL_PB25 DL_PB25 3786
dss11 MACH_DSS11 DSS11 3787
cpa MACH_CPA CPA 3788
aptp2000 MACH_APTP2000 APTP2000 3789
marzen MACH_MARZEN MARZEN 3790
st_turbine MACH_ST_TURBINE ST_TURBINE 3791
gtl_it3300 MACH_GTL_IT3300 GTL_IT3300 3792
mx6_mule MACH_MX6_MULE MX6_MULE 3793
v7pxa_dt MACH_V7PXA_DT V7PXA_DT 3794
v7mmp_dt MACH_V7MMP_DT V7MMP_DT 3795
dragon7 MACH_DRAGON7 DRAGON7 3796
krome MACH_KROME KROME 3797
oratisdante MACH_ORATISDANTE ORATISDANTE 3798
fathom MACH_FATHOM FATHOM 3799
dns325 MACH_DNS325 DNS325 3800
sarnen MACH_SARNEN SARNEN 3801
ubisys_g1 MACH_UBISYS_G1 UBISYS_G1 3802
mx53_pf1 MACH_MX53_PF1 MX53_PF1 3803
asanti MACH_ASANTI ASANTI 3804
volta MACH_VOLTA VOLTA 3805
knight MACH_KNIGHT KNIGHT 3807
beaglebone MACH_BEAGLEBONE BEAGLEBONE 3808
becker MACH_BECKER BECKER 3809
fc360 MACH_FC360 FC360 3810
pmi2_xls MACH_PMI2_XLS PMI2_XLS 3811
taranto MACH_TARANTO TARANTO 3812
plutux MACH_PLUTUX PLUTUX 3813
ipmp_medcom MACH_IPMP_MEDCOM IPMP_MEDCOM 3814
absolut MACH_ABSOLUT ABSOLUT 3815
awpb3 MACH_AWPB3 AWPB3 3816
nfp32xx_dt MACH_NFP32XX_DT NFP32XX_DT 3817
dl_pb53 MACH_DL_PB53 DL_PB53 3818
acu_ii MACH_ACU_II ACU_II 3819
avalon MACH_AVALON AVALON 3820
sphinx MACH_SPHINX SPHINX 3821
titan_t MACH_TITAN_T TITAN_T 3822
harvest_boris MACH_HARVEST_BORIS HARVEST_BORIS 3823
mach_msm7x30_m3s MACH_MACH_MSM7X30_M3S MACH_MSM7X30_M3S 3824
smdk5250 MACH_SMDK5250 SMDK5250 3825
imxt_lite MACH_IMXT_LITE IMXT_LITE 3826
imxt_std MACH_IMXT_STD IMXT_STD 3827
imxt_log MACH_IMXT_LOG IMXT_LOG 3828
imxt_nav MACH_IMXT_NAV IMXT_NAV 3829
imxt_full MACH_IMXT_FULL IMXT_FULL 3830
ag09015 MACH_AG09015 AG09015 3831
am3517_mt_ventoux MACH_AM3517_MT_VENTOUX AM3517_MT_VENTOUX 3832
dp1arm9 MACH_DP1ARM9 DP1ARM9 3833
picasso_m MACH_PICASSO_M PICASSO_M 3834
video_gadget MACH_VIDEO_GADGET VIDEO_GADGET 3835
mtt_om3x MACH_MTT_OM3X MTT_OM3X 3836
mx6q_arm2 MACH_MX6Q_ARM2 MX6Q_ARM2 3837
picosam9g45 MACH_PICOSAM9G45 PICOSAM9G45 3838
vpm_dm365 MACH_VPM_DM365 VPM_DM365 3839
bonfire MACH_BONFIRE BONFIRE 3840
mt2p2d MACH_MT2P2D MT2P2D 3841
sigpda01 MACH_SIGPDA01 SIGPDA01 3842
cn27 MACH_CN27 CN27 3843
mx25_cwtap MACH_MX25_CWTAP MX25_CWTAP 3844
apf28 MACH_APF28 APF28 3845
pelco_maxwell MACH_PELCO_MAXWELL PELCO_MAXWELL 3846
ge_phoenix MACH_GE_PHOENIX GE_PHOENIX 3847
empc_a500 MACH_EMPC_A500 EMPC_A500 3848
ims_arm9 MACH_IMS_ARM9 IMS_ARM9 3849
mini2416 MACH_MINI2416 MINI2416 3850
mini2450 MACH_MINI2450 MINI2450 3851
mini310 MACH_MINI310 MINI310 3852
spear_hurricane MACH_SPEAR_HURRICANE SPEAR_HURRICANE 3853
mt7208 MACH_MT7208 MT7208 3854
lpc178x MACH_LPC178X LPC178X 3855
farleys MACH_FARLEYS FARLEYS 3856
efm32gg_dk3750 MACH_EFM32GG_DK3750 EFM32GG_DK3750 3857
zeus_board MACH_ZEUS_BOARD ZEUS_BOARD 3858
cc51 MACH_CC51 CC51 3859
fxi_c210 MACH_FXI_C210 FXI_C210 3860
msm8627_cdp MACH_MSM8627_CDP MSM8627_CDP 3861
msm8627_mtp MACH_MSM8627_MTP MSM8627_MTP 3862
armadillo800eva MACH_ARMADILLO800EVA ARMADILLO800EVA 3863
primou MACH_PRIMOU PRIMOU 3864
primoc MACH_PRIMOC PRIMOC 3865
primoct MACH_PRIMOCT PRIMOCT 3866
a9500 MACH_A9500 A9500 3867
pluto MACH_PLUTO PLUTO 3869
acfx100 MACH_ACFX100 ACFX100 3870
msm8625_rumi3 MACH_MSM8625_RUMI3 MSM8625_RUMI3 3871
valente MACH_VALENTE VALENTE 3872
crfs_rfeye MACH_CRFS_RFEYE CRFS_RFEYE 3873
rfeye MACH_RFEYE RFEYE 3874
phidget_sbc3 MACH_PHIDGET_SBC3 PHIDGET_SBC3 3875
tcw_mika MACH_TCW_MIKA TCW_MIKA 3876
imx28_egf MACH_IMX28_EGF IMX28_EGF 3877
valente_wx MACH_VALENTE_WX VALENTE_WX 3878
huangshans MACH_HUANGSHANS HUANGSHANS 3879
bosphorus1 MACH_BOSPHORUS1 BOSPHORUS1 3880
prima MACH_PRIMA PRIMA 3881
evita_ulk MACH_EVITA_ULK EVITA_ULK 3884
merisc600 MACH_MERISC600 MERISC600 3885
dolak MACH_DOLAK DOLAK 3886
sbc53 MACH_SBC53 SBC53 3887
elite_ulk MACH_ELITE_ULK ELITE_ULK 3888
pov2 MACH_POV2 POV2 3889
ipod_touch_2g MACH_IPOD_TOUCH_2G IPOD_TOUCH_2G 3890
da850_pqab MACH_DA850_PQAB DA850_PQAB 3891
fermi MACH_FERMI FERMI 3892
ccardwmx28 MACH_CCARDWMX28 CCARDWMX28 3893
ccardmx28 MACH_CCARDMX28 CCARDMX28 3894
fs20_fcm2050 MACH_FS20_FCM2050 FS20_FCM2050 3895
kinetis MACH_KINETIS KINETIS 3896
kai MACH_KAI KAI 3897
bcthb2 MACH_BCTHB2 BCTHB2 3898
inels3_cu MACH_INELS3_CU INELS3_CU 3899
da850_apollo MACH_DA850_APOLLO DA850_APOLLO 3901
tracnas MACH_TRACNAS TRACNAS 3902
mityarm335x MACH_MITYARM335X MITYARM335X 3903
xcgz7x MACH_XCGZ7X XCGZ7X 3904
cubox MACH_CUBOX CUBOX 3905
terminator MACH_TERMINATOR TERMINATOR 3906
eye03 MACH_EYE03 EYE03 3907
kota3 MACH_KOTA3 KOTA3 3908
pscpe MACH_PSCPE PSCPE 3910
akt1100 MACH_AKT1100 AKT1100 3911
pcaaxl2 MACH_PCAAXL2 PCAAXL2 3912
primodd_ct MACH_PRIMODD_CT PRIMODD_CT 3913
nsbc MACH_NSBC NSBC 3914
meson2_skt MACH_MESON2_SKT MESON2_SKT 3915
meson2_ref MACH_MESON2_REF MESON2_REF 3916
ccardwmx28js MACH_CCARDWMX28JS CCARDWMX28JS 3917
ccardmx28js MACH_CCARDMX28JS CCARDMX28JS 3918
indico MACH_INDICO INDICO 3919
msm8960dt MACH_MSM8960DT MSM8960DT 3920
primods MACH_PRIMODS PRIMODS 3921
beluga_m1388 MACH_BELUGA_M1388 BELUGA_M1388 3922
primotd MACH_PRIMOTD PRIMOTD 3923
varan_master MACH_VARAN_MASTER VARAN_MASTER 3924
primodd MACH_PRIMODD PRIMODD 3925
jetduo MACH_JETDUO JETDUO 3926
mx53_umobo MACH_MX53_UMOBO MX53_UMOBO 3927
trats MACH_TRATS TRATS 3928
starcraft MACH_STARCRAFT STARCRAFT 3929
qseven_tegra2 MACH_QSEVEN_TEGRA2 QSEVEN_TEGRA2 3930
lichee_sun4i_devbd MACH_LICHEE_SUN4I_DEVBD LICHEE_SUN4I_DEVBD 3931
movenow MACH_MOVENOW MOVENOW 3932
golf_u MACH_GOLF_U GOLF_U 3933
msm7627a_evb MACH_MSM7627A_EVB MSM7627A_EVB 3934
rambo MACH_RAMBO RAMBO 3935
golfu MACH_GOLFU GOLFU 3936
mango310 MACH_MANGO310 MANGO310 3937
dns343 MACH_DNS343 DNS343 3938
var_som_om44 MACH_VAR_SOM_OM44 VAR_SOM_OM44 3939
naon MACH_NAON NAON 3940
vp4000 MACH_VP4000 VP4000 3941
impcard MACH_IMPCARD IMPCARD 3942
smoovcam MACH_SMOOVCAM SMOOVCAM 3943
cobham3725 MACH_COBHAM3725 COBHAM3725 3944
cobham3730 MACH_COBHAM3730 COBHAM3730 3945
cobham3703 MACH_COBHAM3703 COBHAM3703 3946
quetzal MACH_QUETZAL QUETZAL 3947
apq8064_cdp MACH_APQ8064_CDP APQ8064_CDP 3948
apq8064_mtp MACH_APQ8064_MTP APQ8064_MTP 3949
apq8064_fluid MACH_APQ8064_FLUID APQ8064_FLUID 3950
apq8064_liquid MACH_APQ8064_LIQUID APQ8064_LIQUID 3951
mango210 MACH_MANGO210 MANGO210 3952
mango100 MACH_MANGO100 MANGO100 3953
mango24 MACH_MANGO24 MANGO24 3954
mango64 MACH_MANGO64 MANGO64 3955
nsa320 MACH_NSA320 NSA320 3956
elv_ccu2 MACH_ELV_CCU2 ELV_CCU2 3957
triton_x00 MACH_TRITON_X00 TRITON_X00 3958
triton_1500_2000 MACH_TRITON_1500_2000 TRITON_1500_2000 3959
pogoplugv4 MACH_POGOPLUGV4 POGOPLUGV4 3960
venus_cl MACH_VENUS_CL VENUS_CL 3961
vulcano_g20 MACH_VULCANO_G20 VULCANO_G20 3962
sgs_i9100 MACH_SGS_I9100 SGS_I9100 3963
stsv2 MACH_STSV2 STSV2 3964
csb1724 MACH_CSB1724 CSB1724 3965
omapl138_lcdk MACH_OMAPL138_LCDK OMAPL138_LCDK 3966
pvd_mx25 MACH_PVD_MX25 PVD_MX25 3968
meson6_skt MACH_MESON6_SKT MESON6_SKT 3969
meson6_ref MACH_MESON6_REF MESON6_REF 3970
pxm MACH_PXM PXM 3971
pogoplugv3 MACH_POGOPLUGV3 POGOPLUGV3 3973
mlp89626 MACH_MLP89626 MLP89626 3974
iomegahmndce MACH_IOMEGAHMNDCE IOMEGAHMNDCE 3975
pogoplugv3pci MACH_POGOPLUGV3PCI POGOPLUGV3PCI 3976
bntv250 MACH_BNTV250 BNTV250 3977
mx53_qseven MACH_MX53_QSEVEN MX53_QSEVEN 3978
gtl_it1100 MACH_GTL_IT1100 GTL_IT1100 3979
mx6q_sabresd MACH_MX6Q_SABRESD MX6Q_SABRESD 3980
mt4 MACH_MT4 MT4 3981
jumbo_d MACH_JUMBO_D JUMBO_D 3982
jumbo_i MACH_JUMBO_I JUMBO_I 3983
fs20_dmp MACH_FS20_DMP FS20_DMP 3984
dns320 MACH_DNS320 DNS320 3985
mx28bacos MACH_MX28BACOS MX28BACOS 3986
tl80 MACH_TL80 TL80 3987
polatis_nic_1001 MACH_POLATIS_NIC_1001 POLATIS_NIC_1001 3988
tely MACH_TELY TELY 3989
u8520 MACH_U8520 U8520 3990
manta MACH_MANTA MANTA 3991
mpq8064_cdp MACH_MPQ8064_CDP MPQ8064_CDP 3993
mpq8064_dtv MACH_MPQ8064_DTV MPQ8064_DTV 3995
dm368som MACH_DM368SOM DM368SOM 3996
gprisb2 MACH_GPRISB2 GPRISB2 3997
chammid MACH_CHAMMID CHAMMID 3998
seoul2 MACH_SEOUL2 SEOUL2 3999
omap4_nooktablet MACH_OMAP4_NOOKTABLET OMAP4_NOOKTABLET 4000
aalto MACH_AALTO AALTO 4001
metro MACH_METRO METRO 4002
cydm3730 MACH_CYDM3730 CYDM3730 4003
tqma53 MACH_TQMA53 TQMA53 4004
msm7627a_qrd3 MACH_MSM7627A_QRD3 MSM7627A_QRD3 4005
mx28_canby MACH_MX28_CANBY MX28_CANBY 4006
tiger MACH_TIGER TIGER 4007
pcats_9307_type_a MACH_PCATS_9307_TYPE_A PCATS_9307_TYPE_A 4008
pcats_9307_type_o MACH_PCATS_9307_TYPE_O PCATS_9307_TYPE_O 4009
pcats_9307_type_r MACH_PCATS_9307_TYPE_R PCATS_9307_TYPE_R 4010
streamplug MACH_STREAMPLUG STREAMPLUG 4011
icechicken_dev MACH_ICECHICKEN_DEV ICECHICKEN_DEV 4012
hedgehog MACH_HEDGEHOG HEDGEHOG 4013
yusend_obc MACH_YUSEND_OBC YUSEND_OBC 4014
imxninja MACH_IMXNINJA IMXNINJA 4015
omap4_jarod MACH_OMAP4_JAROD OMAP4_JAROD 4016
eco5_pk MACH_ECO5_PK ECO5_PK 4017
qj2440 MACH_QJ2440 QJ2440 4018
mx6q_mercury MACH_MX6Q_MERCURY MX6Q_MERCURY 4019
cm6810 MACH_CM6810 CM6810 4020
omap4_torpedo MACH_OMAP4_TORPEDO OMAP4_TORPEDO 4021
nsa310 MACH_NSA310 NSA310 4022
tmx536 MACH_TMX536 TMX536 4023
ktt20 MACH_KTT20 KTT20 4024
dragonix MACH_DRAGONIX DRAGONIX 4025
lungching MACH_LUNGCHING LUNGCHING 4026
bulogics MACH_BULOGICS BULOGICS 4027
mx535_sx MACH_MX535_SX MX535_SX 4028
ngui3250 MACH_NGUI3250 NGUI3250 4029
salutec_dac MACH_SALUTEC_DAC SALUTEC_DAC 4030
loco MACH_LOCO LOCO 4031
ctera_plug_usi MACH_CTERA_PLUG_USI CTERA_PLUG_USI 4032
scepter MACH_SCEPTER SCEPTER 4033
sga MACH_SGA SGA 4034
p_81_j5 MACH_P_81_J5 P_81_J5 4035
p_81_o4 MACH_P_81_O4 P_81_O4 4036
msm8625_surf MACH_MSM8625_SURF MSM8625_SURF 4037
carallon_shark MACH_CARALLON_SHARK CARALLON_SHARK 4038
ordog MACH_ORDOG ORDOG 4040
puente_io MACH_PUENTE_IO PUENTE_IO 4041
msm8625_evb MACH_MSM8625_EVB MSM8625_EVB 4042
ev_am1707 MACH_EV_AM1707 EV_AM1707 4043
ev_am1707e2 MACH_EV_AM1707E2 EV_AM1707E2 4044
ev_am3517e2 MACH_EV_AM3517E2 EV_AM3517E2 4045
calabria MACH_CALABRIA CALABRIA 4046
ev_imx287 MACH_EV_IMX287 EV_IMX287 4047
erau MACH_ERAU ERAU 4048
sichuan MACH_SICHUAN SICHUAN 4049
davinci_da850 MACH_DAVINCI_DA850 DAVINCI_DA850 4051
omap138_trunarc MACH_OMAP138_TRUNARC OMAP138_TRUNARC 4052
bcm4761 MACH_BCM4761 BCM4761 4053
picasso_e2 MACH_PICASSO_E2 PICASSO_E2 4054
picasso_mf MACH_PICASSO_MF PICASSO_MF 4055
miro MACH_MIRO MIRO 4056
at91sam9g20ewon3 MACH_AT91SAM9G20EWON3 AT91SAM9G20EWON3 4057
yoyo MACH_YOYO YOYO 4058
windjkl MACH_WINDJKL WINDJKL 4059
monarudo MACH_MONARUDO MONARUDO 4060
batan MACH_BATAN BATAN 4061
tadao MACH_TADAO TADAO 4062
baso MACH_BASO BASO 4063
mahon MACH_MAHON MAHON 4064
villec2 MACH_VILLEC2 VILLEC2 4065
asi1230 MACH_ASI1230 ASI1230 4066
alaska MACH_ALASKA ALASKA 4067
swarco_shdsl2 MACH_SWARCO_SHDSL2 SWARCO_SHDSL2 4068
oxrtu MACH_OXRTU OXRTU 4069
omap5_panda MACH_OMAP5_PANDA OMAP5_PANDA 4070
c8000 MACH_C8000 C8000 4072
bje_display3_5 MACH_BJE_DISPLAY3_5 BJE_DISPLAY3_5 4073
picomod7 MACH_PICOMOD7 PICOMOD7 4074
picocom5 MACH_PICOCOM5 PICOCOM5 4075
qblissa8 MACH_QBLISSA8 QBLISSA8 4076
armstonea8 MACH_ARMSTONEA8 ARMSTONEA8 4077
netdcu14 MACH_NETDCU14 NETDCU14 4078
at91sam9x5_epiphan MACH_AT91SAM9X5_EPIPHAN AT91SAM9X5_EPIPHAN 4079
p2u MACH_P2U P2U 4080
doris MACH_DORIS DORIS 4081
j49 MACH_J49 J49 4082
vdss2e MACH_VDSS2E VDSS2E 4083
vc300 MACH_VC300 VC300 4084
ns115_pad_test MACH_NS115_PAD_TEST NS115_PAD_TEST 4085
ns115_pad_ref MACH_NS115_PAD_REF NS115_PAD_REF 4086
ns115_phone_test MACH_NS115_PHONE_TEST NS115_PHONE_TEST 4087
ns115_phone_ref MACH_NS115_PHONE_REF NS115_PHONE_REF 4088
golfc MACH_GOLFC GOLFC 4089
xerox_olympus MACH_XEROX_OLYMPUS XEROX_OLYMPUS 4090
mx6sl_arm2 MACH_MX6SL_ARM2 MX6SL_ARM2 4091
csb1701_csb1726 MACH_CSB1701_CSB1726 CSB1701_CSB1726 4092
at91sam9xeek MACH_AT91SAM9XEEK AT91SAM9XEEK 4093
ebv210 MACH_EBV210 EBV210 4094
msm7627a_qrd7 MACH_MSM7627A_QRD7 MSM7627A_QRD7 4095
svthin MACH_SVTHIN SVTHIN 4096
duovero MACH_DUOVERO DUOVERO 4097
chupacabra MACH_CHUPACABRA CHUPACABRA 4098
scorpion MACH_SCORPION SCORPION 4099
davinci_he_hmi10 MACH_DAVINCI_HE_HMI10 DAVINCI_HE_HMI10 4100
......@@ -1157,7 +580,6 @@ tam335x MACH_TAM335X TAM335X 4116
grouper MACH_GROUPER GROUPER 4117
mpcsa21_9g20 MACH_MPCSA21_9G20 MPCSA21_9G20 4118
m6u_cpu MACH_M6U_CPU M6U_CPU 4119
davinci_dp10 MACH_DAVINCI_DP10 DAVINCI_DP10 4120
ginkgo MACH_GINKGO GINKGO 4121
cgt_qmx6 MACH_CGT_QMX6 CGT_QMX6 4122
profpga MACH_PROFPGA PROFPGA 4123
......@@ -1204,3 +626,384 @@ baileys MACH_BAILEYS BAILEYS 4169
familybox MACH_FAMILYBOX FAMILYBOX 4170
ensemble_mx35 MACH_ENSEMBLE_MX35 ENSEMBLE_MX35 4171
sc_sps_1 MACH_SC_SPS_1 SC_SPS_1 4172
ucsimply_sam9260 MACH_UCSIMPLY_SAM9260 UCSIMPLY_SAM9260 4173
unicorn MACH_UNICORN UNICORN 4174
m9g45a MACH_M9G45A M9G45A 4175
mtwebif MACH_MTWEBIF MTWEBIF 4176
playstone MACH_PLAYSTONE PLAYSTONE 4177
chelsea MACH_CHELSEA CHELSEA 4178
bayern MACH_BAYERN BAYERN 4179
mitwo MACH_MITWO MITWO 4180
mx25_noah MACH_MX25_NOAH MX25_NOAH 4181
stm_b2020 MACH_STM_B2020 STM_B2020 4182
annax_src MACH_ANNAX_SRC ANNAX_SRC 4183
ionics_stratus MACH_IONICS_STRATUS IONICS_STRATUS 4184
hugo MACH_HUGO HUGO 4185
em300 MACH_EM300 EM300 4186
mmp3_qseven MACH_MMP3_QSEVEN MMP3_QSEVEN 4187
bosphorus2 MACH_BOSPHORUS2 BOSPHORUS2 4188
tt2200 MACH_TT2200 TT2200 4189
ocelot3 MACH_OCELOT3 OCELOT3 4190
tek_cobra MACH_TEK_COBRA TEK_COBRA 4191
protou MACH_PROTOU PROTOU 4192
msm8625_evt MACH_MSM8625_EVT MSM8625_EVT 4193
mx53_sellwood MACH_MX53_SELLWOOD MX53_SELLWOOD 4194
somiq_am35 MACH_SOMIQ_AM35 SOMIQ_AM35 4195
somiq_am37 MACH_SOMIQ_AM37 SOMIQ_AM37 4196
k2_plc_cl MACH_K2_PLC_CL K2_PLC_CL 4197
tc2 MACH_TC2 TC2 4198
dulex_j MACH_DULEX_J DULEX_J 4199
stm_b2044 MACH_STM_B2044 STM_B2044 4200
deluxe_j MACH_DELUXE_J DELUXE_J 4201
mango2443 MACH_MANGO2443 MANGO2443 4202
cp2dcg MACH_CP2DCG CP2DCG 4203
cp2dtg MACH_CP2DTG CP2DTG 4204
cp2dug MACH_CP2DUG CP2DUG 4205
var_som_am33 MACH_VAR_SOM_AM33 VAR_SOM_AM33 4206
pepper MACH_PEPPER PEPPER 4207
mango2450 MACH_MANGO2450 MANGO2450 4208
valente_wx_c9 MACH_VALENTE_WX_C9 VALENTE_WX_C9 4209
minitv MACH_MINITV MINITV 4210
u8540 MACH_U8540 U8540 4211
iv_atlas_i_z7e MACH_IV_ATLAS_I_Z7E IV_ATLAS_I_Z7E 4212
mach_type_sky MACH_MACH_TYPE_SKY MACH_TYPE_SKY 4214
bluesky MACH_BLUESKY BLUESKY 4215
ngrouter MACH_NGROUTER NGROUTER 4216
mx53_denetim MACH_MX53_DENETIM MX53_DENETIM 4217
opal MACH_OPAL OPAL 4218
gnet_us3gref MACH_GNET_US3GREF GNET_US3GREF 4219
gnet_nc3g MACH_GNET_NC3G GNET_NC3G 4220
gnet_ge3g MACH_GNET_GE3G GNET_GE3G 4221
adp2 MACH_ADP2 ADP2 4222
tqma28 MACH_TQMA28 TQMA28 4223
kacom3 MACH_KACOM3 KACOM3 4224
rrhdemo MACH_RRHDEMO RRHDEMO 4225
protodug MACH_PROTODUG PROTODUG 4226
lago MACH_LAGO LAGO 4227
ktt30 MACH_KTT30 KTT30 4228
ts43xx MACH_TS43XX TS43XX 4229
mx6q_denso MACH_MX6Q_DENSO MX6Q_DENSO 4230
comsat_gsmumts8 MACH_COMSAT_GSMUMTS8 COMSAT_GSMUMTS8 4231
dreamx MACH_DREAMX DREAMX 4232
thunderstonem MACH_THUNDERSTONEM THUNDERSTONEM 4233
yoyopad MACH_YOYOPAD YOYOPAD 4234
yoyopatient MACH_YOYOPATIENT YOYOPATIENT 4235
a10l MACH_A10L A10L 4236
mq60 MACH_MQ60 MQ60 4237
linkstation_lsql MACH_LINKSTATION_LSQL LINKSTATION_LSQL 4238
am3703gateway MACH_AM3703GATEWAY AM3703GATEWAY 4239
accipiter MACH_ACCIPITER ACCIPITER 4240
magnidug MACH_MAGNIDUG MAGNIDUG 4242
hydra MACH_HYDRA HYDRA 4243
sun3i MACH_SUN3I SUN3I 4244
stm_b2078 MACH_STM_B2078 STM_B2078 4245
at91sam9263deskv2 MACH_AT91SAM9263DESKV2 AT91SAM9263DESKV2 4246
deluxe_r MACH_DELUXE_R DELUXE_R 4247
p_98_v MACH_P_98_V P_98_V 4248
p_98_c MACH_P_98_C P_98_C 4249
davinci_am18xx_omn MACH_DAVINCI_AM18XX_OMN DAVINCI_AM18XX_OMN 4250
socfpga_cyclone5 MACH_SOCFPGA_CYCLONE5 SOCFPGA_CYCLONE5 4251
cabatuin MACH_CABATUIN CABATUIN 4252
yoyopad_ft MACH_YOYOPAD_FT YOYOPAD_FT 4253
dan2400evb MACH_DAN2400EVB DAN2400EVB 4254
dan3400evb MACH_DAN3400EVB DAN3400EVB 4255
edm_sf_imx6 MACH_EDM_SF_IMX6 EDM_SF_IMX6 4256
edm_cf_imx6 MACH_EDM_CF_IMX6 EDM_CF_IMX6 4257
vpos3xx MACH_VPOS3XX VPOS3XX 4258
vulcano_9x5 MACH_VULCANO_9X5 VULCANO_9X5 4259
spmp8000 MACH_SPMP8000 SPMP8000 4260
catalina MACH_CATALINA CATALINA 4261
rd88f5181l_fe MACH_RD88F5181L_FE RD88F5181L_FE 4262
mx535_mx MACH_MX535_MX MX535_MX 4263
armadillo840 MACH_ARMADILLO840 ARMADILLO840 4264
spc9000baseboard MACH_SPC9000BASEBOARD SPC9000BASEBOARD 4265
iris MACH_IRIS IRIS 4266
protodcg MACH_PROTODCG PROTODCG 4267
palmtree MACH_PALMTREE PALMTREE 4268
novena MACH_NOVENA NOVENA 4269
ma_um MACH_MA_UM MA_UM 4270
ma_am MACH_MA_AM MA_AM 4271
ems348 MACH_EMS348 EMS348 4272
cm_fx6 MACH_CM_FX6 CM_FX6 4273
arndale MACH_ARNDALE ARNDALE 4274
q5xr5 MACH_Q5XR5 Q5XR5 4275
willow MACH_WILLOW WILLOW 4276
omap3621_odyv3 MACH_OMAP3621_ODYV3 OMAP3621_ODYV3 4277
omapl138_presonus MACH_OMAPL138_PRESONUS OMAPL138_PRESONUS 4278
dvf99 MACH_DVF99 DVF99 4279
impression_j MACH_IMPRESSION_J IMPRESSION_J 4280
qblissa9 MACH_QBLISSA9 QBLISSA9 4281
robin_heliview10 MACH_ROBIN_HELIVIEW10 ROBIN_HELIVIEW10 4282
sun7i MACH_SUN7I SUN7I 4283
mx6q_hdmidongle MACH_MX6Q_HDMIDONGLE MX6Q_HDMIDONGLE 4284
mx6_sid2 MACH_MX6_SID2 MX6_SID2 4285
helios_v3 MACH_HELIOS_V3 HELIOS_V3 4286
helios_v4 MACH_HELIOS_V4 HELIOS_V4 4287
q7_imx6 MACH_Q7_IMX6 Q7_IMX6 4288
odroidx MACH_ODROIDX ODROIDX 4289
robpro MACH_ROBPRO ROBPRO 4290
research59if_mk1 MACH_RESEARCH59IF_MK1 RESEARCH59IF_MK1 4291
bobsleigh MACH_BOBSLEIGH BOBSLEIGH 4292
dcshgwt3 MACH_DCSHGWT3 DCSHGWT3 4293
gld1018 MACH_GLD1018 GLD1018 4294
ev10 MACH_EV10 EV10 4295
nitrogen6x MACH_NITROGEN6X NITROGEN6X 4296
p_107_bb MACH_P_107_BB P_107_BB 4297
evita_utl MACH_EVITA_UTL EVITA_UTL 4298
falconwing MACH_FALCONWING FALCONWING 4299
dct3 MACH_DCT3 DCT3 4300
cpx2e_cell MACH_CPX2E_CELL CPX2E_CELL 4301
amiro MACH_AMIRO AMIRO 4302
mx6q_brassboard MACH_MX6Q_BRASSBOARD MX6Q_BRASSBOARD 4303
dalmore MACH_DALMORE DALMORE 4304
omap3_portal7cp MACH_OMAP3_PORTAL7CP OMAP3_PORTAL7CP 4305
tegra_pluto MACH_TEGRA_PLUTO TEGRA_PLUTO 4306
mx6sl_evk MACH_MX6SL_EVK MX6SL_EVK 4307
m7 MACH_M7 M7 4308
pxm2 MACH_PXM2 PXM2 4309
haba_knx_lite MACH_HABA_KNX_LITE HABA_KNX_LITE 4310
tai MACH_TAI TAI 4311
prototd MACH_PROTOTD PROTOTD 4312
dst_tonto MACH_DST_TONTO DST_TONTO 4313
draco MACH_DRACO DRACO 4314
dxr2 MACH_DXR2 DXR2 4315
rut MACH_RUT RUT 4316
am180x_wsc MACH_AM180X_WSC AM180X_WSC 4317
deluxe_u MACH_DELUXE_U DELUXE_U 4318
deluxe_ul MACH_DELUXE_UL DELUXE_UL 4319
at91sam9260medths MACH_AT91SAM9260MEDTHS AT91SAM9260MEDTHS 4320
matrix516 MACH_MATRIX516 MATRIX516 4321
vid401x MACH_VID401X VID401X 4322
helios_v5 MACH_HELIOS_V5 HELIOS_V5 4323
playpaq2 MACH_PLAYPAQ2 PLAYPAQ2 4324
igam MACH_IGAM IGAM 4325
amico_i MACH_AMICO_I AMICO_I 4326
amico_e MACH_AMICO_E AMICO_E 4327
sentient_mm3_ck MACH_SENTIENT_MM3_CK SENTIENT_MM3_CK 4328
smx6 MACH_SMX6 SMX6 4329
pango MACH_PANGO PANGO 4330
ns115_stick MACH_NS115_STICK NS115_STICK 4331
bctrm3 MACH_BCTRM3 BCTRM3 4332
doctorws MACH_DOCTORWS DOCTORWS 4333
m2601 MACH_M2601 M2601 4334
vgg1111 MACH_VGG1111 VGG1111 4337
countach MACH_COUNTACH COUNTACH 4338
visstrim_sm20 MACH_VISSTRIM_SM20 VISSTRIM_SM20 4339
a639 MACH_A639 A639 4340
spacemonkey MACH_SPACEMONKEY SPACEMONKEY 4341
zpdu_stamp MACH_ZPDU_STAMP ZPDU_STAMP 4342
htc_g7_clone MACH_HTC_G7_CLONE HTC_G7_CLONE 4343
ft2080_corvus MACH_FT2080_CORVUS FT2080_CORVUS 4344
fisland MACH_FISLAND FISLAND 4345
zpdu MACH_ZPDU ZPDU 4346
urt MACH_URT URT 4347
conti_ovip MACH_CONTI_OVIP CONTI_OVIP 4348
omapl138_nagra MACH_OMAPL138_NAGRA OMAPL138_NAGRA 4349
da850_at3kp1 MACH_DA850_AT3KP1 DA850_AT3KP1 4350
da850_at3kp2 MACH_DA850_AT3KP2 DA850_AT3KP2 4351
surma MACH_SURMA SURMA 4352
stm_b2092 MACH_STM_B2092 STM_B2092 4353
mx535_ycr MACH_MX535_YCR MX535_YCR 4354
m7_wl MACH_M7_WL M7_WL 4355
m7_u MACH_M7_U M7_U 4356
omap3_stndt_evm MACH_OMAP3_STNDT_EVM OMAP3_STNDT_EVM 4357
m7_wlv MACH_M7_WLV M7_WLV 4358
xam3517 MACH_XAM3517 XAM3517 4359
a220 MACH_A220 A220 4360
aclima_odie MACH_ACLIMA_ODIE ACLIMA_ODIE 4361
vibble MACH_VIBBLE VIBBLE 4362
k2_u MACH_K2_U K2_U 4363
mx53_egf MACH_MX53_EGF MX53_EGF 4364
novpek_imx53 MACH_NOVPEK_IMX53 NOVPEK_IMX53 4365
novpek_imx6x MACH_NOVPEK_IMX6X NOVPEK_IMX6X 4366
mx25_smartbox MACH_MX25_SMARTBOX MX25_SMARTBOX 4367
eicg6410 MACH_EICG6410 EICG6410 4368
picasso_e3 MACH_PICASSO_E3 PICASSO_E3 4369
motonavigator MACH_MOTONAVIGATOR MOTONAVIGATOR 4370
varioconnect2 MACH_VARIOCONNECT2 VARIOCONNECT2 4371
deluxe_tw MACH_DELUXE_TW DELUXE_TW 4372
kore3 MACH_KORE3 KORE3 4374
mx6s_drs MACH_MX6S_DRS MX6S_DRS 4375
cmimx6 MACH_CMIMX6 CMIMX6 4376
roth MACH_ROTH ROTH 4377
eq4ux MACH_EQ4UX EQ4UX 4378
x1plus MACH_X1PLUS X1PLUS 4379
modimx27 MACH_MODIMX27 MODIMX27 4380
videon_hduac MACH_VIDEON_HDUAC VIDEON_HDUAC 4381
blackbird MACH_BLACKBIRD BLACKBIRD 4382
runmaster MACH_RUNMASTER RUNMASTER 4383
ceres MACH_CERES CERES 4384
nad435 MACH_NAD435 NAD435 4385
ns115_proto_type MACH_NS115_PROTO_TYPE NS115_PROTO_TYPE 4386
fs20_vcc MACH_FS20_VCC FS20_VCC 4387
meson6tv_skt MACH_MESON6TV_SKT MESON6TV_SKT 4389
keystone MACH_KEYSTONE KEYSTONE 4390
pcm052 MACH_PCM052 PCM052 4391
qrd_skud_prime MACH_QRD_SKUD_PRIME QRD_SKUD_PRIME 4393
guf_santaro MACH_GUF_SANTARO GUF_SANTARO 4395
sheepshead MACH_SHEEPSHEAD SHEEPSHEAD 4396
mx6_iwg15m_mxm MACH_MX6_IWG15M_MXM MX6_IWG15M_MXM 4397
mx6_iwg15m_q7 MACH_MX6_IWG15M_Q7 MX6_IWG15M_Q7 4398
at91sam9263if8mic MACH_AT91SAM9263IF8MIC AT91SAM9263IF8MIC 4399
marcopolo MACH_MARCOPOLO MARCOPOLO 4401
mx535_sdcr MACH_MX535_SDCR MX535_SDCR 4402
mx53_csb2733 MACH_MX53_CSB2733 MX53_CSB2733 4403
diva MACH_DIVA DIVA 4404
ncr_7744 MACH_NCR_7744 NCR_7744 4405
macallan MACH_MACALLAN MACALLAN 4406
wnr3500 MACH_WNR3500 WNR3500 4407
pgavrf MACH_PGAVRF PGAVRF 4408
helios_v6 MACH_HELIOS_V6 HELIOS_V6 4409
lcct MACH_LCCT LCCT 4410
csndug MACH_CSNDUG CSNDUG 4411
wandboard_imx6 MACH_WANDBOARD_IMX6 WANDBOARD_IMX6 4412
omap4_jet MACH_OMAP4_JET OMAP4_JET 4413
tegra_roth MACH_TEGRA_ROTH TEGRA_ROTH 4414
m7dcg MACH_M7DCG M7DCG 4415
m7dug MACH_M7DUG M7DUG 4416
m7dtg MACH_M7DTG M7DTG 4417
ap42x MACH_AP42X AP42X 4418
var_som_mx6 MACH_VAR_SOM_MX6 VAR_SOM_MX6 4419
pdlu MACH_PDLU PDLU 4420
hydrogen MACH_HYDROGEN HYDROGEN 4421
npa211e MACH_NPA211E NPA211E 4422
arcadia MACH_ARCADIA ARCADIA 4423
arcadia_l MACH_ARCADIA_L ARCADIA_L 4424
msm8930dt MACH_MSM8930DT MSM8930DT 4425
ktam3874 MACH_KTAM3874 KTAM3874 4426
cec4 MACH_CEC4 CEC4 4427
ape6evm MACH_APE6EVM APE6EVM 4428
tx6 MACH_TX6 TX6 4429
cfa10037 MACH_CFA10037 CFA10037 4431
ezp1000 MACH_EZP1000 EZP1000 4433
wgr826v MACH_WGR826V WGR826V 4434
exuma MACH_EXUMA EXUMA 4435
fregate MACH_FREGATE FREGATE 4436
osirisimx508 MACH_OSIRISIMX508 OSIRISIMX508 4437
st_exigo MACH_ST_EXIGO ST_EXIGO 4438
pismo MACH_PISMO PISMO 4439
atc7 MACH_ATC7 ATC7 4440
nspireclp MACH_NSPIRECLP NSPIRECLP 4441
nspiretp MACH_NSPIRETP NSPIRETP 4442
nspirecx MACH_NSPIRECX NSPIRECX 4443
maya MACH_MAYA MAYA 4444
wecct MACH_WECCT WECCT 4445
m2s MACH_M2S M2S 4446
msm8625q_evbd MACH_MSM8625Q_EVBD MSM8625Q_EVBD 4447
tiny210 MACH_TINY210 TINY210 4448
g3 MACH_G3 G3 4449
hurricane MACH_HURRICANE HURRICANE 4450
mx6_pod MACH_MX6_POD MX6_POD 4451
elondcn MACH_ELONDCN ELONDCN 4452
cwmx535 MACH_CWMX535 CWMX535 4453
m7_wlj MACH_M7_WLJ M7_WLJ 4454
qsp_arm MACH_QSP_ARM QSP_ARM 4455
msm8625q_skud MACH_MSM8625Q_SKUD MSM8625Q_SKUD 4456
htcmondrian MACH_HTCMONDRIAN HTCMONDRIAN 4457
watson_ead MACH_WATSON_EAD WATSON_EAD 4458
mitwoa MACH_MITWOA MITWOA 4459
omap3_wolverine MACH_OMAP3_WOLVERINE OMAP3_WOLVERINE 4460
mapletree MACH_MAPLETREE MAPLETREE 4461
msm8625_fih_sae MACH_MSM8625_FIH_SAE MSM8625_FIH_SAE 4462
epc35 MACH_EPC35 EPC35 4463
smartrtu MACH_SMARTRTU SMARTRTU 4464
rcm101 MACH_RCM101 RCM101 4465
amx_imx53_mxx MACH_AMX_IMX53_MXX AMX_IMX53_MXX 4466
acer_a12 MACH_ACER_A12 ACER_A12 4470
sbc6x MACH_SBC6X SBC6X 4471
u2 MACH_U2 U2 4472
smdk4270 MACH_SMDK4270 SMDK4270 4473
priscillag MACH_PRISCILLAG PRISCILLAG 4474
priscillac MACH_PRISCILLAC PRISCILLAC 4475
priscilla MACH_PRISCILLA PRISCILLA 4476
innova_shpu_v2 MACH_INNOVA_SHPU_V2 INNOVA_SHPU_V2 4477
mach_type_dep2410 MACH_MACH_TYPE_DEP2410 MACH_TYPE_DEP2410 4479
bctre3 MACH_BCTRE3 BCTRE3 4480
omap_m100 MACH_OMAP_M100 OMAP_M100 4481
flo MACH_FLO FLO 4482
nanobone MACH_NANOBONE NANOBONE 4483
stm_b2105 MACH_STM_B2105 STM_B2105 4484
omap4_bsc_bap_v3 MACH_OMAP4_BSC_BAP_V3 OMAP4_BSC_BAP_V3 4485
ss1pam MACH_SS1PAM SS1PAM 4486
primominiu MACH_PRIMOMINIU PRIMOMINIU 4488
mrt_35hd_dualnas_e MACH_MRT_35HD_DUALNAS_E MRT_35HD_DUALNAS_E 4489
kiwi MACH_KIWI KIWI 4490
hw90496 MACH_HW90496 HW90496 4491
mep2440 MACH_MEP2440 MEP2440 4492
colibri_t30 MACH_COLIBRI_T30 COLIBRI_T30 4493
cwv1 MACH_CWV1 CWV1 4494
nsa325 MACH_NSA325 NSA325 4495
dpxmtc MACH_DPXMTC DPXMTC 4497
tt_stuttgart MACH_TT_STUTTGART TT_STUTTGART 4498
miranda_apcii MACH_MIRANDA_APCII MIRANDA_APCII 4499
mx6q_moderox MACH_MX6Q_MODEROX MX6Q_MODEROX 4500
mudskipper MACH_MUDSKIPPER MUDSKIPPER 4501
urania MACH_URANIA URANIA 4502
stm_b2112 MACH_STM_B2112 STM_B2112 4503
mx6q_ats_phoenix MACH_MX6Q_ATS_PHOENIX MX6Q_ATS_PHOENIX 4505
stm_b2116 MACH_STM_B2116 STM_B2116 4506
mythology MACH_MYTHOLOGY MYTHOLOGY 4507
fc360v1 MACH_FC360V1 FC360V1 4508
gps_sensor MACH_GPS_SENSOR GPS_SENSOR 4509
gazelle MACH_GAZELLE GAZELLE 4510
mpq8064_dma MACH_MPQ8064_DMA MPQ8064_DMA 4511
wems_asd01 MACH_WEMS_ASD01 WEMS_ASD01 4512
apalis_t30 MACH_APALIS_T30 APALIS_T30 4513
armstonea9 MACH_ARMSTONEA9 ARMSTONEA9 4515
omap_blazetablet MACH_OMAP_BLAZETABLET OMAP_BLAZETABLET 4516
ar6mxq MACH_AR6MXQ AR6MXQ 4517
ar6mxs MACH_AR6MXS AR6MXS 4518
gwventana MACH_GWVENTANA GWVENTANA 4520
igep0033 MACH_IGEP0033 IGEP0033 4521
h52c1_concerto MACH_H52C1_CONCERTO H52C1_CONCERTO 4524
fcmbrd MACH_FCMBRD FCMBRD 4525
pcaaxs1 MACH_PCAAXS1 PCAAXS1 4526
ls_orca MACH_LS_ORCA LS_ORCA 4527
pcm051lb MACH_PCM051LB PCM051LB 4528
mx6s_lp507_gvci MACH_MX6S_LP507_GVCI MX6S_LP507_GVCI 4529
dido MACH_DIDO DIDO 4530
swarco_itc3_9g20 MACH_SWARCO_ITC3_9G20 SWARCO_ITC3_9G20 4531
robo_roady MACH_ROBO_ROADY ROBO_ROADY 4532
rskrza1 MACH_RSKRZA1 RSKRZA1 4533
swarco_sid MACH_SWARCO_SID SWARCO_SID 4534
mx6_iwg15s_sbc MACH_MX6_IWG15S_SBC MX6_IWG15S_SBC 4535
mx6q_camaro MACH_MX6Q_CAMARO MX6Q_CAMARO 4536
hb6mxs MACH_HB6MXS HB6MXS 4537
lager MACH_LAGER LAGER 4538
lp8x4x MACH_LP8X4X LP8X4X 4539
tegratab7 MACH_TEGRATAB7 TEGRATAB7 4540
andromeda MACH_ANDROMEDA ANDROMEDA 4541
bootes MACH_BOOTES BOOTES 4542
nethmi MACH_NETHMI NETHMI 4543
tegratab MACH_TEGRATAB TEGRATAB 4544
som5_evb MACH_SOM5_EVB SOM5_EVB 4545
venaticorum MACH_VENATICORUM VENATICORUM 4546
stm_b2110 MACH_STM_B2110 STM_B2110 4547
elux_hathor MACH_ELUX_HATHOR ELUX_HATHOR 4548
helios_v7 MACH_HELIOS_V7 HELIOS_V7 4549
xc10v1 MACH_XC10V1 XC10V1 4550
cp2u MACH_CP2U CP2U 4551
iap_f MACH_IAP_F IAP_F 4552
iap_g MACH_IAP_G IAP_G 4553
aae MACH_AAE AAE 4554
pegasus MACH_PEGASUS PEGASUS 4555
cygnus MACH_CYGNUS CYGNUS 4556
centaurus MACH_CENTAURUS CENTAURUS 4557
msm8930_qrd8930 MACH_MSM8930_QRD8930 MSM8930_QRD8930 4558
quby_tim MACH_QUBY_TIM QUBY_TIM 4559
zedi3250a MACH_ZEDI3250A ZEDI3250A 4560
grus MACH_GRUS GRUS 4561
apollo3 MACH_APOLLO3 APOLLO3 4562
cowon_r7 MACH_COWON_R7 COWON_R7 4563
tonga3 MACH_TONGA3 TONGA3 4564
p535 MACH_P535 P535 4565
sa3874i MACH_SA3874I SA3874I 4566
mx6_navico_com MACH_MX6_NAVICO_COM MX6_NAVICO_COM 4567
proxmobil2 MACH_PROXMOBIL2 PROXMOBIL2 4568
ubinux1 MACH_UBINUX1 UBINUX1 4569
istos MACH_ISTOS ISTOS 4570
benvolio4 MACH_BENVOLIO4 BENVOLIO4 4571
eco5_bx2 MACH_ECO5_BX2 ECO5_BX2 4572
eukrea_cpuimx28sd MACH_EUKREA_CPUIMX28SD EUKREA_CPUIMX28SD 4573
domotab MACH_DOMOTAB DOMOTAB 4574
pfla03 MACH_PFLA03 PFLA03 4575
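
For reference, each row above follows the four-column format of arch/arm/tools/mach-types: the machine_is_xxx() name, the CONFIG_ option suffix, the MACH_TYPE_ constant suffix, and the registered machine number. At build time the gen-mach-types script expands each row into include/generated/mach-types.h; the sketch below shows the approximate output for the wandboard_imx6 entry (machine number 4412, listed above). Exact whitespace and guard ordering may vary between kernel versions.

    #define MACH_TYPE_WANDBOARD_IMX6       4412

    #ifdef CONFIG_MACH_WANDBOARD_IMX6
    # ifdef machine_arch_type
    #  undef machine_arch_type
    #  define machine_arch_type    __machine_arch_type
    # else
    #  define machine_arch_type    MACH_TYPE_WANDBOARD_IMX6
    # endif
    # define machine_is_wandboard_imx6()   (machine_arch_type == MACH_TYPE_WANDBOARD_IMX6)
    #else
    # define machine_is_wandboard_imx6()   (0)
    #endif

Board code can then guard machine-specific paths with machine_is_wandboard_imx6(), which collapses to a compile-time constant 0 whenever the corresponding CONFIG option is disabled, so dead branches are optimized away.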