Commit 6b281569 authored by Fenghua Yu, committed by Thomas Gleixner

x86/cqm: Share PQR_ASSOC related data between CQM and CAT

The PQR_ASSOC MSR contains the RMID used for performance monitoring of
cache occupancy and memory bandwidth. The upper 32 bits of this MSR
contain the CLOSID for cache allocation. So we need to share the
information between the two facilities.

Move the rdt data structure declaration into the shared header file and
make the per cpu data structure containing the MSR values global.
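
To make the layout concrete, here is a minimal illustrative sketch (not
part of this commit; the helper name is hypothetical) of how the two
fields combine into one MSR update. wrmsr() takes the low and high
32-bit halves separately, so the RMID lands in the low word and the
CLOSID in the high word:

  #include <asm/msr.h>

  #define MSR_IA32_PQR_ASSOC	0x0c8f

  /*
   * Illustrative only: program both halves of PQR_ASSOC in one shot.
   * The low 32 bits carry the RMID (monitoring), the high 32 bits
   * carry the CLOSID (allocation).
   */
  static inline void pqr_assoc_write(u32 rmid, u32 closid)
  {
  	wrmsr(MSR_IA32_PQR_ASSOC, rmid, closid);
  }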
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Cc: "Ravi V Shankar" <ravi.v.shankar@intel.com>
Cc: "Tony Luck" <tony.luck@intel.com>
Cc: "David Carrillo-Cisneros" <davidcc@google.com>
Cc: "Sai Prakhya" <sai.praneeth.prakhya@intel.com>
Cc: "Peter Zijlstra" <peterz@infradead.org>
Cc: "Stephane Eranian" <eranian@google.com>
Cc: "Dave Hansen" <dave.hansen@intel.com>
Cc: "Shaohua Li" <shli@fb.com>
Cc: "Nilay Vaish" <nilayvaish@gmail.com>
Cc: "Vikas Shivappa" <vikas.shivappa@linux.intel.com>
Cc: "Ingo Molnar" <mingo@elte.hu>
Cc: "Borislav Petkov" <bp@suse.de>
Cc: "H. Peter Anvin" <h.peter.anvin@intel.com>
Link: http://lkml.kernel.org/r/1477142405-32078-10-git-send-email-fenghua.yu@intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
parent c1c7c3f9
--- a/arch/x86/events/intel/cqm.c
+++ b/arch/x86/events/intel/cqm.c
@@ -24,32 +24,13 @@ static unsigned int cqm_l3_scale; /* supposedly cacheline size */
 static bool cqm_enabled, mbm_enabled;
 unsigned int mbm_socket_max;
 
-/**
- * struct intel_pqr_state - State cache for the PQR MSR
- * @rmid:		The cached Resource Monitoring ID
- * @closid:		The cached Class Of Service ID
- * @rmid_usecnt:	The usage counter for rmid
- *
- * The upper 32 bits of MSR_IA32_PQR_ASSOC contain closid and the
- * lower 10 bits rmid. The update to MSR_IA32_PQR_ASSOC always
- * contains both parts, so we need to cache them.
- *
- * The cache also helps to avoid pointless updates if the value does
- * not change.
- */
-struct intel_pqr_state {
-	u32			rmid;
-	u32			closid;
-	int			rmid_usecnt;
-};
-
 /*
  * The cached intel_pqr_state is strictly per CPU and can never be
  * updated from a remote CPU. Both functions which modify the state
  * (intel_cqm_event_start and intel_cqm_event_stop) are called with
  * interrupts disabled, which is sufficient for the protection.
  */
-static DEFINE_PER_CPU(struct intel_pqr_state, pqr_state);
+DEFINE_PER_CPU(struct intel_pqr_state, pqr_state);
 static struct hrtimer *mbm_timers;
 /**
  * struct sample - mbm event's (local or total) data
--- a/arch/x86/include/asm/intel_rdt_common.h
+++ b/arch/x86/include/asm/intel_rdt_common.h
@@ -3,4 +3,25 @@
 
 #define MSR_IA32_PQR_ASSOC	0x0c8f
 
+/**
+ * struct intel_pqr_state - State cache for the PQR MSR
+ * @rmid:		The cached Resource Monitoring ID
+ * @closid:		The cached Class Of Service ID
+ * @rmid_usecnt:	The usage counter for rmid
+ *
+ * The upper 32 bits of MSR_IA32_PQR_ASSOC contain closid and the
+ * lower 10 bits rmid. The update to MSR_IA32_PQR_ASSOC always
+ * contains both parts, so we need to cache them.
+ *
+ * The cache also helps to avoid pointless updates if the value does
+ * not change.
+ */
+struct intel_pqr_state {
+	u32			rmid;
+	u32			closid;
+	int			rmid_usecnt;
+};
+
+DECLARE_PER_CPU(struct intel_pqr_state, pqr_state);
+
 #endif /* _ASM_X86_INTEL_RDT_COMMON_H */
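
With the per-CPU state now shared, either facility can update its half
of PQR_ASSOC while preserving the other's cached value, and skip the
MSR write entirely when nothing changed. A hedged sketch of that
pattern (the helper pqr_update_rmid is hypothetical; the CQM event
start/stop paths do this inline):

  /*
   * Hypothetical consumer of the shared state: switch the RMID half
   * of PQR_ASSOC while keeping the cached CLOSID, and avoid the MSR
   * write when the value is unchanged. As the comment above notes,
   * callers must run with interrupts disabled on the local CPU.
   */
  static void pqr_update_rmid(u32 rmid)
  {
  	struct intel_pqr_state *state = this_cpu_ptr(&pqr_state);

  	if (state->rmid == rmid)
  		return;

  	state->rmid = rmid;
  	wrmsr(MSR_IA32_PQR_ASSOC, rmid, state->closid);
  }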