Commit ba608c4f authored by Sourabh Jain, committed by Michael Ellerman

powerpc/fadump: fix race between pstore write and fadump crash trigger

When we enter the fadump crash path via system reset, we fail to update
the pstore.

On the system reset path we first update the pstore and then go for the
fadump crash. The problem is that when all the CPUs try to take the pstore
lock to initiate the pstore write, only one CPU acquires the lock and
proceeds with the write. Since this is NMI context, the CPUs that fail to
get the lock do not wait for their turn to write to the pstore; they simply
proceed to the next operation, the fadump crash. One of the CPUs that went
ahead on the fadump crash path then triggers the crash without waiting for
the CPU holding the pstore lock to complete its update.

Timeline diagram depicting the sequence of events that leads to an
unsuccessful pstore update when we hit the fadump crash path via system reset:

                 1    2     3    ...      n   CPU Threads
                 |    |     |             |
                 |    |     |             |
 Reached to   -->|--->|---->| ----------->|
 system reset    |    |     |             |
 path            |    |     |             |
                 |    |     |             |
 Try to       -->|--->|---->|------------>|
 acquire the     |    |     |             |
 pstore lock     |    |     |             |
                 |    |     |             |
                 |    |     |             |
 Got the      -->| +->|     |             |<-+
 pstore lock     | |  |     |             |  |-->  Didn't get the
                 | --------------------------+     lock and moving
                 |    |     |             |        ahead on fadump
                 |    |     |             |        crash path
                 |    |     |             |
  Begins the  -->|    |     |             |
  process to     |    |     |             |<-- Got the chance to
  update the     |    |     |             |    trigger the crash
  pstore         | -> |     |    ... <-   |
                 | |  |     |         |   |
                 | |  |     |         |   |<-- Triggers the
                 | |  |     |         |   |    crash
                 | |  |     |         |   |      ^
                 | |  |     |         |   |      |
  Writing to  -->| |  |     |         |   |      |
  pstore         | |  |     |         |   |      |
                   |                  |          |
       ^           |__________________|          |
       |               CPU Relax                 |
       |                                         |
       +-----------------------------------------+
                          |
                          v
            Race: crash triggered before pstore
                  update completes

To avoid this race condition, a barrier is added on the crash_fadump path;
it prevents a CPU from triggering the crash until all the online CPUs have
completed their tasks.

The barrier makes sure all the secondary CPUs reach the crash_fadump
function before the crash is initiated. A timeout ensures the primary CPU
(the one that initiates the crash) does not wait for the secondary CPUs
indefinitely.
Signed-off-by: Sourabh Jain <sourabhjain@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20200713052435.183750-1-sourabhjain@linux.ibm.com
parent ade7667a
@@ -32,11 +32,20 @@
 #include <asm/fadump-internal.h>
 #include <asm/setup.h>
 
+/*
+ * The CPU who acquired the lock to trigger the fadump crash should
+ * wait for other CPUs to enter.
+ *
+ * The timeout is in milliseconds.
+ */
+#define CRASH_TIMEOUT		500
+
 static struct fw_dump fw_dump;
 static void __init fadump_reserve_crash_area(u64 base);
 
 struct kobject *fadump_kobj;
 
+static atomic_t cpus_in_fadump;
 #ifndef CONFIG_PRESERVE_FA_DUMP
 static DEFINE_MUTEX(fadump_mutex);
 
@@ -668,8 +677,11 @@ early_param("fadump_reserve_mem", early_fadump_reserve_mem);
 
 void crash_fadump(struct pt_regs *regs, const char *str)
 {
+	unsigned int msecs;
 	struct fadump_crash_info_header *fdh = NULL;
 	int old_cpu, this_cpu;
+	/* Do not include first CPU */
+	unsigned int ncpus = num_online_cpus() - 1;
 
 	if (!should_fadump_crash())
 		return;
@@ -685,6 +697,8 @@ void crash_fadump(struct pt_regs *regs, const char *str)
 	old_cpu = cmpxchg(&crashing_cpu, -1, this_cpu);
 
 	if (old_cpu != -1) {
+		atomic_inc(&cpus_in_fadump);
+
 		/*
 		 * We can't loop here indefinitely. Wait as long as fadump
 		 * is in force. If we race with fadump un-registration this
@@ -708,6 +722,16 @@ void crash_fadump(struct pt_regs *regs, const char *str)
 
 	fdh->online_mask = *cpu_online_mask;
 
+	/*
+	 * If we came in via system reset, wait a while for the secondary
+	 * CPUs to enter.
+	 */
+	if (TRAP(&(fdh->regs)) == 0x100) {
+		msecs = CRASH_TIMEOUT;
+		while ((atomic_read(&cpus_in_fadump) < ncpus) && (--msecs > 0))
+			mdelay(1);
+	}
+
 	fw_dump.ops->fadump_trigger(fdh, str);
 }