Commit 3eb0ac86 authored by Andrea Parri (Microsoft), committed by Wei Liu

Drivers: hv: vmbus: Use channel_mutex in channel_vp_mapping_show()

The primitive currently uses channel->lock to protect the loop over
sc_list against concurrent list additions/deletions, but it doesn't
protect the target_cpu loads against a concurrent target_cpu_store().
While the data races on target_cpu are hardly of any concern here,
replace the channel->lock critical section with a channel_mutex
critical section and extend the latter to include the loads of
target_cpu; this same pattern is also used in hv_synic_cleanup().
Signed-off-by: Andrea Parri (Microsoft) <parri.andrea@gmail.com>
Link: https://lore.kernel.org/r/20200617164642.37393-6-parri.andrea@gmail.com
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Signed-off-by: Wei Liu <wei.liu@kernel.org>
parent 12d0dd8e
@@ -507,18 +507,17 @@ static ssize_t channel_vp_mapping_show(struct device *dev,
 {
 	struct hv_device *hv_dev = device_to_hv_device(dev);
 	struct vmbus_channel *channel = hv_dev->channel, *cur_sc;
-	unsigned long flags;
 	int buf_size = PAGE_SIZE, n_written, tot_written;
 	struct list_head *cur;
 
 	if (!channel)
 		return -ENODEV;
 
+	mutex_lock(&vmbus_connection.channel_mutex);
+
 	tot_written = snprintf(buf, buf_size, "%u:%u\n",
 		channel->offermsg.child_relid, channel->target_cpu);
 
-	spin_lock_irqsave(&channel->lock, flags);
-
 	list_for_each(cur, &channel->sc_list) {
 		if (tot_written >= buf_size - 1)
 			break;
@@ -532,7 +531,7 @@ static ssize_t channel_vp_mapping_show(struct device *dev,
 		tot_written += n_written;
 	}
 
-	spin_unlock_irqrestore(&channel->lock, flags);
+	mutex_unlock(&vmbus_connection.channel_mutex);
 
 	return tot_written;
 }
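
For context, here is a minimal sketch of how channel_vp_mapping_show() reads with this patch applied, illustrating the locking described in the commit message: the whole sc_list walk, including every target_cpu load, now runs under vmbus_connection.channel_mutex, the same mutex a concurrent target_cpu_store() takes. The loop body between the two hunks is not shown in the diff above and is reconstructed from the surrounding code, so treat it as an approximation rather than verbatim kernel source.

/*
 * Sketch of the function after the patch: the entire traversal,
 * including the target_cpu reads, is serialized against
 * target_cpu_store() by vmbus_connection.channel_mutex.
 */
static ssize_t channel_vp_mapping_show(struct device *dev,
				       struct device_attribute *attr,
				       char *buf)
{
	struct hv_device *hv_dev = device_to_hv_device(dev);
	struct vmbus_channel *channel = hv_dev->channel, *cur_sc;
	int buf_size = PAGE_SIZE, n_written, tot_written;
	struct list_head *cur;

	if (!channel)
		return -ENODEV;

	mutex_lock(&vmbus_connection.channel_mutex);

	tot_written = snprintf(buf, buf_size, "%u:%u\n",
		channel->offermsg.child_relid, channel->target_cpu);

	list_for_each(cur, &channel->sc_list) {
		if (tot_written >= buf_size - 1)
			break;

		/* Reconstructed loop body (elided between the two hunks). */
		cur_sc = list_entry(cur, struct vmbus_channel, sc_list);
		n_written = scnprintf(buf + tot_written,
				      buf_size - tot_written,
				      "%u:%u\n",
				      cur_sc->offermsg.child_relid,
				      cur_sc->target_cpu);
		tot_written += n_written;
	}

	mutex_unlock(&vmbus_connection.channel_mutex);

	return tot_written;
}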