Commit cb359b60 authored by Sunil Muthuswamy, committed by David S. Miller

hvsock: fix epollout hang from race condition

Currently, under a race condition, hvsock can enter a state where an
epoll_wait() on EPOLLOUT will not return even when the hvsock socket is
writable. This can happen under the following sequence (a hypothetical
repro sketch follows the list):
- fd = socket(hvsocket)
- fd_out = dup(fd)
- fd_in = dup(fd)
- start a writer thread that writes data to fd_out with a combination of
  epoll_wait(fd_out, EPOLLOUT) and write()
- start a reader thread that reads data from fd_in with a combination of
  epoll_wait(fd_in, EPOLLIN) and read()
- On the host, there are two threads that are reading/writing data to the
  hvsocket
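
For reference, a minimal userspace sketch of the sequence above (hypothetical:
the vsock port, payload size, and omitted error handling are placeholders,
not part of this commit):

/* Hypothetical repro sketch: one AF_VSOCK connection to the host, two
 * dup()ed fds, a writer polling EPOLLOUT and a reader polling EPOLLIN.
 */
#include <pthread.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/vm_sockets.h>

static void *writer(void *arg)
{
	int fd = *(int *)arg, ep = epoll_create1(0);
	struct epoll_event ev = { .events = EPOLLOUT, .data.fd = fd };
	char buf[4096] = { 0 };

	epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);
	for (;;) {
		epoll_wait(ep, &ev, 1, -1);	/* hangs once the race hits */
		write(fd, buf, sizeof(buf));
	}
}

static void *reader(void *arg)
{
	int fd = *(int *)arg, ep = epoll_create1(0);
	struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
	char buf[4096];

	epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);
	for (;;) {
		epoll_wait(ep, &ev, 1, -1);
		read(fd, buf, sizeof(buf));
	}
}

int main(void)
{
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid = VMADDR_CID_HOST,
		.svm_port = 12345,	/* placeholder port */
	};
	int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
	int fd_out = dup(fd), fd_in = dup(fd);
	pthread_t wr, rd;

	connect(fd, (struct sockaddr *)&addr, sizeof(addr));
	pthread_create(&wr, NULL, writer, &fd_out);
	pthread_create(&rd, NULL, reader, &fd_in);
	pthread_join(wr, NULL);
	return 0;
}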

stack:
hvs_stream_has_space
hvs_notify_poll_out
vsock_poll
sock_poll
ep_poll

Race condition:
check for epollout from ep_poll():
	assume no writable space in the socket
	hvs_stream_has_space() sets the channel pending send size
		and returns 0
check for epollin from ep_poll():
	assume socket has some free space < HVS_PKT_LEN(HVS_SEND_BUF_SIZE)
	hvs_stream_has_space() will clear the channel pending send size
	host will not notify the guest because the pending send size has
		been cleared and so the hvsocket will never mark the
		socket writable

Now, the epoll_wait() for EPOLLOUT will never return, even if the socket
write buffer is empty. The old logic, annotated below, shows the window.
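
To make the window concrete, here is the old hvs_stream_has_space() logic
from the last hunk below, with the racy branch annotated (the annotations
are editorial; the code shape is taken from the diff):

/* Old logic; the EPOLLIN poll path can land in the first branch with
 * only a little free space and clear the threshold that the EPOLLOUT
 * poll path just set.
 */
static s64 hvs_stream_has_space(struct vsock_sock *vsk)
{
	struct hvsock *hvs = vsk->trans;
	struct vmbus_channel *chan = hvs->chan;
	s64 ret;

	ret = hvs_channel_writable_bytes(chan);
	if (ret > 0) {
		/* Any free space > 0, even < HVS_PKT_LEN(HVS_SEND_BUF_SIZE),
		 * clears the pending send size; the host then never signals
		 * writable space, and the EPOLLOUT waiter hangs.
		 */
		hvs_clear_channel_pending_send_size(chan);
	} else {
		hvs_set_channel_pending_send_size(chan);
		/* Re-check the writable bytes to avoid a set-vs-host race */
		ret = hvs_channel_writable_bytes(chan);
		if (ret > 0)
			hvs_clear_channel_pending_send_size(chan);
	}
	return ret;
}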

The fix is to set the pending send size to the default size once, at
connection setup, and never change it. This way the host will always notify
the guest whenever the writable space grows past the pending size. The host
is already optimized to *only* notify the guest when the pending size
threshold boundary is crossed, and not every time.
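
In sketch form, the notification rule the fix relies on looks like this
(hypothetical host-side pseudocode; signal_guest() and the writable_*
names are illustrative, not from this commit):

/* Hypothetical host-side policy: interrupt the guest only when a read
 * moves the channel's free space across the pending send size threshold.
 */
if (pending_send_sz != 0 &&
    writable_before < pending_send_sz &&
    writable_now >= pending_send_sz)
	signal_guest(chan);

With the pending send size pinned at HVS_PKT_LEN(HVS_SEND_BUF_SIZE), the
threshold is always armed, so the guest reliably gets a wakeup once a full
packet's worth of space opens up.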

This change also reduces CPU usage somewhat, since hvs_stream_has_space()
is in the hot path of send:
vsock_stream_sendmsg() -> hvs_stream_has_space()
Previously, hvs_stream_has_space() was setting/clearing the pending send
size on every call.
Signed-off-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Reviewed-by: Dexuan Cui <decui@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 76e21533
@@ -211,18 +211,6 @@ static void hvs_set_channel_pending_send_size(struct vmbus_channel *chan)
 	set_channel_pending_send_size(chan,
 				      HVS_PKT_LEN(HVS_SEND_BUF_SIZE));
-
-	/* See hvs_stream_has_space(): we must make sure the host has seen
-	 * the new pending send size, before we can re-check the writable
-	 * bytes.
-	 */
-	virt_mb();
-}
-
-static void hvs_clear_channel_pending_send_size(struct vmbus_channel *chan)
-{
-	set_channel_pending_send_size(chan, 0);
-
-	/* Ditto */
-	virt_mb();
 }
@@ -292,9 +280,6 @@ static void hvs_channel_cb(void *ctx)
 	if (hvs_channel_readable(chan))
 		sk->sk_data_ready(sk);
 
-	/* See hvs_stream_has_space(): when we reach here, the writable bytes
-	 * may be already less than HVS_PKT_LEN(HVS_SEND_BUF_SIZE).
-	 */
 	if (hv_get_bytes_to_write(&chan->outbound) > 0)
 		sk->sk_write_space(sk);
 }
@@ -395,6 +380,13 @@ static void hvs_open_connection(struct vmbus_channel *chan)
 	set_per_channel_state(chan, conn_from_host ? new : sk);
 	vmbus_set_chn_rescind_callback(chan, hvs_close_connection);
 
+	/* Set the pending send size to max packet size to always get
+	 * notifications from the host when there is enough writable space.
+	 * The host is optimized to send notifications only when the pending
+	 * size boundary is crossed, and not always.
+	 */
+	hvs_set_channel_pending_send_size(chan);
+
 	if (conn_from_host) {
 		new->sk_state = TCP_ESTABLISHED;
 		sk->sk_ack_backlog++;
@@ -688,23 +680,8 @@ static s64 hvs_stream_has_data(struct vsock_sock *vsk)
 static s64 hvs_stream_has_space(struct vsock_sock *vsk)
 {
 	struct hvsock *hvs = vsk->trans;
-	struct vmbus_channel *chan = hvs->chan;
-	s64 ret;
-
-	ret = hvs_channel_writable_bytes(chan);
-	if (ret > 0) {
-		hvs_clear_channel_pending_send_size(chan);
-	} else {
-		/* See hvs_channel_cb() */
-		hvs_set_channel_pending_send_size(chan);
-
-		/* Re-check the writable bytes to avoid race */
-		ret = hvs_channel_writable_bytes(chan);
-		if (ret > 0)
-			hvs_clear_channel_pending_send_size(chan);
-	}
-
-	return ret;
+	return hvs_channel_writable_bytes(hvs->chan);
 }
 
 static u64 hvs_stream_rcvhiwat(struct vsock_sock *vsk)