Commit be794bc5 authored by Dmitry Shulga

Fixed bug#42503 - "Lost connection" errors when using compression protocol.

The loss of connection was caused by a malformed packet
sent by the server when the query cache was in use.
When storing data in the query cache, the query cache
memory allocation algorithm had a tendency to reduce
the number of memory blocks necessary to store a result
set, up to finally storing the entire result set in a
single block. With a sufficiently large result set, this
memory block could turn out to be quite large: 30 MB,
40 MB, or more. When such a result set was sent to the
client, the entire memory block was compressed and written
to the network as a single network packet. However, the
length of a network packet is limited to 0xFFFFFF (16 MB),
since the packet format only allows 3 bytes for the packet
length. As a result, a malformed, overly large packet with
a truncated length field would be sent to the client and
break the client/server protocol.
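
For illustration, here is a minimal sketch of the 4-byte header
that precedes every packet on the wire: 3 bytes of little-endian
payload length plus 1 byte of packet sequence number (the server
uses the int3store macro for the length; this hand-rolled
equivalent only demonstrates the layout). Any length above
0xFFFFFF simply loses its high bits:

    typedef unsigned char uchar;
    typedef unsigned long ulong;

    /* Store the 3-byte little-endian length and the 1-byte sequence
       number that prefix every network packet. Bits of len above
       bit 23 do not fit and are silently dropped, producing the
       truncated length described above. */
    static void write_packet_header(uchar *buff, ulong len, uchar pkt_nr)
    {
      buff[0]= (uchar) (len & 0xFF);
      buff[1]= (uchar) ((len >> 8) & 0xFF);
      buff[2]= (uchar) ((len >> 16) & 0xFF);
      buff[3]= pkt_nr;
    }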

The solution is, when sending result sets from the query
cache, to ensure that the data is chopped into network
packets of size <= 16 MB, so that the packet length is
never corrupted. This solution, however, has a shortcoming:
since the result set is still stored in the query cache as
a single block, the boundaries of individual logical packets
(one logical packet = one row of the result set) are lost by
the time of sending, and we can thus end up sending a
truncated logical packet in a compressed network packet, as
the toy example below illustrates.
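
A toy standalone program (figures invented; this is not server
code) makes the lost-boundary problem concrete: chopping a block
of back-to-back rows at a fixed chunk size cuts rows in half:

    #include <stdio.h>

    int main(void)
    {
      const int block_len= 10;  /* cached block: two 5-byte "rows" */
      const int row_len= 5;
      const int chunk= 4;       /* pretend network packet size limit */

      for (int pos= 0; pos < block_len; pos+= chunk)
      {
        int end= pos + chunk < block_len ? pos + chunk : block_len;
        printf("network packet [%d..%d)%s\n", pos, end,
               end % row_len ? "  <- ends mid-row" : "");
      }
      return 0;
    }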

As a result, on the client we may require more memory than
max_allowed_packet to keep both the truncated last logical
packet and the compressed next packet. This never (or in
practice never) happens without compression, since without
compression it is very unlikely that
a) a truncated logical packet would remain on the client
when it's time to read the next packet, and
b) a subsequent logical packet that is being read would be
so large that size-of-new-packet + size-of-old-packet-tail >
max_allowed_packet.
To remedy this issue, we send data in 1 MB packets, which is
below the current client default of 16 MB for
max_allowed_packet, but large enough to ensure there is no
unnecessary overhead from too many syscalls per result set.
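
A back-of-the-envelope check (all figures hypothetical) shows
why 1 MB chunks are safe where 16 MB chunks are not:

    #include <stdio.h>

    int main(void)
    {
      const unsigned long MB= 1024 * 1024;
      const unsigned long max_allowed_packet= 16 * MB; /* client default */
      const unsigned long tail= 10 * MB; /* leftover of a truncated row
                                            from the previous compressed
                                            packet (invented figure) */
      const unsigned long chunk_sizes[]= { 16 * MB, 1 * MB };

      for (int i= 0; i < 2; i++)
      {
        unsigned long need= tail + chunk_sizes[i];
        printf("%2luMB chunks: client needs %luMB, limit %luMB -> %s\n",
               chunk_sizes[i] / MB, need / MB, max_allowed_packet / MB,
               need > max_allowed_packet ? "EXCEEDED" : "ok");
      }
      return 0;
    }
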
parent f601c035
@@ -170,7 +170,17 @@ my_bool net_realloc(NET *net, size_t length)
   DBUG_ENTER("net_realloc");
   DBUG_PRINT("enter",("length: %lu", (ulong) length));
 
-  if (length >= net->max_packet_size)
+  /*
+    When compression is off, net->where_b is always 0.
+    With compression turned on, net->where_b may indicate
+    that we still have a piece of the previous logical
+    packet in the buffer, unprocessed. Take it into account
+    when checking that max_allowed_packet is not exceeded.
+    This ensures that the client treats max_allowed_packet
+    limit identically, regardless of compression being on
+    or off.
+  */
+  if (length >= (net->max_packet_size + net->where_b))
   {
     DBUG_PRINT("error", ("Packet too large. Max size: %lu",
                          net->max_packet_size));
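
A small standalone comparison (values invented) of the old and
patched client-side checks may help: with compression on, where_b
bytes of a previous logical packet can still sit in the buffer,
so the buffer may legitimately have to grow past max_packet_size
by that amount:

    #include <stdio.h>

    int main(void)
    {
      unsigned long max_packet_size= 16UL * 1024 * 1024; /* max_allowed_packet */
      unsigned long where_b= 3UL * 1024 * 1024; /* unprocessed tail (invented) */
      unsigned long length= 18UL * 1024 * 1024; /* requested buffer length */

      /* Old check: spuriously rejects a legitimate reallocation. */
      printf("old: %s\n", length >= max_packet_size ? "reject" : "grow");
      /* Patched check: accounts for the unprocessed tail. */
      printf("new: %s\n",
             length >= max_packet_size + where_b ? "reject" : "grow");
      return 0;
    }
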
@@ -1330,6 +1330,55 @@ def_week_frmt: %lu, in_trans: %d, autocommit: %d",
 }
 
 
+/**
+  Send a single memory block from the query cache.
+
+  Respects the client/server protocol limits for the
+  size of the network packet, and splits a large block
+  in pieces to ensure that individual piece doesn't exceed
+  the maximal allowed size of the network packet (16M).
+
+  @param[in] net NET handler
+  @param[in] packet packet to send
+  @param[in] len packet length
+
+  @return Operation status
+    @retval FALSE On success
+    @retval TRUE On error
+*/
+
+static bool
+send_data_in_chunks(NET *net, const uchar *packet, ulong len)
+{
+  /*
+    On the client we may require more memory than max_allowed_packet
+    to keep, both, the truncated last logical packet, and the
+    compressed next packet. This never (or in practice never)
+    happens without compression, since without compression it's very
+    unlikely that a) a truncated logical packet would remain on the
+    client when it's time to read the next packet b) a subsequent
+    logical packet that is being read would be so large that
+    size-of-new-packet + size-of-old-packet-tail >
+    max_allowed_packet. To remedy this issue, we send data in 1MB
+    sized packets, that's below the current client default of 16MB
+    for max_allowed_packet, but large enough to ensure there is no
+    unnecessary overhead from too many syscalls per result set.
+  */
+  static const ulong MAX_CHUNK_LENGTH= 1024*1024;
+
+  while (len > MAX_CHUNK_LENGTH)
+  {
+    if (net_real_write(net, packet, MAX_CHUNK_LENGTH))
+      return TRUE;
+    packet+= MAX_CHUNK_LENGTH;
+    len-= MAX_CHUNK_LENGTH;
+  }
+  if (len && net_real_write(net, packet, len))
+    return TRUE;
+
+  return FALSE;
+}
+
+
 /*
   Check if the query is in the cache. If it was cached, send it
   to the user.

@@ -1635,11 +1684,11 @@ def_week_frmt: %lu, in_trans: %d, autocommit: %d",
                                ALIGN_SIZE(sizeof(Query_cache_result)))));
 
       Query_cache_result *result = result_block->result();
-      if (net_real_write(&thd->net, result->data(),
-                         result_block->used -
-                         result_block->headers_len() -
-                         ALIGN_SIZE(sizeof(Query_cache_result))))
+      if (send_data_in_chunks(&thd->net, result->data(),
+                              result_block->used -
+                              result_block->headers_len() -
+                              ALIGN_SIZE(sizeof(Query_cache_result))))
         break;                                  // Client aborted
       result_block = result_block->next;
       thd->net.pkt_nr= query->last_pkt_nr;      // Keep packet number updated
     } while (result_block != first_result_block);
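
For completeness, a hedged sketch of how the failure could be
reproduced from a client using the C API (connection parameters
and big_table are placeholders; the query cache must be enabled
and the result set must exceed 16 MB). The second run of the
query is served from the cache over a compressed connection:

    #include <mysql.h>
    #include <stdio.h>

    int main(void)
    {
      MYSQL *con= mysql_init(NULL);
      /* CLIENT_COMPRESS enables the compressed protocol, the
         precondition for the bug. */
      if (!mysql_real_connect(con, "localhost", "root", "", "test",
                              0, NULL, CLIENT_COMPRESS))
      {
        fprintf(stderr, "connect: %s\n", mysql_error(con));
        return 1;
      }
      for (int i= 0; i < 2; i++) /* pass 2 is served from the query cache */
      {
        if (mysql_query(con, "SELECT * FROM big_table")) /* > 16 MB result */
        {
          fprintf(stderr, "query: %s\n", mysql_error(con));
          return 1;
        }
        MYSQL_RES *res= mysql_store_result(con);
        if (res)
          mysql_free_result(res);
      }
      mysql_close(con);
      return 0;
    }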