1. 16 Sep, 2010 4 commits
    • a684f8df
      Dmitry Shulga authored
    • Fixed bug#42503 - "Lost connection" errors when using compression protocol. · be794bc5
      Dmitry Shulga authored
      
      The loss of connection was caused by a malformed packet
      sent by the server in case when query cache was in use.
      When storing data in the query cache, the query cache
      memory allocation algorithm had a tendency to reduce
      the number of memory blocks needed to store a result
      set, eventually storing the entire result set in a single
      block. With a large result set, this memory block
      could turn out to be quite large - 30 or 40 MB or more.
      When such a result set was sent to the client, the entire
      memory block was compressed and written to the network as a
      single network packet. However, the length of a
      network packet is limited to 0xFFFFFF (16MB), since
      the packet format only allows 3 bytes for the packet length.
      As a result, a malformed, overly large packet
      with truncated length would be sent to the client
      and break the client/server protocol.
      
      The solution is, when sending result sets from the query
      cache, to ensure that the data is split into
      network packets of size <= 16MB, so that there
      is no corruption of the packet length. This solution,
      however, has a shortcoming: since the result set
      is still stored in the query cache as a single block,
      the boundaries of individual logical packets (one logical
      packet = one row of the result set) are lost by the time
      of sending, and we can thus end up sending a truncated
      logical packet in a compressed network packet.
      
      As a result, the client may require more memory than
      max_allowed_packet to keep both the truncated
      last logical packet and the next compressed packet.
      This never (or practically never) happens without compression,
      since without compression it is very unlikely that
      a) a truncated logical packet would remain on the client
      when it is time to read the next packet, and
      b) a subsequent logical packet being read would be
      so large that size-of-new-packet + size-of-old-packet-tail >
      max_allowed_packet.
      To remedy this issue, we send data in 1MB packets,
      which is below the current client default of 16MB for
      max_allowed_packet but large enough to avoid
      unnecessary overhead from too many syscalls per result set.
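
      The sketch below only illustrates this chunking idea; it is not the
      server's actual net write or query cache code, and the names
      write_packet, send_large_block and CHUNK_SIZE are made up for the
      example. It assumes the standard client/server packet header: a
      3-byte little-endian payload length followed by a 1-byte sequence
      number.

      // Illustrative C++ sketch: split a large block (e.g. a query cache
      // result) into packets whose 3-byte length field can never overflow.
      #include <cassert>
      #include <cstddef>
      #include <vector>

      static const size_t CHUNK_SIZE = 1024 * 1024;       // 1MB per network packet
      static const size_t MAX_PACKET_PAYLOAD = 0xFFFFFF;  // hard 3-byte protocol limit

      // Append one packet (header + payload) to 'out'; 'seq' is the sequence number.
      static void write_packet(std::vector<unsigned char> &out,
                               const unsigned char *payload, size_t len,
                               unsigned char &seq)
      {
        assert(len <= MAX_PACKET_PAYLOAD);  // otherwise the length below is truncated
        // 3-byte little-endian payload length, then 1-byte sequence number.
        out.push_back(static_cast<unsigned char>(len & 0xFF));
        out.push_back(static_cast<unsigned char>((len >> 8) & 0xFF));
        out.push_back(static_cast<unsigned char>((len >> 16) & 0xFF));
        out.push_back(seq++);
        out.insert(out.end(), payload, payload + len);
      }

      // Send an arbitrarily large block as a sequence of packets no larger
      // than CHUNK_SIZE, so the length field is never truncated.
      static void send_large_block(std::vector<unsigned char> &out,
                                   const unsigned char *data, size_t total,
                                   unsigned char &seq)
      {
        size_t offset = 0;
        while (offset < total)
        {
          size_t len = total - offset;
          if (len > CHUNK_SIZE)
            len = CHUNK_SIZE;
          write_packet(out, data + offset, len, seq);
          offset += len;
        }
      }

      A 30MB compressed block written as a single packet would need a
      length larger than MAX_PACKET_PAYLOAD and would be truncated, which
      is exactly the malformed packet described above; chunking at 1MB
      keeps every length well under the limit.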
    • 723e7c16
      Mikael Ronstrom authored
    • Mikael Ronstrom authored
  2. 14 Sep, 2010 1 commit
  3. 13 Sep, 2010 7 commits
    • merge · 061769d7
      Mattias Jonsson authored
    • merge · 640454b3
      Mattias Jonsson authored
    • merge · 99e507e8
      Mattias Jonsson authored
    • Merge of fix for Bug#50394. · c09489eb
      Martin Hansson authored
    • Bug #50394: Regression in EXPLAIN with index scan, LIMIT, GROUP BY and ORDER BY computed col · 20bdf763
      Martin Hansson authored
            
      GROUP BY implies ORDER BY in the MySQL dialect of SQL. Therefore, when an
      index on the first table in the query is used, and that index satisfies
      ordering according to the GROUP BY clause, the query optimizer estimates the
      number of tuples that need to be read from this index. If there is a LIMIT
      clause, table statistics on tables following this 'sort table' are employed.
      
      However, there may be a separate ORDER BY clause, which mandates reading the
      whole 'sort table' anyway. The previous estimate was nevertheless left untouched.
      
      Fixed by removing the estimate from EXPLAIN output if GROUP BY is used in
      conjunction with an ORDER BY clause that mandates using a temporary table.
    • 83c5552b
      Gleb Shchepa authored
    • Bug #55779: select does not work properly in mysql server Version "5.1.42 SUSE MySQL RPM" · 79c1faa0
      Gleb Shchepa authored
      
      When a query used a DATE or DATETIME value formatted
      differently than "yyyy-mm-dd HH:MM:SS", a
      query with a greater-or-equal '>=' condition matched only
      strictly greater values in an indexed TIMESTAMP column.
      
      The problem was introduced by the fix for bug 46362
      and partially solved (for DATE and DATETIME columns only)
      by the fix for bug 47925.
      
      The stored_field_cmp_to_item function has been modified
      to take TIMESTAMP columns into account the same way it
      does for DATE and DATETIME columns.
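
      As a standalone illustration of the underlying pitfall (this is not
      the stored_field_cmp_to_item code; parse_datetime is a made-up
      helper): two literals that denote the same point in time compare as
      unequal, and in the wrong order, if they are compared as formatted
      text instead of as parsed temporal values.

      // Illustrative C++ sketch: compare date/time literals as parsed
      // values, not as formatted strings.
      #include <cstdio>
      #include <cstring>
      #include <ctime>

      // Made-up helper: parse "Y-m-d H:M:S" (tolerating missing leading
      // zeros) into a time_t so differently formatted literals compare correctly.
      static std::time_t parse_datetime(const char *s)
      {
        int y, mo, d, h, mi, se;
        if (std::sscanf(s, "%d-%d-%d %d:%d:%d", &y, &mo, &d, &h, &mi, &se) != 6)
          return (std::time_t) -1;
        std::tm tm_val;
        std::memset(&tm_val, 0, sizeof(tm_val));
        tm_val.tm_year = y - 1900;
        tm_val.tm_mon  = mo - 1;
        tm_val.tm_mday = d;
        tm_val.tm_hour = h;
        tm_val.tm_min  = mi;
        tm_val.tm_sec  = se;
        tm_val.tm_isdst = -1;
        return std::mktime(&tm_val);
      }

      int main()
      {
        const char *a = "2010-9-1 5:3:2";        // non-canonical formatting
        const char *b = "2010-09-01 05:03:02";   // canonical formatting

        // Text comparison: nonzero ("a > b"), although both denote the same time.
        std::printf("strcmp: %d\n", std::strcmp(a, b));
        // Value comparison: 0, i.e. equal, which is what '>=' needs to see.
        std::printf("time diff: %ld\n",
                    (long) (parse_datetime(a) - parse_datetime(b)));
        return 0;
      }

      The fix keeps this kind of normalization inside the server's
      comparison of a stored TIMESTAMP field to a constant, so the equal
      boundary value is matched by '>=' as well.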
  4. 10 Sep, 2010 5 commits
  5. 09 Sep, 2010 5 commits
    • Manual merge of the fix for bug #54190 and the addendum patch to 5.5 (removed one test case as it is no longer valid). · 637c7529
      Alexey Kopytov authored
    • Addendum patch for bug #54190. · f563a012
      Alexey Kopytov authored
      The patch caused some test failures when merged to 5.5 because,
      unlike 5.1, it utilizes Item_cache_row to actually cache row
      values. The problem was that Item_cache_row::bring_value()
      essentially did nothing. In particular, it did not update its
      null_value, so all Item_cache_row objects always had
      their null_value set to TRUE. This went unnoticed previously,
      but now that Arg_comparator::compare_row() actually depends on
      the row's null_value to evaluate the comparison, the problem
      has surfaced.
      
      Fixed by calling the underlying item's bring_value() and
      updating null_value in Item_cache_row::bring_value().
      
      Since the problem also exists in 5.1 code (albeit hidden, since
      the relevant code is not used anywhere), the addendum patch is
      against 5.1.
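
      A much simplified, hypothetical sketch of the pattern the fix
      restores (Source and RowCache are made-up stand-ins, not the real
      Item and Item_cache_row classes): a caching wrapper's bring_value()
      has to refresh both the cached cells and its own null_value from the
      underlying item, otherwise the comparator reads a stale null flag.

      // Illustrative C++ sketch of a row cache that propagates nullness.
      #include <cstddef>
      #include <vector>

      // Hypothetical source interface standing in for the underlying row item.
      struct Source
      {
        virtual void   bring_value() = 0;        // make the current row available
        virtual size_t cols() const = 0;
        virtual double value(size_t i) const = 0;
        virtual bool   is_null(size_t i) const = 0;
        virtual bool   null_row() const = 0;     // e.g. subquery produced no rows
        virtual ~Source() {}
      };

      // Hypothetical cache wrapper; the point is what bring_value() must do.
      struct RowCache
      {
        Source *src;
        std::vector<double> values;
        std::vector<bool>   nulls;
        bool null_value;                         // null flag read by the comparator

        explicit RowCache(Source *s)
          : src(s), values(s->cols()), nulls(s->cols()), null_value(true) {}

        // The buggy version of this method did nothing, so null_value was
        // never refreshed.  The fixed behaviour: let the underlying item
        // update itself, copy its cells, and propagate its null flag.
        void bring_value()
        {
          src->bring_value();
          for (size_t i = 0; i < src->cols(); i++)
          {
            values[i] = src->value(i);
            nulls[i]  = src->is_null(i);
          }
          null_value = src->null_row();
        }
      };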
    • Automerge. · df198b5f
      Alexey Kopytov authored
    • Bug #54190: Comparison to row subquery produces incorrect result · 9066714c
      Alexey Kopytov authored
      
      Row subqueries producing no rows were not handled as UNKNOWN
      values in row comparison expressions.
      
      That was a result of the following two problems:
      
      1. Item_singlerow_subselect did not mark the resulting row
      value as NULL/UNKNOWN when no rows were produced.
      
      2. Arg_comparator::compare_row() did not take into account that
      a whole argument may be NULL rather than just individual scalar
      values.
      
      Before bug#34384 was fixed, the above problems were hidden
      because an uninitialized (i.e. without any stored value) cached
      object would appear as NULL for scalar values in a row subquery
      returning an empty result. After that fix,
      Arg_comparator::compare_row() would try to evaluate
      uninitialized cached objects.
      
      Fixed by addressing both of the problems above.
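
      An illustrative three-valued comparison (Row, Cell and row_equal are
      made-up types for this sketch, not the server's Item or
      Arg_comparator classes) showing both rules: a missing row makes the
      whole comparison UNKNOWN, and NULL cells make it UNKNOWN unless some
      non-NULL pair of cells already differs.

      // Illustrative C++ sketch of SQL three-valued row equality.
      #include <cstddef>
      #include <cstdio>
      #include <vector>

      enum Tri { FALSE_V, TRUE_V, UNKNOWN_V };

      struct Cell { bool is_null; long v; };
      struct Row  { bool no_row;  std::vector<Cell> cells; };  // no_row: empty subquery result

      static Tri row_equal(const Row &a, const Row &b)
      {
        // Problem 2 above: a whole argument may be NULL (no row at all),
        // not just individual scalar values.
        if (a.no_row || b.no_row)
          return UNKNOWN_V;

        bool unknown = false;
        for (size_t i = 0; i < a.cells.size(); i++)
        {
          if (a.cells[i].is_null || b.cells[i].is_null)
            unknown = true;              // NULL cell: this pair is UNKNOWN ...
          else if (a.cells[i].v != b.cells[i].v)
            return FALSE_V;              // ... unless some non-NULL pair already differs
        }
        return unknown ? UNKNOWN_V : TRUE_V;
      }

      int main()
      {
        Row left  = { false, { { false, 1 }, { false, 2 } } };
        Row empty = { true, std::vector<Cell>() };           // subquery returned no rows
        std::printf("%d\n", (int) row_equal(left, empty));   // prints 2 (UNKNOWN_V)
        return 0;
      }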
    • Fix mysql_client_test failure introduced by a patch for Bug#47485. · d88a11a7
      Dmitry Shulga authored
      The problem was that mysql_stmt_next_result() (new to 5.5)
      was not properly updated.
  6. 07 Sep, 2010 9 commits
  7. 06 Sep, 2010 4 commits
  8. 03 Sep, 2010 1 commit
  9. 01 Sep, 2010 4 commits
    • Bug#39932 "create table fails if column for FK is in different case than in corr index". · 24fc7ca4
      Magne Mahre authored
            
      The server was unable to find an existing or explicitly created
      supporting index for a foreign key if the corresponding statement
      clause used field names in a different case than the key
      specification did, and so it created yet another supporting index.
      When the name of the constraint (and thus the name of the generated
      index) was the same as the name of the existing/explicitly created
      index, this led to a duplicate key name error.
      
      The problem was that, unlike all other code, Key_part_spec::operator==()
      compared field names in a case-sensitive fashion. As a result, the
      routines responsible for getting rid of redundant generated supporting
      indexes for a foreign key did not work properly when field names
      differed only in case.
      
      (backported from mysql-trunk)
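
      A minimal sketch of the kind of change described (KeyPart here is a
      made-up stand-in for Key_part_spec, and strcasecmp stands in for the
      server's charset-aware identifier comparison): field names used in
      key parts must compare equal regardless of letter case, so the
      redundant generated index can be recognised and dropped.

      // Illustrative C++ sketch: case-insensitive field name comparison.
      #include <strings.h>   // strcasecmp (POSIX)
      #include <cstring>
      #include <cstdio>

      struct KeyPart
      {
        const char *field_name;

        // Old behaviour: byte-wise, case-sensitive comparison, so "FK_Col"
        // and "fk_col" are treated as different columns.
        bool equals_case_sensitive(const KeyPart &other) const
        { return std::strcmp(field_name, other.field_name) == 0; }

        // Fixed behaviour: column names compare case insensitively.
        bool operator==(const KeyPart &other) const
        { return strcasecmp(field_name, other.field_name) == 0; }
      };

      int main()
      {
        KeyPart a = { "FK_Col" };
        KeyPart b = { "fk_col" };
        std::printf("case sensitive: %d, case insensitive: %d\n",
                    (int) a.equals_case_sensitive(b), (int) (a == b));  // 0, 1
        return 0;
      }

      With the case-sensitive comparison the server could not recognise the
      explicitly created index as the supporting one, generated another,
      and then hit the duplicate key name error described above.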
    • upmerge 56383 · 4698ed2b
      Bjorn Munch authored
    • merge from 5.5 · 21542111
      Bjorn Munch authored
    • Auto-merge from mysql-5.5. · f8f23158
      Alexander Nozdrin authored