1. 24 Sep, 2010 5 commits
  2. 22 Sep, 2010 2 commits
  3. 21 Sep, 2010 2 commits
  4. 19 Sep, 2010 1 commit
  5. 17 Sep, 2010 3 commits
    • Davi Arnaut's avatar
      Bug#52419: x86 assembly based atomic CAS causes test failures · 1d520943
      Davi Arnaut authored
      The problem was that the x86 assembly-based atomic CAS
      (compare-and-swap) implementation could copy the wrong
      value to the ebx register, where cmpxchg8b expects to see
      part of the exchange (new) value. Since the original value
      of the ebx register is saved on the stack (that is, the
      push instruction changes the stack pointer), a wrong
      offset could be used if the compiler decides to keep the
      source of the exchange value on the stack.
      
      The solution is to copy the exchange value directly from
      memory. Since the exchange value is 64 bits wide, it is
      copied in two steps over to the ebx and ecx registers.
      
      include/atomic/x86-gcc.h:
        For reference, an excerpt from a faulty binary follows.
        
        It is a disassembly of my_atomic-t, compiled at -O3 with
        ICC 11.0. Most of the code deals with preparations for
        an atomic cmpxchg8b operation. This instruction compares
        the value in edx:eax with the destination operand. If the
        values are equal, the value in ecx:ebx is stored in the
        destination, otherwise the value in the destination operand
        is copied into edx:eax.
        
        In this case, my_atomic_add64 is implemented as a compare
        and exchange. The addition is done over temporary storage
        and the result is stored into the destination if the
        original value is still unchanged.
        
          volatile int64 a64;
          int64 b=0x1000200030004000LL;
          a64=0;
              mov    0xfffffda8(%ebx),%eax
              xor    %ebp,%ebp
              mov    %ebp,(%eax)
              mov    %ebp,0x4(%eax)
          my_atomic_add64(&a64, b);
              mov    0xfffffda8(%ebx),%ebp      # Load address of a64
              mov    0x0(%ebp),%edx             # Copy value
              mov    0x4(%ebp),%ecx
              mov    %edx,0xc(%esp)             # Assign to tmp var in the stack
              mov    %ecx,0x10(%esp)
              add    $0x30004000,%edx           # Sum values
              adc    $0x10002000,%ecx
              mov    %edx,0x8(%esp)             # Save part of result for later
              mov    0x0(%ebp),%esi             # Copy value of a64 again
              mov    0x4(%ebp),%edi
              mov    0xc(%esp),%eax             # Load the value of a64 used
              mov    0x10(%esp),%edx            # for comparison
              mov    %esi,(%esp)
              mov    %edi,0x4(%esp)
              push   %ebx                       # Push %ebx into stack. Changes esp.
              mov    0x8(%esp),%ebx             # Wrong restore of the result.
              lock cmpxchg8b 0x0(%ebp)
              sete   %cl
              pop    %ebx
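        As a sketch of the fix described above (32-bit x86, GCC-style
        inline assembly; the helper name and exact constraints are
        illustrative, not the verbatim x86-gcc.h macro), the exchange
        value is loaded into ebx and ecx through a register-held
        pointer, so the push of ebx can no longer invalidate an
        %esp-relative operand:
        
          /* Hypothetical 64-bit CAS for 32-bit x86. Returns non-zero
             if *a was equal to *cmp and was replaced by set; otherwise
             *cmp receives the current value of *a. */
          static inline int
          cas64_sketch(volatile long long *a, long long *cmp, long long set)
          {
            char ret;
            __asm__ __volatile__
              ("push %%ebx\n\t"
               "movl (%%ecx), %%ebx\n\t"   /* low  half of the exchange value */
               "movl 4(%%ecx), %%ecx\n\t"  /* high half of the exchange value */
               "lock; cmpxchg8b %0\n\t"
               "setz %2\n\t"
               "pop %%ebx"
               : "+m" (*a), "+A" (*cmp), "=c" (ret)  /* edx:eax = comparand */
               : "c" (&set)                          /* pointer to exchange value */
               : "memory", "cc");
            return ret;
          }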
      1d520943
    • Alfranio Correia's avatar
      873477ee
    • Alfranio Correia's avatar
      0c74cc0d
  6. 16 Sep, 2010 8 commits
    • Sergey Glukhov's avatar
      5.1-bugteam->5.5-merge · 5fc801dc
      Sergey Glukhov authored
      5fc801dc
    • Sergey Glukhov's avatar
      Bug#50402 Optimizer producing wrong results when using Index Merge on InnoDB · 31a38c0f
      Sergey Glukhov authored
      The subselect executes twice, at the JOIN::optimize stage
      and at the JOIN::execute stage. At the optimize stage the
      InnoDB prebuilt struct, which is used for the retrieval of
      column values, is initialized in ha_innobase::index_read()
      while prebuilt->sql_stat_start is true.
      After QUICK_ROR_INTERSECT_SELECT has finished its job, it
      restores the read_set/write_set bitmaps to their initial
      values and deactivates one of the handlers used by
      QUICK_ROR_INTERSECT_SELECT in JOIN::cleanup (this is the
      case when the original handler is reused as one of the
      handlers required by the QUICK_ROR_INTERSECT_SELECT object).
      On the second subselect execution the inactive handler is
      activated in QUICK_RANGE_SELECT::reset() via
      file->ha_index_init(). In ha_index_init() the InnoDB
      prebuilt struct is reinitialized with inappropriate
      read_set/write_set bitmaps, and the reinitialization in
      ha_innobase::index_read() does not happen again because
      prebuilt->sql_stat_start is now false.
      This leads to partial retrieval of the required field
      values, so the record buffer ends up with a mix of field
      values from different records.
      The fix is to reset the read_set/write_set bitmaps, as
      these values are required for proper initialization of
      the internal InnoDB struct that is used for the retrieval
      of column values (see build_template() in ha_innodb.cc).
      
      
      mysql-test/include/index_merge_ror_cpk.inc:
        test case
      mysql-test/r/index_merge_innodb.result:
        test case
      mysql-test/r/index_merge_myisam.result:
        test case
      sql/opt_range.cc:
        If a ROR merge scan is used, we need to reset the
        read_set/write_set bitmaps, as these values are required
        for proper initialization of the internal InnoDB struct
        that is used for the retrieval of column values
        (see build_template() in ha_innodb.cc).
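      A minimal sketch of the shape of that fix (assuming the handler is
      re-activated in QUICK_RANGE_SELECT::reset(); the member names
      in_ror_merged_scan, head and column_bitmap follow the 5.1/5.5
      sources, but this is not the verbatim patch):
      
        /* Before re-initializing the index on a reused handler,
           re-point the column bitmaps so that build_template() in
           ha_innodb.cc sees the columns this scan must fetch. */
        if (in_ror_merged_scan)
          head->column_bitmaps_set_no_signal(&column_bitmap, &column_bitmap);
        int error= file->ha_index_init(index, 1 /* sorted */);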
      31a38c0f
    • Magne Mahre's avatar
      Merge from 5.1-bugteam · 55ad009b
      Magne Mahre authored
      55ad009b
    • Magne Mahre's avatar
      Bug #54606 innodb fast alter table + pack_keys=0 prevents · ebd207ba
      Magne Mahre authored
                 adding new indexes
      
      A fast alter table requires that the existing (old) table
      and indices are unchanged (i.e. only new indices can be
      added).  To verify this, the layout and flags of the old
      table/indices are compared for equality with the new.
      
      The PACK_KEYS option is a no-op in InnoDB, but the flag
      exists, and is used in the table compare.  We need to
      check this (table) option flag before deciding whether an 
      index should be packed or not.  If the table has
      explicitly set PACK_KEYS to 0, the created indices should
      not be marked as packed/packable. 
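      A one-line illustration of the check described above (the flag name
      follows include/my_base.h; where exactly the check is placed in the
      index-creation path is illustrative, not the verbatim patch):
      
        /* Only treat an index as packable when the table has not
           explicitly set PACK_KEYS=0. */
        bool may_pack_keys= !(create_info->table_options & HA_OPTION_NO_PACK_KEYS);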
      ebd207ba
    • Dmitry Shulga's avatar
      deacb7c8
    • Dmitry Shulga's avatar
      Fixed bug#42503 - "Lost connection" errors when using · 0c91b53d
      Dmitry Shulga authored
      compression protocol.
      
      The loss of connection was caused by a malformed packet
      sent by the server when the query cache was in use.
      When storing data in the query cache, the query cache
      memory allocation algorithm had a tendency to reduce
      the number of memory blocks necessary to store a result
      set, up to finally storing the entire result set in a single
      block. With a significant result set, this memory block
      could turn out to be quite large - 30, 40 MB and beyond.
      When such a result set was sent to the client, the entire
      memory block was compressed and written to the network as a
      single network packet. However, the length of a
      network packet is limited to 0xFFFFFF (16MB), since
      the packet format only allows 3 bytes for packet length.
      As a result, a malformed, overly large packet
      with truncated length would be sent to the client
      and break the client/server protocol.
      
      The solution is, when sending result sets from the query
      cache, to ensure that the data is chopped into
      network packets of size <= 16MB, so that there
      is no corruption of packet length. This solution,
      however, has a shortcoming: since the result set
      is still stored in the query cache as a single block,
      at the time of sending, we've lost the boundaries of individual
      logical packets (one logical packet = one row of the result
      set) and thus can end up sending a truncated logical
      packet in a compressed network packet.
      
      As a result, on the client we may require more memory than
      max_allowed_packet to keep both the truncated last logical
      packet and the next compressed packet.
      This never (or in practice never) happens without compression,
      since without compression it's very unlikely that
      a) a truncated logical packet would remain on the client
      when it's time to read the next packet
      b) a subsequent logical packet that is being read would be
      so large that size-of-new-packet + size-of-old-packet-tail >
      max_allowed_packet.
      To remedy this issue, we send data in 1MB-sized packets,
      which is below the current client default of 16MB for
      max_allowed_packet, but large enough to avoid unnecessary
      overhead from too many syscalls per result set.
      
      
      sql/net_serv.cc:
        net_realloc() modified: take the already used memory into
        account when comparing against the packet buffer length.
      sql/sql_cache.cc:
        Query_cache::send_result_to_client() modified: send the result
        to the client in chunks limited to 1 megabyte.
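      A self-contained sketch of the framing limit and the chunking idea
      described above (uncompressed framing shown for simplicity; the
      helper and its write callback are hypothetical stand-ins, not the
      actual net_serv.cc/sql_cache.cc code):
      
        #include <cstddef>
        #include <cstdint>
        
        // The client/server protocol stores the payload length in 3 bytes,
        // so one packet can carry at most 0xFFFFFF bytes (16MB - 1). The
        // fix sends cached results in 1MB pieces, well below that limit.
        static const std::size_t kCacheChunk= 1024 * 1024;
        
        typedef bool (*WriteBytesFn)(const std::uint8_t *buf, std::size_t len);
        
        // Split a large cached block into protocol-sized packets:
        // 3-byte little-endian length, 1-byte sequence number, payload.
        static bool send_in_chunks(const std::uint8_t *data, std::size_t len,
                                   std::uint8_t &seq, WriteBytesFn write_bytes)
        {
          while (len > 0)
          {
            std::size_t piece= len < kCacheChunk ? len : kCacheChunk;
            std::uint8_t header[4];
            header[0]= static_cast<std::uint8_t>(piece & 0xFF);
            header[1]= static_cast<std::uint8_t>((piece >> 8) & 0xFF);
            header[2]= static_cast<std::uint8_t>((piece >> 16) & 0xFF);
            header[3]= seq++;
            if (!write_bytes(header, sizeof(header)) || !write_bytes(data, piece))
              return false;
            data+= piece;
            len-= piece;
          }
          return true;
        }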
      0c91b53d
    • Mikael Ronstrom's avatar
      5f2bfcce
    • Mikael Ronstrom's avatar
  7. 14 Sep, 2010 1 commit
  8. 13 Sep, 2010 8 commits
    • Mattias Jonsson's avatar
      merge · 9d1ed095
      Mattias Jonsson authored
      9d1ed095
    • Mattias Jonsson's avatar
      merge · 92655149
      Mattias Jonsson authored
      92655149
    • Mattias Jonsson's avatar
      merge · b76f3912
      Mattias Jonsson authored
      b76f3912
    • Martin Hansson's avatar
      Merge of fix for Bug#50394. · dae7b019
      Martin Hansson authored
      dae7b019
    • Martin Hansson's avatar
      Bug #50394: Regression in EXPLAIN with index scan, LIMIT, GROUP BY and · 3beeb5d0
      Martin Hansson authored
      ORDER BY computed col
            
      GROUP BY implies ORDER BY in the MySQL dialect of SQL. Therefore, when an
      index on the first table in the query is used, and that index satisfies
      ordering according to the GROUP BY clause, the query optimizer estimates the
      number of tuples that need to be read from this index. If there is a LIMIT
      clause, table statistics on tables following this 'sort table' are employed.
      
      There may, however, be a separate ORDER BY clause, which mandates
      reading the whole 'sort table' anyway, yet the previous estimate was
      left untouched.
      
      Fixed by removing the estimate from EXPLAIN output if GROUP BY is used in
      conjunction with an ORDER BY clause that mandates using a temporary table.
      3beeb5d0
    • Joerg Bruehe's avatar
      Selective transfer of a bugfix patch into 5.5.6-rc. · c15b344a
      Joerg Bruehe authored
      The first part is the functional change,
      the second is needed as a compile fix on Windows
      (header file order).
      
      | committer: Marc Alff <marc.alff@oracle.com>
      | branch nick: mysql-5.5-bugfixing-56521
      | timestamp: Thu 2010-09-09 14:28:47 -0600
      | message:
      |   Bug#56521 Assertion failed: (m_state == 2), function allocated_to_free, pfs_lock.h (138)
      |
      |   Before this fix, it was possible to build the server:
      |   - with the performance schema
      |   - with a dummy implementation of my_atomic (MY_ATOMIC_MODE_DUMMY).
      |
      |   In this case, the resulting binary would just crash,
      |   as this configuration is not supported.
      |
      |   This fix enforces that the build will fail with a compilation error in this
      |   configuration, instead of resulting in a broken binary.
      
      | committer: Tor Didriksen <tor.didriksen@oracle.com>
      | branch nick: 5.5-bugfixing-56521
      | timestamp: Fri 2010-09-10 11:10:38 +0200
      | message:
      |   Header files should be self-contained
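      A minimal sketch of the build-time guard described in the first
      quoted message (MY_ATOMIC_MODE_DUMMY is the real configuration
      macro; the exact wording and placement of the check are
      illustrative):
      
        /* The performance schema is not supported on top of the dummy
           my_atomic fallback; fail the build instead of producing a
           binary that would crash. */
        #if defined(MY_ATOMIC_MODE_DUMMY)
        #error "The performance schema requires a functional my_atomic implementation."
        #endif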
      c15b344a
    • Gleb Shchepa's avatar
      657ba74a
    • Gleb Shchepa's avatar
      Bug #55779: select does not work properly in mysql server · daa6d1f4
      Gleb Shchepa authored
                  Version "5.1.42 SUSE MySQL RPM"
      
      When a query used a DATE or DATETIME value formatted
      differently from "yyyy-mm-dd HH:MM:SS", a greater-or-equal
      '>=' condition matched only strictly greater values in an
      indexed TIMESTAMP column.
      
      The problem was introduced by the fix for bug 46362
      and partially solved (for DATE and DATETIME columns only)
      by the fix for bug 47925.
      
      The stored_field_cmp_to_item function has been modified
      to take into account TIMESTAMP columns like we do for
      DATE and DATETIME columns.
      
      
      mysql-test/r/type_timestamp.result:
        Test case for bug #55779.
      mysql-test/t/type_timestamp.test:
        Test case for bug #55779.
      sql/item.cc:
        Bug #55779: select does not work properly in mysql server
                    Version "5.1.42 SUSE MySQL RPM"
        
        The stored_field_cmp_to_item function has been modified
        to take into account TIMESTAMP columns like we do for
        DATE and DATETIME.
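      A hypothetical sketch of that change inside stored_field_cmp_to_item()
      (sql/item.cc); the field type codes are the server's real enum values,
      but the exact condition shown is illustrative, not the verbatim patch:
      
        /* Treat TIMESTAMP like DATE and DATETIME when comparing a
           stored field value with a constant item. */
        enum_field_types field_type= field->type();
        if (field_type == MYSQL_TYPE_DATE ||
            field_type == MYSQL_TYPE_DATETIME ||
            field_type == MYSQL_TYPE_TIMESTAMP)
        {
          /* temporal comparison path */
        }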
      daa6d1f4
  9. 10 Sep, 2010 6 commits
  10. 09 Sep, 2010 4 commits
    • Alexey Kopytov's avatar
      Manual merge of the fix for bug #54190 and the addendum patch · 56d29401
      Alexey Kopytov authored
      to 5.5 (removed one test case as it is no longer valid).
      
      mysql-test/r/select.result:
        Removed a part of the test case for bug#48291 since it is not
        valid anymore. The comments for the removed part were actually
        describing a side-effect from the problem addressed by the
        addendum patch for bug #54190.
      mysql-test/t/select.test:
        Removed a part of the test case for bug#48291 since it is not
        valid anymore. The comments for the removed part were actually
        describing a side-effect from the problem addressed by the
        addendum patch for bug #54190.
      56d29401
    • Alexey Kopytov's avatar
      Addendum patch for bug #54190. · da7646b6
      Alexey Kopytov authored
      The patch caused some test failures when merged to 5.5 because,
      unlike 5.1, it utilizes Item_cache_row to actually cache row
      values. The problem was that Item_cache_row::bring_value()
      essentially did nothing. In particular, it did not update its
      null_value, so all Item_cache_row objects always had
      null_value set to TRUE. This went unnoticed previously,
      but now that Arg_comparator::compare_row() actually depends on
      the row's null_value to evaluate the comparison, the problem
      has surfaced.
      
      Fixed by calling the underlying item's bring_value() and
      updating null_value in Item_cache_row::bring_value().
      
      Since the problem also exists in 5.1 code (albeit hidden, since
      the relevant code is not used anywhere), the addendum patch is
      against 5.1.
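      A hypothetical sketch of the described change (the members example
      and null_value follow the Item_cache classes in sql/item.h; this is
      not the verbatim patch):
      
        void Item_cache_row::bring_value()
        {
          if (!example)
            return;
          /* Evaluate the wrapped row item and refresh the cached
             NULL flag, as described above. */
          example->bring_value();
          null_value= example->null_value;
        }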
      da7646b6
    • Alexey Kopytov's avatar
      Automerge. · 3ce925bf
      Alexey Kopytov authored
      3ce925bf
    • Alexey Kopytov's avatar
      Bug #54190: Comparison to row subquery produces incorrect · 453107bc
      Alexey Kopytov authored
                  result
      
      Row subqueries producing no rows were not handled as UNKNOWN
      values in row comparison expressions.
      
      That was a result of the following two problems:
      
      1. Item_singlerow_subselect did not mark the resulting row
      value as NULL/UNKNOWN when no rows were produced.
      
      2. Arg_comparator::compare_row() did not take into account that
      a whole argument may be NULL rather than just individual scalar
      values.
      
      Before bug#34384 was fixed, the above problems were hidden
      because an uninitialized (i.e. without any stored value) cached
      object would appear as NULL for scalar values in a row subquery
      returning an empty result. After that fix,
      Arg_comparator::compare_row() would try to evaluate
      uninitialized cached objects.
      
      Fixed by addressing both of the problems described above.
      
      
      mysql-test/r/row.result:
        Added a test case for bug #54190.
      mysql-test/r/subselect.result:
        Updated the result for a test relying on wrong behavior.
      mysql-test/t/row.test:
        Added a test case for bug #54190.
      sql/item_cmpfunc.cc:
        If either of the argument rows is NULL, return NULL as the
        result of comparison.
      sql/item_subselect.cc:
        Adjust null_value for Item_singlerow_subselect depending on
        whether a row has been produced by the row subquery.
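      A hypothetical sketch of the first fix's shape in
      Arg_comparator::compare_row() (sql/item_cmpfunc.cc); the members
      a, b and owner follow the Arg_comparator class, and the return
      value convention is illustrative, not the verbatim patch:
      
        /* If either row argument is NULL as a whole, the comparison is
           UNKNOWN: flag NULL on the owning item instead of comparing
           the per-column values. */
        if ((*a)->null_value || (*b)->null_value)
        {
          owner->null_value= 1;
          return -1;
        }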
      453107bc