- 24 Sep, 2010 5 commits
-
-
Konstantin Osipov authored
-
Davi Arnaut authored
-
Davi Arnaut authored
Temporarily disable strict aliasing warnings in order to get wider coverage for optimized builds. Once the violations are fixed and false-positives silenced, this flag should be removed.
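For context, a minimal C/C++ illustration (not from the MySQL tree) of the kind of construct these warnings flag in optimized builds, next to the memcpy() form that is well defined and stays quiet without turning the warning off; with GCC, -Wno-strict-aliasing or -fno-strict-aliasing are the usual temporary knobs while such violations are cleaned up:

```
#include <stdio.h>
#include <string.h>

/* Classic strict-aliasing violation: reading a float's bytes through an
 * unsigned int pointer (assumes sizeof(float) == sizeof(unsigned int),
 * true on common platforms).  GCC's -Wstrict-aliasing flags the cast at -O2. */
static unsigned int bits_via_cast(float f)
{
    return *(unsigned int *) &f;        /* violates strict aliasing */
}

/* The well-defined alternative: copy the bytes instead of type-punning. */
static unsigned int bits_via_memcpy(float f)
{
    unsigned int u;
    memcpy(&u, &f, sizeof u);
    return u;
}

int main(void)
{
    printf("%08x %08x\n", bits_via_cast(1.0f), bits_via_memcpy(1.0f));
    return 0;
}
```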
-
Dmitry Shulga authored
-
Dmitry Shulga authored
-
- 22 Sep, 2010 2 commits
-
-
Dmitry Shulga authored
-
Dmitry Shulga authored
-
- 21 Sep, 2010 2 commits
-
-
Ingo Struewing authored
Null-merge from 5.1
-
Ingo Struewing authored
Merge from saved bundle.
-
- 19 Sep, 2010 1 commit
-
-
Joerg Bruehe authored
-
- 17 Sep, 2010 3 commits
-
-
Davi Arnaut authored
The problem was that the x86 assembly based atomic CAS (compare and swap) implementation could copy the wrong value to the ebx register, where the cmpxchg8b expects to see part of the "comparand" value. Since the original value in the ebx register is saved in the stack (that is, the push instruction causes the stack pointer to change), a wrong offset could be used if the compiler decides to put the source of the comparand value in the stack. The solution is to copy the comparand value directly from memory. Since the comparand value is 64-bits wide, it is copied in two steps over to the ebx and ecx registers.
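A hedged sketch of the approach described, not the actual my_atomic code: a 64-bit CAS for 32-bit x86 GCC builds in which ebx is saved and restored (it may be reserved as the PIC register) and the 64-bit value that cmpxchg8b expects in ecx:ebx is read directly from memory through esi, so that no operand depends on the stack pointer the push just moved. Function name, operand constraints, and register choices are illustrative.

```
/* Sketch only: 64-bit compare-and-swap via cmpxchg8b on 32-bit x86 (GCC).
 * Returns non-zero if *ptr was equal to *expected and was replaced by
 * newval; otherwise *expected is updated with the current value. */
static inline int cas64(volatile long long *ptr, long long *expected,
                        long long newval)
{
    char ok;
    __asm__ __volatile__(
        "pushl %%ebx            \n\t"   /* ebx may be the PIC register        */
        "movl  (%%esi), %%ebx   \n\t"   /* low 32 bits, straight from memory  */
        "movl  4(%%esi), %%ecx  \n\t"   /* high 32 bits, straight from memory */
        "lock; cmpxchg8b (%%edi)\n\t"
        "setz  %0               \n\t"
        "popl  %%ebx"
        : "=c" (ok), "+A" (*expected)   /* comparand stays in edx:eax         */
        : "S" (&newval), "D" (ptr)
        : "cc", "memory");
    return ok;
}
```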
-
Alfranio Correia authored
-
Alfranio Correia authored
-
- 16 Sep, 2010 8 commits
-
-
Sergey Glukhov authored
-
Sergey Glukhov authored
The subselect executes twice: at the JOIN::optimize stage and at the JOIN::execute stage. At the optimize stage the InnoDB prebuilt struct, which is used for the retrieval of column values, is initialized in ha_innobase::index_read() while prebuilt->sql_stat_start is true. After QUICK_ROR_INTERSECT_SELECT has finished its job, it restores the read_set/write_set bitmaps to their initial values, and one of the handlers used by QUICK_ROR_INTERSECT_SELECT is deactivated in JOIN::cleanup (this is the case when the original handler is reused as one of the handlers required by the QUICK_ROR_INTERSECT_SELECT object).

On the second subselect execution the inactive handler is activated in QUICK_RANGE_SELECT::reset via file->ha_index_init(). In ha_index_init() the InnoDB prebuilt struct is reinitialized with inappropriate read_set/write_set bitmaps, and further reinitialization in ha_innobase::index_read() does not happen because prebuilt->sql_stat_start is now false. This leads to partial retrieval of the required field values, so the record buffer ends up with a mix of field values from different records.

The fix is to reset the read_set/write_set bitmaps, as these values are required for proper initialization of the internal InnoDB struct used for the retrieval of column values (see build_template(), ha_innodb.cc).
-
Magne Mahre authored
-
Magne Mahre authored
adding new indexes

A fast alter table requires that the existing (old) table and indices are unchanged (i.e. only new indices can be added). To verify this, the layout and flags of the old table/indices are compared for equality with the new ones.

The PACK_KEYS option is a no-op in InnoDB, but the flag exists and is used in the table compare. We need to check this (table) option flag before deciding whether an index should be packed or not: if the table has explicitly set PACK_KEYS to 0, the created indices should not be marked as packed/packable.
-
Dmitry Shulga authored
-
Dmitry Shulga authored
compression protocol.

The loss of connection was caused by a malformed packet sent by the server when the query cache was in use. When storing data in the query cache, the query cache memory allocation algorithm had a tendency to reduce the number of memory blocks necessary to store a result set, up to finally storing the entire result set in a single block. With a significant result set, this memory block could turn out to be quite large, 30-40 MB or more.

When such a result set was sent to the client, the entire memory block was compressed and written to the network as a single network packet. However, the length of a network packet is limited to 0xFFFFFF (16MB), since the packet format only allows 3 bytes for the packet length. As a result, a malformed, overly large packet with a truncated length would be sent to the client and break the client/server protocol.

The solution is, when sending result sets from the query cache, to ensure that the data is chopped into network packets of size <= 16MB, so that the packet length is not corrupted.

This solution, however, has a shortcoming: since the result set is still stored in the query cache as a single block, at the time of sending we have lost the boundaries of the individual logical packets (one logical packet = one row of the result set) and thus can end up sending a truncated logical packet in a compressed network packet. As a result, the client may require more memory than max_allowed_packet to keep both the truncated last logical packet and the next compressed packet. This never (or in practice never) happens without compression, since without compression it is very unlikely that a) a truncated logical packet would remain on the client when it is time to read the next packet, and b) a subsequent logical packet being read would be so large that size-of-new-packet + size-of-old-packet-tail > max_allowed_packet.

To remedy this issue, we send data in 1MB-sized packets: that is below the current client default of 16MB for max_allowed_packet, but large enough to avoid unnecessary overhead from too many syscalls per result set.
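A minimal sketch of the chunking idea, not the actual query cache code (write_packet and its context argument are assumed stand-ins for the real network layer): one large cached block is written as a sequence of packets no larger than a fixed chunk size, 1MB as above:

```
#include <stddef.h>
#include <stdio.h>

#define CHUNK_SIZE (1024UL * 1024UL)   /* 1MB, as in the message above */

/* Stand-in for the real send primitive (assumed, not MySQL's API):
 * returns 0 on success, non-zero on error. */
typedef int (*write_packet_fn)(void *ctx, const unsigned char *data, size_t len);

/* Send one large cached block as a sequence of packets whose individual
 * size never exceeds CHUNK_SIZE, so the 3-byte packet length field in the
 * client/server protocol cannot overflow. */
static int send_in_chunks(void *ctx, write_packet_fn write_packet,
                          const unsigned char *block, size_t len)
{
    while (len > 0)
    {
        size_t chunk = len > CHUNK_SIZE ? CHUNK_SIZE : len;
        if (write_packet(ctx, block, chunk))
            return 1;                   /* propagate network error */
        block += chunk;
        len   -= chunk;
    }
    return 0;
}

/* Demo writer: just reports each packet size. */
static int print_packet(void *ctx, const unsigned char *data, size_t len)
{
    (void) ctx; (void) data;
    printf("packet of %zu bytes\n", len);
    return 0;
}

int main(void)
{
    static unsigned char block[40UL * 1024UL * 1024UL];   /* ~40MB cached block */
    return send_in_chunks(NULL, print_packet, block, sizeof block);
}
```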
-
Mikael Ronstrom authored
-
Mikael Ronstrom authored
-
- 14 Sep, 2010 1 commit
-
-
Mattias Jonsson authored
-
- 13 Sep, 2010 8 commits
-
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
Martin Hansson authored
-
Martin Hansson authored
ORDER BY computed col

GROUP BY implies ORDER BY in the MySQL dialect of SQL. Therefore, when an index on the first table in the query is used, and that index satisfies ordering according to the GROUP BY clause, the query optimizer estimates the number of tuples that need to be read from this index. If there is a LIMIT clause, table statistics on the tables following this 'sort table' are employed.

There may be a separate ORDER BY clause, however, which mandates reading the whole 'sort table' anyway; but the previous estimate was left untouched.

Fixed by removing the estimate from EXPLAIN output if GROUP BY is used in conjunction with an ORDER BY clause that mandates using a temporary table.
-
Joerg Bruehe authored
The first part is the functional change, the second is needed as a compile fix on Windows (header file order).

| committer: Marc Alff <marc.alff@oracle.com>
| branch nick: mysql-5.5-bugfixing-56521
| timestamp: Thu 2010-09-09 14:28:47 -0600
| message:
|   Bug#56521 Assertion failed: (m_state == 2), function allocated_to_free, pfs_lock.h (138)
|
|   Before this fix, it was possible to build the server:
|   - with the performance schema
|   - with a dummy implementation of my_atomic (MY_ATOMIC_MODE_DUMMY).
|
|   In this case, the resulting binary will just crash, as this configuration is not supported.
|
|   This fix enforces that the build will fail with a compilation error in this configuration, instead of resulting in a broken binary.

| committer: Tor Didriksen <tor.didriksen@oracle.com>
| branch nick: 5.5-bugfixing-56521
| timestamp: Fri 2010-09-10 11:10:38 +0200
| message:
|   Header files should be self-contained
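A sketch of the compile-time enforcement idiom described in the first merged change; the exact guard in the performance schema sources may differ, and the macro name is taken from the message above:

```
/* Refuse to build the performance schema on top of the dummy (non-atomic)
 * my_atomic implementation: a compile error is preferable to a binary
 * that crashes at runtime.  Illustrative placement only. */
#ifdef MY_ATOMIC_MODE_DUMMY
#error "The performance schema requires a functional my_atomic implementation"
#endif
```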
-
Gleb Shchepa authored
-
Gleb Shchepa authored
Version "5.1.42 SUSE MySQL RPM" When a query was using a DATE or DATETIME value formatted using different formatting than "yyyy-mm-dd HH:MM:SS", a query with a greater-or-equal '>=' condition matched only greater values in an indexed TIMESTAMP column. The problem was introduced by the fix for the bug 46362 and partially solved (for DATE and DATETIME columns only) by the fix for the bug 47925. The stored_field_cmp_to_item function has been modified to take into account TIMESTAMP columns like we do for DATE and DATETIME columns.
-
- 10 Sep, 2010 6 commits
-
-
Joerg Bruehe authored
This is not the final merge!
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
Bjorn Munch authored
-
Bjorn Munch authored
-
Bjorn Munch authored
-
- 09 Sep, 2010 4 commits
-
-
Alexey Kopytov authored
to 5.5 (removed one test case as it is no longer valid).
-
Alexey Kopytov authored
The patch caused some test failures when merged to 5.5 because, unlike 5.1, it utilizes Item_cache_row to actually cache row values. The problem was that Item_cache_row::bring_value() essentially did nothing. In particular, it did not update its null_value, so all Item_cache_row objects always had their null_value set to TRUE. This went unnoticed previously, but now that Arg_comparator::compare_row() actually depends on the row's null_value to evaluate the comparison, the problem has surfaced.

Fixed by calling the underlying item's bring_value() and updating null_value in Item_cache_row::bring_value(). Since the problem also exists in the 5.1 code (albeit hidden, since the relevant code is not used anywhere), the addendum patch is against 5.1.
-
Alexey Kopytov authored
-
Alexey Kopytov authored
result

Row subqueries producing no rows were not handled as UNKNOWN values in row comparison expressions. That was a result of the following two problems:

1. Item_singlerow_subselect did not mark the resulting row value as NULL/UNKNOWN when no rows were produced.
2. Arg_comparator::compare_row() did not take into account that a whole argument may be NULL rather than just individual scalar values.

Before bug#34384 was fixed, the above problems were hidden because an uninitialized (i.e. without any stored value) cached object would appear as NULL for scalar values in a row subquery returning an empty result. After that fix, Arg_comparator::compare_row() would try to evaluate uninitialized cached objects.

Fixed by removing the aforementioned problems.
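A self-contained toy model, not MySQL internals (every name below is invented for illustration), of the three-valued semantics described: a row subquery that produced no rows compares as UNKNOWN, and per-column NULLs leave the comparison UNKNOWN unless some column definitely differs (C++17):

```
#include <cstddef>
#include <iostream>
#include <optional>
#include <vector>

// A "row value" is either absent (UNKNOWN, e.g. a row subquery that
// produced no rows) or a vector of nullable scalars.  Rows are assumed
// to have the same degree.
using Scalar = std::optional<long>;
using Row    = std::optional<std::vector<Scalar>>;

enum class Tri { False, True, Unknown };

static Tri row_equal(const Row &a, const Row &b)
{
    if (!a || !b)                        // whole argument is NULL/UNKNOWN
        return Tri::Unknown;
    Tri result = Tri::True;
    for (std::size_t i = 0; i < a->size(); ++i)
    {
        if (!(*a)[i] || !(*b)[i])        // per-column NULL: at best UNKNOWN
            result = Tri::Unknown;
        else if (*(*a)[i] != *(*b)[i])   // definite mismatch wins: FALSE
            return Tri::False;
    }
    return result;
}

int main()
{
    Row empty_subquery;                           // subquery returned no rows
    Row literal = std::vector<Scalar>{1, 2};
    std::cout << (row_equal(literal, empty_subquery) == Tri::Unknown) << '\n';  // prints 1
}
```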
-