- 31 Aug, 2009 7 commits
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Anurag Shekhar authored
-
Anurag Shekhar authored
-
- 30 Aug, 2009 3 commits
-
Staale Smedseng authored
-
Alexey Kopytov authored
-
Alexey Kopytov authored
A bare field used as a condition could result in a server crash: check_group_min_max_predicates() assumed the input condition item to be one of COND_ITEM, SUBSELECT_ITEM, or FUNC_ITEM. Since a condition of the form "field" is also a valid condition, equivalent to "field <> 0", using such a condition in a query for which the loose index scan was chosen resulted in a debug assertion failure. Fixed by handling conditions of the FIELD_ITEM type in check_group_min_max_predicates().
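A minimal sketch of the query shape involved (schema and names are illustrative, not taken from the patch): an index suitable for loose index scan plus a bare column used as the WHERE condition.

    CREATE TABLE t1 (a INT, b INT, KEY (a, b));
    -- "WHERE b" is a valid condition, equivalent to "b <> 0"; combined
    -- with MIN()/GROUP BY it can select the loose index scan:
    SELECT a, MIN(b) FROM t1 WHERE b GROUP BY a;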
-
- 29 Aug, 2009 1 commit
-
If an EVENT is created without the DEFINER clause set explicitly, or with it set to CURRENT_USER, the master and slaves become inconsistent. The issue stems from the fact that in both cases the DEFINER is set to the CURRENT_USER of the current thread: on the master, CURRENT_USER is the mysqld user, while on the slave CURRENT_USER is empty for the SQL thread responsible for executing the statement. To fix the problem: if the definer is not set explicitly, a DEFINER clause is added when the query is written to the binlog; if CURRENT_USER is used as the DEFINER, it is replaced with the value of the current user before writing to the binlog.
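A sketch of the rewrite (the exact binlog text may differ; the account name here is illustrative):

    CREATE EVENT ev1 ON SCHEDULE EVERY 1 DAY DO DELETE FROM db1.old_rows;
    -- is written to the binary log with the definer made explicit, roughly:
    CREATE DEFINER=`root`@`localhost` EVENT ev1
        ON SCHEDULE EVERY 1 DAY DO DELETE FROM db1.old_rows;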
-
- 28 Aug, 2009 8 commits
-
Davi Arnaut authored
-
Staale Smedseng authored
-
Staale Smedseng authored
This patch fixes a number of GCC warnings (seen with gcc 4.3.2) about variables being used before being initialized. A new macro UNINIT_VAR() is introduced for use in variable declarations, and LINT_INIT() usage will be gradually deprecated. (A workaround is used for g++, pending a patch for a g++ bug.) GCC warnings about unused results (attribute warn_unused_result) for a number of system calls (present at least in later Ubuntu releases, where the usual void-cast trick does not work) are also fixed.
-
Davi Arnaut authored
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
Alfranio Correia authored
-
Alfranio Correia authored
-
- 27 Aug, 2009 7 commits
-
Alfranio Correia authored
When a connection is dropped, any remaining temporary tables are also automatically dropped, and the SQL statement for this operation is written to the binary log in order to drop such tables on the slave and keep the slave in sync. Specifically, the current code base creates the following type of statement: DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `db`.`table`; Unfortunately, appending the database to the table name in this manner circumvents the replicate-rewrite-db option (and any options that check the current database). To solve the issue, we now write the statement to the binary log as follows: use `db`; DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `table`;
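For example, with a rewrite rule such as the following on the slave (names illustrative), the old form was never rewritten because the database was embedded in the table name, while the new form goes through the current database and is rewritten as expected:

    [mysqld]
    replicate-rewrite-db="db->db_slave"

    -- old binlog form, bypasses the rewrite rule:
    DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `db`.`table`;
    -- new binlog form:
    use `db`;
    DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `table`;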
-
Alfranio Correia authored
The slave does not correctly handle "expected errors", leading to inconsistencies between the master and slave. Specifically, when a statement changes both transactional and non-transactional tables, the transactional changes are automatically rolled back on the master, but the slave ignores the error and does not roll them back, thus leading to inconsistencies. To fix the problem, we automatically roll back a statement that fails on the slave; note, however, that the transaction is not rolled back unless a "rollback" command is in the relay log file.
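A sketch of the kind of statement affected, assuming t_innodb is transactional and t_myisam is not (names illustrative): if such a statement fails part-way, the InnoDB changes are rolled back on the master, and with this fix the failing statement is rolled back on the slave as well.

    UPDATE t_innodb, t_myisam
       SET t_innodb.a = 1, t_myisam.b = 2
     WHERE t_innodb.id = t_myisam.id;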
-
Georgi Kodinov authored
This error requires a combination of factors:
1. an "impossible WHERE" in the outermost SELECT;
2. an aggregate in the outermost SELECT;
3. a correlated subquery with a WHERE clause that includes an outer field reference as a top-level sargable predicate.
When JOIN::optimize() detects an "impossible WHERE" it bails out without doing the rest of the work and initializations; in particular, it does not call make_join_statistics(), which fills in various structures for each referenced table. When processing the result of the "impossible WHERE", the query must send a single row of data if there are aggregate functions in it. In this case the server marks all the aggregates as having received no rows and calls the relevant Item::val_xxx() method on the SELECT list. However, if this SELECT list contains a correlated subquery, the subquery is evaluated in the normal evaluation mode. And if this correlated subquery has a reference to a field from the outermost "impossible WHERE" SELECT, add_key_fields() mistakenly considers the outer field reference a "local" field reference when looking for sargable predicates. But since the SELECT that the outer field reference refers to is not completely initialized, due to the "impossible WHERE" at that level, we get a NULL pointer dereference. Fixed by using a better condition for discovering whether a field is "local" to the SELECT level being processed: it is not enough to look for OUTER_REF_TABLE_BIT, since for outer references to constant tables Item_field::used_tables() returns 0 regardless of whether the field reference is from the local SELECT or not.
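A minimal sketch combining the three factors (illustrative schema):

    CREATE TABLE t1 (a INT, b INT);
    -- 1. impossible WHERE in the outermost SELECT (a = 1 AND a = 2),
    -- 2. an aggregate (MAX) in the outermost SELECT,
    -- 3. a correlated subquery whose WHERE references the outer t1.b:
    SELECT MAX(a),
           (SELECT 1 FROM t1 AS t2 WHERE t2.b = t1.b LIMIT 1)
      FROM t1
     WHERE a = 1 AND a = 2;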
-
Sergey Glukhov authored
-
Sergey Glukhov authored
The crash happens because a select_union object is used as the result set for queries that have derived tables. select_union uses a temporary table as data storage, and if the field count exceeds 10 (the number of values produced by PROCEDURE ANALYSE()) we get a crash in the fill_record() function.
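A sketch of the crashing shape: a derived table whose field count exceeds the ten values PROCEDURE ANALYSE() produces (columns are illustrative):

    SELECT * FROM (
        SELECT 1 c1, 2 c2, 3 c3, 4 c4, 5 c5, 6 c6,
               7 c7, 8 c8, 9 c9, 10 c10, 11 c11
    ) AS d PROCEDURE ANALYSE();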
-
Alfranio Correia authored
Updated main.mysqlbinlog_row_trans's result file as TRUNCATE statements are wrapped in BEGIN...COMMIT.
-
Georgi Kodinov authored
-
- 26 Aug, 2009 5 commits
-
Alfranio Correia authored
Mixing transactional (T) and non-transactional (N) tables on behalf of a transaction may lead to inconsistencies between master and slaves in STATEMENT mode. The problem stems from the fact that although modifications done to non-transactional tables on behalf of a transaction become immediately visible to other connections, they do not immediately reach the binary log, and therefore consistency is broken. Although there may be issues in mixing T and N tables in STATEMENT mode, there are safe combinations that clients find useful. In this bug, we fix the following issue: mixing N and T tables in multi-level statements (e.g. a statement that fires a trigger) or multi-table statements (e.g. UPDATE t1, t2 ...) was not handled correctly. In such cases it was not possible to tell that a T table was updated when the sequence of changes was N then T; in a nutshell, the flag "modified_non_trans_table" alone was not enough to reflect that both an N and a T table were changed. To circumvent this issue, we check whether an engine is registered in the handler's list and has changed something, which means that a T table was modified. Check WL 2687 for a full-fledged patch that will make the use of either the MIXED or ROW modes completely safe.
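A sketch of the multi-level case: a statement on a non-transactional table that fires a trigger into a transactional one (engines and names illustrative):

    CREATE TABLE n1 (a INT) ENGINE=MyISAM;
    CREATE TABLE t1 (a INT) ENGINE=InnoDB;
    CREATE TRIGGER n1_ai AFTER INSERT ON n1
        FOR EACH ROW INSERT INTO t1 VALUES (NEW.a);
    -- one statement now changes both an N table and a T table; the fix
    -- detects the T-side change via the engines registered in the handler:
    INSERT INTO n1 VALUES (1);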
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
Mattias Jonsson authored
The problem was that the partition containing NULL values was pruned away, since '2001-01-01' < '2001-02-00' but TO_DAYS('2001-02-00') is NULL. The NULL partition is now also scanned for RANGE/LIST partitioning on the TO_DAYS() function. Also fixed a bug that added ALLOW_INVALID_DATES to sql_mode (SELECT * FROM t WHERE date_col < '1999-99-99' on a RANGE/LIST partitioned table would add it).
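A sketch of the pruning scenario (illustrative schema). In RANGE partitioning, rows whose partitioning function evaluates to NULL are stored in the lowest partition, which is the one that used to be pruned away:

    CREATE TABLE t (d DATE)
    PARTITION BY RANGE (TO_DAYS(d)) (
        PARTITION p0 VALUES LESS THAN (TO_DAYS('2001-02-01')),
        PARTITION p1 VALUES LESS THAN (MAXVALUE)
    );
    INSERT INTO t VALUES ('2001-01-01');
    -- TO_DAYS('2001-02-00') is NULL, which used to make pruning skip p0
    -- even though it holds matching rows such as '2001-01-01'; the NULL
    -- partition is now scanned as well:
    SELECT * FROM t WHERE d < '2001-02-00';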
-
Mattias Jonsson authored
There was a problem because pruning uses the field for comparison (while evaluate_join_record() uses longlong), resulting in pruning failures when comparing DATE to DATETIME. The fix is to always compare DATE vs. DATETIME as DATETIME, by appending ' 00:00:00' to the DATE string, and to add an optimization for comparisons with 23:59:59, so that DATETIME_col > '2001-02-03 23:59:59' becomes TO_DAYS(DATETIME_col) > TO_DAYS('2001-02-03 23:59:59') instead of '>='.
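A sketch of the rewritten endpoint (illustrative schema):

    CREATE TABLE t (dt DATETIME)
    PARTITION BY RANGE (TO_DAYS(dt)) (
        PARTITION p0 VALUES LESS THAN (TO_DAYS('2001-02-04')),
        PARTITION p1 VALUES LESS THAN (MAXVALUE)
    );
    -- the 23:59:59 endpoint lets the pruner use a strict inequality:
    --   TO_DAYS(dt) > TO_DAYS('2001-02-03 23:59:59')  instead of '>='
    SELECT * FROM t WHERE dt > '2001-02-03 23:59:59';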
-
- 24 Aug, 2009 6 commits
-
Davi Arnaut authored
The problem was that creating a DECIMAL column from a decimal value could lead to a failed assertion, as decimal values can have a higher precision than those attached to a table. The assert could be triggered by creating a table from a decimal with a large (> 30) scale. There was also a problem in calculating the number of digits in the integral and fractional parts if both exceeded the maximum number of digits permitted by the new decimal type. The solution is to ensure that the truncation procedure is executed when deducing a DECIMAL column from a decimal value of higher precision. If the integer part is equal to or bigger than the maximum precision for the DECIMAL type (65), the integer part is truncated to fit and the fractional part becomes zero. Otherwise, the fractional part is truncated to fit into the space left after the integer part is copied. This patch borrows code and ideas from Martin Hansson's patch.
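A sketch of the trigger case: a column deduced from a decimal literal whose scale exceeds the 30 supported by the DECIMAL type, which must now be truncated rather than hit the assertion:

    -- the literal below has a scale of 31:
    CREATE TABLE t1 SELECT 0.1234567890123456789012345678901 AS c1;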
-
Georgi Kodinov authored
The code was using a special global buffer for the value of IS NULL ranges. This buffer was not always long enough to be copied with a regular memcpy(), so read buffer overflows could occur. Fixed by setting the null byte to 1 and setting the rest of the field's disk image to NULL with a bzero() (instead of relying on the buffer and memcpy()).
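A sketch of the access pattern involved (illustrative schema): a ref/range read over an IS NULL interval on an indexed, nullable column.

    CREATE TABLE t1 (a INT, filler CHAR(200), KEY (a));
    SELECT * FROM t1 WHERE a IS NULL;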
-
Alfranio Correia authored
-
Alfranio Correia authored
-
Jonathan Perkin authored
- Add conditionals for bundled zlib and the InnoDB plugin.
- Apply patch from bug#46834 to install the test suite in RPMs.
- Add plugins to RPMs; disable example plugins.
-
Anurag Shekhar authored
Bulk inserts (multiple-row INSERT, CREATE ... SELECT, INSERT ... SELECT) into MyISAM tables were performed inefficiently. This mainly affected use cases where read_buffer_size was considerably large (> 256K) and a low number of rows was inserted (e.g. 30-100). The problem was that during I/O cache initialization (which happens before each bulk insert) the allocated I/O buffer was unnecessarily initialized to '\0'. This happened because of a mix-up in flag values: MyISAM tells the I/O cache to wait for free space (if out of disk space) by passing the MY_WAIT_IF_FULL flag, but since MY_WAIT_IF_FULL and MY_ZEROFILL have the same value, the memory allocator was initializing the memory to '\0'. The performance gain provided by this patch may only be visible with non-debug binaries, since safemalloc always initializes allocated memory to 0xA5A5...
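The affected pattern, for reference (names illustrative): any bulk-insert path initializes the I/O cache, and the gain shows up with a large read_buffer_size and few rows.

    SET SESSION read_buffer_size = 1024 * 1024;  -- considerably > 256K
    -- the I/O buffer allocated for this insert is no longer zero-filled:
    INSERT INTO t_myisam SELECT * FROM src LIMIT 100;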
-
- 21 Aug, 2009 3 commits
-
Mattias Jonsson authored
-
Mattias Jonsson authored
The problem was that when bulk insert (e.g. INSERT ... SELECT) is used on an empty table/partition, the indexes are disabled for better performance, but in this specific case the server also tries to read from that partition using an index, which is not possible since it has been disabled. The solution was to allow index reads on disabled indexes when there are no records. Also reverted the patch for bug#38005, since that was a workaround in the partitioning engine rather than a fix in MyISAM.
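One shape that exercises this path might look as follows (a sketch, not the test case from the patch): a bulk INSERT ... SELECT into an empty partitioned MyISAM table where duplicate-key handling needs an index read.

    CREATE TABLE t1 (a INT PRIMARY KEY)
        ENGINE=MyISAM PARTITION BY HASH (a) PARTITIONS 2;
    CREATE TABLE t2 (a INT);
    INSERT INTO t2 VALUES (1), (1);
    -- the bulk insert disables t1's indexes (empty partitions), yet the
    -- duplicate-key check wants to read them:
    INSERT INTO t1 SELECT a FROM t2
        ON DUPLICATE KEY UPDATE a = VALUES(a) + 1;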
-
Georgi Kodinov authored
-