- 16 Sep, 2009 3 commits
-
-
Alexander Nozdrin authored
-
Alexander Nozdrin authored
-
Alexander Nozdrin authored
-
- 15 Sep, 2009 2 commits
-
-
Kristofer Pettersson authored
-
Kristofer Pettersson authored
-
- 13 Sep, 2009 2 commits
-
-
Luis Soares authored
The test case rpl_do_grant fails sporadically on PB2 with "Access denied for user 'create_rout_db'@'localhost' ...". Inspecting the test case, one may find that it issues a GRANT on the master connection and immediately afterwards creates two new connections (one to the master and one to the slave) using the credentials set up by the GRANT. Unfortunately, there is no synchronization between master and slave after the grant and before the connections are established, so the slave may not have executed the GRANT by the time the connection is attempted. This patch fixes the problem by deploying a sync_slave_with_master between the grant and the connection attempts.
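A minimal mysqltest sketch of the fix described above (the account name is taken from the error message; privilege, password and connection names are illustrative):

```sql
# On master: create the account the new connections will use.
connection master;
GRANT CREATE ROUTINE ON *.* TO 'create_rout_db'@'localhost'
  IDENTIFIED BY 'pass';

# Make sure the slave has replicated the GRANT before any
# connection is attempted with the new credentials.
sync_slave_with_master;

# Only now is it safe to connect to either server as that user.
connect (master2, localhost, create_rout_db, pass, test, $MASTER_MYPORT);
connect (slave2, localhost, create_rout_db, pass, test, $SLAVE_MYPORT);
```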
-
Luis Soares authored
The test case creates two temporary tables, then closes the connection, waits for it to disconnect, syncs the slave with the master, checks for remaining open temporary tables on the slave (there should be none) and finally drops the database used (mysqltest). Unfortunately, the test sometimes fails with one open table on the slave. This is caused by the fact that waiting for the connection to close is not sufficient: the disconnect is asynchronous with respect to binlogging of the DROP temporary table statement, so the test needs to wait for the DROP event to be logged and only then synchronize the slave with the master and proceed with the check. We fix this by deploying a call to wait_for_binlog_event.inc in the test case, which makes execution wait for the DROP temp tables event before synchronizing master and slave.
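A hedged mysqltest sketch of the new wait (connection name and table definitions are illustrative):

```sql
# Create temporary tables on a disposable connection.
connection con1;
CREATE TEMPORARY TABLE t1 (a INT);
CREATE TEMPORARY TABLE t2 (a INT);
disconnect con1;

# The implicit DROP of the temp tables is binlogged asynchronously,
# so wait for it before synchronizing the slave.
connection master;
--let $wait_binlog_event= DROP
--source include/wait_for_binlog_event.inc
sync_slave_with_master;
SHOW STATUS LIKE 'Slave_open_temp_tables';
```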
-
- 11 Sep, 2009 2 commits
-
-
Mattias Jonsson authored
-
Ramil Kalimullin authored
Problem: LOGGER::general_log_write() relied on a valid "thd" parameter being passed, yet still carried an inconsistent "if (thd)" check. Fix: since we always pass a valid "thd" parameter to the method, the redundant check was removed.
-
- 10 Sep, 2009 11 commits
-
-
Alexander Nozdrin authored
-
Sergey Glukhov authored
-
Sergey Glukhov authored
The problem is that the argument buffer can be used as the result buffer, which leads to the argument value being changed. The fix is to use the 'old buffer' as the result buffer only if the first argument is not a constant item.
-
In RBR, there is an inconsistency between slave and master. When an INSERT statement that includes an auto_increment field is executed, the storage engine on the master checks the value of the auto_increment field: if its value is NULL or empty, it generates a sequence number and replaces the value, and if the value is 0 it does the same as for NULL unless NO_AUTO_VALUE_ON_ZERO is set in SQL_MODE. In contrast, when the field's value is 0 the storage engine on the slave always generates a new sequence number, whether or not NO_AUTO_VALUE_ON_ZERO is set in SQL_MODE, even though the SQL mode of the slave SQL thread is always consistent with the master's. Another variable is involved: whether a sequence number is generated is decided by the value of table->auto_increment_field_not_null together with SQL_MODE (whether it includes MODE_NO_AUTO_VALUE_ON_ZERO). table->auto_increment_field_not_null being FALSE is what causes this bug to appear.
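The master-side rules can be illustrated with plain SQL (a minimal sketch; table and values are made up):

```sql
CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, v INT);

-- Default SQL_MODE: 0 is treated like NULL, so the master
-- generates a sequence number (id becomes 1 here).
INSERT INTO t1 VALUES (0, 10);

-- With NO_AUTO_VALUE_ON_ZERO the explicit 0 must be stored as 0.
SET SQL_MODE = 'NO_AUTO_VALUE_ON_ZERO';
INSERT INTO t1 VALUES (0, 20);

-- Per the description above, a row-based slave used to regenerate
-- the value instead of keeping 0, so master and slave diverged.
```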
-
Sergey Glukhov authored
partial backport of bug43138 fix
-
Sergey Vojtovich authored
-
Alexander Nozdrin authored
on Windows in dbug.c) -- part 2: a patch for the DBUG subsystem to detect misuse of DBUG_ENTER / DBUG_RETURN macros. 5.1 version.
-
Sergey Vojtovich authored
-
Sergey Vojtovich authored
-
Sergey Vojtovich authored
-
Sergey Vojtovich authored
-
- 09 Sep, 2009 4 commits
-
-
Georgi Kodinov authored
-
Sergey Vojtovich authored
The Archive engine returns wrong values for average record length and max data length. With this fix they are calculated as follows:
- max data length is 2^63 where large files are supported, and INT_MAX32 where they are not;
- average record length is data length / records in the data file.
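A hedged SQL illustration of the corrected statistics (table and data are made up):

```sql
CREATE TABLE t1 (a INT, b VARCHAR(100)) ENGINE=ARCHIVE;
INSERT INTO t1 VALUES (1, 'x'), (2, 'y');

-- After the fix, SHOW TABLE STATUS should report:
--   Max_data_length: 2^63 on large-file builds, INT_MAX32 otherwise
--   Avg_row_length:  Data_length / Rows
SHOW TABLE STATUS LIKE 't1';
```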
-
Sergey Vojtovich authored
Creating a temporary InnoDB table fails on case-insensitive filesystems when lower_case_table_names is 2 (e.g. on OS X) and the temporary directory path contains upper-case letters. The problem was that the tmpdir prefix was converted to lower case when the table was created, but was passed as-is when the table was opened. Fixed by leaving the tmpdir prefix intact.
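A minimal sketch of the failing scenario (the tmpdir value is illustrative):

```sql
-- Server started with lower_case_table_names=2 on a
-- case-insensitive filesystem (e.g. OS X), with a tmpdir that
-- contains upper-case letters, say --tmpdir=/Users/Tmp.
-- Before the fix this failed: the create path used a lower-cased
-- tmpdir prefix while the subsequent open used the original one.
CREATE TEMPORARY TABLE t1 (a INT) ENGINE=InnoDB;
INSERT INTO t1 VALUES (1);
SELECT * FROM t1;
```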
-
Georgi Kodinov authored
Updates the results of all the outdated test suites and adds the special mysqltest command to enable InnoDB for the tests that need it.
-
- 08 Sep, 2009 2 commits
-
-
Georgi Kodinov authored
-
Alexey Kopytov authored
-
- 07 Sep, 2009 6 commits
-
-
Martin Hansson authored
-
Mikael Ronstrom authored
-
Martin Hansson authored
The parser rule for expressions in a udf parameter list contains two hacks: First, the parser input stream is read verbatim, bypassing the lexer. Second, the Item::name field is overwritten. If the argument to a udf was a field, the field's name as seen by name resolution was overwritten this way. If the field name was quoted or escaped, it would appear as e.g. "`field`". Fixed by not overwriting field names.
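A hedged SQL sketch of the symptom (the UDF name myudf is hypothetical; any loaded UDF taking a column argument would do):

```sql
CREATE TABLE t1 (`field` INT);
INSERT INTO t1 VALUES (1);

-- The parser recorded the argument text verbatim, so a quoted
-- column reached name resolution under the name "`field`"
-- (backticks included) instead of "field". Fixed by keeping the
-- field's real name.
SELECT myudf(`field`) FROM t1;
```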
-
Mikael Ronstrom authored
-
This test case uses mysqlbinlog to dump the content of master-bin.000001, but the content of master-bin.000001 is not what the test needs. MTR runs a lot of test cases on one server, so when this test starts, the current binlog file might not be master-bin.000001, or other events may already have been written to it by earlier tests. The 'RESET MASTER' command must be called at the beginning; it ensures that the binlog of this test is written to master-bin.000001 correctly. Three other tests have the same problem and were fixed together: mysqlbinlog-cp932, binlog_incident, binlog_tmp_table.
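A minimal mysqltest sketch of the pattern (the statements between RESET MASTER and FLUSH LOGS are illustrative):

```sql
# Start from a known binlog state so the events generated by this
# test are the only content of master-bin.000001.
RESET MASTER;
CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1);
FLUSH LOGS;

# Dump exactly this test's events.
let $MYSQLD_DATADIR= `select @@datadir`;
--exec $MYSQL_BINLOG $MYSQLD_DATADIR/master-bin.000001
```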
-
- 05 Sep, 2009 1 commit
-
-
Alexey Kopytov authored
The outer 'for' loop in remove_dup_with_compare() handled HA_ERR_RECORD_DELETED by just starting over without advancing to the next record, which caused an infinite loop. This condition could be triggered on certain data by a SELECT query containing DISTINCT, GROUP BY and HAVING clauses. Fixed remove_dup_with_compare() so that we always advance to the next record when receiving HA_ERR_RECORD_DELETED from rnd_next().
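A hedged SQL sketch of the triggering query shape (the hang was data-dependent, so this is purely illustrative):

```sql
-- Duplicate elimination over the temporary result runs
-- remove_dup_with_compare(); on certain data the old code spun
-- forever when rnd_next() returned HA_ERR_RECORD_DELETED.
SELECT DISTINCT a, b
FROM t1
GROUP BY a, b
HAVING COUNT(*) > 0;
```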
-
- 04 Sep, 2009 7 commits
-
-
Davi Arnaut authored
Selectively skip tests that require dynamic loading (i.e. on static builds), where they would otherwise fail with "The plugin feature is disabled, you need HAVE_DLOPEN".
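A hedged mysqltest sketch of the guard (the include file name follows the suite's have_*.inc convention and is an assumption):

```sql
# At the top of a test that loads plugins or UDFs dynamically:
# skip it on static builds, where dlopen() is not available.
--source include/have_dynamic_loading.inc
```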
-
Ramil Kalimullin authored
-
Mattias Jonsson authored
(Backport) Problem: when inserting (ha_start_bulk_insert) into a partitioned table, ha_start_bulk_insert is called for every partition, used or not. Solution: delay the call to a partition's ha_start_bulk_insert until the first row is to be inserted into that partition.
-
Kristofer Pettersson authored
Incremental patch, part 2: removing dead code and changing a note-level message to a warning.
-
Ramil Kalimullin authored
on subquery inside a SP. Problem: a repeated call of an SP containing an incorrect query with a subselect may lead to a failed ASSERT(). Fix: set the proper subselect state in case an error occurred during subquery transformation.
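A hedged SQL sketch of the reproduction pattern (names and the deliberately broken subquery are illustrative):

```sql
CREATE TABLE t1 (a INT);

DELIMITER //
CREATE PROCEDURE p1()
BEGIN
  -- The subquery is intentionally invalid, so its transformation
  -- fails with an error on every execution.
  SELECT a FROM t1 WHERE a IN (SELECT no_such_column FROM t1);
END//
DELIMITER ;

CALL p1();  -- first call: reports the error
CALL p1();  -- repeated call used to hit the failed ASSERT()
```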
-
Sergey Glukhov authored
-
Sergey Vojtovich authored
A SELECT with a join (not only a self-join) from an archive table may return an incomplete result set when the result set size exceeds the join buffer size. The problem was that the archive row counter was initialized too early, when the ha_archive::info() method was called. Later, when the optimizer exceeds the join buffer, it attempts to reuse the handler without calling ha_archive::info() again (which is correct). Fixed by moving row counter initialization from ha_archive::info() to ha_archive::rnd_init().
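A minimal SQL sketch of the affected query shape (sizes are illustrative; the issue shows up once the join result no longer fits in join_buffer_size):

```sql
CREATE TABLE t1 (a INT) ENGINE=ARCHIVE;
-- ... populate t1 with enough rows that the join below exceeds
-- the join buffer ...

SET SESSION join_buffer_size = 131072;

-- Before the fix this could return an incomplete result: the row
-- counter was set in info() and went stale when the handler was
-- reused after the join buffer filled up.
SELECT COUNT(*) FROM t1 x, t1 y WHERE x.a = y.a;
```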
-