- 18 Oct, 2013 3 commits
-
-
Luis Soares authored
Automerged from mysql-5.5 bug branch into latest mysql-5.5.
-
Aditya A authored
AS AN INNODB PARTITION. [Merged from 5.1]
-
Aditya A authored
AS AN INNODB PARTITION.
Problem: The correct engine_type was not being set during the rebuild of the partition, so the handler was always created with the default engine, which is InnoDB for 5.5+. Therefore, even if the table was MyISAM, the partitions ended up as InnoDB partitions after rebuilding.
Fix: Set the correct engine type during rebuild. [Approved by mattiasj #rb3599]
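A minimal C++ sketch of the idea behind the fix; the enum and struct names are simplified stand-ins invented for illustration, not the server's actual partitioning code.

    #include <cassert>

    // Simplified stand-ins for the server's engine identifiers
    // (hypothetical names, not the real MySQL symbols).
    enum EngineType { ENGINE_DEFAULT_INNODB, ENGINE_MYISAM };

    struct PartitionInfo {
      EngineType engine;  // engine the table/partition was created with
    };

    // Before the fix: the rebuilt handler ignored the table's engine and fell
    // back to the default, which is InnoDB on 5.5+.
    EngineType rebuild_partition_buggy(const PartitionInfo &) {
      return ENGINE_DEFAULT_INNODB;
    }

    // After the fix: the rebuild reuses the engine type recorded for the table,
    // so a MyISAM partition stays MyISAM.
    EngineType rebuild_partition_fixed(const PartitionInfo &part) {
      return part.engine;
    }

    int main() {
      PartitionInfo myisam_part = {ENGINE_MYISAM};
      assert(rebuild_partition_buggy(myisam_part) == ENGINE_DEFAULT_INNODB);  // the bug
      assert(rebuild_partition_fixed(myisam_part) == ENGINE_MYISAM);          // the fix
      return 0;
    }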
-
- 17 Oct, 2013 4 commits
-
-
Luis Soares authored
The assertion happens when: (i) the master and slave are configured to use the semisync plugin; (ii) the DBA disables semisync on the master; (iii) and he also unsets the option to wait for slave ACKs even if the semisync slave count reaches 0 during the waiting period. This combination of factors makes the server run into an assertion as soon as the last semisync slave disconnects and its dump thread exits.
The root of the problem is that when the dump thread disconnects and calls the observer hook transmit_stop, which ends up calling ReplSemiSyncMaster::remove_slave, there is no check whether the master has already disabled semisync or not. If it has, then a second call to the switch_off member function must be avoided.
The quick fix is to avoid calling switch_off if the DBA has disabled the semisync plugin interactively on the master. Also, the switch_off member function should only be called if the plugin has not been switched off already. This is basically the pattern throughout the rest of the semisync plugin, and no other calls seem vulnerable to similar crashes/assertions.
(This is a backport of the patch to 5.5, which is also vulnerable.)
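A self-contained sketch of the guard described in the fix; the class below is a toy model loosely named after ReplSemiSyncMaster, with invented members, not the real plugin code.

    #include <cassert>

    struct SemiSyncMaster {
      bool enabled = true;       // DBA has semisync enabled on the master
      bool switched_off = false; // plugin already switched itself off

      void switch_off() {
        // In this model, calling switch_off when semisync is already disabled
        // or already switched off is what the assertion guards against.
        assert(enabled && !switched_off);
        switched_off = true;
      }

      // Called from the transmit_stop observer hook when a dump thread exits.
      void remove_slave(int remaining_semisync_slaves) {
        if (remaining_semisync_slaves > 0) return;
        // The fix: only switch off if the DBA has not disabled the plugin and
        // it has not been switched off already.
        if (enabled && !switched_off) switch_off();
      }
    };

    int main() {
      SemiSyncMaster master;
      master.enabled = false;  // DBA disables semisync interactively
      master.remove_slave(0);  // last semisync slave disconnects: no assertion now
      return 0;
    }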
-
Anil Toshniwal authored
Implemented CHECK TABLE ... QUICK. Introduce CHECK TABLE ... QUICK, which skips the btr_validate_index() and btr_search_validate() calls and counts the number of records in each index. Approved by Marko and Kevin. (rb#3567).
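A rough sketch of what the QUICK path does; the placeholder functions below only stand in for the InnoDB validation and counting routines named above, with invented bodies.

    #include <cstdio>

    static bool btr_validate_index_stub() { return true; }  // expensive B-tree check
    static long count_index_records_stub() { return 42; }   // cheap record count

    long check_table(bool quick) {
      if (!quick) {
        // A full CHECK TABLE still validates the B-tree structure.
        if (!btr_validate_index_stub()) return -1;
      }
      // Both variants count the number of records in each index.
      return count_index_records_stub();
    }

    int main() {
      std::printf("QUICK check counted %ld records\n", check_table(true));
      return 0;
    }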
-
unknown authored
No commit message
-
Luis Soares authored
Merging mysql-5.5 bug branch into latest mysql-5.5.
-
- 16 Oct, 2013 6 commits
-
-
Venkatesh Duggirala authored
REPLICATION FILTERS ARE USED. Merging fix from mysql-5.1
-
Venkatesh Duggirala authored
REPLICATION FILTERS ARE USED.
Problem: When a filtered slave applies an Int_var_log_event and then tries to write the event to its own binlog, the LAST_INSERT_ID value is written wrongly.
Analysis: THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt is a variable which is set when LAST_INSERT_ID() is used by a statement. If it is set, first_successful_insert_id_in_prev_stmt_for_binlog will be stored in the statement-based binlog. This variable is CUMULATIVE along the execution of a stored function or trigger: if one substatement sets it to 1, it will stay 1 until the function/trigger ends, thus making sure that first_successful_insert_id_in_prev_stmt_for_binlog does not change anymore and is propagated to the caller for binlogging. This is achieved using the following code:
    if (!stmt_depends_on_first_successful_insert_id_in_prev_stmt)
    {
      /* It's the first time we read it */
      first_successful_insert_id_in_prev_stmt_for_binlog=
        first_successful_insert_id_in_prev_stmt;
      stmt_depends_on_first_successful_insert_id_in_prev_stmt= 1;
    }
After receiving an Int_var_log_event from the master, the slave server sets stmt_depends_on_first_successful_insert_id_in_prev_stmt to true (*which is wrong*) without setting first_successful_insert_id_in_prev_stmt_for_binlog. Because of this, when the actual DML statement with LAST_INSERT_ID() is parsed by the slave SQL thread, first_successful_insert_id_in_prev_stmt_for_binlog is not set. Hence the value zero (the default) is written to the slave's binlog.
Why only a *filtered slave* is affected when the code is in a common place: in Query_log_event::do_apply_event, THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt is reset to zero at the end of the function. On a normal slave (no filters) the variable is therefore reset. On a filtered slave, the SQL thread defers the execution of all IRU events until the IRU's Query_log_event is received; once it receives the Query_log_event it executes all pending IRU events and then executes the Query_log_event. Hence the variable is not reset to 0, causing this bug.
Fix: As described above, the root cause was setting THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt when Int_var_log_event was executed by the SQL thread. Hence the problematic line is removed from the code.
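A small C++ model of the flag-and-latch interaction described in the analysis; the struct and field names are shortened stand-ins for the THD members quoted above, and the two apply functions are illustrative only, not the real Int_var_log_event code.

    #include <cassert>
    #include <cstdint>

    struct ThdModel {
      bool stmt_depends_on_prev_insert_id = false;
      uint64_t prev_insert_id = 0;
      uint64_t prev_insert_id_for_binlog = 0;

      // Mirrors the cumulative pattern quoted above: the first reader latches
      // the value that will later be written to the binlog.
      void read_last_insert_id() {
        if (!stmt_depends_on_prev_insert_id) {
          prev_insert_id_for_binlog = prev_insert_id;  // first time we read it
          stmt_depends_on_prev_insert_id = true;
        }
      }
    };

    // Buggy behaviour on a filtered slave: the flag is set without latching
    // the *_for_binlog value, so the latch above never runs.
    void apply_int_var_event_buggy(ThdModel &thd, uint64_t value) {
      thd.prev_insert_id = value;
      thd.stmt_depends_on_prev_insert_id = true;  // the problematic line
    }

    // The fix simply stops setting the flag here.
    void apply_int_var_event_fixed(ThdModel &thd, uint64_t value) {
      thd.prev_insert_id = value;
    }

    int main() {
      ThdModel buggy, fixed;
      apply_int_var_event_buggy(buggy, 7);
      buggy.read_last_insert_id();
      assert(buggy.prev_insert_id_for_binlog == 0);  // zero ends up in the binlog

      apply_int_var_event_fixed(fixed, 7);
      fixed.read_last_insert_id();
      assert(fixed.prev_insert_id_for_binlog == 7);  // correct value is latched
      return 0;
    }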
-
Venkata Sidagam authored
Merging from mysql-5.1 to mysql-5.5
-
Venkata Sidagam authored
Description: The fix for bug CVE-2012-5611 (bug 67685) is incomplete. The ACL_KEY_LENGTH-sized buffers in acl_get() and check_grant_db() can be overflown by up to two bytes. That's probably not enough to do anything more serious than crashing mysqld.
Analysis: In acl_get(), "copy_length" is calculated by just adding the variable lengths. But when those strings are copied with strmov(), +1 is added to each offset, which leads to a three-byte buffer overflow (i.e. two +1's at the strmov() calls plus one byte for the null terminator added by strmov()). The same happens in check_grant_db().
Fix: Add "+2" to "copy_length" in acl_get() and "+1" to "copy_length" in check_grant_db().
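A compilable sketch of the size arithmetic behind the fix, using standard C string functions rather than the server's strmov()/ACL_KEY_LENGTH; the helper name and buffer layout are assumptions made for illustration.

    #include <cassert>
    #include <cstring>

    // An acl_get()-style key concatenates host, ip and db with a terminator
    // byte after each part. Sizing the buffer from the raw lengths alone (the
    // incomplete fix) misses those extra bytes.
    size_t key_bytes_needed(const char *host, const char *ip, const char *db) {
      // Complete fix, as described above: two extra bytes for the separators
      // plus one for the final terminator.
      return std::strlen(host) + std::strlen(ip) + std::strlen(db) + 2 + 1;
    }

    int main() {
      const char *host = "%", *ip = "127.0.0.1", *db = "test";
      char buf[64];
      size_t need = key_bytes_needed(host, ip, db);
      assert(need <= sizeof buf);

      // Build the key the way chained strmov() calls would: each part is
      // copied including its terminating byte.
      size_t off = 0;
      std::memcpy(buf + off, host, std::strlen(host) + 1); off += std::strlen(host) + 1;
      std::memcpy(buf + off, ip,   std::strlen(ip)   + 1); off += std::strlen(ip)   + 1;
      std::memcpy(buf + off, db,   std::strlen(db)   + 1); off += std::strlen(db)   + 1;
      assert(off == need);  // the corrected size matches exactly what was written
      return 0;
    }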
-
Sujatha Sivakumar authored
REPEATED TWICE IN BINLOG
Problem: If LOAD DATA ... SET ... is used, the last argument of SET is repeated twice in the replication binlog.
Analysis: LOAD DATA statements are reconstructed before they are written to the binary log. When SET clauses are specified as part of a LOAD DATA statement, the SET clause user command strings need to be stored in order to rebuild the original user command. During parsing, each column and value in the SET command are stored in two different lists; all the values are stored in a string list. When a SET expression has more than one value, as in: SET a = @A, b = CONCAT(@b, '| 123456789'); the parser extracts the values as item name, value string, and the actual length of the item's value within that string:
Item a: value string "= @A, b = CONCAT(@b, '| 123456789')", str_length = 4
Item b: value string "= CONCAT(@b, '| 123456789')", str_length = 27
While reconstructing the LOAD DATA command, the above strings were retrieved as-is and appended to the LOAD DATA statement, which therefore became: SET `a`= @A, b = CONCAT(@b, '| 123456789'), `b`= CONCAT(@b, '| 123456789')
Fix: While reconstructing the SET command, retrieve only the exact item value string rather than reading the entire string.
sql/sql_load.cc: Added code to extract the exact Item value.
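A short sketch of the reconstruction fix; rebuild_set_item and its arguments are invented stand-ins for the parser data described above, not the actual code in sql/sql_load.cc.

    #include <cassert>
    #include <string>

    // 'raw' is the tail of the user's SET clause as the parser saw it;
    // 'val_len' is the recorded length of the value belonging to this item.
    std::string rebuild_set_item(const std::string &name,
                                 const std::string &raw, size_t val_len) {
      // Buggy behaviour: append the whole remaining string, dragging the next
      // item's text along with it:
      //   return name + raw;
      // Fixed behaviour: append exactly the value that belongs to this item.
      return name + raw.substr(0, val_len);
    }

    int main() {
      // For "SET a = @A, b = CONCAT(@b, '| 123456789')" the parser stores, for
      // item 'a', the tail starting at its value and a length of 4 ("= @A").
      std::string rebuilt =
          rebuild_set_item("`a`", "= @A, b = CONCAT(@b, '| 123456789')", 4);
      assert(rebuilt == "`a`= @A");  // the trailing ", b = ..." is not duplicated
      return 0;
    }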
-
Sreedhar.S authored
-
- 14 Oct, 2013 2 commits
-
-
Nuno Carvalho authored
Merge from mysql-5.1 into mysql-5.5.
-
Nuno Carvalho authored
WL#7266: Dump-thread additional concurrency tests. This worklog aims at testing the following two scenarios:
1) Whenever the mysql_binlog_send method (dump thread) reaches end of file while reading events from the binlog, before checking whether it should wait for more events, there was a test to check if the file being read was still active, i.e. it was the last known binlog. However, it was possible that something was written to the binary log and a rotation then happened, after EOF was detected and before the check for "active" was performed. In this case, the end of the binary log would not be read by the dump thread, which would cause the slave to lose updates. This test verifies that the problem has been fixed: it waits during this window while forcing a rotation in the binlog.
2) Verify that the dump thread can correctly send events in the active file after encountering an I/O error.
-
- 09 Oct, 2013 4 commits
-
-
unknown authored
No commit message
-
Sreedhar.S authored
-
Praveenkumar Hulakund authored
AND 'KILL SESSION' LEAD TO CRASH
Analysis: This situation occurs when a connection executes the query "show engine innodb status" and is killed by another connection executing "kill <con>". In the function "innodb_show_status", the function "stat_print" is called to print the status, but its return value is not checked. After the connection is killed, if the write to the connection fails, an error is returned and set in the Diagnostics Area. Since FALSE is still returned from "innodb_show_status", the assert in "set_eof_status" (called from my_eof) that checks that no error is set fails.
Fix: Changed the code to check the return value of "stat_print" in "innodb_show_status".
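A toy model of the control flow in the fix; stat_print_stub and the status text are placeholders, and only the "check the return value and propagate the error" pattern reflects the commit.

    #include <cassert>
    #include <cstdio>

    static bool connection_killed = false;

    bool stat_print_stub(const char *text) {
      if (connection_killed) return true;  // 'true' models a failed write here
      std::puts(text);
      return false;
    }

    bool innodb_show_status_fixed() {
      // The fix: check stat_print's return value and report the error upward,
      // so my_eof()/set_eof_status() is never reached with an error already set.
      if (stat_print_stub("=== INNODB STATUS ===")) return true;  // error
      return false;                                               // success
    }

    int main() {
      connection_killed = true;
      assert(innodb_show_status_fixed());   // error is propagated, not ignored
      connection_killed = false;
      assert(!innodb_show_status_fixed());  // normal path prints and succeeds
      return 0;
    }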
-
Sreedhar.S authored
-
- 08 Oct, 2013 1 commit
-
-
Luis Soares authored
ReplSemiSyncMaster::updateSyncHeader contains redundant assignments to the local variable sync. This patch removes them.
-
- 07 Oct, 2013 8 commits
-
-
unknown authored
No commit message
-
Kent Boortz authored
-
unknown authored
No commit message
-
unknown authored
No commit message
-
unknown authored
No commit message
-
Yasufumi Kinoshita authored
-
Yasufumi Kinoshita authored
ha_innobase::records_in_range() should return HA_POS_ERROR, without requesting any pages, while the table's tablespace is discarded. Later handler method calls should then treat the error correctly. Approved by Sunny in rb#3433
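A simplified model of this behaviour; ha_rows and the HA_POS_ERROR marker are modelled locally and the handler type is invented, so only the "return the error marker without touching pages" logic mirrors the commit.

    #include <cassert>
    #include <limits>

    using ha_rows = unsigned long long;
    static const ha_rows HA_POS_ERROR_MODEL = std::numeric_limits<ha_rows>::max();

    struct TableModel {
      bool tablespace_discarded = false;
      ha_rows estimate_from_pages() { return 100; }  // would touch index pages
    };

    ha_rows records_in_range(TableModel &t) {
      // The fix: if the tablespace has been discarded, do not request any
      // pages; return the error marker so later handler calls can cope.
      if (t.tablespace_discarded) return HA_POS_ERROR_MODEL;
      return t.estimate_from_pages();
    }

    int main() {
      TableModel t;
      t.tablespace_discarded = true;
      assert(records_in_range(t) == HA_POS_ERROR_MODEL);
      return 0;
    }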
-
unknown authored
No commit message
-
- 06 Oct, 2013 1 commit
-
-
unknown authored
No commit message
-
- 05 Oct, 2013 1 commit
-
-
Praveenkumar Hulakund authored
Description: There are 2 issues reported in the bug report.
1. One session runs a "long" select; from another session you kill the first one while the select is running, and it receives the message "Server shutdown in progress". Reported: 02-Apr-2006. => This issue appears to have already been fixed in 2009 by the patch pushed for bug28141.
2. Killing a query which goes through filesort logs error entries like:
120416 9:17:28 [ERROR] mysqld: Sort aborted: Server shutdown in progress
120416 9:18:48 [ERROR] mysqld: Sort aborted: Server shutdown in progress
120416 9:19:39 [ERROR] mysqld: Sort aborted: Server shutdown in progress
Reported: 16-Apr-2012. => This issue was introduced in 5.5+ versions and is fixed by this patch.
Analysis: In the function "filesort()", an error message is logged on error. The message related to THD::killed_errno is also appended to it, if set (the THD::kill_errno value is obtained by calling the member function THD::killed_errno). In the scenario described in this bug report, when we kill the connection, THD::kill_errno is set to THD::KILL_CONNECTION. The enum value THD::KILL_CONNECTION corresponds to ER_SERVER_SHUTDOWN. Because of this, "Server shutdown in ..." is appended to the logged message.
Fix: Modified the "filesort()" code to append the "KILL_QUERY" status to the error message when the thread is killed and a server shutdown is not in progress.
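A minimal sketch of the message selection after the fix; the enum and strings below are simplified stand-ins for THD::killed_state and the server's error texts, not the real symbols.

    #include <cassert>
    #include <string>

    enum KillState { NOT_KILLED, KILL_QUERY, KILL_CONNECTION };

    std::string sort_abort_reason(KillState killed, bool shutdown_in_progress) {
      // The fix: if the thread was killed but no shutdown is in progress,
      // report an interrupted query instead of a server shutdown.
      if (killed != NOT_KILLED && !shutdown_in_progress)
        return "Sort aborted: Query execution was interrupted";
      if (shutdown_in_progress)
        return "Sort aborted: Server shutdown in progress";
      return "Sort aborted";
    }

    int main() {
      // Killing a connection no longer logs a misleading shutdown message.
      assert(sort_abort_reason(KILL_CONNECTION, false) !=
             "Sort aborted: Server shutdown in progress");
      return 0;
    }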
-
- 04 Oct, 2013 1 commit
-
-
unknown authored
No commit message
-
- 01 Oct, 2013 2 commits
-
-
unknown authored
No commit message
-
Mattias Jonsson authored
INDEX_READ_MAP HAD NO MATCH
If index_read_map is called for an exact search and no matching record exists, it positions the cursor on the next record but still leaves the relative position set to BTR_PCUR_ON. This makes a subsequent call to index_next read yet another record, instead of returning the record the cursor points to. Fixed by setting pcur->rel_pos = BTR_PCUR_BEFORE if an exact [prefix] search is done but fails. Also avoids optimistic restoration if rel_pos != BTR_PCUR_ON, since btr_cur may be different from old_rec. rb#3324, approved by Marko and Jimmy
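A self-contained sketch of the cursor bookkeeping in the fix; the enum and struct are simplified stand-ins for the persistent cursor and its BTR_PCUR_ON / BTR_PCUR_BEFORE positions, not the real btr_pcur code.

    #include <cassert>

    enum RelPos { PCUR_ON, PCUR_BEFORE };

    struct CursorModel {
      int record = 0;         // record the cursor currently points to
      RelPos rel_pos = PCUR_ON;
    };

    // Exact search that finds no match: the cursor lands on the next record.
    void index_read_exact_no_match(CursorModel &c, bool fixed) {
      c.record = 10;  // next record after the missing key
      // The fix: remember that the cursor is *before* the requested position,
      // so a later index_next does not skip this record.
      c.rel_pos = fixed ? PCUR_BEFORE : PCUR_ON;
    }

    int index_next(CursorModel &c) {
      if (c.rel_pos == PCUR_BEFORE) {
        c.rel_pos = PCUR_ON;
        return c.record;      // return the record we are already on
      }
      return c.record + 1;    // move one record forward
    }

    int main() {
      CursorModel buggy, fixed;
      index_read_exact_no_match(buggy, false);
      index_read_exact_no_match(fixed, true);
      assert(index_next(buggy) == 11);  // the bug: a record is skipped
      assert(index_next(fixed) == 10);  // the fix: no record is skipped
      return 0;
    }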
-
- 30 Sep, 2013 5 commits
-
-
Sreedhar.S authored
The mysql_install_db.pl file has always generated a lot of confusion on Windows. This fix makes sure it is removed from Windows only.
-
Sreedhar.S authored
Registry redirection was improper, and hence the value of the OLDERVERSION property could not be picked up. As a result, a null value was being passed in the install upgrade dialog.
-
Sreedhar.S authored
-
Yasufumi Kinoshita authored
log_buffer_extend() should fill the new buffer with 0.
-
Yasufumi Kinoshita authored
Changed to try to extend the log buffer instead of crashing when the log is too large for the buffer size. Approved by Marko in rb#3229
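A sketch of the "zero-fill and extend instead of crash" behaviour from this commit and the one above, using std::vector in place of InnoDB's log buffer; the method names are stand-ins for illustration only.

    #include <cassert>
    #include <vector>

    struct LogBufferModel {
      std::vector<unsigned char> buf;

      explicit LogBufferModel(size_t n) : buf(n, 0) {}

      void log_buffer_extend(size_t new_size) {
        // New space is zero-filled, matching the companion fix above.
        if (new_size > buf.size()) buf.resize(new_size, 0);
      }

      void write(size_t nbytes) {
        // Before the fix an oversized write crashed; now the buffer grows first.
        if (nbytes > buf.size()) log_buffer_extend(nbytes);
        assert(nbytes <= buf.size());
      }
    };

    int main() {
      LogBufferModel log(1024);
      log.write(4096);             // larger than the buffer: extended, not crashed
      assert(log.buf.size() >= 4096);
      assert(log.buf[2048] == 0);  // newly added bytes are zero-filled
      return 0;
    }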
-
- 27 Sep, 2013 2 commits
-
-
Satya Bodapati authored
The test case for this bug fails randomly for two reasons: 1. ibuf merges happening in the background; 2. dict stats updates which bring the evicted page back into the buffer pool. Fix ibuf_contract_ext() to not do any merges when ibuf_debug is enabled, and change dict_stats_update() to return fake statistics without bringing the secondary index pages into the buffer pool. Approved by Marko. rb#3419
-
unknown authored
No commit message
-