- 28 Oct, 2013 1 commit
-
-
Mattias Jonsson authored
-
- 25 Oct, 2013 2 commits
-
-
unknown authored
No commit message
-
sayantan dutta authored
-
- 21 Oct, 2013 2 commits
-
-
Jon Olav Hauglid authored
-Wl,--no-undefined (=-z defs) gives linking errors when used with WITH_ASAN. According to the documentation: "When linking shared libraries, the AddressSanitizer run-time is not linked, so -Wl,-z,defs may cause link errors (don’t use it with AddressSanitizer)." This patch turns off -Wl,--no-undefined if WITH_ASAN is used.
-
Aditya A authored
ON DELETE FROM A PARTITIONED TABLE

PROBLEM
-------
The user first disables all the non-unique indexes in the table and then rebuilds one partition. During the rebuild, the indexes on that particular partition are enabled. When a query is then issued, the optimizer is unaware that the indexes are enabled on only one partition; if the optimizer selects such an index, MyISAM considers the index inactive and returns an error.

FIX
---
Before rebuilding a partition, check whether the non-unique indexes are disabled on that partition. If they are, disable them again on the partition after the rebuild.

[Approved by Mattiasj #rb3469]
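A minimal C++ sketch of the fix idea, using hypothetical stand-in names rather than the real partitioning handler: remember whether the non-unique indexes were disabled before the rebuild and, if so, disable them again afterwards so the optimizer and MyISAM stay in agreement.

    // Hypothetical stand-in for a single partition's handler state.
    struct PartitionSketch {
      bool non_unique_keys_disabled = false;
      void rebuild() { non_unique_keys_disabled = false; }  // rebuild re-enables keys
      void disable_non_unique_keys() { non_unique_keys_disabled = true; }
    };

    // Sketch of the fix: check the flag before the rebuild and restore it after.
    void rebuild_partition_sketch(PartitionSketch &part) {
      const bool were_disabled = part.non_unique_keys_disabled;
      part.rebuild();
      if (were_disabled)
        part.disable_non_unique_keys();
    }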
-
- 19 Oct, 2013 1 commit
-
-
Mattias Jonsson authored
-
- 18 Oct, 2013 7 commits
-
-
Mattias Jonsson authored
Too restrictive assertion, failing during purge
-
Mattias Jonsson authored
Too restrictive assertion, can fail during purge
-
Mattias Jonsson authored
Regression from bug#14621190, caused by the disabled optimistic restoration of the cursor, which required a full key lookup instead of verifying whether the previously positioned btree cursor could be reused. Fixed by enabling the optimistic restore and adjusting the cursor afterwards. rb#3324 approved by Marko.
-
Anirudh Mangipudi authored
Problem:
COM_CHANGE_USER allows brute-force attempts to crack a password at a very high rate, as it does not cause any significant delay after a failed login attempt. This issue was reproduced using the John-The-Ripper password cracking tool, with which about 5000 passwords per second could be attempted.

Solution:
The non-GA version's solution was to disconnect the connection when a login attempt failed. Since our aim is to reduce the rate at which passwords can be tested, we instead introduce a sleep(1) after every failed login attempt. This significantly increases the time needed to crack a password.
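A minimal sketch of the throttling idea in C++, with hypothetical stand-ins for the real authentication entry points (the actual change sits inside the server's COM_CHANGE_USER handling):

    #include <chrono>
    #include <string>
    #include <thread>

    // Hypothetical credential check; stands in for the server's real logic.
    static bool check_credentials(const std::string &user, const std::string &password) {
      return user == "root" && password == "secret";
    }

    // Sketch: a failed attempt pays a one-second delay before the error reply,
    // which caps brute-force tools at roughly one guess per second per connection.
    bool handle_change_user(const std::string &user, const std::string &password) {
      if (check_credentials(user, password))
        return true;
      std::this_thread::sleep_for(std::chrono::seconds(1));
      return false;  // the caller sends the authentication error after the delay
    }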
-
Luis Soares authored
Automerged from mysql-5.5 bug branch into latest mysql-5.5.
-
Aditya A authored
AS AN INNODB PARTITION. [Merged from 5.1]
-
Aditya A authored
AS AN INNODB PARTITION.

PROBLEM
-------
The correct engine_type was not being set during the rebuild of the partition, so the handler was always created with the default engine, which is InnoDB for 5.5+. Therefore, even if the table was MyISAM, after rebuilding, the partitions ended up as InnoDB partitions.

FIX
---
Set the correct engine type during the rebuild.

[Approved by mattiasj #rb3599]
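A hedged C++ sketch of the fix with stand-in types (the real code works with the handler create info inside the partitioning layer): the rebuild path copies the table's actual engine into the create info instead of leaving the server default.

    enum class Engine { DEFAULT_ENGINE, MYISAM, INNODB };

    struct CreateInfoSketch { Engine db_type = Engine::DEFAULT_ENGINE; };
    struct TableSketch      { Engine engine  = Engine::MYISAM; };

    // Fix, roughly: propagate the table's real engine before rebuilding,
    // so a MyISAM table does not come back with InnoDB partitions.
    void prepare_rebuild(const TableSketch &table, CreateInfoSketch &info) {
      info.db_type = table.engine;
    }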
-
- 17 Oct, 2013 4 commits
-
-
Luis Soares authored
The assertion happens when: (i) the master and slave are configured to use the semisync plugin; (ii) the DBA disables semisync on the master; (iii) the DBA also unsets the option to wait for slave ACKs even if the semisync slave count reaches 0 during the waiting period. This combination of factors makes the server run into an assertion as soon as the last semisync slave disconnects and its dump thread exits.

The root of the problem is that when the dump thread disconnects and calls the observer hook transmit_stop, which ends up calling ReplSemiSyncMaster::remove_slave, there is no check whether the master has already disabled semisync or not. If it has, then a second call to the switch_off member function must be avoided.

The quick fix is to avoid calling switch_off if the DBA has disabled the semisync plugin interactively on the master. Also, the switch_off member function should only be called if the plugin has not been switched off already. This is basically the pattern throughout the rest of the semisync plugin, and no other calls seem vulnerable to similar crashes/assertions. (This is a backport of the patch to 5.5, which is also vulnerable.)
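A minimal sketch of the guard pattern described above, using a stand-in class rather than the actual plugin code: switch_off() is only reached when semisync is still enabled and has not been switched off already.

    class SemiSyncMasterSketch {
      bool enabled_      = true;   // DBA has not disabled the plugin
      bool switched_off_ = false;  // already switched off internally
     public:
      void disable() { enabled_ = false; }
      void switch_off() { switched_off_ = true; /* stop waiting for slave ACKs */ }

      // Called from the dump thread's transmit_stop hook when a slave leaves.
      void remove_slave(int remaining_semisync_slaves) {
        if (enabled_ && !switched_off_ && remaining_semisync_slaves == 0)
          switch_off();   // guard prevents the assertion described above
      }
    };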
-
Anil Toshniwal authored
Implemented CHECK TABLE...QUICK. Introduce CHECK TABLE...QUICK, which skips the btr_validate_index() and btr_search_validate() calls and only counts the number of records in each index. Approved by Marko and Kevin (rb#3567).
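On the SQL side this is invoked as CHECK TABLE t QUICK. A hedged C++ sketch of the behaviour, with stand-in names rather than the actual InnoDB entry points: with QUICK the structural validation is skipped and only the per-index record counting remains.

    struct CheckOpts { bool quick = false; };

    // Stand-ins for the expensive validations named above.
    static bool validate_index_structure() { return true; }
    static bool validate_adaptive_hash()   { return true; }
    static long count_index_records()      { return 0; }

    bool check_table_sketch(const CheckOpts &opts) {
      if (!opts.quick) {
        if (!validate_index_structure() || !validate_adaptive_hash())
          return false;
      }
      (void)count_index_records();   // always count the records in each index
      return true;
    }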
-
unknown authored
No commit message
-
Luis Soares authored
Merging mysql-5.5 bug branch into latest mysql-5.5.
-
- 16 Oct, 2013 6 commits
-
-
Venkatesh Duggirala authored
REPLICATION FILTERS ARE USED. Merging fix from mysql-5.1
-
Venkatesh Duggirala authored
REPLICATION FILTERS ARE USED.

Problem: When a filtered slave applies an Int_var_log_event and then tries to write the event to its own binlog, the LAST_INSERT_ID value is written incorrectly.

Analysis: THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt is a variable which is set when LAST_INSERT_ID() is used by a statement. If it is set, first_successful_insert_id_in_prev_stmt_for_binlog will be stored in the statement-based binlog. This variable is CUMULATIVE along the execution of a stored function or trigger: if one substatement sets it to 1 it will stay 1 until the function/trigger ends, thus making sure that first_successful_insert_id_in_prev_stmt_for_binlog does not change anymore and is propagated to the caller for binlogging. This is achieved using the following code:

    if (!stmt_depends_on_first_successful_insert_id_in_prev_stmt)
    {
      /* It's the first time we read it */
      first_successful_insert_id_in_prev_stmt_for_binlog=
        first_successful_insert_id_in_prev_stmt;
      stmt_depends_on_first_successful_insert_id_in_prev_stmt= 1;
    }

After receiving an Int_var_log_event from the master, the slave server sets stmt_depends_on_first_successful_insert_id_in_prev_stmt to true (*which is wrong*) and does not set first_successful_insert_id_in_prev_stmt_for_binlog. Because of this, when the actual DML statement with LAST_INSERT_ID() is parsed by the slave SQL thread, first_successful_insert_id_in_prev_stmt_for_binlog is not set. Hence the value zero (the default) is written to the slave's binlog.

Why only a *filtered slave* is affected when the code is in a common place:
In Query_log_event::do_apply_event, THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt is reset to zero at the end of the function. In the case of a normal slave (no filters), this variable is reset. On a filtered slave, the SQL thread defers the execution of all IRU events until the IRU's Query_log_event is received. Once it receives the Query_log_event, it executes all pending IRU events and then executes the Query_log_event itself. Hence the variable does not get reset to 0, causing this bug.

Fix: As described above, the root cause was setting THD::stmt_depends_on_first_successful_insert_id_in_prev_stmt when the Int_var_log_event was executed by the SQL thread. The problematic line is therefore removed from the code.
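A minimal C++ sketch of the fix with a stand-in THD structure: applying the Int_var_log_event on the slave stores the value but no longer marks the statement as depending on LAST_INSERT_ID(); the statement that actually calls LAST_INSERT_ID() sets the flag and copies the value for binlogging, mirroring the code quoted above.

    struct ThdSketch {
      unsigned long long first_successful_insert_id_in_prev_stmt            = 0;
      unsigned long long first_successful_insert_id_in_prev_stmt_for_binlog = 0;
      bool stmt_depends_on_first_successful_insert_id_in_prev_stmt          = false;
    };

    // Applying the LAST_INSERT_ID Int_var event: store the value only.
    // Fix: the (wrong) line that set the "depends on" flag here is removed.
    void apply_last_insert_id_event(ThdSketch &thd, unsigned long long value) {
      thd.first_successful_insert_id_in_prev_stmt = value;
    }

    // Later, when the statement really uses LAST_INSERT_ID():
    void note_last_insert_id_use(ThdSketch &thd) {
      if (!thd.stmt_depends_on_first_successful_insert_id_in_prev_stmt) {
        thd.first_successful_insert_id_in_prev_stmt_for_binlog =
            thd.first_successful_insert_id_in_prev_stmt;
        thd.stmt_depends_on_first_successful_insert_id_in_prev_stmt = true;
      }
    }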
-
Venkata Sidagam authored
Merging from mysql-5.1 to mysql-5.5
-
Venkata Sidagam authored
Description: The fix for bug CVE-2012-5611 (bug 67685) is incomplete. The ACL_KEY_LENGTH-sized buffers in acl_get() and check_grant_db() can be overflown by up to two bytes. That's probably not enough to do anything more serious than crashing mysqld.

Analysis: In acl_get(), when "copy_length" is calculated, it just adds the variable lengths. But when we use them with strmov(), we add +1 to each. This leads to a three-byte buffer overflow (i.e. two +1's at strmov() and one byte for the null added by the strmov() function). The same happens in the check_grant_db() function as well.

Fix: We need to add "+2" to "copy_length" in acl_get() and "+1" to "copy_length" in check_grant_db().
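An illustrative C++ sketch of the sizing rule (names and key layout are simplified stand-ins, not the actual ACL code): when several NUL-separated strings are packed into one key buffer, the budget has to include the separator and terminator bytes, not just the raw string lengths.

    #include <cstdio>
    #include <cstring>

    int main() {
      const char *ip = "127.0.0.1", *user = "root", *db = "test";
      // Budget: the three lengths plus the two separators written between them.
      // Before the fix only the three lengths were counted, so a tightly sized
      // buffer could be overrun by the extra bytes.
      size_t copy_length =
          std::strlen(ip) + std::strlen(user) + std::strlen(db) + 2;  // the "+2" of the fix
      char key[64];
      if (copy_length + 1 <= sizeof(key)) {  // +1 for the final terminator
        char *p = key;
        p = std::strcpy(p, ip)   + std::strlen(ip)   + 1;  // ip   '\0'
        p = std::strcpy(p, user) + std::strlen(user) + 1;  // user '\0'
        std::strcpy(p, db);                                // db   '\0'
        std::printf("key uses %zu bytes\n", copy_length + 1);
      }
      return 0;
    }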
-
Sujatha Sivakumar authored
REPEATED TWICE IN BINLOG

Problem:
=======
If LOAD DATA ... SET ... is used, the last argument of SET is repeated twice in the replication binlog.

Analysis:
========
LOAD DATA statements are reconstructed once again before they are written to the binary log. When SET clauses are specified as part of a LOAD DATA statement, these SET clause user command strings need to be stored in order to rebuild the original user command. During parsing, each column and its value in the SET command are stored in two different lists; all the values are stored in a string list. When the SET expression has more than one value, as in the following example:

    SET a = @A, b = CONCAT(@b, '| 123456789');

the parser extracts the values as the item name, the value string, and the actual length of the item's value within that string:

    Item a: value string "= @A, b = CONCAT(@b, '| 123456789')", str_length = 4
    Item b: value string "= CONCAT(@b, '| 123456789')", str_length = 27

While reconstructing the LOAD DATA command, the above strings were retrieved as-is and appended to the LOAD DATA statement, so it became:

    SET `a`= @A, b = CONCAT(@b, '| 123456789'), `b`= CONCAT(@b, '| 123456789')

Fix:
===
During reconstruction of the SET command, retrieve the exact item value string rather than reading the entire string.

sql/sql_load.cc:
  Added code to extract the exact Item value.
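A hedged C++ sketch of the reconstruction fix, with stand-in types (the real change is in sql_load.cc): each SET item keeps the exact length of its own value expression, and rebuilding the clause copies only that slice instead of the remainder of the original string.

    #include <iostream>
    #include <string>
    #include <vector>

    struct SetItemSketch {
      std::string name;    // column name, e.g. "a"
      std::string values;  // original text starting at this item's value
      size_t      length;  // exact length of this item's own value expression
    };

    std::string rebuild_set_clause(const std::vector<SetItemSketch> &items) {
      std::string out = "SET ";
      for (size_t i = 0; i < items.size(); ++i) {
        if (i) out += ", ";
        out += "`" + items[i].name + "` ";
        out += items[i].values.substr(0, items[i].length);  // exact slice, no trailing items
      }
      return out;
    }

    int main() {
      std::vector<SetItemSketch> items = {
        {"a", "= @A, b = CONCAT(@b, '| 123456789')", 4},
        {"b", "= CONCAT(@b, '| 123456789')", 27},
      };
      // Prints: SET `a` = @A, `b` = CONCAT(@b, '| 123456789')
      std::cout << rebuild_set_clause(items) << "\n";
      return 0;
    }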
-
Sreedhar.S authored
-
- 14 Oct, 2013 2 commits
-
-
Nuno Carvalho authored
Merge from mysql-5.1 into mysql-5.5.
-
Nuno Carvalho authored
WL#7266: Dump-thread additional concurrency tests

This worklog aims at testing the following two scenarios:

1) Whenever the mysql_binlog_send method (dump thread) reaches the end of file when reading events from the binlog, before checking if it should wait for more events, there was a test to check whether the file being read was still active, i.e. it was the last known binlog. However, it was possible that something was written to the binary log and then a rotation would happen, after EOF was detected and before the check for "active" was performed. In this case, the end of the binary log would not be read by the dump thread, and this would cause the slave to lose updates. This test verifies that the problem has been fixed: it waits during this window while forcing a rotation of the binlog.

2) Verify that the dump thread can send events in the active file correctly after encountering an IO error.
-
- 09 Oct, 2013 4 commits
-
-
unknown authored
No commit message
-
Sreedhar.S authored
-
Praveenkumar Hulakund authored
AND 'KILL SESSION' LEAD TO CRASH

Analysis:
--------
This situation occurs when a connection executes the query "show engine innodb status" and that connection is killed by another connection executing "kill <con>". In the function "innodb_show_status", the function "stat_print" is called to print the status, but its return value is not checked. After the connection is killed, if the write to the connection fails, an error is returned and set in the Diagnostics Area. Since FALSE is then returned from "innodb_show_status", the assertion in "set_eof_status" (called from my_eof) that checks that no error has been set fails.

Fix:
----
Changed the code to check the return value of "stat_print" in "innodb_show_status".
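A minimal C++ sketch of the fix with stand-in signatures: the return value of the print callback is propagated instead of being ignored, so a write failure to a killed connection is reported to the caller rather than tripping the assertion in set_eof_status().

    // Hypothetical stand-in for the print callback; returns true on failure
    // (e.g. the client connection was killed while the status was being sent).
    static bool stat_print_stub(const char *text, unsigned length) {
      (void)text; (void)length;
      return false;
    }

    // Sketch of the fix: check the callback result instead of ignoring it.
    bool innodb_show_status_sketch(const char *status, unsigned length) {
      if (stat_print_stub(status, length))
        return true;   // error already set in the Diagnostics Area
      return false;    // success
    }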
-
Sreedhar.S authored
-
- 08 Oct, 2013 1 commit
-
-
Luis Soares authored
ReplSemiSyncMaster::updateSyncHeader contains redundant assignments to the local variable sync. This patch removes them.
-
- 07 Oct, 2013 8 commits
-
-
unknown authored
No commit message
-
Kent Boortz authored
-
unknown authored
No commit message
-
unknown authored
No commit message
-
unknown authored
No commit message
-
Yasufumi Kinoshita authored
-
Yasufumi Kinoshita authored
ha_innobase::records_in_range() should return HA_POS_ERROR for a table whose tablespace has been discarded, without requesting pages. Other handler methods called later should then treat the error correctly. Approved by Sunny in rb#3433.
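A hedged C++ sketch with stand-in types of the behaviour described above: when the tablespace has been discarded, records_in_range() reports the "unknown" estimate instead of trying to read index pages that are no longer there.

    #include <cstdint>

    // Stand-in for MySQL's HA_POS_ERROR "estimate unknown / error" marker.
    static const std::uint64_t POS_ERROR_SKETCH = ~std::uint64_t{0};

    struct TableStateSketch { bool tablespace_discarded = false; };

    std::uint64_t records_in_range_sketch(const TableStateSketch &table) {
      if (table.tablespace_discarded)
        return POS_ERROR_SKETCH;   // no pages to read; callers must handle it
      // ... the normal path would estimate by diving into the index pages ...
      return 100;                  // placeholder estimate
    }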
-
unknown authored
No commit message
-
- 06 Oct, 2013 1 commit
-
-
unknown authored
No commit message
-
- 05 Oct, 2013 1 commit
-
-
Praveenkumar Hulakund authored
Description:
------------
There are 2 issues reported in the bug report.

1. One session runs a "long" select; then, from another session, you kill that first one while the select is running, and it receives the message "Server shutdown in progress". Reported Date: 02-Apr-2006.
=> It looks like this issue was already fixed in 2009 by the patch pushed for bug28141.

2. Killing a query that goes to filesort logs error entries like:
120416  9:17:28 [ERROR] mysqld: Sort aborted: Server shutdown in progress
120416  9:18:48 [ERROR] mysqld: Sort aborted: Server shutdown in progress
120416  9:19:39 [ERROR] mysqld: Sort aborted: Server shutdown in progress
Reported Date: 16-Apr-2012.
=> This issue was introduced in 5.5+ versions and is fixed in this patch.

Analysis:
---------
In the function "filesort()", on error we log an error message. The message related to THD::killed_errno is also appended to it if it is set (the THD::killed_errno value is obtained by calling the member function THD::killed_errno). In the scenario mentioned in this bug report, when we kill the connection, THD::kill_errno is set to THD::KILL_CONNECTION. The enum value THD::KILL_CONNECTION corresponds to the error ER_SERVER_SHUTDOWN. Because of this, "Server shutdown in ..." is appended to the logged message.

Fix:
----
Modified the "filesort()" code to append the "KILL_QUERY" status to the error message when the thread is killed and a server shutdown is not in progress.
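A minimal C++ sketch of the message selection with stand-in enums and flags: when the thread was killed but the server is not shutting down, the log line reports that the query was interrupted instead of mapping KILL_CONNECTION to the misleading "Server shutdown in progress" text.

    enum KillStateSketch { NOT_KILLED, KILL_CONNECTION, KILL_QUERY };

    // Hypothetical global; stands in for the server's shutdown state.
    static bool shutdown_in_progress = false;

    const char *sort_abort_reason(KillStateSketch killed) {
      if (killed != NOT_KILLED && !shutdown_in_progress)
        return "Sort aborted: Query execution was interrupted";  // KILL_QUERY-style text
      if (killed != NOT_KILLED)
        return "Sort aborted: Server shutdown in progress";
      return "Sort aborted";
    }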
-