- 30 Aug, 2012 1 commit
-
-
Jorgen Loland authored
-
- 29 Aug, 2012 1 commit
-
-
unknown authored
No commit message
-
- 28 Aug, 2012 3 commits
-
-
Tor Didriksen authored
-
Tor Didriksen authored
The use of Thread_iterator did not work on Windows (linking problems). Solution: change the interface between the thread_pool and the server to use only simple free functions. This patch is for 5.5 only (it mimics a similar solution in 5.6).
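A minimal sketch of the free-function approach (hypothetical names, not the actual thread_pool/server code): the thread pool side calls only plain functions exported by the server, so no iterator class has to cross the linking boundary.

```cpp
// Hypothetical sketch: replace a Thread_iterator class crossing the
// server/thread_pool boundary with simple free functions.
#include <cstddef>
#include <iostream>
#include <vector>

struct THD { int thread_id; };               // stand-in for the server's THD

static std::vector<THD> all_threads = {{1}, {2}, {3}};

// Free-function interface: no class needs to be exported or linked against.
extern "C" size_t server_thread_count() { return all_threads.size(); }
extern "C" THD  *server_thread_at(size_t i) { return &all_threads[i]; }

// The thread pool side only ever calls the free functions.
static void thread_pool_dump_threads() {
  for (size_t i = 0; i < server_thread_count(); i++)
    std::cout << "thread " << server_thread_at(i)->thread_id << '\n';
}

int main() { thread_pool_dump_threads(); return 0; }
```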
-
Jorgen Loland authored
ha_innobase::records_in_range(): Remove a debug assertion that prohibits an open range (full table). The assertion catches unnecessary calls to this method, but such calls do not harm correctness.
-
- 27 Aug, 2012 2 commits
-
-
unknown authored
No commit message
-
Aditya A authored
Backport from mysql-5.6 of the fix (revision-id sunny.bains@oracle.com-20120315045831-20rgfa4cozxmz7kz) for Bug#13839886 - CRASH IN INNOBASE_NEXT_AUTOINC. The assertion introduced in the fix for Bug#13817703 is too strong: a negative number can be greater than the column max value when the column value is a negative number. rb://978 approved by Jimmy Yang. rb:1236 approved by Marko Makela
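A hedged illustration, assuming (as the message suggests) that InnoDB handles auto-increment values as unsigned: a negative column value reinterpreted as unsigned exceeds any column maximum, so an assertion of the form `current <= max_value` is too strong. The code below is a hypothetical simplification, not the real innobase_next_autoinc().

```cpp
// Hypothetical illustration: why "current <= max_value" cannot be asserted
// when the column holds a negative value and the counter is unsigned.
#include <cstdint>
#include <iostream>

static uint64_t next_autoinc(uint64_t current, uint64_t step, uint64_t max_value) {
  // Relaxed behaviour (sketch): clamp instead of asserting current <= max_value.
  if (current > max_value) return max_value;   // negative value seen as huge unsigned
  uint64_t next = current + step;
  return (next < current || next > max_value) ? max_value : next;
}

int main() {
  int64_t column_value = -42;                          // negative signed column value
  uint64_t as_unsigned = static_cast<uint64_t>(column_value);
  uint64_t int_max = 2147483647ULL;                    // e.g. a signed INT column max

  std::cout << as_unsigned << " > " << int_max << '\n';   // "greater than the max"
  std::cout << next_autoinc(as_unsigned, 1, int_max) << '\n';
  return 0;
}
```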
-
- 24 Aug, 2012 6 commits
-
-
unknown authored
-
Martin Hansson authored
-
Ashish Agarwal authored
ENABLE AUDIT PLUGIN WHEN DDL OPERATION HAPPENING

PROBLEM: While unloading the plugin, its state is not checked before it is reaped. This can lead to simultaneous freeing of the plugin memory by more than one thread, and multiple deallocation crashes the server. In the present bug two threads deallocate the audit_log plugin.

SOLUTION: A check is added to ensure that only one thread unloads the plugin.

NOTE: No mtr test is added as it requires multiple threads to access the critical section. debug_sync cannot be used in the current scenario because we don't have access to the thread pointer in some of the plugin functions, so no test case is provided in the current time frame.
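A minimal sketch of the general pattern described above (hypothetical types, not the real plugin code): the state is checked and changed under a lock, so only one thread performs the reap and the memory is freed exactly once.

```cpp
// Hypothetical sketch of the "only one thread reaps the plugin" pattern:
// check and flip the state under a mutex so a second unloader backs off.
#include <iostream>
#include <mutex>
#include <thread>

enum plugin_state { PLUGIN_READY, PLUGIN_DELETED };

struct st_plugin {
  plugin_state state = PLUGIN_READY;
  int *memory = new int[16];            // stand-in for plugin resources
};

static std::mutex LOCK_plugin;          // analogous to a global plugin lock

static void reap_plugin(st_plugin *p) {
  {
    std::lock_guard<std::mutex> guard(LOCK_plugin);
    if (p->state == PLUGIN_DELETED)     // someone else already freed it
      return;
    p->state = PLUGIN_DELETED;          // claim the reap while holding the lock
  }
  delete[] p->memory;                   // freed exactly once
  p->memory = nullptr;
  std::cout << "plugin reaped\n";
}

int main() {
  st_plugin p;
  std::thread t1(reap_plugin, &p), t2(reap_plugin, &p);
  t1.join(); t2.join();
  return 0;
}
```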
-
Martin Hansson authored
NUMBERS

If a system variable was declared as deprecated without mention of an alternative, the message would look funny, e.g. for @@delayed_insert_limit:

Warning 1287 '@@delayed_insert_limit' is deprecated and will be removed in MySQL .

The message was meant to display the version number, but it's not possible to give one when declaring a system variable. The fix does two things:

1) The definition of the message ER_WARN_DEPRECATED_SYNTAX_NO_REPLACEMENT is changed so that it does not display a version number. I.e. in English the message now reads:

Warning 1287 The syntax '@@delayed_insert_limit' is deprecated and will be removed in a future version.

2) The message ER_WARN_DEPRECATED_SYNTAX_WITH_VER is discontinued in favor of ER_WARN_DEPRECATED_SYNTAX for system variables. This change was already done in versions 5.6 and above as part of wl#5265. This part is simply back-ported from the worklog.
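A small sketch of the two message shapes, using hypothetical helper names; it only illustrates why the "no replacement" variant must not contain a version placeholder.

```cpp
// Hypothetical sketch of the message change: the "no replacement" warning no
// longer tries to interpolate a version number it does not have.
#include <cstdio>

static void warn_deprecated_with_version(const char *what, const char *version) {
  std::printf("Warning 1287 '%s' is deprecated and will be removed in MySQL %s.\n",
              what, version);
}

static void warn_deprecated_no_replacement(const char *what) {
  // New text: no version placeholder, so nothing can render as "in MySQL .".
  std::printf("Warning 1287 The syntax '%s' is deprecated and will be removed "
              "in a future version.\n", what);
}

int main() {
  warn_deprecated_no_replacement("@@delayed_insert_limit");
  warn_deprecated_with_version("@@some_old_option", "5.6");  // hypothetical example
  return 0;
}
```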
-
Marc Alff authored
WARNING This patch is for mysql-5.5 only, to be null-merged to mysql-5.6 and mysql-trunk. This is a partial rollback of the file io instrumentation, removing the instrumentation for mysql_file_stat in the archive engine. See the bug comments for details.
-
Gopal Shankar authored
FAILED IN CHECK_LOCK_AND_ST

Problem:
--------
lock_tables() is supposed to invoke check_lock_and_start_stmt() only for TABLE_LIST elements that are directly used by the top-level statement. TABLE_LIST->prelocking_placeholder is set only for TABLE_LIST elements that are used indirectly by stored programs invoked by the top-level statement. Hence check_lock_and_start_stmt() should always see TABLE_LIST->prelocking_placeholder == false, but this assert was observed to fail during the RQG test rqg_signal_resignal.

Analysis:
---------
open_tables() invokes open_and_process_routines(), which finds all the TABLE_LIST elements that belong to a routine and adds them to thd->lex->query_tables. If open_and_process_routines() fails for some reason, we are supposed to chop off all the TABLE_LIST elements found during those calls, but in practice this does not happen. thd->lex->query_tables_own_last is supposed to point to the node in thd->lex->query_tables that is the first TABLE_LIST used indirectly by stored programs invoked by the top-level statement, yet it is found not to be set correctly when we need to chop off TABLE_LIST elements after open_and_process_routines() has failed. close_tables_for_reopen(), which is invoked on such an error, chops off all TABLE_LIST elements added after thd->lex->query_tables_own_last; since that pointer is unset or incorrect, the call does not work as expected. Further, when open_tables() restarts the process of finding TABLE_LIST elements belonging to stored programs, the incorrect thd->lex->query_tables_own_last may end up pointing past old nodes that belong to stored programs and were added earlier but never removed. Later, when open_tables() completes, lock_tables() ends up invoking check_lock_and_start_stmt() for TABLE_LIST elements that belong to stored programs, which is not expected behavior, and we hit the assert TABLE_LIST->prelocking_placeholder == false. Because of this, if a user application executes a SQL statement that invokes a stored function and the lock grant on the stored function fails due to a deadlock, mysqld crashes.

Fix:
----
open_tables() remembers save_query_tables_last, which points to thd->lex->query_tables_last before the calls to open_and_process_routines(). If thd->lex->query_tables_own_last is not set, we now set it to save_query_tables_last. This makes sure that close_tables_for_reopen() chops off the list correctly; in other words, we now remove all the nodes added to thd->lex->query_tables by previous calls to open_and_process_routines(). The problem exists starting from 5.5, due to a code refactoring effort related to open_tables(), so the fix will be pushed to 5.5, 5.6 and trunk.
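A heavily simplified, hypothetical sketch of the fix using a plain std::list in place of the TABLE_LIST chain: mark where the statement's own tables end before processing routines, and chop the list back to that mark on failure.

```cpp
// Simplified, hypothetical sketch of the fix (not the real server code):
// remember where the statement's own tables end before adding tables for
// stored routines, and on failure chop the list back to that point.
#include <iostream>
#include <iterator>
#include <list>
#include <string>

static bool open_and_process_routines(std::list<std::string> &query_tables,
                                       bool fail) {
  // Tables used indirectly by stored routines get appended here.
  query_tables.push_back("func_table_1");
  query_tables.push_back("func_table_2");
  return !fail;
}

int main() {
  // Stand-in for thd->lex->query_tables: tables used directly by the statement.
  std::list<std::string> query_tables = {"t1", "t2"};

  // The fix: record the last "own" table *before* processing routines.
  auto save_query_tables_last = std::prev(query_tables.end());

  if (!open_and_process_routines(query_tables, /*fail=*/true)) {
    // close_tables_for_reopen() equivalent: drop everything added afterwards.
    query_tables.erase(std::next(save_query_tables_last), query_tables.end());
  }
  for (const auto &t : query_tables) std::cout << t << '\n';  // prints t1, t2
  return 0;
}
```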
-
- 23 Aug, 2012 1 commit
-
-
Tor Didriksen authored
Documentation for class Item_outer_ref was wrong: (*ref) may point to an Item_field as well (see e.g. Item_outer_ref::fix_fields), so this cast in get_store_key() was wrong: (*(Item_ref**)((Item_ref*)keyuse->val)->ref)->ref_type()
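A hypothetical, simplified hierarchy showing the hazard: when the referenced item may be an Item_field, the dynamic type has to be checked before casting to Item_ref* and calling ref_type().

```cpp
// Hypothetical, simplified hierarchy illustrating the hazard: if (*ref) can be
// an Item_field as well as an Item_ref, blindly casting it to Item_ref* and
// calling ref_type() reads garbage. Check the dynamic type first.
#include <iostream>

struct Item {
  enum Type { FIELD_ITEM, REF_ITEM };
  virtual Type type() const = 0;
  virtual ~Item() = default;
};

struct Item_field : Item {
  Type type() const override { return FIELD_ITEM; }
};

struct Item_ref : Item {
  enum Ref_Type { OUTER_REF, DIRECT_REF };
  Item *ref;                                   // what this reference points to
  explicit Item_ref(Item *r) : ref(r) {}
  Type type() const override { return REF_ITEM; }
  Ref_Type ref_type() const { return OUTER_REF; }
};

int main() {
  Item_field field;
  Item_ref outer(&field);                      // (*ref) points to an Item_field

  Item *target = outer.ref;
  // Wrong: static_cast<Item_ref*>(target)->ref_type() would be undefined here.
  if (target->type() == Item::REF_ITEM)
    std::cout << "ref_type = " << static_cast<Item_ref *>(target)->ref_type() << '\n';
  else
    std::cout << "target is an Item_field; no ref_type()\n";
  return 0;
}
```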
-
- 21 Aug, 2012 1 commit
-
-
Marko Mäkelä authored
HEURISTICS FOR COMPRESSED PAGE SIZE The fix of Bug#12845774 was supposed to skip known-to-fail btr_cur_optimistic_insert() calls. There was only one such call, in btr_cur_pessimistic_update(). All other callers of btr_cur_pessimistic_insert() would release and reacquire the B-tree page latch before attempting the pessimistic insert. This would allow other threads to restructure the B-tree, allowing (and requiring) the insert to succeed as an optimistic (single-page) operation. Failure to attempt an optimistic insert before a pessimistic one would trigger an attempt to split an empty page. rb:1234 approved by Sunny Bains
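A hedged sketch of the ordering the fix relies on (hypothetical page and insert functions, not InnoDB's): try the cheap single-page insert first and fall back to the splitting path only when it fails.

```cpp
// Hypothetical sketch of the ordering the fix restores: callers attempt the
// optimistic (single-page) insert after re-acquiring the page latch, and only
// fall back to the pessimistic (page-splitting) path if that fails.
#include <iostream>

struct page { int free_space; };

static bool optimistic_insert(page &p, int rec_size) {
  if (p.free_space < rec_size) return false;   // does not fit in this page
  p.free_space -= rec_size;
  return true;
}

static void pessimistic_insert(page &p, int rec_size) {
  // A split only makes sense if the record genuinely does not fit;
  // splitting an (almost) empty page would be the bug described above.
  std::cout << "splitting page, free_space=" << p.free_space << '\n';
  p.free_space += 300;                          // pretend the split made room
  p.free_space -= rec_size;
}

static void insert_record(page &p, int rec_size) {
  if (optimistic_insert(p, rec_size)) return;   // cheap path first
  pessimistic_insert(p, rec_size);              // split/merge only as a last resort
}

int main() {
  page p{120};
  insert_record(p, 50);    // optimistic succeeds
  insert_record(p, 200);   // falls back to the pessimistic path
  return 0;
}
```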
-
- 20 Aug, 2012 2 commits
-
-
Mattias Jonsson authored
pre-push fix, removed unused variable.
-
Mattias Jonsson authored
-
- 17 Aug, 2012 1 commit
-
-
Georgi Kodinov authored
DURING SERVER STARTUP The options parser now correctly checks for ambiguous prefixes in enumerated variables and emits an error when the value supplied is ambiguous. No test added since mysql-test-run.pl can't handle server startup failures as an expected state.
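A small sketch, with hypothetical names, of prefix matching for an enumerated option value that reports an ambiguous prefix instead of silently picking one of the matches.

```cpp
// Hypothetical sketch: resolve an enumerated option value, rejecting a
// prefix that matches more than one allowed value.
#include <iostream>
#include <string>
#include <vector>

static int find_enum_value(const std::string &input,
                           const std::vector<std::string> &values) {
  int match = -1;
  for (size_t i = 0; i < values.size(); i++) {
    if (values[i] == input) return static_cast<int>(i);   // exact match wins
    if (values[i].compare(0, input.size(), input) == 0) { // prefix match
      if (match != -1) return -2;                         // ambiguous prefix
      match = static_cast<int>(i);
    }
  }
  return match;                                           // -1 = not found
}

int main() {
  std::vector<std::string> modes = {"OFF", "ON", "ONLY_FULL_GROUP_BY"};
  std::cout << find_enum_value("OFF", modes) << '\n';  // 0
  std::cout << find_enum_value("ON", modes)  << '\n';  // 1 (exact, not ambiguous)
  std::cout << find_enum_value("O", modes)   << '\n';  // -2: ambiguous, emit an error
  return 0;
}
```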
-
- 21 Aug, 2012 1 commit
-
-
Marko Mäkelä authored
-
- 20 Aug, 2012 4 commits
-
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
- 17 Aug, 2012 3 commits
-
-
Mattias Jonsson authored
Additional patch to remove the part_id -> ref_buffer offset map. The partition id and the associated record buffer can be found without having to calculate it: by initializing the buffer for each used partition and then reusing the key buffer from the queue, no such map is needed.
-
Alexander Barkov authored
-
Alexander Barkov authored
-
- 16 Aug, 2012 4 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
Facebook reported a case where the page compresses so well that btr_cur_optimistic_update() returns DB_UNDERFLOW, but when a record gets updated the compression rate changes radically, so that btr_cur_insert_if_possible() cannot insert in place despite reorganizing/recompressing the page, leading to the assertion failure. rb:1220 approved by Sunny Bains
-
Marko Mäkelä authored
COMPRESSED PAGE SIZE

This was submitted as MySQL Bug 61456 and a patch provided by Facebook. This patch follows the same idea, but instead of adding a parameter to btr_cur_pessimistic_insert(), we simply remove the btr_cur_optimistic_insert() call there and add it to the only caller that needs it.

btr_cur_pessimistic_insert(): Do not try btr_cur_optimistic_insert().

btr_insert_on_non_leaf_level_func(): Invoke btr_cur_optimistic_insert() before invoking btr_cur_pessimistic_insert().

btr_cur_pessimistic_update(): Clarify in a comment why it is not necessary to invoke btr_cur_optimistic_insert().

btr_root_raise_and_insert(): Assert that the root page is not empty. This could happen if a pessimistic insert (involving a split or merge) is performed without first attempting an optimistic (intra-page) insert.

rb:1219 approved by Sunny Bains
-
Marko Mäkelä authored
btr_cur_optimistic_insert(): Remove a bogus assertion. The insert may fail after reorganizing the page.

btr_cur_optimistic_update(): Do not attempt to reorganize compressed pages, because compression may fail after reorganization.

page_copy_rec_list_start(): Use page_rec_get_nth() to restore to the ret_pos, which may also be the page infimum.

rb:1221
-
- 15 Aug, 2012 2 commits
-
-
Mattias Jonsson authored
-
Mattias Jonsson authored
The buffer for the current read row from each partition (m_ordered_rec_buffer), used for sorted reads, was allocated on open and freed when the ha_partition handler was closed or destroyed. For tables with many partitions and big records this could take up too much memory. The solution is to allocate the memory only when it is needed and free it when it is no longer needed, i.e. allocate it in index_init and free it in index_end (and, to handle failures, also free it on reset, close, etc.). Also, only the memory actually needed according to partition pruning is allocated. Manually tested that it does not use as much memory and releases it after queries.
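A hypothetical sketch of the lifetime change (a stand-in class, not the real ha_partition): allocate the ordered-read buffer in index_init for just the pruned set of partitions and release it in index_end.

```cpp
// Hypothetical sketch: allocate the ordered-read buffer when an index scan
// starts and free it when the scan ends, instead of holding it for the whole
// life of the handler.
#include <cstddef>
#include <iostream>
#include <vector>

class partition_handler {                 // simplified stand-in for ha_partition
  std::vector<char> m_ordered_rec_buffer; // per-partition row buffers for sorted reads
  size_t m_rec_length = 1024;

 public:
  int index_init(const std::vector<int> &used_partitions) {
    // Allocate only what the pruned set of partitions actually needs.
    m_ordered_rec_buffer.resize(used_partitions.size() * m_rec_length);
    return 0;
  }
  int index_end() {
    std::vector<char>().swap(m_ordered_rec_buffer);  // really release the memory
    return 0;
  }
  size_t buffer_bytes() const { return m_ordered_rec_buffer.size(); }
};

int main() {
  partition_handler h;
  std::cout << h.buffer_bytes() << '\n';   // 0: nothing allocated on open
  h.index_init({0, 3, 7});                 // only three partitions survive pruning
  std::cout << h.buffer_bytes() << '\n';   // 3 * 1024
  h.index_end();
  std::cout << h.buffer_bytes() << '\n';   // 0 again after the scan
  return 0;
}
```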
-
- 14 Aug, 2012 3 commits
-
-
Venkata Sidagam authored
Problem description: mysqlhotcopy fails if a view is present in the database.

Analysis: Before 5.5, 'FLUSH TABLES <tbl_name> ... WITH READ LOCK' was able to get a lock for all tables (i.e. base tables and views). From 5.5 onwards, 'FLUSH TABLES <tbl_name> ... WITH READ LOCK' does not work for views, because taking flush locks on views is not valid.

Fix: Take a flush lock for base tables and a read lock for views separately.

Note: most of the patch has been backported from bug#13006947's patch.
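A hedged C++ sketch of the statement split (mysqlhotcopy itself is a Perl script, and the table names and helpers below are made up for illustration): base tables go into FLUSH TABLES ... WITH READ LOCK, views get an ordinary read lock.

```cpp
// Hypothetical sketch: build separate lock statements for base tables and views,
// since flush locks are only valid on base tables.
#include <iostream>
#include <string>
#include <vector>

struct table_info { std::string name; bool is_view; };

int main() {
  std::vector<table_info> tables = {{"t1", false}, {"v1", true}, {"t2", false}};

  std::string flush_list, lock_list;
  for (const auto &t : tables) {
    std::string &dst = t.is_view ? lock_list : flush_list;
    if (!dst.empty()) dst += ", ";
    dst += "`" + t.name + "`" + (t.is_view ? " READ" : "");
  }

  std::cout << "FLUSH TABLES " << flush_list << " WITH READ LOCK;\n";
  std::cout << "LOCK TABLES " << lock_list << ";\n";
  return 0;
}
```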
-
Sujatha Sivakumar authored
-
Sujatha Sivakumar authored
MASTER-MASTER AND USING SET USE

Problem:
=======
In a master-master set-up, a master can show a wrong 'SHOW SLAVE STATUS' output. Requirements: - master-master - log_slave_updates. This is caused by using SET user variables and then using them to perform writes. From then on, the master that performed the insert has a SHOW SLAVE STATUS that is wrong and never gets updated until a write happens on the other master. On "Master A" the "exec_master_log_pos" is not getting updated.

Analysis:
========
The slave receives a "User_var" event from the master and, after applying the event, tries to write this applied event into its own binary log when the "log_slave_updates" option is enabled. At the time of writing this event the slave should use the originating server-id, but in the above case the server always logs the "user var events" using its own global server-id. Because of this, in a master-master replication setup, when the event comes back to the originating server the "User_var_event" does not get skipped. "User_var_events" are context-based events and are always followed by a query event which marks their end of group. Due to the above problem with "User_var_event" logging, the "User_var_event" never gets skipped whereas its corresponding "query_event" does get skipped. Hence the "User_var" event always waits for the next query event, and the "Exec_master_log_position" does not get updated properly.

Fix:
===
The `MYSQL_BIN_LOG::write' function is used to write events into the binary log. Within this function a new "User_var_log_event" object is created and used to write the "User_var" event into the binlog. The "User var" event is inherited from "Log_event", which has different overloaded constructors. When a "THD" object is present, the "Log_event(thd,...)" constructor should be used to initialise the object; only in the absence of a valid "THD" object should the minimal "Log_event()" constructor be used. In the above problem the default minimal constructor was always used, which is incorrect; it is now replaced with "Log_event(thd,...)".

sql/log_event.h: Replaced the default constructor with another constructor which takes a "THD" object as an argument.
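A simplified, hypothetical sketch of the constructor issue: the minimal constructor stamps the slave's own global server id, while the THD-aware constructor carries the originating server's id, which is what allows the event to be skipped when it circles back to its origin. The classes below are stand-ins, not the real log_event.h.

```cpp
// Hypothetical sketch: choosing the THD-aware constructor preserves the
// originating server id on the re-logged event.
#include <cstdint>
#include <iostream>

static uint32_t global_server_id = 2;     // the slave's own id

struct THD { uint32_t server_id; };       // carries the originating master's id

struct Log_event {
  uint32_t server_id;
  Log_event() : server_id(global_server_id) {}           // minimal constructor
  explicit Log_event(const THD *thd) : server_id(thd->server_id) {}
};

struct User_var_log_event : Log_event {
  User_var_log_event() : Log_event() {}                   // old, incorrect choice
  explicit User_var_log_event(const THD *thd) : Log_event(thd) {}
};

int main() {
  THD thd{1};                                             // event came from server 1
  User_var_log_event wrong;        // logged with server_id 2: never skipped later
  User_var_log_event right(&thd);  // logged with server_id 1: skipped by server 1
  std::cout << wrong.server_id << ' ' << right.server_id << '\n';
  return 0;
}
```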
-
- 13 Aug, 2012 1 commit
-
-
Mattias Jonsson authored
-
- 11 Aug, 2012 2 commits
-
-
Venkata Sidagam authored
CONNECTIONS IF SPE Merged from mysql-5.1 to mysql-5.5
-
Venkata Sidagam authored
CONNECTIONS IF SPE

Problem description: The --ssl-key value is not validated; you can assign any bogus text to --ssl-key, it is not verified that the file exists, and, more importantly, the client is still allowed to connect to mysqld.

Fix: Added proper validation checks for --ssl-key.

Note: 1) Documentation changes are required for 5.1, 5.5, 5.6 and trunk in the sections listed below: http://dev.mysql.com/doc/refman/5.6/en/ssl-options.html#option_general_ssl and the REQUIRE SSL section of http://dev.mysql.com/doc/refman/5.6/en/grant.html 2) A client using the '--ssl' option should be able to get an SSL connection. This will be implemented as part of a separate fix in 5.6 and trunk.
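A minimal sketch, with hypothetical helper names, of the kind of validation added: check that the --ssl-key path refers to a readable file before proceeding with the connection.

```cpp
// Hypothetical sketch: refuse to proceed when --ssl-key does not point to a
// readable file, instead of silently ignoring a bogus value.
#include <cstdio>
#include <iostream>
#include <string>

static bool validate_ssl_key_path(const std::string &path) {
  if (path.empty()) return false;
  std::FILE *f = std::fopen(path.c_str(), "r");  // existence/readability check
  if (!f) return false;
  std::fclose(f);
  return true;
}

int main(int argc, char **argv) {
  std::string ssl_key = (argc > 1) ? argv[1] : "bogus-key.pem";
  if (!validate_ssl_key_path(ssl_key)) {
    std::cerr << "SSL error: unable to read client key '" << ssl_key << "'\n";
    return 1;                       // do not continue with an invalid --ssl-key
  }
  std::cout << "using client key " << ssl_key << '\n';
  return 0;
}
```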
-
- 09 Aug, 2012 2 commits
-
-
Sergey Glukhov authored
-
Sergey Glukhov authored
When resolving outer fields, Item_field::fix_outer_fields() creates new Item_refs for each execution of a prepared statement, so these must be allocated in the runtime memroot. The memroot switching before resolving JOIN::having caused them to be allocated in the statement root, leaking memory on each PS execution.

sql/item_subselect.cc: addon fix for 11829691: the item could be created in the runtime memroot, so we need to use real_item instead.
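A hypothetical sketch of the memroot distinction using a toy arena (not the server's MEM_ROOT): per-execution objects must come from the arena that is cleared after every execution, otherwise each prepared-statement execution leaks a little more memory.

```cpp
// Hypothetical sketch: a per-execution arena is reset after each execution,
// while the statement arena lives for the whole prepared statement.
#include <cstddef>
#include <iostream>
#include <vector>

struct MemRoot {                       // simplified arena: freed all at once
  std::vector<std::vector<char>> blocks;
  void *alloc(size_t n) { blocks.emplace_back(n); return blocks.back().data(); }
  void clear() { blocks.clear(); }
  size_t used() const { return blocks.size(); }
};

int main() {
  MemRoot statement_root;              // lives as long as the prepared statement
  MemRoot runtime_root;                // reset after every execution

  for (int execution = 0; execution < 3; execution++) {
    runtime_root.alloc(64);            // a per-execution Item_ref belongs here
    runtime_root.clear();              // no growth across executions
    // statement_root.alloc(64);       // the bug: this would leak once per execution
  }
  std::cout << statement_root.used() << ' ' << runtime_root.used() << '\n';  // 0 0
  return 0;
}
```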
-