- 15 Jan, 2018 17 commits
-
-
Marko Mäkelä authored
btr_cur_search_to_nth_level(), btr_search_guess_on_hash(), btr_pcur_open_with_no_init_func(), row_sel_open_pcur(): Replace the parameter has_search_latch with the ahi_latch (passed as NULL if the caller does not hold the latch).

btr_search_update_hash_node_on_insert(), btr_search_update_hash_on_insert(), btr_search_build_page_hash_index(): Add the parameter ahi_latch.

btr_search_x_lock(), btr_search_x_unlock(), btr_search_s_lock(), btr_search_s_unlock(): Remove.
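As a rough illustration of the refactoring pattern described above (replacing a "caller holds the search latch" flag with a pointer to the latch itself, NULL meaning "not held"), here is a minimal C++ sketch; the type and function names are illustrative stand-ins, not the actual InnoDB declarations:

    #include <shared_mutex>

    // Hypothetical stand-in for an adaptive hash index latch.
    using ahi_latch_t = std::shared_mutex;

    // Before: a boolean has_search_latch told the callee whether the caller
    // already held "the" latch, forcing the callee to locate that latch itself.
    // After: the caller passes the latch it holds, or nullptr if it holds none.
    void search_with_optional_latch(ahi_latch_t* ahi_latch)
    {
        if (ahi_latch != nullptr) {
            // The caller already holds this latch; use it directly.
        } else {
            // No latch held; the callee may acquire one itself if needed.
        }
    }

    int main()
    {
        ahi_latch_t latch;
        latch.lock_shared();
        search_with_optional_latch(&latch);   // caller holds the latch
        latch.unlock_shared();
        search_with_optional_latch(nullptr);  // caller holds no latch
    }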
-
Marko Mäkelä authored
btr_cur_search_to_nth_level(), row_sel(): Do not bother to yield to waiting exclusive lock requests on the adaptive hash index latch. When the btr_search_latch was split into an array of latches in MySQL 5.7.8 as part of the Oracle Bug#20985298 fix, the "caching" of the latch across storage engine API calls was removed. Thus, X-lock requests should have a good chance of being served, and starvation should not be possible.

btr_search_guess_on_hash(): Clean up a debug assertion.
-
Marko Mäkelä authored
Also, use native 32-bit reads for the 32-bit aligned FIL_PAGE_PREV and FIL_PAGE_NEXT fields when comparing them to the byte-order-agnostic pattern FIL_NULL (0xffffffff).
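The trick works because every byte of FIL_NULL is 0xff, so an aligned native load compares equal to it regardless of byte order, avoiding a byte-swapping read. A minimal sketch of the idea (the helper name and buffer layout are invented for illustration):

    #include <cstdint>
    #include <cstring>
    #include <cstdio>

    static const uint32_t FIL_NULL = 0xffffffff;  // "no page" marker

    // Because all four bytes of FIL_NULL are 0xff, a native 32-bit read of the
    // big-endian on-disk field equals FIL_NULL exactly when the field does.
    static bool is_fil_null(const unsigned char* aligned_field)
    {
        uint32_t v;
        std::memcpy(&v, aligned_field, 4);   // aligned native read
        return v == FIL_NULL;
    }

    int main()
    {
        unsigned char fil_page_prev[4] = {0xff, 0xff, 0xff, 0xff};
        unsigned char fil_page_next[4] = {0x00, 0x00, 0x00, 0x05};
        std::printf("%d %d\n", is_fil_null(fil_page_prev),   // 1
                               is_fil_null(fil_page_next));  // 0
    }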
-
Marko Mäkelä authored
This is mere code clean-up; the reported problem was already fixed in commit 3fdd3907.

row_sel(): Remove the variable search_latch_locked.

row_sel_try_search_shortcut(): Remove the parameter search_latch_locked, which was always passed as nonzero.

row_sel_try_search_shortcut(), row_sel_try_search_shortcut_for_mysql(): Do not expect the caller to acquire the AHI latch. Instead, acquire and release it inside these functions.

row_search_mvcc(): Remove a bogus condition on mysql_n_tables_locked. When the btr_search_latch was split into an array of latches in MySQL 5.7.8 as part of the Oracle Bug#20985298 fix, the "caching" of the latch across storage engine API calls was removed, so it is unnecessary to avoid adaptive hash index searches during INSERT...SELECT.
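A minimal sketch of the latch-scoping change described for the two shortcut functions (acquire and release the latch inside the function rather than in the caller); the names are illustrative, not the real InnoDB code:

    #include <shared_mutex>

    std::shared_mutex ahi_latch;   // stand-in for an AHI part latch

    // After the change, the shortcut acquires the latch itself...
    static bool row_sel_try_search_shortcut_sketch()
    {
        std::shared_lock<std::shared_mutex> guard(ahi_latch); // acquired here
        /* ... hash lookup performed under the latch ... */
        return false;  // e.g. no usable hit: caller does a normal search
    }   // ...and releases it here, when guard goes out of scope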
-
Marko Mäkelä authored
This is not fixing the reported problem, but a potential problem that was introduced in MDEV-11369.

row_sel_try_search_shortcut(), row_sel_try_search_shortcut_for_mysql(): When an adaptive hash index search lands on a record for which rec_is_default_row() holds, we must skip the candidate and perform a normal search. This is because the adaptive hash index latch only protects the record from being deleted; it does not prevent concurrent inserts into the page, so it is not safe to dereference the next-record pointer.
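A simplified sketch of the guard described above, with invented stand-ins for the real InnoDB types: when the hash lookup lands on the hidden default row record, the shortcut gives up and lets the caller fall back to an ordinary B-tree search instead of touching the next-record pointer.

    struct rec_t { bool is_default_row; bool deleted; };

    // Illustrative stand-in for rec_is_default_row().
    static bool rec_is_default_row(const rec_t* rec) { return rec->is_default_row; }

    // Returns true if the shortcut found a usable record, false if the caller
    // must perform a normal search.  Following the next-record pointer would be
    // unsafe here, because the AHI latch does not block concurrent inserts.
    static bool try_search_shortcut(const rec_t* rec)
    {
        if (rec == nullptr || rec_is_default_row(rec)) {
            return false;   // skip the candidate; do a normal search
        }
        return !rec->deleted;
    }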
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
This race condition is a regression caused by MDEV-12121.

btr_cur_update_in_place(): Evaluate block->index != NULL only once when deciding whether an adaptive hash index bucket needs to be exclusively locked and unlocked. If we evaluated block->index multiple times and the adaptive hash index was disabled before we locked it, we would never release the adaptive hash index bucket latch, which would eventually lead to InnoDB hanging.
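The shape of the fix is a classic "read the shared field once" pattern: the lock and unlock decisions must be based on the same snapshot of block->index. A self-contained C++ sketch of the pattern (not the actual InnoDB types):

    #include <atomic>
    #include <mutex>

    struct dict_index_t { std::mutex ahi_bucket_latch; };

    struct buf_block_t {
        // May be reset to nullptr concurrently when the AHI is disabled.
        std::atomic<dict_index_t*> index{nullptr};
    };

    void update_in_place(buf_block_t* block)
    {
        // Evaluate block->index exactly once; if it were re-read before the
        // unlock and had become nullptr, the latch would never be released.
        dict_index_t* ahi_index = block->index.load();

        if (ahi_index) ahi_index->ahi_bucket_latch.lock();

        /* ... perform the in-place update ... */

        if (ahi_index) ahi_index->ahi_bucket_latch.unlock();
    }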
-
Monty authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Sergei Petrunia authored
-
Marko Mäkelä authored
innodb.truncate_inject: Replacement for innodb_zip.wl6501_error_1

Note: unlike MySQL, in some cases TRUNCATE does not return an error in MariaDB. This should be fixed in the scope of MDEV-13564 or similar.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
MariaDB inherits the MySQL limitation that ALGORITHM=INPLACE cannot create more than one FULLTEXT INDEX at a time. As part of the MDEV-11369 Instant ADD COLUMN refactoring, MariaDB 10.3.2 accidentally stopped enforcing the restriction. Actually, it is a bug in MySQL 5.6 and MariaDB 10.0 that an ALTER TABLE statement with multiple ADD FULLTEXT INDEX but without an explicit ALGORITHM=INPLACE would fail with an error message, rather than executing the operation with ALGORITHM=COPY.

ha_innobase::check_if_supported_inplace_alter(): Enforce the restriction on multiple FULLTEXT INDEX.

prepare_inplace_alter_table_dict(): Replace some code with debug assertions. A "goto error_handled" at this point would result in another error, because the reference count of ctx->new_table would be 0.
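What "enforce the restriction" amounts to can be sketched as counting the FULLTEXT indexes an in-place ALTER would add and refusing ALGORITHM=INPLACE when there is more than one; the enum and struct below are invented for illustration, not the handler API:

    #include <string>
    #include <vector>

    enum class AlterAlgo { INPLACE_OK, MUST_COPY };

    struct KeyDef { std::string name; bool fulltext; };

    // At most one FULLTEXT INDEX may be created per in-place ALTER; anything
    // more has to fall back to ALGORITHM=COPY.
    AlterAlgo check_if_supported_inplace(const std::vector<KeyDef>& added_keys)
    {
        unsigned n_fulltext = 0;
        for (const KeyDef& k : added_keys) {
            if (k.fulltext && ++n_fulltext > 1) {
                return AlterAlgo::MUST_COPY;
            }
        }
        return AlterAlgo::INPLACE_OK;
    }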
-
- 14 Jan, 2018 1 commit
-
-
Eugene Kosov authored
Speed up compilation

Standard C++ headers contribute a lot to compilation time. Avoid <algorithm> and <sstream> in frequently used headers.
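The general technique is header hygiene: keep heavy standard headers out of widely included headers by forward-declaring what the interface needs and pulling the includes into the .cc file. A small illustrative example (the file and type names are hypothetical):

    // widget.h -- a frequently included header.
    // Instead of #include <sstream>/<algorithm> here (every includer would pay
    // for them), declare only what the interface needs.
    #include <iosfwd>   // cheap: declares std::ostream without defining it
    #include <string>

    struct Widget {
        std::string name;
        void print(std::ostream& out) const;   // defined in widget.cc
    };

    // widget.cc -- only this translation unit pays for the heavy headers.
    #include <ostream>
    #include <algorithm>
    #include <cctype>

    void Widget::print(std::ostream& out) const
    {
        std::string upper = name;
        std::transform(upper.begin(), upper.end(), upper.begin(),
                       [](unsigned char c) { return char(std::toupper(c)); });
        out << upper << '\n';
    }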
-
- 13 Jan, 2018 7 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
commit 3dc3ab1a introduced Rows_event_tracker with a mismatch between size_t (the native register width) and my_off_t (the file offset width, usually 64 bits). Use my_off_t both in the member fields and in the member functions.
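The failure mode being fixed is the classic one of mixing a possibly 32-bit size_t with a 64-bit file offset: positions beyond 4 GiB get silently truncated. A small sketch, with file_off_t standing in for my_off_t:

    #include <cstdint>
    #include <cstdio>

    using file_off_t = uint64_t;   // stand-in for my_off_t

    struct RowsEventTrackerSketch {
        // Member fields and member functions agree on the 64-bit offset width;
        // a 32-bit size_t here would truncate positions beyond 4 GiB.
        file_off_t first_seen = 0;
        file_off_t last_seen  = 0;

        void update(file_off_t pos)
        {
            if (first_seen == 0) first_seen = pos;
            last_seen = pos;
        }
    };

    int main()
    {
        RowsEventTrackerSketch t;
        t.update(UINT64_C(5) * 1024 * 1024 * 1024);   // > 4 GiB, kept intact
        std::printf("%llu\n", (unsigned long long) t.last_seen);
    }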
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
This is a regression that was introduced in MySQL 5.7.6 in https://github.com/mysql/mysql-server/commit/19855664de0245ff24e0753dc82723fc4e2fb7a5

fil_node_open_file(): Use proper 64-bit arithmetic for truncating size_bytes to a multiple of the file extent size.
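The bug class here is doing the rounding in 32-bit arithmetic, so sizes above 4 GiB overflow before the result is widened. A minimal sketch of the corrected arithmetic (the extent size is an arbitrary example value):

    #include <cstdint>
    #include <cstdio>

    // Round size_bytes down to a multiple of the extent size using 64-bit
    // arithmetic throughout; a 32-bit intermediate would truncate large files.
    static uint64_t trunc_to_extent(uint64_t size_bytes, uint64_t extent_bytes)
    {
        return (size_bytes / extent_bytes) * extent_bytes;
    }

    int main()
    {
        const uint64_t extent = UINT64_C(1) << 20;            // 1 MiB extents
        const uint64_t size   = (UINT64_C(6) << 30) + 12345;  // ~6 GiB file
        std::printf("%llu\n",
                    (unsigned long long) trunc_to_extent(size, extent));
    }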
-
Monty authored
- Removed extra set -x -v used for debugging
- Fixed the gcc version test so that it works for gcc 7
-
Sergey Vojtovich authored
Server already has HMT_low/HMT_medium.
-
- 12 Jan, 2018 8 commits
-
-
Sergei Petrunia authored
-
Sergei Petrunia authored
-
Marko Mäkelä authored
InnoDB originally skipped the redo logging of PAGE_MAX_TRX_ID changes until I enabled it in commit e76b873f that was part of MySQL 5.5.5 already. Later, when a more complete history of the InnoDB Plugin for MySQL 5.1 (aka branches/zip in the InnoDB subversion repository) and of the planned-to-be closed-source branches/innodb+ that became the basis of InnoDB in MySQL 5.5 was pushed to the MySQL source repository, the change was part of commit 509e761f:

------------------------------------------------------------------------
r5038 | marko | 2009-05-19 22:59:07 +0300 (Tue, 19 May 2009) | 30 lines

branches/zip: Write PAGE_MAX_TRX_ID to the redo log. Otherwise, transactions that are started before the rollback of incomplete transactions has finished may have an inconsistent view of the secondary indexes.

dict_index_is_sec_or_ibuf(): Auxiliary function for controlling updates and checks of PAGE_MAX_TRX_ID: check whether an index is a secondary index or the insert buffer tree.

page_set_max_trx_id(), page_update_max_trx_id(), lock_rec_insert_check_and_lock(), lock_sec_rec_modify_check_and_lock(), btr_cur_ins_lock_and_undo(), btr_cur_upd_lock_and_undo(): Add the parameter mtr.

page_set_max_trx_id(): Allow mtr to be NULL. When mtr==NULL, do not attempt to write to the redo log. This only occurs when creating a page or reorganizing a compressed page. In these cases, the PAGE_MAX_TRX_ID will be set correctly during the application of redo log records, even though there is no explicit log record about it.

btr_discard_only_page_on_level(): Preserve PAGE_MAX_TRX_ID. This function should be unreachable, though.

btr_cur_pessimistic_update(): Update PAGE_MAX_TRX_ID.

Add some assertions for checking that PAGE_MAX_TRX_ID is set on all secondary index leaf pages.

rb://115 tested by Michael, fixes Issue #211
------------------------------------------------------------------------

After this fix, some bogus references to recv_recovery_is_on() remained. Also, some references could be replaced with references to index->is_dummy to prepare us for MDEV-14481 (background redo log apply).
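The mtr==NULL convention from the quoted log can be sketched as follows (the field layout and names are simplified stand-ins, not the real page format code): when no mini-transaction is passed, the field is updated only in the page image and no redo record is written, because page creation and compressed-page reorganisation reproduce the value during recovery anyway.

    #include <cstdint>

    struct mtr_t { /* mini-transaction handle; details omitted */ };

    static void page_set_max_trx_id_sketch(unsigned char* field8,
                                           uint64_t trx_id, mtr_t* mtr)
    {
        for (int i = 7; i >= 0; i--) {          // store big-endian in the page
            field8[i] = (unsigned char)(trx_id & 0xff);
            trx_id >>= 8;
        }
        if (mtr != nullptr) {
            /* write a redo log record for the 8-byte field (omitted) */
        }
        // mtr == nullptr: page creation / zip reorganisation, no redo needed.
    }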
-
Sergei Petrunia authored
-
Otto Kekäläinen authored
This commit does not touch any variable names or any other actual code, and thus should not in any way affect how the code works.
-
Varun Gupta authored
-
Sergei Petrunia authored
-
Sergei Petrunia authored
- Make Rdb_binlog_manager::unpack_value() not overrun the stack when it is reading invalid data (which it currently does, as in MariaDB we do not store binlog coordinates under BINLOG_INFO_INDEX_NUMBER; see the comments in MDEV-14892 for details).
- We may need to store these coordinates in the future, so instead of removing the call to this function, let's make it work properly for all possible inputs.
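The defensive-parsing idea can be sketched like this: every read from the packed value is preceded by a length check, so malformed input produces an error instead of a read or copy past the end of the buffer. The layout below (2-byte name length, name, 8-byte position) is invented for illustration, not the actual RocksDB-SE format:

    #include <cstdint>
    #include <cstring>
    #include <string>

    static bool unpack_binlog_pos(const unsigned char* value, size_t len,
                                  std::string* file, uint64_t* pos)
    {
        if (len < 2) return false;                    // need the name length
        const size_t name_len = (size_t(value[0]) << 8) | value[1];

        if (len < 2 + name_len + 8) return false;     // need name + 8-byte pos
        file->assign(reinterpret_cast<const char*>(value) + 2, name_len);

        uint64_t p = 0;                               // byte order handling
        std::memcpy(&p, value + 2 + name_len, 8);     // omitted for brevity
        *pos = p;
        return true;
    }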
-
- 11 Jan, 2018 7 commits
-
-
Andrei Elkin authored
Problems
--------
The slave IO thread did not conduct an integrity check on a group of row-based events. Specifically, it tolerated a missing terminal block event, which must be flagged with STMT_END. Failure to react to its loss can confuse the applier thread in various ways.

Another potential issue was that there was no check for an impossible second consecutive Gtid_log_event while the slave IO thread is receiving events to be skipped after a reconnect.

Fixes
-----
This patch makes the slave IO thread track the Rows event STMT_END status. Whenever the IO thread, on reading the next event, finds out that a preceding Rows event did not actually carry the flag, an explicit error is issued. Replication can be resumed after the source of failure is eliminated; see the provided test.

Note that currently the row-based group integrity check excludes the compressed version 2 Rows events (which are not generated by a MariaDB master). Their uncompressed counterpart is manually tested.

The second issue is covered by producing an error in case the IO thread receives a successive Gtid_log_event while it is skipping events after a reconnect.
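The integrity check can be sketched as a small state machine: remember whether a Rows event group is still open, and raise an error when an event that starts a new group arrives before the STMT_END flag has been seen. This is a simplification of the patch, with invented types:

    #include <stdexcept>

    enum class EventType { ROWS, GTID, OTHER };

    struct RowsGroupTracker {
        bool group_open = false;

        void on_event(EventType type, bool stmt_end_flag)
        {
            if (type == EventType::GTID && group_open) {
                throw std::runtime_error(
                    "Rows event group was not terminated with STMT_END");
            }
            if (type == EventType::ROWS) {
                group_open = !stmt_end_flag;  // only STMT_END closes the group
            }
        }
    };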
-
Monty authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Monty authored
-
Marko Mäkelä authored
In CREATE SEQUENCE or CREATE TEMPORARY SEQUENCE, we should not start an InnoDB transaction for inserting the sequence status record into the underlying no-rollback table. Because we did this, a debug assertion would fail in START TRANSACTION WITH CONSISTENT SNAPSHOT after CREATE TEMPORARY SEQUENCE was executed.

row_ins_step(): Do not start the transaction. Let the caller do that.

que_thr_step(): Start the transaction before calling row_ins_step().

row_ins_clust_index_entry(): Skip locking and undo logging for no-rollback tables, even for temporary no-rollback tables.

row_ins_index_entry(): Allow trx->id==0 for no-rollback tables.

row_insert_for_mysql(): Do not start a transaction for no-rollback tables.
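The shape of the row_ins_step()/que_thr_step() change is moving the "start the transaction" decision from the insert step to its caller, so that paths inserting into no-rollback tables can skip it. A much simplified sketch, not a reproduction of the InnoDB call graph:

    #include <cassert>

    struct Trx   { bool started = false; };
    struct Table { bool no_rollback; };

    // The insert step no longer starts a transaction itself.
    static void row_ins_step_sketch(Trx& trx, const Table& table)
    {
        assert(table.no_rollback || trx.started);
        /* ... build and insert the row ... */
    }

    // The caller decides whether a transaction is needed.
    static void que_thr_step_sketch(Trx& trx, const Table& table)
    {
        if (!table.no_rollback && !trx.started) {
            trx.started = true;   // start the transaction before the insert
        }
        row_ins_step_sketch(trx, table);
    }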
-