- 15 Aug, 2018 1 commit
-
-
Vladislav Vaintroub authored
-
- 14 Aug, 2018 2 commits
-
-
Daniel Bartholomew authored
-
Vladislav Vaintroub authored
-
- 13 Aug, 2018 3 commits
-
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
aws_key_management needs the current directory to be the datadir during initialization, because it scans the current directory for encrypted keys. The fix is to ensure that plugin initialization in mariabackup happens after the call to my_setwd(mysql_real_data_home).
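A minimal sketch of the ordering constraint described above, not the mariabackup code itself; prepare_backup() and init_key_management_plugin() are hypothetical stand-ins, and chdir() stands in for my_setwd(mysql_real_data_home):

    #include <cstdio>
    #include <unistd.h>

    // Hypothetical stand-in for the plugin's init hook: the real plugin scans
    // the current working directory (".") for encrypted key files, so it only
    // finds them once "." is the datadir.
    static bool init_key_management_plugin()
    {
        return access(".", R_OK) == 0;   // placeholder for "scan . for keys"
    }

    static bool prepare_backup(const char *datadir)
    {
        if (chdir(datadir) != 0)         // equivalent of my_setwd(mysql_real_data_home)
        {
            std::perror("chdir");
            return false;
        }
        return init_key_management_plugin();  // must run only after the chdir above
    }

    int main()
    {
        return prepare_backup("/var/lib/mysql") ? 0 : 1;
    }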
-
Sergei Petrunia authored
Fix a race condition in the test.
-
- 12 Aug, 2018 1 commit
-
-
Sergei Petrunia authored
The test causes simulated server crashes with DBUG_SUICIDE(). It also relies on transactions that were committed right before the crash being visible after the crash (that is, it requires durability). Run the test with transaction durability enabled: set rocksdb-flush-log-at-trx-commit=1.
-
- 10 Aug, 2018 4 commits
-
-
Otto Kekäläinen authored
The package libmariadbclient18 contains the dialog.so plugin, which the new libmariadb3 also ships. As they both use the exact same path, the latter must be marked with Breaks and Replaces relationships. Note: this fix is a conservative hack for the stable releases 10.2 and 10.3. In 10.4, the development release at the time of writing, we will clean up how the libmariadb3 packaging and its -compat packages are done, to match what is done in the official downstream Debian packaging.
-
Marko Mäkelä authored
recv_parse_log_recs(): Do not check for corruption before checking for end-of-log-buffer. For some reason, adding the check to the logical-looking place would cause intermittent recovery failures in the tests innodb.innodb-index and innodb_gis.rtree_compress2.
-
Marko Mäkelä authored
recv_parse_log_recs(): Check for corruption before checking for end-of-log-buffer. mlog_parse_initial_log_record(), page_cur_parse_delete_rec(): Flag corruption for out-of-bounds values, and let the caller dump the corrupted redo log extract.
-
Marko Mäkelä authored
If recv_sys_justify_left_parsing_buf() has been invoked, it is possible that recv_previous_parsed_rec_offset is after the current offset. In this case, we must not dump any bytes before the current record.
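A minimal sketch of the bounds rule described above; dump_start() is a hypothetical helper, not the InnoDB function. It starts the dump at the previously parsed record, but never before the current record when the stale "previous" offset ends up ahead of it after the parsing buffer has been shifted left:

    #include <algorithm>
    #include <cstddef>
    #include <cstdio>

    // Picks where to start dumping bytes around a corrupted record.
    static const unsigned char *dump_start(const unsigned char *buf,
                                           std::size_t prev_rec_offset,
                                           std::size_t cur_rec_offset)
    {
        // never dump bytes located before the current record
        return buf + std::min(prev_rec_offset, cur_rec_offset);
    }

    int main()
    {
        unsigned char buf[256] = {};
        // stale "previous" offset (200) lies after the current record (64)
        std::printf("dump starts at offset %zu\n",
                    std::size_t(dump_start(buf, 200, 64) - buf));
        return 0;
    }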
-
- 09 Aug, 2018 5 commits
-
-
Marko Mäkelä authored
If the LOG_BLOCK_HDR_DATA_LEN field is corrupted, scanning the log records could fail in strange ways. It is better to validate the field as part of validating each log block.
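A minimal sketch of per-block validation of the data-length field, not the InnoDB code; the header offset and the helper name are assumptions for this illustration:

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // 512-byte redo log blocks; the header records how many bytes of the block
    // are in use. A corrupted length must be rejected before parsing the block.
    static const std::size_t BLOCK_SIZE   = 512;  // OS_FILE_LOG_BLOCK_SIZE
    static const std::size_t DATA_LEN_OFF = 4;    // header offset assumed here

    static bool block_data_len_is_valid(const std::uint8_t *block)
    {
        // the field is stored big-endian, as InnoDB writes integers on disk
        std::size_t data_len = (std::size_t(block[DATA_LEN_OFF]) << 8)
                             | block[DATA_LEN_OFF + 1];
        return data_len <= BLOCK_SIZE;
    }

    int main()
    {
        std::uint8_t block[BLOCK_SIZE] = {};
        block[DATA_LEN_OFF]     = 0xFF;           // simulate a corrupted length
        block[DATA_LEN_OFF + 1] = 0xFF;
        std::printf("valid: %d\n", block_data_len_is_valid(block));
        return 0;
    }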
-
Marko Mäkelä authored
Display the log record type in hexadecimal, not binary.
-
Sergei Golubchik authored
-
Sergei Golubchik authored
warning: suggest a space before ‘;’ or explicit braces around empty body in ‘for’ statement
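A short illustration, not the MariaDB code itself, of the kind of loop that triggers this GCC -Wempty-body warning and how explicit braces silence it; the function is a made-up example:

    #include <cstdio>

    // Counts leading spaces; the loop intentionally has an empty body.
    static int count_leading_spaces(const char *s)
    {
        int i;
        // for (i = 0; s[i] == ' '; i++);   // warns: empty body may hide a bug
        for (i = 0; s[i] == ' '; i++) {}    // explicit braces state the intent
        return i;
    }

    int main()
    {
        std::printf("%d\n", count_leading_spaces("   x"));
        return 0;
    }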
-
Sergei Golubchik authored
update the test to the new (correct) result
-
- 07 Aug, 2018 2 commits
-
-
Sergei Golubchik authored
Make mysqld_multi use the same rules for my.cnf directories that all other tools use (see my_default.c).
-
Sergei Golubchik authored
-
- 06 Aug, 2018 2 commits
-
-
Olivier Bertrand authored
- filamtxt.cpp: DOSFAM::RenameTempFile: change sprintf to snprintf.
- filamvct.cpp: VECFAM::RenameTempFile: change sprintf to snprintf.
- javaconn.cpp: add the JAVAConn::GetUTFString function and use it instead of env->GetStringUTFChars. Fix wrong indentation.
- javaconn.h: add the GetUTFString declaration.
- jdbconn.cpp: use GetUTFString instead of env->GetStringUTFChars.
- jmgoconn.cpp: use GetUTFString instead of env->GetStringUTFChars. Fix wrong indentation.
- jsonudf.cpp: change 139 to BMX, line 4631.
- tabjmg.cpp: add ReleaseStringUTF. Fix wrong indentation.
- tabpivot.cpp: fix wrong indentation.
- tabutil.cpp: TDBPRX::GetSubTable: change sprintf to snprintf.
Modified: storage/connect/filamtxt.cpp, filamvct.cpp, javaconn.cpp, javaconn.h, jdbconn.cpp, jmgoconn.cpp, jsonudf.cpp, tabjmg.cpp, tabpivot.cpp, tabutil.cpp.
- Fix MDEV-16895: the CONNECT engine's get_error_message can cause a buffer overflow and server crash with long queries (see the sketch after this list). ha_connect.cc: update the version. get_error_message: remove the charset conversion. Modified: storage/connect/ha_connect.cc.
- Fix a server crash on inserting a bigint into a JDBC table. JDBConn::SetUUID: suppress the check on ctyp that causes a server crash, because ctyp can be negative and this triggers a DEBUG_ASSERT on return. Modified: storage/connect/jdbconn.cpp.
- Update jdbc.result: mysql-test/connect/r/jdbc.result recorded to reflect a message change. Modified: storage/connect/mysql-test/connect/r/jdbc.result.
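A minimal sketch of the sprintf-to-snprintf pattern applied in these fixes; format_rename_error() is a hypothetical stand-in, not a CONNECT function. The point is to bound the formatted output by the destination size so long inputs (such as the long queries in MDEV-16895) cannot overflow the buffer:

    #include <cstddef>
    #include <cstdio>

    // snprintf truncates instead of writing past the end of buf, whereas
    // sprintf would overflow on long file names.
    static void format_rename_error(char *buf, std::size_t buflen,
                                    const char *from, const char *to)
    {
        std::snprintf(buf, buflen, "Error renaming %s to %s", from, to);
    }

    int main()
    {
        char msg[64];
        format_rename_error(msg, sizeof msg, "data.tmp", "data.txt");
        std::puts(msg);
        return 0;
    }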
-
Alexey Botchkov authored
The charset of temporary storage (Item_func_json_insert::tmp_js) was not properly set.
-
- 05 Aug, 2018 1 commit
-
-
Alexey Botchkov authored
Item_func_json_value::val_str() produced a string with the wrong charset.
-
- 03 Aug, 2018 19 commits
-
-
Oleksandr Byelkin authored
-
Marko Mäkelä authored
rw_lock_get_debug_info(): Remove. This function is inherently unsafe to use, because the copied pointers can become stale between rw_lock_debug_mutex_exit() and the dereferencing of the pointer in the caller.
-
Oleksandr Byelkin authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Oleksandr Byelkin authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
fts_query(): Remove a redundant condition (result will never be NULL), and instead check if *result is NULL, to prevent SIGSEGV in fts_query_free_result().
-
Marko Mäkelä authored
This concludes the merge of all applicable InnoDB changes from MySQL 5.7.23, with the exception of a performance fix, which we plan to rewrite in MariaDB later in such a way that it does not involve changing the storage engine API: MDEV-16849 Extending indexed VARCHAR column should be instantaneous
-
Marko Mäkelä authored
This is a port of an Oracle fix. No test case was provided by Oracle. It seems that to exploit this bug, one would have to SET foreign_key_checks=0 before TRUNCATE, and to concurrently run some DML statement that causes a foreign key constraint to be checked. commit 1f24c5aa2843fa548aa5c4b29c00f955e03e9f5b Author: Aditya A <aditya.a@oracle.com> Date: Fri May 18 12:32:37 2018 +0530 Bug #27208858 CONCURRENT DDL/DML ON FOREIGN KEYS CRASH IN PAGE_CUR_SEARCH_WITH_MATCH_BYTES
-
Marko Mäkelä authored
Similar to the tables SYS_FOREIGN and SYS_FOREIGN_COLS, the tables mysql.innodb_table_stats and mysql.innodb_index_stats are updated by the InnoDB internal SQL parser, which fails to enforce the size limits of the data. Due to this, it is possible for InnoDB to hang when there are persistent statistics defined on partitioned tables where the total length of table name, partition name and subpartition name exceeds the incorrectly defined limit VARCHAR(64). That column should have been defined as VARCHAR(199).

btr_node_ptr_max_size(): Interpret the VARCHAR(64) as VARCHAR(199), to prevent a hang in the case that the upgrade script has not been run.

dict_table_schema_check(): Ignore difference in the length of the table_name column.

ha_innobase::max_supported_key_length(): For innodb_page_size=4k, return a larger value so that the table mysql.innodb_index_stats can be created. This could allow "impossible" tables to be created, such that it is not possible to insert anything into a secondary index when both the secondary key and the primary key are long, but this is the easiest and most consistent way. The Oracle fix would only ignore the maximum length violation for the two statistics tables.

os_file_get_status_posix(), os_file_get_status_win32(): Handle ENAMETOOLONG as well.

This patch is based on the following change in MySQL 5.7.23. Not all changes were applied, and our variant allows persistent statistics to work without hangs even if the table definitions were not upgraded.

From fdbdce701ab8145ae234c9d401109dff4e4106cb Mon Sep 17 00:00:00 2001
From: Aditya A <aditya.a@oracle.com>
Date: Thu, 17 May 2018 16:11:43 +0530
Subject: [PATCH] Bug #26390736 THE FIELD TABLE_NAME (VARCHAR(64)) FROM MYSQL.INNODB_TABLE_STATS CAN OVERFLOW.

In mysql.innodb_index_stats and mysql.innodb_table_stats tables the table name column didn't take into consideration partition names which can be more than varchar(64).
-
Marko Mäkelä authored
When MySQL 5.7.1 introduced WL#6326 to reduce contention on the non-leaf levels of B-trees, it introduced a new rw-lock mode SX (not conflicting with S, but conflicting with SX and X) and new rules to go with it. A thread that is holding a dict_index_t::lock (aka index->lock) in SX mode is permitted to acquire non-leaf buf_block_t::lock (aka block->lock) in X or SX mode, in monotonically descending order. That is, once the thread has acquired a block->lock, it is not allowed to acquire a lock on its parent or grandparent pages. Such arbitrary-order access is only allowed when the thread acquired the index->lock in X mode upfront.

A customer encountered a repeatable hang when loading a dump into InnoDB while using multiple innodb_purge_threads (default: 4). The dump makes very heavy use of FOREIGN KEY constraints. By luck, it happened so that two purge worker threads (srv_worker_thread) deadlocked with each other. Both were operating on the index FOR_REF of the InnoDB internal table SYS_FOREIGN. One of them was legitimately holding the index->lock S-latch and the root block->lock S-latch. The other had acquired the index->lock SX-latch, the root block->lock SX-latch, and a bunch of other latches, including the fil_space_t::latch for freeing some blocks and some leaf page latches.

This other thread was inside 2 nested calls to btr_compress() and it was trying to reacquire the root block->lock in X mode, violating the WL#6326 protocol. This violation led to a deadlock, because while S is compatible with SX and a thread can upgrade an SX lock to X when there are no conflicting requests, in this case there was a conflicting S lock held by the other purge worker thread. During this deadlock, both threads were holding the dict_operation_lock S-latch, which would block any subsequent DDL statements, such as CREATE TABLE.

The tables SYS_FOREIGN and SYS_FOREIGN_COLS are special in that they define key columns of the type VARCHAR(0), created using the InnoDB internal SQL parser. Because InnoDB does not internally enforce the maximum length of columns, it would happily write more than 0 bytes to these columns. This caused a miscalculation of node_ptr_max_size.

btr_cur_will_modify_tree(): Clean up some code. (No functional change.)

btr_node_ptr_max_size(): Renamed from dict_index_node_ptr_max_size(). Use a more realistic maximum size for SYS_FOREIGN and SYS_FOREIGN_COLS.

btr_cur_pessimistic_delete(): Refrain from merging pages if it is not safe.

This work is based on the following MySQL 5.7.23 fix:

commit 58dcf0b4a4165ed59de94a9a1e7d8c954f733726
Author: Aakanksha Verma <aakanksha.verma@oracle.com>
Date: Wed May 9 18:54:03 2018 +0530
BUG#26225783 MYSQL CRASH ON CREATE TABLE (REPRODUCEABLE) -> INNODB: A LONG SEMAPHORE WAIT
-
Allen Lai authored
fsync() will return EIO only once when an I/O error happens, so it is wrong to keep calling it until it returns success. When fsync() returns EIO, it should be treated as a hard error and InnoDB must abort immediately.
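A minimal sketch of this policy; fsync_or_abort() is a hypothetical helper, not the InnoDB function. An EIO (or any other hard error) from fsync() is fatal rather than retried, because the kernel may report the error only once and the data may already be lost:

    #include <cerrno>
    #include <cstdio>
    #include <cstdlib>
    #include <fcntl.h>
    #include <unistd.h>

    static void fsync_or_abort(int fd)
    {
        while (fsync(fd) != 0)
        {
            if (errno == EINTR)
                continue;            // interrupted by a signal: retrying is safe
            std::perror("fsync");    // EIO etc.: retrying would hide lost writes
            std::abort();
        }
    }

    int main()
    {
        int fd = open("fsync_demo.tmp", O_CREAT | O_WRONLY, 0600);
        if (fd < 0)
            return 1;
        (void) write(fd, "x", 1);
        fsync_or_abort(fd);
        close(fd);
        return 0;
    }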
-
Sergey Vojtovich authored
trx_set_rw_mode() is never called for read-only transactions; this is guarded by the callers. Removing this condition from the critical section immediately gives a 5% scalability improvement in the OLTP index update benchmark.
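A minimal sketch of the pattern, not the InnoDB code; the Trx struct and trx_set_rw_mode_sketch() are illustrative stand-ins. A condition that callers already guarantee becomes a debug-only assertion evaluated before the mutex is taken, so only the shared-state update remains inside the critical section:

    #include <cassert>
    #include <cstdio>
    #include <mutex>

    struct Trx { bool read_only; int rw_trx_id; };

    static std::mutex trx_sys_mutex;
    static int next_rw_trx_id = 1;

    static void trx_set_rw_mode_sketch(Trx *trx)
    {
        assert(!trx->read_only);                          // caller guarantee, debug builds only
        std::lock_guard<std::mutex> guard(trx_sys_mutex); // shorter critical section
        trx->rw_trx_id = next_rw_trx_id++;
    }

    int main()
    {
        Trx trx = { false, 0 };
        trx_set_rw_mode_sketch(&trx);
        std::printf("rw id: %d\n", trx.rw_trx_id);
        return 0;
    }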
-
Marko Mäkelä authored
-