- 28 May, 2019 1 commit
Thirunarayanan Balathandayuthapani authored
- Don't apply redo log for the corrupted page when innodb_force_recovery > 0.
- Allow the table to be dropped when the index root page is corrupted and innodb_force_recovery > 0.
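A hedged sketch of the second point (table name hypothetical; assumes the server was started with --innodb-force-recovery=1 or higher):

    -- t1 is an InnoDB table whose index root page is corrupted;
    -- with innodb_force_recovery > 0 the DROP now succeeds:
    DROP TABLE t1;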
- 27 May, 2019 2 commits
Thirunarayanan Balathandayuthapani authored
create_table_def() misconstructs the dict_table_t by ignoring the stored columns of the table if a virtual column is present between stored columns.
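For illustration, a hedged example of the affected layout (names hypothetical): a virtual column placed between two stored columns:

    -- 'v' sits between the stored columns 'a' and 'b';
    -- create_table_def() must not ignore 'b' when building the dict_table_t.
    CREATE TABLE t1 (
      a INT,
      v INT AS (a * 2) VIRTUAL,
      b INT
    ) ENGINE=InnoDB;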
Daniel Black authored
- 24 May, 2019 2 commits
Vlad Lesin authored
MDEV-14192: mariabackup.incremental_backup failed in buildbot with "Failing assertion: byte_offset % OS_FILE_LOG_BLOCK_SIZE == 0"

In some cases it is possible that the InnoDB redo log file header is re-written such that the checkpoint LSN and the checkpoint LSN offset are updated while the checkpoint number stays the same. The fix is to re-read the redo log header if at least one of those three parameters has changed at backup start.

Repeat the logic of log_group_checkpoint() in choosing the InnoDB checkpoint info field at backup start. This does not influence backup correctness, but it simplifies bug analysis.
Marko Mäkelä authored
The INFORMATION_SCHEMA plugin INNODB_SYS_VIRTUAL, which was introduced in MariaDB 10.2.2 along with the dictionary table SYS_VIRTUAL, is similar to other, much older and already stable plugins that provide access to InnoDB dictionary tables.
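For illustration, a minimal sketch of querying the plugin (column names as documented for the MariaDB 10.2+ INFORMATION_SCHEMA table; table definition hypothetical):

    CREATE TABLE t (a INT, b INT AS (a + 1) VIRTUAL) ENGINE=InnoDB;
    -- one row per base column that a virtual column depends on:
    SELECT table_id, pos, base_pos FROM information_schema.INNODB_SYS_VIRTUAL;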
- 21 May, 2019 2 commits
Marko Mäkelä authored
Monty authored
- 20 May, 2019 3 commits
Alexey Botchkov authored
thread_pool_server_audit.result fixed.
Marko Mäkelä authored
btr_pcur_move_to_last_on_page(): Merge with the only caller.
Sujatha authored
Problem:
========
The test now fails with the following trace:

CURRENT_TEST: rpl.rpl_parallel_temptable
--- /mariadb/10.4/mysql-test/suite/rpl/r/rpl_parallel_temptable.result
+++ /mariadb/10.4/mysql-test/suite/rpl/r/rpl_parallel_temptable.reject
@@ -194,7 +194,6 @@
30 conservative
31 conservative
32 optimistic
-33 optimistic

Analysis:
=========
The part of the test which fails with a result content mismatch is given below.

CREATE TEMPORARY TABLE t4 (a INT PRIMARY KEY) ENGINE=InnoDB;
INSERT INTO t4 VALUES (32);
INSERT INTO t4 VALUES (33);
INSERT INTO t1 SELECT a, "optimistic" FROM t4;

slave_parallel_mode=optimistic

The expectation of the above test script is that the INSERT ... SELECT reads both 32 and 33 and populates table 't1'. But this expectation fails occasionally.

All three INSERT statements are handed over to three different slave parallel workers. Temporary tables are not safe for parallel replication: they were designed to be visible to one thread only, so they have no table locking, and thus there is no protection against two conflicting transactions committing in parallel. So anything that uses temporary tables is serialized with anything before it under parallel replication, through a "wait_for_prior_commit" function call, which ensures that each transaction is executed sequentially.

But there exists a code path in which the above wait doesn't happen. Because of this, at times the INSERT ... SELECT doesn't wait for the INSERT (33) to complete; it finishes its execution and enters the commit stage. Hence only row 32 is found in those cases, resulting in the test failure.

The wait needs to be added within the "open_temporary_table" call, where each thread tries to open the temporary table in three different ways:

case 1: Find a temporary table which is already in use:
        find_temporary_table(tl) && wait_for_prior_commit()
case 2: If the above failed, look for a temporary table which is marked free for reuse. This internally calls "wait_for_prior_commit()" if a table is found:
        find_and_use_tmp_table(tl, &table)
case 3: If none of the above, open a new table handle from the table share:
        if (!table && (share= find_tmp_table_share(tl)))
        {
          table= open_temporary_table(share, tl->get_table_name(), true);
        }

At present the "wait_for_prior_commit" happens only in cases 1 and 2.

Fix:
====
On the slave, add a call to "wait_for_prior_commit" for case 3. This wait on the slave solves the issue.

A more detailed fix would be to mark temporary tables as not safe for parallel execution on the master side: on the master, mark the Gtid_log_event specific flag FL_TRANSACTIONAL as false all the time, so that such transactions are not scheduled in parallel.
- 19 May, 2019 1 commit
Alexey Botchkov authored
Fix for SET GLOBAL server_audit_logging=ON added.
- 17 May, 2019 10 commits
Sergei Petrunia authored
Jan Lindström authored
The crash was a timeout crash. Add correct waits for connections, wsrep sync waits, and auto-increment offset save and restore.
Jan Lindström authored
Use wsrep sync wait instead of unnecessary waits, and correct the slave setting.
Jan Lindström authored
Remove unnecessary sleeps, and fix wait_condition to use wsrep_flow_control_paused, i.e. wait until flow control pauses a transaction on the master.
Sergei Golubchik authored
Alexey Botchkov authored
JSON_MERGE_PATCH implemented. Added JSON_MERGE_PRESERVE as a synonym for JSON_MERGE.
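A brief illustration of the difference (values chosen for illustration): JSON_MERGE_PATCH performs an RFC 7396 style merge, where a later value replaces an earlier one and null deletes a key, while JSON_MERGE_PRESERVE keeps all values by collecting them into arrays:

    SELECT JSON_MERGE_PATCH('{"a": 1, "b": 2}', '{"a": 3, "b": null}');
    -- {"a": 3}
    SELECT JSON_MERGE_PRESERVE('{"a": 1, "b": 2}', '{"a": 3, "b": null}');
    -- {"a": [1, 3], "b": [2, null]}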
Varun Gupta authored
Fixed: the server can now be configured with eq_range_index_dive_limit set in the cnf file.
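For instance (value illustrative), the variable can now be set from the configuration file as well as at runtime:

    -- in my.cnf, under [mysqld]: eq_range_index_dive_limit=200
    -- equivalent runtime setting:
    SET GLOBAL eq_range_index_dive_limit = 200;
    SHOW GLOBAL VARIABLES LIKE 'eq_range_index_dive_limit';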
Jan Lindström authored
The crash was a timeout crash. Add correct waits for connections, wsrep sync waits, and auto-increment offset save and restore.
Jan Lindström authored
Use wsrep sync wait instead of unnecessary waits, and correct the slave setting.
Jan Lindström authored
Remove unnecessary sleeps, and fix wait_condition to use wsrep_flow_control_paused, i.e. wait until flow control pauses a transaction on the master.
- 16 May, 2019 10 commits
Sergey Vojtovich authored
Monty authored
The bug was that when using mysql_list_fields, table_list->schema_table_name was not filled in. Fixed by using table_list->schema_table instead, which is always filled in.
Sergei Petrunia authored
Fix both code paths:
- Change the test source code so that it doesn't cause the "Unused variable" warning (which -Werror converted into an error and caused CMake not to set HAVE_THREAD_LOCAL).
- If the system doesn't seem to support HAVE_THREAD_LOCAL, refuse to compile (rather than producing a binary that crashes for some tests).

Originally submitted at https://github.com/facebook/mysql-5.6/pull/905
Sergey Vojtovich authored
This test takes ~6 minutes; split it for better parallelism.
Sergey Vojtovich authored
Use thd_get_ha_data()/thd_set_ha_data(), which protect against plugin removal until it has THD ha_data.

Do not reset THD ha_data in rocksdb_close_connection(); the cleaner approach is to let ha_close_connection() do it.

Removed transaction-object cleanup from rocksdb_done_func(): as we now lock the plugin properly, there must be no transaction objects left during RocksDB shutdown.
Sergey Vojtovich authored
Marko Mäkelä authored
The bug was introduced in MariaDB 10.4.0 by commit 0e5a4ac2, but it is good to have a regression test for this scenario in all applicable MariaDB versions. Cover the purge of an undo log record that was written before the completion of ADD SPATIAL INDEX.
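A hedged sketch of the covered scenario (table and data hypothetical): DML that executes while the spatial index is being created writes an undo log record that purge processes only after the index is complete:

    CREATE TABLE t1 (id INT PRIMARY KEY, g GEOMETRY NOT NULL) ENGINE=InnoDB;
    INSERT INTO t1 VALUES (1, POINT(1, 1));
    -- concurrently with: ALTER TABLE t1 ADD SPATIAL INDEX(g);
    DELETE FROM t1 WHERE id = 1; -- undo record written before the index completes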
Marko Mäkelä authored
Varun Gupta authored
We had the statistics tables in the FROM list of the SELECT. The statistics for tables are not read in such cases, so we need to check this case separately.
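For example (a hedged sketch; mysql.table_stats is one of the engine-independent statistics tables), the separate check covers queries that name a statistics table directly in the FROM list:

    -- statistics must not be read for the statistics tables themselves:
    SELECT * FROM mysql.table_stats;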
Varun Gupta authored
Statistics were not read for a table when we had a CREATE TABLE query. Enforce reading statistics for the commands CREATE TABLE, SET, and DO.
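A hedged illustration (tables and predicates hypothetical) of the commands that should now load statistics before optimization:

    CREATE TABLE t2 SELECT * FROM t1 WHERE a < 10;
    SET @cnt = (SELECT COUNT(*) FROM t1 WHERE a < 10);
    DO (SELECT COUNT(*) FROM t1 WHERE a < 10);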
- 15 May, 2019 2 commits
Robert Bindar authored
Eugene Kosov authored
log_buffer_extend(): Do not write to disk; just allocate a new, bigger buffer and copy the contents of the old one into it. Do not acquire write_mutex.

log_t::is_extending: Removed, as it is now unneeded.

LOG_BUFFER_SIZE: Removed, to make the dependence on srv_log_buffer_size visible.
- 14 May, 2019 5 commits
Marko Mäkelä authored
Add the test case. The parent commit, which cherry-picked the MDEV-17167 fix from 10.3 (commit bad2f156), fixed the bug.
Sergey Vojtovich authored
When truncating a temporary table, TRUNCATE expects only one TABLE instance (the one used by TRUNCATE itself) to be open. However, this requirement wasn't enforced after "MDEV-5535: Cannot reopen temporary table". Fixed by closing unused table instances before performing TRUNCATE.
Sujatha authored
Problem:
========
We have a master/master setup on two servers but are only writing to one of them (so it is essentially master/slave). We upgraded from 10.1.* to 10.2.22 last week, and starting with the upgrade we are getting duplicate key errors on the slave. binlog_format=MIXED.

Analysis:
=========
This issue happens with the LOCK TABLES and binlog_format=MIXED combination. When an UNSAFE statement is encountered in MIXED mode, it is logged in ROW format. For all the tables that are part of the LOCK TABLES list, their table maps are written into the binary log. For each table in the list, a check is done to see whether the 'check_table_binlog_row_based_done' flag is set. If it is not set, a check process is initiated to see whether the table qualifies for row-based binary logging, and 'check_table_binlog_row_based_done' is set. This flag is cleared at the time of closing the thread's tables.

But there can be special cases where the LOCK TABLES list contains more tables than the unsafe query actually uses. For example, LOCK TABLES locks t1, t2 and t3, but the unsafe statement makes use of only t1 and t3. In this case the 'check_table_binlog_row_based_done' flag is enabled for table 't2' while writing the table map, but the 'close_thread_tables' call does not reset it. Since the flag is not cleared for table 't2', even a safe statement which uses t2 is logged in row-based format. This leads to an assert on debug builds and causes duplicate entries on release builds: there, the statement is logged in both ROW and STATEMENT format, which makes the slave fail with a duplicate key error.

Fix:
====
During 'close_thread_tables', when LOCK TABLE modes are active, "ha_reset" is done for all the tables which were part of the current statement. In the example above, 'ha_reset' is called for tables 't1' and 't3', which clears the 'check_table_binlog_row_based_done' flag. At this point, add a check for the rest of the tables: if 'check_table_binlog_row_based_done' is still enabled, clear the flag.
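A hedged sketch of the failing pattern described above (tables and statements hypothetical):

    SET SESSION binlog_format = MIXED;
    LOCK TABLES t1 WRITE, t2 WRITE, t3 WRITE;
    -- unsafe statement using only t1 and t3: it is logged in ROW format and
    -- table maps are written for all locked tables, so t2's
    -- check_table_binlog_row_based_done flag gets set as well
    INSERT INTO t1 SELECT * FROM t3 LIMIT 1;
    -- before the fix, this safe statement could still be logged in ROW format
    -- (and on release builds in both formats), as t2's flag was never cleared
    INSERT INTO t2 VALUES (1);
    UNLOCK TABLES;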
Sujatha authored
Sujatha authored
Problem:
========
When the rpl.rpl_row_mysqlbinlog test is executed as shown below, it fails with a result content mismatch.

perl mtr rpl_row_mysqlbinlog --mysqld=--binlog-annotate-row-events=1

Analysis:
=========
When row annotations are enabled, the actual query is written into the binlog, which helps users understand the query even when row-based replication is in use. For example, a simple insert in row-based replication looks like this:

#190402 16:31:27 server id 1 end_log_pos 526 Annotate_rows:
#Q> insert into t values (10)
#190402 16:31:27 server id 1 end_log_pos 566 Table_map: `test`.`t` mapped to number 19
# at 566
#190402 16:31:27 server id 1 end_log_pos 600 Write_rows: table id 19 flags: STMT_END_F

BINLOG '
B0GjXBMBAAAAKAAAADYCAAAAABMAAAAAAAEABHRlc3QAAXQAAQMAAQ==
B0GjXBcBAAAAIgAAAFgCAAAAABMAAAAAAAEAAf/+CgAAAA==
'/*!*/;
# at 600

The test creates some binary log events and redirects them into an SQL file, executes RESET MASTER, sources the SQL file back on the clean master, and verifies that the data is available. The steps are:

../client/mysqlbinlog ./var/mysqld.1/data/master-bin.000001 > test.sql
../client/mysql -uroot -S./var/tmp/mysqld.1.sock -Dtest < test.sql
../client/mysqlbinlog ./var/mysqld.1/data/master-bin.000001 -v > row.sql

When the row-based-replication specific SQL file is sourced once again on the master, the newly generated binlog treats the entire BASE64-encoded event as a query and writes it into the binary log. Output from 'row.sql':

#Q> BINLOG '
#Q> B0GjXBMBAAAAKAAAADYCAAAAABMAAAAAAAEABHRlc3QAAXQAAQMAAQ==
#Q> B0GjXBcBAAAAIgAAAFgCAAAAABMAAAAAAAEAAf/+CgAAAA==
#190402 16:31:27 server id 1 end_log_pos 657 Table_map: `test`.`t` mapped to number 23
# at 657
#190402 16:31:27 server id 1 end_log_pos 691 Write_rows: table id 23 flags: STMT_END_F

BINLOG '
B0GjXBMBAAAAKAAAAJECAAAAABcAAAAAAAEABHRlc3QAAXQAAQMAAQ==
B0GjXBcBAAAAIgAAALMCAAAAABcAAAAAAAEAAQH+CgAAAA==
### INSERT INTO `test`.`t`
### SET
### @1=10
'/*!*/;
# at 691

This is expected behaviour, as we cannot extract the query from BASE64-encoded input. It causes more binary logs to be generated when the test is executed with row annotations. The following lines from the test assume that only two binary logs contain the entire data:

--echo --- Test 4 Second Remote test --
--exec $MYSQL_BINLOG --read-from-remote-server --user=root --host=127.0.0.1 --port=$MASTER_MYPORT master-bin.000001 > $MYSQLTEST_VARDIR/tmp/remote.sql
--exec $MYSQL_BINLOG --read-from-remote-server --user=root --host=127.0.0.1 --port=$MASTER_MYPORT master-bin.000002 >> $MYSQLTEST_VARDIR/tmp/remote.sql

When row annotations are enabled, the data gets spread across four binary logs. As the test uses only the first two binary log files, data available in the other binary logs gets missed. Hence the test fails with a result content mismatch, as less data is available.

Fix:
====
Use the "--to-last-log" option of the "mysqlbinlog" tool, which ensures that all the available binary log contents are included in the .sql file.
- 13 May, 2019 2 commits
Marko Mäkelä authored
Marko Mäkelä authored
In MySQL 5.7.8, an extra level of pointer indirection was added to dict_operation_lock and some other rw_lock_t instances without solid justification, in mysql/mysql-server@52720f1772f9f424bf3dd62fa9c214dd608cd036.

Let us revert that change, and remove the rather useless rw_lock_t constructor and destructor as well as the magic_n field. In this way some unnecessary pointer dereferences and heap allocation will be avoided, and debugging might be a little easier.