1. 24 May, 2019 2 commits
    • Vlad Lesin
      MDEV-14192: mariabackup.incremental_backup failed in buildbot with Failing... · bff9b802
      Vlad Lesin authored
      MDEV-14192: mariabackup.incremental_backup failed in buildbot with Failing assertion: byte_offset % OS_FILE_LOG_BLOCK_SIZE == 0
      
      In some cases the InnoDB redo log file header can be rewritten such that
      the checkpoint lsn and checkpoint lsn offset are updated while the
      checkpoint number stays the same. The fix is to re-read the redo log
      header at backup start if at least one of those three parameters has
      changed.
      
      Repeat the logic of log_group_checkpoint() when choosing the InnoDB
      checkpoint info field at backup start. This does not affect backup
      correctness, but it simplifies bug analysis.
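      The changed-header check above can be sketched as follows. This is a
      minimal Python model, not the actual mariabackup code; the class and
      function names are illustrative stand-ins.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CheckpointInfo:
    number: int      # checkpoint number
    lsn: int         # checkpoint LSN
    lsn_offset: int  # checkpoint LSN offset within the redo log file

def header_changed(old: CheckpointInfo, new: CheckpointInfo) -> bool:
    """Comparing only the checkpoint number misses a header that was
    rewritten with the same number but a new lsn/offset; compare all
    three parameters."""
    return (old.number != new.number
            or old.lsn != new.lsn
            or old.lsn_offset != new.lsn_offset)

# A header rewrite that keeps the checkpoint number but moves the LSN:
old = CheckpointInfo(number=7, lsn=1000, lsn_offset=512)
new = CheckpointInfo(number=7, lsn=2048, lsn_offset=1024)
assert old.number == new.number   # the old check would see no change
assert header_changed(old, new)   # the fixed check detects the rewrite
```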
      bff9b802
    • Marko Mäkelä
      Declare INFORMATION_SCHEMA.INNODB_SYS_VIRTUAL stable · c8740407
      Marko Mäkelä authored
      The INFORMATION_SCHEMA plugin INNODB_SYS_VIRTUAL, which was introduced
      in MariaDB 10.2.2 along with the dictionary table SYS_VIRTUAL,
      is similar to other, much older and already stable plugins that
      provide access to InnoDB dictionary tables.
      c8740407
  2. 21 May, 2019 2 commits
  3. 20 May, 2019 3 commits
    • Alexey Botchkov
      MDEV-17456 Malicious SUPER user can possibly change audit log configuration without leaving traces. · 71ee69c8
      Alexey Botchkov authored
      thread_pool_server_audit.result fixed.
      71ee69c8
    • Marko Mäkelä
      Remove UT_NOT_USED · 74904a66
      Marko Mäkelä authored
      btr_pcur_move_to_last_on_page(): Merge with the only caller.
      74904a66
    • Sujatha
      MDEV-19076: rpl_parallel_temptable result mismatch '-33 optimistic' · 5a2110e7
      Sujatha authored
      Problem:
      ========
      The test now fails with the following trace:
      
      CURRENT_TEST: rpl.rpl_parallel_temptable
      --- /mariadb/10.4/mysql-test/suite/rpl/r/rpl_parallel_temptable.result
      +++ /mariadb/10.4/mysql-test/suite/rpl/r/rpl_parallel_temptable.reject
      @@ -194,7 +194,6 @@
       30    conservative
       31    conservative
       32    optimistic
      -33    optimistic
      
      Analysis:
      =========
      The part of test which fails with result content mismatch is given below.
      
      CREATE TEMPORARY TABLE t4 (a INT PRIMARY KEY) ENGINE=InnoDB;
      INSERT INTO t4 VALUES (32);
      INSERT INTO t4 VALUES (33);
      INSERT INTO t1 SELECT a, "optimistic" FROM t4;
      
      slave_parallel_mode=optimistic
      
      The test script expects the INSERT ... SELECT to read both rows 32 and 33
      and populate table 't1'. This expectation occasionally fails.
      
      All three INSERT statements are handed over to three different slave
      parallel workers. Temporary tables are not safe for parallel replication:
      they were designed to be visible to one thread only, so they have no
      table locking. Thus there is no protection against, for example, two
      conflicting transactions committing in parallel.
      
      So, under parallel replication, anything that uses temporary tables is
      serialized with everything before it through a call to the
      "wait_for_prior_commit" function. This ensures that each such transaction
      is executed sequentially.
      
      But there is a code path in which the above wait does not happen. Because
      of this, the INSERT ... SELECT at times does not wait for the INSERT (33)
      to complete; it finishes its execution and enters the commit stage. Hence
      only row 32 is found in those cases, resulting in the test failure.
      
      The wait needs to be added within the "open_temporary_table" call, where
      the code looks like this:
      
      Each thread tries to open a temporary table in 3 different ways:
      
      case 1: Find a temporary table which is already in use by using
               find_temporary_table(tl) && wait_for_prior_commit()
      case 2: If the above failed, look for a temporary table which is marked
              free for reuse. This internally calls "wait_for_prior_commit()"
              if a table is found.
               find_and_use_tmp_table(tl, &table)
      case 3: If none of the above succeeded, open a new table handle from the
              table share.
               if (!table && (share= find_tmp_table_share(tl)))
               { table= open_temporary_table(share, tl->get_table_name(), true); }
      
      At present, "wait_for_prior_commit" happens only in cases 1 and 2.
      
      Fix:
      ====
      On the slave, add a call to "wait_for_prior_commit" for case 3.
      
      The above wait on the slave solves the issue. A more thorough fix would
      be to mark temporary tables as not safe for parallel execution on the
      master side: mark the Gtid_log_event-specific flag FL_TRANSACTIONAL as
      false all the time, so that such transactions are not scheduled in
      parallel.
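      The three lookup paths and the added wait can be sketched as below. This
      is a simplified Python model of the flow, not the server code; all helper
      names here are stand-ins injected through an `env` dictionary.

```python
def open_temporary_table_model(tl, env):
    # case 1: a temporary table instance already in use by this thread
    table = env["find_temporary_table"](tl)
    if table:
        env["wait_for_prior_commit"]()
        return table
    # case 2: an instance marked free for reuse
    # (in the real code, find_and_use_tmp_table() waits internally)
    table = env["find_and_use_tmp_table"](tl)
    if table:
        return table
    # case 3: open a new handle from the table share. Before the fix there
    # was no wait here, so the statement could start before prior commits.
    share = env["find_tmp_table_share"](tl)
    if share:
        if env["is_slave"]:
            env["wait_for_prior_commit"]()  # the fix
        return env["open_from_share"](share, tl)
    return None

# Simulate case 3 on a slave and check the wait happens before the open.
events = []
env = {
    "find_temporary_table": lambda tl: None,
    "find_and_use_tmp_table": lambda tl: None,
    "find_tmp_table_share": lambda tl: "share-of-" + tl,
    "wait_for_prior_commit": lambda: events.append("wait"),
    "open_from_share": lambda share, tl: events.append("open") or tl,
    "is_slave": True,
}
assert open_temporary_table_model("t4", env) == "t4"
assert events == ["wait", "open"]  # the wait now precedes the open
```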
      5a2110e7
  4. 19 May, 2019 1 commit
  5. 17 May, 2019 10 commits
  6. 16 May, 2019 10 commits
  7. 15 May, 2019 2 commits
  8. 14 May, 2019 5 commits
    • Marko Mäkelä
      MDEV-19449 Got error 168 for valid TRUNCATE (temporary) TABLE · 409e210e
      Marko Mäkelä authored
      Add the test case.
      
      The parent commit, which cherry-picked the MDEV-17167 fix from 10.3
      (commit bad2f156)
      fixed the bug.
      409e210e
    • Sergey Vojtovich
      MDEV-17167 - InnoDB: Failing assertion: table->get_ref_count() == 0 upon · 95fb88d5
      Sergey Vojtovich authored
                   truncating a temporary table
      
      TRUNCATE expects only one TABLE instance (which is used by TRUNCATE
      itself) to be open. However this requirement wasn't enforced after
      "MDEV-5535: Cannot reopen temporary table".
      
      Fixed by closing unused table instances before performing TRUNCATE.
      95fb88d5
    • Sujatha
      MDEV-19158: MariaDB 10.2.22 is writing duplicate entries into binary log · 43bbf88d
      Sujatha authored
      Problem:
      ========
      We have a master/master setup on two servers, but we write to only one of
      them (so it is essentially master/slave). We upgraded from 10.1.* to
      10.2.22 last week, and starting with the upgrade we are getting duplicate
      key errors on the slave. BINLOG=mixed.
      
      Analysis:
      =========
      This issue happens with the combination of LOCK TABLES and
      binlog_format=MIXED. When an UNSAFE statement is encountered in 'MIXED'
      mode, it is logged in 'ROW' format, and table maps are written into the
      binary log for all tables in the LOCK TABLES list. For each table in the
      list, a check is done to see whether the
      'check_table_binlog_row_based_done' flag is set. If it is not set, a
      check is initiated to see whether the table qualifies for row-based
      binary logging, and 'check_table_binlog_row_based_done' is set. The flag
      is cleared at the time of closing the thread's tables.
      
      But there are special cases where the unsafe query actually uses only a
      subset of the tables in the LOCK TABLES list.
      
      For example, LOCK TABLES locks t1, t2 and t3, but the unsafe statement
      uses only two tables, t1 and t3. In this case the
      'check_table_binlog_row_based_done' flag is enabled for table 't2' while
      the table map is written, but the 'close_thread_tables' function call
      does not reset it. Since the flag is not cleared for table 't2', even a
      safe statement that uses t2 is logged in row-based format.
      
      This leads to an assertion failure on debug builds and causes duplicate
      entries on release builds, where a statement is logged in both ROW and
      STATEMENT format. This makes the slave fail with a duplicate key error.
      
      Fix:
      ===
      During 'close_thread_tables', when LOCK TABLE modes are active, "ha_reset"
      is done for all tables that were part of the current statement. In the
      example above, 'ha_reset' is called for tables 't1' and 't3', which
      clears their 'check_table_binlog_row_based_done' flag. At this point, add
      a check for the rest of the tables, and clear the
      'check_table_binlog_row_based_done' flag where it is still enabled.
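      The flag-clearing fix can be modeled as below. This is an illustrative
      Python sketch; in the server the flag lives on handler/TABLE objects and
      the clearing happens inside close_thread_tables(), not in standalone
      functions like these.

```python
class Table:
    """Stand-in for a locked table with the row-based-done flag."""
    def __init__(self, name):
        self.name = name
        self.check_table_binlog_row_based_done = False

def close_thread_tables(locked_tables, statement_tables):
    # ha_reset() path: clears the flag for tables used by the statement
    for t in statement_tables:
        t.check_table_binlog_row_based_done = False
    # the fix: also clear the stale flag on locked-but-unused tables
    for t in locked_tables:
        if t.check_table_binlog_row_based_done:
            t.check_table_binlog_row_based_done = False

t1, t2, t3 = Table("t1"), Table("t2"), Table("t3")
locked = [t1, t2, t3]
# Writing table maps for an unsafe statement sets the flag on every
# locked table, even though the statement only uses t1 and t3.
for t in locked:
    t.check_table_binlog_row_based_done = True
close_thread_tables(locked, statement_tables=[t1, t3])
# Without the fix, t2's flag would stay set, forcing later safe
# statements on t2 into row format as well.
assert not t2.check_table_binlog_row_based_done
```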
      43bbf88d
    • Sujatha
      Merge branch '10.1' into 10.2 · d0d663f3
      Sujatha authored
      d0d663f3
    • Sujatha
      MDEV-11095: rpl.rpl_row_mysqlbinlog test fails if row annotation enabled · 47637a3d
      Sujatha authored
      Problem:
      =======
      When the rpl.rpl_row_mysqlbinlog test is executed as shown below, it
      fails with a result content mismatch.
      
      perl mtr rpl_row_mysqlbinlog --mysqld=--binlog-annotate-row-events=1
      
      Analysis:
      =========
      When row annotations are enabled, the actual query is written into the
      binlog, which helps users understand the query even when row-based
      replication is enabled.
      
      For example, a simple insert under row-based replication looks like this:
      
      #190402 16:31:27 server id 1  end_log_pos 526 	Annotate_rows:
      #Q> insert into t values (10)
      #190402 16:31:27 server id 1  end_log_pos 566 	Table_map: `test`.`t` mapped to number 19
      # at 566
      #190402 16:31:27 server id 1  end_log_pos 600 	Write_rows: table id 19 flags: STMT_END_F
      
      BINLOG '
      B0GjXBMBAAAAKAAAADYCAAAAABMAAAAAAAEABHRlc3QAAXQAAQMAAQ==
      B0GjXBcBAAAAIgAAAFgCAAAAABMAAAAAAAEAAf/+CgAAAA==
      '/*!*/;
      # at 600
      
      The test creates some binary log events and redirects them into an SQL
      file, executes RESET MASTER, sources the SQL file back on a clean master,
      and verifies that the data is available. Refer to the following steps:
      
      ../client/mysqlbinlog ./var/mysqld.1/data/master-bin.000001 > test.sql
      ../client/mysql -uroot -S./var/tmp/mysqld.1.sock -Dtest  < test.sql
      ../client/mysqlbinlog ./var/mysqld.1/data/master-bin.000001 -v > row.sql
      
      When the row-based-replication-specific SQL file is sourced once again on
      the master, the newly generated binlog treats the entire "BASE 64"
      encoded event as a query and writes it into the binary log.
      
      Output from 'row.sql':
      
      #Q> BINLOG '
      #Q> B0GjXBMBAAAAKAAAADYCAAAAABMAAAAAAAEABHRlc3QAAXQAAQMAAQ==
      #Q> B0GjXBcBAAAAIgAAAFgCAAAAABMAAAAAAAEAAf/+CgAAAA==
      #190402 16:31:27 server id 1  end_log_pos 657 	Table_map: `test`.`t` mapped to number 23
      # at 657
      #190402 16:31:27 server id 1  end_log_pos 691 	Write_rows: table id 23 flags: STMT_END_F
      
      BINLOG '
      B0GjXBMBAAAAKAAAAJECAAAAABcAAAAAAAEABHRlc3QAAXQAAQMAAQ==
      B0GjXBcBAAAAIgAAALMCAAAAABcAAAAAAAEAAQH+CgAAAA==
      ### INSERT INTO `test`.`t`
      ### SET
      ###   @1=10
      '/*!*/;
      # at 691
      
      
      This is expected behaviour, as we cannot extract the query from BASE 64
      encoded input. It causes more binary logs to be generated when the test
      is executed with row annotations.
      
      The following lines from the test assume that only two binary logs will
      contain the entire data:
      
       --echo --- Test 4 Second Remote test --
      ---exec $MYSQL_BINLOG --read-from-remote-server --user=root --host=127.0.0.1
      	--port=$MASTER_MYPORT master-bin.000001 > $MYSQLTEST_VARDIR/tmp/remote.sql
      ---exec $MYSQL_BINLOG --read-from-remote-server --user=root --host=127.0.0.1
      	--port=$MASTER_MYPORT master-bin.000002 >> $MYSQLTEST_VARDIR/tmp/remote.sql
      
      When row annotations are enabled, the data gets spread across four binary
      logs. As the test uses only the first two binary log files, the data in
      the other binary logs is missed. Hence the test fails with a result
      content mismatch, as less data is available.
      
      Fix:
      ====
      Use the "-to-the-last" option of the "mysqlbinlog" tool, which ensures
      that the contents of all available binary logs are included in the .sql
      file.
      47637a3d
  9. 13 May, 2019 5 commits
    • Marko Mäkelä
      Merge 10.1 into 10.2 · 50999738
      Marko Mäkelä authored
      50999738
    • Marko Mäkelä
      Remove unnecessary pointer indirection for rw_lock_t · b93ecea6
      Marko Mäkelä authored
      In MySQL 5.7.8 an extra level of pointer indirection was added to
      dict_operation_lock and some other rw_lock_t without solid justification,
      in mysql/mysql-server@52720f1772f9f424bf3dd62fa9c214dd608cd036.
      
      Let us revert that change and remove the rather useless rw_lock_t
      constructor and destructor and the magic_n field. In this way,
      some unnecessary pointer dereferences and heap allocation will be avoided
      and debugging might be a little easier.
      b93ecea6
    • Marko Mäkelä
      Merge 10.1 into 10.2 · 26a14ee1
      Marko Mäkelä authored
      26a14ee1
    • Marko Mäkelä
      MDEV-19445 heap-use-after-free related to innodb_ft_aux_table · 2647fd10
      Marko Mäkelä authored
      Try to fix the race conditions between
      SET GLOBAL innodb_ft_aux_table = ...;
      and access to the INFORMATION_SCHEMA tables that depend on
      this variable.
      
      innodb_ft_aux_table: Replaces fts_internal_tbl_name and
      fts_internal_tbl_name2. Just store the user-specified parameter as is.
      
      innodb_ft_aux_table_id: The table_id corresponding to
      SET GLOBAL innodb_ft_aux_table, or 0 if the table does not exist
      or does not contain FULLTEXT INDEX. If the table is renamed later,
      the INFORMATION_SCHEMA tables will continue to refer to the table.
      If the table is dropped or rebuilt, the INFORMATION_SCHEMA tables
      will not find the table.
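      The lookup-by-id semantics described above can be illustrated with a
      small Python model. This is only a sketch of the described behaviour,
      not InnoDB code; the dictionary stands in for the data dictionary's
      table_id-to-table mapping.

```python
# id -> current table name, standing in for the InnoDB data dictionary
tables_by_id = {19: "test/t1"}

# set once via SET GLOBAL innodb_ft_aux_table = 'test/t1' (resolved to an id)
aux_table_id = 19

def lookup(table_id):
    """The I_S tables resolve by numeric id; None means the table is gone."""
    return tables_by_id.get(table_id)

assert lookup(aux_table_id) == "test/t1"

# RENAME TABLE keeps the same table_id, so the I_S tables keep working:
tables_by_id[19] = "test/t1_renamed"
assert lookup(aux_table_id) == "test/t1_renamed"

# DROP or rebuild removes the old id (a rebuild assigns a new one),
# so the I_S tables no longer find the table:
del tables_by_id[19]
tables_by_id[23] = "test/t1"
assert lookup(aux_table_id) is None
```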
      2647fd10
    • Marko Mäkelä
      fts_optimize_words(): Remove stray output · 1c97e07f
      Marko Mäkelä authored
      With SET GLOBAL innodb_optimize_fulltext_only=1
      in effect, OPTIMIZE TABLE would output words from the fulltext index
      to the server error log, even in non-debug builds.
      
      fts_optimize_words(): Remove the unwanted output.
      1c97e07f