  1. 31 Jan, 2019 1 commit
  2. 25 Jan, 2019 1 commit
  3. 24 Jan, 2019 1 commit
    • MDEV-10963 Fragmented BINLOG query · 5d48ea7d
      Andrei Elkin authored
      The problem was originally stated in
        http://bugs.mysql.com/bug.php?id=82212
      The size of a base64-encoded Rows_log_event exceeds its
      vanilla byte representation by a factor of 4/3.
      When a binlogged event's size is about 1GB, mysqlbinlog generates
      a BINLOG query that can't be sent out due to its size.
      
      It is fixed by fragmenting the BINLOG argument C-string into
      (approximate) halves when the base64-encoded event is over 1GB in size.
      In such a case mysqlbinlog puts out
      
          SET @binlog_fragment_0='base64-encoded-fragment_0';
          SET @binlog_fragment_1='base64-encoded-fragment_1';
          BINLOG @binlog_fragment_0, @binlog_fragment_1;
      
      to represent a big BINLOG statement.
      For prompt memory release, the BINLOG handler is made to reset the
      BINLOG argument user variables in the middle of processing, as if
      @binlog_fragment_{0,1} = NULL had been assigned.
      
      Notice that two fragments are enough, though the client and server may
      still need to raise their @@max_allowed_packet to accommodate the
      fragment size (which they would have to do anyway with a greater number
      of fragments, should that be desired).
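
      As a rough illustration of the splitting scheme, here is a hedged
      sketch (illustrative only: the real mysqlbinlog streams from an
      IO_CACHE rather than holding the payload in a std::string, and the
      helper name below is hypothetical):

          #include <cstdio>
          #include <string>

          // Hypothetical sketch: emit one big base64-encoded event as two
          // fragments, mirroring the mysqlbinlog output shown above.
          static void print_fragmented_binlog(const std::string &b64)
          {
            size_t half = b64.size() / 2;
            std::printf("SET @binlog_fragment_0='%s';\n",
                        b64.substr(0, half).c_str());
            std::printf("SET @binlog_fragment_1='%s';\n",
                        b64.substr(half).c_str());
            std::printf("BINLOG @binlog_fragment_0, @binlog_fragment_1;\n");
          }

          int main()
          {
            print_fragmented_binlog("bXktZmFrZS1ldmVudC1ib2R5");
            return 0;
          }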
      
      At the lower level, the following changes are made:
      
      Log_event::print_base64()
        remains to call the encoder and store the encoded data into a cache, but
        now *without* doing any formatting. The latter is deferred until the
        cache is copied to an output file (e.g. mysqlbinlog output).
        The no-formatting behavior is also reflected by a change in the meaning
        of the last argument, which now specifies whether to cache the encoded data.
      
      Rows_log_event::print_helper()
        is made to invoke a specialized fragmented cache-to-file copying
        function,

      copy_cache_to_file_wrapped()
        which takes care of the fragmenting and also optionally wraps the
        encoded strings (fragments) into SQL stanzas.
      
      my_b_copy_to_file()
        is refactored into my_b_copy_all_to_file(). The former function is
        generalized to accept a limit argument that constrains the copying,
        and it no longer reinitializes the cache into reading mode.
        The limit has no effect on a fully read cache.
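
      In miniature, the described split might look like this (a sketch under
      assumed signatures; the real functions operate on IO_CACHE and FILE):

          #include <algorithm>
          #include <cstdio>

          // Sketch only: a byte range stands in for the IO_CACHE.
          struct Cache { const char *pos; const char *end; };

          // Generalized copy: honors a byte limit and no longer rewinds
          // the cache into reading mode itself.
          static size_t copy_to_file(Cache *c, FILE *f, size_t limit)
          {
            size_t avail = (size_t)(c->end - c->pos);
            size_t n = std::min(avail, limit);  // limit caps the copy; it has
            fwrite(c->pos, 1, n, f);            // no effect once the cache is
            c->pos += n;                        // fully read (avail == 0)
            return n;
          }

          // The old "copy everything" behavior becomes a thin wrapper.
          static size_t copy_all_to_file(Cache *c, FILE *f)
          {
            return copy_to_file(c, f, (size_t)-1);
          }

          int main()
          {
            char buf[] = "0123456789";
            Cache c = { buf, buf + 10 };
            copy_to_file(&c, stdout, 4);   // copies "0123"
            copy_all_to_file(&c, stdout);  // copies the rest: "456789"
            return 0;
          }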
  4. 17 Oct, 2018 1 commit
  5. 07 Oct, 2018 1 commit
    • Fix accumulation of old rows in mysql.gtid_slave_pos · 2f4a0c5b
      Kristian Nielsen authored
      This would happen especially in optimistic parallel replication, where there
      is a good chance that a transaction will be rolled back (due to conflicts)
      after it has executed record_gtid(). If the transaction did any deletions of
      old rows as part of record_gtid(), those deletions will be undone as well.
      And the code did not properly ensure that the deletions would be re-tried.
      
      This patch makes record_gtid() remember the list of deletions done as part
      of a transaction. Then in rpl_slave_state::update() when the changes have
      been committed, we discard the list. However, in case of error and rollback,
      in cleanup_context() we will instead put the list back into
      rpl_global_gtid_slave_state so that the deletions will be re-tried later.
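
      In pseudocode terms, the bookkeeping might look like the following
      sketch (type and member names are illustrative, not the actual ones):

          #include <list>

          // Each element describes one old row deleted by record_gtid().
          struct gtid_pos_element { unsigned domain_id; unsigned long long sub_id; };

          struct slave_state_sketch
          {
            std::list<gtid_pos_element> pending_deletes;  // deletions in this trx

            // rpl_slave_state::update() path: the transaction committed,
            // so the deletions are durable and the list can be discarded.
            void on_commit() { pending_deletes.clear(); }

            // cleanup_context() path: rollback also undid the deletions,
            // so hand the list back to the global state for a later retry.
            void on_rollback(std::list<gtid_pos_element> &global_retry)
            {
              global_retry.splice(global_retry.end(), pending_deletes);
            }
          };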
      
      Probably fixes part of the cause of MDEV-12147 as well.
      Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
  6. 24 Jul, 2018 1 commit
  7. 25 Jun, 2018 1 commit
    • MDEV-15242 Poor RBR update performance with partitioned tables · 28e1f145
      Andrei Elkin authored
      The observed and described execution time difference for partitioned
      engines between master and slave was caused by excessive invocation of
      base_engine::rnd_init, which was done even for partitions not involved
      in the Rows-event operation.
      The bug's slave slowdown therefore scales with the number of partitions.

      Fixed by applying an upstream patch (see the sketch below the references).
      
      References:
      ----------
      https://bugs.mysql.com/bug.php?id=73648
      Bug#25687813 REPLICATION REGRESSION WITH RBR AND PARTITIONED TABLES
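
      As a hedged sketch of the idea behind the upstream fix (all names
      below are illustrative, not the actual patch):

          #include <vector>

          struct partition_sketch { bool involved_in_event; bool scan_inited; };

          // Initialize scans only for partitions the Rows event touches,
          // instead of calling rnd_init on every partition.
          static void start_rows_event_scan(std::vector<partition_sketch> &parts)
          {
            for (partition_sketch &p : parts)
              if (p.involved_in_event)    // uninvolved partitions are skipped
                p.scan_inited = true;     // stands in for base_engine::rnd_init()
          }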
  8. 20 Jun, 2018 1 commit
  9. 12 Mar, 2018 1 commit
    • MDEV-14721 Big transaction events get lost on semisync master when replicate_events_marked_for_skip=FILTER_ON_MASTER · 90051082
      Andrei Elkin authored
      
      [Note this is a cherry-pick from 10.2 branch.]
      
      When the events of a big transaction are binlogged at offsets beyond
      2GB from the beginning of the log, the semisync master's dump thread
      lost such events.
      The events were skipped because the dump thread computed their
      skipping status erroneously.

      The current fixes make sure the skipping status is computed correctly.
      The test verifies them by simulating the 2GB offset.
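
      The message does not spell out the exact arithmetic; the classic
      failure mode it points at looks like this (an illustration of the bug
      class, not the actual server code):

          #include <cstdint>
          #include <cstdio>

          int main()
          {
            uint64_t real_pos  = 2147483648ULL + 4096; // offset just past 2GB
            int32_t  truncated = (int32_t) real_pos;   // wraps to a negative value
            // A position-based "have we reached the resume point yet?" test
            // now misfires and the event is skipped.
            std::printf("real=%llu truncated=%d -> skipped=%s\n",
                        (unsigned long long) real_pos, truncated,
                        truncated >= 4096 ? "no" : "yes (bug)");
            return 0;
          }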
  10. 02 Feb, 2018 1 commit
    • BUG#24365972 BINLOG DECODING ISN'T RESILIENT TO CORRUPT BINLOG FILES · 3fb2f8db
      Joao Gramacho authored
      Problem
      =======
      
      When decoding corrupt binary log files, the server may misbehave
      without detecting the event corruption.

      This patch makes the MySQL server more resilient when decoding
      binary logs.
      
      Fixes for events de-serialization and apply
      ===========================================
      
      @sql/log_event.cc
      
      Query_log_event::Query_log_event: added a check to ensure the query
      length respects the event buffer limits.
      
      Query_log_event::do_apply_event: extended a debug print, added a check
      of the character set to determine whether it is "parseable" or not, and
      verified that the database name is valid for the system collation.
      
      Start_log_event_v3::do_apply_event: report an error when applying a
      non-supported binary log version.
      
      Load_log_event::copy_log_event: added a check of the table_name length.
      
      User_var_log_event::User_var_log_event: added checks to avoid reading
      out of buffer limits.
      
      User_var_log_event::do_apply_event: reported a sanity-check error
      properly and added individual sanity checks for variable types that
      expect a fixed (or minimum) number of bytes to be read.
      
      Rows_log_event::Rows_log_event: added checks to avoid reading out of
      buffer limits.
      
      @sql/log_event_old.cc
      
      Old_rows_log_event::Old_rows_log_event: added a sanity check to avoid
      reading out of buffer limits.
      
      @sql/sql_priv.h
      
      Added a sanity check to the available_buffer() function.
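
      The recurring pattern in these checks, as a standalone sketch (not the
      actual server code): before consuming a length-prefixed field, verify
      the declared length still fits in what is left of the event buffer.

          #include <cstddef>

          static bool read_len_prefixed(const char *&pos, const char *end,
                                        const char *&out, size_t &out_len)
          {
            if (pos >= end)
              return false;                 // no room for the length byte
            size_t len = (unsigned char) *pos++;
            if ((size_t)(end - pos) < len)
              return false;                 // declared length overruns buffer
            out = pos;
            out_len = len;
            pos += len;
            return true;
          }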
  11. 23 Jan, 2018 1 commit
    • Fix for MDEV-14141 Crash in print_keydup_error() · b3c7cf81
      Monty authored
      May also fix: MDEV-14970 "MariaDB crashed with signal 11 and Aria table"
      
      I was not able to reproduce the crash; however, there was no protection
      in print_keydup_error() if the storage engine reported the wrong key
      number.

      This patch adds such protection and should stop any further crashes
      in this case.

      Other things:
      - Added extra protection in Aria to not set errkey to more than the
        number of keys. (I don't think this is the cause of this crash, but
        better safe than sorry.)
      - Extended test_if_equal_repl_errors() to handle different cases of
        ER_DUP_ENTRY. This is mainly a precaution for the future.
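
      The added guard, in sketch form (types and message handling are
      simplified assumptions, not the actual function body):

          #include <cstdio>

          // Don't trust an engine-reported duplicate-key index blindly.
          static void print_keydup_error_sketch(unsigned errkey,
                                                unsigned key_count)
          {
            if (errkey >= key_count)
            {
              // fall back instead of indexing key_info[errkey] out of range
              std::printf("Duplicate entry for key %u (generic message)\n",
                          errkey);
              return;
            }
            std::printf("Duplicate entry for key %u\n", errkey);
          }

          int main()
          {
            print_keydup_error_sketch(7, 2);  // wrong key number from engine
            return 0;
          }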
  12. 22 Oct, 2017 1 commit
  13. 17 Oct, 2017 1 commit
  14. 07 Aug, 2017 1 commit
    • MDEV-13179 main.errors fails with wrong errno · 74543698
      Monty authored
      The problem was that the introduction of max-thread-mem-used can cause
      an allocation error very early, even before mysql_parse() is called.
      As mysql_parse() calls thd->reset_for_next_command(), which called
      clear_error(), the error number was lost.

      Fixed by adding an option to have unique messages for each KILL
      signal and changing max-thread-mem-used to use this new feature.
      This removes a lot of problems with the original approach, where
      errors could be signaled silently at almost any time.

      Fixed by moving clear_error() from reset_for_next_command() to
      do_command(), before any memory allocation for the thread.
      
      Related changes:
      - reset_for_next_command() now has an optional parameter specifying
        whether clear_error() should be called. By default it is called, but
        no longer from dispatch_command(), which was the original problem.
      - Added an optional parameter to clear_error() to force calling of
        reset_diagnostics_area(). Before, clear_error() only called
        reset_diagnostics_area() if there was no error, so we normally
        called reset_diagnostics_area() twice.
      - This change removed several duplicated calls to clear_error()
        when starting a query.
      - Reset max_mem_used on COM_QUIT, to protect against a kill during
        quit.
      - Use fatal_error() instead of setting is_fatal_error (cleanup).
      - Set fatal_error if max_thread_mem_used is signaled.
        (Same logic as we use in other places where we are out of resources.)
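
      A minimal model of the described control flow (the names follow the
      message; the bodies are illustrative, not the server's code):

          #include <cstdio>

          struct Thd
          {
            int last_errno = 0;

            void clear_error() { last_errno = 0; }

            // Optional parameter: callers that must preserve a pending error
            // (e.g. dispatch_command() after an early allocation failure)
            // pass false.
            void reset_for_next_command(bool do_clear_error = true)
            {
              if (do_clear_error)
                clear_error();
              /* ... reset other per-statement state ... */
            }
          };

          int main()
          {
            Thd thd;
            thd.last_errno = 1041;             // early allocation failure
            thd.reset_for_next_command(false); // error survives for reporting
            std::printf("errno still set: %d\n", thd.last_errno);
            return 0;
          }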
  15. 03 Jul, 2017 1 commit
    • MDEV-8075: DROP TEMPORARY TABLE not marked as ddl, causing optimistic parallel replication to fail · 228479a2
      Kristian Nielsen authored
      CREATE/DROP TEMPORARY TABLE are not safe to optimistically replicate in
      parallel with other transactions, so they need to be marked as "ddl" in the
      binlog.
      
      This was already done for stand-alone CREATE/DROP TEMPORARY. But temporary
      tables can also be created and dropped inside a BEGIN...END transaction, and
      such transactions were not marked as ddl. Nor was the DROP TEMPORARY TABLE
      statement emitted implicitly when a client connection is closed.
      
      So this patch adds such ddl mark for the missing cases.
      
      The difference to Kristian's original patch is mainly a fix in
      mysql_trans_commit_alter_copy_data() to remember the unsafe_rollback_flags
      over the temporary commit.
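
      The mysql_trans_commit_alter_copy_data() detail, as a hedged sketch
      (unsafe_rollback_flags is the real name; everything else here is
      illustrative):

          // The intermediate commit inside ALTER TABLE's copy phase wipes
          // per-transaction state, so the "ddl" marker must be saved and
          // restored around it.
          struct trans_sketch { unsigned unsafe_rollback_flags; };

          static void intermediate_commit(trans_sketch &trx)
          {
            trx.unsafe_rollback_flags = 0;  // stand-in: commit resets flags
          }

          static void commit_alter_copy_data(trans_sketch &trx)
          {
            unsigned saved = trx.unsafe_rollback_flags; // remember ddl mark
            intermediate_commit(trx);
            trx.unsafe_rollback_flags = saved;  // restore so the binlogged
          }                                     // transaction stays marked ddl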
  16. 06 Apr, 2017 2 commits
  17. 15 Mar, 2017 1 commit
  18. 10 Mar, 2017 1 commit
  19. 28 Feb, 2017 1 commit
  20. 25 Jan, 2017 1 commit
  21. 23 Jan, 2017 1 commit
  22. 17 Jan, 2017 1 commit
    • MDEV-11811: dual master with parallel replication memory leak in write master · 3e589d4b
      Kristian Nielsen authored
      Gtid_list_log_event::do_apply_event() did not free_root(thd->mem_root).
      It can allocate on this mem_root in record_gtid(), and in some scenarios
      there is nothing else that does free_root(), leading to a temporary memory
      leak until the SQL thread stops. One scenario is circular replication with only one
      master active. The active master receives only its own events on the slave,
      all of which are ignored. But whenever the SQL thread catches up with the IO
      thread, a Gtid_list_log_event is applied, leading to the leak.
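
      A generic picture of the leak and the fix (illustrative; the server
      uses MEM_ROOT and free_root()):

          #include <cstdlib>
          #include <vector>

          // Per-event allocations land on a long-lived arena and accumulate
          // until something frees the arena.
          struct arena_sketch
          {
            std::vector<void*> blocks;
            void *alloc(size_t n)
            { void *p = std::malloc(n); blocks.push_back(p); return p; }
            void free_all()
            { for (void *p : blocks) std::free(p); blocks.clear(); }
          };

          static void apply_gtid_list_event(arena_sketch &mem_root)
          {
            mem_root.alloc(256);  // record_gtid()-style allocation
            mem_root.free_all();  // the fix: release before returning, rather
          }                       // than waiting for the SQL thread to stop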
  23. 06 Dec, 2016 1 commit
  24. 15 Nov, 2016 1 commit
    • Back-port Master_info::using_parallel() to 10.0. · f1fcc1fc
      Kristian Nielsen authored
      This has no functional changes, but it helps avoid merge problems from 10.0
      to 10.1. In 10.0, code that checks for parallel replication uses
      opt_slave_parallel_threads > 0, but this check needs to be
      mi->using_parallel() in 10.1. By using the same check in 10.0 (with
      unchanged semantics), merge problems to 10.1 are avoided.
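
      The helper's semantics as described, in sketch form (the member layout
      is an assumption; only the check itself matches the message):

          // mi->using_parallel() in 10.1 is the same check as the 10.0
          // opt_slave_parallel_threads > 0 test.
          struct Master_info_sketch
          {
            unsigned long opt_slave_parallel_threads;
            bool using_parallel() const
            { return opt_slave_parallel_threads > 0; }
          };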
  25. 02 Sep, 2016 1 commit
  26. 22 Jun, 2016 1 commit
    • MDEV-8638: REVOKE ALL PRIVILEGES, GRANT OPTION FROM CURRENT_ROLE breaks replication · b4496129
      Vicențiu Ciorbaru authored
      Fix the replication failure caused by incorrect initialization of
      THD::invoker_host && THD::invoker_user.
      
      Breakdown of the failure is this:
      Query_log_event::host and Query_log_event::user can have their
      LEX_STRINGs set to length 0 while the actual str member points to
      garbage. Code afterwards copies Query_log_event::host and user to
      THD::invoker_host and THD::invoker_user.
      
      Calling code for these members expects both members to be initialized,
      e.g. that the str member is a NUL-terminated string and that length
      has the appropriate size.
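
      An illustration of the invariant the fix restores (LEX_STRING is the
      real type name; this initializer is illustrative):

          #include <cstddef>

          // A zero-length string must still carry a valid, NUL-terminated
          // str pointer, since callers rely on both members.
          struct lex_string_sketch { const char *str; size_t length; };

          static lex_string_sketch empty_lex_string()
          {
            lex_string_sketch s;
            s.str = "";      // valid empty string, never a garbage pointer
            s.length = 0;
            return s;
          }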
  27. 04 May, 2016 1 commit
    • Bug#12818255: READ-ONLY OPTION DOES NOT ALLOW INSERTS/UPDATES ON TEMPORARY TABLES · 818b3a91
      Sujatha Sivakumar authored
      Bug#14294223: CHANGES NOT ALLOWED TO TEMPORARY TABLES ON READ-ONLY SERVERS
      
      Problem:
      ========
      Running 5.5.14 in read-only mode, we can create temporary tables
      but cannot insert or update records in them. When we try, we get
      Error 1290: "The MySQL server is running with the --read-only option
      so it cannot execute this statement".
      
      Analysis:
      =========
      This bug is very specific to the binlog being enabled and
      binlog-format being stmt/mixed. A standalone server without the
      binlog enabled, or with row-based binlog format, works fine.
      
      How a standalone server and row-based replication work:
      =====================================================
      A standalone server and row-based replication mark a
      transaction as read_write only when it modifies
      non-temporary tables as part of the current transaction.

      Because of this, when the code enters the commit phase it checks
      whether a transaction is read_write or not. If the transaction
      is read_write and global read-only mode is enabled, the
      transaction fails with a "server is in read-only mode"
      error.

      In statement-based mode, a binlog handler is created at the
      time of writing to the binary log, and it is always marked as
      read_write. In the case of temporary tables, even though the
      engine did not mark the transaction as read_write, the new
      transaction started by the binlog handler is considered
      read_write.

      Hence in this case, when the code enters the commit phase it
      finds one handler with a read_write transaction even though we
      are only modifying a temporary table. This causes the server
      to throw an error when global read-only mode is enabled.
      
      Fix:
      ====
      At commit time in ha_commit_trans(), if a read_write transaction
      is found, we check whether this transaction comes from a handler
      other than the binlog handler. This ensures that the statement is
      blocked only when a genuine read_write transaction is reported by
      an engine other than the binlog handler (see the sketch below).
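
      The commit-time test, sketched (names are illustrative):

          // Block the commit under --read-only only if some engine other
          // than the binlog pseudo-engine registered a read_write trx.
          struct ha_info_sketch { bool is_binlog; bool is_read_write; };

          static bool genuine_read_write(const ha_info_sketch *ha, unsigned n)
          {
            for (unsigned i = 0; i < n; i++)
              if (ha[i].is_read_write && !ha[i].is_binlog)
                return true;   // a real engine modified data: block it
            return false;      // only the binlog handler: allow the change
          }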
  28. 22 Mar, 2016 2 commits
  29. 21 Mar, 2016 1 commit
  30. 04 Mar, 2016 1 commit
  31. 01 Mar, 2016 1 commit
    • BUG#17018343 SLAVE CRASHES WHEN APPLYING ROW-BASED BINLOG ENTRIES IN CASCADING REPLICATION · bb32ac1d
      Venkatesh Duggirala authored
      
      Problem: In RBR mode, merge table updates are not successfully applied
      in a cascading replication setup.
      
      Analysis & Fix: Every type of row event is preceded by one or more table_map_log_events
      that gives the information about all the tables that are involved in the row
      event. Server maintains the list in RPL_TABLE_LIST and it goes through all the
      tables and checks for the compatibility between master and slave. Before
      checking for the compatibility, it calls 'open_tables()' which takes the list
      of all tables that needs to be locked and opened. In RBR, because of the
      Table_map_log_event , we already have all the tables including base tables in
      the list. But the open_tables() which is generic call takes care of appending
      base tables if the list contains merge tables. There is an assumption in the
      current replication layer logic that these tables (TABLE_LIST type objects) are always
      added in the end of the list. Replication layer maintains the count of
      tables(tables_to_lock_count) that needs to be verified for compatibility check
      and runs through only those many tables from the list and rest of the objects
      in linked list can be skipped. But this assumption is wrong.
      open_tables()->..->add_children_to_list() adds base tables to the list immediately
      after seeing the merge table in the list.
      
      For example: if the list passed to open_tables() is t1->t2->t3, where
      t3 is a merge table (and t1 and t2 are base tables), it adds t1'->t2'
      to the list after t3. The new table list looks like t1->t2->t3->t1'->t2'.
      It looks as if they were added at the end of the list, but that is not
      always the case. If the list passed to open_tables() is t3->t1->t2,
      where t3 is a merge table (and t1 and t2 are base tables), the new
      prepared list will be t3->t1'->t2'->t1->t2. Here t1' and t2' are
      TABLE_LIST objects added by the add_children_to_list() call, and the
      replication layer should not look at them. tables_to_lock_count will
      not help, as the objects are added in the middle of the list.
      
      Fix: After investigating the add_children_to_list() logic (which is
      called from open_tables()), there is no flag/logic in it to skip adding
      the children to the list even if the children are already included in
      the table list. Hence, to fix the issue, logic is added in the
      replication layer to skip children in the list by checking whether
      'parent_l' is non-NULL. If a table is a child, we skip the
      compatibility check for that table (see the sketch below).

      Also, this patch does not remove the 'tables_to_lock_count' logic, for
      performance reasons: any children at the end of the list can still be
      skipped directly by stopping the loop with the tables_to_lock_count
      check.
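
      The skip, sketched (parent_l is the real member name; the structures
      are simplified):

          // Children appended by add_children_to_list() carry a non-NULL
          // parent_l and must not be compatibility-checked.
          struct table_list_sketch
          {
            table_list_sketch *next_global;
            table_list_sketch *parent_l;  // non-NULL for merge-table children
          };

          static void check_compatibility(table_list_sketch *tables)
          {
            for (table_list_sketch *tl = tables; tl; tl = tl->next_global)
            {
              if (tl->parent_l)
                continue;   // child of a merge table: skip the check
              /* ... master/slave table compatibility check here ... */
            }
          }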
  32. 23 Feb, 2016 3 commits
    • [MDEV-8411] Assertion failed in !table->write_set · de1fa452
      Vicențiu Ciorbaru authored
      The reason for the assertion failure is that the update statement for
      the minimal row image sets only the PK column in the write_set of the
      table to true. On the other hand, the trigger aims to update a different
      column.
      
      Make sure that, when triggers are processed, the columns they update
      are marked as used accordingly (see the sketch below).
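
      A sketch of the idea (write_set is the real bitmap's name; a std::set
      stands in for it here, and the function is illustrative):

          #include <set>

          struct table_sketch { std::set<unsigned> write_set; };

          // Before processing a trigger, mark every column its body updates
          // so the minimal row image includes them.
          static void mark_trigger_columns(table_sketch &t,
                                           const unsigned *cols, unsigned n)
          {
            for (unsigned i = 0; i < n; i++)
              t.write_set.insert(cols[i]);  // column may now be updated
          }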
    • refs codership/mysql-wsrep#201 · c6659345
      Daniele Sciascia authored
      Fix remaining issues with wsrep_sync_wait and query cache.
      
      - Fixes a misplaced call to invalidate the query cache in
        Rows_log_event::do_apply_event().
        The query cache was invalidated too early, which allowed old
        entries to be inserted into the cache.
      
      - Reset thd->wsrep_sync_wait_gtid on a query cache hit.
        THD::cleanup_after_query() is not called in such cases, so
        thd->wsrep_sync_wait_gtid remained initialized.
  33. 10 Feb, 2016 1 commit
  34. 09 Feb, 2016 1 commit
  35. 01 Dec, 2015 1 commit
    • Bug#21205695 DROP TABLE MAY CAUSE SLAVES TO BREAK · 2735f0b9
      Venkatesh Duggirala authored
      Problem:
      ========
      1) DROP TABLE queries are re-generated by the server
      before the events (queries) are written into the binlog,
      for various reasons. If the table name/db name contains
      non-regular characters (like latin characters), the
      generated query is wrong and hence breaks replication.
      2) In the edge case when the table name/db name contains
      64 characters, the server throws the assertion
      assert(M_TBLLEN < 128).
      3) In the edge case when the db name contains 64 latin
      characters, the binlog content is interpreted badly,
      leading to replication failure.
      
          Analysis & Fix :
          ================
          1) Parser reads the table name from the query and converts
          it to standard charset(utf8) and stores it in table_name variable.
          When drop table query is regenerated with the same table_name
          variable, it should be converted back to the original charset
          from standard charset(utf8).
      
          2) Latin character takes two bytes for each character. Limit
          of the identifier is 64. SYSTEM_CHARSET_MBMAXLEN is set to '3'.
          So there is a possiblity that tablename/dbname contains 3 * 64.
          Hence assert is changed to
          (M_TBLLEN <= NAME_CHAR_LEN*SYSTEM_CHARSET_MBMAXLEN)
      
          3) db_len in the binlog event header is taking 1 byte.
             db_len is ranged from 0 to 192 bytes (3 * 64).
             While reading the db_len from the event, server
             is casting to uint instead of uchar which is leading
             to bad db_len. This problem is fixed by changing the
             cast type to uchar.
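
      Point 3 in miniature (a plain illustration, not the server code;
      assumes char is signed, as on x86):

          #include <cstdio>

          int main()
          {
            char raw = (char) 192;              // db_len byte from the event
            unsigned bad  = (unsigned) raw;     // sign-extended: 4294967232
            unsigned good = (unsigned) (unsigned char) raw;  // 192, as intended
            std::printf("bad=%u good=%u\n", bad, good);
            return 0;
          }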
  36. 29 Nov, 2015 1 commit
    • Fixes to get all tests to run on MacOS X Lion 10.7 · c3018b0f
      Monty authored
      This includes fixing all utilities to not have any memory leaks,
      as safemalloc warnings stopped tests from passing on MacOSX.
      
      - Ensure that all clients take character-set-dir, as the
        libmysqlclient library will use it.
      - mysql-test-run now passes character-set-dir to all external clients.
      - Changed dynstr_free() so that it can be called twice (this made the
        freeing code simpler; see the sketch below).
      - Changed rpl_global_gtid_slave_state to be allocated dynamically, as it
        includes a mutex that needs to be initialized/destroyed before my_end()
        is called.
      - Removed rpl_slave_state::init() and rpl_slave_state::deinit(), as
        their jobs are better handled by the constructor and delete.
      - Print the alias instead of table_name in check_duplicate_key, as
        table_name may have been converted to lower case.
      
      Other things:
      - Fixed a case in time_to_datetime_with_warn() where we were
        using && instead of & in tests.