- 31 Jan, 2019 1 commit
Oleksandr Byelkin authored
- 25 Jan, 2019 1 commit
Andrei Elkin authored
MDEV-17803: ulonglongization of table_mapping entry::table_id to fix Windows compilation in particular.
- 24 Jan, 2019 1 commit
Andrei Elkin authored
The problem was originally stated in http://bugs.mysql.com/bug.php?id=82212 The size of a base64-encoded Rows_log_event exceeds its vanilla byte representation by a factor of 4/3. When a binlogged event's size is about 1GB, mysqlbinlog generates a BINLOG query that can't be sent out due to its size. It is fixed by fragmenting the BINLOG argument C-string into (approximate) halves when the base64-encoded event is over 1GB in size. In such a case mysqlbinlog puts out

SET @binlog_fragment_0='base64-encoded-fragment_0';
SET @binlog_fragment_1='base64-encoded-fragment_1';
BINLOG @binlog_fragment_0, @binlog_fragment_1;

to represent a big BINLOG statement. For prompt memory release, the BINLOG handler is made to reset the BINLOG argument user variables in the middle of processing, as if @binlog_fragment_{0,1} = NULL were assigned. Notice that 2 fragments are enough, though the client and server may still need to tweak their @@max_allowed_packet to accommodate the fragment size (which they would have to do anyway with a greater number of fragments, should that be desired).

On the lower level the following changes are made:

Log_event::print_base64() still calls the encoder and stores the encoded data into a cache, but now *without* doing any formatting. The latter is left for the time when the cache is copied to an output file (e.g. mysqlbinlog output). The no-formatting behavior is also reflected by the change in the meaning of the last argument, which now specifies whether to cache the encoded data.

Rows_log_event::print_helper() is made to invoke a specialized fragmenting cache-to-file copying function, copy_cache_to_file_wrapped(), which takes care of fragmenting and optionally wraps the encoded strings (fragments) into SQL stanzas.

my_b_copy_to_file() is refactored into my_b_copy_all_to_file(). The former function is generalized to accept a limit argument that constrains the copying and no longer reinitializes the cache into reading mode. The limit has no effect on a fully read cache.
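Below is a minimal sketch of the fragmenting scheme described above; the function name and the size threshold are illustrative, not the server's actual code, which lives in mysqlbinlog and log_event.cc:

```cpp
#include <iostream>
#include <string>

// Emit a BINLOG statement for a base64-encoded event. If the payload
// exceeds the limit, split it into approximate halves and pass the two
// fragments through user variables, as the commit message describes.
static void emit_binlog_stmt(const std::string &b64, std::string::size_type limit)
{
  if (b64.size() <= limit)
  {
    std::cout << "BINLOG '" << b64 << "';\n";
    return;
  }
  std::string::size_type half = b64.size() / 2;   // approximate halves
  std::cout << "SET @binlog_fragment_0='" << b64.substr(0, half) << "';\n"
            << "SET @binlog_fragment_1='" << b64.substr(half)    << "';\n"
            << "BINLOG @binlog_fragment_0, @binlog_fragment_1;\n";
}

int main()
{
  emit_binlog_stmt("c29tZS1iaWctZXZlbnQ=", 8);    // small limit to force fragmenting
  return 0;
}
```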
- 17 Oct, 2018 1 commit
Sachin authored
consistently) on Replication Slave. lower_case_table_names 0 -> 1 replication works; it's safe as long as the mapping of mixed-case names to lower-case ones is one-to-one.
- 07 Oct, 2018 1 commit
Kristian Nielsen authored
This would happen especially in optimistic parallel replication, where there is a good chance that a transaction will be rolled back (due to conflicts) after it has executed record_gtid(). If the transaction did any deletions of old rows as part of record_gtid(), those deletions will be undone as well. And the code did not properly ensure that the deletions would be re-tried. This patch makes record_gtid() remember the list of deletions done as part of a transaction. Then in rpl_slave_state::update() when the changes have been committed, we discard the list. However, in case of error and rollback, in cleanup_context() we will instead put the list back into rpl_global_gtid_slave_state so that the deletions will be re-tried later. Probably fixes part of the cause of MDEV-12147 as well. Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
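The remember/discard/requeue pattern the patch describes can be sketched with made-up types (the real code lives in the rpl_slave_state machinery):

```cpp
#include <list>

// Hypothetical stand-ins for the GTID rows deleted by record_gtid().
struct GtidRow { unsigned domain_id; unsigned long long sub_id; };

struct SlaveStateTxn
{
  std::list<GtidRow> pending_deletes;   // filled as record_gtid() deletes old rows

  // rpl_slave_state::update() analogue: the deletions are durable, forget them.
  void on_commit() { pending_deletes.clear(); }

  // cleanup_context() analogue: the rollback undid the deletions, so hand
  // them back to the global state to be re-tried later.
  void on_rollback(std::list<GtidRow> &global_retry_queue)
  {
    global_retry_queue.splice(global_retry_queue.end(), pending_deletes);
  }
};
```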
- 24 Jul, 2018 1 commit
Jan Lindström authored
The problem was that binlog_hton was not fully initialized when needed, i.e. when wsrep_on = true.
- 25 Jun, 2018 1 commit
Andrei Elkin authored
The observed (and described) execution-time difference for the partitioned engine between master and slave was caused by excessive invocation of base_engine::rnd_init, which was done also for partitions uninvolved in the Rows-event operation. The bug's slave slowdown therefore scales with the number of partitions. Fixed by applying an upstream patch.

References:
-----------
https://bugs.mysql.com/bug.php?id=73648
Bug#25687813 REPLICATION REGRESSION WITH RBR AND PARTITIONED TABLES
- 20 Jun, 2018 1 commit
Sergei Golubchik authored
Revert 7bbe324f. Fix the bug differently (in log_event.cc). Fix the test case to actually fail without the bug fix.
- 12 Mar, 2018 1 commit
Andrei Elkin authored
replicate_events_marked_for_skip=FILTER_ON_MASTER [Note: this is a cherry-pick from the 10.2 branch.] When the events of a big transaction are binlogged at an offset of over 2GB from the beginning of the log, the semisync master's dump thread lost such events: the Dump thread skipped them, having computed their skipping status erroneously. The current fixes make sure the skipping status is computed correctly. The test verifies them by simulating the 2GB offset.
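The commit does not name the exact coding error, but breakage precisely past the 2GB mark is the classic signature of a binlog position held in, or truncated through, a 32-bit signed type; a purely hypothetical illustration of that class of bug:

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
  uint64_t real_pos = 0x80000010ULL;     // event offset just past 2GB

  // Truncating the 64-bit offset through a 32-bit signed type yields a
  // negative value (implementation-defined, but universal in practice),
  // so any "pos >= skip_until" style comparison misfires and the event
  // can be wrongly treated as skippable.
  int32_t truncated = (int32_t) real_pos;
  std::printf("truncated: %d\n", truncated);

  // Keeping positions in an unsigned 64-bit type (my_off_t-style) is safe.
  std::printf("full: %llu\n", (unsigned long long) real_pos);
  return 0;
}
```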
- 02 Feb, 2018 1 commit
Joao Gramacho authored
Problem
=======
When facing decoding of corrupt binary log files, the server may misbehave without detecting the events' corruption. This patch makes the MySQL server more resilient to binary log decoding.

Fixes for events de-serialization and apply
===========================================
@sql/log_event.cc
- Query_log_event::Query_log_event: added a check to ensure the query length respects event buffer limits.
- Query_log_event::do_apply_event: extended a debug print, added a check of the character set to determine whether it is "parseable" or not, and verified that the database name is valid for the system collation.
- Start_log_event_v3::do_apply_event: report an error on applying a non-supported binary log version.
- Load_log_event::copy_log_event: added a check of the table_name length.
- User_var_log_event::User_var_log_event: added checks to avoid reading out of buffer limits.
- User_var_log_event::do_apply_event: reported a sanity-check error properly and added individual sanity checks for variable types that expect a fixed (or minimum) amount of bytes to be read.
- Rows_log_event::Rows_log_event: added checks to avoid reading out of buffer limits.

@sql/log_event_old.cc
- Old_rows_log_event::Old_rows_log_event: added a sanity check to avoid reading out of buffer limits.

@sql/sql_priv.h
- Added a sanity check to the available_buffer() function.
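All the individual checks above follow one defensive pattern: validate against the buffer end before interpreting any bytes. A generic sketch of that pattern, with a hypothetical reader class (the real checks are inlined in the event constructors):

```cpp
#include <cstdint>
#include <cstring>
#include <stdexcept>

// Every read is checked against the end of the event buffer first, so a
// corrupt length field raises an error instead of reading out of bounds.
class EventReader
{
  const uint8_t *pos_;
  const uint8_t *end_;
public:
  EventReader(const uint8_t *buf, size_t len) : pos_(buf), end_(buf + len) {}

  uint8_t read_u8()
  {
    if (end_ - pos_ < 1)
      throw std::runtime_error("corrupt event: truncated");
    return *pos_++;
  }

  uint32_t read_u32_le()                 // assumes a little-endian host
  {
    if (end_ - pos_ < 4)
      throw std::runtime_error("corrupt event: truncated");
    uint32_t v;
    std::memcpy(&v, pos_, 4);
    pos_ += 4;
    return v;
  }

  const uint8_t *read_bytes(size_t n)    // e.g. a length-prefixed table name
  {
    if ((size_t)(end_ - pos_) < n)
      throw std::runtime_error("corrupt event: truncated");
    const uint8_t *p = pos_;
    pos_ += n;
    return p;
  }
};
```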
- 23 Jan, 2018 1 commit
Monty authored
May also fix: MDEV-14970 "MariaDB crashed with signal 11 and Aria table". I am not able to reproduce a crash, but there was no protection in print_keydup_error() if the storage engine reported the wrong key number. This patch adds such protection and should stop any further crashes in this case.

Other things:
- Added extra protection in Aria to not set errkey to more than the number of keys. (I don't think this is the cause of this crash, but better safe than sorry.)
- Extended test_if_equal_repl_errors() to handle different cases of ER_DUP_ENTRY. This is mainly a precaution for the future.
- 22 Oct, 2017 1 commit
Sergei Golubchik authored
gcc 5.4 and 7.1, Debug and Release builds
- 17 Oct, 2017 1 commit
Sergei Golubchik authored
mostly caused by -Wimplicit-fallthrough
- 07 Aug, 2017 1 commit
Monty authored
The problem was that the introduction of max-thread-mem-used could cause an allocation error very early, even before mysql_parse() is called. And as mysql_parse() calls thd->reset_for_next_command(), which called clear_error(), the error number was lost.

Fixed by adding an option to have unique messages for each KILL signal and changing max-thread-mem-used to use this new feature. This removes a lot of problems with the original approach, where one could get errors signaled silently at almost any time. Also fixed by moving clear_error() from reset_for_next_command() to do_command(), before any memory allocation for the thread.

Related changes:
- reset_for_next_command() now has an optional parameter specifying whether clear_error() should be called. By default it's called, but not anymore from dispatch_command(), which was the original problem.
- Added an optional parameter to clear_error() to force calling of reset_diagnostics_area(). Before, clear_error() only called reset_diagnostics_area() if there was no error, so we normally called reset_diagnostics_area() twice.
- This change removed several duplicated calls to clear_error() when starting a query.
- Reset max_mem_used on COM_QUIT, to protect against a kill during quit.
- Use fatal_error() instead of setting is_fatal_error (cleanup).
- Set fatal_error if max_thread_mem_used is signaled. (Same logic we use for other places where we are out of resources.)
- 03 Jul, 2017 1 commit
Kristian Nielsen authored
CREATE/DROP TEMPORARY TABLE are not safe to optimistically replicate in parallel with other transactions, so they need to be marked as "ddl" in the binlog. This was already done for stand-alone CREATE/DROP TEMPORARY. But temporary tables can also be created and dropped inside a BEGIN...END transaction, and such transactions were not marked as ddl. Nor was the DROP TEMPORARY TABLE statement emitted implicitly when a client connection is closed. So this patch adds the ddl mark for the missing cases. The difference from Kristian's original patch is mainly a fix in mysql_trans_commit_alter_copy_data() to remember the unsafe_rollback_flags over the temporary commit.
- 06 Apr, 2017 2 commits
Sachin Setiya authored
Signed-off-by: Sachin Setiya <sachinsetia1001@gmail.com>
Sachin Setiya authored
Signed-off-by: Sachin Setiya <sachinsetia1001@gmail.com>
- 15 Mar, 2017 1 commit
Sachin Setiya authored
The value of wsrep_affected_rows was not reset properly for the slave. Now we also reset wsrep_affected_rows in Xid_log_event::do_apply_event, apart from THD::cleanup_after_query(). Signed-off-by: Sachin Setiya <sachin.setiya@mariadb.com>
- 10 Mar, 2017 1 commit
Sergei Golubchik authored
- 28 Feb, 2017 1 commit
Monty authored
- 25 Jan, 2017 1 commit
Sachin Setiya authored
Revert "MDEV-7409 On RBR, extend the PROCESSLIST info to include at least the name of the recently used table" This reverts commit 15f46d51.
- 23 Jan, 2017 1 commit
Sachin Setiya authored
MDEV-7409 On RBR, extend the PROCESSLIST info to include at least the name of the recently used table. When RBR is used, add the db name to the Db field and the table name to the Status field of the "SHOW FULL PROCESSLIST" command for the SQL thread.
- 17 Jan, 2017 1 commit
Kristian Nielsen authored
Gtid_list_log_event::do_apply_event() did not free_root(thd->mem_root). It can allocate on this mem_root in record_gtid(), and in some scenarios there is nothing else that does the free_root(), leading to a temporary memory leak until the SQL thread stops. One scenario is circular replication with only one master active. The active master receives only its own events on the slave, all of which are ignored. But whenever the SQL thread catches up with the IO thread, a Gtid_list_log_event is applied, leading to the leak.
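The leak follows a generic arena pattern; the sketch below uses made-up types, with reset() standing in for the server's free_root():

```cpp
#include <cstdlib>
#include <vector>

// Per-event allocations land on an arena that is only emptied at certain
// points. If an "ignored" event still allocates (as record_gtid() can),
// the arena must still be reset, or the memory lives until the SQL
// thread stops.
struct Arena
{
  std::vector<void*> blocks;

  void *alloc(size_t n)
  {
    void *p = std::malloc(n);
    blocks.push_back(p);
    return p;
  }

  void reset()                           // the free_root() analogue
  {
    for (void *p : blocks)
      std::free(p);
    blocks.clear();
  }
};

void apply_gtid_list_event(Arena &arena)
{
  arena.alloc(64);                       // record_gtid()-style allocation
  arena.reset();                         // the fix: always release per-event memory
}
```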
- 06 Dec, 2016 1 commit
Sergei Golubchik authored
Support encrypted binlogs. Not decryption, but at least recognizing that events are encrypted and printing them as such.
- 15 Nov, 2016 1 commit
Kristian Nielsen authored
This has no functional changes, but it helps avoid merge problems from 10.0 to 10.1. In 10.0, code that checks for parallel replication uses opt_slave_parallel_threads > 0, but this check needs to be mi->using_parallel() in 10.1. By using the same check in 10.0 (with unchanged semantics), merge problems to 10.1 are avoided.
- 02 Sep, 2016 1 commit
Nirbhay Choubey authored
Once THDs have been added to the global "threads" list, they must modify query_string only after acquiring the per-thread LOCK_thd_data mutex.
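The rule reduces to a standard per-object locking discipline; a minimal sketch with generic names rather than the server's THD class:

```cpp
#include <mutex>
#include <string>

// Once a session is visible on a global list, other threads (e.g. SHOW
// PROCESSLIST) may read its query string concurrently, so every writer
// must take the session's own data mutex first.
struct Session
{
  std::mutex lock_thd_data;              // per-session mutex
  std::string query_string;

  void set_query(const std::string &q)
  {
    std::lock_guard<std::mutex> guard(lock_thd_data);
    query_string = q;                    // now safe against concurrent readers
  }
};
```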
- 22 Jun, 2016 1 commit
Vicențiu Ciorbaru authored
Fix the replication failure caused by incorrect initialization of THD::invoker_host && THD::invoker_user. Breakdown of the failure is this: Query_log_event::host and Query_log_event::user can have their LEX_STRINGs set to length 0, but the actual str member points to garbage. Code afterwards copies Query_log_event::host and user to THD::invoker_host and THD::invoker_user. Calling code for these members expects both members to be initialized, e.g. that the str member is a NUL-terminated string and length has the appropriate size.
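The invariant the fix restores can be stated with a LEX_STRING-like struct (illustrative types only):

```cpp
#include <cstddef>

// A zero length must always be paired with a valid NUL-terminated str,
// never with a garbage pointer, because consumers copy `str` assuming
// both members are initialized.
struct LexString
{
  const char *str;
  size_t length;
};

static const LexString empty_lex_string = { "", 0 };

// Safe initialization of the invoker fields: fall back to the empty
// string rather than leaving str dangling.
void set_invoker(LexString &dst, const LexString *src)
{
  dst = src ? *src : empty_lex_string;
}
```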
- 04 May, 2016 1 commit
Sujatha Sivakumar authored
INSERTS/UPDATES ON TEMPORARY TABLES
Bug#14294223: CHANGES NOT ALLOWED TO TEMPORARY TABLES ON READ-ONLY SERVERS

Problem:
========
Running 5.5.14 in read-only mode, we can create temporary tables but cannot insert or update records in them. When we try, we get Error 1290: The MySQL server is running with the --read-only option so it cannot execute this statement.

Analysis:
=========
This bug is very specific to the binlog being enabled and binlog-format being stmt/mixed. A standalone server without the binlog enabled, or with row-based binlog mode, works fine.

How the standalone server and row-based replication work:
==========================================================
The standalone server and row-based replication mark transactions as read_write only when they are modifying non-temporary tables as part of their current transaction. Because of this, when the code enters the commit phase it checks whether a transaction is read_write or not. If the transaction is read_write and global read-only mode is enabled, the transaction fails with a 'server is in read only mode' error.

In the case of statement-based mode, at the time of writing to the binary log a binlog handler is created and it is always marked as read_write. In the case of temporary tables, even though the engine did not mark the transaction as read_write, the new transaction started by the binlog handler is considered read_write. Hence when the code enters the commit phase it finds one handler which has a read_write transaction even though we are only modifying a temporary table. This causes the server to throw an error when global read-only mode is enabled.

Fix:
====
At the time of commit in "ha_commit_trans", if a read_write transaction is found, we should check whether this transaction is coming from a handler other than the binlog handler. This ensures that there is a genuine read_write transaction being sent by the engine apart from the binlog handler, and only then should it be blocked.
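A sketch of the commit-time test described in the fix, with hypothetical types standing in for the transaction participants list:

```cpp
#include <vector>

// A transaction is only rejected on a read-only server if some engine
// other than the binlog pseudo-engine registered a read-write change.
struct HandlerInfo
{
  bool is_binlog_handler;
  bool read_write;
};

bool has_real_write(const std::vector<HandlerInfo> &participants)
{
  for (const HandlerInfo &h : participants)
    if (h.read_write && !h.is_binlog_handler)
      return true;                       // a genuine engine write
  return false;                          // only the binlog's always-read-write entry
}
```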
- 22 Mar, 2016 2 commits
Alexey Botchkov authored
LOG-SLOW-SLAVE-STATEMENTS NOT DISPLAYED. These parameters were moved from the command-line options to the system variables section. Treatment of opt_log_slow_slave_statements was changed to allow dynamic change of the variable.
Sergei Golubchik authored
Delete deferred events after they're executed (otherwise they can be executed again for a sub-statement).

See also commit 0e78d1d
Author: Venkatesh Duggirala <venkatesh.duggirala@oracle.com>
Date: Wed Mar 20 11:20:47 2013 +0530
BUG#15850951 - DUPLICATE ERROR IN REPLICATION WITH SLAVE TRIGGERS AND AUTO INCREMENT
- 21 Mar, 2016 1 commit
Sergei Golubchik authored
Don't crash in debug builds; issue an error message on a corrupt event.
- 04 Mar, 2016 1 commit
Otto Kekäläinen authored
- 01 Mar, 2016 1 commit
Venkatesh Duggirala authored
REPLICATION

Problem: In RBR mode, merge table updates are not successfully applied on a cascading replication.

Analysis & Fix: Every type of row event is preceded by one or more table_map_log_events that give the information about all the tables involved in the row event. The server maintains the list in RPL_TABLE_LIST, and it goes through all the tables and checks the compatibility between master and slave. Before checking the compatibility, it calls 'open_tables()', which takes the list of all tables that need to be locked and opened.

In RBR, because of the Table_map_log_event, we already have all the tables, including base tables, in the list. But open_tables(), which is a generic call, takes care of appending base tables if the list contains merge tables. There is an assumption in the current replication-layer logic that these tables (TABLE_LIST type objects) are always added at the end of the list. The replication layer maintains a count of tables (tables_to_lock_count) that need to be verified by the compatibility check, runs through only that many tables from the list, and skips the rest of the objects in the linked list.

But this assumption is wrong. open_tables()->..->add_children_to_list() adds base tables to the list immediately after seeing the merge table in the list. For example, if the list passed to open_tables() is t1->t2->t3 where t3 is a merge table (and t1 and t2 are base tables), it adds t1'->t2' to the list after t3. The new table list looks like t1->t2->t3->t1'->t2'. It looks as if they were added at the end of the list, but that is coincidental. If the list passed to open_tables() is t3->t1->t2 where t3 is the merge table, the new prepared list will be t3->t1'->t2'->t1->t2, where t1' and t2' are TABLE_LIST objects added by the add_children_to_list() call which the replication layer should not look into. Here tables_to_lock_count does not help, as the objects are added in the middle of the list.

Fix: After investigating the add_children_to_list() logic (which is called from open_tables()), there is no flag/logic in it to skip adding the children to the list even if the children are already included in the table list. Hence, to fix the issue, logic is added in the replication layer to skip children in the list by checking whether 'parent_l' is non-null or not. If it is a child, we skip the 'compatibility' check for that table. A sketch of this check follows below.

Also, this patch does not remove the 'tables_to_lock_count' logic, for performance reasons: if there are any children at the end of the list, those can still be skipped directly by stopping the loop with the tables_to_lock_count check.
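A sketch of that check, with stand-in types (parent below corresponds to parent_l):

```cpp
// Tables appended by open_tables() as children of a MERGE parent carry a
// non-null parent pointer, so the compatibility loop can skip them
// wherever they sit in the list, instead of relying on a count from
// the end of the list.
struct TableRef
{
  TableRef *next;
  TableRef *parent;                      // non-null => child added for a MERGE table
  bool compatible_with_master;           // stand-in for the real check
};

bool check_all_compatible(TableRef *head)
{
  for (TableRef *t = head; t; t = t->next)
  {
    if (t->parent)                       // merge-table child: not replication's concern
      continue;
    if (!t->compatible_with_master)
      return false;
  }
  return true;
}
```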
- 23 Feb, 2016 3 commits
Vicențiu Ciorbaru authored
Vicențiu Ciorbaru authored
The reason for the assertion failure is that the update statement for the minimal row image sets only the PK column in the table's write_set to true, while the trigger aims to update a different column. Make sure that the columns updated by triggers are marked as used accordingly when the triggers are processed.
Daniele Sciascia authored
Fix remaining issues with wsrep_sync_wait and query cache.
- Fixes a misplaced call to invalidate the query cache in Rows_log_event::do_apply_event(). The query cache was invalidated too early, which allowed old entries to be inserted into the cache.
- Reset thd->wsrep_sync_wait_gtid on a query cache hit. THD::cleanup_after_query is not called in such cases, so thd->wsrep_sync_wait_gtid remained initialized.
- 10 Feb, 2016 1 commit
Daniele Sciascia authored
Manually merged query cache fixes from 97c02faf0a39dd189eeda4f75fb35bc5db69d541.
- 09 Feb, 2016 1 commit
Sergei Golubchik authored
- 01 Dec, 2015 1 commit
Venkatesh Duggirala authored
Problem:
========
1) Drop table queries are re-generated by the server before writing the events (queries) into the binlog, for various reasons. If the table name/db name contains non-regular characters (like latin characters), the generated query is wrong. Hence it breaks replication.
2) In the edge case when the table name/db name contains 64 characters, the server throws an assert: assert(M_TBLLEN < 128).
3) In the edge case when the db name contains 64 latin characters, the binlog content is interpreted badly, which leads to replication failure.

Analysis & Fix:
===============
1) The parser reads the table name from the query, converts it to the standard charset (utf8) and stores it in the table_name variable. When the drop table query is regenerated with the same table_name variable, it should be converted back to the original charset from the standard charset (utf8).
2) A latin character takes two bytes per character. The limit of an identifier is 64 characters, and SYSTEM_CHARSET_MBMAXLEN is set to '3'. So there is a possibility that a tablename/dbname occupies 3 * 64 bytes. Hence the assert is changed to (M_TBLLEN <= NAME_CHAR_LEN*SYSTEM_CHARSET_MBMAXLEN).
3) db_len in the binlog event header takes 1 byte; db_len ranges from 0 to 192 bytes (3 * 64). While reading db_len from the event, the server was casting to uint instead of uchar, which led to a bad db_len. This is fixed by changing the cast type to uchar.
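Point 3 is a one-line cast bug; a minimal reproduction, assuming plain char is signed as on x86:

```cpp
#include <cstdio>

int main()
{
  char raw = (char) 192;                 // db_len byte as stored in the event header

  unsigned bad  = (unsigned) raw;        // sign-extends first: 4294967232
  unsigned good = (unsigned char) raw;   // reads the byte as intended: 192

  std::printf("bad=%u good=%u\n", bad, good);
  return 0;
}
```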
- 29 Nov, 2015 1 commit
Monty authored
This includes fixing all utilities to not have any memory leaks, as safemalloc warnings stopped tests from passing on MacOSX.
- Ensure that all clients take character-set-dir, as the libmysqlclient library will use it.
- mysql-test-run now passes character-set-dir to all external clients.
- Changed dynstr_free() so that it can be called twice (made freeing code easier).
- Changed rpl_global_gtid_slave_state to be allocated dynamically, as it includes a mutex that needs to be initialized/destroyed before my_end() is called.
- Removed rpl_slave_state::init() and rpl_slave_state::deinit(), as their job is better handled by the constructor and destructor.
- Print the alias instead of table_name in check_duplicate_key, as table_name may have been converted to lower case.

Other things:
- Fixed a case in time_to_datetime_with_warn() where we were using && instead of & in tests.