- 26 Mar, 2020 3 commits
-
-
Daniel Black authored
Otherwise fall back to libcurl3
-
Marko Mäkelä authored
page_cur_insert_rec_low(): Check the array bounds before comparing. We used to read one byte beyond the end of the 'rec' payload. The incorrect logic was originally introduced in commit 7ae21b18 and modified in commit 138cbec5.
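A minimal sketch of the bounds-check-before-compare pattern this fix describes; the function name and signature are illustrative, not the actual InnoDB code:

  #include <cstddef>

  /* Return the length of the common prefix of two byte buffers without
     ever reading past the end of either one: the index is checked against
     both lengths before the bytes are dereferenced. */
  static size_t common_prefix(const unsigned char *a, size_t a_len,
                              const unsigned char *b, size_t b_len)
  {
    size_t n= 0;
    const size_t limit= a_len < b_len ? a_len : b_len;
    while (n < limit && a[n] == b[n])   /* bounds checked before reading a[n], b[n] */
      n++;
    return n;
  }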
-
Alexander Barkov authored
main.mysql_upgrade_noengine did not do "FLUSH PRIVILEGES" after restoring the original backed-up global_priv table, so subsequent tests could fail due to missing privileges. Added the FLUSH PRIVILEGES statement.
-
- 25 Mar, 2020 8 commits
-
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
Fix clang-cl build
-
Monty authored
-
Monty authored
The cause was an uninitialized variable on the slave when reading a dummy event that can only be generated by the test. Fixed by ensuring that flag2 is always initialized. Also fixed some indentation issues and improved comments.
-
Kentoku SHIBA authored
-
Vladislav Vaintroub authored
by suppressing Unix-only system checks
-
Marko Mäkelä authored
The test main.mysqltest could crash or hang with cmake -DWITH_ASAN=ON builds. The reason appears to be a memory leak, which was found by manually invoking

  echo --replace_regex a > file
  ASAN_OPTIONS=log_path=/dev/shm/asan mysqltest ... < file

and then examining the /dev/shm/asan.* file.
-
Marko Mäkelä authored
commit 121a5e8d revised the function buf_pool_watch_unset() in such a way that the debug field buf_page_t::in_page_hash is no longer protected by buf_pool.mutex and thus not safe to access by the debug assertion in buf_pool_watch_set(). For now, let us revert the change to buf_pool_watch_unset() and have it acquire the buf_pool.mutex for a longer time.
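A minimal sketch of the locking rule this revert restores, under assumed names (pool_sketch, in_page_hash are illustrative only): a debug field that an assertion reads while holding the pool mutex must also be modified only while holding that same mutex.

  #include <cassert>
  #include <mutex>

  struct pool_sketch
  {
    std::mutex mutex;
    bool in_page_hash= false;       /* stand-in for the debug-only field */

    void watch_set()
    {
      std::lock_guard<std::mutex> lock(mutex);
      assert(!in_page_hash);        /* safe: the reader holds the mutex */
      in_page_hash= true;
    }

    void watch_unset()
    {
      std::lock_guard<std::mutex> lock(mutex);  /* held for the whole update again */
      in_page_hash= false;
    }
  };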
-
- 24 Mar, 2020 27 commits
-
-
Alexander Barkov authored
MDEV-22030 Don't grant REPLICATION MASTER ADMIN automatically on upgrade from an older JSON user table
-
Alexander Barkov authored
Adding a test to check that having a user with the REPLICATION SLAVE privilege is enough to run replication. Test made by Serg.
-
Monty authored
-
Monty authored
-
Monty authored
-
Monty authored
This was done both to simplify the code and to make it easier to handle storage engines that are clustered on some index other than the primary key. As pk_is_clustering_key() and is_clustering_key() now use only index_flags, their overrides were removed from all storage engines.
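A hedged sketch of what "using only index_flags" can look like; the flag value and names below are assumptions for illustration, not the real server constants:

  #include <cstdint>

  static const uint32_t HA_CLUSTERED_INDEX_SKETCH= 1U << 0;

  struct handler_sketch
  {
    virtual uint32_t index_flags(uint32_t idx) const= 0;
    virtual ~handler_sketch() {}

    /* Derived from the per-index flags only; no per-engine override needed */
    bool is_clustering_key(uint32_t idx) const
    { return index_flags(idx) & HA_CLUSTERED_INDEX_SKETCH; }

    bool pk_is_clustering_key(uint32_t pk_idx) const
    { return is_clustering_key(pk_idx); }
  };

  /* Example engine that reports its first index (its primary key) as clustered */
  struct demo_engine : handler_sketch
  {
    uint32_t index_flags(uint32_t idx) const override
    { return idx == 0 ? HA_CLUSTERED_INDEX_SKETCH : 0; }
  };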
-
Monty authored
Changes:
- Initialize Aria early to allow it to load the mysql.plugin table with --help
- Don't print 'aborting' when doing --help
- Don't write 'loose' error messages if log_warnings < 2 (2 is the default)
- Don't write warnings about disabled plugins when doing --help
- Don't write aria_log_control or Aria log files when doing --help
- When using --help, open all Aria tables in read-only mode (safety)
- If aria_init() fails, do a cleanup() (frees used memory)
- If aria_log_control is locked with --help, don't wait 30 seconds; instead return at once without initializing the Aria plugin
-
Michael Widenius authored
-
Monty authored
-
Monty authored
-
Monty authored
MDEV-21604 Added a "virtual" low-level write function, encrypt_or_write, that is set to point to either the normal or the encrypted write function. This patch also fixes a possible memory leak if writing to the binary log fails.
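A minimal sketch of the "virtual low level write" idea under assumed names (log_writer_sketch and its methods are not the actual binlog code): the write function pointer is chosen once, so the hot path has no per-call encryption branch.

  #include <cstddef>
  #include <vector>

  class log_writer_sketch
  {
  public:
    explicit log_writer_sketch(bool encrypt)
      : encrypt_or_write(encrypt ? &log_writer_sketch::write_encrypted
                                 : &log_writer_sketch::write_plain) {}

    bool write(const unsigned char *data, size_t len)
    { return (this->*encrypt_or_write)(data, len); }  /* single indirect call */

  private:
    bool (log_writer_sketch::*encrypt_or_write)(const unsigned char *, size_t);
    std::vector<unsigned char> buffer;

    bool write_plain(const unsigned char *data, size_t len)
    {
      buffer.insert(buffer.end(), data, data + len);
      return true;
    }

    bool write_encrypted(const unsigned char *data, size_t len)
    {
      /* placeholder "encryption"; the real code would go through the cipher */
      for (size_t i= 0; i < len; i++)
        buffer.push_back((unsigned char) (data[i] ^ 0x5a));
      return true;
    }
  };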
-
Monty authored
MDEV-21605 Clean up and speed up interfaces for binary row logging
MDEV-21617 Bug fix for previous version of this code

The intention is to have as few 'if's as possible in ha_write() and related functions. This is done by pre-calculating, once per statement, the row_logging state for all tables (sketched in the example after this list). Benefits are simpler and faster code both when binary logging is disabled and when it's enabled.

Changes:
- Added handler->row_logging to make it easy to check if a table should be row logged. This also made it easier to disable row logging for system, internal and temporary tables.
- The tables' row_logging capabilities are checked once per statement that updates tables, in THD::binlog_prepare_for_row_logging(), which is called when needed from THD::decide_logging_format().
- Removed most usage of tmp_disable_binlog(), reenable_binlog() and temporary saving and setting of thd->variables.option_bits.
- Moved checks that can't change during a statement from check_table_binlog_row_based() to check_table_binlog_row_based_internal().
- Removed the flag row_already_logged (used by the sequence engine).
- Moved binlog_log_row() into the handler class.
- Moved write_locked_table_maps() to THD::binlog_write_table_maps(), as most other related binlog functions are in THD.
- Removed binlog_write_table_map() and binlog_log_row_internal(), as they are now obsolete: 'has_transactions()' is pre-calculated in prepare_for_row_logging().
- Removed the 'is_transactional' argument from binlog_write_table_map(), as this can now be read from the handler.
- Changed the order of 'if's in handler::external_lock() and wsrep_mysqld.h to evaluate fast and likely cases before more complex ones.
- Added error checking in ha_write_row() and related functions for the case that binlog_log_row() fails.
- Don't clear check_table_binlog_row_based_result in clear_cached_table_binlog_row_based_flag(), as it's not needed.
- THD::clear_binlog_table_maps() has been replaced with THD::reset_binlog_for_next_statement().
- Added the 'MYSQL_OPEN_IGNORE_LOGGING_FORMAT' flag to open_and_lock_tables() to avoid calculating the binary log format for internal opens. This flag is also used to avoid reading statistics tables for internal tables.
- Added OPTION_BINLOG_LOG_OFF as a simple way to turn off the binlog temporarily for CREATE (instead of using THD::sql_log_bin_off).
- Removed the flag THD::sql_log_bin_off (not needed anymore).
- Sped up THD::decide_logging_format() by remembering whether the blackhole engine is used and avoiding a loop over all tables if it's not used (the common case).
- THD::decide_logging_format() is not called anymore if no tables are used for the statement. This speeds up pure stored procedure code by about 5% according to some simple tests.
- We now get annotated events on the slave if a CREATE ... SELECT statement is transformed on the slave from statement to row logging.
- In the original code, the master could come into a state where row logging was enforced for all future events even if statement logging could be used. This is now partly fixed.

Other changes:
- Ensure that all tables used by a statement have query_id set.
- Had to restore the row_logging flag for unused tables in THD::binlog_write_table_maps (not a normal scenario).
- Removed injector::transaction::use_table(server_id_type sid, table tbl) as it's not used.
- Cleaned up set_slave_thread_options().
- Some more DBUG_ENTER/DBUG_RETURN, code comments and minor indentation changes.
- Ensure we only call THD::decide_logging_format_low() once in mysql_insert() (fixes an inefficiency).
- Don't annotate INSERT DELAYED.
- Removed zeroing of pos_in_table_list in THD::open_temporary_table() as it's already 0.
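A hedged sketch of the pre-calculation referenced above; the types and field names are illustrative, not the server's:

  #include <vector>

  struct table_sketch
  {
    bool is_temporary= false;
    bool is_system= false;
    bool row_logging= false;        /* cached decision, set once per statement */
  };

  /* Called once per statement that updates tables */
  static void prepare_row_logging_sketch(std::vector<table_sketch> &tables,
                                         bool binlog_enabled, bool row_format)
  {
    for (table_sketch &t : tables)
      t.row_logging= binlog_enabled && row_format &&
                     !t.is_temporary && !t.is_system;
  }

  static bool write_row_sketch(table_sketch &t)
  {
    /* ... store the row in the engine ... */
    if (t.row_logging)              /* single cheap test on the hot path */
    {
      /* ... write the row image to the binary log and check for errors ... */
    }
    return true;
  }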
-
Monty authored
-
Monty authored
MDEV-21606 Improve update handler (long unique keys on blobs)
MDEV-21470 MyISAM and Aria start_bulk_insert doesn't work with long unique
MDEV-21606 Bug fix for previous version of this code
MDEV-21819 2 Assertion `inited == NONE || update_handler != this'

- Move update_handler from TABLE to handler
- Move initialization of the update handler out of ha_write_row() into prepare_for_insert() (see the sketch below)
- Fixed that INSERT DELAYED works with the update handler
- Give an error if using a long unique with an autoincrement column
- Added a handler function to check if a table has long unique hash indexes
- Disable the write cache in MyISAM and Aria when using the update handler; if the cache were used, the row would not be inserted until the end of the statement and the update handler would not find conflicting rows
- Removed an unused handler argument from check_duplicate_long_entries_update()
- Syntax cleanups
- Indentation fixes
- Don't use single-character identifiers for arguments
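A hedged sketch of the initialization move (names are illustrative, not the real handler code): the clone used for long-unique duplicate checks is created once in prepare_for_insert(), so ha_write_row() no longer has to set it up lazily.

  #include <memory>

  struct update_handler_owner_sketch
  {
    std::unique_ptr<update_handler_owner_sketch> update_handler; /* owned by the handler now */
    bool has_long_unique_hash= false;

    void prepare_for_insert()
    {
      if (has_long_unique_hash && !update_handler)
        update_handler.reset(new update_handler_owner_sketch()); /* one-time setup */
    }

    bool ha_write_row()
    {
      /* no lazy initialization here any more; just use the prepared clone
         (if any) to look for conflicting rows on long unique keys */
      return true;
    }
  };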
-
Monty authored
- Only indentation changes in sql_rename.cc
- Ignore some WSREP error messages when there isn't an internet connection
- Force restart of stat_tables_part.test to make the result stable
- Fixed compiler warnings in CONNECT
-
Monty authored
MDEV-19964 S3 replication support

Added new configure options:
- s3_slave_ignore_updates: "If the slave shares the same S3 storage as the master"
- s3_replicate_alter_as_create_select: "When converting an S3 table to a local table, log all rows in the binary log"

This allows one to configure slaves to have their S3 storage either shared with or independent from the master.

Other change: Added a new session variable, '@@sql_if_exists', to force IF EXISTS on DDLs.
-
Monty authored
-
Sergey Vojtovich authored
- Rename the PFS-specific rebind_psi() to a generic rebind()
- Call rebind() independently of the PFS compilation status
- Allow rebind() to return an error (see the sketch below)
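A hedged sketch of the adjusted interface (signatures are assumptions, not the actual server code): rebind() is generic, is called regardless of PFS support, and can report failure.

  struct rebind_sketch
  {
    virtual ~rebind_sketch() {}
    virtual int rebind()            /* 0 on success, non-zero on error */
    { return 0; }
  };

  static int reopen_sketch(rebind_sketch &h)
  {
    if (int err= h.rebind())        /* callers must now check the result */
      return err;
    return 0;
  }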
-
Monty authored
-
Marko Mäkelä authored
-
mkaruza authored
MDEV-21988: Assertion failure mysqld: bool trans_commit_stmt(THD*): Assertion `thd->in_active_multi_stmt_transaction() || thd->m_transaction_psi == __null' failed. (#1476)

Temporarily set `SERVER_STATUS_IN_TRANS` so the assertion in `trans_commit_stmt()` is not triggered.
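A minimal sketch of "set the status bit temporarily" under assumed names (the flag value and thd_sketch are illustrative, not the real THD): the previous status is saved and restored around the call.

  #include <cstdint>

  static const uint32_t SERVER_STATUS_IN_TRANS_SKETCH= 1U << 0;

  struct thd_sketch { uint32_t server_status= 0; };

  struct scoped_in_trans
  {
    thd_sketch &thd;
    uint32_t saved;
    explicit scoped_in_trans(thd_sketch &t) : thd(t), saved(t.server_status)
    { thd.server_status|= SERVER_STATUS_IN_TRANS_SKETCH; }
    ~scoped_in_trans() { thd.server_status= saved; }  /* restore the original bits */
  };

  static void commit_stmt_sketch(thd_sketch &thd)
  {
    scoped_in_trans guard(thd);     /* the assertion inside trans_commit_stmt()
                                       would now see the "in transaction" bit */
    /* ... trans_commit_stmt(thd) ... */
  }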
-
Marko Mäkelä authored
-
Marko Mäkelä authored
The test failed to specify default-character-set when invoking the client. The compile-time default parameters of the client could be overridden by configuration files in /etc/mysql. Let us explicitly specify --default-character-set.
-
Sergei Golubchik authored
-
Rasmus Johansson authored
main.mysqlhotcopy_myisam cannot find mysqlhotcopy tool
wsrep scripts are not executable in CMAKE_CURRENT_BINARY_DIR
-
Sergei Golubchik authored
* generate and install mysql_config
* symlink mariadb_config (from C/C) to mariadb-config

also:
* .gitignore generated mariadb-config.1
* remove obsolete compiler flag from C/C
-
Sergei Golubchik authored
This reverts commit 5d1b8f41, because since 306e439c manpages use troff aliases instead of symlinks and therefore should not be symlinked.
-
- 23 Mar, 2020 2 commits
-
-
Otto Kekäläinen authored
Drop excess jobs while still making sure there is good coverage of all test suites and of the gcc and clang versions. Also introduce testing on the arm64 and ppc64le architectures.
-
Otto Kekäläinen authored
- Properly define build dependencies via addons/homebrew, but still keep the secondary Homebrew run until the OS X builds fully work.
- Remove references to OS X bugs that are already closed.
- As long as the OS X build does not work, it is enough to attempt just one such job; running many in parallel only wastes resources and delays the whole job from finishing quickly.
-