- 15 Mar, 2018 3 commits
-
Galina Shalygina authored
the non-recursive CTE defined with UNION. The problem appeared because the columns of the non-recursive CTE weren't renamed: the renaming procedure was called for recursive CTEs only. To fix it, st_select_lex_unit::prepare now calls With_element::rename_columns_of_derived_unit for both recursive and non-recursive CTEs.
-
Thirunarayanan Balathandayuthapani authored
- Workaround for MDEV-13942: Drop spatial index to avoid a possible hang
-
Jan Lindström authored
MDEV-15540 fix for MTR tests using wsrep_recover.inc
-
- 14 Mar, 2018 12 commits
-
Sergei Golubchik authored
-
Sergei Golubchik authored
add the test case (the bug was fixed in d390e501)
-
Sergei Golubchik authored
Refactor get_datetime_value() so that it does not create Item_cache_temporal() at execution time; instead, always do it in ::fix_fields() or ::fix_length_and_dec(). Creating items at execution time doesn't work well with virtual columns and check constraints, which are fixed and executed in different THDs.
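The idea can be sketched outside the server code. The types below are invented stand-ins for Item_cache_temporal and the item that owns it; they show a cache allocated once at fix/prepare time and merely reused at execution time:

```cpp
#include <cassert>
#include <memory>

// Hypothetical stand-in for Item_cache_temporal.
struct Cache { long long value = 0; };

// Hypothetical owner: allocates its cache during fix, never during exec.
struct Comparator {
  std::unique_ptr<Cache> cache;

  // Modelled on ::fix_fields() / ::fix_length_and_dec(): create the
  // cache exactly once here, in the THD that prepares the item tree.
  void fix() {
    if (!cache)
      cache = std::make_unique<Cache>();
  }

  // Execution only reuses the already-created cache; it never allocates,
  // so the item tree stays stable across executing THDs.
  Cache* exec(long long v) {
    assert(cache && "cache must be created at fix time, not execution");
    cache->value = v;
    return cache.get();
  }
};
```

Repeated executions return the same cache object, which is the property the refactoring relies on.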
-
Sergei Golubchik authored
while it should look at the actual field_type() and use get_date() or get_time() as appropriate. The test case is in the following commit.
-
Sergei Golubchik authored
-
Sergei Golubchik authored
Do not assume that it's always item->field_type() - this is not the case in temporal comparisons (e.g. when comparing a DATETIME column with a TIME literal).
-
Sergei Golubchik authored
It's a generic function that doesn't use anything from Arg_comparator. Make it a static function, not a class method, so that it can later be used without Arg_comparator.
-
Sergei Golubchik authored
-
Sergei Golubchik authored
Will be used in the following commits.
-
Sergei Golubchik authored
-
Sergei Golubchik authored
Reorder items in the args[] array: instead of when1,then1,when2,then2,...[,case][,else], sort them as [case,]when1,when2,...,then1,then2,...[,else]. This way all items used for comparison take a contiguous part of the array and can be aggregated directly, and likewise all items that can be returned. The old code had to copy them to a temporary array before aggregation, and then copy back (thd->change_item_tree) everything that was changed.
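The reordering can be sketched with plain strings standing in for Item pointers (the function name and container are invented for the illustration):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Input layout:  when1,then1,when2,then2,...[,case][,else]
// Output layout: [case,]when1,when2,...,then1,then2,...[,else]
// so that the comparison items and the result items each occupy a
// contiguous slice of the array.
std::vector<std::string>
reorder_case_args(const std::vector<std::string>& args,
                  bool has_case, bool has_else)
{
  std::size_t tail = (has_case ? 1 : 0) + (has_else ? 1 : 0);
  std::size_t npairs = (args.size() - tail) / 2;

  std::vector<std::string> out;
  out.reserve(args.size());
  if (has_case)
    out.push_back(args[2 * npairs]);        // CASE operand, if present
  for (std::size_t i = 0; i < npairs; i++)  // all WHENs (comparison part)
    out.push_back(args[2 * i]);
  for (std::size_t i = 0; i < npairs; i++)  // all THENs (result part)
    out.push_back(args[2 * i + 1]);
  if (has_else)
    out.push_back(args.back());             // ELSE, if present
  return out;
}
```

With both slices contiguous, type aggregation can run directly over a subrange of args[] instead of a temporary copy.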
-
Thirunarayanan Balathandayuthapani authored
- Fix the Windows failure of the unsupported_redo test case. The mariabackup --tables-exclude option only restricts .ibd files.
-
- 13 Mar, 2018 8 commits
-
Jacob Mathew authored
-
Jacob Mathew authored
-
Sergey Vojtovich authored
Elaborate shutdown message.
-
Andrei Elkin authored
out of order at retry. The test failures were of two sorts. One was that the number of retries of what the slave deemed a temporary error exceeded the default value of the slave retry option. The second was an out-of-order commit by transactions that were supposed to error out instead. Both issues have the same cause: the post-temporary-error retry did not check a possibly already existing error status. This is mended by refining the conditions to retry. Specifically, a retrying worker checks `rpl_parallel_entry::stop_on_error_sub_id`, which a failing predecessor may have set to its own sub id. If the member is set, the retrying follower errors out with ER_PRIOR_COMMIT_FAILED.
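A minimal model of the new retry condition, with every name except stop_on_error_sub_id invented for the sketch, might look like:

```cpp
#include <cassert>
#include <cstdint>
#include <limits>

// Hypothetical slice of rpl_parallel_entry: the max() sentinel means
// "no predecessor has failed yet".
struct ParallelEntry {
  uint64_t stop_on_error_sub_id = std::numeric_limits<uint64_t>::max();
};

// A worker may retry only if no transaction ordered before it has
// already failed; otherwise it must error out (modelled on
// ER_PRIOR_COMMIT_FAILED) instead of committing out of order.
bool may_retry(const ParallelEntry& e, uint64_t my_sub_id) {
  return my_sub_id <= e.stop_on_error_sub_id;
}
```

The check runs before the retry, so a follower of a failed transaction can no longer commit ahead of it.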
-
Thirunarayanan Balathandayuthapani authored
- buf_flush_LRU_list_batch() initializes the count to zero and updates it correctly.
-
Thirunarayanan Balathandayuthapani authored
Problem:
=======
Mariabackup exits during the prepare phase if it encounters an MLOG_INDEX_LOAD redo log record. The MLOG_INDEX_LOAD record informs Mariabackup that the backup cannot be completed based on the redo log scan, because some information was purposely omitted due to bulk index creation in ALTER TABLE.

Solution:
========
Detect the MLOG_INDEX_LOAD redo record during the backup phase and exit mariabackup with a proper error message.
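A sketch of the backup-phase check; the record codes and the scan loop are simplified stand-ins for the real redo log parser:

```cpp
#include <cassert>
#include <cstdio>
#include <vector>

// Simplified record types; only MLOG_INDEX_LOAD matters here.
enum RecType { MLOG_WRITE, MLOG_INDEX_LOAD, MLOG_CHECKPOINT };

// Returns false (backup must abort now, with a clear message) as soon
// as an MLOG_INDEX_LOAD record is seen, instead of letting the prepare
// phase fail much later.
bool scan_redo(const std::vector<RecType>& records) {
  for (RecType r : records) {
    if (r == MLOG_INDEX_LOAD) {
      std::fprintf(stderr,
          "mariabackup: bulk index creation (ALTER TABLE) detected; "
          "the backup would be inconsistent, please retry\n");
      return false;
    }
  }
  return true;
}
```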
-
Marko Mäkelä authored
-
Marko Mäkelä authored
buf_flush_page_cleaner_coordinator(): Signal the worker threads to exit while waiting for them to exit. Apparently, signals are sometimes lost, causing shutdown to occasionally hang when multiple page cleaners (and buffer pool instances) are used, that is, when innodb_buffer_pool_size is at least 1 GiB.

buf_flush_page_cleaner_close(): Merge with the only caller.
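The lost-wakeup pattern and the fix can be sketched with standard threads (all names below are invented): the coordinator re-signals on every timed-wait iteration instead of notifying once and blocking, so a lost signal only delays shutdown instead of hanging it.

```cpp
#include <cassert>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

std::mutex mtx;
std::condition_variable cv;
int workers_running = 0;
bool shutting_down = false;

// A page-cleaner-like worker: sleeps until told to shut down.
void worker() {
  std::unique_lock<std::mutex> lk(mtx);
  while (!shutting_down)
    cv.wait(lk);
  --workers_running;
  cv.notify_all();               // tell the coordinator we are gone
}

bool coordinated_shutdown(int nworkers) {
  {
    std::lock_guard<std::mutex> lk(mtx);
    workers_running = nworkers;
  }
  std::vector<std::thread> threads;
  for (int i = 0; i < nworkers; i++)
    threads.emplace_back(worker);

  std::unique_lock<std::mutex> lk(mtx);
  shutting_down = true;
  while (workers_running > 0) {
    cv.notify_all();             // re-signal on every iteration: a lost
                                 // wakeup is retried, not waited on forever
    cv.wait_for(lk, std::chrono::milliseconds(10));
  }
  lk.unlock();
  for (auto& t : threads)
    t.join();
  return true;
}
```

A single notify before the wait would be enough in a perfect world; the timed-wait loop makes the protocol robust when it is not.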
-
- 12 Mar, 2018 10 commits
-
Alexey Botchkov authored
in trans_xa_start. Test fixed.
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
In thread caching code, clear THD's warnings before reuse.
-
Oleksandr Byelkin authored
There is no current SELECT while assigning SP parameters; do not use it if current_select is empty.
-
Andrei Elkin authored
replicate_events_marked_for_skip=FILTER_ON_MASTER [Note: this is a cherry-pick from the 10.2 branch.] When events of a big transaction were binlogged at an offset over 2GB from the beginning of the log, the semisync master's dump thread lost such events: the dump thread skipped them because it computed their skipping status erroneously. The current fix makes sure the skipping status is computed correctly. The test verifies the fix by simulating the 2GB offset.
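The failure class can be illustrated with a signed 32-bit offset (the variable names are invented): past 2GB the value wraps negative, so any position comparison, such as a "skip events before this position" check, gives the wrong answer. A 64-bit offset stays correct:

```cpp
#include <cassert>
#include <cstdint>

// Broken variant: a binlog offset stored in a signed 32-bit value
// wraps negative once the log grows past 2GB (2^31 bytes), so the
// comparison below wrongly reports "skip" for valid events.
bool should_skip_event_32(int32_t event_pos, int32_t skip_until) {
  return event_pos < skip_until;
}

// Correct variant: a 64-bit offset compares correctly at any size.
bool should_skip_event_64(uint64_t event_pos, uint64_t skip_until) {
  return event_pos < skip_until;
}
```

The conversion of a >2GB value to int32_t relies on the usual two's-complement wrap, which is what makes the 32-bit comparison misfire in practice.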
-
Sergey Vojtovich authored
Based on contribution by Daniel Black.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
fts_sync(): If the dict_table_t::to_be_dropped flag is set, do not "goto begin_sync". Also, clean up how dict_index_t::index_fts_syncing is cleared. It looks like this regression was introduced by merging Oracle Bug #24938374 MYSQL CRASHED AFTER LONG WAIT ON DICT OPERATION LOCK WHILE SYNCING FTS INDEX https://github.com/mysql/mysql-server/commit/068f8261d4c1e134965383ff974ddf30c0758f51 from MySQL 5.6.38 into MariaDB 10.0.33, 10.1.29, 10.2.10. The same hang is present in MySQL 5.7.20.
-
Alexey Botchkov authored
in trans_xa_start. THD.transaction.xid_state.xid.rm_error should be cleared when the thread ends.
-
- 11 Mar, 2018 2 commits
-
Sergei Petrunia authored
Don't call handler->position() if the last call to read a row did not succeed.
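A minimal sketch of the rule, with the handler interface reduced to two calls: position() is only invoked when the preceding read actually returned a row.

```cpp
#include <cassert>

// Reduced stand-in for a storage engine handler.
struct Handler {
  bool position_called = false;
  // Returns 0 on success, a nonzero error code otherwise.
  int read_row(bool have_row) { return have_row ? 0 : 137; }
  // Only valid when the engine has a current row.
  void position() { position_called = true; }
};

void read_and_remember(Handler& h, bool have_row) {
  int err = h.read_row(have_row);
  if (err == 0)        // the fix: check the read result first
    h.position();      // safe: the engine positioned on a row
}
```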
-
sjaakola authored
The error log redirection for the wsrep_recover run does not work in the old version. For the wsrep_recovery run, error logging is supposed to go into mysql-test/suite/galera/include/galera_wsrep_recover.inc. In the old version this works only partially: the first 4 lines of error messages after mysqld startup do go into galera_wsrep_recover.log, but after that the default error log file is enforced and the remaining error logging goes there. This patch fixes the problem by passing the --log-error option at mysqld startup. The fix was tested with the galera_gcache_recover test, which is currently disabled. Note that the test does not pass even after this fix, as there are further issues in later test phases.
-
- 10 Mar, 2018 5 commits
-
Marko Mäkelä authored
fil_space_t::atomic_write_supported: Always set this flag for TEMPORARY TABLESPACE and during IMPORT TABLESPACE. The page writes during these operations are by definition not crash-safe because they are not written to the redo log.

fil_space_t::use_doublewrite(): Determine if doublewrite should be used.

buf_dblwr_update(): Add assertions, and let the caller check whether doublewrite buffering is desired.

buf_flush_write_block_low(): Disable the doublewrite buffer for the temporary tablespace and for IMPORT TABLESPACE.

fil_space_set_imported(), fil_node_open_file(), fil_space_create(): Initialize or revise the space->atomic_write_supported flag.

buf_page_io_complete(), buf_flush_write_complete(): Add the parameter dblwr, to indicate whether doublewrite was used for writes.

buf_dblwr_sync_datafiles(): Remove an unnecessary flush of persistent tablespaces when flushing temporary tablespaces. (Move the call to buf_dblwr_flush_buffered_writes().)
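The use_doublewrite() decision reduces to a small predicate. This sketch follows the field and function names from the message, with everything else simplified: doublewrite is skipped whenever the tablespace's page writes are not crash-safe anyway.

```cpp
#include <cassert>

// Simplified slice of fil_space_t.
struct FilSpace {
  // Set for TEMPORARY tablespaces and during IMPORT TABLESPACE,
  // whose page writes are not redo-logged and thus not crash-safe.
  bool atomic_write_supported;

  // Doublewrite buffering only pays off when it is globally enabled
  // and the writes actually need crash protection.
  bool use_doublewrite(bool srv_doublewrite_enabled) const {
    return srv_doublewrite_enabled && !atomic_write_supported;
  }
};
```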
-
Marko Mäkelä authored
buf_flush_init_for_writing(): Remove the parameter skip_checksum.
-
Marko Mäkelä authored
fsp_init_file_page_low(): Always initialize the page.
-
Marko Mäkelä authored
-
Jacob Mathew authored
The crash occurs when inserting into, updating or deleting from Spider system tables. These operations do not go through the normal insert, update or delete logic, so binary logging of the row is not properly set up and leads to the crash. The fix for this problem uses the same strategy as is used for the servers system table that contains entries for the servers created with CREATE SERVER. Binary logging is now temporarily disabled on insert, update and delete operations on Spider system tables. Author: Jacob Mathew. Reviewer: Kentoku Shiba.
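The temporary-disable strategy can be sketched as an RAII guard (all names here are invented for the illustration): binary logging is switched off for the scope of the system-table write and restored afterwards, even on early return or exception.

```cpp
#include <cassert>

// Simplified per-session option block.
struct SessionOptions {
  bool binlog_enabled = true;
};

// Scope guard: disable binary logging on construction, restore the
// previous state on destruction.
class DisableBinlogGuard {
  SessionOptions& opt_;
  bool saved_;
public:
  explicit DisableBinlogGuard(SessionOptions& opt)
      : opt_(opt), saved_(opt.binlog_enabled) {
    opt_.binlog_enabled = false;   // row changes below are not logged
  }
  ~DisableBinlogGuard() {
    opt_.binlog_enabled = saved_;  // restored no matter how we exit
  }
};
```

A write to a Spider system table would construct the guard at the top of the operation, so the disable can never leak past its scope.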
-