- 13 Dec, 2017 3 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
they store history, and the history itself does not have history
-
Sergei Golubchik authored
-
- 11 Dec, 2017 1 commit
-
-
Aleksey Midenkov authored
Merge branch '10.3' into trunk
-
- 10 Dec, 2017 2 commits
-
-
Aleksey Midenkov authored
-
Varun Gupta authored
commit 6d63a034 MDEV-11297: Add support for LIMIT clause in GROUP_CONCAT()
-
- 09 Dec, 2017 1 commit
-
-
Vesa Pentti authored
* Note: breaking change; since this commit, a plugin that has worked so far might get rejected due to plugin maturity
* mariabackup is not affected (allows all plugins)
* VERSION file defines SERVER_MATURITY, which defines the corresponding numeric value as SERVER_MATURITY_LEVEL in include/mysql_version.h
* The default value for 'plugin_maturity' is SERVER_MATURITY_LEVEL - 1
* Logs a warning if a plugin has maturity lower than SERVER_MATURITY_LEVEL
* Tests suppress the plugin maturity warning
* Tests use --plugin-maturity=unknown by default so as not to fail due to the stricter plugin maturity handling
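For illustration, a minimal standalone sketch of the maturity gate described above; the enum values, the check_plugin_maturity() helper, and its messages are hypothetical stand-ins rather than the server's actual option handling:

  #include <cstdio>

  enum maturity_level
  {
    MATURITY_UNKNOWN= 0,
    MATURITY_EXPERIMENTAL,
    MATURITY_ALPHA,
    MATURITY_BETA,
    MATURITY_GAMMA,
    MATURITY_STABLE
  };

  /* Assumed to be generated into include/mysql_version.h from the VERSION file. */
  static const maturity_level SERVER_MATURITY_LEVEL= MATURITY_STABLE;

  /* Default for --plugin-maturity: one level below the server's own maturity. */
  static maturity_level plugin_maturity=
    static_cast<maturity_level>(SERVER_MATURITY_LEVEL - 1);

  /* Reject plugins below --plugin-maturity; warn when below the server's level. */
  static bool check_plugin_maturity(const char *name, maturity_level level)
  {
    if (level < plugin_maturity)
    {
      std::fprintf(stderr, "ERROR: plugin %s rejected: maturity too low\n", name);
      return false;
    }
    if (level < SERVER_MATURITY_LEVEL)
      std::fprintf(stderr, "Warning: plugin %s is less mature than the server\n", name);
    return true;
  }

  int main()
  {
    return check_plugin_maturity("example_beta_plugin", MATURITY_BETA) ? 0 : 1;
  }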
-
- 08 Dec, 2017 22 commits
-
-
Eugene Kosov authored
-
Aleksey Midenkov authored
Renamed to SELECT_LEX::vers_setup_conds(). Moved optimized fields check to JOIN::vers_check_items().
-
Sergey Vojtovich authored
-
Sergey Vojtovich authored
Relax memory barrier for lock_word.

rw_lock_lock_word_decr() - used to acquire the rw-lock, thus we only need to issue ACQUIRE when we succeed in locking.
rw_lock_x_lock_func_nowait() - same as above, but used to attempt to acquire the X-lock.
rw_lock_s_unlock_func() - used to release an S-lock; RELEASE is what we need here.
rw_lock_x_unlock_func() - used to release the X-lock. Ideally we'd need only RELEASE here, but due to the mess with waiters (they must be loaded after lock_word is stored) we have to issue both ACQUIRE and RELEASE.
rw_lock_sx_unlock_func() - same as above, but used to release an SX-lock.
rw_lock_s_lock_spin(), rw_lock_x_lock_func(), rw_lock_sx_lock_func() - the fetch-and-store to waiters has to issue only an ACQUIRE memory barrier, so that waiters is stored before lock_word is loaded.

Note that there is a violation of the RELEASE-ACQUIRE protocol here, because on lock we do:

  my_atomic_fas32_explicit((int32*) &lock->waiters, 1, MY_MEMORY_ORDER_ACQUIRE);
  my_atomic_load32_explicit(&lock->lock_word, MY_MEMORY_ORDER_RELAXED);

and on unlock:

  my_atomic_add32_explicit(&lock->lock_word, X_LOCK_DECR, MY_MEMORY_ORDER_ACQ_REL);
  my_atomic_load32_explicit((int32*) &lock->waiters, MY_MEMORY_ORDER_RELAXED);

That is, we kind of synchronize ACQUIRE on lock_word with ACQUIRE on waiters. This was there before this patch. A simple fix may have a negative performance impact; a proper fix requires refactoring of lock_word.
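For readers less familiar with the ordering rules above, here is a minimal sketch of the acquire-on-success / release-on-unlock idea. It uses std::atomic instead of the my_atomic wrappers and reduces the lock to a single S-lock counter, so it is an illustration under those assumptions, not the InnoDB code:

  #include <atomic>
  #include <cstdint>

  static std::atomic<int32_t> lock_word{1};  /* 1 = unlocked, 0 = S-locked (simplified) */

  static bool s_lock_try()
  {
    int32_t expected= lock_word.load(std::memory_order_relaxed);
    while (expected > 0)
    {
      /* Success must synchronize with the previous unlock: ACQUIRE.
         A failed attempt publishes nothing, so it can stay RELAXED. */
      if (lock_word.compare_exchange_weak(expected, expected - 1,
                                          std::memory_order_acquire,
                                          std::memory_order_relaxed))
        return true;
    }
    return false;
  }

  static void s_unlock()
  {
    /* Writes made under the lock must become visible to the next acquirer. */
    lock_word.fetch_add(1, std::memory_order_release);
  }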
-
Sergey Vojtovich authored
Relax memory barrier for waiters: these two stores must complete before os_event_set() finishes. This is guaranteed by the RELEASE barrier issued by the mutex.exit() inside os_event_set().
-
Sergey Vojtovich authored
Remove the volatile modifier from waiters: volatile is not intended for inter-thread communication; use appropriate atomic operations instead. Changed waiters to int32_t, a my_atomic-friendly type.
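A small before/after illustration of this change, assuming a simplified rw_lock struct (std::atomic stands in here for the my_atomic wrappers used in the actual code):

  #include <atomic>
  #include <cstdint>

  struct rw_lock_sketch
  {
    /* before: volatile ulint waiters;  -- volatile neither orders nor
       atomically publishes stores between threads */
    std::atomic<int32_t> waiters{0};    /* after: atomic-friendly 32-bit type */
  };

  static void mark_waiting(rw_lock_sketch &lock)
  {
    /* analogous in spirit to my_atomic_fas32_explicit(&waiters, 1, ACQUIRE) */
    lock.waiters.exchange(1, std::memory_order_acquire);
  }

  static bool has_waiters(const rw_lock_sketch &lock)
  {
    return lock.waiters.load(std::memory_order_relaxed) != 0;
  }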
-
Sergey Vojtovich authored
Remove the volatile modifier from lock_word: volatile is not intended for inter-thread communication; use appropriate atomic operations instead.
-
Sergey Vojtovich authored
Change lock_word from lint to int32_t: the latter is a my_atomic_*-friendly type.
-
Aleksey Midenkov authored
Related to #365 bug 3.
-
Aleksey Midenkov authored
Tests affected (forced mode): main.range, main.range_mrr_icp
-
Eugene Kosov authored
-
Eugene Kosov authored
-
Aleksey Midenkov authored
-
Sergei Golubchik authored
Related to VIEW fix #4 (8e12edbcc74685175d20729958c5f6a5d09e4f9c)
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
because the statement is TRUNCATE, not DELETE
-
Sergei Golubchik authored
* again, as in 10.2, NOW is a keyword only if followed by parentheses
* use AS OF CURRENT_TIMESTAMP or AS OF NOW()
* AS OF CURRENT_TIMESTAMP and AS OF NOW() mean AS OF NOW(6), not AS OF NOW(0) (same behavior as in a DEFAULT clause)
-
Monty authored
-
Monty authored
This is where Codership's official RPMs put them
-
Monty authored
- Remove unused thd_rpl_is_parallel()
- Remove unused mysql_notify_thread_having_shared_lock()
- Remove unneeded LOCK_thread_count from MYSQL_BIN_LOG::reset_logs()
- LOCK_thread_count does not protect against rollback, so this code and comment are not needed
- Remove mutex locks in slave.cc that are not needed. Added THD::assert_not_linked() to ensure that it was safe to remove them
- Fixed non-repeatable test load_data_stmt_view
- Updated binlog_killed to test the removal of the mutex (thanks to Andrei Elkin for the test)
- More code comments
-
Varun Gupta authored
-
- 07 Dec, 2017 5 commits
-
-
Marko Mäkelä authored
When logging ROW_T_INSERT or ROW_T_UPDATE records, we did not normalize the DB_TRX_ID of the current transaction into 0 if the current transaction had started (modifying other tables) before the ALTER TABLE started. MDEV-13654 introduced this normalization for ROW_T_DELETE and for all operations with ADD PRIMARY KEY, in row_log_table_get_pk().
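The normalization itself is mechanically simple; a hedged sketch with hypothetical names (the real logic lives in the row_log code) might look like:

  #include <cstdint>

  typedef uint64_t trx_id_t;

  /* If the row was last modified by the transaction doing the DML itself,
     log its DB_TRX_ID as 0; otherwise keep the id as-is. */
  static trx_id_t row_log_normalize_trx_id(trx_id_t row_trx_id, trx_id_t current_trx_id)
  {
    return row_trx_id == current_trx_id ? 0 : row_trx_id;
  }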
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
dict_stats_process_entry_from_defrag_pool(): Release the mutex
-
- 06 Dec, 2017 6 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
Follow-up fix to MDEV-13201 "Assertion `srv_undo_sources || ...` failed on shutdown during DDL operation".

Introduce the debug flag trx_t::persistent_stats to suppress the assertion for the updates of persistent statistics during fast shutdown.

dict_stats_exec_sql(): Do execute the statement even though shutdown has been initiated.
-
Marko Mäkelä authored
dict_stats_exec_sql(): Expect the caller to always provide a transaction. Remove some redundant assertions. The caller must hold dict_sys->mutex, but holding dict_operation_lock is only necessary for accessing data dictionary tables, which we are not accessing.

dict_stats_save_index_stat(): Acquire dict_sys->mutex for invoking dict_stats_exec_sql().

dict_stats_save(), dict_stats_update_for_index(), dict_stats_update(), dict_stats_drop_index(), dict_stats_delete_from_table_stats(), dict_stats_delete_from_index_stats(), dict_stats_drop_table(), dict_stats_rename_in_table_stats(), dict_stats_rename_in_index_stats(), dict_stats_rename_table(): Use a single caller-provided transaction that is started and committed or rolled back by the caller.

dict_stats_process_entry_from_recalc_pool(): Let the caller provide a transaction object.

ha_innobase::open(): Pass a transaction to dict_stats_init().

ha_innobase::create(), ha_innobase::discard_or_import_tablespace(): Pass a transaction to dict_stats_update().

ha_innobase::rename_table(): Pass a transaction to dict_stats_rename_table(). We do not use the same transaction as the one that updated the data dictionary tables, because we already released the dict_operation_lock. (FIXME: there is a race condition; a lock wait on SYS_* tables could occur in another DDL transaction until the data dictionary transaction is committed.)

ha_innobase::info_low(): Pass a transaction to dict_stats_update() when calculating persistent statistics.

alter_stats_norebuild(), alter_stats_rebuild(): Update the persistent statistics as well. In this way, a single transaction will be used for updating the statistics of a whole table, even for partitioned tables.

ha_innobase::commit_inplace_alter_table(): Drop statistics for all partitions when adding or dropping virtual columns, so that the statistics will be recalculated on the next handler::open(). This is a refactored version of the Oracle Bug#22469660 fix.

RecLock::add_to_waitq(), lock_table_enqueue_waiting(): Do not allow a lock wait to occur for updating statistics in a data dictionary transaction, such as DROP TABLE. Instead, return the previously unused error code DB_QUE_THR_SUSPENDED.

row_merge_lock_table(), row_mysql_lock_table(): Remove dead code for handling DB_QUE_THR_SUSPENDED.

row_drop_table_for_mysql(), row_truncate_table_for_mysql(): Drop the statistics as part of the data dictionary transaction. After TRUNCATE TABLE, the statistics will be recalculated on subsequent ha_innobase::open(), similar to how the logic after the above-mentioned Oracle Bug#22469660 fix in ha_innobase::commit_inplace_alter_table() works.

btr_defragment_thread(): Use a single transaction object for updating defragmentation statistics.

dict_stats_save_defrag_stats(), dict_stats_process_entry_from_defrag_pool(), dict_defrag_process_entries_from_defrag_pool(), dict_stats_save_defrag_summary(): Add a parameter for the transaction.

dict_stats_empty_table(): Make public. This will be called by row_truncate_table_for_mysql() after dropping persistent statistics, to clear the memory-based statistics as well.
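As a hedged sketch of the caller-provided-transaction pattern this commit converges on (all names below are hypothetical stand-ins for the dict_stats_* functions, not the real signatures): the caller starts one transaction, threads it through every statistics helper, and commits or rolls back in exactly one place.

  struct trx_t { bool active= false; };      /* simplified stand-in transaction handle */

  static void trx_start(trx_t &trx)    { trx.active= true; }
  static void trx_commit(trx_t &trx)   { trx.active= false; }
  static void trx_rollback(trx_t &trx) { trx.active= false; }

  /* Callees no longer create their own transactions; they use the one passed in. */
  static void save_table_stats(trx_t &) { /* write row to mysql.innodb_table_stats */ }
  static void save_index_stats(trx_t &) { /* write rows to mysql.innodb_index_stats */ }

  static bool update_stats_for_table(bool simulate_error)
  {
    trx_t trx;
    trx_start(trx);                /* single transaction for the whole table        */
    save_table_stats(trx);         /* all statistics updates share it, even across  */
    save_index_stats(trx);         /* many indexes or partitions                     */
    if (simulate_error)            /* any failure rolls back the whole batch        */
    {
      trx_rollback(trx);
      return false;
    }
    trx_commit(trx);
    return true;
  }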
-
Sergei Petrunia authored
Part #2: Don't use the new code for the clustered PK; it is handled in a special way right above.
-
Sergei Petrunia authored
TABLE_SHARE::init_from_binary_frm_image() calls handler_file->index_flags() before it has set TABLE_SHARE::primary_key (it is 0 while it should be MAX_KEY in my example). This causes MyRocks to report wrong index flags (it thinks it's a PK while it is not), which causes invalid query plans later on.

Do the only thing that seems feasible: adjust field->part_of_key to have the correct value in ha_rocksdb::open().
-