- 21 Jun, 2020 2 commits
-
-
Sergei Golubchik authored
increase spider maturity accordingly
-
Sergei Golubchik authored
-
- 19 Jun, 2020 27 commits
-
-
Elena Stepanova authored
-
Sergei Golubchik authored
-
Roman Nozdrin authored
Added binutils dependency.
-
Roman Nozdrin authored
Both RPM and DEB packages now conflict with previous versions of MCS. Trim .deb packaging. MCS now depends on Python; the Python version varies between distributions.
-
Sergei Golubchik authored
-
Roman Nozdrin authored
Updated MCS
-
Roman Nozdrin authored
-
Roman Nozdrin authored
Update MCS ref.
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Roman Nozdrin authored
-
Roman Nozdrin authored
-
Roman Nozdrin authored
-
Roman Nozdrin authored
-
Sergei Golubchik authored
-
Andrew Hutchings authored
-
Vladislav Vaintroub authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
include/maria.h is a common header included in half of the server; it should only contain definitions and declarations that are used outside of storage/maria. Internal definitions and declarations should be in maria_def.h. Also remove a few duplicate declarations.
-
Oleksandr Byelkin authored
-
Marko Mäkelä authored
Commit bf3c862f accidentally introduced two bugs.
btr_search_update_hash_ref(): Pass the correct parameter part->heap.
btr_search_sys_t::free(): Free all memory.
Thanks to Michael Widenius and Thirunarayanan Balathandayuthapani for pointing out these bugs.
-
Oleksandr Byelkin authored
Second attempt to fix the same bug: Use the same queue for all READ operations. Release queues for all used pages. This fixes a hang in the s3.alter2 test case.
-
Monty authored
When converting a table (test.s3_table) from S3 to another engine, the following will be logged to the binary log:
DROP TABLE IF EXISTS test.t1;
CREATE OR REPLACE TABLE test.t1 (...) ENGINE=new_engine
INSERT rows to test.t1 in binary-row-log-format
The bug is that the above statements are logged one by one to the binary log. This means that a fast slave, configured to use the same S3 storage as the master, would be able to execute the DROP and CREATE from the binary log before the master has finished the ALTER TABLE. In this case the slave would ignore the DROP (as it is on an S3 table), but it will stop on CREATE of the local table, as the table still exists in S3. The REPLACE part will be ignored by the slave as it cannot touch the S3 table. The fix is to ensure that all the above statements are written to the binary log AFTER the table has been deleted from S3.
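A minimal sketch of the corrected ordering follows; the names (binlog_sketch, convert_from_s3) are hypothetical and are not the server's API. It only illustrates the order: queue the statements, remove the S3 table, and write the binary log afterwards.

```cpp
// Hypothetical sketch only; none of these types or helpers are real server code.
#include <iostream>
#include <string>
#include <vector>

struct binlog_sketch
{
  std::vector<std::string> events;
  void queue(std::string stmt) { events.push_back(std::move(stmt)); }
  void flush()                                  // stand-in for the real binlog write
  {
    for (const auto &e : events)
      std::cout << e << '\n';
    events.clear();
  }
};

void convert_from_s3(binlog_sketch &binlog)     // hypothetical conversion routine
{
  binlog.queue("DROP TABLE IF EXISTS test.t1");
  binlog.queue("CREATE OR REPLACE TABLE test.t1 (...) ENGINE=new_engine");
  binlog.queue("/* row-format INSERTs for test.t1 */");

  // The S3 copy of the table would be deleted here, before flush(), so a fast
  // slave can never replay CREATE while the old table still exists in S3.
  binlog.flush();
}
```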
-
Monty authored
-
Monty authored
- Added missing test for binlog_filter to ALTER TABLE
-
Monty authored
- Rewrote bool Query_compressed_log_event::write() to make it more readable (no logic changes).
- Changed DBUG_PRINT of 'is_error:' to 'is_error():' to make it easier to find error: in traces.
- Ensure that 'db' is never null in Query_log_event (simplified code).
-
- 18 Jun, 2020 11 commits
-
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
-
Daniel Black authored
-
Marko Mäkelä authored
The rw_lock_s_lock() calls for the buf_pool.page_hash became a clear bottleneck after MDEV-15053 reduced the contention on buf_pool.mutex. We will replace that use of rw_lock_t with a special implementation that is optimized for memory bus traffic. The hash_table_locks instrumentation will be removed.
buf_pool_t::page_hash: Use a special implementation whose API is compatible with hash_table_t, and store the custom rw-locks directly in buf_pool.page_hash.array, intentionally sharing cache lines with the hash table pointers.
rw_lock: A low-level rw-lock implementation based on std::atomic<uint32_t> where read_trylock() becomes a simple fetch_add(1).
buf_pool_t::page_hash_latch: The special implementation of rw_lock for the page_hash.
buf_pool_t::page_hash_latch::read_lock(): Assert that buf_pool.mutex is not being held by the caller.
buf_pool_t::page_hash_latch::write_lock() may be called while not holding buf_pool.mutex. buf_pool_t::watch_set() is such a caller.
buf_pool_t::page_hash_latch::read_lock_wait(), page_hash_latch::write_lock_wait(): The spin loops. These will obey the global parameters innodb_sync_spin_loops and innodb_sync_spin_wait_delay.
buf_pool_t::freed_page_hash: A singly linked list of copies of buf_pool.page_hash that ever existed. The fact that we never free any buf_pool.page_hash.array guarantees that all page_hash_latch that ever existed will remain valid until shutdown.
buf_pool_t::resize_hash(): Replaces buf_pool_resize_hash(). Prepend a shallow copy of the old page_hash to freed_page_hash.
buf_pool_t::page_hash_table::n_cells: Declare as Atomic_relaxed.
buf_pool_t::page_hash_table::lock(): Explain what prevents a race condition with buf_pool_t::resize_hash().
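As a rough illustration of the fetch_add-based read_trylock() mentioned above, here is a minimal, self-contained sketch. It is not the server's rw_lock; it merely assumes a layout where the high bit of a std::atomic<uint32_t> marks a writer and the low bits count readers.

```cpp
// Minimal sketch of an atomic rw-lock where read_trylock() is one fetch_add(1).
#include <atomic>
#include <cstdint>

class rw_lock_sketch
{
  std::atomic<uint32_t> lock_word{0};
  static constexpr uint32_t WRITER = 1U << 31;  // high bit: exclusive holder
public:
  // Register a reader optimistically; back out if a writer holds the latch.
  bool read_trylock()
  {
    uint32_t l = lock_word.fetch_add(1, std::memory_order_acquire);
    if (l & WRITER)
    {
      lock_word.fetch_sub(1, std::memory_order_relaxed);
      return false;
    }
    return true;
  }
  void read_unlock() { lock_word.fetch_sub(1, std::memory_order_release); }

  // Acquire exclusively only when no reader or writer is present.
  bool write_trylock()
  {
    uint32_t expected = 0;
    return lock_word.compare_exchange_strong(expected, WRITER,
                                             std::memory_order_acquire,
                                             std::memory_order_relaxed);
  }
  void write_unlock() { lock_word.fetch_and(~WRITER, std::memory_order_release); }
};
```

The uncontended read path touches a single cache line once, which is the memory-bus-traffic optimization the commit message refers to.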
-
Marko Mäkelä authored
hash_get_n_cells(): Remove. Access n_cells directly.
hash_get_nth_cell(): Remove. Access array directly.
hash_table_clear(): Replaced with hash_table_t::clear().
hash_table_create(), hash_table_free(): Remove.
hash0hash.cc: Remove.
-
Marko Mäkelä authored
btr_search_sys::parts[]: A single structure for the partitions of the adaptive hash index. Replaces the 3 separate arrays: btr_search_latches[], btr_search_sys->hash_tables, btr_search_sys->hash_tables[i]->heap.
hash_table_t::heap, hash_table_t::adaptive: Remove.
ha0ha.cc: Remove. Move all code to btr0sea.cc.
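The consolidation can be pictured with a simplified sketch; the types below are assumptions that use standard-library stand-ins (std::shared_mutex, std::unordered_map) rather than the server's latch, hash_table_t and mem_heap_t, and only illustrate bundling each partition's pieces into one structure instead of three parallel arrays.

```cpp
// Schematic only; not the real btr_search_sys definitions.
#include <cstddef>
#include <cstdint>
#include <shared_mutex>
#include <unordered_map>

struct ahi_partition_sketch
{
  std::shared_mutex latch;                    // stand-in for the AHI rw-latch
  std::unordered_map<uint64_t, void*> table;  // stand-in for hash_table_t
  // a real partition would also own its mem_heap_t here
};

struct btr_search_sys_sketch
{
  static constexpr size_t N_PARTS = 8;        // the server sizes this by btr_ahi_parts
  // One self-contained partition per slot, replacing the three parallel arrays.
  ahi_partition_sketch parts[N_PARTS];
};
```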
-
Marko Mäkelä authored
HASH_TABLE_SYNC_MUTEX was kind-of used for the adaptive hash index, even though that hash table is already protected by btr_search_latches[]. HASH_TABLE_SYNC_RWLOCK was only being used for buf_pool.page_hash. It is cleaner to decouple that synchronization from hash_table_t and move it to the actual user.
buf_pool_t::page_hash_latches[]: Synchronization for buf_pool.page_hash.
LATCH_ID_HASH_TABLE_MUTEX: Remove.
hash_table_t::sync_obj, hash_table_t::n_sync_obj: Remove.
hash_table_t::type, hash_table_sync_t: Remove.
HASH_ASSERT_OWN(), hash_get_mutex(), hash_get_nth_mutex(): Remove.
ib_recreate(): Merge to the only caller, buf_pool_resize_hash().
ib_create(): Merge to the callers.
ha_clear(): Merge to the only caller, buf_pool_t::close().
buf_pool_t::create(): Merge the ib_create() and hash_create_sync_obj() invocations.
ha_insert_for_fold_func(): Clarify an assertion.
buf_pool_t::page_hash_lock(): Simplify the logic.
hash_assert_can_search(), hash_assert_can_modify(): Remove. These predicates were only being invoked for the adaptive hash index, while they are only effective for buf_pool.page_hash.
HASH_DELETE_AND_COMPACT(): Merge to ha_delete_hash_node().
hash_get_sync_obj_index(): Remove.
hash_table_t::heaps[], hash_get_nth_heap(): Remove. It was actually unused!
hash_get_heap(): Remove. It was only used in ha_delete_hash_node(), where we always use hash_table_t::heap.
hash_table_t::calc_hash(): Replaces hash_calc_hash().
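A toy sketch of this decoupling, under assumed names (toy_hash_table, toy_buf_pool) and standard-library stand-ins: the hash table itself carries no sync objects, while its user owns a latch array and derives the latch index from the fold, in the spirit of buf_pool_t::page_hash_lock() and hash_table_t::calc_hash().

```cpp
// Illustrative only; not the server's hash_table_t or buf_pool_t.
#include <cstddef>
#include <cstdint>
#include <shared_mutex>

struct toy_hash_table
{
  size_t n_cells;
  explicit toy_hash_table(size_t n) : n_cells(n) {}
  // calc_hash() as a member, in the spirit of hash_table_t::calc_hash().
  size_t calc_hash(uint64_t fold) const { return fold % n_cells; }
};

struct toy_buf_pool
{
  static constexpr size_t N_LATCHES = 16;          // must be a power of two here
  toy_hash_table page_hash{1021};
  std::shared_mutex page_hash_latches[N_LATCHES];  // synchronization lives in the user

  // Pick the latch protecting the cell for this fold, like page_hash_lock().
  std::shared_mutex &page_hash_latch_for(uint64_t fold)
  {
    return page_hash_latches[page_hash.calc_hash(fold) & (N_LATCHES - 1)];
  }
};
```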
-
Daniel Black authored
MRI scripts cannot handle + in paths, and Ubuntu CI makes use of these. So we remove the top-level build dir from the script and transform it into a relative-path script.
-
Daniel Black authored
Because of common dependencies between the static libraries, the list can contain duplicates. We reduce these down to the single last one in the list.
This reduces the relative time of a rebuild from:
$ (cd builddir/; time make -j)
...
real 0m30.789s
user 1m33.477s
sys 0m19.678s
and the LIB entries:
$ grep ADDLIB builddir/libmysqld/mysqlserver-\$\<CONFIG\>.mri.tpl | wc -l
179
$ du -h builddir/libmysqld/libmariadbd.a
4.1G builddir/libmysqld/libmariadbd.a
To:
$ (cd builddir/; time make -j)
...
real 0m20.139s
user 1m32.423s
sys 0m12.208s
$ grep ADDLIB builddir/libmysqld/mysqlserver-\$\<CONFIG\>.mri.tpl | wc -l
25
$ du -h builddir/libmysqld/libmariadbd.a
688M builddir/libmysqld/libmariadbd.a
-
Marko Mäkelä authored
-
Vlad Lesin authored
from configuration files
-