- 11 Jun, 2020 8 commits
-
-
Marko Mäkelä authored
For reads, the buf_pool.page_hash is protected by buf_pool.mutex or by the hash_lock. There is no need to compute or acquire hash_lock if we are not modifying the buf_pool.page_hash. However, the buf_pool.page_hash latch must be held exclusively when changing buf_page_t::in_file(), or if we desire to prevent buf_page_t::can_relocate() or buf_page_t::buf_fix_count() from changing.

rw_lock_lock_word_decr(): Add a comment that explains the polling logic.

buf_page_t::set_state(): When in_file() is to be changed, assert that an exclusive buf_pool.page_hash latch is being held. Unfortunately we cannot assert this for set_state(BUF_BLOCK_REMOVE_HASH), because set_corrupt_id() may already have been called.

buf_LRU_free_page(): Check buf_page_t::can_relocate() before acquiring the hash_lock.

buf_block_t::initialise(): Initialize also page.buf_fix_count().

buf_page_create(): Initialize buf_fix_count while not holding any mutex or hash_lock. Acquire the hash_lock only for the duration of inserting the block into the buf_pool.page_hash.

buf_LRU_old_init(), buf_LRU_add_block(), buf_page_t::belongs_to_unzip_LRU(): Do not assert buf_page_t::in_file(), because buf_page_create() will invoke buf_LRU_add_block() before acquiring the hash_lock and calling buf_page_t::set_state().

buf_pool_t::validate(): Rely on the buf_pool.mutex and do not unnecessarily acquire any buf_pool.page_hash latches.

buf_page_init_for_read(): Clarify that we must acquire the hash_lock upfront in order to prevent a race with buf_pool_t::watch_remove().
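To make the buf_LRU_free_page() point above concrete, here is a minimal generic C++ sketch of the same ordering, using invented stand-in names rather than the InnoDB types: perform the cheap check first without any latch, then take the latch and re-check, because the answer can change while the latch is being acquired.

    #include <mutex>

    // Invented stand-in types, not the InnoDB buffer pool code.
    struct page_stub
    {
      std::mutex hash_latch;            // stands in for the buf_pool.page_hash latch
      bool buf_fixed = false;           // stands in for a pin that forbids relocation
      bool can_relocate() const { return !buf_fixed; }
      void relocate() { /* the work that genuinely needs the latch */ }
    };

    inline bool try_free(page_stub &p)
    {
      if (!p.can_relocate())            // cheap early-out: no latch acquired at all
        return false;
      std::lock_guard<std::mutex> latch(p.hash_latch);
      if (!p.can_relocate())            // re-check under the latch: the answer may
        return false;                   // have changed while we were acquiring it
      p.relocate();
      return true;
    }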
-
Kentoku SHIBA authored
-
Marko Mäkelä authored
This regression was introduced in commit dd77f072 (MDEV-22841).
-
Marko Mäkelä authored
ut_filename_hash(): Add better casts to please the compiler:

    warning C4307: '*': integral constant overflow

This regression was introduced in commit dd77f072 (MDEV-22841).
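As a hedged illustration of the kind of cast involved, here is a generic constexpr filename hash, not the actual ut_filename_hash() code: keeping every operand of the compile-time multiplication explicitly in uint64_t leaves MSVC no narrower constant type to flag with C4307.

    #include <cstdint>

    // Hypothetical FNV-1a-style constexpr hash; every operand of the folded
    // multiplication is kept explicitly in uint64_t.
    constexpr uint64_t hash_step(uint64_t h, char c)
    {
      return (h ^ static_cast<uint64_t>(static_cast<unsigned char>(c)))
             * static_cast<uint64_t>(1099511628211ULL);
    }

    constexpr uint64_t hash_name(const char *s,
                                 uint64_t h = 14695981039346656037ULL)
    {
      return *s ? hash_name(s + 1, hash_step(h, *s)) : h;
    }

    static_assert(hash_name("") == 14695981039346656037ULL,
                  "empty input returns the seed unchanged");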
-
Daniel Black authored
That doesn't support STRING(APPEND ..)
-
Varun Gupta authored
MDEV-22819: Wrong result or Assertion `ix > 0' failed in read_to_buffer upon select with GROUP BY and GROUP_CONCAT

In the merge_buffers phase for sorting, the sort buffer size is divided between the number of chunks. The chunks have a start and end position (m_buffer_start and m_buffer_end). Then we read as many records as fit in this buffer for a chunk of the file. The issue here was that we were resetting the end of the buffer (m_buffer_end) to the number of bytes read. This caused a problem because, with dynamic sizes of sort keys, it is possible that later we would not be able to accommodate even one key inside a chunk of the file. So the fix is to not reset the end of the buffer for a chunk of the file.
-
Sachin authored
-
Sachin authored
Add missing call to handler->prepare_for_insert() in Rows_log_event::do_apply_event
-
- 10 Jun, 2020 20 commits
-
-
Marko Mäkelä authored
MONITOR_SRV_MEM_VALIDATE_MICROSECOND, MEM_PERIODIC_CHECK, SRV_MASTER_MEM_VALIDATE_INTERVAL: Remove. These were unused ever since UNIV_MEM_DEBUG was removed. MONITOR_SRV_PURGE_MICROSECOND: Remove. This was always unused.
-
Alexander Barkov authored
CREATE PROCEDURE did not detect unknown SP variables in assignments like this:

    SET var=a_long_var_name_with_a_typo;

The error happened only during SP execution time, and only if the control flow reaches the erroneous statement.

Fixing most expressions to detect unknown identifiers. This includes simple subqueries without tables:

- Query specification: SELECT list, WHERE, HAVING (inside aggregate functions) clauses, e.g.
    SET var= (SELECT unknown_ident+1);
    SET var= (SELECT 1 WHERE unknown_identifier);
    SET var= (SELECT 1 HAVING SUM(unknown_identifier));
- Table value constructor: VALUES clause, e.g.:
    SET var= (VALUES(unknown_ident));

Note, in some more complex subquery cases unknown variables are still not detected (this will be fixed separately):

- Derived tables:
    SET a=(SELECT unknown_ident FROM (SELECT 1 AS alias) t1);
    SET res=(SELECT * FROM t1 LEFT OUTER JOIN (SELECT unknown_ident) t2 USING (c1));
- CTE:
    SET a=(WITH cte1 (a) AS (SELECT unknown_ident) SELECT * FROM cte1);
    SET a=(WITH cte1 (a,b) AS (VALUES (unknown,2),(3,4)) SELECT * FROM cte1);
    SET a=(WITH cte1 (a,b) AS (VALUES (1,2),(3,4)) SELECT unknown_ident FROM cte1);
- SELECT .. GROUP BY unknown_identifier
- SELECT .. ORDER BY unknown_identifier
- HAVING with an unknown identifier outside of any aggregate functions:
    SELECT .. HAVING unknown_identifier;
-
Eugene Kosov authored
The problematic mutex is dict_sys.mutex. The idea of the patch: unlink() the file under that mutex while its handle is still open. This way unlink() will be fast and the actual file removal will happen on close(). And close() will be called outside of dict_sys.mutex. This should be safe against a crash which may happen between unlink() and close(): the file will be removed by the OS anyway. The same applies to both *nix and Windows.

I created and removed a 4G file on some NVMe SSD on ext4:

    write(3, "\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1"..., 1048576) = 1048576 <0.000519>
    fdatasync(3) = 0 <3.533763>
    close(3) = 0 <0.000011>
    unlink("file") = 0 <0.411563>

    write(3, "\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1\1"..., 1048576) = 1048576 <0.000520>
    fdatasync(3) = 0 <3.544938>
    unlink("file") = 0 <0.000029>
    close(3) = 0 <0.407057>

Such systems can benefit from this patch.

fil_node_t::detach(): closes fil_node_t but not the file handle, and returns that file handle
fil_node_t::prepare_to_close_or_detach(): 'closes' fil_node_t
fil_node_t::close_to_free(): new argument detach_handle
fil_system_t::detach(): now can detach file handles
fil_delete_tablespace(): now can detach file handles
row_drop_table_for_mysql(): performs the actual file removal
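A minimal POSIX sketch of the ordering described above, purely illustrative and not the MariaDB code: unlinking an open file only removes the directory entry, which is cheap, while the expensive block reclamation is deferred to close(), which can then run outside the hot mutex.

    #include <fcntl.h>
    #include <unistd.h>

    // Hypothetical helper, not the MariaDB fil0fil.cc code: unlink() while the
    // handle is open so only the directory entry is removed (cheap), and defer
    // the expensive block reclamation to close(), outside any hot mutex.
    static void drop_file_deferred(const char *path)
    {
      int fd = open(path, O_RDWR);
      if (fd < 0)
        return;
      /* ... the work that must run under the mutex would end around here ... */
      unlink(path);   // fast: removes the name; data survives until the last handle closes
      close(fd);      // slow part: the filesystem frees the blocks here; a crash
                      // between the two calls still lets the OS reclaim the file
    }

    int main() { drop_file_deferred("/tmp/example-to-drop"); return 0; }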
-
Vladislav Vaintroub authored
-
Otto Kekäläinen authored
To install Spider one can simply drop a /etc/mysql/conf.d/spider.cnf like

    [mariadb]
    plugin-load-add=ha_spider.so

This is automatically generated and installed when the plugin is correctly registered to plugin.cmake with its own component name. Many other plugins, such as Connect and RocksDB, install in the same way.

This solves MDEV-19917, as the mere adding and removing of spider.cnf automatically installs and uninstalls it.

Remove the overly complex and unnecessary install.sql from Spider; it should not be needed in modern times anymore. With this change there is no need for an uninstall.sql either.
-
Otto Kekäläinen authored
- Recommend max_allowed_packet=1G, which is the same as the default client value.
- Remove thread_concurrency, which was removed in 10.5.
- Remove the query cache; no longer a recommended practice.
- Remove binlog_*; we should not recommend those too easily but rather require the database administrator to read up on them themselves.
- Remove the chroot setting; not relevant in the modern container era.
- Explicitly show an innodb_buffer_pool_size example, as it is the most likely thing a database administrator should change.
- Don't recommend rate limiting in the slow log; logging only one query in 1000 would not be optimal for the basic case, hence a bad example.
- Install the example configs in /usr/share/mysql.
- Use the correct path /run/ instead of /var/run/.
-
Otto Kekäläinen authored
Split the big my.cnf into multiple smaller files with the same filenames and contents as the official Debian/Ubuntu packaging has.

The config contents stay the same apart from the following additions, which the original MariaDB upstream configs had and which probably need to be kept:
- lc-messages=en_US and skip-external-locking in the server config

Configs the original MariaDB upstream had that are seemingly unnecessary and thus removed:
- port=3306 removed from the client config
- log_warnings=2 removed from the server config

Also adopt the update-alternatives system using mysql-common/configure-symlinks. This way it is aligned with downstream Debian/Ubuntu packaging.
-
Sujatha authored
Analysis:
========
The lists of values provided for "replicate_ignore_table" and "replicate_do_table" are stored in a HASH. When an empty list is provided, the HASH structure doesn't get initialized. The existing code treats an empty element list as an error and tries to clean up the uninitialized HASH. This results in the above MSAN issue.

Fix:
===
The cleanup should be initiated only when there is an error while parsing the 'replicate_do_table' or 'replicate_ignore_table' list and the HASH is in an initialized state. Otherwise, for an empty list, it should simply return success.
-
Daniel Black authored
This corrects build failures on ppc64{,le} with the WITH_EMBEDDED_SERVER option enabled.

MDEV-22641 added an unusual case in which the same object file was included twice with a different function definition. The original cmake/merge_archives_unix.cmake did not tolerate such eventualities, so we move to the highest-voted answer on Stack Overflow for the merging of static libraries: https://stackoverflow.com/questions/3821916/how-to-merge-two-ar-static-libraries-into-one

Thin archives generated compile failures, and the libtool mechanism would have been another dependency, using .la files that are not part of normal cmake output. The straight Apple mechanism of libtool with static archives also failed on Linux. This leaves the MRI script mechanism, which is what was implemented in this change.
-
Otto Kekäläinen authored
To install Spider one can simply drop a /etc/mysql/conf.d/spider.cnf like

    [mariadb]
    plugin-load-add=ha_spider.so

This is automatically generated and installed when the plugin is correctly registered to plugin.cmake with its own component name. Many other plugins, such as Connect and RocksDB, install in the same way.

This solves MDEV-19917, as the mere adding and removing of spider.cnf automatically installs and uninstalls it.

Remove the overly complex and unnecessary install.sql from Spider; it should not be needed in modern times anymore. With this change there is no need for an uninstall.sql either.
-
Otto Kekäläinen authored
- Recommend max_allowed_packet=1G, which is the same as the default client value.
- Remove thread_concurrency, which was removed in 10.5.
- Remove the query cache; no longer a recommended practice.
- Remove binlog_*; we should not recommend those too easily but rather require the database administrator to read up on them themselves.
- Remove the chroot setting; not relevant in the modern container era.
- Explicitly show an innodb_buffer_pool_size example, as it is the most likely thing a database administrator should change.
- Don't recommend rate limiting in the slow log; logging only one query in 1000 would not be optimal for the basic case, hence a bad example.
- Install the example configs in /usr/share/mysql.
- Use the correct path /run/ instead of /var/run/.
-
Otto Kekäläinen authored
Split the big my.cnf into multiple smaller files with the same filenames and contents as the official Debian/Ubuntu packaging has.

The config contents stay the same apart from the following additions, which the original MariaDB upstream configs had and which probably need to be kept:
- lc-messages=en_US and skip-external-locking in the server config

Configs the original MariaDB upstream had that are seemingly unnecessary and thus removed:
- port=3306 removed from the client config
- log_warnings=2 removed from the server config

Also adopt the update-alternatives system using mysql-common/configure-symlinks. This way it is aligned with downstream Debian/Ubuntu packaging.
-
Sujatha authored
Analysis:
========
The lists of values provided for "replicate_ignore_table" and "replicate_do_table" are stored in a HASH. When an empty list is provided, the HASH structure doesn't get initialized. The existing code treats an empty element list as an error and tries to clean up the uninitialized HASH. This results in the above MSAN issue.

Fix:
===
The cleanup should be initiated only when there is an error while parsing the 'replicate_do_table' or 'replicate_ignore_table' list and the HASH is in an initialized state. Otherwise, for an empty list, it should simply return success.
-
Daniel Black authored
This corrects build failures on ppc64{,le} with the WITH_EMBEDDED_SERVER option enabled.

MDEV-22641 added an unusual case in which the same object file was included twice with a different function definition. The original cmake/merge_archives_unix.cmake did not tolerate such eventualities, so we move to the highest-voted answer on Stack Overflow for the merging of static libraries: https://stackoverflow.com/questions/3821916/how-to-merge-two-ar-static-libraries-into-one

Thin archives generated compile failures, and the libtool mechanism would have been another dependency, using .la files that are not part of normal cmake output. The straight Apple mechanism of libtool with static archives also failed on Linux. This leaves the MRI script mechanism, which is what was implemented in this change.
-
Vladislav Vaintroub authored
Change how the lookup for the "auto" PSI_memory_keys is done: look up filename hashes (integers) instead of C strings. Generate these hashes at compile time with constexpr, rather than at runtime.
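A purely illustrative sketch of the idea, with invented names rather than the real PSI code: each contributing file name is reduced to an integer at compile time, so the lookup on the allocation path compares integers instead of calling strcmp().

    #include <cstdint>

    // Invented names, not the PSI implementation.
    constexpr uint64_t name_hash(const char *s, uint64_t h = 5381)
    {
      return *s ? name_hash(s + 1, h * 33 + static_cast<unsigned char>(*s)) : h;
    }

    struct mem_key_entry { uint64_t file_hash; unsigned key; };

    // Hypothetical table; in reality the hashes would come from __FILE__.
    constexpr mem_key_entry mem_keys[] = {
      { name_hash("buf0buf.cc"), 1 },
      { name_hash("fil0fil.cc"), 2 },
    };

    constexpr unsigned find_key(uint64_t h, unsigned i = 0)
    {
      return i == sizeof(mem_keys) / sizeof(mem_keys[0]) ? 0
           : mem_keys[i].file_hash == h                  ? mem_keys[i].key
                                                         : find_key(h, i + 1);
    }

    static_assert(find_key(name_hash("fil0fil.cc")) != 0,
                  "integer lookup finds a registered file");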
-
Marko Mäkelä authored
Let us invoke the debug member functions of mtr_t directly.

mtr_t::memo_contains(): Change the parameter type to const rw_lock_t&. This function cannot be invoked on buf_block_t::lock. The function mtr_t::memo_contains_flagged() is intended to be invoked on buf_block_t* or rw_lock_t*, and it, along with mtr_t::memo_contains_page_flagged(), is the way to check whether a buffer pool page has been latched within a mini-transaction.
-
Marko Mäkelä authored
xdes_get_state(), fseg_get_nth_frag_page_no(), fseg_find_free_frag_page_slot(), fseg_find_last_used_frag_page_slot(), fseg_get_n_frag_pages(), fseg_n_reserved_pages_low(), fseg_print_low(): Remove the unused parameter mtr, and add a const qualifier to the pointer to the buffer pool page frame.
-
Marko Mäkelä authored
This should have been part of commit 70d4e55d.
-
Marko Mäkelä authored
debug_sync_set_action(): Declare the dummy function inline, to silence a warning about a declared-but-unused static function. This amends commit 3ccd6766.
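A tiny generic illustration of the warning being silenced, not the debug_sync.h code: a file-scope static function that some translation unit never calls draws -Wunused-function, while an inline definition does not.

    // A single translation unit demonstrating the warning in question.
    static void unused_helper_static() {}  // -Wunused-function: defined but not used
    inline void unused_helper_inline() {}  // no warning: inline definitions are exempt

    int main() { return 0; }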
-
Julius Goryavsky authored
-
- 09 Jun, 2020 7 commits
-
-
Varun Gupta authored
MDEV-22399: Remove multiple calls to enable and disable Handler::keyread and perform it after the plan refinement phase is done

Introduce a function to enable keyreads for indexes and use this function once all the decisions of the plan refinement phase are done.
-
Marko Mäkelä authored
svr_n_page_hash_locks: Increase from 16 to 64. Before MDEV-15058, we used to have the buf_pool.page_hash partitioned to each instance.

rw_lock_lock_word_decr(): Sleep a little in the spinloop.

rw_lock_s_lock_low(): Correct a comment. The function does perform spinning.

This improves scalability in read-only workloads on a 32-CPU system when the number of concurrent connections exceeds the CPU core count. Thanks to Axel Schwenke for running benchmarks.
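A rough generic sketch of the "sleep a little in the spinloop" idea, invented code rather than rw_lock_lock_word_decr(): after a few failed rounds the spinner backs off briefly instead of hammering the contended cache line.

    #include <atomic>
    #include <chrono>
    #include <thread>

    // Invented code: try to grab a slot by decrementing the lock word, backing
    // off briefly every few rounds.
    inline bool try_decrement(std::atomic<int> &lock_word)
    {
      for (int spin = 0; spin < 100; ++spin)
      {
        int v = lock_word.load(std::memory_order_relaxed);
        if (v > 0 &&
            lock_word.compare_exchange_weak(v, v - 1, std::memory_order_acquire))
          return true;                              // got the slot
        if (spin % 16 == 15)
          std::this_thread::sleep_for(std::chrono::microseconds(10));
      }
      return false;                                 // caller falls back to waiting
    }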
-
Marko Mäkelä authored
-
Varun Gupta authored
The issue here is that the charset for Sort_param::tmp_buffer is cleared when bzero() is done for Sort_param. Make sure to set the charset explicitly in the constructor for tmp_buffer.
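A generic sketch of the bug pattern being fixed, with invented stand-in names rather than the real Sort_param/String types: a wholesale memset()/bzero() of the object also wipes a pointer field that was set up earlier, so such fields must be assigned after the wipe, for instance in the constructor.

    #include <cstring>

    // Invented stand-ins, not the real Sort_param/String types.
    struct charset_stub { const char *name; };
    static const charset_stub binary_charset = { "binary" };

    struct sort_param_stub
    {
      char buffer[64];
      const charset_stub *buffer_charset;
    };

    inline void init_sort_param(sort_param_stub &p)
    {
      std::memset(&p, 0, sizeof p);         // the wholesale wipe: clears buffer_charset too
      p.buffer_charset = &binary_charset;   // the fix: set the charset after the wipe
    }

    int main()
    {
      sort_param_stub p;
      init_sort_param(p);
      return p.buffer_charset == nullptr;   // 0 (success): the charset was restored
    }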
-
Sergei Golubchik authored
Reduce the amount of engine-specific code in the server, particularly as it does not serve any purpose now. It may be needed for the VP engine, to be reconsidered in MDEV-7795.
-
Alexander Barkov authored
Disallow BIT_AND(), BIT_OR(), BIT_XOR() for data types GEOMETRY and INET6, as they cannot return any useful integer values.
-
Eugene Kosov authored
MDEV-22325 ib_logfile0 is too small for innodb_thread_concurrency=0

The size of ib_logfile0 should be bigger than 200 kB * innodb_thread_concurrency. Correct the log message. IMO, we shouldn't be very precise in that message, as the formula behind it is not trivial. Also performed a little cleanup.
-
- 08 Jun, 2020 5 commits
-
-
Marko Mäkelä authored
buf_LRU_make_block_young(): Merge with buf_page_make_young().

buf_pool_check_no_pending_io(): Remove. Replaced with buf_pool.any_io_pending() and buf_pool.io_pending(), which do not unnecessarily acquire buf_pool.mutex.

buf_pool_t::init_flush[]: Use atomic access, so that buf_flush_wait_LRU_batch_end() can avoid acquiring buf_pool.mutex.

buf_pool_t::try_LRU_scan: Declare as bool.
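A generic sketch of the init_flush[] change above, with invented names instead of the buf0flu.cc code: a flag that a waiter merely polls can be read atomically, so the waiter no longer has to take the big mutex just to observe it.

    #include <atomic>
    #include <thread>

    // Invented names, not the buf0flu.cc code.
    struct flush_state_stub
    {
      std::atomic<bool> lru_batch_running{false};   // was: plain field guarded by a mutex

      void wait_for_lru_batch_end() const
      {
        while (lru_batch_running.load(std::memory_order_acquire))
          std::this_thread::yield();                // no global mutex needed for the poll
      }
    };

    int main()
    {
      flush_state_stub s;
      s.wait_for_lru_batch_end();                   // returns immediately: flag is false
      return 0;
    }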
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-