- 24 Sep, 2021 10 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
trx_rseg_header_create(): Add a parameter for the value that is to be written to TRX_RSEG_MAX_TRX_ID. If we omit this write, then the updated test innodb.undo_truncate will fail for the 4k, 8k, 16k page sizes. This was broken ever since commit 947efe17 (MDEV-15158) removed the writes of transaction identifiers to the TRX_SYS page.

srv_do_purge(): Truncate undo tablespaces also during slow shutdown (innodb_fast_shutdown=0).

Thanks to Krunal Bauskar for noticing this problem.
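A minimal sketch of the srv_do_purge() part of this change, using hypothetical placeholder names rather than the actual InnoDB code: the point is that pending undo truncation is now carried out regardless of the shutdown mode, including innodb_fast_shutdown=0.
```
#include <cstdint>

enum class shutdown_mode : uint8_t { running, fast, slow };

// Placeholder for the real truncation step (trx_purge_truncate_history()).
void truncate_undo_tablespaces_if_needed() {}

// Sketch of the purge coordinator step: previously a guard equivalent to
// "mode != shutdown_mode::slow" skipped the truncation during slow shutdown;
// now the truncation is attempted in every mode.
void purge_step(shutdown_mode mode)
{
  (void) mode;                          // the shutdown mode no longer matters here
  truncate_undo_tablespaces_if_needed();
}

int main() { purge_step(shutdown_mode::slow); }
```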
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
trx_purge_truncate_history(): Do not force a write of the undo tablespace that is being truncated. Instead, prevent page writes by acquiring an exclusive latch on all dirty pages of the tablespace.

fseg_create(): Relax an assertion that could fail if a dirty undo page is being initialized during undo tablespace truncation (and trx_purge_truncate_history() already acquired an exclusive latch on it).

fsp_page_create(): If we are truncating a tablespace, try to reuse a page that we may have already latched exclusively (because it was in buf_pool.flush_list). To some extent, this helps the test innodb.undo_truncate,16k to avoid running out of buffer pool.

mtr_t::commit_shrink(): Mark as clean all pages that are outside the new bounds of the tablespace, and only add the newly reinitialized pages to the buf_pool.flush_list.

buf_page_create(): Do not unnecessarily invoke change buffer merge on undo tablespaces.

buf_page_t::clear_oldest_modification(bool temporary): Move some assertions to the caller buf_page_write_complete().

innodb.undo_truncate: Use a bigger innodb_buffer_pool_size=24M. On my system, it would otherwise hang 1 out of 1547 attempts (on the 40th repeat of innodb.undo_truncate,16k). Other page sizes were not affected.
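An illustrative sketch of the latching idea in trx_purge_truncate_history(), using generic standard-library types rather than the real buf_pool interfaces: instead of flushing the dirty pages of the tablespace being truncated, take an exclusive latch on each of them so that no write can happen while the space is rebuilt.
```
#include <cstddef>
#include <mutex>
#include <vector>

struct page { std::mutex latch; bool dirty = false; };  // stand-in for buf_page_t

// Acquire exclusive latches on all dirty pages of one tablespace so that the
// page cleaner cannot write them out while the tablespace is being truncated.
// The latches stay held until the returned locks go out of scope.
std::vector<std::unique_lock<std::mutex>>
latch_dirty_pages(page* pages, std::size_t n)
{
  std::vector<std::unique_lock<std::mutex>> held;
  for (std::size_t i = 0; i < n; i++)
    if (pages[i].dirty)
      held.emplace_back(pages[i].latch);
  return held;
}
```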
-
Marko Mäkelä authored
At least since commit 055a3334 (MDEV-13564) the undo log truncation in InnoDB did not work correctly. The main issue is that during the execution of trx_purge_truncate_history() some pages of the newly truncated undo tablespace could be discarded. This is improved from commit 1cb218c3, which was applied to earlier-version branches.

fsp_try_extend_data_file(): Apply the peculiar rounding of fil_space_t::size_in_header only to the system tablespace, whose size can be expressed in megabytes in a configuration parameter. Other files may freely grow by a number of pages.

fseg_alloc_free_page_low(): Do allow the extension of undo tablespaces, and mention the file name in the error message.

mtr_t::commit_shrink(): Implement crash-safe shrinking of a tablespace:
(1) durably write the log
(2) release the page latches of the rebuilt tablespace
(3) release the mutexes
(4) truncate the file
(5) release the tablespace latch
This is refactored from trx_purge_truncate_history().

log_write_and_flush_prepare(), log_write_and_flush(): New functions to durably write log during mtr_t::commit_shrink().
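A compact sketch of that ordering, with placeholder functions standing in for the real mtr/log/fil calls: the essential property is that the redo log describing the truncation is durable before the file is physically shrunk, so recovery can always replay the operation consistently.
```
// Placeholder steps; the real logic lives in mtr_t::commit_shrink().
void durably_write_redo_log()   {}   // cf. log_write_and_flush()
void release_page_latches()     {}
void release_mutexes()          {}
void truncate_data_file()       {}   // shrink the file on disk
void release_tablespace_latch() {}

void commit_shrink_sketch()
{
  durably_write_redo_log();      // (1) the log must reach the disk first
  release_page_latches();        // (2) pages of the rebuilt tablespace become usable
  release_mutexes();             // (3) drop the mutexes
  truncate_data_file();          // (4) only now shrink the data file
  release_tablespace_latch();    // (5) finally allow new activity in the space
}

int main() { commit_shrink_sketch(); }
```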
-
Marko Mäkelä authored
While the redo log is being resized in srv_start(), we must not write checkpoint information to the old log. Thanks to Matthias Leich for noticing this.
-
Jan Lindström authored
-
- 23 Sep, 2021 2 commits
-
-
Jan Lindström authored
MDEV-26566: galera.galera_var_cluster_address MTR failed: InnoDB: Assertion failure in file row0ins.cc line 3206

The actual problem was that we tried to calculate persistent statistics for wsrep_schema tables, in this case wsrep_streaming_log. These tables should not have persistent statistics. Therefore, these tables should be created with the STATS_PERSISTENT=0 table option. During a rolling upgrade the tables naturally already exist, so we need to alter them to add the STATS_PERSISTENT=0 table option.
-
Jan Lindström authored
Test changes only: do not output mysql.wsrep_streaming_log contents.
-
- 22 Sep, 2021 6 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
At least since commit 055a3334 (MDEV-13564) the undo log truncation in InnoDB did not work correctly. The main issue is that during the execution of trx_purge_truncate_history() some pages of the newly truncated undo tablespace could be discarded.

fsp_try_extend_data_file(): Apply the peculiar rounding of fil_space_t::size_in_header only to the system tablespace, whose size can be expressed in megabytes in a configuration parameter. Other files may freely grow by a number of pages.

fseg_alloc_free_page_low(): Do allow the extension of undo tablespaces, and mention the file name in the error message.

mtr_t::commit_shrink(): Implement crash-safe shrinking of a tablespace file. First, durably write the log, then shrink the file, and finally release the page latches of the rebuilt tablespace. Refactored from trx_purge_truncate_history().

log_write_and_flush_prepare(), log_write_and_flush(): New functions to durably write log during mtr_t::commit_shrink().
-
Marko Mäkelä authored
-
Daniel Ye authored
- Handle stored function conditions correctly, with the same logic as with UDFs.
- When running queries on Spider SE, by default we do not push down WHERE conditions containing usage of UDFs/stored functions to remote data nodes, unless the user demands it (by setting spider_use_pushdown_udf).
- Disable direct update/delete when a UDF condition is skipped.
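A hedged sketch of the pushdown decision described above, using a hypothetical condition descriptor rather than the real Spider data structures:
```
// Hypothetical condition summary; not the actual Spider/Item classes.
struct condition { bool uses_udf_or_stored_function; };

// Decision rule from the commit message: a WHERE condition that calls a UDF
// or stored function is pushed down to the remote data node only when the
// user has enabled spider_use_pushdown_udf; other conditions are pushed down.
bool push_condition_to_remote(const condition& cond, bool use_pushdown_udf)
{
  if (cond.uses_udf_or_stored_function)
    return use_pushdown_udf;
  return true;
}
```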
-
Daniel Ye authored
- Handle stored function conditions correctly, with the same logic as with UDFs.
- When running queries on Spider SE, by default we do not push down WHERE conditions containing usage of UDFs/stored functions to remote data nodes, unless the user demands it (by setting spider_use_pushdown_udf).
-
- 21 Sep, 2021 2 commits
-
-
Vladislav Vaintroub authored
-
Alexey Bychko authored
MDEV-24629 mariadb-connector-c-config conflicts with MariaDB's MariaDB-common-10.5.8-1.fc32.x86_64.rpm

This fix adds an alternative name to MariaDB-common on Fedora. The fix is placed outside of the IF/ELSEIF blocks so that it does not overwrite the existing one for Fedora.
-
- 22 Sep, 2021 2 commits
-
-
Anel Husakovic authored
Results in ``` %define api.pure /* We have threads */ ``` in the `<build-dir>/sql/[yy_mariadb|yy_oracle].yy` files.

Reviewed by: wlad@mariadb.com
-
Ian Gilfillan authored
-
- 21 Sep, 2021 5 commits
-
-
Vladislav Vaintroub authored
Avoid reading uninitialized memory in thd_get_error_context_description(). Note that THD::real_id can't be initialized at this stage, so it will be zeroed.
-
Monty authored
The test depends on how the server allocates memory and may fail randomly. Fixed by accepting that TRUNCATE may work in some cases (as happened to me).
-
Monty authored
The test will work after libmariadb has been updated to return the correct max_length for prepared statements.
-
Monty authored
The old code did not take unsigned numbers into account when calculating the max_length of fields.
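A small illustration of why the sign matters for the metadata width (generic C++, not the server's Field code): a signed 32-bit integer may need one extra character for the minus sign.
```
#include <cstdint>

// Display width of a 32-bit integer column: a signed value may need a sign
// character ("-2147483648" is 11 characters), while an unsigned value never
// does ("4294967295" is 10 characters). Ignoring the distinction yields a
// wrong max_length for unsigned fields.
constexpr uint32_t int32_display_width(bool is_unsigned)
{
  return is_unsigned ? 10 : 11;
}

static_assert(int32_display_width(false) == 11, "signed INT width");
static_assert(int32_display_width(true)  == 10, "unsigned INT width");
```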
-
Alexey Bychko authored
MDEV-23506 mariadb-connector-c-devel package from the standard RHEL 8 repo conflicts with MariaDB's packages

Added an alternative name for the MariaDB-devel package to replace mariadb-connector-c-devel from the RHEL 8 distribution. This patch is for 10.3+ on RHEL/CentOS 8.
-
- 20 Sep, 2021 1 commit
-
-
Julius Goryavsky authored
SST scripts currently use a Linux-specific construction to create a temporary directory if the path prefix for that directory is specified by the user. This does not work on FreeBSD. This commit adds support for FreeBSD. No separate test is required.
-
- 18 Sep, 2021 3 commits
-
-
Marko Mäkelä authored
btr_defragment_save_defrag_stats_if_needed(): Do not save defragmentation statistics for temporary tables. They are exempt from defragmentation anyway (ha_innobase::optimize() never invokes defragmentation for them), and the user-visible names are not available inside InnoDB. Furthermore, InnoDB assumes that temporary tables are never accessed by threads other than the one that handles the session with which the temporary table is associated.

We also simplify the test innodb.innodb_defrag_stats and include a test case that demonstrates that defragmentation statistics are no longer being saved for temporary tables.
-
Marko Mäkelä authored
dict_stats_process_entry_from_defrag_pool(): Acquire MDL on the table for which we are invoking dict_stats_save_defrag_stats(), to avoid any race condition with DROP TABLE or similar operations.
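A minimal sketch of that locking order (a std::mutex stands in for a metadata lock; the names are placeholders, not the dict_stats API): hold the MDL for the whole duration of the statistics save, so that a concurrent DROP TABLE cannot free the table underneath.
```
#include <mutex>

struct table { std::mutex mdl; };      // the mutex stands in for an MDL on the table

void save_defrag_stats(table&) {}      // placeholder for dict_stats_save_defrag_stats()

// Hold the metadata lock across the statistics save, so that DROP TABLE or
// similar DDL cannot run concurrently on the same table.
void process_defrag_pool_entry(table& t)
{
  std::lock_guard<std::mutex> mdl_guard(t.mdl);
  save_defrag_stats(t);
}
```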
-
Marko Mäkelä authored
dict_stats_process_entry_from_defrag_pool(): Restore a condition as it was before commit 82b7c561.
-
- 17 Sep, 2021 9 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Thirunarayanan Balathandayuthapani authored
The test case failed to include the undo tablespaces while waiting for the encryption thread to encrypt all existing tablespaces.
-
Monty authored
The issue is that max_length for prepared statements is different from that for normal queries, which can optimize max_length based on the result length.
-
Marko Mäkelä authored
When computing statistics, let us play it safe and check whether an insert into an empty table is in progress, once we have acquired the root page latch. If yes, let us pretend that the table is empty, just like MVCC reads would do.

It is unlikely that this race condition could lead to any crashes before MDEV-24621, because the index tree structure should be protected by the page latches. But at least we can avoid some busy work and return earlier.

As part of this, some code that is only used for statistics calculation is being moved into static functions in that compilation unit.
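A sketch of that ordering (hypothetical index descriptor, not dict_index_t): the check happens only after the root page latch is held, and a pending insert into an empty table makes the estimate report zero rows, mirroring what an MVCC read would see.
```
#include <mutex>

// Hypothetical, simplified index descriptor.
struct index_info
{
  std::mutex root_latch;                        // stands in for the root page latch
  bool empty_table_insert_in_progress = false;  // bulk insert into an empty table
  unsigned long long n_rows = 0;
};

// Estimate the number of rows for statistics: acquire the root page latch
// first, then treat the table as empty if a bulk insert is still in progress.
unsigned long long estimate_n_rows(index_info& index)
{
  std::lock_guard<std::mutex> root(index.root_latch);
  if (index.empty_table_insert_in_progress)
    return 0;
  return index.n_rows;
}
```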
-
Marko Mäkelä authored
btr_root_block_get(): Check for index->page == FIL_NULL.

btr_root_get(): Declare static. Other callers can invoke btr_root_block_get() directly.

btr_get_size(): Remove conditions that are checked in btr_root_block_get().
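A hedged sketch of the added check (generic types, not buf_block_t; FIL_NULL is InnoDB's "no page" marker): an index whose root page number is FIL_NULL has no root page, so the lookup should return a null pointer instead of attempting a page fetch.
```
#include <cstdint>

constexpr uint32_t FIL_NULL = 0xFFFFFFFF;  // "no page" marker

struct block {};                           // stand-in for buf_block_t

// Return the root block of an index, or nullptr if the index currently has
// no root page (for example, while it is being dropped or rebuilt).
block* root_block_get(uint32_t root_page_no)
{
  if (root_page_no == FIL_NULL)
    return nullptr;
  static block dummy;                      // placeholder for a buffer pool fetch
  return &dummy;
}
```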
-
Krunal Bauskar authored
* The buffer pool has latches that protect access to pages.
* There is a latch per N pages (check page_hash_table for more details).
* N is calculated based on the cache line size.
* For example, if the cache line size is 64, then 7 page pointers + 1 latch can be hosted on the same cache line; if it is 128, then 15 page pointers + 1 latch can be hosted on the same cache line.
* ARM generally has wider cache lines, so on ARM 1 latch is used to access 15 pages vs. 1 latch for 7 pages on x86. Naturally, contention is higher in the ARM case.
* This patch helps relax that contention by limiting the elements per cache line to 7 (+ 1 latch slot). For a wider cache line (say 128), the remaining 8 slots are kept empty. This ensures that no 2 latches share the same cache line, avoiding latch-level contention.

Based on a suggestion from Marko, the same logic is now extended to lock_sys_t::hash_table.
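The arithmetic, as a self-contained sketch (illustrative constants only; the real values live in the buffer pool's hash table code): with 8-byte slots, a 64-byte line holds 1 latch + 7 pointers, while on a 128-byte line the patch still attaches only 7 pointers to each latch and leaves the remaining 8 slots empty.
```
#include <cassert>
#include <cstddef>

// Illustrative arithmetic only, not the actual buf_pool constants.
constexpr std::size_t slot_size       = sizeof(void*);       // 8 on 64-bit
constexpr std::size_t slots_per_latch = 64 / slot_size - 1;  // 7 page pointers

// Total slots that fit into one cache line of the given size.
constexpr std::size_t slots_per_line(std::size_t line_size)
{
  return line_size / slot_size;  // 8 for a 64-byte line, 16 for a 128-byte line
}

int main()
{
  assert(slots_per_latch == 7);
  // 64-byte line: 1 latch + 7 pointers, nothing left over.
  assert(slots_per_line(64) - 1 - slots_per_latch == 0);
  // 128-byte line: 1 latch + 7 pointers, 8 slots deliberately left empty so
  // that two latches never share a cache line.
  assert(slots_per_line(128) - 1 - slots_per_latch == 8);
}
```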
-
Julius Goryavsky authored
MDEV-19950 addendum: galera_ssl_upgrade removed from the list of disabled tests and adapted for 10.4+
-
Jan Lindström authored
The problem was that there was an extra condition, !thd->lex->no_write_to_binlog, before the call to begin TOI. It seems that this variable is not initialized. TRUNCATE does not support the [NO_WRITE_TO_BINLOG | LOCAL] keywords, thus we should not check this condition. All this was hidden in a macro, so I decided to replace those macros, which were used in only a few places, with actual function calls.
-