- 22 Oct, 2022 1 commit
-
-
Alexander Barkov authored
This is a new version of the patch, replacing the reverted one: MDEV-28727 ALTER TABLE ALGORITHM=NOCOPY does not work after upgrade.

Ignore the difference in the key packing flags HA_BINARY_PACK_KEY and HA_PACK_KEY during ALTER, to allow ALGORITHM=INSTANT and ALGORITHM=NOCOPY in more cases. If for some reason (e.g. due to a bug fix such as MDEV-20704) these cumulative (over all segments) flags in KEY::flags differ between the old and the new table inside compare_keys_but_name(), the difference in HA_BINARY_PACK_KEY and HA_PACK_KEY in KEY::flags is not really important: MyISAM and Aria handle such cases well. The per-segment flags are stored in the MYI and MAI files anyway and are read at ha_myisam::open() and ha_maria::open() time, so indexes get opened with the correct per-segment flags that were calculated at table CREATE time, no matter what the old (CREATE time) and new (ALTER time) per-index compression flags are, and no matter whether they are equal. All other engines ignore key compression flags, so this change is safe for them as well.
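A minimal sketch of the comparison this relaxes; the real check lives in compare_keys_but_name(), while keys_compatible() and the flag values below are illustrative stand-ins, not the actual my_base.h constants:

    // Sketch only: mask out the key-packing bits before comparing the old
    // and new key flags, so a difference only in HA_PACK_KEY /
    // HA_BINARY_PACK_KEY no longer forces ALGORITHM=COPY.
    #include <cstdio>
    #include <cstdint>

    static const uint32_t HA_PACK_KEY        = 1U << 0;   // assumed value
    static const uint32_t HA_BINARY_PACK_KEY = 1U << 15;  // assumed value

    static bool keys_compatible(uint32_t old_flags, uint32_t new_flags)
    {
      const uint32_t ignore = HA_PACK_KEY | HA_BINARY_PACK_KEY;
      // Differences only in the packing bits are tolerated.
      return (old_flags & ~ignore) == (new_flags & ~ignore);
    }

    int main()
    {
      uint32_t old_key = HA_PACK_KEY;                      // flags at CREATE time
      uint32_t new_key = HA_PACK_KEY | HA_BINARY_PACK_KEY; // flags recomputed at ALTER time
      std::printf("compatible: %d\n", keys_compatible(old_key, new_key)); // prints 1
      return 0;
    }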
-
- 21 Oct, 2022 1 commit
-
-
Alexander Barkov authored
This reverts commit 1ea5e402
-
- 14 Oct, 2022 4 commits
-
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
- 13 Oct, 2022 2 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
WITH_EMBEDDED_SERVER compiles the SQL parsers separately. Thanks to Vladislav Vaintroub for helping with this. Fixes up commit e05ab0cf
-
- 12 Oct, 2022 2 commits
-
-
Nikita Malyavin authored
See also commits aa8a31da and 64678c for a Bug #22990029 fix. In this scenario, INSERT chose to check whether delete unmarking is available for a just-deleted record. To build an update vector, it needed to calculate the vcols as well. Since this INSERT was not IGNORE-flagged, the recalculation failed. Solution: temporarily set abort_on_warning=false while calculating the column for the delete-unmarked insert.
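A self-contained sketch of the save/restore pattern described here; Session, compute_virtual_column and delete_unmark_insert are simplified stand-ins, the real code toggles the THD's abort_on_warning flag inside InnoDB's insert path:

    #include <cstdio>

    // Simplified stand-in for the session state kept on THD in the server.
    struct Session { bool abort_on_warning; };

    static bool compute_virtual_column(Session &s)
    {
      // Pretend the computation raises a warning; with abort_on_warning set,
      // the warning would escalate to an error and the INSERT would fail.
      return !s.abort_on_warning;
    }

    static bool delete_unmark_insert(Session &s)
    {
      const bool saved = s.abort_on_warning;
      s.abort_on_warning = false;          // tolerate warnings just for this step
      const bool ok = compute_virtual_column(s);
      s.abort_on_warning = saved;          // restore strict behaviour afterwards
      return ok;
    }

    int main()
    {
      Session s{true};                     // non-IGNORE INSERT: strict mode
      std::printf("vcol recomputed: %d\n", delete_unmark_insert(s));
      return 0;
    }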
-
Nikita Malyavin authored
As of now, InnoDB does not store a trx_id for each record in a secondary index. The idea behind this is the following: store only a per-page max_trx_id, and delete-mark records when they are deleted or updated. When a read starts, it remembers the lowest id of the currently active transactions; InnoDB refers to it as trx->read_view->m_up_limit_id (see also ReadView::open). When a page is fetched, its max_trx_id is compared to m_up_limit_id. If the value is lower, and the secondary index record is not delete-marked, then the page is safe to read as is. Otherwise, a clustered index access may be needed; see the page_get_max_trx_id call in row_search_mvcc and the corresponding switch (row_search_idx_cond_check(...)) below it.

Virtual columns are required to be updated in case the record was delete-marked. The motivation behind this is documented in Row_sel_get_clust_rec_for_mysql::operator() near the row_sel_sec_rec_is_for_clust_rec call. The above is basically a description of why virtual column computation can normally happen during SELECT and, generally, during a vcol index access.

Sometimes the stats tables are updated by InnoDB. This starts a new transaction, and it can happen that it has not finished by the moment of the SELECT execution, forcing virtual column recomputation. If the result was something that normally outputs a warning, like division by zero, the warning could be emitted in a racy manner. The solution is to suppress warnings when a column is computed for the described purpose. An ignore_warnings argument is added to innobase_get_computed_value; currently it is only true for the call from row_sel_sec_rec_is_for_clust_rec.
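A rough, self-contained sketch of the decision described above; the real logic lives in row_search_mvcc and Row_sel_get_clust_rec_for_mysql::operator(), while the struct names and sec_rec_visible_without_clust are simplified stand-ins:

    #include <cstdio>
    #include <cstdint>

    // Simplified stand-ins for InnoDB structures.
    struct SecIndexPage { uint64_t max_trx_id; };
    struct SecIndexRec  { bool delete_marked; };
    struct ReadView     { uint64_t m_up_limit_id; }; // lowest active trx id at view open

    static bool sec_rec_visible_without_clust(const SecIndexPage &page,
                                              const SecIndexRec &rec,
                                              const ReadView &view)
    {
      // Every change on this page committed before the view was opened,
      // and the record is not delete-marked: safe to return it as is.
      // Only in the other cases is the clustered index (and thus vcol
      // recomputation) consulted.
      return page.max_trx_id < view.m_up_limit_id && !rec.delete_marked;
    }

    int main()
    {
      ReadView view{100};
      SecIndexPage page{42};
      SecIndexRec rec{false};
      std::printf("%d\n", sec_rec_visible_without_clust(page, rec, view)); // prints 1
      return 0;
    }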
-
- 11 Oct, 2022 9 commits
-
-
Vladislav Vaintroub authored
MDEV-19243 introduced a regression on Windows. In the (supposedly rare) case where the environment variable TZ was set, @@system_time_zone no longer derives from TZ. Instead, it incorrectly refers to the system default time zone, even though UTC time conversion takes TZ into account. The fix is to restore the TZ-aware handling (the time zone name derives from tzname) if TZ is set.
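A hedged sketch of the restored behaviour, not the actual server code: when TZ is present in the environment, the name comes from the C runtime's tzname after tzset(). This uses the POSIX spellings; the Windows CRT equivalents are _tzset()/_tzname.

    #include <cstdio>
    #include <cstdlib>
    #include <time.h>

    // Sketch only: where @@system_time_zone would be derived from when TZ is set.
    int main()
    {
      const char *tz = std::getenv("TZ");
      if (tz)
      {
        tzset();  // the C runtime parses TZ and fills tzname[]
        std::printf("system_time_zone derives from TZ: %s\n", tzname[0]);
      }
      else
      {
        std::printf("system_time_zone derives from the system default\n");
      }
      return 0;
    }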
-
Sergei Golubchik authored
followup for e8acec89
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Zhibo Zhang authored
The statement 'Verify checksum binlog events.' is confusing. Fix word order to make it clear.
-
Julius Goryavsky authored
The problem is related to performing operations without switching wsrep off. This commit fixes this and allows the disabled tests to run.
-
Julius Goryavsky authored
The problem is related to performing operations without switching wsrep off. This commit fixes this and allows the disabled tests to run.
-
- 10 Oct, 2022 3 commits
-
-
Alexander Barkov authored
Adding debug output for key and keyseg flags at ha_myisam::open() time. So now there are three points of debug output:
1. At the very end of mysql_prepare_create_table()
2. In ha_myisam::create(), after the table2myisam() call
3. In ha_myisam::open(), after the mi_open() call
mi_create(), which is called between 2 and 3, modifies the flags for some data types, so the output at 2 and 3 differs.
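An illustrative stand-in for the kind of per-key and per-segment flag dump the three debug points produce; it uses plain printf instead of the server's DBUG machinery, and Key/KeySeg with made-up flag values instead of the real KEY/HA_KEYSEG structures:

    #include <cstdio>
    #include <cstdint>
    #include <vector>

    // Simplified stand-ins; only the flags are of interest here.
    struct KeySeg { uint32_t flag; };
    struct Key    { uint32_t flags; std::vector<KeySeg> segments; };

    static void dump_key_flags(const char *where, const std::vector<Key> &keys)
    {
      for (size_t k = 0; k < keys.size(); k++)
      {
        std::printf("%s: key %zu flags=0x%x\n", where, k, keys[k].flags);
        for (size_t s = 0; s < keys[k].segments.size(); s++)
          std::printf("  seg %zu flag=0x%x\n", s, keys[k].segments[s].flag);
      }
    }

    int main()
    {
      std::vector<Key> keys{{0x8001, {{0x10}, {0x00}}}};
      dump_key_flags("mysql_prepare_create_table", keys); // point 1
      dump_key_flags("ha_myisam::create",          keys); // point 2
      dump_key_flags("ha_myisam::open",            keys); // point 3 (after mi_open)
      return 0;
    }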
-
Marko Mäkelä authored
-
Marko Mäkelä authored
A previous fix in commit efd8af53 failed to cover ALTER TABLE. PageBulk::isSpaceAvailable(): Check for record heap number overflow.
-
- 09 Oct, 2022 6 commits
-
-
Anel Husakovic authored
Reviewer: <wlad@mariadb.com>
-
Jan Lindström authored
MDEV-29707 : Incorrect/bad errno on enabling wsrep_on after setting dummy wsrep_provider on non-Galera build

Fix the error message to contain the correct errno. This commit was tested interactively because mtr will notice if you provide a wrong wsrep_provider in the config, and wsrep_provider may not be changed dynamically.
-
Jan Lindström authored
MDEV-25389 : Assertion `!is_thread_specific || (mysqld_server_initialized && thd)' failed in void my_malloc_size_cb_func(long long int, my_bool)

If wsrep slave thread creation fails for some reason, we need to handle this error correctly and update the count of actually running slave threads accordingly.
-
Jan Lindström authored
MDEV-26597 : Assertion `!wsrep_has_changes(thd) || (thd->lex->sql_command == SQLCOM_CREATE_TABLE && !thd->is_current_stmt_binlog_format_row())' failed.

If repl.max_ws_size is set too low, the following CREATE TABLE could fail during commit. In this case wsrep_commit_empty should allow rolling it back if the provider state is s_aborted. Furthermore, the original ER_ERROR_DURING_COMMIT does not really tell the user anything useful. Therefore, this commit adds a new error, ER_TOO_BIG_WRITESET. This changes the output of some test cases.
-
Jan Lindström authored
MDEV-27123 : auto_increment_increment and auto_increment_offset reset to 1 in current session after alter table on auto-increment column

The problem was that during ALTER TABLE execution the variables were set to 1 even when wsrep_auto_increment_control is OFF. We should set them only when wsrep_auto_increment_control is ON.
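A minimal sketch of the corrected guard; SessionVars and alter_table_set_auto_inc are hypothetical names, the real code adjusts the session's auto_increment_increment and auto_increment_offset inside the ALTER TABLE path only when Galera's control is enabled:

    #include <cstdio>

    // Simplified stand-in for the session variables involved.
    struct SessionVars
    {
      unsigned long auto_increment_increment = 4;
      unsigned long auto_increment_offset    = 3;
    };

    static void alter_table_set_auto_inc(SessionVars &vars,
                                         bool wsrep_auto_increment_control)
    {
      if (!wsrep_auto_increment_control)
        return;                          // OFF: leave the user's settings untouched
      vars.auto_increment_increment = 1; // only managed by wsrep when control is ON
      vars.auto_increment_offset    = 1;
    }

    int main()
    {
      SessionVars vars;
      alter_table_set_auto_inc(vars, false); // control OFF: values stay 4 and 3
      std::printf("%lu %lu\n", vars.auto_increment_increment,
                  vars.auto_increment_offset);
      return 0;
    }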
-
Jan Lindström authored
In the test, the user has set WSREP_ON=OFF. This causes streaming replication recovery to fail, which led to a call to unireg_abort(). However, this call is not necessary and we can let the transaction fail. Naturally, if a real user does this, they need to bootstrap their cluster.
-
- 07 Oct, 2022 5 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
This message would always have been invoked on ptr=NULL.
-
Marko Mäkelä authored
-
- 06 Oct, 2022 5 commits
-
-
Aleksey Midenkov authored
-
Jan Lindström authored
-
Jan Lindström authored
Do not allow setting wsrep_on=ON if no provider is set.
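A rough sketch of the new rule; wsrep_on_check is a hypothetical helper, the real check belongs to the wsrep_on system variable handling, and "none" is assumed to be the placeholder value of wsrep_provider when no library is configured:

    #include <cstdio>
    #include <cstring>

    // Sketch only: reject SET wsrep_on=ON when no real provider is configured.
    static bool wsrep_on_check(bool new_value, const char *wsrep_provider)
    {
      if (new_value &&
          (wsrep_provider == nullptr || !std::strcmp(wsrep_provider, "none")))
        return false;  // refuse: no provider is set
      return true;
    }

    int main()
    {
      std::printf("%d\n", wsrep_on_check(true, "none"));                      // 0: rejected
      std::printf("%d\n", wsrep_on_check(true, "/usr/lib/libgalera_smm.so")); // 1: accepted
      return 0;
    }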
-
Marko Mäkelä authored
-
Marko Mäkelä authored
This test is not slow, but it reliably produces an EXPLAIN difference (number of rows) on the Valgrind builder. A possible explanation could be that the purge threads are not being scheduled, because Valgrind executes all threads within a single thread.
-
- 05 Oct, 2022 2 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-