- 15 Aug, 2019 1 commit
-
-
Jan Lindström authored
-
- 14 Aug, 2019 2 commits
-
-
Sujatha authored
Problem:
=======
A DROP TABLE IF EXISTS statement was killed. The table still exists on the master, but the DDL was nevertheless written to the binary log.

Analysis:
=========
During the execution of the DROP TABLE command, "ha_delete_table" is invoked to delete the table. If the query is killed at this point, the kill is not handled within the code. This results in two issues:
1) A table which was not dropped is still written into the binary log.
2) The code continues executing even after receiving 'KILL QUERY'.

Fix:
===
Upon receiving the KILL command, the query should stop its current execution. Tables which were successfully dropped prior to the KILL command should be included in the binary log.
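A minimal sketch of the intended control flow, assuming illustrative helpers (drop_one_table(), binlog_dropped_so_far()) around the real thd->killed flag and ER_QUERY_INTERRUPTED error code; this is not the literal patch:

    // Sketch only: loop over the tables of a multi-table DROP.
    int error= 0;
    for (TABLE_LIST *table= tables; table; table= table->next_local)
    {
      if (thd->killed)                    // KILL QUERY arrived
      {
        error= ER_QUERY_INTERRUPTED;      // stop execution right here
        break;
      }
      error= drop_one_table(thd, table);  // ends in ha_delete_table()
    }
    binlog_dropped_so_far(thd);  // log only what was actually dropped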
-
Aleksey Midenkov authored
If there are multiple row versions in InnoDB, reading one row from the PK may have O(N) complexity and reading from a secondary key may have O(N^2) complexity. The problem occurs when there are many pending versions of the same row, meaning that the primary key is the same but a secondary key is different. The slowdown occurs when the secondary index is traversed. This patch creates a helper class for the function row_sel_get_clust_rec_for_mysql() which can remember and re-use cached_clust_rec & cached_old_vers so that rec_get_offsets() does not need to be called over and over for the clustered record.

Corrections by Kevin Lewis <kevin.lewis@oracle.com>

MDEV-20341 Unstable innodb.innodb_bug14704286
Removed a test that tested the ability to interrupt a long query which is no longer long.
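The shape of that helper, as a hedged sketch; the class and member names come from the commit message, everything else is approximate:

    // Cache the last clustered record and the old version built for
    // it, so rec_get_offsets() is not recomputed for every secondary
    // index entry pointing at the same row.
    class Row_sel_get_clust_rec_for_mysql
    {
      const rec_t *cached_clust_rec;  // last clustered record seen
      rec_t       *cached_old_vers;   // old version built for it
    public:
      Row_sel_get_clust_rec_for_mysql()
        : cached_clust_rec(NULL), cached_old_vers(NULL) {}
      // operator() wraps the old function; when the incoming record
      // equals cached_clust_rec, cached_old_vers can be returned
      // without another rec_get_offsets() round trip.
    };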
-
- 13 Aug, 2019 10 commits
-
-
Sergei Petrunia authored
Enable the rocksdb test suite. It now passes the valgrind tests.
-
Sergei Petrunia authored
-
Sergei Petrunia authored
- Include the valgrind suppressions from the FB upstream
- Use HAVE_Valgrind, not HAVE_Purify (like the rest of MariaDB code does)

The call to DisownData() is now actually disabled under Valgrind.
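A hedged sketch of the resulting guard (the macro is spelled HAVE_valgrind in the tree; the call site around it is illustrative):

    #ifndef HAVE_valgrind
      // Skipped under Valgrind: DisownData() would hide these
      // allocations from leak tracking.
      cache->DisownData();
    #endif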
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
Use DEBUG_SYNC to hang the execution at the interesting point, and then kill and restart the server externally. This will also work with Valgrind. DBUG_SUICIDE() causes Valgrind to hang, and it could also cause uninteresting reports about memory leaks.

While we are at it, let us clean up innodb.innodb_bulk_create_index_debug so that it will actually test the desired functionality also in future versions (with instant ADD COLUMN and DROP COLUMN) and avoid some unnecessary restarts.

We are adding two DEBUG_SYNC points for ALTER TABLE, because there were none that would be executed right before ha_commit_trans().
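For context, a hedged sketch of such a sync point; the point name here is invented:

    // Server side, right before the commit step of ALTER TABLE:
    DEBUG_SYNC(thd, "alter_table_before_commit");

    // The test side then runs roughly:
    //   SET DEBUG_SYNC='alter_table_before_commit SIGNAL halted WAIT_FOR go';
    // and kills and restarts the server once 'halted' is reported.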
-
Jan Lindström authored
Galera threads were not registered with the performance schema: they used pthread_create where mysql_thread_create should have been used. Added a test case to verify that the current Galera performance schema instrumentation works.
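The pattern in question, sketched with illustrative names (key_galera_writer, galera_writer_thread); mysql_thread_create takes a PSI instrumentation key in front of the usual pthread_create arguments:

    // Before (invisible to the performance schema):
    //   pthread_create(&thd_id, NULL, galera_writer_thread, arg);
    pthread_t thd_id;
    mysql_thread_create(key_galera_writer,  // registered PSI_thread_key
                        &thd_id, NULL,      // handle, default attributes
                        galera_writer_thread, arg);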
-
Jan Lindström authored
Test changes only.
-
Marko Mäkelä authored
Skip the test on big-endian systems. In MariaDB Server 10.0 and 10.1 (as well as MySQL 5.6), the implementation of innodb_checksum_algorithm=crc32 wrongly assumes little-endian byte order.
-
Jan Lindström authored
Fix an incorrect "else" that should have been "else if".
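For illustration, the difference this class of bug makes (condition and function names are hypothetical):

    if (a)
      handle_a();
    else if (b)   // correct: handle_b() only when !a && b
      handle_b();

    if (a)
      handle_a();
    else          // bug: handle_b() runs for every !a; b is ignored
      handle_b();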
-
- 12 Aug, 2019 7 commits
-
-
Marko Mäkelä authored
MDEV-17614 flags INSERT…ON DUPLICATE KEY UPDATE unsafe for statement-based replication when there are multiple unique indexes. This correctly fixes something whose attempted fix in MySQL 5.7 in mysql/mysql-server@c93b0d9a972cb6f98fd445f2b69d924350f9128a caused lock conflicts. That change was reverted in MySQL 5.7.26 in mysql/mysql-server@066b6fdd433aa6673622341f1a2f0a3a20018043 (with a substantial amount of other changes).

In MDEV-17073 we already disabled the unfortunate MySQL change when statement-based replication was not being used. Now, thanks to MDEV-17614, we can actually remove the change altogether.

This reverts commit 8a346f31 (MDEV-17073) and mysql/mysql-server@c93b0d9a972cb6f98fd445f2b69d924350f9128a while keeping the test cases.
-
Marko Mäkelä authored
-
Monty authored
- mysqltest didn't free read_command_buf
- wait_for_slave_param wrote different things to the log if valgrind was used
- Table open cache should not write the initial variable value, as it can depend on the configuration or on whether valgrind is used
- A variable in GetResult was used uninitialized
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
- 09 Aug, 2019 2 commits
-
-
Sergei Petrunia authored
... produces "bytes lost" warnings

When rocksdb_validate_update_cf_options() returns an error, the update won't happen. Free the copy of the string in this case.
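A hedged sketch of the fix; the real validate function is a system-variable check callback, so the call shape here is simplified for illustration:

    char *new_val= my_strdup(str, MYF(MY_WME)); // copy kept on success
    if (rocksdb_validate_update_cf_options(new_val))
    {
      my_free(new_val);  // the fix: free the copy on the error path
      return HA_EXIT_FAILURE;
    }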
-
Sachin authored
Problem:
=======
When MySQL executes INSERT ... ON DUPLICATE KEY UPDATE, the storage engine checks whether the inserted row would generate a duplicate-key error. If yes, it returns the existing row to mysql; mysql updates it and sends it back to the storage engine. When the table has more than one unique or primary key, this statement is sensitive to the order in which the storage engine checks the keys. Depending on this order, the storage engine may return different rows to mysql, and hence mysql can update different rows. The order in which the storage engine checks keys is not deterministic. For example, InnoDB checks keys in an order that depends on the order in which indexes were added to the table: the first added index is checked first. So if master and slave have added their indexes in different orders, the slave may go out of sync.

Solution:
========
Make INSERT ... ON DUPLICATE KEY UPDATE unsafe under statement or mixed binlog format when there is more than one unique key, with two exceptions:
1. The auto-increment key is not counted, because InnoDB will take a gap lock for the failed insert and a concurrent insert will get the next increment value. However, if the user supplies the auto-increment value explicitly, it can be unsafe.
2. Only unique keys for which insertion is performed are counted.

This patch also addresses MySQL bug #72921.
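A hedged sketch of the key-counting rule, reusing real server names (HA_NOSAME, next_number_index, set_stmt_unsafe) in simplified logic; exception 2 (counting only the keys the insert touches) is omitted for brevity, and this is not the literal patch:

    uint unique_keys= 0;
    for (uint i= 0; i < table->s->keys; i++)
    {
      if (!(table->key_info[i].flags & HA_NOSAME))
        continue;                       // not a unique key
      if (i == table->s->next_number_index)
        continue;                       // auto-increment exception (1)
      unique_keys++;
    }
    if (unique_keys > 1)
      thd->lex->set_stmt_unsafe(LEX::BINLOG_STMT_UNSAFE_INSERT_TWO_KEYS);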
-
- 08 Aug, 2019 5 commits
-
-
Monty authored
MDEV-17717 Assertion `!table->pos_in_locked_tables' failed in tc_release_table on flushing RocksDB table under SERIALIZABLE
MDEV-17998 Deadlock and eventual Assertion `!table->pos_in_locked_tables' failed in tc_release_table on KILL_TIMEOUT
MDEV-19591 Assertion `!table->pos_in_locked_tables' failed in tc_release_table upon altering table into S3 under lock

The problem was that thd->open_tables->pos_in_locked_tables was not reset when ALTER TABLE failed to reopen a locked table.
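A hedged sketch of the repair; the failure-path placement and the reopen_error name are illustrative:

    if (reopen_error && thd->open_tables)
      // Clear the back-pointer so that a later tc_release_table()
      // does not trip the !table->pos_in_locked_tables assertion.
      thd->open_tables->pos_in_locked_tables= NULL;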
-
Monty authored
- pcretest.c could use a macro with a side effect
- maria_chk could access freed memory
- Initialized some variables that could be accessed uninitialized
- Fixed a compiler warning in my_atomic-t.c
-
Monty authored
-
Monty authored
-
Eugene Kosov authored
-
- 07 Aug, 2019 2 commits
-
-
Vlad Lesin authored
The general reason why the InnoDB redo log file is limited to 512G is that log_block_convert_lsn_to_no() returns a value limited to 1G. But there is no need for unique log block numbers in a log group. The fix removes the 512G limit and limits the log group size to (uint32_t maximum value) * (minimum page size), which, in turn, can be removed if fil_io() is no longer used for InnoDB redo log I/O.
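The arithmetic behind the old limit, roughly as the function stood before the fix (OS_FILE_LOG_BLOCK_SIZE is 512 bytes):

    ulint log_block_convert_lsn_to_no(lsn_t lsn)
    {
      // Only the low 30 bits survive, so unique block numbers cover
      // at most 2^30 blocks * 512 bytes = 512G of redo log.
      return (ulint) ((lsn / OS_FILE_LOG_BLOCK_SIZE) & 0x3FFFFFFFUL) + 1;
    }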
-
Thirunarayanan Balathandayuthapani authored
- The commit ab6dd774 wrongly set a condition inside innobase_srv_conc_enter_innodb(): the problem is that InnoDB makes a replication slave thread sleep indefinitely.

Thanks to Sujatha Sivakumar for contributing the replication test case.
-
- 06 Aug, 2019 1 commit
-
-
Eugene Kosov authored
-
- 05 Aug, 2019 3 commits
-
-
Eugene Kosov authored
-
Eugene Kosov authored
Non-owning reference to elements.

Use it as a function argument instead of a pointer+size pair or instead of const std::vector<T>. Do not use it for strings!

More info is here: http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines or just search for it.
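A minimal sketch of such a span type, assuming the class in the tree differs in details:

    #include <cstddef>  // size_t

    // Non-owning view over a contiguous array: a pointer plus a
    // length, no allocation, cheap to copy and pass by value.
    template <typename T>
    class span
    {
      T      *m_data;
      size_t  m_size;
    public:
      span(T *data, size_t size) : m_data(data), m_size(size) {}
      T *data() const  { return m_data; }
      T *begin() const { return m_data; }
      T *end() const   { return m_data + m_size; }
      size_t size() const { return m_size; }
      T &operator[](size_t i) const { return m_data[i]; }
    };

    // Usage: void scan(span<const unsigned char> buf); called as
    // scan(span<const unsigned char>(v.data(), v.size())).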
-
Sujatha authored
MDEV-18930: Failed CREATE OR REPLACE TEMPORARY not written into binary log makes data on master and slave diverge

Problem:
=======
A failed CREATE OR REPLACE TEMPORARY TABLE statement which dropped the table but failed at a later stage of the creation of the temporary table is not written to the binary log in row-based replication. This causes the slave to diverge.

Analysis:
========
CREATE OR REPLACE statements work as shown below.

CREATE OR REPLACE TABLE table_name (a int);

is basically the same as:

DROP TABLE IF EXISTS table_name;
CREATE TABLE table_name (a int);

Hence every CREATE OR REPLACE TABLE command which dropped the table should be written to the binary log, even when the following CREATE TABLE part fails. In order to achieve this, during the execution of a CREATE OR REPLACE command, when a table is dropped the 'thd->log_current_statement' flag is set. When table creation results in an error within 'mysql_create_table', the error handling part looks for this flag. If it is set, the failed CREATE OR REPLACE statement is written into the binary log in spite of the error. This ensures that the slave doesn't diverge from the master.

In case of row-based replication the error handling code returns very early if the table is temporary. This is done based on the assumption that temporary tables are not replicated in row-based replication. It fails to handle the case where a temporary table was created under statement-based replication at an earlier stage and the binary log format was changed to row because of an unsafe statement. In this case, when a CREATE OR REPLACE statement is executed on this temporary table, it will be dropped but the query will not be written to the binary log. Hence the slave diverges.

Fix:
===
In the error handling code, check the return status of the create table operation. If it is successful, the replication mode is row-based, and the table is temporary, then return. Otherwise proceed further to the code which checks the thd->log_current_statement flag and does the appropriate logging.
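A hedged sketch of the corrected error path; the condition names approximate the description above and this is not the literal server code:

    // On error inside mysql_create_table(): take the early-return
    // path for a temporary table under row format only when the
    // create itself succeeded, i.e. nothing was dropped.
    if (!create_failed && thd->is_current_stmt_binlog_format_row() &&
        create_info->tmp_table())
      return;
    // Otherwise fall through: a CREATE OR REPLACE that dropped a
    // table has set thd->log_current_statement, and the failed
    // statement is binlogged so the slave stays in sync.
    if (thd->log_current_statement)
      binlog_failed_create(thd);   // illustrative helper name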
-
- 04 Aug, 2019 2 commits
-
-
Sergei Petrunia authored
A combination of:
* lots of include'd test files where each has "--source include/have_rocksdb.inc"
* for each such occurrence, MTR adds the testsuite's arguments to the server arguments
* which hits a limit on the length of the argv array on Windows, causing the server to get garbage data in the last argument.

Work around this by commenting out one of the totally redundant "source include/have_rocksdb.inc" lines.
-
Sergei Petrunia authored
- Fix the LooseScan code to support storage engines that return HA_ERR_END_OF_FILE if the index scan goes out of the provided range bounds
- Add a DBUG_EXECUTE_IF("force_group_by", ...) to allow a test to force a LooseScan
- Adjust the rocksdb.group_min_max test not to use features not present in MariaDB 10.2 (e.g. optimizer_trace; in MariaDB 10.4 it is present, but it doesn't meet the assumptions that the test makes about it)
- Adjust the test result file:
  = MariaDB doesn't support the "Enhanced Loose Scan" that FB/MySQL has
  = MariaDB has different cost calculations
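A hedged sketch of the first item, built around a real handler call; the surrounding names are illustrative:

    int res= file->ha_index_read_map(table->record[0], key_buf,
                                     keypart_map, HA_READ_KEY_EXACT);
    // Some engines (e.g. MyRocks) report leaving the range bounds as
    // HA_ERR_END_OF_FILE rather than HA_ERR_KEY_NOT_FOUND; LooseScan
    // must treat both as "no match, move to the next group".
    if (res == HA_ERR_END_OF_FILE)
      res= HA_ERR_KEY_NOT_FOUND;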
-
- 01 Aug, 2019 1 commit
-
-
Eugene Kosov authored
Help the user distinguish between a space ID and a page number.
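One hedged way to make the distinction explicit, as a standalone sketch (InnoDB's actual page identifier type differs in details):

    #include <cstdint>
    #include <ostream>

    // Print both components by name so that "space 5, page 7" can
    // never be misread as a bare "page 5".
    struct page_id
    {
      uint32_t space;    // tablespace ID
      uint32_t page_no;  // page number within that tablespace
    };

    std::ostream &operator<<(std::ostream &out, const page_id &id)
    {
      return out << "[space=" << id.space
                 << ", page=" << id.page_no << "]";
    }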
-
- 31 Jul, 2019 4 commits
-
-
Daniel Bartholomew authored
-
Daniel Bartholomew authored
-
Daniel Bartholomew authored
-
Anel Husakovic authored
-