- 01 Sep, 2019 5 commits

- Monty authored

- Monty authored
  This was done to be able to get information about why the embedded server
  doesn't start.

- Monty authored
  These were part of an incomplete old merge from MySQL 5.6.

- Monty authored

- Sergei Golubchik authored

- 30 Aug, 2019 2 commits

- Oleksandr Byelkin authored

- Sergei Petrunia authored
  Merge the changes to include/index_merge*.inc from the upstream. The
  changes add this command in many places:

    +if ($engine_type == RocksDB)
    +{
    +  set global rocksdb_force_flush_memtable_now=1;
    +}

  Also add it in one more place to make the test truly stable.

- 29 Aug, 2019 1 commit

- Marko Mäkelä authored

- 28 Aug, 2019 5 commits

- Marko Mäkelä authored
  Changes of PAGE_MAX_TRX_ID must be redo-logged for correctness. That was
  fixed in the InnoDB Plugin for MySQL 5.1 already.

- Marko Mäkelä authored

- Julius Goryavsky authored
  Improved handling of subdirectories in the xtrabackup-v2 SST scripts
  (similar to MDEV-18863), for more predictable test results.

- Oleksandr Byelkin authored
  MDEV-16932: ASAN heap-use-after-free in my_charlen_utf8 /
  my_well_formed_char_length_utf8 on 2nd execution of SP with ALTER trying
  to add bad CHECK.

  Perform the automatic name generation during execution (not during
  prepare). Check the result of the memory allocation operation.
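
  A minimal repro sketch of the scenario named above (hypothetical table and
  procedure names; 'b' is deliberately a column that does not exist):

    CREATE TABLE t1 (a INT);
    CREATE PROCEDURE p1()
      ALTER TABLE t1 ADD CONSTRAINT CHECK (b > 0);
    CALL p1();  -- fails: the CHECK refers to the unknown column 'b'
    CALL p1();  -- the 2nd execution used to trigger the
                -- ASAN heap-use-after-free before this fix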

- Eugene Kosov authored

- 27 Aug, 2019 5 commits

- Marko Mäkelä authored

- Marko Mäkelä authored
  Revert part of fa2a74e0.

  trx_reference(): Remove, and merge the relevant part into the only caller,
  trx_rw_is_active(). If the statements trx = NULL; were ever executed, the
  function would have dereferenced a NULL pointer and crashed in
  trx_mutex_exit(trx). Hence, those statements must have been unreachable,
  and they can be replaced with debug assertions.

  trx_rw_is_active(): Avoid unnecessary acquisition and release of
  trx->mutex when do_ref_count=false.

  lock_trx_release_locks(): Do not reset trx->id=0. Had the statement been
  necessary, we would have experienced crashes in trx_reference().

- Sujatha authored
  Cherry-picking: Bug#25135304: RBR: WRONG FIELD LENGTH IN ERROR MESSAGE
  commit 47bd3f7cf3c8518f62b1580ec65af2ba7ac13b95

  Description:
  ============
  In row-based replication, when replicating from a table with a field with
  character set utf8mb3 to the same table with the same field set to
  character set utf8mb4, a confusing error message is reported.

  For VARCHAR: VARCHAR(1) 'utf8mb3' to VARCHAR(1) 'utf8mb4':
  "Column 0 of table 'test.t1' cannot be converted from type 'varchar(3)'
  to type 'varchar(1)'"
  A similar issue exists for the CHAR type.

  Issues with respect to BLOB types:
  For BLOB: LONGBLOB to TINYBLOB - the error message displays an incorrect
  blob type:
  "Column 0 of table 'test.t1' cannot be converted from type 'tinyblob'
  to type 'tinyblob'"
  For BINARY to BINARY - the error message displays an incorrect type for
  the master-side field:
  "Column 0 of table 'test.t' cannot be converted from type 'char(1)'
  to type 'binary(10)'"
  A similar issue exists for the VARBINARY type: it is displayed as
  'VARCHAR'.

  Analysis:
  =========
  In row-based replication, charset information is not sent as part of the
  metadata from master to slave. For a VARCHAR field, its character length
  is converted into the equivalent number of octets/bytes and stored
  internally; at display time it is converted back to the original character
  length. For example: VARCHAR(2) with utf8mb3 is stored as 2*3 = VARCHAR(6),
  and at display time VARCHAR(6) with charset utf8mb3 becomes 6/3 =
  VARCHAR(2).

  At present the internally converted octet length is sent from master to
  slave without the charset information. On the slave side, if the type
  conversion fails, the 'show_sql_type' function is used to get the
  type-specific information from the metadata. Since no charset information
  is available, the field type is displayed as VARCHAR(6). This results in a
  confusing error message.

  For CHAR fields:
  CHAR(1) with utf8mb3 -> CHAR(3)
  CHAR(1) with utf8mb4 -> CHAR(4)
  The 'show_sql_type' function, which retrieves type information from the
  metadata, uses (bytes / local charset length) to get the actual character
  length. If the slave's charset is 'utf8mb4', then CHAR(3/4) -> CHAR(0) and
  CHAR(4/4) -> CHAR(1). This results in a confusing error message.

  Analysis for the BLOB type issue:
  A BLOB's length is represented in two forms.
  1. The actual length, i.e.
     (length < 256) type= MYSQL_TYPE_TINY_BLOB;
     (length < 65536) type= MYSQL_TYPE_BLOB; ...
  2. The packlength, i.e. the number of bytes used to represent the length
     of the blob: 1 - tinyblob, 2 - blob, ...
  In row-based replication, only the packlength is written to the binary
  log. On the slave side this packlength is interpreted as the actual length
  of the blob. Hence the length is always < 256 and the type is displayed as
  tinyblob.

  Analysis for the BINARY to BINARY type issue:
  The character set information is needed to identify a field's type as char
  or binary. Since the master-side character set information is not
  available on the slave side, both binary and char fields are displayed as
  char.

  Fix:
  ====
  For CHAR and VARCHAR fields, display their length in octets for both
  source and target fields. For the target field, display the charset
  information if it is relevant.
  For blob types, change the code to use the packlength and display the
  appropriate blob type in the error message.
  For binary and varbinary fields, use the slave-side character set as a
  reference to map them to binary or varbinary fields.
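
  A minimal sketch of the VARCHAR case above (hypothetical table and column
  names; on servers without the utf8mb3 alias, spell the charset utf8):

    -- on the master:
    CREATE TABLE test.t1 (c VARCHAR(1) CHARACTER SET utf8mb3);
    -- on the slave, the "same" table with a wider character set:
    CREATE TABLE test.t1 (c VARCHAR(1) CHARACTER SET utf8mb4);
    -- Before this fix, applying a row event could fail with the confusing
    -- message quoted above: one utf8mb3 character is stored as 3 octets,
    -- so the master-side field was reported as 'varchar(3)' without any
    -- charset context.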

- Jan Lindström authored

- Alexander Barkov authored
  Also fixes: MDEV-20431 GREATEST(int_col,date_col) returns wrong results
  in a view.
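
  A sketch of the kind of query affected by MDEV-20431 (hypothetical table
  and view names):

    CREATE TABLE t1 (int_col INT, date_col DATE);
    INSERT INTO t1 VALUES (1, '2019-08-27');
    CREATE VIEW v1 AS SELECT GREATEST(int_col, date_col) AS g FROM t1;
    SELECT g FROM v1;  -- could return a wrong result before this fix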

- 26 Aug, 2019 4 commits

- Julius Goryavsky authored
  After applying MDEV-18863, in some test configurations SST may fail due to
  duplication of some parameters (in particular "--port") in the main part
  of the command line and after "--mysqld-args", as well as due to incorrect
  interpretation of the "--port" parameter passed after "--mysqld-args" when
  the SST script is invoked without explicitly specifying a port for SST.
  In addition, it is necessary to correctly handle spaces, quotation marks
  and special characters when copying the original arguments from the argv[]
  array to a new command line (after "--mysqld-args"). This patch resolves
  these shortcomings.

- Vladislav Vaintroub authored

- Vladislav Vaintroub authored
  Depending on the version, "handle.exe -?" can output either "Handle v4.0"
  or "Nthandle v4.1".

- Sujatha authored
  Analysis:
  ========
  As part of the BUG#28642318 fix, two new test cases were added. The first
  test case tests a scenario with two sessions, where the first session has
  a regular table named 't1' and the other session has a temporary table
  named 't1'. The test executes a DELETE statement on the regular table.
  These statements are captured from the binary log and replayed on a new
  client connection to prove that the DELETE statement is applied
  successfully. Note that the binlog contains only the CREATE TEMPORARY
  TABLE part, hence a temporary table gets created in the new connection.
  This replaying logic is implemented using the '--exec $MYSQL' command.

  If the new connection gets disconnected within the scope of the first test
  case, the test passes, i.e. the temporary table gets dropped as part of
  thread cleanup. But on slow platforms the connection gets closed during
  the execution of test case 2. When the temporary table is dropped as part
  of thread cleanup, a "DROP TEMPORARY TABLE t1" is written into the binary
  log. In test case 2 the same sessions continue to exist and the table
  names are reused to test a new bug scenario. The additional "DROP
  TEMPORARY TABLE" command drops the tables specific to the second test,
  which results in an "Unknown table" error.

  Fix:
  ====
  Rename the table specific to the second test case to 't2'. Even if the
  connection from test case one is closed later, the drop command
  'DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `t1`' will not result in an
  error.
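
  A minimal sketch of the table-shadowing setup described above
  (hypothetical statements; the actual test script differs):

    -- session 1:
    CREATE TABLE t1 (a INT);
    -- session 2:
    CREATE TEMPORARY TABLE t1 (a INT);
    -- session 1:
    DELETE FROM t1;  -- deletes from the regular table
    -- When the replaying connection finally disconnects, the server logs
    --   DROP /*!40005 TEMPORARY */ TABLE IF EXISTS `t1`
    -- which, arriving late, used to collide with the 't1' reused by the
    -- second test case.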

- 22 Aug, 2019 3 commits

- Olivier Bertrand authored

- Marko Mäkelä authored
  Some code was duplicated near the start of the function, only for InnoDB,
  not XtraDB. This was noticed by comparing the InnoDB code between MariaDB
  and MySQL.

- Marko Mäkelä authored
  For the Sphinx storage engine, this is a functional change (bug fix):
  we will ensure that the message buffer is always NUL-terminated.

- 21 Aug, 2019 5 commits

- Julius Goryavsky authored

- Marko Mäkelä authored
  ha_innobase::open(): Always ignore problems with FOREIGN KEY constraints
  (pass DICT_ERR_IGNORE_FK_NOKEY), no matter whether foreign_key_checks is
  enabled. Instead, we must report errors when enforcing the FOREIGN KEY
  constraints. As a result of ignoring these errors, the tables will be
  loaded with dict_foreign_t objects whose foreign_index or referenced_index
  will be NULL.

  Also, pass DICT_ERR_IGNORE_FK_NOKEY instead of DICT_ERR_IGNORE_NONE to
  dict_table_open_on_id_low() in many other cases. Notably, on CREATE TABLE
  and ALTER TABLE, we will keep validating the FOREIGN KEY constraints as
  before.

  dict_table_open_on_name(): If no other flags than DICT_ERR_IGNORE_FK_NOKEY
  are set, refuse access to unreadable tables. Some encryption tests rely on
  this code path.

  For the DML code path, we used to have the problem that when one of the
  indexes was missing in dict_foreign_t, we would ignore the FOREIGN KEY
  constraint altogether. The following changes address that.

  row_ins_check_foreign_constraints(): Add the parameter pk. For the primary
  key, consider also foreign key constraints for which
  foreign->foreign_index=NULL (no underlying index is available).

  row_ins_check_foreign_constraint(): Report errors also for !check_ref.
  Remove a redundant check for srv_read_only_mode.

  row_ins_foreign_report_add_err(): Tolerate foreign->foreign_index=NULL.

- Marko Mäkelä authored
  fkerr_t: Errors for the foreign key checks. Replaces ulint, which used
  #define constants that looked like dberr_t literals.

  wsrep_dict_foreign_find_index(): Remove. Use dict_foreign_find_index()
  instead, with default parameters.

  dict_foreign_push_index_error(): Do not add redundant quotes around quoted
  table names.

- Marko Mäkelä authored

- Jan Lindström authored
  Add wait conditions, compare cardinality and other information between the
  nodes, and print something only if they differ.
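
  For reference, a minimal sketch of the usual mysqltest wait-condition
  idiom (hypothetical condition; include/wait_condition.inc is the standard
  helper):

    --let $wait_condition= SELECT COUNT(*) = 1 FROM t1
    --source include/wait_condition.inc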

- 20 Aug, 2019 10 commits

- Sergei Golubchik authored

- Sergei Golubchik authored

- Sergei Golubchik authored
  heap_scan() makes info->next_block either an integer multiple of
  share->block.records_in_block or the total number of records in the table.
  So when this total number of records changes, info->next_block needs to be
  recalculated to take it into account. This is a different fix for "Fixes a
  problem with heap when scanning and insert rows at the same time".

- Sergei Golubchik authored
  This reverts commit 262927a9.

- Marko Mäkelä authored
  Let us invoke wait_all_purged.inc right before the workload. Starting with
  MDEV-12288 in MariaDB Server 10.3, INSERT also generates purge workload.
  If we do not ensure that purge has run to completion, the results on 10.3
  and later could be nondeterministic.
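
  A sketch of the intended test pattern (hypothetical workload; the
  wait_all_purged.inc helper is the one named above, as used in the InnoDB
  test suite):

    --source include/wait_all_purged.inc
    # purge has now run to completion, so the workload below starts from a
    # deterministic state
    INSERT INTO t1 VALUES (1);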

- Aleksey Midenkov authored
  * Ensure that no background purge is done;
  * Better result, showing the actual number in case of failure;
  * Higher and safer difference threshold.

- Monty authored

- Monty authored
  This causes failures in versioning.update-big in 10.3 and above.

- Marko Mäkelä authored
  main.selectivity_innodb: Re-record the result.

  MDEV-18863: Make wsrep_sst_mariabackup use Bash due to the += syntax.

- Marko Mäkelä authored
  For MDEV-15955, the fix in create_tmp_field_from_item() would cause a
  compilation error. After a discussion with Alexander Barkov, the fix was
  omitted and only the test case was kept. In 10.3 and later, MDEV-15955 is
  fixed properly by overriding create_tmp_field() in Item_func_user_var.