- 11 Aug, 2017 2 commits
-
Alexey Botchkov authored
JSON_EXTRACT behaves in a specific way in comparisons, so a dedicated comparison method for it has to be implemented in Arg_comparator.

Conflicts:
  sql/item_cmpfunc.cc
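
A sketch of the kind of comparison this concerns; JSON_EXTRACT returns a JSON fragment, so comparing its result with a plain SQL value needs type-aware handling rather than a plain string comparison (the example values are illustrative, not from the patch):

    -- $.a extracts the JSON number 1, so the comparison should be numeric.
    SELECT JSON_EXTRACT('{"a": 1}', '$.a') = 1;
    -- $.a extracts the JSON string "x" (quoted); a JSON-aware comparator
    -- should still be able to match it against the SQL string 'x'.
    SELECT JSON_EXTRACT('{"a": "x"}', '$.a') = 'x';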
-
Igor Babaev authored
-
- 10 Aug, 2017 2 commits
-
Igor Babaev authored
Developed to cover the case of MDEV-13389: "Optimization for equi-joins of derived tables with window functions".
-
Igor Babaev authored
"Optimization for equi-joins of derived tables with GROUP BY" should be considered rather as a 'proof of concept'. The task itself is targeted at an optimization that employs re-writing equi-joins with grouping derived tables / views into lateral derived tables. Here's an example of such transformation: select t1.a,t.max,t.min from t1 [left] join (select a, max(t2.b) max, min(t2.b) min from t2 group by t2.a) as t on t1.a=t.a; => select t1.a,tl.max,tl.min from t1 [left] join lateral (select a, max(t2.b) max, min(t2.b) min from t2 where t1.a=t2.a) as t on 1=1; The transformation pushes the equi-join condition t1.a=t.a into the derived table making it dependent on table t1. It means that for every row from t1 a new derived table must be filled out. However the size of any of these derived tables is just a fraction of the original derived table t. One could say that transformation 'splits' the rows used for the GROUP BY operation into separate groups performing aggregation for a group only in the case when there is a match for the current row of t1. Apparently the transformation may produce a query with a better performance only in the case when - the GROUP BY list refers only to fields returned by the derived table - there is an index I on one of the tables T used in FROM list of the specification of the derived table whose prefix covers the the fields from the proper beginning of the GROUP BY list or fields that are equal to those fields. Whether the result of the re-writing can be executed faster depends on many factors: - the size of the original derived table - the size of the table T - whether the index I is clustering for table T - whether the index I fully covers the GROUP BY list. This patch only tries to improve the chosen execution plan using this transformation. It tries to do it only when the chosen plan reaches the derived table by a key whose prefix covers all the fields of the derived table produced by the fields of the table T from the GROUP BY list. The code of the patch does not evaluates the cost of the improved plan. If certain conditions are met the transformation is applied.
-
- 09 Aug, 2017 5 commits
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Sergei Petrunia authored
-
Marko Mäkelä authored
Disable change buffering, so that some data that was previously written to the encrypted redo log will not end up being copied to the unencrypted redo log due to change buffer merge.
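
For reference, a sketch of how a test can switch the change buffer off; innodb_change_buffering is the real server variable, and 'none'/'all' are among its values:

    -- Prevent change buffer merges from replaying data written in the
    -- encrypted-redo-log phase of the test.
    SET GLOBAL innodb_change_buffering = 'none';
    -- ... run the encryption/decryption steps ...
    SET GLOBAL innodb_change_buffering = 'all';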
-
Marko Mäkelä authored
The thd_destructor_proxy thread detects that no transactions are active and starts srv_shutdown_bg_undo_sources(), but fails to take into account that new transactions can still start, especially by the slave but also by other threads. In addition, no mutex is held when checking for active transactions, so the check is not safe.

We relax the failing InnoDB debug assertion by allowing the execution of user transactions after the purge thread has been shut down.

FIXME: If innodb_fast_shutdown=0, we should somehow guarantee that no new transactions can start after thd_destructor_proxy observed that trx_sys_any_active_transactions() did not hold.
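
For context, the slow shutdown mentioned in the FIXME is requested like this (a sketch; innodb_fast_shutdown and the SHUTDOWN statement both exist in MariaDB):

    -- innodb_fast_shutdown = 0 asks for a full purge and change buffer merge
    -- before the server exits, which is where this race is most visible.
    SET GLOBAL innodb_fast_shutdown = 0;
    SHUTDOWN;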
-
- 08 Aug, 2017 11 commits
-
Marko Mäkelä authored
row_update_for_mysql(): Remove the wrapper function and rename the function from row_update_for_mysql_using_upd_graph(). Remove the unused parameter mysql_rec.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
This is basically a duplicate or a reincarnation of MDEV-117. For some reason, the test innodb.mdev-117 started failing in 10.2. It is uncertain when this test started failing. The test is nondeterministic, because there is a race condition between the concurrently executing DELETE IGNORE and DELETE statements.

When a deadlock is reported for DELETE IGNORE, the SQL layer would call handler::print_error() but then proceed to the next row, as if no error had happened (which is the purpose of DELETE IGNORE). So, when it proceeded to handler::ha_rnd_next(), InnoDB would hit an assertion failure, because the transaction no longer exists, and we are not executing at the start of a statement.

handler::print_error(): If thd_mark_transaction_to_rollback(thd, true) was called, clear the ME_JUST_WARNING and ME_JUST_INFO errflags, so that a note or warning will be promoted to an error if the transaction was aborted by a storage engine.
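
A minimal sketch of the racing workload, with a hypothetical table (not the actual innodb.mdev-117 test); two sessions delete overlapping rows, so one of them can be picked as the deadlock victim:

    -- Hypothetical table; both sessions scan and delete the same rows.
    CREATE TABLE t (a INT PRIMARY KEY, b INT) ENGINE=InnoDB;
    INSERT INTO t VALUES (1,1), (2,2), (3,3);

    -- Session 1: a deadlock error is demoted to a warning, and execution
    -- proceeds to the next row even though the transaction was rolled back.
    DELETE IGNORE FROM t WHERE b > 0;

    -- Session 2, concurrently: a plain DELETE competing for the same rows.
    DELETE FROM t WHERE b > 0;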
-
Alexey Botchkov authored
Added a check for duplicate keys.
-
Alexey Botchkov authored
outside. The result_limit variable wasn't always initialized in Item_func_json_array::fix_length_and_dec().
-
Marko Mäkelä authored
-
Marko Mäkelä authored
The file wait_innodb_all_purged.inc waited for InnoDB purge in a way that only worked in debug builds. The file wait_all_purged.inc provides a better mechanism.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
If the latest InnoDB redo log checkpoint was stored in the first checkpoint slot and not the second one, InnoDB would incorrectly set log_sys->log.lsn to the previous checkpoint. It is possible that this logic error did not exist before commit 86927cc7, which removed traces of multiple InnoDB redo logs, to prepare for MDEV-12548 (Mariabackup for MariaDB 10.2). In the worst case, this error could mean that InnoDB unnecessarily fails to recover from redo log when the last-but-one checkpoint was overwritten, but the last checkpoint is intact.

recv_find_max_checkpoint(), recv_find_max_checkpoint_0(): Do not overwrite the fields of log_sys->log with the information of an older checkpoint.

recv_find_max_checkpoint(): Do not return DB_SUCCESS on an error.

recv_recovery_from_checkpoint_start(): Return early if the log is in a version-tagged format but not in the latest format. (In this case, the log must be logically empty, and there is nothing to apply.)
-
Jan Lindström authored
Always read the full page 0 to determine whether the tablespace contains encryption metadata. Tablespaces that are page compressed, or page compressed and encrypted, do not compare the checksum, as it does not exist. For encrypted tables, use the checksum verification written for encrypted tables; normal tables use the normal method.

buf_page_is_checksum_valid_crc32, buf_page_is_checksum_valid_innodb, buf_page_is_checksum_valid_none: Modify innochecksum logging to file to avoid compilation warnings.

fil0crypt.cc, fil0crypt.h: Modify to be usable in the innochecksum compilation and move fil_space_verify_crypt_checksum to the end of the file. Add innochecksum logging to file.

univ.i: Add the innochecksum strict_verify, log_file and cur_page_num variables as extern.

page_zip_verify_checksum: Add innochecksum logging to file and remove unnecessary code.

innochecksum.cc: Lots of changes, most notably the ability to read encryption metadata from page 0 of the tablespace.

Added a test case where we intentionally corrupt FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION (encryption key version), FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION+4 (post encryption checksum), and FIL_DATA+10 (data).
-
Alexey Botchkov authored
The comparison was fixed to take the actual type of the JSON value into account. A bug in escaping handling was also fixed.
-
- 07 Aug, 2017 14 commits
-
Monty authored
Added an extra memcpy to get rid of a valgrind warning for sequence tables with InnoDB. When reading a row from InnoDB, some of the bytes in the row are marked as not initialized. This needs to be investigated later, but this is a safe patch for now.
-
Monty authored
Problem was that SEQUENCE::table was shared among threads, which caused several threads to use the same object at the same time.
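
A minimal sketch of the triggering pattern, assuming the SEQUENCE syntax of the 10.3 development tree (the sequence name is illustrative):

    -- One sequence, accessed from several connections at once; before the
    -- fix they shared the same SEQUENCE::table object.
    CREATE SEQUENCE s START WITH 1 INCREMENT BY 1;

    -- Run concurrently from multiple threads:
    SELECT NEXT VALUE FOR s;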
-
Monty authored
-
Alexander Barkov authored
Conflicts:
  mysql-test/r/func_json.result
  mysql-test/r/win.result
  mysql-test/t/func_json.test
  mysql-test/t/win.test
  sql/share/errmsg-utf8.txt
  storage/rocksdb/ha_rocksdb.cc
  storage/rocksdb/mysql-test/rocksdb/r/tbl_opt_data_index_dir.result
-
Kristian Nielsen authored
The problem was introduced with the InnoDB 5.7 merge, in the code related to avoiding an extra fsync at the end of commit when the binlog is enabled. The MariaDB method for this was removed, but the replacement MySQL method, based on thd_get_durability_property(), is not functional in MariaDB. This commit reverts the offending parts of the merge and adds a test case, fixing the problem for InnoDB. Other storage engines are likely to have a similar problem.
-
Sergei Petrunia authored
-
Sergei Petrunia authored
-
Sergei Petrunia authored
-
Sergei Petrunia authored
- Support first_linear_tab() traversal for degenerate joins
-
Marko Mäkelä authored
The debug flag recv_no_log_write prohibits writes of redo log records for modifying page data. The debug assertion was failing when fil_names_clear() was writing the informative MLOG_FILE_NAME and MLOG_CHECKPOINT records, which do not modify any data.

log_reserve_and_open(), log_write_low(): Remove the debug assertion.

log_pad_current_log_block(), mtr_write_log(), mtr_t::Command::prepare_write(): Add the debug assertion.
-
Marko Mäkelä authored
During InnoDB startup, change buffer merge operations are prohibited before recv_apply_hashed_log_recs(true), which performs the last phase of redo log apply. Before this call, ibuf_init_at_db_start() would be invoked, and it could trigger the debug assertion.

ibuf_init_at_db_start(): Do not declare the mini-transaction as "inside change buffer", because nothing is being written in the mini-transaction. The purpose of this function is only to initialize the memory data structures from the persistent data structures.
-
Alexey Botchkov authored
Fixed the path comparison.
-
Alexey Botchkov authored
Options handling implemented for ST_AsGeoJSON.
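
A sketch of the new arguments, assuming the MySQL-compatible signature ST_AsGeoJSON(g[, max_decimal_digits[, options]]) where option bit 1 requests a bounding box:

    -- Plain output.
    SELECT ST_AsGeoJSON(ST_GeomFromText('POINT(5.33333 7.66666)'));
    -- Limit coordinates to 5 decimal digits and (assumed) add a bounding box.
    SELECT ST_AsGeoJSON(ST_GeomFromText('POINT(5.33333 7.66666)'), 5, 1);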
-
Daniel Black authored
Also removed clang-3.9.

Signed-off-by: Daniel Black <daniel.black@au.ibm.com>
-
- 06 Aug, 2017 4 commits
-
Sergei Petrunia authored
-
Sergei Petrunia authored
-
Sergei Petrunia authored
It may produce test failures like this because of non-deterministic cost calculations:

  -1 SIMPLE t1 # col1 col1 259 NULL # Using where
  +1 SIMPLE t1 # col1 NULL NULL NULL # Using where
-
Alexey Botchkov authored
Implement the 'option' argument for ST_GeomFromGeoJSON.
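
A sketch of the argument, assuming the MySQL-compatible semantics where the option controls documents with more than two coordinate dimensions (1 = reject, 2-4 = accept and ignore the extra dimensions):

    -- Basic usage without the option.
    SELECT ST_AsText(ST_GeomFromGeoJSON('{"type": "Point", "coordinates": [1.5, 2.5]}'));
    -- With the (assumed) option: accept a 3-dimensional point, keeping x and y.
    SELECT ST_AsText(ST_GeomFromGeoJSON('{"type": "Point", "coordinates": [1.5, 2.5, 3]}', 3));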
-
- 05 Aug, 2017 1 commit
-
Sergei Petrunia authored
-
- 04 Aug, 2017 1 commit
-
Alexander Barkov authored
-