- 22 Aug, 2017 3 commits
-
-
Vladislav Vaintroub authored
accept the proxy protocol header from client connections. The new server variable 'proxy_protocol_networks' contains a list of networks from which the proxy header is accepted.
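A minimal configuration sketch of this variable; the network values below are illustrative assumptions, not part of the commit:
    [mysqld]
    # accept the proxy protocol header only from these client networks (example values)
    proxy_protocol_networks=127.0.0.1,192.168.0.0/16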
-
Alexander Barkov authored
-
Alexander Barkov authored
-
- 20 Aug, 2017 2 commits
-
-
Sergei Golubchik authored
-
Igor Babaev authored
platform independent.
-
- 19 Aug, 2017 1 commit
-
-
Igor Babaev authored
It allows pushing conditions into derived tables with window functions not only when the window specifications of these window functions use the same partition, but also when the window functions use partitions that share only some fields. In these cases only the conditions over the common fields are pushed.
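A hedged SQL sketch of the partial-overlap case; table and column names are illustrative assumptions:
    -- the two window functions partition by (a, b) and by (a): only field a is common
    SELECT * FROM
      (SELECT a, b,
              SUM(c) OVER (PARTITION BY a, b) AS s,
              MAX(c) OVER (PARTITION BY a)    AS m
       FROM t1) AS dt
    WHERE dt.a = 10 AND dt.b = 20;
    -- only the condition on the common field a (dt.a = 10) can be pushed into dt;
    -- the condition on b still has to be checked outside the derived table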
-
- 18 Aug, 2017 2 commits
-
-
Alexander Barkov authored
-
Alexander Barkov authored
-
- 17 Aug, 2017 2 commits
-
-
Alexander Barkov authored
-
Alexander Barkov authored
-
- 16 Aug, 2017 3 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
With MDEV-12288 and MDEV-13536, the InnoDB purge threads will access pages more often, causing all sorts of debug assertion failures in the B-tree code. Work around this problem by amending the corruption tests with --innodb-purge-rseg-truncate-frequency=1 --skip-innodb-fast-shutdown so that everything will be purged before the server is restarted to deal with the corruption.
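A hedged illustration of the workaround described above, as extra server options in a test's .opt file (the file name some_corruption_test.opt is a hypothetical placeholder):
    --innodb-purge-rseg-truncate-frequency=1
    --skip-innodb-fast-shutdown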
-
Marko Mäkelä authored
This should have been part of MDEV-12288.
trx_undo_t::del_marks: Remove. Purge needs to process all undo log records in order to reset the DB_TRX_ID. Before MDEV-12288, it sufficed to delete only the purgeable delete-marked records and to ignore other undo log records.
trx_rseg_t::needs_purge: Renamed from trx_rseg_t::last_del_marks. Indicates whether a rollback segment needs to be processed by purge.
TRX_UNDO_NEEDS_PURGE: Renamed from TRX_UNDO_DEL_MARKS. Indicates whether a rollback segment needs to be processed by purge. This will be 1 until trx_purge_free_segment() has been invoked.
row_purge_record_func(): Set the is_insert flag for TRX_UNDO_INSERT_REC, so that the DB_ROLL_PTR will match in row_purge_reset_trx_id().
trx_purge_fetch_next_rec(): Add a comment about row_purge_record_func() going to set the is_insert flag.
trx_purge_read_undo_rec(): Always attempt to read the undo log record.
trx_purge_get_next_rec(): Do not skip any undo log records. Even when no clustered index record is going to be removed, we may want to reset some DB_TRX_ID,DB_ROLL_PTR.
trx_undo_rec_get_cmpl_info(), trx_undo_rec_get_extern_storage(): Remove.
trx_purge_add_undo_to_history(): Set the TRX_UNDO_NEEDS_PURGE flag so that the resetting will work on undo logs that were originally created before MDEV-12288 (MariaDB 10.3.1).
trx_undo_roll_ptr_is_insert(), trx_purge_free_segment(): Cleanup (should be no functional change).
-
- 15 Aug, 2017 14 commits
-
-
Igor Babaev authored
-
Alexander Barkov authored
-
Alexander Barkov authored
Recording more tests for MDEV-13500 (sql_mode=ORACLE: can't create a virtual column with function MOD). Some affected tests require --big-test; they were forgotten in the main patch.
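A hedged SQL sketch of the feature being tested; the table and column names are illustrative assumptions:
    SET sql_mode=ORACLE;
    -- a virtual (generated) column defined with the MOD function, per MDEV-13500
    CREATE TABLE t1 (a INT, b INT, c INT GENERATED ALWAYS AS (MOD(a, b)) VIRTUAL);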
-
Alexander Barkov authored
-
Alexander Barkov authored
-
Alexander Barkov authored
-
Alexander Barkov authored
-
Alexander Barkov authored
-
Alexander Barkov authored
-
Alexander Barkov authored
-
Alexander Barkov authored
-
Alexander Barkov authored
-
Alexander Barkov authored
-
Igor Babaev authored
Corrected an assertion in the constructor for the class Sys_var_flagset.
-
- 14 Aug, 2017 3 commits
-
-
Elena Stepanova authored
-
Elena Stepanova authored
-
Alexander Barkov authored
Fixing Item_func_mod::print() to print "arg1 MOD arg2" instead of "arg1 % arg2"
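A hedged example of where the printed form becomes visible; the view, table, and column names are illustrative assumptions:
    SET sql_mode=ORACLE;
    CREATE OR REPLACE VIEW v1 AS SELECT MOD(a, b) AS m FROM t1;
    SHOW CREATE VIEW v1;
    -- after this change the stored view definition should show the expression
    -- as "a MOD b" rather than "a % b"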
-
- 13 Aug, 2017 2 commits
-
-
Igor Babaev authored
-
Igor Babaev authored
with window functions (mdev-10855). This patch just modified the function pushdown_cond_for_derived() to support this feature. Some test cases demonstrating this optimization were added to derived_cond_pushdown.test.
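A hedged SQL sketch of the basic optimization; table and column names are illustrative assumptions:
    SELECT * FROM
      (SELECT a, SUM(b) OVER (PARTITION BY a) AS s FROM t1) AS dt
    WHERE dt.a = 5;
    -- the condition dt.a = 5 is over the PARTITION BY field, so it can be pushed
    -- into the derived table and applied while reading t1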
-
- 11 Aug, 2017 6 commits
-
-
halfspawn authored
-
Marko Mäkelä authored
If the server is upgraded from a database that was created before MDEV-12288, and if the undo logs in the database contain an incomplete transaction that performed an INSERT operation, the server would crash when rolling back that transaction.
trx_commit_low(): Relax a too strict assertion. This function will also be called after completing the rollback of a recovered transaction.
trx_purge_add_undo_to_history(): Merged from the functions trx_purge_add_update_undo_to_history() and trx_undo_update_cleanup(), which are removed. Remove the parameter undo_page, and instead call trx_undo_set_state_at_finish() to obtain it.
trx_write_serialisation_history(): Treat undo and old_insert equally. That is, after the rollback (or XA COMMIT) of a recovered transaction from before the upgrade, move all logs (both insert_undo and update_undo) to the purge queue.
-
Alexey Botchkov authored
Conflicts:
    sql/item_cmpfunc.cc
    storage/innobase/buf/buf0flu.cc
    storage/innobase/include/ut0stage.h
    storage/innobase/row/row0upd.cc
-
Alexey Botchkov authored
-
Alexey Botchkov authored
JSON_EXTRACT behaves in a specific way in comparisons, so we have to implement a dedicated method for it in Arg_comparator.
Conflicts:
    sql/item_cmpfunc.cc
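A hedged SQL sketch of the kind of comparison involved; the table, column, and path are illustrative assumptions:
    -- JSON_EXTRACT returns a JSON fragment, so comparing it with a plain value
    -- needs JSON-aware semantics rather than an ordinary string comparison
    SELECT * FROM t1 WHERE JSON_EXTRACT(js, '$.price') = 10;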
-
Igor Babaev authored
-
- 10 Aug, 2017 2 commits
-
-
Igor Babaev authored
developed to cover the case of mdev-13389: "Optimization for equi-joins of derived tables with window functions".
-
Igor Babaev authored
"Optimization for equi-joins of derived tables with GROUP BY" should be considered rather as a 'proof of concept'. The task itself is targeted at an optimization that employs re-writing equi-joins with grouping derived tables / views into lateral derived tables. Here's an example of such transformation: select t1.a,t.max,t.min from t1 [left] join (select a, max(t2.b) max, min(t2.b) min from t2 group by t2.a) as t on t1.a=t.a; => select t1.a,tl.max,tl.min from t1 [left] join lateral (select a, max(t2.b) max, min(t2.b) min from t2 where t1.a=t2.a) as t on 1=1; The transformation pushes the equi-join condition t1.a=t.a into the derived table making it dependent on table t1. It means that for every row from t1 a new derived table must be filled out. However the size of any of these derived tables is just a fraction of the original derived table t. One could say that transformation 'splits' the rows used for the GROUP BY operation into separate groups performing aggregation for a group only in the case when there is a match for the current row of t1. Apparently the transformation may produce a query with a better performance only in the case when - the GROUP BY list refers only to fields returned by the derived table - there is an index I on one of the tables T used in FROM list of the specification of the derived table whose prefix covers the the fields from the proper beginning of the GROUP BY list or fields that are equal to those fields. Whether the result of the re-writing can be executed faster depends on many factors: - the size of the original derived table - the size of the table T - whether the index I is clustering for table T - whether the index I fully covers the GROUP BY list. This patch only tries to improve the chosen execution plan using this transformation. It tries to do it only when the chosen plan reaches the derived table by a key whose prefix covers all the fields of the derived table produced by the fields of the table T from the GROUP BY list. The code of the patch does not evaluates the cost of the improved plan. If certain conditions are met the transformation is applied.
-