- 16 Jul, 2021 1 commit
-
-
Vladislav Vaintroub authored
Also, remove the comparison lsn > flush/write lsn prior to calling log_write_up_to(): those checks and early returns are already part of that function.
-
- 15 Jul, 2021 2 commits
-
-
Oleksandr Byelkin authored
The problem is that array binding uses the net buffer to read parameters for each execution, while each execution with RETURNING writes into the same buffer. The solution is to allocate a new net buffer, so that we do not modify the buffer we are still reading from.
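A minimal sketch of the statement shape involved; the table name is illustrative, and the array binding itself happens in the client's bulk prepared-statement API rather than in SQL:

    CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, v INT);
    -- Prepared once, then executed with an array of values bound for ?;
    -- each execution returns a RETURNING result set while the server is
    -- still reading the remaining parameter rows from the net buffer.
    INSERT INTO t1 (v) VALUES (?) RETURNING id;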
-
Rucha Deodhar authored
This reverts commit f88d130e.
-
- 12 Jul, 2021 1 commit
-
-
Rucha Deodhar authored
option which is called from mysql.server (extra_args). Fix: change the mysql.server script to use --defaults-extra-file instead of -e.
-
- 07 Jul, 2021 1 commit
-
-
Sergei Petrunia authored
Port the following patch from MySQL:

commit 1b2e8ea269c80cb93cc79d8be934c40b1c58e947
Author: Kailasnath Nagarkar <kailasnath.nagarkar@oracle.com>
Date: Fri Nov 30 16:43:13 2018 +0530

Bug #20939184: INNODB: UNLOCK ROW COULD NOT FIND A 2 MODE LOCK ON THE RECORD

Issue:
Consider tables t1 and t2 such that t1 has multiple rows and the join condition for t1 left join t2 yields only a single row from t2. In this case, access to table t2 is const, since a single row qualifies the join condition. However, while executing the query, an attempt is made to unlock t2's row multiple times.

The current algorithm to fetch rows approximates to:
1) Retrieve the row for t1.
2) Retrieve the row for t2.
3) Apply the join conditions.
   a) If the condition evaluates to true: project the row to the result.
   b) If the condition evaluates to false:
      i) If t2's qep_tab->not_null_complement is true, unlock t2's row.
      ii) Null-complement the row by calling evaluate_null_complemented_join_record(); in this function qep_tab->not_null_complement is set to false.

The single row of t2 that qualifies the join condition is unlocked in step i) when t1's row evaluates to false. When t1's next row also evaluates to false, another attempt is made to unlock t2's already unlocked row. This results in the following error being logged in error.log:

"[ERROR] InnoDB: Unlock row could not find a 3 mode lock on the record. Current statement: select * from t1 left join t2 ......"

Solution:
When a table's access method is "const", set the record unlock method for this table to do no operation.
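A hedged SQL sketch of the scenario described above; table names, data, and the locking read are illustrative only, chosen so that t2 is accessed as const and the join condition fails for some t1 rows:

    CREATE TABLE t1 (a INT) ENGINE=InnoDB;
    CREATE TABLE t2 (b INT PRIMARY KEY) ENGINE=InnoDB;
    INSERT INTO t1 VALUES (1), (2), (3);
    INSERT INTO t2 VALUES (1);
    BEGIN;
    -- t2 is read with the "const" access method (a single row via its
    -- PRIMARY KEY); the locking read makes InnoDB take and later release
    -- record locks. For every t1 row failing the join condition, the old
    -- code tried to unlock t2's single row again, logging the error above.
    SELECT * FROM t1 LEFT JOIN t2 ON t2.b = 1 AND t1.a > 1 FOR UPDATE;
    COMMIT;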
-
- 06 Jul, 2021 6 commits
-
-
Daniel Black authored
AIX grep doesn't support the grep -A syntax used in the test case.
-
Daniel Black authored
-
Daniel Black authored
-
Daniel Black authored
-
Daniel Black authored
--skip-stack-trace isn't there on AIX.
-
Julius Goryavsky authored
-
- 03 Jul, 2021 2 commits
-
-
Marko Mäkelä authored
buf_flush_relocate_on_flush_list(): Use dpage->physical_size() because bpage->zip.ssize may already have been zeroed in page_zip_set_size() invoked by buf_pool_t::realloc(). This would cause occasional failures of the test innodb.innodb_buffer_pool_resize, which creates a ROW_FORMAT=COMPRESSED table.
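A hedged sketch of the kind of scenario the affected test exercises: a ROW_FORMAT=COMPRESSED table combined with a buffer pool resize; names and sizes are illustrative only:

    CREATE TABLE tc (a INT PRIMARY KEY, b VARCHAR(255))
      ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=4;
    -- Resizing the buffer pool relocates pages on the flush list; for a
    -- compressed page the relocation must use the page's physical size,
    -- since bpage->zip.ssize may already have been zeroed at that point.
    -- (The value is rounded to the chunk size by the server.)
    SET GLOBAL innodb_buffer_pool_size = 128 * 1024 * 1024;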
-
Marko Mäkelä authored
The replacement of buf_pool.page_hash with a different type of hash table in commit 5155a300 (MDEV-22871) introduced a race condition with buffer pool resizing. We have an execution trace where buf_pool.page_hash.array is changed to point to something else while page_hash_latch::read_lock() is executing. The same should also affect page_hash_latch::write_lock().

We fix the race condition by never resizing (and reallocating) the buf_pool.page_hash. We assume that resizing the buffer pool is a rare operation. Yes, there might be a performance regression if a server is first started up with a tiny buffer pool, which is later enlarged. In that case, the tiny buf_pool.page_hash.array could cause increased use of the hash bucket lists. That problem can be worked around by initially starting up the server with a larger buffer pool and then shrinking that, until changing to a larger size again.

buf_pool_t::resize_hash(): Remove.

buf_pool_t::page_hash_table::lock(): Do not attempt to deal with hash table resizing. If we really wanted that in a safe manner, we would probably have to introduce a global rw-lock around the operation, or at the very least, poll buf_pool.resizing, both of which would be detrimental to performance.
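A hedged sketch of the workaround mentioned above, with illustrative sizes; since buf_pool.page_hash is now sized once at startup, the idea is to start large and shrink, rather than start small and grow:

    -- my.cnf: start with a generously sized pool so that buf_pool.page_hash
    -- is created large enough:
    --   [mysqld]
    --   innodb_buffer_pool_size = 8G
    -- Then shrink at runtime and grow again later; the page_hash array is
    -- never reallocated:
    SET GLOBAL innodb_buffer_pool_size = 1 * 1024 * 1024 * 1024;
    SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;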
-
- 02 Jul, 2021 22 commits
-
-
Sergei Golubchik authored
MDEV-23004 When using GROUP BY with JSON_ARRAYAGG with joint table, the square brackets are not included

Make test results stable; follow-up for 98c7916f
-
Marko Mäkelä authored
-
Marko Mäkelä authored
In ROW_FORMATs other than REDUNDANT, the InnoDB record header size calculation depends on dict_index_t::n_core_null_bytes. In ROW_FORMAT=REDUNDANT, the record header is always 6 bytes plus n_fields or 2*n_fields bytes, depending on the maximum record size. But, during online ALTER TABLE, the log records in the temporary file always use a format similar to ROW_FORMAT=DYNAMIC, even omitting the 5-byte fixed-length part of the header.

While creating a temporary file record for a ROW_FORMAT=REDUNDANT table, InnoDB must refer to dict_index_t::n_nullable. The field dict_index_t::n_core_null_bytes is only valid for tables in formats other than ROW_FORMAT=REDUNDANT.

The bug does not affect MariaDB 10.3, because only commit 7a27db77 (MDEV-15563) allowed an ALGORITHM=INSTANT change of a NOT NULL column to NULL in a ROW_FORMAT=REDUNDANT table.

The fix was developed by Thirunarayanan Balathandayuthapani and tested by Matthias Leich. The test case was simplified by me.
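A hedged sketch of the kind of DDL involved, assuming a 10.4+ server where MDEV-15563 allows an instant NOT NULL removal on a REDUNDANT table; names are illustrative:

    CREATE TABLE tr (a INT NOT NULL, b INT NOT NULL)
      ENGINE=InnoDB ROW_FORMAT=REDUNDANT;
    -- Instantly make a column nullable: dict_index_t::n_nullable changes,
    -- while n_core_null_bytes is not meaningful for REDUNDANT tables.
    ALTER TABLE tr MODIFY b INT NULL, ALGORITHM=INSTANT;
    -- A later online rebuild logs concurrent DML to the temporary file in a
    -- DYNAMIC-like format, whose header size must be derived from n_nullable.
    ALTER TABLE tr FORCE, ALGORITHM=INPLACE, LOCK=NONE;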
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
One more result was affected by merging 768c5188.
-
Eugene Kosov authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
This is a backport of 161e4bfa.

trans_rollback_to_savepoint(): Only release metadata locks (MDL) if the storage engines agree, after the changes have already been rolled back.

Ever since commit 3792693f and mysql/mysql-server@55ceedbc3feb911505dcba6cee8080d55ce86dda we used to cheat here and always release MDL if the binlog is disabled. MDL are supposed to prevent race conditions between DML and DDL also when no replication is in use. MDL are supposed to be a superset of InnoDB table locks: an InnoDB table lock may only exist if the thread also holds MDL on the table name.

In the included test case, ROLLBACK TO SAVEPOINT would wrongly release the MDL on both tables and let ALTER TABLE proceed, even though the DML transaction is actually holding locks on the table.

Until commit 1bd681c8 (MDEV-25506) in MariaDB 10.6, InnoDB would often work around the locking violation in a blatantly non-ACID way: if locks exist on a table that is being dropped (in this case, actually a partition of a table that is being rebuilt by ALTER TABLE), InnoDB could move the table (or partition) into a queue, to be dropped after the locks and references have been released. If the lock is not released and the original copy of the table is not dropped quickly enough, a name conflict could occur on a subsequent ALTER TABLE.

The scenario of commit 3792693f is unaffected by this fix, because mysqldump would use non-locking reads, and the transaction would not be holding any InnoDB locks during the execution of ROLLBACK TO SAVEPOINT. MVCC reads inside InnoDB are only covered by MDL and page latches, not by any table or record locks.

FIXME: It would be nice if storage engines were specifically asked which MDL can be released, instead of only offering a choice between all or nothing. InnoDB should be able to release any locks for tables that are no longer in trx_t::mod_tables, except if another transaction had converted some implicit record locks to explicit ones before the ROLLBACK TO SAVEPOINT was completed.

Reviewed by: Sergei Golubchik
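A hedged SQL sketch of the race the included test case covers, with two connections and illustrative table names:

    -- connection 1
    BEGIN;
    UPDATE t1 SET a = a + 1;       -- holds MDL and InnoDB row locks on t1
    SAVEPOINT sp;
    UPDATE t2 SET b = b + 1;
    ROLLBACK TO SAVEPOINT sp;      -- rolls back only the t2 change; the MDL
                                   -- on t1 must not be released here
    -- connection 2: this ALTER must block until connection 1 commits;
    -- before the fix, the MDL had been dropped and it could proceed while
    -- connection 1 still held InnoDB locks on t1
    ALTER TABLE t1 ADD COLUMN c INT, ALGORITHM=COPY;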
-
Marko Mäkelä authored
Fixup for commit 768c5188
-
Thirunarayanan Balathandayuthapani authored
A table rebuild that would truncate the default value of a DATE column is expected to issue data truncation warnings. However, these warnings were not issued when the ADD COLUMN was executed with ALGORITHM=INSTANT. InnoDB now sets the warning on the field while assigning the field's default value during check_if_supported_inplace_alter().
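A hedged sketch of the kind of statement that should raise the warning; the table name and the default literal are illustrative:

    CREATE TABLE td (a INT) ENGINE=InnoDB;
    -- The DATETIME-style literal gets truncated to a DATE; the truncation
    -- warning must be reported even with ALGORITHM=INSTANT.
    ALTER TABLE td ADD COLUMN d DATE DEFAULT '2021-07-02 12:34:56',
      ALGORITHM=INSTANT;
    SHOW WARNINGS;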
-
Daniel Black authored
The error loading the client module is different
-
Daniel Black authored
-
Daniel Black authored
-
Daniel Black authored
Partial backport of 48938c57 so that platform-dependent AIX tests can be done.
-
Daniel Black authored
-
Daniel Black authored
-
Daniel Black authored
C:\projects\server\sql\sql_show.cc(7913): error C2220: warning treated as error - no 'object' file generated [C:\projects\server\win_build\sql\sql.vcxproj]
C:\projects\server\sql\sql_show.cc(7913): warning C4267: 'initializing': conversion from 'size_t' to 'uint', possible loss of data [C:\projects\server\win_build\sql\sql.vcxproj]

Caused by 768c5188.
-
Daniel Black authored
-
Daniel Black authored
Partial backport of 48938c57 so that platform-dependent AIX tests can be done.
-
- 30 Jun, 2021 5 commits
-
-
Sergei Petrunia authored
Post-merge fix in 10.4: add a testcase for pushdown into IN subquery
-
Sergei Petrunia authored
-
Sergei Petrunia authored
-
Sergei Petrunia authored
-
Sergei Petrunia authored
Consider a query of the form:

    select ... from (select item2 as COL1) as T where COL1=123

Condition pushdown into the derived table will try to push the "COL1=123" condition down into table T. The process of pushdown involves "substituting" the item, that is, replacing Item_field("T.COL1") with its producing item, item2. In order to use item2, one needs to clone it (call Item::build_clone). If the item is not cloneable (e.g. Item_func_sp is not), the pushdown process fails and nothing at all is pushed.

Fixed by introducing transform_condition_or_part(), which tries to apply the transformation to as many parts of the condition as possible. The parts of the condition that could not be transformed are dropped.
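A hedged SQL sketch of a condition where only one conjunct can be pushed; the names are illustrative, and the GROUP BY is only there to make the derived table non-mergeable so that pushdown applies:

    CREATE TABLE t (a INT, b INT);
    CREATE FUNCTION f(x INT) RETURNS INT DETERMINISTIC RETURN x + 1;
    -- Pushing "T.col1 = 123" would require cloning the Item_func_sp f(a),
    -- which is not cloneable; with the fix, the conjunct "T.col2 > 10" can
    -- still be pushed into the derived table instead of abandoning the
    -- whole pushdown.
    SELECT *
    FROM (SELECT f(a) AS col1, b AS col2 FROM t GROUP BY f(a), b) AS T
    WHERE T.col1 = 123 AND T.col2 > 10;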
-