- 22 Sep, 2024 8 commits
-
-
Sergei Golubchik authored
* preserve the graph in memory between statements
* keep it in a TABLE_SHARE, available for concurrent searches
* nodes are generally read-only, walking the graph doesn't change them
* distance to target is cached, calculated only once
* SIMD-optimized bloom filter detects visited nodes
* nodes are stored in an array, not a List, to better utilize the bloom filter
* auto-adjusting heuristic to estimate the number of visited nodes (to configure the bloom filter)
* many threads can concurrently walk the graph. MEM_ROOT and Hash_set are protected with a mutex, but walking doesn't need them
* up to 8 threads can concurrently load nodes into the cache, nodes are partitioned into 8 mutexes (8 is chosen arbitrarily, might need tuning)
* concurrent editing is not supported though
* this is fine for MyISAM, TL_WRITE protects the TABLE_SHARE and the graph (note that TL_WRITE_CONCURRENT_INSERT is not allowed, because an INSERT into the main table means multiple UPDATEs in the graph)
* InnoDB uses secondary transaction-level caches linked in a list in thd->ha_data via a fake handlerton
* on rollback the secondary cache is discarded, on commit nodes from the secondary cache are invalidated in the shared cache while it is exclusively locked
* on savepoint rollback both caches are flushed. this can be improved in the future with a row visibility callback
* graph size is controlled by @@mhnsw_cache_size, the cache is flushed when it reaches the threshold
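For illustration, a minimal SQL sketch of the cache-size knob from the last bullet; the variable name follows this commit message, and its exact scope and unit (assumed here to be a global variable sized in bytes) are assumptions:

  -- assumed: mhnsw_cache_size is a global variable measured in bytes
  SET GLOBAL mhnsw_cache_size = 64 * 1024 * 1024;
  SELECT @@mhnsw_cache_size;  -- the graph cache is flushed once it reaches this threshold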
-
Sergei Golubchik authored
* mhnsw:
  * use the primary key, innodb loves it (and the index cannot have dupes anyway)
  * MyISAM is ok with that, performance-wise
  * must be ha_rnd_init(0) because we aren't going to scan
  * MyISAM resets the position on ha_rnd_init(0), so query it before
  * oh, and use the correct handler, just in case
  * HA_ERR_RECORD_IS_THE_SAME is no error
* innodb:
  * return ref_length on create
  * don't assume table->pos_in_table_list is set
  * ok, assume away, but only for system versioned tables
  * set alter_info on create (InnoDB needs to check for FKs)
  * pair external_lock/external_unlock correctly
-
Sergei Golubchik authored
-
Vicențiu Ciorbaru authored
This commit includes the work done in collaboration with Hugo Wen from Amazon:

MDEV-33408 Alter HNSW graph storage and fix memory leak

This commit changes the way HNSW graph information is stored in the second table. Instead of storing connections as separate records, it now stores neighbors for each node, leading to significant performance improvements and storage savings. Compared with the previous approach, the insert speed is 5 times faster, search speed improves by 23%, and storage usage is reduced by 73%, based on ann-benchmark tests with the random-xs-20-euclidean and random-s-100-euclidean datasets.

Additionally, in the previous code, vector objects were not released after use, resulting in excessive memory consumption (over 20GB for building the index with 90,000 records), preventing tests with large datasets. Now vectors are released appropriately during the insert and search functions. Note there are still some vectors that need to be cleaned up after search query completion; this needs to be addressed in a future commit.

All new code of the whole pull request, including one or several files that are either new files or modified ones, are contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services, Inc.

As well as the commit:

Introduce session variables to manage HNSW index parameters

Three variables:
  hnsw_max_connection_per_layer
  hnsw_ef_constructor
  hnsw_ef_search

The ann-benchmark tool is also updated to support these variables in commit https://github.com/HugoWenTD/ann-benchmarks/commit/e09784e for branch https://github.com/HugoWenTD/ann-benchmarks/tree/mariadb-configurable

All new code of the whole pull request, including one or several files that are either new files or modified ones, are contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services, Inc.

Co-authored-by: Hugo Wen <wenhug@amazon.com>
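A hedged usage sketch for the three session variables named above; the values are arbitrary and the variable names follow this commit (they may have been renamed in later development):

  SET SESSION hnsw_max_connection_per_layer = 16;
  SET SESSION hnsw_ef_constructor = 200;
  SET SESSION hnsw_ef_search = 50;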
-
Sergei Golubchik authored
MDEV-33407 Parser support for vector indexes

The syntax is:
  create table t1 (... vector index (v) ...);

Limitations:
* v is a binary string and NOT NULL
* only one vector index per table
* temporary tables are not supported

MDEV-33404 Engine-independent indexes: subtable method

Added support for so-called "high level indexes": they are not visible to the storage engine and are implemented on the SQL level. For every such index in a table, say, t1, the server implicitly creates a second table named, like, t1#i#05 (where "05" is the index number in t1). This table has a fixed structure, no frm, is not accessible directly, doesn't go into the table cache, and needs no MDLs.

MDEV-33406 basic optimizer support for k-NN searches

For a query like SELECT ... ORDER BY func() the optimizer will use item_func->part_of_sortkey() to decide what keys can be used to resolve the ORDER BY.
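A hedged sketch of the new DDL under the limitations listed above; the column type and the rest of the schema are assumptions, as at this stage the indexed column is just a NOT NULL binary string:

  CREATE TABLE t1 (
    id INT PRIMARY KEY,
    v BLOB NOT NULL,        -- vector stored as a binary string
    VECTOR INDEX (v)        -- only one vector index per table
  );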
-
Sergei Golubchik authored
let the caller tell init_tmp_table_share() whether the table should be thread_specific or not. In particular, internal tmp tables created in the slave thread are perfectly thread specific
-
Sergei Golubchik authored
Create templates thd->alloc<X>(n) to use instead of (X*)thd->alloc(sizeof(X)*n), and the same for thd->calloc(). By default the type is char, so old usage of thd->alloc(size) works too.
-
Sergei Golubchik authored
-
- 04 Sep, 2024 1 commit
-
-
Kristian Nielsen authored
If a slave replicating an event has waited for more than @@slave_abort_blocking_timeout for a conflicting metadata lock held by a non-replication thread, the blocking query is killed to allow replication to proceed and not be blocked indefinitely by a user query.

Reviewed-by: Monty <monty@mariadb.org>
Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
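A hedged usage sketch of the new variable; the global scope and the unit (seconds) are assumptions:

  SET GLOBAL slave_abort_blocking_timeout = 60;  -- kill a conflicting user query after the slave has waited 60 seconds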
-
- 19 Aug, 2024 1 commit
-
-
Dmitry Shulga authored
Running an UPDATE statement in PS mode with positional parameter(s) bound to an array of actual values (that is, prepared to be run in bulk mode) results in incorrect behaviour in the presence of an ON UPDATE trigger that also executes an UPDATE statement. The same is true for handling a DELETE statement in the presence of an ON DELETE trigger. Typically, the visible effect of such incorrect behaviour is a wrong number of updated/deleted rows in the target table. Additionally, in the UPDATE case, the number of modified rows and the state message returned by the statement contain wrong information about the number of modified rows.

The reason for the incorrect number of updated/deleted rows is that the data structure used for binding a positional argument to its actual values is stored in THD (this is thd->bulk_param) and reused on processing every INSERT/UPDATE/DELETE statement. This leads to the actual values bound to the top-level UPDATE/DELETE statement being consumed by other DML statements used in the triggers' bodies. To fix the issue, temporarily reset thd->bulk_param to nullptr before invoking triggers and restore its value when their execution has finished.

The second part of the problem, the wrong value of affected rows reported by the Connector/C API, is caused by the fact that the diagnostics area is reused by the original DML statement and a statement invoked by a trigger. This fact should be taken into account when finalizing the state of the diagnostics area on completion of the statement.

Important remark: in case the macro DBUG_OFF is on, a call of the method Diagnostics_area::reset_diagnostics_area() resets the data members m_affected_rows and m_statement_warn_count. Values of these data members of the class Diagnostics_area are used when sending OK and EOF messages. In case a DML statement is executed in PS bulk mode, such resetting results in sending wrong values for affected rows to the client if the DML statement fires triggers. So, reset these data members only in case the current statement being processed is not run in bulk mode.
-
- 10 Jul, 2024 1 commit
-
-
Dave Gosselin authored
Improve performance of queries like SELECT * FROM t1 WHERE field = NAME_CONST('a', 4); by, in this example, replacing the WHERE clause with field = 4 in the case of ref access. The rewrite is done during fix_fields and we disambiguate this case from other cases of NAME_CONST by inspecting where we are in parsing. We rely on THD::where to accomplish this. To improve performance there, we change the type of THD::where to be an enumeration, so we can avoid string comparisons during Item_name_const::fix_fields. Consequently, this patch also changes all usages of THD::where to conform likewise.
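For illustration, a hedged sketch of the pattern that benefits; the table, index and data are hypothetical:

  CREATE TABLE t1 (field INT, KEY (field));
  INSERT INTO t1 VALUES (1), (4), (7);
  -- with ref access on the index, the WHERE clause is effectively treated as field = 4
  SELECT * FROM t1 WHERE field = NAME_CONST('a', 4);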
-
- 06 Jul, 2024 1 commit
-
-
Monty authored
The problem was using safe_table_name() instead of safe_table_name().str with DBUG_PRINT
-
- 05 Jul, 2024 1 commit
-
-
Brandon Nesterenko authored
The special logic used by the memory storage engine to keep slaves in sync with the master on a restart can break replication. In particular, after a restart, the master writes DELETE statements in the binlog for each MEMORY-based table so the slave can empty its data. If the DELETE is not executable, e.g. due to invalid triggers, the slave will error and fail, whereas the master will never see the problem. Instead of DELETE statements, use TRUNCATE to keep slaves in-sync with the master, thereby bypassing triggers.

Reviewed By:
===========
Kristian Nielsen <knielsen@knielsen-hq.org>
Andrei Elkin <andrei.elkin@mariadb.com>
-
- 25 Jun, 2024 1 commit
-
-
Dmitry Shulga authored
One possible use case that reproduces the memory leak is listed below:

  set timestamp= unix_timestamp('2000-01-01 00:00:00');
  create or replace table t1 (x int) with system versioning
    partition by system_time interval 1 hour auto partitions 3;
  create table t2 (x int);
  create trigger tr after insert on t2 for each row update t1 set x= 11;
  create or replace procedure sp2() insert into t2 values (5);
  set timestamp= unix_timestamp('2000-01-01 04:00:00');
  call sp2;
  set timestamp= unix_timestamp('2000-01-01 13:00:00');
  call sp2; # <<=== Memory leak happens here.

In case MariaDB server is built with the option -DWITH_PROTECT_STATEMENT_MEMROOT, the second execution hits an assert failure.

The reason for the memory leak is that once a new partition is created, the table has to be closed and re-opened. This results in calling the function extend_table_list(), which indirectly invokes the function sp_add_used_routine() to add routines implicitly used by the statement, which makes a new memory allocation. To fix it, don't remove routines and tables the statement implicitly depends on when a table is being closed for subsequent re-opening.
-
- 20 Jun, 2024 1 commit
-
-
Dave Gosselin authored
Find and fix missing virtual override markings. Updates cmake maintainer flags to include -Wsuggest-override and -Winconsistent-missing-override.
-
- 10 Jun, 2024 1 commit
-
-
Marko Mäkelä authored
In cmake -DWITH_UBSAN=ON builds with clang but not with GCC, -fsanitize=undefined will flag several runtime errors on function pointer mismatch related to the lock-free hash table LF_HASH. Let us use matching function signatures and remove function pointer casts in order to avoid potential bugs due to undefined behaviour. These errors could be caught at compilation time by -Wcast-function-type-strict, which is available starting with clang-16, but not available in any version of GCC as of now. The old GCC flag -Wcast-function-type is enabled as part of -Wextra, but it specifically does not catch these errors. Reviewed by: Vladislav Vaintroub
-
- 27 May, 2024 6 commits
-
-
Monty authored
MDEV-34150 Assertion failure in Diagnostics_area::set_error_status upon binary logging hitting tmp space limit

- Moved writing to binlog_cache from close_thread_tables() to binlog_commit().
- In select_create() delete cached row events instead of flushing them to disk. This was done to avoid a possible disk write error in this code.
-
Monty authored
This patch extends the timestamp range from 2038-01-19 03:14:07.999999 to 2106-02-07 06:28:15.999999 for 64-bit hardware and OSes where 'long' is 64 bits. This is true for 64-bit Linux but not for Windows.

This is done by treating the stored 32-bit int as unsigned instead of signed. This is safe as MariaDB has never accepted dates before the epoch (1970). The benefit of this approach is that for normal timestamps the storage is compatible with earlier versions.

However, for tables using system versioning we previously stored a timestamp with the year 2038 as the 'max timestamp', which is used to detect current values. This patch stores the new 2106 max value as the max timestamp. This means that old tables using system versioning need to be updated with mariadb-upgrade when moving them to 11.4. That will be done in a separate commit.
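A hedged illustration of the extended range, assuming a 64-bit Linux build and an explicit time zone:

  SET time_zone = '+00:00';
  CREATE TABLE t (ts TIMESTAMP(6));
  INSERT INTO t VALUES ('2105-12-31 23:59:59.999999');  -- beyond the old 2038-01-19 limit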
-
Monty authored
Fixed that no tables from the 'mysql' schema are included in userstat. A benefit of this is that the server does not read statistics tables if mysql.proc or other tables in the mysql schema are accessed.
-
Monty authored
-
Sergei Golubchik authored
and adjust the copyright year
-
Monty authored
This updates the plugin to be compatible with Percona's query_response_time plugin, with some additions. Some of the tests are taken from Percona server.

- Added plugins QUERY_RESPONSE_TIME_READ, QUERY_RESPONSE_TIME_WRITE and QUERY_RESPONSE_TIME_READ_WRITE.
- Added option query_response_time_session_stats, with possible values GLOBAL, ON or OFF, to the query_response_time plugin.

Notes:
- All modules are dependent on QUERY_RESPONSE_READ_TIME. This must always be enabled if any of the other modules are used. This will be auto-enabled in the near future.
- Accounting is done per statement. Stored functions are regarded as part of the original statement.
- For stored procedures the accounting is done per statement executed in the stored procedure. CALL will not be accounted because of this.
- FLUSH commands will not be accounted for. This is to ensure that FLUSH QUERY_RESPONSE_TIME is not part of the statistics. (This helps when testing with mtr and otherwise.)
- FLUSH QUERY_RESPONSE_TIME_READ and FLUSH QUERY_RESPONSE_TIME_WRITE only reset the corresponding status.
- FLUSH QUERY_RESPONSE_TIME and FLUSH QUERY_RESPONSE_TIME_READ_WRITE, or changing the value of query_response_time_range_base followed by any FLUSH of QUERY_RESPONSE_TIME, reset all statuses.
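A hedged usage sketch; the INSTALL step and the INFORMATION_SCHEMA exposure of the new READ/WRITE modules are assumptions based on how the base plugin is normally used:

  INSTALL SONAME 'query_response_time';
  SET GLOBAL query_response_time_stats = ON;
  SELECT * FROM INFORMATION_SCHEMA.QUERY_RESPONSE_TIME_READ;
  FLUSH QUERY_RESPONSE_TIME_READ;   -- resets only the READ statistics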
-
- 21 May, 2024 1 commit
-
-
Alexander Barkov authored
The patch for MDEV-31340 fixed the following bugs:
  MDEV-33084 LASTVAL(t1) and LASTVAL(T1) do not work well with lower-case-table-names=0
  MDEV-33085 Tables T1 and t1 do not work well with ENGINE=CSV and lower-case-table-names=0
  MDEV-33086 SHOW OPEN TABLES IN DB1 -- is case insensitive with lower-case-table-names=0
  MDEV-33088 Cannot create triggers in the database `MYSQL`
  MDEV-33103 LOCK TABLE t1 AS t2 -- alias is not case sensitive with lower-case-table-names=0
  MDEV-33108 TABLE_STATISTICS and INDEX_STATISTICS are case insensitive with lower-case-table-names=0
  MDEV-33109 DROP DATABASE MYSQL -- does not drop SP with lower-case-table-names=0
  MDEV-33110 HANDLER commands are case insensitive with lower-case-table-names=0
  MDEV-33119 User is case insensitive in INFORMATION_SCHEMA.VIEWS
  MDEV-33120 System log table names are case insensitive with lower-case-table-names=0

Backporting the fixes from 11.5 to 10.5
-
- 18 Apr, 2024 1 commit
-
-
Alexander Barkov authored
This patch also fixes:
  MDEV-33050 Build-in schemas like oracle_schema are accent insensitive
  MDEV-33084 LASTVAL(t1) and LASTVAL(T1) do not work well with lower-case-table-names=0
  MDEV-33085 Tables T1 and t1 do not work well with ENGINE=CSV and lower-case-table-names=0
  MDEV-33086 SHOW OPEN TABLES IN DB1 -- is case insensitive with lower-case-table-names=0
  MDEV-33088 Cannot create triggers in the database `MYSQL`
  MDEV-33103 LOCK TABLE t1 AS t2 -- alias is not case sensitive with lower-case-table-names=0
  MDEV-33109 DROP DATABASE MYSQL -- does not drop SP with lower-case-table-names=0
  MDEV-33110 HANDLER commands are case insensitive with lower-case-table-names=0
  MDEV-33119 User is case insensitive in INFORMATION_SCHEMA.VIEWS
  MDEV-33120 System log table names are case insensitive with lower-case-table-names=0

- Removing the virtual function strnncoll() from MY_COLLATION_HANDLER.
- Adding a wrapper function CHARSET_INFO::streq() to compare two strings for equality. For now it calls strnncoll() internally. In the future it will turn into a virtual function.
- Adding new accent sensitive case insensitive collations:
    - utf8mb4_general1400_as_ci
    - utf8mb3_general1400_as_ci
  They implement accent sensitive case insensitive comparison. The weight of a character is equal to the code point of its upper case variant. These collations use Unicode-14.0.0 casefolding data.
  The result of my_charset_utf8mb3_general1400_as_ci.strcoll() is very close to the former my_charset_utf8mb3_general_ci.strcasecmp(). There is only a difference in a couple dozen rare characters, because of:
    - the switch from "tolower" to "toupper" comparison, to make utf8mb3_general1400_as_ci closer to utf8mb3_general_ci
    - the switch from Unicode-3.0.0 to Unicode-14.0.0
  This difference should be tolerable. See the list of affected characters in the MDEV description.
  Note, utf8mb4_general1400_as_ci correctly handles non-BMP characters! Unlike utf8mb4_general_ci, it does not treat all non-BMP characters as equal.
- Adding classes representing names of the file based database objects:
    Lex_ident_db
    Lex_ident_table
    Lex_ident_trigger
  Their comparison collation depends on the underlying file system case sensitivity and on --lower-case-table-names, and can be either my_charset_bin or my_charset_utf8mb3_general1400_as_ci.
- Adding classes representing names of other database objects, whose names have case insensitive comparison style, using my_charset_utf8mb3_general1400_as_ci:
    Lex_ident_column
    Lex_ident_sys_var
    Lex_ident_user_var
    Lex_ident_sp_var
    Lex_ident_ps
    Lex_ident_i_s_table
    Lex_ident_window
    Lex_ident_func
    Lex_ident_partition
    Lex_ident_with_element
    Lex_ident_rpl_filter
    Lex_ident_master_info
    Lex_ident_host
    Lex_ident_locale
    Lex_ident_plugin
    Lex_ident_engine
    Lex_ident_server
    Lex_ident_savepoint
    Lex_ident_charset
    engine_option_value::Name
- All the mentioned Lex_ident_xxx classes implement a method streq():
    if (ident1.streq(ident2)) do_equal();
  This method works as a wrapper for CHARSET_INFO::streq().
- Changing a lot of "LEX_CSTRING name" to "Lex_ident_xxx name" in class members and in function/method parameters.
- Replacing all calls like
    system_charset_info->coll->strcasecmp(ident1, ident2)
  with
    ident1.streq(ident2)
- Taking advantage of the C++11 user defined literal operator for LEX_CSTRING (see m_strings.h) and Lex_ident_xxx (see lex_ident.h) data types. Use example:
    const Lex_ident_column primary_key_name= "PRIMARY"_Lex_ident_column;
  is now a shorter version of:
    const Lex_ident_column primary_key_name= Lex_ident_column({STRING_WITH_LEN("PRIMARY")});
-
- 14 Mar, 2024 1 commit
-
-
Dmitry Shulga authored
MDEV-33218: Assertion `active_arena->is_stmt_prepare_or_first_stmt_execute() || active_arena->state == Query_arena::STMT_SP_QUERY_ARGUMENTS' failed in st_select_lex::fix_prepare_information

In case there is a view that is queried from a stored routine or a prepared statement and this temporary table is dropped between executions of the SP/PS, it leads to hitting an assertion at SELECT_LEX::fix_prepare_information. The fired assertion was added by the commit 85f2e4f8 (MDEV-32466: Potential memory leak on executing of create view statement). Firing of this assertion means memory leaking on execution of the SP/PS. Moreover, if the added assert is commented out, different result sets can be produced by the statement SELECT * FROM the hidden table. Both hitting the assertion and the different result sets have the same root cause: usage of the temporary table's metadata after the table itself has been dropped.

To fix the issue, reload the cache of stored routines. To do it, the cache of stored routines is reset at the end of execution of the function dispatch_command(). Next time any stored routine is called it will be loaded from the table mysql.proc. This happens inside the method Sp_handler::sp_cache_routine, where loading of a stored routine is performed in case it is missing in the cache. Loading is performed unconditionally, while previously it was controlled by the parameter lookup_only. For that reason the signature of the method Sroutine_hash_entry::sp_cache_routine was changed by removing the unused parameter lookup_only.

Clearing of SP caches affects the test main.lock_sync, since it forces opening and locking the table mysql.proc, but the test assumes that each statement locks its tables once during its execution. To keep this invariant, the debug sync points with names "before_lock_tables_takes_lock" and "after_lock_tables_takes_lock" are not activated on handling the table mysql.proc.
-
- 13 Mar, 2024 1 commit
-
-
Sergei Golubchik authored
-
- 08 Mar, 2024 1 commit
-
-
Monty authored
This makes it easier to find out what replication workers are doing and what they are waiting for.

Things changed in processlist:
- Slave_SQL time was not consistent. Now the time for the state "Slave has read all relay log; waiting for more updates" shows how long it has waited for the next event.
- Slave_worker threads often showed "Closing tables" for a long time. Now the state is reverted to the previous state after "Closing tables" is done.
- Commit and Rollback states were not shown for replication (and some other threads). Now Commit and Rollback states are always shown and the state is reverted to the previous state when the Commit/Rollback has finished.

Code changes:
- Added thd->set_time_for_next_stage() for parallel replication when starting to wait for prior transactions to commit, group commit, FTWRL and for free space in the thread pool. Before, we reset the time only after the above events.
- Moved THD_STAGE_INFO(stage_rollback) and THD_STAGE_INFO(stage_commit) from sql_parse.cc to transaction.cc to ensure this is done for all commits and not only 'normal connection queries'.

Test case changes:
- close_thread_tables() reverting the stage to the previous stage caused the counter in performance_schema to be increased. In many cases it is the 'sql/starting' stage that was affected.
- We only change to the "Commit" stage if there is a need for a commit. This caused some "Commit" stages to disappear from perfschema reports.

TODO in 11.#:
- Slave_IO always shows "Waiting for master to send event" and the time is from SLAVE START. We should in 11.# change this to be the time since reading the last event.
-
- 08 Feb, 2024 1 commit
-
-
Dmitry Shulga authored
This patch fixes the issue with passing the DEFAULT or IGNORE values to positional parameters for some kinds of SQL statements to be executed as prepared statements.

The main idea of the patch is to associate an actual value passed by the USING clause with the positional parameter represented by the Item_param class. Such association must be performed on execution of an UPDATE statement in PS/SP mode.

Another corner case that resulted in a server crash is the handling of CREATE TABLE when a positional parameter is placed after the DEFAULT clause, or of a CALL statement passing either the value DEFAULT or IGNORE as an actual value for a positional parameter. This case is fixed by checking whether an error is set in the diagnostics area at the function pack_vcols() on return from the function pack_expression().
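A hedged sketch of the affected pattern; the table and statement names are illustrative:

  CREATE TABLE t1 (a INT DEFAULT 10, b INT);
  INSERT INTO t1 (b) VALUES (1);
  PREPARE stmt FROM 'UPDATE t1 SET a = ?';
  EXECUTE stmt USING DEFAULT;   -- DEFAULT passed as the actual value of the positional parameter
  DEALLOCATE PREPARE stmt;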
-
- 13 Dec, 2023 1 commit
-
-
Daniel Black authored
Like all IF NOT EXISTS syntax, a Note should be generated. The original commit for Sequences cleared the IF NOT EXISTS part in sql/sql_yacc.yy with lex->create_info.init(). Without this bit set there was no way it could do anything other than error. To remedy this removal, the sql_yacc.yy components have been minimised, as they were all set at the beginning of the ALTER. This way opt_if_not_exists correctly sets the IF_EXISTS flag. In MDEV-13005 (bb4dd70e) the error code changed, requiring ER_UNKNOWN_SEQUENCES to be handled in the function No_such_table_error_handler::handle_condition.
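A hedged illustration of the expected behaviour; the sequence name is hypothetical and the exact statement form is assumed from the ALTER/ER_UNKNOWN_SEQUENCES context above:

  ALTER SEQUENCE IF EXISTS no_such_seq RESTART;
  SHOW WARNINGS;   -- should report a Note about the unknown sequence instead of an error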
-
- 11 Dec, 2023 1 commit
-
-
Dmitry Shulga authored
MDEV-31296: Crash in Item_func::fix_fields when a prepared statement with subqueries and a window function is executed with sql_mode = ONLY_FULL_GROUP_BY

The crash was caused by dereferencing a null pointer when getting the number of nesting levels of the set function for the current select_lex at the method Item_field::fix_fields. The current select for processing is taken from the Name_resolution_context that is filled in at the function set_new_item_local_context(), where initialization of the data member Name_resolution_context was mistakenly removed by the commit d6ee351b (Revert "MDEV-24454 Crash at change_item_tree").

To fix the issue, the correct initialization of the data member Name_resolution_context::select_lex that was removed by the commit d6ee351b is restored.
-
- 27 Nov, 2023 1 commit
-
-
Monty authored
The problem was that table->vcol_cleanup_expr() was not called in case of error in open_table().
-
- 24 Nov, 2023 1 commit
-
-
Dmitry Shulga authored
This patch is actually a follow-up for the task MDEV-23902 (MariaDB crash on calling function): use the correct query arena for a statement. In case invocation of a function is in progress, use its call arena; otherwise use the current query arena, which can be either a statement or a regular query arena.
-
- 25 Oct, 2023 1 commit
-
-
Nikita Malyavin authored
Add a debug-only field MDL_context::lock_warrant. This field can be set to the MDL context different from the one the current execution is done in. The lock warrantor has to hold an MDL for at least a duration of a table lifetime. This is needed in the subsequent commit so that the shared MDL acquired by the InnoDB purge_coordinator_task can be shared by purge_worker_task that access index records that include virtual columns. Reviewed by: Vladislav Vaintroub
-
- 03 Oct, 2023 1 commit
-
-
Monty authored
The warning is given in case a table is not found or there is a lock timeout. The warning is needed because, in case of a lock timeout, the persistent table stats are going to be wrong.
-
- 29 Sep, 2023 1 commit
-
-
Jan Lindström authored
At the moment we cannot support wsrep_forced_binlog_format=[MIXED|STATEMENT] during CREATE TABLE AS SELECT. The statement will use ROW instead and give a warning.

Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
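A hedged sketch of the behaviour described above, assuming a Galera-enabled server and hypothetical tables:

  SET GLOBAL wsrep_forced_binlog_format = 'STATEMENT';
  CREATE TABLE t2 AS SELECT * FROM t1;   -- expected: executed in ROW format, with a warning
  SHOW WARNINGS;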
-
- 02 Sep, 2023 2 commits
-
-
Dmitry Shulga authored
In case a table accessed by a PS/SP is dropped after the first execution of the PS/SP, and a view is created with the same name as the just-dropped table, then the second execution of the PS/SP leads to allocation of memory on the SP/PS memory root already marked as read only on the first execution. For example, the following test case:

  CREATE TABLE t1 (a INT);
  PREPARE stmt FROM "INSERT INTO t1 VALUES (1)";
  EXECUTE stmt;
  DROP TABLE t1;
  CREATE VIEW t1 AS SELECT 1;
  --error ER_NON_INSERTABLE_TABLE
  EXECUTE stmt; # (*)
  DROP VIEW t1;

will hit an assert on running the statement 'EXECUTE stmt' marked with (*) when memory is allocated on parsing the view. Memory allocation is requested inside the function mysql_make_view when a view definition is being parsed. In order to avoid the assertion failure, the call of the function mysql_make_view() must be moved after the invocation of the function check_and_update_table_version(). It will result in re-preparing the whole PS statement or the current SP instruction, which will free currently allocated items and reset the read_only flag for the memory root.
-
Dmitry Shulga authored
Moved the call of the function check_and_update_table_version() just before the place where the function extend_table_list() is invoked, in order to avoid allocation of memory on a PS/SP memory root marked as read only. This happens because the function extend_table_list() invokes sp_add_used_routine() to add a trigger created for the table in the time frame between executions of the statement EXECUTE `stmt_id`.

For example, the following test case:

  create table t1 (a int);
  prepare stmt from "insert into t1 (a) value (1)";
  execute stmt;
  create trigger t1_bi before insert on t1 for each row set @message= new.a;
  execute stmt; # (*)

adds the trigger t1_bi to a list of used routines, which involves allocation of memory on the PS memory root that has been already marked as read only on the first run of the statement 'execute stmt'. As a result, when the statement marked with (*) is executed, it hits an assert.

To fix the issue, call the function check_and_update_table_version() before the invocation of extend_table_list() to force re-compilation of the PS/SP, which resets the read-only flag of its memory root.
-
- 26 Aug, 2023 1 commit
-
-
Alexander Barkov authored
Replacing my_casedn_str() calls on local char[] buffer variables with CharBuffer::copy_casedn() calls. This is a sub-task for MDEV-31531 Remove my_casedn_str().

Details:
- Adding a helper template class IdentBuffer (a CharBuffer descendant), which assumes utf8 data. Like CharBuffer, it's initialized to an empty string in the constructor, but can be populated with lower-cased data later.
- Adding a helper template class IdentBufferCasedn, which initializes to lower case right in the constructor.
- Removing char[] buffers, replacing them with IdentBuffer and IdentBufferCasedn.
- Changing the data type of the "db" and "table" parameters from "const char*" to LEX_CSTRING in the following functions:
    find_field_in_table_ref()
    insert_fields()
    set_thd_db()
    mysql_grant()
  to make reusing IdentBuffer easier.
-
- 18 Aug, 2023 1 commit
-
-
Monty authored
MDEV-29693 ANALYZE TABLE still flushes table definition cache when engine-independent statistics is used

This commit enables reloading of engine-independent statistics without flushing the table from the table definition cache. This is achieved by allowing multiple versions of the TABLE_STATISTICS_CB object and having independent pointers to it in TABLE and TABLE_SHARE. The TABLE_STATISTICS_CB object has reference pointers and is freed when no one is pointing to it anymore. TABLE's TABLE_STATISTICS_CB pointer is updated to use the TABLE_SHARE's pointer when read_statistics_for_tables() is called at the beginning of a query.

Main changes:
- read_statistics_for_table() will allocate a new TABLE_STATISTICS_CB object.
- All get_stat_values() functions have a new parameter that tells where collected data should be stored. get_stat_values() no longer use the table_field object to store data.
- All get_stat_values() functions return 1 if they found any data in the statistics tables.

Other things:
- Fixed INSERT DELAYED to not read statistics tables.
- Removed Statistics_state from TABLE_STATISTICS_CB as this is not needed anymore, since we are not changing TABLE_SHARE->stats_cb while calculating or loading statistics.
- Store values used with store_from_statistical_minmax_field() in TABLE_STATISTICS_CB::mem_root. This allowed me to remove the function delete_stat_values_for_table_share().
- Field_blob::store_from_statistical_minmax_field() is implemented but is not normally used, as we do not yet support EIS statistics for blobs. For example Field_blob::update_min() and Field_blob::update_max() are not implemented. Note that the function can be called if there is a concurrent "ALTER TABLE MODIFY field BLOB" running, because of a bug in ALTER TABLE where it deletes entries from column_stats before it has an exclusive lock on the table.
- Use the result of field->val_str(&val) as a pointer to the result instead of val (safety fix).
- Allocate memory for collected statistics in THD::mem_root, not in TABLE::mem_root. The latter could cause the TABLE object to grow if ANALYZE TABLE was run many times on the same table. This was done in allocate_statistics_for_table(), create_min_max_statistical_fields_for_table() and create_min_max_statistical_fields_for_table_share().
- Store in TABLE_STATISTICS_CB::stats_available which statistics were found in the statistics tables.
- Removed index_table from class Index_prefix_calc as it was not used.
- Added TABLE_SHARE::LOCK_statistics to ensure we don't load EITS in parallel. The first thread will load it, others will reuse the loaded data.
- Eliminated read_histograms_for_table(). The loading happens within read_statistics_for_tables() if histograms are needed. One downside is that if we have read statistics without histograms before and someone requires histograms, we have to read all statistics again (once) from the statistics tables. A smaller downside is the need to call alloc_root() for each individual histogram. Before, we could allocate all the space for histograms with a single alloc_root.
- Fixed a bug in MyISAM and Aria where they did not properly notice that a table had changed after ANALYZE TABLE. This was not a problem before this patch, as then the MyISAM and Aria tables were flushed as part of ANALYZE TABLE, which hid this issue.
- Fixed a bug in ANALYZE TABLE where table->records could be seen as 0 in collect_statistics_for_table(). The effect of this unlikely bug was that a full table scan could be done even if analyze_sample_percentage was not set to 1.
- Changed multiple mallocs in a row to use multi_alloc_root().
- Added mutex protection in update_statistics_for_table() to ensure that several tables are not updating the statistics at the same time.

Some of the changes in sql_statistics.cc are based on a patch from Oleg Smirnov <olernov@gmail.com>

Co-authored-by: Oleg Smirnov <olernov@gmail.com>
Co-authored-by: Vicentiu Ciorbaru <cvicentiu@gmail.com>
Reviewer: Sergei Petrunia <sergey@mariadb.com>
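A hedged usage sketch of what this enables; the variable value and table name are illustrative:

  SET GLOBAL use_stat_tables = 'preferably';
  ANALYZE TABLE t1 PERSISTENT FOR ALL;   -- recollect EITS without flushing the table definition
  SELECT * FROM mysql.column_stats WHERE table_name = 't1';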
-
- 16 Aug, 2023 1 commit
-
-
Sergei Petrunia authored
Before this patch, the code in Item_field::print() used this convention (described in sql_explain.h:ExplainDataStructureLifetime):
- By default, the table that Item_field refers to is accessible.
- ANALYZE and SHOW {EXPLAIN|ANALYZE} may print Items after some temporary tables have been dropped. They use the QT_DONT_ACCESS_TMP_TABLES flag. When it is ON, Item_field::print will not access the table it refers to, if it is a temp.table.

The bug was that the EXPLAIN statement may also compute subqueries (depending on subquery context and the @@expensive_subquery_limit setting). After the computation, the subquery calls JOIN::cleanup(true), which drops some of its temporary tables. Calling Item_field::print() on an item that refers to such a table will cause an access to free'd memory.

In this patch, we take into account that query optimization can compute a subquery and discard its temporary tables. Item_field::print() now assumes that any temporary table might have already been dropped. This means the QT_DONT_ACCESS_TMP_TABLES flag is not needed - we imply it is always present. But we also make one exception: derived tables are not freed in the JOIN::cleanup() call. They are freed later in close_thread_tables(), at the same time when regular tables are closed. Because of that, Item_field::print may assume that temp.tables representing derived tables are available.

Initial patch by: Rex Jonston
Reviewed by: Monty <monty@mariadb.org>
-