- 02 May, 2023 1 commit
-
-
Igor Babaev authored
This bug could cause a server crash when processing a query with ROWNUM() if its FROM list contained a reference to a mergeable view defined as a SELECT over more than one table and containing an ORDER BY clause. When a mergeable view with an ORDER BY clause and without a LIMIT clause is used in the FROM list of a query that has no ORDER BY clause of its own, the ORDER BY clause of the view is moved to the query. The code that performed this transformation forgot to delete the moved ORDER BY list from the view. If a query contains ROWNUM() and uses a mergeable multi-table view with ORDER BY, then according to the current code of TABLE_LIST::init_derived() the view has to be forcibly materialized. As the query and the view shared the same items in their ORDER BY lists, these items could not be properly resolved either in the query or in the view. This led to a server crash. This patch restores the original signature of LEX::can_not_use_merged() to comply with the 10.4 code of the condition that checks whether a mergeable view has to be forcibly materialized. Approved by Oleksandr Byelkin <sanja@mariadb.com>
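A minimal sketch of the kind of query affected (the table and view names are illustrative, not taken from the actual bug report):

    -- hypothetical reproduction sketch
    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (b INT);
    -- mergeable multi-table view with ORDER BY and no LIMIT
    CREATE VIEW v1 AS SELECT a, b FROM t1, t2 ORDER BY a;
    -- the outer query has no ORDER BY of its own and uses ROWNUM(), so the
    -- view had to be forcibly materialized and the shared ORDER BY items
    -- could not be resolved, which is what crashed the server
    SELECT ROWNUM(), a, b FROM v1;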
-
- 27 Feb, 2023 1 commit
-
-
Igor Babaev authored
Subselect_single_value_engine cannot handle a table value constructor used as a subquery. That is why any table value constructor (TVC) used as a subquery is converted into a select over a derived table whose specification is the TVC. Currently the names of the columns of the derived table DT are taken from the first element of the TVC, and if the k-th component of that element happens to be a subquery, the text representation of this subquery serves as the name of the k-th column of the derived table. References to all columns of the derived table DT compose the select list of the result of the conversion. If the definition of a view contained a table value constructor used as a subquery and the view was registered after this conversion had been applied, we could register an invalid view definition if the first element of the TVC contained a subquery as its component: the name of this component was taken from the original subquery, while the name of the corresponding column of the derived table was taken from the text representation of the subquery produced by the function SELECT_LEX::print(), and these names usually differed from each other. To avoid registration of such invalid views the function SELECT_LEX::print() now prints the original TVC instead of the select in which this TVC has been wrapped. Now the specification of a registered view looks as if no conversion from TVC to select had been done. Approved by Oleksandr Byelkin <sanja@mariadb.com>
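A hedged sketch of the affected pattern (names are illustrative; the exact failing view from the bug report may differ):

    -- a view whose definition uses a TVC as a subquery; the first TVC row
    -- contains a subquery as one of its components
    CREATE TABLE t1 (a INT, b INT);
    CREATE VIEW v1 AS
      SELECT a FROM t1
      WHERE (a, b) IN (VALUES ((SELECT MAX(b) FROM t1), 1), (2, 2));
    -- before the fix, the stored view text used the printed form of the
    -- wrapping select over the derived table, so the column name derived
    -- from the subquery could differ from the original one, making the
    -- registered view definition invalid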
-
- 24 Oct, 2022 1 commit
-
-
Oleksandr Byelkin authored
Read the version of the view share when we read the definition, to prevent simultaneous access to a view TABLE_SHARE (and so to its MEM_ROOT) from different threads.
-
- 30 Sep, 2022 3 commits
-
-
Oleksandr Byelkin authored
MDEV-17124: mariadb 10.1.34, views and prepared statements: ERROR 1615 (HY000): Prepared statement needs to be re-prepared The problem is that if the table definition cache (TDC) is full of real tables which are in the table cache, a view definition cannot stay there and will be evicted by its own underlying tables. In the situation above the old mechanism of detecting a matching definition in a PS against the current version always required a reprepare and so prevented executing the PS. One workaround is to increase the TDC size; the other is to improve the version check for views/triggers (which is done here). Now in suspicious cases we check:
- the timestamp (microseconds) of the view, to be sure that the version has really changed;
- the creation time (microseconds) of a trigger against the time (microseconds) of statement preparation.
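The first workaround mentioned above is a plain configuration change; a sketch (the value is only an example and should be tuned to the number of tables and views actually in use):

    -- example only: enlarge the table definition cache so view definitions
    -- are not evicted by their own underlying tables
    SET GLOBAL table_definition_cache = 2000;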
-
Oleksandr Byelkin authored
-
Anel Husakovic authored
- Added missing information about the database of the corresponding table for various types of commands - Fixed some typos - Reviewed by: <vicentiu@mariadb.org>
-
- 31 Aug, 2022 1 commit
-
-
Daniele Sciascia authored
Making changes to wsrep_mysqld.h causes large parts of the server code to be recompiled. The reason is that wsrep_mysqld.h is included by sql_class.h, even though very little of wsrep_mysqld.h is needed in sql_class.h. This commit introduces a new header file, wsrep_on.h, which is meant to be included from sql_class.h and contains only macros and variable declarations used to determine whether wsrep is enabled. Also, the header wsrep.h should only contain definitions that are also used outside of sql/. Therefore, move the WSREP_TO_ISOLATION* and WSREP_SYNC_WAIT macros to wsrep_mysqld.h. Reviewed-by:
Jan Lindström <jan.lindstrom@mariadb.com>
-
- 23 Mar, 2022 1 commit
-
-
Igor Babaev authored
This bug could affect prepared statements for the CREATE VIEW command with a specification that contained an unnamed basic constant in the select list. If generation of a valid name for the corresponding view column required resolving conflicts with the names of other columns that were explicitly defined, then execution of such a prepared statement and its subsequent deallocation led to reading from freed memory. Approved by Oleksandr Byelkin <sanja@mariadb.com>
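A hedged sketch of the affected pattern (names are illustrative; the real regression test may look different):

    -- the select list mixes an explicitly named column with an unnamed basic
    -- constant, so the auto-generated name for the constant has to be
    -- adjusted to avoid a conflict with the explicit name
    PREPARE stmt FROM 'CREATE OR REPLACE VIEW v1 AS SELECT 1 AS `1`, 1';
    EXECUTE stmt;
    -- before the fix, executing and then deallocating such a statement
    -- could read from freed memory
    DEALLOCATE PREPARE stmt;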
-
- 22 Jan, 2022 1 commit
-
-
Dmitry Shulga authored
MDEV-20516: Assertion `!lex->proc_list.first && !lex->result && !lex->param_list.elements' failed in mysql_create_view Execution of a CREATE VIEW statement sent via the binary protocol, where the flags of the COM_STMT_EXECUTE packet request a cursor to be opened before running the statement, results in an assert failure. This assert fails because the data member thd->lex->result has a non-null value pointing to an instance of the class Select_materialize. The data member thd->lex->result is assigned a pointer to an instance of the class Select_materialize in the function mysql_open_cursor(), which is invoked in case the COM_STMT_EXECUTE packet requests a cursor to be opened. After thd->lex->result is assigned a pointer to an instance of the class Select_materialize, the function mysql_create_view() is called (indirectly via the function mysql_execute_statement()) and the assert fails. The assert DBUG_ASSERT(!lex->proc_list.first && !lex->result && !lex->param_list.elements); was added by the commit 591c06d4. Unfortunately, the condition !lex->result was specified incorrectly. It was assumed that thd->lex->result is set only by the parser when handling the clauses SELECT ... INTO, but in fact it is also set inside mysql_open_cursor(), and that fact was missed by the assert's condition. So the fix for this issue is to simply remove the condition !lex->result from the failing assert.
-
- 23 Aug, 2021 1 commit
-
-
Marko Mäkelä authored
TABLE_LIST::calc_md5(): Remove an untruthful const qualifier. thd_get_query_start_data(): Pass empty_clex_str instead of an uninitialized LEX_CSTRING.
-
- 26 May, 2021 1 commit
-
-
Igor Babaev authored
In the code that existed just before this patch, binding of a table reference to the specification of the corresponding CTE happened in the function open_and_process_table(). If the table reference is not the first one in the query, the specification is cloned in the same way as the specification of a view is cloned for any reference to the view. This works fine for standalone queries, but does not work for stored procedures / functions for the following reason. When the first call of a stored procedure / function SP is processed, the body of SP is parsed. When a query of SP is parsed, the info on each encountered table reference is put into a TABLE_LIST object linked into a global chain associated with the query. When parsing of the query is finished, the basic info on the table references from this chain, except table references to derived tables and information schema tables, is put in one hash table associated with SP. When parsing of the body of SP is finished, this hash table is used to construct TABLE_LIST objects for all table references mentioned in SP and link them into the list of such objects passed to a pre-locking process that calls open_and_process_table() for each table from the list. When a TABLE_LIST for a view is encountered, the view is opened and its specification is parsed. For any table reference occurring in the specification a new TABLE_LIST object is created to be included into the list for pre-locking. After all objects in the pre-locking list have been looked through, the tables mentioned in the list are locked. Note that the objects referencing CTEs are just skipped here, as it is impossible to resolve these references without any info on the context where they occur. Now the statements from the body of SP are executed one by one. At the very beginning of the execution of a query the tables used in the query are opened, and open_and_process_table() is now called for each table reference mentioned in the list of TABLE_LIST objects associated with the query that was built when the query was parsed. For each table reference, the reference is first checked against the CTE definitions in whose scope it occurred. If such a definition is found, the reference is considered resolved, and if this is not the first reference to the found CTE, the specification of the CTE is re-parsed and the result of the parsing is added to the parse tree of the query as a sub-tree. If this sub-tree contains table references to other tables, they are added to the list of TABLE_LIST objects associated with the query so that the referenced tables can be opened. When the procedure that opens the tables comes to the TABLE_LIST object created for a non-first reference to a CTE, it discovers that the referenced table instance is not locked and reports an error. Thus processing non-first table references to a CTE in the same way as references to views does not work for queries used in stored procedures / functions. And the main problem is that the current pre-locking mechanism employed for stored procedures / functions does not allow saving the context in which a CTE reference occurs. It is not trivial to save the info about the context where a CTE reference occurs, while the resolution of the table reference cannot be done without this context and consequently the specification for the table reference cannot be determined. This patch solves the above problem by moving resolution of all CTE references to the parsing stage.
More exactly, references to CTEs occurring in a query are resolved right after parsing of the query has finished. After resolution, any CTE reference is marked as a reference to a derived table. So it is excluded from the hash table created for pre-locking of the used base tables and views when the first call of a stored procedure / function is processed. This solution required recursive calls of the parser. The function THD::sql_parser() has been added specifically for recursive invocations of the parser.
# Conflicts:
#   sql/sql_cte.cc
#   sql/sql_cte.h
#   sql/sql_lex.cc
#   sql/sql_lex.h
#   sql/sql_view.cc
#   sql/sql_yacc.yy
#   sql/sql_yacc_ora.yy
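A sketch of the kind of stored procedure affected by the old behaviour (hypothetical names):

    -- a CTE referenced more than once inside a stored procedure: with the
    -- old pre-locking scheme the second reference to cte could end up with
    -- an unlocked table instance and an error was reported
    CREATE TABLE t1 (a INT);
    DELIMITER $$
    CREATE PROCEDURE p1()
    BEGIN
      WITH cte AS (SELECT a FROM t1)
      SELECT * FROM cte AS r1 JOIN cte AS r2 USING (a);
    END$$
    DELIMITER ;
    CALL p1();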
-
- 25 May, 2021 1 commit
-
-
Igor Babaev authored
In the code that existed just before this patch, binding of a table reference to the specification of the corresponding CTE happened in the function open_and_process_table(). If the table reference is not the first one in the query, the specification is cloned in the same way as the specification of a view is cloned for any reference to the view. This works fine for standalone queries, but does not work for stored procedures / functions for the following reason. When the first call of a stored procedure / function SP is processed, the body of SP is parsed. When a query of SP is parsed, the info on each encountered table reference is put into a TABLE_LIST object linked into a global chain associated with the query. When parsing of the query is finished, the basic info on the table references from this chain, except table references to derived tables and information schema tables, is put in one hash table associated with SP. When parsing of the body of SP is finished, this hash table is used to construct TABLE_LIST objects for all table references mentioned in SP and link them into the list of such objects passed to a pre-locking process that calls open_and_process_table() for each table from the list. When a TABLE_LIST for a view is encountered, the view is opened and its specification is parsed. For any table reference occurring in the specification a new TABLE_LIST object is created to be included into the list for pre-locking. After all objects in the pre-locking list have been looked through, the tables mentioned in the list are locked. Note that the objects referencing CTEs are just skipped here, as it is impossible to resolve these references without any info on the context where they occur. Now the statements from the body of SP are executed one by one. At the very beginning of the execution of a query the tables used in the query are opened, and open_and_process_table() is now called for each table reference mentioned in the list of TABLE_LIST objects associated with the query that was built when the query was parsed. For each table reference, the reference is first checked against the CTE definitions in whose scope it occurred. If such a definition is found, the reference is considered resolved, and if this is not the first reference to the found CTE, the specification of the CTE is re-parsed and the result of the parsing is added to the parse tree of the query as a sub-tree. If this sub-tree contains table references to other tables, they are added to the list of TABLE_LIST objects associated with the query so that the referenced tables can be opened. When the procedure that opens the tables comes to the TABLE_LIST object created for a non-first reference to a CTE, it discovers that the referenced table instance is not locked and reports an error. Thus processing non-first table references to a CTE in the same way as references to views does not work for queries used in stored procedures / functions. And the main problem is that the current pre-locking mechanism employed for stored procedures / functions does not allow saving the context in which a CTE reference occurs. It is not trivial to save the info about the context where a CTE reference occurs, while the resolution of the table reference cannot be done without this context and consequently the specification for the table reference cannot be determined. This patch solves the above problem by moving resolution of all CTE references to the parsing stage.
More exactly, references to CTEs occurring in a query are resolved right after parsing of the query has finished. After resolution, any CTE reference is marked as a reference to a derived table. So it is excluded from the hash table created for pre-locking of the used base tables and views when the first call of a stored procedure / function is processed. This solution required recursive calls of the parser. The function THD::sql_parser() has been added specifically for recursive invocations of the parser.
-
- 21 May, 2021 1 commit
-
-
Igor Babaev authored
In the code that existed just before this patch, binding of a table reference to the specification of the corresponding CTE happened in the function open_and_process_table(). If the table reference is not the first one in the query, the specification is cloned in the same way as the specification of a view is cloned for any reference to the view. This works fine for standalone queries, but does not work for stored procedures / functions for the following reason. When the first call of a stored procedure / function SP is processed, the body of SP is parsed. When a query of SP is parsed, the info on each encountered table reference is put into a TABLE_LIST object linked into a global chain associated with the query. When parsing of the query is finished, the basic info on the table references from this chain, except table references to derived tables and information schema tables, is put in one hash table associated with SP. When parsing of the body of SP is finished, this hash table is used to construct TABLE_LIST objects for all table references mentioned in SP and link them into the list of such objects passed to a pre-locking process that calls open_and_process_table() for each table from the list. When a TABLE_LIST for a view is encountered, the view is opened and its specification is parsed. For any table reference occurring in the specification a new TABLE_LIST object is created to be included into the list for pre-locking. After all objects in the pre-locking list have been looked through, the tables mentioned in the list are locked. Note that the objects referencing CTEs are just skipped here, as it is impossible to resolve these references without any info on the context where they occur. Now the statements from the body of SP are executed one by one. At the very beginning of the execution of a query the tables used in the query are opened, and open_and_process_table() is now called for each table reference mentioned in the list of TABLE_LIST objects associated with the query that was built when the query was parsed. For each table reference, the reference is first checked against the CTE definitions in whose scope it occurred. If such a definition is found, the reference is considered resolved, and if this is not the first reference to the found CTE, the specification of the CTE is re-parsed and the result of the parsing is added to the parse tree of the query as a sub-tree. If this sub-tree contains table references to other tables, they are added to the list of TABLE_LIST objects associated with the query so that the referenced tables can be opened. When the procedure that opens the tables comes to the TABLE_LIST object created for a non-first reference to a CTE, it discovers that the referenced table instance is not locked and reports an error. Thus processing non-first table references to a CTE in the same way as references to views does not work for queries used in stored procedures / functions. And the main problem is that the current pre-locking mechanism employed for stored procedures / functions does not allow saving the context in which a CTE reference occurs. It is not trivial to save the info about the context where a CTE reference occurs, while the resolution of the table reference cannot be done without this context and consequently the specification for the table reference cannot be determined. This patch solves the above problem by moving resolution of all CTE references to the parsing stage.
More exactly, references to CTEs occurring in a query are resolved right after parsing of the query has finished. After resolution, any CTE reference is marked as a reference to a derived table. So it is excluded from the hash table created for pre-locking of the used base tables and views when the first call of a stored procedure / function is processed. This solution required recursive calls of the parser. The function THD::sql_parser() has been added specifically for recursive invocations of the parser.
-
- 20 May, 2021 1 commit
-
-
Rucha Deodhar authored
m_status == DA_OK_BULK' failed in Diagnostics_area::message from get_schema_tables_record Analysis: SET NAMES changes the character set for character_set_client, character_set_connection and character_set_results to 'filename'. The .frm file of a view has @xx sequences in the SELECT query, which give a parsing error because the 'filename' character set is not parser friendly. When we get the parsing error (ER_PARSE_ERROR), we directly return true without setting the error status. This is caught later in an assertion. Fix: Disallow the 'filename' character set in SET NAMES because it is not parser friendly.
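A sketch of the failing scenario (the view name is illustrative):

    -- before the fix this sequence could hit the assertion when the view's
    -- .frm text (containing @xx sequences) was parsed under the 'filename'
    -- character set; after the fix SET NAMES simply rejects 'filename'
    CREATE VIEW v1 AS SELECT 1;
    SET NAMES 'filename';                      -- now returns an error
    SELECT table_name FROM information_schema.views;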
-
- 19 May, 2021 14 commits
-
-
Monty authored
Many of the changes were needed to be able to collect and print engine names and table version ids in the ddl log.
-
Monty authored
-
Monty authored
The purpose of this task is to ensure that CREATE TRIGGER is atomic. When a trigger is created, we first create a trigger_name.TRN file and then create or update the table_name.TRG files. This is done by creating .TRN~ and .TRG~ files and replacing (or creating) the result files. The new logic is:
- Log CREATE TRIGGER to the DDL log, with a marker if an old trigger existed
- If old .TRN or .TRG files exist, make backup copies of these
- Create the new .TRN and .TRG files as before
- Remove the backups
Crash recovery:
- If the query has been logged to the binary log:
  - delete any left-over backup files
- else
  - Delete any old .TRN~ or .TRG~ files
  - If there were originally some triggers (an old .TRG file existed)
    - If we crashed before creating all backup files
      - Delete existing backup files
    - else
      - Restore backup files
    - end
  - else
    - Delete the .TRN and .TRG files (as there were no triggers before)
One benefit of the new code is that CREATE OR REPLACE TRIGGER is now totally atomic even if there existed an old trigger: either the old trigger will be replaced or the old one will be left untouched. Other things:
- If sql_create_definition_file() failed, there could be memory leaks in CREATE TRIGGER, DROP TRIGGER or CREATE OR REPLACE TRIGGER. This is now fixed.
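The user-visible guarantee can be sketched as follows (table and trigger names are illustrative):

    -- with the new DDL logging, CREATE OR REPLACE TRIGGER is atomic even when
    -- an old trigger exists: after a crash either the old trigger survives
    -- intact or the new one has fully replaced it
    CREATE TABLE t1 (a INT);
    CREATE TRIGGER tr1 BEFORE INSERT ON t1 FOR EACH ROW SET NEW.a = 1;
    CREATE OR REPLACE TRIGGER tr1 BEFORE INSERT ON t1 FOR EACH ROW SET NEW.a = 2;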
-
Monty authored
The logic of the new code is:
- Log CREATE VIEW to the DDL log, with a marker if an old view existed
- If an old view exists (in case of CREATE OR REPLACE VIEW), make a copy of the old view as view_name.frm-
- Create the new view definition file
- Delete the copy of the view if it was created
Crash recovery:
- Delete the view_name.frm~ file (temporary file for the view definition)
- If the query was logged to the binary log
  - Delete the copy of the view if it exists
- else
  - rename the copy of the view over the .frm file (restoring the old definition)
One benefit of the new code is that CREATE OR REPLACE VIEW for an existing view is now fully atomic: either the view will be replaced or the old one will be left unchanged.
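A small illustration of the guarantee (names are illustrative):

    -- with the new logic, CREATE OR REPLACE VIEW for an existing view is
    -- atomic: after a crash either the old definition or the new one is in
    -- place, never a half-written .frm
    CREATE VIEW v1 AS SELECT 1 AS a;
    CREATE OR REPLACE VIEW v1 AS SELECT 2 AS a;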
-
Monty authored
Description of how DROP DATABASE works after this patch:
- Collect the list of tables
- DDL log tables as they are dropped
- DDL log DROP DATABASE
- Delete db.opt
- Delete the data directory
- Log either DROP TABLE or DROP DATABASE to the binary log
- Deactivate the ddl log entry
This is in line with how things were before (minus ddl logging), except that we delete the db.opt file last so as not to lose it if DROP DATABASE fails. On recovery we have to ensure that all dropped tables are logged in the binary log and that they are properly dropped (as with atomic DROP TABLE). No new tables should be dropped as part of recovery. Recovery of an active DROP DATABASE ddl log entry:
- If DROP DATABASE was logged to the ddl log but was not found in the binary log:
  - drop the db.opt file and the database directory
  - Log DROP DATABASE to the binary log
- If DROP DATABASE was not logged to the ddl log
  - Update the binary log with DROP TABLE of the dropped tables. If the table list is longer than max_allowed_packet, then the query will be split into multiple DROP TABLE/VIEW queries.
Other things:
- Added DDL_LOG_STATE and 'current database' as arguments to mysql_rm_table_no_locks(). This was needed to be able to combine ddl logging of DROP DATABASE and DROP TABLE and make the generated DROP TABLE statements shorter.
- To make the DROP TABLE statement created by the ddl log shorter, I changed the binlogged query to use the current directory and omit the directory part for all tables in the current directory.
- Merged some DROP TABLE and DROP VIEW code in the ddl logger. This was done to be able to get separate DROP VIEW and DROP TABLE statements in the binary log.
- Added a 'recovery_state' variable to remember the state of dropped tables and views.
- Moved out the code that drops database objects (stored procedures) from mysql_rm_db_internal() to drop_database_objects() for better code reuse.
- Made mysql_rm_db_internal() global so that it could be used by the ddl recovery code.
-
Monty authored
Logging logic:
- Log tables, just before they are dropped, to the ddl log
- After the last table for the statement is dropped, log an xid for the whole ddl log event
In case of crash:
- First remove any active DROP TABLE events from the ddl log that match xids found in the binary log (this means the drop was successful and was properly logged)
- Loop over all active DROP TABLE events
  - Ensure that the table is completely dropped
- Write a DROP TABLE entry to the binary log with the dropped tables
Other things:
- Added code to ha_drop_table() to be able to tell the difference between get_new_handler() failing because of out-of-memory and because the handler refused / was not able to create a handler. This was needed to get sequences to work, as sequences need a share object to be passed to get_new_handler().
- TC_LOG_BINLOG::recover() was changed to always collect Xid's from the binary log and always call ddl_log_close_binlogged_events(). This was needed to be able to collect DROP TABLE events with embedded Xid's (used by the ddl log).
- Added a new variable "$grep_script" to the binlog filter to be able to find only rows that match a regexp.
- Had to adjust some tests that changed because drop statements are a bit larger in the binary log than before (as we have to store the xid)
- MDEV-25588 Atomic DDL: Binlog query event written upon recovery is corrupt fixed (in the original commit).
-
Monty authored
- Major rewrite of ddl_log.cc and ddl_log.h
- ddl_log.cc describes at the beginning how the recovery works
- ddl_log.log has a unique signature and is dynamic. It's easy to add more information to the header and other ddl blocks while still being able to execute old ddl entries.
- IO_SIZE for ddl blocks is now dynamic. It can be changed without affecting recovery of old logs.
- The code is more modular and is now usable outside of partition handling.
- Renamed the log file to ddl_recovery.log and added the option --log-ddl-recovery to allow one to specify the path & filename.
- Added ddl_log_entry_phase[], the number of phases for each DDL action, which allowed me to greatly simplify set_global_from_ddl_log_entry()
- Changed how strings are stored in log entries, which allows us to store much more information in a log entry.
- The ddl log is now always created at start and deleted on normal shutdown. This simplifies things notably.
- Added probes debug_crash_here() and debug_simulate_error() to simplify crash testing and allow a crash after a given number of times a probe is executed. See comments in debug_sync.cc and rename_table.test for how this can be used.
- Reverting failed table and view renames is done through the ddl log. This ensures that the ddl log is tested also outside of recovery.
- Added helper function 'handler::needs_lower_case_filenames()'
- Extended the binary log with Q_XID events. The ddl log handling uses this to check if a ddl log entry was logged to the binary log (if yes, it will be deleted from the log during ddl_log_close_binlogged_events())
- If a DDL entry fails 3 times, disable it. This is to ensure that if we have a crash in the ddl recovery code the server will not get stuck in a forever crash-restart-crash loop.
mysqltest.cc changes:
- --die will now replace $variables with their values
- $error will contain the error of the last failed statement
Storage engine changes:
- maria_rename() was changed to be more robust against crashes during rename.
-
Monty authored
-
Monty authored
This change removed 68 explicit strlen() calls from the code. The following renames were done to ensure we don't use the old names when merging code from earlier releases, as using the new variables for print functions could result in crashes:
- charset->csname renamed to charset->cs_name
- charset->name renamed to charset->coll_name
Almost all changes were mechanical, except:
- Changed to use the new Protocol::store(LEX_CSTRING..) when possible
- Changed to use field->store(LEX_CSTRING*, CHARSET_INFO*) when possible
- Changed to use String->append(LEX_CSTRING&) when possible
Other things:
- There were compiler issues with ensuring that all character set names point to the same string: gcc doesn't allow one to use integer constants when defining global structures (constant char * pointers work fine). To get around this, I declared defines for each character set name length.
-
Monty authored
Changes:
- To detect automatic strlen() I removed the methods in String that use 'const char *' without a length:
  - String::append(const char*)
  - Binary_string(const char *str)
  - String(const char *str, CHARSET_INFO *cs)
  - append_for_single_quote(const char *)
  All usage of append(const char*) is changed to either use String::append(char), String::append(const char*, size_t length) or String::append(LEX_CSTRING)
- Added STRING_WITH_LEN() around constant string arguments to String::append()
- Added an overflow argument to escape_string_for_mysql() and escape_quotes_for_mysql() instead of returning (size_t) -1 on overflow. This was needed as most usage of the above functions never tested the result for -1 and would have given wrong results or crashes in case of overflow.
- Added Item_func_or_sum::func_name_cstring(), which returns LEX_CSTRING. Changed all Item_func::func_name()'s to func_name_cstring()'s. The old Item_func_or_sum::func_name() is now an inline function that returns func_name_cstring().str.
- Changed Item::mode_name() and Item::func_name_ext() to return LEX_CSTRING.
- Changed for some functions the name argument from const char * to const LEX_CSTRING &:
  - Item::Item_func_fix_attributes()
  - Item::check_type_...()
  - Type_std_attributes::agg_item_collations()
  - Type_std_attributes::agg_item_set_converter()
  - Type_std_attributes::agg_arg_charsets...()
  - Type_handler_hybrid_field_type::aggregate_for_result()
  - Type_handler_geometry::check_type_geom_or_binary()
  - Type_handler::Item_func_or_sum_illegal_param()
  - Predicant_to_list_comparator::add_value_skip_null()
  - Predicant_to_list_comparator::add_value()
  - cmp_item_row::prepare_comparators()
  - cmp_item_row::aggregate_row_elements_for_comparison()
  - Cursor_ref::print_func()
- Removed String_space() as it was only used in one case and that could be simplified to not use String_space(), thanks to the fixed my_vsnprintf().
- Added some const LEX_CSTRING's for common strings:
  - NULL_clex_str, DATA_clex_str, INDEX_clex_str.
- Changed primary_key_name to a LEX_CSTRING
- Renamed String::set_quick() to String::set_buffer_if_not_allocated() to clarify what the function really does.
- Renamed the protocol function bool store(const char *from, CHARSET_INFO *cs) to bool store_string_or_null(const char *from, CHARSET_INFO *cs). This was done both to clarify the difference between this 'store' function and the others, and to make it easier to find unoptimal usage of store() calls.
- Added Protocol::store(const LEX_CSTRING*, CHARSET_INFO*)
- Changed some 'const char*' arrays to instead be of type LEX_CSTRING.
- class Item_func_units now uses LEX_CSTRING for name.
Other things:
- Fixed a bug in mysql.cc:construct_prompt() where a wrong escape character in the prompt would cause some part of the prompt to be duplicated.
- Fixed a lot of instances where the length of the argument to append is known or easily obtained but was not used.
- Removed some unneeded 'virtual' definitions for functions that were inherited from the parent. I added override to these.
- Fixed Ordered_key::print() to preallocate the needed buffer. Old code could cause memory overruns.
- Simplified some loops when adding char * to a String with delimiters.
-
Alexander Barkov authored
The name change was to make the intention of the flag more clear and also because most usage of the old flag was to test for NOT IS_AUTOGENERATED_NAME. Note that the new flag is the inverse of the old one!
-
Monty authored
This was done to simplify copying of with_* flags. Other things:
- Changed Flags to C++ enums, which enables gdb to print out bit values for the flags. This also enables compiler errors if one tries to manipulate a non-existing bit in a variable.
- Added set_maybe_null() as a shortcut, as setting the MAYBE_NULL flag was done in a LOT of places.
- Renamed the PARAM flag to SP_VAR to ensure it's not confused with persistent statement parameters.
-
Michael Widenius authored
The reason for the change is that neither clang nor gcc can generate efficient code when several bit fields are changed at the same time or when copying one or more bits between identical bit fields. Updating bits explicitly with & and | is MUCH more efficient than what current compilers can do.
-
Michael Widenius authored
This is to make the Item instances smaller
-
- 21 Apr, 2021 2 commits
-
-
Vicențiu Ciorbaru authored
Replace
* select_lex::offset_limit
* select_lex::select_limit
* select_lex::explicit_limit
with select_lex::Lex_select_limit
The Lex_select_limit already existed with the same elements and was used by the yacc parser. This commit is in preparation for the FETCH FIRST implementation, as it simplifies a lot of the code. Additionally, the parser is simplified by making use of the stack to return Lex_select_limit objects. Cleanup of init_query() too: removes explicit_limit= 0 as it's done a bit later in init_select() with limit_params.empty()
-
Alexey Botchkov authored
A specific table handler for table functions was introduced and used to implement JSON_TABLE.
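A small usage example of the feature this handler enables (the data and column names are illustrative):

    -- JSON_TABLE turns a JSON document into a relational table on the fly
    SELECT jt.name, jt.price
    FROM JSON_TABLE('[{"name":"pen","price":2},{"name":"book","price":10}]',
                    '$[*]' COLUMNS (name  VARCHAR(32) PATH '$.name',
                                    price INT         PATH '$.price')) AS jt;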
-
- 08 Apr, 2021 1 commit
-
-
Daniel Black authored
Adds an implementation for SELECT ... FOR UPDATE SKIP LOCKED / SELECT ... LOCK IN SHARE MODE SKIP LOCKED. This is implemented only in InnoDB at the moment, not in RocksDB yet. This adds a new handler flag HA_CAN_SKIP_LOCKED that will be used when the storage engine advertises the flag. When a storage engine indicates this flag it will get the TL_WRITE_SKIP_LOCKED and TL_READ_SKIP_LOCKED transaction types. The Lex structure has been updated to store both the FOR UPDATE/LOCK IN SHARE MODE as well as the SKIP LOCKED, so the SHOW CREATE VIEW implementation is simpler. "SELECT FOR UPDATE ... SKIP LOCKED" combined with CREATE TABLE AS or INSERT .. SELECT on the result set is not safe for STATEMENT based replication. MIXED replication will replicate this as row based events. Thanks to guidance from the Facebook commit https://github.com/facebook/mysql-5.6/commit/193896c466d43fd905a62a60f1d73fd9c551a6e4 which helped verify the basic test case and the components that needed implementing (even though every part was implemented differently). Thanks Marko for guidance on a simpler InnoDB implementation. Reviewers: Marko, Monty
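A minimal usage sketch (the job_queue table is hypothetical):

    -- rows already locked by another transaction are silently skipped
    -- instead of making this SELECT wait or time out on the lock
    START TRANSACTION;
    SELECT * FROM job_queue WHERE state = 'new'
      FOR UPDATE SKIP LOCKED;
    -- ... process the claimed rows ...
    COMMIT;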
-
- 07 Dec, 2020 1 commit
-
-
Igor Babaev authored
For table references to CTEs the field TABLE_LIST::db must be set to an empty string, as is done for table references to derived tables, so that CTEs are processed in a way similar to how derived tables are processed. Approved by Oleksandr Byelkin <sanja@mariadb.com>
-
- 29 Jul, 2020 1 commit
-
-
Sergei Golubchik authored
if mysql_create_view() is aborted when the view is already linked into the lex (when WSREP_TO_ISOLATION_BEGIN fails), it should not be linked there again on err:.
-
- 14 Jun, 2020 1 commit
-
-
Monty authored
- Produce a "Note" for all wrongly dropped objects (Like doing DROP VIEW on a table). - IF EXISTS ends with a list of all not existing objects, instead of a separate note for every not existing object. Other things: - Fixed bug where one could do CREATE TEMPORARY SEQUENCE multiple times and create multiple temporary sequences with the same name.
-
- 12 Jun, 2020 2 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
if mysql_create_view is aborted when `view` isn't unlinked, it should not be linked back on cleanup
-
- 03 Apr, 2020 2 commits
-
-
Aleksey Midenkov authored
libmariadb revision updated.
-
Sergey Vojtovich authored
TDC_RT_REMOVE_ALL -> tdc_remove_table(). Some occurrences replaced with TDC_element::flush() (whenever a TABLE_SHARE is available). TDC_RT_REMOVE_NOT_OWN[_KEEP_SHARE] -> TDC_element::flush(). These modes assume that the current thread owns a TABLE_SHARE reference, which means we can avoid the hash lookup and flush unused TABLE instances directly. TDC_RT_REMOVE_UNUSED -> TDC_element::flush_unused(). Only [ab]used by mysql_admin_table() currently. Should be removed eventually. Part of MDEV-17882 - Cleanup refresh version
-
- 10 Mar, 2020 2 commits
-
-
Alexander Barkov authored
-
Oleksandr Byelkin authored
Added CYCLE ... RESTRICT (nonstandard) clause to recursive CTE.
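A sketch of how the new nonstandard clause can be used, assuming a hypothetical edges(src, dst) table:

    -- recursive CTE over a graph; CYCLE ... RESTRICT stops the recursion
    -- from revisiting rows whose listed columns would repeat (i.e. cycles)
    WITH RECURSIVE reachable (node) AS (
      SELECT 1
      UNION ALL
      SELECT e.dst FROM edges e JOIN reachable r ON e.src = r.node
    )
    CYCLE node RESTRICT
    SELECT * FROM reachable;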
-