- 16 Sep, 2024 3 commits
-
-
Christian Gonzalez authored
Update `SESSION_USER()` behaviour to be comparable with `CURRENT_USER()`. `SESSION_USER()` now returns the user and host columns from `mysql.user` that were used to authenticate the user when the session was created. Historically `SESSION_USER()` was an alias of the `USER()` function. The main difference from `USER()` after this change is that `SESSION_USER()` returns the host column from `mysql.user` instead of the client host or IP.

NOTE: the `SESSION_USER_IS_USER` old mode is added to make the change backward compatible.

All new code of the whole pull request, including one or several files that are either new files or modified ones, is contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services, Inc.
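For illustration, assuming a hypothetical account 'app'@'10.%' connecting from 10.0.0.5, the values would differ roughly as follows:

  SELECT USER();          -- 'app@10.0.0.5'  (client host/IP, unchanged behaviour)
  SELECT SESSION_USER();  -- 'app@10.%'      (user and host columns from mysql.user)
  SELECT CURRENT_USER();  -- 'app@10.%'
  -- Restore the old aliasing of SESSION_USER() to USER():
  SET @@old_mode = 'SESSION_USER_IS_USER';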
-
Alexander Barkov authored
Changing the return type of the following functions:
- CURRENT_TIMESTAMP, CURRENT_TIMESTAMP(), NOW()
- SYSDATE()
- FROM_UNIXTIME()
from DATETIME to TIMESTAMP.

Note, the old function NOW() returning DATETIME is still available as LOCALTIMESTAMP or LOCALTIMESTAMP(), e.g.:

  SELECT
    LOCALTIMESTAMP,     -- DATETIME
    CURRENT_TIMESTAMP;  -- TIMESTAMP

The change in the functions' return data type fixes some problems that occurred near a DST change:

- Problem #1
    INSERT INTO t1 (timestamp_field) VALUES (CURRENT_TIMESTAMP);
    INSERT INTO t1 (timestamp_field) VALUES (COALESCE(CURRENT_TIMESTAMP));
  could result in two different values inserted.

- Problem #2
    INSERT INTO t1 (timestamp_field) VALUES (FROM_UNIXTIME(1288477526));
    INSERT INTO t1 (timestamp_field) VALUES (FROM_UNIXTIME(1288477526+3600));
  could result in two equal TIMESTAMP values near a DST change.

Additional changes:
- FROM_UNIXTIME(0) now returns SQL NULL instead of '1970-01-01 00:00:00' (assuming time_zone='+00:00')
- UNIX_TIMESTAMP('1970-01-01 00:00:00') now returns SQL NULL instead of 0 (assuming time_zone='+00:00')

These additional changes are needed for consistency with TIMESTAMP fields, which cannot store '1970-01-01 00:00:00 +00:00'.
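A quick way to observe the new return types, assuming column types are derived from the SELECT list in CREATE TABLE ... AS SELECT (table name illustrative):

  CREATE TABLE t1 AS SELECT CURRENT_TIMESTAMP AS ts, LOCALTIMESTAMP AS dt;
  SHOW CREATE TABLE t1;  -- ts is now TIMESTAMP, dt stays DATETIME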
-
Alexander Barkov authored
Adding support for the ROW data type in the stored function RETURNS clause:

- explicit ROW(..members...) for both sql_mode=DEFAULT and sql_mode=ORACLE
    CREATE FUNCTION f1() RETURNS ROW(a INT, b VARCHAR(32)) ...
- anchored "ROW TYPE OF [db1.]table1" declarations for sql_mode=DEFAULT
    CREATE FUNCTION f1() RETURNS ROW TYPE OF test.t1 ...
- anchored "[db1.]table1%ROWTYPE" declarations for sql_mode=ORACLE
    CREATE FUNCTION f1() RETURN test.t1%ROWTYPE ...

Adding support for anchored scalar data types in the RETURNS clause:

- "TYPE OF [db1.]table1.column1" for sql_mode=DEFAULT
    CREATE FUNCTION f1() RETURNS TYPE OF test.t1.column1;
- "[db1.]table1.column1%TYPE" for sql_mode=ORACLE
    CREATE FUNCTION f1() RETURN test.t1.column1%TYPE;

Details:

- Adding a new sql_mode_t parameter to
    sp_head::create()
    sp_head::sp_head()
    sp_package::create()
    sp_package::sp_package()
  to guarantee early initialization of sp_head::m_sql_mode. Before this change, this member was not initialized at all during CREATE FUNCTION/PROCEDURE/PACKAGE statements, and was not used. Now it needs to be initialized to write the mysql.proc.returns column properly, according to the create-time sql_mode.

- Code refactoring to make things simpler and functions smaller:
  * Adding a new method Field_row::row_create_fields(THD *thd, List<Spvar_definition> *list) to make a Virtual_tmp_table with Fields for ROW members from an explicit definition.
  * Adding a new method Field_row::row_create_fields(THD *thd, const Spvar_definition &def) to make a Virtual_tmp_table with Fields for ROW members from an explicit or a table anchored definition.
  * Adding a new method Item_args::add_array_of_item_field(THD *thd, const Virtual_tmp_table &vtable) to create an array of Item_field corresponding to all Field instances in a Virtual_tmp_table.
  * Removing Item_field_row::row_create_items(). It was decomposed into the new methods described above.
  * Moving the code from the loop body in sp_rcontext::init_var_items() into a separate method Spvar_definition::make_item_field_row(), to make the code clearer (smaller functions). make_item_field_row() itself uses the new methods described above.

- Changing the data type of sp_head::m_return_field_def from Column_definition to Spvar_definition. So now it supports not only SQL column field types, but also explicit ROW and anchored ROW data types, as well as anchored column types.

- Adding a new Column_definition parameter to sp_head::create_result_field(). Before this patch, create_result_field() took the definition only from m_return_field_def. Now it's also called with a local Column_definition variable which contains the explicit definition resolved from an anchored definition.

- Modifying sql_yacc.yy to support the new grammar. Adding new helper methods:
  * sf_return_fill_definition_row()
  * sf_return_fill_definition_rowtype_of()
  * sf_return_fill_definition_type_of()

- Fixing tests in:
  * Virtual_tmp_table::setup_field_pointers() in sql_select.cc
  * Send_field::normalize() in field.h
  * store_column_type()
  to prevent calling Type_handler_row::field_type(), which is implemented as DBUG_ASSERT(0). Before this patch the affected methods and functions were called only for scalar data types. Now ROW is also possible.

- Adding a new virtual method Field::cols()

- Overriding methods:
    Item_func_sp::cols()
    Item_func_sp::element_index()
    Item_func_sp::check_cols()
    Item_func_sp::bring_value()
  to support the ROW data type.

- Extending the rule sp_return_type to support:
  * explicit ROW and anchored ROW data types
  * anchored scalar data types

- Overriding Field_row::sql_type() to print the data type of an explicit ROW.
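A minimal end-to-end sketch of the new RETURNS ROW syntax in sql_mode=DEFAULT; the table, function, and member names are illustrative:

  CREATE TABLE test.t1 (a INT, b VARCHAR(32));
  DELIMITER $$
  CREATE FUNCTION f1() RETURNS ROW(a INT, b VARCHAR(32))
  BEGIN
    DECLARE r ROW(a INT, b VARCHAR(32));
    SET r.a= 1, r.b= 'b1';
    RETURN r;
  END$$
  CREATE FUNCTION f2() RETURNS ROW TYPE OF test.t1  -- anchored to the table definition
  BEGIN
    DECLARE r ROW TYPE OF test.t1;
    SET r.a= 2, r.b= 'b2';
    RETURN r;
  END$$
  DELIMITER ;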
-
- 14 Sep, 2024 4 commits
-
-
Sergei Golubchik authored
MDEV-33407 Parser support for vector indexes

The syntax is:
  create table t1 (... vector index (v) ...);

Limitations:
* v is a binary string and NOT NULL
* only one vector index per table
* temporary tables are not supported

MDEV-33404 Engine-independent indexes: subtable method

Added support for so-called "high level indexes"; they are not visible to the storage engine and are implemented on the SQL level. For every such index in a table, say, t1, the server implicitly creates a second table named, like, t1#i#05 (where "05" is the index number in t1). This table has a fixed structure, no frm, is not accessible directly, doesn't go into the table cache, and needs no MDLs.

MDEV-33406 Basic optimizer support for k-NN searches

For a query like SELECT ... ORDER BY func(), the optimizer will use item_func->part_of_sortkey() to decide what keys can be used to resolve ORDER BY.
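A sketch of the new syntax and the query shape the optimizer can map onto a vector index; the distance function name below is an assumption for illustration and is not part of this parser commit:

  CREATE TABLE t1 (
    id INT PRIMARY KEY,
    v  BLOB NOT NULL,        -- vector stored as a binary string, NOT NULL
    VECTOR INDEX (v)
  );
  SELECT id FROM t1 ORDER BY VEC_DISTANCE(v, @query_vec) LIMIT 10;  -- k-NN style ORDER BY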
-
Sergei Golubchik authored
create templates thd->alloc<X>(n) to use instead of (X*)thd->alloc(sizeof(X)*n) and the same for thd->calloc(). By default the type is char, so the old usage thd->alloc(size) still works.
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
- 25 Jul, 2024 1 commit
-
-
Monty authored
MDEV-33856: Alternative Replication Lag Representation via Received/Executed Master Binlog Event Timestamps

This commit adds 3 new status variables to 'show all slaves status':
- Master_last_event_time: timestamp of the last event read from the master by the IO thread.
- Slave_last_event_time: master timestamp of the last event committed on the slave.
- Master_Slave_time_diff: the difference of the above two timestamps.

All the above variables are NULL until the slave has started and has read one query event from the master that changes data.

- Added information_schema.slave_status, which allows us to remove:
  - show_master_info(), show_master_info_get_fields(), send_show_master_info_data(), show_all_master_info()
  - class Sql_cmd_show_slave_status
  - Protocol::store(I_List<i_string_pair>* str_list), as it is not used anymore.
- Changed the old SHOW SLAVE STATUS and SHOW ALL SLAVES STATUS to use the SELECT code path, as all other SHOW ... STATUS commands do.

Other things:
- Xid_log_time is set to the time of commit, to allow a slave that reads the binary log to calculate Master_last_event_time and Slave_last_event_time. This is needed as there is no 'exec_time' for row events.
- Fixed that Load_log_event calculates exec_time identically to Query_event.
- Updated RESET SLAVE to reset Master_last_event_time and Slave_last_event_time.
- Updated the SQL thread's update on the first transaction read-in to only update Slave_last_event_time on group events.
- Fixed possible (unlikely) bugs in the sql_show.cc ...old_format() functions if allocation of 'field' would fail.

Reviewed By: Brandon Nesterenko <brandon.nesterenko@mariadb.com>, Kristian Nielsen <knielsen@knielsen-hq.org>
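The new lag columns can be read either from the new information_schema table or from SHOW ALL SLAVES STATUS; a sketch using the column names listed above (values and exact column casing illustrative):

  SELECT Master_last_event_time, Slave_last_event_time, Master_Slave_time_diff
    FROM information_schema.slave_status;
  SHOW ALL SLAVES STATUS;  -- now served through the same SELECT code path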
-
- 16 Jul, 2024 1 commit
-
-
Daniel Black authored
Gain MySQL compatibility by allowing table aliases in a single table statement. This now supports the syntax of:

  DELETE [delete_opts] FROM tbl_name [[AS] tbl_alias]
    [PARTITION (partition_name [, partition_name] ...)]
    ....

The delete.test is from MySQL commit 1a72b69778a9791be44525501960b08856833b8d / Change-Id: Iac3a2b5ed993f65b7f91acdfd60013c2344db5c0.

Co-Author: Gleb Shchepa <gleb.shchepa@oracle.com> (for delete.test)
Reviewed by Igor Babaev (igor@mariadb.com)
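For example, the following single-table DELETE with an alias is now accepted (table and column names illustrative):

  DELETE FROM t1 AS t WHERE t.a > 10;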
-
- 10 Jul, 2024 1 commit
-
-
Dave Gosselin authored
Improve performance of queries like SELECT * FROM t1 WHERE field = NAME_CONST('a', 4); by, in this example, replacing the WHERE clause with field = 4 in the case of ref access. The rewrite is done during fix_fields and we disambiguate this case from other cases of NAME_CONST by inspecting where we are in parsing. We rely on THD::where to accomplish this. To improve performance there, we change the type of THD::where to be an enumeration, so we can avoid string comparisons during Item_name_const::fix_fields. Consequently, this patch also changes all usages of THD::where to conform likewise.
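In effect the optimizer-facing predicate becomes a plain constant comparison; a sketch of the before/after using the query from the commit message:

  -- As written:
  SELECT * FROM t1 WHERE field = NAME_CONST('a', 4);
  -- After the fix_fields rewrite the condition behaves as:
  SELECT * FROM t1 WHERE field = 4;   -- enabling ref access on an index over `field`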
-
- 04 Jun, 2024 1 commit
-
-
Alexander Barkov authored
The @@global.character_set_client variable could erroneously be set to a non-default collation of its character set, which further made the `SET NAMES DEFAULT` statement crash the server.

Fixing the code to make sure that the global values of these variables:
  @@character_set_client
  @@character_set_connection
  @@character_set_server
  @@character_set_database
point to the default compiled collations of the character set.
-
- 27 May, 2024 3 commits
-
-
Sergei Golubchik authored
add old-mode that restores inconsistent legacy behavior for FLUSH STATUS. It doesn't affect FLUSH { SESSION | GLOBAL } STATUS.
-
Monty authored
- FLUSH GLOBAL STATUS now resets most global_status_vars. At this stage, this is mainly to be used for testing.
- FLUSH SESSION STATUS added as an alias for FLUSH STATUS.
- FLUSH STATUS does not require any privilege (before it required RELOAD).
- FLUSH GLOBAL STATUS requires the RELOAD privilege.
- All global status resets moved to FLUSH GLOBAL STATUS.
- Replication semisync status variables are now reset by FLUSH GLOBAL STATUS.
- In test cases, the only changes are:
  - Replace FLUSH STATUS with FLUSH GLOBAL STATUS
  - Replace FLUSH STATUS with FLUSH STATUS; FLUSH GLOBAL STATUS. This was only done in a few tests where the test was using SHOW STATUS for both local and global variables.
- Uptime_since_flush_status is now always provided, independent of whether ENABLED_PROFILING is enabled when compiling MariaDB.
- @@global.Uptime_since_flush_status is reset on FLUSH GLOBAL STATUS and @@session.Uptime_since_flush_status is reset on FLUSH SESSION STATUS.
- When connected, @@session.Uptime_since_flush_status is set to 0.
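A short sketch of the new commands and the split session/global uptime counters:

  FLUSH SESSION STATUS;   -- alias for FLUSH STATUS; no privilege required
  FLUSH GLOBAL STATUS;    -- resets most global status variables; requires RELOAD
  SHOW SESSION STATUS LIKE 'Uptime_since_flush_status';
  SHOW GLOBAL  STATUS LIKE 'Uptime_since_flush_status';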
-
Monty authored
This task is to ensure we have a clear definition and rules of how to repair or optimize a table.

The rules are:
- REPAIR should be used with tables that are crashed and are unreadable (hardware issues with non-readable blocks, blocks with 'unexpected data', etc).
- OPTIMIZE TABLE should be used to optimize the storage layout for the table (recover space from deleted rows and optimize the index structure).
- ALTER TABLE table_name FORCE should be used to rebuild the .frm file (the table definition) and the table (with the original table row format). If the table is from an older MariaDB/MySQL release with a different storage format, it will convert the data to the new format. ALTER TABLE ... FORCE is used as part of mariadb-upgrade.

Here follows some more background:

The 3 ways to repair a table are:
1) "ALTER TABLE table_name FORCE" (no other options). As an alias we allow: "ALTER TABLE table_name ENGINE=original_engine"
2) "REPAIR TABLE" (without FORCE)
3) "OPTIMIZE TABLE"

All of the above commands will optimize row space usage (which means that space will be needed to hold a temporary copy of the table) and re-generate all indexes. They will also try to replicate the original table definition as exactly as possible.

For ALTER TABLE and "REPAIR TABLE without FORCE", the following holds: if the table is from an older MariaDB version and data conversion is needed (for example for old type HASH columns, the MySQL JSON type or the new TIMESTAMP format), "ALTER TABLE table_name FORCE, ALGORITHM=COPY" will be used.

The differences between the algorithms are:
1) Will use the fastest algorithm the engine supports to do a full repair of the table (except if data conversions are needed).
2) Will use the storage engine's internal REPAIR facility (MyISAM, Aria). If the engine does not support REPAIR then "ALTER TABLE FORCE, ALGORITHM=COPY" will be used. If there were data incompatibilities (which means that FORCE was used) then there will be a warning after REPAIR that ALTER TABLE FORCE is still needed. The reason for this is that REPAIR may be able to go around data errors (wrong incompatible data, crashed or unreadable sectors) that ALTER TABLE cannot handle.
3) Will use the storage engine's internal OPTIMIZE. If the engine does not support OPTIMIZE, then "ALTER TABLE FORCE" is used.

The above ensures that ALTER TABLE FORCE is able to correct almost any errors in the row or index data. In case of corrupted blocks, REPAIR possibly followed by ALTER TABLE is needed. This is important as mariadb-upgrade executes ALTER TABLE table_name FORCE for any table that must be re-created.

Bugs fixed with InnoDB tables when using ALTER TABLE FORCE:
- No error for INNODB_DEFAULT_ROW_FORMAT=COMPACT even if the row length would be too wide. (Independent of innodb_strict_mode.)
- Tables using symlinks will be symlinked after any of the above commands (independent of the setting of --symbolic-links).

If one specifies an algorithm together with ALTER TABLE FORCE, things will work as before (except if data conversion is required, as then the COPY algorithm is enforced).

ALTER TABLE .. OPTIMIZE ALL PARTITIONS will work as before.

Other things:
- FORCE argument added to REPAIR to allow one to first run internal repair to fix damaged blocks and then follow it with ALTER TABLE.
- REPAIR will not update frm_version if ha_check_for_upgrade() finds that the table is still incompatible with the current version. In this case the REPAIR will end with an error.
- REPAIR for storage engines that do not have native repair, like InnoDB, now uses ALTER TABLE FORCE.
- REPAIR csv-table USE_FRM now works. It did not work before as CSV tables had the extension list in the wrong order.
- Default error message length for %M increased from 128 to 256 to not cut information from REPAIR.
- Documented the HA_ADMIN_XX variables related to repair.
- Added HA_ADMIN_NEEDS_DATA_CONVERSION to signal that we have to do data conversions when converting the table (and thus the ALTER TABLE COPY algorithm is needed).
- Fixed a typo in an error message (caused test changes).
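The three rebuild paths side by side, per the rules above (table name illustrative):

  ALTER TABLE t1 FORCE;          -- rebuild .frm and data, keeping the original row format
  ALTER TABLE t1 ENGINE=InnoDB;  -- alias of the above when InnoDB is already the table's engine
  REPAIR TABLE t1;               -- engine-native repair; falls back to ALTER TABLE ... FORCE, ALGORITHM=COPY
  OPTIMIZE TABLE t1;             -- engine-native optimize; falls back to ALTER TABLE ... FORCE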
-
- 27 Apr, 2024 1 commit
-
-
Alexander Barkov authored
Fixing the problem that an operation involving a mix of two or more GEOMETRY operands did not preserve their SRIDs. Now SRIDs are preserved by hybrid functions, subqueries, TVCs, UNIONs, VIEWs.

Incompatible change: an attempt to mix two different SRIDs now raises an error.

Details:
- Adding a new class Type_extra_attributes. It's a generic container which can store very specific data type attributes. For now it can store one uint32 and one const pointer attribute (for GEOMETRY's SRID and for ENUM/SET TYPELIB respectively). In the future it can grow as needed. Type_extra_attributes will also be reused soon to store "const Type_zone*" pointers for the TIMESTAMP's "WITH TIME ZONE 'tz'" attribute (a timestamp data type with a fixed time zone independent from @@time_zone). The time zone attribute will be stored in exactly the same way as a TYPELIB pointer is stored by ENUM/SET.
- Removing Column_definition_attributes members "interval" and "srid". Deriving Column_definition_attributes from the generic attribute container Type_extra_attributes instead.
- Adding a new class Type_typelib_attributes, to store the TYPELIB of the ENUM and SET data types. Deriving Field_enum from it. Removing the member Field_enum::typelib.
- Adding a new class Type_geom_attributes, to store the GEOMETRY related attributes. Deriving Field_geom from it. Removing the member Field_geom::srid.
- Removing virtual methods:
    Field::get_typelib()
    Type_all_attributes::get_typelib()
    Type_all_attributes::set_typelib()
  They were very specific to TYPELIB. Adding more generic virtual methods instead:
  * Field::type_extra_attributes() - to get extra attributes
  * Type_all_attributes::type_extra_attributes() - to get extra attributes
  * Type_all_attributes::type_extra_attributes_addr() - to set extra attributes
- Removing Item_type_holder::enum_set_typelib. Deriving Item_type_holder from the generic attribute container Type_extra_attributes instead. This makes it possible for UNION to preserve SRID (in addition to preserving TYPELIB).
- Deriving Item_hybrid_func from Type_extra_attributes. This makes it possible for hybrid functions (e.g. CASE, COALESCE, LEAST, GREATEST etc) to preserve SRID.
- Deriving Item_singlerow_subselect from Type_extra_attributes and overriding methods:
  * Item_cache::type_extra_attributes()
  * subselect_single_select_engine::fix_length_and_dec()
  * Item_singlerow_subselect::type_extra_attributes()
  * Item_singlerow_subselect::type_extra_attributes_addr()
  This is needed to preserve SRID in subqueries and TVCs.
- Cleanup: fixing the data type of members
  * Binlog_type_info::m_enum_typelib
  * Binlog_type_info::m_set_typelib
  from "TYPELIB *" to "const TYPELIB *"
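A sketch of the user-visible effect, assuming columns declared with explicit SRIDs via the REF_SYSTEM_ID attribute (table and SRID values illustrative):

  CREATE TABLE t1 (g GEOMETRY REF_SYSTEM_ID=4326, g2 GEOMETRY REF_SYSTEM_ID=0);
  SELECT COALESCE(g, g) FROM t1;    -- hybrid function result now keeps SRID 4326
  SELECT COALESCE(g, g2) FROM t1;   -- mixing two different SRIDs now raises an error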
-
- 23 Apr, 2024 1 commit
-
-
Monty authored
I checked all stack overflow potential problems found with
  gcc -Wstack-usage=16384 and
  clang -Wframe-larger-than=16384 -no-inline

Fixes:
- Added '#pragma clang diagnostic ignored "-Wframe-larger-than="' to a lot of functions where stack usage is large but reasonable.
- Added stack check warnings to BUILD scripts when using clang and debug.

Functions changed to use malloc() instead of allocating things on the stack:
- read_bootstrap_query() now allocates line_buffer (20000 bytes) with malloc() instead of using the stack. This has a small performance impact but is not relevant for bootstrap.
- mroonga grn_select() used 65856 bytes on the stack. Changed it to use malloc().
- Wsrep_schema::replay_transaction() and Wsrep_schema::recover_sr_transactions().
- Connect zipOpen3()

Not fixed:
- mroonga/vendor/groonga/lib/expr.c grn_proc_call() uses 43712 bytes of stack. However this is not easy to fix, as the stack usage is caused by a lot of code generated by defines.
- Most changes in mroonga/groonga were only the addition of pragmas to disable stack warnings.
- rocksdb/options/options_helper.cc uses 20288 bytes of stack space. (No reason to fix except to get rid of the compiler warning.)
- Cases using alloca() where the allocation size is reasonable.
- An issue in libmariadb (reported to connectors).
-
- 18 Apr, 2024 1 commit
-
-
Alexander Barkov authored
This patch also fixes:
  MDEV-33050 Built-in schemas like oracle_schema are accent insensitive
  MDEV-33084 LASTVAL(t1) and LASTVAL(T1) do not work well with lower-case-table-names=0
  MDEV-33085 Tables T1 and t1 do not work well with ENGINE=CSV and lower-case-table-names=0
  MDEV-33086 SHOW OPEN TABLES IN DB1 -- is case insensitive with lower-case-table-names=0
  MDEV-33088 Cannot create triggers in the database `MYSQL`
  MDEV-33103 LOCK TABLE t1 AS t2 -- alias is not case sensitive with lower-case-table-names=0
  MDEV-33109 DROP DATABASE MYSQL -- does not drop SP with lower-case-table-names=0
  MDEV-33110 HANDLER commands are case insensitive with lower-case-table-names=0
  MDEV-33119 User is case insensitive in INFORMATION_SCHEMA.VIEWS
  MDEV-33120 System log table names are case insensitive with lower-case-table-names=0

- Removing the virtual function strnncoll() from MY_COLLATION_HANDLER

- Adding a wrapper function CHARSET_INFO::streq(), to compare two strings for equality. For now it calls strnncoll() internally. In the future it will turn into a virtual function.

- Adding new accent sensitive case insensitive collations:
    - utf8mb4_general1400_as_ci
    - utf8mb3_general1400_as_ci
  They implement accent sensitive case insensitive comparison. The weight of a character is equal to the code point of its upper case variant. These collations use Unicode-14.0.0 casefolding data.

  The result of my_charset_utf8mb3_general1400_as_ci.strcoll() is very close to the former my_charset_utf8mb3_general_ci.strcasecmp(). There is only a difference in a couple dozen rare characters, because of:
  - the switch from "tolower" to "toupper" comparison, to make utf8mb3_general1400_as_ci closer to utf8mb3_general_ci
  - the switch from Unicode-3.0.0 to Unicode-14.0.0
  This difference should be tolerable. See the list of affected characters in the MDEV description.

  Note, utf8mb4_general1400_as_ci correctly handles non-BMP characters! Unlike utf8mb4_general_ci, it does not treat all non-BMP characters as equal.

- Adding classes representing names of the file based database objects:
    Lex_ident_db
    Lex_ident_table
    Lex_ident_trigger
  Their comparison collation depends on the underlying file system case sensitivity and on --lower-case-table-names and can be either my_charset_bin or my_charset_utf8mb3_general1400_as_ci.

- Adding classes representing names of other database objects, whose names have case insensitive comparison style, using my_charset_utf8mb3_general1400_as_ci:
    Lex_ident_column, Lex_ident_sys_var, Lex_ident_user_var, Lex_ident_sp_var,
    Lex_ident_ps, Lex_ident_i_s_table, Lex_ident_window, Lex_ident_func,
    Lex_ident_partition, Lex_ident_with_element, Lex_ident_rpl_filter,
    Lex_ident_master_info, Lex_ident_host, Lex_ident_locale, Lex_ident_plugin,
    Lex_ident_engine, Lex_ident_server, Lex_ident_savepoint, Lex_ident_charset,
    engine_option_value::Name

- All the mentioned Lex_ident_xxx classes implement a method streq():
    if (ident1.streq(ident2)) do_equal();
  This method works as a wrapper for CHARSET_INFO::streq().

- Changing a lot of "LEX_CSTRING name" to "Lex_ident_xxx name" in class members and in function/method parameters.

- Replacing all calls like
    system_charset_info->coll->strcasecmp(ident1, ident2)
  with
    ident1.streq(ident2)

- Taking advantage of the c++11 user defined literal operator for LEX_CSTRING (see m_strings.h) and Lex_ident_xxx (see lex_ident.h) data types. Use example:
    const Lex_ident_column primary_key_name= "PRIMARY"_Lex_ident_column;
  is now a shorter version of:
    const Lex_ident_column primary_key_name= Lex_ident_column({STRING_WITH_LEN("PRIMARY")});
-
- 28 Feb, 2024 1 commit
-
-
Alexander Barkov authored
Under the terms of MDEV-27490 we'll add support for non-BMP identifiers and upgrade casefolding information to Unicode version 14.0.0. In Unicode-14.0.0, conversion to lower and upper cases can increase the octet length of the string, so conversion won't be possible in-place any more.

This patch removes the virtual functions performing in-place casefolding:
- my_charset_handler_st::casedn_str()
- my_charset_handler_st::caseup_str()
and fixes the code to use the non-inplace functions instead:
- my_charset_handler_st::casedn()
- my_charset_handler_st::caseup()
-
- 21 Feb, 2024 1 commit
-
-
Yuchen Pei authored
- Add `as <int_type>` to sequence creation options
- int_type can be signed or unsigned integer types, including tinyint, smallint, mediumint, int and bigint
- Limitation: when using ALTER SEQUENCE ... AS <new_int_type>, the statement cannot have any other alter options
- Limitation: increment remains a signed longlong, and the hidden constraint (cache_size x abs(increment) < longlong_max) stays for unsigned types. This means that for bigint unsigned, neither abs(increment) nor (cache_size x abs(increment)) can be between longlong_max and ulonglong_max
- maxvalue and minvalue are truncated from user input to the nearest max or min value of the type, plus or minus 1. When the truncation happens, a warning is emitted
- Information schema table for sequences
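A short sketch of the new option; the sequence name and types are illustrative, and the information_schema table name below is an assumption based on the last bullet:

  CREATE SEQUENCE s1 AS SMALLINT UNSIGNED;
  ALTER SEQUENCE s1 AS BIGINT UNSIGNED;          -- must be the only option in the ALTER statement
  SELECT * FROM information_schema.SEQUENCES;    -- new information_schema table for sequences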
-
- 17 Feb, 2024 1 commit
-
-
Sergei Golubchik authored
-
- 24 Jan, 2024 1 commit
-
-
Alexey Botchkov authored
The IDENT_sys rule doesn't include keywords, so a function with a keyword name can be created, but cannot be called. Moving keywords into the new rules keyword_func_sp_var_and_label and keyword_func_sp_var_not_label so that functions with these names are allowed.
-
- 08 Jan, 2024 1 commit
-
-
Sergei Golubchik authored
-
- 18 Dec, 2023 1 commit
-
-
Alexander Barkov authored
This patch adds PACKAGE support with SQL/PSM dialect for sql_mode=DEFAULT:
- CREATE PACKAGE
- DROP PACKAGE
- CREATE PACKAGE BODY
- DROP PACKAGE BODY
- Package function and procedure invocation from outside of the package:
    -- using two step identifiers
    SELECT pkg.f1();
    CALL pkg.p1();
    -- using three step identifiers
    SELECT db.pkg.f1();
    CALL db.pkg.p1();

This is a non-standard MariaDB extension. However, later this code can be used to implement the SQL Standard and DB2 dialects of CREATE MODULE.
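A minimal sketch of the new sql_mode=DEFAULT package syntax; the exact declaration-list form here is an assumption based on the statements listed above, and all names are illustrative:

  DELIMITER $$
  CREATE PACKAGE pkg
    FUNCTION f1() RETURNS INT;
    PROCEDURE p1();
  END $$
  CREATE PACKAGE BODY pkg
    FUNCTION f1() RETURNS INT BEGIN RETURN 42; END;
    PROCEDURE p1() BEGIN SELECT f1(); END;
  END $$
  DELIMITER ;
  SELECT pkg.f1();  -- two-step identifier
  CALL pkg.p1();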
-
- 13 Dec, 2023 1 commit
-
-
Daniel Black authored
Like all IF NOT EXISTS syntax, a Note should be generated. The original commit of Sequences cleared the IF NOT EXISTS part in sql/sql_yacc.yy with lex->create_info.init(). Without this bit set there was no way it could do anything other than error. To remedy this removal, the sql_yacc.yy components have been minimised, as they were all set at the beginning of the ALTER. This way opt_if_not_exists correctly sets the IF_EXISTS flag. In MDEV-13005 (bb4dd70e) the error code changed, requiring ER_UNKNOWN_SEQUENCES to be handled in the function No_such_table_error_handler::handle_condition.
-
- 07 Dec, 2023 1 commit
-
-
Aleksey Midenkov authored
1. WITHOUT/WITH VALIDATION may be added to EXCHANGE PARTITION or CONVERT TABLE:

     alter table tp exchange partition p1 with table t with validation;
     alter table tp exchange partition p1 with table t; -- same as with validation
     alter table tp exchange partition p1 with table t without validation;

2. Optional THAN keyword for RANGE partitioning. Normally you type:

     create table tp (a int primary key) partition by range (a) (
       partition p0 values less than (100),
       partition p1 values less than maxvalue);

   Now you may type (the PARTITION keyword is also optional):

     create table tp (a int primary key) partition by range (a) (
       p0 values less (100),
       p1 values less maxvalue);
-
- 08 Nov, 2023 1 commit
-
-
Alexander Barkov authored
The crash happened with an indexed virtual column whose value is evaluated using a function that has a different meaning in sql_mode='' vs sql_mode=ORACLE:
- DECODE()
- LTRIM()
- RTRIM()
- LPAD()
- RPAD()
- REPLACE()
- SUBSTR()

For example:
  CREATE TABLE t1 (
    b VARCHAR(1),
    g CHAR(1) GENERATED ALWAYS AS (SUBSTR(b,0,0)) VIRTUAL,
    KEY g(g)
  );

So far we had replacement XXX_ORACLE() functions for all mentioned functions, e.g. SUBSTR_ORACLE() for SUBSTR(). So it was possible to correctly re-parse SUBSTR_ORACLE() even in sql_mode=''. But it was not possible to re-parse the MariaDB version of SUBSTR() after switching to sql_mode=ORACLE. It was erroneously mis-interpreted as SUBSTR_ORACLE().

As a result, this combination worked fine:
  SET sql_mode=ORACLE;
  CREATE TABLE t1 ... g CHAR(1) GENERATED ALWAYS AS (SUBSTR(b,0,0)) VIRTUAL, ...;
  INSERT ...
  FLUSH TABLES;
  SET sql_mode='';
  INSERT ...

But the other way around it crashed:
  SET sql_mode='';
  CREATE TABLE t1 ... g CHAR(1) GENERATED ALWAYS AS (SUBSTR(b,0,0)) VIRTUAL, ...;
  INSERT ...
  FLUSH TABLES;
  SET sql_mode=ORACLE;
  INSERT ...

At CREATE time, SUBSTR was instantiated as Item_func_substr and printed in the FRM file as substr(). At re-open time with sql_mode=ORACLE, "substr()" was erroneously instantiated as Item_func_substr_oracle.

Fix:

The fix proposes a symmetric solution. It provides a way to reliably re-parse all sql_mode dependent functions to their original CREATE TABLE time meaning, no matter what the open-time sql_mode is. We take advantage of the same idea we previously used to resolve sql_mode dependent data types.

Now all sql_mode dependent functions are printed by SHOW using a schema qualifier when the current sql_mode differs from the function's sql_mode:

  SET sql_mode='';
  CREATE TABLE t1 ... SUBSTR(a,b,c) ..;
  SET sql_mode=ORACLE;
  SHOW CREATE TABLE t1;   -> mariadb_schema.substr(a,b,c)

  SET sql_mode=ORACLE;
  CREATE TABLE t2 ... SUBSTR(a,b,c) ..;
  SET sql_mode='';
  SHOW CREATE TABLE t2;   -> oracle_schema.substr(a,b,c)

Old replacement names like substr_oracle() are still understood for backward compatibility and used in FRM files (for downgrade compatibility), but they are not printed by SHOW any more.
-
- 27 Oct, 2023 1 commit
-
-
Yuchen Pei authored
MDEV-27106 added the REMOTE_TABLE, REMOTE_DATABASE, REMOTE_SERVER spider table options. In this commit, we add all remaining options for table params that are not marked to be deprecated.

All these options are parsed as strings from sql statements and have string values at the sql level, so that we can determine whether an option is specified by checking its nullness. The string values are further parsed by Spider into their actual types in the SPIDER_SHARE, including string list, bounded nonnegative int, bounded nonnegative int list, nonnegative longlong, boolean, and key hints. Except for string lists, all other types are validated during this parsing process.

Most of the options are backward compatible, i.e. they accept any values that are accepted by their corresponding param parser. The only exception is the index hint IDX, which corresponds to the idxNNN param name. For example, 'idx000 "f PRIMARY", idx001 "u k1"' translates to IDX="f PRIMARY u k1".

We include a test with all options specified, and tests involving spider table options of all actual types.

Any table options, if present, will cause comments to be ignored with a warning. The warning can be disabled by setting a new spider global/session system variable spider_suppress_comment_ignored_warning to 1. Another global/session variable introduced is spider_ignore_comments, which if set to 1, will cause COMMENT and CONNECTION strings to be ignored unconditionally, whether or not table options are specified.
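A sketch of the option style, using the three options already added by MDEV-27106 plus the IDX hint described above; the server, database, and table names are illustrative:

  CREATE TABLE t1 (a INT, PRIMARY KEY (a), KEY k1 (a)) ENGINE=Spider
    REMOTE_SERVER="srv" REMOTE_DATABASE="db1" REMOTE_TABLE="t1_remote"
    IDX="f PRIMARY u k1";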
-
- 23 Oct, 2023 2 commits
-
-
Alexander Barkov authored
Changing the code handling the sql_mode-dependent function DECODE():
- removing the parser tokens DECODE_MARIADB_SYM and DECODE_ORACLE_SYM
- removing the DECODE() related code from sql_yacc.yy/sql_yacc_ora.yy
- adding handling of DECODE() with the help of a new Create_func_func_decode
-
Brandon Nesterenko authored
New Feature:
============
This patch extends the START SLAVE UNTIL command with options SQL_BEFORE_GTIDS and SQL_AFTER_GTIDS to allow user control of whether the replica stops before or after a provided GTID state. Its syntax is:

  START SLAVE UNTIL (SQL_BEFORE_GTIDS|SQL_AFTER_GTIDS)="<gtid_list>"

When providing SQL_BEFORE_GTIDS="<gtid_list>", for each domain specified in the gtid_list, the replica will execute transactions up to the GTID found, and immediately stop processing events in that domain (without executing the transaction of the specified GTID). Once all domains have stopped, the replica will stop. Events originating from domains that are not specified in the list are not replicated.

START SLAVE UNTIL SQL_AFTER_GTIDS="<gtid_list>" is an alias to the default behavior of START SLAVE UNTIL master_gtid_pos="<gtid_list>". That is, the replica will only execute transactions originating from domain ids provided in the list, and will stop once all transactions provided in the UNTIL list have been executed.

Example:
=========
If a primary server has a binary log consisting of the following GTIDs:

  0-1-1
  1-1-1
  0-1-2
  1-1-2
  0-1-3
  1-1-3

and a fresh replica (i.e. one with an empty GTID position, @@gtid_slave_pos='') is started with SQL_BEFORE_GTIDS, i.e.

  START SLAVE UNTIL SQL_BEFORE_GTIDS="1-1-2"

the resulting gtid_slave_pos of the replica will be "1-1-1". This is because the replica will execute only events from domain 1 until it sees the transaction with sequence number 2, and immediately stop without executing it.

If the replica is started with SQL_AFTER_GTIDS, i.e.

  START SLAVE UNTIL SQL_AFTER_GTIDS="1-1-2"

then the resulting gtid_slave_pos of the replica will be "1-1-2". This is because it will only execute events from domain 1 until it has executed the provided GTID.

Reviewed By:
============
Kristian Nielsen <knielsen@knielsen-hq.org>
-
- 17 Oct, 2023 1 commit
-
-
Oleksandr Byelkin authored
-
- 30 Sep, 2023 1 commit
-
-
Sergei Golubchik authored
remove old deprecation helpers that were not used anywhere. create new deprecation helpers and enforce their usage this also removes inconsistencies in reporting deprecation: sometimes it was ER_WARN_DEPRECATED_SYNTAX (1287), sometimes ER_WARN_DEPRECATED_SYNTAX_NO_REPLACEMENT (1681), sometimes a warning, sometimes a note. it should always be * ER_WARN_DEPRECATED_SYNTAX * a warning (because it's something actionable, not purely informational)
-
- 21 Sep, 2023 2 commits
-
-
Alexander Barkov authored
- Removing two copies of drop_routine. Adding a shared and much simplified version.
- Removing LEX methods:
    bool stmt_drop_function(const DDL_options_st &options,
                            const Lex_ident_sys_st &db,
                            const Lex_ident_sys_st &name);
    bool stmt_drop_function(const DDL_options_st &options,
                            const Lex_ident_sys_st &name);
    bool stmt_drop_procedure(const DDL_options_st &options,
                             sp_name *name);
  The code inside the methods was very similar. Adding one method instead:
    bool stmt_drop_routine(const Sp_handler *sph,
                           const DDL_options_st &options,
                           const Lex_ident_sys_st &db,
                           const Lex_ident_sys_st &name);
- Adding a new virtual method Sp_handler::sqlcom_drop(). It helped to unify the code inside the new stmt_drop_routine().
-
Alexander Barkov authored
Resolving the shift/reduce conflict in:

  GRANT .. ON /*ambiguity*/ FUNCTION f1 TO foo@localhost;
  GRANT ... ON /*ambiguity*/ [TABLE] function TO foo@localhost;

and in

  REVOKE .. ON /*ambiguity*/ FUNCTION f1 FROM foo@localhost;
  REVOKE ... ON /*ambiguity*/ [TABLE] function FROM foo@localhost;

using a new %prec directive.
-
- 21 Aug, 2023 1 commit
-
-
Alexander Barkov authored
Changing the LEX_CSTRING* parameters of LEX::make_sp_name() to Lex_ident_sys_st. This makes the code clearer because a value of Lex_ident_sys_st gives some guaranteed additional constraints over a base LEX_CSTRING:
- Its LEX_CSTRING::str is not NULL (sql_yacc.yy would abort otherwise)
- Its LEX_CSTRING::str is 0-terminated
- It is a valid utf8 string
- The string pointed to by LEX_CSTRING::str was created on THD::mem_root

Also changing "pass by pointer" to "pass by reference", as these parameters can never be NULL - they are Bison stack variables.
-
- 15 Aug, 2023 1 commit
-
-
Sergei Golubchik authored
it was redundant, duplicating vcol_type == VCOL_GENERATED_STORED. Note that VCOL_DEFAULT is not "stored", "stored vcol" means that after rnd_next or index_read/etc the field value is already in the record[0] and does not need to be calculated separately
-
- 20 Jul, 2023 3 commits
-
-
Dmitry Shulga authored
Added re-parsing of failed statements inside a stored routine.

The general idea of the patch is to install an instance of the class Reprepare_observer before executing the next SP instruction and to re-parse the statement of this SP instruction in case its execution fails.

To implement the described approach, the class sp_lex_keeper has been extended with the method validate_lex_and_exec_core(), which is just a wrapper around the method reset_lex_and_exec_core() with additional setting/resetting of an instance of the class Reprepare_observer on each iteration of SP instruction execution. If reset_lex_and_exec_core() returns an error and an instance of the class Reprepare_observer was installed before running the SP instruction, then the number of attempts to re-run the SP instruction is checked against a maximum limit, and if it hasn't reached the limit, the statement for the failed SP instruction is re-parsed.

Re-parsing of a statement for the failed SP instruction is implemented by the new method sp_lex_instr::parse_expr(), which prepends the SP instruction's statement with the clause 'SELECT' and parses it. The SP instruction's own MEM_ROOT and a separate free_list are used for parsing the SP statement. On successful re-parsing of an SP instruction's statement, the virtual methods adjust_sql_command() and on_after_expr_parsing() of the class sp_lex_instr are called to update the SP instruction state with the new data created while parsing the statement.

A few words about the reason for prepending an SP instruction's statement with the clause 'SELECT': this is a required step to produce a valid SQL statement, since for some SP instructions the instruction's statement is not a valid SQL statement by itself. Wrapping such text into 'SELECT ( )' produces a correct operator from the SQL syntax point of view.
-
Dmitry Shulga authored
For those SP instructions that need to get access to a LEX object on execution, added storing of their original sql expressions inside classes derived from the class sp_lex_instr. A stored sql expression is returned by the abstract method sp_lex_instr::get_expr_query(), redefined in derived classes.

Since an expression constituting an SP instruction can be an invalid SQL statement in the general case (not a parseable statement), the virtual method sp_lex_instr::get_query() is introduced to return a valid string for a statement that corresponds to the given instruction.

Additionally, introduced the rule remember_start_opt in the grammar. The new rule is intended to get the correct position of the current token, taking into account whether lookahead was done or not.
-
Dmitry Shulga authored
This is the prerequisite patch to move the sp_instr class and the classes derived from it into the files sp_instr.cc/sp_instr.h. The classes sp_lex_cursor and sp_lex_keeper are also moved to the files sp_instr.cc/sp_instr.h.

Additionally:
* all occurrences of the macros NULL, FALSE, TRUE are replaced with the corresponding C++ keywords nullptr, false, true.
* the keyword 'override' is added to and the keyword 'virtual' is removed from the signatures of every virtual method implemented in classes derived from the base class sp_instr.
* the keyword 'final' is added to the declaration of the class sp_lex_keeper, since this class shouldn't have a derived class by design.
* the function cmp_rqp_locations() is made static since it is not called outside the file sp_instr.cc.
* the function subst_spvars() is moved into the file sp_instr.cc since this function is used only by the method sp_instr_stmt::execute().
-
- 18 Jul, 2023 1 commit
-
-
Alexander Barkov authored
MDEV-26186 280 Bytes lost in mysys/array.c, mysys/hash.c, sql/sp.cc, sql/sp.cc, sql/item_create.cc, sql/item_create.cc, sql/sql_yacc.yy:10748 when using oracle sql_mode

There was a memory leak under these conditions:
- YYABORT was called in the end-of-rule action of a rule containing expr_lex
- this expr_lex was not bound to any sp_lex_keeper

Bison did not call %destructor <expr_lex> in this case, because its stack already contained a reduced upper-level rule.

Fixing rules starting with the RETURN, CONTINUE, EXIT keywords: turning end-of-rule actions with YYABORT into mid-rule actions by adding an empty trailing { } block. This prevents the upper level rule from being reduced without calling %destructor <expr_lex>.

In other rules expr_lex is used not immediately before the last end-of-rule { } block, so they don't need changes.
-
- 17 Jul, 2023 1 commit
-
-
Alexander Barkov authored
This patch adds a way to override default collations (or "character set collations") for desired character sets.

The SQL standard says:
> Each collation known in an SQL-environment is applicable to one
> or more character sets, and for each character set, one or more
> collations are applicable to it, one of which is associated with
> it as its character set collation.

In MariaDB, character set collations have been hard-coded so far, e.g. utf8mb4_general_ci has been the hard-coded character set collation for utf8mb4. This patch allows overriding (globally per server, or per session) character set collations, so for example, uca1400_ai_ci can be set as the character set collation for Unicode character sets (instead of the compiled xxx_general_ci).

The array of overridden character set collations is stored in a new (session and global) system variable @@character_set_collations and can be set as a comma-separated list of charset=collation pairs, e.g.:

  SET @@character_set_collations='utf8mb3=uca1400_ai_ci,utf8mb4=uca1400_ai_ci';

The variable is empty by default, which means: use the hard-coded character set collations (e.g. utf8mb4_general_ci for utf8mb4).

The variable can also be set globally by passing it on the server startup command line, and/or in my.cnf.
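A sketch of the effect on implicit column collations (table name illustrative; the resulting collation assumes the utf8mb4 override shown above):

  SET @@character_set_collations='utf8mb4=uca1400_ai_ci';
  CREATE TABLE t1 (a CHAR(10) CHARACTER SET utf8mb4);
  SHOW CREATE TABLE t1;  -- the column collation resolves to utf8mb4_uca1400_ai_ci
                         -- instead of the compiled default utf8mb4_general_ci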
-