- 01 Jul, 2018 1 commit
-
-
Anel Husakovic authored
One can create a table where a `field` constraint and a `table` `check` constraint have the same name. For example: `create table t(a int check(a>0), constraint a check(a>10));` But when inserting new rows, the same error is always raised: both ```insert into t values (-1);``` and ```insert into t values (10);``` produce the same `ER_CONSTRAINT_FAILED` error, and it is not clear which constraint was violated. This patch solves the problem: if a field constraint is violated, the first parameter in the error message is `table.field_name`, and if a table constraint is violated, the first parameter in the error message is `constraint_name`.
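A short sketch of the fixed behaviour, reusing the table from the message (the comments paraphrase which constraint trips, not the exact server output):

```sql
create table t(a int check(a>0), constraint a check(a>10));

insert into t values (-1);
-- violates the field constraint on column `a`;
-- the error message now identifies it as `t.a`

insert into t values (10);
-- passes a>0 but violates the table-level constraint named `a`;
-- the error message now identifies it by the constraint name `a`
```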
-
- 30 Jun, 2018 1 commit
-
-
Aleksey Midenkov authored
* ignore CHECK constraint for historical rows;
* FOREIGN KEY test case.

TODO: MDEV-16301 IB: use real table name for error messages on ALTER

Closes tempesta-tech/mariadb#491
Closes #748
-
- 19 Jun, 2018 3 commits
-
-
Monty authored
The bug was that innobase_get_computed_value() trashed record[0] and data in Field_blob::value.

Fixed by using a record on the heap for innobase_get_computed_value().

Reviewer: Marko Mäkelä
-
Monty authored
This is to mark that a field is indirectly part of a key, which simplifies checking if we need to have this field up to date to evaluate a key. For example: CREATE TABLE t1 (a int, b int as (a) virtual, c int as (b) virtual, index(c)) would mark a and b with PART_INDIRECT_KEY_FLAG. c is marked with PART_KEY_FLAG as before.
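For reference, the example from the message laid out as a statement, with the flag assignments described above noted as comments:

```sql
CREATE TABLE t1 (
  a int,                   -- PART_INDIRECT_KEY_FLAG: feeds b, which feeds the indexed column c
  b int as (a) virtual,    -- PART_INDIRECT_KEY_FLAG: feeds the indexed column c
  c int as (b) virtual,    -- PART_KEY_FLAG, as before
  index(c)
);
```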
-
Alexander Barkov authored
-
- 14 Jun, 2018 1 commit
-
-
Sergei Golubchik authored
followup for d8da9202
-
- 09 Jun, 2018 1 commit
-
-
Varun Gupta authored
MDEV-16374: Filtered shows 0 for materialization scan for a semi-join, which makes the optimizer always pick materialization scan over materialization lookup

For non-mergeable semi-joins we don't store the estimates of the IN subquery in table->file->stats.records. In the function TABLE_LIST::fetch_number_of_rows, we store the number of rows in the tables (estimates in the case of derived tables/views). Currently we don't store the estimates for non-mergeable semi-joins, which leads to the problem of selecting materialization scan over materialization lookup. Fixed this by storing these estimates appropriately.
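A hedged sketch of the kind of query affected, using hypothetical tables t1 and t2: the IN subquery is handled by materialization, and the fix makes its row estimate available so the choice between materialization scan and materialization lookup is costed properly.

```sql
EXPLAIN
SELECT *
FROM t1
WHERE (t1.a, t1.b) IN (SELECT t2.a, t2.b FROM t2);
```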
-
- 06 Jun, 2018 1 commit
-
-
Oleksandr Byelkin authored
-
- 05 Jun, 2018 1 commit
-
-
Alexander Barkov authored
The problem described in the bug report happened because the code did not test check_cols(1) after fix_fields() in a few places. Additionally, fix_fields() could be called multiple times for SP variables, because they are all fixed at an early stage in append_for_log().

Solution:
1. Adding a few helper methods
   - fix_fields_if_needed()
   - fix_fields_if_needed_for_scalar()
   - fix_fields_if_needed_for_bool()
   - fix_fields_if_needed_for_order_by()
   and using them in many cases instead of fix_fields() where the "fixed" status is not definitely known to be "false".
2. Adding DBUG_ASSERT(!fixed) into Item_splocal*::fix_fields() to catch double execution.
3. Adding tests.

As a good side effect, the patch removes a lot of duplicate code (~60 lines):
  if (!item->fixed && item->fix_fields(..) && item->check_cols(1))
    return true;
-
- 27 May, 2018 1 commit
-
-
Monty authored
Fixed by deleting the sequence if we were not able to initialize it. I also noticed that we didn't always set the error message when check_killed() was called, which could lead to aborted queries without the error being properly set. Fixed by setting the error message by default if check_error() noticed that killed had been called. This allowed me to remove a lot of calls to thd->send_kill_message().
-
- 26 May, 2018 2 commits
- 24 May, 2018 1 commit
-
-
Monty authored
The cause of this was several different bugs:
- When using binary logging with binlog_row_image=FULL, all bits in read_set were set, which caused a different (wrong) pattern for marking vcol_set.
- TABLE::mark_virtual_columns_for_write() didn't in all cases mark vcol_set with the vcol_field.
- TABLE::update_virtual_fields() has to update all vcol fields on REPLACE if binary logging with FULL is used.
- VCOL_UPDATE_INDEXED should update all vcol fields that are part of an index and were not updated by VCOL_UPDATE_FOR_READ.
- max_row_length() calculated the length of NULL and unused fields. This didn't cause any crash, but used more memory than needed.
-
- 23 May, 2018 1 commit
-
-
Eugene Kosov authored
-
- 15 May, 2018 2 commits
-
-
Monty authored
The problem was that verify_constraints() didn't check if there was an error as part of evaluating constraints (which can happen in strict mode). In a one-row insert the error was ignored when using binary logging, as binary logging clears errors if the insert succeeded. In a multi-row insert the error was noticed for the second row. After this fix one will get an error for both one-row and multi-row inserts if the constraint generates a warning in strict mode.
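A sketch of the scenario, under the assumption that evaluating the CHECK expression itself produces a warning in strict mode (here via a string-to-number comparison in a hypothetical table); after the fix, both forms of INSERT report the error:

```sql
SET sql_mode = 'STRICT_ALL_TABLES';
CREATE TABLE t1 (a VARCHAR(10), CHECK (a > 0));

-- one-row insert: previously the error could be swallowed when binary logging was on
INSERT INTO t1 VALUES ('x');

-- multi-row insert: the error was already reported (for the second row)
INSERT INTO t1 VALUES ('1'), ('x');
```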
-
Alexander Barkov authored
MDEV-16100 FOR SYSTEM_TIME erroneously resolves string user variables as transaction IDs

Problem: Vers_history_point::resolve_unit() tested item->result_type() before item->fix_fields() was called.
- Item_func_get_user_var::result_type() returned REAL_RESULT by default. This caused MDEV-16100.
- Item_func_sp::result_type() crashed on assert. This caused MDEV-16094.

Changes:
1. Adding item->fix_fields() into Vers_history_point::resolve_unit() before using data-type-specific properties of the history point expression.
2. Adding a new virtual method Type_handler::Vers_history_point_resolve_unit().
3. Implementing type-specific Type_handler_xxx::Vers_history_point_resolve_unit() so that it:
   a. resolves temporal and general purpose string types to TIMESTAMP
   b. resolves BIT and general purpose INT types to TRANSACTION
   c. disallows use of non-relevant data type expressions in FOR SYSTEM_TIME
   Note, DOUBLE and DECIMAL data types are disallowed intentionally:
   - DOUBLE does not have enough precision to hold huge BIGINT UNSIGNED values
   - DECIMAL rounds on conversion to INT
   Both the lack of precision and the rounding might potentially lead to very unpredictable results when a wrong transaction ID is chosen. If one really wants the dangerous use of DOUBLE and DECIMAL, an explicit CAST can be used: FOR SYSTEM_TIME AS OF CAST(double_or_decimal AS UNSIGNED)
   QQ: perhaps DECIMAL(N,0) could still be allowed.
4. Adding a new virtual method Item::type_handler_for_system_time(), to make HEX hybrids and bit literals work as TRANSACTION rather than TIMESTAMP.
5. sql_yacc.yy: replacing the rule temporal_literal with "TIMESTAMP TEXT_STRING". Other temporal literals now resolve to TIMESTAMP through the new Type_handler methods; no special grammar is needed. This removed a few shift/reduce conflicts. (TIMESTAMP-related conflicts in "history_point:" will be removed separately.)
6. Removing the "timestamp_only" parameter from vers_select_conds_t::resolve_units() and Vers_history_point::resolve_unit(). It was a hint telling that a table did not have any TRANSACTION-aware system time columns, so it's OK to resolve to TIMESTAMP in case of uncertainty. In the new version it works as follows:
   - the decision between TIMESTAMP and TRANSACTION is first made based on the expression data type only
   - then, if the expression resolved to TRANSACTION, the table is checked for whether TRANSACTION-aware columns really exist
   This way is safer against possible ALTER TABLE statements changing ROW START and ROW END columns from "BIGINT UNSIGNED" to "TIMESTAMP(x)" or the other way around.
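A hedged sketch of the resulting resolution rules, assuming a hypothetical system-versioned table t and a user variable @d:

```sql
-- string (and temporal) expressions resolve to a TIMESTAMP history point
SELECT * FROM t FOR SYSTEM_TIME AS OF '2018-05-15 00:00:00';

-- integer expressions resolve to a TRANSACTION history point
SELECT * FROM t FOR SYSTEM_TIME AS OF 12345;

-- DOUBLE/DECIMAL expressions are rejected; an explicit CAST states the intent,
-- as suggested in the message
SELECT * FROM t FOR SYSTEM_TIME AS OF CAST(@d AS UNSIGNED);
```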
-
- 14 May, 2018 1 commit
-
-
Michael Widenius authored
-
- 12 May, 2018 7 commits
-
-
Galina Shalygina authored
failure upon SELECT with impossible condition. The problem appears because of a wrong implementation of the Item_func_in::build_clone() method: it didn't clone the 'array' and 'cmp_fields' fields for the cloned IN predicate, and this could cause crashes. The Item_func_in::fix_length_and_dec() method was refactored and a new method named Item_func_in::create_array() was created. It allows creating 'array' for cloned IN predicates in a proper way.
-
Aleksey Midenkov authored
Store the transaction start time in thd->transaction.start_time. THD::transaction_time() wraps over transaction.start_time, taking into account the current status of BEGIN.
-
Aleksey Midenkov authored
-
Aleksey Midenkov authored
-
Sergei Golubchik authored
Don't use hidden system time in versioning, but keep the system time logic in THD to work around low-res system clocks and replication from non-versioned to versioned tables. This reverts MDEV-14788 (System versioning cannot be based on local timestamps, as it is now). Versioning is based on local timestamps again, but timestamps are protected by MDEV-15923 (option to control who can set session @@timestamp).
-
Sergei Golubchik authored
rename LString/XString classes, remove unused ones
-
Sergei Golubchik authored
Make sure that SELECT_LEX_UNIT::derived behaves as documented (points to the "TABLE_LIST representing this union in the embedding select"). For recursive CTEs this was not necessarily the case: it could've pointed to the TABLE_LIST inside the CTE, not in the embedding select.

To fix:
* don't update unit->derived in mysql_derived_prepare(); pass derived as an argument to st_select_lex_unit::prepare()
* prefer to set unit->derived in TABLE_LIST::init_derived() to the TABLE_LIST in the embedding select, not to the recursive reference. Fail if there are many TABLE_LISTs in the embedding select with conflicting FOR SYSTEM_TIME clauses.

cleanup:
* remove the redundant THD* argument from st_select_lex_unit::prepare()
-
- 10 May, 2018 1 commit
-
-
Sergei Golubchik authored
table.cc: virtual columns must be computed for INSERT, if they're part of the partitioning expression. This change broke gcol.gcol_partition_innodb; fix CHECK TABLE for partitioned tables and vcols.
sql_partition.cc: mark prerequisite base columns in full_part_field_set.
ha_partition.cc: initialize vcol_set accordingly.
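A sketch of the situation the first fix targets, assuming (as the message implies) a table whose partitioning expression depends on a virtual column, so the column must be computed on INSERT before the row can be routed to a partition:

```sql
CREATE TABLE t1 (
  a INT,
  b INT AS (a % 4) VIRTUAL   -- must be computed on INSERT to pick the partition
)
PARTITION BY HASH (b) PARTITIONS 4;

INSERT INTO t1 (a) VALUES (7);   -- b is evaluated; the row goes to partition 7 % 4 = 3
```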
-
- 06 May, 2018 1 commit
-
-
Monty authored
Added to:
- if (error)
- Lex
- sql_yacc.yy and sql_yacc_ora.yy
- In header files to alloc() calls
- Added thd argument to thd_net_is_killed()
-
- 26 Apr, 2018 1 commit
-
-
Monty authored
- Removed the never-used warning that explicit_defaults_for_timestamp was not set
-
- 24 Apr, 2018 1 commit
-
-
Marko Mäkelä authored
Modern compilers (such as GCC 8) emit warnings that the 'register' keyword is deprecated and not valid C++17. Let us remove most use of the 'register' keyword. Code in 'extra/' is not touched.
-
- 10 Apr, 2018 3 commits
-
-
Aleksey Midenkov authored
[closes tempesta-tech#472]
-
Sergei Golubchik authored
the function xxx_eq(a,b) returns true if two elements are equal and false if they are not.
-
Sergei Golubchik authored
(will be added back when it'll be used)
-
- 02 Apr, 2018 1 commit
-
-
Galina Shalygina authored
Item::derived_field_transformer_for_having

The crash occurred due to inappropriate handling of multiple equalities when pushing conditions into materialized views/derived tables. If equalities extracted from a multiple equality can be pushed into a materialized view/derived table, they should be plainly conjuncted with the other pushed predicates rather than form a separate AND sub-formula.
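A hedged sketch with hypothetical tables t1 and t2: the conditions t1.a = dt.b and t1.a = 5 form one multiple equality, and the part that mentions only the derived table's column (dt.b = 5) is what gets pushed into the materialized derived table and conjuncted with any other pushed predicates.

```sql
SELECT *
FROM t1,
     (SELECT b, MAX(c) AS m FROM t2 GROUP BY b) AS dt
WHERE t1.a = dt.b AND t1.a = 5;
```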
-
- 25 Feb, 2018 1 commit
-
-
Sergei Golubchik authored
Because NOW() is set to the system time, unless overridden. And both should follow big manual system time changes, while still coping with low-res system clocks. Ignoring system time changes is both confusing and breaks with restarts.
-
- 24 Feb, 2018 2 commits
-
-
Sergei Golubchik authored
as these fields are always declared NOT NULL anyway
-
Sergei Golubchik authored
and a few indentation changes
-
- 23 Feb, 2018 5 commits
-
-
Sergei Golubchik authored
Remove 1668efb7 that introduced a special magic behavior for UNIX_TIMESTAMP() in the AS OF context
-
Aleksey Midenkov authored
Vers SQL: TRT fix getting TRX_ID by COMMIT_TS

Fixed wrong assumption that records are ordered by COMMIT_TS. This is anyway a quick hack until tempesta-tech#314 is done. See also FIXME and TODO in TR_table::query(MYSQL_TIME, bool).

Test: SEES case for trx_id.test

[closes #456]
-
Sergei Golubchik authored
Lots of changes:
* calculate the current history partition in ::external_lock(), not in ::write_row() or ::update_row()
* remove dynamically collected per-partition row_end stats
* no full table scan in open_table_from_share to calculate these stats, no manual MDL/thr_locks in open_table_from_share
* no shared stats in TABLE_SHARE = no mutexes or condition waits when calculating the current history partition
* always compare timestamps, don't convert them to MYSQL_TIME (avoids DST ambiguity, and it's faster too)
* correct interval handling: 1 month = 1 month, not 30 * 24 * 3600 seconds
* save/restore the first partition start time, and count intervals from there
* only allow dropping first partitions if INTERVAL
* when adding new history partitions, split the data in the last history partition if it was overflowed
* show partition boundaries in INFORMATION_SCHEMA.PARTITIONS
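For context, a system-versioned table with interval-based history partitions, using documented MariaDB syntax (the table and column names are hypothetical); with the changes above, the monthly interval is a calendar month and the partition boundaries are visible in INFORMATION_SCHEMA.PARTITIONS:

```sql
CREATE TABLE purchases (
  id INT,
  amount DECIMAL(10,2)
) WITH SYSTEM VERSIONING
PARTITION BY SYSTEM_TIME INTERVAL 1 MONTH PARTITIONS 6;

SELECT partition_name, partition_description
FROM information_schema.PARTITIONS
WHERE table_name = 'purchases';
```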
-
Aleksey Midenkov authored
Unit-based history point (vers_history_point_t; Vers_history_point).
-
Sergei Golubchik authored
don't allow to discover WITH SYSTEM VERSIONING clause

originally by: Aleksey Midenkov
-