- 04 Mar, 2019 1 commit
Teemu Ollakka authored
The test is based on the test case attached to the Jira MDEV issue.
- 03 Mar, 2019 1 commit
Alexander Barkov authored
- 01 Mar, 2019 2 commits
Alexander Barkov authored
Alexander Barkov authored
- 28 Feb, 2019 14 commits
Igor Babaev authored
When the chosen execution plan accesses a join table through a range rowid filter, a quick select to scan this range has to be built. This quick select is built by a call to SQL_SELECT::test_quick_select(). For this call the function must consider only single index range scans, so a new parameter was added to the function to allow that.
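A hedged illustration of the kind of query involved (schema and names are invented; the rowid_filter optimizer switch exists in MariaDB 10.4):

    CREATE TABLE t0 (id INT PRIMARY KEY);
    CREATE TABLE t1 (a INT, b INT, c INT, KEY ix_b (b), KEY ix_c (c));
    SET optimizer_switch='rowid_filter=on';
    -- With suitable statistics, the plan may access t1 through a range on ix_b
    -- while filtering rows with a rowid filter built from a range on ix_c.
    EXPLAIN SELECT * FROM t0 JOIN t1 ON t1.a = t0.id
    WHERE t1.b BETWEEN 1 AND 10 AND t1.c BETWEEN 100 AND 200;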
Sergei Golubchik authored
on long unique conflict, set table->file->dup_ref for engines that support it
Sergei Golubchik authored
MDEV-18747 InnoDB: Failing assertion: table->get_ref_count() == 0 upon dropping temporary table with unique blob
delete update handler clone also for temporary tables
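A minimal reproduction sketch, reconstructed from the commit subject (statements are illustrative, not taken from the actual test):

    CREATE TEMPORARY TABLE t1 (a BLOB UNIQUE) ENGINE=InnoDB;
    INSERT INTO t1 VALUES ('x');
    UPDATE t1 SET a = 'y';     -- exercises the cloned update handler
    DROP TEMPORARY TABLE t1;   -- previously hit the get_ref_count() == 0 assertion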
Sergei Golubchik authored
MDEV-18720 Assertion `inited==NONE' failed in ha_index_init upon update on versioned table with key on blob
* update system versioning fields before generated columns
* don't presume that ha_write_row() means INSERT. It could still be UPDATE
* use the correct handler in check_duplicate_long_entry_key()
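A sketch of the affected scenario (schema reconstructed from the subject line):

    CREATE TABLE t1 (a BLOB, UNIQUE KEY (a)) ENGINE=InnoDB WITH SYSTEM VERSIONING;
    INSERT INTO t1 VALUES ('x');
    UPDATE t1 SET a = 'y';   -- update on a versioned table with a key on a blob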
Sergei Golubchik authored
MDEV-18722 Assertion `templ->mysql_null_bit_mask' failed in row_sel_store_mysql_rec upon modifying indexed column into blob
don't assert that virtual columns are always nullable
Sergei Golubchik authored
MDEV-18713 Assertion `strcmp(share->unique_file_name,filename) || share->last_version' failed in test_if_reopen upon REPLACE into table with key on blob
close table->update_handler in close_thread_tables(). it's not enough to do it in sql_update.cc only, because sql_insert.cc can also do updates (REPLACE) and even sql_delete.cc can (DELETE ... FOR PORTION OF)
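A minimal sketch of the REPLACE path in question (names illustrative):

    CREATE TABLE t1 (a BLOB, UNIQUE KEY (a));
    REPLACE INTO t1 VALUES ('x');
    REPLACE INTO t1 VALUES ('x');   -- the second REPLACE goes through the update handler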
Sergei Golubchik authored
MDEV-18712 InnoDB indexes are inconsistent with what defined in .frm for table after rebuilding table with index on blob
when auto-adding a virtual LONG_UNIQUE_HASH_FIELD, fill in a Virtual_column_info for it, so that fill_alter_inplace_info() would know we're adding a virtual field (ALTER_ADD_VIRTUAL_COLUMN).
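A hedged sketch of the statement sequence implied by the subject (names invented):

    CREATE TABLE t1 (a BLOB) ENGINE=InnoDB;
    ALTER TABLE t1 ADD UNIQUE KEY (a);   -- auto-adds the virtual LONG_UNIQUE_HASH_FIELD
    ALTER TABLE t1 FORCE;                -- rebuild: InnoDB indexes must match the .frm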
Sergei Golubchik authored
MDEV-18707 Server crash in my_hash_sort_bin, ASAN heap-use-after-free in Field::is_null, server hang, corrupted double-linked list
adjust share->stored_rec_length for LONG_UNIQUE_HASH_FIELD, just like it's done for normal virtual fields
sachin authored
Sergei Golubchik authored
do it for all key types uniformly. In particular, don't give "prefix keyseg" treatment for hash keys where field->key_length() == key_part->length
Sergei Golubchik authored
Sergei Golubchik authored
Sergei Golubchik authored
because they *are* prefix keys, even if long and hashed
Alexander Barkov authored
MDEV-18767 Port "MDEV-16294: INSTALL PLUGIN IF NOT EXISTS / UNINSTALL PLUGIN IF EXISTS" to sql_yacc_ora.yy
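For illustration, the ported syntax can be exercised in Oracle mode like this (plugin name and soname are placeholders):

    SET sql_mode=ORACLE;
    INSTALL PLUGIN IF NOT EXISTS example SONAME 'ha_example';
    UNINSTALL PLUGIN IF EXISTS example;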
- 27 Feb, 2019 1 commit
Igor Babaev authored
The bug manifested itself when executing a query with a materialized view/derived table/CTE whose specification was a SELECT query that contained another materialized derived table, and an impossible WHERE/HAVING condition was detected for this SELECT. As soon as such a condition is detected, the join structures of all derived tables used in the SELECT are destroyed, so optimization of the queries specifying these derived tables is impossible. Besides, it is not needed. In 10.3 optimization of a materialized derived table is performed before the detection of an impossible WHERE/HAVING condition in the embedding SELECT.
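A hypothetical query shape matching the description (all names invented): the CTE's specification uses a materialized derived table, and its WHERE condition is detected as impossible.

    CREATE TABLE t1 (a INT);
    WITH cte AS
      (SELECT dt.a FROM (SELECT a FROM t1 GROUP BY a) AS dt WHERE 1 = 0)
    SELECT * FROM cte JOIN cte AS cte2 USING (a);  -- two references force materialization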
- 26 Feb, 2019 4 commits
Vladislav Vaintroub authored
server shutdown code. The fix addresses a race condition where an active connection either writes, or will be writing, to the socket after it is closed. The preceding call to socket shutdown() is fully sufficient to wake up an idle connection, so close_connection is obsolete and dangerous.
Sergey Vojtovich authored
Sergey Vojtovich authored
seppo authored
Refactored the wsrep patch to not use LOCK_thread_count and COND_thread_count anymore. This has partially been replaced by using the old LOCK_wsrep_slave_threads mutex. For waiting on slave thread count changes, a new COND_wsrep_slave_threads signal has been added. Added a LOCK_wsrep_cluster_config mutex to ensure that a cluster address change cannot happen in parallel, and protected wsrep_slave_threads variable changes with the same mutex. This avoids concurrent slave thread count changes and cluster joining operations. Fixes according to Teemu's review.
- 25 Feb, 2019 6 commits
Marko Mäkelä authored
The prtype & DATA_LONG_TRUE_VARCHAR flag only plays a role when converting between the InnoDB internal format and the MariaDB SQL layer row format. Ideally this flag would never have been persisted in the InnoDB data dictionary. There were bogus assertion failures when an instant ADD, DROP, or column reordering was combined with extending a VARCHAR from less than 256 bytes to more than 255 bytes. Such changes are allowed starting with MDEV-15563 in MariaDB 10.4.3.
dict_table_t::instant_column(), dict_col_t::same_format(): ignore the DATA_LONG_TRUE_VARCHAR flag, because it does not affect the persistent storage format.
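A sketch of the kind of combined instant ALTER that hit the bogus assertions (names invented; such statements are accepted starting with MariaDB 10.4.3):

    CREATE TABLE t1 (a INT, b VARCHAR(200), c INT) ENGINE=InnoDB;
    -- extend b past 255 bytes and reorder columns in a single instant ALTER
    ALTER TABLE t1 MODIFY b VARCHAR(300) AFTER c, ALGORITHM=INSTANT;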
Daniel Bartholomew authored
Alexander Barkov authored
MDEV-18408 Assertion `0' failed in Item::val_native_result / Timestamp_or_zero_datetime_native_null::Timestamp_or_zero_datetime_native_null upon mysqld_list_fields after crash recovery
The problem happened because Item_ident_for_show did not implement val_native(). Solution:
- Removing class Item_ident_for_show
- Implementing a new method Protocol::send_list_fields() instead, which accepts a List<Field> rather than a List<Item> as input. Now no Item creation is done during mysqld_list_fields().
Adding helper methods, to make the code easier to reuse:
- Moved the part of Protocol::send_result_set_metadata() responsible for sending an individual field's metadata into a new method Protocol_text::store_field_metadata(), reusing it in both send_list_fields() and send_result_set_metadata().
- Adding Protocol_text::store_field_metadata_for_list_fields()
Note, this patch also automatically fixed another bug: MDEV-18685 mysql_list_fields() returns DEFAULT 0 instead of DEFAULT NULL for view columns. The reason for that bug was that Item_ident_for_show::val_xxx() and get_date() did not check field->is_null() before calling field->val_xxx()/get_date(). Now the default value is correctly sent by Protocol_text::store(Field*).
Teemu Ollakka authored
Disabled GCF-437, which relies on an InnoDB redo log size limitation that does not seem to exist, or has been increased, in MariaDB 10.4. Require debug sync for mysql-wsrep#215.
Teemu Ollakka authored
Wsrep-lib is now guaranteed to hold the underlying mutex that is wrapped in the lock object passed to the Wsrep_client_service interrupted() call. The library part will now take care of checking the wsrep::transaction-specific state, so it is enough to check the thd->killed state for the result.
Teemu Ollakka authored
The InnoDB DeadlockChecker::check_and_resolve() was missing a call to wsrep_handle_SR_rollback() in the case when the transaction running deadlock detection was chosen as the victim. Refined wsrep_handle_SR_rollback() to skip store_globals() calls if the transaction was BF aborting itself. Made mysql-wsrep-features#165 more deterministic by waiting until the update is in progress before sending the next update.
- 24 Feb, 2019 4 commits
Daniel Black authored
Oleksandr Byelkin authored
Igor Babaev authored
st_select_lex::pushdown_from_having_into_where upon query with impossible WHERE condition
Do not push from HAVING into impossible WHERE
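A hypothetical query shape (names invented): the WHERE clause is recognized as impossible, and the HAVING condition must not be pushed down into it.

    CREATE TABLE t1 (a INT, b INT);
    SELECT a, MAX(b) FROM t1 WHERE 1 = 0 GROUP BY a HAVING a > 10;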
Igor Babaev authored
Do not do substitution for best equal field in HAVING conditions. It's not needed.
- 23 Feb, 2019 2 commits
Oleksandr Byelkin authored
Oleksandr Byelkin authored
- 22 Feb, 2019 4 commits
Oleksandr Byelkin authored
sachin authored
Sergei Golubchik authored
post-merge fixes
Sergei Golubchik authored
sql_field->key_length was 0 for blob fields when a field was being added, but it was Field_blob::character_octet_length() on subsequent ALTER TABLEs (when the Field object in the old table already existed). This means mysql_prepare_create_table() couldn't reliably detect whether the keyseg was a prefix.
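A sketch of the two paths under these assumptions (names illustrative): the same prefix key goes through mysql_prepare_create_table() once while the field is new and again when it already exists.

    CREATE TABLE t1 (a BLOB, KEY (a(100)));   -- new field: sql_field->key_length was 0
    ALTER TABLE t1 ADD COLUMN b INT;          -- existing field: character_octet_length()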
- 21 Feb, 2019 1 commit
Sachin authored
This patch implements an engine-independent unique hash index.

Usage: a unique HASH index can be created automatically for a blob/varchar/text column whose key length > handler->max_key_length(), or it can be specified explicitly.
  Automatic creation: CREATE TABLE t1 (a BLOB UNIQUE);
  Explicit creation:  CREATE TABLE t1 (a INT, UNIQUE (a) USING HASH);

Internal KEY_PART representations: a long unique key_info has 2 representations (let's understand this with an example: CREATE TABLE t1 (a BLOB, b BLOB, UNIQUE (a, b));).
1. User Given Representation: the key_info->key_part array is similar to what the user has defined, so in the example it has 2 key_parts (a, b).
2. Storage Engine Representation: there is only one key_part and it points to the HASH_FIELD. This key_part always comes after the user-defined key_parts.

So:
  User Given Representation
    [a] [b] [hash_key_part]
     ^
     key_info->key_part
  Storage Engine Representation
    [a] [b] [hash_key_part]
             ^
             key_info->key_part

table->s->key_info holds the User Given Representation, while table->key_info holds the Storage Engine Representation. The representations can be converted into each other by calling the re/setup_keyinfo_hash functions.

Working:
1. When the user specifies a HASH index or the key length is > handler->max_key_length(), one extra vfield is added in mysql_prepare_create_table (for each long unique key), and key_info->algorithm is set to HA_KEY_ALG_LONG_HASH.
2. In init_from_binary_frm_image, the values for the hash key_part are set (like fieldnr, field and flags).
3. In parse_vcol_defs, HASH_FIELD->vcol_info is created. Item_func_hash is used with a list of Item_fields; when an explicit length is given by the user, Item_left is used to concatenate the Item_field values.
4. In ha_write_row/ha_update_row, check_duplicate_long_entry_key is called; it creates the hash key from table->record[0] and then calls ha_index_read_map. If a duplicate hash is found, the result is compared field by field.
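A short usage sketch of the feature (the exact error text may vary by version):

    CREATE TABLE t1 (a BLOB UNIQUE);              -- hash index added automatically
    INSERT INTO t1 VALUES (REPEAT('x', 10000));
    INSERT INTO t1 VALUES (REPEAT('x', 10000));   -- rejected with a duplicate-key error
    CREATE TABLE t2 (a INT, UNIQUE (a) USING HASH);   -- explicit form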