- 14 May, 2018 5 commits
-
-
Igor Babaev authored
Forced columns of recursive CTEs to be nullable. The SQL standard requires this only for recursive columns, but in our code so far we do not differentiate between recursive and non-recursive columns when aggregating the types of the union that specifies a recursive CTE.
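A minimal SQL sketch of the behavior this affects (table and column names are hypothetical, not from this commit):

  CREATE TABLE t1 (a INT NOT NULL);
  WITH RECURSIVE r(a) AS (
    SELECT a FROM t1                    -- anchor (non-recursive) member
    UNION ALL
    SELECT a + 1 FROM r WHERE a < 10    -- recursive member
  )
  SELECT a FROM r;
  -- Even though t1.a is NOT NULL, the column r.a is treated as nullable,
  -- because types are aggregated over the whole UNION without distinguishing
  -- recursive from non-recursive columns.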
-
Alexander Barkov authored
-
Michael Widenius authored
The problem was that we used table->s->db_type() to access the handlerton of the opened file instead of table->file->ht. Other bug fixed: ensure that we set an error if reopen_tables() fails (this was the cause of the assert).
-
Michael Widenius authored
-
Michael Widenius authored
-
- 12 May, 2018 27 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
The predicate dict_table_is_discarded() checks whether ALTER TABLE…DISCARD TABLESPACE has been executed. Replace most occurrences of dict_table_is_discarded() with checks of dict_table_t::space. A few checks for the flag DICT_TF2_DISCARDED are necessary; write them inline. Because !is_readable() implies !space, some checks for dict_table_is_discarded() were redundant.
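For context, a hedged SQL-level sketch of the state this predicate describes (table name is hypothetical):

  CREATE TABLE t1 (a INT) ENGINE=InnoDB;
  ALTER TABLE t1 DISCARD TABLESPACE;    -- detaches the .ibd file
  SELECT * FROM t1;                     -- fails: the tablespace has been discarded
  ALTER TABLE t1 IMPORT TABLESPACE;     -- needs a previously exported .ibd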
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Sergei Golubchik authored
-
Eugene Kosov authored
Fixed by using exactly the same filtering conditions as specified by the standard in 7.6 <table reference>, General Rules.
-
Aleksey Midenkov authored
MDEV-14823 Wrong error message upon selecting from a system_time partition
MDEV-15956 Strange ER_UNSUPPORTED_ACTION_ON_GENERATED_COLUMN upon ALTER on versioning column
-
Aleksey Midenkov authored
MDEV-16043 Assertion thd->Item_change_list::is_empty() failed in mysql_parse upon SELECT from a view reading from a versioned table
A restore_active_arena() call was lost. Using Query_arena_stmt is suggested instead.
-
Sergei Golubchik authored
-
Aleksey Midenkov authored
Store the transaction start time in thd->transaction.start_time. THD::transaction_time() wraps transaction.start_time, taking into account the current status of BEGIN.
-
Aleksey Midenkov authored
-
Aleksey Midenkov authored
-
Eugene Kosov authored
-
Eugene Kosov authored
-
Sergei Golubchik authored
Don't use hidden system time in versioning, but keep the system time logic in THD to work around a low-resolution system clock and replication from non-versioned to versioned tables. This reverts MDEV-14788 (System versioning cannot be based on local timestamps, as it is now). Versioning is based on local timestamps again, but timestamps are protected by MDEV-15923 (an option to control who can set the session @@timestamp).
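A hedged SQL sketch of why the session timestamp matters for versioning (table name is hypothetical):

  CREATE TABLE t (a INT) WITH SYSTEM VERSIONING;
  SET TIMESTAMP = UNIX_TIMESTAMP('2018-01-01 00:00:00');
  INSERT INTO t VALUES (1);   -- row_start follows the (overridden) session timestamp
  SET TIMESTAMP = DEFAULT;
  -- Because versioning is timestamp-based again, whoever may change @@timestamp
  -- can influence row_start/row_end, hence the protection added by MDEV-15923.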
-
Sergei Golubchik authored
--secure-timestamp=NO|SUPER|REPLICATION|YES
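A hedged usage sketch (the value list is from this commit; the SUPER semantics in the comments are an assumption):

  -- server started with: --secure-timestamp=SUPER
  SELECT @@global.secure_timestamp;              -- shows the configured mode
  SET TIMESTAMP = UNIX_TIMESTAMP('2018-05-12');  -- allowed only for privileged users in this mode
  SET TIMESTAMP = DEFAULT;                       -- back to the real clock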
-
Sergei Golubchik authored
remove the redundant declaration tail
-
Sergei Golubchik authored
-
Sergei Golubchik authored
this is always enabled now, no need for a conditional
-
Sergei Golubchik authored
-
Sergei Golubchik authored
rename LString/XString classes, remove unused ones
-
Sergei Golubchik authored
Make sure that SELECT_LEX_UNIT::derived behaves as documented (points to the "TABLE_LIST representing this union in the embedding select"). For a recursive CTE this was not necessarily the case: it could've pointed to the TABLE_LIST inside the CTE, not in the embedding select.
To fix:
* don't update unit->derived in mysql_derived_prepare(); pass derived as an argument to st_select_lex_unit::prepare()
* prefer to set unit->derived in TABLE_LIST::init_derived() to the TABLE_LIST in the embedding select, not to the recursive reference. Fail if there are multiple TABLE_LISTs in the embedding select with conflicting FOR SYSTEM_TIME clauses.
Cleanup:
* remove the redundant THD* argument from st_select_lex_unit::prepare()
-
Sergei Golubchik authored
It's an internal storage engine error; don't let it leak into the upper layer.
-
Sergei Golubchik authored
Make --gdb take an optional argument *only* if it's written after '=', not after a space. Follow-up for 339b9055.
-
Sergei Golubchik authored
-
Marko Mäkelä authored
In InnoDB, CREATE TEMPORARY TABLE does not allow FULLTEXT INDEX. Replace a condition with a debug assertion, and add a test.
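A minimal SQL illustration of the restriction (exact error text may differ):

  CREATE TEMPORARY TABLE t (a TEXT, FULLTEXT INDEX(a)) ENGINE=InnoDB;
  -- expected to fail: InnoDB does not support FULLTEXT indexes on temporary tables
  CREATE TABLE t_ok (a TEXT, FULLTEXT INDEX(a)) ENGINE=InnoDB;
  -- the same definition on a regular table is accepted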
-
Marko Mäkelä authored
-
- 11 May, 2018 8 commits
-
-
Marko Mäkelä authored
-
Sachin Agarwal authored
Problem: The fix for Bug #21348684 (#Rb9581) introduced a conditional debug execute 'buf_pool_resize_chunk_null', which causes the new chunk memory for the 2nd buffer pool instance to be freed. The buffer pool resize function removes all old chunk entries from 'buf_chunk_map_reg' and adds the new chunk entries into it. But when 'buf_pool_resize_chunk_null' is set to true, the 2nd buffer pool instance's chunk entries are not added into 'buf_chunk_map_reg'. When the purge thread tries to access that buffer chunk, it leads to a debug assertion.
Fix: Added the old chunk entries into 'buf_chunk_map_reg' for the 2nd buffer pool instance when the 'buf_pool_resize_chunk_null' debug condition is set to true.
Reviewed by: Jimmy <Jimmy.Yang@oracle.com> RB: 18664
-
Aakanksha Verma authored
PROBLEM
The issue found during the ntest run is a regression of Bug #27141613. When an index is being freed due to an error during its creation and has not been added to the dictionary cache, its field columns are not set; dereferencing the null column pointer while cleaning the index out of the virtual column's index list leads to a crash.
NOTE: The test i_innodb.virtual_debug was also failing on 32k page size and above for the newly added scenario. Fixed that.
FIX
Added a check so that the virtual index is removed from the virtual column's index list only if the index is cached.
Reviewed by: Satya Bodapati<satya.bodapati@oracle.com> RB: 18670
-
Aakanksha Verma authored
PROBLEM
=======
When adding a virtual index fails with DB_TOO_BIG_RECORD, the virtual index being freed isn't removed from the index list of the virtual column that is part of the index. While the undo log is read, this could fetch a wrong value during rollback and cause the assertion reported in the bug.
FIX
===
Added a function, called when the virtual index is being freed, that removes the index from the index list of the virtual column that was a field of that index.
Reviewed By: Jimmy Yang<Jimmy.Yang@oracle.com> RB: 18528
-
Marko Mäkelä authored
-
Aditya A authored
PROBLEM
-------
Whenever an FTS table is created, it registers itself in a queue that is operated by a background thread whose job is to optimize the FTS tables in the background. Additionally, we place these FTS tables in the non-LRU list so that they cannot be evicted from the cache. But when a node that already has FTS tables is brought up, we first try to load the FTS tables into the dictionary, but we skip the part where they are added to the background queue and to the non-LRU list because the background thread is not yet created; so these tables are loaded but can be evicted from the cache.
Now coming to the deadlock scenario:
1. A server background thread is trying to evict a table from the cache because the cache is full, so it scans the LRU list for tables it can evict. It finds that the FTS table (for the reason explained above) can be evicted, takes dict_sys->mutex (a system-wide mutex), submits a request to the background thread to remove this table from the queue, and waits for it to be completed.
2. In the meantime fts_optimize_thread() is processing another job in the queue and needs dict_sys->mutex for a small amount of time, but it cannot get it because it is blocked by the first background thread.
So Thread 1 is waiting for its job to be completed by Thread 2, whereas Thread 2 is waiting for dict_sys->mutex held by Thread 1, causing the deadlock.
FIX
-
Sachin Agarwal authored
Problem: When an incorrect value is assigned to the innodb_data_file_path or innodb_temp_data_file_path parameter, InnoDB returns an error and logs an error message in the mysqld.err file, but the error message contains no information about the parameter that caused InnoDB initialization to fail.
Fix: Added an error message with the name and value of the parameter that caused InnoDB initialization to fail.
Reviewed by: Jimmy <Jimmy.Yang@oracle.com> RB: 18206
-
Sergey Vojtovich authored
Compressed blob columns didn't accept data at their full capacity. E.g. storing 255 bytes into a TINYBLOB resulted in a "Data too long" error. Now it is allowed, assuming the compression method was able to produce a shorter string (so that both metadata and compressed data fit the blob) and column_compression_threshold is lower than the blob size. If no compression was performed, we still have to reserve an additional byte for metadata, so we perform normal data truncation and return its status.
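A hedged example of the new behavior (assuming the data compresses well enough to leave room for the metadata byte):

  CREATE TABLE t (b TINYBLOB COMPRESSED) ENGINE=InnoDB;
  INSERT INTO t VALUES (REPEAT('a', 255));
  -- 255 bytes is TINYBLOB's full capacity; this used to fail with "Data too long"
  -- and is now accepted when the compressed form plus the metadata byte still fits.
  -- Incompressible data of the same length is still truncated, since one byte
  -- must be reserved for metadata.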
-