- 24 May, 2018 3 commits
-
-
Monty authored
The cause of this was several different bugs:
- When using binary logging with binlog_row_image=FULL, all bits in read_set were set, which caused a different (wrong) pattern for marking vcol_set.
- TABLE::mark_virtual_columns_for_write() didn't in all cases mark vcol_set with the vcol_field.
- TABLE::update_virtual_fields() has to update all vcol fields on REPLACE if binary logging with FULL is used.
- VCOL_UPDATE_INDEXED should update all vcol fields that are part of an index and were not updated by VCOL_UPDATE_FOR_READ.
- max_row_length() calculated the length of NULL and unused fields. This didn't cause any crash, but used more memory than needed.
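As a rough illustration of the marking pattern involved, here is a minimal sketch using std::bitset stand-ins for the server's MY_BITMAP; VcolField, mark_vcols and MAX_FIELDS are invented names, not the actual TABLE API:

```cpp
#include <bitset>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a virtual-column descriptor; the server's
// real structures (TABLE, MY_BITMAP, Virtual_column_info) differ.
struct VcolField { bool indexed; };

constexpr std::size_t MAX_FIELDS = 64;

// With binlog_row_image=FULL every bit of read_set is set, so deriving
// vcol_set purely from read_set produces the wrong marking pattern;
// indexed virtual columns must be marked explicitly as well.
// (Assumes vcols.size() <= MAX_FIELDS.)
std::bitset<MAX_FIELDS> mark_vcols(const std::bitset<MAX_FIELDS> &read_set,
                                   const std::vector<VcolField> &vcols)
{
  std::bitset<MAX_FIELDS> vcol_set;
  for (std::size_t i = 0; i < vcols.size(); i++)
    if (read_set.test(i) || vcols[i].indexed) // VCOL_UPDATE_INDEXED case
      vcol_set.set(i);
  return vcol_set;
}
```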
-
Marko Mäkelä authored
thd_destructor_proxy(): Ensure that purge actually exits, as the logic should have done ever since MDEV-14080.
srv_purge_shutdown(): A new function to wait for the purge coordinator to exit. Before exiting, the purge coordinator will ensure that all purge workers have exited.
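A minimal sketch of the wait/notify shape such a shutdown function takes, using std primitives as stand-ins for InnoDB's events (all names here are illustrative):

```cpp
#include <condition_variable>
#include <mutex>

// Illustrative model only: the real srv_purge_shutdown() uses InnoDB
// events and thread-state flags, not std primitives.
std::mutex purge_mutex;
std::condition_variable purge_exited_cv;
bool purge_coordinator_exited = false;

// Block until the purge coordinator has signalled its exit.
void srv_purge_shutdown_sketch()
{
  std::unique_lock<std::mutex> lock(purge_mutex);
  purge_exited_cv.wait(lock, [] { return purge_coordinator_exited; });
}

// Last action of the purge coordinator, after it has confirmed that
// all purge worker threads are gone.
void purge_coordinator_exit_sketch()
{
  {
    std::lock_guard<std::mutex> lock(purge_mutex);
    purge_coordinator_exited = true;
  }
  purge_exited_cv.notify_all();
}
```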
-
Marko Mäkelä authored
This is the MariaDB 10.2 version of the patch.
field_store_string(): Simplify the code.
field_store_index_name(): Remove, and use field_store_string() instead. Starting with MariaDB 10.2.2, there is the predicate dict_index_t::is_committed(), and dict_index_t::name never contains the magic byte 0xff. Correct some comments to refer to TEMP_INDEX_PREFIX_STR.
i_s_cmp_per_index_fill_low(): Use the appropriate value NULL to identify that an index was not found. Check that storing each column value succeeded.
i_s_innodb_buffer_page_fill(), i_s_innodb_buf_page_lru_fill(): Only invoke Field::set_notnull() if the index was found. (This fixes the bug.)
i_s_dict_fill_sys_indexes(): Adjust the index->name that was directly loaded from SYS_INDEXES.NAME (which can start with the 0xff byte). This was the only function that depended on the translation in field_store_index_name().
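The invariant the fix restores — set_notnull() only after a value was actually stored — can be sketched like this (ISField and field_store_string_sketch are simplified stand-ins, not the real Field API):

```cpp
#include <cstring>

// Simplified stand-in for an INFORMATION_SCHEMA column; the server's
// Field API differs.
struct ISField
{
  bool is_null = true;
  char buf[64] = {0};
  void set_notnull() { is_null = false; }
};

int field_store_string_sketch(ISField *f, const char *s)
{
  if (!s)                // NULL identifies "index not found"
    return 0;            // leave the column as SQL NULL
  std::strncpy(f->buf, s, sizeof(f->buf) - 1);
  f->set_notnull();      // only after a successful store
  return 0;
}
```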
-
- 23 May, 2018 1 commit
-
-
Sergei Petrunia authored
Fix an obvious typo: replace_column should be applied to SHOW TABLE STATUS, not to SELECT * FROM t1.
-
- 22 May, 2018 4 commits
-
-
Monty authored
-
Monty authored
The crash happened when deleting all columns that were part of a check constraint. The bug was that the read map for the 'from' table was used when checking the CHECK constraint, and it was not properly reset in copy_data_between_tables().
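The shape of the fix can be sketched as a bitmap reset before constraint evaluation (std::bitset as a stand-in for MY_BITMAP; the names are invented):

```cpp
#include <bitset>
#include <cstddef>

// Before evaluating CHECK constraints against the 'to' table, the read
// map must describe that table's fields, not stale marks left over
// from the 'from' table.
template <std::size_t N>
void prepare_check_constraint_read_set(std::bitset<N> &read_set,
                                       const std::bitset<N> &fields_read_by_check)
{
  read_set.reset();                 // drop stale 'from'-table marks
  read_set |= fields_read_by_check; // mark only what the constraint reads
}
```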
-
Sergei Petrunia authored
Step #1: RocksDB files require a special #define when they are compiled with valgrind. Without it, valgrind fails with an 'unimplemented syscall' error for an fcntl call.
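The pattern is a compile-time guard around the offending call; the macro name below is an assumption for illustration, not verified against the tree:

```cpp
#include <fcntl.h>

// ROCKSDB_VALGRIND_RUN is assumed here for illustration; the point is
// that valgrind aborts with "unimplemented syscall" on fcntl commands
// it cannot emulate, so valgrind builds compile the call out.
bool fcntl_guarded(int fd, int cmd, int arg)
{
#ifdef ROCKSDB_VALGRIND_RUN
  (void) fd; (void) cmd; (void) arg;
  return true;                       // skip the unsupported call
#else
  return fcntl(fd, cmd, arg) != -1;  // normal builds call fcntl directly
#endif
}
```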
-
Jacob Mathew authored
The failures with valgrind occur as a result of Spider sometimes using the wrong transaction for operations in background threads that send requests to the data nodes. Using the wrong transaction caused the networking to the data nodes to use the wrong thread in some cases. Valgrind eventually detects this when such a thread is destroyed before the wrong transaction, on being freed, uses it to disconnect from the data node. I have fixed the problem by correcting the transaction used in each of these cases.
Author: Jacob Mathew.
Reviewer: Kentoku Shiba.
Merged: Commit 4d576d9d on branch bb-10.3-MDEV-12900
-
- 21 May, 2018 1 commit
-
-
Sergei Petrunia authored
-
- 20 May, 2018 2 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
- 19 May, 2018 6 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
MDEV-16153 Server crashes in Apc_target::disable, ASAN heap-use-after-free in Explain_query::~Explain_query upon/after EXECUTE IMMEDIATE
Explain_query must be created in the execution arena. But JOIN::optimize_inner() temporarily switches to the statement arena under `if (sel->first_cond_optimization)`. This might cause Explain_query to be allocated in the statement arena. Usually that is harmless (although technically incorrect and a waste of memory), but in the case of EXECUTE IMMEDIATE, the Prepared_statement object and its statement arena are destroyed before the log_slow_statement() call, which uses Explain_query.
Fix:
1. Create Explain_query before switching arenas.
2. Before filling the earlier-created Explain_query with data, set thd->mem_root to Explain_query::mem_root.
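A sketch of the ordering the fix enforces; THD_sketch, Arena and the heap allocation below are stand-ins for the server's arena machinery, shown only to make the allocate-before-switch point visible:

```cpp
#include <memory>

struct Explain_query_sketch {};   // stand-in for Explain_query
struct Arena {};                  // stand-in for a MEM_ROOT-backed arena

struct THD_sketch
{
  Arena execution_arena, statement_arena;
  Arena *active = &execution_arena;
};

std::unique_ptr<Explain_query_sketch>
optimize_inner_sketch(THD_sketch &thd, bool first_cond_optimization)
{
  // Step 1 of the fix: allocate while the execution arena is active.
  auto explain = std::make_unique<Explain_query_sketch>();

  if (first_cond_optimization)
    thd.active = &thd.statement_arena;  // the temporary switch that used
                                        // to capture the allocation
  // ... optimization work ...
  thd.active = &thd.execution_arena;    // restore
  return explain;                       // outlives the statement arena
}
```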
-
Sergei Golubchik authored
-
Vladislav Vaintroub authored
It can happen that the connection is already gone during the window between querying I_S.PROCESSLIST and issuing KILL QUERY. The fix is to tolerate ER_NO_SUCH_THREAD returned from KILL QUERY. Also improve the "Killing MDL query" message to actually output the query, and do not try to kill queries that are already in the Killed state.
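The tolerance amounts to treating one specific server error as benign; a sketch (the client call is a stub, but ER_NO_SUCH_THREAD is the real server error code, 1094):

```cpp
// ER_NO_SUCH_THREAD is the real server error code; everything else
// here is a stand-in for the backup tool's client plumbing.
static const int ER_NO_SUCH_THREAD = 1094;

// Stub: pretend the target vanished between the I_S.PROCESSLIST query
// and the KILL QUERY statement.
static int run_kill_query(unsigned long /*thread_id*/)
{
  return ER_NO_SUCH_THREAD;
}

bool kill_query_tolerant(unsigned long thread_id)
{
  int err = run_kill_query(thread_id);
  // The connection disappearing inside the window is a benign race,
  // not a failure of the kill loop.
  return err == 0 || err == ER_NO_SUCH_THREAD;
}
```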
-
Sergei Golubchik authored
-
Sergei Golubchik authored
Fix a typo that broke the main.view test. Follow-up for ef295c31.
-
- 18 May, 2018 7 commits
-
-
Sergei Petrunia authored
Apply patch by Oleksandr Byelkin: do not use "only index read" when analyzing indexes if there is a field which is present in the index only partially.
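The condition being tightened is, in essence, a covering-index check; a stand-in sketch (FieldInfo and index_covers_all are invented names):

```cpp
// A column stored in the index only as a prefix cannot be read from
// the index alone; "only index read" must then be abandoned.
struct FieldInfo
{
  bool in_index;     // the field appears in the index
  bool full_length;  // the whole value is stored, not just a prefix
};

bool index_covers_all(const FieldInfo *fields, int n)
{
  for (int i = 0; i < n; i++)
    if (!fields[i].in_index || !fields[i].full_length)
      return false;  // partial presence => read the table row instead
  return true;
}
```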
-
Sergei Petrunia authored
Mark the plugin as dynamic-only.
-
Vladislav Vaintroub authored
It should work fine on all Unixes, but on Windows it only worked by accident in the past, with the client not being Unicode-safe. It stopped working with the Visual Studio 2017 15.7 update.
-
Jacob Mathew authored
The crash occurs when a thread that is closing its connection attempts to access Spider transaction information after another thread has freed that memory while processing Spider plugin deinit. This occurs because Spider does not adjust the plugin's reference count when it sets a transaction information pointer for the plugin. The fix I implemented changes the way Spider sets the transaction information pointer to use thd_set_ha_data() so that Spider's plugin reference counter is adjusted as well.
Author: Jacob Mathew.
Reviewer: Kentoku Shiba.
Merged From: Commit ab9d420d on branch 10.2
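thd_set_ha_data() is a real server API; the following is only a simplified model, with stand-in types, of the reference-counting behaviour that makes it safer than writing the per-THD slot directly:

```cpp
// Stand-in types; the real API is
//   thd_set_ha_data(THD *thd, const handlerton *hton, const void *data)
// and the pinning goes through the plugin locking machinery.
struct Plugin { int refcount = 0; };
struct ThdHaSlot { void *data = nullptr; };

void thd_set_ha_data_model(ThdHaSlot &slot, Plugin &plugin, void *trx_info)
{
  if (trx_info && !slot.data)
    plugin.refcount++;   // pin: plugin cannot deinit while data is attached
  else if (!trx_info && slot.data)
    plugin.refcount--;   // unpin: detaching releases the reference
  slot.data = trx_info;
}
```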
-
Sergei Petrunia authored
Fix two issues:
1. Rdb_ddl_manager::rename() loses the value of m_hidden_pk_val. The new object used to get 0, which means "not loaded from the db yet".
2. ha_rocksdb::load_hidden_pk_value() uses the current transaction (and its snapshot) when loading the hidden PK value from disk. This may cause it to load an out-of-date value.
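Issue 1 can be sketched as a missing field copy during rename (TableDefSketch is an invented stand-in for the RocksDB-SE table definition object):

```cpp
#include <string>

// Stand-in for the table definition; only the detail the fix targets
// is modeled. 0 means "not loaded from the db yet".
struct TableDefSketch
{
  std::string name;
  long long m_hidden_pk_val = 0;
};

TableDefSketch rename_sketch(const TableDefSketch &old_def,
                             const std::string &new_name)
{
  TableDefSketch new_def;
  new_def.name = new_name;
  // The bug: omitting this copy left the new object at 0, forcing a
  // later reload that could read an out-of-date snapshot (issue 2).
  new_def.m_hidden_pk_val = old_def.m_hidden_pk_val;
  return new_def;
}
```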
-
Igor Babaev authored
The current code does not support recursive CTEs whose specifications contain a mix of UNION ALL and UNION DISTINCT operations. This patch catches such specifications and reports errors for them.
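The new check boils down to rejecting a mixed sequence of union operators in the CTE specification; a stand-in sketch (the real code inspects SELECT_LEX units):

```cpp
#include <cstddef>
#include <vector>

// Stand-in for walking the UNION parts of a recursive CTE's
// specification.
enum class UnionOp { ALL, DISTINCT };

bool union_ops_uniform(const std::vector<UnionOp> &ops)
{
  for (std::size_t i = 1; i < ops.size(); i++)
    if (ops[i] != ops[0])
      return false;      // mixed ALL/DISTINCT: report an error
  return true;
}
```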
-
Daniel Bartholomew authored
-
- 17 May, 2018 4 commits
-
-
Igor Babaev authored
with recursive subquery
There were two problems:
1. The code did not report that usage of global ORDER BY / LIMIT clauses was not supported yet.
2. The code just reset the fake_select_lex of the unit specifying a recursive CTE to NULL, and that caused memory leaks in some cases.
-
Jacob Mathew authored
The crash occurs when a thread that is closing its connection attempts to access Spider transaction information after another thread has freed that memory while processing Spider plugin deinit. This occurs because Spider does not adjust the plugin's reference count when it sets a transaction information pointer for the plugin. The fix I implemented changes the way Spider sets the transaction information pointer to use thd_set_ha_data() so that Spider's plugin reference counter is adjusted as well.
Author: Jacob Mathew.
Reviewer: Kentoku Shiba.
Merged From: Commit eabfadce on branch bb-10.3-MDEV-7914
-
Sergei Golubchik authored
When Item_insert_value needs a dummy field, use zero-length Field_string, not Field_null. The latter isn't compatible with CREATE ... SELECT.
-
Sergei Golubchik authored
-
- 16 May, 2018 12 commits
-
-
Sergei Golubchik authored
jemalloc > 5.0.0 doesn't like to be linked with a dlopen-ed module. Don't link tokudb with jemalloc on Fedora 28; LD_PRELOAD it instead, with mysqld_safe and with systemd.
-
Sergei Golubchik authored
-
Monty authored
Fixed by extending unique_table() with a flag to disallow usage of the replaced table. I also cleaned up find_dup_table() to not use 'goto next', and added more comments to the code in find_dup_table().
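A stand-in sketch of the flag-driven shape (the flag name and structures below are invented; the real functions operate on TABLE_LIST chains):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Invented flag: when set, even the table being replaced counts as a
// duplicate; when clear, the replaced table is skipped.
static const uint32_t DUP_DISALLOW_REPLACED_TABLE = 1u << 0;

struct TableRef { std::string name; bool is_replaced = false; };

const TableRef *find_dup_table_sketch(const std::vector<TableRef> &tables,
                                      const std::string &target,
                                      uint32_t flags)
{
  for (const TableRef &t : tables)   // plain loop, no 'goto next'
  {
    if (t.name != target)
      continue;
    if ((flags & DUP_DISALLOW_REPLACED_TABLE) || !t.is_replaced)
      return &t;                     // duplicate found
  }
  return nullptr;
}
```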
-
Sergey Vojtovich authored
Analyze the core independently of the max-save-datadir and max-save-core settings. Increment $num_saved_cores only if the core was actually saved. "Move any core files from e.g. mysqltest" independently of the max-save-datadir setting.
Note: it may overwrite a core from mysqld, which might not be desired (though it worked this way even before).
-
Marko Mäkelä authored
srv_purge_coordinator_thread(): Wait for all purge worker threads to actually exit. An analysis of a core dump of a hung 10.3 server revealed that one srv_worker_thread did not exit, even though the purge coordinator had exited. This caused kill_server_thread and mysqld_main to wait indefinitely. The main InnoDB shutdown was never called, because unireg_end() was never called.
-
Monty authored
- Added missing test case for MyISAM
-
Thirunarayanan Balathandayuthapani authored
Imported the following test cases from MySQL to MariaDB:
1) innodb.alter_kill
2) innodb.alter_foreign_crash
3) innodb.alter_rename_files
4) innodb.analyze_table
5) Appended a case to innodb-online-alter-gis
-
Shaohua Wang authored
Problem: We keep pinning pages in dict_stats_analyze_index_below_cur() but don't release them. When we have a relatively small buffer pool size and a big innodb_stats_persistent_sample_pages, there will be no free pages left for use.
Solution: Use a separate mtr in dict_stats_analyze_index_below_cur(), and commit the mtr before returning.
Reviewed-by: Jimmy Yang <jimmy.yang@oracle.com>
RB: 11362
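The scoping idea can be sketched with a toy mini-transaction type (MiniTrx below is an invented stand-in for InnoDB's mtr_t):

```cpp
// Stand-in mini-transaction: pages stay latched until commit.
struct MiniTrx
{
  int pinned_pages = 0;
  void pin_page() { pinned_pages++; }
  void commit()   { pinned_pages = 0; }  // releases all page latches
};

long analyze_below_cur_sketch(int pages_to_sample)
{
  MiniTrx mtr;                 // a local mtr, not the caller's
  long n_recs = 0;
  for (int i = 0; i < pages_to_sample; i++)
  {
    mtr.pin_page();            // pages latched only within this mtr
    n_recs += 1;               // ... count records on the page ...
  }
  mtr.commit();                // release before return: accumulating
                               // pins in the caller's mtr could exhaust
                               // a small buffer pool
  return n_recs;
}
```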
-
Thirunarayanan Balathandayuthapani authored
-
Marko Mäkelä authored
In debug builds of MySQL, there is a configuration variable that allows an InnoDB log checkpoint to be initiated: SET GLOBAL innodb_log_checkpoint_now=ON; Setting this variable while a table-rebuilding ALTER TABLE is executing may result in an infinite loop.
checkpoint_now_set(): Account for log_sys->append_on_checkpoint->size(). Note that this function contains race conditions, because it accesses fields of log_sys without holding log_sys->mutex. We think that this is acceptable, because this variable only exists for debugging purposes, in debug builds of MySQL.
RB: 9947
Reviewed-by: Sunny Bains <sunny.bains@oracle.com>
-
Thirunarayanan Balathandayuthapani authored
to innodb.innodb-online-alter-gis
-
Annamalai Gurusami authored
Problem: The function row_build_index_entry_low() takes a dtuple_t object ('row') and a dict_index_t object ('index') as input and returns a new dtuple_t object ('entry') as output. The dtuple_t object 'row' given as input might have been constructed from a different dict_index_t object (!= index). So when accessing the externally stored data of the given 'row', we need to make use of the correct index object.
Solution: Store the page size information in the associated row_ext_t object.
rb#6086 approved by Vasil and Jimmy.
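A simplified model of the solution (RowExtSketch is an invented stand-in for row_ext_t): capture the page size at construction so later readers of the externally stored columns no longer derive it from an unrelated index:

```cpp
#include <cstdint>

// Invented stand-in for row_ext_t; only the added detail is modeled.
struct RowExtSketch
{
  uint32_t page_size;   // geometry of the index the row was built from
  // ... offsets/lengths of the externally stored columns ...
};

// Capture the page size once, when the row and its external-field
// descriptor are built, instead of guessing it later.
RowExtSketch make_row_ext(uint32_t source_index_page_size)
{
  return RowExtSketch{ source_index_page_size };
}
```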
-