- 19 Apr, 2019 2 commits
-
-
Monty authored
-
Alexander Barkov authored
-
- 18 Apr, 2019 4 commits
-
-
Sergei Petrunia authored
Do not attempt to set param->table->with_impossible_ranges if the range optimizer is using pseudo-indexes (which is true when we are computing EITS selectivity estimates or doing partition pruning).
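A minimal sketch of the kind of guard described here, using simplified stand-in types rather than the actual MariaDB range-optimizer structures (PARAM, TABLE):

    #include <cstdint>

    // Simplified stand-ins; the real structures are the range optimizer's
    // PARAM and TABLE.
    struct Table { uint64_t with_impossible_ranges= 0; };   // bitmap of keys
    struct Param { Table *table; bool using_real_indexes; };

    // Record that key number keynr produced an impossible range, but only when
    // the optimizer works with real indexes; for the pseudo-indexes used in
    // EITS selectivity estimation or partition pruning, do nothing.
    static void mark_impossible_range(Param *param, unsigned keynr)
    {
      if (!param->using_real_indexes)
        return;
      param->table->with_impossible_ranges|= uint64_t{1} << keynr;
    }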
-
Marko Mäkelä authored
DROP DATABASE would internally execute DROP TABLE on every contained table and finally remove the directory. In InnoDB, DROP TABLE is sometimes executed in the background. The table would be renamed to a name that starts with #sql. The existence of these files would prevent DROP DATABASE from succeeding. CREATE OR REPLACE DATABASE can internally execute DROP DATABASE if the directory already exists. This could fail due to the InnoDB background DROP TABLE, possibly due to some tables that were leftovers from earlier tests.
-
Vladislav Vaintroub authored
Do not try to compile threadpool_common.cc if the threadpool is not available on the current OS. Also introduce the undocumented/uncached CMake variable DISABLE_THREADPOOL, to emulate builds on exotic OSes.
-
Igor Babaev authored
When pushing a condition from HAVING into WHERE, the function st_select_lex::pushdown_from_having_into_where() transforms column references in the pushed condition, then performs cleanup of the condition's items, and finally calls fix_fields() for them. The cleanup is performed by a call of the method walk() with cleanup_processor as the first parameter. Unfortunately this sequence of calls does not work if the condition contains cached items, because fix_fields() does not descend into Item_cache items, which leaves the underlying items unfixed. The fix used in this patch is simply to skip Item_cache objects when performing cleanup of the pushed condition. To make the traversal procedure walk() skip Item_cache objects, the third parameter of this walk() call is set to the fictitious pointer (void *) 1, and Item_cache::walk() is changed to do nothing when it receives that value as its third parameter.
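As an illustration only (simplified stand-in classes, not the server's Item hierarchy), the sentinel-argument idea looks roughly like this:

    #include <cstddef>

    struct Item
    {
      typedef bool (Item::*Processor)(void *arg);
      virtual ~Item() = default;
      // Default walk(): just apply the processor to this item.
      virtual bool walk(Processor p, bool walk_subquery, void *arg)
      { return (this->*p)(arg); }
      bool cleanup_processor(void *) { /* undo fix_fields() state */ return false; }
    };

    struct Item_cache : Item
    {
      Item *example= nullptr;                  // the cached underlying item
      bool walk(Processor p, bool walk_subquery, void *arg) override
      {
        if (arg == (void *) 1)                 // sentinel: skip cleanup entirely,
          return false;                        // leaving the underlying item fixed
        return (example && example->walk(p, walk_subquery, arg)) ||
               (this->*p)(arg);
      }
    };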
-
- 17 Apr, 2019 5 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
InnoDB crash recovery used to read every data page for which redo log exists. This is unnecessary for those pages that are initialized by the redo log. If a newly created page is corrupted, recovery could unnecessarily fail. It would suffice to reinitialize the page based on the redo log records. To add insult to injury, InnoDB crash recovery could hang if it encountered a corrupted page. We will fix also that problem. InnoDB would normally refuse to start up if it encounters a corrupted page on recovery, but that can be overridden by setting innodb_force_recovery=1.
Data pages are completely initialized by the records MLOG_INIT_FILE_PAGE2 and MLOG_ZIP_PAGE_COMPRESS. MariaDB 10.4 additionally recognizes MLOG_INIT_FREE_PAGE, which notifies that a page has been freed and its contents can be discarded (filled with zeroes). The record MLOG_INDEX_LOAD notifies that redo logging has been re-enabled after being disabled. We can avoid loading the page if all buffered redo log records predate the MLOG_INDEX_LOAD record.
For the internal tables of FULLTEXT INDEX, no MLOG_INDEX_LOAD records were written before commit aa3f7a10. Hence, we will skip these optimizations for tables whose name starts with FTS_.
This is joint work with Thirunarayanan Balathandayuthapani.
fil_space_t::enable_lsn, file_name_t::enable_lsn: The LSN of the latest recovered MLOG_INDEX_LOAD record for a tablespace.
mlog_init: Page initialization operations discovered during redo log scanning. FIXME: This really belongs in recv_sys->addr_hash, and should be removed in MDEV-19176.
recv_addr_state: Add the new state RECV_WILL_NOT_READ to indicate that according to mlog_init, the page will be initialized based on redo log record contents.
recv_add_to_hash_table(): Set the RECV_WILL_NOT_READ state if appropriate. For now, we do not treat MLOG_ZIP_PAGE_COMPRESS as page initialization. This works around bugs in the crash recovery of ROW_FORMAT=COMPRESSED tables.
recv_mark_log_index_load(): Process a MLOG_INDEX_LOAD record by resetting the state to RECV_NOT_PROCESSED and by updating fil_name_t::enable_lsn.
recv_init_crash_recovery_spaces(): Copy fil_name_t::enable_lsn to fil_space_t::enable_lsn.
recv_recover_page(): Add the parameter init_lsn, to ignore any log records that precede the page initialization. Add DBUG output about skipped operations.
buf_page_create(): Initialize FIL_PAGE_LSN, so that recv_recover_page() will not wrongly skip applying the page-initialization record due to the field containing some newer LSN as a leftover from a different page. Do not invoke ibuf_merge_or_delete_for_page() during crash recovery.
recv_apply_hashed_log_recs(): Remove some unnecessary lookups. Note if a corrupted page was found during recovery. After invoking buf_page_create(), do invoke ibuf_merge_or_delete_for_page() via mlog_init.ibuf_merge() in the last recovery batch.
ibuf_merge_or_delete_for_page(): Relax a debug assertion.
innobase_start_or_create_for_mysql(): Abort startup if a corrupted page was found during recovery. Corrupted pages will not be flagged if innodb_force_recovery is set. However, the recv_sys->found_corrupt_fs flag can be set regardless of innodb_force_recovery if file names are found to be incorrect (for example, multiple files with the same tablespace ID).
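A rough sketch (simplified stand-in types, not the actual recv_sys structures) of the skipping that the new init_lsn parameter of recv_recover_page() makes possible:

    #include <cstdint>

    typedef uint64_t lsn_t;

    struct log_rec_t { lsn_t start_lsn, end_lsn; log_rec_t *next; };

    static void apply_record(const log_rec_t &) { /* redo log application */ }

    // Apply buffered records to one page; records that end before the LSN of
    // the page-initialization record (init_lsn) are skipped, because the
    // initialization record recreates the page contents from scratch.
    static void recover_page(const log_rec_t *recs, lsn_t init_lsn)
    {
      for (const log_rec_t *rec= recs; rec; rec= rec->next)
      {
        if (rec->end_lsn < init_lsn)
          continue;
        apply_record(*rec);
      }
    }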
-
Marko Mäkelä authored
Similar to what was done in commit aa3f7a10 for FULLTEXT INDEX, we must ensure that MLOG_INDEX_LOAD records will always be written if redo logging was disabled. row_merge_build_indexes(): Invoke row_merge_write_redo() also when online operation is not being executed or an error occurs. In case of an error, invoke flush_observer->interrupted() so that the pages will not be flushed but merely evicted from the buffer pool. Before resuming redo logging, it is crucial for the correctness of mariabackup and InnoDB crash recovery to flush or evict all affected pages and to write MLOG_INDEX_LOAD records.
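A hedged sketch of the control flow described above (simplified names and types, not the real function signatures):

    enum dberr_t { DB_SUCCESS, DB_ERROR };

    struct FlushObserver
    {
      void interrupted() { /* mark pages to be evicted, not flushed */ }
      void flush()       { /* flush the modified pages */ }
    };

    static void write_index_load_marker() { /* row_merge_write_redo() stand-in */ }

    // Finish an index build: the MLOG_INDEX_LOAD marker is written even when
    // the operation failed or was not an online operation, and on failure the
    // modified pages are evicted instead of flushed.
    static void finish_index_build(dberr_t err, FlushObserver *observer)
    {
      write_index_load_marker();
      if (err != DB_SUCCESS)
        observer->interrupted();
      else
        observer->flush();
    }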
-
Alexander Barkov authored
-
- 16 Apr, 2019 10 commits
-
-
Oleksandr Byelkin authored
Fix case of prelocking placeholder
-
Kentoku SHIBA authored
-
Igor Babaev authored
Currently, range rowid filters can be combined only with ref access and single-index range access. So if the optimizer has chosen some other quick select method to access a joined table, no range rowid filter can be used for that table.
-
Marko Mäkelä authored
With the MDEV-15562 instant DROP COLUMN, clustered index records will contain traces of dropped columns, as follows:
In ROW_FORMAT=REDUNDANT, dropped columns will be stored as 0 bytes, but they will consume 1 or 2 bytes per column in the record header.
In ROW_FORMAT=COMPACT or ROW_FORMAT=DYNAMIC, dropped columns will be stored as NULL if allowed. This will consume 1 bit per nullable column.
In ROW_FORMAT=COMPACT or ROW_FORMAT=DYNAMIC, dropped NOT NULL columns will be stored as 0 bytes if allowed. This will consume 1 byte per NOT NULL variable-length column. Fixed-length columns will be stored using the fixed number of bytes.
The metadata record will be 20 bytes larger than user records, because it will contain a metadata BLOB pointer. We must refuse ALGORITHM=INSTANT (and require a table rebuild) if the metadata record would grow too big to fit in the index page.
If SQL_MODE includes STRICT_TRANS_TABLES or STRICT_ALL_TABLES, we should refuse ALGORITHM=INSTANT if the maximum length of user records would exceed the maximum size of an index page, similar to what row_create_index_for_mysql() does during CREATE TABLE. This limit would kick in when the default values for any instantly added columns in the metadata record are NULL or short, but the allowed maximum values are long.
instant_alter_column_possible(): Add the parameter "bool strict" to enable checks for the user record size, and always check the metadata record size.
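An illustrative and deliberately simplified version of the size checks that the new "bool strict" parameter is about; the real limits and record layout are more involved:

    // Returns true when ALGORITHM=INSTANT can be allowed. The metadata record
    // must always fit in an index page; under strict SQL_MODE the maximum
    // possible user record must fit as well.
    static bool instant_possible(unsigned metadata_rec_size,
                                 unsigned max_user_rec_size,
                                 unsigned page_rec_limit, bool strict)
    {
      if (metadata_rec_size > page_rec_limit)
        return false;                      // always refuse: rebuild required
      if (strict && max_user_rec_size > page_rec_limit)
        return false;                      // strict mode: refuse oversized rows
      return true;
    }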
-
willhan authored
Test case: t1 is a Spider engine table:
CREATE TABLE `t1` ( `id` int(11) NOT NULL DEFAULT '0', `name` char(64) DEFAULT NULL, PRIMARY KEY (`id`) ) ENGINE=SPIDER
The query "select * from t1 where name not like 'x%'" would dispatch "select xxx name name like 'x%'" to the remote mysqld; the NOT is dropped from the pushed-down condition, which is wrong.
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
MDEV-17362: SIGSEGV in JOIN::optimize_inner or Assertion `fixed == 0' failed in Item_equal::fix_fields, server crashes after 2nd execution of PS
Move the reinitialisation of the pushdown variables so that it happens for every query, because they are now used not only for derived tables.
-
Oleksandr Byelkin authored
-
wayne authored
-
Alexander Barkov authored
Turning initializing code into constructors.
-
- 15 Apr, 2019 3 commits
-
-
Marko Mäkelä authored
innobase_init(): Add a missing space to a warning message. Apparently, this message was corrupted in MariaDB 10.2.2 in commit fec844ac related to a conflict resolution when applying a change from MySQL 5.7.12.
-
Marko Mäkelä authored
srv_start(): Restore the call to buf_pool_invalidate() that was removed in commit 09af00cb. It turns out that the call is necessary to work around a bug that should be fixed in MDEV-19229.
-
Alexander Barkov authored
-
- 13 Apr, 2019 1 commit
-
-
Kentoku SHIBA authored
* Fix valgrind error caused by vp/spider.handler test. * Fix valgrind error caused by vp/spider.ha test.
-
- 12 Apr, 2019 7 commits
-
-
Kentoku SHIBA authored
Add changed test results.
-
Kentoku SHIBA authored
MDEV-16530 Spider datanodes need adjusted wait_timeout for long running queries on spider head node (#1258)
Add the following parameters:
- spider_remote_wait_timeout: Sets the remote wait_timeout at connection time, to improve connection performance when a suitable value is known. -1, 0: not set. 1 or more: seconds. The default value is -1.
- spider_wait_timeout: The wait timeout for remote servers. -1, 0: not set. 1 or more: seconds. The default value is 604800.
-
Kentoku SHIBA authored
-
Marko Mäkelä authored
-
Eugene Kosov authored
Remove the sometimes misleading word INPLACE from an error message.
-
Kentoku SHIBA authored
MDEV-18993 The keep-alive connection (set spider_conn_recycle_mode = 1) in spider would cause a crash in MariaDB (#1269)
Fix the following valgrind error:
==94390== Thread 29:
==94390== Invalid read of size 8
==94390== at 0x78389D: thd_increment_bytes_sent (sql_class.cc:4265)
==94390== by 0xC8EC46: net_real_write (net_serv.cc:730)
==94390== by 0xC8E0C8: net_flush (net_serv.cc:383)
==94390== by 0xC8E4D0: net_write_command (net_serv.cc:521)
==94390== by 0xADCE61: cli_advanced_command (client.c:468)
==94390== by 0xAE3CAF: mysql_close_slow_part (client.c:3671)
==94390== by 0xAE3D28: mysql_close (client.c:3683)
==94390== by 0x149E69A8: spider_db_mbase::disconnect() (spd_db_mysql.cc:2217)
==94390== by 0x1491EA26: spider_db_disconnect(st_spider_conn*) (spd_db_conn.cc:297)
==94390== by 0x14948EBE: spider_free_conn_alloc(st_spider_conn*) (spd_conn.cc:196)
==94390== by 0x1494B26A: spider_free_conn(st_spider_conn*) (spd_conn.cc:1251)
==94390== by 0x1494941F: spider_free_conn_from_trx(st_spider_transaction*, st_spider_conn*, bool, bool, int*) (spd_conn.cc:315)
==94390== Address 0x1f0e0990 is 4,832 bytes inside a block of size 25,728 free'd
==94390== at 0x4C2ACBD: free (vg_replace_malloc.c:530)
==94390== by 0x13F5545: my_free (my_malloc.c:222)
==94390== by 0x6C75B7: ilink::operator delete(void*, unsigned long) (sql_list.h:618)
==94390== by 0x77B9F6: THD::~THD() (sql_class.cc:1724)
==94390== by 0x1494FCE0: spider_bg_conn_action(void*) (spd_conn.cc:2580)
==94390== by 0x4E3DDD4: start_thread (in /usr/lib64/libpthread-2.17.so)
==94390== by 0x5FBFEAC: clone (in /usr/lib64/libc-2.17.so)
==94390== Block was alloc'd at
==94390== at 0x4C29BC3: malloc (vg_replace_malloc.c:299)
==94390== by 0x13F4DFA: my_malloc (my_malloc.c:101)
==94390== by 0x1491CF06: ilink::operator new(unsigned long) (sql_list.h:614)
==94390== by 0x1494F7FD: spider_bg_conn_action(void*) (spd_conn.cc:2501)
==94390== by 0x4E3DDD4: start_thread (in /usr/lib64/libpthread-2.17.so)
==94390== by 0x5FBFEAC: clone (in /usr/lib64/libc-2.17.so)
==94390== Invalid write of size 8
==94390== at 0x7838AF: thd_increment_bytes_sent (sql_class.cc:4265)
==94390== by 0xC8EC46: net_real_write (net_serv.cc:730)
==94390== by 0xC8E0C8: net_flush (net_serv.cc:383)
==94390== by 0xC8E4D0: net_write_command (net_serv.cc:521)
==94390== by 0xADCE61: cli_advanced_command (client.c:468)
==94390== by 0xAE3CAF: mysql_close_slow_part (client.c:3671)
==94390== by 0xAE3D28: mysql_close (client.c:3683)
==94390== by 0x149E69A8: spider_db_mbase::disconnect() (spd_db_mysql.cc:2217)
==94390== by 0x1491EA26: spider_db_disconnect(st_spider_conn*) (spd_db_conn.cc:297)
==94390== by 0x14948EBE: spider_free_conn_alloc(st_spider_conn*) (spd_conn.cc:196)
==94390== by 0x1494B26A: spider_free_conn(st_spider_conn*) (spd_conn.cc:1251)
==94390== by 0x1494941F: spider_free_conn_from_trx(st_spider_transaction*, st_spider_conn*, bool, bool, int*) (spd_conn.cc:315)
==94390== Address 0x1f0e0990 is 4,832 bytes inside a block of size 25,728 free'd
==94390== at 0x4C2ACBD: free (vg_replace_malloc.c:530)
==94390== by 0x13F5545: my_free (my_malloc.c:222)
==94390== by 0x6C75B7: ilink::operator delete(void*, unsigned long) (sql_list.h:618)
==94390== by 0x77B9F6: THD::~THD() (sql_class.cc:1724)
==94390== by 0x1494FCE0: spider_bg_conn_action(void*) (spd_conn.cc:2580)
==94390== by 0x4E3DDD4: start_thread (in /usr/lib64/libpthread-2.17.so)
==94390== by 0x5FBFEAC: clone (in /usr/lib64/libc-2.17.so)
==94390== Block was alloc'd at
==94390== at 0x4C29BC3: malloc (vg_replace_malloc.c:299)
==94390== by 0x13F4DFA: my_malloc (my_malloc.c:101)
==94390== by 0x1491CF06: ilink::operator new(unsigned long) (sql_list.h:614)
==94390== by 0x1494F7FD: spider_bg_conn_action(void*) (spd_conn.cc:2501)
==94390== by 0x4E3DDD4: start_thread (in /usr/lib64/libpthread-2.17.so)
==94390== by 0x5FBFEAC: clone (in /usr/lib64/libc-2.17.so)
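For illustration only (stand-in types, not the Spider sources), this is the use-after-free pattern behind the trace: the background thread's THD is destroyed while a recycled connection still points to it, so a later disconnect updates the freed THD's byte counters.

    #include <cstdint>

    struct THD { uint64_t bytes_sent= 0; };

    struct SpiderConn
    {
      THD *owner_thd= nullptr;        // THD that traffic is accounted against
      void disconnect()
      {
        // thd_increment_bytes_sent() in the trace performs this kind of update;
        // if owner_thd has already been deleted, it is an invalid read/write.
        if (owner_thd)
          owner_thd->bytes_sent+= 4;  // illustrative bookkeeping for the QUIT packet
      }
    };

    int main()
    {
      THD *thd= new THD;
      SpiderConn conn;
      conn.owner_thd= thd;
      // Buggy order (what the trace shows):  delete thd;  conn.disconnect();
      // Safe order: disconnect and detach first, then destroy the THD.
      conn.disconnect();
      conn.owner_thd= nullptr;
      delete thd;
      return 0;
    }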
-
Jan Lindström authored
-
- 10 Apr, 2019 3 commits
-
-
Marko Mäkelä authored
-
Jan Lindström authored
-
Jan Lindström authored
-
- 09 Apr, 2019 1 commit
-
-
Sergei Golubchik authored
Don't pass BUILD_CONFIG twice; once is enough.
-
- 08 Apr, 2019 4 commits
-
-
Marko Mäkelä authored
When freeing a file page, write a MLOG_INIT_FREE_PAGE record. This allows us to avoid page flush and instead punch holes later, in the page flushing. To implement that, we may want to make buf_page_t::file_page_was_freed available in non-debug builds. Crash recovery can choose to ignore or apply the record. In BtrBulk::finish() we must not write this record, because redo logging is being disabled for the page.
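A minimal sketch of the idea, with simplified stand-ins for the mini-transaction and log interfaces (not the real InnoDB declarations):

    #include <cstdint>

    struct mtr_t { bool logging_enabled= true; };   // mini-transaction stand-in

    enum mlog_id_t { MLOG_INIT_FREE_PAGE };

    static void write_log(mtr_t &, mlog_id_t, uint32_t /*space*/, uint32_t /*page_no*/)
    { /* append a redo log record */ }

    // Free a page: emit MLOG_INIT_FREE_PAGE so that recovery may discard the
    // old page contents, but skip the record when redo logging is disabled for
    // the page (the BtrBulk::finish() case mentioned above).
    static void free_page(mtr_t &mtr, uint32_t space, uint32_t page_no)
    {
      if (mtr.logging_enabled)
        write_log(mtr, MLOG_INIT_FREE_PAGE, space, page_no);
      // ... mark the page free in the allocation structures ...
    }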
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
fseg_free_page_func(): Avoid an unnecessary tablespace ID lookup. The callers should pass the tablespace that they already know.
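A sketch of the refactoring idea with simplified types (the real interface carries more state):

    #include <cstdint>

    struct fil_space_t { uint32_t id; };            // tablespace stand-in

    // Before (illustrative): the function received only a tablespace ID and
    // had to look the tablespace object up again.
    // After: the caller passes the fil_space_t it already holds, so no
    // id-to-tablespace lookup is repeated inside the page-freeing code.
    static void fseg_free_page(fil_space_t *space, uint32_t page_no)
    {
      // free page_no within *space, using the tablespace object directly
      (void) space;
      (void) page_no;
    }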
-