- 06 May, 2018 7 commits
-
-
Daniel Black authored
Compiles to the same vector code, just a bit simpler. vec_crc32.c is now identical to upstream (https://github.com/antonblanchard/crc32-vpmsum/). C code by Rogerio Alves <rogealve@br.ibm.com>. This implementation has been tested on big endian too. Signed-off-by: Daniel Black <daniel@linux.ibm.com>
-
Daniel Black authored
Signed-off-by: Daniel Black <daniel.black@au.ibm.com>
-
Oleksandr Byelkin authored
This is a test-only change (the fix itself was done by Monty in ha_sequence::open by allocating ref).
-
Varun Gupta authored
t1.pk IS NOT NULL where pk is a PRIMARY KEY

For equalities in the WHERE clause we create a keyuse array that contains the set of all equalities. Each KEYUSE in the keyuse array has a field "null_rejecting" which says that the equality cannot hold if either its left or right hand side is NULL. If the equality is null-rejecting, we add a NOT NULL condition for the field in the item val (present in the KEYUSE struct) when doing ref access.

For the optimization of splitting with GROUP BY we always set null_rejecting to TRUE, and we do ref access on the GROUP BY field. This creates a problem when the equality is in fact not null-rejecting. That happens here because the right hand side of the equality is t1.pk, where pk is a PRIMARY KEY and hence NOT NULLable, so null_rejecting should be FALSE in such a case.
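A hedged sketch of the query shape involved (invented tables, not the commit's test case): a grouped derived table whose GROUP BY column is joined by equality to t1.pk, so the right hand side of the equality is a PRIMARY KEY and the equality is not null-rejecting.

  CREATE TABLE t1 (pk INT PRIMARY KEY, a INT) ENGINE=InnoDB;
  CREATE TABLE t2 (pk INT, b INT, KEY(pk)) ENGINE=InnoDB;
  -- With split-materialization, ref access into dt uses the equality dt.pk = t1.pk;
  -- t1.pk can never be NULL, so no extra IS NOT NULL condition should be added.
  SELECT *
  FROM t1,
       (SELECT pk, COUNT(*) AS cnt FROM t2 GROUP BY pk) AS dt
  WHERE dt.pk = t1.pk;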
-
Monty authored
The bug was that we copied the lock type to the underlying engine even when external_lock() failed.
-
Monty authored
-
Alexander Barkov authored
-
- 04 May, 2018 3 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
log_sys_init(), log_buffer_extend(): Add TRASH_ALLOC() instrumentation.
log_write_up_to(): Correctly calculate the byte offset.
-
Marko Mäkelä authored
The number of records in INFORMATION_SCHEMA.COLUMNS depends on the build options, and could easily change when features are added. We are not interested in the number of rows returned. The test was originally added because of problem 15 reported in MDEV-13900 (testing for MDEV-11369 instant ADD COLUMN). The issue was an assertion failure ut_ad(!rec_is_default_row(rec, index)) in lock_clust_rec_cons_read_sees(), because the 'default row' record was not being properly ignored by the b-tree cursor.
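A hypothetical illustration of why such a test is fragile (not taken from the commit): a raw count over INFORMATION_SCHEMA.COLUMNS changes whenever a build option adds or removes columns anywhere, whereas checking for a specific known column does not.

  -- build-dependent: the total varies with compiled-in features
  SELECT COUNT(*) FROM information_schema.COLUMNS WHERE table_schema = 'mysql';
  -- stable: checks only for one known column
  SELECT COUNT(*) FROM information_schema.COLUMNS
  WHERE table_schema = 'mysql' AND table_name = 'plugin' AND column_name = 'name';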
-
- 03 May, 2018 5 commits
-
-
Elena Stepanova authored
-
Jacob Mathew authored
The fix for this bug was automatically merged from 10.2. However, that fix was incomplete in 10.3. This commit is for the additional changes that are necessary in 10.3. Author: Jacob Mathew. Reviewer: Kentoku Shiba.
-
Monty authored
Fixed by removing the single-lock check in sequence insert and letting the MDL code handle deadlock detection.
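A minimal sketch of the kind of statements involved (hypothetical names, assuming standard MariaDB 10.3 sequence syntax): an INSERT that reads from a sequence takes metadata locks on both the table and the sequence, which is where MDL deadlock detection applies.

  CREATE SEQUENCE s1;
  CREATE TABLE t1 (id INT, v INT) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (NEXT VALUE FOR s1, 1);
  DROP TABLE t1;
  DROP SEQUENCE s1;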
-
Marko Mäkelä authored
MDEV-15060 Assertion in row_log_table_apply_op after instant ADD when the table is emptied during subsequent ALTER TABLE

During an online table rebuild, a table could be emptied and converted from 'instant ADD' format to plain (pre-10.3) format. All online_log records for rebuilding the table must be written and parsed in the format of the table that existed at the start of the operation.

row_log_t::n_core_fields: A new field for recording index->n_core_fields when online ALTER is initiated in row_log_allocate().
row_log_t::is_instant(): Determine if the log is in the instant format. Only invoked by the row_log_table_ family of functions.
dict_index_t::get_n_nullable(): Remove is_instant() debug assertions. Because a table can be converted to non-instant format during a table-rebuilding ALTER TABLE, these assertions would be bogus when executing row_log_table_apply().
rec_init_offsets_temp(): Add the parameter n_core for passing the original index->n_core_fields.
rec_init_offsets_temp(): Add a 3-parameter variant.
rec_init_offsets_comp_ordinary(): Add the parameter n_core for passing the index->n_core_fields.
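A rough, non-deterministic sketch of the scenario (hypothetical table, not the commit's test case): the table is in instant-ADD format when the online rebuild starts, and is emptied by a concurrent connection while the rebuild is still applying the online_log.

  CREATE TABLE t (pk INT PRIMARY KEY, v INT) ENGINE=InnoDB;
  INSERT INTO t VALUES (1, 1);
  ALTER TABLE t ADD COLUMN c INT;            -- instant ADD COLUMN
  -- connection 1: start an online table rebuild that logs concurrent changes
  ALTER TABLE t FORCE, ALGORITHM=INPLACE;
  -- connection 2 (while the rebuild runs): empty the table
  DELETE FROM t;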
-
Marko Mäkelä authored
MDEV-12266 changed dict_table_t::space to a pointer. Displaying pointer values in error messages would be even more meaningless than displaying numeric tablespace identifiers.

row_create_table_for_mysql(): Do not display table->space when deleting the file fails. We cannot dereference table->space here, because fil_delete_tablespace() would have freed the object.
fil_wait_crypt_bg_threads(): Do not display table->space. We could display table->space_id here, but it should not really add any value, because the table reference-counts have no direct connection to files or tablespaces.
-
- 02 May, 2018 4 commits
-
-
Marko Mäkelä authored
btr_pcur_store_position(): Assert that the 'default row' record is never the only record in a page. (If that were to happen, an empty root page would be re-created in the non-instant format, not containing the special record.) When the cursor is positioned on the page infimum, never use the 'default row' as the BTR_PCUR_BEFORE reference. (This is additional cleanup, not fixing the bug.)

rec_copy_prefix_to_buf(): When converting a record prefix to the non-instant-add format, copy the original number of null flags. Rename the variable instant_len to instant_omit, and introduce a few more variables to make the code easier to read.

Note: In purge, rec_copy_prefix_to_buf() is also used for storing the persistent cursor position on a 'default row' record. The stored record reference will be garbage, but row_search_on_row_ref() will do special handling to reposition the cursor on the 'default row', based on ref->info_bits.

innodb.dml_purge: Also cover the 'default row'.
-
Thirunarayanan Balathandayuthapani authored
MDEV-16071 Server crashed in innobase_build_col_map / prepare_inplace_alter_table_dict or Assertion `tuple' failed in dtuple_get_nth_field upon altering table with virtual column

- Virtual columns should be considered during innobase_build_col_map() to find out whether a field changed from NULL to NOT NULL.
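A hypothetical sketch of the affected pattern (invented names, not the commit's test case): altering a column from NULL to NOT NULL in place on a table that also has a virtual column.

  CREATE TABLE t (a INT, b INT, v INT GENERATED ALWAYS AS (a + b) VIRTUAL) ENGINE=InnoDB;
  ALTER TABLE t MODIFY a INT NOT NULL, ALGORITHM=INPLACE;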
-
Thirunarayanan Balathandayuthapani authored
- Added two new test cases for it.
-
Jacob Mathew authored
When a comma separator is missing between COMMENT fields, Spider ignores the parameter values beyond the last expected parameter value. There are also some error messages that Spider generates for incorrectly formed COMMENT fields. I have introduced additional infrastructure in Spider to fix these problems. Author: Jacob Mathew. Reviewer: Kentoku Shiba. Cherry-Picked: Commit c10da98b on branch bb-10.3-MDEV-15698
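A hypothetical illustration of the problem shape (names and values invented): the comma between the srv and table parameters is missing, so everything after the last well-formed parameter used to be silently ignored instead of raising an error.

  CREATE TABLE t_remote (id INT PRIMARY KEY, v INT) ENGINE=Spider
    COMMENT='wrapper "mysql", srv "srv1" table "t_remote"';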
-
- 01 May, 2018 5 commits
-
-
Jacob Mathew authored
When a comma separator is missing between COMMENT fields, Spider ignores the parameter values beyond the last expected parameter value. There are also some error messages that Spider generates for incorrectly formed COMMENT fields. I have introduced additional infrastructure in Spider to fix these problems. Author: Jacob Mathew. Reviewer: Kentoku Shiba. Merged From: Branch bb-10.3-MDEV-15698
-
Marko Mäkelä authored
Remove unused InnoDB function parameters and functions.

i_s_sys_virtual_fill_table(): Do not allocate heap memory.
mtr_is_block_fix(): Replace with mtr_memo_contains().
mtr_is_page_fix(): Replace with mtr_memo_contains_page().
-
Daniel Black authored
mysql_test_db.sql is in the srcdir
-
Jacob Mathew authored
MDEV-15712: If remote server used by Spider table is unavailable, some operations hang for a long time

When an attempt to connect to the remote server fails, Spider retries to connect to the remote server 1000 times or until the connection attempt succeeds. This is perceived as a hang if the remote server remains unavailable. I have introduced changes in Spider's table status handler to fix this problem. Author: Jacob Mathew. Reviewer: Kentoku Shiba. Cherry-Picked: Commit 6ee6933a on branch bb-10.3-MDEV-15712
-
Jacob Mathew authored
MDEV-15712: If remote server used by Spider table is unavailable, some operations hang for a long time

When an attempt to connect to the remote server fails, Spider retries to connect to the remote server 1000 times or until the connection attempt succeeds. This is perceived as a hang if the remote server remains unavailable. I have introduced changes in Spider's table status handler to fix this problem. Author: Jacob Mathew. Reviewer: Kentoku Shiba. Merged From: Branch bb-10.3-MDEV-15712.
-
- 30 Apr, 2018 13 commits
-
-
Sergey Vojtovich authored
Unexpected data truncation may occur when storing data into a compressed blob column that has a multi-byte variable-length character set. The reason was that an incorrect limit on the number of characters was enforced for blobs.
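A hypothetical illustration (invented names): a compressed TEXT column with a multi-byte character set, the kind of column for which the character-count limit was being applied incorrectly.

  CREATE TABLE t_doc (
    id INT PRIMARY KEY,
    body TEXT COMPRESSED CHARACTER SET utf8mb4
  ) ENGINE=InnoDB;
  INSERT INTO t_doc VALUES (1, REPEAT('€', 10000));  -- multi-byte data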
-
Sergey Vojtovich authored
Added a --skip-test-db option to mysql_install_db. If specified, no test database is created and no relevant grants are issued.
Removed the --skip-auth-anonymous-user option of mysql_install_db; it is now covered by --skip-test-db.
Dropped some Debian patches that did the same.
Removed unused make_win_bin_dist.1, make_win_bin_dist and mysql_install_db.pl.in.
-
Marko Mäkelä authored
Implement innodb_flush_method as an enum parameter in Mariabackup, instead of ignoring the option and hard-wiring it to a default value.

xb0xb.h: Remove. Only xtrabackup.cc refers to the enum parameters.
innodb_flush_method_names[], innodb_flush_method_typelib[]: Define as non-static, so that mariabackup can share the definitions.
srv_file_flush_method: Change the type to ulong, to match the assignment in init_one_value() and handle_options() in mariabackup.
-
Marko Mäkelä authored
Replace most uses of #error. Some checks were impossible to evaluate in the preprocessor due to the use of named integer constants or enumerations.
-
Marko Mäkelä authored
The checks that used to be enabled by the flags UNIV_AHI_DEBUG, UNIV_DDL_DEBUG and UNIV_DEBUG_FILE_ACCESSES were already enabled in debug builds, so there is no point in setting these flags. Only UNIV_ZIP_DEBUG is set independently of the debug build.

Allow WITH_INNODB_EXTRA_DEBUG to be set for non-debug builds as well. Currently it only implies UNIV_ZIP_DEBUG, that is, extra validation for operations on ROW_FORMAT=COMPRESSED tables.

page_zip_validate_low(): Allow the code to be built on a non-debug server.
buf_LRU_block_remove_hashed(): Allow the code to be built without WITH_INNODB_AHI.
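For context, a minimal hypothetical example (not from the commit) of the kind of table whose operations the UNIV_ZIP_DEBUG extra validation covers:

  CREATE TABLE t_zip (id INT PRIMARY KEY, payload VARCHAR(1000))
    ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
  INSERT INTO t_zip VALUES (1, REPEAT('x', 1000));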
-
Monty authored
- Removed the test of HA_FT_WTYPE == HA_KEYTYPE_FLOAT, as it never worked (HA_KEYTYPE_FLOAT is an enum value, which the preprocessor cannot evaluate)
- Define HA_FT_MAXLEN as 126 (it was tested before but never defined)
-
Monty authored
Change all my_strcasecmp() calls that use lexical keywords to use lex_string_eq(). This is faster because strcasecmp() is then only called for strings of equal length.
Removed the unused function lex_string_syseq().
-
Monty authored
- test-alter now correctly drops all columns
- test-alter has a new test that times adding columns in the middle of the table
- test-insert has a new test to check updates that don't change data
- test-insert: update_with_key_prefix didn't change data; now fixed
-
Marko Mäkelä authored
The assertion would fail with the following trace:

  rec_init_offsets_comp_ordinary(..., format=REC_LEAF_COLUMNS_ADDED)
  rec_init_offsets()
  rec_get_offsets_func()
  rec_copy_prefix_to_dtuple()
  dict_index_build_data_tuple()
  btr_pcur_restore_position_func()

When btr_cur_store_position() had stored pcur->old_rec, the table contained instantly added columns. The table was emptied (dict_index_t::remove_instant() invoked) between the 'store' and 'restore' operations, causing the assertion to fail. Here is a non-deterministic test case to repeat the scenario:

  --source include/have_innodb.inc
  --connect (con1,localhost,root,,test)
  CREATE TABLE t1 (pk INT PRIMARY KEY) ENGINE = InnoDB;
  INSERT INTO t1 VALUES (0);
  ALTER TABLE t1 ADD COLUMN a INT;
  ALTER TABLE t1 ADD UNIQUE KEY (a);
  DELETE FROM t1;
  send INSERT INTO t1 VALUES (1,0),(2,0);
  --connection default
  DELETE FROM t1; # the assertion could fail here
  DROP TABLE t1;
  --disconnect con1

The fix is to normalize pcur->old_rec so that when the record prefix is stored, it will always be in the plain format. This can be done because the record prefix never includes any instantly added columns. (It can only include key columns, which can never be instantly added.)

rec_copy_prefix_to_buf(): Convert REC_STATUS_COLUMNS_ADDED to REC_STATUS_ORDINARY format.
-
Marko Mäkelä authored
dict_index_copy_rec_order_prefix(): Avoid invoking dict_index_get_n_unique_in_tree_nonleaf().
create_index(): Simplify code for creating SPATIAL or FULLTEXT index.
rec_copy_prefix_to_buf(): Skip the loop for SPATIAL INDEX.
-
Marko Mäkelä authored
Only allocate n_uniq elements for offsets, instead of index->n_fields. (Statistics are never computed on spatial indexes, so we never need to access more fields even in rec_copy_prefix_to_buf().)
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
- 29 Apr, 2018 3 commits
-
-
Marko Mäkelä authored
InnoDB was using int64_t instead of ha_rows (unsigned 64-bit).
-
Marko Mäkelä authored
There is only one log_sys and only one log_sys.log.

log_t::files::create(): Replaces log_init().
log_t::files::close(): Replaces log_group_close(), log_group_close_all().
fil_close_log_files(): if (free) log_sys.log_close(). The callers that passed free=true used to call log_group_close_all().
log_header_read(): Replaces log_group_header_read().
log_t::files::file_header_bufs_ptr: Use a single allocation.
log_t::files::file_header_bufs[]: Statically allocate the pointers.
log_t::files::set_fields(): Replaces log_group_set_fields().
log_t::files::calc_lsn_offset(): Replaces log_group_calc_lsn_offset(). Simplify the computation by using fewer variables.
log_t::files::read_log_seg(): Replaces log_group_read_log_seg().
log_sys_t::complete_checkpoint(): Replaces log_io_complete_checkpoint().
fil_aio_wait(): Move the logic from log_io_complete().
-
Marko Mäkelä authored
There is only one redo log subsystem in InnoDB. Allocate the object statically, to avoid unnecessary dereferencing of the pointer.

log_t::create(): Renamed from log_sys_init().
log_t::close(): Renamed from log_shutdown().
log_t::checkpoint_buf_ptr: Remove. Allocate log_t::checkpoint_buf statically.
-