- 22 Jan, 2018 13 commits
-
-
Sergei Golubchik authored
translate clang __has_feature to gcc macros
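For illustration only (not the commit's actual diff; the HAVE_ASAN name is hypothetical, while __has_feature and __SANITIZE_ADDRESS__ are the real clang/gcc spellings), this kind of translation typically looks like:
```cpp
/* Sketch: clang exposes sanitizers through __has_feature(), gcc through
   __SANITIZE_*__ macros.  Define __has_feature away on non-clang compilers
   and fold both spellings into one project-level macro (HAVE_ASAN is an
   illustrative name). */
#ifndef __has_feature
# define __has_feature(x) 0
#endif

#if __has_feature(address_sanitizer) || defined(__SANITIZE_ADDRESS__)
# define HAVE_ASAN 1
#endif
```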
-
Vicențiu Ciorbaru authored
The assertion failure was caused by an incorrectly set read_set for functions in the ORDER BY clause in part of a union, when we are using a mergeable view and the ORDER BY clause can be skipped (removed).

An ORDER BY clause can be skipped if it is part of one part of the UNION, as the result set is not meaningful when multiple SELECT queries are UNIONed. The server is aware of this optimization and tries to remove the ORDER BY clause before JOIN::prepare. The problem is that we need to throw an error when the ORDER BY clause contains invalid columns. To do this, we attempt to resolve the ORDER BY expressions, then subsequently drop them if resolution succeeded. However, ORDER BY resolution had the side effect of adding the expressions to the all_fields list, which is used to construct temporary tables to store the result. We may be ignoring the ORDER BY statement, but the tmp table still tried to compute the values for the expressions, even though the columns are never used.

The assertion only shows itself if the ORDER BY clause contains members which were not previously in the select list and are part of a function.

There is an additional question as to why this only manifests when using VIEWs and not when using a regular table. The difference lies with the "reset" of the read_set for the temporary table during SELECT_LEX::update_used_tables() in JOIN::optimize(). The changes introduced in fdf789a7 cleared the read_set when a mergeable view is encountered in the TABLE_LIST definition. Upon initial order_list resolution, the table's read_set is updated correctly. JOIN::optimize() will only reset the read_set if it encounters a VIEW. Since we no longer have an ORDER BY clause in JOIN::optimize(), we never get to correctly update the read_set again.

Other relevant commit by Timour, which first introduced the order resolution when we "can_skip_sort_order": 883af99e

Solution: Don't add the resolved ORDER BY elements to all_fields. We only resolve them to check whether an error should be returned for the query; otherwise, ignore them completely.
-
Vicențiu Ciorbaru authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
It has its limitations, e.g. it assumes that only one gdb process and only one valgrind process are running. And the hard-coded one-second delay might be too short for slow machines. Still, it's better than "doesn't work at all".
-
Sergei Golubchik authored
-
Sergei Golubchik authored
instrument table->record[0], table->record[1] and share->default_values. One should not access a record image beyond share->reclength, even if table->record[0] has some unused space after it (functions that work with records might get a copy of the record as an argument, and that copy, not being record[0], might not have this buffer space at the end). See b80fa400 and 444587d8.
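As a rough illustration of what such instrumentation can look like (this is not the commit's code; the function and parameter names are made up, and only the valgrind client request from <valgrind/memcheck.h> is real API), the slack space after the logical record image can be poisoned so any access past share->reclength is reported:
```cpp
#include <stddef.h>
#include <valgrind/memcheck.h>

/* Sketch: poison the slack space after the logical record image so that
   any access beyond share->reclength is reported by valgrind.
   protect_record_tail() and its parameters are illustrative names. */
static void protect_record_tail(unsigned char *rec,
                                size_t reclength, size_t rec_buff_len)
{
  if (rec_buff_len > reclength)
    VALGRIND_MAKE_MEM_NOACCESS(rec + reclength, rec_buff_len - reclength);
}
```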
-
Sergei Golubchik authored
-
Sergei Golubchik authored
more complete TRASH-ing of memroots
-
Sergei Golubchik authored
mark freed memory as not accessible, not merely undefined
-
Sergei Golubchik authored
TRASH was mapped to TRASH_FREE and was supposed to be used for memory that should not be accessed anymore, while TRASH_ALLOC() is to be used for uninitialized but about-to-be-used memory. But sometimes TRASH() was used in the latter sense. Remove the TRASH() macro; always use explicit TRASH_ALLOC() or TRASH_FREE().
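A hedged sketch of the distinction (the fill bytes and exact macro bodies are assumptions, not necessarily MariaDB's; only the valgrind client requests are real API):
```cpp
#include <string.h>
#include <valgrind/memcheck.h>

/* Sketch of the two distinct intents:
   TRASH_ALLOC - memory about to be used, contents undefined;
   TRASH_FREE  - memory that must not be touched again. */
#define TRASH_ALLOC(p, len)                       \
  do {                                            \
    memset((p), 0xA5, (len));                     \
    VALGRIND_MAKE_MEM_UNDEFINED((p), (len));      \
  } while (0)

#define TRASH_FREE(p, len)                        \
  do {                                            \
    memset((p), 0x8F, (len));                     \
    VALGRIND_MAKE_MEM_NOACCESS((p), (len));       \
  } while (0)
```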
-
Sergei Golubchik authored
-
Marko Mäkelä authored
InnoDB limited the maximum number of bytes per character to 4. But the filename character set that was introduced in MySQL 5.1 uses up to 5 bytes per character. To allow InnoDB tables to be created with wider characters, let us split the mbminmaxlen fields into mbminlen and mbmaxlen, and increase the limit to 7 bytes per character. This will increase the payload size of dtype_t and dict_col_t by one bit. The storage size will be unchanged (54 bits and 77 bits will use the same number of bytes as the previous sizes of 53 and 76 bits).
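To make the arithmetic concrete, here is a minimal bit-field sketch (dtype_sketch is a made-up name; only the mbminlen/mbmaxlen member names come from the commit, and the real dtype_t/dict_col_t carry many more members):
```cpp
/* Three bits each allow values 0..7, hence the new 7-bytes-per-character
   limit; the old combined mbminmaxlen field was one bit narrower, which
   matches the "payload grows by one bit" note above. */
struct dtype_sketch
{
  unsigned mbminlen:3;  /* minimum length of a character, in bytes (0..7) */
  unsigned mbmaxlen:3;  /* maximum length of a character, in bytes (0..7) */
  /* ... remaining type information ... */
};
```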
-
- 19 Jan, 2018 4 commits
-
-
Vicențiu Ciorbaru authored
-
Daniel Bartholomew authored
-
Vicențiu Ciorbaru authored
Resolving a stacktrace including functions in dynamic libraries requires us to look inside the libraries for the symbols. addr2line needs to be started with the correct binary for each address on the stack. To do this, figure out which library the address belongs to using dladdr, then, if the addr2line process was started with a different binary, fork it again with the correct one. We only have one addr2line process running at any point during the stacktrace resolving step.

The maximum number of forks for addr2line should generally be around 6: one for server stacktrace code, one for plugin code, one when going back into server code, one for the pthread library, one for libc, and one for the _start function in the server. More can come up if a plugin calls a server function which goes back into a plugin, etc.
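As a sketch of the library-lookup step (illustrative names; dladdr and its Dl_info fields are the real glibc API, link with -ldl), a frame address can be mapped to the object file and file-relative offset that an addr2line invocation needs:
```cpp
#ifndef _GNU_SOURCE
# define _GNU_SOURCE 1        /* dladdr() is a GNU/BSD extension */
#endif
#include <dlfcn.h>
#include <stdint.h>
#include <stdio.h>

/* Sketch: map a stack address to the (object file, file-relative offset)
   pair for addr2line.  locate_frame() is an illustrative name; real code
   must also special-case the main executable and only restart addr2line
   when the object file changes. */
static void locate_frame(void *addr)
{
  Dl_info info;
  if (dladdr(addr, &info) && info.dli_fname)
  {
    uintptr_t offset= (uintptr_t) addr - (uintptr_t) info.dli_fbase;
    printf("addr2line -e %s 0x%llx\n",
           info.dli_fname, (unsigned long long) offset);
  }
}
```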
-
Varun Gupta authored
In this case we were using the derived_with_keys optimization, but we could not create a key because the length of the key was greater than the maximum allowed (MI_MAX_KEY_LENGTH). To do the join we needed to create a hash join key instead, but the EXPLAIN output showed that we were still referring to derived keys which were created but not used.
-
- 18 Jan, 2018 8 commits
-
-
Igor Babaev authored
In the function JOIN::shrink_join_buffers the iteration over joined tables was organized incorrectly. This could cause a crash if the optimizer chose to materialize a semi-join that used join caches whose sizes must be adjusted.
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Vladislav Vaintroub authored
Fix WinUIDialogBmp.jpg to use correct dimensions
-
Vladislav Vaintroub authored
Use new grey logo.
-
- 17 Jan, 2018 1 commit
-
-
Sergei Golubchik authored
-
- 16 Jan, 2018 4 commits
-
-
Sergei Golubchik authored
MEMORY engine needs the record length to be at least sizeof(void*), because it stores a pointer there (linking deleted records into a list). So when the reclength is less than sizeof(void*), it's set to sizeof(void*). That is done inside heap_create(), and the upper layer doesn't know that the engine writes beyond share->reclength. While it's usually safe (the in-memory record size is rounded up to sizeof(double), so even if share->reclength is too small, share->rec_buff_len is not), it could cause problems in code that copies records and expects them to fit in share->reclength, e.g. in partitioning.
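A minimal sketch of the constraint described above (heap_effective_reclength() is an illustrative name, not heap_create() itself):
```cpp
#include <stddef.h>

/* The MEMORY engine links deleted records through a pointer stored inside
   the record itself, so the effective record length can never be smaller
   than a pointer.  'requested' stands for the reclength the upper layer
   asked for. */
static size_t heap_effective_reclength(size_t requested)
{
  return requested < sizeof(void *) ? sizeof(void *) : requested;
}
```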
-
Sergei Golubchik authored
* get_rec_bits() was always reading two bytes, even if the bit field consisted of only one byte
* In various places the code used field->pack_length() bytes starting from field->ptr, while it should be field->pack_length_in_rec()
* Field_bit::key_cmp and Field_bit::cmp_max passed field_length as an argument to memcmp(), but field_length is the number of bits!
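To illustrate the first point only (names and packing order are illustrative, not MariaDB's actual implementation), a reader can touch just the byte(s) the bit field occupies instead of always loading two:
```cpp
#include <stdint.h>

/* Sketch: read a bit field of 'len' bits that starts 'ofs' bits into *ptr
   (LSB-first packing assumed here), reading the second byte only when the
   field actually spills into it. */
static uint16_t get_rec_bits_sketch(const unsigned char *ptr,
                                    unsigned ofs, unsigned len)
{
  uint16_t val= ptr[0];
  if (ofs + len > 8)                     /* field spills into next byte */
    val|= (uint16_t) (ptr[1] << 8);
  return (uint16_t) ((val >> ofs) & ((1U << len) - 1));
}
```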
-
Sergei Golubchik authored
-
Sergei Golubchik authored
If the property is not found, set it to the empty string; otherwise it'll show up as libmysql_link_flags-NOTFOUND on the linker command line, and the linker won't like it. Also, don't specify LINK_FLAG_NO_UNDEFINED twice; MERGE_LIBRARIES already puts it into LINK_FLAGS.
-
- 15 Jan, 2018 7 commits
-
-
Igor Babaev authored
optimizer_switch

For DATE and DATETIME columns defined as NOT NULL, "date_notnull IS NULL" has to be modified to:
"date_notnull IS NULL OR date_notnull == 0" (if date_notnull is from an inner table of an outer join);
"date_notnull == 0" otherwise.

This must hold for such columns of mergeable views and derived tables as well. So far the code did the above rewriting only for columns of base tables and temporary tables.
-
Sergei Golubchik authored
1. test readdir_r() availability under -Werror
2. don't protect readdir() with mutexes, it's not needed for the way we use readdir()
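A sketch of the usage pattern behind point 2 (list_dir() is an illustrative name; the rationale relies on the documented glibc behaviour that readdir() is thread-safe as long as two threads never share one DIR*):
```cpp
#include <dirent.h>
#include <stdio.h>

/* Each caller iterates over its own private DIR stream, so concurrent
   readdir() calls never touch the same handle and no external mutex is
   required for this pattern. */
static void list_dir(const char *path)
{
  DIR *dir= opendir(path);              /* private handle, never shared */
  if (!dir)
    return;
  for (struct dirent *e; (e= readdir(dir)) != NULL; )
    printf("%s\n", e->d_name);
  closedir(dir);
}
```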
-
Sergei Golubchik authored
-
Sergei Golubchik authored
gcc 6 issues a warning about a suspicious construct: while(0); { some code }
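For illustration, the shape in question and one way it is usually resolved (whether the semicolon or the block is the mistake depends on the original intent; this is not the commit's code):
```cpp
/* The stray ';' makes the loop body an empty statement, leaving the braces
   that follow detached from the while. */
static void suspicious(void)
{
  while (0);          /* empty statement is the entire loop body */
  { /* some code */ }
}

static void intended(void)
{
  while (0)
  { /* some code */ } /* braces now form the loop body */
}
```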
-
Sergey Vojtovich authored
Enumerate plugins that use password field.
-
Daniel Black authored
-
Alexander Barkov authored
The function trans_rollback_to_savepoint(), unlike trans_savepoint(), did not allow xa_state=XA_ACTIVE, so an attempt to do ROLLBACK TO SAVEPOINT inside an XA transaction incorrectly returned an error "...command cannot be executed ... in the ACTIVE state...".

Partially merging a MySQL patch: 7fb5c47390311d9b1b5367f97cb8fedd4102dd05 This is WL#7193 (Decouple THD and st_transactions)...

The currently merged part includes these changes:
- Introducing st_xid_state::check_has_uncommitted_xa()
- Reusing it in both trans_rollback_to_savepoint() and trans_savepoint(), so now both allow XA_ACTIVE.
-
- 14 Jan, 2018 1 commit
-
-
Oleksandr Byelkin authored
The problem was in this scenario:
T1 - starts registering a query and locks the QC
T2 - starts disabling the QC and waits for the unlock
T1 - unlocks the QC
T2 - disables the QC and destroys the signals without waiting for the query unlock
T1 - a) has not yet unlocked the query in the QC and crashes on the attempt to unlock it, because the QC signals are already destroyed;
     b) if the above was done before the destruction, it executes end_of_result the first time at exit after try_lock(), which sees that the QC is disabled and returns TRUE. But it does not reset query_cache_tls->first_query_block, which leads to a second call of end_of_result when the diagnostics arena already has an inappropriate status (not is_eof()).

The fix is:
1) wait for all queries to be unlocked before destroying them, by locking and unlocking
2) remove query_cache_tls->first_query_block if the QC is disabled
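A generic sketch of the drain-before-destroy idea behind fix (1), not the query cache's actual code (all names here are illustrative):
```cpp
#include <condition_variable>
#include <mutex>

/* Before tearing shared state down, take the lock and wait until every
   in-flight user has released it, so nobody can wake up after the
   signalling machinery is gone. */
class DrainableResource
{
  std::mutex mtx;
  std::condition_variable cv;
  int users= 0;
  bool disabled= false;

public:
  bool acquire()                        /* like try_lock() on the QC */
  {
    std::lock_guard<std::mutex> lk(mtx);
    if (disabled)
      return false;                     /* QC already disabled */
    users++;
    return true;
  }
  void release()
  {
    std::lock_guard<std::mutex> lk(mtx);
    if (--users == 0)
      cv.notify_all();
  }
  void disable_and_destroy()
  {
    std::unique_lock<std::mutex> lk(mtx);
    disabled= true;
    cv.wait(lk, [this] { return users == 0; });  /* drain first */
    /* only now is it safe to destroy the signals / free structures */
  }
};
```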
-
- 13 Jan, 2018 1 commit
-
-
Sergey Vojtovich authored
Regression after 5ea28015.
-
- 12 Jan, 2018 1 commit
-
-
Igor Babaev authored
with joins, SQ, ORDER BY, semijoin=on

A bug in get_sort_by_table() could mislead the function setup_semijoin_dups_elimination(). As a result the optimizer could produce invalid execution plans for queries with ORDER BY and subquery predicates that could be converted to semi-joins.
-