- 29 Nov, 2010 3 commits

Sergey Petrunya authored

Sergey Petrunya authored

Sergey Petrunya authored

- 28 Nov, 2010 1 commit

Sergey Petrunya authored
subsequent asserts have the wrong meaning.

- 26 Nov, 2010 1 commit

Sergey Petrunya authored

- 25 Nov, 2010 2 commits

Sergey Petrunya authored

Sergey Petrunya authored
- Address Monty's review feedback, part 4

- 23 Nov, 2010 1 commit

Sergey Petrunya authored
- Address Monty's review feedback, part 3

- 22 Nov, 2010 2 commits

Sergey Petrunya authored
- Address Monty's review feedback, part 1
- Fix buildbot failure

Sergey Petrunya authored
- Address Monty's review feedback, part 1

- 19 Nov, 2010 3 commits

Igor Babaev authored
companions out of sql_select.h into a separate file sql_join_cache.h.

Igor Babaev authored
The bug happened when the BKA join algorithm used an incremental buffer and some of the fields over which access keys were constructed:
- were allocated in the previous join buffers,
- were non-nullable,
- belonged to inner tables of outer joins.
For such fields an offset to the field value in the record is saved in the postfix of the record, and a zero offset indicates that the value is null. Before the key using the field value is constructed, the value is read into the corresponding field of the record buffer, and the null bit is set for the field if the offset is 0. However, if the field is non-nullable, table->null_row must be set to 1 for null values and to 0 for non-null values to ensure proper reading of the value from the record buffer.
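The invariant can be sketched like this; the struct and helper names below are illustrative stand-ins, not MariaDB's actual code:

```cpp
#include <cstdint>

// Illustrative stand-ins for the table and its record-buffer field slot;
// the real structures live in MariaDB's sql layer.
struct FakeTable {
  bool null_row = false;  // must mirror NULL-ness even for non-nullable fields
  uint8_t field_buf = 0;  // the field's slot in the record buffer
};

// Reads a field value addressed by an offset stored in the record postfix.
// A zero offset marks a NULL value. Returns true if a value was read.
static bool read_field_from_join_buffer(FakeTable *tab, const uint8_t *rec,
                                        uint16_t offset) {
  if (offset == 0) {
    tab->null_row = true;  // the fix: flag NULL for non-nullable fields too
    return false;          // leave whatever stale value is in field_buf alone
  }
  tab->null_row = false;   // and clear the flag again for non-null values
  tab->field_buf = rec[offset];
  return true;
}
```

Without the flag being set and cleared on both paths, a stale value left in the record buffer from a previous row could be read as if it were valid.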
Igor Babaev authored
The condition that was supposed to check whether a join table is an inner table of a nested outer join or semi-join was not quite correct in the code of the function check_join_cache_usage. That's why some queries with nested outer joins triggered an assertion failure. Encapsulated this condition in the new method JOIN_TAB::is_nested_inner and provided proper code for it. Also corrected a bug in the code of check_join_cache_usage() that caused a downgrade of non-first join buffers from levels 5 and 7 to levels 4 and 6 correspondingly.

- 16 Nov, 2010 1 commit

Igor Babaev authored
When pushing the condition for a table in the function JOIN_TAB::make_scan_filter, the optimizer must not push conditions from WHERE if the table is an inner table of an outer join.

- 15 Nov, 2010 3 commits

Igor Babaev authored
The condition over outer tables extracted from the ON expression of an outer join must be ANDed only to the condition pushed to the first inner table of this outer join. Nested outer joins cannot use flat join buffers, so if join_cache_level is set to 1 then no join algorithm employing join buffers can be used for nested outer joins.

Sergey Petrunya authored

Sergey Petrunya authored

- 13 Nov, 2010 4 commits

Igor Babaev authored
The patch that introduced the new enumeration type Match_flag for the values of match flags in the records put into join buffers missed the necessary modifications in JOIN_CACHE::set_match_flag_if_none. This could cause wrong results for outer joins with ON expressions only over outer tables.

Igor Babaev authored

Igor Babaev authored
A non-incremental join buffer cannot be used for inner tables of nested outer joins. That's why, when join_cache_level is set to 7, it must be downgraded to level 6 for the inner tables of nested outer joins. For the same reason, with join_cache_level set to 3, no join buffer is used for the inner tables of outer joins (we could downgrade it to level 2, but that level does not support ref access).
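The downgrade rule can be sketched as follows; the function name and the encoding (0 meaning "use no join buffer") are assumptions for illustration, not the actual optimizer code:

```cpp
// Illustrative sketch of the join_cache_level downgrade rule for inner
// tables of nested outer joins (0 stands for "use no join buffer").
static int effective_join_cache_level(int requested, bool nested_inner) {
  if (!nested_inner)
    return requested;  // ordinary tables keep the requested level
  if (requested == 7)
    return 6;          // non-incremental -> incremental buffer
  if (requested == 3)
    return 0;          // level 2 lacks ref access, so no buffer at all
  return requested;
}
```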

Igor Babaev authored

- 12 Nov, 2010 3 commits

Igor Babaev authored

Igor Babaev authored
Made sure that the function that copies a long varchar field from the record buffer into a key buffer does not copy bytes after the field value.
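The idea of the fix can be sketched like this; the helper name and the 2-byte length-prefix layout are illustrative assumptions, not the server's exact row format:

```cpp
#include <cstdint>
#include <cstring>

// Copies a length-prefixed varchar from a record buffer slot into a key
// buffer: only the prefix plus the actual value bytes are copied, never
// the unused tail of the field's maximum-length slot.
static size_t copy_varchar_to_key(uint8_t *key, const uint8_t *field) {
  uint16_t len;
  memcpy(&len, field, sizeof(len));       // read the 2-byte length prefix
  memcpy(key, field, sizeof(len) + len);  // copy prefix + value bytes only
  return sizeof(len) + len;               // bytes placed in the key buffer
}
```

Copying the whole maximum-length slot instead would drag stale bytes into the key and make equal values compare (or hash) as different.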

Igor Babaev authored

- 11 Nov, 2010 1 commit

Igor Babaev authored
Miscalculation of the minimum possible buffer size could trigger an assert in JOIN_CACHE_HASHED::put_record when join_buffer_size was set to a value less than the length of one record to be stored in the join buffer. It happened due to the following mistakes:
- underestimation of the space needed for a key in the hash table (we have to take into account that the hash table can have more buckets than the expected number of records);
- the value of the maximum total length of all records stored in the join buffer was not saved in the field max_used_fieldlength by the function calc_used_field_length.
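The first mistake can be illustrated with a hedged arithmetic sketch; the formula and names are assumptions for illustration, not the actual JOIN_CACHE_HASHED sizing code:

```cpp
#include <cstddef>

// Minimum join buffer size for the hashed variant: room for one full
// record plus one key entry per hash bucket. Sizing the key area by the
// expected record count alone underestimates whenever the hash table
// allocates more buckets than records.
static size_t min_hashed_buffer_size(size_t max_record_length,
                                     size_t key_entry_size,
                                     size_t bucket_count) {
  return max_record_length + bucket_count * key_entry_size;
}
```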

- 10 Nov, 2010 2 commits

Igor Babaev authored

Igor Babaev authored

- 09 Nov, 2010 7 commits

Sergey Petrunya authored
BUG#671361: virtual int Mrr_ordered_index_reader::refill_buffer(): Assertion `!know_key_tuple_params
- Make sure we have enough space for both rowids and keys.

Sergey Petrunya authored

Sergey Petrunya authored

Sergey Petrunya authored

unknown authored

Sergey Petrunya authored

Igor Babaev authored
The pushdown condition for the sorted table in a query can be complemented by conditions from HAVING. This transformation is done in JOIN::exec quite late, after the original pushdown condition has been saved in the field pre_idx_push_select_cond for the sorted table. So this field must be updated after the inclusion of the condition from HAVING.

- 08 Nov, 2010 5 commits

Sergey Petrunya authored

Sergey Petrunya authored

Sergey Petrunya authored
- Disable identical key handling optimization when IndexConditionPushdown is used

Sergey Petrunya authored
- Make mi_open() use less stack space

Sergey Petrunya authored
- Code cleanup
- Always propagate the error code we got from storage engine all the way up

- 07 Nov, 2010 1 commit

Igor Babaev authored
Currently BNLH join uses a simplified implementation of the hash function: the hash is calculated over the whole key buffer, not only over its significant bytes. This means that building keys and probing keys must both fill the insignificant bytes with the same filler; usually 0 is used as such a filler. Yet before this patch the code filled the insignificant bytes only for probing keys.
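The invariant can be illustrated with a toy fixed-size key buffer; the FNV-1a hash and all names below are assumptions for the sketch, not the server's actual hash function:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

static const size_t KEY_BUF_LEN = 8;  // toy fixed-size key buffer

// Hash over the WHOLE buffer, significant bytes or not (the simplified
// scheme described above); FNV-1a stands in for the real hash.
static uint32_t hash_whole_buffer(const uint8_t *buf) {
  uint32_t h = 2166136261u;
  for (size_t i = 0; i < KEY_BUF_LEN; ++i)
    h = (h ^ buf[i]) * 16777619u;
  return h;
}

// The fix's invariant: BOTH building and probing keys zero the filler
// bytes before the value is copied in, so equal keys hash equally.
static void fill_key(uint8_t *buf, const void *val, size_t val_len) {
  memset(buf, 0, KEY_BUF_LEN);  // zero the insignificant bytes first
  memcpy(buf, val, val_len);    // then copy the significant ones
}
```

If only the probe side zeroed its filler, a build-side buffer carrying stale bytes after the value would hash differently from an equal probe key, and matches would be missed.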