- 15 Jan, 2011 1 commit
-
-
Igor Babaev authored
An assertion failure was triggered for a 6-way join query that used two join buffers. The failure happened because every call of JOIN_CACHE::join_matching_records saved and restored the status of all tables that were accessed before the table join_tab. It must do so only for those tables that follow the last table using a join buffer.
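A minimal C++ sketch of the corrected save/restore scope, assuming simplified stand-ins (JoinTab, the status/saved_status fields, and the helper name are illustrative, not the real JOIN_CACHE members):

    #include <vector>
    #include <cstddef>

    // Simplified stand-in for a join order entry; the real JOIN_TAB is far richer.
    struct JoinTab {
      bool uses_join_buffer;
      int  status;        // whatever per-table state needs saving
      int  saved_status;
    };

    // Save the status only for the tables that follow the last table using a join
    // buffer, instead of for every table accessed before join_tab.
    void save_relevant_statuses(std::vector<JoinTab>& tabs, std::size_t join_tab_idx) {
      std::size_t first_to_save = 0;
      for (std::size_t i = 0; i < join_tab_idx; ++i)
        if (tabs[i].uses_join_buffer)
          first_to_save = i + 1;                 // tables up to here are left alone
      for (std::size_t i = first_to_save; i < join_tab_idx; ++i)
        tabs[i].saved_status = tabs[i].status;   // restore would mirror this loop
    }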
-
- 13 Jan, 2011 1 commit
-
-
Sergey Petrunya authored
Date: Mon, 01 Nov 2010 15:15:25 -0000
3272 Roy Lyseng 2010-11-01
Bug#52068: Optimizer generates invalid semijoin materialization plan

When the MaterializeScan semijoin strategy was used and there were one or more outer dependent tables before the semijoin tables, the scan over the materialized table was not properly reset for each row of the prefix outer tables.

Example: suppose we have the join order ot1 SJ-Mat-Scan(it2 it3) ot4. Notice that this is called a MaterializeScan even though there is an outer table ahead of the materialized tables. Usually a MaterializeScan has the outer tables after the materialized table, but this is a special (yet legal) case with outer dependent tables both before and after the materialized table. For each qualifying row from ot1, a new scan over the materialized table must be set up. The code failed to do that, so all scans after the first one returned zero rows from the materialized table.
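A heavily simplified C++ sketch of the missing reset, using toy Row/MaterializedTable types rather than server classes:

    #include <vector>
    #include <cstddef>

    struct Row { int id; };

    // Toy model of a materialized table scan.
    struct MaterializedTable {
      std::vector<Row> rows;
      std::size_t pos = 0;
      void init_scan() { pos = 0; }          // the reset that was missing
      bool next(Row* r) {
        if (pos >= rows.size()) return false;
        *r = rows[pos++];
        return true;
      }
    };

    void join_prefix_with_materialized(const std::vector<Row>& ot1, MaterializedTable& mat) {
      for (const Row& outer : ot1) {
        mat.init_scan();        // without this, every scan after the first starts
                                // at end-of-table and yields zero rows
        Row inner;
        while (mat.next(&inner)) {
          // join 'outer' with 'inner' here, then continue with ot4 ...
          (void)outer; (void)inner;
        }
      }
    }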
-
- 05 Jan, 2011 1 commit
-
-
Igor Babaev authored
for hash join in cases where there are no suitable indexes for these conditions.
-
- 27 Dec, 2010 1 commit
-
-
Igor Babaev authored
One of the hash functions employed by the BNLH join algorithm calculates the hash index value for a key using every byte of the key buffer. To make this calculation valid, one has to ensure that for any key value the unused bytes of the buffer are filled with a fixed filler. We choose 0 as the filler for these bytes. Added an optional boolean parameter with_zerofill to the function key_copy. If the value of the parameter is TRUE, all unused bytes of the key buffer are filled with 0.
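A self-contained C++ illustration of why the filler matters, using FNV-1a as a stand-in byte-wise hash and a simplified copy helper (the real key_copy signature differs):

    #include <cstdint>
    #include <cstring>
    #include <cstddef>

    // A byte-wise hash (FNV-1a here) over the whole key buffer only yields equal
    // values for equal keys if the unused tail of the buffer is deterministic.
    uint32_t hash_key_buffer(const unsigned char* buf, std::size_t buf_len) {
      uint32_t h = 2166136261u;
      for (std::size_t i = 0; i < buf_len; ++i) { h ^= buf[i]; h *= 16777619u; }
      return h;
    }

    // Sketch of the idea behind key_copy(..., with_zerofill=TRUE): copy the key
    // value and pad the rest of the buffer with the filler byte 0.
    void copy_key_with_zerofill(unsigned char* to, std::size_t to_len,
                                const unsigned char* key, std::size_t key_len) {
      std::memcpy(to, key, key_len);
      if (key_len < to_len)
        std::memset(to + key_len, 0, to_len - key_len);
    }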
-
- 22 Dec, 2010 1 commit
-
-
Igor Babaev authored
Lifted the limitation that hash join could not be used over varchar fields with non-binary collation.
-
- 13 Dec, 2010 1 commit
-
-
Sergey Petrunya authored
- Address review feedback: change return type of RANGE_SEQ_IF::next()
-
- 11 Dec, 2010 1 commit
-
-
Igor Babaev authored
-
- 02 Dec, 2010 1 commit
-
-
Sergey Petrunya authored
- Address Monty's review feedback, part 5
-
- 22 Nov, 2010 1 commit
-
-
Sergey Petrunya authored
- Address Monty's review feedback, part 1 - Fix buildbot failure
-
- 19 Nov, 2010 1 commit
-
-
Igor Babaev authored
The bug happened when the BKA join algorithm used an incremental buffer and some of the fields over which access keys were constructed
- were allocated in the previous join buffers,
- were non-nullable,
- belonged to inner tables of outer joins.
For such fields an offset to the field value in the record is saved in the postfix of the record, and a zero offset indicates that the value is null. Before the key using the field value is constructed, the value is read into the corresponding field of the record buffer, and the null bit is set for the field if the offset is 0. However, if the field is non-nullable, table->null_row must be set to 1 for null values and to 0 for non-null values to ensure proper reading of the value from the record buffer.
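A simplified C++ model of the offset/null handling described above (Field, TableFlags, and the reader function are placeholders for the server's Field and TABLE members):

    #include <cstdint>
    #include <cstring>

    struct TableFlags { bool null_row; };

    struct Field {
      bool        nullable;
      bool        is_null;
      long        value;
      TableFlags* table;
    };

    // A zero offset stored in the record postfix means "the value is NULL".
    void read_field_from_buffer(Field& f, const unsigned char* rec, uint16_t offset) {
      if (offset == 0) {
        if (f.nullable)
          f.is_null = true;
        else
          f.table->null_row = true;   // non-nullable field of an outer join's inner
                                      // table: mark the whole row as NULL-complemented
        return;
      }
      if (!f.nullable)
        f.table->null_row = false;    // the fix: also reset the flag for non-NULL values
      f.is_null = false;
      std::memcpy(&f.value, rec + offset, sizeof(f.value));
    }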
-
- 13 Nov, 2010 1 commit
-
-
Igor Babaev authored
The patch that introduced the new enumeration type Match_flag for the values of match flags in the records put into join buffers missed the necessary modifications in JOIN_CACHE::set_match_flag_if_none. This could cause wrong results for outer joins with ON expressions only over outer tables.
-
- 11 Nov, 2010 1 commit
-
-
Igor Babaev authored
Miscalculation of the minimum possible buffer size could trigger an assert in JOIN_CACHE_HASHED::put_record when join_buffer_size was set to a value less than the length of one record to be stored in the join buffer. It happened due to the following mistakes:
- underestimation of the space needed for a key in the hash table (we have to take into account that the hash table can have more buckets than the expected number of records);
- the maximum total length of all records stored in the join buffer was not saved in the field max_used_fieldlength by the function calc_used_field_length.
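A back-of-the-envelope C++ sketch of the corrected minimum-size estimate; the bucket factor and the names are illustrative assumptions, not the real JOIN_CACHE_HASHED arithmetic:

    #include <cstddef>
    #include <algorithm>

    // The buffer must hold at least one full record, its key entry, and the hash
    // table itself, whose bucket count may exceed the expected number of records.
    std::size_t min_join_buffer_size(std::size_t max_record_length,
                                     std::size_t key_entry_length,
                                     std::size_t expected_records) {
      std::size_t buckets = std::max<std::size_t>(2 * expected_records, 1);
      std::size_t hash_table_bytes = buckets * sizeof(unsigned char*);
      return max_record_length + key_entry_length + hash_table_bytes;
    }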
-
- 05 Nov, 2010 1 commit
-
-
Igor Babaev authored
When probing into the hash table of a hashed join cache is performed, the key value must not be constructed in the buffer used to build the keys stored in the hash table. The constant parts of these keys are copied only once, so they must never be overwritten. Otherwise wrong results can be produced by queries that employ hashed join buffers.
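A conceptual C++ illustration of keeping a separate scratch buffer for probe keys; the class and buffer sizes are invented for the example:

    #include <cstring>
    #include <cstddef>

    struct HashedCacheModel {
      unsigned char build_key_buf[64];   // constant key parts are copied here once
      unsigned char probe_key_buf[64];   // separate scratch space for lookups

      // Assemble a probe key without touching build_key_buf, so the constant
      // prefix copied there during buffer construction is never overwritten.
      const unsigned char* make_probe_key(std::size_t const_prefix_len,
                                          const unsigned char* varying_part,
                                          std::size_t varying_len) {
        std::memcpy(probe_key_buf, build_key_buf, const_prefix_len);
        std::memcpy(probe_key_buf + const_prefix_len, varying_part, varying_len);
        return probe_key_buf;
      }
    };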
-
- 03 Nov, 2010 1 commit
-
-
Igor Babaev authored
plans or wrong results due to the fact that JOIN_CACHE functions ignored the possibility of interleaving materialized semijoin tables with tables whose records were stored in join buffers. These fixes would become mostly unnecessary if the new code of MWL#90 were merged into 5.3 right now. Yet the fix to the code of optimize_wo_join_buffering was needed in any case.
-
- 27 Oct, 2010 1 commit
-
-
Igor Babaev authored
-
- 22 Oct, 2010 2 commits
-
-
Igor Babaev authored
After the patch for bug 663840 had been applied, the test case for bug 663818 triggered the assert introduced by that patch. It happened because the patch turned out to be incomplete: the space needed for a key entry must be taken into account both for the record written into the buffer and for the next record when figuring out whether the record being written is the last one for the buffer or not.
-
Igor Babaev authored
When adding a new record into the join buffer employed by the BNLH join algorithm, the writing procedure JOIN_CACHE::write_record_data checks whether there is enough space for the record in the buffer. When doing this it must take into account a possible new key entry added to the buffer. It might happen, as demonstrated by the bug's test case, that there is enough remaining space in the buffer for the record, but not for the additional key entry for this record. In this case the key entry overwrites the end of the record, which might cause a crash or wrong results. Fixed by taking into account the possible addition of a new key entry when estimating the remaining free space in the buffer.
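A one-function C++ sketch of the corrected space check for this and the previous commit; the names are illustrative:

    #include <cstddef>

    // A record only fits in the join buffer if there is also room for the key
    // entry that must accompany it.
    bool record_fits(std::size_t free_bytes,
                     std::size_t record_length,
                     std::size_t key_entry_length) {
      return record_length + key_entry_length <= free_bytes;
    }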
-
- 18 Oct, 2010 1 commit
-
-
Igor Babaev authored
about the employed join algorithms. Refactored constructors of the JOIN_CACHE* classes.
-
- 10 Oct, 2010 2 commits
-
-
Sergey Petrunya authored
-
Sergey Petrunya authored
-
- 06 Oct, 2010 1 commit
-
-
Igor Babaev authored
Employed the same kind of optimization as in the fix for the cases when a join buffer is used. The optimization performs early evaluation of the conditions from an ON expression whose table references are only to outer tables of an outer join.
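A conceptual C++ sketch (no server types; the predicates are invented) of evaluating the outer-only part of an ON expression before scanning the inner table:

    #include <vector>

    struct Outer { int a; };
    struct Inner { int b; };

    // For "ot LEFT JOIN it ON p(ot) AND q(ot, it)", p references only the outer table.
    bool p_outer_only(const Outer& o)            { return o.a > 0; }
    bool q_mixed(const Outer& o, const Inner& i) { return o.a == i.b; }

    void join_one_outer_row(const Outer& o, const std::vector<Inner>& it) {
      if (!p_outer_only(o)) {
        // Early evaluation: the ON expression is false for every inner row,
        // so emit a NULL-complemented row without scanning 'it' at all.
        return;
      }
      bool matched = false;
      for (const Inner& i : it)
        if (q_mixed(o, i)) { /* emit (o, i) */ matched = true; }
      if (!matched) { /* emit NULL-complemented row */ }
    }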
-
- 04 Oct, 2010 1 commit
-
-
Igor Babaev authored
The fix aligns join_null_complements() with join_matching_records(), making both call generate_full_extensions(). There should not be any difference between how the WHERE clause is applied to NULL-complemented records from a partial join and how it is applied to other partially joined records: the latter happens in join_matching_records(), precisely in generate_full_extensions().
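A toy C++ model of the alignment (the function and type names mirror the ones mentioned above, but the bodies are placeholders):

    struct PartialJoinRow { bool null_complemented = false; };

    bool where_clause(const PartialJoinRow&) { return true; }   // stand-in predicate

    void generate_full_extensions(const PartialJoinRow& r) {
      if (!where_clause(r))
        return;                      // the WHERE clause filters both kinds of rows here
      // ... extend the partial join and pass the result on ...
    }

    void join_matching_records(const PartialJoinRow& r) { generate_full_extensions(r); }

    void join_null_complements(PartialJoinRow r) {
      r.null_complemented = true;
      generate_full_extensions(r);   // previously this path bypassed the shared step
    }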
-
- 21 Sep, 2010 1 commit
-
-
Igor Babaev authored
When an incremental join cache is used to join a table whose fields are not referenced anywhere in the query, the association pointer to the last record in such a cache can be the same as the pointer to the end of the buffer. The function JOIN_CACHE_BKA::get_next_key must take this into consideration when iterating over the keys of the records from the join buffer. The assertion in JOIN_TAB_SCAN_MRR::next must also take this into consideration. Borrowed a slightly changed test case from a patch attached to bug #52394.
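A tiny C++ illustration of the relaxed sanity check, with an invented helper name:

    #include <cassert>
    #include <cstdint>

    // When the joined table contributes no fields, the key position of the last
    // cached record may legally coincide with the end of the used buffer space,
    // so the check must be '<=' rather than '<'.
    void check_key_position(const uint8_t* key_pos, const uint8_t* buf_end) {
      assert(key_pos <= buf_end);
    }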
-
- 03 Sep, 2010 1 commit
-
-
Igor Babaev authored
-
- 02 Sep, 2010 2 commits
-
-
Igor Babaev authored
-
Igor Babaev authored
-
- 01 Sep, 2010 2 commits
-
-
Igor Babaev authored
-
Igor Babaev authored
-
- 31 Aug, 2010 1 commit
-
-
Igor Babaev authored
-
- 17 Jul, 2010 1 commit
-
-
Sergey Petrunya authored
- Lots of TODO comments
- Add the mrr_sort_keys flag to @@optimizer_switch
- [from Igor] SQL layer part passes the HA_MRR_MATERIALIZED_KEYS flag
- Don't call rnd_pos() many times in a row if the sorted rowid buffer has the same rowid value for multiple consecutive (rowid, range_id) pairs.
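A sketch of the last point in C++, assuming a sorted vector of (rowid, range_id) pairs and a stub in place of handler::rnd_pos():

    #include <cstdint>
    #include <vector>

    struct Pair { std::vector<uint8_t> rowid; int range_id; };

    void rnd_pos_stub(const std::vector<uint8_t>& /*rowid*/) { /* fetch the row by rowid */ }

    // Because the buffer is sorted by rowid, equal rowids are adjacent; fetch the
    // row once and reuse it for every consecutive (rowid, range_id) pair.
    void fetch_rows(const std::vector<Pair>& sorted_pairs) {
      const std::vector<uint8_t>* last = nullptr;
      for (const Pair& p : sorted_pairs) {
        if (last == nullptr || p.rowid != *last) {
          rnd_pos_stub(p.rowid);
          last = &p.rowid;
        }
        // ... process (current row, p.range_id) ...
      }
    }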
-
- 02 Jul, 2010 1 commit
-
-
Igor Babaev authored
join cache module. Without these calls, SELECTs over tables with virtual columns that used the join cache could return wrong results. This could be seen with the test case added to vcol_misc.test.
-
- 22 Jun, 2010 1 commit
-
-
Sergey Petrunya authored
- Remove back key_parts from the multi_range_read_init() parameters
- Related code simplification/cleanup
-
- 19 Jun, 2010 1 commit
-
-
Sergey Petrunya authored
- First code (will need code cleanup)
-
- 07 Mar, 2010 1 commit
-
-
Sergey Petrunya authored
- The problem was that the DuplicateWeedout strategy setup code wasn't aware of the fact that join buffering would be used and applied an optimization that doesn't work together with join buffering. Fixed by making the DuplicateWeedout setup code perform a pessimistic check about whether there is a chance that join buffering will be used.
- Make JOIN_CACHE_BKA::init() correctly process Copy_field elements that denote saving current rowids in the join buffer.
mysql-test/r/subselect_sj2.result: Update test results
mysql-test/r/subselect_sj2_jcl6.result: Update test results
mysql-test/r/subselect_sj_jcl6.result: Testcase
mysql-test/t/subselect_sj2.test: Update test results
mysql-test/t/subselect_sj_jcl6.test: Testcase
sql/opt_subselect.cc: The problem was that the DuplicateWeedout strategy setup code wasn't aware of the fact that join buffering would be used and applied an optimization that doesn't work together with join buffering. Fixed by making the DuplicateWeedout setup code perform a pessimistic check about whether there is a chance that join buffering will be used.
sql/sql_join_cache.cc: Make JOIN_CACHE_BKA::init() correctly process Copy_field elements that denote saving current rowids in the join buffer.
sql/sql_select.cc: Added a question note
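A hypothetical C++ sketch of what a "pessimistic check" amounts to; the state fields and flag are invented for illustration:

    struct OptimizerState {
      int  join_cache_level;        // stand-in for the settings that enable buffering
      bool buffering_ruled_out;     // set only when buffering is provably impossible
    };

    // Pessimistic: answer "yes, buffering may be used" unless it is ruled out.
    bool join_buffering_possible(const OptimizerState& s) {
      return s.join_cache_level > 0 && !s.buffering_ruled_out;
    }

    // DuplicateWeedout setup applies its extra optimization only when there is
    // no chance that join buffering will be chosen later.
    void setup_duplicate_weedout(const OptimizerState& s, bool* apply_optimization) {
      *apply_optimization = !join_buffering_possible(s);
    }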
-
- 06 Mar, 2010 1 commit
-
-
Igor Babaev authored
The function JOIN_CACHE::read_all_record_fields could return 0 for an incremental join cache in two cases:
1. there were no more records in the associated join buffer;
2. there were no table fields stored in the join buffer.
As a result the function JOIN_CACHE::get_record() could return prematurely and not read all the needed fields from the join buffers into the record buffer. Now the function JOIN_CACHE::read_all_record_fields returns -1 if there are no more records in the associated join buffer.
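A simplified C++ model of the changed return convention (not the real class):

    // 0 used to be ambiguous between "no more records" and "zero field bytes
    // stored", so "no more records" is now signalled with -1.
    struct IncrementalCacheModel {
      int records_left;
      int stored_field_bytes;

      int read_all_record_fields() {
        if (records_left == 0)
          return -1;                 // distinct end-of-buffer signal
        --records_left;
        return stored_field_bytes;   // may legitimately be 0
      }
    };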
-
- 05 Mar, 2010 1 commit
-
-
Igor Babaev authored
Made sure that join buffers could be used for inner tables of any semi-join when the first match strategy is employed.
-
- 15 Feb, 2010 1 commit
-
-
Sergey Petrunya authored
- Factor out subquery code into sql/opt_subselect.{h,cc}
- Stop using the term "confluent" (was used due to misreading the dictionary)
-
- 18 Jan, 2010 1 commit
-
-
Sergey Petrunya authored
- Enable semi-join handling in the join cache code
-
- 21 Dec, 2009 1 commit
-
-
Igor Babaev authored
WL#2771 "Block Nested Loop Join and Batched Key Access Join"
-