- 16 Feb, 2011 3 commits
-
-
Sergey Petrunya authored
-
Sergey Petrunya authored
-
Sergey Petrunya authored
-
- 15 Feb, 2011 5 commits
-
-
Sergey Petrunya authored
-
Sergey Petrunya authored
-
Sergey Petrunya authored
-
Sergey Petrunya authored
-
Sergey Petrunya authored
- Merge with 5.3 (3)
-
- 12 Feb, 2011 4 commits
-
-
Vladislav Vaintroub authored
Huge static libraries like libmysqld might not build if the /MACHINE flag with the correct processor architecture is missing for the librarian (see http://www.vtk.org/Bug/view.php?id=11240). The fix is to add the /MACHINE flag for x64 builds.
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
When MariaDB 5.3 is compiled with VS2010, several tests would enter an infinite loop in sel_arg_range_seq_next(). The reason is a compiler backend bug, present in neither VS2008 nor VS2010 SP1 RC. The workaround is to compile this function without the most aggressive optimization flag (-Og), using #pragma optimize ("g", {on|off}) for this version of the MSVC compiler.
-
Vladislav Vaintroub authored
- add forgotten source file
-
- 11 Feb, 2011 1 commit
-
-
Sergey Petrunya authored
Merge: BUG#716293: "Range checked for each record" is not used if condition refers to outside of subquery
-
- 10 Feb, 2011 1 commit
-
-
Sergey Petrunya authored
- Assume that references to tables outside of the subquery are known when doing the "Range checked for each record" check.
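A minimal sketch of the kind of query this affects (hypothetical tables; whether the plan actually shows "Range checked for each record" depends on statistics):

    CREATE TABLE t1 (a INT, KEY(a));
    CREATE TABLE t2 (b INT, c INT, KEY(b), KEY(c));
    -- The subquery's condition refers to the outer column t1.a through
    -- two alternative indexes on t2, so no single range can be fixed at
    -- optimization time; with the fix the optimizer may report
    -- "Range checked for each record" for t2 instead of a full scan.
    EXPLAIN SELECT * FROM t1
    WHERE t1.a IN (SELECT t2.b FROM t2 WHERE t2.b > t1.a OR t2.c > t1.a);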
-
- 09 Feb, 2011 2 commits
-
-
Igor Babaev authored
-
Igor Babaev authored
-
- 07 Feb, 2011 1 commit
-
-
Igor Babaev authored
-
- 06 Feb, 2011 1 commit
-
-
Igor Babaev authored
with the test case added by this patch. The bug cannot be reproduced with the same test case in the main 5.3 tree because the backported fix for bug 59696 masks the problem that causes the crash there. It is not clear whether that fix masks the problem in all possible cases. In any case, the patch for bug 698882 introduced some inconsistent data structures that could contain indirect references to a deleted object. It happened when two Item_equal objects were merged and the Item_field list of the second object was joined to that of the first object. This operation required adjusting the backward pointers in the Item fields from the joined list; however, the adjustment was missing, which caused crashes in the tree for MWL#128. Now the backward pointers are set only when Item_equal items are completely built and are not changed afterwards.
-
- 05 Feb, 2011 1 commit
-
-
Igor Babaev authored
When this flag is 'off', the size of the used join buffer is taken directly from the system variable join_buffer_size. When this flag is 'on', the size of the buffer depends on the estimated number of rows in the partial join whose records are to be stored in the buffer. By default this flag is set to 'on'.
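A sketch of how the flag is exercised, assuming it is exposed as the optimizer_switch option optimize_join_buffer_size (as in MariaDB 5.3; the exact name here is an assumption):

    -- join_buffer_size acts as an upper bound; with the flag on, the
    -- optimizer may allocate a smaller buffer based on the estimated
    -- number of rows in the partial join.
    SET SESSION join_buffer_size = 1048576;
    SET SESSION optimizer_switch = 'optimize_join_buffer_size=on';
    -- with the flag off, each join buffer is allocated at the full size:
    SET SESSION optimizer_switch = 'optimize_join_buffer_size=off';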
-
- 01 Feb, 2011 2 commits
-
-
Vladislav Vaintroub authored
* declaration in the middle of a block in a C file
* round() is only available in C99
-
Igor Babaev authored
The patch fixed the following optimizer defect: when substituting best equal fields into WHERE conditions, so that they can be evaluated as early as possible, the optimizer skipped conditions over views. That could lead to suboptimal execution of queries that used views. Slightly changed the test case to demonstrate the performance improvement of this fix.
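Roughly the shape of a query that benefits (a sketch with hypothetical tables; the exact plan depends on statistics):

    CREATE TABLE t1 (a INT, KEY(a));
    CREATE TABLE t2 (b INT);
    CREATE VIEW v2 AS SELECT b FROM t2;
    -- t1.a = v2.b makes t1.a and v2.b equal; with the fix the optimizer
    -- can substitute the best equal field into the condition over the
    -- view as well, so v2.b > 10 can be checked as t1.a > 10 earlier.
    SELECT * FROM t1, v2 WHERE t1.a = v2.b AND v2.b > 10;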
-
- 29 Jan, 2011 1 commit
-
-
Igor Babaev authored
-
- 28 Jan, 2011 1 commit
-
-
Igor Babaev authored
This bug could manifest itself when a hash join over a varchar column with NULL values in some rows was used. It happened because the function key_buf_cmp erroneously returned FALSE when one of the joined key fields was NULL while the other was not. Also fixed two other bugs in the functions key_hashnr and key_buf_cmp that could possibly lead to wrong results for some queries that used hash join over several columns with NULLs. Also reverted the latest addition of the test case for bug #45092: it had already been backported earlier.
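A sketch of the failing scenario, assuming hash join is enabled through join_cache_level as in MariaDB 5.3:

    SET SESSION join_cache_level = 8;  -- enable the hash join variants
    CREATE TABLE t1 (v VARCHAR(10));
    CREATE TABLE t2 (v VARCHAR(10));
    INSERT INTO t1 VALUES ('a'), (NULL);
    INSERT INTO t2 VALUES (NULL), ('b');
    -- NULL never equals NULL in SQL, so this join must return no rows;
    -- before the fix key_buf_cmp() could report a spurious match when
    -- exactly one of the compared key fields was NULL.
    SELECT * FROM t1 JOIN t2 ON t1.v = t2.v;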
-
- 27 Jan, 2011 1 commit
-
-
Igor Babaev authored
This was another bug in the patch for bug 698882. The new code from that patch did not ensure that substitutions of fields for best equal fields were performed on all AND-OR levels. As a result, substitutions for best fields in some predicates used by the range optimizer were not actually performed, while range plans could rely on these substitutions. This could lead to inconsistent data structures and ultimately to a crash.
-
- 26 Jan, 2011 1 commit
-
-
Igor Babaev authored
The bug was in the code of the patch fixing bug 698882. Due to improper casting, the method store_key_field::change_source_field was called for elements of the array TABLE_REF::key_copy that were either of a different type or not allocated at all. This caused crashes for some queries.
-
- 24 Jan, 2011 1 commit
-
-
Igor Babaev authored
hash join over equi-join conditions without supporting indexes.
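A sketch showing a hash join chosen for an equi-join with no supporting index on either side (assuming join_cache_level >= 7 enables the hash join variants, as in MariaDB 5.3):

    SET SESSION join_cache_level = 8;
    CREATE TABLE t1 (a INT);   -- no index on a
    CREATE TABLE t2 (a INT);   -- no index on a
    -- EXPLAIN is expected to show a hash (BNLH) join buffer for t2
    -- instead of a plain block nested loop.
    EXPLAIN SELECT * FROM t1 JOIN t2 ON t1.a = t2.a;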
-
- 23 Jan, 2011 2 commits
-
-
Igor Babaev authored
-
Igor Babaev authored
of sort_intersect scans.
-
- 22 Jan, 2011 3 commits
-
-
Igor Babaev authored
-
Igor Babaev authored
-
Igor Babaev authored
hash join over equi-join conditions without supporting indexes.
-
- 21 Jan, 2011 1 commit
-
-
unknown authored
-
- 14 Jan, 2011 8 commits
-
-
Sergei Golubchik authored
(incorrect block size)
-
Sergei Golubchik authored
-
Sergei Golubchik authored
(fewer unneeded copies of key pages)
storage/maria/ma_rkey.c:
  Fixed the wrong test of whether SEARCH_SAVE_BUFF should be set. Now we assume that if we are doing HA_READ_KEY_EXACT, we don't have to copy the last key buffer (in other words, it's unlikely to be followed by a read-next call).
-
Sergei Golubchik authored
Aria and MyISAM in create_internal_tmp_table_from_heap() (safe, as duplicates are impossible). This gives a HUGE speed boost!
sql/opt_subselect.cc:
  Fixed a problem with wrong recinfo in create_duplicate_weedout_tmp_table(). Tagged the table with 'no_rows' so that when we create the table on disk, we only store the index data. This gave us a major speedup!
sql/sql_select.cc:
  create_tmp_table_from_heap() now uses bulk_insert + repair_by_sort when creating Aria/MyISAM tables from HEAP tables. This gives a HUGE speed boost!
storage/maria/ha_maria.cc:
  Extended bulk_insert() to recreate UNIQUE keys for internal temporary tables.
storage/maria/ma_open.c:
  Initialize m_info->lock.type properly for temporary tables (needed for start_bulk_insert()).
storage/maria/ma_write.c:
  Don't check uniques that are disabled.
storage/myisam/ha_myisam.cc:
  Extended bulk_insert() to recreate UNIQUE keys for internal temporary tables.
-
Sergei Golubchik authored
This will also enable us in the future to collect statistics on writes to internal tmp tables.
sql/handler.h:
  Added ha_write_tmp_row().
sql/opt_subselect.cc:
  ha_write_row() -> ha_write_tmp_row().
sql/sql_class.h:
  Added ha_write_tmp_row().
sql/sql_select.cc:
  ha_write_row() -> ha_write_tmp_row().
-
Sergei Golubchik authored
This makes the keys smaller (no row pointer) and gives us proper errors if we use the table wrongly.
sql/sql_select.cc:
  Use NO_RECORD for tables that don't need row data.
storage/maria/Makefile.am:
  Added ma_norec.c.
storage/maria/ma_check.c:
  Added support for the NO_RECORD record format (don't store any row data).
storage/maria/ma_norec.c:
  Added support for the NO_RECORD record format.
storage/maria/ma_open.c:
  Added support for the NO_RECORD record format.
storage/maria/ma_search.c:
  Added support for zero-size row pointers (used with NO_RECORD).
storage/maria/ma_test1.c:
  Added testing of the NO_RECORD record format.
storage/maria/maria_chk.c:
  Added support for NO_RECORD.
storage/maria/maria_def.h:
  Added support for NO_RECORD.
storage/maria/unittest/ma_test_all-t:
  Added testing of the NO_RECORD record format.
-
Sergei Golubchik authored
temptables, not "uniques", that are hash-based keys. sql/sql_expression_cache.cc: Don't set uniques (we don't want or need an unique constraint on this table)
-
Sergei Golubchik authored
This was needed as the old code caused us to have LOTS of duplicate hash values when used by the optimizer.
include/m_ctype.h:
  Made my_hash_sort_bin() external.
storage/maria/ma_unique.c:
  Better hash for packed numeric data for unique handling. This was needed as the old code caused us to have LOTS of duplicate hash values when used by the optimizer.
-