- 27 Oct, 2011 1 commit
-
-
Alexander Nozdrin authored
-
- 26 Oct, 2011 2 commits
-
-
Karen Langford authored
-
Hery Ramilison authored
-
- 24 Oct, 2011 2 commits
-
-
Nirbhay Choubey authored
-
Nirbhay Choubey authored
Fixed a misplaced parenthesis, injected due to syncing from libedit CVS head.
-
- 23 Oct, 2011 1 commit
-
-
Dmitry Lenev authored
NEW_FRM_MEM WITHOUT NEEDING TO".

During the process of opening tables for a statement, we allocated memory that was used only during view loading, even when the statement didn't use any views. Such an unnecessary allocation (and the corresponding freeing) might have caused significant performance overhead in some workloads. For example, it caused up to a 15% slowdown in a simple stored routine calculating Fibonacci numbers.

This memory was pre-allocated as part of "new_frm_mem" MEM_ROOT initialization at the beginning of open_tables(). This patch addresses the issue by turning off memory pre-allocation when initializing this MEM_ROOT. Now, memory on this root is allocated only at the point when the first .FRM for a view is opened.

The patch doesn't contain a test case, since it is hard to test the performance improvement or the absence of a memory allocation in our test framework.
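A minimal sketch of the idea, using a hypothetical mem_root_t rather than the server's actual MEM_ROOT API: the root reserves its block lazily on the first allocation instead of up front, so statements that never open a view pay nothing.

```cpp
#include <cstdio>
#include <cstdlib>

// Hypothetical simplified memory root; the real MEM_ROOT has more fields.
struct mem_root_t {
  char  *block;       // current block, stays NULL until first use
  size_t block_size;  // size to reserve when the first allocation happens
  size_t used;        // bytes handed out from the current block
};

// Initialization no longer pre-allocates: cheap for statements without views.
void mem_root_init(mem_root_t *root, size_t block_size) {
  root->block = nullptr;
  root->block_size = block_size;
  root->used = 0;
}

// First allocation (e.g. when the first view .FRM is opened) reserves the block.
void *mem_root_alloc(mem_root_t *root, size_t size) {
  if (root->block == nullptr)
    root->block = static_cast<char *>(std::malloc(root->block_size));
  if (root->block == nullptr || root->used + size > root->block_size)
    return nullptr;  // a real implementation would chain additional blocks
  void *p = root->block + root->used;
  root->used += size;
  return p;
}

void mem_root_free(mem_root_t *root) {
  std::free(root->block);
  root->block = nullptr;
  root->used = 0;
}

int main() {
  mem_root_t frm_mem;
  mem_root_init(&frm_mem, 8 * 1024);              // no malloc yet
  void *view_buf = mem_root_alloc(&frm_mem, 256); // first view triggers the reservation
  std::printf("allocated: %p\n", view_buf);
  mem_root_free(&frm_mem);
  return 0;
}
```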
-
- 22 Oct, 2011 1 commit
-
-
Ashish Agarwal authored
TESTS: CRASH, CORRUPTION, 4G MEMOR

Issue: Valgrind errors due to checksum and optimize queries against archive tables with null columns; the table record buffer was not initialized.

Solution: Initialize the record buffer.
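As an illustration only (not the actual storage-engine code), zero-filling a record buffer before checksum-style code reads it is what keeps tools like Valgrind from flagging reads of uninitialized NULL-column bytes; the buffer name and size below are made up.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
  const size_t record_length = 128;   // hypothetical row buffer size
  uint8_t record_buffer[record_length];

  // Without this memset, bytes belonging to NULL columns are never written,
  // and a later checksum over the full buffer reads uninitialized memory.
  std::memset(record_buffer, 0, sizeof(record_buffer));

  // Simple stand-in for a checksum computed over the whole record buffer.
  uint32_t checksum = 0;
  for (size_t i = 0; i < sizeof(record_buffer); i++)
    checksum = checksum * 31 + record_buffer[i];

  std::printf("checksum: %u\n", checksum);
  return 0;
}
```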
-
- 21 Oct, 2011 7 commits
-
-
Nirbhay Choubey authored
-
Nirbhay Choubey authored
BREAKS SOURCE RELEASE BUILD

Some of the required files were not being copied during 'make dist', so the build failed for the resulting distribution source. Added the missing files to Makefile.am.
-
Ashish Agarwal authored
TESTS: CRASH, CORRUPTION, 4G MEMOR

Issue: Valgrind errors due to checksum and optimize queries against archive tables with null columns; the table record buffer was not initialized.

Solution: Initialize the record buffer.
-
Sergey Vojtovich authored
-
Sergey Vojtovich authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
btr_record_not_null_field_in_rec(): Remove the parameter rec. Use rec_offs_nth_sql_null() instead of rec_get_nth_field(). rb:788 approved by Jimmy Yang
-
- 20 Oct, 2011 1 commit
-
-
Sergey Vojtovich authored
USING MYISAM_USE_MMAP ON WINDOWS

When OPTIMIZE/REPAIR TABLE switches to a new data file, the old data file is removed while the memory mapping is still active. With the 5.1 implementation of nt_share_delete(), it is not permitted to remove an mmapped file. This fix disables memory mapping for mi_repair() operations.

mysql-test/r/myisam.result: A test case for BUG#11757032.
mysql-test/t/myisam.test: A test case for BUG#11757032.
storage/myisam/ha_myisam.cc: The mi_repair*() family of functions uses file I/O even if memory mapping is available. Since mixing mmap I/O and file I/O may cause various artifacts, memory mapping must be disabled.
storage/myisam/mi_delete_all.c: Clean-up: do not attempt to remap the file after truncate, since there is nothing to map.
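Conceptually the fix amounts to the ordering sketched below; the types and helper names are hypothetical, not the real MyISAM functions, but they show the point: drop the mapping before the file-I/O repair path touches (and possibly replaces) the data file.

```cpp
#include <cstdio>

// Hypothetical table handle; the real MyISAM structures differ.
struct table_t {
  bool mmap_active;
};

void table_unmap(table_t *t) { t->mmap_active = false; }  // release the mapping

// Repair rewrites the data file with plain file I/O; mixing that with an
// active memory mapping (or deleting a mapped file on Windows) is unsafe.
void repair_table(table_t *t) {
  if (t->mmap_active)
    table_unmap(t);  // disable memory mapping for the duration of the repair
  std::puts("repairing with file I/O only");
  // ... rewrite the data file ...
  // Do not remap here if the old file was just replaced or truncated.
}

int main() {
  table_t t{true};
  repair_table(&t);
  std::printf("mmap active after repair: %d\n", t.mmap_active);
  return 0;
}
```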
-
- 19 Oct, 2011 4 commits
-
-
Bjorn Munch authored
-
Joerg Bruehe authored
-
Bjorn Munch authored
-
unknown authored
Modified the do_get_error() function in mysqltest.cc to handle multiple variables passed. Added a test case to mysqltest.test to verify the handling of multiple errors passed.
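A rough sketch of handling a comma-separated list of expected errors, written as plain C++ string handling rather than the actual mysqltest code; the function name and container are illustrative only.

```cpp
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Split an argument like "ER_NO_SUCH_TABLE,ER_BAD_TABLE_ERROR" into
// individual expected-error names instead of treating it as a single token.
std::vector<std::string> parse_expected_errors(const std::string &arg) {
  std::vector<std::string> errors;
  std::stringstream ss(arg);
  std::string item;
  while (std::getline(ss, item, ','))
    if (!item.empty())
      errors.push_back(item);
  return errors;
}

int main() {
  for (const auto &e : parse_expected_errors("ER_NO_SUCH_TABLE,ER_BAD_TABLE_ERROR"))
    std::cout << "expecting: " << e << '\n';
  return 0;
}
```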
-
- 18 Oct, 2011 2 commits
-
-
Nirbhay Choubey authored
-
Nirbhay Choubey authored
WITH LIBEDIT

Libedit won't build on platforms that do not provide "sys/cdefs.h". Removed the inclusion of cdefs.h from all files other than sys.h, which includes this file only when the header is found while configuring.
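The usual autoconf-style pattern for such a guarded include looks roughly like the following; HAVE_SYS_CDEFS_H is the conventional macro name for this check, though the exact symbol used in the libedit bundle may differ.

```cpp
// sys.h-style wrapper: include <sys/cdefs.h> only when configure found it,
// so platforms without the header can still build.
#if defined(HAVE_CONFIG_H)
#  include "config.h"          // defines HAVE_SYS_CDEFS_H when the header exists
#endif

#ifdef HAVE_SYS_CDEFS_H
#  include <sys/cdefs.h>
#endif

// Other source files include this wrapper instead of <sys/cdefs.h> directly.
#include <cstdio>

int main() {
  std::puts("built with or without sys/cdefs.h");
  return 0;
}
```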
-
- 17 Oct, 2011 2 commits
-
-
unknown authored
This patch corrects a defect whereby the bootstrap_server() method was returning 0 instead of the error code generated. The code has been changed to return the value actually returned by the bootstrap command.
-
unknown authored
This patch corrects a defect in the building of the DELETE commands for disabling a plugin, whereby only the original plugin data was deleted. If there were other plugin rows, the delete did not remove them. The code has been changed to remove all rows from the mysql.plugin table that were inserted when the plugin was loaded. The test has also been changed to correctly identify whether all rows have been deleted.
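As an illustration only (not the client's actual code), the change amounts to building a DELETE that covers every mysql.plugin row the plugin installed, rather than just the first one; the plugin and component names below are made up.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Build a DELETE statement that removes every mysql.plugin row the plugin
// registered (a plugin library can install several components).
std::string build_disable_plugin_sql(const std::vector<std::string> &components) {
  std::string sql = "DELETE FROM mysql.plugin WHERE name IN (";
  for (size_t i = 0; i < components.size(); i++) {
    if (i) sql += ", ";
    sql += "'" + components[i] + "'";
  }
  sql += ");";
  return sql;
}

int main() {
  // Hypothetical plugin that registers two components.
  std::vector<std::string> components = {"example_plugin", "example_plugin_info"};
  std::cout << build_disable_plugin_sql(components) << '\n';
  return 0;
}
```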
-
- 14 Oct, 2011 1 commit
-
-
Alfranio Correia authored
warnings are converted to errors, the compiler complains about the fact that binlog_can_be_corrupted is defined but never used. We need to check whether this is dead code or whether someone removed code by mistake.
-
- 13 Oct, 2011 2 commits
-
-
Nirbhay Choubey authored
-
Nirbhay Choubey authored
Updated libedit library.
-
- 12 Oct, 2011 6 commits
-
-
Sergey Glukhov authored
-
Sergey Glukhov authored
When a temporary table is used for result sorting, the result field for the GROUP_CONCAT function is created using the group_concat_max_len size. This leads to result truncation when character_set_results is a multi-byte character set, due to an insufficient tmp table field size. The fix is to increase the temporary table field size for GROUP_CONCAT: the make_string_field() method is overloaded for the Item_func_group_concat class and uses max_characters * collation.collation->mbmaxlen as the result field size, where max_characters is the maximum number of characters that can fit into max_length.

mysql-test/r/ctype_utf16.result: test result
mysql-test/r/ctype_utf32.result: test result
mysql-test/r/ctype_utf8.result: test result
mysql-test/t/ctype_utf16.test: test case
mysql-test/t/ctype_utf32.test: test case
mysql-test/t/ctype_utf8.test: test case
sql/item.h: make Item::make_string_field() virtual
sql/item_sum.cc: added the Item_func_group_concat::make_string_field(TABLE *table) method, which uses max_characters * collation.collation->mbmaxlen as the result item size; max_characters is the maximum number of characters that can fit into max_length.
sql/item_sum.h: added the Item_func_group_concat::make_string_field(TABLE *table) method
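The sizing rule the fix describes can be sketched as below; this is not the server's Item_func_group_concat code, just the arithmetic: the temporary-table field must hold max_characters multiplied by the result charset's maximum bytes per character (mbmaxlen), otherwise multi-byte results get truncated.

```cpp
#include <cstdio>

// Sketch only: the tmp-table field for GROUP_CONCAT must be sized in bytes as
// max_characters * mbmaxlen, where mbmaxlen is the maximum bytes per character
// of the result character set; sizing by character count alone truncates
// multi-byte output.
size_t gconcat_tmp_field_bytes(size_t max_characters, size_t result_mbmaxlen) {
  return max_characters * result_mbmaxlen;
}

int main() {
  // E.g. 1024 characters with utf8 results (up to 3 bytes/char) need 3072 bytes.
  std::printf("tmp field bytes: %zu\n", gconcat_tmp_field_bytes(1024, 3));
  return 0;
}
```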
-
Marko Mäkelä authored
-
Marko Mäkelä authored
hash index at shutdown

btr_search_disable(): Just drop the entire adaptive hash index, without dropping every record separately.

buf_pool_clear_hash_index(): Renamed and simplified from buf_pool_drop_hash_index(). Set block->index = NULL for every block in the buffer pool. Do not release the btr_search_latch. The caller will have to adjust other data structures.

Remove block->is_hashed. It is redundant; it should always be equal to block->index != NULL.

Remove btr_search_fully_disabled, btr_search_enabled_mutex, and SYNC_SEARCH_SYS_CONF. We drop the AHI in one pass, without releasing the btr_search_latch in between.

Replace void* with const rec_t* and add assertions on btr_search_latch and btr_search_enabled to ha0ha.h, ha0ha.ic, ha0ha.c.

page_set_max_trx_id(): Ignore the adaptive hash index. I forgot to push this in rb:750.

btr0sea.c: Always after acquiring btr_search_latch, check for block->index==NULL or !btr_search_enabled. We can now set block->index=NULL while only holding btr_search_latch in exclusive mode. Always acquire btr_search_latch before reading block->index, except in shortcuts when testing for block->index == NULL.

ha_clear(), ha_search(): Unused functions, remove.

buf_page_peek_if_search_hashed(): Remove. This function may avoid latching a page at the cost of a duplicate buf_pool->page_hash lookup.

rb:775 approved by Inaam Rana
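A highly simplified sketch of the "drop everything in one pass" idea, using invented block and pool types rather than InnoDB's real structures: instead of removing hash records one by one, walk the buffer pool once and detach the index pointer from every block.

```cpp
#include <cstdio>
#include <vector>

// Invented stand-ins for an index and a buffer-pool block.
struct dict_index_stub {};
struct buf_block_stub { dict_index_stub *index; };

// One pass over the buffer pool: detach the adaptive-hash linkage from every
// block instead of deleting each hash record individually.
void clear_hash_index(std::vector<buf_block_stub> &pool) {
  for (auto &block : pool)
    block.index = nullptr;  // a separate "is_hashed" flag becomes unnecessary
}

int main() {
  dict_index_stub idx;
  std::vector<buf_block_stub> pool(4, buf_block_stub{&idx});
  clear_hash_index(pool);
  std::printf("first block hashed: %d\n", pool[0].index != nullptr);
  return 0;
}
```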
-
Vinay Fisrekar authored
Adjusted/modified tests, as they were failing if the system time zone is set differently.
-
Vinay Fisrekar authored
bug#11766457 - Adjusted/modified the tests, as they were failing if the system time zone is set differently.
-
- 10 Oct, 2011 1 commit
-
-
Joerg Bruehe authored
because the search pattern for the "INFO_*" files was not general enough: Fixed.
-
- 06 Oct, 2011 1 commit
-
-
Magne Mahre authored
Sun Studio 12 has an error when calculating the compile-time length of a constant character string. The error is only present when building an optimized 32-bit version, using the -xbuiltin=(%all) compiler flag. During compilation, the compiler recognizes the use of the strlen() function on a constant string. It optimizes the strlen away and replaces it with the actual length of the string. This optimization seems to calculate the length wrongly in this particular case. Replacing the "const char *" with a "const char []" solves the problem.
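The workaround described can be illustrated with the pattern below; the string content is arbitrary, and on a correct compiler both forms behave identically, the array form simply sidesteps the miscompiled strlen folding on the affected Sun Studio build.

```cpp
#include <cstdio>
#include <cstring>

// Form that triggered the bad compile-time strlen() folding on Sun Studio 12
// (optimized 32-bit build with -xbuiltin=(%all)):
static const char *greeting_ptr = "hello, world";

// Workaround: declare the constant as an array instead of a pointer.
static const char greeting_arr[] = "hello, world";

int main() {
  std::printf("ptr form length:   %zu\n", std::strlen(greeting_ptr));
  std::printf("array form length: %zu\n", std::strlen(greeting_arr));
  return 0;
}
```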
-
- 05 Oct, 2011 6 commits
-
-
Bjorn Munch authored
-
Bjorn Munch authored
-
Bjorn Munch authored
-
Bjorn Munch authored
but the latter only takes one argument, duh! Fixed by concatenating the args (replace , with .)
-
Bjorn Munch authored
This is a redo for 5.5. Added 'innodb_file_format_max' as a variable whose changes are ignored. Tests that had to restore this were amended. Two tests assumed it to be Antelope; make sure these run on a freshly started server.
-
Sergey Glukhov authored
-