- 12 Aug, 2009 4 commits
-
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
unknown authored
-
unknown authored
The replication SQL thread did not properly set the database default character set (thd->variables.collation_database) when executing a LOAD DATA binlog event. The bug can be reproduced by using the LOAD DATA command in STATEMENT mode. This patch adds code to find the default character set of the current database and assign it to thd->db_charset when the slave server begins to execute a relay log. A test for this bug has been added to rpl_loaddata_charset.test.
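A minimal standalone sketch of the idea, not the actual server change: the charset used when the slave applies LOAD DATA is looked up from the current database's defaults and only falls back to the session value when the database has none. The lookup table and helper name below are hypothetical, for illustration only.

    #include <iostream>
    #include <map>
    #include <string>

    // Hypothetical stand-in for the server's per-database default charset lookup.
    static const std::map<std::string, std::string> db_default_charset = {
        {"db_latin1", "latin1_swedish_ci"},
        {"db_utf8",   "utf8_general_ci"},
    };

    // Resolve the charset the slave should use for the given database,
    // falling back to the session default when the database defines none.
    std::string charset_for_load_data(const std::string& db,
                                      const std::string& session_default) {
        auto it = db_default_charset.find(db);
        return it != db_default_charset.end() ? it->second : session_default;
    }

    int main() {
        // Before the fix the session default effectively always won;
        // after it, the database default takes precedence when defined.
        std::cout << charset_for_load_data("db_utf8", "latin1_swedish_ci") << "\n";
        std::cout << charset_for_load_data("no_such_db", "latin1_swedish_ci") << "\n";
    }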
-
- 11 Aug, 2009 5 commits
-
-
Davi Arnaut authored
-
Davi Arnaut authored
-
Davi Arnaut authored
-
Davi Arnaut authored
-
unknown authored
-
- 10 Aug, 2009 4 commits
-
-
Davi Arnaut authored
-
unknown authored
-
Davi Arnaut authored
-
Martin Hansson authored
-
- 08 Aug, 2009 1 commit
-
-
Davi Arnaut authored
The problem is that the lexer could inadvertently skip over the end of a query being parsed if it encountered a malformed multibyte character. A specially crafted query string could cause the lexer to jump up to six bytes past the end of the query buffer. Another problem was that the lexer could use unfiltered user input as a signed array index for the parser maps (which have lower and upper bounds of 0 and 256, respectively). The solution is to ensure that the lexer only skips over well-formed multibyte characters and that the index value for the parser maps is always an unsigned value. mysql-test/r/ctype_recoding.result: Update test case result: the ending backtick is not skipped over anymore. sql/sql_lex.cc: Characters being analyzed must be unsigned as they can be used as indexes for the parser maps. Only skip over the string if it is a valid multibyte sequence. tests/mysql_client_test.c: Add test case for Bug#45010.
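A self-contained sketch, not the server's lexer, illustrating the two points above: index the 256-entry map with an unsigned byte, and advance past a multibyte character only when the whole sequence is well formed and fits inside the buffer. The tiny well_formed_len() check is a made-up subset of UTF-8, purely for illustration.

    #include <cstddef>
    #include <cstdio>
    #include <cstring>

    // 256-entry "parser map": indexing it with a *signed* char that is negative
    // (any byte >= 0x80) would read out of bounds, hence the unsigned cast.
    static unsigned char state_map[256];

    // Toy multibyte check: 0xC2..0xDF leads a 2-byte sequence whose trailer
    // must be 0x80..0xBF; anything else is treated as malformed.
    static std::size_t well_formed_len(const unsigned char* p, const unsigned char* end) {
        if (p >= end) return 0;
        if (*p < 0x80) return 1;                          // plain ASCII
        if (*p >= 0xC2 && *p <= 0xDF &&                   // 2-byte lead
            p + 1 < end && p[1] >= 0x80 && p[1] <= 0xBF)
            return 2;
        return 0;                                         // malformed: do not skip
    }

    int main() {
        const char query[] = "SELECT '\xC3";              // truncated multibyte at the end
        const unsigned char* p   = reinterpret_cast<const unsigned char*>(query);
        const unsigned char* end = p + std::strlen(query);

        while (p < end) {
            unsigned char c = *p;                         // unsigned: safe map index
            (void)state_map[c];
            std::size_t len = well_formed_len(p, end);
            if (len == 0) { ++p; continue; }              // consume one byte, never overrun
            p += len;                                     // skip a complete character only
        }
        std::puts("scanned without running past the buffer");
    }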
-
- 07 Aug, 2009 1 commit
-
-
Martin Hansson authored
Problem 1: When the 'Using index' optimization is used, the optimizer may still - after cost-based optimization - decide to use another index in order to avoid using a temporary table. But when this happens, the flag telling the storage engine to read the index only (not the table) was still set. Fixed by resetting the flag in the storage engine and the TABLE structure in the above scenario, unless the new index allows for the same optimization. Problem 2: When a 'ref' access method was chosen by the cost-based optimizer (when the column is non-NULLable), it was assumed that the 'quick' access structures (which are based on range scans) needed no initialization. When the ORDER BY optimization overrides the decision, however, it expects 'quick' to be initialized and hence crashes. Fixed in 5.1 (already fixed in 6.0) by initializing 'quick' even when 'ref' access is used. mysql-test/r/order_by.result: Bug#46454: Test result. mysql-test/t/order_by.test: Bug#46454: Test case. sql/sql_select.cc: Bug#46454: Problem 1 fixed in make_join_select(), Problem 2 fixed in test_if_skip_sort_order(). sql/table.h: Bug#46454: Added comment to field.
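A toy model of Problem 1 only, using made-up bitset types rather than the server's TABLE and key structures: the "read index only" flag may survive an index switch only if the replacement index still covers every column the query reads.

    #include <bitset>
    #include <iostream>

    using ColumnSet = std::bitset<8>;   // one bit per table column (toy model)

    // An index "covers" the query if every needed column is part of the index.
    bool index_covers(const ColumnSet& index_columns, const ColumnSet& needed) {
        return (needed & ~index_columns).none();
    }

    int main() {
        ColumnSet needed(0x03);         // query reads columns 0 and 1
        ColumnSet first_choice(0x03);   // covering index picked by the cost model
        ColumnSet switched_to(0x05);    // index chosen later to avoid a temp table

        bool keyread = index_covers(first_choice, needed);   // 'Using index' set
        if (keyread && !index_covers(switched_to, needed))
            keyread = false;            // the reset the fix adds on the index switch

        std::cout << "index-only reads after the switch: " << keyread << "\n";
    }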
-
- 06 Aug, 2009 5 commits
-
-
Ignacio Galarza authored
-
Ignacio Galarza authored
- Remove offensive quotes.
-
Mattias Jonsson authored
when a partition is reorganized. The problem was that table->timestamp_field_type was not changed before copying rows between partitions. Fixed by setting it to TIMESTAMP_NO_AUTO_SET as the first thing in fast_alter_partition_table, so that all if-branches are covered.
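A toy standalone sketch of the failure mode, with made-up types rather than the server's: if the timestamp auto-set mode is not switched off before rows are copied between partitions, the copy silently rewrites the timestamp column.

    #include <ctime>
    #include <iostream>

    enum TimestampAutoSet { AUTO_SET_ON_INSERT, NO_AUTO_SET };   // toy stand-in

    struct Row { std::time_t created; int payload; };

    // Copying a row must not refresh the auto-set timestamp column.
    Row copy_row(const Row& src, TimestampAutoSet mode) {
        Row dst = src;
        if (mode == AUTO_SET_ON_INSERT)
            dst.created = std::time(nullptr);   // the unwanted overwrite
        return dst;
    }

    int main() {
        Row r{1000000000, 42};
        Row kept = copy_row(r, NO_AUTO_SET);    // mode switched off before copying
        std::cout << "timestamp preserved: " << (kept.created == r.created) << "\n";
    }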
-
Satya B authored
column on partitioned table An assertion 'ASSERT_COLUMN_MARKED_FOR_READ' fails if the query is executed with an index containing a DOUBLE column on a partitioned table. The problem is that the assertion expects all the fields which are read to be in the read_set. In this query only the field 'a' is in the read_set, as the tables in the query are joined by the field 'a', and so the assertion fails expecting the other field 'b'. Since the function cmp() is just a comparison of the two parameters passed, the assertion is not required. Fixed by removing the assertion in the DOUBLE field comparison function and also fixing the index initialization to do an ordered index scan with RW lock, which ensures all the fields from a key are in the read_set. Note: this bug is not reproducible with other datatypes because the assertion doesn't exist in the comparison functions for other datatypes. mysql-test/r/partition.result: Testcase for BUG#45816 mysql-test/t/partition.test: Testcase for BUG#45816 sql/field.cc: Removed the assertion ASSERT_COLUMN_MARKED_FOR_READ in the Field_double::cmp() function sql/ha_partition.cc: Fixed the index_init() method to make it initialize the read_set properly if an ordered index scan with RW lock is requested.
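A standalone sketch of why the assertion is unnecessary there (a hypothetical function, not Field_double::cmp itself): a three-way compare of two stored double images only touches the bytes it is handed, so it does not need the column to be marked in any read set.

    #include <cstring>
    #include <iostream>

    // Compare two doubles stored as raw byte images, returning -1, 0 or 1.
    int cmp_double_images(const unsigned char* a_ptr, const unsigned char* b_ptr) {
        double a, b;
        std::memcpy(&a, a_ptr, sizeof(a));
        std::memcpy(&b, b_ptr, sizeof(b));
        if (a < b) return -1;
        return (a > b) ? 1 : 0;
    }

    int main() {
        double x = 1.5, y = 2.5;
        std::cout << cmp_double_images(reinterpret_cast<unsigned char*>(&x),
                                       reinterpret_cast<unsigned char*>(&y)) << "\n";
    }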
-
unknown authored
The server shutdown and start code triggered valgrind failures within nptl_pthread_exit_hack_handler on Ubuntu 9.04, x86 (but not amd64), in the rpl_trigger.test file. To fix the bug, suppress the valgrind failures within nptl_pthread_exit_hack_handler on Ubuntu 9.04, x86 (but not amd64), because the server shutdown and start code is used heavily throughout the MySQL test suite. mysql-test/valgrind.supp: Add a suppression for valgrind failures within nptl_pthread_exit_hack_handler on Ubuntu 9.04, x86 (but not amd64).
-
- 05 Aug, 2009 1 commit
-
-
Jim Winstead authored
-
- 04 Aug, 2009 3 commits
-
-
Davi Arnaut authored
-
Davi Arnaut authored
-
Davi Arnaut authored
-
- 03 Aug, 2009 2 commits
-
-
Alfranio Correia authored
The install procedure does not copy *.inc files located under the mysql-test/t directory. Therefore, this patch moves rpl_trigger.inc to the mysql-test/include directory.
-
Alfranio Correia authored
-
- 02 Aug, 2009 1 commit
-
-
Alfranio Correia authored
The test case fails sporadically on Windows while trying to overwrite an unused binary log. The problem stems from the fact that MySQL on Windows does not immediately unlock/release a file while the process that opened and closed it is still running. In BUG 38603, this issue was circumvented by stopping the MySQL process, copying the file and then restarting the MySQL process. Unfortunately, such facilities are not available in 5.0. Other approaches, such as stopping the slave and issuing CHANGE MASTER, do not work because the relay log file and index are not closed when a slave is stopped. So to fix the problem, we simply do not run the failing part of the test on Windows.
-
- 01 Aug, 2009 2 commits
-
-
Davi Arnaut authored
http://lists.mysql.com/commits/53569 sql/ha_ndbcluster_binlog.cc: Remove extraneous mutex lock which could cause the server to deadlock.
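A standalone illustration of the deadlock pattern being removed (the mutex name is made up; this is not the ha_ndbcluster_binlog.cc code): locking a non-recursive mutex a second time on the same thread blocks forever, so an extra lock taken while the mutex is already held has to go.

    #include <iostream>
    #include <mutex>

    std::mutex injector_mutex;   // hypothetical name, for illustration only

    void work_while_lock_is_held() {
        // The caller already holds injector_mutex; locking it again here would
        // block forever on a non-recursive mutex.
        // injector_mutex.lock();   // <-- the kind of extraneous lock the fix removes
        std::cout << "work done while the caller still holds the lock\n";
    }

    int main() {
        std::lock_guard<std::mutex> guard(injector_mutex);
        work_while_lock_is_held();
    }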
-
Jim Winstead authored
were included in the configure tests. (Bug #46310)
-
- 31 Jul, 2009 11 commits
-
-
Jim Winstead authored
-
Jim Winstead authored
-
Davi Arnaut authored
engine to the partition_csv test. Also remove a test case that was duplicated. Fix the connection procedure with the embedded server. mysql-test/r/partition.result: Update test case result. mysql-test/r/partition_csv.result: Update test case result. mysql-test/t/partition.test: Move test cases to the partition_csv test. mysql-test/t/partition_csv.test: Move tests from partition.test and remove the duplicate. Tweak the connection procedure to work with embedded.
-
Ignacio Galarza authored
-
Tatiana A. Nurnberg authored
-
Ignacio Galarza authored
-
Ignacio Galarza authored
- Define and pass compile-time path variables as preprocessor definitions to mimic the makefile build. - Set new CMake version and policy requirements explicitly. - Change DATADIR to MYSQL_DATADIR to avoid a conflicting definition in the Platform SDK header ObjIdl.h, which also defines DATADIR.
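A minimal sketch of the last point, not the actual build change: the data-directory path arrives in the code as a preprocessor definition (e.g. -DMYSQL_DATADIR="..." supplied by the build system), and the macro is named MYSQL_DATADIR so it cannot collide with the DATADIR symbol from ObjIdl.h. The fallback value below exists only so the sketch compiles on its own.

    #include <iostream>

    #ifndef MYSQL_DATADIR
    #define MYSQL_DATADIR "/usr/local/mysql/data"   // fallback for this sketch only
    #endif

    int main() {
        std::cout << "data directory compiled in as: " << MYSQL_DATADIR << "\n";
    }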
-
Gleb Shchepa authored
when used with --tab 1) New syntax: added a CHARACTER SET clause to SELECT ... INTO OUTFILE (to complement the same clause in LOAD DATA INFILE). mysqldump is updated to use this in --tab mode. 2) The ESCAPED BY/ENCLOSED BY field parameters are documented as accepting a CHAR argument, however SELECT ... INTO OUTFILE silently ignored the rest of multi-character arguments. For symmetry with LOAD DATA INFILE, the server has been modified to fail with the same error: ERROR 42000: Field separator argument is not what is expected; check the manual 3) The current LOAD DATA INFILE recognizes field/line separators "as is", without converting from the client charset to the data file charset. So it is assumed that the input file of LOAD DATA INFILE consists of data in one charset and separators in another charset. For compatibility with that [buggy] behaviour, the SELECT INTO OUTFILE implementation has been kept "as is" too, but a new warning message has been added: Non-ASCII separator arguments are not fully supported This message warns about field/line separators that contain non-ASCII symbols. client/mysqldump.c: mysqldump has been updated to call the SELECT ... INTO OUTFILE statement with the charset from the --default-charset command line parameter. mysql-test/r/mysqldump.result: Added test case for bug #30946. mysql-test/r/outfile_loaddata.result: Added test case for bug #30946. mysql-test/t/mysqldump.test: Added test case for bug #30946. mysql-test/t/outfile_loaddata.test: Added test case for bug #30946. sql/field.cc: String conversion code has been moved from check_string_copy_error() to convert_to_printable() for reuse. sql/share/errmsg.txt: New WARN_NON_ASCII_SEPARATOR_NOT_IMPLEMENTED message has been added. sql/sql_class.cc: The select_export::prepare() method has been modified to: 1) raise the ER_WRONG_FIELD_TERMINATORS error on multi-character ENCLOSED BY/ESCAPED BY field arguments, like LOAD DATA INFILE; 2) warn with the new WARN_NON_ASCII_SEPARATOR_NOT_IMPLEMENTED message on non-ASCII field or line separators. The select_export::send_data() method has been modified to convert item data to the output charset (see the new SELECT INTO OUTFILE syntax). By default the BINARY charset is used for backward compatibility. sql/sql_class.h: The select_export::write_cs field was added to keep the output charset. sql/sql_load.cc: mysql_load has been modified to warn about non-ASCII field or line separators with the new WARN_NON_ASCII_SEPARATOR_NOT_IMPLEMENTED message. sql/sql_string.cc: New global function added: convert_to_printable() (common code has been moved from check_string_copy_error()). sql/sql_string.h: The new String::is_ascii() method and the new global convert_to_printable() function have been added. sql/sql_yacc.yy: New syntax: added a CHARACTER SET clause to SELECT ... INTO OUTFILE (to complement the same clause in LOAD DATA INFILE). By default the BINARY charset is used for backward compatibility.
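A self-contained sketch of the two separator checks described in points 2) and 3) above (the helper names are made up, and the length check is simplified to bytes rather than characters): a multi-character separator argument is rejected outright, while a non-ASCII one is accepted with a warning.

    #include <iostream>
    #include <string>

    // True when every byte of the argument is plain ASCII.
    bool is_ascii(const std::string& s) {
        for (unsigned char c : s)
            if (c >= 0x80) return false;
        return true;
    }

    // Returns false (i.e. the statement should fail) for multi-character
    // arguments; prints a warning for non-ASCII single-byte separators.
    bool check_separator(const std::string& arg) {
        if (arg.size() > 1) {   // simplified: byte length, not character length
            std::cout << "error: field separator argument is not what is expected\n";
            return false;
        }
        if (!is_ascii(arg))
            std::cout << "warning: non-ASCII separators are not fully supported\n";
        return true;
    }

    int main() {
        check_separator(",");      // accepted silently
        check_separator("ab");     // rejected instead of being silently truncated
        check_separator("\xA7");   // accepted, but with a warning
    }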
-
Davi Arnaut authored
If using statement-based replication (SBR), repeatedly calling statements which are unsafe for SBR will cause a warning message to be written to the error log for each statement. This might lead to filling up the error log and there is no way to disable this behavior. The solution is to only log these messages (about statements unsafe for statement-based replication) if the log_warnings option is set. For example: SET GLOBAL LOG_WARNINGS = 0; INSERT INTO t1 VALUES(UUID()); SET GLOBAL LOG_WARNINGS = 1; INSERT INTO t1 VALUES(UUID()); In this case the message will be printed only once: [Warning] Statement may not be safe to log in statement format. Statement: INSERT INTO t1 VALUES(UUID()) mysql-test/suite/binlog/r/binlog_stm_unsafe_warning.result: Add test case result for Bug#46265 mysql-test/suite/binlog/t/binlog_stm_unsafe_warning-master.opt: Make the log_error value available. mysql-test/suite/binlog/t/binlog_stm_unsafe_warning.test: Add test case for Bug#46265 sql/sql_class.cc: Print the warning only if log_warnings is enabled.
-
Tatiana A. Nurnberg authored
We disallow the partitioning of a log table. You could, however, partition a table first and then point logging to it. This is not only against the docs, it also crashes the server. We catch this case now. mysql-test/r/partition.result: results for 40281 mysql-test/t/partition.test: test for 40281: show that trying to log to a partitioned table fails rather than crashing the server sql/ha_partition.cc: Signal that we no longer support logging to partitioned tables, as per the docs. sql/sql_partition.cc: Some commands like "USE ..." have no select, yet we may try to parse partition info after their execution if the user set a partitioned table as the log target. This shouldn't lead to a NULL-deref/crash.
-
Jim Winstead authored
-