- 05 May, 2009 2 commits
-
-
Narayanan V authored
When a user selected an unsupported character set for an IBMDB2I table, error 2501 or 2511 may have been returned, giving the appearance of an internal programming error. This patch consolidates these errors into a single descriptive error message for the common case of an unsupported character set. The new error number is 2504 and indicates a user error. The errors 2501 and 2511 remain to indicate cases of internal programming errors.
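As an illustration only, here is a minimal C++ sketch of the error consolidation described above; the enum names and the helper function are hypothetical stand-ins, not the IBMDB2I source.

```cpp
// Sketch only: hypothetical names, not the actual IBMDB2I code.
#include <cstdio>

enum {
  DB2I_ERR_INTERNAL_A     = 2501,  // internal programming error (kept)
  DB2I_ERR_UNSUPP_CHARSET = 2504,  // new: user selected an unsupported charset
  DB2I_ERR_INTERNAL_B     = 2511   // internal programming error (kept)
};

// Map a failed character set lookup to the descriptive user error instead of
// letting it surface as 2501/2511.
static int convertCharsetError(bool charsetSupported, int internalErr)
{
  if (!charsetSupported)
    return DB2I_ERR_UNSUPP_CHARSET;  // single, descriptive user error
  return internalErr;                // genuine internal errors keep 2501/2511
}

int main()
{
  std::printf("%d\n", convertCharsetError(false, DB2I_ERR_INTERNAL_A)); // 2504
  std::printf("%d\n", convertCharsetError(true,  DB2I_ERR_INTERNAL_B)); // 2511
}
```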
-
Alexander Barkov authored
on cp932 and sjis environments. Problem: case conversion erroneously changed the second byte of multi-byte sequences because single-byte functions were called by mistake. Fix: call multi-byte aware functions instead.
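A minimal, self-contained sketch of the failure mode and the multi-byte-aware fix, assuming an sjis-like lead-byte range; none of this is the actual server code.

```cpp
// In sjis/cp932 the second byte of a multi-byte character can fall in the
// ASCII letter range; per-byte tolower() would rewrite it and corrupt the
// character. A multi-byte aware conversion must skip trailing bytes.
#include <cctype>
#include <cstdio>
#include <string>

// Hypothetical lead-byte test for an sjis-like charset (assumption).
static bool is_mb_lead(unsigned char c)
{
  return (c >= 0x81 && c <= 0x9F) || (c >= 0xE0 && c <= 0xFC);
}

static std::string casedn_mb_aware(const std::string &in)
{
  std::string out;
  for (size_t i = 0; i < in.size(); )
  {
    unsigned char c = in[i];
    if (is_mb_lead(c) && i + 1 < in.size())
    {
      out += in[i];                     // copy the multi-byte pair untouched
      out += in[i + 1];
      i += 2;
    }
    else
    {
      out += (char) std::tolower(c);    // single-byte: safe to case-convert
      ++i;
    }
  }
  return out;
}

int main()
{
  std::string sjis = "\x83\x41Z";       // katakana (0x83 0x41) followed by 'Z'
  std::printf("%s\n", casedn_mb_aware(sjis) == "\x83\x41z" ? "ok" : "corrupted");
}
```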
-
- 04 May, 2009 4 commits
-
-
Sergei Golubchik authored
removed a few sprintf's
-
Martin Hansson authored
'INSERT ... SELECT' statements (merge of the fix described in the following commit)
-
Martin Hansson authored
'INSERT ... SELECT' statements. The code that produces result rows expected that a duplicate row error could not occur in INSERT ... SELECT statements with unfulfilled WHERE conditions. This may happen, however, if the SELECT list contains only aggregate functions. Fixed by checking whether an error occurred before trying to send EOF to the client.
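A small sketch of the guard described above, using hypothetical stand-ins for the server's session and protocol objects; it is not the mysqld code itself.

```cpp
#include <cstdio>

struct FakeSession { bool error_raised; };   // stand-in for the real THD state

static void send_eof(FakeSession &)   { std::printf("EOF sent\n"); }
static void send_error(FakeSession &) { std::printf("error packet sent\n"); }

// End-of-statement handling: only send EOF when no error (e.g. a duplicate
// key error raised by INSERT ... SELECT with an aggregate-only select list)
// has been recorded for the statement.
static void end_of_statement(FakeSession &s)
{
  if (s.error_raised)
    send_error(s);
  else
    send_eof(s);
}

int main()
{
  FakeSession ok{false}, dup{true};
  end_of_statement(ok);   // EOF sent
  end_of_statement(dup);  // error packet sent
}
```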
-
Andrei Elkin authored
-
- 02 May, 2009 1 commit
-
-
Serge Kozlov authored
1. Replace waiting for the SQL thread to stop with waiting for an SQL error on the slave and a stopped SQL thread. 2. Remove debug code because it is already implemented in MTR2.
-
- 30 Apr, 2009 25 commits
-
-
Gleb Shchepa authored
-
Gleb Shchepa authored
EXPLAIN EXTENDED of a nested query containing an error (1054: Unknown column '...' in 'field list') may cause a server crash. A parse error like the one described above forces a call to JOIN::destroy() on the malformed subquery. That JOIN::destroy() function closes and frees temporary tables. However, temporary fields of these tables may be listed in the st_select_lex::group_list of the outer query, and that st_select_lex may not clean them up properly. So, after the JOIN::destroy() call, that st_select_lex::group_list may contain Item_field objects with dangling pointers to freed temporary table Field objects. That caused a crash.
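A conceptual sketch of the dangling-reference problem, with hypothetical types; the real cleanup operates on st_select_lex::group_list and the JOIN, not on these containers.

```cpp
#include <cstdio>
#include <memory>
#include <vector>

struct TmpField { int value; };

struct TmpTable
{
  std::vector<std::unique_ptr<TmpField>> fields;  // owned by the temp table
};

struct OuterGroupList
{
  std::vector<TmpField *> items;  // non-owning, like Item_field -> Field links
};

int main()
{
  TmpTable tmp;
  tmp.fields.push_back(std::make_unique<TmpField>(TmpField{42}));

  OuterGroupList group_list;
  group_list.items.push_back(tmp.fields[0].get());

  // On a parse error the subquery's JOIN is destroyed and its temporary
  // table fields are freed ...
  tmp.fields.clear();

  // ... but the outer group list still holds raw pointers to them.
  // Dereferencing one now is the crash described above, so cleanup must also
  // detach these references before the fields are freed.
  group_list.items.clear();   // the essence of the fix: no dangling items left

  std::printf("dangling references left: %zu\n", group_list.items.size());
}
```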
-
Georgi Kodinov authored
-
Sergey Vojtovich authored
-
Andrei Elkin authored
-
Andrei Elkin authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Andrei Elkin authored
-
Andrei Elkin authored
-
Narayanan V authored
-
Andrei Elkin authored
-
Narayanan V authored
This patch adds corrections to the original patch submitted 2009-04-08 (http://lists.mysql.com/commits/71607): - fixed that the original patch didn't work because of an incorrect condition; - added a test case.
-
Andrei Elkin authored
-
Andrei Elkin authored
my_error() was invoked in the branch of reset_slave() where purge_relay_logs() fails without passing sql_errno to it. Fixed by setting sql_errno= ER_RELAY_LOG_FAIL in that branch.
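A sketch of the pattern, using a stand-in for my_error() and a placeholder error number; the real fix passes the server's ER_RELAY_LOG_FAIL code to the real my_error().

```cpp
#include <cstdarg>
#include <cstdio>

enum { ER_RELAY_LOG_FAIL = 9999 };  // placeholder value, not the real error code

// Stand-in for my_error(): formats a message for a given error number.
static void my_error_like(int sql_errno, const char *fmt, ...)
{
  std::va_list ap;
  va_start(ap, fmt);
  std::printf("ERROR %d: ", sql_errno);
  std::vprintf(fmt, ap);
  std::printf("\n");
  va_end(ap);
}

// false == failure here (assumption for the demo).
static bool purge_relay_logs_stub() { return false; }

int main()
{
  int sql_errno = 0;
  if (!purge_relay_logs_stub())
    sql_errno = ER_RELAY_LOG_FAIL;  // the fix: set a concrete error number
  if (sql_errno)
    my_error_like(sql_errno, "Failed purging old relay logs");
}
```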
-
Matthias Leich authored
Merge into the actual tree.
-
Sergey Glukhov authored
-
Satya B authored
-
Satya B authored
table corruption. Moved the test case from the file myisam.test to the new test file myisam_debug.test.
-
Matthias Leich authored
This is a "null" merge because the fix is already in 5.1
-
Sergey Glukhov authored
The error happens because sp_head::MULTI_RESULTS is not set for a stored procedure that contains a 'SHOW TABLE STATUS' command. The fix is to add a SQLCOM_SHOW_TABLE_STATUS case to the sp_get_flags_for_command() function.
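A sketch of the kind of switch-case addition described, with stand-in enum and flag values rather than the real sql_command and sp_head definitions.

```cpp
#include <cstdio>

enum SqlCommand
{
  SQLCOM_SELECT,
  SQLCOM_SHOW_STATUS,
  SQLCOM_SHOW_TABLE_STATUS,
  SQLCOM_SET_OPTION
};

enum { MULTI_RESULTS = 2 };  // stand-in for the sp_head flag

// Commands that send a result set to the client must set MULTI_RESULTS so
// the stored procedure is flagged correctly.
static unsigned sp_flags_for_command(SqlCommand cmd)
{
  unsigned flags = 0;
  switch (cmd)
  {
  case SQLCOM_SELECT:
  case SQLCOM_SHOW_STATUS:
  case SQLCOM_SHOW_TABLE_STATUS:   // the added case: SHOW TABLE STATUS sends rows
    flags |= MULTI_RESULTS;
    break;
  default:
    break;
  }
  return flags;
}

int main()
{
  std::printf("%u\n", sp_flags_for_command(SQLCOM_SHOW_TABLE_STATUS)); // 2
}
```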
-
Alexey Botchkov authored
-
Alexey Botchkov authored
Per-file comments: tests/mysql_client_test.c: the test for bug 37956 isn't relevant anymore. The query there, 'select point(?,?)', doesn't produce an error.
-
Satya B authored
Killing the insert-select statement corrupts the MyISAM table only when the destination table is empty and has indexes. When we bulk-insert a large amount of data into an empty destination table, we disable the indexes for fast inserts; the data is then inserted and the indexes are re-enabled after the bulk_insert operation. Killing the query aborts the repair-table operation during the enable-indexes phase, leading to table corruption. We now truncate the table when we detect that enable indexes was killed for a bulk-insert query. As the table was empty before the operation, truncating it restores the original state.
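A sketch of the recovery decision described above, with hypothetical names; the real change lives in the MyISAM bulk-insert/repair path.

```cpp
#include <cstdio>

struct BulkInsertState
{
  bool table_was_empty;       // indexes were disabled because the table was empty
  bool enable_indexes_killed; // the query was killed while re-enabling indexes
};

// If enabling indexes was killed for a bulk insert into an initially empty
// table, the safe recovery is to truncate: the table returns to its
// pre-statement state instead of being left with a corrupted index.
static const char *recover(const BulkInsertState &s)
{
  if (s.enable_indexes_killed && s.table_was_empty)
    return "truncate table";
  return "keep data, report error";
}

int main()
{
  std::printf("%s\n", recover({true, true}));   // truncate table
  std::printf("%s\n", recover({false, true}));  // keep data, report error
}
```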
-
- 29 Apr, 2009 8 commits
-
-
Martin Hansson authored
-
Vladislav Vaintroub authored
-
Martin Hansson authored
A bug in the initialization of key segment information made it point to the wrong bit, since a bit index was used where its int value was needed. This led to misinterpretation of bit columns read from the MyISAM record format when a NULL bit pushed them over a byte boundary. Fixed by using the int value of the bit instead.
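A small illustration of the bit-index versus bit-value confusion described above; the helpers are hypothetical, not the MyISAM keyseg code.

```cpp
#include <cstdio>

// A bit's *index* is its position within the byte (0..7); its *int value*
// is the mask actually needed to test the bit in the record buffer.
static unsigned bit_value(unsigned bit_index) { return 1u << bit_index; }

static bool bit_is_set(unsigned char byte, unsigned bit_index)
{
  // A buggy variant would do: byte & bit_index
  // (tests the wrong bit, or no bit at all when bit_index is 0).
  return (byte & bit_value(bit_index)) != 0;
}

int main()
{
  unsigned char rec_byte = 0x04;                 // only bit 2 set
  std::printf("%d\n", bit_is_set(rec_byte, 2));  // 1
  std::printf("%d\n", bit_is_set(rec_byte, 0));  // 0
}
```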
-
Vladislav Vaintroub authored
key_buffer_size. The cause of the corruption was a number overflow when multiplying two ulong values: the number of used keycache blocks by the size of a single block. The result of the multiplication exceeded the ulong range (4G), and this led to an incorrectly calculated buffer offset in the key cache. The fix is to use size_t for the multiplication result. This patch also fixes a pointless cast in safemalloc (size of the allocated block to uint) that creates a lot of false-alarm warnings when using a big keycache (> 4GB) in debug mode.
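A self-contained illustration of the 32-bit overflow and the size_t fix; the block counts and sizes are example values, not the key cache internals.

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
  // With a > 4GB key cache the product of two 32-bit numbers wraps around.
  uint32_t used_blocks = 6u * 1024 * 1024;   // 6M blocks (example value)
  uint32_t block_size  = 1024;               // 1KB per block (example value)

  uint32_t wrong = used_blocks * block_size;           // wraps at 4GB
  size_t   right = (size_t) used_blocks * block_size;  // the fix: widen first

  std::printf("32-bit product: %u\n", wrong);   // 2147483648 (wrapped)
  std::printf("size_t product: %zu\n", right);  // 6442450944 on a 64-bit build
}
```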
-
Narayanan V authored
The storage engine was not correctly handling the case in which rnd_pos is executed for a handler without a preceding rnd_next or index read operation. As a result, an uninitialized file handle was sometimes passed to the QMY_READ API. The fix clears the rrnAssocHandle at the beginning of each read operation and then checks whether it has been set to a valid handle value before attempting to use it in rnd_pos. If rrnAssocHandle has not been set by a previous read operation, rnd_pos instead falls back to the currently active handle.
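A sketch of the handle-validation logic described, using hypothetical types; rrnAssocHandle and QMY_READ are just the names taken from the message above, and the surrounding structure is invented for illustration.

```cpp
#include <cstdio>

typedef int FileHandle;
static const FileHandle INVALID_HANDLE = -1;

struct Db2iHandlerSketch
{
  FileHandle activeHandle   = 42;              // handle for the open table
  FileHandle rrnAssocHandle = INVALID_HANDLE;  // cleared at the start of each read

  void start_read() { rrnAssocHandle = INVALID_HANDLE; }
  void rnd_next()   { rrnAssocHandle = activeHandle; }  // a scan sets it

  // rnd_pos must not pass an unset handle to the read API; fall back to the
  // currently active handle instead.
  FileHandle handle_for_rnd_pos(bool *fell_back) const
  {
    *fell_back = (rrnAssocHandle == INVALID_HANDLE);
    return *fell_back ? activeHandle : rrnAssocHandle;
  }
};

int main()
{
  Db2iHandlerSketch h;
  bool fell_back;

  h.start_read();                    // rnd_pos with no preceding read
  h.handle_for_rnd_pos(&fell_back);
  std::printf("%s\n", fell_back ? "fell back to active handle" : "used assoc handle");

  h.rnd_next();                      // a scan has set the associated handle
  h.handle_for_rnd_pos(&fell_back);
  std::printf("%s\n", fell_back ? "fell back to active handle" : "used assoc handle");
}
```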
-
Alexey Botchkov authored
-
Alexey Botchkov authored
-
Alexey Botchkov authored
-