- 29 Sep, 2010 2 commits
-
-
Dmitry Lenev authored
detector" that doesn't introduce bug #56715 "Concurrent transactions + FLUSH result in sporadical unwarranted deadlock errors". Deadlock could have occurred when workload containing a mix of DML, DDL and FLUSH TABLES statements affecting the same set of tables was executed in a heavily concurrent environment. This deadlock occurred when several connections tried to perform deadlock detection in the metadata locking subsystem. The first connection started traversing wait-for graph, encountered a sub-graph representing a wait for flush, acquired LOCK_open and dived into sub-graph inspection. Then it encountered sub-graph corresponding to wait for metadata lock and blocked while trying to acquire a rd-lock on MDL_lock::m_rwlock, since some,other thread had a wr-lock on it. When this wr-lock was released it could have happened (if there was another pending wr-lock against this rwlock) that the rd-lock from the first connection was left unsatisfied but at the same time the new rd-lock request from the second connection sneaked in and was satisfied (for this to be possible the second rd-request should come exactly after the wr-lock is released but before pending the wr-lock manages to grab rwlock, which is possible both on Linux and in our own rwlock implementation). If this second connection continued traversing the wait-for graph and encountered a sub-graph representing a wait for flush it tried to acquire LOCK_open and thus the deadlock was created. The previous patch tried to workaround this problem by not allowing the deadlock detector to lock LOCK_open mutex if some other thread doing deadlock detection already owns it and current search depth is greater than 0. Instead deadlock was reported. As a result it has introduced bug #56715. This patch solves this problem in a different way. It introduces a new rw_pr_lock_t implementation to be used by MDL subsystem instead of one based on Linux rwlocks or our own rwlock implementation. This new implementation never allows situation in which an rwlock is rd-locked and there is a blocked pending rd-lock. Thus the situation which has caused this bug becomes impossible with this implementation. Due to fact that this implementation is optimized for wr-lock/unlock scenario which is most common in the MDL subsystem it doesn't introduce noticeable performance regressions in sysbench tests. Moreover it significantly improves situation for POINT_SELECT test when many connections are used. No test case is provided as this bug is very hard to repeat in MTR environment but is repeatable with the help of RQG tests. This patch also doesn't include a test for bug #56715 "Concurrent transactions + FLUSH result in sporadical unwarranted deadlock errors" as it takes too much time to be run as part of normal test-suite runs.
-
Jon Olav Hauglid authored
This patch moves the regression test from variables.test to variables_debug.test as the debug system variable is not available on release builds.
-
- 28 Sep, 2010 1 commit
-
-
Jon Olav Hauglid authored
This crash occurred if the same debug trace file was closed twice, leading to the same memory being freed twice. This could occur if the "debug" server system variable referred to the same trace file in both global and session scope. Example of an order of events that would lead to a crash:
1) Enable debug tracing to a trace file (global scope)
2) Enable debug tracing to the same trace file (session scope)
3) Reset debug settings (global scope)
4) Reset debug settings (session scope)
This caused a crash because the trace file was, by mistake, closed in 3), leading to the same memory being freed twice when the file was closed again in 4).

Internally, the debug settings are stored in a stack, with session settings (if any) on top and the global settings below. Each connection has its own stack. When a set of settings is changed, it must be determined whether its debug trace file is to be closed. Before, this was done by only checking below on the settings stack. So if the global settings were changed, an existing debug trace file reference in the session settings would be missed. This caused the file to be closed even though it was still in use, leading to a crash later when it was closed again.

This patch fixes the problem by preventing the trace file from being shared between global and session settings. If session debug settings are set without specifying a new trace file, stderr is used for output. This is a change in behaviour and should be reflected in the documentation. Test case added to variables.test.
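A minimal SQL sketch of the four steps above, for a debug build; the trace file path and DBUG control string are illustrative:

  SET GLOBAL  debug = 'd:t:o,/tmp/mysqld.trace';  -- 1) global scope: opens the trace file
  SET SESSION debug = 'd:t:o,/tmp/mysqld.trace';  -- 2) session scope: same trace file
  SET GLOBAL  debug = '';  -- 3) global reset closed the shared file by mistake
  SET SESSION debug = '';  -- 4) session reset closed it again -> double free, crash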
-
- 24 Sep, 2010 3 commits
-
-
Jon Olav Hauglid authored
After the patch for Bug#54579, multi-row inserts done with INSERT DELAYED are binlogged as normal INSERTs. During processing of the statement, a new query string without the DELAYED keyword is made. The problem was that this new string was constructed incorrectly when the INSERT DELAYED was part of a prepared statement: data was read outside the allocated buffer.

The reason for this bug was that a pointer to the position of the DELAYED keyword inside the query string was stored when parsing the statement. This pointer was then later (at runtime) used (via pointer subtraction) to find the number of characters to skip when making a new query string without DELAYED. But when the statement was re-executed as part of a prepared statement, the original pointer was invalid and the pointer subtraction gave a wrong/random result.

This patch fixes the problem by instead storing the offsets from the beginning of the query string to the start and end of the DELAYED keyword. These values do not depend on the memory position of the query string at runtime and therefore do not give wrong results when the statement is executed as a prepared statement.

This bug was a regression introduced by the patch for Bug#54579. No test case added as this bug is already covered by the existing binlog.binlog_unsafe test case when running with valgrind.
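A minimal sketch of the failing pattern, assuming a hypothetical MyISAM table t1:

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  PREPARE ps FROM 'INSERT DELAYED INTO t1 VALUES (1), (2), (3)';
  EXECUTE ps;  -- first execution: query rewritten without DELAYED, OK
  EXECUTE ps;  -- re-execution: the stored pointer no longer matched the
               -- query string, so the rewrite read outside the buffer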
-
Jon Olav Hauglid authored
but broken. Before this patch, it was allowed to use stored functions in HANDLER ... READ statements. The problem was that this functionality was not really supported by the code. For example, proper locking was not performed, and it was also possible to break replication by having stored functions that performed updates. This patch disallows the use of stored functions in HANDLER ... READ. Any such statement will now give an ER_NOT_SUPPORTED_YET error. This is an incompatible change and should be reflected in the documentation. Test case added to handler_myisam/handler_innodb.test.
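A sketch of the kind of statement that is now rejected; the table t1 and the stored function f1 are hypothetical:

  HANDLER t1 OPEN;
  HANDLER t1 READ FIRST WHERE f1(a) > 0;  -- stored function in HANDLER ... READ:
                                          -- now fails with ER_NOT_SUPPORTED_YET
  HANDLER t1 CLOSE;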
-
Jon Olav Hauglid authored
-
- 23 Sep, 2010 2 commits
-
-
Mats Kindahl authored
-
Jon Olav Hauglid authored
reports corruption along with timeout

This patch updates the result file for the parts.partition_special_innodb test case, which was, by mistake, not updated in the original patch.
-
- 22 Sep, 2010 1 commit
-
-
Jon Olav Hauglid authored
REPAIR of merge table
Bug #56422 CHECK TABLE run when the table is locked reports corruption along with timeout

The crash happened if a table maintenance statement (ANALYZE TABLE, REPAIR TABLE, etc.) was executed on a MERGE table and opening and locking a child table failed. This could for example happen if a child table did not exist or if a lock timeout happened while waiting for a conflicting metadata lock to disappear. Since opening and locking the MERGE table and its children failed, the tables would be closed and the metadata locks released. However, TABLE_LIST::table for the MERGE table would still be set, with its value invalid since the tables had been closed. This caused the table maintenance statement to try to continue and upgrade the metadata lock on the MERGE table. But since the lock had already been released, this caused a segfault.

This patch fixes the problem by setting TABLE_LIST::table to NULL if open_and_lock_tables() fails. This prevents maintenance statements from continuing and trying to upgrade the metadata lock.

The patch includes a 5.5 version of the fix for Bug #46339 crash on REPAIR TABLE merge table USE_FRM. That bug caused REPAIR TABLE ... USE_FRM to trigger an assertion when used on merge tables.

The patch also enables the CHECK TABLE statement for log tables. Before, CHECK TABLE for log tables gave ER_CANT_LOCK_LOG_TABLE, yet still counted the statement as successfully executed. With the changes to table maintenance statement error handling in this patch, CHECK TABLE would no longer be considered successful in this case. This would have caused upgrade scripts to mistakenly think that the general and slow logs are corrupted and have to be repaired. Enabling CHECK TABLE for log tables prevents this from happening.

Finally, the patch changes the error message from "Corrupt" to "Operation failed" for a number of issues not related to table corruption, for example "Lock wait timeout exceeded" and "Deadlock found trying to get lock". Test cases added to merge.test and check.test.
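A sketch of the crash scenario, assuming a hypothetical MERGE table whose child table does not exist:

  CREATE TABLE m1 (a INT) ENGINE=MRG_MYISAM UNION=(t_missing);
  REPAIR TABLE m1;  -- the child cannot be opened; before the fix the statement
                    -- continued and segfaulted while upgrading the
                    -- already-released metadata lock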
-
- 21 Sep, 2010 2 commits
-
-
Mats Kindahl authored
-
Evgeny Potemkin authored
-
- 17 Sep, 2010 4 commits
-
-
Marc Alff authored
CHECKSUM TABLE for performance schema tables could cause uninitialized memory reads. The root cause is a design flaw in the implementation of mysql_checksum_table(), which does not honor null fields. However, fixing this bug in CHECKSUM TABLE is risky, as it can cause the checksum value to change. This fix implements a workaround, systematically resetting field values even for null fields, so that the field memory representation is always initialized with a known value.
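For illustration, a statement of the kind that triggered the uninitialized reads (any performance schema table with nullable fields would do):

  CHECKSUM TABLE performance_schema.events_waits_history;
  -- before the fix, null fields left uninitialized bytes in the record buffer;
  -- they are now reset to a known value before the checksum is computed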
-
Marc Alff authored
-
Alfranio Correia authored
-
Marc Alff authored
Before this fix, the test output for perfschema.server_init would vary between executions, because some of the objects tested were not guaranteed to exist in all configurations / code paths. This fix removes these weak tests. Also, comments referring to abandoned code have been cleaned up.
-
- 16 Sep, 2010 3 commits
-
-
Jon Olav Hauglid authored
-
Dmitry Lenev authored
tree for embedded server

Test case for bug #56251 "Deadlock with INSERT DELAYED and MERGE tables" can't be run against the embedded server. The embedded server converts all DELAYED INSERTs into ordinary INSERTs, and this test can't work properly if such a conversion happens. Moved this test from merge.test to delayed.test, which is skipped if the test suite is run with the --embedded-server option.
-
Jon Olav Hauglid authored
The problem was that RENAME TABLE caused an assert if the system variable lower_case_table_names was 2 (the default on Mac OS X) and the old table name was given in upper case. This caused lowercase_table2.test to fail. The assert checks that an exclusive metadata lock is held by the connection trying to do RENAME TABLE - specifically during updates of table triggers. The assert was triggered since the check is case sensitive and the lock was held on the normalized (lower case) version of the table name.

This patch fixes the problem by making sure a normalized version of the table name is used for the metadata lock check, while using a non-normalized version of the table name for the rename of trigger files. The same is done for ALTER TABLE ... RENAME. Regression testing for the bug itself is already covered by lowercase_table2.test. Additional coverage added to lowercase_fs_off.test.
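A sketch of the failing case, assuming lower_case_table_names=2 and hypothetical names (a trigger is present since the assert fired during trigger updates):

  CREATE TABLE t1 (a INT);
  CREATE TRIGGER trg1 BEFORE INSERT ON t1 FOR EACH ROW SET @x = 1;
  RENAME TABLE T1 TO t2;  -- upper-case old name tripped the case-sensitive
                          -- metadata lock check before the fix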
-
- 15 Sep, 2010 3 commits
-
-
Marc Alff authored
Before this fix, the server could crash inside a memcpy when reading data from the EVENTS_WAITS_CURRENT / HISTORY / HISTORY_LONG tables. The root cause is that the length used in a memcpy could be corrupted when another thread writes data to the wait record being read. Reading unsafe data is ok, per design choice, and the code does sanitize the data in general, but it did not sanitize the length given to memcpy. The fix is to also sanitize the schema name / object name / file name length when extracting the data to produce a row.
-
Dmitry Lenev authored
tables". Attempting to issue an INSERT DELAYED statement for a MERGE table might have caused a deadlock if it happened as part of a transaction or under LOCK TABLES, and there was a concurrent DDL or LOCK TABLES ... WRITE statement which tried to lock one of its underlying tables. The problem occurred when a delayed insert handler thread tried to open a MERGE table and discovered that to do this it had also to open all underlying tables and hence acquire metadata locks on them. Since metadata locks on the underlying tables were not pre-acquired by the connection thread executing INSERT DELAYED, attempts to do so might lead to waiting. In this case the connection thread had to wait for the delayed insert thread. If the thread which was preventing the lock on the underlying table from being acquired had to wait for the connection thread (due to this or other metadata locks), a deadlock occurred. This deadlock was not detected by the MDL deadlock detector since waiting for the handler thread by the connection thread is not represented in the wait-for graph. This patch solves the problem by ensuring that the delayed insert handler thread never tries to open underlying tables of a MERGE table. Instead open_tables() is aborted right after the parent table is opened and a ER_DELAYED_NOT_SUPPORTED error is emitted (which is passed to the connection thread and ultimately to the user).
-
Olav Sandstaa authored
The crash during boot was caused by a DBUG_PRINT statement in fill_schema_schemata() (in sql_show.cc). This DBUG_PRINT statement contained several instances of %s in the format string, and for one of these a NULL pointer was given as the argument. This caused the call to vsnprintf() to crash when running on Solaris. The fix is to replace the call to vsnprintf() with my_vsnprintf(), which handles a NULL pointer being passed as the argument for %s. This patch also extends my_vsnprintf() to support %i in the format string.
-
- 14 Sep, 2010 1 commit
-
-
Marc Alff authored
-
- 13 Sep, 2010 2 commits
-
-
Marc Alff authored
Implemented post review comments. Added --force to the mysql_upgrade command in the test scripts, so that the test output does not depend on whether other tests involving an upgrade have been executed in the same test suite run.
-
Jon Olav Hauglid authored
The problem was that issuing XA END when the XA transaction was already ended caused an assertion. This assertion tests that the server does not try to send OK to the client if an error has already been reported. The bug was only noticeable on debug versions of the server.

The reason for the problem was that the trans_xa_end() function reported success if the transaction was in the XA_IDLE state at the end, regardless of any errors that occurred during processing of trans_xa_end(). So if the transaction state was already XA_IDLE, reported errors would be ignored.

This patch fixes the problem by having trans_xa_end() take reported errors into consideration. The patch also fixes a similar bug with XA PREPARE. Test case added to xa.test.
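A minimal sketch of the assertion scenario, using a hypothetical xid:

  XA START 'trx1';
  XA END 'trx1';
  XA END 'trx1';  -- already ended: an error is reported, but trans_xa_end()
                  -- still signalled success before the fix -> assertion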
-
- 10 Sep, 2010 2 commits
-
-
Tor Didriksen authored
-
Jon Olav Hauglid authored
-
- 09 Sep, 2010 9 commits
-
-
Marc Alff authored
Before this fix, the server could crash during shutdown due to race conditions that occurred when killing the server. In particular, the performance schema instrumentation handle, PSI_server, and the performance schema itself would be cleaned up too soon, causing race conditions with a running kill server thread. The specifics of the race condition found are that the main thread executing "PSI_server= NULL" can cause crashes in other threads still running, which are executing "if (PSI_server != NULL) PSI_server->xxx()" as part of the performance schema instrumentation. While the bug was reported for the kill server thread, in theory the same crash could happen with the signal thread, as found by code analysis.

The correct fix would be to only shut down the performance schema and set PSI_server to NULL after every other thread is guaranteed to have completed, including the kill_server_thread. However, due to the existing mysqld server design, this is not the case. See in particular bug number 56666.

The workaround used to fix this race condition is to simply not perform the call to shutdown_performance_schema() when the server exits, and to keep the PSI_server pointer unchanged. This will cause memory leaks to be reported by tools like valgrind, but no memory leak actually happens because the process is about to exit(). As a result, the file mysql-test/valgrind.supp has been updated to filter out these false positive messages.

This code has been tested by running the following tests in parallel in a loop, which have been known to fail with race conditions in the past:
- rpl_change_master
- binlog_max_extension
- events_restart
- rpl_heartbeat_basic
and no crash or test failure has been seen with the changed code.
-
Marc Alff authored
Before this fix, it was possible to build the server:
- with the performance schema
- with a dummy implementation of my_atomic (MY_ATOMIC_MODE_DUMMY).
In this case, the resulting binary will just crash, as this configuration is not supported. This fix enforces that the build will fail with a compilation error in this configuration, instead of resulting in a broken binary.
-
Marc Alff authored
-
Dmitry Lenev authored
table causes assert failure". Attempting to use a FLUSH TABLES table_list WITH READ LOCK statement for a MERGE table led to an assertion failure if one of its children was not present in the list of tables to be flushed. The problem was not visible in non-debug builds.

The assertion failure was caused by the fact that in such situations the FLUSH TABLES table_list WITH READ LOCK implementation tried to use (e.g. lock) such child tables without acquiring metadata locks on them. This happened because when opening tables we assumed metadata locks on all tables were already acquired earlier during statement execution, and this assumption was false for MERGE children.

This patch fixes the problem by ensuring at open_tables() time that we try to acquire metadata locks on all tables to be opened. For normal tables such requests are satisfied instantly, since locks were already acquired for them. For MERGE children, metadata locks are acquired in the normal fashion.

Note that FLUSH TABLES merge_table WITH READ LOCK will lock for read both the MERGE table and its children, but will flush only the MERGE table. To flush the children, one has to mention them in the table list explicitly. This is expected behavior, and it is consistent with usage patterns for this statement (e.g. in the mysqlhotcopy script).
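A sketch of the documented behavior, with hypothetical tables:

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE m1 (a INT) ENGINE=MRG_MYISAM UNION=(t1);
  FLUSH TABLES m1 WITH READ LOCK;      -- read-locks m1 and child t1, flushes only m1
  UNLOCK TABLES;
  FLUSH TABLES m1, t1 WITH READ LOCK;  -- list the children explicitly to flush them too
  UNLOCK TABLES;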
-
Tor Didriksen authored
-
Vasil Dimov authored
mysys/my_sync.c: In function 'my_sync_dir': mysys/my_sync.c:103:29: error: unused parameter 'dir_name' mysys/my_sync.c:103:43: error: unused parameter 'my_flags' mysys/my_sync.c: In function 'my_sync_dir_by_file': mysys/my_sync.c:144:37: error: unused parameter 'file_name' mysys/my_sync.c:144:52: error: unused parameter 'my_flags'
-
Vasil Dimov authored
mysys/my_gethwaddr.c: In function 'my_gethwaddr': mysys/my_gethwaddr.c:67:11: error: pointer targets in assignment differ in signedness
-
Davi Arnaut authored
Add a virtual destructor. Class has virtual functions.
-
Evgeny Potemkin authored
-
- 08 Sep, 2010 4 commits
-
-
Marc Alff authored
With recent changes in the performance schema default sizing parameters, the memory used by a mysqld binary increased accordingly. This negatively affects the MTR test suite, because running several tests in parallel now consumes more resources.

The fix is to leave the default production values unchanged, and to configure the MTR environment to limit the memory used when running tests in the test suite, which is ok because only a few objects are typically used within a test script. This fix:
- changed the default configuration in MTR to use less memory
- adjusted the performance schema tests accordingly
Note that 1,000 mutex instances proved too few and caused test failures in the past in team trees, so the default used in MTR is now 10,000. The amount of memory used by the performance schema itself can be observed with the statement SHOW ENGINE PERFORMANCE_SCHEMA STATUS.
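For example (the option spelling below is illustrative of how MTR would pass a reduced sizing to mysqld):

  SHOW ENGINE PERFORMANCE_SCHEMA STATUS;
  -- MTR starts mysqld with reduced sizing, e.g.:
  --   --loose-performance-schema-max-mutex-instances=10000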
-
Alexey Botchkov authored
-
Alexey Botchkov authored
-
Jon Olav Hauglid authored
ALTER TABLE on a MERGE table could cause a deadlock with two other connections if we reached a situation where:
1) A connection doing ALTER TABLE can't upgrade to MDL_EXCLUSIVE on the parent table, but holds TL_READ_NO_INSERT on the child tables.
2) A connection doing DELETE on a child table can't get TL_WRITE on it since ALTER TABLE holds TL_READ_NO_INSERT.
3) A connection doing SELECT on the parent table can't get TL_READ on the child tables since TL_WRITE is ahead in the lock queue, but holds MDL_SHARED_READ on the parent table, preventing ALTER TABLE from upgrading.

For regular tables, this deadlock is avoided by having ALTER TABLE take an MDL_SHARED_NO_WRITE metadata lock on the table. This prevents DELETE from acquiring MDL_SHARED_WRITE on the table before ALTER TABLE tries to upgrade to MDL_EXCLUSIVE. In the example above, SELECT would therefore not be blocked by the pending DELETE, as DELETE would not be able to enter TL_WRITE in the table lock queue.

This patch fixes the problem for MERGE tables by using the same metadata lock type for child tables as for the parent table. The child tables will in this case be locked with MDL_SHARED_NO_WRITE, preventing DELETE from acquiring a metadata lock and entering the table lock queue.

Change in behavior: by taking the same metadata lock for child tables as for the parent table, LOCK TABLES on the parent table will now also implicitly lock the child tables. Since LOCK TABLES on the parent table now takes more than one metadata lock, it is possible for LOCK TABLES ... WRITE on the parent table or child tables to give an ER_LOCK_DEADLOCK error.

Test case added to mdl_sync.test. merge.test/.result has been updated to reflect the change to LOCK TABLES.
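A sketch of the behavior change, with hypothetical tables:

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE m1 (a INT) ENGINE=MRG_MYISAM UNION=(t1);
  LOCK TABLES m1 WRITE;  -- now also takes metadata locks on child t1;
                         -- may return ER_LOCK_DEADLOCK under concurrency
  UNLOCK TABLES;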
-
- 07 Sep, 2010 1 commit
-
-
Evgeny Potemkin authored
The Item_func_str_to_date class didn't provide the correct integer DATETIME representation as expected. This led to wrong comparison results and prevented the STR_TO_DATE function from being used with indexes. Also, the STR_TO_DATE function was inconsistent in throwing warnings/errors; this is now fixed. val_int and result_as_longlong methods were added to the Item_func_str_to_date class.
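For example, a comparison that now evaluates correctly via the integer DATETIME representation:

  SELECT STR_TO_DATE('2010-09-07 10:00:00', '%Y-%m-%d %H:%i:%s') =
         CAST('2010-09-07 10:00:00' AS DATETIME) AS equal;  -- returns 1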
-