- 08 Oct, 2010 10 commits
-
-
Mats Kindahl authored
-
Mats Kindahl authored
Adding a comment to scheduler_types on the default values used.
-
Mats Kindahl authored
The server contained code for the server variable and option thread_pool_size, but this server variable was not used anywhere. The variable is probably a remnant of backporting too much from 6.0 (specifically, the thread pool implementation, which this variable is associated with, was not backported from 6.0). This patch eliminates the variable from the server.
-
Davi Arnaut authored
Only wait for a single debug signal at a time as the signal state is global. Also, do not activate the query cache debug sync points if the thread has no associated THD session.
-
Tor Didriksen authored
Buffer overrun when trying to format DBL_MAX
-
Sergey Vojtovich authored
engine is not available. We need to add the loose prefix to the example load option.
-
Alexey Botchkov authored
Now do no initializations for --help; do them for --verbose --help, though.
per-file comments:
sql/mysqld.cc
  Bug#30025 Mysqld prints out warnings/errors being run with --no-defaults --help: quit with the help message at once when --help is given.
-
Alexey Botchkov authored
Issue an error if the user specifies multiple commands to run. There was also an unnoticed bug: DO_CHECK was actually 0, which led to wrong actions in some cases. For the above reason, the mysqlcheck.test contained commands with dubious meaning; the extra commands were removed from it.
per-file comments:
client/mysqlcheck.c
  Bug#35269 mysqlcheck behaves different depending on order of parameters: abort with an error if multiple commands are given.
mysql-test/r/mysqlcheck.result
  Bug#35269 mysqlcheck behaves different depending on order of parameters: result file updated.
mysql-test/t/mysqlcheck.test
  Bug#35269 mysqlcheck behaves different depending on order of parameters: test case added; non-working commands removed from some mysqlcheck calls.
-
Davi Arnaut authored
Fix warnings related to the use of the deprecated gets() function and to passing NULL to a non-pointer argument of the sys_var constructor.
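For illustration only: a warning-free replacement for a deprecated gets() call usually looks like the minimal sketch below; the buffer name and size are hypothetical and not taken from the actual patch.

```cpp
#include <cstdio>
#include <cstring>

int main() {
  char line[512];  // hypothetical buffer; the real code and size differ

  // gets(line) is deprecated because it cannot limit the input length.
  // fgets() bounds the read to the buffer size and silences the warning.
  if (std::fgets(line, sizeof(line), stdin) != nullptr) {
    line[std::strcspn(line, "\n")] = '\0';  // strip the newline fgets() keeps
    std::printf("read: %s\n", line);
  }
  return 0;
}
```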
-
Davi Arnaut authored
Move Query_cache_wait_state declaration out of a debug block.
-
- 07 Oct, 2010 21 commits
-
-
Luis Soares authored
-
Davi Arnaut authored
The problem was that threads waiting on the query cache lock were not easily seen due to the lack of a state indicating that the thread is waiting on said lock. This made it difficult for users to quickly spot (for example, via SHOW PROCESSLIST) a query cache contention problem. The solution is to update the thread state when the query cache lock needs to be acquired. Whenever the lock is to be acquired, the thread state is updated to "Waiting for query cache lock" and is reset once the lock is granted or the wait is interrupted. The intention is to make query cache related hangs more evident. To further investigate query cache related locking problems, one may use PERFORMANCE_SCHEMA to track the overhead associated with the locking bits and determine which particular lock is the contention point.
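A minimal sketch of the idea, assuming a simplified session object and a plain mutex rather than the server's actual THD and query cache structures: an RAII guard publishes the wait state before blocking and restores the previous state once the lock is granted.

```cpp
#include <mutex>
#include <string>
#include <utility>

// Illustrative stand-in for a session; the server's THD differs.
struct Session {
  std::string state;   // what SHOW PROCESSLIST would display
};

// RAII guard: set a wait state while blocking on a lock, restore it afterwards.
class Wait_state_guard {
 public:
  Wait_state_guard(Session *session, std::string waiting_state)
      : session_(session), saved_(session ? session->state : std::string()) {
    if (session_) session_->state = std::move(waiting_state);
  }
  ~Wait_state_guard() {
    if (session_) session_->state = saved_;
  }
 private:
  Session *session_;
  std::string saved_;
};

std::mutex query_cache_lock;   // stand-in for the query cache's structure lock

void do_query_cache_work(Session *session) {
  {
    Wait_state_guard waiting(session, "Waiting for query cache lock");
    query_cache_lock.lock();   // may block; the wait state is visible meanwhile
  }                            // state restored as soon as the lock is granted
  // ... work on the query cache under the lock ...
  query_cache_lock.unlock();
}
```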
-
Evgeny Potemkin authored
-
Evgeny Potemkin authored
The coalesce function returned the DATETIME type due to a DATETIME argument, but since it is not a date/time function it cannot return a correct integer value for it. Nevertheless, Item_datetime_cache was chosen to cache coalesce's result, and that led to a wrong result. Now Item_datetime_cache is used only for those functions that can return a correct integer representation of DATETIME values.
-
Sergey Vojtovich authored
-
Luis Soares authored
The error message for ER_SLAVE_HEARTBEAT_VALUE_OUT_OF_RANGE was hard coded. Additionally, the same error was used for three separate error symptoms: 1. when the heartbeat period exceeds the value of slave_net_timeout, 2. when it is smaller than 1 millisecond, and 3. when it is not in range, i.e., either negative or greater than the maximum allowed. We fix this by splitting it into three distinct errors and by removing the message from the source code and moving it to the errmsg-utf8.txt file.
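A rough sketch of the three-way validation described above, with hypothetical names and an illustrative upper bound; it is not the server's actual code:

```cpp
// Illustrative only: distinguish the three symptoms with distinct errors
// instead of reusing ER_SLAVE_HEARTBEAT_VALUE_OUT_OF_RANGE for all of them.
constexpr double kMaxHeartbeatPeriodSec = 4294967.0;   // assumed upper bound

enum class HeartbeatCheck {
  kOk,
  kExceedsSlaveNetTimeout,   // symptom 1: larger than slave_net_timeout
  kLessThanOneMillisecond,   // symptom 2: non-zero but below 1 ms
  kOutOfRange                // symptom 3: negative or above the maximum
};

HeartbeatCheck check_heartbeat_period(double period_sec,
                                      double slave_net_timeout_sec) {
  if (period_sec < 0.0 || period_sec > kMaxHeartbeatPeriodSec)
    return HeartbeatCheck::kOutOfRange;
  if (period_sec != 0.0 && period_sec < 0.001)
    return HeartbeatCheck::kLessThanOneMillisecond;
  if (period_sec > slave_net_timeout_sec)
    return HeartbeatCheck::kExceedsSlaveNetTimeout;
  return HeartbeatCheck::kOk;
}
```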
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Jon Olav Hauglid authored
No conflicts
-
Dmitry Shulga authored
set to 128k.
-
Vasil Dimov authored
-
Calvin Sun authored
-
Martin Hansson authored
This is the 5.5 version of the fix. The 5.1 version was too complicated to merge and was null merged. This is a regression from the fix for bug no 38999. A storage engine capable of reading only a subset of a table's columns updates corresponding bits in the read buffer to signal that it has read NULL values for the corresponding columns. It cannot, and should not, update any other bits. Bug no 38999 occurred because the implementation of UPDATE statements compares the NULL bits using memcmp, inadvertently comparing bits that were never requested from the storage engine. The regression was caused by the storage engine trying to alleviate the situation by writing to all NULL bits, even those that it had no knowledge of. This has devastating effects for the index merge algorithm, which relies on all NULL bits, except those explicitly requested, being left unchanged. The fix reverts the fix for bug no 38999 in both InnoDB and the InnoDB plugin and changes the server's method of comparing records. For engines that always read entire rows, we proceed as usual. For engines capable of reading only selected columns, the record buffers are now compared on a column-by-column basis. An assertion was also added so that non-comparable buffers are never read. Some relevant copy-pasted code was also consolidated into a new function.
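A minimal sketch of column-by-column comparison, assuming simplified column descriptors; the server's Field objects and read-set bitmaps are more involved:

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Illustrative column descriptor; not the server's Field class.
struct Column {
  std::size_t offset;        // byte offset of the column data in the record
  std::size_t length;        // packed length of the column data
  std::size_t null_byte;     // which record byte holds this column's NULL flag
  unsigned char null_bit;    // which bit within that byte
  bool requested;            // was the column part of the read set?

  bool is_null(const unsigned char *rec) const {
    return (rec[null_byte] & null_bit) != 0;
  }
};

// Compare old and new row images column by column, looking only at columns the
// storage engine was actually asked to read, instead of a memcmp() over buffers
// whose unrequested NULL bits are undefined.
bool records_differ(const std::vector<Column> &columns,
                    const unsigned char *old_rec,
                    const unsigned char *new_rec) {
  for (const Column &col : columns) {
    if (!col.requested) continue;            // never touch columns never read
    const bool old_null = col.is_null(old_rec);
    const bool new_null = col.is_null(new_rec);
    if (old_null != new_null) return true;
    if (!old_null &&
        std::memcmp(old_rec + col.offset, new_rec + col.offset, col.length) != 0)
      return true;
  }
  return false;
}
```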
-
Magnus Blåudd authored
- Fix the version of NDB in MySQL Server to 5.5.7 (although it's actually 6.2.18)
-
Magnus Blåudd authored
- Move file ha_ndbcluster.m4
- Move "sinclude" directive from configure.in to storage/ndb/plug.in
-
Evgeny Potemkin authored
-
Martin Hansson authored
-
Evgeny Potemkin authored
-
Martin Hansson authored
This is a regression from the fix for bug no 38999. A storage engine capable of reading only a subset of a table's columns updates corresponding bits in the read buffer to signal that it has read NULL values for the corresponding columns. It cannot, and should not, update any other bits. Bug no 38999 occurred because the implementation of UPDATE statements compares the NULL bits using memcmp, inadvertently comparing bits that were never requested from the storage engine. The regression was caused by the storage engine trying to alleviate the situation by writing to all NULL bits, even those that it had no knowledge of. This has devastating effects for the index merge algorithm, which relies on all NULL bits, except those explicitly requested, being left unchanged. The fix reverts the fix for bug no 38999 in both InnoDB and the InnoDB plugin and changes the server's method of comparing records. For engines that always read entire rows, we proceed as usual. For engines capable of reading only selected columns, the record buffers are now compared on a column-by-column basis. An assertion was also added so that non-comparable buffers are never read. Some relevant copy-pasted code was also consolidated into a new function.
-
Evgeny Potemkin authored
The subtime function wasn't able to produce a correct integer representation of its result. For constant expressions the Item_datetime_cache is used to speed up evaluation, and Item_datetime_cache expects the underlying item to return a correct integer representation of a DATETIME value. These two factors combined led to a wrong query result. Now Item_func_add_time has a val_datetime function which performs the calculation and saves the result into the given MYSQL_TIME struct; it also sets null_value to the appropriate value. The val_int and val_str member functions convert the result obtained from val_datetime to an integer or a string respectively and return it.
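A small sketch of that division of labor, assuming a simplified time struct in place of MYSQL_TIME and placeholder date math: one function computes the value and sets the null flag, and thin val_int/val_str wrappers convert it, so the two representations can never disagree.

```cpp
#include <cstdio>
#include <string>

// Stand-in for MYSQL_TIME; the server's struct has more members.
struct TimeValue {
  int year, month, day, hour, minute, second;
};

class AddTimeFunc {
 public:
  bool null_value = false;

  // Core evaluation: compute the result into 'out' and set null_value.
  // (Sketch only; the real Item_func_add_time does the actual date math.)
  bool val_datetime(TimeValue *out) {
    *out = TimeValue{2010, 10, 7, 12, 30, 0};   // placeholder result
    null_value = false;
    return !null_value;
  }

  // Thin wrappers: convert the same computed value to the requested type.
  long long val_int() {
    TimeValue t;
    if (!val_datetime(&t)) return 0;
    return ((long long)t.year * 10000 + t.month * 100 + t.day) * 1000000LL +
           t.hour * 10000 + t.minute * 100 + t.second;   // YYYYMMDDhhmmss
  }

  std::string val_str() {
    TimeValue t;
    if (!val_datetime(&t)) return "";
    char buf[32];
    std::snprintf(buf, sizeof(buf), "%04d-%02d-%02d %02d:%02d:%02d",
                  t.year, t.month, t.day, t.hour, t.minute, t.second);
    return buf;
  }
};
```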
-
- 06 Oct, 2010 8 commits
-
-
Alexander Nozdrin authored
The fix is to:
- introduce the ORACLE_WELCOME_COPYRIGHT_NOTICE define to have a single place to specify the copyright notice;
- replace custom copyright notices with ORACLE_WELCOME_COPYRIGHT_NOTICE in programs.
-
Davi Arnaut authored
Bug#54678: InnoDB, TRUNCATE, ALTER, I_S SELECT, crash or deadlock
- Incompatible change: truncate no longer resorts to a row-by-row delete if the storage engine does not support the truncate method. Consequently, the count of affected rows does not, in any case, reflect the actual number of rows.
- Incompatible change: it is no longer possible to truncate a table that participates as a parent in a foreign key constraint, unless it is a self-referencing constraint (both parent and child are in the same table). To work around this incompatible change and still be able to truncate such tables, disable foreign key checks with SET foreign_key_checks=0 before the truncate. Alternatively, if foreign key checks are necessary, please use a DELETE statement without a WHERE condition.
Problem description: The problem was that for storage engines that do not support truncate table via an external drop and recreate, such as InnoDB, which implements truncate via an internal drop and recreate, the delete_all_rows method could be invoked with a shared metadata lock, causing problems if the engine needed exclusive access to some internal metadata. This problem originated with the fact that there is no truncate-specific handler method, which ended up leading to an abuse of the delete_all_rows method that is primarily used for delete operations without a condition.
Solution: The solution is to introduce a truncate handler method that is invoked when the engine does not support truncation via a table drop and recreate. This method is invoked under an exclusive metadata lock, so that there is only a single instance of the table when the method is invoked. Also, the method is not invoked and an error is thrown if the table is a parent in a non-self-referencing foreign key relationship. This was necessary to avoid inconsistency, as some integrity checks are bypassed. This is in line with the fact that truncate is primarily a DDL operation that was designed to quickly remove all data from a table.
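A loose sketch of the dispatch logic described above, with hypothetical names and stubbed-out operations; it is not the server's actual implementation:

```cpp
#include <stdexcept>

// Hypothetical table descriptor standing in for the server's handler types.
struct TableRef {
  bool engine_can_drop_and_recreate;   // cheap truncate via DROP + CREATE?
  bool is_parent_in_foreign_key;       // referenced by another table?
  bool is_self_referencing_fk_only;    // FK points back to the same table?
};

void acquire_exclusive_metadata_lock(TableRef &) { /* placeholder: MDL upgrade */ }
void drop_and_recreate(TableRef &) { /* placeholder: fast DROP + CREATE path */ }
void engine_truncate(TableRef &) { /* placeholder: engine-specific truncate */ }

void truncate_table(TableRef &table) {
  // Refuse when the table is a parent in a non-self-referencing FK,
  // since truncate bypasses integrity checks.
  if (table.is_parent_in_foreign_key && !table.is_self_referencing_fk_only)
    throw std::runtime_error("Cannot truncate a table referenced in a foreign key");

  if (table.engine_can_drop_and_recreate) {
    drop_and_recreate(table);          // external drop + recreate
  } else {
    // Engine-internal truncate needs sole access to the table's metadata,
    // so it runs only under an exclusive metadata lock.
    acquire_exclusive_metadata_lock(table);
    engine_truncate(table);
  }
}
```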
-
Alexander Barkov authored
Problem: CASE didn't work with a mixture of different character sets in THEN/ELSE in some cases. This happened because, after character set aggregation, the newly created Item_func_conv_charset items corresponding to the THEN/ELSE arguments were not put back into the args[] array. Fix: put all Item_func_conv_charset items back into args[].
@ mysql-test/include/ctype_numconv.inc
@ mysql-test/r/ctype_ucs.result
  Adding tests
@ sql/item_cmpfunc.cc
  Put "agg" back into args[] after character set aggregation.
-
Vladislav Vaintroub authored
-
Luis Soares authored
x86_64 debug_max: removed test cases affected by this bug from the experimental list.
-
Jon Olav Hauglid authored
-
Magne Mahre authored
thd->in_sub_stmt In a precursor patch for Bug#52044 (revid:bzr/kostja@stripped), a number of code reorganizations were made. In addition, some assertions were added to ensure the correct transactional state. The reorganization had a small glitch, so statements that were active in the query cache were not followed by a statement commit/rollback (this code was removed). A section of the trans_commit_stmt/trans_rollback_stmt code is there to clear the thd->transaction.stmt list of affected storage engines. When a new statement is initiated, an assert introduced by the Bug#52044 patch checks that this list is cleared. When the query cache is accessed, this list may be populated, and since it is not committed it will not be cleared. This fix adds an explicit statement commit or rollback for statements that are contained in the query cache.
-
Jon Olav Hauglid authored
for ALTER TABLE + MERGE tables The patch for Bug#56292 changed how metadata locks are taken for MERGE tables. After the patch, locking the MERGE table will also lock the children tables with the same metadata lock type. This means that LOCK TABLES on a MERGE table also will implicitly do LOCK TABLES on the children tables. A consequence of this change is that it is possible to do LOCK TABLES on a child table both explicitly and implicitly with the same statement, and these two locks can be of different strength. For example, LOCK TABLES child READ, merge WRITE. In LOCK TABLES mode, we are not allowed to take new locks, and each statement must therefore try to find an existing TABLE instance with a suitable lock. The code that searched for a suitable TABLE instance only considered table-level locks. If a child table was locked twice, it was therefore possible for this code to find a TABLE instance with suitable table-level locks but without a suitable metadata lock. This problem caused the assert in upgrade_shared_lock_to_exclusive() to be triggered as it tried to upgrade an MDL_SHARED lock to EXCLUSIVE. The problem was a regression caused by the patch for Bug#56292. This patch fixes the problem by partially reverting the changes done by Bug#56292. Now, the children tables will only use the same metadata lock as the MERGE table for MDL_SHARED_NO_WRITE when not in locked tables mode. This means that LOCK TABLE on a MERGE table will not implicitly lock the children tables. This still fixes the original problem in Bug#56292 without causing a regression. Test case added to merge.test.
-
- 05 Oct, 2010 1 commit
-
-
Calvin Sun authored
to local repo.
-