- 24 Feb, 2014 1 commit
-
-
Sergey Petrunya authored
- Change the default flag value to ON.
- Update the testcases to be run with extended_keys=ON:
  = Trivial test result updates.
  = If the extended_keys setting makes a difference for a testcase, run the testcase with extended_keys=off. There were only a few such cases.
- The update to vcol_select_innodb looks like a worse plan, but it will be gone in 10.0.
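A minimal hedged sketch of comparing plans with the flag on and off, assuming extended_keys is the optimizer_switch flag of that name in MariaDB 5.5/10.0 (the table is hypothetical):

  -- Hypothetical table: the secondary key (a) is implicitly extended with the PK.
  CREATE TABLE t1 (pk INT PRIMARY KEY, a INT, KEY (a)) ENGINE=InnoDB;
  SET optimizer_switch = 'extended_keys=on';
  EXPLAIN SELECT pk FROM t1 WHERE a = 10 ORDER BY pk;
  SET optimizer_switch = 'extended_keys=off';
  EXPLAIN SELECT pk FROM t1 WHERE a = 10 ORDER BY pk;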
-
- 05 Aug, 2013 1 commit
-
-
Sergey Petrunya authored
-
- 09 Apr, 2013 2 commits
-
-
Sergei Golubchik authored
may not exist (the table exists only in the engine).
-
Sergei Golubchik authored
* print "table doesn't exist in engine" when a table doesn't exist in the engine, instead of "file not found" (if no file was involved) * print a complete filename that cannot be found ('t1.MYI', not 't1') * it's not an error for a DROP if a table doesn't exist in the engine (or some table files cannot be found) - if the DROP succeeded regardless
-
- 15 Jan, 2013 1 commit
-
-
Olav Sandstaa authored
WITH A VARIABLE AND ORDER BY Bug#16035412 MYSQL SERVER 5.5.29 WRONG SORTING USING COMPLEX INDEX

This is a fix for a regression introduced by Bug#12667154. Bug#12667154 attempted to fix a performance problem with subqueries that did filesort. For doing filesort, the optimizer creates a quick select object to use when building the sort index. This quick select object was deleted after the first call to create_sort_index(). Thus, for queries where the subquery was executed multiple times, the quick object was only used for the first execution; for all later executions of the subquery, filesort used a complete table scan for building the sort index. The fix for Bug#12667154 tried to fix this by not deleting the quick object after the first execution of create_sort_index(), so that it would be re-used for building the sort index by the following executions of the subquery.

The regression introduced by Bug#12667154 is that, because the quick select object is not deleted after building the sort index, the quick object could in some cases also be used during the second phase of the execution of the subquery, instead of the created sort index. This caused wrong results to be returned.

The fix for this issue is to delete the reference to the select object after it has been used in create_sort_index(). This way the select and quick objects are not available during the second phase of the execution of the select operation. To ensure that the select object can be re-used by the following executions of the subquery, we make a copy of the select pointer; it is used to restore the select object after the select operation is completed.

mysql-test/suite/innodb/r/innodb_mysql.result: Changed explain output: the explain now contains "Using where" since we have restored the select pointer after doing the filesort operation.
sql/sql_select.cc: Change create_sort_index() so that it always sets the pointer to the select object to NULL. This is done in order to avoid the select->quick object being used when executing the main part of the select operation.
sql/sql_select.h: New member in JOIN_TAB: saved_select. Used by create_sort_index() to make a backup copy of the select pointer.
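A hedged sketch of the query shape in question, with hypothetical tables (not the original bug report's test case): the subquery is re-executed per outer row, has a range condition (so it gets a quick select), and sorts with ORDER BY ... LIMIT (so it goes through create_sort_index()):

  CREATE TABLE t1 (a INT, b INT, KEY (a)) ENGINE=InnoDB;
  CREATE TABLE t2 (a INT) ENGINE=InnoDB;
  SELECT t2.a,
         (SELECT t1.b FROM t1
           WHERE t1.a > t2.a          -- range condition: builds a quick select
           ORDER BY t1.b LIMIT 1)     -- sorting goes through create_sort_index()
  FROM t2;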
-
- 14 Jan, 2013 1 commit
-
-
Olav Sandstaa authored
WITH A VARIABLE AND ORDER BY Bug#16035412 MYSQL SERVER 5.5.29 WRONG SORTING USING COMPLEX INDEX

This is a fix for a regression introduced by Bug#12667154. Bug#12667154 attempted to fix a performance problem with subqueries that did filesort. For doing filesort, the optimizer creates a quick select object to use when building the sort index. This quick select object was deleted after the first call to create_sort_index(). Thus, for queries where the subquery was executed multiple times, the quick object was only used for the first execution; for all later executions of the subquery, filesort used a complete table scan for building the sort index. The fix for Bug#12667154 tried to fix this by not deleting the quick object after the first execution of create_sort_index(), so that it would be re-used for building the sort index by the following executions of the subquery.

The regression introduced by Bug#12667154 is that, because the quick select object is not deleted after building the sort index, the quick object could in some cases also be used during the second phase of the execution of the subquery, instead of the created sort index. This caused wrong results to be returned.

The fix for this issue is to delete the reference to the select object after it has been used in create_sort_index(). This way the select and quick objects are not available during the second phase of the execution of the select operation. To ensure that the select object can be re-used by the following executions of the subquery, we make a copy of the select pointer; it is used to restore the select object after the select operation is completed.

mysql-test/suite/innodb/r/innodb_mysql.result: Changed explain output: the explain now contains "Using where" since we have restored the select pointer after doing the filesort operation.
sql/sql_select.cc: Change create_sort_index() so that it always sets the pointer to the select object to NULL. This is done in order to avoid the select->quick object being used when executing the main part of the select operation.
sql/sql_select.h: New member in JOIN_TAB: saved_select. Used by create_sort_index() to make a backup copy of the select pointer.
-
- 18 Sep, 2012 1 commit
-
-
Michael Widenius authored
create table t1 (a smallint primary key auto_increment);
insert into t1 values(32767);
insert into t1 values(NULL);
ERROR 1062 (23000): Duplicate entry '32767' for key 'PRIMARY'

Now one always gets the error HA_ERR_AUTOINC_ERANGE=167 "Out of range value for column", independent of storage engine, SQL mode or number of inserted rows. This is a unique error that is easier to test for in replication. Another bug fix is that we now get an error when trying to insert a too-big auto-generated value, even in non-strict mode. Before, the maximum column value was inserted instead. This patch also fixes some issues with inserting negative numbers into an auto-increment column.

Fixed so that ER_DUP_ENTRY and HA_ERR_AUTOINC_ERANGE are compared the same way between master and slave. This ensures that replication works from an old server to a new slave for auto-increment overflow errors. Added SQLSTATE errors for handler errors.

Smaller bug fixes:
* Added warnings for duplicate key errors when using INSERT IGNORE
* Fixed a bug where using --skip-log-bin followed by --log-bin set log-bin to "0"
* Allow one to see how cmake is called by using --just-print --just-configure

BUILD/FINISH.sh: --just-print --just-configure now shows how cmake would be invoked. Good for understanding parameters to cmake.
cmake/configure.pl: --just-print --just-configure now shows how cmake would be invoked. Good for understanding parameters to cmake.
include/CMakeLists.txt: Added handler_state.h.
include/handler_state.h: SQLSTATE for handler error messages. Required for HA_ERR_AUTOINC_ERANGE, but also covers some other cases.
mysql-test/extra/binlog_tests/binlog.test: Fixed old wrong behaviour. Added more tests.
mysql-test/extra/binlog_tests/binlog_insert_delayed.test: Reset binary log to only print what's necessary in show_binlog_events.
mysql-test/extra/rpl_tests/rpl_auto_increment.test: Update to new error codes.
mysql-test/extra/rpl_tests/rpl_insert_delayed.test: Ignore warnings as this depends on how the test is run.
mysql-test/include/strict_autoinc.inc: One now gets an error on overflow.
mysql-test/r/auto_increment.result: Update results after fixing error message.
mysql-test/r/auto_increment_ranges_innodb.result: Test new behaviour.
mysql-test/r/auto_increment_ranges_myisam.result: Test new behaviour.
mysql-test/r/commit_1innodb.result: Added warnings for duplicate key error.
mysql-test/r/create.result: Added warnings for duplicate key error.
mysql-test/r/insert.result: Added warnings for duplicate key error.
mysql-test/r/insert_select.result: Added warnings for duplicate key error.
mysql-test/r/insert_update.result: Added warnings for duplicate key error.
mysql-test/r/mix2_myisam.result: Added warnings for duplicate key error.
mysql-test/r/myisam_mrr.result: Added warnings for duplicate key error.
mysql-test/r/null_key.result: Added warnings for duplicate key error.
mysql-test/r/replace.result: Update to new error codes.
mysql-test/r/strict_autoinc_1myisam.result: Update to new error codes.
mysql-test/r/strict_autoinc_2innodb.result: Update to new error codes.
mysql-test/r/strict_autoinc_3heap.result: Update to new error codes.
mysql-test/r/trigger.result: Added warnings for duplicate key error.
mysql-test/r/xtradb_mrr.result: Added warnings for duplicate key error.
mysql-test/suite/binlog/r/binlog_innodb_row.result: Updated result.
mysql-test/suite/binlog/r/binlog_row_binlog.result: Out-of-range data for auto-increment is not inserted anymore.
mysql-test/suite/binlog/r/binlog_statement_insert_delayed.result: Updated result.
mysql-test/suite/binlog/r/binlog_stm_binlog.result: Out-of-range data for auto-increment is not inserted anymore.
mysql-test/suite/binlog/r/binlog_unsafe.result: Updated result.
mysql-test/suite/innodb/r/innodb-autoinc.result: Update to new error codes.
mysql-test/suite/innodb/r/innodb-lock.result: Updated results.
mysql-test/suite/innodb/r/innodb.result: Updated results.
mysql-test/suite/innodb/r/innodb_bug56947.result: Updated results.
mysql-test/suite/innodb/r/innodb_mysql.result: Updated results.
mysql-test/suite/innodb/t/innodb-autoinc.test: Update to new error codes.
mysql-test/suite/maria/maria3.result: Updated result.
mysql-test/suite/maria/mrr.result: Updated result.
mysql-test/suite/optimizer_unfixed_bugs/r/bug43617.result: Updated result.
mysql-test/suite/rpl/r/rpl_auto_increment.result: Update to new error codes.
mysql-test/suite/rpl/r/rpl_insert_delayed,stmt.rdiff: Updated results.
mysql-test/suite/rpl/r/rpl_loaddatalocal.result: Updated results.
mysql-test/t/auto_increment.test: Update to new error codes.
mysql-test/t/auto_increment_ranges.inc: Test new behaviour.
mysql-test/t/auto_increment_ranges_innodb.test: Test new behaviour.
mysql-test/t/auto_increment_ranges_myisam.test: Test new behaviour.
mysql-test/t/replace.test: Update to new error codes.
mysys/my_getopt.c: Fixed a bug where using --skip-log-bin followed by --log-bin set log-bin to "0".
sql/handler.cc: Ignore negative values for signed auto-increment columns. Always give an error if we get an overflow for an auto-increment column (instead of inserting the maximum value). Ensure that the row number is correct for the out-of-range-value error message. Fixed wrong printing of the column name for "Out of range value" errors. Fixed that INSERT_ID is correctly replicated also for out-of-range auto-increment values. Fixed that print_keydup_error() can also be used to generate warnings. Return HA_ERR_AUTOINC_ERANGE (167) instead of ER_WARN_DATA_OUT_OF_RANGE for auto-increment overflow.
sql/handler.h: Allow INSERT IGNORE to continue also after out-of-range inserts. Fixed that print_keydup_error() can also be used to generate warnings.
sql/log_event.cc: Added DBUG_PRINT. Fixed so that ER_AUTOINC_READ_FAILED, ER_DUP_ENTRY and HA_ERR_AUTOINC_ERANGE are compared the same way between master and slave. This ensures that replication works from an old server to a new slave for auto-increment overflow errors.
sql/sql_insert.cc: Add warnings for duplicate key errors when using INSERT IGNORE.
sql/sql_state.c: Added handler errors.
sql/sql_table.cc: Update call to print_keydup_error().
storage/innobase/handler/ha_innodb.cc: Fixed increment handling of auto-increment columns to be consistent with the rest of MariaDB.
storage/xtradb/handler/ha_innodb.cc: Fixed increment handling of auto-increment columns to be consistent with the rest of MariaDB.
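A hedged illustration of the changed behaviour described above (a minimal sketch; exact error numbers and texts may differ by version):

  CREATE TABLE t1 (a SMALLINT PRIMARY KEY AUTO_INCREMENT) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (32767);
  -- Previously (non-strict mode) the auto-generated value was capped at the
  -- column maximum, producing a duplicate-key error; now the statement fails
  -- with the out-of-range auto-increment error (HA_ERR_AUTOINC_ERANGE),
  -- regardless of storage engine or SQL mode.
  INSERT INTO t1 VALUES (NULL);
  -- INSERT IGNORE on a duplicate key now produces a warning instead of silence:
  INSERT IGNORE INTO t1 VALUES (32767);
  SHOW WARNINGS;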
-
- 23 May, 2012 1 commit
-
-
Sergey Petrunya authored
-
- 17 May, 2012 1 commit
-
-
Igor Babaev authored
The optimizer chose a less efficient execution plan due to the following defects in the code:
1. the generic handler function handler::keyread_time did not take into account that with clustered primary keys the record data is included in each index entry
2. the function make_join_readinfo erroneously decided that an index-only scan could not be used if a join cache was employed.
No additional test case was added. Adjusted some of the test results.
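A hedged sketch (hypothetical schema, not from the patch itself) of the plan property at stake: an index-only scan combined with a join buffer.

  CREATE TABLE t1 (a INT, b INT, PRIMARY KEY (a, b)) ENGINE=InnoDB;
  CREATE TABLE t2 (a INT, KEY (a)) ENGINE=InnoDB;
  -- Both tables can be read index-only here; after the fix, EXPLAIN may show
  -- "Using index" for the joined table even when a join buffer is used, and
  -- the cost of scanning the clustered PRIMARY key is estimated realistically.
  EXPLAIN SELECT t1.a, t1.b, t2.a FROM t1 JOIN t2 ON t2.a = t1.a;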
-
- 16 Feb, 2012 1 commit
-
-
Sergey Petrunya authored
timestamp: Thu 2011-12-01 15:12:10 +0100
Fix for Bug#13430436 PERFORMANCE DEGRADATION IN SYSBENCH ON INNODB DUE TO ICP

When running sysbench on InnoDB there is a performance degradation due to index condition pushdown (ICP). Several of the queries in sysbench have a WHERE condition that the optimizer uses for executing these queries as range scans. The upper and lower limits of the range scan ensure that the WHERE condition is fulfilled. Still, the WHERE condition is part of the query's condition, and if ICP is enabled the condition will be pushed down to InnoDB as an index condition. Because the range scan's upper and lower limits already ensure that the WHERE condition is fulfilled, the pushed index condition will not filter out any records. As a result, the use of ICP for these queries results in a performance overhead for sysbench. This overhead comes from using resources to determine the part of the condition that can be pushed down to InnoDB, and from the overhead in InnoDB for executing the pushed index condition.

With the default configuration for sysbench the range scans use the primary key, which is a clustered index in InnoDB. Using ICP on a clustered index provides the lowest performance benefit, since the entire record is part of the clustered index and in InnoDB it has the highest relative overhead for executing the pushed index condition.

The fix for removing the overhead ICP introduces when running sysbench is to disable the use of ICP when the index used by the query is a clustered index. When WL#6061 is implemented this change should be re-evaluated.
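A hedged sketch of the sysbench-style access pattern being described, with a hypothetical table: a primary-key range whose bounds already enforce the whole WHERE clause, so a pushed index condition could never reject a row.

  CREATE TABLE sbtest (id INT PRIMARY KEY, k INT, c CHAR(120)) ENGINE=InnoDB;
  -- Executed as a range scan on the clustered PRIMARY key; the BETWEEN bounds
  -- are the entire WHERE condition, so pushing it down as an index condition
  -- filters nothing. After this fix, ICP is simply not used when the chosen
  -- index is the clustered index.
  EXPLAIN SELECT c FROM sbtest WHERE id BETWEEN 100 AND 200;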
-
- 14 Jan, 2012 1 commit
-
-
Igor Babaev authored
Adjusted results for a few test cases.
-
- 19 Dec, 2011 2 commits
-
-
Igor Babaev authored
If the sorted table belongs to a dependent subquery then the function create_sort_index() should not clear TABLE::select and TABLE::quick for this table after the sort of the table has been performed, because these members are needed for the second execution of the subquery.
-
unknown authored
The patch differs from the original MySQL patch as follows:
- All test case differences have been reviewed one by one, and care has been taken to restore the original plan so that each test case executes the code path it was designed for.
- A bug was found and fixed in MariaDB 5.3 in Item_allany_subselect::cleanup().
- ORDER BY is not removed because we are unsure of all effects, and it would prevent enabling ORDER BY ... LIMIT subqueries.
- ref_pointer_array.m_size is not adjusted because we don't do array bounds checking, and because it looks risky.

Original comment by Jorgen Loland:
-------------------------------------------------------------
WL#5953 - Optimize away useless subquery clauses

For IN/ALL/ANY/SOME/EXISTS subqueries, the following clauses are meaningless:
* ORDER BY (since we don't support LIMIT in these subqueries)
* DISTINCT
* GROUP BY if there is no HAVING clause and no aggregate functions

This WL detects and optimizes away these useless parts of the query during JOIN::prepare()
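A hedged illustration (hypothetical tables) of the clauses this optimization targets inside an IN subquery:

  CREATE TABLE t1 (a INT) ENGINE=MyISAM;
  CREATE TABLE t2 (a INT, b INT) ENGINE=MyISAM;
  -- DISTINCT and the aggregate-free GROUP BY inside the IN subquery cannot
  -- change which outer rows match, so they are optimized away during
  -- JOIN::prepare(); in this MariaDB backport, ORDER BY is intentionally
  -- left in place.
  SELECT * FROM t1
  WHERE t1.a IN (SELECT DISTINCT t2.a FROM t2 GROUP BY t2.a, t2.b ORDER BY t2.a);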
-
- 23 Nov, 2011 1 commit
-
-
Michael Widenius authored
-
- 21 Nov, 2011 1 commit
-
-
Igor Babaev authored
-
- 21 Jul, 2011 4 commits
-
-
Igor Babaev authored
off by default.
-
unknown authored
-
unknown authored
-
unknown authored
Fixed explains of previous patch.

mysql-test/r/explain.result: Fixed explains of previous patch.
mysql-test/r/join_outer.result: Fixed explains of previous patch.
mysql-test/r/negation_elimination.result: Fixed explains of previous patch.
mysql-test/r/view.result: Fixed explains of previous patch.
mysql-test/suite/innodb/r/innodb_mysql.result: Removed duplicate test suite.
mysql-test/suite/innodb/t/innodb_mysql.test: Removed duplicate test suite.
mysql-test/suite/innodb_plugin/r/innodb_mysql.result: Removed duplicate test suite.
mysql-test/suite/innodb_plugin/t/innodb_mysql.test: Removed duplicate test suite.
sql/opt_range.h: Removed incorrect fix.
sql/records.cc: Removed incorrect fix.
-
- 06 Jun, 2011 1 commit
-
-
Sergei Golubchik authored
compilation error in mysys/my_getsystime.c fixed
some redundant code removed
sec_to_time, time_to_sec, from_unixtime, unix_timestamp, @@timestamp now use decimal, not double, for numbers with a fractional part
purge_master_logs_before_date() fixed
many bugs in corner cases fixed

mysys/my_getsystime.c: compilation failure fixed
sql/sql_parse.cc: don't cut corners. it backfires.
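A hedged sketch of the user-visible part of this change, using the functions named above (exact result formatting may vary by version):

  -- These now return exact DECIMAL values rather than DOUBLE when the
  -- argument has a fractional second part:
  SELECT SEC_TO_TIME(1.5),
         TIME_TO_SEC('00:00:01.5'),
         FROM_UNIXTIME(1234567890.123456),
         UNIX_TIMESTAMP('2011-06-06 12:00:00.25');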
-
- 26 May, 2011 2 commits
-
-
Dmitry Lenev authored
HA_INNOBASE::UPDATE_ROW, TEMPORARY TABLE, TABLE LOCK".

An attempt to update an InnoDB temporary table under LOCK TABLES led to an assertion failure in both debug and production builds if this temporary table was explicitly locked for READ. The same scenario works fine for MyISAM temporary tables.

The assertion failure was caused by a discrepancy between the lock that was requested on the rows of the temporary table at LOCK TABLES time and the lock requested by the update operation. Since the SQL layer requested a read lock at LOCK TABLES time, the InnoDB engine assumed that upcoming statements executed under LOCK TABLES would only read the table and therefore should acquire only an S-lock. An update operation broke this assumption by requesting an X-lock.

Possible approaches to fixing this problem are:
1) Skip locking of temporary tables, as locking doesn't make any sense for connection-local objects.
2) Prohibit changing a temporary table locked by LOCK TABLES ... READ.
Unfortunately both of these approaches have drawbacks which make them unviable for stable versions of the server. So this patch takes another approach and changes the code in such a way that LOCK TABLES for a temporary table will always request a write lock. In the 5.5 version of this patch the switch from read lock to write lock is done on the SQL layer.

mysql-test/suite/innodb/r/innodb_mysql.result: Added test for bug #11762012 - "54553: INNODB ASSERTS IN HA_INNOBASE::UPDATE_ROW, TEMPORARY TABLE, TABLE LOCK".
mysql-test/suite/innodb/t/innodb_mysql.test: Added test for bug #11762012 - "54553: INNODB ASSERTS IN HA_INNOBASE::UPDATE_ROW, TEMPORARY TABLE, TABLE LOCK".
sql/sql_parse.cc: Since a temporary table locked by LOCK TABLES can be updated even if it was only locked for read, we always request TL_WRITE locks for such tables at LOCK TABLES time. This allows us to avoid a discrepancy between the locks acquired at LOCK TABLES time and those needed by a statement executed under LOCK TABLES. Such a discrepancy has caused problems for the InnoDB storage engine. To support this change, part of the code implementing LOCK TABLES has been moved to a helper function.
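The failing scenario, as a minimal hedged sketch (the table name is hypothetical):

  CREATE TEMPORARY TABLE tmp (a INT) ENGINE=InnoDB;
  INSERT INTO tmp VALUES (1);
  LOCK TABLES tmp READ;
  -- Before the fix this asserted in ha_innobase::update_row(), because LOCK
  -- TABLES had only requested an S-lock; temporary tables may be updated even
  -- when read-locked, and a write lock is now requested at LOCK TABLES time.
  UPDATE tmp SET a = 2;
  UNLOCK TABLES;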
-
Dmitry Lenev authored
HA_INNOBASE::UPDATE_ROW, TEMPORARY TABLE, TABLE LOCK".

An attempt to update an InnoDB temporary table under LOCK TABLES led to an assertion failure in both debug and production builds if this temporary table was explicitly locked for READ. The same scenario works fine for MyISAM temporary tables.

The assertion failure was caused by a discrepancy between the lock that was requested on the rows of the temporary table at LOCK TABLES time and the lock requested by the update operation. Since the SQL layer requested a read lock at LOCK TABLES time, the InnoDB engine assumed that upcoming statements executed under LOCK TABLES would only read the table and therefore should acquire only an S-lock. An update operation broke this assumption by requesting an X-lock.

Possible approaches to fixing this problem are:
1) Skip locking of temporary tables, as locking doesn't make any sense for connection-local objects.
2) Prohibit changing a temporary table locked by LOCK TABLES ... READ.
Unfortunately both of these approaches have drawbacks which make them unviable for stable versions of the server. So this patch takes another approach and changes the code in such a way that LOCK TABLES for a temporary table will always request a write lock. In the 5.1 version of this patch the switch from read lock to write lock is done inside InnoDB's handler methods, as doing it on the SQL layer causes compatibility troubles with FLUSH TABLES WITH READ LOCK.

mysql-test/suite/innodb/r/innodb_mysql.result: Added test for bug #11762012 - "54553: INNODB ASSERTS IN HA_INNOBASE::UPDATE_ROW, TEMPORARY TABLE, TABLE LOCK".
mysql-test/suite/innodb/t/innodb_mysql.test: Added test for bug #11762012 - "54553: INNODB ASSERTS IN HA_INNOBASE::UPDATE_ROW, TEMPORARY TABLE, TABLE LOCK".
mysql-test/suite/innodb_plugin/r/innodb_mysql.result: Added test for bug #11762012 - "54553: INNODB ASSERTS IN HA_INNOBASE::UPDATE_ROW, TEMPORARY TABLE, TABLE LOCK".
mysql-test/suite/innodb_plugin/t/innodb_mysql.test: Added test for bug #11762012 - "54553: INNODB ASSERTS IN HA_INNOBASE::UPDATE_ROW, TEMPORARY TABLE, TABLE LOCK".
storage/innobase/handler/ha_innodb.cc: Assume that a temporary table locked by LOCK TABLES can be updated even if it was only locked for read, and therefore always request an X-lock for such tables.
storage/innodb_plugin/handler/ha_innodb.cc: Assume that a temporary table locked by LOCK TABLES can be updated even if it was only locked for read, and therefore always request an X-lock for such tables.
-
- 25 May, 2011 1 commit
-
-
Igor Babaev authored
-
- 19 May, 2011 2 commits
-
-
Igor Babaev authored
Adjusted the result files in the pbxt, innodb and innodb_plugin suites. Commented out a failing test case in innodb.innodb_multi_update.test. It should be restored when the problem with multi-update of derived tables (bug 784297) is resolved.
-
Sergei Golubchik authored
sql/event_parse_data.cc: don't use the "not_used" variable
sql/item_timefunc.cc: Item_temporal_func::fix_length_and_dec() and other changes
sql/item_timefunc.h: introducing Item_timefunc::fix_length_and_dec()
sql/share/errmsg.txt: don't say "column X" in an error message that is used not only for columns
-
- 25 Apr, 2011 1 commit
-
-
Sergei Golubchik authored
-
- 19 Mar, 2011 1 commit
-
-
Sergey Petrunya authored
-
- 04 Mar, 2011 1 commit
-
-
Igor Babaev authored
The bug was a result of the fix for bug 668644, which turned out to be not quite correct. A problem appeared with HAVING conditions containing more than one predicate. If a query with an ORDER BY clause uses such a HAVING condition and the required order can be obtained with a range/index scan, then the HAVING condition has to be pushed into two different formulas (items). To be able to do this we have to create a copy of the ANDOR structure of the pushed condition.
-
- 24 Feb, 2011 1 commit
-
-
Igor Babaev authored
even in the cases when there existed range/index-merge scans that were cheaper than the full table scan. This was a defect/bug in the implementation of MWL#128. Now hash join can work not only with a full table scan of the joined table, but also with full index scans, range scans and index-merge scans. Accordingly, in the cases when hash join is used, the column 'type' in EXPLAIN can now contain 'hash_ALL', 'hash_index', 'hash_range' and 'hash_index_merge'. If hash join is coupled with a range/index_merge scan, then the columns 'key' and 'key_len' contain info not only on the used hash index, but also on the indexes used for the scan.
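A hedged sketch with hypothetical tables; the way hash join is enabled here (join_cache_level) is an assumption about the MariaDB 5.3 setting, so verify it for your version:

  SET join_cache_level = 4;   -- assumption: a level that enables hashed (BNLH) join buffers
  CREATE TABLE t1 (a INT, b INT, KEY (a)) ENGINE=MyISAM;
  CREATE TABLE t2 (a INT, b INT, KEY (a)) ENGINE=MyISAM;
  -- The equi-join on b has no usable index, so t2 can be joined via hash join;
  -- with this patch, if the hash join is fed by a range scan on t2's KEY(a),
  -- EXPLAIN can report type=hash_range and list both the hash key and the
  -- scanned index in 'key'/'key_len'.
  EXPLAIN SELECT * FROM t1, t2 WHERE t1.b = t2.b AND t2.a < 10;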
-
- 17 Feb, 2011 1 commit
-
-
Sergey Petrunya authored
-
- 05 Jan, 2011 2 commits
-
-
Michael Widenius authored
-
Igor Babaev authored
for hash join in the cases when there are no suitable indexes for these conditions.
-
- 08 Dec, 2010 1 commit
-
-
unknown authored
MBug#687320: Fix sporadic test failures in innodb_mysql.test and partition_innodb_semi_consistent.test

The problem is that these tests run with --innodb-lock-wait-timeout=2 in an .opt file (this is necessary as the built-in InnoDB does not allow changing this dynamically). This causes another part of the test to occasionally time out an UPDATE, which subsequently caused the test case to time out while waiting for a condition (a successful UPDATE) that never occurs. Fixed by re-trying the UPDATE in case of a timeout. Tested by inserting a sleep() in the connection that the UPDATE is waiting for, and checking that the retry loops a couple of times until the other connection is done and COMMITs.
-
- 04 Dec, 2010 1 commit
-
-
Igor Babaev authored
independent execution plans. Fixed a bug in Unique::unique_add that caused a crash for a query from index_intersect_innodb on some platforms. Fixed two bugs in opt_range.cc that led to choosing plans that were not the cheapest for index intersections.
-
- 23 Nov, 2010 1 commit
-
-
Sergey Glukhov authored
In the case of a low-memory sort buffer, QUICK_INDEX_MERGE_SELECT creates a temporary file where it stores the row ids which meet the QUICK_SELECT ranges, except for the clustered pk range; the clustered range is processed separately. In init_read_record we check if the temporary file is used and choose the appropriate record access method. This does not take into account that the temporary file contains only a partial result in the case of QUICK_INDEX_MERGE_SELECT with a clustered pk range. The fix is to always use rr_quick if QUICK_INDEX_MERGE_SELECT with a clustered pk range is used.

mysql-test/suite/innodb/r/innodb_mysql.result: test case
mysql-test/suite/innodb/t/innodb_mysql.test: test case
mysql-test/suite/innodb_plugin/r/innodb_mysql.result: test case
mysql-test/suite/innodb_plugin/t/innodb_mysql.test: test case
sql/opt_range.h: added new method
sql/records.cc: The fix is to always use rr_quick if QUICK_INDEX_MERGE_SELECT with a clustered pk range is used.
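A hedged sketch (hypothetical table, not the bug's test case) of the access pattern: an index_merge that combines a secondary-key range with a clustered primary-key range, under a deliberately small sort buffer:

  CREATE TABLE t1 (pk INT PRIMARY KEY, a INT, KEY (a)) ENGINE=InnoDB;
  SET sort_buffer_size = 32768;   -- small buffer to exercise the temporary-file path
  -- An index_merge plan over KEY(a) and the clustered PRIMARY key is possible
  -- here; the PK range part is processed separately, so the temporary file
  -- holds only a partial result and reading must go through rr_quick.
  EXPLAIN SELECT * FROM t1 WHERE a = 5 OR pk BETWEEN 100 AND 200;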
-
- 10 Nov, 2010 1 commit
-
-
Dmitry Shulga authored
ALTER TABLE RENAME, DISABLE KEYS.

The code of ALTER TABLE RENAME, DISABLE KEYS could issue a commit while holding the LOCK_open mutex. This is a regression introduced by the fix for Bug 54453. It tripped an assert guarding us against a potential deadlock with connections trying to execute FLUSH TABLES WITH READ LOCK. The fix is to move the acquisition of LOCK_open outside the section that issues ha_autocommit_or_rollback(). LOCK_open is taken to protect against concurrent operations with .frms and the table definition cache, and doesn't need to cover the call to commit. A test case was added to innodb_mysql.test. The patch is to be null-merged to 5.5, which already has 54453 null-merged to it.

mysql-test/suite/innodb/r/innodb_mysql.result: Added test results for test for bug#56619.
mysql-test/suite/innodb/t/innodb_mysql.test: Added test for bug#56619.
sql/sql_table.cc: mysql_alter_table() modified: moved acquisition of LOCK_open after the call to ha_autocommit_or_rollback.
-
- 09 Nov, 2010 1 commit
-
-
Igor Babaev authored
The pushdown condition for the sorted table in a query can be complemented by conditions from HAVING. This transformation is done in JOIN::exec quite late, after the original pushdown condition has been saved in the field pre_idx_push_select_cond for the sorted table. So this field must be updated after the inclusion of the condition from HAVING.
-
- 05 Nov, 2010 1 commit
-
-
Sergei Golubchik authored
mysql bug#58011
-
- 06 Oct, 2010 1 commit
-
-
Davi Arnaut authored
Bug#54678: InnoDB, TRUNCATE, ALTER, I_S SELECT, crash or deadlock

- Incompatible change: truncate no longer resorts to a row-by-row delete if the storage engine does not support the truncate method. Consequently, the count of affected rows does not, in any case, reflect the actual number of rows.
- Incompatible change: it is no longer possible to truncate a table that participates as a parent in a foreign key constraint, unless it is a self-referencing constraint (both parent and child are in the same table). To work around this incompatible change and still be able to truncate such tables, disable foreign key checks with SET foreign_key_checks=0 before the truncate. Alternatively, if foreign key checks are necessary, please use a DELETE statement without a WHERE condition.

Problem description: The problem was that for storage engines that do not support truncate table via an external drop and recreate, such as InnoDB which implements truncate via an internal drop and recreate, the delete_all_rows method could be invoked with a shared metadata lock, causing problems if the engine needed exclusive access to some internal metadata. This problem originated with the fact that there is no truncate-specific handler method, which ended up leading to an abuse of the delete_all_rows method that is primarily used for delete operations without a condition.

Solution: The solution is to introduce a truncate handler method that is invoked when the engine does not support truncation via a table drop and recreate. This method is invoked under an exclusive metadata lock, so that there is only a single instance of the table when the method is invoked. Also, the method is not invoked and an error is thrown if the table is a parent in a non-self-referencing foreign key relationship. This was necessary to avoid inconsistency, as some integrity checks are bypassed. This is in line with the fact that truncate is primarily a DDL operation that was designed to quickly remove all data from a table.

mysql-test/suite/innodb/t/innodb-truncate.test: Add test cases for truncate and foreign key checks. Also test that InnoDB resets auto-increment on truncate.
mysql-test/suite/innodb/t/innodb.test: FK is not necessary, test is related to auto-increment. Update error number, truncate is no longer invoked if table is parent in a FK relationship.
mysql-test/suite/innodb/t/innodb_mysql.test: Update error number, truncate is no longer invoked if table is parent in a FK relationship. Use delete instead of truncate, test is used to check the interaction of FKs, triggers and delete.
mysql-test/suite/parts/inc/partition_check.inc: Fix typo.
mysql-test/suite/sys_vars/t/foreign_key_checks_func.test: Update error number, truncate is no longer invoked if table is parent in a FK relationship.
mysql-test/t/mdl_sync.test: Modify test case to reflect and ensure that truncate takes an exclusive metadata lock.
mysql-test/t/trigger-trans.test: Update error number, truncate is no longer invoked if table is parent in a FK relationship.
sql/ha_partition.cc: Reorganize the various truncate methods. delete_all_rows is now passed directly to the underlying engines, as is truncate. The code responsible for truncating individual partitions is moved to ha_partition::truncate_partition, which is invoked when an ALTER TABLE t1 TRUNCATE PARTITION p statement is executed. Since partition truncate can no longer be invoked via delete, the bitmap operations are not necessary anymore. The explicit reset of the auto-increment value is also removed, as the underlying engines are now responsible for resetting the value.
sql/handler.cc: Wire up the handler truncate method.
sql/handler.h: Introduce and document the truncate handler method. It assumes certain use cases of delete_all_rows. Add a method to retrieve the list of foreign keys referencing a table. The method is used to avoid truncating tables that are parent in a foreign key relationship.
sql/share/errmsg-utf8.txt: Add error message for truncate and FK.
sql/sql_lex.h: Introduce a flag so that the partition engine can detect when a partition is being truncated. Used to give a special error.
sql/sql_parse.cc: Function mysql_truncate_table no longer exists.
sql/sql_partition_admin.cc: Implement the TRUNCATE PARTITION statement.
sql/sql_truncate.cc: Change the truncate table implementation to use the new truncate handler method and to not rely on row-by-row delete anymore. The truncate handler method is always invoked with an exclusive metadata lock. Also, it is no longer possible to truncate a table that is parent in some non-self-referencing foreign key.
storage/archive/ha_archive.cc: Rename method, as the description indicates that in the future this could be a truncate operation.
storage/blackhole/ha_blackhole.cc: Implement truncate as a no-op for the blackhole engine in order to remain compatible with older releases.
storage/federated/ha_federated.cc: Introduce a truncate method that invokes delete_all_rows. This is required to support partition truncate, as this form of truncate does not implement the drop and recreate protocol.
storage/heap/ha_heap.cc: Introduce a truncate method that invokes delete_all_rows. This is required to support partition truncate, as this form of truncate does not implement the drop and recreate protocol.
storage/ibmdb2i/ha_ibmdb2i.cc: Introduce a truncate method that invokes delete_all_rows. This is required to support partition truncate, as this form of truncate does not implement the drop and recreate protocol.
storage/innobase/handler/ha_innodb.cc: Rename delete_all_rows to truncate. InnoDB now does truncate under an exclusive metadata lock. Introduce and reorganize methods used to retrieve the list of foreign keys referenced by or referencing a table.
storage/myisammrg/ha_myisammrg.cc: Introduce a truncate method that invokes delete_all_rows. This is required in order to remain compatible with earlier releases where truncate would resort to a row-by-row delete.
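The workaround described above, as a minimal hedged sketch with hypothetical parent/child tables:

  CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child (id INT, FOREIGN KEY (id) REFERENCES parent (id)) ENGINE=InnoDB;
  -- TRUNCATE of a FK parent now fails instead of silently falling back to a
  -- row-by-row delete:
  TRUNCATE TABLE parent;           -- fails: parent is referenced by a foreign key
  -- Workarounds from the commit message:
  SET foreign_key_checks = 0;
  TRUNCATE TABLE parent;
  SET foreign_key_checks = 1;
  -- or, if foreign key checks must stay on:
  DELETE FROM parent;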
-
- 01 Sep, 2010 1 commit
-
-
Magne Mahre authored
case than in corr index". Server was unable to find existing or explicitly created supporting index for foreign key if corresponding statement clause used field names in case different than one used in key specification and created yet another supporting index. In cases when name of constraint (and thus name of generated index) was the same as name of existing/explicitly created index this led to duplicate key name error. The problem was that unlike all other code Key_part_spec::operator==() compared field names in case sensitive fashion. As result routines responsible for getting rid of redundant generated supporting indexes for foreign key were not working properly for versions of field names using different cases. (backported from mysql-trunk) sql/sql_class.cc: Make field name comparison case-insensitive like it is in the rest of server.
-