- 10 Oct, 2008 7 commits
-
Mattias Jonsson authored
on non-partitioned table
Problem: partitioning-specific commands were accepted for non-partitioned tables and were treated like ANALYZE/CHECK/OPTIMIZE/REPAIR TABLE after bug 20129 was fixed, which changed the code path from mysql_alter_table to mysql_admin_table.
Solution: check whether the table is partitioned before trying to execute the admin command.
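A hedged sketch of the user-visible change, using a hypothetical table name:

```sql
-- Hypothetical table; before the fix this statement was silently treated
-- like a plain ANALYZE TABLE, after the fix it is rejected because t1 is
-- not partitioned.
CREATE TABLE t1 (a INT) ENGINE=MyISAM;
ALTER TABLE t1 ANALYZE PARTITION p0;
```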
-
Georgi Kodinov authored
-
Gleb Shchepa authored
-
Gleb Shchepa authored
-
Gleb Shchepa authored
Select with a "NULL NOT IN" condition containing complex subselect from the same table as in the outer select failed with an assertion. The failure was caused by a concatenation of circumstances: 1) an inner select was optimized by make_join_statistics to use the QUICK_RANGE_SELECT access method (that implies an index scan of the table); 2) a subselect was independent (constant) from the outer select; 3) a condition was pushed down into inner select. During the evaluation of a constant IN expression an optimizer temporary changed the access method from index scan to table scan, but an engine handler was already initialized for index access by make_join_statistics. That caused an assertion. Unnecessary index initialization has been removed from the QUICK_RANGE_SELECT::init method (QUICK_RANGE_SELECT::reset reinvokes this initialization).
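A minimal sketch of the kind of query described, with hypothetical table and column names (the subselect in the original bug report is more complex):

```sql
-- The outer select and the subselect read the same table; the range
-- condition on the indexed column `a` can let the optimizer pick index
-- (QUICK_RANGE_SELECT) access for the inner select.
CREATE TABLE t1 (a INT, b INT, KEY (a));
INSERT INTO t1 VALUES (1,1), (2,2), (3,3);

SELECT * FROM t1
WHERE NULL NOT IN (SELECT a FROM t1 WHERE a < 3);
```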
-
Gleb Shchepa authored
with COALESCE and JOIN
The server returned the VARBINARY column type instead of the DATE type to the client for a result of the COALESCE, IFNULL, IF, CASE, GREATEST or LEAST functions if that result was filesorted in an anonymous temporary table during query execution. For example:

SELECT COALESCE(t1.date1, t2.date2) AS result
FROM t1 JOIN t2 ON t1.id = t2.id
ORDER BY result;

To create a column of the various date/time types in a temporary table, the create_tmp_field_from_item() function uses the Item::tmp_table_field_from_field_type() method call. However, fields of the MYSQL_TYPE_NEWDATE type were missed there, and VARBINARY columns were created by default. The necessary condition has been added.
-
Georgi Kodinov authored
- fixed an uninitialized memory read
- fixed a compilation warning
- added a suppression for FC9 x86_64
-
- 09 Oct, 2008 6 commits
-
Georgi Kodinov authored
-
Georgi Kodinov authored
Fixed the handling of system variable retrieval in prepared statements: added a cleanup method that clears the cache and restores the original scope of the variable (which is overwritten at fix_fields()).
-
Sergey Glukhov authored
TRIGGERS.SQL_MODE, EVENTS.SQL_MODE, TRIGGERS.DEFINER: the field type has been changed to VARCHAR.
-
Sergey Glukhov authored
The problem was that the PACK_KEYS and MAX_ROWS clauses in ALTER TABLE did not trigger table reconstruction. The fix is to rebuild the table if PACK_KEYS or MAX_ROWS is specified.
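A minimal sketch with a hypothetical MyISAM table:

```sql
CREATE TABLE t1 (a INT, KEY (a)) ENGINE=MyISAM;
INSERT INTO t1 VALUES (1), (2), (3);

-- Before the fix these options could be accepted without reconstructing
-- the table; now the statement rebuilds it so the options take effect.
ALTER TABLE t1 MAX_ROWS=1000000, PACK_KEYS=1;
```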
-
Sergey Glukhov authored
Hide "Table doesn't exist" errors if the table belongs to a merge table.
-
Sergey Glukhov authored
The problem: table_open_method was not calculated properly if '*' was used in the select list.
The fix: table_open_method is now calculated for this case as well.
-
- 08 Oct, 2008 3 commits
-
Georgi Kodinov authored
-
Georgi Kodinov authored
The code that reads the value of a system variable was extracting the value at PREPARE stage and substituting it (as a constant) into the parse tree. Note that this must be a reversible transformation, i.e. it must be reversed before each re-execution. Unfortunately this cannot be done reliably with the current code, because other non-reversible source tree transformations can interfere with this reversible transformation. Fixed by resolving the value not at PREPARE but at EXECUTE time (as the rest of the functions do). Added a cache of the value (so that it is constant throughout the execution of the query); note that the cache also caches NULL values. Updated an obsolete related test suite (variables-big) and the code that tests the result type of system variables (as per bug 74).
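A hedged sketch of the user-visible effect; @@max_join_size is just an illustrative choice of system variable:

```sql
PREPARE s FROM 'SELECT @@max_join_size';

SET @@max_join_size = 100;
EXECUTE s;   -- 100

SET @@max_join_size = 200;
EXECUTE s;   -- 200 after the fix; previously the value captured at
             -- PREPARE time could be returned again
```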
-
Marc Alff authored
-
- 07 Oct, 2008 10 commits
-
Marc Alff authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Ramil Kalimullin authored
ha_statistic_increment for rpl_temporary
Problem: in some cases the master sends a special event to a reconnecting slave to keep the slave's temporary tables (see #17284), and those tables still hold references to the "old" SQL slave thread and use them to access the thread's data.
Fix: set the temporary tables' thread references to the actual SQL slave thread in such cases.
-
Georgi Kodinov authored
-
Tatiana A. Nurnberg authored
-
Tatiana A. Nurnberg authored
-
Georgi Kodinov authored
-
- 06 Oct, 2008 14 commits
-
Marc Alff authored
warnings)
Before this fix, several places in the code would raise a warning with an error code of 0, making it impossible for a stored procedure, a connector, or a client application to trigger logic to handle the warning. Also, the warning text was hard-coded and therefore not translated. With this fix, new error numbers have been created to represent these warnings, and the warning text is defined in the errmsg.txt file.
-
Guilhem Bichot authored
-
Chad MILLER authored
-
Chad MILLER authored
so that if the substitution contains single-quotes, the program will fail.
-
Tatiana A. Nurnberg authored
Adds the --general-log-file and --slow-query-log-file command-line options to match the system variables of the same names. Deprecates the --log and --log-slow-queries command-line options and the log and log_slow_queries system variables for v7.0; they are superseded by general_log/general_log_file and slow_query_log/slow_query_log_file, respectively.
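The new names also exist as dynamic system variables; a small sketch with illustrative file paths (assumes the server is allowed to write to them):

```sql
SET GLOBAL general_log_file    = '/var/log/mysql/general.log';
SET GLOBAL general_log         = 1;
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
SET GLOBAL slow_query_log      = 1;
```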
-
Georgi Kodinov authored
crashes server
When creating a temporary table that contains aggregate functions, a non-reversible source transformation was performed to redirect the aggregate function arguments towards the temporary table columns. This caused EXPLAIN EXTENDED to fail because it was trying to resolve references to the (freed) temporary table. Fixed by preserving the original aggregate function arguments and using them (instead of the transformed ones) for EXPLAIN EXTENDED.
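A hedged sketch of the class of query described (names are illustrative); grouping plus ordering on the aggregate is meant to force a temporary table holding the aggregate:

```sql
CREATE TABLE t1 (a INT, b INT);
INSERT INTO t1 VALUES (1, 10), (1, 20), (2, 30);

-- Before the fix, printing the extended plan could follow aggregate
-- arguments already redirected to the (freed) temporary table.
EXPLAIN EXTENDED
SELECT a, MAX(b) FROM t1 GROUP BY a ORDER BY MAX(b);
```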
-
Guilhem Bichot authored
"Trigger fired multiple times leads to gaps in auto_increment sequence". The bug was that if a trigger fired multiple times inside a top statement (for example top-statement is a multi-row INSERT, and trigger is ON INSERT), and that trigger inserted into an auto_increment column, then gaps could be observed in the auto_increment sequence, even if there were no other users of the database (no concurrency). It was wrong usage of THD::auto_inc_intervals_in_cur_stmt_for_binlog. Note that the fix changes "class handler", I'll tell the Storage Engine API team.
-
Chad MILLER authored
-
Chad MILLER authored
-
Chad MILLER authored
-
Alexey Botchkov authored
-
Alexey Botchkov authored
MyISAM blocks index usage for bulk insert into zero-record tables; see the ha_myisam::start_bulk_insert() lines starting from "if (file->state->records == 0 ...". That causes problems for the partition engine when some partitions have records and some do not, as the engine uses the same access method for all partitions. Now the partition engine doesn't call index_first/index_last for empty tables.
Per-file comments:
mysql-test/r/partition.result
  Bug#38005 Partitions: error with insert select. Test result.
mysql-test/t/partition.test
  Bug#38005 Partitions: error with insert select. Test case.
sql/ha_partition.cc
  Bug#38005 Partitions: error with insert select. ha_engine::index_first and ha_engine::index_last not called for empty tables.
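A hedged reproduction sketch based on the description (the actual tables in partition.test may differ):

```sql
CREATE TABLE src (a INT);
INSERT INTO src VALUES (2), (4);

-- HASH(a) sends 1 and 3 to the same partition and leaves the other one
-- empty, so the INSERT ... SELECT below writes into both an empty and a
-- non-empty partition, the situation described above.
CREATE TABLE p1 (a INT) ENGINE=MyISAM
PARTITION BY HASH (a) PARTITIONS 2;
INSERT INTO p1 VALUES (1), (3);

INSERT INTO p1 SELECT a FROM src;
```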
-
Chad MILLER authored
-
Chad MILLER authored
-