- 15 Oct, 2008 1 commit
-
-
Georgi Kodinov authored
If a delayed insert failed to upgrade the lock, it did not free the temporary memory storage used to keep newly constructed blob values in memory. Fixed by iterating over the remaining rows in the delayed insert rowset and freeing the blob storage for each row. No test case is included because the bug involves concurrent delayed inserts on a table and cannot easily be made deterministic. Also added a correct valgrind suppression for Fedora 9.
-
- 08 Oct, 2008 1 commit
-
-
Mats Kindahl authored
The failure was caused by executing a CREATE-SELECT statement that creates a table in a database other than the current one. In row-based logging, the CREATE statement was written to the binary log without the database name, hence creating the table in the wrong database and causing the following inserts to fail since the table didn't exist in the given database. Fixed the bug by adding a parameter to store_create_info() that makes the function print the database name before the table name, and used it in the calls that write the CREATE statement to the binary log. The database name is only printed if it differs from the currently selected database. The output of SHOW CREATE TABLE has not changed and is still printed without the database name.
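A minimal standalone sketch of the qualification logic described here; the helper name and signature are illustrative, not the server's actual store_create_info() interface. It shows the CREATE statement being prefixed with the database name only when the binlog asks for it and the database differs from the current one, while SHOW CREATE TABLE output stays unqualified.

```cpp
#include <iostream>
#include <string>

// Hypothetical helper mirroring the described behavior: qualify the table
// name with its database only when asked to show the database and it
// differs from the currently selected one.
std::string create_stmt_prefix(const std::string &table_db,
                               const std::string &current_db,
                               bool show_database) {
  std::string stmt = "CREATE TABLE ";
  if (show_database && table_db != current_db)
    stmt += "`" + table_db + "`.";
  stmt += "`t1` (...)";  // column list elided in this sketch
  return stmt;
}

int main() {
  // Row-based CREATE-SELECT written to the binlog: database differs, so qualify.
  std::cout << create_stmt_prefix("other_db", "test", true) << "\n";
  // SHOW CREATE TABLE output stays unqualified, as before.
  std::cout << create_stmt_prefix("other_db", "test", false) << "\n";
}
```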
-
- 07 Oct, 2008 1 commit
-
-
Kristofer Pettersson authored
Concurrent inserts produce valgrind error messages. The reason is that the query cache is invalidated after the target table object is closed. Since the delayed insert thread already takes care of invalidating the query cache there is no need to try to synchronize an extra cache invalidation call. The fix is to remove the query_cache_invalidate3 call altogether.
-
- 16 Sep, 2008 1 commit
-
-
Narayanan V authored
Fix the write_record function to record auto increment values in a consistent way.
-
- 10 Sep, 2008 1 commit
-
-
Kristofer Pettersson authored
If a delayed insert thread was aborted by a concurrent TRUNCATE TABLE statement, the diagnostics area would fail with an assert in a debug build because no actual error message was pushed onto the stack even though the thread was killed. This patch adds an error message to the stack.
-
- 03 Sep, 2008 1 commit
-
-
Ramil Kalimullin authored
in open_table(). Problem: repeating "CREATE... ( AUTOINCREMENT) ... SELECT" may lead to an assertion failure. Fix: reset table->auto_increment_field_not_null after each record write.
-
- 29 Aug, 2008 1 commit
-
-
Andrei Elkin authored
The assert states that binlogging must have been activated, but according to the reported how-to-repeat instructions it actually was not. Analysis revealed that binlog_start_trans_and_stmt() was called without first testing whether binlogging is ON. Fixed by avoiding entering binlog_start_trans_and_stmt() when the binlog is not activated.
-
- 30 Jun, 2008 1 commit
-
-
Mats Kindahl authored
In order to handle CHAR() fields, 8 bits were reserved for the size of the CHAR field. However, instead of denoting the number of characters in the field, field_length was used, which denotes the number of bytes in the field. Since UTF-8 fields can have three bytes per character (and have been extended to four bytes per character in 6.0), an extra two bits have been encoded in the field metadata word for fields of type Field_string (i.e., CHAR fields). Since the metadata word is full, the extra bits have been encoded in the upper 4 bits of the real type (the most significant byte of the metadata word) by computing the bitwise xor with the extra two bits. Since the upper 4 bits of the real type are always 1111 for Field_string, this means that for fields of length < 256 the encoding is identical to the encoding used in pre-5.1.26 servers, but for lengths of 256 or more an unrecognized type is formed, causing an old slave (that does not handle lengths of 256 or more) to stop.
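A self-contained sketch of the bit layout described above, assuming MYSQL_TYPE_STRING = 254 and a maximum byte length of 1023; the function names and the assert bound are mine, not the server's. The two high bits of the length are folded into bits 4-5 of the type byte with XOR, so lengths below 256 leave the type byte untouched.

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>

// Sketch of the encoding: the upper four bits of MYSQL_TYPE_STRING (254) are
// always 1111, so the two extra length bits can be XORed into bits 4-5 of
// that byte and recovered on the other side.
const uint8_t TYPE_STRING = 254;

void encode_char_metadata(unsigned field_length, uint8_t meta[2]) {
  assert(field_length < 1024);  // up to 255 characters * 4 bytes
  meta[0] = TYPE_STRING ^ ((field_length & 0x300) >> 4);
  meta[1] = field_length & 0xFF;
}

unsigned decode_char_metadata(const uint8_t meta[2]) {
  unsigned length = meta[1];
  // If bits 4-5 are not both set, they carry the flipped high length bits.
  if ((meta[0] & 0x30) != 0x30)
    length |= ((meta[0] & 0x30) ^ 0x30) << 4;
  return length;
}

int main() {
  const unsigned lengths[] = {10, 255, 256, 765};
  uint8_t meta[2];
  for (unsigned len : lengths) {
    encode_char_metadata(len, meta);
    printf("len=%u meta=%02x %02x decoded=%u\n",
           len, meta[0], meta[1], decode_char_metadata(meta));
  }
}
```

For lengths below 256 the first metadata byte stays 0xFE, which is why old slaves keep working; larger lengths change that byte into a value an old slave does not recognize as a type.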
-
- 03 Jun, 2008 1 commit
-
-
Mattias Jonsson authored
Problem was an unclear error message, since it could suggest that MyISAM does not support INSERT DELAYED. Changed the error message to say that DELAYED is not supported by the table, instead of by the table's storage engine. The confusion is that a partitioned table is, in some sense, using the partitioning storage engine, which in turn uses the ordinary storage engine. By saying that the table does not support DELAYED we do not give any extra information about the storage engine or whether the table is partitioned.
-
- 08 Apr, 2008 1 commit
-
-
aelkin/andrei@mysql1000.(none) authored
Of the two claimed artifacts, the critical one is that the table map of a query following a CREATE-SELECT that fails with a duplicate key error is skipped from instantiation (and thus from binlogging). That leads to sending a "chopped" group of data row events without the table map header to the slave, and the slave cannot apply the bare data row events. It is not easy to force the slave to react with an error in such a case (the second complaint in the bug report), because a missing table map in the Rows_log_event::do_apply_event data row event handler is a common situation that normally indicates the event has to be filtered out based on the replication do/ignore rules. Fixed: table map creation and binlogging is restored by deploying the standard cleanup call in select_create::abort(). No error is reported if by chance the table map was not binlogged; this is left to be resolved together with considering how to combine the do/ignore rules with the situation where the Table_map is erroneously not written to the binlog.
-
- 28 Mar, 2008 2 commits
-
-
mattiasj@witty. authored
in REPLACE DELAYED. Post-push patch, removing the optimization for copying delayed_insert variables.
-
mats@mats-laptop.(none) authored
The bug allowed multiple executing transactions working with non-transactional tables to interfere with each other by interleaving the events of different transactions. The bug is fixed by writing non-transactional events to the transaction cache and flushing the cache to the binary log at statement commit. To mimic the behavior of normal statement-based replication, in row-based mode we flush the transaction cache when there are no committed statements in the transaction cache, which means we are committing the first one. This means that it will be written to the binary log as a "mini-transaction" with just the rows for the statement. Note that the changes here do not take effect when building the server with HAVE_TRANSACTIONS set to false, but it is not clear if this was possible before this patch either. For row-based logging, we also have that when AUTOCOMMIT=1, the code now always generates a BEGIN/COMMIT pair for single statements, or a BEGIN/ROLLBACK pair in the case of non-transactional changes in a statement that was rolled back. Note that for the case where changes to a non-transactional table cause a rollback due to an error, the statement will now be logged with a BEGIN/ROLLBACK pair, even though some changes have been committed to the non-transactional table.
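A heavily simplified, standalone sketch of the "mini-transaction" flush described above; the class, member names and the commit condition are illustrative stand-ins, not the server's binlog code. Events of the statement are buffered in a transaction cache and, for the first statement committed in row-based mode with autocommit on, flushed as one BEGIN/COMMIT (or BEGIN/ROLLBACK) group so statements of different connections cannot interleave.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Toy transaction cache: buffer the statement's events, then emit them as a
// single atomic group at statement commit (the "mini-transaction").
struct TransactionCacheSketch {
  std::vector<std::string> cache;
  bool has_committed_stmt = false;

  void write_event(const std::string &ev) { cache.push_back(ev); }

  void commit_statement(bool autocommit, bool rolled_back) {
    if (autocommit && !has_committed_stmt) {
      std::cout << "BEGIN\n";
      for (const auto &ev : cache) std::cout << ev << "\n";
      std::cout << (rolled_back ? "ROLLBACK" : "COMMIT") << "\n";
      cache.clear();
    } else {
      has_committed_stmt = true;  // keep buffering until transaction commit
    }
  }
};

int main() {
  TransactionCacheSketch binlog;
  binlog.write_event("Table_map: t1");
  binlog.write_event("Write_rows: t1");
  binlog.commit_statement(/*autocommit=*/true, /*rolled_back=*/false);
}
```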
-
- 27 Mar, 2008 1 commit
-
-
mattiasj@witty. authored
Bug#21413 "Engine table handler used by multiple threads in REPLACE DELAYED" When executing a REPLACE DELAYED statement, the storage engine ::extra() method was invoked by a different thread than the thread which has acquired the handler instance. This did not cause problems within the current server and with the current storage engines. But it has the potential to confuse future storage engines. Added code to avoid surplus calls to extra() method in case of DELAYED which avoids calling storage engine from a different thread than expected. No test case. This change does not change behavior in conjunction with current storage engines. So it cannot be tested by the regression test suite.
-
- 18 Mar, 2008 1 commit
-
-
svoj@mysql.com/june.mysql.com authored
binlog_format=mixed. Statement-based replication of DELETE ... LIMIT, UPDATE ... LIMIT and INSERT ... SELECT ... LIMIT is not safe, as the order of rows is not defined. With this fix, we issue a warning that such a statement is not safe to replicate in statement mode, or switch to row-based mode in mixed mode. Note that we could consider a statement safe if ORDER BY primary_key is present; however, it may confuse users to see very similar statements replicated differently. Note 2: a regular UPDATE statement (without LIMIT) is unsafe as well, but this patch doesn't address that issue. See the comment from Kristian posted 18 Mar 10:55.
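An illustrative sketch of the decision rule described here, under the assumption that "unsafe" means a data-changing statement with LIMIT; the enum and function are made up for this sketch and are not the server's logging code.

```cpp
#include <iostream>

// Hypothetical decision helper: with an unsafe statement, statement mode
// warns, mixed mode switches to row events, row mode is unaffected.
enum BinlogFormat { BINLOG_STATEMENT, BINLOG_MIXED, BINLOG_ROW };

void log_statement(BinlogFormat format, bool modifies_rows, bool has_limit) {
  bool unsafe = modifies_rows && has_limit;  // order of affected rows undefined
  if (!unsafe || format == BINLOG_ROW) {
    std::cout << "log as configured\n";
    return;
  }
  if (format == BINLOG_STATEMENT)
    std::cout << "log as statement, issue 'unsafe for replication' warning\n";
  else  // BINLOG_MIXED: use row events for this statement
    std::cout << "switch to row-based logging for this statement\n";
}

int main() {
  log_statement(BINLOG_MIXED, true, true);      // e.g. DELETE ... LIMIT 10
  log_statement(BINLOG_STATEMENT, true, true);  // warning in statement mode
}
```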
-
- 05 Mar, 2008 1 commit
-
-
kaa@kaamos.(none) authored
sporadically. Under some circumstances, the mysql_insert_id() value after INSERT ... SELECT could be wrong. This could happen when the last INSERT ... SELECT did not involve an AUTO_INCREMENT column, but the value of mysql_insert_id() was changed by some previous statements. Fixed by checking the value of thd->insert_id_used in select_insert::send_eof() and returning 0 for mysql_insert_id() if it is not set.
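A minimal sketch of the check described above; the struct and field names are simplified stand-ins for the relevant THD state, not the actual server members. The point is that a stale insert id from an earlier statement is not reported when the current statement generated no AUTO_INCREMENT value.

```cpp
#include <cstdint>
#include <iostream>

// Toy model of the per-connection state consulted at end of statement.
struct ThdSketch {
  bool insert_id_used = false;
  uint64_t last_insert_id = 0;
};

uint64_t reported_insert_id(const ThdSketch &thd) {
  // Report the id only if this statement actually used an AUTO_INCREMENT value.
  return thd.insert_id_used ? thd.last_insert_id : 0;
}

int main() {
  ThdSketch thd;
  thd.last_insert_id = 42;     // left over from a previous statement
  thd.insert_id_used = false;  // current INSERT ... SELECT had no AUTO_INCREMENT
  std::cout << reported_insert_id(thd) << "\n";  // prints 0, not 42
}
```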
-
- 19 Feb, 2008 2 commits
-
-
kostja@dipika.(none) authored
does not send it to the client.
-
kostja@dipika.(none) authored
a SELECT doesn't cause ROLLBACK of statem". The idea of the fix is to ensure that we always commit the current statement at the end of dispatch_command(). In order not to issue redundant disk syncs, an optimization of the two-phase commit protocol is implemented to bypass the two-phase commit if the transaction is read-only.
-
- 16 Jan, 2008 1 commit
-
-
kostja@dipika.(none) authored
-
- 11 Jan, 2008 1 commit
-
-
evgen@moonbone.local authored
value when inserting into a view. The mysql_prepare_insert function checks all fields of the target table that are directly or indirectly (through a view) specified in the INSERT statement to have a default value. This check can be skipped if the INSERT statement doesn't mention any insert fields. In the case of a view, this allowed fields that aren't mentioned in the view to bypass the check. Now fields of the target table are always checked to have a default value when the insert goes into a view.
-
- 12 Dec, 2007 1 commit
-
-
kostja@bodhi.(none) authored
cause ROLLBACK of statement", part 1. Review fixes. Do not send OK/EOF packets to the client until we reached the end of the current statement. This is a consolidation, to keep the functionality that is shared by all SQL statements in one place in the server. Currently this functionality includes: - close_thread_tables() - log_slow_statement(). After this patch and the subsequent patch for Bug#12713, it shall also include: - ha_autocommit_or_rollback() - net_end_statement() - query_cache_end_of_result(). In future it may also include: - mysql_reset_thd_for_next_command().
-
- 26 Nov, 2007 2 commits
-
-
kaa@polly.(none) authored
insert ... select. The 5.0 manual page for mysql_insert_id() does not mention anything about INSERT ... SELECT, though its current behavior is inconsistent with what the manual says about plain INSERT. Fixed by changing the AUTO_INCREMENT and mysql_insert_id() handling logic in INSERT ... SELECT to be consistent with the INSERT behavior, the manual, and the changes in 5.1 introduced by WL3146: mysql_insert_id() now returns the first automatically generated AUTO_INCREMENT value that was successfully inserted by INSERT ... SELECT; if an INSERT ... SELECT statement is executed and no automatically generated value is successfully inserted, mysql_insert_id() now returns the ID of the last inserted row.
-
Problem: using a wrong local lock type value in mysql_insert() results in a crash. Fix: use a proper value.
-
- 19 Nov, 2007 1 commit
-
-
evgen@moonbone.local authored
led to creating a corrupted index. Corrected fix. A new method called prepare2 is added to the select_create class. As all preparations are done by the select_create::prepare function, it doesn't do anything. Slightly changed the algorithm of calling the start_bulk_insert function: now it's called from the select_insert::prepare2 function when the SQL_BUFFER_RESULT flag is set. The is_bulk_insert_mode flag is removed as it is not needed anymore.
-
- 15 Nov, 2007 1 commit
-
-
istruewing@stella.local authored
corrupts a MERGE table
Bug 26867 - LOCK TABLES + REPAIR + merge table result in memory/cpu hogging
Bug 26377 - Deadlock with MERGE and FLUSH TABLE
Bug 25038 - Waiting TRUNCATE
Bug 25700 - merge base tables get corrupted by optimize/analyze/repair table
Bug 30275 - Merge tables: flush tables or unlock tables causes server to crash
Bug 19627 - temporary merge table locking
Bug 27660 - Falcon: merge table possible
Bug 30273 - merge tables: Can't lock file (errno: 155)
The problems were:
Bug 26379 - Combination of FLUSH TABLE and REPAIR TABLE corrupts a MERGE table
1. A thread trying to lock a MERGE table performs busy waiting while REPAIR TABLE or a similar table administration task is ongoing on one or more of its MyISAM tables.
2. A thread trying to lock a MERGE table performs busy waiting until all threads that did REPAIR TABLE or similar table administration tasks on one or more of its MyISAM tables in LOCK TABLES segments do UNLOCK TABLES. The difference from problem #1 is that the busy waiting takes place *after* the administration task. It is terminated by UNLOCK TABLES only.
3. Two FLUSH TABLES within a LOCK TABLES segment can invalidate the lock. This does *not* require a MERGE table. The first FLUSH TABLES can be replaced by any statement that requires other threads to reopen the table. In 5.0 and 5.1 a single FLUSH TABLES can provoke the problem.
Bug 26867 - LOCK TABLES + REPAIR + merge table result in memory/cpu hogging
Trying DML on a MERGE table, which has a child locked and repaired by another thread, made an infinite loop in the server.
Bug 26377 - Deadlock with MERGE and FLUSH TABLE
Locking a MERGE table and its children in parent-child order and flushing the child deadlocked the server.
Bug 25038 - Waiting TRUNCATE
Truncating a MERGE child, while the MERGE table was in use, let the truncate fail instead of waiting for the table to become free.
Bug 25700 - merge base tables get corrupted by optimize/analyze/repair table
Repairing a child of an open MERGE table corrupted the child. It was necessary to FLUSH the child first.
Bug 30275 - Merge tables: flush tables or unlock tables causes server to crash
Flushing and optimizing locked MERGE children crashed the server.
Bug 19627 - temporary merge table locking
Use of a temporary MERGE table with non-temporary children could corrupt the children. Temporary tables are never locked, so we now prohibit non-temporary children of a temporary MERGE table.
Bug 27660 - Falcon: merge table possible
It was possible to create a MERGE table with non-MyISAM children.
Bug 30273 - merge tables: Can't lock file (errno: 155)
This was a Windows-only bug. Table administration statements sometimes failed with "Can't lock file (errno: 155)".
These bugs are fixed by a new implementation of MERGE table open. When opening a MERGE table in open_tables() we now add the child tables to the list of tables to be opened by open_tables() (the "query_list"). The children are not opened in the handler at this stage. After opening the parent, open_tables() opens each child from the now extended query_list. When the last child is opened, we remove the children from the query_list again and attach the children to the parent. This behaves similarly to the old open. However, it does not open the MyISAM tables directly, but grabs them from the already open children. When closing a MERGE table in close_thread_table() we detach the children only. Closing of the children is done implicitly because they are in thd->open_tables.
For more detail see the comment at the top of ha_myisammrg.cc.
Changed from open_ltable() to open_and_lock_tables() in all places that can be relevant for MERGE tables. The latter can handle tables added to the list on the fly. When open_ltable() was used in a loop over a list of tables, the list must be temporarily terminated after every table for open_and_lock_tables(). table_list->required_type is set to FRMTYPE_TABLE to avoid opening special tables. Handling of derived tables is suppressed. These details are handled by the new function open_n_lock_single_table(), which has nearly the same signature as open_ltable() and can replace it in most cases.
In reopen_tables() some of the tables open by a thread can be closed and reopened. When a MERGE child is affected, the parent must be closed and reopened too. Closing of the parent is forced before the first child is closed. Reopen happens in the order of thd->open_tables. MERGE parents do not attach their children automatically at open; this is done after all tables are reopened, so all children are open when attaching them.
Special lock handling like mysql_lock_abort() or mysql_lock_remove() needs to be suppressed for MERGE children or forwarded to the parent, depending on the situation. In loops over all open tables one suppresses child lock handling; when a single table is touched, forwarding is done.
Behavioral changes: This patch changes the behavior of temporary MERGE tables. Temporary MERGE tables must have temporary children. The old behavior was wrong: a temporary table is not locked, hence even non-temporary children were not locked. See Bug 19627 - temporary merge table locking. You cannot change the union list of a non-temporary MERGE table when LOCK TABLES is in effect. The following does *not* work: CREATE TABLE m1 ... ENGINE=MRG_MYISAM ...; LOCK TABLES t1 WRITE, t2 WRITE, m1 WRITE; ALTER TABLE m1 ... UNION=(t1,t2) ...; However, you can do this with a temporary MERGE table. You cannot create a MERGE table with CREATE ... SELECT, neither as a temporary MERGE table nor as a non-temporary MERGE table: CREATE TABLE m1 ... ENGINE=MRG_MYISAM ... SELECT ...; gives the error message: table is not BASE TABLE.
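A standalone sketch of the "temporarily terminate the list" technique mentioned above for replacing open_ltable() in a loop; the types and function names are simplified stand-ins, not the server's TABLE_LIST or open_and_lock_tables(). The single-table wrapper cuts the linked list after the element, calls the multi-table routine (which is free to extend the list with MERGE children), and then restores the original link.

```cpp
#include <iostream>
#include <string>

// Toy linked list standing in for a table list.
struct TableListSketch {
  std::string name;
  TableListSketch *next;
};

void open_and_lock_tables_sketch(TableListSketch *tables) {
  for (TableListSketch *t = tables; t != nullptr; t = t->next)
    std::cout << "open+lock " << t->name << "\n";
}

void open_n_lock_single_table_sketch(TableListSketch *table) {
  TableListSketch *saved_next = table->next;  // cut the list after this table
  table->next = nullptr;
  open_and_lock_tables_sketch(table);         // may append/open children safely
  table->next = saved_next;                   // restore the original list
}

int main() {
  TableListSketch t2{"t2", nullptr};
  TableListSketch t1{"t1", &t2};
  open_n_lock_single_table_sketch(&t1);       // only t1 is opened here
}
```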
-
- 05 Nov, 2007 1 commit
-
-
istruewing@stella.local authored
partitioned table. Trying INSERT DELAYED on a partitioned table that has not been used right before crashes the server. When a table is used for a select or update, it is kept open for some time; this period is what "right before" refers to. Information about the partitioning of a table is stored as a string in the .frm file. Parsing of this string requires a correctly set up lexical analyzer (lex). The partitioning code uses a new temporary instance of a lex, but it still refers to the previously active lex. The delayed insert thread did not initialize its lex, though. Added initialization of thd->lex (lex_start()) before the table open in the delayed thread, and at all other places where it would be necessary if all tables were partitioned and needed the .frm file parsed.
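A tiny sketch of the ordering the fix establishes, with all types and calls as stand-ins (a toy THD and lex, not the server's); it only shows that the handler thread initializes its lex before any table open that may need to parse partitioning information from the .frm file.

```cpp
#include <iostream>

// Toy stand-ins for the per-thread state involved.
struct LexSketch { bool initialized = false; };
struct ThdSketch { LexSketch lex; };

void lex_start_sketch(ThdSketch *thd) { thd->lex.initialized = true; }

void delayed_thread_open_table(ThdSketch *thd) {
  lex_start_sketch(thd);  // the call that was missing before the fix
  if (!thd->lex.initialized)
    std::cout << "crash while parsing partitioning info\n";
  else
    std::cout << "table opened, .frm partitioning string parsed safely\n";
}

int main() {
  ThdSketch thd;
  delayed_thread_open_table(&thd);
}
```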
-
- 01 Nov, 2007 1 commit
-
-
davi@endora.local authored
A stored function that contains a DROP TEMPORARY TABLE statement, when invoked by a CREATE TEMPORARY TABLE of the same name, may cause a server crash. The problem is that when dropping a table no check is done to ensure that the table is not being used by some outer query (or outer statement), potentially leaving the outer query with a reference to a stale (freed) table. The solution is, when dropping a temporary table, to always check whether the table is being used by some outer statement, as a temporary table can be dropped inside stored procedures. The check is performed by looking at the TABLE::query_id value for temporary tables. To simplify this check and to solve a bug related to the handling of temporary tables in prelocked mode, this patch changes the way in which this member is used to track whether a table is used or unused: we now ensure that TABLE::query_id is zero for unused temporary tables, which means that all temporary tables used by a statement are marked as free for reuse after its execution has completed.
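A simplified illustration of the in-use check described above; the structures and the exact refusal condition are mine (the server's test involves TABLE::query_id and more context), so treat this as an idea sketch, not the real predicate. A temporary table whose query_id is non-zero and owned by another (outer) statement is refused from being dropped.

```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// Toy temporary-table entry: query_id == 0 means "free for reuse".
struct TmpTableSketch {
  std::string name;
  uint64_t query_id;
};

bool drop_temporary_table(std::vector<TmpTableSketch> &tmp_tables,
                          const std::string &name, uint64_t current_query_id) {
  for (auto it = tmp_tables.begin(); it != tmp_tables.end(); ++it) {
    if (it->name != name) continue;
    if (it->query_id != 0 && it->query_id != current_query_id) {
      std::cout << "ERROR: table is in use by an outer statement\n";
      return false;  // refuse the drop instead of freeing a live table
    }
    tmp_tables.erase(it);
    return true;
  }
  return false;  // nothing to drop
}

int main() {
  std::vector<TmpTableSketch> tmp = {{"t1", 7}};  // in use by query 7
  drop_temporary_table(tmp, "t1", 8);             // refused: outer query owns it
}
```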
-
- 30 Oct, 2007 2 commits
-
-
kostja@bodhi.(none) authored
in THD. In future the error may be stored elsewhere (not in net.report_error) and it's important to start using an opaque getter to simplify merges.
-
aelkin/elkin@koti.dsl.inet.fi authored
involved bug#12691, bug#27571
-
- 29 Oct, 2007 1 commit
-
-
aelkin/elkin@koti.dsl.inet.fi authored
Query_log_event::error_code. A query can complete with the local variable error of mysql_$query equal to zero (where $query is one of insert, update, delete, load) and yet be binlogged with an error_code such as KILLED_QUERY, while there is no reason to do so. That can happen because Query_log_event consults the thd->killed flag to evaluate error_code. Fixed by implementing a scheme suggested and partly implemented at the time of the bug#22725 work: the error status is cached immediately after control leaves the main rows loop, and that instance always corresponds to `error', the local variable of the mysql_$query functions. The cached value is passed to the Query_log_event constructor, instead of the default thd->killed which can change between the caching and the constructing.
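A standalone sketch of the caching scheme described here; the event class and loop function are illustrative stand-ins. The error status is captured right after the rows loop, and that cached value, not the volatile kill flag, is what the binlog event is constructed with.

```cpp
#include <iostream>

// Toy stand-in for the binlog event that carries the statement's error code.
struct QueryEventSketch {
  int error_code;
  explicit QueryEventSketch(int err) : error_code(err) {}
};

int run_rows_loop() { return 0; }  // pretend the DML loop finished cleanly

int main() {
  bool thd_killed = false;
  int error = run_rows_loop();
  int cached_errcode = error;          // cache immediately after the rows loop
  thd_killed = true;                   // KILL arrives later, before binlogging
  QueryEventSketch ev(cached_errcode); // NOT derived from the killed flag
  std::cout << "binlogged error_code=" << ev.error_code
            << " (killed flag ignored: " << thd_killed << ")\n";
}
```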
-
- 29 Sep, 2007 1 commit
-
-
davi@moksha.local authored
caused a few tests to fail because the thd->extra_lock wasn't being set to NULL after the table was unlocked. This poses a serious problem because later attempts to access thd->extra_lock (now a dangling pointer) will probably result in a crash (undefined behavior) -- and that's what actually happens in some test cases. The solution is to set the select_create::m_plock pointee to NULL, which means that thd->extra_lock is set to NULL when the lock data is not for a temporary table.
-
- 28 Sep, 2007 1 commit
-
-
davi@moksha.local authored
When CREATE TEMPORARY TABLE .. SELECT is invoked from a stored function which in turn is called from CREATE TABLE ... SELECT, a memory leak occurs because the inner create temporary table overrides the outer extra_lock reference when locking the table. The solution is simply not to override extra_lock, by only using extra_lock for a non-temporary table lock.
-
- 27 Sep, 2007 1 commit
-
-
gkodinov/kgeorge@magare.gmz authored
When expanding a * in a USING/NATURAL join the check for table access for both tables in the join was done using the grant information of the first one. Fixed by getting the grant information for the current table while iterating through the columns of the join.
-
- 26 Sep, 2007 1 commit
-
-
gkodinov/kgeorge@magare.gmz authored
-
- 22 Sep, 2007 1 commit
-
-
evgen@sunlight.local authored
type of the result. There are several functions that accept parameters of different types. The result field type of such functions was determined based on the aggregated result type of their arguments. As the DATE and DATETIME types are represented by the STRING type, the result field type of the affected functions was always STRING for DATE/DATETIME arguments. The affected functions are COALESCE, IF, IFNULL, CASE and LEAST/GREATEST. Now the affected functions aggregate the field types of their arguments rather than their result types and return the result of that aggregation as their result field type. The cached_field_type member variable is added to a number of classes to hold the aggregated result field type. The str_to_date() function's result field type now defaults to MYSQL_TYPE_DATETIME. The agg_field_type() function is added; it aggregates field types with the help of the Field::field_type_merge() function. The create_table_from_items() function now uses the item->tmp_table_field_from_field_type() function to get the proper field when the item is a function with a STRING result type.
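A simplified illustration of aggregating field types instead of result types; the merge function below is a toy, not Field::field_type_merge(), and covers only the DATE/DATETIME case that motivates the fix. Merging DATE with DATETIME keeps a temporal type instead of collapsing to a string type.

```cpp
#include <iostream>

// Toy field-type lattice: temporal types merge to the wider temporal type,
// anything else falls back to a string type.
enum FieldType { FT_DATE, FT_DATETIME, FT_VARCHAR };

FieldType field_type_merge_sketch(FieldType a, FieldType b) {
  if (a == b) return a;
  if ((a == FT_DATE && b == FT_DATETIME) || (a == FT_DATETIME && b == FT_DATE))
    return FT_DATETIME;  // both temporal: keep the wider temporal type
  return FT_VARCHAR;     // otherwise fall back to a string type
}

const char *name(FieldType t) {
  return t == FT_DATE ? "DATE" : t == FT_DATETIME ? "DATETIME" : "VARCHAR";
}

int main() {
  // e.g. IFNULL(date_col, datetime_col) used with CREATE TABLE ... SELECT
  // now produces a DATETIME column rather than a string column.
  std::cout << name(field_type_merge_sketch(FT_DATE, FT_DATETIME)) << "\n";
}
```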
-
- 21 Sep, 2007 1 commit
-
-
evgen@sunlight.local authored
led to creating a corrupted index. During execution of a CREATE .. SELECT SQL_BUFFER_RESULT statement, the engine->start_bulk_insert function was called twice. On the first call MyISAM disabled all non-unique indexes, and on the second call it decided not to re-enable them because all indexes were already disabled. Due to this, no indexes were actually created during CREATE TABLE, thus producing a corrupted table. Now the select_insert class has an is_bulk_insert_mode flag which prevents calling the start_bulk_insert function twice. The flag is set in the select_create::prepare and select_insert::prepare2 functions and in the select_insert class constructor, and is reset in the select_insert::send_eof function.
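A minimal sketch of the guard flag described above; the class is a stand-in for select_insert, not the real one. It only demonstrates that start_bulk_insert() runs exactly once per statement and that the flag is reset at end of statement.

```cpp
#include <iostream>

// Toy guard: the second start_bulk_insert() call becomes a no-op, so the
// engine never sees the "disable indexes twice" sequence.
struct BulkInsertGuard {
  bool is_bulk_insert_mode = false;

  void start_bulk_insert() {
    if (is_bulk_insert_mode) return;  // already started for this statement
    is_bulk_insert_mode = true;
    std::cout << "engine->start_bulk_insert() called\n";
  }

  void send_eof() { is_bulk_insert_mode = false; }  // reset at end of statement
};

int main() {
  BulkInsertGuard s;
  s.start_bulk_insert();  // e.g. from prepare
  s.start_bulk_insert();  // e.g. from prepare2 with SQL_BUFFER_RESULT: skipped
  s.send_eof();
}
```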
-
- 16 Sep, 2007 1 commit
-
-
aelkin@dl145j.mysql.com authored
-
- 30 Aug, 2007 1 commit
-
-
davi@moksha.local authored
The problem is that a SELECT on one thread is blocked by INSERT ... ON DUPLICATE KEY UPDATE on another thread even when low_priority_updates is activated. The solution is to possibly downgrade the lock type to the setting of low_priority_updates if the INSERT cannot be concurrent.
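An illustrative lock-type decision for the fix described above; the enum mimics the ordering of the server's TL_* write locks, but the function itself is a sketch, not the actual upgrade logic. If the INSERT cannot run as a concurrent insert and low_priority_updates is set, the write lock is downgraded so readers are not blocked.

```cpp
#include <iostream>

// Sketch of the three write-lock strengths involved in this decision.
enum LockType { TL_WRITE_LOW_PRIORITY, TL_WRITE_CONCURRENT_INSERT, TL_WRITE };

LockType choose_write_lock(bool can_be_concurrent, bool low_priority_updates) {
  if (can_be_concurrent) return TL_WRITE_CONCURRENT_INSERT;
  return low_priority_updates ? TL_WRITE_LOW_PRIORITY : TL_WRITE;
}

int main() {
  // INSERT ... ON DUPLICATE KEY UPDATE cannot be a concurrent insert, so with
  // low_priority_updates=1 it no longer blocks a SELECT on another connection.
  LockType lt = choose_write_lock(false, true);
  std::cout << (lt == TL_WRITE_LOW_PRIORITY ? "TL_WRITE_LOW_PRIORITY"
                                            : "other write lock") << "\n";
}
```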
-
- 23 Aug, 2007 1 commit
-
-
thek@adventure.(none) authored
SQL_MODE was ignored when a client issued INSERT DELAYED. Some system settings weren't copied as intended when a record was saved for a delayed insert.
-
- 21 Aug, 2007 1 commit
-
-
Binlogging of a statement with a side effect, such as a modified non-transactional table, did not happen. The artifact involved all binloggable DML queries. Fixed by changing the binlogging conditions all over the code to use thd->transaction.stmt.modified_non_trans_table, introduced by the patch for bug#27417. The multi-delete case has its own specifics and is addressed by bug#29136. The multi-update case has been addressed by bug#27716 and its patch and will need merging.
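A sketch of the kind of condition change described above, with the struct standing in for the relevant THD members and the predicate itself being an illustrative guess rather than the server's actual test: once a statement has modified a non-transactional table, the flag forces the side effect to be written to the binary log even if the statement otherwise failed or was rolled back.

```cpp
#include <iostream>

// Toy per-statement transaction state carrying the flag from bug#27417.
struct StmtTransactionSketch {
  bool modified_non_trans_table = false;
};

bool must_binlog_statement(const StmtTransactionSketch &stmt, bool stmt_had_error) {
  // Illustrative rule: a modified non-transactional table always makes the
  // side effect visible in the binlog, regardless of the statement's error.
  return stmt.modified_non_trans_table || !stmt_had_error;
}

int main() {
  StmtTransactionSketch stmt;
  stmt.modified_non_trans_table = true;  // e.g. a MyISAM table was changed
  std::cout << (must_binlog_statement(stmt, /*stmt_had_error=*/true)
                    ? "write statement to binlog\n"
                    : "skip binlogging\n");
}
```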
-
- 13 Aug, 2007 1 commit
-
-
monty@mysql.com/nosik.monty.fi authored
Faster thr_alarm(). Added an 'Opened_files' status variable to track calls to my_open(). Don't give warnings when running mysql_install_db. Added option --source-install to mysql_install_db. I had to do the following renames, as the polymorphism used didn't work with the Forte compiler on 64-bit systems: index_read() -> index_read_map(), index_read_idx() -> index_read_idx_map(), index_read_last() -> index_read_last_map().
-