- 18 Jun, 2009 1 commit
-
-
Alfranio Correia authored
Large transactions and statements may corrupt the binary log if the size of the cache, which is set by max_binlog_cache_size, is not enough to store the changes.

In a nutshell, to fix the bug, we save the position of the next character in the cache before starting to process a statement. If there is a problem, we simply restore the position, thus removing any effect of the statement from the cache. Unfortunately, to avoid corrupting the binary log, we may end up losing changes on non-transactional tables if they do not fit in the cache. In such cases, we store an Incident_log_event in order to stop the slave and alert users that some changes were not logged.

Precisely, for every non-transactional change that does not fit into the cache, we do the following:
a) the statement is *not* logged
b) an incident event is logged after committing/rolling back the transaction, if any. Note that if a failure happens before writing the incident event to the binary log, the slave will not stop and the master will not have reported any error.
c) its respective statement gives an error

For transactional changes that do not fit into the cache, we do the following:
a) the statement is *not* logged
b) its respective statement gives an error

To work properly, this patch requires two additional things. Firstly, callers of MYSQL_BIN_LOG::write and THD::binlog_query must handle any error returned and take the appropriate actions, such as undoing the effects of a statement. We already changed some calls in the sql_insert.cc and sql_update.cc modules, but the remaining calls spread all over the code should be handled in BUG#37148. Secondly, statements must be classified as either DDL or DML, because DDL statements that do not fit into the cache must generate an incident event, since they cannot be rolled back.
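A minimal sketch of the save/restore idea described above, in simplified C++ with hypothetical names (StatementCache, log_statement, incident_needed); the actual server operates on its IO_CACHE-based transaction cache and writes a real Incident_log_event:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Illustrative sketch only -- the names and types are hypothetical, not
// the actual MySQL symbols. The idea: remember where the statement cache
// ends before logging a statement; on overflow, truncate back to that
// position so the cache (and hence the binary log) stays consistent.
class StatementCache {
 public:
  explicit StatementCache(std::size_t max_size) : max_size_(max_size) {}

  std::size_t position() const { return buf_.size(); }

  // Returns false if appending the event would exceed the cache limit
  // (the role played by max_binlog_cache_size in the server).
  bool append(const std::string& event) {
    if (buf_.size() + event.size() > max_size_) return false;
    buf_.insert(buf_.end(), event.begin(), event.end());
    return true;
  }

  // Drop every byte written after 'pos', undoing a partial statement.
  void restore(std::size_t pos) { buf_.resize(pos); }

 private:
  std::size_t max_size_;
  std::vector<char> buf_;
};

// Log one statement's events all-or-nothing with respect to the cache.
// For non-transactional changes that do not fit, flag that an incident
// event must be written later, since such changes cannot be rolled back.
bool log_statement(StatementCache& cache,
                   const std::vector<std::string>& events,
                   bool non_transactional,
                   bool* incident_needed) {
  const std::size_t saved = cache.position();
  for (const std::string& ev : events) {
    if (!cache.append(ev)) {
      cache.restore(saved);       // the statement leaves no trace
      if (non_transactional)
        *incident_needed = true;  // analogous to Incident_log_event
      return false;               // caller reports the error
    }
  }
  return true;
}
```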
-
- 16 Jun, 2009 9 commits
-
-
Georgi Kodinov authored
to a test file that guarantees the presence of partition code
-
Kristofer Pettersson authored
-
Kristofer Pettersson authored
-
Martin Hansson authored
-
Kristofer Pettersson authored
-
Georgi Kodinov authored
-
Kristofer Pettersson authored
Early patch submitted for discussion. It is possible for more than one thread to enter the condition in query_cache_insert(), but the condition predicate is to signal one thread each time the cache status changes between the following states: {NO_FLUSH_IN_PROGRESS, FLUSH_IN_PROGRESS, TABLE_FLUSH_IN_PROGRESS}.

Consider three threads THD1, THD2, THD3:
THD2: select ... => Got a writer in ::store_query
THD3: select ... => Got a writer in ::store_query
THD1: flush tables => qc status = FLUSH_IN_PROGRESS; new writers are blocked.
THD2: select ... => Still got a writer and enters the condition in query_cache_insert
THD3: select ... => Still got a writer and enters the condition in query_cache_insert
THD1: flush tables => Finished and signals the status change.
THD2: select ... => Wakes up and completes the insert.
THD3: select ... => Happily waiting for better times. Why hurry?

This patch is a refactoring of this lock system. It introduces four new methods:
Query_cache::try_lock()
Query_cache::lock()
Query_cache::lock_and_suspend()
Query_cache::unlock()
This change also deprecates wait_while_table_flush_is_in_progress(). All threads are queued and put on a condition wait. On each unlock the queue is signalled. This resolves the issue of left-over threads. To ensure that no thread spends unnecessary time waiting, a signal broadcast is issued every time a lock is taken before a full cache flush.
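A rough sketch of the described queue-and-signal scheme using standard C++ primitives; the class name and the use of std::mutex/std::condition_variable are illustrative (the real 5.x code uses pthread-based locks, tracks the three flush states, and also offers lock_and_suspend(), which is omitted here):

```cpp
#include <condition_variable>
#include <mutex>

// Rough sketch of the queue-and-signal scheme. All waiters block on one
// condition variable; every unlock signals it, so no thread is left
// sleeping after a status change.
class QueryCacheLock {
 public:
  // Non-blocking attempt, as used on the hot query path.
  bool try_lock() {
    std::lock_guard<std::mutex> g(m_);
    if (locked_) return false;
    locked_ = true;
    return true;
  }

  // Queue up and wait until the lock can be taken.
  void lock() {
    std::unique_lock<std::mutex> g(m_);
    cond_.wait(g, [this] { return !locked_; });
    locked_ = true;
  }

  void unlock() {
    {
      std::lock_guard<std::mutex> g(m_);
      locked_ = false;
    }
    cond_.notify_one();  // wake exactly one queued waiter per unlock
  }

  // Issued before a full cache flush so no waiter sleeps through the
  // status change (the broadcast mentioned above).
  void broadcast() { cond_.notify_all(); }

 private:
  std::mutex m_;
  std::condition_variable cond_;
  bool locked_ = false;
};
```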
-
Martin Hansson authored
-
Georgi Kodinov authored
-
- 15 Jun, 2009 12 commits
-
-
Davi Arnaut authored
times (i.e., 2:16:20).
-
Davi Arnaut authored
-
Georgi Kodinov authored
-
Staale Smedseng authored
statements missed from general log. A FLUSH LOGS is added to ensure that the log info hits the file before the test attempts to process it.
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Bernt M. Johnsen authored
-
Bernt M. Johnsen authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
crashes server!

The problem affects the scenario where index merge is followed by a filesort and the sort buffer is not big enough for all the sort keys. In this case the filesort function reads the data to the end through the index merge quick access method (thus closing the cursors etc.), but leaves the pointer to the quick select method in place. It then creates a temporary file to hold the results of the filesort and adds it as a sort output file (in sort.io_cache). Note that filesort copies the original 'sort' structure into an automatic variable and restores it when done. As a result, on exiting filesort() we have sort.io_cache filled in and nothing else (a consequence of the cursors being closed at the end of reading data through index merge).

Now create_sort_index() notes that there is a select and cleans it up (as it has already been used by filesort() to read the data in). While doing that, a special case in the index merge destructor cleans up sort.io_cache, assuming it is an output of the index merge method and no longer needed. As a result, the code that tries to read the data back from the filesort output finds no data either in memory or on disk, and crashes.

Fixed similarly to how filesort() does it: by copying the sort.io_cache structure to a local variable, removing the pointer to the io_cache (so that it is not freed by QUICK_INDEX_MERGE_SELECT::~QUICK_INDEX_MERGE_SELECT), and restoring the original structure (together with the valid pointer) after the cleanup is done. This is safe because all the structures are already cleaned up on hitting the end of the index merge's read method (QUICK_INDEX_MERGE_SELECT::get_next()), and the cleanup code is written to tolerate repeated cleanups.
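The save/clear/restore pattern reads roughly like the sketch below, with hypothetical simplified types (IoCache, SortInfo) standing in for the server's IO_CACHE and 'sort' structures:

```cpp
#include <cstdio>

// Simplified, hypothetical types; the server's real code deals with the
// IO_CACHE inside the 'sort' structure and the QUICK_INDEX_MERGE_SELECT
// destructor.
struct IoCache { std::FILE* tmp_file = nullptr; };

struct SortInfo {
  IoCache* io_cache = nullptr;  // filesort's output; must survive cleanup
};

// Clean up the (already exhausted) quick select without letting its
// destructor free the filesort output file.
void cleanup_quick_select(SortInfo& sort) {
  IoCache* saved = sort.io_cache;  // keep the valid pointer aside
  sort.io_cache = nullptr;         // the destructor now sees nothing to free
  // ... destroy the quick select here; repeating its cleanup is safe
  // because reading to end-of-data already closed its cursors ...
  sort.io_cache = saved;           // restore so results can be read back
}
```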
-
- 12 Jun, 2009 8 commits
-
-
Davi Arnaut authored
The SQL-mode PAD_CHAR_TO_FULL_LENGTH could prevent a DROP USER statement from removing the privileges associated with the user being dropped. What occurred was that reading from the User and Host fields of the tables_priv or columns_priv tables would yield values padded with spaces, causing a failure to match a specified user or host ('user' != 'user '). The solution is to disregard the PAD_CHAR_TO_FULL_LENGTH mode when iterating over and matching values in the privilege tables for a DROP USER statement.
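A toy illustration of the fix, with hypothetical names standing in for the server's sql_mode flag and CHAR-column read path; it only shows why clearing the mode bit for the scan makes the comparison succeed:

```cpp
#include <cstddef>
#include <string>

// Toy model: the mode bit and session are stand-ins for the server's
// MODE_PAD_CHAR_TO_FULL_LENGTH flag and thd->variables.sql_mode.
constexpr unsigned long kPadCharToFullLength = 1UL << 0;

struct Session { unsigned long sql_mode = 0; };

// Reading a CHAR(n) column: padded with spaces when the mode is on,
// so 'user' comes back as 'user    ' and no longer matches.
std::string read_char_field(const Session& s, std::string value,
                            std::size_t n) {
  if (s.sql_mode & kPadCharToFullLength) value.resize(n, ' ');
  return value;
}

// The fix: clear the bit for the duration of the privilege-table scan,
// so the stored name compares equal to the name given in DROP USER.
bool grant_name_matches(Session& s, const std::string& stored,
                        const std::string& wanted, std::size_t width) {
  const unsigned long saved = s.sql_mode;
  s.sql_mode &= ~kPadCharToFullLength;
  const bool match = (read_char_field(s, stored, width) == wanted);
  s.sql_mode = saved;
  return match;
}
```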
-
Staale Smedseng authored
statements missed from general log. A refinement of the test in the previous patch to avoid using sleep as a means of ensuring that timestamps are added to the log entries.
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Patrick Crews authored
Re-enabled tests main.init_connect and rpl.rpl_init_slave.test for non-Windows platforms. Please remove this code upon fixing the bug.
-
Georgi Kodinov authored
WHERE and GROUP BY clause

Loose index scan may use range conditions on the argument of the MIN/MAX aggregate functions to find the beginning/end of the interval that satisfies the range conditions in a single go. These range conditions may have open or closed minimum/maximum values. When the comparison returns 0 (equal), the code should check whether the min/max endpoint of the current interval is open or closed and accept or reject the row accordingly. The composite condition checking this was wrong and did not work in all cases. Fixed by simplifying the conditions and reversing the logic.
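The accept/reject rule when the comparison returns 0 can be sketched as follows (hypothetical names; the real logic lives in the loose index scan's range-checking code):

```cpp
// Hypothetical endpoint representation; the real logic lives in the
// loose index scan's range-checking code.
struct RangeEndpoint {
  int key_value;
  bool open;  // true for '<' / '>', false for '<=' / '>='
};

// Is 'value' acceptable with respect to the interval's minimum endpoint?
bool within_min_endpoint(int value, const RangeEndpoint& min_ep) {
  if (value > min_ep.key_value) return true;   // strictly inside
  if (value < min_ep.key_value) return false;  // strictly outside
  return !min_ep.open;  // on the boundary: only a closed endpoint accepts
}
```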
-
- 11 Jun, 2009 4 commits
-
-
Joerg Bruehe authored
This is the backmerge of 5.0.74sp1 into the main sources, but effectively a null-merge, because the changes in that version were backports of changes already present in later sources.
-
Joerg Bruehe authored
-
Joerg Bruehe authored
-
Joerg.Bruehe@Sun.COM authored
-
- 10 Jun, 2009 4 commits
-
-
Davi Arnaut authored
-
Davi Arnaut authored
Backport to MySQL 5.0/1 of a fix by Vladislav Vaintroub: On Vista and later, and also when using terminal services, a client cannot connect via the shared memory protocol to a server started from the command line. This is a regression introduced by the fix for Bug#24731. The reason is that the client tries to attach to shared memory using the global kernel object namespace (all kernel objects prefixed with Global\). However, a server started from the command line on Vista and later creates its shared memory and events in the current session namespace, so the client is unable to find the server and the connection fails. The fix on the client side is to first try to find the server using "local" names (omitting the Global\ prefix) and, only if the server is not found, to try the global namespace.
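The lookup order on the client can be sketched like this (simplified; the real client opens several named kernel objects the same way, and the name suffix shown here is illustrative, not the exact protocol string):

```cpp
#include <windows.h>
#include <string>

// Simplified sketch; the real client opens several named kernel objects
// (events plus the file mapping) the same way, and the suffix shown here
// is illustrative.
HANDLE open_server_shared_memory(const std::string& base_name) {
  std::string name = base_name + "_CONNECT_REQUEST";
  // 1) Session-local namespace: matches a server started from the
  //    command line on Vista and later.
  HANDLE h = OpenFileMappingA(FILE_MAP_WRITE, FALSE, name.c_str());
  if (h == NULL) {
    // 2) Fall back to the global namespace, where a server running as
    //    a service creates its objects.
    std::string global_name = "Global\\" + name;
    h = OpenFileMappingA(FILE_MAP_WRITE, FALSE, global_name.c_str());
  }
  return h;  // NULL if the server was not found in either namespace
}
```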
-
Martin Hansson authored
Range analysis did not request sorted output from the storage engine, which caused partitioned handlers to process one partition at a time while reading key prefixes in ascending order, causing values to be missed. Fixed by always requesting sorted order during range analysis. This fix was introduced in 6.0 by the fix for bug no. 41136.
-
Philip Stoev authored
This test uses SHOW STATUS and the like, which may be unstable in the face of logging to table, since the CSV handler is actively executing operations and thus incrementing the counters. Fixed by disabling logging to table for the duration of the test and restoring it afterwards. This way the various counters properly start counting from zero and are never advanced by CSV operations.
-
- 09 Jun, 2009 2 commits
-
-
Davi Arnaut authored
Needed for substitution in some tests.
-
Matthias Leich authored
-