- 28 Mar, 2010 1 commit
-
-
When mysqlbinlog was given the --database=X flag, it always printed 'ROLLBACK TO', but the corresponding 'SAVEPOINT' statement was not printed. The replication filters (replicate-do/ignore-db) and binlog filters (binlog-do/ignore-db) had the same problem; both are fixed together in this patch. After this patch, we always check whether the query is a 'SAVEPOINT' statement. Because this is a literal check, 'SAVEPOINT' and 'ROLLBACK TO' statements are also binlogged in uppercase and without any leading comments. Binlogs written before this patch can still be handled correctly, except for the one case where comments appear in front of the keywords, for example: /* bla bla */ SAVEPOINT a; /* bla bla */ ROLLBACK TO a;
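A minimal illustration of the failure mode (database and table names are made up): with --database=db1, both of the marked statements must appear in the output, otherwise the ROLLBACK TO refers to a savepoint that was never set.

    USE db1;
    BEGIN;
    INSERT INTO t1 VALUES (1);
    SAVEPOINT a;               -- was dropped from the output before the fix
    INSERT INTO t1 VALUES (2);
    ROLLBACK TO a;             -- was always printed
    COMMIT;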
-
- 17 Mar, 2010 1 commit
-
-
Mats Kindahl authored
for InnoDB. The class Field_bit_as_char stores the metadata for the field incorrectly, because bytes_in_rec and bit_len are set to (field_length + 7) / 8 and 0 respectively, while Field_bit has the correct values field_length / 8 and field_length % 8. Solved the problem by re-computing the metadata values from field_length instead of using the bytes_in_rec and bit_len variables. To handle compatibility with old servers, a table map flag was added to indicate that the bit computation is exact. If the flag is clear, the slave computes the number of bytes required to store the bit field and compares that instead, effectively allowing replication *without conversion* from any field length that requires the same number of bytes to store.
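A worked example of the two computations for a BIT(12) column (a sketch, not the server code):

    /* field_length = 12, i.e. BIT(12) */
    unsigned field_length = 12;
    /* Field_bit (correct): one full byte plus four leftover bits */
    unsigned bytes_in_rec = field_length / 8;        /* 1 */
    unsigned bit_len      = field_length % 8;        /* 4 */
    /* Field_bit_as_char (wrong before the fix): whole bytes only */
    unsigned char_bytes   = (field_length + 7) / 8;  /* 2 */
    unsigned char_bits    = 0;
    /* The fix re-derives (1, 4) from field_length in both cases, so the
       metadata sent in the table map event is the same for both classes. */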
-
- 22 Feb, 2010 1 commit
-
-
Staale Smedseng authored
MySQL with gcc 4.3.2. This is the final patch in the context of this bug.
-
- 17 Feb, 2010 1 commit
-
-
Luis Soares authored
I found three issues during the analysis: 1. Memory leak caused by temp_buf not being freed; 2. Memory leak caused when handling argv; 3. Conditional jump that depended on uninitialized values. Issue #1 -------- DESCRIPTION: when mysqlbinlog is reading from a remote location, the event temp_buf references the incoming stream (in the NET object), which is not freed by mysqlbinlog explicitly. On the other hand, when it is reading a local binary log, temp_buf points to a temporary buffer that needs to be explicitly freed. In both cases, temp_buf was not freed by mysqlbinlog; it was just set to 0. This disregards the free required in the second case, thus creating a memory leak. FIX: we make temp_buf conditionally freed depending on the value of remote_opt. A similar fix is already present in the most recent codebases. Issue #2 -------- DESCRIPTION: load_defaults is called by parse_args; it reads default options from configuration files and puts them BEFORE the arguments that are already in argc and argv. This is done using a MEM_ROOT. However, parse_args calls handle_options immediately afterwards, which changes argv. Later, when freeing the defaults, the pointers to the MEM_ROOT no longer match, causing the memory not to be freed: void free_defaults(char **argv) { MEM_ROOT ptr; memcpy_fixed((char*) &ptr, (char *) argv - sizeof(ptr), sizeof(ptr)); free_root(&ptr, MYF(0)); } FIX: we remove load_defaults from parse_args and call it before. We then save argv with defaults in defaults_argv BEFORE calling parse_args (which inside can then call handle_options at will). This is in fact a kind of backport of BUG#38468 into 5.1, so the test case was merged in as well and an error check was added for the load_defaults call. Fix based on: revid:zhenxing.he@sun.com-20091002081840-uv26f0flw4uvo33y Issue #3 -------- DESCRIPTION: the st_print_event_info constructor did not initialize the sql_mode member, although it did initialize sql_mode_inited (set to false). This would later raise a valgrind warning when printing sql_mode in the event header, as this printout is guarded by a check against the sql_mode_inited and sql_mode variables. Since sql_mode was not initialized, valgrind would output the warning. FIX: we add initialization of sql_mode to the st_print_event_info constructor.
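A rough sketch of the argv handling fix from Issue #2, assuming the usual mysqlbinlog option-handling helpers (load_defaults, handle_options, free_defaults); the surrounding structure is simplified for illustration:

    /* Save the argv returned by load_defaults() BEFORE handle_options()
       advances it, so free_defaults() later receives the original pointer
       that still has the MEM_ROOT stored in front of it. */
    static char **defaults_argv;

    static int parse_args(int *argc, char ***argv)
    {
      /* handle_options() may modify *argv; defaults_argv is untouched. */
      return handle_options(argc, argv, my_long_options, get_one_option);
    }

    int main(int argc, char **argv)
    {
      MY_INIT(argv[0]);
      if (load_defaults("my", load_default_groups, &argc, &argv))
        exit(1);                        /* error check added by this fix */
      defaults_argv= argv;              /* remember the pre-parse pointer */
      if (parse_args(&argc, &argv))
        exit(1);
      /* ... do the work ... */
      free_defaults(defaults_argv);     /* frees the MEM_ROOT correctly */
      return 0;
    }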
-
- 05 Feb, 2010 1 commit
-
-
Luis Soares authored
into slow log While processing a statement, down the mysql_parse execution stack, thd->enable_slow_log can be assigned from opt_log_slow_admin_statements, depending on whether one is executing an administrative statement (such as ALTER TABLE, OPTIMIZE, ANALYZE, etc.) or not. This can have an impact on slow logging for statements that are executed after an administrative statement has completed. When executing statements directly from the user this is fine, because thd->enable_slow_log is reset right at the beginning of the dispatch_command function, i.e. every time a new statement is set to execute. On the other hand, for the slave SQL thread (sql_thd) the story is a bit different. In SBR the sql_thd applies statements by calling mysql_parse and, right after, calls the log_slow_statement function to log them if they take too long. Calling mysql_parse directly is fine, but it also means that the dispatch_command function is bypassed. As a consequence, thd->enable_slow_log does not get a chance to be reset before the next statement executed by the sql_thd. If the statement just executed by the sql_thd was an administrative statement and logging of admin statements was disabled, sql_thd->enable_slow_log will be set to 0 (disabled) from that moment on. End result: the sql_thd stops logging slow statements. We fix this by resetting sql_thd->enable_slow_log to the value of opt_log_slow_slave_statements right after log_slow_statement is called by the sql_thd.
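A minimal sketch of the fix in the slave SQL thread's statement-apply path (the call shapes are simplified; only the final reset line is the point of the patch):

    /* Slave SQL thread, statement-based event application. */
    mysql_parse(thd, query, query_length, &found_semicolon); /* apply event */
    log_slow_statement(thd);
    /* dispatch_command() is bypassed on this path, so reset the flag here;
       otherwise an admin statement executed with slow-admin logging disabled
       leaves enable_slow_log == 0 for every statement that follows. */
    thd->enable_slow_log= opt_log_slow_slave_statements;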
-
- 28 Jan, 2010 1 commit
-
-
Davi Arnaut authored
Rename method so as not to hide a base class method. Reorder attribute initialization. Remove an unused variable. Rework code to silence a warning due to an assignment used as a truth value.
-
- 25 Jan, 2010 1 commit
-
-
Andrei Elkin authored
When replicating from a 4.1 master to a 5.0 slave, START SLAVE UNTIL can stop too late. The event length needed to calculate the beginning of an event did not correspond to the master's genuine information at the event's execution time: that piece of info had been changed during the event's relay-logging, due to the binlog_version < 4 event conversion performed by the IO thread. Fixed by storing the master's genuine Query_log_event size into a new status variable when the event is relay-logged. The stored info is extracted at event execution time and used to calculate the correct start position of the event in the until-pos stopping routine. The new status variable is only active when the event comes from a master of version < 5.0 (binlog_version < 4).
-
- 24 Jan, 2010 1 commit
-
-
He Zhenxing authored
-
- 19 Jan, 2010 1 commit
-
-
Luis Soares authored
A PB2 run uncovered an issue that needs further analysis.
-
- 14 Jan, 2010 1 commit
-
-
Luis Soares authored
BUG#49481: RBR: MyISAM and bit fields may cause slave to stop on delete: can't find record BUG#49482: RBR: Replication may break on deletes when MyISAM tables + char field are used When using MyISAM tables, despite the fact that the null bit is set for some fields, their old value is still in the row. This can cause the comparison of records to fail when the slave is doing an index or range scan. We fix this by avoiding memcmp for MyISAM tables when comparing records. Additionally, when comparing field by field, we first check whether both fields are not null, and only then compare them. If just one field is null, we return failure immediately. If both fields are null, we move on to the next field.
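A simplified sketch of the field-by-field comparison described above (not the actual record_compare() code; the Field method names follow the server's conventions but should be treated as illustrative):

    /* Compare the two row images field by field, honouring NULL bits,
       instead of memcmp()'ing whole records: MyISAM may leave stale bytes
       behind a field whose NULL bit is set. */
    static bool rows_match(TABLE *table)
    {
      for (Field **fp= table->field; *fp; fp++)
      {
        Field *f= *fp;
        bool null_a= f->is_null_in_record(table->record[0]);
        bool null_b= f->is_null_in_record(table->record[1]);
        if (null_a != null_b)
          return false;               /* only one side is NULL: mismatch */
        if (null_a)
          continue;                   /* both NULL: ignore stale bytes   */
        if (f->cmp_binary_offset(table->s->rec_buff_length))
          return false;               /* both set and the values differ  */
      }
      return true;
    }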
-
- 06 Jan, 2010 3 commits
-
-
Luis Soares authored
For tables with metadata sizes ranging from 251 to 255, the size of the event data (m_data_size) was being improperly calculated in the Table_map_log_event constructor. This was due to the fact that when writing the Table_map_log_event body (in Table_map_log_event::write_data_body) a call to net_store_length is made to pack m_field_metadata_size. net_store_length uses *one* byte for storing m_field_metadata_size when it is smaller than 251, but *three* bytes when it exceeds that value. BUG#42749 had already pinpointed and fixed this, but the fix was incomplete, as the calculation in the Table_map_log_event constructor used 255 instead of 251 as the threshold for incrementing m_data_size by three. Hence, the window for a mismatch between the number of bytes written and the number of bytes accounted for in the event length (m_data_size) was left open for m_field_metadata_size values between 251 and 255. We fix this by changing the condition in the Table_map_log_event constructor to match the one in net_store_length, i.e., add one byte if m_field_metadata_size < 251 and three bytes if it exceeds this value.
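A sketch of the corrected size accounting, mirroring how net_store_length packs small integers (threshold values are the ones named in the message):

    /* Account for exactly the number of bytes net_store_length() writes
       for the field-metadata length: lengths below 251 take one byte,
       lengths from 251 up to 65535 take the three-byte form. */
    size_t packed_length_bytes(unsigned long field_metadata_size)
    {
      return (field_metadata_size < 251) ? 1 : 3;
    }
    /* In the constructor:  m_data_size+= packed_length_bytes(m_field_metadata_size);
       The bug was testing against 255 here while the writer switches format at 251. */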
- 31 Dec, 2009 1 commit
-
-
In statement-based or mixed-mode replication, using DROP TEMPORARY TABLE to drop multiple tables causes different errors on master and slave when one or more of these tables do not exist, because when the statement is executed on the slave, IF EXISTS is automatically added to it to ignore all ER_BAD_TABLE_ERROR errors. To fix the problem, do not add IF EXISTS when executing DROP TEMPORARY TABLE on the slave, and clear the ER_BAD_TABLE_ERROR error after execution if the query does not expect any errors.
-
- 15 Dec, 2009 1 commit
-
-
A 'LOAD DATA CONCURRENT [LOCAL] INFILE ...' statement was binlogged only as 'LOAD DATA [LOCAL] INFILE ...' in SBR and MBR. As a result, if replication is on, queries on slaves can be blocked by the replication SQL thread. This patch writes 'CONCURRENT' into the log event if the 'CONCURRENT' option is present in the original statement, in SBR and MBR.
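For illustration (table and file names are made up), the statement executed on the master and what used to reach the binary log:

    -- Executed on the master:
    LOAD DATA CONCURRENT LOCAL INFILE '/tmp/t1.txt' INTO TABLE t1;
    -- Binlogged (and therefore executed on the slave) before this patch:
    LOAD DATA LOCAL INFILE '/tmp/t1.txt' INTO TABLE t1;
    -- Without CONCURRENT the load holds a write lock for its duration, so
    -- reads on the slave can be blocked by the replication SQL thread.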
-
- 24 Nov, 2009 1 commit
-
-
Luis Soares authored
Valgrind reports a conditional jump that depends on uninitialized data while doing a LOAD DATA, and for this test case only. This test case verifies that loading data from a 4.0 or 4.1 instance into a 5.1 instance works. As such it handles an old binary log with a different set of events than the current 5.1 codebase uses. See the following reference for details: http://forge.mysql.com/wiki/MySQL_Internals_Binary_Log#LOAD_DATA_INFILE_Events Problem: The server is handling an Execute_load_log_event, which results in reading a Load_log_event from the binary log and applying it. When applying the Load_log_event, some variable setup is done and then mysql_load is called. Late in mysql_load execution, if not in row mode logging, the event is binlogged via write_execute_load_query_log_event. In write_execute_load_query_log_event, thd->lex->local_file is inspected. The problem is that it has not been set earlier in the execution stack, which causes valgrind to report the warning. Fix: We fix this by initializing thd->lex->local_file to the value of Load_log_event::local_fname when lex_start is called inside Load_log_event::do_apply_event.
-
- 09 Nov, 2009 1 commit
-
-
Luis Soares authored
In function log_event.cc:Query_log_event::write, there was a cast that triggered undefined behavior. The offending cast is the following: write_str_with_code_and_len((char **)(&start), catalog, catalog_len, Q_CATALOG_NZ_CODE); This results in calling write_str_with_code_and_len with the first argument pointing to a (char **), while "start" is itself a pointer to uchar (uchar *). Inside write_str_with_..., the content of start is then updated: (*dst)+= len; The instruction above should cause the (*dst) pointer (i.e., the "start" argument from the caller's point of view, which actually points to uchar instead of char) to be advanced by catalog_len. However, the cast breaks strict-aliasing rules, ultimately causing the increment and assignment to behave unexpectedly. We fix this by removing the cast and making the types match.
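A reduced illustration of the strict-aliasing hazard and of the fix (the helper name comes from the message; the surrounding code is simplified):

    /* Before: the helper took char**, so a caller holding an uchar* had
       to cast the address of its pointer. */
    static void write_str_with_code_and_len(char **dst, const char *str,
                                            unsigned int len, unsigned char code);
    /* Call site (undefined behaviour, &start has type uchar**, not char**):
         write_str_with_code_and_len((char **)(&start), catalog, catalog_len,
                                     Q_CATALOG_NZ_CODE);
       Under strict aliasing the compiler may assume *dst and start do not
       alias, so the "(*dst)+= len;" inside the helper is not guaranteed to
       be visible through start afterwards. */

    /* After: make the parameter type match the caller's pointer type and
       drop the cast. */
    static void write_str_with_code_and_len(unsigned char **dst, const char *str,
                                            unsigned int len, unsigned char code);
    /*   write_str_with_code_and_len(&start, catalog, catalog_len,
                                     Q_CATALOG_NZ_CODE);                     */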
-
- 23 Oct, 2009 1 commit
-
- 22 Oct, 2009 2 commits
-
-
Alfranio Correia authored
Backporting BUG#43789 to mysql-5.1-bugteam The replication was generating corrupted data and warning messages on Valgrind, and aborting in debug mode, while replicating a "null" into a "not null" field. Specifically, the unpack_row routine was considering the slave's table definition and trying to retrieve a field value where there was nothing to be retrieved, ignoring the fact that the value was defined as "null" by the master. To fix the problem, we proceed as follows: 1 - If it is not STRICT sql_mode, implicit default values are used, regardless of whether it is a multi-row or single-row statement. 2 - However, if it is STRICT mode, then we do the following: 2.1 If it is a transactional engine, we do a rollback on the first NULL that is to be set into a NOT NULL column and return an error. 2.2 If it is a non-transactional engine and it is the first row to be inserted in a multi-row statement, we also return the error. Otherwise, we proceed with the execution, use implicit default values and print out warning messages. Unfortunately, the current patch cannot mimic the behavior shown by the master for updates on multiple tables and multi-row inserts. This happens because such statements are unfolded into different row events. For instance, considering the following updates and strict mode: (master) create table t1 (a int); create table t2 (a int not null); insert into t1 values (1); insert into t2 values (2); update t1, t2 SET t1.a=10, t2.a=NULL; t1 would have (10) and t2 would have (0) as this would be handled as a multi-row update. On the other hand, if we had the following updates: (master) create table t1 (a int); create table t2 (a int); (slave) create table t1 (a int); create table t2 (a int not null); (master) insert into t1 values (1); insert into t2 values (2); update t1, t2 SET t1.a=10, t2.a=NULL; On the master t1 would have (10) and t2 would have (NULL). On the slave, t1 would have (10) but the update on t2 would fail.
-
Alfranio Correia authored
Backporting BUG#38173 to mysql-5.1-bugteam The cause of the bug was behaviour incompatible with the master side. An INSERT query on the master is allowed to insert into a table without specifying values for DEFAULT-less fields if sql_mode is not strict. Fixed by having the SQL thread check sql_mode to decide how to react: a non-strict sql_mode should allow the Write_rows event to complete. todo: warnings could be shown via SHOW SLAVE STATUS; still, this is a separate, rather general issue of how to show warnings for the slave threads.
-
- 16 Oct, 2009 1 commit
-
-
Georgi Kodinov authored
Implemented the server infrastructure for the fix: 1. Added a function LEX_STRING *thd_query_string(THD *thd) to return a LEX_STRING structure instead of char *. This is the function that must be called in InnoDB instead of thd_query(). 2. Did some encapsulation in THD: aggregated thd_query and thd_query_length into a LEX_STRING and made accessor and mutator methods for easy code updating. 3. Updated the server code to use the new methods where applicable.
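A sketch of the encapsulation described in point 2 (member and accessor names are illustrative simplifications, not the exact server declarations):

    /* Keep the query text and its length together so readers always see a
       consistent pair, and expose them through accessors. */
    typedef struct st_mysql_lex_string { char *str; size_t length; } LEX_STRING;

    class THD
    {
      LEX_STRING query_string;          /* replaces query + query_length   */
    public:
      char  *query() const         { return query_string.str; }
      size_t query_length() const  { return query_string.length; }
      void set_query(char *str, size_t len)
      { query_string.str= str; query_string.length= len; }
    };

    /* Plugin-visible accessor returning the pair instead of a bare char*: */
    LEX_STRING *thd_query_string(THD *thd);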
-
- 14 Oct, 2009 1 commit
-
-
The BINLOG statement was sharing too much code with the slave SQL thread, introduced with the patch for Bug#32407. This caused statements to be logged with the wrong server_id: the id stored inside the events of the BINLOG statement rather than the id of the running server. Fix by rearranging the code a bit so that only the relevant parts are executed by the BINLOG statement, and the server_id of the server executing the statements is not overridden by the server_id stored in the 'format description BINLOG statement'.
-
- 09 Oct, 2009 1 commit
-
-
He Zhenxing authored
Commit the non-NDB specific part (originated by frazer) to 5.1 mainline.
-
- 28 Sep, 2009 1 commit
-
-
Tatiana A. Nurnberg authored
"load data" statements were written to the binlog as a mix of the original statement and bits recreated from parse-info. This relied on implementation details and broke with IGNORE_SPACES and versioned comments. We now completely resynthesize the query for LOAD DATA for binlog (which among other things normalizes them somewhat with regard to case, spaces, etc.). We have already parsed the query properly, so we make use of that rather than mix-and-match string literals and parsed items. This should make us safe with regard to versioned comments, even those spanning multiple tokens. Also no longer affected by IGNORE_SPACES.
-
- 27 Sep, 2009 1 commit
-
-
Luis Soares authored
HA_ERR_WRONG_INDEX In RBR, disabling keys on a slave table will break replication when updating or deleting a record. When the slave thread tries to find the row by searching in the storage engine, it checks whether the table has a key or not. If it has one, the slave thread uses it to search for the record. Nonetheless, the slave only checks whether the key exists; it does not verify that it is active. Should the key be disabled (e.g., the DBA has issued an ALTER TABLE ... DISABLE KEYS), the search results in the error HA_ERR_WRONG_INDEX. This patch addresses the issue by making the slave thread also check whether the key is active before actually using it.
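A sketch of the extra check in the slave's row-lookup path (the key bitmap name follows the server's TABLE_SHARE convention, but treat the snippet as illustrative):

    /* Pick a key for searching the row only if it is both defined and
       currently enabled; a key disabled via ALTER TABLE ... DISABLE KEYS
       still exists in the definition but cannot be used for the scan. */
    static int usable_key_for_search(TABLE *table)
    {
      for (unsigned int k= 0; k < table->s->keys; k++)
      {
        if (table->s->keys_in_use.is_set(k))   /* exists AND is active */
          return (int) k;
      }
      return -1;                               /* fall back to a table scan */
    }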
-
- 10 Sep, 2009 1 commit
-
-
In RBR, there is an inconsistency between slave and master. When an INSERT statement that includes an auto_increment field is executed, the storage engine on the master checks the value of the auto_increment field: if its value is NULL or empty, it generates a sequence number and replaces the value. If the field's value is 0, the storage engine treats it like NULL unless NO_AUTO_VALUE_ON_ZERO is set in SQL_MODE. In contrast, if the field's value is 0, the storage engine on the slave always generates a new sequence number, whether or not NO_AUTO_VALUE_ON_ZERO is set in SQL_MODE (the slave SQL thread's SQL_MODE is always kept consistent with the master's). Another variable is related to this bug: whether a sequence number is generated is decided by the values of table->auto_increment_field_not_null and SQL_MODE (whether it includes MODE_NO_AUTO_VALUE_ON_ZERO). On the slave, table->auto_increment_field_not_null is FALSE, which causes this bug to appear.
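An example of the divergence (made-up table; NO_AUTO_VALUE_ON_ZERO is set on the master, and the slave SQL thread inherits the same SQL_MODE):

    SET SQL_MODE = 'NO_AUTO_VALUE_ON_ZERO';
    CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY);
    INSERT INTO t1 VALUES (0);
    -- Master keeps the literal 0, because NO_AUTO_VALUE_ON_ZERO is honoured.
    -- Before the fix, the slave applying the corresponding row event would
    -- generate a fresh sequence number instead of 0, so master and slave
    -- ended up with different data.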
-
- 27 Aug, 2009 1 commit
-
-
Alfranio Correia authored
Slave does not correctly handle "expected errors", leading to inconsistencies between the master and slave. Specifically, when a statement changes both transactional and non-transactional tables, the transactional changes are automatically rolled back on the master, but the slave ignores the error and does not roll them back, thus leading to inconsistencies. To fix the problem, we automatically roll back a statement that fails on the slave; note, however, that the transaction is not rolled back unless a "rollback" command is in the relay log file.
-
- 13 Aug, 2009 1 commit
-
-
Alfranio Correia authored
In STATEMENT based replication, a statement that failed on the master but that updated non-transactional tables is written to the binary log with the error code appended to it. On the slave, the statement is executed and the same error is expected. However, when an "expected error" did not happen on the slave and was either ignored or was related to a concurrency issue on the master, the slave did not roll back the effects of the statement, and as such inconsistencies might happen. To fix the problem, we automatically roll back a statement that should have failed on a slave but succeeded, and whose expected failure is either ignored or stems from a concurrency issue on the master.
-
- 12 Aug, 2009 1 commit
-
-
The replication SQL thread does not properly set the database default charset in thd->variables.collation_database when executing a LOAD DATA binlog event. This bug can be repeated by using the "LOAD DATA" command in STATEMENT mode. This patch adds code to find the default character set of the current database and assign it to thd->db_charset when the slave server begins to execute a relay log event. A test for this bug is added to rpl_loaddata_charset.test.
-
- 24 Jul, 2009 1 commit
-
-
Gleb Shchepa authored
procedures causes crashes! The problem in that bug report was mostly fixed by the patch for bug 38691. However, the attached test case exposed another crash/valgrind warning: a SHOW PROCESSLIST query accesses freed memory of an SP instruction running in a parallel connection. Changes of thd->query/thd->query_length in dangerous places are now guarded with the per-thread LOCK_thd_data mutex (the THD::LOCK_delete mutex has been renamed to THD::LOCK_thd_data).
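A sketch of the guarded update pattern (the mutex and member names are taken from the message; the setter and make_local_copy are hypothetical illustrations):

    /* Writers must hold LOCK_thd_data while swapping the query buffer, and
       SHOW PROCESSLIST takes the same mutex before reading it, so it can
       never observe a pointer that is about to be freed. */
    void thd_set_query(THD *thd, char *new_query, uint new_length)
    {
      pthread_mutex_lock(&thd->LOCK_thd_data);
      thd->query= new_query;
      thd->query_length= new_length;
      pthread_mutex_unlock(&thd->LOCK_thd_data);
    }

    /* Reader side (SHOW PROCESSLIST), copying under the same lock: */
    /*   pthread_mutex_lock(&tmp->LOCK_thd_data);
         if (tmp->query)
           thd_info->query= make_local_copy(tmp->query, tmp->query_length);
         pthread_mutex_unlock(&tmp->LOCK_thd_data);                          */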
-
- 06 Jul, 2009 1 commit
-
-
Alfranio Correia authored
timeout In STMT and MIXED modes, a statement that changes both non-transactional and transactional tables must be written to the binary log whenever there are changes to non-transactional tables. This means that the statement gets into the binary log even when the changes to the transactional tables fail. In particular, in the presence of a failure such a statement is annotated with the error number and wrapped in a begin/rollback. On the slave, while applying the statement, the same failure is expected and the rollback prevents the transactional changes from being persisted. Unfortunately, statements that fail due to concurrency issues (e.g. deadlocks, timeouts) are logged in the same way, causing the slave to stop, as the statements are applied sequentially by the SQL Thread. To fix this bug, we automatically ignore concurrency failures on the slave. Specifically, the following failures are ignored: ER_LOCK_WAIT_TIMEOUT, ER_LOCK_DEADLOCK and ER_XA_RBDEADLOCK.
-
- 29 Jun, 2009 1 commit
-
-
Staale Smedseng authored
-
- 11 Jun, 2009 1 commit
-
-
Alfranio Correia authored
While reading a binary log that is in use by a master or was not properly closed, most likely due to a crash, the following warning message was printed out: "Warning: this binlog was not closed properly. Most probably mysqld crashed writing it.". This was scaring our users, as the message did not take into account the possibility that the file is simply in use by the master. To avoid unnecessarily scaring our users, we replace the original message with the following one: "Warning: this binlog is either in use or was not closed properly."
-
- 09 Jun, 2009 2 commits
-
-
Staale Smedseng authored
with gcc 4.3.2. Compiling MySQL with gcc 4.3.2 and later produces a number of warnings, many of which are new with the recent compiler versions. This bug will be resolved in more than one patch to limit the size of changesets. This is the first patch, fixing a number of the warnings, predominantly "suggest using parentheses around && in ||", and empty for and while bodies.
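Two of the warning patterns mentioned, with the shape of the fixes (purely illustrative snippets, not code from the tree):

    static int drain(int a, int b, int c)
    {
      int n= 0;
      /* warning: suggest parentheses around && within ||             */
      /*   if (a && b || c) n++;            -- before                 */
      if ((a && b) || c)                     /* after: grouping explicit */
        n++;
      /* warning for an empty loop body written as "while (...) ;"    */
      while (c-- > 0)
      {}                                     /* after: explicitly empty  */
      return n;
    }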
-
Staale Smedseng authored
with gcc 4.3.2. Compiling MySQL with gcc 4.3.2 and later produces a number of warnings, many of which are new with the recent compiler versions. This bug will be resolved in more than one patch to limit the size of changesets. This is the first patch, fixing a number of the warnings, predominantly "suggest using parentheses around && in ||", and empty for and while bodies.
-
- 31 May, 2009 1 commit
-
-
He Zhenxing authored
BEGIN/COMMIT/ROLLBACK were subject to the replication db rules, which caused the boundary of a transaction not to be recognized correctly when these queries were ignored by the rules. Fixed the problem by skipping the replication db rules for these statements.
-
- 30 May, 2009 1 commit
-
-
He Zhenxing authored
Make the callers of the Query_log_event and Execute_load_log_event constructors and of THD::binlog_query provide the error code, instead of having the constructors figure out the error code.
-
- 12 May, 2009 1 commit
-
-
Luis Soares authored
"freeing items" The calculation of the table map log event in the event constructor was one byte shorter than what would be actually written. This would lead to a mismatch between the number of bytes written and the event end_log_pos, causing bad event alignment in the binlog (corrupted binlog) or in the transaction cache while fixing positions (MYSQL_BIN_LOG::write_cache). This could lead to impossible to read binlog or even infinite loops in MYSQL_BIN_LOG::write_cache. This patch addresses this issue by correcting the expected event length in the Table_map_log_event constructor, when the field metadata size exceeds 255.
-
- 11 May, 2009 1 commit
-
-
Mats Kindahl authored
In the output from mysqlbinlog, incident log events were represented as just a comment. Since the incident log event represents an incident that could cause the contents of the database to change without being logged to the binary log, it means that if the SQL is applied to a server, it could potentially lead to the databases being out of sync. In order to handle that, this patch adds the statement "RELOAD DATABASE" to the SQL output for the incident log event. This requires a DBA to edit the file and handle the case as appropriate before applying the output to a server.
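Roughly what the relevant part of the mysqlbinlog output looks like after this change (the exact wording of the comment may differ):

    # incident: LOST_EVENTS
    RELOAD DATABASE; # Shall generate syntax error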
-
- 21 Apr, 2009 1 commit
-
-
Alfranio Correia authored
The rpl_binlog_corruption test case was injecting failures, specifically incidents with invalid numbers, to see whether replication was failing gracefully. However, this test was causing the following warning message in Valgrind: "Conditional jump or move depends on uninitialised value(s)". The patch fixes the problem by correctly initializing the m_incident number.
-