- 08 May, 2012 1 commit
-
-
Sunanda Menon authored
-
- 07 May, 2012 1 commit
-
-
Venkata Sidagam authored
CAUSES RESTORE PROBLEM

Problem Statement:
------------------
mysqldump does not include dump statements for the general_log and slow_log tables. That is a consequence of the fix for Bug#26121. Hence, after dropping the mysql database and applying the dump with logging enabled, "'general_log' table not found" errors are logged in the server log file.

Analysis:
---------
As part of the fix for Bug#26121 we skipped dumping the general_log and slow_log tables, because dumping the data of those tables takes LOCKS, which are not allowed on log tables.

Fix:
----
Instead of taking both the metadata and the data dump for those tables, take only the metadata dump, which does not need LOCKS. As part of fixing the issue we came up with the algorithm below.

Design before the fix:
1) The mysql database has tables like db, event, ..., general_log, ..., slow_log, ...
2) Skip general_log and slow_log while preparing the table list.
3) Take the TL_READ lock on the tables in the table list and run 'show create table'.
4) Release the lock.

Design with the fix:
1) The mysql database has tables like db, event, ..., general_log, ..., slow_log, ...
2) Skip general_log and slow_log while preparing the table list.
3) Explicitly run 'show create table' for general_log and slow_log.
4) Take the TL_READ lock on the tables in the table list and run 'show create table'.
5) Release the lock.

While taking the metadata dump for general_log and slow_log, "CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS". This is because we skip "DROP TABLE" for those tables: "DROP TABLE" fails for them when logging is enabled. The customer applies the dump with logging enabled, so a dump containing "DROP TABLE" for these tables would fail. Hence, the "DROP TABLE" statements for those tables were removed.

After the fix we may still observe "Table 'mysql.general_log' doesn't exist" errors initially. That is because in the customer scenario the mysql database is dropped while logging is enabled, so those errors are expected. Once the dump taken before "drop database mysql" has been applied, the errors are gone.
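A minimal sketch of the per-table decision described above (hypothetical helper, not the actual mysqldump code): log tables get a schema-only, IF NOT EXISTS dump with no DROP TABLE and no data.

    #include <cstdio>
    #include <string>

    // Hypothetical helper: is this one of the server log tables that must be
    // dumped schema-only?
    static bool is_log_table(const std::string &db, const std::string &table)
    {
      return db == "mysql" && (table == "general_log" || table == "slow_log");
    }

    // Sketch of the per-table dump decision; create_stmt is the output of
    // SHOW CREATE TABLE for this table.
    static void dump_table(const std::string &db, const std::string &table,
                           const std::string &create_stmt)
    {
      if (is_log_table(db, table))
      {
        // Schema only: no DROP TABLE, no data, and IF NOT EXISTS so the dump
        // also applies cleanly when the log table already exists.
        std::string stmt = create_stmt;
        stmt.replace(0, 12, "CREATE TABLE IF NOT EXISTS");   // "CREATE TABLE" is 12 chars
        std::printf("%s;\n", stmt.c_str());
        return;
      }
      std::printf("DROP TABLE IF EXISTS `%s`.`%s`;\n", db.c_str(), table.c_str());
      std::printf("%s;\n", create_stmt.c_str());
      // ... followed by the locked data dump for ordinary tables ...
    }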
-
- 27 Apr, 2012 1 commit
-
-
Yasufumi Kinoshita authored
Fixed so that the timeout is not checked during CHECK TABLE.
-
- 26 Apr, 2012 1 commit
-
-
irana authored
-
- 23 Apr, 2012 2 commits
-
-
Andrei Elkin authored
-
Andrei Elkin authored
rpl_auto_increment_bug45679.test is refined because Bug11749859-39934 is not fixed in 5.1.
-
- 20 Apr, 2012 2 commits
-
-
Nuno Carvalho authored
The function mysql_show_binlog_events has a local stack variable 'LOG_INFO linfo;', which is assigned to thd->current_linfo; however, this variable goes out of scope and is destroyed before thd->current_linfo is cleaned up. The problem is solved by moving 'LOG_INFO linfo;' to function scope.
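A simplified illustration of the bug class (hypothetical names; not the server code): publishing the address of a block-scoped variable lets it dangle once the block ends, whereas a function-scoped variable lives until the function returns.

    #include <cstdint>

    struct LogInfo   { uint64_t pos; };
    struct ThreadCtx { LogInfo *current_linfo; };

    void broken(ThreadCtx *thd)
    {
      {
        LogInfo linfo{0};             // block scope
        thd->current_linfo = &linfo;  // the address escapes the block
      }                               // linfo is destroyed here
      // thd->current_linfo now dangles; any later read is undefined behaviour.
    }

    void fixed(ThreadCtx *thd)
    {
      LogInfo linfo{0};               // function scope: lives until return
      thd->current_linfo = &linfo;
      // ... use it, then unlink it before the function returns ...
      thd->current_linfo = nullptr;
    }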
-
Andrei Elkin authored
BUG#11761686: the insert_id event is not filtered. Two issues are covered.

An INSERT into an auto-increment field which is not the first part of a composite primary key is unsafe by the auto-increment logging design. The case is specific to the MyISAM engine, because InnoDB does not allow such a table definition. However, no warning was issued and no row-format logging was done in MIXED mode; that is fixed.

Int-, Rand- and User-var log events were not filtered along with their parent query, which made it possible for them to corrupt the execution context of the following query. Fixed by deferring their execution until the parent query.

******
Bug#11754117 Post review fixes.
-
- 19 Apr, 2012 1 commit
-
-
Mayank Prasad authored
Reason: This is a regression introduced by the code refactoring from 5.0 to 5.1.

Issue: While executing "SHOW TABLES", lex->verbose was checked in order to avoid opening FRM files to get the table type. For "SHOW FULL TABLES", lex->verbose is true, indicating that the table type is required. In 5.0 this check was present, but it went missing in >= 5.5.

Fix: Added the required check to avoid opening FRM files unnecessarily for "SHOW TABLES".
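A minimal sketch of the restored check (a hypothetical stand-in for the parser state; not the actual server code):

    struct LexSketch { bool verbose; };   // stand-in for the parsed statement flags

    // Returns whether the FRM file has to be opened for this SHOW statement.
    // lex.verbose is set for SHOW FULL TABLES (the Table_type column is
    // requested) and clear for plain SHOW TABLES, where the name alone is
    // enough and the FRM read can be skipped.
    static bool need_frm_for_show_tables(const LexSketch &lex)
    {
      return lex.verbose;
    }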
-
- 18 Apr, 2012 4 commits
-
-
Tor Didriksen authored
-
Tor Didriksen authored
Stored program cache produces wrong result in same THD.
-
Nuno Carvalho authored
Currently SHOW MASTER LOGS and SHOW BINARY LOGS require the SUPER privilege. Monitoring tools (such as MEM) often want to check this output - for instance, MEM generates the SUM of the sizes of the logs reported here and puts that in the Replication overview within the MEM Dashboard.

However, because of the SUPER requirement, these tools often have an account that holds open the connection whilst monitoring, and can lock out administrators when the server gets overloaded and reaches max_connections - there is already another SUPER-privileged account connected, the "monitor".

Since SHOW MASTER STATUS, and all other replication-related statements, work with either REPLICATION CLIENT or SUPER privileges, this worklog makes SHOW MASTER LOGS and SHOW BINARY LOGS consistent with that and allows both of these commands with either SUPER or REPLICATION CLIENT. This means monitoring tools no longer require the SUPER privilege, which is safer in overloaded situations as well as more secure, since lighter privileges can be given to users of such tools or scripts.
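A minimal sketch of the relaxed check (stand-in privilege bits and THD; the real server uses its own access-mask constants and global-access check):

    // Stand-ins for the server's privilege bits (illustrative values only).
    typedef unsigned long acl_mask;
    static const acl_mask SUPER_ACL       = 1UL << 0;
    static const acl_mask REPL_CLIENT_ACL = 1UL << 1;

    struct ThdSketch { acl_mask master_access; };   // hypothetical, simplified THD

    // Before the fix SHOW MASTER LOGS / SHOW BINARY LOGS demanded SUPER;
    // after the fix either SUPER or REPLICATION CLIENT is enough.
    static bool allow_show_binary_logs(const ThdSketch &thd)
    {
      return (thd.master_access & (SUPER_ACL | REPL_CLIENT_ACL)) != 0;
    }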
-
Chaithra Gopalareddy authored
ORDER BY COUNT(*) LIMIT.

PROBLEM:
With respect to the problem in the bug description, we exhibit different behaviour for the two tables presented because InnoDB statistics (rec_per_key in this case) are updated for the first table and not for the second one. As a result the query plan gets changed in test_if_skip_sort_order to use an 'index' scan; hence the difference in the EXPLAIN output. (NOTE: we can reproduce the problem with the first table by reducing the number of tuples and changing the table structure.)

The varied output w.r.t. the query on the second table is a result of the query plan change. When a query plan is changed to use an 'index' scan, after the call to test_if_skip_sort_order we set keyread to TRUE immediately. If for some reason we drop this index scan for a filesort later on, we fetch only the keys, not the entire tuple. As a result we see junk values in the result set.

The code flow is as follows:
Call test_if_skip_sort_order
 - Choose an index to give sorted output
 - If this is a covering index, set keyread to TRUE
 - Set the scan to INDEX scan
Call test_if_skip_sort_order a second time
 - The index is not chosen (note that we do not pass the actual limit value the second time; hence we do not choose the index scan the second time, which in itself is a bug fixed in 5.6 with WL#5558)
 - goto filesort
Call filesort
 - Create a quick range on a different index
 - Since keyread is set to TRUE, we fetch only the columns of the index
 - As a result, the required columns are not fetched

FIX:
Remove the call to set_keyread(TRUE) from test_if_skip_sort_order. The access function, 'join_read_first' or 'join_read_last', calls set_keyread anyway.
-
- 17 Apr, 2012 1 commit
-
-
Georgi Kodinov authored
-
- 10 Apr, 2012 1 commit
-
-
Georgi Kodinov authored
-
- 09 Apr, 2012 1 commit
-
-
Venkata Sidagam authored
This bug is a duplicate of Bug #31173, which was pushed to mysql-trunk 5.6 on 4th Aug, 2010. This is just a back-port of that fix.
-
- 06 Apr, 2012 2 commits
-
-
Mayank Prasad authored
Background: In mysql-5.1, as part of the fix for bug#47485, code was changed in the mysql client (libmysql/libmysql.c), but the corresponding code was not changed for embedded mysql. With that change, after execution of a statement, mysql_stmt_store_result() checks for mysql->state to be MYSQL_STATUS_STATEMENT_GET_RESULT instead of MYSQL_STATUS_GET_RESULT (as before).

Reason: In the embedded mysql code, after execution, mysql->state was not set to MYSQL_STATUS_STATEMENT_GET_RESULT, so an OUT_OF_SYNC error was thrown.

Fix: Fixed the code in libmysqld/lib_sql.cc to set mysql->state to MYSQL_STATUS_STATEMENT_GET_RESULT after execution.
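A simplified sketch of the state handshake described above (stand-in types; not the actual libmysql/libmysqld code): the embedded execution path must leave the handle in the same state the client library now expects.

    // Stand-ins for the client library's connection states (illustrative only).
    enum mysql_status_sketch
    {
      STATUS_READY,
      STATUS_GET_RESULT,
      STATUS_STATEMENT_GET_RESULT
    };

    struct MysqlSketch { mysql_status_sketch status; };

    // libmysqld side: after executing a prepared statement ...
    static void embedded_stmt_execute_done(MysqlSketch *mysql)
    {
      mysql->status = STATUS_STATEMENT_GET_RESULT;   // the missing assignment
    }

    // libmysql side (already changed by the bug#47485 fix): anything else is
    // reported as "commands out of sync".
    static bool stmt_store_result_allowed(const MysqlSketch *mysql)
    {
      return mysql->status == STATUS_STATEMENT_GET_RESULT;
    }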
-
Georgi Kodinov authored
Fixed an improper type conversion on return that can make the server accept logins with a wrong password.
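The message does not show the code, but the bug class it describes can be sketched as follows (stand-in names; illustrative only): a wide comparison result truncated to a narrow return type can collapse a non-zero value to zero, turning "passwords differ" into "passwords match".

    #include <cstring>

    typedef char my_bool;   // narrow boolean type, as in the client library

    // Broken pattern: memcmp() may legally return any non-zero int for unequal
    // buffers; truncating that int to a single byte can turn it into 0, i.e.
    // "buffers equal", letting a wrong password through.
    static my_bool scramble_differs_broken(const unsigned char *a,
                                           const unsigned char *b, size_t n)
    {
      return memcmp(a, b, n);          // implicit int -> char truncation
    }

    // Safe pattern: collapse the result to 0/1 before it is narrowed.
    static my_bool scramble_differs_fixed(const unsigned char *a,
                                          const unsigned char *b, size_t n)
    {
      return memcmp(a, b, n) != 0;
    }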
-
- 04 Apr, 2012 1 commit
-
-
Sergey Glukhov authored
Bug#13639204 64111: CRASH ON SELECT SUBQUERY WITH NON UNIQUE INDEX

The crash happened due to a wrong calculation of the key length during creation of a reference for a sort order index. The problem is that keyuse->used_tables can have OUTER_REF_TABLE_BIT enabled, but the used_tables parameter (of the create_ref_for_key() function) does not have it. So key parts which have OUTER_REF_TABLE_BIT are omitted, which can lead to an incorrect key length calculation (zero key length).
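An illustrative model of the dependency check involved (hypothetical names and bit position; not the optimizer code): a key part is only usable when every table it depends on is available, so forgetting OUTER_REF_TABLE_BIT in the available set silently drops outer-reference key parts.

    #include <cstdint>

    typedef uint64_t table_map;
    static const table_map OUTER_REF_TABLE_BIT = table_map(1) << 63;   // illustrative position

    // A key part that depends on an outer reference carries OUTER_REF_TABLE_BIT
    // in part_depends_on; if the caller's available_tables lacks that bit, the
    // part is judged unusable and skipped, which can leave a zero-length key.
    static bool key_part_usable(table_map part_depends_on, table_map available_tables)
    {
      return (part_depends_on & ~available_tables) == 0;
    }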
-
- 28 Mar, 2012 3 commits
-
-
Praveenkumar Hulakund authored
Analysis:
-------------------------------
According to the Manual (http://dev.mysql.com/doc/refman/5.1/en/identifier-case-sensitivity.html): "Column, index, stored routine, and event names are not case sensitive on any platform, nor are column aliases." In other words, 'lower_case_table_names' does not affect the behaviour of those identifiers.

On the other hand, trigger names are case sensitive on some platforms, and case insensitive on others. 'lower_case_table_names' does not affect the behaviour of trigger names either.

The bug was that SHOW statements did a case-sensitive comparison for stored procedure / stored function / event names.

Fix:
Modified the code so that the comparison is case insensitive for routines and events in "SHOW" operations. This commit only fixes the test failures caused by the actual code fix.
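A minimal sketch of the comparison change (a self-contained stand-in; the real code would use the server's charset-aware case-insensitive comparison rather than this ASCII-only helper):

    #include <cctype>

    // Stand-in for a charset-aware, case-insensitive name comparison.
    static int name_cmp_ci(const char *a, const char *b)
    {
      for (; *a && *b; ++a, ++b)
      {
        int d = std::tolower((unsigned char) *a) - std::tolower((unsigned char) *b);
        if (d) return d;
      }
      return (unsigned char) *a - (unsigned char) *b;
    }

    // SHOW PROCEDURE STATUS / SHOW FUNCTION STATUS / SHOW EVENTS filtering:
    // before the fix the names were compared case-sensitively; after the fix a
    // case-insensitive comparison is used, matching the manual's rules for
    // routine and event identifiers.
    static bool show_name_matches(const char *stored_name, const char *wanted)
    {
      return name_cmp_ci(stored_name, wanted) == 0;
    }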
-
Sunny Bains authored
-
Sunny Bains authored
Change the type of purge_sys_t::n_pages_handled and purge_sys_t::handle_limit from ulint to ulonglong. On a 32-bit system doing ~700 deletes per second the counters can overflow in ~3.5 months if they are 32 bits wide. Approved by Jimmy Yang over IM.
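A back-of-the-envelope check of the overflow risk described above (a sketch assuming ulint is a 32-bit unsigned integer on a 32-bit build):

    #include <cstdint>
    #include <cstdio>

    int main()
    {
      // A monotonically increasing 32-bit counter wraps after 2^32 increments.
      const uint64_t wrap_after = uint64_t(1) << 32;   // 4,294,967,296
      const uint64_t per_second = 700;                 // rate cited in the commit
      const uint64_t seconds    = wrap_after / per_second;
      // Roughly 6.1 million seconds of sustained load, i.e. a matter of months;
      // the commit's ~3.5 month estimate is the same order of magnitude (the
      // exact figure depends on the sustained rate). A 64-bit ulonglong counter
      // at the same rate would take hundreds of millions of years to wrap.
      std::printf("32-bit counter wraps after ~%llu days\n",
                  (unsigned long long) (seconds / 86400));
      return 0;
    }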
-
- 27 Mar, 2012 2 commits
-
-
Tor Didriksen authored
-
Praveenkumar Hulakund authored
Analysis:
-------------------------------
According to the Manual (http://dev.mysql.com/doc/refman/5.1/en/identifier-case-sensitivity.html): "Column, index, stored routine, and event names are not case sensitive on any platform, nor are column aliases." In other words, 'lower_case_table_names' does not affect the behaviour of those identifiers.

On the other hand, trigger names are case sensitive on some platforms, and case insensitive on others. 'lower_case_table_names' does not affect the behaviour of trigger names either.

The bug was that SHOW statements did a case-sensitive comparison for stored procedure / stored function / event names.

Fix:
Modified the code so that the comparison is case insensitive for routines and events in "SHOW" operations.
-
- 21 Mar, 2012 4 commits
-
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
Joerg Bruehe authored
solve a conflict in ".bzr-mysql/default.conf".
-
- 20 Mar, 2012 2 commits
-
-
karen.langford@oracle.com authored
-
karen.langford@oracle.com authored
-
- 16 Mar, 2012 1 commit
-
-
Annamalai Gurusami authored
The test case must insert all the records using a single transaction. Otherwise the test case takes more than 15 minutes and will time out in pb2 and mtr.
-
- 15 Mar, 2012 3 commits
-
-
Inaam Rana authored
FROM BUFFER POOL

rb://975 approved by: Marko Makela

There is a race in lock_validate() where we try to access a page without ensuring that the tablespace stays valid during the operation, i.e. that it is not deleted. This patch tries to fix that by using an existing flag (the flag is renamed to make its name more generic, in line with its new use).
-
Inaam Rana authored
rb://976 approved by: Marko Makela

Add an assertion to ensure that string overflow is not happening. Pointed out by Coverity analysis.
-
Inaam Rana authored
IN OS_THREAD_EQ

rb://977 approved by: Marko Makela

The rw_lock::writer_thread field contains the thread id of the current x-holder or wait-x thread. This field is uninitialized at lock creation and is written to for the first time when an attempt is made to x-lock. The current code considers ::writer_thread a valid memory region only when the lock is held in x-mode (or there is an x-waiter). This is overkill and it generates valgrind warnings.

The fix is to consider ::writer_thread a valid memory region once it has been written to.

Reasoning:
==========
The ::writer_thread can be safely considered valid because:
* We only ever compare it with the calling thread's own id.
* We only ever do the comparison when the ::recursive flag is set.
* We always unset the ::recursive flag in x-unlock.
* The same thread cannot be unlocking and attempting to lock at the same time.
* thread_id recycling is not an issue, because before an id is recycled the thread must leave InnoDB, meaning it must release all locks, meaning it must unset the ::recursive flag.
-
- 12 Mar, 2012 6 commits
-
-
Luis Soares authored
Adding missing sync_slave_with_master to the test case.
-
Luis Soares authored
-
Luis Soares authored
Hardening the test case:
- including a diff_tables at the end.
- increasing the tolerance on the relay limit size.
-
Luis Soares authored
Automerge with mysql-5.1.
-
Luis Soares authored
BUG#64503: mysql frequently ignores --relay-log-space-limit

When the SQL thread goes to sleep, waiting for more events, it sets the flag ignore_log_space_limit to true. This gives the IO thread a chance to queue some more events, and ultimately the SQL thread will be able to purge the log once it is rotated. By then the SQL thread resets ignore_log_space_limit to false. However, between the time the SQL thread has set the ignore flag and the time it resets it, the IO thread will be queuing events in the relay log, possibly going way over the limit.

This patch makes the IO and SQL threads synchronize when they reach the space limit and only ask for one event at a time. Thus the SQL thread sets the ignore_log_space_limit flag and the IO thread resets it to false every time it processes one more event. In addition, every time the SQL thread processes the next event and the limit has been reached, it checks if the IO thread should rotate. If it should, it instructs the IO thread to rotate, giving the SQL thread a chance to purge the logs (freeing space). Finally, this patch removes the resetting of the ignore_log_space_limit flag from purge_first_log, because it is now reset by the IO thread every time it processes the next event when the limit has been reached.

If the SQL thread is in a transaction, it cannot purge, so there is no point in asking the IO thread to rotate. The only thing it can do is to ask for more events until the transaction is over (then it can ask the IO thread to rotate and purge the log right away). Otherwise there would be a deadlock (the SQL thread would not be able to purge and the IO thread would not be able to queue events, so the SQL thread could not finish the transaction).
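A minimal model of the one-event-at-a-time handshake described above (assumed structure and names; not the actual replication code): the SQL thread raises the flag when it is starved and over the limit, and the IO thread lowers it again after queuing a single event.

    #include <condition_variable>
    #include <mutex>

    struct RelayLogCoordination
    {
      std::mutex mtx;
      std::condition_variable cond;
      bool ignore_log_space_limit = false;

      // SQL thread: no more events to apply and the space limit has been
      // reached, so let exactly one more event through and wait for it.
      void sql_thread_request_one_event()
      {
        std::unique_lock<std::mutex> lock(mtx);
        ignore_log_space_limit = true;
        cond.notify_all();
        cond.wait(lock, [this] { return !ignore_log_space_limit; });
      }

      // IO thread: called after one event has been written to the relay log,
      // so the SQL thread has to ask again before the next one is queued.
      void io_thread_queued_one_event()
      {
        std::lock_guard<std::mutex> lock(mtx);
        ignore_log_space_limit = false;
        cond.notify_all();
      }
    };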
-
Norvald H. Ryeng authored
Problem: Grouping results by VALUES(alias for string literal) causes the server to crash. Item_insert_values is not constructed to handle other types of arguments than a field or a reference to a field. In this case, the argument is an Item_string, and this causes Item_insert_values::fix_fields() to crash.

Fix: Issue an error message when the argument to Item_insert_values is not a field or a reference to a field. This is slightly at odds with the documentation, which states that VALUES should return NULL, but the error message is only issued in cases where the server would otherwise crash, so there is no change in behaviour for queries that already work. Future versions will restrict the syntax so that using VALUES in this way is illegal.
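A hypothetical model of the new guard (stand-in types, not the server's Item hierarchy), showing the condition under which an error is now raised instead of crashing:

    // Stand-ins for the relevant item kinds (illustrative only).
    enum item_kind { FIELD_ITEM, REF_ITEM, STRING_ITEM, OTHER_ITEM };

    struct ItemSketch
    {
      item_kind kind;
      const ItemSketch *referenced;   // set when kind == REF_ITEM
    };

    // Returns true (error) when the argument of VALUES() is not a field or a
    // reference resolving to a field, mirroring the new error path in
    // Item_insert_values::fix_fields().
    static bool values_argument_is_invalid(const ItemSketch &arg)
    {
      if (arg.kind == FIELD_ITEM)
        return false;
      if (arg.kind == REF_ITEM && arg.referenced &&
          arg.referenced->kind == FIELD_ITEM)
        return false;
      return true;   // e.g. VALUES(alias for a string literal)
    }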
-