- 11 Feb, 2010 2 commits
- 10 Feb, 2010 4 commits
- marko authored
- marko authored
btr_pcur_open_with_no_init() is a macro, do not mix preprocessor directives in the macro invocation, because it is implementation-defined whether that is going to work.
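For illustration, a minimal sketch of the pattern in question; the macro and symbol names below are made up, not the actual btr_pcur_open_with_no_init() call site:

```c
#include <stdio.h>

/* A function-like macro, standing in for the real InnoDB macro. */
#define LOG_PAIR(a, b)  printf("%s %s\n", (a), (b))

int main(void)
{
    /* Anti-pattern (what the commit removes): a preprocessor directive
       inside the macro's argument list.  The C standard does not
       guarantee that this expands as intended:

           LOG_PAIR("hello",
       #ifdef VERBOSE
                    "verbose world");
       #else
                    "world");
       #endif
    */

    /* Portable form: resolve the conditional before the invocation. */
#ifdef VERBOSE
    const char *what = "verbose world";
#else
    const char *what = "world";
#endif
    LOG_PAIR("hello", what);

    return 0;
}
```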
- marko authored
------------------------------------------------------------------------
r6545 | jyang | 2010-02-03 03:57:32 +0200 (Wed, 03 Feb 2010) | 8 lines
Changed paths:
   M /branches/5.1/lock/lock0lock.c

branches/5.1: Fix bug #49001, "SHOW INNODB STATUS deadlock info incorrect when deadlock detection aborts". Print the correct lock owner when recursive function lock_deadlock_recursive() exceeds its maximum depth LOCK_MAX_DEPTH_IN_DEADLOCK_CHECK.

rb://217, approved by Marko.
------------------------------------------------------------------------
r6613 | inaam | 2010-02-09 20:23:09 +0200 (Tue, 09 Feb 2010) | 11 lines
Changed paths:
   M /branches/5.1/buf/buf0buf.c
   M /branches/5.1/buf/buf0rea.c
   M /branches/5.1/include/buf0rea.h

branches/5.1: Fix Bug #38901 InnoDB logs error repeatedly when trying to load page into buffer pool

In buf_page_get_gen() if we are unable to read a page (because of corruption or some other reason) we keep on retrying. This fills up the error log with millions of entries in no time and we'd eventually run out of disk space. This patch limits the number of attempts that we make (currently set to 100) and after that we abort with a message.

rb://241 Approved by: Heikki
------------------------------------------------------------------------
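The fix in r6613 amounts to a bounded retry loop. A rough sketch under assumed names (only the 100-attempt limit is taken from the log above; the helper and constant names are hypothetical):

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for the low-level page read; nonzero on success. */
extern int page_try_read(unsigned long space, unsigned long offset);

/* Give up after a fixed number of failed attempts instead of retrying
   (and filling the error log) forever. */
#define PAGE_READ_MAX_RETRIES   100

void page_read_with_retry(unsigned long space, unsigned long offset)
{
    unsigned retries = 0;

    while (!page_try_read(space, offset)) {
        if (++retries >= PAGE_READ_MAX_RETRIES) {
            fprintf(stderr,
                    "Unable to read page %lu:%lu after %u attempts;"
                    " aborting.\n", space, offset, retries);
            abort();
        }
    }
}
```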
- marko authored
- 09 Feb, 2010 6 commits
- marko authored
Drop the temporary tables and indexes after enabling sync order checks. This should not make any difference. This could have been done in r6611.
- marko authored
innobase_start_or_create_for_mysql(): Roll back data dictionary transactions before scanning the *.ibd files. Then, data dictionary records can be loaded to the cache before opening the *.ibd files.

recv_recovery_rollback_active(): Refactored from recv_recovery_from_checkpoint_finish().

rb://235, committing without review, because this is needed for TablespaceDictionary.
- marko authored
first load them to the data dictionary cache and use the normal routines for dropping tables or indexes. This should reduce the risk of bugs and also make the code compatible with the upcoming TablespaceDictionary implementation.

DICT_SYS_INDEXES_NAME_FIELD: The clustered index position of SYS_INDEXES.NAME.

row_merge_drop_temp_indexes(): Scan SYS_INDEXES for tables containing temporary indexes, and load the tables as needed. Invoke row_merge_drop_index() to drop the indexes.

row_mysql_drop_temp_tables(): Scan SYS_TABLES for temporary tables, load them with dict_load_table() and drop them with row_drop_table_for_mysql().

rb://251, not yet reviewed
- marko authored
- marko authored
- 08 Feb, 2010 3 commits
- 04 Feb, 2010 2 commits
- marko authored
b-tree cursor functions to the buffer pool requests, in order to make the latch diagnostics more accurate.

buf_page_optimistic_get_func(): Renamed to buf_page_optimistic_get().

btr_page_get_father_node_ptr(), btr_insert_on_non_leaf_level(), btr_pcur_open(), btr_pcur_open_with_no_init(), btr_pcur_open_on_user_rec(), btr_pcur_open_at_rnd_pos(), btr_pcur_restore_position(), btr_cur_open_at_index_side(), btr_cur_open_at_rnd_pos(): Rename the function to _func and add the parameters file, line. Define wrapper macros with __FILE__, __LINE__.

btr_cur_search_to_nth_level(): Add the parameters file, line.
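The rename-to-_func pattern is the usual C technique for carrying the caller's location into latch diagnostics; a generic sketch with illustrative names (not the actual btr_ functions):

```c
#include <stdio.h>

/* The real function carries the caller's location as extra parameters... */
static void cursor_open_func(int mode, const char *file, unsigned line)
{
    /* ...which the latch diagnostics can then report. */
    printf("cursor opened (mode %d) from %s:%u\n", mode, file, line);
}

/* ...and a wrapper macro supplies __FILE__ and __LINE__ automatically,
   so existing call sites keep their old form. */
#define cursor_open(mode)   cursor_open_func((mode), __FILE__, __LINE__)

int main(void)
{
    cursor_open(1);     /* reports this file and line */
    return 0;
}
```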
- 03 Feb, 2010 3 commits
- marko authored
is not relocated when freeing a compressed block. This avoids the costly rescan of the LRU list. (Bug #35077, Issue #449)

At most one buffer-fix will be active at a time, affecting two blocks: the buf_page_t and the compressed page frame. This should not block the memory defragmentation in buf0buddy.c too much. In fact, it may avoid unnecessary copying if also prev_bpage belongs to the tablespace that is being invalidated.

rb://240
- marko authored
ha_innobase::change_active_index(): Clean up code formatting.

ha_innobase::check(): Incorporate the code from row_check_table_for_mysql(). Report errors to the client connection instead of writing them to the error log.

row_check_table_for_mysql(): Remove.

row_check_index_for_mysql(): Renamed from row_scan_and_check_index(). Let the caller initialize prebuilt, and assume that the index is usable.

rb://178 approved by Sunny Bains
- 01 Feb, 2010 2 commits
- marko authored
- marko authored
------------------------------------------------------------------------
r6488 | sunny | 2010-01-21 02:55:08 +0200 (Thu, 21 Jan 2010) | 2 lines
Changed paths:
   M /branches/5.1/mysql-test/innodb-autoinc.result
   M /branches/5.1/mysql-test/innodb-autoinc.test

branches/5.1: Factor out test for bug#44030 from innodb-autoinc.test into separate test/result files.
------------------------------------------------------------------------
r6489 | sunny | 2010-01-21 02:57:50 +0200 (Thu, 21 Jan 2010) | 2 lines
Changed paths:
   A /branches/5.1/mysql-test/innodb-autoinc-44030.result
   A /branches/5.1/mysql-test/innodb-autoinc-44030.test

branches/5.1: Factor out test for bug#44030 from innodb-autoinc.test into separate test/result files.
------------------------------------------------------------------------
r6492 | sunny | 2010-01-21 09:38:35 +0200 (Thu, 21 Jan 2010) | 1 line
Changed paths:
   M /branches/5.1/mysql-test/innodb-autoinc-44030.test

branches/5.1: Add reference to bug#47621 in the comment.
------------------------------------------------------------------------
r6535 | sunny | 2010-01-30 00:08:40 +0200 (Sat, 30 Jan 2010) | 11 lines
Changed paths:
   M /branches/5.1/handler/ha_innodb.cc

branches/5.1: Undo the change from r6424. We need to return DB_SUCCESS even if we were unable to initialize the table autoinc value. This is required for the open to succeed. The only condition we currently treat as a hard error is if the autoinc field instance passed in by MySQL is NULL.

Previously, if the table autoinc value was 0 and the next value was requested, we had an assertion that would fail. Change that assertion and treat a value of 0 to mean that the autoinc system is unavailable. Generation of the next value will now return failure.

rb://237
------------------------------------------------------------------------
r6536 | sunny | 2010-01-30 00:13:42 +0200 (Sat, 30 Jan 2010) | 6 lines
Changed paths:
   M /branches/5.1/handler/ha_innodb.cc
   M /branches/5.1/mysql-test/innodb-autoinc.result
   M /branches/5.1/mysql-test/innodb-autoinc.test

branches/5.1: Check *first_value every time against the column max value and set *first_value to the next autoinc value if it's > col max value, i.e. do not rely on what is passed in from MySQL.

[49497] Error 1467 (ER_AUTOINC_READ_FAILED) on inserting a negative value

rb://236
------------------------------------------------------------------------
r6537 | sunny | 2010-01-30 00:35:00 +0200 (Sat, 30 Jan 2010) | 2 lines
Changed paths:
   M /branches/5.1/handler/ha_innodb.cc
   M /branches/5.1/mysql-test/innodb-autoinc.result
   M /branches/5.1/mysql-test/innodb-autoinc.test

branches/5.1: Undo r6536.
------------------------------------------------------------------------
r6538 | sunny | 2010-01-30 00:43:06 +0200 (Sat, 30 Jan 2010) | 6 lines
Changed paths:
   M /branches/5.1/handler/ha_innodb.cc
   M /branches/5.1/mysql-test/innodb-autoinc.result
   M /branches/5.1/mysql-test/innodb-autoinc.test

branches/5.1: Check *first_value every time against the column max value and set *first_value to the next autoinc value if it's > col max value, i.e. do not rely on what is passed in from MySQL.

[49497] Error 1467 (ER_AUTOINC_READ_FAILED) on inserting a negative value

rb://236
------------------------------------------------------------------------
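A minimal sketch of the clamp described in r6536/r6538, with hypothetical names; the only behavior taken from the log above is: if *first_value exceeds the column maximum, fall back to the engine's own next auto-increment value.

```c
#include <stdint.h>

/* Do not trust the candidate handed down from the upper layer: if it is
   beyond what the column can hold, fall back to the engine's own counter. */
static void autoinc_adjust_first_value(
    uint64_t *first_value,   /* in/out: candidate passed in from above */
    uint64_t col_max_value,  /* in: maximum value the column can hold */
    uint64_t next_autoinc)   /* in: engine's own next counter value */
{
    if (*first_value > col_max_value) {
        *first_value = next_autoinc;
    }
}
```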
- 29 Jan, 2010 3 commits
- sunny authored
1. First scan the joining transaction's locks and check whether any other transaction is waiting for a lock held by the joining transaction. If no other transaction is waiting, then no deadlock can occur and we avoid doing an exhaustive search.

2. Change the direction of the lock traversal from backward to forward. Previously we traversed backward from the lock that has to wait; the function that fetched the previous node was very inefficient, resulting in O(n^2) access to the rec lock list.

Fix Bug #49047 InnoDB deadlock detection is CPU intensive with many locks on a single row.

rb://218
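A sketch of step 1, the cheap pre-check that lets the exhaustive search be skipped; the lock-list accessors below are hypothetical, not the actual lock0lock.c interfaces:

```c
#include <stdbool.h>
#include <stddef.h>

struct trx;
struct lock;

/* Hypothetical accessors over the transaction's list of held locks. */
extern const struct lock *trx_first_lock(const struct trx *trx);
extern const struct lock *trx_next_lock(const struct lock *lock);
extern bool lock_has_other_waiters(const struct lock *lock,
                                   const struct trx *trx);

/* If no other transaction waits on any lock held by the joining
   transaction, it cannot be part of a wait-for cycle, so the costly
   recursive search can be skipped. */
static bool deadlock_check_needed(const struct trx *trx)
{
    const struct lock *lock;

    for (lock = trx_first_lock(trx); lock != NULL;
         lock = trx_next_lock(lock)) {

        if (lock_has_other_waiters(lock, trx)) {
            return true;    /* must run the exhaustive search */
        }
    }

    return false;   /* no waiters: no deadlock possible */
}
```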
- calvin authored
- vasil authored
Extend the comment about row_mysql_handle_errors(). Suggested by: Heikki
- 28 Jan, 2010 2 commits
- marko authored
acquire the block_mutex for every block in the LRU list. Only acquire it when holding buf_pool_mutex is not sufficient. This should speed up the function and considerably reduce traffic on the memory bus and caches.

I noticed this deficiency when working on Issue #157. This deficiency popped up again in Issue #449 (Bug #35077), which this fix does not fully address.

rb://78 revision 1 approved by Heikki Tuuri.
- 27 Jan, 2010 1 commit
- marko authored
This addresses the third aspect of Bug #41609.

row_mysql_drop_temp_tables(): New function, to drop all temporary tables. These can be distinguished by the least significant bit of MIX_LEN. However, we will skip ROW_FORMAT=REDUNDANT tables, because in the records for those tables, that bit may be garbage.

recv_recovery_from_checkpoint_finish(): Invoke row_mysql_drop_temp_tables(). Normally, if the .frm files for the temporary tables exist at startup, MySQL will ask InnoDB to drop the temporary tables. However, if the files are deleted, for instance, by the boot scripts of the operating system, the tables would remain in the InnoDB data dictionary unless someone digs them up by innodb_table_monitor and creates .frm files for dropping the tables.

rb://221 approved by Sunny Bains.
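A sketch of the temporary-table test described above; the record decoding is elided and the parameter names are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

/* A SYS_TABLES record denotes a temporary table when the least significant
   bit of its MIX_LEN field is set -- except for ROW_FORMAT=REDUNDANT
   records, where that bit may be garbage and must be ignored. */
static bool record_is_temp_table(uint32_t mix_len, bool is_redundant_format)
{
    if (is_redundant_format) {
        return false;   /* bit may be garbage: skip this table */
    }

    return (mix_len & 1) != 0;
}
```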
- 21 Jan, 2010 1 commit
- marko authored
and do not call ibuf_merge_or_delete_for_page() in crash recovery, before the redo log has been applied. This could cure some hard-to-repeat, hard-to-explain bugs related to secondary indexes.

A possible recipe to repeat the bug:

1. update a secondary index leaf page on a compressed table
2. evict the page from the buffer pool while it is still dirty
3. ibuf_insert() something for the page
4. crash
5. crash recovery; ibuf merge would be done too early, before applying redo log to the sec index page or the ibuf pages
- 15 Jan, 2010 2 commits
- calvin authored
embedded mode

This is the 2nd part of the fix for bug#49396. The 1st part is innodb.test. Tested in both embedded mode and normal server mode.
- calvin authored
to pick up the first part of the fix for bug#49396.

------------------------------------------------------------------------
r6471 | calvin | 2010-01-15 17:43:27 -0600 (Fri, 15 Jan 2010) | 4 lines

branches/5.1: fix bug#49396: main.innodb test fails in embedded mode

Change replace_result by using $MYSQLD_DATADIR. Tested in both embedded mode and normal server mode.
------------------------------------------------------------------------
- 14 Jan, 2010 2 commits
- inaam authored
log_sys->written_to_all_lsn does not accurately represent the LSN up to which write and flush have taken place. Under a race condition it can fall behind log_sys->flushed_to_disk_lsn, which is accurate. Besides, written_to_all_lsn is redundant, as InnoDB currently supports only one log group.

rb://226 Approved by: Heikki
- marko authored
Update PAGE_MAX_TRX_ID before attempting to compress the page. This fixes Issue #382 (a debug assertion failure in page_zip_reorganize()) and reduces the generated redo log. There was no bug or crash in non-debug builds.
- 13 Jan, 2010 5 commits
- marko authored
queues when the thread is not holding a space->latch. When UNIV_DEBUG is defined while UNIV_SYNC_DEBUG is not, latching order violations will still occur and deadlocks will be possible.

sync_thread_levels_nonempty_gen(): Renamed from sync_thread_levels_empty_gen(). Return the violating latch or NULL instead of FALSE or TRUE, except that there will be a ut_error before the non-NULL return.

sync_thread_levels_empty_gen(): A macro that negates the return value of sync_thread_levels_nonempty_gen().

sync_thread_levels_contains(): New function, based on sync_thread_levels_nonempty_gen().

This should fix Issue #441.
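The empty/nonempty split follows a common C idiom: the general function returns the offending object (or NULL), and the old boolean interface is kept as a macro that negates it. A sketch under assumed parameter and type names:

```c
#include <stddef.h>

struct sync_level;  /* opaque descriptor of a latch held by the thread */

/* Returns the first violating latch, or NULL if the thread holds none
   (apart from the allowed exceptions); per the note above, ut_error fires
   before any non-NULL return in the real code. */
extern const struct sync_level *
sync_thread_levels_nonempty_gen(int dict_mutex_allowed);

/* The old TRUE/FALSE interface survives as a negating macro, so existing
   boolean callers do not have to change. */
#define sync_thread_levels_empty_gen(d) \
    (sync_thread_levels_nonempty_gen(d) == NULL)
```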
- marko authored
isolation level, do not attempt to access a clustered index record that has been marked for deletion. This fixes Issue #433. Approved by Heikki over the IM.
- marko authored
and explicitly free mem_hash_mutex in mem_close(). This fixes the breakage of UNIV_MEM_DEBUG that was filed as Issue #434.
- marko authored
before checking block->is_hashed, because the latter may be uninitialized right after server startup.
- marko authored
Add some const qualifiers and comments.
- 12 Jan, 2010 2 commits
- marko authored
more accurately.
- marko authored
------------------------------------------------------------------------
r6421 | jyang | 2010-01-12 07:59:16 +0200 (Tue, 12 Jan 2010) | 8 lines
Changed paths:
   M /branches/5.1/row/row0mysql.c

branches/5.1: Fix bug #49238: Creating/Dropping a temporary table while at 1023 transactions will cause assert. Handle possible DB_TOO_MANY_CONCURRENT_TRXS when deleting metadata in row_drop_table_for_mysql().

rb://220, approved by Marko
------------------------------------------------------------------------
r6422 | marko | 2010-01-12 11:34:27 +0200 (Tue, 12 Jan 2010) | 3 lines
Changed paths:
   M /branches/5.1/handler/ha_innodb.cc
   M /branches/5.1/handler/ha_innodb.h

branches/5.1: Non-functional change: Make innobase_get_int_col_max_value() a static function. It does not access any fields of class ha_innobase.
------------------------------------------------------------------------
r6424 | marko | 2010-01-12 12:22:19 +0200 (Tue, 12 Jan 2010) | 16 lines
Changed paths:
   M /branches/5.1/handler/ha_innodb.cc
   M /branches/5.1/handler/ha_innodb.h

branches/5.1: In innobase_initialize_autoinc(), do not attempt to read the maximum auto-increment value from the table if innodb_force_recovery is set to at least 4, so that writes are disabled. (Bug #46193)

innobase_get_int_col_max_value(): Move the function definition before ha_innobase::innobase_initialize_autoinc(), because that function now calls this function.

ha_innobase::innobase_initialize_autoinc(): Change the return type to void. Do not attempt to read the maximum auto-increment value from the table if innodb_force_recovery is set to at least 4. Issue ER_AUTOINC_READ_FAILED to the client when the auto-increment value cannot be read.

rb://144 by Sunny, revised by Marko
------------------------------------------------------------------------
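A sketch of the r6424 guard; srv_force_recovery is the setting behind innodb_force_recovery, while the other names and types below are assumptions for illustration only:

```c
#include <stdint.h>

extern unsigned long srv_force_recovery;    /* innodb_force_recovery */

/* Hypothetical stand-in for reading MAX(autoinc column) from the index. */
extern uint64_t autoinc_read_max_from_index(void *table);

/* With innodb_force_recovery >= 4 writes are disabled, so do not touch the
   index at all; ER_AUTOINC_READ_FAILED is issued when the value cannot be
   read, as described in the log above. */
static uint64_t autoinc_initialize(void *table)
{
    if (srv_force_recovery >= 4) {
        return 0;   /* auto-increment value not available */
    }

    return autoinc_read_max_from_index(table);
}
```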