1. 19 Dec, 2007 3 commits
    • branches/zip: Fast index creation: Lock the data dictionary only after · e3d19b0f
      marko authored
      acquiring the table lock.  The data dictionary should not be locked for
      long periods.  Before this change, in the worst case, the dictionary
      would be locked until the expiration of innodb_lock_wait_timeout.
      
      Conceptually, transaction-level locks (locks on database objects, such
      as records and tables) have a latching order level of SYNC_USER_TRX_LOCK,
      which is above any InnoDB rw-locks or mutexes.  However, the latching
      order of SYNC_USER_TRX_LOCK is never checked, not even by UNIV_SYNC_DEBUG.
      
      ha_innobase::add_index(), ha_innobase::final_drop_index(): Invoke
      row_mysql_lock_data_dictionary(trx) only after row_merge_lock_table().
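The lock-ordering rule this commit enforces (transaction-level locks sit at SYNC_USER_TRX_LOCK, above every InnoDB rw-lock and mutex) can be pictured as a per-thread level check. A minimal sketch in C: the level names echo InnoDB's, but the values and the checker itself are illustrative, since InnoDB never actually checks SYNC_USER_TRX_LOCK.

```c
#include <limits.h>

/* Illustrative latch levels; a higher value must be acquired earlier.
   The names echo InnoDB's sync0sync levels, but the values are made up.
   SYNC_USER_TRX_LOCK sits above every rw-lock/mutex level, which is why
   row_merge_lock_table() must run before row_mysql_lock_data_dictionary(). */
enum latch_level {
    SYNC_USER_TRX_LOCK = 9999, /* table/record locks */
    SYNC_DICT          = 1000, /* dictionary mutex */
    SYNC_INDEX_TREE    =  900
};

/* A thread may only acquire a latch at a level strictly below the lowest
   level it already holds (sketch of the UNIV_SYNC_DEBUG rule, which is
   never applied to SYNC_USER_TRX_LOCK itself). */
static int latch_acquire_ok(int *lowest_held, enum latch_level level)
{
    if ((int) level >= *lowest_held) {
        return 0; /* latching order violation */
    }
    *lowest_held = (int) level;
    return 1;
}
```

Taking the table lock (highest level) first and the dictionary mutex second passes the check; the reverse order, which this commit removes, would trip it.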
    • branches/zip: Implement a limit for the size of undo log records. · 463141d1
      marko authored
      innodb-index.test: Add a test with a large number of externally stored
      columns.  Check that prefix indexes cannot be defined on too many columns.
      
      dict_index_too_big_for_undo(): New function: Check if the undo log may
      overflow.
      
      dict_index_add_to_cache(): Return DB_SUCCESS or DB_TOO_BIG_RECORD.
      Postpone the creation and linking of some data structures, so that
      when dict_index_too_big_for_undo() holds, it will be easier to clean up.
      Check the return status in all callers.
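The overflow check in dict_index_too_big_for_undo() boils down to a worst-case size estimate: each prefix-indexed column can force the undo log to store a full column prefix plus an external field reference. A hedged sketch; the constants are named after InnoDB's with representative values, and the threshold and overhead parameter are illustrative.

```c
#include <stddef.h>

/* Constants named after InnoDB's, with representative values; the
   threshold below is illustrative, not the exact limit InnoDB enforces. */
#define REC_MAX_INDEX_COL_LEN      768
#define BTR_EXTERN_FIELD_REF_SIZE   20
#define UNIV_PAGE_SIZE           16384

/* Sketch of the dict_index_too_big_for_undo() idea: a worst-case undo
   record must fit in an undo page, and every prefix-indexed column may
   contribute a full column prefix plus an external (BLOB) pointer. */
static int index_too_big_for_undo(size_t n_prefix_cols, size_t fixed_overhead)
{
    size_t worst_case = fixed_overhead
        + n_prefix_cols * (REC_MAX_INDEX_COL_LEN + BTR_EXTERN_FIELD_REF_SIZE);

    return worst_case >= UNIV_PAGE_SIZE / 2; /* illustrative limit */
}
```

With an estimate like this, the check can run before the index is linked into the dictionary cache, which is why the commit postpones the creation of those data structures.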
    • branches/zip: dict0dict.c: Minor cleanup. · b33d8c40
      marko authored
      dict_index_copy(): Remove the prototype, because this static function
      will be defined before its first use.  Add const qualifier to "table".
      
      dict_index_build_internal_clust(), dict_index_build_internal_non_clust():
      Add const qualifier to "table".  Correct the comment about setting indexed[].
  2. 18 Dec, 2007 1 commit
    • branches/zip: · 204f29ac
      vasil authored
      Non-functional change:
      Do not include the terminating '\0' in TRX_I_S_LOCK_ID_MAX_LEN.
  3. 17 Dec, 2007 7 commits
    • branches/zip: Fast index creation: Clarify why lock waits may occur in · 6690972b
      marko authored
      row_merge_lock_table().
      
      ha_innobase::final_drop_index(): Set the dictionary operation mode to
      TRX_DICT_OP_INDEX_MAY_WAIT for the duration of the row_merge_lock_table()
      call.
    • branches/zip: Fast index creation: Remove the ROW_PREBUILT_OBSOLETE nonsense. · d0d92991
      marko authored
      Active transactions must not switch table or index definitions on the fly,
      for several reasons, including the following:
      
       * copied indexes do not carry any history or locking information;
         that is, rollbacks, read views, and record locking would be broken
      
       * huge potential for race conditions, inconsistent reads and writes,
         loss of data, and corruption
      
      Instead of trying to track down whether the table was changed during a
      transaction,
      acquire appropriate locks that protect the creation and dropping of indexes.
      
      innodb-index.test: Test the locking of CREATE INDEX and DROP INDEX.  Test
      that consistent reads work across dropped indexes.
      
      lock_rec_insert_check_and_lock(): Relax the lock_table_has() assertion.
      When inserting a record into an index, the table must be at least IX-locked.
      However, when an index is being created, an IS-lock on the table is
      sufficient.
      
      row_merge_lock_table(): Add the parameter enum lock_mode mode, which must
      be LOCK_X or LOCK_S.
      
      row_merge_drop_table(): Assert that n_mysql_handles_opened == 0.
      Unconditionally drop the table.
      
      ha_innobase::add_index(): Acquire an X or S lock on the table, as appropriate.
      After acquiring an X lock, assert that n_mysql_handles_opened == 1.
      Remove the comments about dropping tables in the background.
      
      ha_innobase::final_drop_index(): Acquire an X lock on the table.
      
      dict_table_t: Remove version_number, to_be_dropped, and prebuilts.
      ins_node_t: Remove table_version_number.
      
      enum lock_mode: Move the definition from lock0lock.h to lock0types.h.
      
      ROW_PREBUILT_OBSOLETE, row_update_prebuilt(), row_prebuilt_table_obsolete():
      Remove.
      
      row_prebuilt_t: Remove the declaration from row0types.h.
      
      row_drop_table_for_mysql_no_commit(): Always print a warning if a table
      was added to the background drop queue.
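The relaxed assertion in lock_rec_insert_check_and_lock() can be illustrated with a small predicate. This is a sketch, not InnoDB's lock_table_has(); the enum is a simplified subset of the real lock modes.

```c
/* Simplified subset of table lock modes; InnoDB's enum lock_mode and
   lock_table_has() are more involved. */
enum lock_mode { LOCK_IS, LOCK_IX, LOCK_S, LOCK_X };

/* Sketch of the relaxed assertion in lock_rec_insert_check_and_lock():
   inserting a record normally requires at least an IX table lock, but
   while the index is being created an IS table lock is sufficient. */
static int insert_table_lock_ok(enum lock_mode held, int index_being_created)
{
    if (held == LOCK_IX || held == LOCK_X) {
        return 1; /* normal insert path */
    }
    return index_being_created && held == LOCK_IS;
}
```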
    • branches/zip: lock_rec_insert_check_and_lock(): Use the cached value · da28dd3c
      marko authored
      of thr_get_trx(thr).
    • branches/zip: innobase_mysql_end_print_arbitrary_thd(): Note that · ca75e779
      marko authored
      kernel_mutex must be released before calling this function.
      
      innobase_mysql_end_print_arbitrary_thd(),
      innobase_mysql_prepare_print_arbitrary_thd(): Assert that the
      kernel_mutex is not being held by the current thread.
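Asserting that the caller does not hold kernel_mutex requires knowing the current owner, which a plain mutex does not expose. A sketch of the owner-tracking idea in C; InnoDB's mutex_own() keeps similar bookkeeping under UNIV_SYNC_DEBUG, but this struct and its names are illustrative.

```c
#include <pthread.h>

/* Illustrative owner-tracking mutex. */
typedef struct {
    pthread_mutex_t mutex;
    pthread_t       owner;
    int             locked;
} owned_mutex_t;

static void owned_mutex_enter(owned_mutex_t *m)
{
    pthread_mutex_lock(&m->mutex);
    m->owner = pthread_self();
    m->locked = 1;
}

static void owned_mutex_exit(owned_mutex_t *m)
{
    m->locked = 0;
    pthread_mutex_unlock(&m->mutex);
}

/* What the new assertions check: "the current thread does not hold it". */
static int owned_by_me(const owned_mutex_t *m)
{
    return m->locked && pthread_equal(m->owner, pthread_self());
}
```

The functions in the commit would assert `!owned_by_me(&kernel_mutex)` on entry.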
    • branches/zip: · 74ab4b66
      vasil authored
       
      Bugfix: Lock the MySQL mutex LOCK_thread_count before accessing
      trx->mysql_query_str to avoid race conditions where MySQL sets it to
      NULL after we have checked that it is not NULL and before we access it.
       
      Approved by:	Marko
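The race and its fix follow the classic check-then-use pattern: both the NULL test and the dereference of trx->mysql_query_str must happen under the same mutex. A minimal sketch with a pthread mutex standing in for MySQL's LOCK_thread_count; the function name and signature are hypothetical.

```c
#include <pthread.h>
#include <string.h>

/* A pthread mutex standing in for MySQL's LOCK_thread_count. */
static pthread_mutex_t lock_thread_count = PTHREAD_MUTEX_INITIALIZER;

/* Sketch of the fix: another thread may set *query_str to NULL at any
   time, so the NULL check and the copy must happen under one mutex.
   Returns the number of bytes copied (0 if the pointer was NULL). */
static size_t copy_query_str(const char **query_str, char *buf, size_t buf_len)
{
    size_t len = 0;

    pthread_mutex_lock(&lock_thread_count);
    if (*query_str != NULL) {                  /* check ... */
        strncpy(buf, *query_str, buf_len - 1); /* ... and use, atomically */
        buf[buf_len - 1] = '\0';
        len = strlen(buf);
    }
    pthread_mutex_unlock(&lock_thread_count);

    return len;
}
```

Checking the pointer before taking the mutex, as the buggy code effectively did, leaves a window in which MySQL can NULL it out between the check and the dereference.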
    • branches/zip: · 815bb2cc
      vasil authored
      Non-functional change: add "out:" comment for the return value.
  4. 16 Dec, 2007 1 commit
    • branches/zip: · 0845bd2d
      vasil authored
       
      Non-functional change:
       
      Move the prototypes of
      innobase_mysql_prepare_print_arbitrary_thd() and
      innobase_mysql_end_print_arbitrary_thd() from lock0lock.c to
      ha_prototypes.h
      
      Suggested by:	Marko
      Approved by:	Marko
  5. 13 Dec, 2007 5 commits
    • branches/zip: page_zip_decompress(): Implement a proper check if there · 476cadbf
      marko authored
      is an overlap between BLOB pointers and the modification log or the
      zlib stream.
      
      page_zip_decompress_clust_ext(): Remove the improper check.  The
      d_stream->avail_in cannot be decremented here, because we do not know
      at this point if the record is deleted.  No space is reserved for the
      BLOB pointers in deleted records.
      
      page_zip_decompress_clust(): Check for the overlap here, right before
      copying the BLOB pointers.
      
      page_zip_decompress_clust(): Also check that the target column is long
      enough, and return FALSE instead of ut_ad() failure.
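The overlap being checked can be pictured as two regions growing toward each other in the compressed page: the zlib stream plus modification log from the front, and the reserved BLOB pointers from the trailer. A sketch with illustrative layout; BTR_EXTERN_FIELD_REF_SIZE is 20 bytes in InnoDB, but the offsets here are simplified.

```c
#include <stddef.h>

/* Simplified compressed-page layout: the zlib stream and the modification
   log occupy the page from the front; one external (BLOB) pointer per
   BLOB is reserved at the trailer end.  Decompression must fail, not
   assert, when the two regions would overlap. */
static int zip_regions_overlap(size_t stream_end_offset,
                               size_t n_blobs, size_t page_size)
{
    const size_t BLOB_PTR_SIZE = 20; /* BTR_EXTERN_FIELD_REF_SIZE */
    size_t trailer_start = page_size - n_blobs * BLOB_PTR_SIZE;

    return stream_end_offset > trailer_start;
}
```

The commit moves this check to page_zip_decompress_clust(), where the number of non-deleted records carrying BLOB pointers is finally known.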
    • branches/zip: · 81ff3ffd
      vasil authored
      Add some clarification to a comment.
    • branches/zip: page_zip_decompress_node_ptrs(): Remove the local variable · e525556f
      marko authored
      is_clust, to avoid a warning about unused variable when the definition
      of page_zip_fail() is empty.
    • branches/zip: page0zip.c: Add more page_zip_fail() diagnostics to · 189aabfd
      marko authored
      some decompression functions.
      
      page_zip_apply_log_ext(), page_zip_apply_log(): Call page_zip_fail()
      with appropriate diagnostics before returning NULL.
      
      page_zip_decompress_node_ptrs(), page_zip_decompress_sec(),
      page_zip_decompress_clust(): When detecting that the zlib stream
      followed by the modification log overlaps the trailer, do not
      let an assertion fail, but invoke page_zip_fail() and return FALSE.
      Corrupt data should never lead to assertion failures in decompression
      functions.
    • branches/zip: page0zip.c: Define and use the auxiliary macros · d6a5ad85
      marko authored
      ASSERT_ZERO() and ASSERT_ZERO_BLOB() for asserting that certain
      blocks of memory are filled with zero.
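The idea behind macros like ASSERT_ZERO() and ASSERT_ZERO_BLOB() is to compare memory against a known-zero buffer instead of looping byte by byte. A sketch of the technique as a standalone function; the chunked memcmp() is the relevant part, the names and sizes are not InnoDB's.

```c
#include <string.h>

/* A statically zero-initialized buffer to compare against. */
#define ZERO_BUF_LEN 64
static const unsigned char zero_buf[ZERO_BUF_LEN];

/* Return nonzero iff the len bytes at ptr are all zero, comparing in
   ZERO_BUF_LEN-sized chunks via memcmp(). */
static int is_all_zero(const void *ptr, size_t len)
{
    const unsigned char *b = (const unsigned char *) ptr;

    while (len > 0) {
        size_t chunk = len < ZERO_BUF_LEN ? len : ZERO_BUF_LEN;
        if (memcmp(b, zero_buf, chunk) != 0) {
            return 0;
        }
        b += chunk;
        len -= chunk;
    }
    return 1;
}
```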
  6. 12 Dec, 2007 2 commits
  7. 10 Dec, 2007 3 commits
  8. 07 Dec, 2007 2 commits
    • branches/zip: rec_convert_dtuple_to_rec_comp(): Allow externally stored · 132d888b
      marko authored
      columns to be up to REC_MAX_INDEX_COL_LEN + BTR_EXTERN_FIELD_REF_SIZE
      bytes in a debug assertion.  This assertion could fail since r2159 in
      trx_undo_prev_version_build(), because the undo log records for updates
      and deletes would contain longer prefixes of externally stored columns.
      
      The assertion failure was reported by Sunny.
    • branches/zip: dict_table_copy_types(): Initialize all fields to the SQL NULL · 5a6cc213
      marko authored
      value.  Document this change in behaviour, and make all callers invoke
      the function right after dtuple_create().
      
      dict_create_sys_fields_tuple(): Add a missing "break" statement to the loop
      that checks if there are any column prefixes in the index.
      
      row_get_prebuilt_insert_row(): Do not set the fields to the SQL NULL value,
      now that dict_table_copy_types() takes care of it.
  9. 05 Dec, 2007 3 commits
    • branches/zip: When logging updates or deletes in the undo log, store long · 93877450
      marko authored
      enough prefixes of externally stored columns, so that purge will not have
      to dereference any BLOB pointers, which may be invalid.  This will not be
      necessary for logging inserts, because inserts are no-ops in purge, and
      the record will remain locked during transaction rollback.
      
      TODO: in dict_build_table_def_step() or dict_build_index_def_step(),
      prevent the creation of tables with too many columns for which a
      prefix index is defined.  This is because there is a size limit on undo
      log records, and for each prefix-indexed column, the log must store
      REC_MAX_INDEX_COL_LEN + BTR_EXTERN_FIELD_REF_SIZE bytes.
      
      trx_undo_page_report_insert(): Assert that the index is clustered.
      
      trx_undo_page_fetch_ext(): New function, for fetching the BLOB prefix
      in trx_undo_page_report_modify().
      
      trx_undo_page_report_modify(): Write long enough prefixes of the externally
      stored columns to the undo log.
      
      trx_undo_rec_get_partial_row(): Remove the parameter "ext".  Assert that
      the undo log contains long enough prefixes of the externally stored columns.
      
      purge_node_t: Remove the field "ext".
    • branches/zip: row_build_index_entry(): Add assertions that prevent improper · 9c975c59
      marko authored
      prefix indexes from being built on externally stored columns.
    • branches/zip: btr_cur_pessimistic_update(), btr_cur_pessimistic_delete(): · 89a51cc8
      marko authored
      Use rec_offs_any_extern() as a condition for freeing externally stored
      columns.  This is only a performance optimization.
  10. 04 Dec, 2007 1 commit
    • branches/zip: Merge r2154 from trunk: · 45a28445
      marko authored
      innodb.result, innodb.test: Revert the changes in r2145.
      
      The tests that were removed by MySQL
      
      ChangeSet@1.2598.2.6  2007-11-06 15:42:58-07:00  tsmith@hindu.god
      
      were moved to a new test, innodb_autoinc_lock_mode_zero, which is
      kept in the MySQL BitKeeper tree.
  11. 03 Dec, 2007 2 commits
  12. 30 Nov, 2007 3 commits
  13. 29 Nov, 2007 7 commits
    • branches/zip: · a3417d90
      vasil authored
      * Change terminology:
        wait lock -> requested lock
        waited lock -> blocking lock
        new: requesting transaction (the trx that owns the requested lock)
        new: blocking transaction (the trx that owns the blocking lock)
      
      * Add transaction ids to INFORMATION_SCHEMA.INNODB_LOCK_WAITS. This is
        somewhat redundant because transaction ids can be found in INNODB_LOCKS
        (which can be joined with INNODB_LOCK_WAITS), but it lets users
        write shorter joins (one table less) in cases where they want to
        find which transaction is blocking which.
      
      Suggested by:	Ken
      Approved by:	Heikki
    • marko authored · 445924b5
    • branches/zip: row_ext_create(): Remove the UNIV_INLINE that should · c15cbd7d
      marko authored
      have been removed in r2131.
    • branches/zip: row_ext: Fetch the BLOB prefixes already at row_ext_create(). · ab7b4937
      marko authored
      Only add indexed BLOBs to row_ext.
      
      trx_undo_rec_get_partial_row(): Move the BLOB fetching to row_ext_create().
      
      row_build(): Pass only those BLOBs to row_ext_create() that are referenced by
      ordering columns of some indexes, similar to trx_undo_rec_get_partial_row().
      
      row_ext_create(): Add the parameter "tuple".  Move the implementation
      from row0ext.ic to row0ext.c.
      
      row_ext_lookup_ith(), row_ext_lookup(): Return a const pointer.  Remove
      the parameters "field" and "f_len".  Make the row_ext_t* parameter const.
      
      row_ext_t: Remove the field zip_size.
      
      field_ref_zero[]: Declare in btr0types.h instead of btr0cur.h.
      
      row_ext_lookup_low(): Rename to row_ext_cache_fill() and change the
      signature.
    • branches/zip: Clean up after r2129: · a5bd2496
      marko authored
      univ.i: Do not define UNIV_DEBUG, UNIV_ZIP_DEBUG.
      
      btr_cur_del_unmark_for_ibuf(): Use the same comment in both btr0cur.c and
      btr0cur.h.  Wrap long lines.
    • branches/zip: Fix a bug where the zipped page and the uncompressed page · 5cab01e9
      sunny authored
      contents end up with conflicting versions of a record's state.  The
      zipped page record was not being marked as "(un)deleted" because we
      were not passing the zipped page contents to the (un)delete function.
      That function first (un)delete-marks the uncompressed version and
      then, if page_zip is not NULL, (un)delete-marks the record in the
      compressed page.
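The bug pattern is easy to reproduce in miniature: the (un)delete function only updates the compressed copy when page_zip is non-NULL, so callers that pass NULL for a compressed page leave the two copies inconsistent. A hedged sketch with toy structs; the real rec_t and page_zip_des_t are of course nothing this simple.

```c
#include <stddef.h>

/* Toy stand-ins for the uncompressed record and the compressed page. */
typedef struct { int deleted; } rec_t;
typedef struct { int deleted; } page_zip_des_t;

/* The (un)delete function first marks the uncompressed record, and marks
   the compressed copy only when page_zip is non-NULL.  The bug was that
   callers passed NULL for compressed pages, so the two copies diverged. */
static void rec_set_deleted(rec_t *rec, page_zip_des_t *page_zip, int flag)
{
    rec->deleted = flag;           /* uncompressed page record */
    if (page_zip != NULL) {
        page_zip->deleted = flag;  /* keep the compressed copy in sync */
    }
}
```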
    • branches/zip: ha_innobase::final_drop_index(): Allocate a separate transaction · d5c04aae
      marko authored
      for dropping the index trees, and set the dictionary operation flag, similar
      to what ha_innobase::add_index() does.  This should ensure correct crash
      recovery.