  1. 19 Oct, 2018 1 commit
    • MDEV-15662 Instant DROP COLUMN or changing the order of columns · 0e5a4ac2
      Marko Mäkelä authored
      Allow ADD COLUMN anywhere in a table, not only as the last column.
      
      Allow instant DROP COLUMN and instant changing the order of columns.
      
      The added columns will always be added last in clustered index records.
      In new records, instantly dropped columns will be stored as NULL or
      empty when possible.
      
      Information about dropped and reordered columns will be written in
      a metadata BLOB (mblob), which is stored before the first 'user' field
      in the hidden metadata record at the start of the clustered index.
      The presence of mblob is indicated by setting the delete-mark flag in
      the metadata record.
      
      The metadata BLOB stores the number of clustered index fields,
      followed by an array of column information for each field.
      For dropped columns, we store the NOT NULL flag, the fixed length,
      and for variable-length columns, whether the maximum length exceeded
      255 bytes. For non-dropped columns, we store the column position.
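
      Roughly, the per-field serialisation described above could look like
      the following sketch (layout and widths are illustrative only; the
      real encoding is produced by dict_table_t::serialise_columns()):

      #include <cstdint>
      #include <vector>

      struct field_info {
        bool     dropped;      // instantly dropped column
        bool     not_null;     // NOT NULL flag (stored for dropped columns)
        uint16_t fixed_len;    // fixed length; 0 if variable-length (dropped columns)
        bool     len_gt_255;   // variable-length maximum exceeded 255 bytes
        uint16_t col_pos;      // column position (stored for surviving columns)
      };

      std::vector<uint8_t> serialise_sketch(const std::vector<field_info>& fields)
      {
        std::vector<uint8_t> blob;
        // number of clustered index fields first
        blob.push_back(uint8_t(fields.size() >> 8));
        blob.push_back(uint8_t(fields.size()));
        for (const field_info& f : fields) {
          if (f.dropped) {
            // dropped column: NOT NULL flag, "long variable length" flag, fixed length
            blob.push_back(uint8_t(0x80 | (f.not_null ? 1 : 0) | (f.len_gt_255 ? 2 : 0)));
            blob.push_back(uint8_t(f.fixed_len >> 8));
            blob.push_back(uint8_t(f.fixed_len));
          } else {
            // surviving column: its position in the table definition
            blob.push_back(0);
            blob.push_back(uint8_t(f.col_pos >> 8));
            blob.push_back(uint8_t(f.col_pos));
          }
        }
        return blob;
      }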
      
      Unlike with MDEV-11369, when a table becomes empty, it cannot
      be converted back to the canonical format. The reason for this is
      that other threads may hold cached objects such as
      row_prebuilt_t::ins_node that could refer to dropped or reordered
      index fields.
      
      For instant DROP COLUMN and ROW_FORMAT=COMPACT or ROW_FORMAT=DYNAMIC,
      we must store the n_core_null_bytes in the root page, so that the
      chain of node pointer records can be followed in order to reach the
      leftmost leaf page where the metadata record is located.
      If the mblob is present, we will zero-initialize the strings
      "infimum" and "supremum" in the root page, and use the last byte of
      "supremum" for storing the number of null bytes (which are allocated
      but useless on node pointer pages). This is necessary for
      btr_cur_instant_init_metadata() to be able to navigate to the mblob.
      
      If the PRIMARY KEY contains any variable-length column and some
      nullable columns were instantly dropped, the dict_index_t::n_nullable
      in the data dictionary could be smaller than it actually is in the
      non-leaf pages. Because of this, the non-leaf pages could use more
      bytes for the null flags than the data dictionary expects, and we
      could be reading the lengths of the variable-length columns from the
      wrong offset, and thus reading the child page number from the wrong place.
      This is the result of two design mistakes that involve unnecessary
      storage of data: First, it is nonsense to store any data fields for
      the leftmost node pointer records, because the comparisons would be
      resolved by the MIN_REC_FLAG alone. Second, there cannot be any null
      fields in the clustered index node pointer fields, but we nevertheless
      reserve space for all the null flags.
      
      Limitations (future work):
      
      MDEV-17459 Allow instant ALTER TABLE even if FULLTEXT INDEX exists
      MDEV-17468 Avoid table rebuild on operations on generated columns
      MDEV-17494 Refuse ALGORITHM=INSTANT when the row size is too large
      
      btr_page_reorganize_low(): Preserve any metadata in the root page.
      Call lock_move_reorganize_page() only after restoring the "infimum"
      and "supremum" records, to avoid a memcmp() assertion failure.
      
      dict_col_t::DROPPED: Magic value for dict_col_t::ind.
      
      dict_col_t::clear_instant(): Renamed from dict_col_t::remove_instant().
      Do not assert that the column was instantly added, because we
      sometimes call this unconditionally for all columns.
      Convert an instantly added column to a "core column". The old name
      remove_instant() could be mistaken to refer to "instant DROP COLUMN".
      
      dict_col_t::is_added(): Rename from dict_col_t::is_instant().
      
      dtype_t::metadata_blob_init(): Initialize the mblob data type.
      
      dtuple_t::is_metadata(), dtuple_t::is_alter_metadata(),
      upd_t::is_metadata(), upd_t::is_alter_metadata(): Check if info_bits
      refer to a metadata record.
      
      dict_table_t::instant: Metadata about dropped or reordered columns.
      
      dict_table_t::prepare_instant(): Prepare
      ha_innobase_inplace_ctx::instant_table for instant ALTER TABLE.
      innobase_instant_try() will pass this to dict_table_t::instant_column().
      On rollback, dict_table_t::rollback_instant() will be called.
      
      dict_table_t::instant_column(): Renamed from instant_add_column().
      Add the parameter col_map so that columns can be reordered.
      Copy and adjust v_cols[] as well.
      
      dict_table_t::find(): Find an old column based on a new column number.
      
      dict_table_t::serialise_columns(), dict_table_t::deserialise_columns():
      Convert the mblob.
      
      dict_index_t::instant_metadata(): Create the metadata record
      for instant ALTER TABLE. Invoke dict_table_t::serialise_columns().
      
      dict_index_t::reconstruct_fields(): Invoked by
      dict_table_t::deserialise_columns().
      
      dict_index_t::clear_instant_alter(): Move the fields for the
      dropped columns to the end, and sort the surviving index fields
      in ascending order of column position.
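
      The reordering amounts to the following (a sketch over a simplified
      field type; the real code operates on dict_field_t):

      #include <algorithm>
      #include <vector>

      struct field_sketch { unsigned col_pos; bool dropped; };

      void clear_instant_alter_sketch(std::vector<field_sketch>& fields)
      {
        // move the fields of dropped columns to the end, keeping relative order
        auto end_of_kept = std::stable_partition(
            fields.begin(), fields.end(),
            [](const field_sketch& f) { return !f.dropped; });
        // sort the surviving fields in ascending order of column position
        std::sort(fields.begin(), end_of_kept,
                  [](const field_sketch& a, const field_sketch& b)
                  { return a.col_pos < b.col_pos; });
      }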
      
      ha_innobase::check_if_supported_inplace_alter(): Do not allow
      adding a FTS_DOC_ID column if a hidden FTS_DOC_ID column exists
      due to FULLTEXT INDEX. (This always required ALGORITHM=COPY.)
      
      instant_alter_column_possible(): Add a parameter for InnoDB table,
      to check for additional conditions, such as the maximum number of
      index fields.
      
      ha_innobase_inplace_ctx::first_alter_pos: The first column whose position
      is affected by instant ADD, DROP, or changing the order of columns.
      
      innobase_build_col_map(): Skip added virtual columns.
      
      prepare_inplace_add_virtual(): Correctly compute num_to_add_vcol.
      Remove some unnecessary code. Note that the call to
      innodb_base_col_setup() should be executed later.
      
      commit_try_norebuild(): If ctx->is_instant(), let the virtual
      columns be added or dropped by innobase_instant_try().
      
      innobase_instant_try(): Fill in a zero default value for the
      hidden column FTS_DOC_ID (to reduce the work needed in MDEV-17459).
      If any columns were dropped or reordered (or added not last),
      delete any SYS_COLUMNS records for the following columns, and
      insert SYS_COLUMNS records for all subsequent stored columns as well
      as for all virtual columns. If any virtual column is dropped, rewrite
      all virtual column metadata. Use a shortcut only for adding
      virtual columns. This is because innobase_drop_virtual_try()
      assumes that the dropped virtual columns still exist in ctx->old_table.
      
      innodb_update_cols(): Renamed from innodb_update_n_cols().
      
      innobase_add_one_virtual(), innobase_insert_sys_virtual(): Change
      the return type to bool, and invoke my_error() when detecting an error.
      
      innodb_insert_sys_columns(): Insert a record into SYS_COLUMNS.
      Refactored from innobase_add_one_virtual() and innobase_instant_add_col().
      
      innobase_instant_add_col(): Replace the parameter dfield with type.
      
      innobase_instant_drop_cols(): Drop matching columns from SYS_COLUMNS
      and all columns from SYS_VIRTUAL.
      
      innobase_add_virtual_try(), innobase_drop_virtual_try(): Let
      the caller invoke innodb_update_cols().
      
      innobase_rename_column_try(): Skip dropped columns.
      
      commit_cache_norebuild(): Update table->fts->doc_col.
      
      dict_mem_table_col_rename_low(): Skip dropped columns.
      
      trx_undo_rec_get_partial_row(): Skip dropped columns.
      
      trx_undo_update_rec_get_update(): Handle the metadata BLOB correctly.
      
      trx_undo_page_report_modify(): Avoid out-of-bounds access to record fields.
      Log metadata records consistently.
      Apparently, the first fields of a clustered index may be updated
      in an update_undo vector when the index is ID_IND of SYS_FOREIGN,
      as part of renaming the table during ALTER TABLE. Normally, updates of
      the PRIMARY KEY should be logged as delete-mark and an insert.
      
      row_undo_mod_parse_undo_rec(), row_purge_parse_undo_rec():
      Use trx_undo_metadata.
      
      row_undo_mod_clust_low(): On metadata rollback, roll back the root page too.
      
      row_undo_mod_clust(): Relax an assertion. The delete-mark flag was
      repurposed for ALTER TABLE metadata records.
      
      row_rec_to_index_entry_impl(): Add the template parameter mblob
      and the optional parameter info_bits for specifying the desired new
      info bits. For the metadata tuple, allow conversion between the original
      format (ADD COLUMN only) and the generic format (with hidden BLOB).
      Add the optional parameter "pad" to determine whether the tuple should
      be padded to the index fields (on ALTER TABLE it should), or whether
      it should remain at its original size (on rollback).
      
      row_build_index_entry_low(): Clean up the code, removing
      redundant variables and conditions. For instantly dropped columns,
      generate a dummy value that is NULL, the empty string, or a
      fixed length of NUL bytes, depending on the type of the dropped column.
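
      The dummy-value rule can be pictured as follows (simplified types;
      the real code fills a dfield_t based on the dropped dict_col_t, and
      the magic NULL length is a placeholder here):

      #include <cstdint>

      struct dummy_value { const void* data; uint32_t len; };

      static const uint32_t NULL_LEN = 0xFFFFFFFFU;  // stands in for UNIV_SQL_NULL
      static const uint8_t  nul_buf[256] = {0};      // fixed lengths assumed <= 256 here

      inline dummy_value dropped_col_dummy(bool nullable, uint32_t fixed_len)
      {
        if (nullable)   return { nullptr, NULL_LEN }; // NULL
        if (!fixed_len) return { "", 0 };             // empty string
        return { nul_buf, fixed_len };                // fixed run of NUL bytes
      }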
      
      row_upd_clust_rec_by_insert_inherit_func(): On the update of PRIMARY KEY
      of a record that contained a dropped column whose value was stored
      externally, we will be inserting a dummy NULL or empty string value
      to the field of the dropped column. The externally stored column would
      eventually be dropped when purge removes the delete-marked record for
      the old PRIMARY KEY value.
      
      btr_index_rec_validate(): Recognize the metadata record.
      
      btr_discard_only_page_on_level(): Preserve the generic instant
      ALTER TABLE metadata.
      
      btr_set_instant(): Replaces page_set_instant(). This sets a clustered
      index root page to the appropriate format, or upgrades from
      the MDEV-11369 instant ADD COLUMN to generic ALTER TABLE format.
      
      btr_cur_instant_init_low(): Read and validate the metadata BLOB page
      before reconstructing the dictionary information based on it.
      
      btr_cur_instant_init_metadata(): Do not read any lengths from the
      metadata record header before reading the BLOB. At this point, we
      would not actually know how many nullable fields the metadata record
      contains.
      
      btr_cur_instant_root_init(): Initialize n_core_null_bytes in one
      of two possible ways.
      
      btr_cur_trim(): Handle the mblob record.
      
      row_metadata_to_tuple(): Convert a metadata record to a data tuple,
      based on the new info_bits of the metadata record.
      
      btr_cur_pessimistic_update(): Invoke row_metadata_to_tuple() if needed.
      Invoke dtuple_convert_big_rec() for metadata records if the record is
      too large, or if the mblob is not yet marked as externally stored.
      
      btr_cur_optimistic_delete_func(), btr_cur_pessimistic_delete():
      When the last user record is deleted, do not delete the
      generic instant ALTER TABLE metadata record. Only delete
      MDEV-11369 instant ADD COLUMN metadata records.
      
      btr_cur_optimistic_insert(): Avoid unnecessary computation of rec_size.
      
      btr_pcur_store_position(): Allow a logically empty page to contain
      a metadata record for generic ALTER TABLE.
      
      REC_INFO_DEFAULT_ROW_ADD: Renamed from REC_INFO_DEFAULT_ROW.
      This is for the old instant ADD COLUMN (MDEV-11369) only.
      
      REC_INFO_DEFAULT_ROW_ALTER: The more generic metadata record,
      with additional information for dropped or reordered columns.
      
      rec_info_bits_valid(): Remove. The only case when this would fail
      is when the record is the generic ALTER TABLE metadata record.
      
      rec_is_alter_metadata(): Check if a record is the metadata record
      for instant ALTER TABLE (other than ADD COLUMN). NOTE: This function
      must not be invoked on node pointer records, because the delete-mark
      flag in those records may be set (it is garbage), and then a debug
      assertion could fail because index->is_instant() does not necessarily
      hold.
      
      rec_is_add_metadata(): Check if a record is MDEV-11369 ADD COLUMN metadata
      record (not more generic instant ALTER TABLE).
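
      The two flavours can be told apart by the info bits alone, roughly as
      follows (the flag values are placeholders; the authoritative constants
      live in rem0rec.h, and as noted above this must only be applied to
      clustered index leaf records):

      #include <cstdint>

      static const uint8_t MIN_REC_FLAG = 0x10;  // placeholder for REC_INFO_MIN_REC_FLAG
      static const uint8_t DELETED_FLAG = 0x20;  // placeholder for the delete-mark flag

      // metadata records always carry the MIN_REC flag; the delete-mark flag
      // additionally marks the generic ALTER variant that carries the mblob
      inline bool is_alter_metadata_sketch(uint8_t info_bits)
      {
        return (info_bits & (MIN_REC_FLAG | DELETED_FLAG))
               == (MIN_REC_FLAG | DELETED_FLAG);
      }

      inline bool is_add_metadata_sketch(uint8_t info_bits)
      {
        return (info_bits & (MIN_REC_FLAG | DELETED_FLAG)) == MIN_REC_FLAG;
      }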
      
      rec_get_converted_size_comp_prefix_low(): Assume that the metadata
      field will be stored externally. In dtuple_convert_big_rec() during
      the rec_get_converted_size() call, it would not be there yet.
      
      rec_get_converted_size_comp(): Replace status,fields,n_fields with tuple.
      
      rec_init_offsets_comp_ordinary(), rec_get_converted_size_comp_prefix_low(),
      rec_convert_dtuple_to_rec_comp(): Add template<bool mblob = false>.
      With mblob=true, process a record with a metadata BLOB.
      
      rec_copy_prefix_to_buf(): Assert that no fields beyond the key and
      system columns are being copied. Exclude the metadata BLOB field.
      
      rec_convert_dtuple_to_metadata_comp(): Convert an alter metadata tuple
      into a record.
      
      row_upd_index_replace_metadata(): Apply an update vector to an
      alter_metadata tuple.
      
      row_log_allocate(): Replace dict_index_t::is_instant()
      with a more appropriate condition that ignores dict_table_t::instant.
      Only a table on which the MDEV-11369 ADD COLUMN was performed
      can "lose its instantness" when it becomes empty. After
      instant DROP COLUMN or reordering columns, we cannot simply
      convert the table to the canonical format, because the data
      dictionary cache and all possibly existing references to it
      from other client connection threads would have to be adjusted.
      
      row_quiesce_write_index_fields(): Do not crash when the table contains
      an instantly dropped column.
      
      Thanks to Thirunarayanan Balathandayuthapani for discussing the design
      and implementing an initial prototype of this.
      Thanks to Matthias Leich for testing.
  2. 19 Sep, 2018 1 commit
    • Terminology: 'metadata record' instead of 'default row' · 755187c8
      Marko Mäkelä authored
      For instant ALTER TABLE, we store a hidden metadata record at the
      start of the clustered index, to indicate how the format of the
      records differs from the latest table definition.
      
      The term 'default row' is too specific, because it applies to
      instant ADD COLUMN only, and we will be supporting more classes
      of instant ALTER TABLE later on. For instant ADD COLUMN, we
      store the initial default values in the metadata record.
  3. 11 Sep, 2018 1 commit
    • MDEV-13564: Remove old crash-upgrade logic in 10.4 · 09af00cb
      Marko Mäkelä authored
      Stop supporting the additional *trunc.log files that were
      introduced via MySQL 5.7 to MariaDB Server 10.2 and 10.3.
      
      DB_TABLESPACE_TRUNCATED: Remove.
      
      purge_sys.truncate: A new structure to track undo tablespace
      file truncation.
      
      srv_start(): Remove the call to buf_pool_invalidate(). It is
      no longer necessary, given that we no longer access things in
      ways that violate the ARIES protocol. This call was originally
      added for innodb_file_format, and it may later have been necessary
      for the proper function of the MySQL 5.7 TRUNCATE recovery, which
      we are now removing.
      
      trx_purge_cleanse_purge_queue(): Take the undo tablespace as a
      parameter.
      
      trx_purge_truncate_history(): Rewrite everything mostly in a
      single function, replacing references to undo::Truncate.
      
      recv_apply_hashed_log_recs(): If any redo log is to be applied,
      and if the log_sys.log.subformat indicates that separately
      logged truncate may have been used, refuse to proceed except if
      innodb_force_recovery is set. We will still refuse crash-upgrade
      if TRUNCATE TABLE was logged. Undo tablespace truncation would
      only be logged in undo*trunc.log files, which we are no longer
      checking for.
  4. 24 Aug, 2018 1 commit
    • MDEV-16868 Same query gives different results · 1b4c5b73
      Marko Mäkelä authored
      An INSERT into a temporary table would fail to set the
      index page as modified. If there were no other write operations
      (such as UPDATE or DELETE) to the page, and the page was evicted,
      we would read back the old contents of the page, causing
      corruption or loss of data.
      
      page_cur_insert_rec_write_log(): Call mtr_t::set_modified()
      for temporary tables. Normally this is part of the mlog_open()
      call, but the mlog_open() call was only present in debug builds.
      This regression was caused by
      commit 48192f96
      which was preparation for MDEV-11369 and supposed to affect
      debug builds only.
      
      Thanks to Thirunarayanan Balathandayuthapani for debugging.
  5. 10 Aug, 2018 1 commit
    • Report InnoDB redo log corruption better · b853b4fd
      Marko Mäkelä authored
      recv_parse_log_recs(): Check for corruption before checking for
      end-of-log-buffer.
      
      mlog_parse_initial_log_record(), page_cur_parse_delete_rec():
      Flag corruption for out-of-bounds values, and let the caller
      dump the corrupted redo log extract.
  6. 26 Jul, 2018 1 commit
    • MDEV-16809 Allow full redo logging for ALTER TABLE · 0f90728b
      Marko Mäkelä authored
      Introduce the configuration option innodb_log_optimize_ddl
      for controlling whether native index creation or table-rebuild
      in InnoDB should keep optimizing the redo log
      (and writing MLOG_INDEX_LOAD records to ensure that
      concurrent backup would fail).
      
      By default, we have innodb_log_optimize_ddl=ON, that is,
      the default behaviour that was introduced in MariaDB 10.2.2
      (with the merge of InnoDB from MySQL 5.7) will be unchanged.
      
      BtrBulk::m_trx: Replaces m_trx_id. We must be able to check for
      KILL QUERY even if !m_flush_observer (innodb_log_optimize_ddl=OFF).
      
      page_cur_insert_rec_write_log(): Declare globally, so that this
      can be called from PageBulk::insert().
      
      row_merge_insert_index_tuples(): Remove the unused parameter trx_id.
      
      row_merge_build_indexes(): Enable or disable redo logging based on
      the innodb_log_optimize_ddl parameter.
      
      PageBulk::init(), PageBulk::insert(), PageBulk::finish(): Write
      redo log records if needed. For ROW_FORMAT=COMPRESSED, redo log
      will be written in PageBulk::compress() unless we called
      m_mtr.set_log_mode(MTR_LOG_NO_REDO).
  7. 12 May, 2018 1 commit
  8. 30 Apr, 2018 1 commit
  9. 28 Apr, 2018 3 commits
  10. 29 Mar, 2018 1 commit
    • MDEV-12266: Remove dict_index_t::space · 604fea1a
      Marko Mäkelä authored
      We can rely on the dict_table_t::space. All indexes of a table object
      are always in the same tablespace. (For fulltext indexes, the data is
      located in auxiliary tables, and these will continue to have their own
      table objects, separate from the main table.)
  11. 08 Feb, 2018 1 commit
  12. 12 Jan, 2018 1 commit
    • MDEV-14935 Remove bogus conditions related to not redo-logging PAGE_MAX_TRX_ID changes · 3e6fcb6a
      Marko Mäkelä authored
      InnoDB originally skipped the redo logging of PAGE_MAX_TRX_ID changes
      until I enabled it in commit e76b873f,
      which was already part of MySQL 5.5.5.
      
      Later, when a more complete history of the InnoDB Plugin for MySQL 5.1
      (aka branches/zip in the InnoDB subversion repository) and of the
      planned-to-be closed-source branches/innodb+ that became the basis of
      InnoDB in MySQL 5.5 was pushed to the MySQL source repository, the
      change was part of commit 509e761f:
      
       ------------------------------------------------------------------------
       r5038 | marko | 2009-05-19 22:59:07 +0300 (Tue, 19 May 2009) | 30 lines
      
       branches/zip: Write PAGE_MAX_TRX_ID to the redo log. Otherwise,
       transactions that are started before the rollback of incomplete
       transactions has finished may have an inconsistent view of the
       secondary indexes.
      
       dict_index_is_sec_or_ibuf(): Auxiliary function for controlling
       updates and checks of PAGE_MAX_TRX_ID: check whether an index is a
       secondary index or the insert buffer tree.
      
       page_set_max_trx_id(), page_update_max_trx_id(),
       lock_rec_insert_check_and_lock(),
       lock_sec_rec_modify_check_and_lock(), btr_cur_ins_lock_and_undo(),
       btr_cur_upd_lock_and_undo(): Add the parameter mtr.
      
       page_set_max_trx_id(): Allow mtr to be NULL.  When mtr==NULL, do not
       attempt to write to the redo log.  This only occurs when creating a
       page or reorganizing a compressed page.  In these cases, the
       PAGE_MAX_TRX_ID will be set correctly during the application of redo
       log records, even though there is no explicit log record about it.
      
       btr_discard_only_page_on_level(): Preserve PAGE_MAX_TRX_ID.  This
       function should be unreachable, though.
      
       btr_cur_pessimistic_update(): Update PAGE_MAX_TRX_ID.
      
       Add some assertions for checking that PAGE_MAX_TRX_ID is set on all
       secondary index leaf pages.
      
       rb://115 tested by Michael, fixes Issue #211
       ------------------------------------------------------------------------
      
      After this fix, some bogus references to recv_recovery_is_on()
      remained. Also, some references could be replaced with
      references to index->is_dummy to prepare us for MDEV-14481
      (background redo log apply).
  13. 06 Oct, 2017 1 commit
    • MDEV-11369 Instant ADD COLUMN for InnoDB · a4948daf
      Marko Mäkelä authored
      For InnoDB tables, adding, dropping and reordering columns have
      required a rebuild of the table and all its indexes. Since MySQL 5.6
      (and MariaDB 10.0) this has been supported online (LOCK=NONE), allowing
      concurrent modification of the tables.
      
      This work revises the InnoDB ROW_FORMAT=REDUNDANT, ROW_FORMAT=COMPACT
      and ROW_FORMAT=DYNAMIC record formats so that columns can be appended
      instantaneously,
      with only minor changes performed to the table structure. The counter
      innodb_instant_alter_column in INFORMATION_SCHEMA.GLOBAL_STATUS
      is incremented whenever a table rebuild operation is converted into
      an instant ADD COLUMN operation.
      
      ROW_FORMAT=COMPRESSED tables will not support instant ADD COLUMN.
      
      Some usability limitations will be addressed in subsequent work:
      
      MDEV-13134 Introduce ALTER TABLE attributes ALGORITHM=NOCOPY
      and ALGORITHM=INSTANT
      MDEV-14016 Allow instant ADD COLUMN, ADD INDEX, LOCK=NONE
      
      The format of the clustered index (PRIMARY KEY) is changed as follows:
      
      (1) The FIL_PAGE_TYPE of the root page will be FIL_PAGE_TYPE_INSTANT,
      and a new field PAGE_INSTANT will contain the original number of fields
      in the clustered index ('core' fields).
      If instant ADD COLUMN has not been used or the table becomes empty,
      or the very first instant ADD COLUMN operation is rolled back,
      the fields PAGE_INSTANT and FIL_PAGE_TYPE will be reset
      to 0 and FIL_PAGE_INDEX.
      
      (2) A special 'default row' record is inserted into the leftmost leaf,
      between the page infimum and the first user record. This record is
      distinguished by the REC_INFO_MIN_REC_FLAG, and it is otherwise in the
      same format as records that contain values for the instantly added
      columns. This 'default row' always has the same number of fields as
      the clustered index according to the table definition. The values of
      'core' fields are to be ignored. For other fields, the 'default row'
      will contain the default values as they were during the ALTER TABLE
      statement. (If the column default values are changed later, those
      values will only be stored in the .frm file. The 'default row' will
      contain the original evaluated values, which must be the same for
      every row.) The 'default row' must be completely hidden from
      higher-level access routines. Assertions have been added to ensure
      that no 'default row' is ever present in the adaptive hash index
      or in locked records. The 'default row' is never delete-marked.
      
      (3) In clustered index leaf page records, the number of fields must
      reside between the number of 'core' fields (dict_index_t::n_core_fields
      introduced in this work) and dict_index_t::n_fields. If the number
      of fields is less than dict_index_t::n_fields, the missing fields
      are replaced with the column value of the 'default row'.
      Note: The number of fields in the record may shrink if some of the
      last instantly added columns are updated to the value that is
      in the 'default row'. The function btr_cur_trim() implements this
      'compression' on update and rollback; dtuple::trim() implements it
      on insert.
      
      (4) In ROW_FORMAT=COMPACT and ROW_FORMAT=DYNAMIC records, the new
      status value REC_STATUS_COLUMNS_ADDED will indicate the presence of
      a new record header that will encode n_fields-n_core_fields-1 in
      1 or 2 bytes. (In ROW_FORMAT=REDUNDANT records, the record header
      always explicitly encodes the number of fields.)
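
      The 1-or-2-byte counter could be encoded along these lines (a sketch
      under the assumption that values below 0x80 take one byte and larger
      values take two bytes with the high bit of the first byte set; the
      definitive encoding is in the record header code):

      #include <cstdint>

      // Encode n_fields - n_core_fields - 1; returns the number of bytes written.
      // Values up to 0x7FFF are assumed.
      inline unsigned encode_n_added(uint8_t* buf, unsigned n_added)
      {
        if (n_added < 0x80) {
          buf[0] = uint8_t(n_added);
          return 1;
        }
        buf[0] = uint8_t(0x80 | (n_added >> 8));
        buf[1] = uint8_t(n_added);
        return 2;
      }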
      
      We introduce the undo log record type TRX_UNDO_INSERT_DEFAULT for
      covering the insert of the 'default row' record when instant ADD COLUMN
      is used for the first time. Subsequent instant ADD COLUMN can use
      TRX_UNDO_UPD_EXIST_REC.
      
      This is joint work with Vin Chen (陈福荣) from Tencent. The design
      that was discussed in April 2017 would not have allowed import or
      export of data files, because instead of the 'default row' it would
      have introduced a data dictionary table. The test
      rpl.rpl_alter_instant is exactly as contributed in pull request #408.
      The test innodb.instant_alter is based on a contributed test.
      
      The redo log record format changes for ROW_FORMAT=DYNAMIC and
      ROW_FORMAT=COMPACT are as contributed. (With this change present,
      crash recovery from MariaDB 10.3.1 will fail in spectacular ways!)
      Also the semantics of higher-level redo log records that modify the
      PAGE_INSTANT field is changed. The redo log format version identifier
      was already changed to LOG_HEADER_FORMAT_CURRENT=103 in MariaDB 10.3.1.
      
      Everything else has been rewritten by me. Thanks to Elena Stepanova,
      the code has been tested extensively.
      
      When rolling back an instant ADD COLUMN operation, we must empty the
      PAGE_FREE list after deleting or shortening the 'default row' record,
      by calling either btr_page_empty() or btr_page_reorganize(). We must
      know the size of each entry in the PAGE_FREE list. If rollback left a
      freed copy of the 'default row' in the PAGE_FREE list, we would be
      unable to determine its size (if it is in ROW_FORMAT=COMPACT or
      ROW_FORMAT=DYNAMIC) because it would contain more fields than the
      rolled-back definition of the clustered index.
      
      UNIV_SQL_DEFAULT: A new special constant that designates an instantly
      added column that is not present in the clustered index record.
      
      len_is_stored(): Check if a length is an actual length. There are
      two magic length values: UNIV_SQL_DEFAULT, UNIV_SQL_NULL.
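
      A minimal sketch of len_is_stored(), assuming the two magic lengths
      are the two largest 32-bit values (the actual constants are defined
      in the InnoDB headers):

      #include <cstdint>

      static const uint32_t SQL_NULL_LEN    = 0xFFFFFFFFU;  // plays the role of UNIV_SQL_NULL
      static const uint32_t SQL_DEFAULT_LEN = 0xFFFFFFFEU;  // plays the role of UNIV_SQL_DEFAULT

      inline bool len_is_stored_sketch(uint32_t len)
      {
        return len != SQL_NULL_LEN && len != SQL_DEFAULT_LEN;
      }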
      
      dict_col_t::def_val: The 'default row' value of the column.  If the
      column is not added instantly, def_val.len will be UNIV_SQL_DEFAULT.
      
      dict_col_t: Add the accessors is_virtual(), is_nullable(), is_instant(),
      instant_value().
      
      dict_col_t::remove_instant(): Remove the 'instant ADD' status of
      a column.
      
      dict_col_t::name(const dict_table_t& table): Replaces
      dict_table_get_col_name().
      
      dict_index_t::n_core_fields: The original number of fields.
      For secondary indexes and if instant ADD COLUMN has not been used,
      this will be equal to dict_index_t::n_fields.
      
      dict_index_t::n_core_null_bytes: Number of bytes needed to
      represent the null flags; usually equal to UT_BITS_IN_BYTES(n_nullable).
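
      For reference, UT_BITS_IN_BYTES() merely rounds a bit count up to
      whole bytes:

      // conceptual equivalent of UT_BITS_IN_BYTES(n_bits)
      constexpr unsigned bits_in_bytes(unsigned n_bits) { return (n_bits + 7) / 8; }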
      
      dict_index_t::NO_CORE_NULL_BYTES: Magic value signalling that
      n_core_null_bytes was not initialized yet from the clustered index
      root page.
      
      dict_index_t: Add the accessors is_instant(), is_clust(),
      get_n_nullable(), instant_field_value().
      
      dict_index_t::instant_add_field(): Adjust clustered index metadata
      for instant ADD COLUMN.
      
      dict_index_t::remove_instant(): Remove the 'instant ADD' status
      of a clustered index when the table becomes empty, or the very first
      instant ADD COLUMN operation is rolled back.
      
      dict_table_t: Add the accessors is_instant(), is_temporary(),
      supports_instant().
      
      dict_table_t::instant_add_column(): Adjust metadata for
      instant ADD COLUMN.
      
      dict_table_t::rollback_instant(): Adjust metadata on the rollback
      of instant ADD COLUMN.
      
      prepare_inplace_alter_table_dict(): First create the ctx->new_table,
      and only then decide if the table really needs to be rebuilt.
      We must split the creation of table or index metadata from the
      creation of the dictionary table records and the creation of
      the data. In this way, we can transform a table-rebuilding operation
      into an instant ADD COLUMN operation. Dictionary objects will only
      be added to cache when table rebuilding or index creation is needed.
      The ctx->instant_table will never be added to cache.
      
      dict_table_t::add_to_cache(): Modified and renamed from
      dict_table_add_to_cache(). Do not modify the table metadata.
      Let the callers invoke dict_table_add_system_columns() and if needed,
      set can_be_evicted.
      
      dict_create_sys_tables_tuple(), dict_create_table_step(): Omit the
      system columns (which will now exist in the dict_table_t object
      already at this point).
      
      dict_create_table_step(): Expect the callers to invoke
      dict_table_add_system_columns().
      
      pars_create_table(): Before creating the table creation execution
      graph, invoke dict_table_add_system_columns().
      
      row_create_table_for_mysql(): Expect all callers to invoke
      dict_table_add_system_columns().
      
      create_index_dict(): Replaces row_merge_create_index_graph().
      
      innodb_update_n_cols(): Renamed from innobase_update_n_virtual().
      Call my_error() if an error occurs.
      
      btr_cur_instant_init(), btr_cur_instant_init_low(),
      btr_cur_instant_root_init():
      Load additional metadata from the clustered index and set
      dict_index_t::n_core_null_bytes. This is invoked
      when table metadata is first loaded into the data dictionary.
      
      dict_boot(): Initialize n_core_null_bytes for the four hard-coded
      dictionary tables.
      
      dict_create_index_step(): Initialize n_core_null_bytes. This is
      executed as part of CREATE TABLE.
      
      dict_index_build_internal_clust(): Initialize n_core_null_bytes to
      NO_CORE_NULL_BYTES if table->supports_instant().
      
      row_create_index_for_mysql(): Initialize n_core_null_bytes for
      CREATE TEMPORARY TABLE.
      
      commit_cache_norebuild(): Call the code to rename or enlarge columns
      in the cache only if instant ADD COLUMN is not being used.
      (Instant ADD COLUMN would copy all column metadata from
      instant_table to old_table, including the names and lengths.)
      
      PAGE_INSTANT: A new 13-bit field for storing dict_index_t::n_core_fields.
      This is repurposing the 16-bit field PAGE_DIRECTION, of which only the
      least significant 3 bits were used. The original byte containing
      PAGE_DIRECTION will be accessible via the new constant PAGE_DIRECTION_B.
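
      A sketch of how the 13-bit field can share the 16-bit slot with the
      3-bit PAGE_DIRECTION (the shift amount is an assumption; the
      authoritative code is in page_get_instant() and page_set_instant()):

      #include <cstdint>

      // low 3 bits: PAGE_DIRECTION; upper 13 bits: PAGE_INSTANT (assumed layout)
      inline uint16_t get_instant_sketch(uint16_t field)
      {
        return uint16_t(field >> 3);
      }

      inline uint16_t set_instant_sketch(uint16_t field, uint16_t n_core_fields)
      {
        return uint16_t((field & 7U) | (n_core_fields << 3));
      }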
      
      page_get_instant(), page_set_instant(): Accessors for the PAGE_INSTANT.
      
      page_ptr_get_direction(), page_get_direction(),
      page_ptr_set_direction(): Accessors for PAGE_DIRECTION.
      
      page_direction_reset(): Reset PAGE_DIRECTION, PAGE_N_DIRECTION.
      
      page_direction_increment(): Increment PAGE_N_DIRECTION
      and set PAGE_DIRECTION.
      
      rec_get_offsets(): Use the 'leaf' parameter for non-debug purposes,
      and assume that heap_no is always set.
      Initialize all dict_index_t::n_fields for ROW_FORMAT=REDUNDANT records,
      even if the record contains fewer fields.
      
      rec_offs_make_valid(): Add the parameter 'leaf'.
      
      rec_copy_prefix_to_dtuple(): Assert that the tuple is only built
      on the core fields. Instant ADD COLUMN only applies to the
      clustered index, and we should never build a search key that has
      more than the PRIMARY KEY and possibly DB_TRX_ID,DB_ROLL_PTR.
      All these columns are always present.
      
      dict_index_build_data_tuple(): Remove assertions that would be
      duplicated in rec_copy_prefix_to_dtuple().
      
      rec_init_offsets(): Support ROW_FORMAT=REDUNDANT records whose
      number of fields is between n_core_fields and n_fields.
      
      cmp_rec_rec_with_match(): Implement the comparison between two
      MIN_REC_FLAG records.
      
      trx_t::in_rollback: Make the field available in non-debug builds.
      
      trx_start_for_ddl_low(): Remove dangerous error-tolerance.
      A dictionary transaction must be flagged as such before it has generated
      any undo log records. This is because trx_undo_assign_undo() will mark
      the transaction as a dictionary transaction in the undo log header
      right before the very first undo log record is being written.
      
      btr_index_rec_validate(): Account for instant ADD COLUMN.
      
      row_undo_ins_remove_clust_rec(): On the rollback of an insert into
      SYS_COLUMNS, revert instant ADD COLUMN in the cache by removing the
      last column from the table and the clustered index.
      
      row_search_on_row_ref(), row_undo_mod_parse_undo_rec(), row_undo_mod(),
      trx_undo_update_rec_get_update(): Handle the 'default row'
      as a special case.
      
      dtuple_t::trim(index): Omit a redundant suffix of an index tuple right
      before insert or update. After instant ADD COLUMN, if the last fields
      of a clustered index tuple match the 'default row', there is no
      need to store them. While trimming the entry, we must hold a page latch,
      so that the table cannot be emptied and the 'default row' be deleted.
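
      The trimming rule amounts to the following (a sketch with simplified
      field values; the real code compares dfield_t contents against the
      'default row' record):

      #include <cstddef>
      #include <string>
      #include <vector>

      void trim_sketch(std::vector<std::string>& entry,
                       const std::vector<std::string>& default_row,
                       size_t n_core_fields)
      {
        // drop a suffix of fields whose values match the 'default row'
        size_t n = entry.size();
        while (n > n_core_fields && entry[n - 1] == default_row[n - 1]) {
          --n;
        }
        entry.resize(n);
      }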
      
      btr_cur_optimistic_update(), btr_cur_pessimistic_update(),
      row_upd_clust_rec_by_insert(), row_ins_clust_index_entry_low():
      Invoke dtuple_t::trim() if needed.
      
      row_ins_clust_index_entry(): Restore dtuple_t::n_fields after calling
      row_ins_clust_index_entry_low().
      
      rec_get_converted_size(), rec_get_converted_size_comp(): Allow the number
      of fields to be between n_core_fields and n_fields. Do not support
      infimum,supremum. They are never supposed to be stored in dtuple_t,
      because page creation nowadays uses a lower-level method for initializing
      them.
      
      rec_convert_dtuple_to_rec_comp(): Assign the status bits based on the
      number of fields.
      
      btr_cur_trim(): In an update, trim the index entry as needed. For the
      'default row', handle rollback specially. For user records, omit
      fields that match the 'default row'.
      
      btr_cur_optimistic_delete_func(), btr_cur_pessimistic_delete():
      Skip locking and adaptive hash index for the 'default row'.
      
      row_log_table_apply_convert_mrec(): Replace 'default row' values if needed.
      In the temporary file that is applied by row_log_table_apply(),
      we must identify whether the records contain the extra header for
      instantly added columns. For now, we will allocate an additional byte
      for this for ROW_T_INSERT and ROW_T_UPDATE records when the source table
      has been subject to instant ADD COLUMN. The ROW_T_DELETE records are
      fine, as they will be converted and will only contain 'core' columns
      (PRIMARY KEY and some system columns) that are converted from dtuple_t.
      
      rec_get_converted_size_temp(), rec_init_offsets_temp(),
      rec_convert_dtuple_to_temp(): Add the parameter 'status'.
      
      REC_INFO_DEFAULT_ROW = REC_INFO_MIN_REC_FLAG | REC_STATUS_COLUMNS_ADDED:
      An info_bits constant for distinguishing the 'default row' record.
      
      rec_comp_status_t: An enum of the status bit values.
      
      rec_leaf_format: An enum that replaces the bool parameter of
      rec_init_offsets_comp_ordinary().
  14. 21 Sep, 2017 1 commit
    • Fix bogus rec_get_offsets() debug assertion failures for ROW_FORMAT=REDUNDANT · 9c373d4d
      Marko Mäkelä authored
      When the debug parameter 'bool leaf' was added to rec_get_offsets(),
      also some debug assertions for reading the heap_no of ROW_FORMAT=REDUNDANT
      records were added. However, the heap number is uninitialized when
      offsets are being computed for to-be-inserted records.
      
      For debug builds, initialize the heap number to a dummy value, so that
      the record will be interpreted as 'user record'. The infimum and supremum
      pseudo-records are never copied from the page frame and never inserted;
      they are part of the page creation.
      
      rec_convert_dtuple_to_rec_old(): Remove a bogus memset() in debug builds.
  15. 20 Sep, 2017 1 commit
    • Add the parameter bool leaf to rec_get_offsets() · 48192f96
      Marko Mäkelä authored
      This should affect debug builds only. Debug builds will check that
      the status bits of ROW_FORMAT!=REDUNDANT records match the is_leaf
      parameter.
      
      The only observable change to non-debug builds should be the addition of
      the is_leaf parameter to the function rec_copy_prefix_to_dtuple(),
      and the removal of some calls to update the adaptive hash index
      (it is only built for the leaf pages).
      
      This change should have been made in MySQL 5.0.3, instead of
      introducing the status flags in the ROW_FORMAT=COMPACT record header.
  16. 06 Sep, 2017 1 commit
    • MDEV-13103 Assertion `flags & BUF_PAGE_PRINT_NO_CRASH' failed in buf_page_print · 6b45355e
      Marko Mäkelä authored
      buf_page_print(): Remove the parameter 'flags',
      and when a server abort is intended, perform that in the caller.
      
      In this way, page corruption reports due to different reasons
      can be distinguished better.
      
      This is non-functional code refactoring that does not fix any
      page corruption issues. The change is only made to avoid falsely
      grouping together unrelated causes of page corruption.
  17. 21 Apr, 2017 1 commit
    • MDEV-12488 Remove type mismatch in InnoDB printf-like calls · 5684aa22
      Marko Mäkelä authored
      Alias the InnoDB ulint and lint data types to size_t and ssize_t,
      which are the standard names for the machine-word-width data types.
      
      Correspondingly, define ULINTPF as "%zu" and introduce ULINTPFx as "%zx".
      In this way, better compiler warnings for type mismatch are possible.
      
      Furthermore, use PRIu64 as the 64-bit integer format, and define
      the feature macro __STDC_FORMAT_MACROS to enable it on Red Hat systems.
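
      The formats in use, in a self-contained illustration (the message
      text is made up):

      #include <cinttypes>  // PRIu64; __STDC_FORMAT_MACROS may be needed on older glibc
      #include <cstddef>
      #include <cstdio>

      typedef size_t ulint;      // ulint is now an alias of size_t
      #define ULINTPF  "%zu"
      #define ULINTPFx "%zx"

      void report_progress(ulint n_pages, uint64_t lsn)
      {
        std::printf("processed " ULINTPF " pages up to LSN %" PRIu64 "\n",
                    n_pages, lsn);
      }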
      
      Fix some errors in error messages, and replace some error messages
      with assertions.
      Most notably, an IMPORT TABLESPACE error message in InnoDB was
      displaying the number of columns instead of the mismatching flags.
  18. 17 Mar, 2017 1 commit
    • MDEV-12271 Port MySQL 8.0 Bug#23150562 REMOVE UNIV_MUST_NOT_INLINE AND UNIV_NONINL · 4e1116b2
      Marko Mäkelä authored
      Also, remove empty .ic files that were not removed by my MySQL commit.
      
      Problem:
      InnoDB used to support a compilation mode that allowed choosing
      whether the function definitions in .ic files are to be inlined or not.
      This stopped making sense when InnoDB moved to C++ in MySQL 5.6
      (and ha_innodb.cc started to #include .ic files), and more so in
      MySQL 5.7 when inline methods and functions were introduced
      in .h files.
      
      Solution:
      Remove all references to UNIV_NONINL and UNIV_MUST_NOT_INLINE from
      all files, assuming that the symbols are never defined.
      Remove the files fut0fut.cc and ut0byte.cc which only mattered when
      UNIV_NONINL was defined.
  19. 03 Mar, 2017 1 commit
    • MDEV-12121 Introduce build option WITH_INNODB_AHI to disable innodb_adaptive_hash_index · 27b9989d
      Marko Mäkelä authored
      The InnoDB adaptive hash index sometimes degrades the performance of
      InnoDB, and it is sometimes disabled to get more consistent performance.
      We should have a compile-time option to disable the adaptive hash index.
      
      Let us introduce two options:
      
      OPTION(WITH_INNODB_AHI "Include innodb_adaptive_hash_index" ON)
      OPTION(WITH_INNODB_ROOT_GUESS "Cache index root block descriptors" ON)
      
      where WITH_INNODB_AHI always implies WITH_INNODB_ROOT_GUESS.
      
      As part of this change, the misleadingly named function
      trx_search_latch_release_if_reserved(trx) will be replaced with the macro
      trx_assert_no_search_latch(trx) that will be empty unless
      BTR_CUR_HASH_ADAPT is defined (cmake -DWITH_INNODB_AHI=ON).
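
      A sketch of the compile-time switch (the macro body shown for the
      enabled case is hypothetical):

      #include <cassert>

      struct trx_sketch_t { bool has_search_latch; };  // hypothetical stand-in for trx_t

      #ifdef BTR_CUR_HASH_ADAPT
      # define trx_assert_no_search_latch(t) assert(!(t)->has_search_latch)
      #else
      # define trx_assert_no_search_latch(t) ((void) 0)  // empty without AHI support
      #endif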
      
      We will also remove the unused column
      INFORMATION_SCHEMA.INNODB_TRX.TRX_ADAPTIVE_HASH_TIMEOUT.
      In MariaDB Server 10.1, it used to reflect the value of
      trx_t::search_latch_timeout which could be adjusted during
      row_search_for_mysql(). In 10.2, there is no such field.
      
      Other than the removal of the unused column TRX_ADAPTIVE_HASH_TIMEOUT,
      this is an almost non-functional change to the server when using the
      default build options.
      
      Some tests are adjusted so that they will work with both
      -DWITH_INNODB_AHI=ON and -DWITH_INNODB_AHI=OFF. The test
      innodb.innodb_monitor has been renamed to innodb.monitor
      in order to track MySQL 5.7, and the duplicate tests
      sys_vars.innodb_monitor_* are removed.
  20. 30 Dec, 2016 1 commit
    • MDEV-11690 Remove UNIV_HOTBACKUP · 63574f12
      Marko Mäkelä authored
      The InnoDB source code contains quite a few references to a closed-source
      hot backup tool which was originally called InnoDB Hot Backup (ibbackup)
      and later incorporated in MySQL Enterprise Backup.
      
      The open source backup tool XtraBackup uses the full database for recovery.
      So, the references to UNIV_HOTBACKUP are only cluttering the source code.
  21. 09 Dec, 2016 1 commit
    • MDEV-11487 Revert InnoDB internal temporary tables from WL#7682 · c868acdf
      Marko Mäkelä authored
      WL#7682 in MySQL 5.7 introduced the possibility to create light-weight
      temporary tables in InnoDB. These are called 'intrinsic temporary tables'
      in InnoDB, and in MySQL 5.7, they can be created by the optimizer for
      sorting or buffering data in query processing.
      
      In MariaDB 10.2, the optimizer temporary tables cannot be created in
      InnoDB, so we should remove the dead code and related data structures.
  22. 02 Sep, 2016 1 commit
    • Merge InnoDB 5.7 from mysql-5.7.9. · 2e814d47
      Jan Lindström authored
      Contains also
      
      MDEV-10547: Test multi_update_innodb fails with InnoDB 5.7
      
      	The failure happened because 5.7 has changed the signature of
      	the bool handler::primary_key_is_clustered() const
      	virtual function ("const" was added). InnoDB was using the old
      	signature which caused the function not to be used.
      
      MDEV-10550: Parallel replication lock waits/deadlock handling does not work with InnoDB 5.7
      
      	Fixed mutexing problem on lock_trx_handle_wait. Note that
      	rpl_parallel and rpl_optimistic_parallel tests still
      	fail.
      
      MDEV-10156 : Group commit tests fail on 10.2 InnoDB (branch bb-10.2-jan)
        Reason: incorrect merge
      
      MDEV-10550: Parallel replication can't sync with master in InnoDB 5.7 (branch bb-10.2-jan)
        Reason: incorrect merge
  23. 04 May, 2015 2 commits
  24. 05 May, 2014 2 commits
  25. 26 Feb, 2014 1 commit
  26. 16 Dec, 2013 2 commits
  27. 28 Feb, 2013 1 commit
  28. 01 Nov, 2012 1 commit
  29. 15 Jun, 2012 1 commit
  30. 21 Nov, 2011 1 commit
  31. 14 Jul, 2011 1 commit
  32. 29 Apr, 2011 1 commit
  33. 12 Apr, 2010 2 commits
  34. 07 Apr, 2010 1 commit