  1. 07 May, 2020 1 commit
  2. 02 Dec, 2019 1 commit
  3. 11 May, 2019 1 commit
  4. 18 Mar, 2019 1 commit
    • MDEV-18644: Support full_crc32 for page_compressed · 6b6fa3cd
      Marko Mäkelä authored
      This is a follow-up task to MDEV-12026, which introduced
      innodb_checksum_algorithm=full_crc32 and a simpler page format.
      MDEV-12026 did not enable full_crc32 for page_compressed tables,
      which we will be doing now.
      
      This is joint work with Thirunarayanan Balathandayuthapani.
      
      For innodb_checksum_algorithm=full_crc32 we change the
      page_compressed format as follows:
      
      FIL_PAGE_TYPE: The most significant bit will be set to indicate
      page_compressed format. The least significant bits will contain
      the compressed page size, rounded up to a multiple of 256 bytes.
      
      The checksum will be stored in the last 4 bytes of the page
      (whether it is the full page or a page_compressed page whose
      size is determined by FIL_PAGE_TYPE), covering all preceding
      bytes of the page. If encryption is used, then the page will
      be encrypted between compression and computing the checksum.
      For page_compressed, FIL_PAGE_LSN will not be repeated at
      the end of the page.
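      The FIL_PAGE_TYPE encoding described above can be sketched as
      follows; the constant name PAGE_COMPRESSED_MARKER and the exact
      bit layout are illustrative assumptions, not the real InnoDB
      definitions.

```cpp
#include <cassert>
#include <cstdint>

// Sketch (assumed layout): a 16-bit FIL_PAGE_TYPE whose most significant
// bit flags page_compressed and whose low bits hold the compressed size
// in units of 256 bytes, rounded up.
static const uint16_t PAGE_COMPRESSED_MARKER = 1U << 15;

inline uint16_t encode_page_type(uint32_t compressed_len)
{
    // round the compressed length up to a multiple of 256 bytes
    uint32_t units = (compressed_len + 255) / 256;
    return uint16_t(PAGE_COMPRESSED_MARKER | units);
}

inline bool is_page_compressed(uint16_t page_type)
{
    return (page_type & PAGE_COMPRESSED_MARKER) != 0;
}

// Physical payload size in bytes (the rounded-up value)
inline uint32_t compressed_size(uint16_t page_type)
{
    return uint32_t(page_type & ~PAGE_COMPRESSED_MARKER) * 256;
}
```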
      
      FSP_SPACE_FLAGS (already implemented as part of MDEV-12026):
      We will store the innodb_compression_algorithm that may be used
      to compress pages. Previously, the choice of algorithm was written
      to each compressed data page separately, and one would be unable
      to know in advance which compression algorithm(s) are used.
      
      fil_space_t::full_crc32_page_compressed_len(): Determine if the
      page_compressed algorithm of the tablespace needs to know the
      exact length of the compressed data. If yes, we will reserve and
      write an extra byte for this right before the checksum.
      
      buf_page_is_compressed(): Determine if a page uses page_compressed
      (in any innodb_checksum_algorithm).
      
      fil_page_decompress(): Also pass fil_space_t::flags so that the
      format can be determined.
      
      buf_page_is_zeroes(): Check if a page is full of zero bytes.
      
      buf_page_full_crc32_is_corrupted(): Renamed from
      buf_encrypted_full_crc32_page_is_corrupted(). For full_crc32,
      we always simply validate the checksum against the page contents,
      while the physical page size is explicitly specified by an
      unencrypted part of the page header.
      
      buf_page_full_crc32_size(): Determine the size of a full_crc32 page.
      
      buf_dblwr_check_page_lsn(): Make this a debug-only function, because
      it involves potentially costly lookups of fil_space_t.
      
      create_table_info_t::check_table_options(),
      ha_innobase::check_if_supported_inplace_alter(): Do allow the creation
      of SPATIAL INDEX with full_crc32 also when page_compressed is used.
      
      commit_cache_norebuild(): Preserve the compression algorithm when
      updating the page_compression_level.
      
      dict_tf_to_fsp_flags(): Set the flags for page compression algorithm.
      FIXME: Maybe there should be a table option page_compression_algorithm
      and a session variable to back it?
  5. 19 Feb, 2019 1 commit
    • MDEV-12026: Implement innodb_checksum_algorithm=full_crc32 · c0f47a4a
      Thirunarayanan Balathandayuthapani authored
      MariaDB data-at-rest encryption (innodb_encrypt_tables)
      had repurposed the same unused data field that was repurposed
      in MySQL 5.7 (and MariaDB 10.2) for the Split Sequence Number (SSN)
      field of SPATIAL INDEX. Because of this, MariaDB was unable to
      support encryption on SPATIAL INDEX pages.
      
      Furthermore, InnoDB page checksums skipped some bytes, and there
      are multiple variations and checksum algorithms. By default,
      InnoDB accepts all variations of all algorithms that ever existed.
      This unnecessarily weakens the page checksums.
      
      We hereby introduce two more innodb_checksum_algorithm variants
      (full_crc32, strict_full_crc32) that are special in a way:
      When either setting is active, newly created data files will
      carry a flag (fil_space_t::full_crc32()) that indicates that
      all pages of the file will use a full CRC-32C checksum over the
      entire page contents (excluding the bytes where the checksum
      is stored, at the very end of the page). Such files will always
      use that checksum, no matter what the parameter
      innodb_checksum_algorithm is assigned to.
      
      For old files, the old checksum algorithms will continue to be
      used. The value strict_full_crc32 will be equivalent to strict_crc32
      and the value full_crc32 will be equivalent to crc32.
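      The full_crc32 rule (a CRC-32C over the whole page except the
      final 4 bytes, where the checksum is stored) can be illustrated
      with a minimal sketch. The bitwise CRC-32C below is a slow but
      self-contained stand-in for the hardware-accelerated
      implementation InnoDB actually uses, and the helper names are
      hypothetical.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Bitwise CRC-32C (Castagnoli, reflected polynomial 0x82F63B78)
inline uint32_t crc32c(const uint8_t* data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1)));
    }
    return ~crc;
}

// Checksum all bytes of the page except the final 4, then store the
// CRC there (big-endian, as InnoDB stores integers on pages).
inline void page_write_checksum(uint8_t* page, size_t page_size)
{
    uint32_t crc = crc32c(page, page_size - 4);
    page[page_size - 4] = uint8_t(crc >> 24);
    page[page_size - 3] = uint8_t(crc >> 16);
    page[page_size - 2] = uint8_t(crc >> 8);
    page[page_size - 1] = uint8_t(crc);
}

inline bool page_checksum_ok(const uint8_t* page, size_t page_size)
{
    uint32_t stored = uint32_t(page[page_size - 4]) << 24
        | uint32_t(page[page_size - 3]) << 16
        | uint32_t(page[page_size - 2]) << 8
        | uint32_t(page[page_size - 1]);
    return stored == crc32c(page, page_size - 4);
}
```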
      
      ROW_FORMAT=COMPRESSED tables will only use the old format.
      These tables do not support new features, such as larger
      innodb_page_size or instant ADD/DROP COLUMN. They may be
      deprecated in the future. We do not want an unnecessary
      file format change for them.
      
      The new full_crc32() format also cleans up the MariaDB tablespace
      flags. We will reserve flags to store the page_compressed
      compression algorithm, and to store the compressed payload length,
      so that checksum can be computed over the compressed (and
      possibly encrypted) stream and can be validated without
      decrypting or decompressing the page.
      
      In the full_crc32 format, there are no longer separate before-encryption
      and after-encryption checksums for pages. The single checksum is
      computed on the page contents that are written to the file.
      
      We do not make the new algorithm the default for two reasons.
      First, MariaDB 10.4.2 was a beta release, and the default values
      of parameters should not change after beta. Second, we did not
      yet implement the full_crc32 format for page_compressed pages.
      This will be fixed in MDEV-18644.
      
      This is joint work with Marko Mäkelä.
  6. 14 Jun, 2018 1 commit
    • MDEV-13103 Deal with page_compressed page corruption · f5eb3712
      Marko Mäkelä authored
      fil_page_decompress(): Replaces fil_decompress_page().
      Allow the caller to detect errors. Remove duplicated
      code. Use the "safe" instead of the "fast" variants of
      the decompression routines.
      
      fil_page_compress(): Replaces fil_compress_page().
      The length of the input buffer was always srv_page_size (innodb_page_size).
      Remove the printouts, and remove the fil_space_t* parameter.
      
      buf_tmp_buffer_t::reserved: Make private; the accessors acquire()
      and release() will use atomic memory access.
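      The acquire()/release() idea can be sketched with std::atomic; this
      is a minimal illustration (a slot claimed by compare-and-swap
      instead of under a mutex), not the actual buf_tmp_buffer_t
      definition, which has further members.

```cpp
#include <atomic>
#include <cassert>

// Sketch: a temporary-buffer slot claimed with an atomic CAS.
struct buf_tmp_buffer_t {
    // true if the caller successfully reserved the slot
    bool acquire() {
        bool expected = false;
        return reserved.compare_exchange_strong(
            expected, true, std::memory_order_acquire);
    }
    // give the slot back
    void release() {
        reserved.store(false, std::memory_order_release);
    }
private:
    std::atomic<bool> reserved{false};
};
```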
      
      buf_pool_reserve_tmp_slot(): Make static. Remove the second parameter.
      Do not acquire any mutex. Remove the allocation of the buffers.
      
      buf_tmp_reserve_crypt_buf(), buf_tmp_reserve_compression_buf():
      Refactored away from buf_pool_reserve_tmp_slot().
      
      buf_page_decrypt_after_read(): Make static, and simplify the logic.
      Use the encryption buffer also for decompressing.
      
      buf_page_io_complete(), buf_dblwr_process(): Check more failures.
      
      fil_space_encrypt(): Simplify the debug checks.
      
      fil_space_t::printed_compression_failure: Remove.
      
      fil_get_compression_alg_name(): Remove.
      
      fil_iterate(): Allocate a buffer for compression and decompression
      only once, instead of allocating and freeing it for every page
      that uses compression, during IMPORT TABLESPACE.
      
      fil_node_get_space_id(), fil_page_is_index_page(),
      fil_page_is_lzo_compressed(): Remove (unused code).
  7. 13 Jun, 2018 1 commit
  8. 06 Oct, 2017 1 commit
    • MDEV-11369 Instant ADD COLUMN for InnoDB · a4948daf
      Marko Mäkelä authored
      For InnoDB tables, adding, dropping and reordering columns has
      required a rebuild of the table and all its indexes. Since MySQL 5.6
      (and MariaDB 10.0) this has been supported online (LOCK=NONE), allowing
      concurrent modification of the tables.
      
      This work revises the InnoDB ROW_FORMAT=REDUNDANT, ROW_FORMAT=COMPACT
      and ROW_FORMAT=DYNAMIC so that columns can be appended instantaneously,
      with only minor changes performed to the table structure. The counter
      innodb_instant_alter_column in INFORMATION_SCHEMA.GLOBAL_STATUS
      is incremented whenever a table rebuild operation is converted into
      an instant ADD COLUMN operation.
      
      ROW_FORMAT=COMPRESSED tables will not support instant ADD COLUMN.
      
      Some usability limitations will be addressed in subsequent work:
      
      MDEV-13134 Introduce ALTER TABLE attributes ALGORITHM=NOCOPY
      and ALGORITHM=INSTANT
      MDEV-14016 Allow instant ADD COLUMN, ADD INDEX, LOCK=NONE
      
      The format of the clustered index (PRIMARY KEY) is changed as follows:
      
      (1) The FIL_PAGE_TYPE of the root page will be FIL_PAGE_TYPE_INSTANT,
      and a new field PAGE_INSTANT will contain the original number of fields
      in the clustered index ('core' fields).
      If instant ADD COLUMN has not been used or the table becomes empty,
      or the very first instant ADD COLUMN operation is rolled back,
      the fields PAGE_INSTANT and FIL_PAGE_TYPE will be reset
      to 0 and FIL_PAGE_INDEX.
      
      (2) A special 'default row' record is inserted into the leftmost leaf,
      between the page infimum and the first user record. This record is
      distinguished by the REC_INFO_MIN_REC_FLAG, and it is otherwise in the
      same format as records that contain values for the instantly added
      columns. This 'default row' always has the same number of fields as
      the clustered index according to the table definition. The values of
      'core' fields are to be ignored. For other fields, the 'default row'
      will contain the default values as they were during the ALTER TABLE
      statement. (If the column default values are changed later, those
      values will only be stored in the .frm file. The 'default row' will
      contain the original evaluated values, which must be the same for
      every row.) The 'default row' must be completely hidden from
      higher-level access routines. Assertions have been added to ensure
      that no 'default row' is ever present in the adaptive hash index
      or in locked records. The 'default row' is never delete-marked.
      
      (3) In clustered index leaf page records, the number of fields must
      reside between the number of 'core' fields (dict_index_t::n_core_fields
      introduced in this work) and dict_index_t::n_fields. If the number
      of fields is less than dict_index_t::n_fields, the missing fields
      are replaced with the column value of the 'default row'.
      Note: The number of fields in the record may shrink if some of the
      last instantly added columns are updated to the value that is
      in the 'default row'. The function btr_cur_trim() implements this
      'compression' on update and rollback; dtuple::trim() implements it
      on insert.
      
      (4) In ROW_FORMAT=COMPACT and ROW_FORMAT=DYNAMIC records, the new
      status value REC_STATUS_COLUMNS_ADDED will indicate the presence of
      a new record header that will encode n_fields-n_core_fields-1 in
      1 or 2 bytes. (In ROW_FORMAT=REDUNDANT records, the record header
      always explicitly encodes the number of fields.)
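      The 1-or-2-byte encoding of n_fields-n_core_fields-1 can be
      illustrated as below; the "high bit of the first byte marks a
      second byte" scheme is an assumption for illustration, not the
      exact on-disk bit layout of the REC_STATUS_COLUMNS_ADDED header.

```cpp
#include <cassert>
#include <cstdint>

// Encode a small count in 1 byte (values < 0x80) or 2 bytes
// (high bit of the first byte set, value up to 2^15 - 1).
inline int encode_n_add(uint32_t n_add, uint8_t* out)
{
    if (n_add < 0x80) {
        out[0] = uint8_t(n_add);
        return 1;
    }
    out[0] = uint8_t(0x80 | (n_add >> 8));  // marker bit + high byte
    out[1] = uint8_t(n_add);                // low byte
    return 2;
}

inline uint32_t decode_n_add(const uint8_t* in, int* len)
{
    if (!(in[0] & 0x80)) { *len = 1; return in[0]; }
    *len = 2;
    return (uint32_t(in[0] & 0x7F) << 8) | in[1];
}
```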
      
      We introduce the undo log record type TRX_UNDO_INSERT_DEFAULT for
      covering the insert of the 'default row' record when instant ADD COLUMN
      is used for the first time. Subsequent instant ADD COLUMN can use
      TRX_UNDO_UPD_EXIST_REC.
      
      This is joint work with Vin Chen (陈福荣) from Tencent. The design
      that was discussed in April 2017 would not have allowed import or
      export of data files, because instead of the 'default row' it would
      have introduced a data dictionary table. The test
      rpl.rpl_alter_instant is exactly as contributed in pull request #408.
      The test innodb.instant_alter is based on a contributed test.
      
      The redo log record format changes for ROW_FORMAT=DYNAMIC and
      ROW_FORMAT=COMPACT are as contributed. (With this change present,
      crash recovery from MariaDB 10.3.1 will fail in spectacular ways!)
      Also the semantics of higher-level redo log records that modify the
      PAGE_INSTANT field is changed. The redo log format version identifier
      was already changed to LOG_HEADER_FORMAT_CURRENT=103 in MariaDB 10.3.1.
      
      Everything else has been rewritten by me. Thanks to Elena Stepanova,
      the code has been tested extensively.
      
      When rolling back an instant ADD COLUMN operation, we must empty the
      PAGE_FREE list after deleting or shortening the 'default row' record,
      by calling either btr_page_empty() or btr_page_reorganize(). We must
      know the size of each entry in the PAGE_FREE list. If rollback left a
      freed copy of the 'default row' in the PAGE_FREE list, we would be
      unable to determine its size (if it is in ROW_FORMAT=COMPACT or
      ROW_FORMAT=DYNAMIC) because it would contain more fields than the
      rolled-back definition of the clustered index.
      
      UNIV_SQL_DEFAULT: A new special constant that designates an instantly
      added column that is not present in the clustered index record.
      
      len_is_stored(): Check if a length is an actual length. There are
      two magic length values: UNIV_SQL_DEFAULT, UNIV_SQL_NULL.
      
      dict_col_t::def_val: The 'default row' value of the column.  If the
      column is not added instantly, def_val.len will be UNIV_SQL_DEFAULT.
      
      dict_col_t: Add the accessors is_virtual(), is_nullable(), is_instant(),
      instant_value().
      
      dict_col_t::remove_instant(): Remove the 'instant ADD' status of
      a column.
      
      dict_col_t::name(const dict_table_t& table): Replaces
      dict_table_get_col_name().
      
      dict_index_t::n_core_fields: The original number of fields.
      For secondary indexes and if instant ADD COLUMN has not been used,
      this will be equal to dict_index_t::n_fields.
      
      dict_index_t::n_core_null_bytes: Number of bytes needed to
      represent the null flags; usually equal to UT_BITS_IN_BYTES(n_nullable).
      
      dict_index_t::NO_CORE_NULL_BYTES: Magic value signalling that
      n_core_null_bytes was not initialized yet from the clustered index
      root page.
      
      dict_index_t: Add the accessors is_instant(), is_clust(),
      get_n_nullable(), instant_field_value().
      
      dict_index_t::instant_add_field(): Adjust clustered index metadata
      for instant ADD COLUMN.
      
      dict_index_t::remove_instant(): Remove the 'instant ADD' status
      of a clustered index when the table becomes empty, or the very first
      instant ADD COLUMN operation is rolled back.
      
      dict_table_t: Add the accessors is_instant(), is_temporary(),
      supports_instant().
      
      dict_table_t::instant_add_column(): Adjust metadata for
      instant ADD COLUMN.
      
      dict_table_t::rollback_instant(): Adjust metadata on the rollback
      of instant ADD COLUMN.
      
      prepare_inplace_alter_table_dict(): First create the ctx->new_table,
      and only then decide if the table really needs to be rebuilt.
      We must split the creation of table or index metadata from the
      creation of the dictionary table records and the creation of
      the data. In this way, we can transform a table-rebuilding operation
      into an instant ADD COLUMN operation. Dictionary objects will only
      be added to cache when table rebuilding or index creation is needed.
      The ctx->instant_table will never be added to cache.
      
      dict_table_t::add_to_cache(): Modified and renamed from
      dict_table_add_to_cache(). Do not modify the table metadata.
      Let the callers invoke dict_table_add_system_columns() and if needed,
      set can_be_evicted.
      
      dict_create_sys_tables_tuple(), dict_create_table_step(): Omit the
      system columns (which will now exist in the dict_table_t object
      already at this point).
      
      dict_create_table_step(): Expect the callers to invoke
      dict_table_add_system_columns().
      
      pars_create_table(): Before creating the table creation execution
      graph, invoke dict_table_add_system_columns().
      
      row_create_table_for_mysql(): Expect all callers to invoke
      dict_table_add_system_columns().
      
      create_index_dict(): Replaces row_merge_create_index_graph().
      
      innodb_update_n_cols(): Renamed from innobase_update_n_virtual().
      Call my_error() if an error occurs.
      
      btr_cur_instant_init(), btr_cur_instant_init_low(),
      btr_cur_instant_root_init():
      Load additional metadata from the clustered index and set
      dict_index_t::n_core_null_bytes. This is invoked
      when table metadata is first loaded into the data dictionary.
      
      dict_boot(): Initialize n_core_null_bytes for the four hard-coded
      dictionary tables.
      
      dict_create_index_step(): Initialize n_core_null_bytes. This is
      executed as part of CREATE TABLE.
      
      dict_index_build_internal_clust(): Initialize n_core_null_bytes to
      NO_CORE_NULL_BYTES if table->supports_instant().
      
      row_create_index_for_mysql(): Initialize n_core_null_bytes for
      CREATE TEMPORARY TABLE.
      
      commit_cache_norebuild(): Call the code to rename or enlarge columns
      in the cache only if instant ADD COLUMN is not being used.
      (Instant ADD COLUMN would copy all column metadata from
      instant_table to old_table, including the names and lengths.)
      
      PAGE_INSTANT: A new 13-bit field for storing dict_index_t::n_core_fields.
      This is repurposing the 16-bit field PAGE_DIRECTION, of which only the
      least significant 3 bits were used. The original byte containing
      PAGE_DIRECTION will be accessible via the new constant PAGE_DIRECTION_B.
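      The repurposing can be sketched as simple bit-packing of the
      16-bit word: the low 3 bits keep PAGE_DIRECTION and the upper 13
      bits carry PAGE_INSTANT. The helper names below are hypothetical.

```cpp
#include <cassert>
#include <cstdint>

// Pack a 13-bit n_core_fields value above the 3 direction bits.
inline uint16_t set_instant(uint16_t word, uint16_t n_core_fields)
{
    assert(n_core_fields < (1U << 13));
    return uint16_t((n_core_fields << 3) | (word & 7));
}
// PAGE_INSTANT: the upper 13 bits
inline uint16_t get_instant(uint16_t word)   { return uint16_t(word >> 3); }
// PAGE_DIRECTION: the low 3 bits
inline uint16_t get_direction(uint16_t word) { return uint16_t(word & 7); }
```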
      
      page_get_instant(), page_set_instant(): Accessors for the PAGE_INSTANT.
      
      page_ptr_get_direction(), page_get_direction(),
      page_ptr_set_direction(): Accessors for PAGE_DIRECTION.
      
      page_direction_reset(): Reset PAGE_DIRECTION, PAGE_N_DIRECTION.
      
      page_direction_increment(): Increment PAGE_N_DIRECTION
      and set PAGE_DIRECTION.
      
      rec_get_offsets(): Use the 'leaf' parameter for non-debug purposes,
      and assume that heap_no is always set.
      Initialize all dict_index_t::n_fields for ROW_FORMAT=REDUNDANT records,
      even if the record contains fewer fields.
      
      rec_offs_make_valid(): Add the parameter 'leaf'.
      
      rec_copy_prefix_to_dtuple(): Assert that the tuple is only built
      on the core fields. Instant ADD COLUMN only applies to the
      clustered index, and we should never build a search key that has
      more than the PRIMARY KEY and possibly DB_TRX_ID,DB_ROLL_PTR.
      All these columns are always present.
      
      dict_index_build_data_tuple(): Remove assertions that would be
      duplicated in rec_copy_prefix_to_dtuple().
      
      rec_init_offsets(): Support ROW_FORMAT=REDUNDANT records whose
      number of fields is between n_core_fields and n_fields.
      
      cmp_rec_rec_with_match(): Implement the comparison between two
      MIN_REC_FLAG records.
      
      trx_t::in_rollback: Make the field available in non-debug builds.
      
      trx_start_for_ddl_low(): Remove dangerous error-tolerance.
      A dictionary transaction must be flagged as such before it has generated
      any undo log records. This is because trx_undo_assign_undo() will mark
      the transaction as a dictionary transaction in the undo log header
      right before the very first undo log record is being written.
      
      btr_index_rec_validate(): Account for instant ADD COLUMN.
      
      row_undo_ins_remove_clust_rec(): On the rollback of an insert into
      SYS_COLUMNS, revert instant ADD COLUMN in the cache by removing the
      last column from the table and the clustered index.
      
      row_search_on_row_ref(), row_undo_mod_parse_undo_rec(), row_undo_mod(),
      trx_undo_update_rec_get_update(): Handle the 'default row'
      as a special case.
      
      dtuple_t::trim(index): Omit a redundant suffix of an index tuple right
      before insert or update. After instant ADD COLUMN, if the last fields
      of a clustered index tuple match the 'default row', there is no
      need to store them. While trimming the entry, we must hold a page latch,
      so that the table cannot be emptied and the 'default row' be deleted.
      
      btr_cur_optimistic_update(), btr_cur_pessimistic_update(),
      row_upd_clust_rec_by_insert(), row_ins_clust_index_entry_low():
      Invoke dtuple_t::trim() if needed.
      
      row_ins_clust_index_entry(): Restore dtuple_t::n_fields after calling
      row_ins_clust_index_entry_low().
      
      rec_get_converted_size(), rec_get_converted_size_comp(): Allow the number
      of fields to be between n_core_fields and n_fields. Do not support
      infimum,supremum. They are never supposed to be stored in dtuple_t,
      because page creation nowadays uses a lower-level method for initializing
      them.
      
      rec_convert_dtuple_to_rec_comp(): Assign the status bits based on the
      number of fields.
      
      btr_cur_trim(): In an update, trim the index entry as needed. For the
      'default row', handle rollback specially. For user records, omit
      fields that match the 'default row'.
      
      btr_cur_optimistic_delete_func(), btr_cur_pessimistic_delete():
      Skip locking and adaptive hash index for the 'default row'.
      
      row_log_table_apply_convert_mrec(): Replace 'default row' values if needed.
      In the temporary file that is applied by row_log_table_apply(),
      we must identify whether the records contain the extra header for
      instantly added columns. For now, we will allocate an additional byte
      for this for ROW_T_INSERT and ROW_T_UPDATE records when the source table
      has been subject to instant ADD COLUMN. The ROW_T_DELETE records are
      fine, as they will be converted and will only contain 'core' columns
      (PRIMARY KEY and some system columns) that are converted from dtuple_t.
      
      rec_get_converted_size_temp(), rec_init_offsets_temp(),
      rec_convert_dtuple_to_temp(): Add the parameter 'status'.
      
      REC_INFO_DEFAULT_ROW = REC_INFO_MIN_REC_FLAG | REC_STATUS_COLUMNS_ADDED:
      An info_bits constant for distinguishing the 'default row' record.
      
      rec_comp_status_t: An enum of the status bit values.
      
      rec_leaf_format: An enum that replaces the bool parameter of
      rec_init_offsets_comp_ordinary().
  9. 21 Apr, 2017 4 commits
    • Fix a compilation error · 200ef513
      Marko Mäkelä authored
    • MDEV-12545 Reduce the amount of fil_space_t lookups · 0871a00a
      Marko Mäkelä authored
      buf_flush_write_block_low(): Acquire the tablespace reference once,
      and pass it to lower-level functions. This is only a start; further
      calls may be removed.
      
      fil_decompress_page(): Remove unsafe use of fil_space_get_by_id().
    • MDEV-12488 Remove type mismatch in InnoDB printf-like calls · 5684aa22
      Marko Mäkelä authored
      Alias the InnoDB ulint and lint data types to size_t and ssize_t,
      which are the standard names for the machine-word-width data types.
      
      Correspondingly, define ULINTPF as "%zu" and introduce ULINTPFx as "%zx".
      In this way, better compiler warnings for type mismatch are possible.
      
      Furthermore, use PRIu64 for that 64-bit format, and define
      the feature macro __STDC_FORMAT_MACROS to enable it on Red Hat systems.
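      A minimal illustration of the aliases and format macros described
      above (the function name format_status is hypothetical):

```cpp
#define __STDC_FORMAT_MACROS   // enable PRIu64 on older glibc headers
#include <cinttypes>
#include <cstdio>
#include <cstring>

typedef size_t ulint;          // the aliasing described above
#define ULINTPF  "%zu"         // decimal printf format for ulint
#define ULINTPFx "%zx"         // hexadecimal printf format for ulint

// With the standard length modifiers, -Wformat can catch mismatches.
inline int format_status(char* buf, size_t n, ulint pages, uint64_t lsn)
{
    return std::snprintf(buf, n, "pages=" ULINTPF " lsn=%" PRIu64,
                         pages, lsn);
}
```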
      
      Fix some errors in error messages, and replace some error messages
      with assertions.
      Most notably, an IMPORT TABLESPACE error message in InnoDB was
      displaying the number of columns instead of the mismatching flags.
    • Follow-up to MDEV-12488: Fix some type mismatch in header files · d23eb8e6
      Marko Mäkelä authored
      This reduces the number of compilation warnings on Windows.
  10. 10 Feb, 2017 1 commit
  11. 06 Feb, 2017 1 commit
    • MDEV-11759: Encryption code in MariaDB 10.1/10.2 causes compatibility problems · ddf2fac7
      Jan Lindström authored
      
      Pages that are encrypted contain a post-encryption checksum at a
      different location than the normal checksum fields. Therefore,
      before decryption we should check this checksum, to avoid
      decrypting corrupted pages. After decryption we can use the
      traditional checksum check to detect whether the page is corrupted
      or the decryption was done using an incorrect key.
      
      Pages that are page compressed do not contain any checksum;
      here we need to first decrypt, then decompress, and finally
      use the traditional checksum check to detect page corruption
      or an incorrect key used in decryption.
      
      buf0buf.cc: buf_page_is_corrupted() modified so that
      compressed pages are skipped.
      
      buf0buf.h, buf_block_init(), buf_page_init_low():
      removed the unnecessary page_encrypted, page_compressed,
      stored_checksum, calculated_checksum fields from
      buf_page_t.
      
      buf_page_get_gen(): use new buf_page_check_corrupt() function
      to detect corrupted pages.
      
      buf_page_check_corrupt(): If the page was not yet decrypted,
      check whether the post-encryption checksum still matches.
      If the page is no longer encrypted, use the traditional
      buf_page_is_corrupted() checksum method.
      
      If a page is detected as corrupted and it is not encrypted,
      we print a corruption message to the error log.
      If the page is still encrypted, or it was encrypted and is now
      corrupted, we print a message that the page is
      encrypted to the error log.
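      The decision flow of buf_page_check_corrupt() described above can
      be sketched as follows; the enum and the boolean parameters stand
      in for the real page buffers and the checksum routines
      (fil_space_verify_crypt_checksum(), buf_page_is_corrupted()).

```cpp
#include <cassert>

enum page_state { DECRYPTED_OK, STILL_ENCRYPTED };

// Simplified sketch: which checksum applies depends on whether the
// page has been decrypted yet.
bool page_is_corrupt(page_state state,
                     bool crypt_checksum_ok,   // post-encryption checksum
                     bool plain_checksum_ok)   // traditional checksum
{
    if (state == STILL_ENCRYPTED)
        // Not decrypted yet: only the post-encryption checksum applies.
        return !crypt_checksum_ok;
    // Decrypted (or never encrypted): the traditional check applies,
    // which also catches decryption with an incorrect key.
    return !plain_checksum_ok;
}
```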
      
      buf_page_io_complete(): use new buf_page_check_corrupt()
      function to detect corrupted pages.
      
      buf_page_decrypt_after_read(): Verify the post-encryption
      checksum before trying to decrypt.
      
      fil0crypt.cc: fil_encrypt_buf() verifies the post-encryption
      checksum, and fil_space_decrypt() returns true
      if we really decrypted the page.
      
      fil_space_verify_crypt_checksum(): rewritten to use
      the method used when calculating the post-encryption
      checksum. We also check whether the post-encryption checksum
      matches in cases where the traditional checksum check does not
      match.
      
      fil0fil.ic: Add the missing page types (encrypted and page
      compressed) to fil_get_page_type_name().
      
      Note that this change does not yet fix the innochecksum tool;
      that will be done in a separate MDEV.
      
      Fix test failures caused by buf page corruption injection.
  12. 18 Jan, 2017 1 commit
    • Remove MYSQL_COMPRESSION. · 1eabad5d
      Marko Mäkelä authored
      The MariaDB 10.1 page_compression is incompatible with the Oracle
      implementation that was later introduced in MySQL 5.7.
      
      Remove the Oracle implementation. Also remove the remaining traces of
      MYSQL_ENCRYPTION.
      
      This will also remove traces of PUNCH_HOLE until it is implemented
      better. The only effective call to os_file_punch_hole() was in
      fil_node_create_low() to test if the operation is supported for the file.
      
      In other words, it looks like page_compression is not working in
      MariaDB 10.2, because no code equivalent to the 10.1 os_file_trim()
      is enabled.
  13. 29 Nov, 2016 1 commit
  14. 22 Nov, 2016 1 commit
  15. 30 Sep, 2016 1 commit
  16. 02 Sep, 2016 1 commit
    • Merge InnoDB 5.7 from mysql-5.7.9. · 2e814d47
      Jan Lindström authored
      Also contains
      
      MDEV-10547: Test multi_update_innodb fails with InnoDB 5.7
      
      	The failure happened because 5.7 changed the signature of
      	the bool handler::primary_key_is_clustered() const
      	virtual function ("const" was added). InnoDB was using the old
      	signature, which caused the function not to be used.
      
      MDEV-10550: Parallel replication lock waits/deadlock handling does not work with InnoDB 5.7
      
      	Fixed mutexing problem on lock_trx_handle_wait. Note that
      	rpl_parallel and rpl_optimistic_parallel tests still
      	fail.
      
      MDEV-10156 : Group commit tests fail on 10.2 InnoDB (branch bb-10.2-jan)
        Reason: incorrect merge
      
      MDEV-10550: Parallel replication can't sync with master in InnoDB 5.7 (branch bb-10.2-jan)
        Reason: incorrect merge
  17. 27 Jan, 2016 1 commit
  18. 26 Jan, 2016 1 commit
  19. 17 Dec, 2015 2 commits
  20. 04 Jun, 2015 1 commit
    • MDEV-8250: InnoDB: Page compressed tables are not compressed and compressed+encrypted tables cause crash · f7002c05
      Jan Lindström authored
      
      Analysis: The problem is that both encrypted tables and compressed
      tables use the FIL header offset FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION
      to store required metadata. Furthermore, for tables that are only
      compressed, the code currently skips compression.
      
      Fixes:
      - Only encrypted pages store key_version at FIL header offset FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION;
        no need to fix.
      - Only compressed pages store the compression algorithm at FIL header offset FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION;
        no need to fix, as they have a different page type, FIL_PAGE_PAGE_COMPRESSED.
      - Compressed and encrypted pages now use a new page type, FIL_PAGE_PAGE_COMPRESSED_ENCRYPTED;
        key_version is stored at FIL header offset FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION, and the compression
        method is stored after the FIL header in a similar way as the compressed size, so that first
        FIL_PAGE_COMPRESSED_SIZE is stored, followed by FIL_PAGE_COMPRESSION_METHOD.
      - Fix the buf_page_encrypt_before_write function to really compress pages if compression is enabled.
      - Fix the buf_page_decrypt_after_read function to really decompress pages if compression is used.
      - Small style fixes.
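      The resulting metadata layout can be sketched as below. The FIL
      header offsets match the standard InnoDB header (key_version at
      byte 26, header end at byte 38); the 2-byte size and 1-byte
      method widths after the header are illustrative assumptions.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

static const size_t KEY_VERSION_OFFSET = 26;  // FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION
static const size_t FIL_PAGE_DATA = 38;       // first byte after the FIL header

inline void put4(uint8_t* p, uint32_t v)
{
    p[0] = uint8_t(v >> 24); p[1] = uint8_t(v >> 16);
    p[2] = uint8_t(v >> 8);  p[3] = uint8_t(v);
}
inline uint32_t get4(const uint8_t* p)
{
    return uint32_t(p[0]) << 24 | uint32_t(p[1]) << 16
         | uint32_t(p[2]) << 8  | uint32_t(p[3]);
}

// Store key_version in the FIL header, then the compressed size
// followed by the compression method right after the header.
inline void write_meta(uint8_t* page, uint32_t key_version,
                       uint16_t compressed_size, uint8_t method)
{
    put4(page + KEY_VERSION_OFFSET, key_version);
    page[FIL_PAGE_DATA]     = uint8_t(compressed_size >> 8);
    page[FIL_PAGE_DATA + 1] = uint8_t(compressed_size);
    page[FIL_PAGE_DATA + 2] = method;
}
```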
  21. 02 Jun, 2015 1 commit
  22. 14 May, 2015 1 commit
  23. 07 Apr, 2015 1 commit
    • InnoDB/XtraDB Encryption cleanup. · b4a4d823
      Jan Lindström authored
      Step 1:
      -- Remove page encryption from dictionary (per table
      encryption will be handled by storing crypt_data to page 0)
      -- Remove encryption/compression from os0file and all functions
      before that (compression will be added to buf0buf.cc)
      -- Use same CRYPT_SCHEME_1 for all encryption methods
      -- Do some code cleanups to confort InnoDB coding style