1. 21 Sep, 2018 2 commits
    • Merge 10.3 into 10.4 · 81ba90b5
      Marko Mäkelä authored
    • Clean up some SPATIAL INDEX code · bc7d40d0
      Marko Mäkelä authored
      Clarify some comments about accessing an externally stored column
      on which a spatial index has been defined. Add a TODO comment that
      we should actually write the minimum bounding rectangle (MBR) to
      the undo log record, so that we can avoid fetching BLOBs and recomputing
      MBR.
      
      row_build_spatial_index_key(): Split from row_build_index_entry_low().
  2. 19 Sep, 2018 5 commits
    • Merge 10.3 into 10.4 · 45eaed0c
      Marko Mäkelä authored
    • Follow-up to MDEV-16328: ALTER TABLE…page_compression_level should not rebuild table · 90b292ce
      Marko Mäkelä authored
      Allow combination of non-instant, non-rebuilding operations with
      changes of table options that do not require a rebuild.
      
      For example, DROP INDEX or ADD INDEX can be performed with
      ALGORITHM=NOCOPY together with changing such table options.
      Changing the table options alone would be allowed with ALGORITHM=INSTANT.
      
      INNOBASE_ALTER_NOCREATE: A new set of flags, for operations that
      are refused for ALGORITHM=INSTANT and do not involve creating
      index trees.
      
      Move ALTER_RENAME_INDEX to the proper place (INNOBASE_ALTER_INSTANT).
      
      innobase_need_rebuild(): Do not require a rebuild if
      INNOBASE_ALTER_NOREBUILD operations are combined with ALTER_OPTIONS.
      
      ha_innobase::prepare_inplace_alter_table(),
      ha_innobase::inplace_alter_table(): Use the fast path if
      ALTER_OPTIONS is combined with INNOBASE_ALTER_NOCREATE.
      In this case, the actual changes would be deferred to
      ha_innobase::commit_inplace_alter_table().
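      The allowed combinations can be sketched in SQL (the table and
      index names here are hypothetical, not from the patch):

      ```sql
      -- DROP INDEX is a non-rebuilding NOCOPY operation; combining it with
      -- a table-option change that needs no rebuild keeps ALGORITHM=NOCOPY.
      ALTER TABLE t1 DROP INDEX idx_a, PAGE_COMPRESSION_LEVEL=6,
        ALGORITHM=NOCOPY;

      -- Changing only the table option can use ALGORITHM=INSTANT.
      ALTER TABLE t1 PAGE_COMPRESSION_LEVEL=9, ALGORITHM=INSTANT;
      ```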
    • Terminology: 'metadata' not 'default rec' · 28ae7965
      Marko Mäkelä authored
      This is a follow-up to commit 755187c8.
      
      TRX_UNDO_INSERT_METADATA: Renamed from TRX_UNDO_INSERT_DEFAULT
      
      trx_undo_metadata: Renamed from trx_undo_default_rec
    • Merge 10.3 into 10.4 · 4a246fec
      Marko Mäkelä authored
    • Terminology: 'metadata record' instead of 'default row' · 755187c8
      Marko Mäkelä authored
      For instant ALTER TABLE, we store a hidden metadata record at the
      start of the clustered index, to indicate how the format of the
      records differs from the latest table definition.
      
      The term 'default row' is too specific, because it applies to
      instant ADD COLUMN only, and we will be supporting more classes
      of instant ALTER TABLE later on. For instant ADD COLUMN, we
      store the initial default values in the metadata record.
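      As a sketch (with a hypothetical table and column), an instant
      ADD COLUMN whose initial default value ends up in the metadata
      record could look like this:

      ```sql
      -- Existing rows are not rewritten; the default value 42 is stored
      -- once, in the hidden metadata record at the start of the
      -- clustered index.
      ALTER TABLE t1 ADD COLUMN c2 INT NOT NULL DEFAULT 42,
        ALGORITHM=INSTANT;
      ```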
  3. 18 Sep, 2018 5 commits
    • Simplify innobase_add_instant_try() · 043639f9
      Marko Mäkelä authored
      Remove some code duplication and dead code. If no 'default row'
      record exists, the root page must be in the conventional format.
      Should the page type already be FIL_PAGE_TYPE_INSTANT, we would
      necessarily hit a debug assertion failure in page_set_instant().
    • MDEV-17144: Sample of spider_direct_sql cause crash · c9a5804e
      Jacob Mathew authored
      The crash occurs when the Spider node server attempts to create an error
      message stating that the temporary table is not found.  The function to
      create the error message is called with incorrect parameters.
      
      I fixed the crash by correcting the incorrect parameter values.
      
      Author:
        Jacob Mathew.
      
      Reviewer:
        Kentoku Shiba.
      
      Cherry-Picked:
        Commit e3396161 on branch bb-10.3-mdev-17144
    • MDEV-17144: Sample of spider_direct_sql cause crash · 159b41b8
      Jacob Mathew authored
      The crash occurs when the Spider node server attempts to create an error
      message stating that the temporary table is not found.  The function to
      create the error message is called with incorrect parameters.
      
      I fixed the crash by correcting the incorrect parameter values.
      
      Author:
        Jacob Mathew.
      
      Reviewer:
        Kentoku Shiba.
      
      Merged:
        Commit e3396161 branch bb-10.3-mdev-17144
    • MDEV-17211 Server crash on query · 5ec144cf
      Igor Babaev authored
      The function JOIN_TAB::choose_best_splitting() did not take into
      account that, for some tables whose fields were used in the GROUP BY
      list of the specification of a splittable materialized derived table,
      the array ext_keyuses_for_splitting might contain no elements.
    • MDEV-17144: Sample of spider_direct_sql cause crash · e3396161
      Jacob Mathew authored
      The crash occurs when the Spider node server attempts to create an error
      message stating that the temporary table is not found.  The function to
      create the error message is called with incorrect parameters.
      
      I fixed the crash by correcting the incorrect parameter values.
      
      Author:
        Jacob Mathew.
      
      Reviewer:
        Kentoku Shiba.
  4. 17 Sep, 2018 4 commits
    • Fix the Windows build · 21f310db
      Marko Mäkelä authored
    • Mroonga follow-up fix for MDEV-16328 · 774a4cb5
      Marko Mäkelä authored
      Now that ha_innobase::prepare_inplace_alter_table() is accessing
      ha_alter_info->create_info->option_struct, we must initialize it in
      the Mroonga wrapper for ALTER TABLE based on the parsed table options
      for the wrap_altered_table.
    • MDEV-16328 ALTER TABLE…page_compression_level should not rebuild table · ac24289e
      Marko Mäkelä authored
      The table option page_compression_level is something that only
      affects future writes, not actually the data format. Therefore,
      we can allow instant changes of this option.
      
      Similarly, the table option page_compressed can be set on a
      previously uncompressed table without rebuilding the table,
      because an uncompressed page would be considered valid when
      reading a page_compressed table.
      
      Removing the page_compressed option will continue to require
      the table to be rebuilt.
      
      ha_innobase_inplace_ctx::page_compression_level: The requested
      page_compression_level at the start of ALTER TABLE, or 0 if
      page_compressed=OFF.
      
      alter_options_need_rebuild(): Renamed from
      create_option_need_rebuild(). Allow page_compression_level and
      page_compressed to be changed as above, without rebuilding the table.
      
      ha_innobase::check_if_supported_inplace_alter(): Allow ALGORITHM=INSTANT
      for ALTER_OPTIONS if the table is not to be rebuilt. If rebuild is
      needed, set ha_alter_info->unsupported_reason.
      
      innobase_page_compression_try(): Update SYS_TABLES.TYPE according
      to the table flags, for an instant change of page_compression_level
      or page_compressed.
      
      commit_cache_norebuild(): Adjust dict_table_t::flags, fil_space_t::flags
      and (if needed) FSP_SPACE_FLAGS if page_compression_level was specified.
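      A minimal sketch of the behaviour described above, assuming a
      hypothetical table t1:

      ```sql
      ALTER TABLE t1 PAGE_COMPRESSION_LEVEL=6, ALGORITHM=INSTANT; -- no rebuild
      ALTER TABLE t1 PAGE_COMPRESSED=1, ALGORITHM=INSTANT;        -- no rebuild

      -- Removing page_compressed still requires a rebuild, so INSTANT is
      -- expected to be refused here:
      ALTER TABLE t1 PAGE_COMPRESSED=0, ALGORITHM=INSTANT;
      ```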
    • Alexander Barkov · 9e1a39aa
  5. 16 Sep, 2018 2 commits
  6. 15 Sep, 2018 1 commit
    • MDEV-16917 Index affects query results · c5a9a632
      Igor Babaev authored
      The optimizer erroneously allowed the join cache to be used when
      joining a splittable materialized table while the splitting
      optimization was also applied. As a consequence, in some rare cases
      the server returned wrong result sets for queries with materialized
      derived tables.
      
      This patch allows either the join cache to be used without the
      splitting technique when materializing a splittable derived table,
      or splitting to be used without the join cache when joining such a
      table. The costs of these alternatives are compared and the best
      variant is chosen.
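      The affected query shape can be sketched as follows (hypothetical
      tables; d is a splittable materialized derived table grouped on a
      column with a supporting index in the underlying table):

      ```sql
      SELECT t1.a, d.cnt
      FROM t1
      JOIN (SELECT b, COUNT(*) AS cnt FROM t2 GROUP BY b) AS d
        ON d.b = t1.a;
      ```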
  7. 14 Sep, 2018 2 commits
    • Merge 10.3 into 10.4 · 171fbbb9
      Marko Mäkelä authored
    • MDEV-17196 Crash during instant ADD COLUMN with long DEFAULT value · aba5c72b
      Marko Mäkelä authored
      A debug assertion would fail if an instant ADD COLUMN operation
      involves splitting the leftmost leaf page and storing a default
      value off-page. Another debug assertion could fail if the
      default value does not fit in an undo log page.
      
      btr_cur_pessimistic_update(): Invoke rec_offs_make_valid()
      in order to prevent rec_offs_validate() assertion failure.
      
      innobase_add_instant_try(): Invoke btr_cur_pessimistic_update()
      with the BTR_KEEP_POS_FLAG, which is the correct course of action
      when BLOBs may need to be written. Whenever returning true,
      ensure that my_error() will have been called.
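      A hedged sketch of the failure shape (hypothetical table and column;
      the DEFAULT value is long enough to require off-page storage):

      ```sql
      ALTER TABLE t1 ADD COLUMN c2 TEXT DEFAULT REPEAT('x', 20000),
        ALGORITHM=INSTANT;
      ```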
  8. 13 Sep, 2018 2 commits
    • MDEV-16912: Spider Order By column[datatime] limit 5 returns 3 rows · 30d22569
      Jacob Mathew authored
      The problem occurs in 10.2 and earlier releases of MariaDB Server
      because the Partition Engine was not pushing the engine conditions
      down to the underlying storage engine of each partition.  With the
      data provided by the customer, this caused Spider to return the first
      5 rows in the table.  Two of those 5 rows did not satisfy the WHERE
      clause, so they were removed from the result set by the server.
      
      To fix the problem, I have back-ported support for engine condition pushdown
      in the Partition Engine from MariaDB Server 10.3 to 10.2 and 10.1.  In 10.3
      and 10.4 I have merged the comments and the test case.
      
      Author:
        Jacob Mathew.
      
      Reviewer:
        Kentoku Shiba.
      
      Cherry-Picked:
        Commit ed49f9aa on branch 10.3
    • MDEV-16912: Spider Order By column[datatime] limit 5 returns 3 rows · ed49f9aa
      Jacob Mathew authored
      The problem occurs in 10.2 and earlier releases of MariaDB Server
      because the Partition Engine was not pushing the engine conditions
      down to the underlying storage engine of each partition.  With the
      data provided by the customer, this caused Spider to return the first
      5 rows in the table.  Two of those 5 rows did not satisfy the WHERE
      clause, so they were removed from the result set by the server.
      
      To fix the problem, I have back-ported support for engine condition pushdown
      in the Partition Engine from MariaDB Server 10.3 to 10.2 and 10.1.  In 10.3
      and 10.4 I have merged the comments and the test case.
      
      Author:
        Jacob Mathew.
      
      Reviewer:
        Kentoku Shiba.
      
      Merged:
        Commit eb2ca3d4 on branch bb-10.2-MDEV-16912
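      The failing query has roughly this shape (hypothetical Spider table
      and column; without condition pushdown the partition layer fetched
      the first LIMIT rows before the server applied the WHERE filter):

      ```sql
      SELECT * FROM spider_t
      WHERE dt > '2018-09-01 00:00:00'
      ORDER BY dt
      LIMIT 5;
      ```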
  9. 12 Sep, 2018 3 commits
  10. 11 Sep, 2018 7 commits
    • MDEV-17138 Reduce redo log volume for undo tablespace initialization · 5567a8c9
      Marko Mäkelä authored
      Implement a 10.4 redo log format, which extends the 10.3 format
      by introducing the MLOG_MEMSET record.
      
      MLOG_MEMSET: A new redo log record type for filling an area with a byte.
      
      mlog_memset(): Write the MLOG_MEMSET record.
      
      mlog_parse_nbytes(): Handle MLOG_MEMSET as well.
      
      trx_rseg_header_create(): Reduce the redo log volume by making use of
      mlog_memset() and the zero-initialization that happens inside page
      allocation.
      
      fil_addr_null: Remove.
      
      flst_init(): Create a variant that takes a zero-initialized
      buf_block_t* as a parameter, and only writes the FIL_NULL using
      mlog_memset().
      
      flst_zero_addr(): A variant of flst_write_addr() that writes
      a null address using mlog_memset() for the FIL_NULL.
      
      The following fixes replace some uses of MLOG_WRITE_STRING
      with the more compact MLOG_MEMSET record, or eliminate
      redundant redo log writes:
      
      btr_store_big_rec_extern_fields(): Invoke mlog_memset() for
      zero-initializing the tail of the ROW_FORMAT=COMPRESSED BLOB page.
      
      trx_sysf_create(), trx_rseg_format_upgrade(): Invoke mlog_memset()
      for zero-initializing the page trailer.
      
      fsp_header_init(), trx_rseg_header_create():
      Remove redundant zero-initializations.
    • MDEV-13564: Remove old crash-upgrade logic in 10.4 · 09af00cb
      Marko Mäkelä authored
      Stop supporting the additional *trunc.log files that were
      introduced via MySQL 5.7 to MariaDB Server 10.2 and 10.3.
      
      DB_TABLESPACE_TRUNCATED: Remove.
      
      purge_sys.truncate: A new structure to track undo tablespace
      file truncation.
      
      srv_start(): Remove the call to buf_pool_invalidate(). It is
      no longer necessary, given that we no longer access things in
      ways that violate the ARIES protocol. This call was originally
      added for innodb_file_format, and it may later have been necessary
      for the proper function of the MySQL 5.7 TRUNCATE recovery, which
      we are now removing.
      
      trx_purge_cleanse_purge_queue(): Take the undo tablespace as a
      parameter.
      
      trx_purge_truncate_history(): Rewrite everything mostly in a
      single function, replacing references to undo::Truncate.
      
      recv_apply_hashed_log_recs(): If any redo log is to be applied,
      and if the log_sys.log.subformat indicates that separately
      logged truncate may have been used, refuse to proceed except if
      innodb_force_recovery is set. We will still refuse crash-upgrade
      if TRUNCATE TABLE was logged. Undo tablespace truncation would
      only be logged in undo*trunc.log files, which we are no longer
      checking for.
    • Merge 10.3 into 10.4 · 67fa97dc
      Marko Mäkelä authored
    • Merge 10.3 into 10.4 · 1bf3e8ab
      Marko Mäkelä authored
    • Merge pull request #850 from HeMan/10.3 · 8dda6d79
      Jan Lindström authored
      Return code from starting MariaDB.
    • Merge pull request #858 from codership/10.3-MDEV-16052 · ffd583bb
      Jan Lindström authored
      MDEV-16052 galera mtr galera_certification_double_failure fails with deadlock
    • Merge pull request #857 from codership/10.3-MDEV-15845 · 4d93fea4
      Jan Lindström authored
      MDEV-15845 Test failure on galera.galera_concurrent_ctas
  11. 10 Sep, 2018 6 commits
    • README.md: Break off long line · f5bebaf1
      Teodor Mircea Ionita authored
    • Marko Mäkelä · 6b61f1bb
    • MDEV-17161 TRUNCATE TABLE fails after upgrade from 10.1 · fc34e4c0
      Marko Mäkelä authored
      With the TRUNCATE by rename, create, drop (MDEV-13564),
      old tables with invalid ROW_FORMAT attribute could not be
      truncated. Introduce a sloppy mode for allowing the TRUNCATE.
      
      create_table_info_t::prepare_create_table(): Add the parameter
      strict=true.
      
      ha_innobase::create(): Pass strict=false if trx!=NULL
      (the create is part of TRUNCATE).
    • Marko Mäkelä · b02c722e
    • MDEV-17158 TRUNCATE is not atomic after MDEV-13564 · 75f8e86f
      Marko Mäkelä authored
      It turned out that ha_innobase::truncate() would prematurely
      commit the transaction before ha_innobase::create() had completed.
      All of this must be atomic.
      
      innodb.truncate_crash: Use the correct DEBUG_SYNC point, and
      tolerate non-truncation of the table, because the redo log
      for the TRUNCATE transaction commit might be flushed due to
      some InnoDB background activity.
      
      dict_build_tablespace_for_table(): Merge to the function
      dict_build_table_def_step().
      
      dict_build_table_def_step(): If a table is being created during
      an already started data dictionary transaction (such as TRUNCATE),
      persistently write the table_id to the undo log header before
      creating any file. In this way, the recovery of TRUNCATE will be
      able to delete the new file before rolling back the rename of
      the original table.
      
      dict_table_rename_in_cache(): Add the parameter replace_new_file,
      used as part of rolling back a TRUNCATE operation.
      
      fil_rename_tablespace_check(): Add the parameter replace_new.
      If the parameter is set and a file identified by new_path exists,
      remove a possible tablespace and also the file.
      
      create_table_info_t::create_table_def(): Remove some debug assertions
      that no longer hold. During TRUNCATE, the transaction will already
      have been started (and performed a rename operation) before the
      table is created. Also, remove a call to dict_build_tablespace_for_table().
      
      create_table_info_t::create_table(): Add the parameter create_fk=true.
      During TRUNCATE TABLE, do not add FOREIGN KEY constraints to the
      InnoDB data dictionary, because they will also not be removed.
      
      row_table_add_foreign_constraints(): If trx=NULL, do not modify
      the InnoDB data dictionary, but only load the FOREIGN KEY constraints
      from the data dictionary.
      
      ha_innobase::create(): Lock the InnoDB data dictionary cache only
      if no transaction was passed by the caller. Unlock it in any case.
      
      innobase_rename_table(): Add the parameter commit = true.
      If !commit, do not lock or unlock the data dictionary cache.
      
      ha_innobase::truncate(): Lock the data dictionary before invoking
      rename or create, and let ha_innobase::create() unlock it and
      also commit or roll back the transaction.
      
      trx_undo_mark_as_dict(): Renamed from trx_undo_mark_as_dict_operation()
      and declared global instead of static.
      
      row_undo_ins_parse_undo_rec(): If table_id is set, this must
      be rolling back the rename operation in TRUNCATE TABLE, and
      therefore replace_new_file=true.
  12. 09 Sep, 2018 1 commit