  1. 13 Dec, 2017 3 commits
  2. 11 Dec, 2017 1 commit
  3. 10 Dec, 2017 2 commits
  4. 09 Dec, 2017 1 commit
    • MDEV-12501 -- set --maturity-level by default · 99bcec29
      Vesa Pentti authored
        * Note: breaking change; since this commit, a plugin that has
          worked so far might get rejected due to plugin maturity
        * mariabackup is not affected (allows all plugins)
        * VERSION file defines SERVER_MATURITY, which defines the
          corresponding numeric value as SERVER_MATURITY_LEVEL in
          include/mysql_version.h
        * The default value for 'plugin_maturity' is SERVER_MATURITY_LEVEL - 1
          (see the sketch after this list)
        * Logs a warning if a plugin has maturity lower than
          SERVER_MATURITY_LEVEL
        * Tests suppress the plugin maturity warning
        * Tests use --plugin-maturity=unknown by default so as not to fail
          due to the stricter plugin maturity handling
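
      A minimal sketch of the maturity check described above, assuming a
      simplified loader; the enum approximates the maturity levels in
      include/mysql/plugin.h, and can_load_plugin() is an invented stand-in
      for the real sql_plugin.cc logic:

          #include <cstdio>

          enum maturity {
              MATURITY_UNKNOWN = 0,
              MATURITY_EXPERIMENTAL,
              MATURITY_ALPHA,
              MATURITY_BETA,
              MATURITY_GAMMA,
              MATURITY_STABLE
          };

          // Derived from the VERSION file as SERVER_MATURITY_LEVEL
          // in include/mysql_version.h.
          const maturity SERVER_MATURITY_LEVEL = MATURITY_STABLE;

          // New default: one maturity level below the server's own.
          maturity plugin_maturity = maturity(SERVER_MATURITY_LEVEL - 1);

          bool can_load_plugin(const char* name, maturity m)
          {
              if (m < plugin_maturity) {
                  // Breaking change: the plugin is rejected outright.
                  fprintf(stderr, "plugin '%s' below maturity threshold\n",
                          name);
                  return false;
              }
              if (m < SERVER_MATURITY_LEVEL)
                  fprintf(stderr, "warning: plugin '%s' is less mature "
                          "than the server\n", name);
              return true;
          }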
  5. 08 Dec, 2017 22 commits
  6. 07 Dec, 2017 5 commits
  7. 06 Dec, 2017 6 commits
    • Merge bb-10.2-ext into 10.3 · 976f6fb1
      Marko Mäkelä authored
    • Merge 10.2 into bb-10.2-ext · ce076765
      Marko Mäkelä authored
    • Follow-up fix to MDEV-13201 Assertion `srv_undo_sources || ...` failed on shutdown during DDL operation · 77fb7ccb
      Marko Mäkelä authored
      Introduce the debug flag trx_t::persistent_stats to suppress the
      assertion for the updates of persistent statistics during fast
      shutdown.
      
      dict_stats_exec_sql(): Do execute the statement even though shutdown
      has been initiated.
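
      A minimal sketch of the debug-flag pattern, assuming simplified types;
      trx_t::persistent_stats is the flag named above, but everything else
      (the field layout, trx_start_checked()) is invented for illustration:

          #include <cassert>

          struct trx_t {
              bool persistent_stats = false; // this trx updates persistent stats
          };

          bool srv_undo_sources = true;  // cleared once fast shutdown begins

          void trx_start_checked(const trx_t& trx)
          {
              // Before the fix, this assertion failed when a statistics
              // update ran after srv_undo_sources was cleared at shutdown.
              assert(srv_undo_sources || trx.persistent_stats);
              // ... start the transaction ...
          }

          void update_stats_during_shutdown()
          {
              trx_t trx;
              trx.persistent_stats = true;  // widen the shutdown assertion
              trx_start_checked(trx);
          }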
    • MDEV-14511 Use fewer transactions for updating InnoDB persistent statistics · 7dc6066d
      Marko Mäkelä authored
      dict_stats_exec_sql(): Expect the caller to always provide a transaction.
      Remove some redundant assertions. The caller must hold dict_sys->mutex,
      but holding dict_operation_lock is only necessary for accessing
      data dictionary tables, which we are not accessing.
      
      dict_stats_save_index_stat(): Acquire dict_sys->mutex
      for invoking dict_stats_exec_sql().
      
      dict_stats_save(), dict_stats_update_for_index(), dict_stats_update(),
      dict_stats_drop_index(), dict_stats_delete_from_table_stats(),
      dict_stats_delete_from_index_stats(), dict_stats_drop_table(),
      dict_stats_rename_in_table_stats(), dict_stats_rename_in_index_stats(),
      dict_stats_rename_table(): Use a single caller-provided
      transaction that is started and committed or rolled back by the caller.
      
      dict_stats_process_entry_from_recalc_pool(): Let the caller provide
      a transaction object.
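
      A minimal sketch of the caller-provided-transaction pattern described
      above; trx_t and all function names here are simplified stand-ins for
      the InnoDB internals, not the real signatures:

          #include <cstdio>

          struct trx_t { bool active = false; };

          void trx_start(trx_t& t)    { t.active = true; }
          void trx_commit(trx_t& t)   { t.active = false; puts("commit"); }
          void trx_rollback(trx_t& t) { t.active = false; puts("rollback"); }

          // Each statistics helper now expects an already started
          // transaction instead of creating and committing its own.
          bool save_table_stats(trx_t& trx) { return trx.active; }
          bool save_index_stats(trx_t& trx) { return trx.active; }

          // The caller owns the lifecycle: one start and one commit (or
          // rollback) covers the statistics of the whole table, including
          // all of its partitions.
          bool save_all_stats_for_table()
          {
              trx_t trx;
              trx_start(trx);
              if (save_table_stats(trx) && save_index_stats(trx)) {
                  trx_commit(trx);
                  return true;
              }
              trx_rollback(trx);
              return false;
          }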
      
      ha_innobase::open(): Pass a transaction to dict_stats_init().
      
      ha_innobase::create(), ha_innobase::discard_or_import_tablespace():
      Pass a transaction to dict_stats_update().
      
      ha_innobase::rename_table(): Pass a transaction to
      dict_stats_rename_table(). We do not use the same transaction
      as the one that updated the data dictionary tables, because
      we already released the dict_operation_lock. (FIXME: there is
      a race condition; a lock wait on SYS_* tables could occur
      in another DDL transaction until the data dictionary transaction
      is committed.)
      
      ha_innobase::info_low(): Pass a transaction to dict_stats_update()
      when calculating persistent statistics.
      
      alter_stats_norebuild(), alter_stats_rebuild(): Update the
      persistent statistics as well. In this way, a single transaction
      will be used for updating the statistics of a whole table, even
      for partitioned tables.
      
      ha_innobase::commit_inplace_alter_table(): Drop statistics for
      all partitions when adding or dropping virtual columns, so that
      the statistics will be recalculated on the next handler::open().
      This is a refactored version of Oracle Bug#22469660 fix.
      
      RecLock::add_to_waitq(), lock_table_enqueue_waiting():
      Do not allow a lock wait to occur for updating statistics
      in a data dictionary transaction, such as DROP TABLE. Instead,
      return the previously unused error code DB_QUE_THR_SUSPENDED.
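
      A minimal sketch of the no-wait rule, assuming simplified types;
      DB_QUE_THR_SUSPENDED is the error code named above, while
      enqueue_waiting() and the dict_operation field are invented
      stand-ins for the real lock-system code:

          enum dberr_t { DB_SUCCESS, DB_LOCK_WAIT, DB_QUE_THR_SUSPENDED };

          struct trx_t { bool dict_operation = false; };  // e.g. DROP TABLE

          dberr_t enqueue_waiting(const trx_t& trx)
          {
              // A data dictionary transaction must never block on a
              // statistics table lock; report the previously unused
              // error code instead of suspending the thread.
              if (trx.dict_operation)
                  return DB_QUE_THR_SUSPENDED;
              return DB_LOCK_WAIT;  // normal case: wait for the lock
          }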
      
      row_merge_lock_table(), row_mysql_lock_table(): Remove dead code
      for handling DB_QUE_THR_SUSPENDED.
      
      row_drop_table_for_mysql(), row_truncate_table_for_mysql():
      Drop the statistics as part of the data dictionary transaction.
      After TRUNCATE TABLE, the statistics will be recalculated on
      subsequent ha_innobase::open(), similar to how the logic after
      the above-mentioned Oracle Bug#22469660 fix in
      ha_innobase::commit_inplace_alter_table() works.
      
      btr_defragment_thread(): Use a single transaction object for
      updating defragmentation statistics.
      
      dict_stats_save_defrag_summary(), dict_stats_save_defrag_stats(),
      dict_stats_process_entry_from_defrag_pool(),
      dict_defrag_process_entries_from_defrag_pool():
      Add a parameter for the transaction.
      
      dict_stats_empty_table(): Make public. This will be called by
      row_truncate_table_for_mysql() after dropping persistent statistics,
      to clear the memory-based statistics as well.
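
      A minimal sketch of what "clearing the memory-based statistics"
      amounts to, assuming a simplified dict_table_t; the member names
      approximate the InnoDB ones but are not the real declarations:

          struct dict_table_t {
              unsigned long long stat_n_rows = 0;
              unsigned long stat_clustered_index_size = 0;
              bool stat_initialized = false;
          };

          void dict_stats_empty_table(dict_table_t& t)  // now public
          {
              t.stat_n_rows = 0;
              t.stat_clustered_index_size = 0;
              t.stat_initialized = true;  // initialized to "empty", not stale
          }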
    • MDEV-14563: Wrong query plan for query with no PK · 2c1e4d4d
      Sergei Petrunia authored
      Part #2: Don't use the new code for the clustered PK; it is handled
      in a special way by the code right above.
    • MDEV-14563: Wrong query plan for query with no PK · a6254e5e
      Sergei Petrunia authored
      TABLE_SHARE::init_from_binary_frm_image() calls handler_file->index_flags()
      before it has set TABLE_SHARE::primary_key (it is 0 while it should be
      MAX_KEY in my example).
      This causes MyRocks to report wrong index flags (it thinks it's a PK while
      it is not), which causes invalid query plans later on.
      
      Do the only thing that seems feasible: adjust field->part_of_key to
      have the correct value in ha_rocksdb::open().
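
      A minimal sketch of the idea behind the fix, assuming simplified
      types; the real key_map, Field and TABLE_SHARE are server internals,
      and rebuild_part_of_key() is an invented stand-in for the adjustment
      done in ha_rocksdb::open():

          #include <cstdint>
          #include <vector>

          const unsigned MAX_KEY = 255;  // marker: table has no primary key

          struct Field { uint64_t part_of_key = 0; };  // bitmap of keys

          struct KeyDef { std::vector<unsigned> field_idx; };

          struct TableShare {
              unsigned            primary_key = MAX_KEY;  // correct by open()
              std::vector<Field>  fields;
              std::vector<KeyDef> keys;
          };

          // Rebuild each field's part_of_key bitmap from the actual key
          // definitions, discarding whatever index_flags() implied while
          // primary_key still held the bogus value 0 during frm parsing.
          void rebuild_part_of_key(TableShare& share)
          {
              for (Field& f : share.fields)
                  f.part_of_key = 0;
              for (unsigned k = 0; k < share.keys.size(); k++)
                  for (unsigned i : share.keys[k].field_idx)
                      share.fields[i].part_of_key |= uint64_t(1) << k;
          }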