1. 29 Jun, 2021 5 commits
    • MDEV-26029: Sparse files are inefficient on thinly provisioned storage · 30edd554
      Marko Mäkelä authored
      The MariaDB implementation of page_compressed tables for InnoDB used
      sparse files. In the worst case, every data page in the data file
      consists of some data followed by a hole, which can be extremely
      inefficient on some file systems.
      
      If the underlying storage device is thinly provisioned (can compress
      data on the fly), it would be good to write regular files (with sequences
      of NUL bytes at the end of each page_compressed block) and let the
      storage device take care of compressing the data.
      
      For reads, sparse file regions and regions containing NUL bytes will be
      indistinguishable.
      
      my_test_if_disable_punch_hole(): A new predicate for detecting thinly
      provisioned storage. (Not implemented yet.)
      
      innodb_atomic_writes: Correct the comment.
      
      buf_flush_page(): Support all values of fil_node_t::punch_hole.
      On a thinly provisioned storage device, we will always write
      NUL-padded innodb_page_size bytes also for page_compressed tables.
      
      buf_flush_freed_pages(): Remove a redundant condition.
      
      fil_space_t::atomic_write_supported: Remove. (This was duplicating
      fil_node_t::atomic_write.)
      
      fil_space_t::punch_hole: Remove. (Duplicated fil_node_t::punch_hole.)
      
      fil_node_t: Remove magic_n, and consolidate flags into bitfields.
      For punch_hole we introduce a third value that indicates a
      thinly provisioned storage device.
      
      fil_node_t::find_metadata(): Detect all attributes of the file.
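
      To make the write-strategy contrast concrete, here is a minimal
      Linux-only sketch; write_page_punch_hole() and write_page_nul_padded()
      are hypothetical helper names, not functions in the MariaDB source tree.

        // Hypothetical illustration of the two strategies; Linux-specific.
        #include <fcntl.h>    // fallocate(), FALLOC_FL_PUNCH_HOLE (_GNU_SOURCE)
        #include <unistd.h>   // pwrite()
        #include <cstring>    // memcpy()
        #include <vector>

        // Sparse-file strategy: write the compressed payload, then punch a
        // hole over the rest of the page, deallocating it in the file system.
        bool write_page_punch_hole(int fd, off_t page_offset, size_t page_size,
                                   const void *payload, size_t payload_len)
        {
          if (pwrite(fd, payload, payload_len, page_offset) !=
              static_cast<ssize_t>(payload_len))
            return false;
          return fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                           page_offset + payload_len,
                           page_size - payload_len) == 0;
        }

        // Regular-file strategy (this commit): always write page_size bytes,
        // NUL-padding the tail, and let thinly provisioned storage compress
        // or deduplicate the runs of NUL bytes.
        bool write_page_nul_padded(int fd, off_t page_offset, size_t page_size,
                                   const void *payload, size_t payload_len)
        {
          std::vector<char> page(page_size, 0);  // NUL-filled page buffer
          std::memcpy(page.data(), payload, payload_len);
          return pwrite(fd, page.data(), page_size, page_offset) ==
                 static_cast<ssize_t>(page_size);
        }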
    • Merge 10.5 into 10.6 · b11aa0df
      Marko Mäkelä authored
    • MDEV-26042 Atomic write capability is not detected correctly · 617dee34
      Marko Mäkelä authored
      my_init_atomic_write(): Detect all forms of SSD, in case multiple
      types of devices are installed in the same machine.
      This was first broken in commit ed008a74
      and broken further in commit 70684afe.
      
      SAME_DEV(): Match block devices, ignoring partition numbers.
      
      Let us use stat() instead of lstat(), in case someone has a symbolic
      link in /dev.
      
      Instead of reporting errors with perror(), let us use fprintf(stderr)
      with the file name, the impact of the error, and strerror(errno).
      Because this code is specific to Linux, we may depend on the
      GNU libc/uClibc/musl extension %m for strerror(errno).
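
      The gist of the detection can be sketched as follows; the helper names
      and the device-matching rule are illustrative assumptions, not the
      exact SAME_DEV() definition.

        #include <sys/stat.h>
        #include <sys/sysmacros.h>  // major(), minor()
        #include <cstdio>

        // Partitions of a disk share the device major number, so comparing
        // major numbers matches a file system against a whole block device
        // regardless of the partition. (Illustrative approximation.)
        static bool same_disk(dev_t fs_dev, dev_t blk_dev)
        {
          return major(fs_dev) == major(blk_dev);
        }

        static bool fs_device_of(const char *path, dev_t *out)
        {
          struct stat s;
          // stat(), not lstat(): follow a possible symbolic link in /dev.
          if (stat(path, &s))
          {
            // File name, impact, and error text; %m is the glibc/uClibc/musl
            // shorthand for strerror(errno), acceptable in Linux-only code.
            std::fprintf(stderr, "%s: stat() failed, atomic writes are"
                         " not detected: %m\n", path);
            return false;
          }
          *out = S_ISBLK(s.st_mode) ? s.st_rdev : s.st_dev;
          return true;
        }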
    • 3d15e3c0
      Marko Mäkelä authored
    • MDEV-26031 Unnecessary XID logging in the one-phase commit case · 39001478
      Andrei Elkin authored
      The bug was originally observed as the binlog background thread
      hanging at shutdown, similar to one of the cases in MDEV-21120.
      It was caused by unnecessary XID logging during one-phase commit
      (1pc) execution.
      
      Two parts of the issue are fixed. The per-engine loop over the
      involved engines, which attempts to mark a group as requiring XID
      unlogging, is corrected in two ways: it is no longer executed when
      the termination event is irrelevant for recovery (in particular,
      when it carries no XID), and it no longer breaks out of the loop
      unconditionally at the end of the first iteration.
  2. 28 Jun, 2021 3 commits
  3. 26 Jun, 2021 4 commits
    • Merge 10.5 into 10.6 · 891a927e
      Marko Mäkelä authored
    • MDEV-26017: Assertion stat.flush_list_bytes <= curr_pool_size · fc2ff464
      Marko Mäkelä authored
      buf_flush_relocate_on_flush_list(): If we are removing the block from
      buf_pool.flush_list, subtract its size from buf_pool.stat.flush_list_bytes.
      This fixes a regression that was introduced in
      commit 22b62eda (MDEV-25113).
    • Cleanup: Remove unused mtr_block_dirtied · aa95c423
      Marko Mäkelä authored
    • MDEV-26010 fixup: Use acquire/release memory order · 759deaa0
      Marko Mäkelä authored
      In commit 5f22511e we depend on
      Total Store Ordering. For correct operation on ISAs that implement
      weaker memory ordering, we must explicitly use release/acquire stores
      and loads on buf_page_t::oldest_modification_ to prevent a race condition
      when buf_page_t::list does not happen to be on the same cache line.
      
      buf_page_t::clear_oldest_modification(): Assert that the block is
      not in buf_pool.flush_list, and use std::memory_order_release.
      
      buf_page_t::oldest_modification_acquire(): Read oldest_modification_
      with std::memory_order_acquire. In this way, if the return value is 0,
      the caller may safely assume that it will not observe the buf_page_t
      as being in buf_pool.flush_list, even if it is not holding
      buf_pool.flush_list_mutex.
      
      buf_flush_relocate_on_flush_list(), buf_LRU_free_page():
      Invoke buf_page_t::oldest_modification_acquire().
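
      A minimal sketch of this release/acquire pairing, with the class around
      the field simplified and hypothetical (the member names follow the
      commit message):

        #include <atomic>
        #include <cstdint>

        struct buf_page_sketch
        {
          std::atomic<uint64_t> oldest_modification_{0};

          // Writer side: called only after the block has been detached from
          // buf_pool.flush_list. The release store makes that list update
          // visible to any thread that later acquire-loads 0.
          void clear_oldest_modification()
          {
            // invariant: the block is no longer in buf_pool.flush_list
            oldest_modification_.store(0, std::memory_order_release);
          }

          // Reader side: if this returns 0, the caller may assume the block
          // is not reachable through buf_pool.flush_list, even without
          // holding buf_pool.flush_list_mutex.
          uint64_t oldest_modification_acquire() const
          {
            return oldest_modification_.load(std::memory_order_acquire);
          }
        };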
  4. 24 Jun, 2021 7 commits
    • Merge 10.5 into 10.6 · a8350cfb
      Marko Mäkelä authored
    • MDEV-26010: Assertion lsn > 2 failed in buf_pool_t::get_oldest_modification · 5f22511e
      Marko Mäkelä authored
      In commit 22b62eda (MDEV-25113)
      we introduced a race condition. buf_LRU_free_page() would read
      buf_page_t::oldest_modification() as 0 and assume that
      buf_page_t::list can be used (for attaching the block to the
      buf_pool.free list). In the observed race condition,
      buf_pool_t::delete_from_flush_list() had cleared the field,
      and buf_pool_t::delete_from_flush_list_low() was executing
      concurrently with buf_LRU_block_free_non_file_page(),
      which resulted in buf_pool.flush_list.end becoming corrupted.
      
      buf_pool_t::delete_from_flush_list(), buf_flush_relocate_on_flush_list():
      First remove the block from buf_pool.flush_list, and only then
      invoke buf_page_t::clear_oldest_modification(), to ensure that
      reading oldest_modification()==0 really implies that the block
      no longer is in buf_pool.flush_list.
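
      The corrected ordering can be sketched as follows (simplified,
      hypothetical types; the point is that the block leaves the list
      before the field is cleared):

        #include <atomic>
        #include <cstdint>
        #include <list>
        #include <mutex>

        struct page;

        struct flush_list_sketch
        {
          std::mutex flush_list_mutex;
          std::list<page*> flush_list;
          void delete_from_flush_list(page *bpage);
        };

        struct page
        {
          std::atomic<uint64_t> oldest_modification{2048};
          std::list<page*>::iterator pos;  // set on flush_list insertion
        };

        void flush_list_sketch::delete_from_flush_list(page *bpage)
        {
          {
            std::lock_guard<std::mutex> g(flush_list_mutex);
            flush_list.erase(bpage->pos);       // step 1: unlink the block
          }
          bpage->oldest_modification.store(     // step 2: only now clear
              0, std::memory_order_release);
        }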
    • MDEV-25948 fixup: Demote a warning to a note · e329dc8d
      Marko Mäkelä authored
      buf_dblwr_t::recover(): Issue a note, not a warning, about
      pages whose FIL_PAGE_LSN is in the future. This was supposed to be
      part of commit 762bcb81 (MDEV-25948)
      but had been accidentally omitted.
    • MDEV-26012 InnoDB purge and shutdown hang after failed ALTER TABLE · 82fe83a3
      Marko Mäkelä authored
      ha_innobase::commit_inplace_alter_table(): Invoke
      purge_sys.resume_FTS() on all error handling paths
      if purge_sys.stop_FTS() had been called.
      
      This fixes a regression that had been introduced in
      commit 1bd681c8 (MDEV-25506).
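
      One idiom that makes such pairing hard to get wrong is an RAII guard;
      the following is a hypothetical illustration of the invariant that the
      fix enforces, not the code that the commit actually adds:

        struct purge_sys_sketch
        {
          void stop_FTS()   {}
          void resume_FTS() {}
        };

        struct fts_stop_guard
        {
          purge_sys_sketch &purge_sys;
          bool engaged = false;

          explicit fts_stop_guard(purge_sys_sketch &p) : purge_sys(p) {}
          void stop() { purge_sys.stop_FTS(); engaged = true; }
          // The destructor runs on success and on every error-handling
          // return alike, so resume_FTS() can never be forgotten.
          ~fts_stop_guard() { if (engaged) purge_sys.resume_FTS(); }
        };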
    • MDEV-26007 Rollback unnecessarily initiates redo log write · 033e29b6
      Marko Mäkelä authored
      trx_t::commit_in_memory(): Do not initiate a redo log write if
      the transaction has no visible effect. If anything for this
      transaction had been made durable, crash recovery will roll back
      the transaction just fine even if the end of ROLLBACK is not
      durably written.
      
      Rollbacks of transactions that are associated with XA identifiers
      (possibly internally via the binlog) will always be persisted.
      The test rpl.rpl_gtid_crash covers this.
    • Merge 10.5 into 10.6 · b4c9cd20
      Marko Mäkelä authored
    • MDEV-26004 Excessive wait times in buf_LRU_get_free_block() · 60ed4797
      Marko Mäkelä authored
      buf_LRU_get_free_block(): Initially wait for a single block to be
      freed, signaled by buf_pool.done_free. Only if that fails and no
      LRU eviction flushing batch is already running, we initiate a
      flushing batch that should serve all threads that are currently
      waiting in buf_LRU_get_free_block().
      
      Note: In an extreme case, this may introduce a performance regression
      at larger numbers of connections. We observed this in sysbench
      oltp_update_index with 512MiB buffer pool, 4GiB of data on fast NVMe,
      and 1000 concurrent connections, on a 20-thread CPU. The contention point
      appears to be buf_pool.mutex, and the improvement would turn into a
      regression somewhere beyond 32 concurrent connections.
      
      On slower storage, such regression was not observed; instead, the
      throughput was improving and maximum latency was reduced.
      
      The excessive waits were pointed out by Vladislav Vaintroub.
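
      A condition-variable sketch of the new wait protocol (hypothetical
      simplification; buf_pool.mutex and buf_pool.done_free are the names
      from the commit message):

        #include <chrono>
        #include <condition_variable>
        #include <mutex>

        struct buf_pool_sketch
        {
          std::mutex mutex;                   // stands in for buf_pool.mutex
          std::condition_variable done_free;  // signaled when a block is freed
          bool lru_flush_running = false;
          int  free_blocks = 0;

          int get_free_block()                // returns a dummy block id
          {
            std::unique_lock<std::mutex> lk(mutex);
            for (;;)
            {
              if (free_blocks)
                return --free_blocks;
              // Step 1: wait for a single freed block to be signaled.
              if (done_free.wait_for(lk, std::chrono::seconds(1),
                                     [this] { return free_blocks > 0; }))
                continue;
              // Step 2: only if the wait timed out and no LRU eviction
              // batch is running, initiate one batch; it serves all
              // threads that are currently waiting here.
              if (!lru_flush_running)
                lru_flush_running = true;     // initiate_lru_flush() (stub)
            }
          }
        };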
  5. 23 Jun, 2021 15 commits
    • Merge 10.5 into 10.6 · 101da872
      Marko Mäkelä authored
    • MDEV-25113: Introduce a page cleaner mode before 'furious flush' · 6441bc61
      Marko Mäkelä authored
      MDEV-23855 changed how the page cleaner is signaled by
      user threads. If a threshold is exceeded, a mini-transaction commit
      invokes buf_flush_ahead() in order to initiate page flushing
      before all writers eventually grind to a halt in
      log_free_check(), waiting for the checkpoint age to decrease.
      
      However, buf_flush_ahead() would always initiate 'furious flushing',
      making the buf_flush_page_cleaner thread write innodb_io_capacity_max
      pages per batch, sleeping no time between batches, until the
      limit LSN is reached. Because this could saturate the I/O subsystem,
      system throughput could drop significantly during these
      'furious flushing' spikes.
      
      With this change, we introduce a gentler version of flush-ahead,
      which would write innodb_io_capacity_max pages per second until
      the 'soft limit' is reached.
      
      buf_flush_ahead(): Add a parameter to specify whether furious flushing
      is requested.
      
      buf_flush_async_lsn: Similar to buf_flush_sync_lsn, a limit for
      the less intrusive flushing.
      
      buf_flush_page_cleaner(): Keep working until buf_flush_async_lsn
      has been reached.
      
      log_close(): Suppress a warning message when a new log is created
      during startup and old logs did not exist. Return the type of page
      cleaning that will be needed.
      
      mtr_t::finish_write(): Invoke log_close() also when m_log.is_small().
      Return the type of page cleaning that will be needed.
      
      mtr_t::commit(): Invoke buf_flush_ahead() based on the return value of
      mtr_t::finish_write().
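
      The two limits can be sketched like this (hypothetical simplification;
      the variable names follow the commit message):

        #include <atomic>
        #include <cstdint>

        std::atomic<uint64_t> buf_flush_sync_lsn{0};   // 'furious' limit
        std::atomic<uint64_t> buf_flush_async_lsn{0};  // gentler soft limit

        void buf_flush_ahead(uint64_t lsn, bool furious)
        {
          std::atomic<uint64_t> &limit =
              furious ? buf_flush_sync_lsn : buf_flush_async_lsn;
          // Monotonically raise the limit polled by buf_flush_page_cleaner.
          uint64_t old = limit.load(std::memory_order_relaxed);
          while (old < lsn &&
                 !limit.compare_exchange_weak(old, lsn,
                                              std::memory_order_relaxed))
          {}
        }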
    • MDEV-25113: Make page flushing faster · 22b62eda
      Marko Mäkelä authored
      buf_page_write_complete(): Reduce the buf_pool.mutex hold time,
      and do not acquire buf_pool.flush_list_mutex at all.
      Instead, mark blocks clean by setting oldest_modification to 1.
      Dirty pages of temporary tables will be identified by the special
      value 2 instead of the previous special value 1.
      (By design of the ib_logfile0 format, actual LSN values smaller
      than 2048 are not possible.)
      
      buf_LRU_free_page(), buf_pool_t::get_oldest_modification()
      and many other functions will remove the garbage (clean blocks)
      from buf_pool.flush_list while holding buf_pool.flush_list_mutex.
      
      buf_pool_t::n_flush_LRU, buf_pool_t::n_flush_list:
      Replaced with non-atomic variables, protected by buf_pool.mutex,
      to avoid unnecessary synchronization when modifying the counts.
      
      export_vars: Remove unnecessary indirection for
      innodb_pages_created, innodb_pages_read, innodb_pages_written.
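
      The resulting encoding of oldest_modification can be summarized in a
      sketch (the predicate names are invented here for illustration):

        #include <cstdint>

        // 0      clean, and not in buf_pool.flush_list
        // 1      written out: clean, but may linger in buf_pool.flush_list
        //        as garbage until lazily removed
        // 2      dirty page of a temporary table (not redo-logged)
        // >=2048 a real LSN of a dirty persistent page; smaller values are
        //        impossible by design of the ib_logfile0 format
        inline bool is_clean(uint64_t om)      { return om <= 1; }
        inline bool is_temp_dirty(uint64_t om) { return om == 2; }
        inline bool has_real_lsn(uint64_t om)  { return om >= 2048; }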
    • MDEV-25801: buf_flush_dirty_pages() is very slow · 8af53897
      Marko Mäkelä authored
      In commit 7cffb5f6 (MDEV-23399)
      the implementation of buf_flush_dirty_pages() was replaced with
      a slow one, which would perform excessive scans of the
      buf_pool.flush_list and make little progress.
      
      buf_flush_list(), buf_flush_LRU(): Split from buf_flush_lists().
      Vladislav Vaintroub noticed that we will not need to invoke
      log_flush_task.wait() for the LRU eviction flushing.
      
      buf_flush_list_space(): Replaces buf_flush_dirty_pages().
      This is like buf_flush_list(), but operating on a single
      tablespace at a time. Writes at most innodb_io_capacity
      pages. Returns whether some of the tablespace might remain
      in the buffer pool.
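
      The contract can be sketched as follows (hypothetical, simplified
      page-list representation; 'cap' stands for innodb_io_capacity):

        struct page_sk { unsigned space_id; bool dirty; page_sk *next; };

        // One pass over the dirty-page list, writing only pages of the
        // given tablespace, capped at innodb_io_capacity.
        bool buf_flush_list_space_sketch(page_sk *flush_list,
                                         unsigned space_id, unsigned long cap)
        {
          unsigned long written = 0;
          for (page_sk *p = flush_list; p; p = p->next)
          {
            if (p->space_id != space_id || !p->dirty)
              continue;
            if (written == cap)
              return true;     // pages of the tablespace may remain
            p->dirty = false;  // stands in for initiating the page write
            written++;
          }
          return false;        // no dirty page of the tablespace was skipped
        }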
    • MDEV-25948 Remove log_flush_task · 762bcb81
      Marko Mäkelä authored
      Vladislav Vaintroub suggested that invoking log_flush_up_to()
      for every page could perform better than invoking a log write
      between buf_pool.flush_list batches, as we started doing in
      commit 3a9a3be1 (MDEV-23855).
      This could depend on the sequence in which pages are being
      modified. The buf_pool.flush_list is ordered by
      oldest_modification, while the FIL_PAGE_LSN of the pages is
      theoretically independent of that. In the pathological case,
      we will wait for a log write before writing each individual page.
      
      It turns out that we can defer the call to log_flush_up_to()
      until just before submitting the page write. If the doublewrite
      buffer is being used, we can submit a write batch of "future" pages
      to the doublewrite buffer, and only wait for the log write right
      before we are writing an already doublewritten page.
      The next doublewrite batch will not be initiated before the last
      page write from the current batch has completed.
      
      When a future version introduces asynchronous writes of the log,
      we could initiate a write at the start of a flushing batch, to
      reduce waiting further.
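
      The deferred ordering can be sketched as follows (stub functions;
      page_lsns stands for the FIL_PAGE_LSN of each page in the batch):

        #include <cstdint>
        #include <vector>

        static uint64_t durable_lsn = 0;               // stub log subsystem
        static void log_flush_up_to(uint64_t lsn)      // wait for durability
        { if (lsn > durable_lsn) durable_lsn = lsn; }
        static void submit_page_write(std::size_t /*page_no*/) {}

        // A page write may be submitted only once the log is durable up to
        // the page's FIL_PAGE_LSN; the wait is deferred until just before
        // each submission instead of being taken up front for the batch.
        void write_batch(const std::vector<uint64_t> &page_lsns)
        {
          for (std::size_t i = 0; i < page_lsns.size(); i++)
          {
            if (page_lsns[i] > durable_lsn)
              log_flush_up_to(page_lsns[i]);
            submit_page_write(i);
          }
        }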
    • MDEV-25954: Trim os_aio_wait_until_no_pending_writes() · 6dfd44c8
      Marko Mäkelä authored
      It turns out that we had some unnecessary waits for no outstanding
      write requests to exist. They were basically working around a
      bug that was fixed in MDEV-25953.
      
      In the write completion callback, blocks will be marked clean,
      so it is sufficient to consult buf_pool.flush_list to determine
      which writes have not yet completed.
      
      On FLUSH TABLES...FOR EXPORT we must still wait for all pending
      asynchronous writes to complete, because buf_flush_file_space()
      would merely guarantee that writes will have been initiated.
    • MDEV-25062: Reduce trx_rseg_t::mutex contention · 6e12ebd4
      Marko Mäkelä authored
      redo_rseg_mutex, noredo_rseg_mutex: Remove the PERFORMANCE_SCHEMA keys.
      The rollback segment mutex will be uninstrumented.
      
      trx_sys_t: Remove pointer indirection for rseg_array, temp_rseg.
      Align each element to the cache line.
      
      trx_sys_t::rseg_id(): Replaces trx_rseg_t::id.
      
      trx_rseg_t::ref: Replaces needs_purge, trx_ref_count, skip_allocation
      in a single std::atomic<uint32_t>.
      
      trx_rseg_t::latch: Replaces trx_rseg_t::mutex.
      
      trx_rseg_t::history_size: Replaces trx_sys_t::rseg_history_len.
      
      trx_sys_t::history_size_approx(): Replaces trx_sys.rseg_history_len
      in those places where the exact count does not matter. We must not
      acquire any trx_rseg_t::latch while holding index page latches, because
      normally the trx_rseg_t::latch is acquired before any page latches.
      
      trx_sys_t::history_exists(): Replaces trx_sys.rseg_history_len!=0
      with an approximation.
      
      We remove some unnecessary trx_rseg_t::latch acquisition around
      trx_undo_set_state_at_prepare() and trx_undo_set_state_at_finish().
      Those operations will only access fields that remain constant
      after trx_rseg_t::init().
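
      The packing can be sketched like this (the bit layout is an assumption
      for illustration; the real layout may differ):

        #include <atomic>
        #include <cstdint>

        class rseg_ref_sketch
        {
          static constexpr uint32_t SKIP_ALLOCATION = 1U;       // bit 0
          static constexpr uint32_t NEEDS_PURGE     = 1U << 1;  // bit 1
          static constexpr uint32_t REF             = 1U << 2;  // count unit
          std::atomic<uint32_t> ref{0};

        public:
          // All three logical fields are updated with single atomic
          // read-modify-write operations; no mutex is needed.
          void acquire() { ref.fetch_add(REF, std::memory_order_relaxed); }
          void release() { ref.fetch_sub(REF, std::memory_order_relaxed); }
          bool is_referenced() const
          { return ref.load(std::memory_order_relaxed) >= REF; }
          void set_needs_purge()
          { ref.fetch_or(NEEDS_PURGE, std::memory_order_relaxed); }
          bool needs_purge() const
          { return ref.load(std::memory_order_relaxed) & NEEDS_PURGE; }
          bool skip_allocation() const
          { return ref.load(std::memory_order_relaxed) & SKIP_ALLOCATION; }
        };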
    • MDEV-25967: Correctly extend deferred-recovery files · b3e87880
      Marko Mäkelä authored
      recv_sys_t::recover_deferred(): Set the file size to match the number
      of pages. Mariabackup might copy the file while it was being extended.
    • MDEV-25996 sux_lock::s_lock(): Assertion !have_s() failed on startup · 592a925c
      Marko Mäkelä authored
      dict_check_sys_tables(): Correctly advance the cursor position.
      This fixes a regression that was caused by
      commit 49e2c8f0 (MDEV-25743).
    • Merge 10.5 into 10.6 · 3a566de2
      Marko Mäkelä authored
    • Merge 10.4 into 10.5 · 344e5990
      Marko Mäkelä authored
    • Merge 10.3 into 10.4 · 09b03ff3
      Marko Mäkelä authored
    • bump the VERSION · 55b3a3f4
      Daniel Bartholomew authored
    • bump the VERSION · bf2680ea
      Daniel Bartholomew authored
    • bump the VERSION · 1deb6304
      Daniel Bartholomew authored
  6. 22 Jun, 2021 6 commits