1. 05 Jun, 2023 1 commit
    • MDEV-31403: Server crashes in st_join_table::choose_best_splitting · 928012a2
      Sergei Petrunia authored
      The code in choose_best_splitting() assumed that the join prefix is
      in join->positions[].
      
      This is not necessarily the case. This function might be called when
      the join prefix is in join->best_positions[], too.
      Follow the approach from best_access_path(), which calls this function:
      pass the current join prefix as an argument
      ("const POSITION *join_positions") and use that.
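
      A minimal sketch of the approach, with heavily simplified, hypothetical types
      (the real optimizer structures carry much more state): the caller passes whichever
      prefix array it is actually building, so the callee never assumes join->positions[].

        #include <cstddef>

        // Hypothetical stand-ins for the optimizer types, for illustration only.
        struct POSITION { double records_read; };
        struct JOIN
        {
          POSITION positions[8];        // prefix built while searching for a plan
          POSITION best_positions[8];   // prefix of the plan chosen so far
        };

        // After the fix: the join prefix is an explicit argument.
        double choose_best_splitting(const POSITION *join_positions, size_t idx)
        {
          return join_positions[idx - 1].records_read;   // read the caller's prefix
        }

        int main()
        {
          JOIN join{};
          join.positions[0].records_read= 10;
          join.best_positions[0].records_read= 42;
          choose_best_splitting(join.positions, 1);        // called from best_access_path()
          choose_best_splitting(join.best_positions, 1);   // called with the other prefix
          return 0;
        }
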
  2. 03 Jun, 2023 5 commits
  3. 05 May, 2023 1 commit
    • MDEV-31194: Server crash or assertion failure with join_cache_level=4 · 2594da7a
      Sergei Petrunia authored
      The problem, introduced in the patch for MDEV-26301:
      
      When check_join_cache_usage() decides not to use the join buffer, it must
      adjust the access method accordingly. For BNL-H joins this means switching
      from pseudo-"ref access" (with index=MAX_KEY) to some other access method.
      
      Failing to do this causes assertions down the line when code that is
      not aware of BNL-H tries to initialize index use for ref access with
      index=MAX_KEY.
      
      The fix is to follow the regular code path to disable the join buffer for
      the join_tab ("goto no_join_cache") instead of just returning from
      check_join_cache_usage().
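
      A rough sketch of the control-flow change, using made-up, simplified names rather
      than the real server structures: deciding against the join buffer falls through to
      the common no_join_cache path, which also repairs the access method, instead of
      returning early.

        #include <cstdio>

        static const unsigned MAX_KEY= 64;   // placeholder for the server's sentinel value

        struct JOIN_TAB
        {
          unsigned index;        // MAX_KEY marks the pseudo-"ref access" used by BNL-H
          bool use_join_cache;
        };

        void check_join_cache_usage(JOIN_TAB *tab, bool cache_allowed)
        {
          if (!cache_allowed)
            goto no_join_cache;          // the fix: do not simply "return" here

          tab->use_join_cache= true;
          return;

        no_join_cache:
          // Common path: disable the join buffer AND restore a sane access method,
          // so later code never initializes ref access with index == MAX_KEY.
          tab->use_join_cache= false;
          if (tab->index == MAX_KEY)
            tab->index= 0;               // fall back to a non-ref access method
        }

        int main()
        {
          JOIN_TAB tab= { MAX_KEY, false };
          check_join_cache_usage(&tab, false);
          std::printf("index=%u use_join_cache=%d\n", tab.index, (int)tab.use_join_cache);
          return 0;
        }
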
  4. 04 May, 2023 5 commits
  5. 03 May, 2023 4 commits
  6. 02 May, 2023 7 commits
  7. 29 Apr, 2023 2 commits
  8. 28 Apr, 2023 4 commits
    • MDEV-30221: Move environmental macros to before master-slave · 1963a87b
      Angelique authored
      The fix was introduced along with a re-ordering so that other macros that check test environment capabilities run before master/slave is set up.
    • MDEV-31067: selectivity_from_histogram >1.0 for a DOUBLE_PREC_HB histogram · 85cc8318
      Sergei Petrunia authored
      Variant #2.
      
      When Histogram::point_selectivity() sees that the point value of interest
      falls into one bucket, it tries to guess whether the bucket has many
      different (unpopular) values or a few popular values. (The number of
      rows is fixed, as it's a Height-balanced histogram).
      The basis for this guess is the "width" of the value range the bucket
      covers. Buckets covering wider value ranges are assumed to contain
      values with proportionally lower frequencies.
      
      This is just [brave] guesswork. For a very narrow bucket, it may
      produce an estimate that is larger than the total number of rows in the
      bucket, or even in the whole table.
      
      Remove the guesswork and replace it with basic logic: return
      either the per-table average selectivity of col=const, or selectivity
      of one bucket, whichever is lower.
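
      A small sketch of the replacement logic, with made-up parameter names (the real
      Histogram::point_selectivity() works on the histogram object and field statistics):
      the estimate is capped by both the per-table average selectivity and the share of
      rows held by a single bucket.

        #include <algorithm>
        #include <cstdio>

        // avg_sel:    per-table average selectivity of col=const
        //             (roughly 1 / number-of-distinct-values)
        // n_buckets:  buckets in the height-balanced histogram; each bucket
        //             holds the same fraction of the table's rows.
        double point_selectivity(double avg_sel, unsigned n_buckets)
        {
          double bucket_sel= 1.0 / n_buckets;     // selectivity of one whole bucket
          // No width-based extrapolation: never exceed one bucket's share of rows,
          // and never exceed the table-wide average for col=const either.
          return std::min(avg_sel, bucket_sel);
        }

        int main()
        {
          // A very narrow bucket can no longer inflate the estimate above 1/n_buckets.
          std::printf("%f\n", point_selectivity(0.001, 64));   // -> 0.001
          std::printf("%f\n", point_selectivity(0.100, 64));   // -> 0.015625
          return 0;
        }
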
    • MDEV-22756 SQL Error (1364): Field 'DB_ROW_HASH_1' doesn't have a default value · bc970573
      Sergei Golubchik authored
      exclude generated columns from the "has default value" check
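
      A simplified sketch of the idea with hypothetical field descriptors (the real check
      works on the table's Field objects): generated columns such as DB_ROW_HASH_1 are
      always computed by the server, so they must not trigger error 1364 when an INSERT
      leaves them out.

        #include <vector>

        struct FieldInfo
        {
          bool has_default;
          bool is_generated;   // stand-in for "this is a generated/virtual column"
        };

        // Returns true if some column left out of the INSERT really lacks a default.
        bool missing_default(const std::vector<FieldInfo> &not_assigned)
        {
          for (const FieldInfo &f : not_assigned)
          {
            if (f.is_generated)     // the fix: generated columns are excluded
              continue;
            if (!f.has_default)
              return true;          // would raise SQL error 1364
          }
          return false;
        }

        int main()
        {
          std::vector<FieldInfo> fields= { { /*has_default=*/false, /*is_generated=*/true } };
          return missing_default(fields) ? 1 : 0;    // 0: no error for a generated column
        }
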
    • MDEV-31113 Server crashes in store_length /... · adbad5e3
      Oleg Smirnov authored
      MDEV-31113 Server crashes in store_length / Type_handler_string_result::make_sort_key with DISTINCT and group function
      
      Fix-up for commit 476b24d0
        Author: Monty
        Date:   Thu Feb 16 14:19:33 2023 +0200
          MDEV-20057 Distinct SUM on CROSS JOIN and grouped returns wrong result
      which missed initializing sorder->suffix_length.
      In this commit the initialization is implemented by passing the
      MY_ZEROFILL flag to the allocation of the SORT_FIELD elements.
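
      A sketch of the fix's effect with a simplified stand-in struct: the SORT_FIELD
      array is allocated zero-filled (the actual patch passes the MY_ZEROFILL flag to the
      server's allocator), so suffix_length starts at 0 instead of holding garbage.

        #include <cstddef>
        #include <cstdlib>

        // Simplified stand-in; only the member relevant to the crash is shown.
        struct SORT_FIELD
        {
          unsigned suffix_length;   // was left uninitialized before the fix
        };

        SORT_FIELD *alloc_sort_fields(size_t count)
        {
          // calloc gives zero-filled memory, mirroring what MY_ZEROFILL requests
          // from the server's own allocation routine.
          return static_cast<SORT_FIELD*>(calloc(count, sizeof(SORT_FIELD)));
        }

        int main()
        {
          SORT_FIELD *sort_fields= alloc_sort_fields(4);
          unsigned ok= sort_fields[0].suffix_length;   // reliably 0 now
          free(sort_fields);
          return (int)ok;
        }
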
  9. 27 Apr, 2023 3 commits
    • MDEV-29621: Replica stopped by locks on sequence · 55a53949
      Andrei authored
      When using binlog_row_image=FULL with sequence table inserts, a
      replica can deadlock because it treats full inserts in a sequence as DDL
      statements by getting an exclusive lock on the sequence table. It
      has been observed that with parallel replication, this exclusive
      lock on the sequence table can lead to a deadlock where one
      transaction has the exclusive lock and is waiting on a prior
      transaction to commit, whereas this prior transaction is waiting on
      the MDL lock.
      
      The fix for this is on the master side: raise the FL_DDL
      flag on the GTID of a full binlog_row_image write of a sequence table.
      This forces the slave to execute the statement serially, so the deadlock
      cannot happen.
      
      A test verifies the deadlock, also proving that it happens on the OLD
      (pre-fix) slave.
      
      Coverage for OLD (buggy master) -replication-> NEW (fixed slave) is provided.
      As the pre-fix master's full row image may represent both
      SELECT NEXT VALUE and INSERT, the parallel slave pessimistically
      waits for the prior transaction to have committed before taking on the
      critical part of executing the second event (like the INSERT in the test).
      The waiting exploits the parallel slave's retry mechanism, which is
      controlled by `@@global.slave_transaction_retries`.
      
      Note that in order to avoid any persistent 'Deadlock found' 2013 error
      in OLD -> NEW, `slave_transaction_retries` may need to be set to a
      value higher than the default.
      START SLAVE is an effective workaround if this still happens.
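
      A simplified sketch of the mechanism; FL_DDL is the GTID event flag named above,
      but the bit value and everything else here are placeholders: the master marks the
      event, and the parallel slave treats a marked event like DDL and does not apply it
      concurrently with other transactions.

        #include <cstdint>

        static const uint8_t FL_DDL= 0x01;   // placeholder bit value

        struct Gtid_event { uint8_t flags2; };

        // Master side: a full binlog_row_image write of a sequence table gets
        // the DDL marking on its GTID event.
        void mark_sequence_full_row_write(Gtid_event *ev)
        {
          ev->flags2|= FL_DDL;
        }

        // Slave side (conceptually): events carrying FL_DDL are applied serially,
        // i.e. only after every prior transaction in the domain has committed.
        bool must_apply_serially(const Gtid_event &ev)
        {
          return (ev.flags2 & FL_DDL) != 0;
        }

        int main()
        {
          Gtid_event ev= { 0 };
          mark_sequence_full_row_write(&ev);
          return must_apply_serially(ev) ? 0 : 1;
        }
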
    • Sergei Golubchik
    • Oleksandr Byelkin · a959c22e
  10. 26 Apr, 2023 6 commits
  11. 25 Apr, 2023 2 commits
    • MDEV-30620 Trying to lock uninitialized LOCK_parallel_entry · e22a57da
      Andrei authored
      The error was seen by a number of mtr tests and was caused
      by overdue initialization of rpl_parallel::LOCK_parallel_entry.
      Specifically, SHOW SLAVE STATUS might find in
      rpl_parallel::workers_idle() a gtid domain hash entry
      already inserted whose mutex had not yet been through
      mysql_mutex_init().
      
      Fixed by swapping the mutex init and its entry's insertion into the hash.
      
      Tested with a generous number of `mtr --repeat` runs of a few of the tests
      reported to fail, including rpl.parallel_backup.
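
      A minimal sketch of the ordering fix, with a std::vector and pthread mutex standing
      in for the server's gtid domain hash and mysql_mutex_init(): the mutex must be
      ready before the entry becomes visible to readers such as SHOW SLAVE STATUS.

        #include <pthread.h>
        #include <vector>

        // Hypothetical, simplified stand-ins; the real code inserts into a
        // gtid domain hash and uses mysql_mutex_init().
        struct parallel_entry
        {
          pthread_mutex_t LOCK_parallel_entry;
        };

        static std::vector<parallel_entry*> domain_entries;   // stand-in for the hash

        void register_entry(parallel_entry *e)
        {
          // The fix: initialize the mutex first ...
          pthread_mutex_init(&e->LOCK_parallel_entry, nullptr);
          // ... and only then make the entry visible to readers such as
          // workers_idle() called from SHOW SLAVE STATUS.
          domain_entries.push_back(e);
        }

        int main()
        {
          parallel_entry e;
          register_entry(&e);
          pthread_mutex_lock(&e.LOCK_parallel_entry);    // safe: mutex is initialized
          pthread_mutex_unlock(&e.LOCK_parallel_entry);
          return 0;
        }
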
    • MDEV-31121: ANALYZE statement produces 0 for all timings in embedded server · a72b2c3f
      Sergei Petrunia authored
      Timers require a my_timer_init() call, which was made only in
      mysqld_main(). Call it also from init_embedded_server().
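
      A small sketch of the shape of the fix, with stub names (only my_timer_init(),
      mysqld_main() and init_embedded_server() come from the message above): both entry
      points now perform the one-time timer initialization.

        // Stub standing in for my_timer_init(); the real function sets up the
        // timing routines that ANALYZE relies on, once per process.
        static bool timers_initialized= false;

        static void my_timer_init_stub()
        {
          timers_initialized= true;
        }

        static void mysqld_main_stub()          // standalone server startup
        {
          my_timer_init_stub();
        }

        static void init_embedded_server_stub() // embedded library startup: the added call
        {
          my_timer_init_stub();
        }

        int main()
        {
          init_embedded_server_stub();
          return timers_initialized ? 0 : 1;    // ANALYZE timings now have a timebase
        }
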