1. 30 Aug, 2018 3 commits
    • Sequences with negative numbers and auto_increment_increment crashes · 7aa80ba6
      Monty authored
      This also fixes MDEV-16313 Assertion `next_free_value % real_increment == offset' fails upon CREATE SEQUENCE in galera cluster
      
      Fixed by applying llabs() in the assertion.
      Also adjusted auto_increment_offset to be taken modulo auto_increment_increment.
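
      A minimal sketch of the kind of statement sequence affected (the names and
      values are illustrative, not taken from the commit or the MDEV-16313 test
      case). A sequence created with INCREMENT BY 0 takes its step from
      auto_increment_increment, so negative sequence values interact with the
      session increment/offset settings:

      SET SESSION auto_increment_increment = 100, auto_increment_offset = 10;
      CREATE SEQUENCE s1 INCREMENT BY 0 MINVALUE -1000 MAXVALUE 1000 START WITH -3;
      SELECT NEXT VALUE FOR s1;  -- with negative values the old assertion
                                 -- `next_free_value % real_increment == offset'
                                 -- could fail before llabs() was applied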
    • MDEV-16889: Spider Crash mysqld got exception 0xc0000005 · ceb55971
      Jacob Mathew authored
      The SELECT with the INNER JOIN is executed with one of the two tables being
      optimized as a constant table, which is pre-read.  Spider nevertheless attempts
      to push down the join to the data node.  The crash occurs because the constant
      table is excluded from the optimized query that Spider attempts to push down.
      
      In order for Spider to be able to push down a join, the following conditions
      need to be met:
      - All of the tables involved in the join need to be included in the optimized
        query that Spider pushes down.  When any of the tables involved in the join
        is a constant table, it is excluded from the optimized query that Spider
        attempts to push down.
      - All fields involved in the query need to be members of tables included in the
        optimized query.
      
      I fixed the problem by preventing Spider from pushing down queries that include
      a field that is not a member of a table included in the optimized query.  This
      solution fixes the reported problem and also fixes other potential problems.
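
      A hedged sketch of the query shape involved (table and column names are
      assumptions, not taken from the bug report): t1 and t2 are Spider tables,
      and the equality on t2's primary key lets the optimizer read t2 as a
      constant table, so t2 is missing from the query Spider would push down:

      SELECT t1.a, t2.b
      FROM t1 INNER JOIN t2 ON t1.id = t2.id
      WHERE t2.id = 1;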
      
      Author:
        Jacob Mathew.
      
      Reviewer:
        Kentoku Shiba.
      
      Merged:
        Commit 4885baf6 on branch bb-10.3-MDEV-16889
    • MDEV-16889: Spider Crash mysqld got exception 0xc0000005 · 4885baf6
      Jacob Mathew authored
      The SELECT with the INNER JOIN is executed with one of the two tables being
      optimized as a constant table, which is pre-read.  Spider nevertheless attempts
      to push down the join to the data node.  The crash occurs because the constant
      table is excluded from the optimized query that Spider attempts to push down.
      
      In order for Spider to be able to push down a join, the following conditions
      need to be met:
      - All of the tables involved in the join need to be included in the optimized
        query that Spider pushes down.  When any of the tables involved in the join
        is a constant table, it is excluded from the optimized query that Spider
        attempts to push down.
      - All fields involved in the query need to be members of tables included in the
        optimized query.
      
      I fixed the problem by preventing Spider from pushing down queries that include
      a field that is not a member of a table included in the optimized query.  This
      solution fixes the reported problem and also fixes other potential problems.
      
      Author:
        Jacob Mathew.
      
      Reviewer:
        Kentoku Shiba.
  2. 28 Aug, 2018 2 commits
  3. 27 Aug, 2018 3 commits
    • MDEV-17017 Explain for query using derived table specified with a table value constructor shows wrong number of rows · 497d8627
      Igor Babaev authored
      
      This is another attempt to fix this bug. The previous patch did not take
      into account that a transformation for ALL/ANY subqueries could be applied
      to the materialized table that wrapped the table value constructor used as
      the specification of the subselect of an ALL/ANY subquery. In this case the
      result of the derived table is sent to a sink of the class select_subselect
      rather than of the class select_unit. Thus the previous fix could cause
      memory overwrites when running EXPLAIN for queries with table value
      constructors in ALL/ANY subselects.
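
      An illustrative query of the affected shape (table and column names are
      assumptions, not taken from the bug report): the table value constructor is
      the specification of an ALL subquery, so it gets wrapped in a materialized
      derived table whose result goes through a select_subselect sink:

      EXPLAIN
      SELECT * FROM t1
      WHERE t1.a > ALL (VALUES (10), (20), (30));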
    • MDEV-16803: Pushdown Item_func_in item that uses vectors in several SELECTs · 55163ba1
      Galina Shalygina authored
      The bug appears because of the Item_func_in::build_clone() method.
      The 'array' field of an Item_func_in item that can be pushed into
      a materialized view/derived table was built in the wrong way.
      It becomes invalid after the pushdown of the condition into the first
      SELECT that defines that view/derived table. The server then crashes
      during the pushdown into the next SELECT while trying to use the
      already invalid 'array' field.

      To fix this, Item_func_in::build_clone() was changed.
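
      A hedged sketch of the scenario (names are illustrative, not from the bug
      report): the IN list of constants is handled through Item_func_in's 'array',
      and the view is defined by two SELECTs, so the condition is cloned and
      pushed into each of them in turn:

      CREATE VIEW v1 AS
        SELECT a FROM t1 GROUP BY a
        UNION
        SELECT b FROM t2 GROUP BY b;
      SELECT * FROM v1 WHERE a IN (1, 2, 3, 4);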
    • MDEV-17062: Test failure on galera.MW-336 · a290b807
      Jan Lindström authored
      MDEV-17058: Test failure on wsrep.variables
      MDEV-17060: Test failure on galera.galera_var_slave_threads
      
      Fix the incorrect calculation of the increased number of applier (slave)
      threads. Note that an increase takes effect "immediately", but the tests
      should use a proper wait condition to wait for it. Reducing the number of
      slave threads is not immediate, as a thread only exits after processing a
      replication event.
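
      A hedged illustration of the kind of wait the tests need (the exact
      condition used in the test suite may differ): after raising the setting,
      poll the processlist until the applier threads are actually running
      instead of checking right away:

      SET GLOBAL wsrep_slave_threads = 4;
      -- poll this (e.g. via include/wait_condition.inc) until it returns 1
      SELECT COUNT(*) >= 4 FROM INFORMATION_SCHEMA.PROCESSLIST
      WHERE USER = 'system user';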
  4. 26 Aug, 2018 1 commit
    • MDEV-16703: Update AUTO_INCREMENT in the UPDATE statement · 2b76f6f6
      Ming Lin authored
      Currently the RocksDB engine doesn't update AUTO_INCREMENT in UPDATE statements.
      For example,
      
      CREATE TABLE t1 (pk INT AUTO_INCREMENT, a INT, PRIMARY KEY(pk)) ENGINE=RocksDB;
      INSERT INTO t1 (a) VALUES (1);
      UPDATE t1 SET pk = 3; ==> AUTO_INCREMENT should be updated to 4.
      
      Without this fix, the assertion `dd_val >= last_val' fails in
      myrocks::ha_rocksdb::load_auto_incr_value_from_index.
      
      (cherry picked from commit f7154242)
  5. 25 Aug, 2018 4 commits
  6. 24 Aug, 2018 8 commits
  7. 23 Aug, 2018 4 commits
  8. 22 Aug, 2018 2 commits
    • MDEV-16703: Update AUTO_INCREMENT in the UPDATE statement · f7154242
      Ming Lin authored
      Currently the RocksDB engine doesn't update AUTO_INCREMENT in UPDATE statements.
      For example,
      
      CREATE TABLE t1 (pk INT AUTO_INCREMENT, a INT, PRIMARY KEY(pk)) ENGINE=RocksDB;
      INSERT INTO t1 (a) VALUES (1);
      UPDATE t1 SET pk = 3; ==> AUTO_INCREMENT should be updated to 4.
      
      Without this fix, the assertion `dd_val >= last_val' fails in
      myrocks::ha_rocksdb::load_auto_incr_value_from_index.
    • MDEV-16961 Assertion `!table || (!table->read_set || bitmap_is_set(table->read_set, field_index))' failed upon concurrent DELETE and DDL with virtual blob column · 5d650d36
      Sergei Golubchik authored
      
      After iterating all fields and setting PART_INDIRECT_KEY_FLAG as
      necessary, TABLE::mark_columns_used_by_virtual_fields() remembers
      in TABLE_SHARE that this operation was done and need not be repeated.
      
      But as the flag is set in TABLE_SHARE, PART_INDIRECT_KEY_FLAG must
      be set in TABLE_SHARE::field[], not only in TABLE::field[].
      
      Otherwise, new TABLEs opened from this TABLE_SHARE would never have it.
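
      A hedged illustration of the kind of table where this matters (the
      definition is an assumption, not the MDEV test case): an indexed virtual
      column depending on a blob, so the blob gets PART_INDIRECT_KEY_FLAG:

      CREATE TABLE t1 (
        b BLOB,
        v INT AS (LENGTH(b)) VIRTUAL,
        KEY (v)
      ) ENGINE=InnoDB;
      -- one connection runs DELETE FROM t1 WHERE ... while another runs DDL,
      -- which opens new TABLEs from the (cached) TABLE_SHARE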
  9. 21 Aug, 2018 10 commits
  10. 20 Aug, 2018 1 commit
    • MDEV-16765: Missing rows with pushdown condition defined with CASE using Item_cond · 0de3c423
      Galina Shalygina authored
      The bug appears because the pushdown of conditions into the WHERE clause
      of a materialized derived table/view works incorrectly. The
      excl_dep_on_grouping_fields() method, which checks whether a condition
      can be pushed into the WHERE clause, is missing the case when an
      Item_cond is used. For Item_cond elements this method always returns a
      positive result (that the condition can be pushed), so such a condition
      is pushed even when it shouldn't be.

      To fix this, a new Item_cond::excl_dep_on_grouping_fields() method is added.
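
      An illustrative query shape (names are assumptions, not from the bug
      report): the condition on the grouped derived table is a CASE whose WHEN
      clause combines sub-conditions with AND (an Item_cond); before the fix the
      check always answered "pushable" for it, so it could be pushed into the
      derived table's WHERE clause even when it depended on a non-grouping field:

      SELECT *
      FROM (SELECT a, MAX(b) AS mx FROM t1 GROUP BY a) AS dt
      WHERE CASE WHEN dt.a > 1 AND dt.mx > 10 THEN 1 ELSE 0 END = 1;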
  11. 18 Aug, 2018 1 commit
  12. 17 Aug, 2018 1 commit
    • MDEV-16934 Query with very large IN clause lists runs slowly · 4eac5df3
      Igor Babaev authored
      This patch introduces support for the system variable
      eq_range_index_dive_limit that has existed in MySQL since 5.6. The variable
      sets a limit on index dives into equality ranges. Index dives are performed
      by the optimizer to estimate the number of rows in range scans. They
      usually provide good estimates, but they are pretty expensive. Alternatively,
      the number of rows in equality ranges can be estimated from statistical data
      on indexes; this gives less accurate estimates, but it is cheap. So if the
      number of equality dives required by an index scan exceeds the set limit,
      no dives for equality ranges are performed by the optimizer for this index.

      As the new system variable is introduced in a stable version, its default
      value is set to a special value meaning that there is no limit on the
      number of index dives performed by the optimizer.

      The patch partially uses the MySQL code for WL#5957
      'Statistics-based Range optimization for many ranges'.
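
      A hedged usage sketch (the limit value and names are illustrative, not
      prescribed by the patch):

      SET SESSION eq_range_index_dive_limit = 200;
      -- If the IN list produces more equality ranges than the limit, the
      -- optimizer estimates rows from index statistics instead of performing
      -- one index dive per range.
      SELECT * FROM t1 WHERE key_col IN (1, 2, 3 /* ... thousands more */);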