1. 29 Jun, 2018 1 commit
    • MDEV-16213: Improvements and adjustments to Travis config · 5cdc70b8
      Teodor Mircea Ionita authored
      Several improvements have been made so that builds run
      faster and with fewer canceled jobs:
      
      * Set the ccache max size to 1GB. The default was 512MB on Linux
      (too low for MariaDB) and 5GB on macOS;
      
      * Don't install libasan in Travis when it is not necessary.
      Since ASAN is disabled for the time being, this saves
      time/resources for other steps;
      
      * Decrease the number of parallel processes to prevent
      resource exhaustion leading to poor performance. According
      to the Travis docs, at most 4 concurrent processes should be
      run per job:
      https://docs.travis-ci.com/user/common-build-problems/#My-build-script-is-killed-without-any-error
      
      * Reconsider the test execution order and split the huge main and rocksdb
      test suites into jobs of their own, decreasing the chance of going
      over the Travis job execution limit and getting killed;
      
      * Increase the Travis testcase-timeout to 4 minutes. Occasionally
      on the Ubuntu target, and frequently on macOS, many tests in the main,
      rpl and binlog suites take longer than 2 minutes, resulting in
      many jobs failing when in reality the failing tests didn't
      get a chance to complete. From my testing, along with the other
      speedups (e.g. the increased ccache size), a timeout of 4 minutes
      should be OK. Revert to 3 minutes if necessary.
      
      * Build with GCC and Clang versions 5 and 6 only.
      
      * Rename GCC_VERSION to CC_VERSION for clarity. We are using
      two compilers after all, GCC and Clang.
      
      * Stop using the somewhat obsolete Clang 4 in Travis. It was also the
      reason for the failing test suites in MDEV-15430.
  2. 28 Jun, 2018 1 commit
    • MDEV-16584 SP with a cursor inside a loop wastes THD memory aggressively · 724a5105
      Alexander Barkov authored
      Problem:
      
      push_handler() created sp_handler_entry instances on THD::main_mem_root,
      which is freed only after the SP instructions have finished executing.
      So in the case of a CONTINUE HANDLER inside a loop (e.g. WHILE), this
      approach leaked thread memory on every loop iteration.
      
      Changes:
      - Removing the sp_handler_entry declaration, as it's not really needed.
      - Fixing the data type of sp_rcontext::m_handlers from
        Dynamic_array<sp_handler_entry*> to Dynamic_array<sp_instr_hpush_jump*>
      - Fixing sp_rcontext::push_handler() to push the pointer to
        an sp_instr_hpush_jump instance to the handler stack.
        This instance contains everything we need.
        There is no need to allocate anything else.
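      A minimal, self-contained sketch of the idea (illustrative only, not the
      actual server classes; Arena, HandlerEntry and InstrHpushJump are
      hypothetical stand-ins for THD::main_mem_root, sp_handler_entry and
      sp_instr_hpush_jump):

        #include <cstddef>
        #include <iostream>
        #include <new>
        #include <vector>

        struct Arena {                                // stand-in for THD::main_mem_root
          std::vector<char*> blocks;
          void *alloc(size_t n) { blocks.push_back(new char[n]); return blocks.back(); }
          ~Arena() { for (char *b : blocks) delete[] b; }  // freed only when the SP finishes
        };

        struct HandlerEntry { int handler_ip; };      // stand-in for sp_handler_entry
        struct InstrHpushJump { int handler_ip; };    // stand-in for sp_instr_hpush_jump

        int main() {
          Arena main_mem_root;
          std::vector<HandlerEntry*> old_handlers;    // old: entry allocated per iteration
          std::vector<InstrHpushJump*> new_handlers;  // new: pointers to parse-time objects
          InstrHpushJump hpush{42};                   // created once when the SP is parsed

          for (int i = 0; i < 1000; i++) {            // WHILE loop with a CONTINUE HANDLER
            // Old approach: a fresh arena allocation per iteration that is not
            // reclaimed until the whole routine finishes, i.e. a per-iteration leak.
            old_handlers.push_back(new (main_mem_root.alloc(sizeof(HandlerEntry)))
                                   HandlerEntry{42});
            old_handlers.pop_back();

            // New approach: push/pop a pointer to the already existing instruction.
            new_handlers.push_back(&hpush);
            new_handlers.pop_back();
          }
          // Prints 1000: every old-style entry is still held by the arena.
          std::cout << "arena blocks still held: " << main_mem_root.blocks.size() << "\n";
          return 0;
        }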
  3. 27 Jun, 2018 2 commits
    • compat/oracle.parser failed in --ps · 445339fe
      Sergei Golubchik authored
    • MDEV-16584 SP with a cursor inside a loop wastes THD memory aggressively · 56145be2
      Alexander Barkov authored
      Problem:
      
      push_cursor() created sp_cursor instances on THD::main_mem_root,
      which is freed only after the SP instruction execution loop has finished.
      
      Changes:
      - Moving sp_cursor declaration from sp_rcontext.h to sql_class.h
      - Deriving sp_instr_cpush from sp_cursor. So now sp_cursor is created
        only once (at the SP parse time) and then reused on all loop iterations
      - Adding a new method reset() into sp_cursor (and its parent classes)
        to reset an sp_cursor instance before reuse.
      - Moving former sp_cursor members m_fetch_count, m_row_count, m_found
        into a separate class sp_cursor_statistics. This helps to reuse
        the code in sp_cursor constructors, and in sp_cursor::reset()
      - Adding a helper method sp_rcontext::pop_cursor().
      - Adding "THD*" parameter to so_rcontext::pop_cursors() and pop_all_cursors()
      - Removing "new" and "delete" from sp_rcontext::push_cursor() and
        sp_rconext::pop_cursor().
      - Fixing sp_cursor not to derive from Sql_alloc, as it's now allocated
        only as a part of sp_instr_cpush (and not allocated separately).
      - Moving lex_keeper->disable_query_cache() from sp_cursor::sp_cursor()
        to sp_instr_cpush::execute().
      - Adding tests
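      A simplified model of the new layout (illustrative only; CursorStatistics,
      Cursor and InstrCpush are hypothetical stand-ins for sp_cursor_statistics,
      sp_cursor and sp_instr_cpush):

        #include <iostream>

        struct CursorStatistics {               // stand-in for sp_cursor_statistics
          unsigned long m_fetch_count = 0;
          unsigned long m_row_count = 0;
          bool m_found = false;
        protected:
          void reset_stats() { m_fetch_count = m_row_count = 0; m_found = false; }
        };

        struct Cursor : CursorStatistics {      // stand-in for sp_cursor
          bool m_open = false;
          void reset() { reset_stats(); m_open = false; }  // prepare the object for reuse
        };

        struct InstrCpush : Cursor {            // stand-in for sp_instr_cpush : sp_cursor
          void execute() { reset(); m_open = true; }       // no new/delete inside the loop
        };

        int main() {
          InstrCpush cpush;                     // created once, at SP parse time
          for (int i = 0; i < 3; i++) {
            cpush.execute();                    // reused on every loop iteration
            cpush.m_fetch_count++;              // pretend one row was fetched
          }
          std::cout << cpush.m_fetch_count << "\n";  // prints 1: stats reset before reuse
          return 0;
        }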
  4. 26 Jun, 2018 2 commits
  5. 25 Jun, 2018 6 commits
  6. 22 Jun, 2018 1 commit
  7. 21 Jun, 2018 1 commit
  8. 20 Jun, 2018 4 commits
  9. 19 Jun, 2018 2 commits
    • 083279f7
      Oleksandr Byelkin authored
    • MDEV-16420 View stop working after upgrade from 10.1.15 to 10.3.7 · 956b2962
      Igor Babaev authored
      This bug happened for queries that used a materialized view that
      renamed columns of the specifying query in an inner table of
      an outer join. For such a query, name resolution for a column
      belonging to the view could fail if the underlying column was
      non-nullable.
      When creating the definition of the temporary table for
      the materialized view used in the inner part of an outer join,
      the definitions of the non-nullable columns are created by the
      function create_tmp_field_from_item(), which names the columns
      according to the names of the underlying columns. So these names
      should be changed to the view column names.
      
      This bug cannot be reproduced in 10.2 because there setup_fields(),
      called when preparing joins in the view specification, effectively
      renames the underlying columns in the function find_field_in_view().
      In 10.3 this renaming was removed as improper
      (see Monty's commit b478276b).
  10. 18 Jun, 2018 2 commits
  11. 16 Jun, 2018 1 commit
  12. 15 Jun, 2018 4 commits
  13. 14 Jun, 2018 4 commits
    • MDEV-16386: Wrong result when pushdown into the HAVING clause of the materialized derived table/view that uses aliases is done · ec4fdd57
      Galina Shalygina authored
      
      The problem appears when a column alias inside the definition of the
      materialized derived table/view t1 coincides with the column name used
      in the GROUP BY clause of t1. If the condition that can be pushed into t1
      uses that ambiguous column name, the name is resolved to the column
      used in the GROUP BY clause instead of the alias used in the projection
      list of t1. That causes a wrong result.
      To prevent this, resolve_ref_in_select_and_group() was changed.
    • MDEV-16457 mariabackup 10.2+ should default to innodb_checksum_algorithm=crc32 · a79b033b
      Marko Mäkelä authored
      Since MariaDB Server 10.2.2 (and MySQL 5.7), the default value of
      innodb_checksum_algorithm is crc32 (CRC-32C), not the inefficient "innodb"
      checksum. Change Mariabackup to use the same default, so that checksum
      validation (when using the default algorithm on the server) will take less
      time during mariabackup --backup. Also, mariabackup --prepare should be
      a little faster, and the server should read backups faster, because the
      page checksums would only be validated against CRC-32C.
    • MDEV-13103 Deal with page_compressed page corruption · 2ca904f0
      Marko Mäkelä authored
      fil_page_decompress(): Replaces fil_decompress_page().
      Allow the caller to detect errors. Remove duplicated code.
      Use the "safe" instead of the "fast" variants of the
      decompression routines.
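      For the LZ4 case, for example, letting the caller detect a corrupted page
      by using the bounds-checked "safe" variant could look roughly like this
      (a sketch under that assumption, not the actual fil_page_decompress() code;
      page_decompress_lz4 is a hypothetical helper):

        #include <lz4.h>   // liblz4; link with -llz4

        // Returns the decompressed length, or 0 so that the caller can detect
        // corruption.  LZ4_decompress_safe() checks bounds, unlike the
        // deprecated LZ4_decompress_fast(), which trusts the declared size.
        static unsigned page_decompress_lz4(const char *src, unsigned src_len,
                                            char *dst, unsigned page_size)
        {
          int len = LZ4_decompress_safe(src, dst, (int) src_len, (int) page_size);
          return len <= 0 ? 0 : (unsigned) len;
        }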
      
      fil_page_compress(): Replaces fil_compress_page().
      The length of the input buffer was always srv_page_size (innodb_page_size).
      Remove printouts, and remove the fil_space_t* parameter.
      
      buf_tmp_buffer_t::reserved: Make private; the accessors acquire()
      and release() will use atomic memory access.
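      The reservation pattern, sketched with std::atomic (illustrative only;
      the real buf_tmp_buffer_t holds the actual buffers and the server uses
      its own atomic primitives):

        #include <atomic>

        class TmpSlot {                        // stand-in for buf_tmp_buffer_t
          std::atomic<bool> reserved{false};   // private: touched only via accessors
        public:
          // Try to reserve the slot without holding any mutex; false if taken.
          bool acquire() { return !reserved.exchange(true, std::memory_order_acquire); }
          // Release the slot so another I/O thread can reuse it.
          void release() { reserved.store(false, std::memory_order_release); }
        };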
      
      buf_pool_reserve_tmp_slot(): Make static. Remove the second parameter.
      Do not acquire any mutex. Remove the allocation of the buffers.
      
      buf_tmp_reserve_crypt_buf(), buf_tmp_reserve_compression_buf():
      Refactored away from buf_pool_reserve_tmp_slot().
      
      buf_page_decrypt_after_read(): Make static, and simplify the logic.
      Use the encryption buffer also for decompressing.
      
      buf_page_io_complete(), buf_dblwr_process(): Check more failures.
      
      fil_space_encrypt(): Simplify the debug checks.
      
      fil_space_t::printed_compression_failure: Remove.
      
      fil_get_compression_alg_name(): Remove.
      
      fil_iterate(): Allocate a buffer for compression and decompression
      only once, instead of allocating and freeing it for every page
      that uses compression, during IMPORT TABLESPACE. Also, validate the
      page checksum before decryption, and reduce the scope of some variables.
      
      fil_page_is_index_page(), fil_page_is_lzo_compressed(): Remove (unused).
      
      AbstractCallback::operator()(): Remove the parameter 'offset'.
      The check for it in FetchIndexRootPages::operator() was basically
      redundant and dead code since the previous refactoring.
    • Alexander Barkov
  14. 13 Jun, 2018 5 commits
  15. 12 Jun, 2018 3 commits
  16. 11 Jun, 2018 1 commit