- 04 Sep, 2018 1 commit
mkaruza authored
While executing CTAS, the Galera applier thread can cause the CTAS to abort and roll back. The rollback can take time, causing the applier thread to shut down the node after successive unsuccessful retries to apply the transaction. Don't set lock_wait_timeout to zero; keep the timeout so the applier waits for the lock.
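A minimal sketch of the idea, with a stubbed session type standing in for the real applier/THD code (names here are illustrative, not the actual patch):

```cpp
#include <cstdint>

struct Session {
  std::uint64_t lock_wait_timeout;  // seconds; 0 would mean "do not wait at all"
};

void prepare_applier_session(Session *s, std::uint64_t configured_timeout)
{
  // Before (conceptually): s->lock_wait_timeout = 0;  -> immediate failure,
  // repeated retries, and eventually a node shutdown.
  // After: keep the configured value so the lock wait can outlast a slow
  // CTAS rollback.
  s->lock_wait_timeout = configured_timeout;
}

int main()
{
  Session s{0};
  prepare_applier_session(&s, 31536000);  // MariaDB's default: one year
  return s.lock_wait_timeout == 0;        // expect a non-zero timeout
}
```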
- 03 Jul, 2018 3 commits
Marko Mäkelä authored
Marko Mäkelä authored
Daniel Bartholomew authored
- 02 Jul, 2018 6 commits
Vladislav Vaintroub authored
Marko mentions that it could be caused by MDEV-15740, where InnoDB does not flush the redo log as often as it should with innodb_flush_log_at_trx_commit=1. The workaround is to use innodb_flush_log_at_trx_commit=2, which, according to MDEV-15740, is currently the more durable setting.
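For context, a simplified sketch of what the two settings mean at commit time (written for this note; not InnoDB's actual code paths):

```cpp
enum class FlushLogAtTrxCommit { Lazy = 0, FlushAtCommit = 1, WriteAtCommit = 2 };

void on_commit(FlushLogAtTrxCommit mode)
{
  switch (mode) {
  case FlushLogAtTrxCommit::FlushAtCommit:   // =1
    /* log_write(); log_fsync(); */          // write AND fsync per commit
    break;
  case FlushLogAtTrxCommit::WriteAtCommit:   // =2
    /* log_write(); */                       // fsync deferred (~once a second)
    break;
  case FlushLogAtTrxCommit::Lazy:            // =0
    break;                                   // background does write + fsync
  }
}

int main() { on_commit(FlushLogAtTrxCommit::WriteAtCommit); }
```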
Marko Mäkelä authored
Marko Mäkelä authored
For some reason, some of these suppressions fail to suppress when the code is compiled with clang 6.0, Debug, and -DWITH_ASAN=ON. Possibly it is related to the number of .* patterns or to the length of the regular expression strings.
Marko Mäkelä authored
Before attempting to create an index, copy any fields from dict_table_t, because the table would be freed after a failed index creation.
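The pattern, sketched with stub types (illustrative only; not the real dict0dict API):

```cpp
#include <cstdio>
#include <string>

struct dict_table_t { std::string name; };

// Stand-in for index creation: frees the table on failure, as described above.
bool try_create_index(dict_table_t *&table, bool fail)
{
  if (fail) { delete table; table = nullptr; }
  return !fail;
}

bool create_index_checked(dict_table_t *table)
{
  // Copy what we still need BEFORE the call that can free the table;
  // reading table->name after a failure would be a use-after-free.
  const std::string table_name = table->name;
  if (!try_create_index(table, /*fail=*/true)) {
    std::fprintf(stderr, "index creation failed on %s\n", table_name.c_str());
    return false;
  }
  return true;
}

int main() { create_index_checked(new dict_table_t{"t1"}); }
```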
Sergei Golubchik authored
Thirunarayanan Balathandayuthapani authored
NULL values when there is no DEFAULT
- Merged the alter_non_null test case into the alter_not_null test case.
- Renamed the alter_non_null_debug test case to alter_not_null_debug.
- 01 Jul, 2018 4 commits
Anel Husakovic authored
One can create a table with the same name for a field CHECK constraint and a table CHECK constraint, for example: `create table t(a int check(a>0), constraint a check(a>10));` But when inserting rows, the same error is always raised: for example, both `insert into t values (-1);` and `insert into t values (10);` produce `ER_CONSTRAINT_FAILED`, and it is not clear which constraint is violated. This patch fixes the message: if a field constraint is violated, the first parameter in the error message is `table.field_name`; if a table constraint is violated, the first parameter is the `constraint_name`.
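A toy sketch of the reporting rule (invented names and types; the server's actual implementation differs):

```cpp
#include <cstdio>
#include <string>

struct Check_constraint {
  std::string name;         // constraint name ("a" in the example above)
  bool        field_level;  // true for inline column CHECKs
  std::string field;        // column name, when field_level is true
};

// Pick the first parameter for the ER_CONSTRAINT_FAILED message.
std::string violated_name(const Check_constraint &c, const std::string &table)
{
  return c.field_level ? table + "." + c.field : c.name;
}

int main()
{
  Check_constraint field_chk{"a", true, "a"}, table_chk{"a", false, ""};
  std::printf("CONSTRAINT `%s` failed\n", violated_name(field_chk, "t").c_str());
  std::printf("CONSTRAINT `%s` failed\n", violated_name(table_chk, "t").c_str());
}
```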
Sergei Golubchik authored
Sergei Golubchik authored
Correct 898a8c3c to work when the newer debhelper-10.2 is installed from xenial-backports (or jessie-backports). Use the gcc version instead of the debproxy version; this is likely a gcc issue (as disabling LTO and gcc's linker plugin fixes it).
Vladislav Vaintroub authored
Use OPEN_ALWAYS instead, since we know the file already exists.
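For reference, the disposition in a sketched CreateFile call (illustrative; not the exact MariaDB call site):

```cpp
#ifdef _WIN32
#include <windows.h>

// OPEN_ALWAYS opens the file if it exists (without truncating it) and
// creates it only if it is missing -- the right disposition for a file we
// know already exists, vs. CREATE_ALWAYS (truncates) or CREATE_NEW (fails
// if the file is present).
HANDLE open_log_file(const wchar_t *path)
{
  return CreateFileW(path, GENERIC_READ | GENERIC_WRITE,
                     FILE_SHARE_READ, nullptr,
                     OPEN_ALWAYS,
                     FILE_ATTRIBUTE_NORMAL, nullptr);
}
#endif
```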
- 30 Jun, 2018 9 commits
Elena Stepanova authored
Sergei Golubchik authored
Aleksey Midenkov authored
* ignore CHECK constraint for historical rows;
* FOREIGN KEY test case.
TODO: MDEV-16301 IB: use real table name for error messages on ALTER
Closes tempesta-tech/mariadb#491
Closes #748
Eugene Kosov authored
MDEV-15947 ASAN heap-use-after-free in Item_ident::print or in my_strcasecmp_utf8 or unexpected ER_BAD_FIELD_ERROR upon call of stored procedure reading from versioned table
Closes #728
Eugene Kosov authored
MDEV-15645 Assertion `table->insert_values' failed in write_record upon REPLACE into a view with underlying versioned table
The right temporary storage for system versioning operations is table->record[2], not table->insert_values.
Closes #712
Sergei Golubchik authored
RBR, not versioned -> versioned: do it for all write_row events, not only for WRITE_ROWS_EVENT_V1.
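In sketch form (the type codes shown are the classic values from the replication event format; hedged, verify against log_event.h):

```cpp
// The two write_row-class event type codes (V1 is the older 5.1-era format).
enum Log_event_type { WRITE_ROWS_EVENT_V1 = 23, WRITE_ROWS_EVENT = 30 };

bool is_write_rows_event(Log_event_type t)
{
  // Before: only WRITE_ROWS_EVENT_V1 triggered the versioning path.
  return t == WRITE_ROWS_EVENT_V1 || t == WRITE_ROWS_EVENT;
}

int main() { return is_write_rows_event(WRITE_ROWS_EVENT) ? 0 : 1; }
```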
Sergei Golubchik authored
Vladislav Vaintroub authored
Disks with native 4K sectors need 4K alignment and size for unbuffered I/O (i.e. for files opened with FILE_FLAG_NO_BUFFERING). InnoDB opens the redo log with FILE_FLAG_NO_BUFFERING, yet it always does 512-byte I/Os. Thus, the I/O on native 4K sectors will fail, rendering InnoDB non-functional. The fix is to check whether OS_FILE_LOG_BLOCK_SIZE is a multiple of the logical sector size, and if it is not, to reopen the redo log without the FILE_FLAG_NO_BUFFERING flag.
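The check can be sketched as follows (simplified; not the actual os0file logic):

```cpp
#include <cstddef>

constexpr std::size_t OS_FILE_LOG_BLOCK_SIZE = 512;  // InnoDB redo-log I/O unit

// FILE_FLAG_NO_BUFFERING requires every I/O to be a multiple of the
// volume's logical sector size; 512 % 4096 != 0, so 4K-native disks fail.
bool can_use_unbuffered_io(std::size_t logical_sector_size)
{
  return logical_sector_size != 0
      && OS_FILE_LOG_BLOCK_SIZE % logical_sector_size == 0;
}

int main()
{
  // 512-byte sectors: OK; 4K-native sectors: reopen without the flag.
  return can_use_unbuffered_io(512) && !can_use_unbuffered_io(4096) ? 0 : 1;
}
```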
Vicențiu Ciorbaru authored
- 29 Jun, 2018 8 commits
Otto Kekäläinen authored
Building this plugin, which requires run-time network access, uses a lot of disk space, and is slow, was already partially disabled. This change also ensures at the cmake level that it never runs, even if autodetection at times concluded that it could. This fixes the error message: fatal: unable to access 'https://github.com/awslabs/aws-sdk-cpp.git/': Problem with the SSL CA cert (path? access rights?)
Otto Kekäläinen authored
Fixes errors on Travis like: cp: error writing debian/libmariadbd19//usr/lib/x86_64-linux-gnu/libmariadbd.so.19: No space left on device
Otto Kekäläinen authored
This complements commit ecb0e0ad, which disabled a bunch of plugins from being built on Travis-CI (to save time and disk space). When the plugins are not built, the packaging phase fails due to missing files. This change omits those files from packaging so the process can complete successfully.
Teodor Mircea Ionita authored
Teodor Mircea Ionita authored
Teodor Mircea Ionita authored
* Exclude some storage engines from Travis to conserve build time and disk usage per job. Excluded: TOKUDB, MROONGA, SPIDER, OQGRAPH, PERFSCHEMA, SPHINX
* Increase travis_wait from the default 20m to 30m for MTR
* Use travis_wait for the long-running MTR command (wait 30m instead of the default 20m)
* Increase testcase-timeout to 20m for OSX, 2m for Linux
* Set ccache size only on Linux, adjust timeout again
* Increase cache push timeout to 5 mins
* Remove AWS defines, not needed
* Remove commented-out ASAN rules; ASAN has been disabled previously since it has a significant impact on job runtime and should be used more in buildbot instead
* Misc cleanup and fixes
Teodor Mircea Ionita authored
Teodor Mircea Ionita authored
Several improvements have been made so that builds run faster and with fewer canceled jobs:
* Set ccache max size to 1GB. It was 512MB for Linux (too low for MariaDB) and 5GB on macOS with defaults.
* Don't install libasan in Travis if not necessary. Since ASAN is disabled for the time being, save time/resources for other steps.
* Decrease the number of parallel processes to prevent resource exhaustion leading to poor performance. According to the Travis docs, a maximum of 4 concurrent processes should be run per job: https://docs.travis-ci.com/user/common-build-problems/#My-build-script-is-killed-without-any-error
* Reconsider test execution order and split the huge main and rocksdb test suites into their own jobs, decreasing the chance of going over the Travis job execution limit and getting killed.
* Increase the Travis testcase-timeout to 4 minutes. Occasionally on the Ubuntu target, and frequently on macOS, many tests in the main, rpl, and binlog suites take longer than 2 minutes, resulting in many jobs failing when in reality the failing tests didn't get a chance to complete. From my testing, along with the other speedups (i.e. increasing the ccache size), a timeout of 4 minutes should be OK. Revert to 3 minutes if necessary.
* Build with GCC and Clang versions 5 and 6 only.
* Rename GCC_VERSION to CC_VERSION for clarity. We are using two compilers after all, GCC and Clang.
* Stop using the somewhat obsolete Clang 4 in Travis. It was also the reason for the failing test suites in MDEV-15430.
- 28 Jun, 2018 9 commits
Sergei Golubchik authored
Vladislav Vaintroub authored
Use GetLastError() instead.
Sergei Golubchik authored
table->in_use is not always set and a KILL signal can arrive anytime.
Andrei Elkin authored
MDEV-7257 made a dump thread read from the binlog concurrently with writers as long as the read bytes are below a watermark (MYSQL_BIN_LOG::binlog_end_pos). However, it turned out that a dump-thread reader could reach bytes past the watermark through a feature of IO_CACHE that fills the internal buffer, and while doing so it could read what the reader is not supposed to see (the bytes above MYSQL_BIN_LOG::binlog_end_pos). The issue is fixed by constraining the IO_CACHE buffer fill to respect the watermark. An added unit test proves that reading from a file is bounded by an external parameter passed to the IO_CACHE::end_of_file cache member.
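The bounding idea in sketch form (a stub struct standing in for mysys's IO_CACHE; illustrative only):

```cpp
#include <algorithm>
#include <cstdint>

struct io_cache_like {
  std::uint64_t pos_in_file;  // file offset the next refill would start at
  std::uint64_t end_of_file;  // externally imposed limit (the binlog watermark)
};

// Clamp a buffer refill so it never reads past end_of_file, even when the
// underlying file already contains bytes beyond it.
std::uint64_t bytes_to_fill(const io_cache_like &c, std::uint64_t wanted)
{
  if (c.pos_in_file >= c.end_of_file)
    return 0;
  return std::min(wanted, c.end_of_file - c.pos_in_file);
}

int main()
{
  io_cache_like c{900, 1000};             // 100 readable bytes below watermark
  return bytes_to_fill(c, 4096) == 100 ? 0 : 1;
}
```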
Sergei Golubchik authored
Alexander Barkov authored
Problem: push_handler() created sp_handler_entry instances on THD::main_mem_root, which is freed only after the SP instructions finish executing. So in case of a CONTINUE HANDLER inside a loop (e.g. WHILE), this approach leaked thread memory on every loop iteration.
Changes:
- Removed the sp_handler_entry declaration; it's not really needed.
- Changed the data type of sp_rcontext::m_handlers from Dynamic_array<sp_handler_entry*> to Dynamic_array<sp_instr_hpush_jump*>.
- Fixed sp_rcontext::push_handler() to push the pointer to an sp_instr_hpush_jump instance onto the handler stack. This instance contains everything we need; there is no need to allocate anything else.
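The structural change, sketched with std::vector standing in for Dynamic_array (stub types; illustrative only):

```cpp
#include <cstddef>
#include <vector>

struct sp_instr_hpush_jump { /* owns all the handler information */ };

struct sp_rcontext_sketch {
  // Was: Dynamic_array<sp_handler_entry*>, each entry freshly allocated on
  // THD::main_mem_root at every push -> grew on every loop iteration.
  std::vector<sp_instr_hpush_jump*> m_handlers;

  void push_handler(sp_instr_hpush_jump *i)
  {
    m_handlers.push_back(i);  // reuse the existing instruction, no allocation
  }
  void pop_handlers(std::size_t n)
  {
    m_handlers.resize(m_handlers.size() - n);
  }
};

int main()
{
  sp_instr_hpush_jump i;
  sp_rcontext_sketch r;
  r.push_handler(&i);   // e.g. once per loop iteration, now leak-free
  r.pop_handlers(1);
}
```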
Sergei Golubchik authored
Sergei Golubchik authored
rnd_pos_by_record calls ha_rnd_pos, which already does the counting.
Sergei Golubchik authored