- 24 Nov, 2017 3 commits
-
-
Marko Mäkelä authored
When Mariabackup is invoked on an instance that uses a multi-file InnoDB system tablespace, it may fail to copy the system tablespace files other than the first one. This was revealed by the MDEV-14447 test case. The offending code assumed that the first page of each data file is page 0, but in multi-file system tablespaces that is not the case.
xb_fil_cur_open(): Instead of re-reading the first page of the file, rely on the fil_space_t metadata that already exists in memory.
xb_get_space_flags(): Remove.
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
Update C/C to include the fix for this bug.
-
- 23 Nov, 2017 5 commits
-
-
Elena Stepanova authored
-
Andrei Elkin authored
MDEV-12012. Post-push attempt to catch the rpl_gtid_delete_domain failure on P8. The test is made more verbose.
-
Marko Mäkelä authored
Import and adjust the MySQL 5.7 tests innodb.update_time and innodb.update_time_wl6658 into MariaDB. The functionality has been present since MariaDB 10.2.2, which merged InnoDB from MySQL 5.7.9; it was originally implemented in MySQL 5.7.2.
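For context, the functionality covered by these tests is the update_time column of information_schema.tables for InnoDB tables. A minimal sketch of the behaviour being verified (the table name below is illustrative, not taken from the imported tests):

  CREATE TABLE t_update_time_demo (a INT) ENGINE=InnoDB;
  -- right after CREATE TABLE, update_time is NULL
  SELECT update_time FROM information_schema.tables
   WHERE table_schema = DATABASE() AND table_name = 't_update_time_demo';
  INSERT INTO t_update_time_demo VALUES (1);
  -- after the first modification, update_time reflects the time of the change
  SELECT update_time FROM information_schema.tables
   WHERE table_schema = DATABASE() AND table_name = 't_update_time_demo';
  DROP TABLE t_update_time_demo;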
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
- 22 Nov, 2017 4 commits
-
-
Sergei Golubchik authored
-
Aleksey Midenkov authored
List<>::last is wrong after memcpy(); using memcpy() on constructed objects is bad practice.
-
Sergei Golubchik authored
-
David Carlier authored
* rocksdb fails without timer_delete() - only use it when it exists
-
- 21 Nov, 2017 7 commits
-
-
Sergei Golubchik authored
we now have cmake/submodules.cmake that updates all submodules
-
Sergei Golubchik authored
another followup for 4c2c057d. There are six possible cases: --port can be set or not; --address can be set, not set, or set but without a port number. The correct behavior is:
1. both --port and --address have a port number - use it if it's the same, otherwise an error
2. only --port has the number (--address isn't set) - use the value from --port
3. only --port has the number (--address is set, but has no port) - use the value from --port
4. --port is unset, --address has the port number - use the value from --address
5. --port is unset, --address has no port number - use the value from --address, that is, the port is an empty string
6. --port is unset, --address is unset - the port is unset (an error somewhere later)
Case 5 wasn't handled correctly.
-
Sergei Golubchik authored
-
Sergei Golubchik authored
move the privilege-related test to main.cte_grant
-
Sergei Golubchik authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
dict_stats_exec_sql(): Refuse the operation if shutdown has been initiated. The real fix would be to update the persistent statistics as part of the data dictionary transactions. To do this, we should move the storage of InnoDB persistent statistics to the InnoDB data files, and maybe also remove the InnoDB data dictionary.
-
- 20 Nov, 2017 4 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
Also, MDEV-14317: When ALTER TABLE is aborted, do not write garbage pages to data files.

As pointed out by Shaohua Wang, the merge of MDEV-13328 from MariaDB 10.1 (based on MySQL 5.6) to 10.2 (based on 5.7) was performed incorrectly. Let us always pass a non-NULL FlushObserver* when writing to data files is desired.

FlushObserver::is_partial_flush(): Check if this is a bulk load (partial flush of the tablespace).
FlushObserver::is_interrupted(): Check for interrupt status.
buf_LRU_flush_or_remove_pages(): Instead of trx_t*, take FlushObserver* as a parameter.
buf_flush_or_remove_pages(): Remove the parameters flush, trx. If observer!=NULL, write out the data pages. Use the new predicate observer->is_partial() to distinguish a partial tablespace flush (after bulk-loading) from a full tablespace flush (export). Return a bool (whether all pages were removed from the flush_list).
buf_flush_dirty_pages(): Remove the parameter trx.
-
- 19 Nov, 2017 1 commit
-
-
Sergei Golubchik authored
this fixes aae49327
-
- 18 Nov, 2017 1 commit
-
-
Alexander Barkov authored
MDEV-14435 Different UNSIGNED flag of out user variable for YEAR parameter for direct vs prepared CALL
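The scenario behind the bug title can be sketched roughly as follows (the procedure and variable names are illustrative); the point is that the user variable filled through the YEAR OUT parameter should be described with the same UNSIGNED flag whether the CALL is executed directly or through a prepared statement:

  CREATE PROCEDURE p_year_out(OUT v YEAR)
    SET v = 2017;
  -- direct call
  CALL p_year_out(@direct_result);
  -- prepared call
  PREPARE stmt FROM 'CALL p_year_out(?)';
  EXECUTE stmt USING @prepared_result;
  DEALLOCATE PREPARE stmt;
  DROP PROCEDURE p_year_out;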
-
- 17 Nov, 2017 3 commits
-
-
Alexander Barkov authored
-
David Carlier authored
* cast pthread_t for printf
* don't use RTLD_NOLOAD
* tokudb fails without F_NOCACHE and O_DIRECT - ditto
-
Sergei Golubchik authored
"sed -r" fails on labrador. Don't use sed, use perl.
-
- 16 Nov, 2017 7 commits
-
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
is set to true, as it should. Copy and modify the original io_win.h header file to a different location (as we cannot patch anything in the submodule). Make sure the modified header is used.
-
Alexey Botchkov authored
Result unescaping added.
-
Jan Lindström authored
MariaDB adjustments to the test case innodb-replace-debug, and add missing instrumentation to row0ins.cc. MariaDB 10.1 does not seem to be affected.
-
Jan Lindström authored
Imported missing test case from MySQL 5.7 for

  commit 25781c154396dbbc21023786aa3be070057d6999
  Author: Annamalai Gurusami <annamalai.gurusami@oracle.com>
  Date:   Mon Feb 24 14:00:03 2014 +0530

      Bug #17604730 ASSERTION: *CURSOR->INDEX->NAME == TEMP_INDEX_PREFIX
-
Jan Lindström authored
This is caused by the following change:

  commit 95d29c99f01882ffcc2259f62b3163f9b0e80c75
  Author: Marko Mäkelä <marko.makela@oracle.com>
  Date:   Tue Nov 27 11:12:13 2012 +0200

      Bug#15920445 INNODB REPORTS ER_DUP_KEY BEFORE CREATE UNIQUE INDEX COMPLETED

      There is a phase during online secondary index creation where the index has
      been internally completed inside InnoDB, but does not 'officially' exist yet.
      We used to report ER_DUP_KEY in these situations, like this:
        ERROR 23000: Can't write; duplicate key in table 't1'
      What we should do is to let the 'offending' operation complete, but report an
      error to the ALTER TABLE t1 ADD UNIQUE KEY (c2):
        ERROR HY000: Index c2 is corrupted
      (This misleading error message should be fixed separately:
      Bug#15920713 CREATE UNIQUE INDEX REPORTS ER_INDEX_CORRUPT INSTEAD OF DUPLICATE)

      row_ins_sec_index_entry_low(): flag the index corrupted instead of reporting
      a duplicate, in case the index has not been published yet.

      rb:1614 approved by Jimmy Yang

The problem is that after we have found a duplicate key on the primary key, we continue to acquire the necessary gap locks in secondary indexes to block concurrent transactions from inserting the searched records. However, a search from a unique index used in a foreign key constraint could return DB_NO_REFERENCED_ROW if INSERT .. ON DUPLICATE KEY UPDATE does not contain a value for the foreign key column. In this case we should return the original DB_DUPLICATE_KEY error instead of DB_NO_REFERENCED_ROW.

Consider the following example:

  create table child(a int not null primary key, b int not null, c int, unique key (b), foreign key (b) references parent (id)) engine=innodb;
  insert into child values (1,1,2);
  insert into child(a) values (1) on duplicate key update c = 3;

Now the primary key value 1 naturally causes a duplicate key error that will be stored on node->duplicate. If there was no duplicate key error, we should return the actual no-referenced-row error. As the value for column b, used in both the unique key and the foreign key, is not provided, the server uses 0 as the search value. This is, naturally, not found, leading to DB_NO_REFERENCED_ROW. But we should update the row with primary key value 1 anyway, as requested by the ON DUPLICATE KEY UPDATE clause.
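The child table in the example references a parent table that is not shown; a self-contained version of the scenario (the parent definition here is an assumption added for illustration, not part of the original message):

  CREATE TABLE parent (id INT NOT NULL PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child (a INT NOT NULL PRIMARY KEY, b INT NOT NULL, c INT,
                      UNIQUE KEY (b),
                      FOREIGN KEY (b) REFERENCES parent (id)) ENGINE=InnoDB;
  INSERT INTO parent VALUES (1);
  INSERT INTO child VALUES (1,1,2);
  -- should update the existing row (c = 3) instead of failing with a
  -- no-referenced-row error for the implicit b = 0 search value
  INSERT INTO child(a) VALUES (1) ON DUPLICATE KEY UPDATE c = 3;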
-
Jun Su authored
-
- 15 Nov, 2017 4 commits
-
-
Andrei Elkin authored
With a combination of --log-bin and Galera the server may crash reporting two characteristic stacks:

  /usr/sbin/mysqld(_ZN13MYSQL_BIN_LOG13mark_xid_doneEmb+0xc7)[0x7f182a8e2cb7]
  /usr/sbin/mysqld(binlog_background_thread+0x2b5)[0x7f182a8e3275]

or

  /usr/sbin/mysqld(_ZN13MYSQL_BIN_LOG21do_checkpoint_requestEm+0x9d)[0x7ff395b2dafd]
  /usr/sbin/mysqld(_ZN13MYSQL_BIN_LOG20checkpoint_and_purgeEm+0x11)[0x7ff395b2db91]
  /usr/sbin/mysqld(_ZN13MYSQL_BIN_LOG16rotate_and_purgeEb+0xc2)[0x7ff395b300b2]

The reason for the failure appears to be mismatched decrements of `xid_count_per_binlog::xid_count`, which can occur when a transaction is executed on a connection that has issued `SET @@sql_log_bin=0`. In that case the xid count is not incremented, but its decrement still runs, turning `binlog_xid_count_list` into an improper state that the following FLUSH BINARY LOGS exposes through the crash.

*Note_1*: the regression test reuses the existing galera.sql_log_bin, which does not run stably (even in its base form) under mtr with --log-bin.
*Note_2*: the 10.0-galera branch is free of this issue, having missed the MDEV-7205 fixes.
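A sketch of the kind of session that produces the mismatched xid accounting described above (the table name is illustrative; a Galera node started with --log-bin is assumed):

  -- on a Galera node running with --log-bin
  SET @@sql_log_bin = 0;   -- this connection's transactions skip binlog xid accounting
  CREATE TABLE t_demo (a INT PRIMARY KEY) ENGINE=InnoDB;
  INSERT INTO t_demo VALUES (1);
  SET @@sql_log_bin = 1;
  FLUSH BINARY LOGS;       -- binlog rotation/checkpoint is where the crash surfaced
  DROP TABLE t_demo;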
-
Andrei Elkin authored
As reported in MDEV-11969, "there's no way to ditch knowledge" about some domain that is no longer updated on a server. Besides being an annoyance that clutters output in the DBA console, stale domains can prevent the slave from connecting to the master, as MDEV-12012 witnesses. Which domain is obsolete must be evaluated by the user (DBA) according to whether the domain info is still relevant and whether the domain will ever receive any update.

This patch introduces a method to discard obsolete gtid domains from the server binlog state. The removal requires that no event group from such a domain be present in the existing binlog files. If there are any, the containing logs must first be PURGEd in order for FLUSH BINARY LOGS DELETE_DOMAIN_ID=(list-of-domains) to succeed. Otherwise the command returns an error.

The list of obsolete domains can be computed by intersecting two sets - the earliest (first) binlog's Gtid_list and the current value of @@global.gtid_binlog_state - and extracting the domain id components from the intersection list items.

The new DELETE_DOMAIN_ID featured FLUSH continues to rotate the binlog, omitting the deleted domains from the active binlog file's Gtid_list. Notice, though, that when the command is ineffective - none of the domains requested for deletion exists in the binlog state - rotation does not occur.

Obsolete domain deletion is not harmful for connected slaves as long as master-side binlog file *purge* is synchronized with FLUSH-DELETE_DOMAIN_ID. The slaves must have the last event from the purged files processed as usual, in order not to bump later into requesting a gtid from a file which is already gone. While the command is not replicated (as ordinary FLUSH BINARY LOGS is), slaves, even though having extra domains, won't suffer from reconnection errors thanks to the master-slave gtid connection protocol allowing the master to be ignorant about a gtid domain. Should such a slave be promoted to the master role at failover, it may run the ex-master's FLUSH BINARY LOGS DELETE_DOMAIN_ID=(list-of-domains) to clean its own binlog state.

NOTES.
suite/perfschema/r/start_server_low_digest.result is re-recorded as a consequence of internal parser code changes.
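A usage sketch following the procedure described above (the domain ids and binlog file names are illustrative):

  -- inspect the earliest binlog's Gtid_list and the current binlog state
  SHOW BINLOG EVENTS IN 'master-bin.000001' LIMIT 5;
  SELECT @@global.gtid_binlog_state;
  -- first purge the logs that still contain event groups from the obsolete domains
  PURGE BINARY LOGS TO 'master-bin.000042';
  -- then drop the obsolete domains from the binlog state
  FLUSH BINARY LOGS DELETE_DOMAIN_ID = (1, 2);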
-
Oleksandr Byelkin authored
Fix of a non-debug build issue
-
Marko Mäkelä authored
-
- 14 Nov, 2017 1 commit
-
-
Daniel Bartholomew authored
-