- 07 Feb, 2018 4 commits
-
-
Vladislav Vaintroub authored
-
Andrei Elkin authored
The test used to produce a result mismatch due to an unaccounted-for specific of the master-slave handshake protocol: Slave_IO_Running is set to true while the semisync master status is set to active only a bit later. The test is refined to expect that.
-
Monty authored
-
Monty authored
Fixed Truncate_versioning_privilege to work like any other privilege during upgrade: if the privilege field does not exist, add it to the user and db tables. If the user had the super privilege, the user also gets the new Truncate_versioning_privilege. This ensures that someone who had GRANT ALL PRIVILEGES before continues to have it after running mysql_upgrade. This also fixes a bug where the Truncate_versioning_privilege
-
- 06 Feb, 2018 7 commits
-
-
Monty authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
dict_tf_is_valid(): Allow no-rollback tables in ROW_FORMAT=REDUNDANT.
-
Marko Mäkelä authored
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
MDEV-15091: Windows, 64bit: reenable and fix warning C4267 (conversion from 'size_t' to 'type', possible loss of data). Handle string lengths as size_t consistently (almost always:)). Change function prototypes to accept size_t where ulong or uint were used in the past; change local/member variables to size_t where appropriate. This fix excludes rocksdb, spider, sphinx and connect for now.
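As a hedged illustration only (the function below is invented, not taken from the server), this is the shape of the change: the string length travels as size_t end to end, so no narrowing to uint/ulong occurs on 64-bit Windows.

```cpp
#include <cstring>
#include <cstddef>

// Before: length narrowed to uint, triggering C4267 on 64-bit Windows
// when callers pass a size_t (e.g. the result of strlen()).
// void append_str(char *dst, const char *src, uint len);

// After: the length stays size_t all the way down.
void append_str(char *dst, const char *src, size_t len)
{
  memcpy(dst, src, len);   // size_t is what memcpy expects anyway
  dst[len]= '\0';
}

int main()
{
  char buf[16];
  const char *s= "hello";
  append_str(buf, s, strlen(s));  // strlen() returns size_t; no warning
  return 0;
}
```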
-
Igor Babaev authored
If setting a user variable was used in the specification of a recursive CTE, then Item_func_set_user_var::fix_fields() went into an infinite loop.
-
- 05 Feb, 2018 2 commits
-
-
Igor Babaev authored
does not return error. Corrected the code of st_select_lex::find_table_def_in_with_clauses() for proper identification of CTE references used in embedded CTEs.
-
Alexander Barkov authored
Applying https://github.com/MariaDB/server/pull/594 to bb-10.2-ext
-
- 04 Feb, 2018 4 commits
-
-
Alexander Barkov authored
-
Alexander Barkov authored
-
Alexander Barkov authored
When storing '0001-01-01 10:20:30x', execution went through the last code branch in Field_time::store_TIME_with_warning(), around the test for (ltime->year || ltime->month). This resulted in wrong values because:
1. Field_time::store_TIME() does not check YYYYMM against zero. It assumes that ltime->days and ltime->hours are already properly set, so it mixed days into hours even when YYYYMM was not zero.
2. Field_time_hires::store_TIME() does not check YYYYMM against zero. It assumes that ltime->year, ltime->month, ltime->days and ltime->hours are already properly set, so it always mixed days and even months(!) and years(!) into hours, using pack_time(). This gave even worse results compared to #1.
3. Field_timef::store_TIME() did not check the entire YYYYMM for being zero. It only checked MM, but did not check YYYY. In case of a zero MM it mixed days into hours even if YYYY was not zero. The wrong code was in TIME_to_longlong_time_packed().
In the new version Field_time::store_TIME_with_warning() is responsible for preparing the YYYYMMDD part properly in all code branches (with trailing garbage like 'x' and without it). It was reorganized into a more straightforward style. Field_time::store_TIME(), Field_time_hires::store_TIME() and TIME_to_longlong_time_packed() were fixed to do a DBUG_ASSERT on a non-zero ltime->year or ltime->month. The code testing ltime->month was removed from TIME_to_longlong_time_packed(), as this is now properly done at the caller level. Truncation was moved from Field_timef::store_TIME() to Field_time::store_TIME_with_warning(). So now all three Field_time*::store_TIME() methods assume a properly set input value:
- Only zero ltime->year and ltime->month are allowed.
- The value must already be properly truncated according to decimals() (this will help to add rounding soon, see MDEV-8894).
A "const" qualifier was added to the argument of Field_time*::store_TIME().
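A minimal sketch of the new contract, using stand-in types rather than the real Field_time classes: the high-level store routine prepares the out-of-range date part once, and the low-level packing may then assert that YYYYMM is zero.

```cpp
#include <cassert>

struct SimpleTime            // stand-in for MYSQL_TIME
{
  unsigned year, month, day, hour, minute, second;
  bool neg;
};

// The "store_TIME_with_warning" level is the only place that decides what
// to do with a non-zero YYYYMM part: here the sketch clears the date part
// (the real code would also push a truncation warning) before packing.
long long store_time_checked(SimpleTime *lt)
{
  if (lt->year || lt->month)
    lt->year= lt->month= lt->day= 0;     // out-of-range date part for TIME

  // Low-level packing may now assume a properly prepared value.
  assert(lt->year == 0 && lt->month == 0);
  long long hours= lt->day * 24LL + lt->hour;
  return (lt->neg ? -1 : 1) *
         (hours * 10000LL + lt->minute * 100LL + lt->second);
}

int main()
{
  SimpleTime t= {1, 1, 1, 10, 20, 30, false};        // '0001-01-01 10:20:30'
  return store_time_checked(&t) == 102030 ? 0 : 1;   // '10:20:30', days not mixed in
}
```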
-
Marko Mäkelä authored
-
- 03 Feb, 2018 1 commit
-
-
Alexander Barkov authored
-
- 02 Feb, 2018 5 commits
-
-
Alexander Barkov authored
Virtual_tmp_table did not set the "field_index" member for its Fields. Fixing Virtual_tmp_table::add() to set "field_index" to the Field's ordinal position inside the table, like a normal TABLE does, for consistency. Although this flaw did not seem to cause any bugs, having field_index properly set is helpful for debugging purposes.
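A small hedged sketch (stand-in classes, not the actual Virtual_tmp_table) of what setting field_index on add() amounts to: the field records its ordinal position in the owning table.

```cpp
#include <vector>
#include <string>
#include <cassert>

struct FieldStub
{
  std::string name;
  unsigned field_index;        // ordinal position inside the owning table
};

struct TmpTableStub
{
  std::vector<FieldStub*> fields;

  void add(FieldStub *f)
  {
    // Like a normal TABLE: the index is the position of the field.
    f->field_index= static_cast<unsigned>(fields.size());
    fields.push_back(f);
  }
};

int main()
{
  FieldStub a{"a", 99}, b{"b", 99};
  TmpTableStub t;
  t.add(&a);
  t.add(&b);
  assert(a.field_index == 0 && b.field_index == 1);
  return 0;
}
```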
-
Sachin Setiya authored
1st: Create_field does not have a handy function like vers_sys_field(); Create_field and Field should not diverge much, and Field does have this function.
2nd: Versioning columns did not have NOT_NULL_FLAG, even though they can never be NULL, so NOT_NULL_FLAG is now added.
3rd: Adding NOT_NULL_FLAG created one issue: versioning columns of datatype bigint unsigned were getting NO_DEFAULT_VALUE_FLAG. This made tests like versioning.insert fail, because inserting a 'default' value into a column with this flag generates an error (which is why the test was failing). So now versioning columns do not get NO_DEFAULT_VALUE_FLAG.
-
Sachin Setiya authored
Problem:
create or replace table t1 (pk int auto_increment primary key invisible, i int);
alter table t1 modify pk int invisible;
This last alter makes an invisible column which is NOT NULL and does not have a default value.
Analysis: This happens because our error check for the NOT_NULL_FLAG and NO_DEFAULT_VALUE_FLAG flags misses this sql_field, but this is not the fault of the error check :). The field comes via mysql_prepare_alter_table() and does not have NO_DEFAULT_VALUE_FLAG turned on (if it were a CREATE TABLE, NO_DEFAULT_VALUE_FLAG would have been turned on in Column_definition::check() and this would have generated an error).
Solution: The error check is moved towards the end of mysql_prepare_create_table(), because by that point NO_DEFAULT_VALUE_FLAG has been applied to the required columns.
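A hedged sketch of the final check, with invented structures and flag constants mirroring the flags named above: an invisible column that is NOT NULL and has no default is rejected once all flags have been finalized, so fields arriving from an ALTER path are caught too.

```cpp
#include <vector>
#include <string>
#include <cstdio>

enum FieldFlags : unsigned
{
  NOT_NULL_FLAG_         = 1U << 0,
  NO_DEFAULT_VALUE_FLAG_ = 1U << 1,
};

struct ColumnDef
{
  std::string name;
  unsigned flags;
  bool invisible;
};

// Returns false (error) if an invisible column could end up with no value.
bool check_invisible_columns(const std::vector<ColumnDef> &defs)
{
  for (const ColumnDef &c : defs)
  {
    if (c.invisible &&
        (c.flags & NOT_NULL_FLAG_) &&
        (c.flags & NO_DEFAULT_VALUE_FLAG_))
    {
      fprintf(stderr, "Invisible column '%s' must be NULLable or have "
                      "a default value\n", c.name.c_str());
      return false;
    }
  }
  return true;
}

int main()
{
  std::vector<ColumnDef> defs=
    {{"pk", NOT_NULL_FLAG_ | NO_DEFAULT_VALUE_FLAG_, true}};
  return check_invisible_columns(defs) ? 1 : 0;  // expected: error detected
}
```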
-
Sachin Setiya authored
Problem: If we create a table field with a dynamic default value, then that field always gets a NULL value.
Analysis: This is because in fill_record() we simply continue over invisible columns, assuming that share->default_values (the default value is always copied into table->record[0] before insert) already has a default value for them. That is true for constant defaults, but not for dynamic defaults.
Solution: We clear all_fields_have_value in this case, which makes fill_record() call update_default_fields() for dynamic defaults, so the default expression is evaluated and its value is stored in the field.
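A rough sketch of the idea with stand-in types (not the real fill_record()): skipping an invisible column no longer marks the record as fully valued, so the dynamic-default pass still runs for it.

```cpp
#include <functional>
#include <vector>
#include <cstdio>

struct FieldSketch
{
  const char *name;
  bool invisible;
  long value;
  std::function<long()> default_expr;  // non-empty => dynamic default
};

void fill_record_sketch(std::vector<FieldSketch> &fields)
{
  bool all_fields_have_value= true;

  for (FieldSketch &f : fields)
  {
    if (f.invisible)
    {
      // Constant defaults were already copied into the record, but a
      // dynamic default still has to be evaluated later.
      all_fields_have_value= false;
      continue;
    }
    f.value= 42;                       // pretend the statement supplied a value
  }

  if (!all_fields_have_value)
    for (FieldSketch &f : fields)      // like update_default_fields()
      if (f.invisible && f.default_expr)
        f.value= f.default_expr();
}

int main()
{
  std::vector<FieldSketch> fields=
    {{"visible", false, 0, nullptr},
     {"hidden",  true,  0, []{ return 7L; }}};
  fill_record_sketch(fields);
  printf("hidden=%ld\n", fields[1].value);  // 7, not a stale NULL/0
  return 0;
}
```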
-
Monty authored
This makes it easier to see how memory allocation is done when debugging with either DBUG or gdb. It will especially help when debugging stored procedures. The main change is a name argument, added as the second argument to init_alloc_root() and init_sql_alloc() (a sketch follows below). Other things:
- Added DBUG_ENTER/EXIT to some Virtual_tmp_table functions
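A hedged sketch of the kind of API change described, with an invented allocator rather than the real MEM_ROOT: the root carries a human-readable name that debugging output can print, so allocations can be attributed to their owner.

```cpp
#include <cstdio>
#include <cstdlib>

struct mem_root_sketch
{
  const char *name;      // new: who owns this root (e.g. "sp_head")
  size_t block_size;
  size_t allocated;
};

// The name is now the second argument, right after the root itself.
void init_alloc_root_sketch(mem_root_sketch *root, const char *name,
                            size_t block_size)
{
  root->name= name;
  root->block_size= block_size;
  root->allocated= 0;
}

void *alloc_root_sketch(mem_root_sketch *root, size_t size)
{
  root->allocated+= size;
  fprintf(stderr, "alloc %zu bytes from root '%s'\n", size, root->name);
  return malloc(size);
}

int main()
{
  mem_root_sketch root;
  init_alloc_root_sketch(&root, "example_root", 1024);
  void *p= alloc_root_sketch(&root, 64);
  free(p);
  return 0;
}
```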
-
- 01 Feb, 2018 1 commit
-
-
Monty authored
This is to make it a proper class function.
-
- 31 Jan, 2018 16 commits
-
-
Igor Babaev authored
When identifying a table name the following should be taken into account: a CTE name cannot be qualified with a database name, so if the table name is qualified with a database name it is considered the name of a non-CTE table.
-
Sergey Vojtovich authored
With the trx_sys_t::rw_trx_ids removal, MVCC snapshot overhead became slightly higher: instead of copying an array we now have to iterate LF_HASH, all of it done under trx_sys.mutex protection. This patch moves the MVCC snapshot out of trx_sys.mutex. Clean-ups: removed MVCC: it doesn't make too much sense to keep it in a separate class anymore. Refactored ReadView so that it now calls the register()/deregister() routines (it was vice versa before). ReadView doesn't have friends anymore. :( Even fewer trx_sys.mutex references.
-
Alexander Barkov authored
- Changing sp_rcontext::m_var_items from a list of Item to a list of Item_field
- Renaming sp_rcontext::get_item() to get_variable() and changing its return type from Item* to Item_field*
- Adding sp_rcontext::get_parameter() and sp_rcontext::set_parameter(), wrappers for get_variable() and set_variable() with an extra DBUG_ASSERT, and using the new methods instead of get_variable()/set_variable() in relevant places (see the sketch below)
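A simplified sketch of the wrapper pattern (stand-in class, not the real sp_rcontext): the parameter accessors delegate to the variable accessors and only add an assertion that the index really denotes a parameter.

```cpp
#include <cassert>
#include <vector>

struct rcontext_sketch
{
  std::vector<int> vars;    // stand-in for the Item_field list
  size_t param_count;       // the first param_count variables are parameters

  int  get_variable(size_t i) const  { return vars[i]; }
  void set_variable(size_t i, int v) { vars[i]= v; }

  int  get_parameter(size_t i) const
  {
    assert(i < param_count);          // like the extra DBUG_ASSERT
    return get_variable(i);
  }
  void set_parameter(size_t i, int v)
  {
    assert(i < param_count);
    set_variable(i, v);
  }
};

int main()
{
  rcontext_sketch ctx{{1, 2, 3}, 2};  // 3 variables, the first 2 are parameters
  ctx.set_parameter(0, 10);
  return ctx.get_parameter(0) == 10 ? 0 : 1;
}
```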
-
Sergey Vojtovich authored
serialisation_list was supposed to instantly give the minimum registered transaction serialisation number. However, maintaining and accessing this list requires global mutex protection. Since we already take the MVCC snapshot by iterating trx_sys_t::rw_trx_hash, it is cheap to integrate the minimum registered transaction lookup into this iteration.
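A tiny sketch of that integration, with an ordinary map standing in for LF_HASH: the minimum serialisation number is picked up in the very same pass that builds the snapshot, with no separate list or extra lock.

```cpp
#include <cstdint>
#include <limits>
#include <unordered_map>

typedef uint64_t trx_id_t;

struct trx_entry_sketch { trx_id_t no; };   // stand-in serialisation number

trx_id_t min_serialisation_no(
    const std::unordered_map<trx_id_t, trx_entry_sketch> &hash)
{
  trx_id_t min_no= std::numeric_limits<trx_id_t>::max();
  for (const auto &it : hash)        // same iteration that builds the snapshot
    if (it.second.no && it.second.no < min_no)
      min_no= it.second.no;
  return min_no;
}

int main()
{
  std::unordered_map<trx_id_t, trx_entry_sketch> hash=
    {{1, {15}}, {2, {12}}, {3, {0}}};        // 0 = not serialised yet
  return min_serialisation_no(hash) == 12 ? 0 : 1;
}
```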
-
Oleksandr Byelkin authored
Setting non_null value drops null_value flag. Part 2 of 3. Part 1 was 10.2 fix. Part 3 is test for Connector C.
-
Sergey Vojtovich authored
Take a snapshot of registered read-write transaction identifiers directly from rw_trx_hash. This immediately saves one trx_sys.mutex lock, reduces the size of another critical section protected by this mutex, and makes further optimisations, like removing trx_sys_t::serialisation_list, possible. The downside of this approach is a bigger overhead for view opening, because iterating LF_HASH is more expensive than taking a snapshot of an array. However, for low concurrency the overhead difference is negligible, while for high concurrency the mutex is a much bigger evil. Currently we still take trx_sys.mutex to serialise ReadView creation. This is required to keep serialisation_list ordered by trx->no, and to not let the purge thread create a more recent snapshot while another thread is suspended during creation of an older snapshot. This will become completely mutex-free along with the serialisation_list removal. Compared to the previous implementation, removing an element from rw_trx_hash and serialisation_list is not atomic. We disregard all possible bad consequences (if there are any) since this will be solved along with the serialisation_list removal.
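A hedged sketch of the approach, with standard containers standing in for LF_HASH and the real trx_sys (the actual visibility rules are more involved): the read view is built by one pass over the hash of registered read-write transactions instead of copying a centrally maintained array.

```cpp
#include <algorithm>
#include <cstdint>
#include <unordered_map>
#include <vector>

typedef uint64_t trx_id_t;

struct trx_sketch { trx_id_t id; };

// Stand-in for trx_sys.rw_trx_hash; the real structure is a lock-free hash.
typedef std::unordered_map<trx_id_t, trx_sketch> rw_trx_hash_sketch;

struct read_view_sketch
{
  std::vector<trx_id_t> ids;   // transactions whose changes must stay invisible

  void snapshot(const rw_trx_hash_sketch &hash)
  {
    ids.clear();
    for (const auto &it : hash)        // one pass over the hash
      ids.push_back(it.second.id);
    std::sort(ids.begin(), ids.end()); // keep the id list ordered
  }

  bool changes_visible(trx_id_t id) const
  {
    // A change is invisible if its transaction was still registered
    // (active) when the snapshot was taken.
    return !std::binary_search(ids.begin(), ids.end(), id);
  }
};

int main()
{
  rw_trx_hash_sketch hash= {{10, {10}}, {12, {12}}};
  read_view_sketch view;
  view.snapshot(hash);
  return view.changes_visible(11) && !view.changes_visible(12) ? 0 : 1;
}
```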
-
Sergey Vojtovich authored
trx->state change must be guarded by trx->mutex. Moved mutex locking to MVCC::view_close().
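A minimal illustration of the locking rule, with stand-in types: the state transition happens inside the per-transaction mutex, so concurrent readers see a consistent value.

```cpp
#include <mutex>

enum class trx_state_sketch { ACTIVE, COMMITTED };

struct trx_sketch
{
  std::mutex mutex;                              // like trx->mutex
  trx_state_sketch state= trx_state_sketch::ACTIVE;
};

// The state change happens inside the critical section, not before or after.
void set_committed(trx_sketch &trx)
{
  std::lock_guard<std::mutex> guard(trx.mutex);
  trx.state= trx_state_sketch::COMMITTED;
}

int main()
{
  trx_sketch trx;
  set_committed(trx);
  return trx.state == trx_state_sketch::COMMITTED ? 0 : 1;
}
```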
-
Marko Mäkelä authored
trx_undo_mem_create_at_db_start(): Do not read TRX_UNDO_TRX_NO unless the field is known to be valid, that is, the transaction has been serialized and trx_purge_add_undo_to_history() has been invoked. Normally InnoDB pages would be zero-initialized on allocation (since MySQL 5.5 or so), but the undo log pages skip that mechanism. So, reused undo log pages can contain garbage. Undo log headers can start at any offset (there can be multiple undo log headers in the same undo log page). Therefore, because the TRX_UNDO_TRX_NO is never explicitly initialized on undo log header creation, its contents may be garbage.
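A hedged sketch of the rule with an invented header layout (not the real undo log header): a field that is only written at commit time must not be read back for a transaction that never reached that state, because reused pages can hold garbage in that position.

```cpp
#include <cstdint>
#include <cstring>

struct undo_header_sketch
{
  bool committed;    // stand-in: has the undo log been added to the history?
  uint64_t trx_no;   // stand-in for TRX_UNDO_TRX_NO; garbage until committed
};

// Only trust trx_no when it is known to have been written.
uint64_t read_trx_no(const undo_header_sketch *hdr)
{
  return hdr->committed ? hdr->trx_no : 0;
}

int main()
{
  undo_header_sketch reused;
  memset(&reused, 0xaa, sizeof reused);   // simulate a reused page: garbage
  reused.committed= false;                // transaction never committed
  return read_trx_no(&reused) == 0 ? 0 : 1;
}
```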
-
Daniel Black authored
extra/mariabackup/xtrabackup.cc: In function ‘ulint xb_process_datadir(const char*, const char*, handle_datadir_entry_func_t)’:
extra/mariabackup/xtrabackup.cc:4534:1: warning: ‘snprintf’ output may be truncated before the last format character [-Wformat-truncation=]
 xb_process_datadir(
 ^~~~~~~~~~~~~~~~~~
mariabackup/xtrabackup.cc:4607:11: note: ‘snprintf’ output 2 or more bytes (assuming 4001) into a destination of size 4000
   snprintf(dbpath, sizeof(dbpath), "%s/%s", path, dbinfo.name);
   ~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
Daniel Black authored
gcc7 warning:
sql/table.cc: In member function ‘int TABLE_SHARE::init_from_binary_frm_image(THD*, bool, const uchar*, size_t)’:
sql/table.cc:2032:11: warning: this statement may fall through [-Wimplicit-fallthrough=]
   if (vers_can_native)
   ^~
sql/table.cc:2037:9: note: here
   default:
   ^~~~~~~
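Not necessarily what this patch did, but the usual way to address this class of warning is to tell the compiler the fall-through is intentional; the snippet below is purely illustrative.

```cpp
#include <cstdio>

void describe(int vers_can_native)
{
  switch (vers_can_native)
  {
  case 1:
    puts("native versioning possible");
    /* fall through */            // gcc recognizes this comment form;
                                  // C++17 code can use [[fallthrough]]; instead
  default:
    puts("continue with common handling");
    break;
  }
}

int main()
{
  describe(1);
  return 0;
}
```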
-
Marko Mäkelä authored
trx_write_serialisation_history(): Only invoke trx_sysf_get() to exclusively lock the TRX_SYS page if some change really has to be written to the page. On transaction commit, we will still write some binlog and Galera WSREP XID information. FIXME: If this information has to be written, it should be partitioned into the rollback segment pages.
-
Marko Mäkelä authored
InnoDB maintains an internal persistent sequence of transaction identifiers. This sequence is used for assigning both transaction start identifiers (DB_TRX_ID=trx->id) and end identifiers (trx->no), as well as end identifiers for the mysql.transaction_registry table that was introduced in MDEV-12894.

TRX_SYS_TRX_ID_WRITE_MARGIN: Remove. After this many updates of the sequence we used to update the TRX_SYS page. We can avoid accessing the TRX_SYS page by modifying the InnoDB startup so that the sequence is resurrected from other pages of the transaction system.

TRX_SYS_TRX_ID_STORE: Deprecate. The field only exists for the purpose of upgrading from an earlier version of MySQL or MariaDB. Starting with this fix, MariaDB will rely on the fields TRX_UNDO_TRX_ID, TRX_UNDO_TRX_NO in the undo log header page of each non-committed transaction, and on the new field TRX_RSEG_MAX_TRX_ID in rollback segment header pages. Because of this change, setting innodb_force_recovery=5 or 6 may cause the system to recover with trx_sys.get_max_trx_id()==0. We must adjust checks for invalid DB_TRX_ID and PAGE_MAX_TRX_ID accordingly. We will change the startup and shutdown messages to display trx_sys.get_max_trx_id() in addition to the log sequence number.

trx_sys_t::flush_max_trx_id(): Remove.

trx_undo_mem_create_at_db_start(), trx_undo_lists_init(): Add an output parameter max_trx_id, to be updated from TRX_UNDO_TRX_ID, TRX_UNDO_TRX_NO.

TRX_RSEG_MAX_TRX_ID: New field, for persisting trx_sys.get_max_trx_id() at the time of the latest transaction commit. Startup does not read the undo log pages of committed transactions. We want to avoid additional page accesses on startup, as well as trouble when all undo logs have been emptied. On startup, we will simply determine the maximum value from all pages that are being read anyway.

TRX_RSEG_FORMAT: Redefined from TRX_RSEG_MAX_SIZE. Old versions of InnoDB wrote uninitialized garbage to unused data fields. Because of this, we cannot simply introduce a new field in the rollback segment pages and expect it to always be zero, like it would be if the database had been created by a recent enough InnoDB version. Luckily, it looks like the field TRX_RSEG_MAX_SIZE was always written as 0xfffffffe. We will indicate a new subformat of the page by writing 0 to this field. This has the nice side effect that after a downgrade to older versions of InnoDB, transactions should fail to allocate any undo log, that is, writes will be blocked. So there is no problem of getting corrupted transaction identifiers after downgrading.

trx_rseg_t::max_size: Remove.

trx_rseg_header_create(): Remove the parameter max_size=ULINT_MAX.

trx_purge_add_undo_to_history(): Update TRX_RSEG_MAX_TRX_ID (and TRX_RSEG_FORMAT if needed). This is invoked on transaction commit.

trx_rseg_mem_restore(): If TRX_RSEG_FORMAT contains 0, read TRX_RSEG_MAX_TRX_ID.

trx_rseg_array_init(): Invoke trx_sys.init_max_trx_id(max_trx_id + 1) where max_trx_id is the maximum that was encountered in the rollback segment pages and the undo log pages of recovered active, XA PREPARE, or some committed transactions. (See trx_purge_add_undo_to_history(), which invokes trx_rsegf_set_nth_undo(..., FIL_NULL, ...); not all committed transactions will be immediately detached from the rollback segment header.)
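A very reduced sketch of the startup side of this, with invented structures (not the real page layout or trx_sys API): the sequence is resurrected as the maximum over values found in pages that are read anyway, plus one.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

typedef uint64_t trx_id_t;

struct rseg_page_sketch
{
  trx_id_t max_trx_id;                 // like TRX_RSEG_MAX_TRX_ID
  std::vector<trx_id_t> undo_trx_ids;  // ids/nos seen in its undo log headers
};

trx_id_t resurrect_max_trx_id(const std::vector<rseg_page_sketch> &rsegs)
{
  trx_id_t max_id= 0;
  for (const rseg_page_sketch &r : rsegs)
  {
    max_id= std::max(max_id, r.max_trx_id);
    for (trx_id_t id : r.undo_trx_ids)        // recovered transactions
      max_id= std::max(max_id, id);
  }
  return max_id + 1;   // like trx_sys.init_max_trx_id(max_trx_id + 1)
}

int main()
{
  std::vector<rseg_page_sketch> rsegs= {{100, {97, 103}}, {99, {}}};
  return resurrect_max_trx_id(rsegs) == 104 ? 0 : 1;
}
```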
-
Marko Mäkelä authored
-
Marko Mäkelä authored
trx_rseg_mem_restore(): Update the max_trx_id from the undo log pages. trx_sys_init_at_db_start(): Remove; merge with trx_lists_init_at_db_start(). trx_undo_lists_init(): Move to the only calling module, trx0rseg.cc. trx_undo_mem_create_at_db_start(): Declare globally. Return the number of pages.
-
Marko Mäkelä authored
trx_rseg_mem_create(): Initialize rseg->curr_size and rseg->max_size. trx_rseg_create(), trx_temp_rseg_create(): Do not call trx_rseg_mem_restore().
-
Marko Mäkelä authored
trx_undo_page_get_prev_rec(), trx_undo_page_get_last_rec(), trx_undo_page_get_first_rec(), trx_undo_page_get_start(): Move to the only caller, trx0undo.cc. Add some const qualifiers.
-