- 23 Aug, 2006 1 commit
-
-
timour/timka@lamia.home authored
GROUP BY/DISTINCT pruning optimization must be done before ORDER BY optimization, because ORDER BY may be removed when GROUP BY/DISTINCT sorts as a side effect, e.g. in SELECT DISTINCT <non-key-col>,<pk> FROM t1 ORDER BY <non-key-col>. DISTINCT must be removed before ORDER BY: if done the other way around, both would be removed.
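A minimal sketch of the interaction, with a hypothetical table (pk is the primary key):

    CREATE TABLE t1 (pk INT PRIMARY KEY, a INT);
    -- DISTINCT over (a, pk) is redundant because pk is unique, and the
    -- sorting performed for DISTINCT also satisfies the ORDER BY; the
    -- pruning must examine DISTINCT first, or both would be dropped.
    SELECT DISTINCT a, pk FROM t1 ORDER BY a;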
-
- 21 Aug, 2006 1 commit
-
-
dlenev@mockturtle.local authored
from cache" and #21216 "Simultaneous DROP TABLE and SHOW OPEN TABLES causes server to crash". Crash happened when one ran DROP DATABASE or SHOW OPEN TABLES statements while concurrently doing DROP TABLE (or RENAME TABLE, CREATE TABLE LIKE or any other command that takes name-lock) in other connection. This problem was caused by the fact that table placeholders which were added to table cache in order to obtain name-lock on table had TABLE_SHARE::db and table_name set to 0. Therefore they broke assumption that these members are non-0 for all tables in table cache on which some of our code relies. The fix sets these members for such placeholders to appropriate value making this assumption true again. As attempt to avoid such problems in future we introduce auxiliary TABLE_SHARE::set_table_cache_key() methods which should be used when one wants to set TABLE_SHARE::table_cache_key and which ensure that TABLE_SHARE::table_name/db are set properly. Test cases for these bugs were added to 5.0 test-suite (with 5.0-specific fix for bug #21216).
-
- 01 Aug, 2006 1 commit
-
-
evgen@sunlight.local authored
-
- 31 Jul, 2006 1 commit
-
-
evgen@moonbone.local authored
After merge fix
-
- 28 Jul, 2006 2 commits
-
-
Renamed a variable to avoid a name clash with the macro "rem_size" from "/usr/include/sys/xmem.h" on AIX 5.3 (bug#17648). asn.cpp, asn.hpp: avoid a name clash with NAME_MAX.
-
sergefp@mysql.com authored
- Make the range-et-al optimizer produce E(#table records after table condition is applied),
- Make the join optimizer use this value,
- Add "filtered" column to EXPLAIN EXTENDED to show fraction of records left after table condition is applied,
- Adjust test results, add comments
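A hedged illustration of the new column (schema and condition are made up; the actual percentage depends on the optimizer's estimates):

    CREATE TABLE t1 (key_col INT, other_col INT, KEY(key_col));
    -- The "filtered" column of EXPLAIN EXTENDED estimates the fraction
    -- of rows that survive the rest of the table condition
    -- (other_col = 5) after the range access on key_col is applied.
    EXPLAIN EXTENDED SELECT * FROM t1 WHERE key_col > 10 AND other_col = 5;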
-
- 27 Jul, 2006 1 commit
-
-
kroki/tomash@moonlight.intranet authored
Too many cursors (more than 1024) could lead to memory corruption. This affects both stored routines and C API cursors, and the threshold is per-server, not per-connection. Similarly, the corruption could happen when the server was under heavy load (executing more than 1024 simultaneous complex queries), and this is the reason why this bug is fixed in 4.1, which doesn't support cursors. The corruption was caused by a bug in the temporary tables code, where an attempt to create a table could lead to a write beyond the allocated space. Note that only internal tables were affected (the tables created internally by the server to resolve the query), not tables created with CREATE TEMPORARY TABLE. Another precondition for the bug is a TRUE value of the --temp-pool startup option, which, however, is the default. The cause of the bug was that random memory was overwritten in bitmap_set_next() due to an out-of-bounds memory access.
-
- 26 Jul, 2006 4 commits
-
-
evgen@moonbone.local authored
Post review changes for bug#19862.
-
gkodinov/kgeorge@macbook.gmz authored
When processing aggregate functions, all table values are reset to NULLs at the end of each group. When doing that, if no rows were found for a group, the const tables must not be reset, as they are not recalculated by do_select()/sub_select() for each group.
-
kroki/tomash@moonlight.intranet authored
Too many cursors (more than 1024) could lead to memory corruption. This affects both stored routines and C API cursors, and the threshold is per-server, not per-connection. Similarly, the corruption could happen when the server was under heavy load (executing more than 1024 simultaneous complex queries), and this is the reason why this bug is fixed in 4.1, which doesn't support cursors. The corruption was caused by a bug in the temporary tables code, where an attempt to create a table could lead to a write beyond the allocated space. Note that only internal tables were affected (the tables created internally by the server to resolve the query), not tables created with CREATE TEMPORARY TABLE. Another precondition for the bug is a TRUE value of the --temp-pool startup option, which, however, is the default. The cause of the bug was that random memory was overwritten in bitmap_set_next() due to an out-of-bounds memory access.
-
gkodinov/kgeorge@macbook.gmz authored
When optimizing conditions like 'a = <some_val> OR a IS NULL' so that they are united into a single condition on the key and checked together, the server must determine which value is the NULL value in a correct way: not only by using ->is_null, but also by checking that the expression doesn't depend on any tables referenced in the current statement. This additional check must be performed because the optimization takes place before the actual execution of the statement, so if the field was initialized to NULL by a previous statement, the optimization would be applied incorrectly.
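A sketch of a query shape that exercises this check (tables and columns are illustrative only):

    CREATE TABLE t1 (a INT, KEY(a));
    CREATE TABLE t2 (b INT);
    -- t2.b may currently hold NULL left over from a previous statement,
    -- but it depends on a table of this statement, so the ref_or_null
    -- optimization must not treat it as the NULL value at optimize time.
    SELECT * FROM t1, t2 WHERE t1.a = t2.b OR t1.a IS NULL;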
-
- 25 Jul, 2006 3 commits
-
-
timour/timka@lamia.home authored
The problem was that opt_sum_query() replaced MIN/MAX functions with the corresponding constant found in a key, but due to the imprecise representation of float numbers, this comparison failed when the WHERE clause was evaluated. Now, when MIN/MAX optimization detects that all tables can be removed, it also removes all conjuncts in the WHERE clause that refer to these tables. As a result of this fix, these conditions are not evaluated twice, and in the case of float number comparisons we do not discard result rows due to imprecise float representation. As a side effect this fix also corrects an unnoticed problem in bug 12882.
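A minimal sketch of the float pitfall (names and values are hypothetical):

    CREATE TABLE t1 (f FLOAT, KEY(f));
    INSERT INTO t1 VALUES (0.1);
    -- MIN/MAX optimization answers MAX(f) from the key and removes the
    -- table; re-evaluating the conjunct "f = 0.1" afterwards could fail,
    -- since 0.1 has no exact float representation.
    SELECT MAX(f) FROM t1 WHERE f = 0.1;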
-
evgen@moonbone.local authored
When there is no index defined, filesort is used to sort the result of a query. If there is a function in the select list and the result set should be ordered by its value, then this function will be evaluated twice: first to get the value of the sort key, and a second time to send its value to the user. This happens because filesort, when sorting a table, remembers only the values of the table's fields, not the values of functions. All functions are affected, but taking into account that SP and UDF functions can be both expensive and non-deterministic, a temporary table should be used to store their results and then be sorted; this avoids evaluating the SP twice and produces a correct result. If an expression referenced in an ORDER clause contains an SP or UDF function, the use of a temporary table is now forced. A new Item_processor function called func_type_checker_processor is added to check whether the expression contains a function of a particular type.
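An illustrative case; the function below is hypothetical and merely stands in for an expensive SP/UDF:

    CREATE FUNCTION expensive_fn(x INT) RETURNS INT
        DETERMINISTIC RETURN x * 2;
    -- Before the fix, filesort evaluated expensive_fn() once to build
    -- the sort key and again when sending the row; with the fix a temp
    -- table stores the result, so the function runs once per row.
    SELECT expensive_fn(a) FROM t1 ORDER BY expensive_fn(a);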
-
gkodinov/kgeorge@macbook.gmz authored
When calculating GROUP_CONCAT, all blob fields are transformed to varchar when making the temp table. However, a varchar has at most 2 bytes for the length. This fix performs the conversion only for blobs whose max length is below that limit; otherwise the blob field is created by a make_string_field() call.
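A rough sketch of the boundary (schema invented for illustration): a TINYBLOB (max 255 bytes) fits under the varchar length limit, a LONGBLOB does not:

    CREATE TABLE t1 (grp INT, small_b TINYBLOB, big_b LONGBLOB);
    -- small_b can be converted to varchar in the temp table; big_b
    -- exceeds what a 2-byte varchar length can describe, so it stays
    -- a blob field created via make_string_field().
    SELECT grp, GROUP_CONCAT(small_b), GROUP_CONCAT(big_b)
    FROM t1 GROUP BY grp;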
-
- 21 Jul, 2006 2 commits
-
-
gkodinov/kgeorge@macbook.mshome.net authored
When making a place to store field values at the start of each group, the real item (not the reference) must be used when deciding which column to copy.
-
gkodinov/kgeorge@macbook.gmz authored
An aggregate function reference was resolved incorrectly and caused a crash in count_field_types. We must use real_item() to get to the real Item instance through the reference.
-
- 15 Jul, 2006 1 commit
-
-
igor@olga.mysql.com authored
The bug was due to a loss introduced during a refactoring made on May 30, 2005 that modified the function JOIN::reinit. As a result, for any subquery the value of offset_limit_cnt was not restored for the following executions, yet the first execution of the subquery set it to 0. The fix restores this value in the function JOIN::reinit.
-
- 14 Jul, 2006 1 commit
-
-
igor@olga.mysql.com authored
DESCRIBE returned the type BIGINT for a column of a view if the column was specified by an expression over values of the type INT. E.g. for the view defined as follows: CREATE VIEW v1 AS SELECT COALESCE(f1,f2) FROM t1, DESCRIBE returned type BIGINT for the only column of the view if f1,f2 are columns of the INT type. At the same time DESCRIBE returned type INT for the only column of the table defined by the statement CREATE TABLE t2 SELECT COALESCE(f1,f2) FROM t1. This inconsistency was removed by the patch. Now the code chooses between INT/BIGINT depending on the precision of the aggregated column type. Thus both DESCRIBE commands above return type INT for v1 and t2.
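The statements from the message, assembled into a runnable sketch (f1, f2 assumed INT):

    CREATE TABLE t1 (f1 INT, f2 INT);
    CREATE VIEW v1 AS SELECT COALESCE(f1,f2) FROM t1;
    CREATE TABLE t2 SELECT COALESCE(f1,f2) FROM t1;
    -- Before the patch DESCRIBE v1 showed BIGINT while DESCRIBE t2
    -- showed INT; after it, both show INT.
    DESCRIBE v1;
    DESCRIBE t2;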
-
- 12 Jul, 2006 1 commit
-
-
gkodinov/kgeorge@macbook.gmz authored
* Don't use the join cache when the incoming data set is already ordered for ORDER BY. This choice must be made because the join cache will effectively reverse the join order, and the results would then be sorted by the index of the table that uses the join cache.
-
- 11 Jul, 2006 1 commit
-
-
evgen@moonbone.local authored
may return a wrong result. An Item_sum_hybrid object has the was_values flag, which indicates whether any values were added to the sum function. By default it is set to true and reset to false on any no_rows_in_result() call. This method is called only in the return_zero_rows() function. An ALL/ANY subquery can be optimized by the MIN/MAX optimization, and the was_values flag is used to indicate whether the subquery has returned at least one row. This bug occurs because return_zero_rows() is called only when we know, before starting any scans, that the select will return zero rows, but often such information is not known. In the reported case the return_zero_rows() function is not called, so the was_values flag is not reset to false even though the subquery returns no rows, and the Item_func_not_all and Item_func_nop_all functions return a wrong comparison result. The end_send_group() function now calls no_rows_in_result() for each item in the fields_list if no rows were found for the (sub)query.
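A hedged sketch of the affected query class (tables are hypothetical; the subquery matches no rows):

    CREATE TABLE t1 (a INT);
    CREATE TABLE t2 (b INT);
    INSERT INTO t1 VALUES (1);
    -- "> ALL (empty set)" must be TRUE; with the fix, end_send_group()
    -- calls no_rows_in_result() so the MIN/MAX-optimized subquery
    -- correctly reports that it produced no rows.
    SELECT a FROM t1 WHERE a > ALL (SELECT b FROM t2);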
-
- 10 Jul, 2006 1 commit
-
-
To make MySQL compatible with some ODBC applications, you can find the AUTO_INCREMENT value for the last inserted row with the following query: SELECT * FROM tbl_name WHERE auto_col IS NULL. This is done with special code that replaces 'auto_col IS NULL' with 'auto_col = LAST_INSERT_ID'. However, this also resets LAST_INSERT_ID to 0, as it is used as a flag to ensure that only the first SELECT ... WHERE auto_col IS NULL after an INSERT gets this special behaviour. In order to avoid resetting LAST_INSERT_ID, a special flag is introduced in the THD class. This flag is now used to restrict the second and subsequent SELECTs, instead of LAST_INSERT_ID.
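For reference, the ODBC-compatibility pattern in question, sketched with invented names:

    CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, v INT);
    INSERT INTO t1 (v) VALUES (10);
    -- Internally rewritten to "id = LAST_INSERT_ID()"; only this first
    -- SELECT after the INSERT gets the special treatment, and with the
    -- fix LAST_INSERT_ID() itself is no longer reset to 0.
    SELECT * FROM t1 WHERE id IS NULL;
    SELECT LAST_INSERT_ID();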
-
- 09 Jul, 2006 1 commit
-
-
guilhem@gbichot3.local authored
This is a cleanup patch for our current auto_increment handling: new names for auto_increment variables in THD, new methods to manipulate them (see sql_class.h), and some moves into handler::, causing less backup/restore work when executing substatements. This makes the logic hopefully clearer, and less work is needed in mysql_insert(). By cleaning up and using different variables for different purposes (instead of one for 3 things...), we fix these bugs, which someone may want to fix in 5.0 too: BUG#20339 "stored procedure using LAST_INSERT_ID() does not replicate statement-based", BUG#20341 "stored function inserting into one auto_increment puts bad data in slave", BUG#19243 "wrong LAST_INSERT_ID() after ON DUPLICATE KEY UPDATE" (now if a row is updated, LAST_INSERT_ID() will return its id), and re-fixes BUG#6880 "LAST_INSERT_ID() value changes during multi-row INSERT" (already fixed differently by Ramil in 4.1). Adds a test of the documented behaviour of mysql_insert_id() (there was no test). The behaviour changes introduced are:
- LAST_INSERT_ID() now returns "the first autogenerated auto_increment value successfully inserted", instead of "the first autogenerated auto_increment value if any row was successfully inserted"; see auto_increment.test. Same for mysql_insert_id(), see mysql_client_test.c.
- LAST_INSERT_ID() returns the id of the updated row in case of ON DUPLICATE KEY UPDATE, see auto_increment.test. Same for mysql_insert_id(), see mysql_client_test.c.
- LAST_INSERT_ID() does not change if no autogenerated value was successfully inserted (it used to become 0), see auto_increment.test.
- If in INSERT ... SELECT no autogenerated value was successfully inserted, mysql_insert_id() now returns the id of the last inserted row (it already did this for INSERT VALUES), see mysql_client_test.c.
- If INSERT ... SELECT uses LAST_INSERT_ID(X), mysql_insert_id() now returns X (it already did this for INSERT VALUES), see mysql_client_test.c.
- NDB now behaves like other engines wrt SET INSERT_ID: with INSERT IGNORE, the id passed in SET INSERT_ID is re-used until a row succeeds; SET INSERT_ID now influences more than only the first row.
Additionally, when unlocking a table we check that the thread is not keeping a next_insert_id (as the table is unlocked, that id is potentially out of date); forgetting about this next_insert_id is done in a new handler::ha_release_auto_increment(). Finally, we prepare for engines capable of reserving finite-length intervals of auto_increment values: we store such intervals in THD. The next step (to be done by the replication team in 5.1) is to read those intervals from THD and actually store them in the statement-based binary log. NDB will be a good engine to test that.
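A small sketch of one documented change, the ON DUPLICATE KEY UPDATE case (schema is illustrative):

    CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY,
                     u INT UNIQUE, cnt INT DEFAULT 0);
    INSERT INTO t1 (u) VALUES (1);                 -- inserts id = 1
    INSERT INTO t1 (u) VALUES (1)
        ON DUPLICATE KEY UPDATE cnt = cnt + 1;     -- updates row 1
    -- After this changeset LAST_INSERT_ID() returns the id of the
    -- updated row (1) instead of an unrelated value.
    SELECT LAST_INSERT_ID();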
-
- 01 Jul, 2006 1 commit
-
-
mikael@dator5.(none) authored
Last round of review fixes
-
- 30 Jun, 2006 1 commit
-
-
sergefp@mysql.com authored
- Don't forget to produce the "partitions" column for the "UNION RESULT" row of EXPLAIN output
-
- 28 Jun, 2006 1 commit
-
-
gkodinov@mysql.com authored
-
- 27 Jun, 2006 2 commits
-
-
kroki@mysql.com authored
The problem was that we restored the SQL_CACHE and SQL_NO_CACHE flags in a SELECT statement from internal structures based on a value set later at runtime, not on the original value set by the user. The solution is to remember that original value.
-
gkodinov@mysql.com authored
'SELECT DISTINCT a,b FROM t1' should not use a temp table if there is a unique index (or primary key) on a. There are a number of other similar cases that can be computed without the use of a temp table: multi-part unique indexes, primary keys, or using GROUP BY instead of DISTINCT. When a GROUP BY/DISTINCT clause contains all key parts of a unique index, it is guaranteed that the fields of the clause will be unique, therefore we can optimize away GROUP BY/DISTINCT altogether. This optimization has two effects:
* there is no need to create a temporary table to compute the GROUP/DISTINCT operation (or the temporary table will be smaller if only GROUP is removed and DISTINCT stays, or if DISTINCT is removed and GROUP BY stays);
* this causes the statement in effect to become updatable in Connector/Java, because the result set columns will be a direct reference to the primary key of the table (instead of the temporary table that they currently reference).
Implemented a check that will optimize away GROUP BY/DISTINCT for queries like the above. Currently it works only for a single non-constant table in the FROM clause.
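A sketch of a query shape the optimization targets (schema invented for illustration):

    CREATE TABLE t1 (a INT NOT NULL, b INT, UNIQUE KEY(a));
    -- Every value of a is unique, so the (a, b) pairs are already
    -- distinct: DISTINCT can be optimized away and no temp table is
    -- needed for this single non-constant table.
    SELECT DISTINCT a, b FROM t1;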
-
- 26 Jun, 2006 1 commit
-
-
ingo@mysql.com authored
Bug#17294 - INSERT DELAYED puting an \n before data. Bug#16611 - INSERT DELAYED corrupts data. Bug#13707 - Server crash with INSERT DELAYED on MyISAM table. Combined as Bug#16218. INSERT DELAYED crashed in 5.0 on a table with a varchar that could be NULL and was created pre-5.0 (Bugs 16218 and 13707). INSERT DELAYED corrupted data in 5.0 on a table with varchar fields that was created pre-5.0 (Bugs 17294 and 16611). In the case of INSERT DELAYED the open table is copied from the delayed insert thread to be able to create a record for the queue. When copying the fields, a method was used that converted old varchar fields to new varchar fields and did not set up some pointers into the record buffer of the table. The field conversion was responsible for the misinterpretation of the record contents by the delayed insert thread; the wrong pointer setup was responsible for the crashes. For Bug 13707 (Server crash with INSERT DELAYED on MyISAM table) I fixed the above-mentioned method to set up one of the pointers. For Bug 16218 I set up the other pointers too. But when looking at the corruptions I became aware that converting the field type was totally wrong for INSERT DELAYED. The copied table is used to create a record that is to be sent to the delayed insert thread, and of course it can interpret the record correctly only if all field types are the same in both table objects. So I revoked the fix for Bug 13707 and changed the new_field() method so that it can suppress conversions. No test case, as this is a migration problem: one needs to create a table with 4.x and use it with 5.x. I added two test scripts to the bug report.
-
- 22 Jun, 2006 1 commit
-
-
mikael@dator5.(none) authored
Review comments
-
- 20 Jun, 2006 4 commits
-
-
evgen@moonbone.local authored
Additional fix for #16377 for big-endian platforms. sql_select.cc, select.result, select.test: after-merge fix.
-
mikael@dator5.(none) authored
-
monty@mysql.com authored
SHOW STATUS no longer changes local status variables (except com_show_status). Global status variables are still updated. SHOW STATUS commands are also no longer put in the slow query log because of no index usage. The implementation was done by removing orig_sql_command and moving the logic of SHOW STATUS to mysql_execute_command(). This simplifies the code and allows us to remove some if statements all over the code. Upgraded uc_update_queries[] to sql_command_flags and added more bitmaps to better categorize commands, which allowed some overall simplification when testing sql_command. Fixes bugs: Bug#10210: running SHOW STATUS increments counters it shouldn't. Bug#19764: SHOW commands end up in the slow log as table scans.
-
gkodinov@mysql.com authored
-
- 19 Jun, 2006 1 commit
-
-
gkodinov@mysql.com authored
tables. Currently in INSERT ... SELECT ... LIMIT ... the compiler uses a temporary table to store the results of SELECT ... LIMIT ... and then uses that table as a source for INSERT. The problem is that in some cases it actually skips the LIMIT clause in doing so and materializes the whole SELECT result set regardless of the LIMIT. This fix limits the filling of the temp table to only as many rows as will actually be used, by propagating the LIMIT value.
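The affected statement shape, sketched with a made-up table; selecting from the insert target forces the SELECT result through a temp table:

    CREATE TABLE t1 (a INT);
    -- The fix propagates the LIMIT, so at most 10 rows are buffered in
    -- the temp table instead of the whole SELECT result set.
    INSERT INTO t1 SELECT a + 100 FROM t1 LIMIT 10;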
-
- 05 Jun, 2006 2 commits
-
-
sergefp@mysql.com authored
In select_describe(), make the String object that holds the value of the "partitions" column "own" the value buffer, so the buffer isn't prematurely freed. [This is the second attempt, with review fixes.]
-
monty@mysql.com authored
Remove compiler warnings
-
- 04 Jun, 2006 1 commit
-
-
monty@mysql.com authored
This changeset is largely a handler cleanup changeset (WL#3281), but includes fixes and cleanups that were found necessary while testing the handler changes.

Changes that require code changes in other storage engines (note that all changes are very straightforward; one should find all issues by compiling a --debug build and fixing all compiler errors and all asserts in field.cc while running the test suite):
- New optional handler function introduced: reset(). This is called after every DML statement to make it easy for a handler to do statement-specific cleanups. (The only case it's not called is if we force the file to be closed.)
- handler::extra(HA_EXTRA_RESET) is removed. Code that was there before should be moved to handler::reset().
- table->read_set contains a bitmap over all columns that are needed in the query. read_row() and similar functions only need to read these columns.
- table->write_set contains a bitmap over all columns that will be updated in the query. write_row() and update_row() only need to update these columns. The above bitmaps should now be up to date in all contexts (including ALTER TABLE and filesort()). The handler is informed of any changes to the bitmaps after fix_fields() by a call to the virtual function handler::column_bitmaps_signal(). If the handler does caching of these bitmaps (instead of using table->read_set and table->write_set), it should redo the caching in this code. As the signal() may be sent several times, it's probably best to set a variable in the signal and redo the caching on read_row()/write_row() if the variable was set.
- Removed the read_set and write_set bitmap objects from the handler class.
- Removed all column bit handling functions from the handler class. (One now uses the normal bitmap functions in my_bitmap.c instead of handler-dedicated bitmap functions.)
- field->query_id is removed. One should instead check table->read_set and table->write_set to see if a field is used in the query.
- handler::extra(HA_EXTRA_RETRIVE_ALL_COLS) and handler::extra(HA_EXTRA_RETRIEVE_PRIMARY_KEY) are removed. One should now instead use table->read_set to check which columns to retrieve.
- If a handler needs to call Field->val() or Field->store() on columns that are not used in the query, one should install a temporary all-columns-used map while doing so. For this, we provide the following functions:
  my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->read_set);
  field->val();
  dbug_tmp_restore_column_map(table->read_set, old_map);
  and similarly for the write map:
  my_bitmap_map *old_map= dbug_tmp_use_all_columns(table, table->write_set);
  field->store(...);
  dbug_tmp_restore_column_map(table->write_set, old_map);
  If this is not done, you will sooner or later hit a DBUG_ASSERT in the field store()/val() functions. (For non-DBUG binaries, dbug_tmp_use_all_columns() and dbug_tmp_restore_column_map() are inline dummy functions and should be optimized away by the compiler.)
- If one needs to temporarily set the column map for all binaries (and not just to avoid the DBUG_ASSERT() in the Field::store()/Field::val() methods), one should use the functions tmp_use_all_columns() and tmp_restore_column_map() instead of the above dbug_ variants.
- All 'status' fields in the handler base class (like records, data_file_length etc.) are now stored in a 'stats' struct. This makes it easier to know which status variables are provided by the base handler. This requires some trivial variable renames in the extra() function.
- New virtual function handler::records(). This is called to optimize COUNT(*) if (handler::table_flags() & HA_HAS_RECORDS) is true. (stats.records is not supposed to be an exact value; it only has to be 'reasonable enough' for the optimizer to be able to choose a good optimization path.)
- Non-virtual handler::init() function added, for caching of virtual constants from the engine.
- Removed the has_transactions() virtual method. One should now instead return HA_NO_TRANSACTIONS in table_flags() if the table handler DOES NOT support transactions.
- The 'xxxx_create_handler()' function now has a MEM_ROOT argument that is to be used with 'new handler_name()' to allocate the handler in the right area. The xxxx_create_handler() function is also responsible for any initialization of the object before returning. For example, one should change:
  static handler *myisam_create_handler(TABLE_SHARE *table)
  {
    return new ha_myisam(table);
  }
  to:
  static handler *myisam_create_handler(TABLE_SHARE *table, MEM_ROOT *mem_root)
  {
    return new (mem_root) ha_myisam(table);
  }
- New optional virtual function: use_hidden_primary_key(). This is called in case of an update/delete when (table_flags() & HA_PRIMARY_KEY_REQUIRED_FOR_DELETE) is set but we don't have a primary key. This allows the handler to take precautions so it remembers any hidden primary key, to be able to update/delete any found row. The default handler marks all columns to be read.
- handler::table_flags() now returns a ulonglong (to allow for more flags).
- New/changed table_flags():
  - HA_HAS_RECORDS: set if ::records() is supported.
  - HA_NO_TRANSACTIONS: set if the engine doesn't support transactions.
  - HA_PRIMARY_KEY_REQUIRED_FOR_DELETE: set if we should mark all primary key columns for read when reading rows as part of a DELETE statement. If there is no primary key, all columns are marked for read.
  - HA_PARTIAL_COLUMN_READ: set if the engine will not read all columns in some cases (based on table->read_set).
  - HA_PRIMARY_KEY_ALLOW_RANDOM_ACCESS: renamed to HA_PRIMARY_KEY_REQUIRED_FOR_POSITION.
  - HA_DUPP_POS: renamed to HA_DUPLICATE_POS.
  - HA_REQUIRES_KEY_COLUMNS_FOR_DELETE: set this if we should mark ALL key columns for read when reading rows as part of a DELETE statement. In case of an update we will mark all keys for read for which a key part changed value.
  - HA_STATS_RECORDS_IS_EXACT: set this if stats.records is exact. (This saves us some extra records() calls when optimizing COUNT(*).)
- Removed table_flags():
  - HA_NOT_EXACT_COUNT: one should now instead use HA_HAS_RECORDS if handler::records() gives an exact count and HA_STATS_RECORDS_IS_EXACT if stats.records is exact.
  - HA_READ_RND_SAME: removed (no one supported this one).
- Removed the no longer needed functions ha_retrieve_all_cols() and ha_retrieve_all_pk().
- Renamed handler::dupp_pos to handler::dup_pos.
- Removed the unused variable handler::sortkey.

Upper level handler changes:
- ha_reset() now does some overall checks and calls ::reset().
- ha_table_flags() added. This is a cached version of table_flags(). The cache is set at engine creation time and updated on open.

MySQL level changes (not obvious from the above):
- DBUG_ASSERT() added to check that column usage matches what is set in the column usage bitmaps. (This found a LOT of bugs in the current column marking code.)
- Before, in 5.1, all used columns were marked in read_set and only updated columns were marked in write_set. Now we only mark columns in read_set for which we actually need a value.
- Column bitmaps are created in open_binary_frm() and open_table_from_share(). (Before, this was done in table.cc.)
- handler::table_flags() calls are replaced with handler::ha_table_flags().
- To call field->val() you must have the corresponding bit set in table->read_set; to call field->store() you must have the corresponding bit set in table->write_set. (There are asserts in all store()/val() functions to catch wrong usage.)
- thd->set_query_id is renamed to thd->mark_used_columns, and instead of being set to an integer value it now takes the values MARK_COLUMNS_NONE, MARK_COLUMNS_READ and MARK_COLUMNS_WRITE. Also changed all variables named 'set_query_id' to mark_used_columns.
- In filesort() we now inform the handler of exactly which columns are needed for doing the sort and choosing the rows.
- The TABLE_SHARE object has an 'all_set' column bitmap one can use when one needs a column bitmap with all columns set. (This is used for table->use_all_columns() and other places.)
- The TABLE object has 3 column bitmaps:
  - def_read_set: default bitmap for columns to be read;
  - def_write_set: default bitmap for columns to be written;
  - tmp_set: can be used as a temporary bitmap when needed.
  The table object also has two pointers to bitmaps, read_set and write_set, that the handler should use to find out which columns are used and in which way.
- The count() optimization now calls handler::records() instead of using handler->stats.records (if (table_flags() & HA_HAS_RECORDS) is true).
- Added an extra argument to Item::walk() to indicate whether we should also traverse subqueries.
- Added a TABLE parameter to cp_buffer_from_ref().
- Don't close tables created with CREATE ... SELECT but keep them in the table cache. (Faster usage of newly created tables.)

New interfaces:
- table->clear_column_bitmaps(): initialize the bitmaps for tables at the start of new statements.
- table->column_bitmaps_set(): set up new column bitmaps and signal the handler about this.
- table->column_bitmaps_set_no_signal(): for some few cases where we need to set up new column bitmaps but don't signal the handler (as the handler has already been signaled about these before). Used for the moment only in opt_range.cc when doing ROR scans.
- table->use_all_columns(): install a bitmap where all columns are marked as used in the read and the write set.
- table->default_column_bitmaps(): install the normal read and write column bitmaps, but without signaling the handler about this. This is mainly used when creating TABLE instances.
- table->mark_columns_needed_for_delete(), table->mark_columns_needed_for_update() and table->mark_columns_needed_for_insert(): allow us to put additional columns in the column usage maps if the handler so requires. (The handler indicates what it needs in handler->table_flags().)
- table->prepare_for_position(): allows us to tell the handler that it needs to read primary key parts to be able to store them in future table->position() calls. (This replaces the table->file->ha_retrieve_all_pk function.)
- table->mark_auto_increment_column(): tell the handler we are going to update columns that are part of an auto_increment key.
- table->mark_columns_used_by_index(): mark all columns that are part of an index, and send extra(HA_EXTRA_KEYREAD) to the handler so it can quickly see that it only needs to read columns that are part of the key. (The handler can also use the column map for detecting this, but a simpler/faster handler can just monitor the extra() call.)
- table->mark_columns_used_by_index_no_reset(): in addition to other columns, also mark all columns that are used by the given key.
- table->restore_column_maps_after_mark_index(): restore the default column maps after a call to table->mark_columns_used_by_index().
- New item function register_field_in_read_map(), for marking used columns in table->read_map. Used by filesort() to mark all used columns.
- Maintain in TABLE->merge_keys a set of all keys that are used in the query. (Simplifies some optimization loops.)
- Maintain Field->part_of_key_not_clustered, which is like Field->part_of_key except that a field in the clustered key is not assumed to be part of all indexes. (Used in opt_range.cc for faster loops.)
- dbug_tmp_use_all_columns(), dbug_tmp_restore_column_map(), tmp_use_all_columns() and tmp_restore_column_map(): functions to temporarily mark all columns as usable. The 'dbug_' versions are primarily intended for use inside a handler when it wants to just call the Field::store() & Field::val() functions but doesn't need the column maps set for any other usage (i.e. bitmap_is_set() is never called).
- We can't use compare_records() to skip updates for handlers that return a partial column set when the read_set doesn't cover all columns in the write set. The reason for this is that if we have a column marked only for write, we can't at the MySQL level know whether its value changed or not. The reason this worked before was that MySQL marked all to-be-written columns as also to-be-read; the new 'optimal' bitmaps exposed this 'hidden bug'.
- open_table_from_share() no longer sets up a temporary MEM_ROOT object as a thread-specific variable for the handler. Instead we pass the to-be-used MEM_ROOT to get_new_handler(). (Simpler, faster code.)

Bugs fixed:
- Column marking was not done correctly in a lot of cases (ALTER TABLE, when using triggers, auto_increment fields etc.). This could potentially result in wrong values being inserted in table handlers relying on the old column maps or on field->set_query_id being correct. Especially when it comes to triggers, there may be cases where the old code would cause lost/wrong values for NDB and/or InnoDB tables.
- Split the thd->options flag OPTION_STATUS_NO_TRANS_UPDATE into two flags: OPTION_STATUS_NO_TRANS_UPDATE and OPTION_KEEP_LOG. This allowed me to remove some wrong warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed handling of INSERT .. SELECT and CREATE ... SELECT that wrongly reset (thd->options & OPTION_STATUS_NO_TRANS_UPDATE), which caused us to lose some warnings about "Some non-transactional changed tables couldn't be rolled back".
- Fixed use of uninitialized memory in ha_ndbcluster.cc::delete_table(), which could cause delete_table to report random failures.
- Fixed core dumps for some tests when running with --debug.
- Added missing FN_LIBCHAR in mysql_rm_tmp_tables(). (This has probably caused us to not properly remove temporary files after a crash.)
- slow_logs was not properly initialized, which could maybe cause extra/lost entries in the slow log.
- If we get a duplicate row on insert, change the column map to read and write all columns while retrying the operation. This is required by the definition of REPLACE and also ensures that fields that are only part of UPDATE are properly handled. This fixed a bug in NDB and REPLACE where REPLACE wrongly copied some column values from the replaced row.
- For table handlers that don't support NULL in keys, we used to give an error when creating a primary key with NULL fields, even after the fields had been automatically converted to NOT NULL.
- Creating a primary key on a SPATIAL key would fail if the field was not declared as NOT NULL.

Cleanups:
- Removed the unused condition argument to setup_tables().
- Removed the no longer needed item function reset_query_id_processor().
- Field->add_index is removed. This is now instead maintained in (field->flags & FIELD_IN_ADD_INDEX).
- Field->fieldnr is removed (use field->field_index instead).
- New argument to filesort() to indicate that it should return a set of row pointers (not used columns). This allowed me to remove some references to sql_command in filesort() and should also enable us to return column results in some cases where we couldn't before.
- Changed column bitmap handling in opt_range.cc to be aligned with the TABLE bitmap, which allowed me to use bitmap functions instead of looping over all fields to create some needed bitmaps. (Faster and smaller code.)
- Broke up too-long lines that were found.
- Moved some variable declarations to the start of functions for better code readability.
- Removed some unused arguments from functions (setup_fields(), mysql_prepare_insert_check_table()).
- setup_fields() now takes an enum instead of an int for marking column usage.
- For internal temporary tables, use handler::write_row(), handler::delete_row() and handler::update_row() instead of the handler::ha_xxxx() versions, for faster execution.
- Changed some constants to enums and defines.
- Using separate column read and write sets allows for easier checking of whether a timestamp field was set by the statement.
- Removed calls to free_io_cache(), as this is now done automatically in ha_reset().
- Don't build table->normalized_path, as this is now identical to table->path (after bar's fixes to convert filenames).
- Fixed some missed DBUG_PRINT(.."%lx") to use "0x%lx", to make it easier to do comparisons with the 'convert-dbug-for-diff' tool.

Things left to do in 5.1:
- We wrongly log failed CREATE TABLE ... SELECT in some cases when using row-based logging (as shown by the test case binlog_row_mix_innodb_myisam.result). Mats has promised to look into this.
- Test that my fix for CREATE TABLE ... SELECT is indeed correct. (I added several test cases for this, but in this case it's better that someone else also tests this thoroughly.) Lars has promised to do this.
-
- 02 Jun, 2006 2 commits
-
-
igor@rurik.mysql.com authored
The bug report revealed two problems related to min/max optimization: 1. If the length of a constant key used in a SARGable condition for the MIN/MAX fields is greater than the length of the field, an unwanted warning on key truncation is issued; 2. If MIN/MAX optimization is applied to a partial index, like INDEX(b(4)), it can lead to returning a wrong result set.
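A sketch of the second problem with a hypothetical prefix index:

    CREATE TABLE t1 (b VARCHAR(10), KEY(b(4)));
    INSERT INTO t1 VALUES ('abcd1'), ('abcd2');
    -- The key holds only the first 4 characters ('abcd'), so answering
    -- MAX(b) from the index alone would lose the tail of the value and
    -- could return a wrong result.
    SELECT MAX(b) FROM t1;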
-
guilhem@mysql.com authored
New prototype for get_auto_increment() (the new arguments are not yet used), to be able to reserve a finite interval of auto_increment values from cooperating engines. A hint on how many values to reserve is found in handler::estimation_rows_to_insert, filled by ha_start_bulk_insert(), a new wrapper around start_bulk_insert(). NOTE: this patch changes nothing for any engine, but it makes the API ready for those engines that will want to do reservation. More csets will come to complete WL#3146.
-
- 26 May, 2006 1 commit
-
-
gkodinov@mysql.com authored
The check for view security was lacking several points: 1. Checking with the right set of permissions: each table ref that participates in a view carried the right credentials to use in its security_ctx member, but these weren't used for checking the credentials. This makes it hard to enforce the SQL SECURITY DEFINER|INVOKER property consistently. 2. Because of the above, security checking for views was simply ruled out in explicit ways in several places. 3. Security was checked only for the columns of the tables that are brought into the query from a view, so if there was no column reference outside of the view definition, the lack of access to the tables in the view in SQL SECURITY INVOKER mode went undetected. The fix below addresses the above three points.
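A sketch of point 3, with hypothetical objects: the view references a table but exposes none of its columns directly:

    CREATE TABLE secret_t (a INT);
    CREATE SQL SECURITY INVOKER VIEW v1 AS
        SELECT COUNT(*) AS c FROM secret_t;
    -- Pre-fix, an invoker without SELECT rights on secret_t could run
    -- this, since no column of secret_t appears outside the view
    -- definition; the fix checks access to the underlying tables too.
    SELECT c FROM v1;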
-