- 17 Sep, 2009 1 commit
Staale Smedseng authored
with gcc 4.3.2. This is the fifth patch cleaning up more GCC warnings about variables being used before they are initialized, using the new macro UNINIT_VAR().
- 04 Sep, 2009 1 commit
Sergey Glukhov authored
Memory allocated in TMP_TABLE_PARAM::copy_field is not cleaned up. The fix is to clean up the TMP_TABLE_PARAM::copy_field array in JOIN::destroy.
- 03 Sep, 2009 1 commit
Georgi Kodinov authored
function, file sql_base.cc. When uncacheable queries are written to a temp table, the optimizer must preserve the original JOIN structure, because it re-uses that JOIN structure to read from the resulting temporary table. This was done only for uncacheable sub-queries, but top-level queries can also benefit from this mechanism, especially if they're using index access and need a reset. Fixed by not limiting the saving of the JOIN structure to subqueries exclusively. Added a new test file to extend the existing (large) subquery.test.
- 27 Aug, 2009 1 commit
Georgi Kodinov authored
field references. This error requires a combination of factors:
1. An "impossible WHERE" in the outermost SELECT
2. An aggregate in the outermost SELECT
3. A correlated subquery with a WHERE clause that includes an outer field reference as a top-level sargable WHERE predicate

When JOIN::optimize detects an "impossible WHERE" it bails out without doing the rest of the work and initializations; in particular it does not call make_join_statistics(), which fills in various structures for each referenced table. When processing the result of the "impossible WHERE", the query must still send a single row of data if there are aggregate functions in it. In this case the server marks all the aggregates as having received no rows and calls the relevant Item::val_xxx() method on the SELECT list. However, if this SELECT list happens to contain a correlated subquery, this subquery is evaluated in normal evaluation mode. And if this correlated subquery has a reference to a field from the outermost "impossible WHERE" SELECT, add_key_fields will mistakenly consider the outer field reference as a "local" field reference when looking for sargable predicates. But since the SELECT that the outer field reference refers to is not completely initialized, due to the "impossible WHERE" at that level, we get a NULL pointer dereference.

Fixed by using a better condition to discover whether a field is "local" to the SELECT level being processed. It's not enough to look for OUTER_REF_TABLE_BIT in this case, since for outer references to constant tables Item_field::used_tables() will return 0 regardless of whether the field reference is from the local SELECT or not.
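As a rough illustration of the triggering pattern, here is a minimal sketch; the tables t1/t2 and the columns a, b, c are hypothetical and not taken from the original bug report:

    -- outermost SELECT: an "impossible WHERE", an aggregate, and a correlated
    -- subquery whose WHERE uses the outer field t1.a as a top-level predicate
    SELECT MAX(t1.b),
           (SELECT t2.c FROM t2 WHERE t2.c = t1.a LIMIT 1)
    FROM t1
    WHERE 1 = 0;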
- 24 Jul, 2009 1 commit
Alexey Kopytov authored
In create_myisam_from_heap() mark all errors as fatal except HA_ERR_RECORD_FILE_FULL for a HEAP table. Not doing so could lead to problems, e.g. when a temporary MyISAM table gets overrun due to its MAX_ROWS limit while executing INSERT/REPLACE IGNORE ... SELECT. The SELECT execution was aborted, but the error was converted to a warning due to the IGNORE clause, so neither an 'ok' nor an 'error' packet could be sent back to the client. This condition led to a hanging client when using a 5.0 server, or an assertion failure in 5.1.
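A minimal sketch of the kind of statement described above; the table names (tmp_t, big_table), the tiny MAX_ROWS value and whether it actually overflows are illustrative assumptions only:

    -- a small temporary MyISAM table as the INSERT IGNORE target
    CREATE TEMPORARY TABLE tmp_t (a INT) ENGINE=MyISAM MAX_ROWS=2;
    -- if the SELECT produces more rows than the table can hold, the resulting
    -- "table is full" error is downgraded to a warning by IGNORE
    INSERT IGNORE INTO tmp_t SELECT a FROM big_table;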
- 21 Jul, 2009 2 commits
MySQL Build Team authored
> ------------------------------------------------------------
> revno: 2772
> revision-id: joro@sun.com-20090615133815-eb007p5793in33p5
> parent: joro@sun.com-20090612140659-4hj1tta9p8wvcw4k
> committer: Georgi Kodinov <joro@sun.com>
> branch nick: B44810-5.0-bugteam
> timestamp: Mon 2009-06-15 16:38:15 +0300
> message:
> Bug #44810: index merge and order by with low sort_buffer_size
> crashes server!
>
> The problem affects the scenario when index merge is followed by a filesort
> and the sort buffer is not big enough for all the sort keys.
> In this case the filesort function will read the data to the end through the
> index merge quick access method (and thus closing the cursor etc),
> but will leave the pointer to the quick select method in place.
> It will then create a temporary file to hold the results of the filesort and
> will add it as a sort output file (in sort.io_cache).
> Note that filesort will copy the original 'sort' structure in an automatic
> variable and restore it after it's done.
> As a result at exiting filesort() we have a sort.io_cache filled in and
> nothing else (as a result of close of the cursors at end of reading data
> through index merge).
> Now create_sort_index() will note that there is a select and will clean it up
> (as it's been used already by filesort() reading the data in). While doing that
> a special case in the index merge destructor will clean up the sort.io_cache,
> assuming it's an output of the index merge method and is not needed anymore.
> As a result the code that tries to read the data back from the filesort output
> will get no data in both memory and disk and will crash.
>
> Fixed similarly to how filesort() does it: by copying the sort.io_cache structure
> to a local variable, removing the pointer to the io_cache (so that it's not freed
> by QUICK_INDEX_MERGE_SELECT::~QUICK_INDEX_MERGE_SELECT) and restoring the original
> structure (together with the valid pointer) after the cleanup is done.
> This is a safe thing to do because all the structures are already cleaned up by
> hitting the end of the index merge's read method (QUICK_INDEX_MERGE_SELECT::get_next())
> and the cleanup code being written in a way that tolerates repeating cleanups.
MySQL Build Team authored
> ------------------------------------------------------------
> revno: 2733
> revision-id: gshchepa@mysql.com-20090430192037-9p1etcynkglte2j3
> parent: aelkin@mysql.com-20090430143246-zfqaz0t7uoluzdz2
> committer: Gleb Shchepa <gshchepa@mysql.com>
> branch nick: mysql-5.0-bugteam
> timestamp: Fri 2009-05-01 00:20:37 +0500
> message:
> Bug #37362: Crash in do_field_eq
>
> EXPLAIN EXTENDED of nested query containing a error:
>   1054 Unknown column '...' in 'field list'
> may cause a server crash.
>
> Parse error like described above forces a call to
> JOIN::destroy() on malformed subquery.
> That JOIN::destroy function closes and frees temporary
> tables. However, temporary fields of these tables
> may be listed in st_select_lex::group_list of outer
> query, and that st_select_lex may not cleanup them
> properly. So, after the JOIN::destroy call that
> st_select_lex::group_list may have Item_field
> objects with dangling pointers to freed temporary
> table Field objects. That caused a crash.
- 16 Jul, 2009 1 commit
Georgi Kodinov authored
- 17 Jun, 2009 1 commit
Staale Smedseng authored
with gcc 4.3.2. Compiling MySQL with gcc 4.3.2 and later produces a number of warnings, many of which are new with the recent compiler versions. This bug will be resolved in more than one patch to limit the size of changesets. This is the second patch, fixing more of the warnings.
- 15 Jun, 2009 1 commit
Georgi Kodinov authored
crashes server! The problem affects the scenario when index merge is followed by a filesort and the sort buffer is not big enough for all the sort keys. In this case the filesort function will read the data to the end through the index merge quick access method (and thus closing the cursor etc), but will leave the pointer to the quick select method in place. It will then create a temporary file to hold the results of the filesort and will add it as a sort output file (in sort.io_cache).

Note that filesort will copy the original 'sort' structure in an automatic variable and restore it after it's done. As a result, at exiting filesort() we have a sort.io_cache filled in and nothing else (as a result of the close of the cursors at the end of reading data through index merge). Now create_sort_index() will note that there is a select and will clean it up (as it's been used already by filesort() reading the data in). While doing that, a special case in the index merge destructor will clean up the sort.io_cache, assuming it's an output of the index merge method and is not needed anymore. As a result, the code that tries to read the data back from the filesort output will get no data in both memory and disk and will crash.

Fixed similarly to how filesort() does it: by copying the sort.io_cache structure to a local variable, removing the pointer to the io_cache (so that it's not freed by QUICK_INDEX_MERGE_SELECT::~QUICK_INDEX_MERGE_SELECT) and restoring the original structure (together with the valid pointer) after the cleanup is done. This is a safe thing to do because all the structures are already cleaned up by hitting the end of the index merge's read method (QUICK_INDEX_MERGE_SELECT::get_next()), and the cleanup code is written in a way that tolerates repeated cleanups.
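A hedged sketch of the query shape being described, with a deliberately small sort buffer; the table t1, its indexed columns key1/key2, and the unindexed column non_key_col are illustrative assumptions:

    SET SESSION sort_buffer_size = 32768;    -- small sort buffer
    SELECT * FROM t1
    WHERE key1 = 1 OR key2 = 2               -- candidates for index_merge
    ORDER BY non_key_col;                    -- forces a filesort on top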
- 04 Jun, 2009 1 commit
Tatiana A. Nurnberg authored
Holding on to the temporary inno hash index latch is an optimization in many cases, but a pessimization in some others. Release temporary latches for those corner cases we (or rather, our customers, thanks!) have identified, that is, when we are about to do something that might take a really long time, like REPAIR or filesort.
- 30 Apr, 2009 3 commits
Gleb Shchepa authored
EXPLAIN EXTENDED of a nested query containing an error: 1054 Unknown column '...' in 'field list' may cause a server crash. A parse error like the one described above forces a call to JOIN::destroy() on the malformed subquery. That JOIN::destroy function closes and frees temporary tables. However, temporary fields of these tables may be listed in st_select_lex::group_list of the outer query, and that st_select_lex may not clean them up properly. So, after the JOIN::destroy call, that st_select_lex::group_list may have Item_field objects with dangling pointers to freed temporary table Field objects. That caused a crash.
Daniel Fischer authored
Daniel Fischer authored
- 01 Apr, 2009 1 commit
Gleb Shchepa authored
Original commentary: Bug #37348: Crash in or immediately after JOIN::make_sum_func_list. The optimizer pulls up aggregate functions which should be aggregated in an outer select. At some point it may substitute such a function for a field in the temporary table. The setup_copy_fields function doesn't take this into account and may overrun the copy_field buffer. Fixed by filtering out the fields referenced through the specialized reference for aggregates (Item_aggregate_ref). Added an assertion to make sure bugs that cause a similar discrepancy don't go undetected.
- 19 Feb, 2009 1 commit
Georgi Kodinov authored
connections. The problem is that tables can enter the open table cache for a thread without being properly cleaned up. This can happen if make_join_statistics() fails to read a const table because of e.g. a deadlock. It does set a member of the TABLE structure to a value it allocates, but it doesn't clean up this setting on error, nor does it set the rest of the members in JOIN to allow for automatic cleanup. As a result, when such an error occurs and the next statement re-uses the table from the open tables cache, it will get it with TABLE::reginfo.join_tab pointing to a memory area that has been freed. Fixed by making sure make_join_statistics() cleans up TABLE::reginfo.join_tab on error.
- 10 Feb, 2009 1 commit
Ignacio Galarza authored
- Remove bothersome warning messages. This change focuses on the warnings that are covered by the ignore file: support-files/compiler_warnings.supp.
- Strings are guaranteed to be max uint in length.
- 05 Feb, 2009 1 commit
Gleb Shchepa authored
ORDER BY could cause a server crash. Dependent subqueries like SELECT COUNT(*) FROM t1, t2 WHERE t2.b IN (SELECT DISTINCT t2.b FROM t2 WHERE t2.b = t1.a) caused a memory leak proportional to the number of outer rows. The make_simple_join() function has been converted to a JOIN class method so that it stores the join_tab_reexec and table_reexec values in the parent join only (make_simple_join of tmp_join may access these values via the 'this' pointer of the parent JOIN). NOTE: this patch doesn't include a standard test case (this is an "out of memory" bug). See the bug #42037 page for test cases.
- 28 Jan, 2009 1 commit
Gleb Shchepa authored
messed up. "ROW(...) IN (SELECT ... FROM DUAL)" always returned TRUE. Item_in_subselect::row_value_transformer rewrites "ROW(...) IN SELECT" conditions into the "EXISTS (SELECT ... HAVING ...)" form. For a subquery from the DUAL pseudo-table the resulting HAVING condition is an expression on constant values, so a further transformation with optimize_cond() eliminates this HAVING condition and resets JOIN::having to NULL. Then JOIN::exec treated that NULL as an always-true HAVING, and that caused the bug. To distinguish an optimized-out "HAVING TRUE" clause from "HAVING FALSE" we already have the JOIN::having_value flag. However, JOIN::exec() ignored JOIN::having_value as described above, as if it were always set to COND_TRUE. The JOIN::exec method has been modified to take into account the value of the JOIN::having_value field.
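A minimal illustration of the behaviour described above (the specific constant values are hypothetical):

    -- the row subquery selects constants from DUAL, so the rewritten HAVING is
    -- constant and gets optimized away; before the fix this returned 1 (TRUE)
    SELECT ROW(1, 2) IN (SELECT 3, 4 FROM DUAL);   -- expected result: 0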
- 13 Jan, 2009 1 commit
Georgi Kodinov authored
The greedy optimizer tracks the current level of nested joins and the position inside them by setting and maintaining a state that's global for the whole FROM clause. This state was correctly maintained inside the selection of the next partial-plan table (in best_extension_by_limited_search()). greedy_search() also moves the current position by adding the last partial-match table when there are not enough tables in the partial plan found by best_extension_by_limited_search(). This may require an update of the global state variables that describe the current position in the plan if the last table placed by greedy_search is not a top-level join table.

Fixed by updating the state after placing the partial-plan table in greedy_search() in the same way this is done on entering best_extension_by_limited_search(). Fixed the signature of the function called to update the state: check_interleaving_with_nj.
- 24 Dec, 2008 1 commit
Sergey Glukhov authored
A table could be marked dependent because it is either (1) an inner table of an outer join, or (2) part of a STRAIGHT_JOIN. In the STRAIGHT_JOIN case table->maybe_null should not be assigned. The fix is to set st_table::maybe_null to 'true' only for those tables which are used in an outer join.
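A small sketch of the distinction; tables t1/t2 and column a are hypothetical:

    -- t2 must NOT be marked maybe_null here: STRAIGHT_JOIN only fixes the join order
    SELECT * FROM t1 STRAIGHT_JOIN t2 ON t1.a = t2.a;
    -- t2 legitimately is maybe_null here: it is the inner table of an outer join
    SELECT * FROM t1 LEFT JOIN t2 ON t1.a = t2.a;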
- 10 Dec, 2008 1 commit
Sergey Glukhov authored
Bug#37671 crash on prepared statement + cursor + geometry + too many open files! If mysql_execute_command() returns an error, then free the materialized_cursor object. is_rnd_inited is added to satisfy the rnd_end() assertion (the handler may be uninitialized in some cases).
- 09 Dec, 2008 2 commits
Georgi Kodinov authored
Sergey Glukhov authored
If a table has bit fields, then the uneven bits (if any exist) are stored in the null-bits area, so we need to copy the null bits when uneven bit fields are present.
- 24 Nov, 2008 1 commit
Georgi Kodinov authored
ONLY_FULL_GROUP_BY. The check for non-aggregated columns in queries with aggregate functions but without GROUP BY was treating all parts of the query as if they were in the SELECT list. Fixed by ignoring the non-aggregated fields in the WHERE clause.
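A hedged example of a query the over-strict check used to reject; table t1 and columns a/b are assumptions:

    SET SESSION sql_mode = 'ONLY_FULL_GROUP_BY';
    -- the non-aggregated column a appears only in the WHERE clause,
    -- so this should be accepted even without a GROUP BY
    SELECT MAX(b) FROM t1 WHERE a = 1;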
- 21 Nov, 2008 1 commit
Alexey Botchkov authored
memory allocation error checks added for functions calling insert_dynamic().

per-file messages (Bug#25058, ignored return codes in memory allocation functions; out-of-memory errors handled):
  myisam/mi_delete.c
  myisam/mi_write.c
  server-tools/instance-manager/instance_options.cc
  sql/slave.cc
  sql/sp_head.cc
  sql/sp_head.h
  sql/sp_pcontext.cc
  sql/sp_pcontext.h
  sql/sql_select.cc
  sql/sql_yacc.yy
- 17 Oct, 2008 1 commit
Georgi Kodinov authored
fails after the first time. Two separate problems:
1. When flattening joins, the linked list used for name resolution (next_name_resolution_table) was not updated. Fixed by updating the pointers when extending the table list.
2. The items created by expanding a * (star) as a column reference were marked as fixed, but no cached table was assigned to them (unlike what Item_field::fix_fields does). Fixed by assigning a cached table (so the re-preparation is done faster).
Note that the fix for #2 hides the fix for #1 in most cases (except when a table reference cannot be cached).
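Since the message mentions re-preparation, one possible way to hit a "works once, then fails" pattern is re-executing a prepared statement that expands *; this repro shape is purely an assumption, not the original test case:

    PREPARE s FROM 'SELECT * FROM t1 NATURAL JOIN t2';
    EXECUTE s;   -- first execution works
    EXECUTE s;   -- re-execution exercised the stale name-resolution/cached-table state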
- 16 Oct, 2008 1 commit
Gleb Shchepa authored
Server crashed during a sort order optimization of a dependent subquery: SELECT (SELECT t1.a FROM t1, t2 WHERE t1.a = t2.b AND t2.a = t3.c ORDER BY t1.a) FROM t3; The bitmap of tables used by a reference to an outer table column has, in addition to the regular table bit, the OUTER_REF_TABLE_BIT bit set. The only_eq_ref_tables function traverses this map bit by bit simultaneously with the join->map2table list. Obviously join->map2table never contains an entry for the OUTER_REF_TABLE_BIT pseudo-table, so the server crashed there. The only_eq_ref_tables function has been modified to traverse regular table bits only, like the update_depend_map function does (resetting OUTER_REF_TABLE_BIT there would be enough, but resetting the whole set of PSEUDO_TABLE_BITS is used there for certainty).
- 10 Oct, 2008 1 commit
Gleb Shchepa authored
with COALESCE and JOIN. The server returned to a client the VARBINARY column type instead of the DATE type for a result of the COALESCE, IFNULL, IF, CASE, GREATEST or LEAST functions if that result was filesorted in an anonymous temporary table during query execution. For example: SELECT COALESCE(t1.date1, t2.date2) AS result FROM t1 JOIN t2 ON t1.id = t2.id ORDER BY result; To create a column of the various date/time types in a temporary table, the create_tmp_field_from_item() function uses the Item::tmp_table_field_from_field_type() method call. However, fields of the MYSQL_TYPE_NEWDATE type were missed there, and VARBINARY columns were created by default. The necessary condition has been added.
- 27 Aug, 2008 1 commit
Evgeny Potemkin authored
used causes server crash. When the loose index scan access method is used, values of aggregated functions are precomputed by it. Aggregation of such functions shouldn't be performed in this case, and the functions should be treated as normal ones. The create_tmp_table function wasn't taking this into account, and this led to a crash if a query had MIN/MAX aggregate functions and employed a temporary table and loose index scan. Now the JOIN::exec and create_tmp_table functions treat MIN/MAX aggregate functions as normal ones when the loose index scan is used.
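A hedged sketch of a query shape that combines the two ingredients (MIN/MAX served by a loose index scan, plus a temporary table); the table t1 with an index on (a, b) is an assumption:

    -- GROUP BY a with MIN(b)/MAX(b) is a candidate for the loose index scan,
    -- while DISTINCT over the grouped result goes through a temporary table
    SELECT DISTINCT MIN(b), MAX(b) FROM t1 GROUP BY a;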
- 19 Aug, 2008 1 commit
Georgi Kodinov authored
is used causes server crash. Revert the fix: unstable test case revealed by pushbuild.
- 13 Aug, 2008 1 commit
Evgeny Potemkin authored
used causes server crash. When the loose index scan access method is used, values of aggregated functions are precomputed by it. Aggregation of such functions shouldn't be performed in this case, and the functions should be treated as normal ones. The create_tmp_table function wasn't taking this into account, and this led to a crash if a query had MIN/MAX aggregate functions and employed a temporary table and loose index scan. Now the JOIN::exec and create_tmp_table functions treat MIN/MAX aggregate functions as normal ones when the loose index scan is used.
- 26 Jul, 2008 1 commit
Igor Babaev authored
Calling List<Cached_item>::delete_elements for the same list twice caused a crash of the server in the function JOIN::cleanup. Ensured that delete_elements() in JOIN::cleanup is called only once.
- 23 Jul, 2008 1 commit
Georgi Kodinov authored
Range scan in descending order for c <= <col> <= c type of ranges was ignoring the DESC flag. However, some engines like InnoDB have the primary key parts as a suffix for every secondary key. When such a primary key suffix is used for ordering, ignoring the DESC flag is not valid. But we generally would like to do this because it's faster. Fixed by performing only a reverse scan if the primary key is used. Removed some dead code in the process.
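A rough sketch of the ordering pattern in question; the InnoDB table t1 with primary key pk and a secondary index on a is an assumption:

    -- the secondary index on (a) implicitly ends with the primary key (pk), so a
    -- DESC order over that suffix must not be silently turned into a forward scan
    SELECT * FROM t1
    WHERE a BETWEEN 10 AND 20
    ORDER BY a DESC, pk DESC;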
- 15 Jul, 2008 1 commit
Sergey Petrunia authored
- In QUICK_INDEX_MERGE_SELECT::read_keys_and_merge: when we got table->sort from Unique, tell init_read_record() not to use rr_from_cache() because (a) rowids are already sorted and (b) it might be that the data is used by filesort(), which will need record rowids (which rr_from_cache() cannot provide).
- Fully de-initialize the table->sort read in QUICK_INDEX_MERGE_SELECT::get_next(). This fixes BUG#35477. (bk trigger: file as fix for BUG#35478).
- 16 May, 2008 1 commit
gkodinov/kgeorge@magare.gmz authored
with dependent subqueries. An IN subquery is executed on EXPLAIN when it's not correlated. If the subquery required a temporary table for its execution, not all the internal structures were restored from pointing to the items of the temporary table back to the items of the subquery. Fixed by restoring the ref array when a temporary table was used in executing the IN subquery during EXPLAIN EXTENDED.
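A hedged sketch of the pattern: an uncorrelated IN subquery that needs a temporary table, examined with EXPLAIN EXTENDED; tables t1/t2 and their columns are assumptions:

    -- the uncorrelated subquery materializes into a temporary table and
    -- is actually executed as part of EXPLAIN EXTENDED
    EXPLAIN EXTENDED
    SELECT * FROM t1
    WHERE t1.a IN (SELECT DISTINCT t2.a FROM t2 GROUP BY t2.b);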
- 22 Apr, 2008 1 commit
gshchepa/uchum@host.loc authored
impossible WHERE/HAVING clause (subselect_single_select_engine::exec). Allocation and initialization of the joined table list t1, t2, ... of subqueries like: NOT IN (SELECT ... FROM t1,t2,... WHERE 0) is optimized out; however, the server still tries to traverse this list.
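For completeness, one full statement built around the fragment quoted in the message (the outer table t3 and the column names are assumptions):

    -- the subquery's WHERE is constant-false, so its join list is never
    -- allocated or initialized, yet it used to be traversed anyway
    SELECT * FROM t3
    WHERE t3.a NOT IN (SELECT t1.a FROM t1, t2 WHERE 0);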
- 28 Mar, 2008 1 commit
iggy@amd64.(none) authored
- Backported the 5.1 DBUG to 5.0.
- Avoid memory cleanup race on Windows client for CTRL-C.
- 27 Mar, 2008 1 commit
evgen@moonbone.local authored
Mixing aggregate functions and non-grouping columns is not allowed in the ONLY_FULL_GROUP_BY mode. However, in some cases the error wasn't thrown because of an insufficient check. In order to check more thoroughly, the new algorithm employs a list of outer fields used in a sum function and a SELECT_LEX::full_group_by_flag. Each non-outer field is checked to find out whether it's aggregated or not, and the current select is marked accordingly. All outer fields that are used under an aggregate function are added to the Item_sum::outer_fields list and later checked by the Item_sum::check_sum_func function.
- 26 Mar, 2008 1 commit
gshchepa/uchum@host.loc authored
View definition as SELECT ... FROM DUAL WHERE ... has valid syntax, but use of such a view in SELECT or SHOW CREATE VIEW syntax causes an unexpected syntax error. The server omits the FROM DUAL clause when storing the view body string in a .frm file for further evaluation. However, the syntax of a SELECT-without-FROM query is more restrictive than the SELECT ... FROM DUAL syntax and doesn't allow the WHERE clause. NOTE: this syntax difference is not documented. The view registration procedure has been modified to preserve the original structure of the view's body.
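A short illustration of the described behaviour (the view and column names are hypothetical):

    CREATE VIEW v1 AS SELECT 1 AS c FROM DUAL WHERE 1;   -- valid syntax
    SELECT * FROM v1;        -- failed with an unexpected syntax error before the fix
    SHOW CREATE VIEW v1;     -- likewise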