- 01 Apr, 2013 2 commits
- 31 Mar, 2013 2 commits

- Chaithra Gopalareddy authored
- Chaithra Gopalareddy authored
  Bug #16347343 : CRASH, GROUP_CONCAT, DERIVED TABLES

  Problem:
  A SELECT query inside a GROUP_CONCAT function that contains an outer
  reference results in a crash.

  Analysis:
  In Item_func_group_concat::add, we do not check whether the return value
  of get_tmp_table_field can be NULL for a non-const item. This can happen
  for a query with an outer reference. While resolving the outer reference
  in the query inside the GROUP_CONCAT function, we set "const_item_cache"
  to false. As a result, the call to const_item() from
  Item_func_group_concat::add returns false, and we go on to check whether
  the field is NULL, resulting in the crash. get_tmp_table_field does not
  return NULL for items of type Item_field, Item_result_field and
  Item_ref; for all other items it returns NULL.

  Solution:
  Check the return value of get_tmp_table_field before accessing the field
  contents.

  sql/item_sum.cc:
    Check the return value of get_tmp_table_field before accessing it.
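
  A minimal sketch of that guard, using simplified stand-ins for the
  server's Item and Field classes (the member names here are assumptions,
  not the real sql/item.h declarations):

    struct Field {
      bool is_null() const { return false; }
    };

    struct Item {
      bool const_item_cache = true;
      bool const_item() const { return const_item_cache; }
      // Returns nullptr for anything that is not an Item_field,
      // Item_result_field or Item_ref -- e.g. an item whose outer
      // reference cleared const_item_cache.
      virtual Field *get_tmp_table_field() { return nullptr; }
      virtual ~Item() {}
    };

    // The add() path: test the pointer before dereferencing it.
    bool value_is_null(Item *item) {
      if (!item->const_item()) {
        Field *field = item->get_tmp_table_field();
        // was: field->is_null() with no null check -> crash
        if (field != nullptr && field->is_null())
          return true;  // GROUP_CONCAT skips NULL values
      }
      return false;
    }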

- 30 Mar, 2013 1 commit

- Chaithra Gopalareddy authored
  Problem:
  INSERT with ON DUPLICATE KEY UPDATE on a view crashes the server.

  Analysis:
  During an insert into a view, we do the following.

  For insert fields and values:
  1. Resolve insert values.
  2. Resolve insert fields.
  3. Check if the fields and values are all from a single table of a view
     in case of INSERT ... VALUES. Do not check the same in case of
     INSERT ... SELECT, as the values can be read from a different table
     than that of the view.

  For the update fields (if ON DUPLICATE KEY UPDATE is used):
  1. Create a name resolution context with 'table_list' only.
  2. Resolve update fields in this context.
  3. Check if update fields and values are from the same table as the
     insert fields.
  4. Get the next name resolution context and concatenate it with the
     previous one.
  5. Resolve update values in this context, as we can refer to other
     tables in the values clause.

  Note that at step 3 (of the update fields), we check the 'used_tables'
  map of the update values without resolving them first. Hence the crash.

  Fix:
  At step 3, do not pass the update values when checking whether it is a
  single-table view update, as update values can refer to other tables.
  The code has been reorganized to work like check_insert_fields.

  sql/sql_insert.cc:
    Do not pass update_values as they are not resolved yet.
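
  A hedged sketch of the reorganized check: the single-table test walks
  only items that are already resolved, and the update values are left out
  on purpose (the types below are illustrative reductions, not the
  server's):

    #include <vector>

    typedef unsigned long table_map;  // bitmap of tables

    struct Item {
      table_map used_tables_map;      // only meaningful once resolved
      table_map used_tables() const { return used_tables_map; }
    };

    // True when every update field belongs to at most one underlying
    // table. Update values are deliberately not passed in: they are
    // resolved later and may legally refer to other tables.
    bool single_table_check(const std::vector<Item *> &fields) {
      table_map map = 0;
      for (Item *field : fields)
        map |= field->used_tables();
      return (map & (map - 1)) == 0;  // at most one bit set
    }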

- 29 Mar, 2013 7 commits

- Annamalai Gurusami authored
- Annamalai Gurusami authored
  TABLE/KEY RELATIONS

  DICT_FK_MAX_RECURSIVE_LOAD was reduced from 250 to 33 in rb#2058, but in
  an optimized build that recursion depth is still too deep and resulted
  in a stack overflow. So the depth is now reduced to 20.

- sayantan dutta authored
- unknown authored
  No commit message
- unknown authored
  No commit message
- unknown authored
  No commit message
- Venkatesh Duggirala authored
  SCHEDULER DROPS EVENTS

  Problem:
  On a semisync-enabled server (master/slave), if the event scheduler
  drops an event after completion, the server crashes.

  Analysis:
  If an event is created with the ON COMPLETION NOT PRESERVE clause, the
  event scheduler deletes the event upon its completion (expiration) and
  the thread object is destroyed. In the destructor of the thread object,
  the mysys_var member is explicitly set to zero. Later, in the same
  destructor call (same execution path), when cleanup runs on a
  semisync-enabled server, THD::mysys_var is accessed by
  THD::enter_cond(), which causes the server to crash.

  Fix:
  mysys_var should not be explicitly set to zero; doing so is not
  required.

  sql/sql_class.cc:
    mysys_var should not be explicitly set to zero.
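
  The shape of the fix, as a sketch over reduced, hypothetical types:

    struct st_my_thread_var { /* mutexes, conditions, ... */ };

    struct THD {
      st_my_thread_var *mysys_var;

      void enter_cond() {
        /* dereferences mysys_var; semisync cleanup reaches this from
           the destructor's execution path, so the pointer must still
           be valid here */
      }

      ~THD() {
        // was: mysys_var = 0;  -- removed: zeroing it is not required,
        // and cleanup on this same path still calls enter_cond()
      }
    };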

- 28 Mar, 2013 10 commits

- Georgi Kodinov authored
- Georgi Kodinov authored
  Fixed the get_data_size() methods for multi-point features to check
  properly for the end of their respective data arrays. Extended the point
  checking function to take an optional third argument so that cases where
  there is additional data in each array element (besides the point data
  itself) can be covered by the helper function. Fixed the three places
  where such an offset was present to use the proper checking helper
  function. Test cases added. Fixed review comments.
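
  A sketch of what such a helper can look like; POINT_DATA_SIZE and the
  parameter names are assumptions rather than the server's actual
  definitions:

    #include <cstddef>
    #include <cstdint>

    const size_t POINT_DATA_SIZE = 16;  // two 8-byte doubles: x and y

    // True when fewer than n_points elements fit between cur and end.
    // extra_bytes is the optional third argument: per-element payload
    // besides the point data itself (e.g. a WKB header per element).
    bool not_enough_points(const char *cur, const char *end,
                           uint32_t n_points, size_t extra_bytes = 0) {
      size_t elem_size = POINT_DATA_SIZE + extra_bytes;
      size_t available = static_cast<size_t>(end - cur);
      // divide rather than multiply, so n_points cannot overflow the check
      return n_points > available / elem_size;
    }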

- Nisha Gopalakrishnan authored
- Nisha Gopalakrishnan authored
  REGULAR SQL VS PREPARED STATEMENT

  Analysis:
  When passing user variables as parameters to a prepared statement, the
  IF() function evaluation turns out to be incorrect. Consider the
  example:

    SET @var1='0.038687';
    SELECT @var1 , IF( @var1 = 0 , 1 , @var1 ) AS sqlif ;
    +----------+----------+
    | @var1    | sqlif    |
    +----------+----------+
    | 0.038687 | 0.038687 |
    +----------+----------+

  Executing a prepared statement where the parameters are supplied:

    PREPARE fail_stmt FROM "SELECT ? , IF( ? = 0 , 1 , ? ) AS ps_if_fail" ;
    EXECUTE fail_stmt USING @var1 , @var1 , @var1 ;
    +----------+------------+
    | ?        | ps_if_fail |
    +----------+------------+
    | 0.038687 | 1          |
    +----------+------------+
    1 row in set (0.00 sec)

  In a regular statement, or when executing the prepared statement without
  passing parameters, the decimal precision is set for the user variable
  of type string, and the comparison function used for evaluation takes
  that precision into account. But when the prepared statement is executed
  with the parameters supplied, the decimal precision was not set, so a
  different comparison function was chosen, one that compares the absolute
  (numeric) values.

  Fix:
  Set the 'decimals' field of Item_param to the default value, which is
  the maximum number of decimals (NOT_FIXED_DEC). This is done for cases
  where strings are converted to numeric form within certain functions, so
  the value is not rounded off during comparison, ensuring correct
  evaluation.
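
  A toy illustration (not the server's comparator) of why the 'decimals'
  value changes the outcome; NOT_FIXED_DEC's value is taken from the
  description above:

    #include <cstdio>
    #include <cstdlib>

    const unsigned NOT_FIXED_DEC = 31;  // maximum number of decimals

    // With decimals == 0 the string parameter is effectively compared as
    // an integer, so "0.038687" equals 0; with full precision it is
    // compared as a real number.
    bool param_equals_zero(const char *value, unsigned decimals) {
      if (decimals == 0)
        return atol(value) == 0;  // "0.038687" -> 0: wrong IF() branch
      return atof(value) == 0.0;  // 0.038687 != 0: correct branch
    }

    int main() {
      printf("%d\n", param_equals_zero("0.038687", 0));              // 1
      printf("%d\n", param_equals_zero("0.038687", NOT_FIXED_DEC));  // 0
      return 0;
    }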

- Sujatha Sivakumar authored
- Sujatha Sivakumar authored
  NO ERRORS REPORTED

  Problem:
  Errors from my_b_fill are ignored. The MYSQL_BIN_LOG::write_cache code
  assumes that a return value of 0 from my_b_fill always means
  end-of-cache, but that is incorrect: it can also indicate an error, and
  the error is ignored. Other callers of my_b_fill do not check for errors
  either: my_b_copy_to_file, and possibly my_b_gets.

  Fix:
  An error handler is already present to check the "cache" error reported
  during the MYSQL_BIN_LOG::write_cache call. Error handlers are now added
  for my_b_copy_to_file and my_b_gets. When the cache read fails inside
  my_b_fill(), info->error is set to -1, so a check of info->error is
  added to the above two callers upon return.

  mysys/mf_iocache2.c:
    Added a check for cache->error and simulation of a cache read failure.
  mysys/my_read.c:
    Simulation of a read failure.
  sql/log_event.cc:
    Added debug simulation.
  sql/sql_repl.cc:
    Added a check for a cache error.
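
  The added pattern, sketched over a reduced model of IO_CACHE (the real
  one lives in mysys; the names here are simplified stand-ins):

    #include <cstddef>

    struct IO_CACHE_LIKE {
      int error = 0;  // set to -1 when a cache read fails
      size_t fill() { /* refill buffer from disk */ return 0; }
      // returns 0 on end-of-cache AND on error
    };

    bool drain_cache(IO_CACHE_LIKE *info) {
      size_t bytes;
      while ((bytes = info->fill()) != 0) {
        /* ... consume bytes ... */
      }
      // A zero-byte fill alone is ambiguous: consult the error flag
      // before treating it as a clean end-of-cache.
      return info->error != -1;
    }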

- sayantan dutta authored
- Annamalai Gurusami authored
- Annamalai Gurusami authored
  TABLE/KEY RELATIONS

  Problem:
  When there are many tables linked together through foreign key
  constraints, loading one table will recursively open other tables. This
  can sometimes lead to a thread stack overflow, in which case the server
  exits. The stack overflow occurs when thread_stack is 196608 (the
  default value for 32-bit systems); it does not occur when thread_stack
  is set to 262144 (the default value for 64-bit systems).

  Solution:
  In InnoDB, the macro DICT_FK_MAX_RECURSIVE_LOAD defines the maximum
  number of tables that will be loaded recursively because of foreign key
  relations. It is currently set to 250. We can reduce this number to 33
  (anything more than 33 does not solve the problem for the default
  value), keeping it small enough that a thread stack overflow does not
  happen with the default settings. Reducing DICT_FK_MAX_RECURSIVE_LOAD
  does not affect the functionality of InnoDB; the tables are eventually
  loaded.

  rb#2058 approved by Marko
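
  A sketch of the recursion cap (the dictionary types here are
  hypothetical stand-ins; only the depth check mirrors the macro):

    #include <string>
    #include <vector>

    const unsigned FK_MAX_RECURSIVE_LOAD = 33;  // was 250

    struct dict_table {
      std::string name;
      std::vector<std::string> referenced_tables;  // FK parents
    };

    dict_table *load_table(const std::string &name, unsigned depth) {
      // Beyond the cap, stop adding stack frames; remaining parents are
      // simply loaded later, on first access, instead of on this stack.
      if (depth > FK_MAX_RECURSIVE_LOAD)
        return 0;
      dict_table *table = new dict_table();  // read definition from disk
      table->name = name;
      for (size_t i = 0; i < table->referenced_tables.size(); i++)
        load_table(table->referenced_tables[i], depth + 1);  // +1 frame
      return table;
    }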

- Annamalai Gurusami authored

- 27 Mar, 2013 7 commits

- Georgi Kodinov authored
- Georgi Kodinov authored
  The GIS WKB reader checked for the presence of enough data by first
  multiplying the number read (where it could overflow) and only then
  comparing it to the number of bytes available. This can overflow and
  effectively turn off the check.

  Fixed by:
  1. Introducing a new function that does division only, so no overflow is
     possible.
  2. Using the proper macros and parenthesizing them.
  3. Doing an in-line division check in the only place where the boundary
     check is done over a data structure other than a dense points array.
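
  A self-contained demonstration of the overflow and of the division-based
  fix (the function names are mine, not the server's):

    #include <cstdint>
    #include <cstdio>

    // Broken shape: the 32-bit product can wrap and slip under the limit,
    // effectively disabling the check.
    bool enough_data_broken(uint32_t n_items, uint32_t item_size,
                            uint32_t bytes_left) {
      return n_items * item_size <= bytes_left;  // may overflow
    }

    // Fixed shape: division only, no intermediate can overflow.
    bool enough_data(uint32_t n_items, uint32_t item_size,
                     uint32_t bytes_left) {
      return item_size == 0 || n_items <= bytes_left / item_size;
    }

    int main() {
      // 268435456 * 16 wraps to 0 in uint32_t, so the broken check passes.
      printf("%d\n", enough_data_broken(268435456u, 16u, 1024u)); // 1 (bad)
      printf("%d\n", enough_data(268435456u, 16u, 1024u));        // 0 (good)
      return 0;
    }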

- Nuno Carvalho authored
  Merge from mysql-5.1 into mysql-5.5.
- Nuno Carvalho authored
  Fixed possible uninitialized variable.
- Sujatha Sivakumar authored
- Sujatha Sivakumar authored
  --BINLOG-IGNORE-DB AND FULLY QUALIFIED TABLE

  Problem:
  An ALTER TABLE statement is not written to the binlog if the server is
  started with --binlog-ignore-db="some database" and fully qualified
  table names are used in the ALTER TABLE statement, altering a table
  outside the current database context.

  Analysis:
  The problem affects not only ALTER TABLE statements but all kinds of
  statements: once the current default database becomes NULL, none of the
  statements are binlogged. The current behaviour is that if the user has
  specified restrictions on which databases need to be replicated and the
  default database is not specified, the statement is not replicated. This
  means that NULL is considered equivalent to everything (default db =
  NULL implies "ignore, don't log the statement").

  Fix:
  NULL should not be considered equivalent to everything. Since the
  filtering criterion is not equal to NULL, the statement should be logged
  to the binlog.

  mysql-test/suite/rpl/r/rpl_loaddata_m.result:
    Earlier, when the default database was NULL, DROP TABLE was not
    logged. After this fix it is logged, and the DROP fails on the slave
    because the table creation was skipped by the master due to
    --binlog-ignore-db=test.
  mysql-test/suite/rpl/t/rpl_loaddata_m.test:
    Same as above.
  sql/rpl_filter.cc:
    Replaced DBUG_RETURN(0) with DBUG_RETURN(1).
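
  The observable shape of the filter change, sketched (a reduced stand-in
  for the replication filter, not its real interface):

    #include <cstddef>
    #include <cstring>

    // Returns true when the statement should be written to the binlog.
    bool db_ok(const char *db, const char *const *ignore_list,
               size_t ignore_count) {
      if (db == 0)
        return true;  // was: false -- a NULL default db matched everything
      for (size_t i = 0; i < ignore_count; i++)
        if (strcmp(db, ignore_list[i]) == 0)
          return false;  // the current database is explicitly ignored
      return true;
    }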

- Annamalai Gurusami authored
  TABLE/KEY RELATIONS

  Problem:
  When there are many tables linked together through foreign key
  constraints, loading one table will recursively open other tables. This
  can sometimes lead to a thread stack overflow, in which case the server
  exits. The stack overflow occurs when thread_stack is 196608 (the
  default value for 32-bit systems); it does not occur when thread_stack
  is set to 262144 (the default value for 64-bit systems).

  Solution:
  In InnoDB, the macro DICT_FK_MAX_RECURSIVE_LOAD defines the maximum
  number of tables that will be loaded recursively because of foreign key
  relations. It is currently set to 250. We can reduce this number to 33
  (anything more than 33 does not solve the problem for the default
  value), keeping it small enough that a thread stack overflow does not
  happen with the default settings. Reducing DICT_FK_MAX_RECURSIVE_LOAD
  does not affect the functionality of InnoDB; the tables are eventually
  loaded.

  rb#2058 approved by Marko

- 26 Mar, 2013 7 commits

- Andrei Elkin authored
- Andrei Elkin authored
- unknown authored
  No commit message
- Andrei Elkin authored
- Andrei Elkin authored
  When logging the first Query event referring to a user variable, the
  slave failed to log the user variable. At execution of a User_var event
  the slave applier thought the variable had already been logged. The
  misjudgement comes from a coincidence of query ids: the one the thread
  holds at User_var execution and the one the thread sees when applying
  the Query event. While the two naturally differ in the regular execution
  branch (the two computational events are separate, individual events),
  in the deferred applying case the User_var execution effectively belongs
  to the processing of its Query event.

  Fixed by storing the query id from User_var parsing time (when the
  decision to defer is taken) and temporarily substituting it for the
  actual query id at User_var execution time (along with its query). This
  manipulation mimics the behaviour of the regular applying branch.

  sql/log_event.cc:
    Store the parse-time query id in a new member of the event, to
    temporarily substitute for the actual query id at User_var execution
    time.
  sql/log_event.h:
    Added storage for keeping the query id in the User_var instance.
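
  A sketch of the substitution (a hypothetical reduction of the event
  applying path; the member names are mine):

    #include <cstdint>

    struct THD { uint64_t query_id; };

    struct User_var_event {
      uint64_t deferred_query_id;  // saved when deferral was decided

      void apply(THD *thd) {
        uint64_t saved = thd->query_id;
        thd->query_id = deferred_query_id;  // mimic the regular branch
        /* ... execute the user-variable assignment; any "already
           logged?" decision now sees the parse-time query id ... */
        thd->query_id = saved;              // restore for the Query event
      }
    };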

- Tor Didriksen authored
- Tor Didriksen authored
  Bug#13243248 CHECK FOR "STACK OVERRUN" DOESN'T WORK WITH GCC-4.6, SERVER
  CRASHES

  The existing check for stack direction may give wrong results with new
  versions of gcc at high optimization levels.

  Solution: backport the stack-direction check from 5.5.
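
  The classic probe looks roughly like this; note that comparing addresses
  of locals in different frames is outside the C++ standard's guarantees,
  which is exactly why the callee has to be hidden from the optimizer:

    #include <cstdio>

    // Must not be inlined, or the compiler may fold both locals into one
    // frame and the comparison becomes meaningless -- the gcc-4.6 failure
    // mode described above.
    __attribute__((noinline))
    static int probe(char *caller_local) {
      char callee_local;
      return &callee_local < caller_local ? -1 : 1;  // -1: grows down
    }

    int stack_direction() {
      char caller_local;
      return probe(&caller_local);
    }

    int main() {
      printf("stack grows %s\n",
             stack_direction() < 0 ? "downwards" : "upwards");
      return 0;
    }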

- 25 Mar, 2013 1 commit

- Manish Kumar authored
  Problem:
  When the slave was disconnected from the master, under certain
  conditions it would report, upon reconnect, that it received a packet
  larger than slave_max_allowed_packet, which causes replication to stop.

  Analysis:
  The reason for this failure is that on reconnect the slave takes
  max_allowed_packet from the master's mi->mysql object, which keeps
  max_allowed_packet at 1MB. This causes the slave to report the error on
  receiving a packet bigger than 1MB. START SLAVE on the slave fixes the
  problem, since it restarts the slave threads, which initializes
  max_allowed_packet to slave_max_allowed_packet.

  Fix:
  The problem is fixed by some code refactoring and the introduction of a
  new function that updates max_allowed_packet for the THD object of the
  slave thread and for mysql->options.max_allowed_packet.
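
  A sketch of such a helper (reduced, hypothetical types; the real change
  involves the slave thread's THD and mi->mysql):

    struct THD { unsigned long max_allowed_packet; };
    struct MYSQL {
      struct Options { unsigned long max_allowed_packet; } options;
    };

    // Called at slave-thread start AND on reconnect, so the connection
    // can never fall back to the master-side 1MB default.
    void set_slave_max_allowed_packet(THD *thd, MYSQL *mysql,
                                      unsigned long slave_limit) {
      thd->max_allowed_packet = slave_limit;
      mysql->options.max_allowed_packet = slave_limit;
    }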

- 22 Mar, 2013 3 commits

- Nirbhay Choubey authored
- Nirbhay Choubey authored
- Nirbhay Choubey authored