- 17 Sep, 2013 1 commit
-
-
Tor Didriksen authored
my_strnxfrm_win1250ch could write into dest[destlen], i.e. write a byte past the end of dest.
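As a rough illustration of this class of bug (a hedged sketch, not the actual my_strnxfrm_win1250ch code; names are invented), the destination index has to stay strictly below the buffer length:

```c
#include <stddef.h>

/* Sketch only: with the strict "< dstlen" bound the loop never touches
   dst[dstlen]; an off-by-one bound such as "<= dstlen" would write one
   byte past the end of the destination buffer. */
size_t copy_weights(unsigned char *dst, size_t dstlen,
                    const unsigned char *src, size_t srclen)
{
  size_t i;
  for (i = 0; i < dstlen && i < srclen; i++)
    dst[i] = src[i];
  return i;  /* number of bytes actually written */
}
```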
-
- 12 Sep, 2013 4 commits
-
-
Mattias Jonsson authored
Test change only. Removed --source include/not_windows_embedded.inc, which had been added because of that bug.
-
Satya Bodapati authored
-
Satya Bodapati authored
-
Satya Bodapati authored
Disable test case due to BUG#17446090.
-
- 11 Sep, 2013 1 commit
-
-
Satya Bodapati authored
IT IS DONE IN-PLACE With change buffer enabled, InnoDB doesn't write a transaction log record when it merges a record from the insert buffer to a secondary index page if the insertion is performed as an update-in-place. Fixed by logging the 'update-in-place' operation on secondary index pages. Approved by Marko. rb#2429
-
- 10 Sep, 2013 2 commits
-
-
mithun authored
WITH MY_B_VPRINTF() Issue: On an LP64 machine the maximum long value can be a 20-digit decimal value, but the intermediate buffer used in my_b_vprintf() is only 17 bytes long, which can lead to a buffer overflow. Solution: Increased the buffer from 17 to 32 bytes. The code is backported from 5.6. mysys/mf_iocache2.c: In function my_b_vprintf, increased the size of the local buff from 17 to 32 bytes.
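The arithmetic behind the new size can be checked with plain snprintf (an illustrative sketch, not the my_b_vprintf() code): the longest 64-bit decimal value needs 20 digits plus an optional sign and the terminating '\0', so 17 bytes is too small while 32 bytes leaves headroom.

```c
#include <stdio.h>
#include <limits.h>

int main(void)
{
  char buff[32];                 /* was 17 bytes in the buggy code path */
  long long v = LLONG_MIN;       /* worst case: "-9223372036854775808"  */
  int n = snprintf(buff, sizeof(buff), "%lld", v);
  printf("needed %d characters plus the NUL terminator: %s\n", n, buff);
  return 0;
}
```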
-
Tor Didriksen authored
For queries like update t1 set ... where <unique key> order by ... limit ..., we need to handle the fact that unique keys allow NULL values and hence can return more than one row. sql/opt_range.cc: When the unique key has multiple key parts, check each key_part for nullability, rather than only the first key part. (s/key->part ==/key_tree->part ==/) Also: revert the if() test, for better readability.
-
- 11 Sep, 2013 1 commit
-
-
Satya Bodapati authored
-
- 10 Sep, 2013 6 commits
-
-
mithun authored
WITH MY_B_VPRINTF() [Merge from 5.1]
-
Libing Song authored
-
Libing Song authored
Post-fix: suppress the new warning generated by the bug's fix.
-
Bjorn Munch authored
-
Libing Song authored
-
Libing Song authored
The dump thread may encounter an error when reading events from the active binlog file. Such errors may be temporary, so the dump thread tries to read the event again. However, it seeked to a wrong position, which caused some events to be sent twice. To fix the bug, prev_pos is now defined outside the while loop and is set to the correct position after every event is read successfully. This patch also makes binlog_can_be_corrupted more accurate: only binlogs that were not closed normally are marked binlog_can_be_corrupted. Finally, two warnings are added for when the dump thread encounters these temporary errors.
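A simplified sketch of the retry logic described above, using plain stdio instead of the server's IO_CACHE (function and buffer names are illustrative, not the actual dump-thread code): the last known-good offset lives outside the read loop, so a temporary read error rewinds to it instead of resending events that were already delivered.

```c
#include <stdio.h>

void send_events(FILE *binlog)
{
  long prev_pos = ftell(binlog);   /* position of the next unsent event */
  char event[512];
  int retries = 0;

  for (;;)
  {
    size_t got = fread(event, 1, sizeof(event), binlog);
    if (got == 0)
    {
      if (ferror(binlog) && retries++ < 3)   /* treat as a temporary error */
      {
        clearerr(binlog);
        fseek(binlog, prev_pos, SEEK_SET);   /* rewind to the saved position */
        continue;                            /* ...and try the read again    */
      }
      break;                                 /* clean EOF or permanent error */
    }
    /* deliver_event(event, got);  -- hypothetical delivery step */
    prev_pos = ftell(binlog);                /* advance only after a good read */
    retries = 0;
  }
}
```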
-
- 09 Sep, 2013 4 commits
-
-
Venkata Sidagam authored
Merging from 5.1 to 5.5
-
Venkata Sidagam authored
Reverting the patch, because this change is not to be made for GA versions.
-
Tor Didriksen authored
Do not call abs(INT_MIN) as the result is undefined.
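For reference, a self-contained example of the problem and a common workaround (not the patched server code): the positive counterpart of INT_MIN does not fit in an int, so the value is widened before taking the absolute value.

```c
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

int main(void)
{
  int v = INT_MIN;
  /* avoid: abs(v) -- undefined behaviour, |INT_MIN| does not fit in an int */
  long long safe = llabs((long long) v);   /* widen first, then take |v|    */
  printf("|%d| = %lld\n", v, safe);
  return 0;
}
```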
-
Tor Didriksen authored
Add missing initialization in lex_start()
-
- 06 Sep, 2013 1 commit
-
-
Raghav Kapoor authored
-
- 05 Sep, 2013 2 commits
-
-
Venkata Sidagam authored
FILES SPECIFIED WITH THE BASEDIR OPTION Reverting the patch, because a second review was requested.
-
Nisha Gopalakrishnan authored
DO WHAT YOU EXPECT" Fix info: Backport of the deprecation bug fix (WL#5265) for the global variable 'THREAD_CONCURRENCY' from mysql-5.6 to mysql-5.5. Note: With this backport, certain additional deprecation warnings will be reported under error conditions while setting global/session variables.
-
- 04 Sep, 2013 1 commit
-
-
Neeraj Bisht authored
CONTAINING NULL Problem: In MySQL, we can obtain the number of distinct expression combinations that do not contain NULL by giving a list of expressions in COUNT(DISTINCT). However, rows with NULL values are incorrectly included in the count when loose index scan is used. Analysis: In the loose index scan case, we check whether the field is NULL and increase the count in Item_sum_count::add(), but only the first field in COUNT(DISTINCT) is checked, not every field. This causes an incorrect result. Solution: Check all fields in Item_sum_count::add() for NULL values and increment the count only when none of them is NULL. (Bug#17222452 - SELECT COUNT(DISTINCT A,B) INCORRECTLY COUNTS ROWS CONTAINING NULL.)
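An illustrative sketch of the fix idea (plain C, not the actual Item_sum_count::add() code): for COUNT(DISTINCT a, b, ...) every argument must be inspected, and the row is skipped if any of them is NULL; the bug was that only the first argument was checked.

```c
#include <stdbool.h>

struct field { bool is_null; /* ... value ... */ };

static void count_row(const struct field *args, unsigned int arg_count,
                      long long *count)
{
  unsigned int i;
  for (i = 0; i < arg_count; i++)
    if (args[i].is_null)
      return;          /* a NULL anywhere in the tuple: do not count it */
  (*count)++;          /* tuple is fully non-NULL: count it             */
}
```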
-
- 02 Sep, 2013 1 commit
-
-
Arun Kuruvila authored
SPECIFIED WITH THE BASEDIR OPTION Description: The mysql_plugin client attempts to remove any filename specified with the --basedir option. The problem is that if the filename does not end with a slash, it will attempt to unlink it, which succeeds for files but not for directories. Analysis: When mysql_plugin is started with the basedir option and the path of a file is given as basedir, that file gets deleted, because the code uses the function my_delete, which unlinks the given file path. Fix: Replace that call with my_free, which only frees the pointer holding the file path.
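The difference can be shown with plain libc calls in place of MySQL's my_delete()/my_free() wrappers (an illustrative sketch, not the mysql_plugin source): discarding the basedir string must only release the memory holding the path, never unlink the path itself.

```c
#include <stdlib.h>
#include <string.h>

static void discard_basedir(char *basedir_path)
{
  /* wrong: unlink(basedir_path);  -- removes the file the user pointed at */
  free(basedir_path);              /* right: just release the stored string */
}

int main(void)
{
  char *basedir = strdup("/usr/local/mysql");  /* hypothetical path */
  discard_basedir(basedir);
  return 0;
}
```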
-
- 01 Sep, 2013 1 commit
-
-
unknown authored
No commit message
-
- 30 Aug, 2013 3 commits
-
-
Igor Solodovnikov authored
The memory leak in mysql_options() was caused by a missing call to my_free() in the MYSQL_SET_CLIENT_IP branch. Fixed by adding my_free() to clean up the mysql->options.client_ip value before assigning a new value.
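A minimal sketch of the leak pattern and the fix (plain libc calls, not the actual client-library code): when an option that stores a string is set twice, the previously allocated value has to be freed before the new copy is stored.

```c
#include <stdlib.h>
#include <string.h>

static void set_client_ip(char **stored_ip, const char *new_ip)
{
  free(*stored_ip);             /* release any previous value (NULL is fine) */
  *stored_ip = strdup(new_ip);  /* then keep our own copy of the new value   */
}

int main(void)
{
  char *client_ip = NULL;
  set_client_ip(&client_ip, "10.0.0.1");
  set_client_ip(&client_ip, "10.0.0.2");   /* no leak: old copy freed first */
  free(client_ip);
  return 0;
}
```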
-
Igor Solodovnikov authored
-
Igor Solodovnikov authored
The memory leak in mysql_options() was caused by a missing call to my_free() in the MYSQL_SET_CLIENT_IP branch. Fixed by adding my_free() to clean up the mysql->options.client_ip value before assigning a new value.
-
- 28 Aug, 2013 3 commits
-
-
Raghav Kapoor authored
ERROR HANDLING CODE BACKGROUND: There is a potential crash due to a buffer overrun in the SSL error handling code, caused by a missing comma in the ssl_error_string[] array in viosslfactories.c. ANALYSIS: Found by code inspection. FIX: Added the missing comma to the ssl_error_string[] array in viosslfactories.c.
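Why the missing comma matters can be seen in a few lines (illustrative strings, not the real ssl_error_string[] contents): adjacent string literals are concatenated by the compiler, so a dropped comma silently merges two entries and shrinks the array, and indexing past the last valid entry then reads out of bounds.

```c
#include <stdio.h>

static const char *messages[] = {
  "error A",
  "error B"            /* <-- missing comma: "error B" and "error C"   */
  "error C",           /*     collapse into one entry "error Berror C" */
  "error D"
};

int main(void)
{
  /* prints 3, although the rest of the code may expect 4 entries */
  printf("entries: %zu\n", sizeof(messages) / sizeof(messages[0]));
  return 0;
}
```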
-
Raghav Kapoor authored
ERROR HANDLING CODE BACKGROUND: There is a potential crash due to a buffer overrun in the SSL error handling code, caused by a missing comma in the ssl_error_string[] array in viosslfactories.c. ANALYSIS: Found by code inspection. FIX: Added the missing comma to the ssl_error_string[] array in viosslfactories.c.
-
Neeraj Bisht authored
Problem: Second execution of a prepared statement for a query with a parameter in the LIMIT clause causes an assert when using connectors (e.g. Connector/C). Analysis: In a prepared statement, LIMIT parameters can be specified using '?' markers, and the value for the parameter can be supplied when executing the prepared statement. Passing string, float or double values for the LIMIT clause works well from the command-line client, because while setting the LIMIT parameter value from a user variable the value is converted to an integer. However, when the prepared statement is executed from other interfaces, such as the J connectors or C applications, the values for the parameters are sent to the server with the execute command, and each item in the command carries a value and a data TYPE. So, while setting parameter values from this data, every parameter is set with the data type as passed. There is logic to convert the value, changing the state and item_type, if it is part of a LIMIT parameter and its item_type is not INT. But when we reset this parameter we keep the item_type while changing the state. So on second execution we have the old item_type but the state has been changed, which makes us use a string-type variable in Item_param::query_str_val(). This causes an assert. Fix: Instead of checking the item_type of the parameter, check the state of the parameter, as state values are reset every time we execute the statement.
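For context, this is roughly how the failing scenario is reached through the C API (a hedged sketch: the connection handle, table and column names are assumptions, and error handling is trimmed): the LIMIT value travels to the server as a typed parameter of the execute command, and the statement is executed twice.

```c
#include <mysql.h>
#include <string.h>

/* Assumes `conn` is an already connected MYSQL handle. */
static int run_limited_select(MYSQL *conn)
{
  MYSQL_STMT *stmt = mysql_stmt_init(conn);
  const char *query = "SELECT c1 FROM t1 ORDER BY c1 LIMIT ?";
  long long limit_value = 10;
  MYSQL_BIND param;

  if (mysql_stmt_prepare(stmt, query, strlen(query)))
    return 1;

  memset(&param, 0, sizeof(param));
  param.buffer_type = MYSQL_TYPE_LONGLONG;   /* typed value, not a string */
  param.buffer = &limit_value;
  mysql_stmt_bind_param(stmt, &param);

  mysql_stmt_execute(stmt);                  /* first execution */
  mysql_stmt_free_result(stmt);              /* discard the first result set */
  mysql_stmt_execute(stmt);                  /* second execution hit the assert */

  mysql_stmt_close(stmt);
  return 0;
}
```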
-
- 27 Aug, 2013 1 commit
-
-
unknown authored
-
- 26 Aug, 2013 3 commits
-
-
Hery Ramilison authored
-
unknown authored
-
Dmitry Lenev authored
ER_LOCK_WAIT_TIMEOUT". The problem was that after the changes from the fixes for bug 14188793 "DEADLOCK CAUSED BY ALTER TABLE DOESN'T CLEAR STATUS OF ROLLBACKED TRANSACTION" / bug 17054007 "TRANSACTION IS NOT FULLY ROLLED BACK IN CASE OF INNODB DEADLOCK", the implicit rollback of a transaction which occurred on ER_LOCK_DEADLOCK (and ER_LOCK_WAIT_TIMEOUT if the innodb_rollback_on_timeout option was set) didn't start a new transaction in @@autocommit=1 mode. Such behaviour, although consistent with the behaviour of an explicit ROLLBACK, broke user expectations and backward-compatibility assumptions. This patch fixes the problem by reverting to starting a new transaction in 5.5/5.6. The plan is to keep the new behaviour in trunk, so the code change from this patch is to be null-merged there.
-
- 23 Aug, 2013 5 commits
-
-
Praveenkumar Hulakund authored
"SHOW PROCESSLIST" Follow up path, addressing pb2 test failure.
-
Praveenkumar Hulakund authored
-
unknown authored
No commit message
-
Neeraj Bisht authored
Problem: In a procedure, when we compare the value of a SELECT query using an IN clause and the two sides have different collations, the first execution raises an error and the second execution asserts. The procedure contains a query like set @x = ((select a from t1) in (select d from t2)); <--- proc1, where (select a from t1) is sel1 and (select d from t2) is sel2. Analysis: When we execute proc1 the first time, while resolving the fields of the user variable, we call Item_in_subselect::fix_fields, which resolves sel2. There, in Item_in_subselect::select_transformer, we evaluate the left expression (sel1) and store it in an Item_cache_* object (to avoid re-evaluating it many times during subquery execution) by creating an Item_in_optimizer object. While evaluating the left expression we prepare sel1. After that, we put a new condition into sel2 in Item_in_subselect::select_transformer() which compares t2.d and sel1 (which is cached in Item_in_optimizer). Later, while checking the collation in agg_item_collations(), we get an error and clean up the item; this cleanup also clears the cached value in the Item_in_optimizer object. When we execute the procedure the second time, we have the condition for sel2, but in setup_cond() we cannot find the reference item because it was cleared during the item cleanup, so it asserts. Solution: Do not clean up the cached value of the Item_in_optimizer object if we have put the condition into the subselect.
-
Neeraj Bisht authored
Problem: In a procedure, when we compare the value of a SELECT query using an IN clause and the two sides have different collations, the first execution raises an error and the second execution asserts. The procedure contains a query like set @x = ((select a from t1) in (select d from t2)); <--- proc1, where (select a from t1) is sel1 and (select d from t2) is sel2. Analysis: When we execute proc1 the first time, while resolving the fields of the user variable, we call Item_in_subselect::fix_fields, which resolves sel2. There, in Item_in_subselect::select_transformer, we evaluate the left expression (sel1) and store it in an Item_cache_* object (to avoid re-evaluating it many times during subquery execution) by creating an Item_in_optimizer object. While evaluating the left expression we prepare sel1. After that, we put a new condition into sel2 in Item_in_subselect::select_transformer() which compares t2.d and sel1 (which is cached in Item_in_optimizer). Later, while checking the collation in agg_item_collations(), we get an error and clean up the item; this cleanup also clears the cached value in the Item_in_optimizer object. When we execute the procedure the second time, we have the condition for sel2, but in setup_cond() we cannot find the reference item because it was cleared during the item cleanup, so it asserts. Solution: Do not clean up the cached value of the Item_in_optimizer object if we have put the condition into the subselect.
-