- 13 Apr, 2009 2 commits
-
-
MySQL Build Team authored
-
karen.langford@sun.com authored
-
- 27 Mar, 2009 2 commits
-
-
Georgi Kodinov authored
-
Georgi Kodinov authored
-
- 25 Mar, 2009 5 commits
-
-
Ramil Kalimullin authored
-
Ramil Kalimullin authored
due to name_const substitution. Problem: "In general, statements executed within a stored procedure are written to the binary log using the same rules that would apply were the statements to be executed in standalone fashion. Some special care is taken when logging procedure statements because statement execution within procedures is not quite the same as in non-procedure context". For example, each reference to a local variable in a stored procedure's statements is replaced by NAME_CONST(var_name, var_value). Queries like "CREATE TABLE ... SELECT FUNC(local_var ..." are logged as "CREATE TABLE ... SELECT FUNC(NAME_CONST("local_var", var_value) ...", which leads to different field names and might result in "Incorrect column name" if var_value is long enough. Fix: in 5.x we issue a warning in such a case. In 6.0 we should get rid of NAME_CONST(). Note: this issue and change should be described in the documentation ("Binary Logging of Stored Programs").
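To make the effect concrete, here is a minimal, hypothetical sketch (procedure, table, and variable names are made up) of how such a statement is rewritten for the binary log:

    -- Hypothetical procedure; names and values are illustrative only.
    DELIMITER //
    CREATE PROCEDURE p1()
    BEGIN
      DECLARE local_var VARCHAR(255) DEFAULT REPEAT('a', 200);
      -- No column alias: the column name is derived from the expression.
      CREATE TABLE t1 SELECT CONCAT(local_var);
    END//
    DELIMITER ;
    CALL p1();
    -- In the binary log the statement appears roughly as
    --   CREATE TABLE t1 SELECT CONCAT(NAME_CONST('local_var', _latin1'aaa...'))
    -- so the derived column name differs from the one on the master and, with a
    -- long enough value, exceeds the column-name limit ("Incorrect column name").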
-
Tatiana A. Nurnberg authored
Fine-tuning. Broke out the comparison into a method at Davi's suggestion. Clarified comments. Reverting the test case, which I find too brittle; a proper test case will follow in 5.1+.
-
Georgi Kodinov authored
(Pushing for Azundris) We allow security contexts with NULL users (for system threads and for unauthenticated users). If a non-SUPER user tried to KILL such a thread, we tried to compare the user fields to see whether they owned that thread. Comparing against NULL was not a good idea. Now, if the KILLer does not have the SUPER privilege, we specifically check whether both KILLer and KILLee have a non-NULL user before testing for string equality. If either is NULL, we reject the KILL.
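A rough illustration of the user-visible behaviour (the thread id below is hypothetical):

    -- Executed by an account without the SUPER privilege; thread id 5 is made up.
    KILL 5;
    -- If thread 5 belongs to a system thread or an unauthenticated session
    -- (NULL user), the server now rejects the KILL with ER_KILL_DENIED_ERROR
    -- instead of comparing the killer's user name against NULL.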
-
Leonard Zhou authored
-
- 24 Mar, 2009 4 commits
-
-
Alexey Kopytov authored
-
Alexey Kopytov authored
expired timeout on debx86-b in PB. Moved the resource-intensive test case for bug #41486 into a separate test file to reduce execution time for mysql.test.
-
Leonard Zhou authored
-
Leonard Zhou authored
When an 'INSERT DELAYED' operation is performed, the time_zone info is not kept in the row info, so when the insert actually happens some time later, the time_zone is not written into the binlog. This causes wrong results for TIMESTAMP columns on the slave. The solution is to store the time_zone info with the delayed row and restore the time_zone from the row info when that row is later executed by another thread, so that the correct time_zone info is written into the binlog and the slave gets correct results.
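A minimal sketch of the scenario (table name and time zone value are hypothetical):

    CREATE TABLE t_delayed (ts TIMESTAMP) ENGINE=MyISAM;
    SET time_zone = '+05:00';                  -- session time zone of the inserting client
    INSERT DELAYED INTO t_delayed VALUES (NOW());
    -- The row is queued and later written by the delayed-insert handler thread;
    -- with the fix, the queued row carries the session time_zone so the binlogged
    -- event replays with the same TIMESTAMP value on the slave.
    FLUSH TABLE t_delayed;                     -- force the queued rows to be flushed
    SELECT ts FROM t_delayed;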
-
- 23 Mar, 2009 5 commits
-
-
Matthias Leich authored
-
Matthias Leich authored
Details for Bug#43015 main.lock_multi: Weak code (sleeps etc.)
-------------------------------------------------------------
- The fix for bug 42003 already removed a lot of the weaknesses mentioned.
- Tests showed that there are unfortunately no improvements of this test in MySQL 5.1 which could be ported back to 5.0.
- Remove a superfluous "--sleep 1" around line 195.
Details for Bug#43065 main.lock_multi: This test is too big if the disk is slow
-------------------------------------------------------------------------------
- Move the subtests for bugs 38499 and 36691 into separate scripts.
- Runtime under excessive parallel I/O load after applying the fix:
    lock_multi           [ pass ]  22887
    lock_multi_bug38499  [ pass ] 536926
    lock_multi_bug38691  [ pass ] 258498
-
Sergey Glukhov authored
-
Tatiana A. Nurnberg authored
-
Tatiana A. Nurnberg authored
When asking what database is selected, client expected to *always* get an answer from the server. We now handle failure more gracefully. See comments in ticket for a discussion of what happens, and how things interlock.
-
- 20 Mar, 2009 1 commit
-
-
Narayanan V authored
-
- 19 Mar, 2009 12 commits
-
-
Davi Arnaut authored
Don't compare string literals as it results in unspecified behavior.
-
Ignacio Galarza authored
-
Bernt M. Johnsen authored
-
Ignacio Galarza authored
-
Staale Smedseng authored
functions. Unknown timezone specifications are properly rejected by the server, but are copied into tz_storage before rejection, and hence are retained until the end of the server's life. With sufficiently large bogus timezone specs, it is easy to exhaust system memory. Allocation of memory for a copy of the timezone name is now delayed until after verification of validity, at the cost of a memcpy of the timezone info. This only happens once; future lookups will hit the cached structure.
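For illustration, a bogus (and deliberately large) time zone name is rejected; before the fix, each such rejected name still left a copy behind in tz_storage:

    -- Fails with "Unknown or incorrect time zone"; the name below is made up.
    SET time_zone = REPEAT('not_a_zone/', 1000);
    -- Repeating this with ever-new bogus names used to grow server memory,
    -- because the name was copied into tz_storage before validation.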
-
Alexey Kopytov authored
-
Alexey Kopytov authored
for bug #41486. Session max_allowed_packet is read-only as of MySQL 5.1.31. In addition, the global variable now has no effect on the current session.
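The behaviour can be checked directly (the 32M value is arbitrary):

    SET SESSION max_allowed_packet = 32 * 1024 * 1024;  -- rejected in 5.1.31+: the session variable is read-only
    SET GLOBAL  max_allowed_packet = 32 * 1024 * 1024;  -- still allowed (requires SUPER), but it no longer
                                                         -- affects sessions that are already open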
-
Sergey Glukhov authored
-
Sergey Glukhov authored
fixed help message
-
Satya B authored
-
Sergey Glukhov authored
Don't throw an error after checking only the first and second arguments. Continue with checking the third and higher arguments, and if one of them is stronger according to the coercibility rules, set that argument's collation as the result collation.
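For reference, the coercibility values MySQL uses when aggregating argument collations can be inspected directly (lower value means a stronger source); this only illustrates the ordering the fix relies on:

    SELECT COERCIBILITY(_utf8'a' COLLATE utf8_general_ci) AS explicit_collate,   -- 0
           COERCIBILITY(USER())                           AS system_constant,    -- 3
           COERCIBILITY('abc')                            AS coercible_literal;  -- 4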
-
Satya B authored
When loading a dump created by the mysqldump tool, an error is thrown saying the storage engine for the table doesn't have an option. mysqldump tries to re-insert the data into the federated table, which causes the error. Since the data is already available on the remote server, mysqldump shouldn't try to dump the data again for FEDERATED tables. As stated in the bug page, this can be considered similar to the MERGE engine's "view only" nature. Fixed by adding the FEDERATED engine to the exception list so that its data is ignored.
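A minimal illustration (the connection string and names are hypothetical): with the fix, mysqldump emits only the table definition for such a table and skips the row data, since the rows live on the remote server.

    CREATE TABLE fed_t (id INT) ENGINE=FEDERATED
      CONNECTION='mysql://remote_user:remote_pass@remote_host:3306/remote_db/remote_t';
    -- mysqldump now treats FEDERATED like MERGE: structure is dumped, data is not,
    -- so re-importing the dump no longer fails trying to re-insert remote rows.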
-
- 18 Mar, 2009 3 commits
-
-
Bernt M. Johnsen authored
-
Alexey Kopytov authored
-
Alexey Kopytov authored
~40Mb after mysqldump/import. When the input string exceeds the maximum allowed size for the internal buffer, batch_readline() returns a truncated string. Since there was no way for a caller to determine whether the string was truncated or not, the command line client assumed batch_readline() always returns the whole input string and appended a newline character. This resulted in garbled data when importing dumps containing strings longer than the maximum input buffer size. Fixed by adding a flag to the batch_readline() interface to signal a truncated string to the caller. Other minor problems fixed during patch implementation:
- The maximum allowed buffer size for batch_readline() was set up depending on the client's max_allowed_packet value. This does not actually make sense, as those variables are unrelated; the input buffer size limit is now always set to 1 MB.
- fill_buffer() did not always set the EOF flag.
- The input buffer could actually grow to twice the specified limit due to insufficient checks in intern_read_line().
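One way to reproduce the shape of the problem (sizes are illustrative and assume max_allowed_packet is raised high enough on both server and client):

    CREATE TABLE big (t LONGTEXT);
    INSERT INTO big VALUES (REPEAT('x', 40 * 1024 * 1024));   -- a single ~40 MB value
    -- mysqldump of this table emits one very long INSERT line; before the fix,
    -- feeding that dump back through the mysql client truncated the line at the
    -- internal buffer boundary and silently appended a newline, garbling the data.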
-
- 17 Mar, 2009 1 commit
-
-
Georgi Kodinov authored
-
- 15 Mar, 2009 1 commit
-
-
Patrick Crews authored
Revised patch incorporating cleaner test code brought up during review. Removed the use of grep and accomplished the same actions via SQL / use of the server. Runs as before on *nix systems and now runs on Windows without Cygwin as well.
-
- 13 Mar, 2009 1 commit
-
-
Georgi Kodinov authored
seems to become negative. THD::start_time has a dual meaning: it is either the time since the process entered a given state or the transaction time returned by e.g. NOW(). This causes problems, as sometimes THD::start_time may be set to a value that is correct and needed when used as a base for NOW(), but such times may be arbitrary (SET @@timestamp) or non-local (coming from the master through the replication feed). If such a non-local time is set, there is no way to return a correct value for e.g. SHOW PROCESSLIST or SELECT ... FROM INFORMATION_SCHEMA.PROCESSLIST. Fixed by making the Time column in SHOW PROCESSLIST SIGNED LONG instead of UNSIGNED LONG and doing the correct conversions. Note that no reliable test case can be constructed, since it would require knowing the local time, which can't be achieved by the means of the current test suite.
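A rough illustration of a non-local start time (the values are arbitrary):

    SET TIMESTAMP = UNIX_TIMESTAMP() + 3600;  -- pretend the statement started an hour in the future
    SHOW PROCESSLIST;                         -- the Time column for this thread can now be negative,
                                              -- which the signed column reports correctly
    SET TIMESTAMP = 0;                        -- revert to using the real current time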
-
- 12 Mar, 2009 1 commit
-
-
Chad MILLER authored
-
- 11 Mar, 2009 2 commits
-
-
Timothy Smith authored
Since there is more than one duplicate value in the table, when adding the unique index it is not deterministic which value will be reported as causing a problem. Replace the reported value with '' so that it doesn't affect the results.
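A small sketch of why the reported value is nondeterministic (table and data are hypothetical):

    CREATE TABLE t (c INT);
    INSERT INTO t VALUES (1), (1), (2), (2);
    ALTER TABLE t ADD UNIQUE KEY (c);
    -- Fails with ER_DUP_ENTRY, but the message may name '1' or '2' depending on
    -- processing order, so the test masks the reported value with ''.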
-
Timothy Smith authored
-