- 18 May, 2010 3 commits
-
-
Jon Olav Hauglid authored
-
Andrei Elkin authored
-
Andrei Elkin authored
-
- 17 May, 2010 2 commits
-
-
Alexander Nozdrin authored
That was a pure test issue -- the filter implementation in Perl did not work on some platforms (the bug occurred on Windows Server 2008 with Cygwin Perl 5.10.0).
-
Alexander Nozdrin authored
in a multiquery packet): fix NDB test failures.
-
- 16 May, 2010 2 commits
-
-
Andrei Elkin authored
Removing the disabled line for rpl_row_create_table due to Bug#45576; the test is still there because of Bug#51574.
-
Andrei Elkin authored
Pushing to next-mr-bugfixing from the working branch.
-
- 14 May, 2010 3 commits
-
-
Alexander Nozdrin authored
multiquery packet). Background: a query can contain multiple SQL statements, and the server frees the resources allocated to process a query only when the whole query has been handled. In other words, resources allocated to process one SQL statement from a multi-statement query are freed when all SQL statements have been handled. The problem was that the parser allocated a buffer the size of the whole query for each SQL statement in a multi-statement query. Thus, if a query had many SQL statements (so the query was long) but each SQL statement was short, the parser tried to allocate a huge amount of memory (number of small SQL statements * length of the whole query). The memory was allocated for a so-called "cpp buffer", which is intended to store the pre-processed SQL statement -- the SQL text without version-specific comments. The fix is to allocate memory for the "cpp buffer" once for all SQL statements (once per query).
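For illustration only, a made-up sketch of the kind of packet that triggered the problem (sending several statements in one packet requires the client to enable CLIENT_MULTI_STATEMENTS):

  -- a single packet containing many short statements:
  SELECT 1; SELECT 2; SELECT 3; /* ... hundreds more short statements ... */ SELECT 1;
  -- before the fix, each statement got its own "cpp buffer" sized to the whole
  -- packet, so peak memory grew roughly as
  -- (number of statements) x (length of the whole query);
  -- after the fix, one "cpp buffer" is allocated per query.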
-
Konstantin Osipov authored
approved): 3161 Vladislav Vaintroub 2010-04-29 Bug#53196: CMake builds don't support 'make tags' and 'make ctags' targets. - Added tags and ctags targets.
-
Alexander Nozdrin authored
for ALTER TABLE, LOAD DATA). ROW_COUNT is now assigned according to the following rules:
- In my_ok(): for DML statements, to the number of affected rows; for DDL statements, to 0.
- In my_eof(): to -1 to indicate that there was a result set. We derive this semantics from the JDBC specification, where int java.sql.Statement.getUpdateCount() is defined to (sic) "return the current result as an update count; if the result is a ResultSet object or there are no more results, -1 is returned".
- In my_error(): to -1 to be compatible with the MySQL C API and the MySQL ODBC driver.
- For SIGNAL statements: to 0 per the WL#2110 specification. Zero is used since that is the "default" value of ROW_COUNT in the diagnostics area.
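A rough SQL-level illustration of these rules, using a hypothetical table t:

  CREATE TABLE t (a INT);        -- DDL handled via my_ok(): ROW_COUNT() reports 0
  SELECT ROW_COUNT();
  INSERT INTO t VALUES (1),(2);  -- DML handled via my_ok(): ROW_COUNT() reports 2
  SELECT ROW_COUNT();
  SELECT * FROM t;               -- result set handled via my_eof(): ROW_COUNT() reports -1
  SELECT ROW_COUNT();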
-
- 13 May, 2010 3 commits
-
-
Konstantin Osipov authored
-
Dmitry Lenev authored
type, which some time ago became part of the Open_table_context class. Apparently the standalone enum type was erroneously re-introduced during one of the merges.
-
Dmitry Lenev authored
which was introduced by the fix for bug 47459 "Assertion in Diagnostics_area::set_eof_status on OPTIMIZE TABLE".
-
- 12 May, 2010 4 commits
-
-
Jonathan Perkin authored
-
Jonathan Perkin authored
-
Jonathan Perkin authored
- Update/fix file layouts for each package type; add new types for native package formats including deb, rpm and svr4.
- Build all plugins, including debug versions.
- Update compiler flags to match the current release.
- Add missing @VAR@ expansions.
- Install correct mysqlclient library symlinks.
- Fix icc/ia64 builds.
- Fix install of libmysqld-debug.
- Don't include mysql_embedded.
- Remove unpackaged manual pages to avoid missing-files warnings.
- Don't install mtr's test suite.
-
Jonathan Perkin authored
with other merges from the old distribution-specific spec file.
- Update copyright notices.
- Remove the __os_install_post override; it was only necessary as a hack to build debuginfo packages. Now that we no longer make them, we can revert to the distribution macro, which likely has other useful bits we might want.
- Remove the _unpackaged_files_terminate_build override; we want to know of any orphaned files.
- Include native distribution support.
- No longer build separate debuginfo RPMs; instead just include debug/symbols in all binaries, which is more useful for support.
- Include support for building commercial RPMs; requires a commercial source tree.
- Remove cluster RPM support; we don't build them from this source tree.
- Use CMake for building, and update package lists to match the new install layout/files. Remove any options which were only useful for automake builds (e.g. yassl/zlib).
- Other minor cleanups.
-
- 11 May, 2010 2 commits
-
-
Mats Kindahl authored
via mysqld_safe. The plugin dir was set to a hard-coded path instead of a path relative to the base dir. This patch fixes this by using a path relative to the basedir instead of the plugin directory indicated by the configuration.
-
Alexander Nozdrin authored
-
- 07 May, 2010 2 commits
-
-
Alexander Nozdrin authored
-
Alexander Nozdrin authored
-
- 05 May, 2010 8 commits
-
-
Konstantin Osipov authored
thd->in_multi_stmt_transaction() and thd->active_transaction().
-
Magne Mahre authored
data is selected or not. Temporary and permanent tables should live in different namespaces. In this case, resolving a permanent table name gave the temporary table, resulting in a name collision.
-
Alexander Nozdrin authored
The bug happened under the following conditions:
- there was a user variable of type REAL containing a NULL value;
- there was a table with a NOT NULL column of any type but REAL, having a default value (or auto increment);
- a row was inserted into the table with the user variable as the value.
A warning was emitted here. The problem was that handling of NULL values of REAL type was not properly implemented: it didn't expect that a REAL NULL value can be assigned to another data type. Basically, the problem was that set_field_to_null() was used instead of set_field_to_null_with_conversions(). The fix is to use the right function, or more generally, to allow conversion of REAL NULL values to other data types.
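A minimal sketch of the scenario, with made-up names, assuming the arithmetic below yields a REAL-typed NULL user variable:

  CREATE TABLE t (a INT NOT NULL DEFAULT 0);
  SET @v = NULL + 0.0;        -- assumption: @v is now a REAL-typed NULL
  INSERT INTO t VALUES (@v);  -- the REAL NULL must be converted for the non-REAL
                              -- NOT NULL column via set_field_to_null_with_conversions()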
-
Alexander Barkov authored
Problem: item->name was NULL for Item_user_var_as_out_param, which made strcmp(something, item->name) crash in the LOAD XML code. Fix:
- item_func.h: add set_name() in the constructor of Item_user_var_as_out_param;
- sql_load.cc: change the condition in write_execute_load_query_log_event() that distinguishes between Item_user_var_as_out_param and Item_field from "if (item->name == NULL)" to "if (item->type() == Item::FIELD_ITEM)";
- loadxml.result, loadxml.test: add tests.
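An illustrative statement of the kind that crashed; the file, tag, and column names are hypothetical:

  LOAD XML INFILE 'rows.xml' INTO TABLE t1
    ROWS IDENTIFIED BY '<row>'
    (a, @b)        -- @b is an Item_user_var_as_out_param, whose name was NULL before the fix
    SET c = @b;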
-
Magne Mahre authored
table. If a temporary table A exists and an attempt is made to create a (permanent) table with the same name with "CREATE TABLE ... AS SELECT", the create would fail with the error 1050: Table 'A' already exists. The error occurred in MySQL 5.1 releases but is not present in MySQL 5.5. This patch adds a regression test to ensure that the problem does not reoccur.
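A minimal sketch of the regression scenario, with made-up column definitions:

  CREATE TEMPORARY TABLE A (i INT);
  -- must create the permanent table A; the affected 5.1 releases instead
  -- failed with ERROR 1050: Table 'A' already exists:
  CREATE TABLE A AS SELECT 1 AS i;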
-
Alexander Barkov authored
Problem: after the introduction of "WL#2649 Number-to-string conversions", this query:
  SET NAMES cp850; -- or any other non-latin1 ASCII-based character set
  SELECT * FROM t1 WHERE datetime_column='2010-01-01 00:00:00'
started to add an extra character set conversion:
  SELECT * FROM t1 WHERE CONVERT(datetime_column USING cp850)='2010-01-01 00:00:00';
so the index on the DATETIME column was not used anymore. Fix: avoid conversion of NUMERIC/DATETIME items (i.e. those with derivation DERIVATION_NUMERIC).
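A hedged illustration of how the regression could be observed; the table and index definitions are assumptions:

  CREATE TABLE t1 (datetime_column DATETIME, KEY k1 (datetime_column));
  SET NAMES cp850;
  -- with the regression, the comparison was rewritten with CONVERT(... USING cp850),
  -- so EXPLAIN no longer showed the index k1 being used:
  EXPLAIN SELECT * FROM t1 WHERE datetime_column = '2010-01-01 00:00:00';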
-
Horst.Hunger authored
-
Jon Olav Hauglid authored
-
- 04 May, 2010 5 commits
-
-
Omer BarNir authored
-
Omer BarNir authored
-
Alexander Nozdrin authored
-
Jon Olav Hauglid authored
Fixes a bug where a bool* was passed as an argument to a function whose parameter was of type bool.
-
Alexander Nozdrin authored
-
- 30 Apr, 2010 1 commit
-
-
Alexander Nozdrin authored
There were two problems here: (1) a misleading error message, and (2) abuse of KILL QUERY in the test case.
1. The server reported "'DELETE FROM t1' failed: 1689: Wait on a lock was aborted due to a pending exclusive lock", while the proper error message should be "'DELETE FROM t1' failed: 1317: Query execution was interrupted". The problem is that the server has two different flags for signalling that a query is being killed: THD::killed and mysys_var::abort. The test case triggers a race: sometimes mysys_var::abort is set earlier than THD::killed. That leads to the following situation:
- thr_lock() checks mysys_var::abort and returns an error status, since mysys_var::abort is set;
- the caller (mysql_lock_tables()) gets an error from thr_lock(), but THD::killed is not set, so it decides that thr_lock() couldn't get a lock due to a pending exclusive lock.
This is a known issue with the server and it's not going to be fixed soon. 5.5 differs from 5.1 here as follows: when thr_lock() returns an error, 5.1 keeps retrying thr_lock() until it succeeds, while 5.5 propagates the error.
2. The test case uses KILL QUERY in a highly concurrent environment. The fix is to wait for the dying statement to rest in peace before executing another DELETE FROM t1.
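A rough sketch of the kind of synchronization the test fix needs, assuming the test polls the process list; the exact mechanism in the real test may differ:

  -- after the test kills the long-running DELETE (connection id stored in @id):
  KILL QUERY @id;
  -- wait until the killed statement is really gone before issuing the next DELETE,
  -- e.g. by polling the process list (mysqltest's wait_condition pattern):
  SELECT COUNT(*) = 0 FROM INFORMATION_SCHEMA.PROCESSLIST
   WHERE INFO = 'DELETE FROM t1';
  DELETE FROM t1;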
-
- 29 Apr, 2010 1 commit
-
-
Alfranio Correia authored
thread_temporary_used is not initialized, causing Valgrind warnings.
-
- 28 Apr, 2010 4 commits
-
-
Konstantin Osipov authored
Update the result file to match minor tweaks of the comments in the test case.
-
Sven Sandberg authored
Clarified error messages related to unsafe statements:
- avoid the internal technical term "row injection";
- use 'binary log' instead of 'binlog';
- avoid the word 'unsafeness'.
-
Konstantin Osipov authored
Fix for bug #46947 "Embedded SELECT without FOR UPDATE is causing a lock", with after-review fixes.
SELECT statements with subqueries referencing InnoDB tables were acquiring shared locks on rows in these tables when they were executed in REPEATABLE-READ mode and with statement or mixed mode binary logging turned on. This was a regression which was introduced when fixing bug 39843.
The problem was that for tables belonging to subqueries the parser set TL_READ_DEFAULT as the lock type. When statement/mixed binary logging was in use, this type of lock was converted to a TL_READ_NO_INSERT lock at open_tables() time and caused the InnoDB engine to acquire shared locks on reads from these tables. Although in some cases such behavior was correct (e.g. for subqueries in DELETE), in the case of SELECT it caused unnecessary locking.
This patch tries to solve this problem by rethinking our approach to how we handle locking for SELECT and subqueries. Now we always set the TL_READ_DEFAULT lock type for all cases when we read data. At open_tables() time this lock is interpreted as TL_READ_NO_INSERT or TL_READ depending on whether this statement as a whole, or a call to a function which uses a particular table, should be written to the binary log or not (if yes, then the statement should be properly serialized with concurrent statements and a stronger lock should be acquired). Test coverage is added for both InnoDB and MyISAM.
This patch introduces an "incompatible" change in the locking scheme for subqueries used in SELECT ... FOR UPDATE and SELECT ... IN SHARE MODE. In 4.1 the server would use a snapshot InnoDB read for subqueries in SELECT FOR UPDATE and SELECT ... IN SHARE MODE statements, regardless of whether the binary log is on or off. If the user required a different type of read (i.e. a locking read), he/she could request it explicitly by providing a FOR UPDATE/IN SHARE MODE clause for each individual subquery. One of the patches for 5.0 broke this behaviour (which was not documented or tested) and started to use locking reads for all subqueries in SELECT ... FOR UPDATE/IN SHARE MODE. This patch restores the 4.1 behaviour.
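An illustrative pair of statements showing the intended behaviour; the table and column names are made up:

  -- a plain SELECT with a subquery on the InnoDB table t2 should no longer
  -- take shared row locks in t2, even with statement/mixed binlog format:
  SELECT * FROM t1 WHERE a IN (SELECT a FROM t2);

  -- a locking read for the subquery has to be requested explicitly:
  SELECT * FROM t1
   WHERE a IN (SELECT a FROM t2 LOCK IN SHARE MODE)
   FOR UPDATE;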
-
Stored routine DDL statements use statement-based replication regardless of the current binlog format. The problem here was that if a DDL statement failed during metadata lock acquisition or while opening mysql.proc, the binlog format would not be reset before returning, so the following DDL or DML statements were binlogged with the wrong binlog format, which caused the slave to stop. The problem is resolved by grabbing the exclusive MDL lock first, before switching the current binlog format, so that the binlog format is not affected when the lock acquisition returns with an error. The same approach is taken when opening the proc table for update.
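A hedged illustration of the symptom; the object names and the exact failure cause are assumptions:

  SET SESSION binlog_format = 'ROW';
  -- suppose this fails early, e.g. on a metadata lock timeout or while
  -- opening mysql.proc (stored routine DDL is always logged as a statement):
  CREATE PROCEDURE p1() SELECT 1;
  -- before the fix the session could be left in statement format here, so a
  -- following statement was binlogged with the wrong format:
  INSERT INTO t1 VALUES (1);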
-