- 18 Aug, 2010 8 commits
-
-
Jon Olav Hauglid authored
-
Jon Olav Hauglid authored
-
'CREATE TABLE IF NOT EXISTS ... SELECT' behaviour BUG#47132, BUG#47442, BUG#49494, BUG#23992 and BUG#48814 will disappear automatically after this patch. BUG#55617 is fixed by this patch too. This is the 5.5 part. It implements: - A 'CREATE TABLE IF NOT EXISTS ... SELECT' statement will not insert or binlog anything if the table already exists; it only generates a warning that the table already exists. - A couple of test cases for the behavior change.
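A minimal sketch of the new behaviour (table and column names are illustrative, not from the patch):

    CREATE TABLE t1 (a INT);
    -- t1 already exists, so after this patch the statement below inserts
    -- nothing and binlogs nothing; it only raises a "table already exists"
    -- warning (a Note with code 1050).
    CREATE TABLE IF NOT EXISTS t1 SELECT 1 AS a;
    SHOW WARNINGS;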
-
Magne Mahre authored
Added InnoDB to the 'default' plugin group, and modified the autoconf script so the 'default' group is actually built by default. (i.e. ./configure.am == ./configure.am --with-plugins=default, instead of being ./configure.am --with-plugins=none)
-
Alexander Nozdrin authored
-
Vasil Dimov authored
-
Alexander Nozdrin authored
-
Vasil Dimov authored
-
- 17 Aug, 2010 10 commits
-
-
Joerg Bruehe authored
-
Marko Mäkelä authored
dict_load_index_low(): Rename the parameter "cached" to "allocated" and clarify the comments.
-
Vasil Dimov authored
Followup to vasil.dimov@oracle.com-20100817063430-inglmzgdtj95t29d, which didn't fully fix the test because the order of the returned rows was different in the embedded and non-embedded versions. So the only way to fix this is to add an ORDER BY clause.
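For illustration only (a hypothetical query, not the actual test): without ORDER BY, the embedded and non-embedded servers may return rows in different orders, so a single recorded .result file cannot match both.

    -- deterministic variant: the result order no longer depends on the server build
    SELECT c1 FROM t1 ORDER BY c1;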
-
Magne Mahre authored
rpl_ndb.rpl_ndb_2other fails The two regression tests failed after WL#5349 was pushed, since they were written with the implicit requirement that MyISAM is the default storage engine. Adding --default-storage-engine=MyISAM as a startup parameter, to mimic the pre-WL#5349 situation.
-
Jimmy Yang authored
lock wait time. Including the InnoDB lock time in the existing "Lock_time" output.
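A sketch of how to observe this; the variables are the standard server ones, and the logged line is illustrative, not actual output:

    SET GLOBAL slow_query_log = ON;
    SET GLOBAL long_query_time = 0;  -- log every statement, for the demo
    -- A statement that blocks on an InnoDB row lock now has that wait
    -- included in the Lock_time field of its slow-log entry, e.g.:
    -- # Query_time: 2.0001  Lock_time: 1.9999 Rows_sent: 1  Rows_examined: 1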
-
Vasil Dimov authored
This is a followup to vasil.dimov@oracle.com-20100816142329-yimenbuktd416z1a which improved the sampling algorithm. The endspace test is non-deterministic because it does not include an ORDER BY clause in its queries.
-
Vasil Dimov authored
This is a followup to vasil.dimov@oracle.com-20100816142329-yimenbuktd416z1a which improved the sampling algorithm.
-
Vasil Dimov authored
This is a followup to vasil.dimov@oracle.com-20100816142329-yimenbuktd416z1a which improved the sampling algorithm.
-
Vasil Dimov authored
This is a followup to vasil.dimov@oracle.com-20100816142329-yimenbuktd416z1a which improved the sampling algorithm.
-
Vasil Dimov authored
This is a followup to vasil.dimov@oracle.com-20100816142329-yimenbuktd416z1a which improved the sampling algorithm. I have manually checked that the new values are actually the correct ones; for example, in the diff -rows 16 +rows 32, the number of rows returned by the query is indeed 32.
-
- 16 Aug, 2010 9 commits
-
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
Vasil Dimov authored
Fix Bug#53761 RANGE estimation for matched rows may be 200 times different

Improve the range estimation algorithm.

Previously: For a given level the algo knows the number of pages in the requested range and the number of records on the leftmost and the rightmost page. Then it assumes all pages in between contain the average between the two border pages and multiplies this average number by the number of intermediate pages.

With this change: Same idea, but peek a few (10) of the intermediate pages to get a better estimate of the average number of records per page. If there are fewer than 10 intermediate pages then all of them will be scanned and the result will be precise, not an estimation.

In the bug report one of the examples has a btree with a snippet of the leaf level like this: page1(899 records), page2(1 record), page3(1 record), page4(1 record), so when trying to estimate, the previous algo assumed there are on average (899+1)/2=450 records per page, which went terribly wrong. With this change page2 and page3 will be read and the exact number of records will be returned.

Approved by: Sunny (rb://401)
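A sketch of where the estimate surfaces; the schema is invented, and the page counts are the ones from the example in the message:

    CREATE TABLE t (a INT, KEY (a)) ENGINE=InnoDB;
    -- Suppose the leaf level of the index on `a` looks like:
    --   page1(899 records), page2(1), page3(1), page4(1)
    -- The old algorithm averaged the border pages, (899+1)/2 = 450 records
    -- per intermediate page; the new one reads up to 10 intermediate pages,
    -- so here the count is exact.
    EXPLAIN SELECT * FROM t WHERE a BETWEEN 100 AND 200;
    -- the "rows" column reflects the improved estimate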
-
Magne Mahre authored
example files) The system variable 'thread_concurrency' has been (re-)enabled on all platforms, to prevent startup errors. 'thread_concurrency' is unused and has no effect, on any platform, in MySQL 5.1 and later versions. It will be deprecated and removed in the context of worklog WL#5265.
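For reference, the variable is visible again but inert; a quick check (output illustrative):

    SHOW VARIABLES LIKE 'thread_concurrency';
    -- accepted on all platforms once more, but has no effect in 5.1+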
-
Mattias Jonsson authored
-
Mattias Jonsson authored
locks on the table Fixing the partitioning specifics after TRUNCATE TABLE in bug-42643 was fixed. Reorganized code to decrease the size of the giant switch in mysql_execute_command and to prepare for future parser reengineering: moved code into Sql_statement objects. Updated patch according to Davi's review comments.
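A hypothetical sequence of the kind the fix covers (table and partition names invented):

    -- connection 1
    LOCK TABLES t1 WRITE;
    -- connection 2: must now wait for the table lock instead of
    -- truncating the partition underneath connection 1
    ALTER TABLE t1 TRUNCATE PARTITION p0;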
-
Alexander Nozdrin authored
-
Alexander Nozdrin authored
-
Alexander Nozdrin authored
-
- 14 Aug, 2010 1 commit
-
-
Evgeny Potemkin authored
pushdown. NDB supports only a limited set of item nodes for use in engine condition pushdown. Because of this, adding a cache for const expressions effectively disabled this optimization. The ndb_serialize_cond function is extended to support Item_cache and treat it as a constant value. A helper function called ndb_serialize_const is added. It is used to create an Ndb_cond value node from a given const item.
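A sketch of the affected case, assuming the 5.5-era optimizer_switch flag and an invented NDB table:

    SET optimizer_switch = 'engine_condition_pushdown=on';
    -- The constant subexpression below gets wrapped in an Item_cache;
    -- ndb_serialize_cond can now serialize it (via ndb_serialize_const)
    -- instead of abandoning the pushdown.
    EXPLAIN EXTENDED SELECT * FROM t WHERE a = 1 + 1;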
-
- 13 Aug, 2010 9 commits
-
-
Inaam Rana authored
Note that this was originally pushed by Calvin but was later reverted by mistake. bug#54702
-
Inaam Rana authored
-
Konstantin Osipov authored
The bug was fixed by the patch for Bug#52044. Add a test case.
-
Alexander Nozdrin authored
-
Konstantin Osipov authored
into its own implementation file.
-
Jon Olav Hauglid authored
-
Mattias Jonsson authored
-
Jon Olav Hauglid authored
The problem was that SHOW CREATE EVENT released all metadata locks held by the current transaction. This made any existing savepoints invalid, triggering the assert when ROLLBACK TO SAVEPOINT was later executed. This patch fixes the problem by making sure SHOW CREATE EVENT only releases metadata locks acquired by the statement itself. Test case added to event_trans.test.
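The failing pattern, roughly (event and table names invented):

    BEGIN;
    SELECT * FROM t1;           -- transaction now holds a metadata lock on t1
    SAVEPOINT sp1;
    SHOW CREATE EVENT ev1;      -- must no longer release the transaction's locks
    ROLLBACK TO SAVEPOINT sp1;  -- previously hit the assert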
-
Mattias Jonsson authored
corruption on ADD PARTITION and LOCK TABLE Bug#53770: Server crash at handler.cc:2076 on LOAD DATA after timed out COALESCE PARTITION. 5.5 fix for Bug#51042: REORGANIZE PARTITION can leave table in an inconsistent state in case of crash (needs to be back-ported to 5.1). 5.5 fix for Bug#50418: DROP PARTITION does not interact with transactions. The main problem was non-persistent operations done before the meta-data lock was taken (53770+53676), and 53676 needed to keep the table/partitions opened and locked while copying the data to the new partitions. Also added thorough tests to spot some additional bugs in the ddl_log code, which could result in a bad state between the .frm and the partitions. Collapsed patch; includes all fixes required from the reviewers.
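One of the covered scenarios, as a sketch (names invented; assumes a RANGE-partitioned table):

    LOCK TABLES t1 WRITE;
    -- DDL on a locked, partitioned table must keep the metadata, the .frm
    -- and the partition files consistent even if the operation fails midway.
    ALTER TABLE t1 ADD PARTITION (PARTITION p2 VALUES LESS THAN (300));
    UNLOCK TABLES;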
-
- 12 Aug, 2010 3 commits
-
-
Konstantin Osipov authored
-
Konstantin Osipov authored
and a comment for the case when a connection issuing FLUSH TABLES <list> WITH READ LOCK has an open handler.
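The case in question, sketched (table name invented):

    HANDLER t1 OPEN;
    -- the new test and comment cover a connection doing this while it
    -- still has the HANDLER open
    FLUSH TABLES t1 WITH READ LOCK;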
-
Alexander Nozdrin authored
Fixing copyright text.
-