- 05 Apr, 2007 4 commits
-
-
mskold/marty@mysql.com/linux.site authored
into mysql.com:/windows/Linux_space/MySQL/mysql-5.0-ndb
-
mskold/marty@mysql.com/linux.site authored
into mysql.com:/windows/Linux_space/MySQL/mysql-5.0-ndb
-
mskold/marty@mysql.com/linux.site authored
-
mskold/marty@mysql.com/linux.site authored
-
- 04 Apr, 2007 2 commits
-
-
tomas@whalegate.ndb.mysql.com authored
into whalegate.ndb.mysql.com:/home/tomas/mysql-5.0-ndb
-
mskold/marty@mysql.com/linux.site authored
In certain cases AFTER UPDATE/DELETE triggers on NDB tables that referenced the subject table did not see the results of the operation that caused their invocation. In other words, an AFTER trigger invoked as a result of an update (or deletion) of a particular row saw the version of this row from before the update (or deletion). The problem occurred because in those cases the NDB handler postponed the actual update/delete operations in order to perform them later as one batch. This fix solves the problem by disabling that optimization for a particular operation if the subject table has an AFTER trigger defined for it. To achieve this we introduce two new flags for the handler::extra() method: HA_EXTRA_DELETE_CANNOT_BATCH and HA_EXTRA_UPDATE_CANNOT_BATCH. These are set when AFTER DELETE/UPDATE triggers exist during a statement that can potentially generate calls to delete_row()/update_row(). This includes multi_delete/multi_update statements as well as INSERT statements that do a delete/update as part of an ON DUPLICATE KEY UPDATE clause.
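A minimal sketch of the mechanism, assuming a toy handler class; the flag names below mirror the two new handler::extra() flags, but the class and its methods are illustrative, not the real ha_ndbcluster code:

```cpp
#include <cstdio>

// Stand-ins for the two new handler::extra() flags described above.
enum ha_extra_function {
  HA_EXTRA_DELETE_CANNOT_BATCH,
  HA_EXTRA_UPDATE_CANNOT_BATCH
};

// Toy handler: the SQL layer announces, before the statement runs, that
// AFTER triggers exist, and the engine then executes each row operation
// immediately instead of deferring it into a batch.
class toy_handler {
  bool delete_cannot_batch = false;
  bool update_cannot_batch = false;

public:
  void extra(ha_extra_function op) {
    if (op == HA_EXTRA_DELETE_CANNOT_BATCH) delete_cannot_batch = true;
    if (op == HA_EXTRA_UPDATE_CANNOT_BATCH) update_cannot_batch = true;
  }

  void delete_row(int row_id) {
    if (delete_cannot_batch)
      // Execute now, so an AFTER DELETE trigger that reads this table
      // observes the row as already gone.
      printf("row %d deleted immediately\n", row_id);
    else
      // Defer: all deletes are sent as one batch later.
      printf("row %d queued for batched delete\n", row_id);
  }
};

int main() {
  toy_handler h;
  h.delete_row(1);                        // no trigger: batching allowed
  h.extra(HA_EXTRA_DELETE_CANNOT_BATCH);  // statement has an AFTER DELETE trigger
  h.delete_row(2);                        // executed immediately
}
```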
-
- 03 Apr, 2007 1 commit
-
-
mskold/marty@mysql.com/linux.site authored
into mysql.com:/windows/Linux_space/MySQL/mysql-5.0-ndb
-
- 02 Apr, 2007 9 commits
-
-
tomas@whalegate.ndb.mysql.com authored
into whalegate.ndb.mysql.com:/home/tomas/mysql-5.0-ndb
-
kostja@bodhi.local authored
into bodhi.local:/opt/local/work/mysql-5.0-runtime
-
tomas@whalegate.ndb.mysql.com authored
into whalegate.ndb.mysql.com:/home/tomas/mysql-5.0-ndb
-
tomas@whalegate.ndb.mysql.com authored
into whalegate.ndb.mysql.com:/home/tomas/mysql-5.0-ndb
-
tomas@whalegate.ndb.mysql.com authored
The query-cache watch thread was continually allocating new thread entries on the THD MEM_ROOT, which is not freed until server exit. Fixed by using a simple array, auto-expanded as necessary.
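A sketch of the replacement structure: a plain realloc()-grown array that is reused across watch-loop iterations instead of allocating a fresh entry on a MEM_ROOT each time. All names here are illustrative, not the actual query-cache code:

```cpp
#include <cstdlib>

// Minimal auto-expanding array: one buffer, grown geometrically, reused
// across iterations; memory is bounded by the peak number of entries
// rather than growing for the lifetime of the server.
struct thread_id_array {
  unsigned long *entries = nullptr;
  size_t used = 0, allocated = 0;

  bool push(unsigned long thread_id) {
    if (used == allocated) {
      size_t new_size = allocated ? allocated * 2 : 16;
      void *p = realloc(entries, new_size * sizeof(*entries));
      if (!p) return false;              // old buffer stays valid on failure
      entries = static_cast<unsigned long *>(p);
      allocated = new_size;
    }
    entries[used++] = thread_id;
    return true;
  }

  void reset() { used = 0; }             // next iteration reuses the buffer
  ~thread_id_array() { free(entries); }
};

int main() {
  thread_id_array a;
  for (unsigned long id = 0; id < 1000; id++) a.push(id);
  a.reset();                             // no per-iteration leak
}
```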
-
jonas@perch.ndb.mysql.com authored
into perch.ndb.mysql.com:/home/jonas/src/mysql-5.0-ndb
-
jonas@perch.ndb.mysql.com authored
make sure not to leave partially initialized pagerange-records
-
jonas@perch.ndb.mysql.com authored
put64 for 64-bit variables
-
ibabaev@bk-internal.mysql.com authored
into bk-internal.mysql.com:/data0/bk/mysql-5.0-opt
-
- 31 Mar, 2007 6 commits
-
-
into mysql.com:/nfsdisk1/lars/MERGE/mysql-5.0-merge
-
into mysql.com:/nfsdisk1/lars/MERGE/mysql-5.0-merge
-
into mysql.com:/nfsdisk1/lars/MERGE/mysql-5.0-merge
-
ibabaev@bk-internal.mysql.com authored
into bk-internal.mysql.com:/data0/bk/mysql-4.1-opt
-
igor@olga.mysql.com authored
conditions. When allocating memory for KEY_FIELD/SARGABLE_PARAM structures, the function update_ref_and_keys() did not take into account the fact that a single row equality could be replaced by several simple equalities. Fixed by adjusting the counter cond_count accordingly for each subquery when substituting a row equality with simple equalities.
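A worked illustration of the accounting (toy arithmetic only; the real fix adjusts cond_count inside the server's equality substitution):

```cpp
#include <cstdio>

// A row equality such as (a, b, c) = (1, 2, 3) is rewritten into three
// simple equalities a=1 AND b=2 AND c=3, so a counter sized from the
// predicates seen before the rewrite undercounts by (arity - 1), and the
// KEY_FIELD/SARGABLE_PARAM buffer ends up too small.
int main() {
  int cond_count = 1;   // the row equality was counted as one predicate
  int row_arity = 3;    // (a, b, c) = (1, 2, 3)

  cond_count += row_arity - 1;   // account for the extra simple equalities

  printf("slots to allocate: %d\n", cond_count);   // 3, not 1
}
```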
-
ibabaev@bk-internal.mysql.com authored
into bk-internal.mysql.com:/data0/bk/mysql-5.0-opt
-
- 30 Mar, 2007 7 commits
-
-
into mysql.com:/nfsdisk1/lars/MERGE/mysql-5.0-merge
-
sergefp@mysql.com authored
-
sergefp@mysql.com authored
Pushbuild fixes:
- Make MAX_SEL_ARGS smaller (even 16K records_in_range() calls is more than it makes sense to do in typical cases).
- Don't call sel_arg->test_use_count() if we've already allocated more than MAX_SEL_ARGS elements. The test would succeed but would take too much time for the test suite (and not provide much value).
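A sketch of the second guard, with assumed names and an assumed cap value (the real constant and check live in the range optimizer):

```cpp
#include <cstdio>

static const unsigned long MAX_SEL_ARGS = 16000;  // assumed cap

struct SEL_ARG {
  bool test_use_count() const { return true; }  // stand-in for the real check
};

// Run the (expensive, tree-walking) consistency check only while the
// number of allocated SEL_ARG objects is under the cap; beyond it the
// check would pass anyway and just burn test-suite time.
bool tree_is_consistent(const SEL_ARG *root, unsigned long n_allocated) {
  if (n_allocated > MAX_SEL_ARGS)
    return true;                    // skip: assumed consistent
  return root->test_use_count();
}

int main() {
  SEL_ARG root;
  printf("%d\n", tree_is_consistent(&root, 100));     // check runs
  printf("%d\n", tree_is_consistent(&root, 20000));   // check skipped
}
```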
-
kent@mysql.com/kent-amd64.(none) authored
into mysql.com:/home/kent/bk/tmp/mysql-4.1-build
-
evgen@sunlight.local authored
NO_AUTO_VALUE_ON_ZERO mode. In NO_AUTO_VALUE_ON_ZERO mode the table->auto_increment_field_not_null variable is used to indicate that the user specified a non-NULL value for an auto_increment column. When an INSERT .. ON DUPLICATE updates the auto_increment field, this variable is set to true and stays unchanged for the next insert operation. This sometimes makes the next inserted row wrongly get 0 as the value of the auto_increment field. Fixed as follows:
- The fill_record() function now resets table->auto_increment_field_not_null before filling the record.
- The open_table() function also resets it, in case some auto_increment_field_not_null handling bug was missed.
- table->auto_increment_field_not_null is now reset at the end of the mysql_load() function.
- It is reset after each write_row() call in the copy_data_between_tables() function.
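A toy model of the first reset in the list above, with hypothetical structs standing in for the real TABLE and fill_record():

```cpp
#include <cstdio>

// Hypothetical stand-ins for the real TABLE structure and fill_record().
struct toy_table {
  bool auto_increment_field_not_null = false;
};

void fill_record(toy_table *table, const long *values,
                 const bool *value_is_explicit, int n_fields) {
  // The added reset: state left over from a previous
  // INSERT .. ON DUPLICATE KEY UPDATE must not leak into this row.
  table->auto_increment_field_not_null = false;

  for (int i = 0; i < n_fields; i++) {
    if (value_is_explicit[i])
      table->auto_increment_field_not_null = true;  // user supplied a value
    // ... store values[i] into the row buffer ...
  }
}

int main() {
  toy_table t;
  long v[] = {0};
  bool explicit_yes[] = {true}, explicit_no[] = {false};

  fill_record(&t, v, explicit_yes, 1);  // user explicitly inserted 0
  fill_record(&t, v, explicit_no, 1);   // next row gives no explicit value
  // Without the reset this would still print 1, and the next row's
  // auto_increment value could wrongly come out as 0.
  printf("flag after second row: %d\n", t.auto_increment_field_not_null);
}
```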
-
istruewing@chilla.local authored
into chilla.local:/home/mydev/mysql-5.0-axmrg
-
bar@mysql.com authored
into mysql.com:/home/bar/mysql-5.0.b22638
-
- 29 Mar, 2007 11 commits
-
-
istruewing@chilla.local authored
into chilla.local:/home/mydev/mysql-5.0-axmrg
-
svoj@mysql.com/april.(none) authored
into mysql.com:/home/svoj/devel/mysql/BUG25521/mysql-5.0-engines
-
gkodinov/kgeorge@magare.gmz authored
into magare.gmz:/home/kgeorge/mysql/autopush/B26815-5.0-opt
-
gkodinov/kgeorge@magare.gmz authored
into magare.gmz:/home/kgeorge/mysql/autopush/B26815-5.0-opt
-
gkodinov/kgeorge@magare.gmz authored
When creating a temporary table, the concise column type of a string expression is decided based on its length:
- if its length is under 512 it is stored as either VARCHAR or CHAR;
- otherwise it is stored as a BLOB.
There is a flag (convert_blob_length) to create_tmp_field() that, when greater than zero, allows forcing creation of a VARCHAR if the maximum blob length is under convert_blob_length. However, it must be verified that convert_blob_length (settable through a SQL option in some cases) is under the maximum length that can be stored in a VARCHAR column. While performing that check for expressions in create_tmp_field_from_item(), the maximum length of the blob was used instead. This caused BLOB columns to be created in the HEAP temporary table used by GROUP_CONCAT (where BLOBs must not be created, because of the constant convert_blob_length that is passed to create_tmp_field()), and since BLOB columns are not expected there we got wrong results. Fixed by checking that the value of the flag variable fits within the VARCHAR limits instead of checking the maximum length of the blob column.
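A sketch of the corrected decision, with an assumed limit constant and a hypothetical helper in place of create_tmp_field_from_item():

```cpp
#include <cstdio>

static const unsigned MAX_FIELD_VARCHARLENGTH = 65535;  // assumed VARCHAR limit

// The fixed check tests convert_blob_length itself against the VARCHAR
// limit, since that is the length the VARCHAR column would be created
// with; the bug was testing the expression's maximum blob length here.
const char *tmp_field_type(unsigned convert_blob_length) {
  if (convert_blob_length > 0 &&
      convert_blob_length <= MAX_FIELD_VARCHARLENGTH)
    return "VARCHAR";
  return "BLOB";
}

int main() {
  // GROUP_CONCAT passes a constant convert_blob_length, so its HEAP temp
  // table gets a VARCHAR column rather than an unsupported BLOB.
  printf("%s\n", tmp_field_type(512));   // VARCHAR
  printf("%s\n", tmp_field_type(0));     // BLOB: no conversion requested
}
```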
-
tomas@whalegate.ndb.mysql.com authored
into whalegate.ndb.mysql.com:/home/tomas/mysql-5.0-ndb
-
into mysql.com:/nfsdisk1/lars/MERGE/mysql-5.0-merge
-
gkodinov/kgeorge@magare.gmz authored
into magare.gmz:/home/kgeorge/mysql/autopush/B27300-5.0-opt
-
gkodinov/kgeorge@magare.gmz authored
into magare.gmz:/home/kgeorge/mysql/autopush/B27300-5.0-opt
-
sergefp@mysql.com authored
-
sergefp@mysql.com authored
- Post-review fixes
-