- 07 Feb, 2007 1 commit
-
-
unknown authored
bug#25821 Excessive partition pruning for multi-range index scan in NDB API: added multi_range error checking in end_of_bound; removed stray method declaration
sql/ha_ndbcluster.h: removed stray method declaration
ndb/include/ndbapi/NdbScanOperation.hpp, ndb/src/ndbapi/NdbScanOperation.cpp, sql/ha_ndbcluster.cc: added multi_range error checking in end_of_bound
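For illustration, a minimal standalone sketch of the kind of guard this fix describes; the struct, the error code, and the range-number bookkeeping are hypothetical simplifications, not the real NdbScanOperation API:

```cpp
#include <cstdio>

// Hypothetical, simplified stand-in for an end_of_bound() check: a caller
// defining the bounds of a multi-range index scan must close each range
// with a strictly increasing range number, or the call fails.
struct IndexScanOp {
    int m_last_range_no = -1;  // highest range number seen so far
    int m_error = 0;

    int end_of_bound(int range_no) {
        if (range_no <= m_last_range_no) {
            m_error = 4286;    // made-up error code for "invalid range no"
            return -1;
        }
        m_last_range_no = range_no;
        return 0;
    }
};

int main() {
    IndexScanOp op;
    std::printf("range 0: %d\n", op.end_of_bound(0));        // ok
    std::printf("range 1: %d\n", op.end_of_bound(1));        // ok
    std::printf("range 1 again: %d\n", op.end_of_bound(1));  // rejected
}
```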
-
- 06 Feb, 2007 1 commit
-
-
unknown authored
Fix for bug#25821 Excessive partition pruning for multi-range index scan in NDB API: don't set distribution key if multi_range
-
- 01 Feb, 2007 1 commit
-
-
unknown authored
Bug #25522 Update with IN syntax Clustertable + Trigger leads to mysqld segfault: moved back assignment
-
- 31 Jan, 2007 1 commit
-
-
unknown authored
Bug #25522 Update with IN syntax Clustertable + Trigger leads to mysqld segfault: in start_stmt, only change query_state if starting a new transaction; in read_multi_range_next, change query state when the end is reached
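A toy model of the state transitions described here (my simplification, not the real ha_ndbcluster code; the enum and member names are illustrative):

```cpp
#include <cstdio>

// Illustrative model of the guard the fix describes: the per-statement
// query state is reset only when a new transaction actually starts, and
// is advanced to "end reached" by the multi-range read when it runs out
// of ranges.
enum QueryState { QS_IDLE, QS_MULTI_RANGE_ACTIVE, QS_END_REACHED };

struct Handler {
    QueryState query_state = QS_IDLE;
    bool trans_started = false;

    void start_stmt() {
        if (!trans_started) {       // only when a new transaction starts
            trans_started = true;
            query_state = QS_IDLE;  // safe to reset here
        }
        // otherwise leave query_state alone, so an in-flight
        // multi-range scan is not clobbered mid-transaction
    }

    bool read_multi_range_next(int ranges_remaining) {
        if (ranges_remaining == 0) {
            query_state = QS_END_REACHED;  // change state at end of ranges
            return false;
        }
        query_state = QS_MULTI_RANGE_ACTIVE;
        return true;
    }
};

int main() {
    Handler h;
    h.start_stmt();
    for (int left = 2; h.read_multi_range_next(left); --left) {}
    std::printf("final state: %d\n", (int)h.query_state);  // 2 == QS_END_REACHED
}
```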
-
- 08 Jan, 2007 1 commit
-
-
unknown authored
bug#24820 CREATE INDEX ....USING HASH on NDB table creates ordered index, not HASH index: Added error checking
-
- 14 Dec, 2006 1 commit
-
-
unknown authored
-
- 30 Nov, 2006 3 commits
-
-
unknown authored
bug#18487 UPDATE IGNORE not supported for unique constraint violation of non-primary key: only check pk if it is updated
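A sketch of the "only check the pk if it is updated" decision; the real code inspects the table's write set and key parts, while the bitset and column index below are made up for illustration:

```cpp
#include <bitset>
#include <cstdio>

// Hypothetical: column 0 is the primary key of an 8-column table.
constexpr int PK_FIELD = 0;

// UPDATE IGNORE only needs to pre-check (peek) the primary key when the
// statement actually writes a PK column; otherwise the peek is skipped.
bool pk_needs_duplicate_check(const std::bitset<8>& write_set) {
    return write_set.test(PK_FIELD);
}

int main() {
    std::bitset<8> updates_pk;    updates_pk.set(PK_FIELD);
    std::bitset<8> updates_other; updates_other.set(3);
    std::printf("%d %d\n",
                pk_needs_duplicate_check(updates_pk),      // 1: must peek
                pk_needs_duplicate_check(updates_other));  // 0: skip peek
}
```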
-
unknown authored
bug#18487 UPDATE IGNORE not supported for unique constraint violation of non-primary key: handle INSERT ... ON DUPLICATE KEY UPDATE
-
unknown authored
#18487 UPDATE IGNORE not supported for unique constraint violation of non-primary key: call peek_index_rows
-
- 07 Nov, 2006 1 commit
-
-
unknown authored
bug#21507 I can't create a unique hash index in NDB: Added possibility to create hash-only indexes with NULL-valued attributes, but any NULL-valued access will become a full table scan with a pushed condition on the index attribute values
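A rough model of the access-path decision this implies (all names illustrative): a unique hash index can only serve a lookup when no key attribute is NULL; otherwise the engine falls back to scanning with the condition pushed down.

```cpp
#include <cstdio>

// Illustrative decision: NULL in any key attribute means the hash index
// cannot be probed, so a full scan with a pushed condition is used.
enum AccessPath { HASH_INDEX_LOOKUP, FULL_SCAN_WITH_PUSHED_COND };

AccessPath choose_path(bool any_key_attr_is_null) {
    return any_key_attr_is_null ? FULL_SCAN_WITH_PUSHED_COND
                                : HASH_INDEX_LOOKUP;
}

int main() {
    std::printf("key (1,2):    %d\n", choose_path(false));  // hash lookup
    std::printf("key (1,NULL): %d\n", choose_path(true));   // full scan
}
```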
-
- 24 Oct, 2006 1 commit
-
-
unknown authored
The util thread wasn't behaving correctly after a 241 error, because get_table_statistics was not properly returning an error code.
sql/ha_ndbcluster.cc: correctly call ndb_get_table_statistics in get_commitcount (don't report the error), but also return an error code from get_table_statistics so that the util thread gets the error code
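A minimal sketch of the error-propagation fix, with simplified, hypothetical signatures (the real function fills in NDB table statistics; 241 is the temporary NDB error mentioned above):

```cpp
#include <cstdio>

struct Stats { unsigned long long row_count; };

// After the fix: the NDB error code is returned to the caller instead of
// being swallowed, so the util thread can see it and react.
int get_table_statistics(Stats* out, bool simulate_241_error) {
    if (simulate_241_error)
        return 241;        // propagate the error code
    out->row_count = 42;
    return 0;
}

int main() {
    Stats s;
    if (int err = get_table_statistics(&s, true))
        std::printf("util thread sees error %d and can retry\n", err);
}
```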
-
- 23 Oct, 2006 2 commits
-
-
unknown authored
-
unknown authored
Fixes for ndb_* tests broken by the previous fix; be more careful in ndb about setting errors on failure of the info call (especially in open).
sql/ha_ndbcluster.cc, sql/ha_ndbcluster.h: fix some ndb* tests failing due to the fix for 19914; be more careful about setting errors on failure of the info call
-
- 19 Oct, 2006 1 commit
-
-
unknown authored
This changes the lock taken during peek, to decrease the likelihood of transaction abort.
sql/ha_ndbcluster.cc: use an exclusive lock in peek, as peek is used just before insert/update
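The intuition, in a deliberately tiny sketch (enum values and function name are stand-ins): since a write always follows the peek, asking for the exclusive lock up front avoids a read-to-exclusive upgrade that could abort the transaction.

```cpp
#include <cstdio>

enum LockMode { LM_READ, LM_EXCLUSIVE };

// Before the fix the peek took LM_READ and risked a lock upgrade just
// before the insert/update; after the fix it takes LM_EXCLUSIVE directly.
LockMode lock_mode_for_peek() {
    return LM_EXCLUSIVE;
}

int main() {
    std::printf("peek lock mode: %s\n",
                lock_mode_for_peek() == LM_EXCLUSIVE ? "exclusive" : "read");
}
```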
-
- 21 Sep, 2006 1 commit
-
-
unknown authored
-
- 15 Sep, 2006 1 commit
-
-
unknown authored
-
- 13 Sep, 2006 1 commit
-
-
unknown authored
Bug #21378 Alter table from X storage engine to NDB could cause data loss: skip autodiscover of local tables
-
- 12 Sep, 2006 1 commit
-
-
unknown authored
Bug #21378 Alter table from X storage engine to NDB could cause data loss: Added warning if local table shadows ndb table
-
- 05 Sep, 2006 2 commits
-
-
unknown authored
-
unknown authored
In 5.0 we made LOAD DATA INFILE autocommit in all engines, while only NDB wanted that. Users and trainers complained that it affected InnoDB and was a change compared to 4.1, where only NDB autocommitted. To revert to the behaviour of 4.1, we move the autocommit logic out of mysql_load() into ha_ndbcluster::external_lock(). The result is that LOAD DATA INFILE commits all uncommitted NDB changes and its own changes if this is an NDB table, but does not affect other engines. Note: even though there is no "commit the full transaction at end" anymore, LOAD DATA INFILE stays disabled in routines (re-entrancy problems, per a comment of Pem). Note: ha_ndbcluster::has_transactions() does not give reliable results because it says "yes" even if transactions are disabled in this engine.
sql/ha_ndbcluster.cc: NDB wants to autocommit if this is LOAD DATA INFILE; so that this does not affect all other engines, we move the logic inside ha_ndbcluster
sql/sql_load.cc: the ha_enable_transaction() call in mysql_load() forced an autocommit in all engines, while only NDB wants that, so we move the logic into ha_ndbcluster.cc
mysql-test/include/loaddata_autocom.inc: test for engines to see whether they autocommit in LOAD DATA INFILE
mysql-test/r/loaddata_autocom_innodb.result: result for InnoDB (no autocommit)
mysql-test/r/loaddata_autocom_ndb.result: result for NDB (autocommit)
mysql-test/r/rpl_ndb_innodb_trans.result: result for InnoDB+NDB transactions; observe that when ROLLBACK cannot roll back the LOAD DATA INFILE in NDB, it issues warning 1196 as appropriate
mysql-test/t/loaddata_autocom_innodb.test: test that InnoDB does not autocommit in LOAD DATA INFILE
mysql-test/t/loaddata_autocom_ndb.test: test that NDB does autocommit in LOAD DATA INFILE
mysql-test/t/rpl_ndb_innodb_trans-slave.opt: need to tell the slave to use InnoDB
mysql-test/t/rpl_ndb_innodb_trans.test: test of transactions mixing NDB and InnoDB, to see whether ROLLBACK rolls back in both engines, with the exception of LOAD DATA INFILE, which does not roll back in NDB: a LOAD DATA INFILE in NDB commits everything done in NDB so far, commits its own changes, but does not commit in other engines
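A condensed model of where the decision moved; the command constant, thd shape, and handler shape are simplified stand-ins for the real server types:

```cpp
#include <cstdio>

enum SqlCommand { SQLCOM_SELECT, SQLCOM_LOAD };

struct Thd { SqlCommand sql_command; };

// Instead of mysql_load() forcing ha_enable_transaction() for every
// engine, the NDB handler itself commits when it sees LOAD DATA INFILE
// arriving in external_lock(); other engines are untouched.
struct NdbHandler {
    bool committed = false;
    void external_lock(Thd* thd, bool lock) {
        if (lock && thd->sql_command == SQLCOM_LOAD)
            committed = true;  // commit pending NDB changes (simplified)
    }
};

int main() {
    Thd thd{SQLCOM_LOAD};
    NdbHandler h;
    h.external_lock(&thd, true);
    std::printf("NDB autocommitted: %d\n", h.committed);
}
```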
-
- 30 Aug, 2006 1 commit
-
-
unknown authored
"strict mode: inserts autogenerated auto_increment value bigger than max" Strict mode should fail if autoincrement value is out of range include/my_base.h: Add new handler error codes sql/ha_berkeley.cc: handle error in update_auto_increment() sql/ha_heap.cc: handle error in update_auto_increment() sql/ha_innodb.cc: handle error in update_auto_increment() sql/ha_myisam.cc: handle error in update_auto_increment() sql/ha_myisammrg.cc: handle error in update_auto_increment() sql/ha_ndbcluster.cc: handle error in update_auto_increment() sql/handler.cc: return error from handler::update_auto_increment() sql/handler.h: change return type of handler::update_auto_increment() to int sql/share/errmsg.txt: new error message for auto-increment mysql-test/include/strict_autoinc.inc: New BitKeeper file ``mysql-test/include/strict_autoinc.inc'' mysql-test/r/strict_autoinc_1myisam.result: New BitKeeper file ``mysql-test/r/strict_autoinc_1myisam.result'' mysql-test/r/strict_autoinc_2innodb.result: New BitKeeper file ``mysql-test/r/strict_autoinc_2innodb.result'' mysql-test/r/strict_autoinc_3heap.result: New BitKeeper file ``mysql-test/r/strict_autoinc_3heap.result'' mysql-test/r/strict_autoinc_4bdb.result: New BitKeeper file ``mysql-test/r/strict_autoinc_4bdb.result'' mysql-test/r/strict_autoinc_5ndb.result: New BitKeeper file ``mysql-test/r/strict_autoinc_5ndb.result'' mysql-test/t/strict_autoinc_1myisam.test: New BitKeeper file ``mysql-test/t/strict_autoinc_1myisam.test'' mysql-test/t/strict_autoinc_2innodb.test: New BitKeeper file ``mysql-test/t/strict_autoinc_2innodb.test'' mysql-test/t/strict_autoinc_3heap.test: New BitKeeper file ``mysql-test/t/strict_autoinc_3heap.test'' mysql-test/t/strict_autoinc_4bdb.test: New BitKeeper file ``mysql-test/t/strict_autoinc_4bdb.test'' mysql-test/t/strict_autoinc_5ndb.test: New BitKeeper file ``mysql-test/t/strict_autoinc_5ndb.test''
-
- 22 Aug, 2006 1 commit
-
-
unknown authored
sql/ha_ndbcluster.cc: calculate frags with (ulonglong)max_rows in case --without-big-tables
-
- 15 Aug, 2006 3 commits
-
-
unknown authored
Fix for bug #21059 Server crashes on join query with large dataset with NDB tables: do not release operation records for on-going read_multi_range
-
unknown authored
Init ndb_cache_check_time and honor its value in my.cnf.
sql/ha_ndbcluster.cc: init ndb_cache_check_time and honor the value in my.cnf
-
unknown authored
bug #18184 SELECT ... FOR UPDATE does not work..: new test case (mysql-test/r/ndb_lock.result, mysql-test/t/ndb_lock.test)
Fix for bug #21059 Server crashes on join query with large dataset with NDB tables: release the operations for each intermediate batch, before the next call to trans->execute(NoCommit) (ndb/include/ndbapi/NdbConnection.hpp, sql/ha_ndbcluster.cc, sql/ha_ndbcluster.h)
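The effect of the bug #21059 fix, in a deliberately simplified standalone form: the vector stands in for the transaction's operation records, and trans->execute(NoCommit) is only indicated by a comment. Without the per-batch release, the records would accumulate across batches on a large join until resources ran out.

```cpp
#include <cstdio>
#include <vector>

struct Op { int id; };  // stand-in for an NDB operation record

int main() {
    std::vector<Op> pending;
    const int batches = 3, ops_per_batch = 4;
    for (int b = 0; b < batches; ++b) {
        for (int i = 0; i < ops_per_batch; ++i)
            pending.push_back(Op{b * ops_per_batch + i});
        // trans->execute(NoCommit) would run here ...
        pending.clear();  // ... and the completed ops are released per batch
        std::printf("after batch %d: %zu pending ops\n", b, pending.size());
    }
}
```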
-
- 11 Aug, 2006 1 commit
-
-
unknown authored
Use correct terminology.
sql/ha_ndbcluster.cc: just replace 'number_of_storage_nodes' with 'number_of_data_nodes'
-
- 10 Aug, 2006 1 commit
-
-
unknown authored
Allow handler::info to return an error code (that will be returned to the user).
sql/ha_berkeley.{cc,h}, sql/ha_heap.{cc,h}, sql/ha_innodb.{cc,h}, sql/ha_myisam.{cc,h}, sql/ha_myisammrg.{cc,h}, sql/ha_ndbcluster.{cc,h}, sql/examples/ha_archive.{cc,h}, sql/examples/ha_example.{cc,h}, sql/examples/ha_tina.{cc,h}, sql/handler.h: update handler::info interface to return int
sql/opt_sum.cc: if we get an error when using handler::info to get count(*), print and return the error
sql/sql_select.cc: if error, set fatal error
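A sketch of why the signature change matters; the classes and the error code below are simplified stand-ins, not the real handler hierarchy:

```cpp
#include <cstdio>

// handler::info() used to return void, so a failed statistics fetch
// (e.g. a lost cluster connection) could not be surfaced; returning int
// lets callers such as the COUNT(*) optimization report the error.
struct HandlerBase {
    virtual int info(unsigned flag) = 0;  // was: virtual void info(uint)
    virtual ~HandlerBase() = default;
};

struct NdbLikeHandler : HandlerBase {
    bool cluster_reachable = false;
    int info(unsigned) override {
        return cluster_reachable ? 0 : 157;  // illustrative error code
    }
};

int main() {
    NdbLikeHandler h;
    if (int err = h.info(0))
        std::printf("info() failed with %d; the statement returns it\n", err);
}
```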
-
- 04 Jul, 2006 1 commit
-
-
unknown authored
- partial backport of code from 5.1: do not call compare_record for engines that do not read all columns during update
-
- 01 Jul, 2006 1 commit
-
-
unknown authored
Fix for bug#18437 "Wrong values inserted with a before update trigger on NDB table". The SQL layer was not marking fields used in triggers as such, so these fields were not always properly retrieved/stored by the handler layer, and one could get wrong values or lose changes in triggers for NDB, Federated and possibly InnoDB tables. This fix solves the problem by marking fields used in triggers appropriately.
This patch also contains the following cleanup of ha_ndbcluster code: we no longer rely on reading the LEX::sql_command value in the handler to determine whether we can enable the optimization that handles REPLACE statements more efficiently by doing replaces directly in the write_row() method without reporting an error to the SQL layer. Instead, the SQL layer informs us whether this optimization is applicable by calling handler::extra() with the HA_EXTRA_WRITE_CAN_REPLACE flag. As a result we no longer apply this optimization in cases where it should not be used (e.g. if the table has ON DELETE triggers), and we use it in some additional cases where it is applicable (e.g. for LOAD DATA REPLACE).
Finally, this patch includes a fix for bug#20728 "REPLACE does not work correctly for NDB table with PK and unique index". This was yet another problem caused by improper field mark-up: during row replacement, fields not explicitly used in the REPLACE statement were not marked as fields to be saved (updated), so they retained values from the old row version. The fix is to mark all table fields as set for REPLACE statements. Note that in 5.1 we already solve this problem by notifying the handler that it should save values from all fields only when a real replacement happens.
include/my_base.h: added HA_EXTRA_WRITE_CAN_REPLACE and HA_EXTRA_WRITE_CANNOT_REPLACE, new parameters for the ha_extra() method, used to inform the handler that a write_row() which encounters an existing row with the same primary/unique key may replace the old row instead of reporting an error
mysql-test/r/federated.result, mysql-test/t/federated.test: additional test for bug#18437
mysql-test/r/ndb_replace.result, mysql-test/t/ndb_replace.test: added test for bug#20728; updated wrong results from an older test
sql/ha_ndbcluster.cc: rely on handler::extra() with HA_EXTRA_WRITE_CAN_REPLACE instead of LEX::sql_command to decide when write_row() may do a direct replace
sql/item.cc: Item_trigger_field::setup_field(): added a comment explaining why we don't set Field::query_id in this method
sql/mysql_priv.h: mysql_alter_table() no longer takes a handle_duplicates argument; added declaration of mark_fields_used_by_triggers_for_insert_stmt()
sql/sql_delete.cc: mark fields used by ON DELETE triggers so the handler will retrieve values for them
sql/sql_insert.cc: explicitly inform the handler (via ha_extra()) when write_row() may promote an insert to a replace; for REPLACE, store values for all columns; mark fields used by ON UPDATE/ON DELETE triggers so the handler can properly retrieve/restore them during REPLACE and INSERT ... ON DUPLICATE KEY UPDATE
sql/sql_load.cc: explicitly inform the handler when doing LOAD DATA REPLACE; save (replace) values for all columns; mark fields used by ON INSERT triggers so the handler can properly store values for them
sql/sql_parse.cc, sql/sql_table.cc: got rid of the handle_duplicates argument in mysql_alter_table() and copy_data_between_tables(); these were always called with handle_duplicates == DUP_ERROR and thus contained dead (and probably incorrect) code
sql/sql_trigger.cc, sql/sql_trigger.h: added Table_triggers_list::mark_fields_used(), which marks fields read/set by triggers so handlers can properly retrieve/store values in them; implemented via a new 'trigger_fields' member, an array of lists linking the items for all fields used in triggers, grouped by event and action time
sql/sql_update.cc: mark fields used by ON UPDATE triggers so the handler will retrieve and save values for them
mysql-test/r/ndb_trigger.result, mysql-test/t/ndb_trigger.test: added test for bug#18437
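A condensed model of the two ideas in this patch, with made-up flag values and an 8-column toy table (the real HA_EXTRA_* constants live in include/my_base.h): (1) fields read or set by triggers are marked so the handler fetches them, and (2) the SQL layer, not LEX::sql_command, tells the handler when write_row() may do a direct replace.

```cpp
#include <bitset>
#include <cstdio>

// Stand-ins for the extra() flags added by this patch.
enum ExtraFlag { EXTRA_WRITE_CAN_REPLACE, EXTRA_WRITE_CANNOT_REPLACE };

struct Handler {
    bool replace_allowed = false;
    std::bitset<8> read_set;  // columns the handler must retrieve

    void extra(ExtraFlag f) {  // the SQL layer decides, not the handler
        replace_allowed = (f == EXTRA_WRITE_CAN_REPLACE);
    }
    void mark_trigger_fields(const std::bitset<8>& used_by_triggers) {
        read_set |= used_by_triggers;  // ensure trigger fields are fetched
    }
};

int main() {
    Handler h;
    std::bitset<8> trig; trig.set(2); trig.set(5);
    h.mark_trigger_fields(trig);
    h.extra(EXTRA_WRITE_CAN_REPLACE);  // e.g. REPLACE / LOAD DATA REPLACE
    std::printf("replace ok=%d, read_set=%s\n",
                h.replace_allowed, h.read_set.to_string().c_str());
}
```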
-
- 30 Jun, 2006 3 commits
-
-
unknown authored
BitKeeper/etc/ignore: added scripts/mysql_upgrade_shell
include/my_handler.h: my_handler.h should not include my_global.h
mysql-test/r/key.result: update results after merge
-
unknown authored
- added missing retrieval of hidden primary key
-
unknown authored
heap/hp_test1.c: changed type from last commit
mysql-test/mysql-test-run.sh: fixed problem with running with --gdb and two masters; don't disable ndb because we run gdb
mysql-test/t/mysqldump.test: don't read ~/.my.cnf
sql/ha_ndbcluster.cc: portability fix
-
- 29 Jun, 2006 2 commits
- 27 Jun, 2006 2 commits
-
-
unknown authored
- correction of previous patch
-
unknown authored
Bug #19852 Restoring backup made from cluster with full data memory fails: make sure to allocate just enough pages in the fragments by using the actual row count from the backup, to avoid over-allocation of pages to fragments and thus avoid the bug.
ndb/include/kernel/GlobalSignalNumbers.h: distribute fragment complete to all participants to update the row count
ndb/include/kernel/signaldata/BackupContinueB.hpp: time-slice the writing of fragment info to the ctl file
ndb/include/kernel/signaldata/BackupImpl.hpp: 32 -> 64 bit on bytes and records; new signal fragment complete to all participants
ndb/include/kernel/signaldata/BackupSignalData.hpp, ndb/src/common/debugger/signaldata/BackupImpl.cpp, ndb/src/common/debugger/signaldata/BackupSignalData.cpp, ndb/src/mgmsrv/MgmtSrvr.cpp, ndb/src/mgmsrv/MgmtSrvr.hpp: 32 -> 64 bit on bytes and records
ndb/include/kernel/signaldata/DictTabInfo.hpp, ndb/src/common/debugger/signaldata/DictTabInfo.cpp: added min and max rows to dict tab info
ndb/include/kernel/signaldata/LqhFrag.hpp, ndb/include/kernel/signaldata/TupFrag.hpp, ndb/src/common/debugger/signaldata/LqhFrag.cpp, ndb/src/kernel/blocks/dblqh/Dblqh.hpp, ndb/src/kernel/blocks/dblqh/DblqhMain.cpp, ndb/src/kernel/blocks/dbtup/Dbtup.hpp: added min and max rows to the add frag req
ndb/include/ndbapi/NdbDictionary.hpp: added get/set of min and max rows
ndb/src/kernel/blocks/backup/Backup.cpp, Backup.hpp, BackupFormat.hpp: new section in backup with per-fragment info in the ctl file; 32 -> 64 bit on bytes and records
ndb/src/kernel/blocks/backup/BackupInit.cpp: new signal fragment complete to all participants
ndb/src/kernel/blocks/dbdict/Dbdict.cpp, Dbdict.hpp: added max and min rows to the dict table object
ndb/src/kernel/blocks/dbtup/DbtupMeta.cpp: added min and max rows to the frag req; moved memory allocation for the fragment to after the adding of attributes to get the correct head size; allocate pages to fragments according to the min rows setting
ndb/src/kernel/blocks/dbtup/DbtupPageMap.cpp: grow page allocation starting from 2, irrespective of first page allocation
ndb/src/ndbapi/NdbDictionary.cpp, NdbDictionaryImpl.cpp, NdbDictionaryImpl.hpp: min and max rows in dict
ndb/tools/restore/Restore.cpp, Restore.hpp: add retrieval of fragment info
ndb/tools/restore/consumer_restore.cpp: set min in restore to the actual row count (this is the actual bug fix)
sql/ha_ndbcluster.cc: set min and max rows according to the SQL definition
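A back-of-envelope model of why the actual-row-count fix helps; the rows-per-page figure and the function are illustrative only, not NDB's real page math:

```cpp
#include <cstdio>

// Pages needed for a fragment, rounding up: driving this from the real
// row count in the backup avoids allocating for an inflated max_rows
// guess and exhausting data memory on restore.
unsigned pages_for_fragment(unsigned long long min_rows,
                            unsigned rows_per_page) {
    return (unsigned)((min_rows + rows_per_page - 1) / rows_per_page);
}

int main() {
    const unsigned long long actual_rows_in_backup = 10000;
    std::printf("pages from actual count: %u\n",
                pages_for_fragment(actual_rows_in_backup, 128));
}
```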
-
- 26 Jun, 2006 1 commit
-
-
unknown authored
Change names of some undocumented ndb status variables to better reflect what their values mean.
sql/ha_ndbcluster.cc: rename some status variables to better reflect what they show
-
- 21 Jun, 2006 1 commit
-
-
unknown authored
-
- 14 Jun, 2006 2 commits
-
-
unknown authored
- correction of backport error
-
unknown authored
- make sure to disable bulk insert when a check for duplicate keys is needed
mysql-test/r/ndb_loaddatalocal.result, mysql-test/t/ndb_loaddatalocal.test: new BitKeeper files
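A small sketch of the decision this adds (names illustrative): bulk insert batches rows inside the engine, but a statement that must detect duplicate keys row by row cannot defer its writes, so batching is turned off.

```cpp
#include <cstdio>

// Bulk insert is only safe when no per-row duplicate-key check is needed,
// e.g. plain LOAD DATA on a table without conflicting unique keys.
bool allow_bulk_insert(bool needs_duplicate_key_check) {
    return !needs_duplicate_key_check;
}

int main() {
    std::printf("plain LOAD DATA: bulk=%d\n", allow_bulk_insert(false));
    std::printf("needs dup check: bulk=%d\n", allow_bulk_insert(true));
}
```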
-