Commit c89ddfdf authored by unknown

Bug #14526: Partitions: indexed searches fail.

When inserting a row into a partitioned table, the value of auto_increment
fields was not calculated until after deciding which partition to add the
row to, which led to rows being written to the wrong partitions (or to
spurious errors).


mysql-test/r/partition.result:
  Add new results
mysql-test/t/partition.test:
  Add new regression test
sql/ha_partition.cc:
  Fix notes about, and handling of, auto_increment in ha_partition::write_row().
  We have to decide on an auto_increment value before we can figure out which
  partition the row should be inserted into (see the sketch after these notes).
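
To make that ordering concrete, below is a small self-contained toy sketch.
It is not part of the commit and not the MySQL source; every type and function
name in it is an illustrative stand-in for the real ha_partition machinery. It
only shows why the auto_increment value has to be generated before the
partition function is evaluated.

// Toy model of the write_row() ordering fix (stand-in names only).
#include <cstdio>

struct Row { long long id; };              // id == 0 means "not assigned yet"

static long long next_value = 1;

// Stand-in for update_auto_increment(): fill in the generated value.
static void fill_auto_increment(Row &row)
{
  if (row.id == 0)
    row.id = next_value++;
}

// Stand-in for a LIST(id) partition function with partitions for values 1..4.
// Returns -1 when no partition matches the current value of id.
static int partition_for(const Row &row)
{
  return (row.id >= 1 && row.id <= 4) ? (int) row.id : -1;
}

int main()
{
  Row row = {0};                           // INSERT ... VALUES (NULL)

  // Pre-fix order: pick the partition first, then generate the value.
  int before_fix = partition_for(row);     // sees id == 0 -> no partition (-1)

  // Post-fix order: generate the value first, then pick the partition.
  fill_auto_increment(row);
  int after_fix = partition_for(row);      // sees id == 1 -> partition 1

  std::printf("before fix: %d, after fix: %d\n", before_fix, after_fix);
  return 0;
}
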
parent ed8b7459
@@ -312,4 +312,22 @@ partition by hash(f_int1) partitions 2;
insert into t1 values (1,1),(2,2);
replace into t1 values (1,1),(2,2);
drop table t1;
create table t2 (s1 int not null auto_increment, primary key (s1)) partition by list (s1) (partition p1 values in (1),partition p2 values in (2),partition p3 values in (3),partition p4 values in (4));
insert into t2 values (null),(null),(null);
select * from t2;
s1
1
2
3
select * from t2 where s1 < 2;
s1
1
update t2 set s1 = s1 + 1 order by s1 desc;
select * from t2 where s1 < 3;
s1
2
select * from t2 where s1 = 2;
s1
2
drop table t2;
End of 5.1 tests
@@ -401,4 +401,16 @@ insert into t1 values (1,1),(2,2);
replace into t1 values (1,1),(2,2);
drop table t1;
#
# Bug #14526: Partitions: indexed searches fail
#
create table t2 (s1 int not null auto_increment, primary key (s1)) partition by list (s1) (partition p1 values in (1),partition p2 values in (2),partition p3 values in (3),partition p4 values in (4));
insert into t2 values (null),(null),(null);
select * from t2;
select * from t2 where s1 < 2;
update t2 set s1 = s1 + 1 order by s1 desc;
select * from t2 where s1 < 3;
select * from t2 where s1 = 2;
drop table t2;
--echo End of 5.1 tests
@@ -2600,22 +2600,13 @@ void ha_partition::unlock_row()
ha_berkeley.cc has a variant of how to store it intact by "packing" it
for ha_berkeley's own native storage type.
See the note for update_row() on auto_increments and timestamps. This
case also applies to write_row().
Called from item_sum.cc, item_sum.cc, sql_acl.cc, sql_insert.cc,
sql_insert.cc, sql_select.cc, sql_table.cc, sql_udf.cc, and sql_update.cc.
ADDITIONAL INFO:
Most handlers set timestamp when calling write row if any such fields
exist. Since we are calling an underlying handler we assume the
underlying handler will assume this responsibility.
Underlying handlers will also call update_auto_increment to calculate
the new auto increment value. We will catch the call to
get_auto_increment and ensure this increment value is maintained by
only one of the underlying handlers.
We have to set timestamp fields and auto_increment fields, because those
may be used in determining which partition the row should be written to.
*/
int ha_partition::write_row(byte * buf)
@@ -2629,6 +2620,17 @@ int ha_partition::write_row(byte * buf)
  DBUG_ENTER("ha_partition::write_row");
  DBUG_ASSERT(buf == m_rec0);

  /* If we have a timestamp column, update it to the current time */
  if (table->timestamp_field_type & TIMESTAMP_AUTO_SET_ON_INSERT)
    table->timestamp_field->set_time();

  /*
    If we have an auto_increment column and we are writing a changed row
    or a new row, then update the auto_increment value in the record.
  */
  if (table->next_number_field && buf == table->record[0])
    update_auto_increment();

#ifdef NOT_NEEDED
  if (likely(buf == rec0))
#endif
......