Commit be63f0af authored by Mattias Jonsson

Bug#38804: Query deadlock causes all tables to be inaccessible.

The problem was a mutex added in bug#27405 to solve a problem with
auto_increment in partitioned InnoDB tables
(in ha_partition::write_row, around the per-partition file->ha_write_row calls).

Solution is to use the patch for bug#33479, which refines the
usage of mutexes for auto_increment.

Backport of bug-33479 from 6.0:

Bug-33479: auto_increment failures in partitioning

Several problems with auto_increment in partitioning
(with MyISAM and InnoDB: locking issues, multi-row INSERTs not
handled properly, etc.)

Changed the auto_increment handling for partitioning:
Added an ha_data pointer in TABLE_SHARE for storage engine specific data,
such as the auto_increment value handling in partitioning (also see WL 4305),
and used the table_share mutex to lock the read + update as one unit.

The idea is this:
Store the table's reserved auto_increment value in the TABLE_SHARE and
use a mutex so that reading the value, updating it and unlocking happen
as one block. All partitions are only queried when the value has not yet
been initialized.
Also allow reservation of ranges; if no one has made a later reservation,
lower the reservation to what was actually used once the statement is
done (via release_auto_increment from WL 3146).
The lock is kept from the first reservation onwards when statement-based
replication is used together with a multi-row INSERT statement where the
number of candidate rows to insert is not known in advance (like
INSERT ... SELECT and LOAD DATA, unlike INSERT VALUES (row1), (row2), ..., (rowN)).

This should also lead to better concurrency (no need for a mutex
protecting write_row in all cases) and work with any local storage engine.
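
To make the scheme concrete, here is a minimal, self-contained sketch of the
reserve-under-mutex / release-unused idea. This is not the server code: the
names PartAutoInc, reserve_values and release_unused are made up for the
illustration (the real patch keeps the value in TABLE_SHARE::ha_data and uses
the share mutex, as shown in the diffs below).

#include <pthread.h>

struct PartAutoInc                    /* plays the role of HA_DATA_PARTITION */
{
  pthread_mutex_t mutex;
  unsigned long long next_val;        /* first non-reserved value */
};

/* Reserve 'want' consecutive values and return the first of them. */
unsigned long long reserve_values(PartAutoInc *d, unsigned long long want)
{
  pthread_mutex_lock(&d->mutex);      /* read + update as one block */
  unsigned long long first= d->next_val;
  d->next_val+= want;
  pthread_mutex_unlock(&d->mutex);
  return first;
}

/*
  After the statement: if nobody reserved values after us, lower the counter
  to the first value we never used, so no gap is left behind.
*/
void release_unused(PartAutoInc *d, unsigned long long reserved_end,
                    unsigned long long first_unused)
{
  pthread_mutex_lock(&d->mutex);
  if (d->next_val == reserved_end)    /* our reservation is still the last one */
    d->next_val= first_unused;
  pthread_mutex_unlock(&d->mutex);
}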

mysql-test/suite/parts/inc/partition_auto_increment.inc:
  Bug#38804: Query deadlock causes all tables to be inaccessible.
  Backporting from 6.0 of:
  Bug-33479: auto_increment failures in partitioning
  
  Test source file for testing auto_increment
mysql-test/suite/parts/r/partition_auto_increment_archive.result:
  Bug#38804: Query deadlock causes all tables to be inaccessible.
  Backporting from 6.0 of:
  Bug-33479: auto_increment failures in partitioning
  
  result file for testing auto_increment
mysql-test/suite/parts/r/partition_auto_increment_blackhole.result:
  Bug#38804: Query deadlock causes all tables to be inaccessible.
  Backporting from 6.0 of:
  Bug-33479: auto_increment failures in partitioning
  
  result file for testing auto_increment
mysql-test/suite/parts/r/partition_auto_increment_innodb.result:
  Bug#38804: Query deadlock causes all tables to be inaccessible.
  Backporting from 6.0 of:
  Bug-33479: auto_increment failures in partitioning
  
  result file for testing auto_increment
mysql-test/suite/parts/r/partition_auto_increment_memory.result:
  Bug#38804: Query deadlock causes all tables to be inaccessible.
  Backporting from 6.0 of:
  Bug-33479: auto_increment failures in partitioning
  
  result file for testing auto_increment
mysql-test/suite/parts/r/partition_auto_increment_myisam.result:
  Bug#38804: Query deadlock causes all tables to be inaccessible.
  Backporting from 6.0 of:
  Bug-33479: auto_increment failures in partitioning
  
  result file for testing auto_increment
mysql-test/suite/parts/r/partition_auto_increment_ndb.result:
  Bug#38804: Query deadlock causes all tables to be inaccessible.
  Backporting from 6.0 of:
  Bug-33479: auto_increment failures in partitioning
  
  result file for testing auto_increment
mysql-test/suite/parts/t/partition_auto_increment_archive.test:
  Bug#38804: Query deadlock causes all tables to be inaccessible.
  Backporting from 6.0 of:
  Bug-33479: auto_increment failures in partitioning
  
  test file for testing auto_increment
mysql-test/suite/parts/t/partition_auto_increment_blackhole.test:
  Bug#38804: Query deadlock causes all tables to be inaccessible.
  Backporting from 6.0 of:
  Bug-33479: auto_increment failures in partitioning
  
  test file for testing auto_increment
mysql-test/suite/parts/t/partition_auto_increment_innodb.test:
  Bug#38804: Query deadlock causes all tables to be inaccessible.
  Backporting from 6.0 of:
  Bug-33479: auto_increment failures in partitioning
  
  test file for testing auto_increment
mysql-test/suite/parts/t/partition_auto_increment_memory.test:
  Bug#38804: Query deadlock causes all tables to be inaccessible.
  Backporting from 6.0 of:
  Bug-33479: auto_increment failures in partitioning
  
  test file for testing auto_increment
mysql-test/suite/parts/t/partition_auto_increment_myisam.test:
  Bug#38804: Query deadlock causes all tables to be inaccessible.
  Backporting from 6.0 of:
  Bug-33479: auto_increment failures in partitioning
  
  test file for testing auto_increment
mysql-test/suite/parts/t/partition_auto_increment_ndb.test:
  Bug#38804: Query deadlock causes all tables to be inaccessible.
  Backporting from 6.0 of:
  Bug-33479: auto_increment failures in partitioning
  
  test file for testing auto_increment
sql/ha_partition.cc:
  Bug#38804: Query deadlock causes all tables to be inaccessible.
  Backporting from 6.0 of:
  Bug-33479: Failures using auto_increment and partitioning
  
  Changed ha_partition::get_auto_increment from file->get_auto_increment
  to file->info(HA_STATUS_AUTO), since that works better with InnoDB
  (InnoDB can have issues with partitioning and auto_increment, where
  get_auto_increment sometimes returns a non-updated value).
  
  Using the new table_share->ha_data for keeping the auto_increment value,
  shared by all instances of the same table. It is read and updated while
  holding an auto_increment specific mutex.
  Also added release_auto_increment to decrease gaps if possible,
  and a lock for multi-row INSERT statements where the number of candidate
  rows to insert is not known in advance (like INSERT ... SELECT and
  LOAD DATA; unlike INSERT VALUES (row1), (row2), ..., (rowN)).
  Fixed a small bug: copied++ changed to (*copied)++, and the same for
  deleted (see the short illustration after these file notes).
  Changed from current_thd to ha_thd().
sql/ha_partition.h:
  Bug#38804: Query deadlock causes all tables to be inaccessible.
  Backporting from 6.0 of:
  Bug-33479: Failures using auto_increment and partitioning
  
  Added a new struct HA_DATA_PARTITION to be used in table_share->ha_data
  Added a private function to set auto_increment values if needed
  Removed the restore_auto_increment (the handler version is better)
  Added lock/unlock functions for auto_increment handling.
  Changed copied/deleted to const.
sql/handler.h:
  Bug#38804: Query deadlock causes all tables to be inaccessible.
  Backporting from 6.0 of:
  Bug-33479: auto_increment failures in partitioning
  
  Added const to the copied/deleted arguments of change_partitions
  Added a comment about SQLCOM_TRUNCATE to delete_all_rows
sql/table.h:
  Bug#38804: Query deadlock causes all tables to be inaccessible.
  Backporting from 6.0 of:
  Bug-33479: Failures using auto_increment and partitioning
  
  Added a variable in table_share: ha_data for storage of storage engine
  specific data (such as auto_increment handling in partitioning).
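
A short, self-contained illustration of the copied++ versus (*copied)++ fix
mentioned for sql/ha_partition.cc above (not the actual copy_partitions()
code; count_copied_row is a made-up name):

void count_copied_row(unsigned long long *copied)
{
  /*
    copied++;      bug: this advances the local pointer, so the caller's
                   counter never changes
  */
  (*copied)++;     /* correct: increments the value the pointer refers to */
}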
parent 7883866c
################################################################################
# t/partition_auto_increment_archive.test #
# #
# Purpose: #
# Tests around auto increment column #
# Archive branch #
# #
#------------------------------------------------------------------------------#
# Original Author: MattiasJ #
# Original Date: 2008-09-02 #
# Change Author: #
# Change Date: #
# Change: #
################################################################################
#
# NOTE: PLEASE DO NOT ADD NOT ARCHIVE SPECIFIC TESTCASES HERE !
# TESTCASES WHICH MUST BE APPLIED TO ALL STORAGE ENGINES MUST BE ADDED IN
# THE SOURCED FILES ONLY.
#
# The server must support partitioning.
--source include/have_partition.inc
#------------------------------------------------------------------------------#
# Engine specific settings and requirements
--source include/have_archive.inc
# Archive does not support delete
let $skip_delete= 1;
let $skip_truncate= 1;
let $skip_update= 1;
let $only_ai_pk= 1;
##### Storage engine to be tested
let $engine= 'Archive';
#------------------------------------------------------------------------------#
# Execute the tests to be applied to all storage engines
--source suite/parts/inc/partition_auto_increment.inc
################################################################################
# t/partition_auto_increment_blackhole.test #
# #
# Purpose: #
# Tests around auto increment column #
# Blackhole branch #
# #
#------------------------------------------------------------------------------#
# Original Author: MattiasJ #
# Original Date: 2008-09-02 #
# Change Author: #
# Change Date: #
# Change: #
################################################################################
#
# NOTE: PLEASE DO NOT ADD NOT BLACKHOLE SPECIFIC TESTCASES HERE !
# TESTCASES WHICH MUST BE APPLIED TO ALL STORAGE ENGINES MUST BE ADDED IN
# THE SOURCED FILES ONLY.
#
# The server must support partitioning.
--source include/have_partition.inc
#------------------------------------------------------------------------------#
# Engine specific settings and requirements
--source include/have_blackhole.inc
##### Storage engine to be tested
let $engine= 'Blackhole';
#------------------------------------------------------------------------------#
# Execute the tests to be applied to all storage engines
--source suite/parts/inc/partition_auto_increment.inc
################################################################################
# t/partition_auto_increment_innodb.test #
# #
# Purpose: #
# Tests around auto increment column #
# InnoDB branch #
# #
#------------------------------------------------------------------------------#
# Original Author: MattiasJ #
# Original Date: 2008-02-12 #
# Change Author: #
# Change Date: #
# Change: #
################################################################################
#
# NOTE: PLEASE DO NOT ADD NOT INNODB SPECIFIC TESTCASES HERE !
# TESTCASES WHICH MUST BE APPLIED TO ALL STORAGE ENGINES MUST BE ADDED IN
# THE SOURCED FILES ONLY.
#
# The server must support partitioning.
--source include/have_partition.inc
#------------------------------------------------------------------------------#
# Engine specific settings and requirements
##### Storage engine to be tested
let $engine= 'InnoDB';
--source include/have_innodb.inc
#------------------------------------------------------------------------------#
# Execute the tests to be applied to all storage engines
--source suite/parts/inc/partition_auto_increment.inc
################################################################################
# t/partition_auto_increment_memory.test #
# #
# Purpose: #
# Tests around auto increment column #
# Memory branch #
# #
#------------------------------------------------------------------------------#
# Original Author: MattiasJ #
# Original Date: 2008-02-12 #
# Change Author: #
# Change Date: #
# Change: #
################################################################################
#
# NOTE: PLEASE DO NOT ADD NOT MEMORY SPECIFIC TESTCASES HERE !
# TESTCASES WHICH MUST BE APPLIED TO ALL STORAGE ENGINES MUST BE ADDED IN
# THE SOURCED FILES ONLY.
#
# The server must support partitioning.
--source include/have_partition.inc
#------------------------------------------------------------------------------#
# Engine specific settings and requirements
##### Storage engine to be tested
let $engine= 'Memory';
#------------------------------------------------------------------------------#
# Execute the tests to be applied to all storage engines
--source suite/parts/inc/partition_auto_increment.inc
################################################################################
# t/partition_auto_increment_myisam.test #
# #
# Purpose: #
# Tests around auto increment column #
# MyISAM branch #
# #
#------------------------------------------------------------------------------#
# Original Author: MattiasJ #
# Original Date: 2008-02-12 #
# Change Author: #
# Change Date: #
# Change: #
################################################################################
#
# NOTE: PLEASE DO NOT ADD NOT MYISAM SPECIFIC TESTCASES HERE !
# TESTCASES WHICH MUST BE APPLIED TO ALL STORAGE ENGINES MUST BE ADDED IN
# THE SOURCED FILES ONLY.
#
# The server must support partitioning.
--source include/have_partition.inc
#------------------------------------------------------------------------------#
# Engine specific settings and requirements
##### Storage engine to be tested
let $engine= 'MyISAM';
#------------------------------------------------------------------------------#
# Execute the tests to be applied to all storage engines
--source suite/parts/inc/partition_auto_increment.inc
################################################################################
# t/partition_auto_increment_ndb.test #
# #
# Purpose: #
# Tests around auto increment column #
# NDB branch #
# #
# Note: NDB behavior for auto_increment on secondary column in #
# multi-column-index is NOT like MyISAM, instead it uses the same #
# behavior as if it was the primary column. #
#------------------------------------------------------------------------------#
# Original Author: MattiasJ #
# Original Date: 2008-09-02 #
# Change Author: #
# Change Date: #
# Change: #
################################################################################
#
# NOTE: PLEASE DO NOT ADD NOT NDB SPECIFIC TESTCASES HERE !
# TESTCASES WHICH MUST BE APPLIED TO ALL STORAGE ENGINES MUST BE ADDED IN
# THE SOURCED FILES ONLY.
#
# The server must support partitioning.
--source include/have_partition.inc
#------------------------------------------------------------------------------#
# Engine specific settings and requirements
--source include/have_ndb.inc
##### Storage engine to be tested
let $engine= 'NDB';
connection default;
#enable hash partitioning
SET new=on;
#------------------------------------------------------------------------------#
# Execute the tests to be applied to all storage engines
--source suite/parts/inc/partition_auto_increment.inc
--- a/sql/ha_partition.h
+++ b/sql/ha_partition.h
@@ -37,6 +37,15 @@ typedef struct st_partition_share
 } PARTITION_SHARE;
 #endif
 
+/**
+  Partition specific ha_data struct.
+  @todo: move all partition specific data from TABLE_SHARE here.
+*/
+typedef struct st_ha_data_partition
+{
+  ulonglong next_auto_inc_val;                 /**< first non reserved value */
+  bool auto_inc_initialized;
+} HA_DATA_PARTITION;
 
 #define PARTITION_BYTES_IN_POS 2
 class ha_partition :public handler
@@ -141,6 +150,12 @@ private:
     "own" the m_part_info structure.
   */
   bool is_clone;
+  bool auto_increment_lock;             /**< lock reading/updating auto_inc */
+  /**
+    Flag to keep the auto_increment lock through out the statement.
+    This to ensure it will work with statement based replication.
+  */
+  bool auto_increment_safe_stmt_log_lock;
 public:
   handler *clone(MEM_ROOT *mem_root);
   virtual void set_part_info(partition_info *part_info)
@@ -197,8 +212,8 @@ public:
   virtual char *update_table_comment(const char *comment);
   virtual int change_partitions(HA_CREATE_INFO *create_info,
                                 const char *path,
-                                ulonglong *copied,
-                                ulonglong *deleted,
+                                ulonglong * const copied,
+                                ulonglong * const deleted,
                                 const uchar *pack_frm_data,
                                 size_t pack_frm_len);
   virtual int drop_partitions(const char *path);
@@ -212,7 +227,7 @@ public:
   virtual void change_table_ptr(TABLE *table_arg, TABLE_SHARE *share);
 private:
   int prepare_for_rename();
-  int copy_partitions(ulonglong *copied, ulonglong *deleted);
+  int copy_partitions(ulonglong * const copied, ulonglong * const deleted);
   void cleanup_new_partition(uint part_count);
   int prepare_new_partition(TABLE *table, HA_CREATE_INFO *create_info,
                             handler *file, const char *part_name,
@@ -829,12 +844,51 @@ public:
     auto_increment_column_changed
     -------------------------------------------------------------------------
   */
-  virtual void restore_auto_increment(ulonglong prev_insert_id);
   virtual void get_auto_increment(ulonglong offset, ulonglong increment,
                                   ulonglong nb_desired_values,
                                   ulonglong *first_value,
                                   ulonglong *nb_reserved_values);
   virtual void release_auto_increment();
+private:
+  virtual int reset_auto_increment(ulonglong value);
+  virtual void lock_auto_increment()
+  {
+    /* lock already taken */
+    if (auto_increment_safe_stmt_log_lock)
+      return;
+    DBUG_ASSERT(table_share->ha_data && !auto_increment_lock);
+    if(table_share->tmp_table == NO_TMP_TABLE)
+    {
+      auto_increment_lock= TRUE;
+      pthread_mutex_lock(&table_share->mutex);
+    }
+  }
+  virtual void unlock_auto_increment()
+  {
+    DBUG_ASSERT(table_share->ha_data);
+    /*
+      If auto_increment_safe_stmt_log_lock is true, we have to keep the lock.
+      It will be set to false and thus unlocked at the end of the statement by
+      ha_partition::release_auto_increment.
+    */
+    if(auto_increment_lock && !auto_increment_safe_stmt_log_lock)
+    {
+      pthread_mutex_unlock(&table_share->mutex);
+      auto_increment_lock= FALSE;
+    }
+  }
+  virtual void set_auto_increment_if_higher(const ulonglong nr)
+  {
+    HA_DATA_PARTITION *ha_data= (HA_DATA_PARTITION*) table_share->ha_data;
+    lock_auto_increment();
+    /* must check when the mutex is taken */
+    if (nr >= ha_data->next_auto_inc_val)
+      ha_data->next_auto_inc_val= nr + 1;
+    ha_data->auto_inc_initialized= TRUE;
+    unlock_auto_increment();
+  }
+public:
   /*
     -------------------------------------------------------------------------
--- a/sql/handler.h
+++ b/sql/handler.h
@@ -1241,8 +1241,8 @@ public:
   int ha_change_partitions(HA_CREATE_INFO *create_info,
                            const char *path,
-                           ulonglong *copied,
-                           ulonglong *deleted,
+                           ulonglong * const copied,
+                           ulonglong * const deleted,
                            const uchar *pack_frm_data,
                            size_t pack_frm_len);
   int ha_drop_partitions(const char *path);
@@ -1859,7 +1859,8 @@ private:
     This is called to delete all rows in a table
     If the handler don't support this, then this function will
     return HA_ERR_WRONG_COMMAND and MySQL will delete the rows one
-    by one.
+    by one. It should reset auto_increment if
+    thd->lex->sql_command == SQLCOM_TRUNCATE.
   */
   virtual int delete_all_rows()
   { return (my_errno=HA_ERR_WRONG_COMMAND); }
@@ -1898,8 +1899,8 @@ private:
   virtual int change_partitions(HA_CREATE_INFO *create_info,
                                 const char *path,
-                                ulonglong *copied,
-                                ulonglong *deleted,
+                                ulonglong * const copied,
+                                ulonglong * const deleted,
                                 const uchar *pack_frm_data,
                                 size_t pack_frm_len)
   { return HA_ERR_WRONG_COMMAND; }
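
As a hedged illustration of what the new delete_all_rows() comment asks of an
engine, here is a self-contained toy version; StatementKind and ToyHandler are
stand-ins, and the real check is against thd->lex->sql_command == SQLCOM_TRUNCATE.

#include <assert.h>

enum StatementKind { STMT_DELETE_ALL, STMT_TRUNCATE };  /* stand-in for sql_command */

struct ToyHandler                      /* stand-in for a storage engine handler */
{
  unsigned long long auto_inc_counter;
  unsigned long long rows;

  int delete_all_rows(StatementKind stmt)
  {
    rows= 0;                           /* engine specific "remove every row" */
    if (stmt == STMT_TRUNCATE)
      auto_inc_counter= 1;             /* TRUNCATE also restarts the sequence */
    return 0;                          /* a plain DELETE keeps the counter */
  }
};

int main()
{
  ToyHandler h= { 42, 1000 };
  h.delete_all_rows(STMT_DELETE_ALL);
  assert(h.auto_inc_counter == 42);    /* DELETE FROM t keeps the next value */
  h.delete_all_rows(STMT_TRUNCATE);
  assert(h.auto_inc_counter == 1);     /* TRUNCATE resets it */
  return 0;
}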
--- a/sql/table.h
+++ b/sql/table.h
@@ -360,6 +360,7 @@ typedef struct st_table_share
   int cached_row_logging_check;
 #ifdef WITH_PARTITION_STORAGE_ENGINE
+  /** @todo: Move into *ha_data for partitioning */
   bool auto_partitioned;
   const char *partition_info;
   uint partition_info_len;
@@ -369,6 +370,9 @@ typedef struct st_table_share
   handlerton *default_part_db_type;
 #endif
 
+  /** place to store storage engine specific data */
+  void *ha_data;
+
   /*
     Set share's table cache key and update its db and table name appropriately.
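
For context, a simplified sketch of how the first opener of a table can lazily
allocate the new share-level ha_data under the share mutex (this is not the
ha_partition.cc code, which is in a collapsed diff above; Share, HaDataPartition
and open_share_data are stand-in names):

#include <pthread.h>
#include <stdlib.h>

struct HaDataPartition                 /* mirrors HA_DATA_PARTITION */
{
  unsigned long long next_auto_inc_val;
  bool auto_inc_initialized;
};

struct Share                           /* stand-in for TABLE_SHARE */
{
  pthread_mutex_t mutex;
  void *ha_data;                       /* engine specific, owned by the share */
};

int open_share_data(Share *share)
{
  pthread_mutex_lock(&share->mutex);
  if (!share->ha_data)                 /* first opener allocates it ... */
  {
    HaDataPartition *d= (HaDataPartition*) calloc(1, sizeof(HaDataPartition));
    if (!d)
    {
      pthread_mutex_unlock(&share->mutex);
      return 1;                        /* out of memory */
    }
    share->ha_data= d;                 /* ... later openers reuse the same data */
  }
  pthread_mutex_unlock(&share->mutex);
  return 0;
}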