1. 16 Nov, 2012 3 commits
  2. 15 Nov, 2012 5 commits
    • Marko Mäkelä's avatar
      Merge mysql-5.1 to mysql-5.5. · 94897a63
      Marko Mäkelä authored
      94897a63
    • Marko Mäkelä's avatar
      Bug#15872736 FAILING ASSERTION · e882efe6
      Marko Mäkelä authored
      Remove a bogus debug assertion.
      e882efe6
    • Marko Mäkelä's avatar
      Merge mysql-5.1 to mysql-5.5. · 20bbc6f7
      Marko Mäkelä authored
      20bbc6f7
    • Marko Mäkelä's avatar
      Bug#15874001 CREATE INDEX ON A UTF8 CHAR COLUMN FAILS WITH ROW_FORMAT=REDUNDANT · e5ad4171
      Marko Mäkelä authored
      CHAR(n) in ROW_FORMAT=REDUNDANT tables is always fixed-length
      (n*mbmaxlen bytes), but in the temporary file it is variable-length
      (n*mbminlen to n*mbmaxlen bytes) for variable-length character sets,
      such as UTF-8.
      
      The temporary file format used during index creation and online ALTER
      TABLE is based on ROW_FORMAT=COMPACT. Thus, it should use the
      variable-length encoding even if the base table is in
      ROW_FORMAT=REDUNDANT.
      
      dtype_get_fixed_size_low(): Replace an assertion-like check with a
      debug assertion.
      
      rec_init_offsets_comp_ordinary(), rec_convert_dtuple_to_rec_comp():
      Make these inline functions. Replace 'ulint extra' with 'bool temp'.
      
      rec_get_converted_size_comp_prefix_low(): Renamed from
      rec_get_converted_size_comp_prefix(), and made inline. Add the
      parameter 'bool temp'. If temp=true, do not add REC_N_NEW_EXTRA_BYTES.
      
      rec_get_converted_size_comp_prefix(): Remove the comment about
      dict_table_is_comp(). This function is only to be called for record
      formats other than ROW_FORMAT=REDUNDANT.
      
      rec_get_converted_size_temp(): New function for computing temporary
      file record size. Omit REC_N_NEW_EXTRA_BYTES from the sizes.
      
      rec_init_offsets_temp(), rec_convert_dtuple_to_temp(): New functions,
      for operating on temporary file records.
      
      rb:1559 approved by Jimmy Yang
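      The size rule the fix relies on can be shown with a small worked example. The sketch below is illustrative only (the struct and function names are hypothetical, not InnoDB APIs): for a CHAR(10) column in MySQL's utf8 (mbminlen = 1, mbmaxlen = 3), ROW_FORMAT=REDUNDANT always stores the fixed maximum of 30 bytes, while the COMPACT-based temporary file format stores between 10 and 30 bytes.

```cpp
// Illustrative sketch, not the InnoDB implementation.
#include <cstdio>

struct CharsetInfo {
  unsigned mbminlen;  // minimum bytes per character
  unsigned mbmaxlen;  // maximum bytes per character
};

// Fixed size used by ROW_FORMAT=REDUNDANT for CHAR(n).
unsigned redundant_char_size(unsigned n_chars, const CharsetInfo& cs) {
  return n_chars * cs.mbmaxlen;
}

// Size range used by the COMPACT-based temporary file format for CHAR(n).
void temp_file_char_size(unsigned n_chars, const CharsetInfo& cs,
                         unsigned* min_bytes, unsigned* max_bytes) {
  *min_bytes = n_chars * cs.mbminlen;
  *max_bytes = n_chars * cs.mbmaxlen;
}

int main() {
  const CharsetInfo utf8 = {1, 3};  // MySQL utf8 (utf8mb3)
  unsigned lo, hi;
  temp_file_char_size(10, utf8, &lo, &hi);
  // CHAR(10) utf8: REDUNDANT always 30 bytes, temp file 10..30 bytes.
  std::printf("REDUNDANT: %u bytes, temp file: %u..%u bytes\n",
              redundant_char_size(10, utf8), lo, hi);
  return 0;
}
```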
      e5ad4171
    • unknown's avatar
      No commit message · 51787b41
      unknown authored
      No commit message
      51787b41
  3. 14 Nov, 2012 8 commits
    • Nuno Carvalho's avatar
      BUG#12669186: AUTOINC VALUE PERSISTENCY BREAKS CERTAIN REPLICATION SCENARIOS · 16c9c144
      Nuno Carvalho authored
      When the master and slave have different schemas, in particular different
      AUTO_INCREMENT columns, INSERT_ID events logged for a given table on the
      master may be applied to a different table on the slave under SBR, e.g.:
        the master has one table (t1) with an auto-inc column and another
        table (t2) without one; on the slave, t1 has the same columns but
        no auto-inc column, while t2 does have an auto-inc column. Because
        t1 on the slave has no auto-inc column, the INSERT_ID intended for
        t1 is applied to t2 instead, causing consistency problems.
      
      To fix this incorrect behaviour, the auto-inc interval forced via
      INSERT_ID is now effectively terminated at the end of each top-level
      statement during slave execution and binlog replay.
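      The idea can be sketched as follows. This is an illustrative sketch only, with hypothetical types and function names rather than the server's replication code: a forced INSERT_ID value received from the binlog is used by the statement it was logged for and is unconditionally discarded when the top-level statement ends, so it can never leak onto a different table on the slave.

```cpp
// Illustrative sketch; not the server's applier code.
#include <cstdint>
#include <optional>

struct StatementContext {
  std::optional<std::uint64_t> forced_insert_id;  // from an INSERT_ID event
};

std::uint64_t next_auto_inc(StatementContext& ctx, std::uint64_t table_counter) {
  if (ctx.forced_insert_id) {
    std::uint64_t v = *ctx.forced_insert_id;
    ctx.forced_insert_id.reset();  // use it once, for this statement only
    return v;
  }
  return table_counter;  // normal auto-increment path
}

// Called when a top-level statement finishes on the slave or during
// binlog replay.
void end_of_top_level_statement(StatementContext& ctx) {
  ctx.forced_insert_id.reset();  // terminate the interval unconditionally
}
```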
      16c9c144
    • Venkata Sidagam's avatar
      BUG#13556107: CHECK AND REPAIR TABLE SHOULD BE MORE ROBUST [3] · 118b7561
      Venkata Sidagam authored
      Problem description: "Incorrect key file" errors: the key file is
      found to be corrupted while reading the keys from it. The problem
      here is that keyseg->start (which should point to the beginning of
      a field) points beyond the total record length.

      Fix: If keyseg->start is greater than the total record length,
      return an error.
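      A minimal sketch of the added validation follows. The struct and constant names are illustrative, not the exact MyISAM structures: a key segment whose start offset lies beyond the total record length is rejected as corrupt instead of being read from past the end of the record.

```cpp
// Illustrative sketch; field names are hypothetical.
#include <cstdint>

struct KeySegment {
  std::uint32_t start;   // offset of the keyed field within the record
  std::uint32_t length;  // length of the key segment
};

enum CheckResult { SEG_OK, SEG_CORRUPT };

CheckResult check_key_segment(const KeySegment& seg,
                              std::uint32_t total_rec_length) {
  if (seg.start > total_rec_length) {
    // Previously this slipped through and the key was read from an offset
    // outside the record; now it is reported as a corrupt key file.
    return SEG_CORRUPT;
  }
  return SEG_OK;
}
```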
      118b7561
    • unknown's avatar
      No commit message · 2eb5bc95
      unknown authored
      No commit message
      2eb5bc95
    • unknown's avatar
      No commit message · 41467f5d
      unknown authored
      No commit message
      41467f5d
    • unknown's avatar
      No commit message · 1553e0a1
      unknown authored
      No commit message
      1553e0a1
    • Jimmy Yang's avatar
      Fix Bug #14753402 - FAILING ASSERTION: LEN == IFIELD->FIXED_LEN · e42cd2db
      Jimmy Yang authored
      rb://1411 approved by Marko
      e42cd2db
    • unknown's avatar
      No commit message · c2f4a4af
      unknown authored
      No commit message
      c2f4a4af
    • unknown's avatar
      No commit message · 4885937e
      unknown authored
      No commit message
      4885937e
  4. 13 Nov, 2012 3 commits
    • unknown's avatar
      Merge · 6089f0f4
      unknown authored
      6089f0f4
    • Mattias Jonsson's avatar
      36ac232d
    • Mattias Jonsson's avatar
      Bug#14845133: · 2f3baa74
      Mattias Jonsson authored
      The problem is related to the changes made in bug#13025132.
      get_partition_set can do dynamic pruning, which limits the partitions
      to scan even further. This is not accounted for when setting the
      start of the preallocated record buffer used in the priority queue,
      leading to the wrong buffer being used (including the wrong preset
      partition id associated with that buffer).

      The solution is to fast-forward the buffer pointer so that it points
      to the correct partition record buffer.
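      The sketch below illustrates the idea with hypothetical names (it is not ha_partition's actual layout): each partition in the scan set gets one fixed-size slot in a single preallocated record buffer, and when dynamic pruning skips leading partitions the pointer must be fast-forwarded past their slots so each scanned partition reads the slot carrying its own partition id.

```cpp
// Illustrative sketch; names are hypothetical.
#include <cstdint>
#include <vector>

struct PartitionScan {
  std::vector<std::uint8_t> rec_buffer;  // one slot per partition in the set
  std::size_t rec_length;                // slot size in bytes
};

// Return the record slot for 'part_id' given the first partition that the
// pruned scan actually starts from.
std::uint8_t* record_slot(PartitionScan& scan, std::uint32_t first_used_part,
                          std::uint32_t part_id) {
  // Fast-forward past the slots of partitions pruned away before
  // 'first_used_part'; without this, the wrong slot (and its preset
  // partition id) would be used.
  std::size_t slot_index = part_id - first_used_part;
  return scan.rec_buffer.data() + slot_index * scan.rec_length;
}
```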
      2f3baa74
  5. 12 Nov, 2012 3 commits
  6. 09 Nov, 2012 8 commits
    • unknown's avatar
      No commit message · 858e9ecc
      unknown authored
      No commit message
      858e9ecc
    • Venkata Sidagam's avatar
      Bug#13556000: CHECK AND REPAIR TABLE SHOULD BE MORE ROBUST[2] · 9749b60e
      Venkata Sidagam authored
      Problem description: Corrupt key file for the table. The size of the
      key is greater than the maximum specified size, which results in an
      overflow of the key buffer while reading the key from the key file.

      Fix: If the size of the key is greater than the maximum size, return
      an error before writing the key into the key buffer. The error is
      reported as a corrupt file instead of causing a stack overflow.
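      A minimal sketch of the added bound check, with illustrative names and an illustrative size limit (not the actual MyISAM constants): if the key length read from the key file exceeds the maximum, fail with a corrupt-file error before copying anything into the fixed-size key buffer.

```cpp
// Illustrative sketch; MAX_KEY_LENGTH and the function are hypothetical.
#include <cstdint>
#include <cstring>

constexpr std::size_t MAX_KEY_LENGTH = 1024;  // illustrative limit

// Returns true on success, false if the key file is corrupt.
bool read_key(const std::uint8_t* file_data, std::size_t key_length,
              std::uint8_t (&key_buffer)[MAX_KEY_LENGTH]) {
  if (key_length > sizeof(key_buffer)) {
    return false;  // report a corrupt key file instead of overflowing
  }
  std::memcpy(key_buffer, file_data, key_length);
  return true;
}
```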
      9749b60e
    • Annamalai Gurusami's avatar
      c4b86599
    • Annamalai Gurusami's avatar
      Bug #14669848 CRASH DURING ALTER MAKES ORIGINAL TABLE INACCESSIBLE · 2ad007df
      Annamalai Gurusami authored
      When a new primary key is added to an InnoDB table, then the following
      steps are taken by the InnoDB plugin:
      
      .  let t1 be the original table.
      .  a temporary table t1@00231 will be created by cloning t1.
      .  all data will be copied from t1 to t1@00231.
      .  rename t1 to t1@00232.
      .  rename t1@00231 to t1.
      .  drop t1@00232.
      
      The rename and drop operations involve file operations. But file
      operations cannot be rolled back. So in row_merge_rename_tables(), just
      after doing the data dictionary update and before doing any file
      operations, generate redo logs for the file operations and commit the
      transaction. This ensures that after any crash following this commit,
      the table is still recoverable by moving the .ibd and .frm files.
      Manual recovery is required.

      During recovery, the rename file operation redo logs are processed.
      Previously they were being ignored.
      
      rb://1460 approved by Marko Makela.
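      The ordering the fix establishes can be sketched roughly as below. This is pseudocode with hypothetical helper names, not the actual row_merge_rename_tables() code; the point is only that the rename redo records become durable before any non-rollbackable file operation runs, so recovery can replay them after a crash.

```cpp
// Ordering sketch only; all helper names are hypothetical.
struct Trx;

void log_file_rename(Trx*, const char* /*from*/, const char* /*to*/) {}
void commit(Trx*) {}
void rename_files_on_disk(const char* /*from*/, const char* /*to*/) {}

void rename_tables_for_add_primary_key(Trx* trx) {
  // 1. The data dictionary has already been updated inside 'trx'.
  // 2. Generate redo log records describing the pending file renames.
  log_file_rename(trx, "t1", "t1@00232");
  log_file_rename(trx, "t1@00231", "t1");
  // 3. Commit, making the dictionary change and the redo records durable.
  commit(trx);
  // 4. Only now touch the filesystem; a crash from here on is recoverable
  //    because recovery processes the rename redo records.
  rename_files_on_disk("t1", "t1@00232");
  rename_files_on_disk("t1@00231", "t1");
  // The temporary table t1@00232 is dropped afterwards in the same spirit.
}
```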
      2ad007df
    • Annamalai Gurusami's avatar
      2b68f440
    • Anirudh Mangipudi's avatar
      BUG#11762933: MYSQLDUMP WILL SILENTLY SKIP THE `EVENT` · 52a83128
      Anirudh Mangipudi authored
                    TABLE DATA IF DUMPS MYSQL DATABA
      Problem: If mysqldump is run without --events (or with --skip-events),
      it will not dump the mysql.event table's data. This behaviour is
      inconsistent with that of the --routines option, which does not affect
      the dumping of the mysql.proc table. According to the manual, --events
      (or --skip-events) controls whether the Event Scheduler events for the
      dumped databases should be included in the mysqldump output; it has
      nothing to do with the mysql.event table itself.
      Solution: A warning has been added when mysqldump is used without
      --events (or with --skip-events), and a separate patch with the
      behavioral change will be prepared for 5.6/trunk.
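      A hedged sketch of the behaviour described above (this is not mysqldump's actual code; the option struct, function name, and exact warning wording are illustrative): when the mysql database is being dumped and --events was not given, a warning is emitted that the mysql.event table data will be skipped.

```cpp
// Illustrative sketch; names and message text are assumptions.
#include <cstdio>
#include <cstring>

struct DumpOptions {
  bool opt_events;  // set by --events, cleared by --skip-events
};

void maybe_warn_about_event_table(const DumpOptions& opts, const char* db) {
  if (std::strcmp(db, "mysql") == 0 && !opts.opt_events) {
    std::fprintf(stderr,
                 "-- Warning: Skipping the data of table mysql.event. "
                 "Specify the --events option explicitly.\n");
  }
}
```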
      52a83128
    • Anirudh Mangipudi's avatar
      BUG#11762933: MYSQLDUMP WILL SILENTLY SKIP THE `EVENT` · 14dfe6fc
      Anirudh Mangipudi authored
                    TABLE DATA IF DUMPS MYSQL DATABA
      Problem: If mysqldump is run without --events (or with --skip-events),
      it will not dump the mysql.event table's data. This behaviour is
      inconsistent with that of the --routines option, which does not affect
      the dumping of the mysql.proc table. According to the manual, --events
      (or --skip-events) controls whether the Event Scheduler events for the
      dumped databases should be included in the mysqldump output; it has
      nothing to do with the mysql.event table itself.
      Solution: A warning has been added when mysqldump is used without
      --events (or with --skip-events), and a separate patch with the
      behavioral change will be prepared for 5.6/trunk.
      14dfe6fc
    • Thayumanavar's avatar
      BUG#14458232 - CRASH IN THD_IS_TRANSACTION_ACTIVE DURING · 53455866
      Thayumanavar authored
                     THREAD POOLING STRESS TEST
      PROBLEM:
      Connection stress tests consisting of concurrent KILL CONNECTION
      statements interleaved with mysql ping queries cause the mysqld
      server to crash when it uses the thread pool scheduler.
      FIX:
      Killing a connection involves shutting down and closing the client
      socket, and this can cause EPOLLHUP (or EPOLLERR) events to be
      queued and handled after the connection object (THD) has already
      been disarmed and cleaned up. We disarm the connection by setting
      the epoll mask to zero, which is supposed to ensure that no further
      events arrive, then release the ownership of the waiting thread that
      collects events, and then clean up the THD object. However, as per
      the Linux kernel epoll source code
      (http://lxr.linux.no/linux+*/fs/eventpoll.c#L1771), EPOLLHUP
      (or EPOLLERR) cannot be masked even if the epoll mask is set to
      zero. So we now disarm the connection by removing the client fd
      from the epoll set via EPOLL_CTL_DEL, which prevents execution of
      any query processing handler and any queueing to the client
      context queue.
      There is also a race condition involving the following threads:
      1) Thread X executing KILL CONNECTION Y, in THD::awake and using
      mysys_var (holding LOCK_thd_data).
      2) Thread Y executing in tp_process_event and being killed.
      3) Thread Z receiving the KILL flag internally and possibly calling
      the tp_thd_cleanup function, which sets the thread session variable
      and changes mysys_var.
      The fix for the above race is to set the thread session variable
      under LOCK_thd_data.
      We also do not call THD::awake if the thread found in the thread
      list is to be killed but already has its KILL_CONNECTION flag set,
      thus avoiding any possible concurrent cleanup. This patch was
      approved by Mikael Ronstrom via email review.
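      The disarm step can be sketched as below, using the real Linux epoll API but simplified connection handling (the function name and surrounding THD logic are illustrative): because EPOLLHUP and EPOLLERR are always reported regardless of the event mask, the client fd is removed from the epoll set entirely before the connection object is cleaned up.

```cpp
// Sketch of the disarm step; THD/connection names are illustrative.
#include <sys/epoll.h>
#include <cstdio>

// Fully disarm a connection's socket so no further events can be queued for
// a THD that is about to be destroyed.
bool disarm_connection(int epoll_fd, int client_fd) {
  // EPOLL_CTL_DEL detaches the fd; EPOLLHUP/EPOLLERR can no longer fire
  // against a half-destroyed connection object.
  if (epoll_ctl(epoll_fd, EPOLL_CTL_DEL, client_fd, nullptr) != 0) {
    std::perror("epoll_ctl(EPOLL_CTL_DEL)");
    return false;
  }
  return true;
}
```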
      53455866
  7. 08 Nov, 2012 7 commits
    • Joerg Bruehe's avatar
      Merge the ULN RPM fix into main. · a8f749a6
      Joerg Bruehe authored
      a8f749a6
    • Joerg Bruehe's avatar
      Building RPMs for ULN: · 0f344086
      Joerg Bruehe authored
      The patch "mysql-chain-certs.patch" needs to be adapted
      to code changes in "vio/viosslfactories.c" which were
      done in MySQL 5.5.
      
      Then, the patch can be re-enabled in the spec file.
      0f344086
    • Annamalai Gurusami's avatar
      Bug #14669848 CRASH DURING ALTER MAKES ORIGINAL TABLE INACCESSIBLE · 7ef879bb
      Annamalai Gurusami authored
      When a new primary key is added to an InnoDB table, then the following
      steps are taken by the InnoDB plugin:
      
      .  let t1 be the original table.
      .  a temporary table t1@00231 will be created by cloning t1.
      .  all data will be copied from t1 to t1@00231.
      .  rename t1 to t1@00232.
      .  rename t1@00231 to t1.
      .  drop t1@00232.
      
      The rename and drop operations involve file operations. But file
      operations cannot be rolled back. So in row_merge_rename_tables(), just
      after doing the data dictionary update and before doing any file
      operations, generate redo logs for the file operations and commit the
      transaction. This ensures that after any crash following this commit,
      the table is still recoverable by moving the .ibd and .frm files.
      Manual recovery is required.

      During recovery, the rename file operation redo logs are processed.
      Previously they were being ignored.
      
      rb://1460 approved by Marko Makela.
      7ef879bb
    • Aditya A's avatar
      Bug#14234028 - CRASH DURING SHUTDOWN WITH BACKGROUND PURGE THREAD · 29d08621
      Aditya A authored
       
       Analysis
       ---------

       my_stat() calls stat(), and if the stat() call fails we try to set
       the variable my_errno, which is actually thread-specific data. We
       try to get the address of this thread-specific data using
       my_pthread_getspecific(), but for the purge thread no thread-specific
       data has been defined, so it returns null, and dereferencing null
       gives a segmentation fault.
       init_available_charsets(), seen in the core stack, is invoked
       through pthread_once(). pthread_once() is used for one-time
       initialization. Since free_charsets() is called before the innodb
       plugin shutdown, the purge thread calls init_available_charsets(),
       which leads to the crash.

       Fix
       ---
       Call free_charsets() after the innodb plugin shutdown, since purge
       threads are still using the charsets.
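      The crash mechanism can be reproduced in miniature with the standard pthread thread-specific-data API. This is an illustrative sketch only, not MySQL code, and the actual fix in this commit is the ordering change described above (calling free_charsets() only after the InnoDB purge threads have shut down); the sketch just shows why a thread that never set up the key faults when the returned pointer is used unchecked.

```cpp
// Illustrative reproduction of the failure mode; not MySQL code.
#include <pthread.h>

static pthread_key_t errno_key;
static pthread_once_t key_once = PTHREAD_ONCE_INIT;

static void make_key() { pthread_key_create(&errno_key, nullptr); }

void set_my_errno(int value) {
  pthread_once(&key_once, make_key);
  // For a thread that never called pthread_setspecific() on this key,
  // pthread_getspecific() returns NULL; writing through it without a
  // check, as the purge thread effectively did via my_errno, segfaults.
  int* slot = static_cast<int*>(pthread_getspecific(errno_key));
  if (slot == nullptr) {
    return;  // bail out instead of dereferencing NULL
  }
  *slot = value;
}
```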
      29d08621
    • Aditya A's avatar
      Bug#14234028 - CRASH DURING SHUTDOWN WITH BACKGROUND PURGE THREAD · 7a8c93e6
      Aditya A authored
       
       Analysis
       ---------

       my_stat() calls stat(), and if the stat() call fails we try to set
       the variable my_errno, which is actually thread-specific data. We
       try to get the address of this thread-specific data using
       my_pthread_getspecific(), but for the purge thread no thread-specific
       data has been defined, so it returns null, and dereferencing null
       gives a segmentation fault.
       init_available_charsets(), seen in the core stack, is invoked
       through pthread_once(). pthread_once() is used for one-time
       initialization. Since free_charsets() is called before the innodb
       plugin shutdown, the purge thread calls init_available_charsets(),
       which leads to the crash.

       Fix
       ---
       Call free_charsets() after the innodb plugin shutdown, since purge
       threads are still using the charsets.
      7a8c93e6
    • Aditya A's avatar
      Bug#11751825 - OPTIMIZE PARTITION RECREATES FULL TABLE INSTEAD JUST PARTITION · c4be4dc0
      Aditya A authored
      Follow up patch to address the pb2 failures.
      c4be4dc0
    • Aditya A's avatar
      Bug#11751825 - OPTIMIZE PARTITION RECREATES FULL TABLE INSTEAD JUST PARTITION · 078d7a87
      Aditya A authored
      Follow up patch to address the pb2 failures.
      078d7a87
  8. 07 Nov, 2012 3 commits
    • Joerg Bruehe's avatar
      Make RPMs for ULN build again. · a4e7094e
      Joerg Bruehe authored
      A change to "vio/viosslfactories.c" in August, 2012,
      broke a patch which is to be applied during the build
      of ULN RPMs.
      The patch file is
      "packaging/rpm-uln/mysql-chain-certs.patch".
      
      This change bypasses the problem by not trying to apply
      the patch.
      
      This is a regression and must be fixed, not bypassed.
      a4e7094e
    • Joerg Bruehe's avatar
      Placement change: · 7e613db4
      Joerg Bruehe authored
      The top-level directory "SPECIFIC-ULN/" was inappropriate;
      the files used to create RPMs for ULN have been moved into
      "packaging/rpm-uln/".
      7e613db4
    • Praveenkumar Hulakund's avatar
      Bug#14466617 - INVALID WRITES AND/OR CRASH WITH USER · d912a758
      Praveenkumar Hulakund authored
                     VARIABLES 
      
      Analysis:
      -------------
      After executing the query, the new values of the user-defined
      variables are set in the function "select_dumpvar::send_data".
      "select_dumpvar::send_data" first calls
      "Item_func_set_user_var::save_item_result()". This function
      checks the nullness of the Item_field passed to it as a parameter
      and saves the result. The nullness of the item is stored in
      arg[0]'s null_value flag. Then "select_dumpvar::send_data" calls
      "Item_func_set_user_var::update()", which notices the null
      result that was saved and calls "Item_func_set_user_var::
      update_hash". But here null_value is not set, and args[0]
      is different from the item given to "Item_func_set_user_var::
      save_item_result()". This causes "Item_func_set_user_var::
      update_hash" to believe that it is getting a non-null value:
      "user_var_entry::length" is set to 0, and hence "user_var_entry::value"
      is made to point to the extra_area allocated in "user_var_entry".
      "Item_func_set_user_var::update_hash" then tries to write
      to memory beyond extra_area for result type DECIMAL, and Valgrind
      reports this as an invalid write.
      
      Before this bug was introduced, the problem was avoided by
      creating the "Item_func_set_user_var" object with the same
      Item_field both as arg[0] and as the parameter to
      Item_func_set_user_var::save_item_result(). Now they refer to
      different args[0] objects. Because of this, the null_value flag
      set on the parameter Item_field in
      "Item_func_set_user_var::save_item_result()" is not
      reflected in the "Item_func_set_user_var" object.
      
      Fix:
      ------------
      This issue was reported on version 5.5.24. The issue does not exist
      in 5.5.23, 5.1, 5.6 or trunk.
      
      This issue was introduced by
      revid:georgi.kodinov@oracle.com-20120309130449-82e3bs5v3et1x0ef (fix for
      bug #12408412), which was pushed into 5.5 and later releases. That patch
      was later reversed in 5.6 and trunk by
      revid:norvald.ryeng@oracle.com-20121010135242-xj34gg73h04hrmyh (fix for
      bug #14664077). The reverting patch has now been backported to 5.5 as
      well to fix this issue.
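      The hazard can be illustrated with a deliberately tiny toy, which is not the server's classes: if nullness is computed from one item but the decision about how to store the value consults a different item's flag, a NULL result gets stored as if it were a real value of length 0, and the value pointer is later used for a write it cannot hold. The sketch below shows the corrected behaviour of keeping the nullness with the saved result.

```cpp
// Toy illustration only; Item/SetUserVar are not the server's classes.
#include <cassert>

struct Item {
  bool null_value = false;
  double value = 0.0;
};

struct SetUserVar {
  Item* arg0;                 // the item whose flag update() used to consult
  bool saved_is_null = false;
  double saved_value = 0.0;

  // Correct behaviour: save the result *and* the nullness from the same item.
  void save_item_result(const Item& result) {
    saved_value = result.value;
    saved_is_null = result.null_value;  // (the bug read this flag from a
                                        // different object, arg0)
  }

  bool update() {
    // The nullness recorded in save_item_result() decides whether a real
    // value is written into the hash entry.
    return !saved_is_null;  // true = store value, false = store SQL NULL
  }
};

int main() {
  Item arg, result;
  result.null_value = true;     // the query produced NULL
  SetUserVar v{&arg};
  v.save_item_result(result);
  assert(v.update() == false);  // NULL must stay NULL
  return 0;
}
```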
      
      
      sql/item_func.cc:
        Here the unsigned value is converted to a signed value.
      sql/item_func.h:
        last_insert_id() gives an auto-incremented value, which can only be
        positive, so it is defined as an unsigned longlong; this sets the
        unsigned_flag to 1.
      d912a758