1. 04 May, 2016 1 commit
    • Bug#12818255: READ-ONLY OPTION DOES NOT ALLOW INSERTS/UPDATES ON TEMPORARY TABLES · 818b3a91
      Sujatha Sivakumar authored
      Bug#14294223: CHANGES NOT ALLOWED TO TEMPORARY TABLES ON READ-ONLY SERVERS
      
      Problem:
      ========
      Running 5.5.14 with --read-only enabled, we can create temporary
      tables but cannot insert or update records in them. When we try,
      we get Error 1290: The MySQL server is running with the
      --read-only option so it cannot execute this statement.
      
      Analysis:
      =========
      This bug occurs only when the binary log is enabled and
      binlog-format is STATEMENT or MIXED. A standalone server without
      the binary log enabled, or one using row-based logging, works
      fine.
      
      How standalone server and row based replication work:
      =====================================================
      A standalone server, and row-based replication, mark a
      transaction as read_write only when it modifies non-temporary
      tables as part of the current transaction.
      
      Because of this, when the code enters the commit phase it checks
      whether the transaction is read_write. If the transaction is
      read_write and the global read-only mode is enabled, the
      transaction fails with a 'server is running in read-only mode'
      error.
      
      In statement-based mode, a binlog handler is created at the time
      of writing to the binary log, and it is always marked as
      read_write. So for temporary tables, even though the storage
      engine does not mark the transaction as read_write, the new
      transaction started by the binlog handler is considered
      read_write.
      
      Hence, when the code enters the commit phase it finds one handler
      with a read_write transaction even though only a temporary table
      is being modified. This causes the server to throw an error when
      the global read-only mode is enabled.
      
      Fix:
      ====
      At commit time, in "ha_commit_trans", if a read_write transaction
      is found, check whether it comes from a handler other than the
      binlog handler. This ensures the statement is blocked only when a
      storage engine other than the binlog handler reports a genuine
      read_write transaction.
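      
      A minimal C++ sketch of the commit-phase check described above, using
      hypothetical toy types rather than the actual server structures (the real
      check lives in ha_commit_trans()):
      
      #include <string>
      #include <vector>
      
      struct Ha_trx_info_toy {
        std::string engine;      // e.g. "innodb" or "binlog"
        bool is_read_write;      // did this handler register data changes?
      };
      
      // Refuse the commit under --read-only only if a handler other than the
      // binlog handler registered a read_write transaction.
      bool must_block_for_read_only(const std::vector<Ha_trx_info_toy> &handlers,
                                    bool global_read_only)
      {
        if (!global_read_only)
          return false;
        for (const Ha_trx_info_toy &ha : handlers)
          if (ha.is_read_write && ha.engine != "binlog")
            return true;         // a real engine modified non-temporary data
        return false;            // only the binlog handler was read_write
      }
      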
      818b3a91
  2. 02 May, 2016 1 commit
  3. 29 Apr, 2016 1 commit
  4. 22 Apr, 2016 1 commit
    • BUG#23135731: INSERT WITH DUPLICATE KEY UPDATE REPORTS INCORRECT ERROR. · 3b6f9aac
      Nisha Gopalakrishnan authored
      
      Analysis
      ========
      INSERT ... ON DUPLICATE KEY UPDATE and REPLACE on a table where a
      foreign key constraint is defined fail with an incorrect
      'duplicate entry' error rather than the foreign key constraint
      violation error.
      
      As part of the fix for BUG#22037930, a new flag 'HA_CHECK_FK_ERROR'
      was added to the non-fatal error check so that FK errors are handled
      based on the 'IGNORE' flag. For INSERT ... ON DUPLICATE KEY UPDATE
      and REPLACE queries, the foreign key constraint violation error was
      marked as non-fatal even though IGNORE was not set. Hence processing
      continued with the duplicate key handling, resulting in an incorrect
      error.
      
      Fix:
      ===
      Foreign key violation errors are now treated as non-fatal only when
      IGNORE is set in the above-mentioned queries, so the appropriate
      foreign key violation error is reported.
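      
      A minimal sketch (hypothetical names, not the actual handler code) of the
      corrected condition: a foreign key violation is downgraded to a non-fatal,
      ignorable error only when the statement uses IGNORE:
      
      enum ToyError { FK_VIOLATION, DUP_ENTRY, OTHER };
      
      // true => the error may be downgraded and processing may continue;
      // false => report it as a hard error.
      bool is_non_fatal_error(ToyError err, bool ignore_flag_set)
      {
        if (err == FK_VIOLATION)
          return ignore_flag_set;   // ignorable only under INSERT IGNORE etc.
        return err == DUP_ENTRY;    // duplicate-key errors keep their old handling
      }
      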
      3b6f9aac
  5. 19 Apr, 2016 2 commits
    • BUG#22286421: NULL POINTER DEREFERENCE · fbf44eed
      Karthik Kamath authored
      ANALYSIS:
      =========
      A LEX_STRING structure pointer is processed during the
      validation of a stored program name. During this processing,
      there is a possibility of a null pointer dereference.
      
      FIX:
      ====
      check_routine_name() is invoked by the parser, which always
      supplies a non-empty string as the SP name. To catch any
      potential call to check_routine_name() with a NULL value, a
      debug assert has been added.
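      
      A minimal sketch (simplified types, not the actual patch) of the kind of
      debug assertion described above:
      
      #include <cassert>
      #include <cstddef>
      
      struct LEX_STRING_TOY {    // simplified stand-in for the server's LEX_STRING
        char  *str;
        size_t length;
      };
      
      bool check_routine_name(const LEX_STRING_TOY *ident)
      {
        // The parser always passes a non-empty name; catch violations in debug
        // builds instead of dereferencing a null pointer later.
        assert(ident != nullptr && ident->str != nullptr && ident->length != 0);
        // ... remaining name validation ...
        return false;            // false = name accepted in this toy model
      }
      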
      fbf44eed
    • Bug#22897202: RPL_IO_THD_WAIT_FOR_DISK_SPACE HAS OCCASIONAL FAILURES · 3a8f43be
      Sujatha Sivakumar authored
      
      Analysis:
      =========
      The test script does not ensure that "assert_grep.inc" is called
      only after the 'Disk is full' error has been written to the
      error log.
      
      The test checks for the "Queueing master event to the relay log"
      state, but this state is set before 'queue_event' is invoked,
      while the actual 'Disk is full' error happens at a much lower
      level. As a result, the debug point can be reset before the disk
      full simulation even occurs, and the "Disk is full" message never
      appears in the error log.
      
      To guarantee the expected order, we need a mechanism by which,
      after the "Disk is full" error message is written to the error
      log, the server signals the test to execute SSS and only then
      resets the debug point, making the test deterministic.
      
      Fix:
      ===
      Added a debug sync point to make the script deterministic.
      3a8f43be
  6. 14 Apr, 2016 1 commit
  7. 23 Mar, 2016 1 commit
  8. 17 Mar, 2016 2 commits
    • No commit message · 9e5222ce
      mysql-builder@oracle.com authored
      9e5222ce
    • BUG#22594514: HANDLE_FATAL_SIGNAL (SIG=11) IN UNIQUE::~UNIQUE | SQL/UNIQUES.CC:355 · 6608f841
      Nisha Gopalakrishnan authored
      
      Analysis
      ========
      
      Setting sort_buffer_size to a large value can cause operations
      that use the sort buffer, such as the DELETE mentioned in the
      bug report, to fail. Versions 5.5 and 5.6 report an OOM error,
      while in 5.7+ the server crashes.
      
      While initializing the mem_root for the sort buffer tree, the
      block size for the mem_root is determined from the
      'sort_buffer_size' value. This unsigned long value is typecast
      to unsigned int, so it becomes zero. The subsequent block_size
      computation while initializing the mem_root then yields a very
      large block_size value, and allocating a block during the
      DELETE operation reports an OOM error. In 5.7+, the PFS
      instrumentation for memory allocation overshoots the unsigned
      value and allocates a block of just one byte. When the
      mem_root's block is later freed, the original block_size is
      used, which triggers the crash since the server tries to free
      unallocated memory.
      
      Fix:
      ====
      To restrict the use of such an unreasonable sort_buffer_size,
      the typecast of the block size to 'unsigned int' is removed, so
      all versions now report an OOM error for sizes exceeding the
      unsigned int range.
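      
      A small standalone demo (not server code, and assuming a 64-bit unsigned
      long) of the truncation described above: casting a sort_buffer_size above
      UINT_MAX to unsigned int yields 0, which then derails the block-size
      arithmetic:
      
      #include <cstdio>
      
      int main()
      {
        unsigned long sort_buffer_size = 4294967296UL;         // 4 GiB > UINT_MAX
        unsigned int  truncated = (unsigned int) sort_buffer_size;
        std::printf("original: %lu, after cast: %u\n",
                    sort_buffer_size, truncated);              // prints "... 0"
        return 0;
      }
      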
      6608f841
  9. 07 Mar, 2016 1 commit
  10. 03 Mar, 2016 1 commit
    • Bug #18740222: CRASH IN GET_INTERVAL_INFO WITH WEIRDO INTERVALS · 767bab4a
      Sreeharsha Ramanavarapu authored
      
      ISSUE:
      ------
      Some string functions return one of their parameters, or a
      combination of them, as their result. The resultant string's
      charset could then be incorrectly set to that of the chosen
      parameter.
      
      This results in incorrect behavior when an ASCII string is
      expected.
      
      SOLUTION:
      ---------
      Since an ASCII string is expected, val_str_ascii() should
      explicitly convert the string.
      
      Part of the fix is a backport of Bug#22340858 to mysql-5.5
      and mysql-5.6.
      767bab4a
  11. 01 Mar, 2016 4 commits
    • Bug#19920049 - MYSQLD_MULTI MISLEADING WHEN MY_PRINT_DEFAULTS IS NOT FOUND · 32d6db3b
      Shishir Jaiswal authored
      
      DESCRIPTION
      ===========
      If the mysqld_multi script and the my_print_defaults utility are
      in the same folder (one not included in $PATH) and the former is
      run, it complains that the mysqld binary is absent even though
      the binary exists.
      
      ANALYSIS
      ========
      We have a subroutine my_which() mimicking the behaviour of the
      POSIX "which" command. Its current behaviour is to check a
      given argument as follows:
      - Step 1: Assume the argument is a command with a full absolute
      path. If it exists "as-is", return the argument (which will be
      the pathname); else proceed to Step 2.
      - Step 2: Assume the argument is a plain command with no
      absolute path. Try locating it in each of the paths listed in
      $PATH, one by one. If found, return the pathname; if found
      nowhere, return NULL.
      
      Currently, when my_which(my_print_defaults) is called, it
      returns from Step 1 (since the utility exists in the current
      folder) and never proceeds to Step 2. This is wrong, since the
      returned value is the same as the argument, i.e.
      'my_print_defaults', which defeats the purpose of this
      subroutine, whose job is to return a pathname either in Step 1
      or in Step 2.
      
      Later, when the utility is executed in subroutine
      defaults_for_group(), the command evaluates to NULL and the
      subroutine returns the same. This is because the plain command
      'my_print_defaults {options} ...' would execute properly only
      if my_print_defaults existed in one of the paths in $PATH. In
      the course of the flow, the variable $mysqld_found therefore
      comes out as NULL, and hence the error.
      
      In this case, the call to my_which() should fail, causing the
      script to abort and thus avoiding this mess.
      
      FIX
      ===
      The utility my_print_defaults should be tested only in Step 2,
      since it does not have an absolute path. A condition was
      therefore added so that Step 1 is executed only when my_which()
      is not called for my_print_defaults, forcing the lookup to
      proceed to Step 2, where the various paths in $PATH are
      checked.
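      
      A minimal C++ sketch of the two-step lookup described above (mysqld_multi
      itself is a Perl script, and the helper and flag names here are
      hypothetical); with the fix, Step 1 is skipped for commands that must be
      resolved through $PATH:
      
      #include <cstdlib>
      #include <filesystem>
      #include <optional>
      #include <sstream>
      #include <string>
      
      namespace fs = std::filesystem;
      
      std::optional<std::string> my_which(const std::string &cmd,
                                          bool allow_as_is_path)
      {
        // Step 1: take the argument as a ready-made path -- skipped for
        // my_print_defaults so a copy in the current folder cannot
        // short-circuit the $PATH search.
        if (allow_as_is_path && fs::exists(cmd))
          return cmd;
        // Step 2: search every directory listed in $PATH.
        const char *path_env = std::getenv("PATH");
        if (path_env == nullptr)
          return std::nullopt;
        std::stringstream dirs(path_env);
        std::string dir;
        while (std::getline(dirs, dir, ':'))
          if (!dir.empty() && fs::exists(fs::path(dir) / cmd))
            return (fs::path(dir) / cmd).string();
        return std::nullopt;     // found nowhere
      }
      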
      32d6db3b
    • Bug#20685029: SLAVE IO THREAD SHOULD STOP WHEN DISK IS FULL · 83611517
      Sujatha Sivakumar authored
      Bug#21753696: MAKE SHOW SLAVE STATUS NON BLOCKING IF IO THREAD WAITS FOR DISK SPACE
      
      Problem:
      ========
      Currently, SHOW SLAVE STATUS blocks if the IO thread is waiting
      for disk space. This makes automation tools that verify server
      health block instead of taking the relevant action, and SHOW
      SLAVE STATUS requests eventually pile up.
      
      Analysis:
      =========
      SHOW SLAVE STATUS hangs on mi->data_lock if a relay log write is
      waiting for free disk space while holding mi->data_lock.
      mi->data_lock is needed to protect the format description event
      (mi->format_description_event), which is accessed by clients
      running FLUSH LOGS and by the slave IO thread. Note that relay
      log writes do not need to be protected by mi->data_lock;
      LOCK_log is used to protect the relay log between the IO and
      SQL threads (see MYSQL_BIN_LOG::append_event). The code takes
      mi->data_lock to protect mi->format_description_event during
      the relay log rotate, which might get triggered right after a
      relay log write.
      
      Fix:
      ====
      Release data_lock just for the duration of the write into the
      relay log.
      
      Changes were made to ensure the following lock order is
      maintained, to avoid deadlocks:
      
      data_lock, LOCK_log
      
      data_lock is held during relay log rotations to protect
      the description event.
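      
      A minimal sketch of the locking pattern described above, using std::mutex
      stand-ins rather than the server's mysql_mutex_t (a simplified model, not
      the actual replication code):
      
      #include <mutex>
      
      std::mutex data_lock;   // protects mi->format_description_event in the real code
      std::mutex LOCK_log;    // protects the relay log between the IO and SQL threads
      
      void queue_event_toy()
      {
        std::unique_lock<std::mutex> dl(data_lock);
        // ... work that genuinely needs data_lock ...
        dl.unlock();                        // do not hold data_lock across the write:
        {                                   // the write may block on a full disk
          std::lock_guard<std::mutex> lg(LOCK_log);
          // ... append the event to the relay log ...
        }
        dl.lock();                          // reacquire before the rotate check
        // ... possible relay log rotation, still taking locks in the
        // order data_lock -> LOCK_log ...
      }
      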
      83611517
    • BUG#17018343 SLAVE CRASHES WHEN APPLYING ROW-BASED BINLOG ENTRIES IN CASCADING REPLICATION · bb32ac1d
      Venkatesh Duggirala authored
      
      Problem: In RBR mode, merge table updates are not successfully applied in a
      cascading replication setup.
      
      Analysis & Fix: Every type of row event is preceded by one or more table_map_log_events
      that gives the information about all the tables that are involved in the row
      event. Server maintains the list in RPL_TABLE_LIST and it goes through all the
      tables and checks for the compatibility between master and slave. Before
      checking for the compatibility, it calls 'open_tables()' which takes the list
      of all tables that needs to be locked and opened. In RBR, because of the
      Table_map_log_event , we already have all the tables including base tables in
      the list. But the open_tables() which is generic call takes care of appending
      base tables if the list contains merge tables. There is an assumption in the
      current replication layer logic that these tables (TABLE_LIST type objects) are always
      added in the end of the list. Replication layer maintains the count of
      tables(tables_to_lock_count) that needs to be verified for compatibility check
      and runs through only those many tables from the list and rest of the objects
      in linked list can be skipped. But this assumption is wrong.
      open_tables()->..->add_children_to_list() adds base tables to the list immediately
      after seeing the merge table in the list.
      
      For example, if the list passed to open_tables() is t1->t2->t3 where t3 is a merge
      table (and t1 and t2 are base tables), it adds t1'->t2' to the list after t3. The
      new table list looks like t1->t2->t3->t1'->t2', so the children appear to be added
      at the end of the list, but that is not guaranteed. If the list passed to
      open_tables() is t3->t1->t2 where t3 is a merge table (and t1 and t2 are base
      tables), the new prepared list will be t3->t1'->t2'->t1->t2, where t1' and t2' are
      TABLE_LIST objects added by the add_children_to_list() call which the replication
      layer should not look into. Here tables_to_lock_count will not help, as the
      objects are added in the middle of the list.
      
      Fix: After investigating the add_children_to_list() logic (which is called from
      open_tables()), there is no flag or logic in it to skip adding the children to the
      list even if the children are already included in the table list. Hence, to fix
      the issue, logic is added in the replication layer to skip children in the list by
      checking whether 'parent_l' is non-null. If a table is a child, its 'compatibility'
      check is skipped.
      
      This patch also does not remove the 'tables_to_lock_count' logic, for performance
      reasons: if there are any children at the end of the list, they can easily be
      skipped by stopping the loop with the tables_to_lock_count check.
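      
      A minimal sketch of the check described above, with hypothetical toy types rather
      than the real TABLE_LIST (a simplified model of the compatibility loop):
      
      struct TableListToy {
        TableListToy *next_global;   // next table in the statement's table list
        TableListToy *parent_l;      // non-null => child appended under a MERGE parent
      };
      
      bool check_table_compatibility(const TableListToy *table);   // defined elsewhere
      
      bool check_all_tables(TableListToy *first, unsigned tables_to_lock_count)
      {
        unsigned checked = 0;
        for (TableListToy *t = first; t != nullptr && checked < tables_to_lock_count;
             t = t->next_global)
        {
          if (t->parent_l != nullptr)
            continue;                        // MERGE child: skip the master/slave check
          if (!check_table_compatibility(t))
            return false;
          ++checked;
        }
        return true;
      }
      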
      bb32ac1d
    • Bug#21920657: SSL-CA FAILS SILENTLY IF THE PATH CANNOT BE FOUND · c7e68606
      Arun Kuruvila authored
      
      Description:- A failure during the validation of the CA
      certificate path provided via the 'ssl-ca' option returns two
      different errors for YaSSL and OpenSSL.
      
      Analysis:- 'ssl-ca' is the option used for specifying the SSL CA
      certificate path. When validation of this certificate fails,
      OpenSSL returns the error "ERROR 2026 (HY000): SSL connection
      error: SSL_CTX_set_default_verify_paths failed", while YaSSL
      returns "ERROR 2026 (HY000): SSL connection error: ASN: bad
      other signature confirmation". The error returned by OpenSSL is
      correct, since "SSL_CTX_load_verify_locations()" returns 0 on
      failure (in the OpenSSL case) and the error is set to
      "SSL_INITERR_BAD_PATHS". In the YaSSL case,
      "SSL_CTX_load_verify_locations()" returns an error number that
      is less than or equal to 0 in case of error; the error numbers
      for YaSSL are listed in the file
      'extra/yassl/include/openssl/ssl.h' (line no: 292). Also,
      'ssl-ca' does not accept tilde home directory path
      substitution.
      
      Fix:- The condition that checks for an error from
      "SSL_CTX_load_verify_locations()" is changed in order to
      accommodate YaSSL as well. Logic is added in "mysql_ssl_set()"
      to accept tilde home directory path substitution for all SSL
      options.
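      
      A minimal sketch (not the actual client/vio change) of the adjusted error
      check: OpenSSL returns 0 on failure while YaSSL can return a negative
      error code, so testing for <= 0 covers both libraries:
      
      #include <openssl/ssl.h>
      
      bool load_ca_paths(SSL_CTX *ctx, const char *ca_file, const char *ca_path)
      {
        // 1 on success; 0 (OpenSSL) or a negative code (YaSSL) on failure.
        if (SSL_CTX_load_verify_locations(ctx, ca_file, ca_path) <= 0)
          return false;          // caller reports SSL_INITERR_BAD_PATHS
        return true;
      }
      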
      c7e68606
  12. 29 Feb, 2016 1 commit
  13. 26 Feb, 2016 2 commits
    • Yashwant Sahu
    • BUG#20574550 MAIN.MERGE TEST CASE FAILS IF BINLOG_FORMAT=ROW · 29cc2c28
      Venkatesh Duggirala authored
      The main.merge test case was failing when tested using the
      row-based binlog format.
      
      While analyzing the issue, the following problems were found:
      
      a) The server calls binlog-related code even when a statement will
         not be binlogged;
      b) The child table list is not present in the table structure by the
         time the CREATE TABLE statement is generated;
      c) The tables in the child table list are not yet opened when
         generating the table create info using row-based replication;
      d) CREATE TABLE LIKE TEMP_TABLE does not preserve the original table
         storage engine when using row-based replication.
      
      This patch addressed all above issues.
      
      @ sql/sql_class.h
      
      Added a function to determine whether the binary log is disabled for
        the current session. This is related to issue (a) above.
      
      @ sql/sql_table.cc
      
      Added code to skip binary logging related code if the statement
        will not be binlogged. This is related to issue (a) above.
      
      Added code to add the children to the query list of the table that
        will have its CREATE TABLE generated. This is related to issue (b)
        above.
      
      Added code to force the storage engine to be written into the
        generated CREATE TABLE. This is related to issue (d) above.
      
      @ storage/myisammrg/ha_myisammrg.cc
      
      Added a test to skip getting info about a child table if the
        child table is not opened. This is related to issue (c) above.
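      
      A hedged sketch of the kind of per-session check issue (a) calls for, with
      hypothetical names rather than the actual sql_class.h addition:
      
      struct SessionToy {
        bool option_bin_log;     // the session's binary-logging option bit
        bool binlog_open;        // whether the binary log is open at all
      
        // true => this statement will not be binlogged, so all
        // binlog-related work can be skipped.
        bool is_binlog_disabled() const
        {
          return !option_bin_log || !binlog_open;
        }
      };
      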
      29cc2c28
  14. 23 Feb, 2016 3 commits
  15. 19 Feb, 2016 1 commit
  16. 11 Feb, 2016 1 commit
    • BUG#22037930: INSERT IGNORE FAILS TO IGNORE FOREIGN KEY CONSTRAINT. · d9c541cb
      Nisha Gopalakrishnan authored
      
      Analysis
      =======
      
      INSERT and UPDATE operations using the IGNORE keyword that cause
      FOREIGN KEY constraint violations report an error despite the
      IGNORE keyword.
      
      Foreign key violation errors were not ignored and were reported
      as errors instead of warnings even when IGNORE was set.
      
      Fix
      ===
      Added code to ignore the foreign key violation errors and
      report them as warnings when the IGNORE keyword is used.
      d9c541cb
  17. 08 Feb, 2016 1 commit
    • Bug#22680706: 5.5 DOES NOT BUILD WITH GCC5 · 1fb6d4e6
      Jon Olav Hauglid authored
      Fix the following two build warnings so that 5.5 can be compiled
      with GCC5.
      
      storage/innobase/dict/dict0crea.c:1143:21: error: logical not is only applied
      to the left hand side of comparison [-Werror=logical-not-parentheses]
         ut_a(!node->index == (err != DB_SUCCESS));
                           ^
      storage/innobase/log/log0recv.c:1770:20: error: logical not is only applied
      to the left hand side of comparison [-Werror=logical-not-parentheses]
        ut_ad(!allow_ibuf == mutex_own(&log_sys->mutex));
                          ^
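      
      A small standalone illustration of the warning and one way to silence it
      (the actual 5.5 patch may differ): '!x == y' applies the '!' only to x,
      and spelling the intent as an explicit comparison keeps the
      boolean-equality semantics while satisfying GCC 5:
      
      #include <cassert>
      
      struct NodeToy { void *index; };              // stand-in for the InnoDB node
      enum DbErr { DB_SUCCESS = 0, DB_ERROR = 1 };
      
      void report(const NodeToy *node, DbErr err)
      {
        // Old form, rejected under -Werror=logical-not-parentheses:
        //   ut_a(!node->index == (err != DB_SUCCESS));
        // Equivalent spelling without the warning:
        assert((node->index == nullptr) == (err != DB_SUCCESS));
      }
      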
      1fb6d4e6
  18. 05 Feb, 2016 1 commit
  19. 29 Jan, 2016 1 commit
    • Bug #18823979: PS: UCS2 + CASE WHEN THEN ELSE CRASH IN ITEM_PARAM::SAFE_CHARSET_CONVERTER · 718c7879
      Sreeharsha Ramanavarapu authored
      
      ISSUE:
      ------
      Charset conversion on a null parameter is not handled
      correctly.
      
      SOLUTION:
      ---------
      Item_param's charset converter does not handle the case where it
      might have to deal with a null value. This is fine for other
      charset converters, since the value is not supplied to them at
      runtime.
      
      The fix is to check whether the parameter is now set to null
      and, if so, return an Item_null object. Also, there is no need
      to initialize Item_param's cnvitem to a string in the
      constructor; this can be done in
      Item_param::safe_charset_converter() itself.
      
      The members of Item_param, cnvbuf and cnvstr, have been removed,
      and cnvitem has been made a local variable in
      Item_param::safe_charset_converter().
      718c7879
  20. 28 Jan, 2016 1 commit
    • Bug #16912362 LOAD DATA INFILE CLAIMS TO BE HOLDING 'SYSTEM LOCK' IN PROCESSLIST · 01d41f68
      Ajo Robert authored
      
      Analysis
      =========
      SHOW PROCESSLIST shows 'System lock' in the 'State' field while
      LOAD DATA INFILE is running.
      
      The thd->proc_info update is missing in the LOAD DATA INFILE
      path, so any request sees the last state set by lock_table()
      during open_table().
      
      Fix:
      =======
      Update the state information in the LOAD DATA INFILE path.
      01d41f68
  21. 27 Jan, 2016 2 commits
  22. 26 Jan, 2016 1 commit
    • Bug#21770366 backport bug#21657078 to 5.5 and 5.6 · a204ce5b
      Jon Olav Hauglid authored
      Post-push fix: The problem was that condition variable
      timeouts could in some cases (slow machines and/or short
      timeouts) be infinite.
      
      When the number of milliseconds to wait is computed, the end
      time is computed before the now() time. This can result in the
      now() time being later than the end time, leading to a negative
      timeout, which after conversion to unsigned becomes ~infinite.
      
      This patch fixes the problem by explicitly checking for a
      negative timeout and using 0 in that case.
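      
      A small standalone demo (not the actual patch) of the failure mode and the
      clamp described above:
      
      #include <cstdio>
      
      unsigned long wait_ms(long long end_time_ms, long long now_ms)
      {
        long long diff = end_time_ms - now_ms;
        if (diff < 0)
          diff = 0;                      // the fix: clamp negative waits to zero
        return (unsigned long) diff;     // safe now; no wrap to a huge value
      }
      
      int main()
      {
        // now() sampled 5 ms after the end time: without the clamp, -5 cast to
        // unsigned long would be 18446744073709551611 on an LP64 platform.
        std::printf("%lu\n", wait_ms(1000, 1005));   // prints 0
        return 0;
      }
      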
      a204ce5b
  23. 22 Jan, 2016 1 commit
  24. 20 Jan, 2016 1 commit
  25. 17 Jan, 2016 1 commit
    • Bug#21682356: STOP INJECTING DATA ITEMS IN AN ERROR MESSAGE GENERATED BY THE EXP() FUNCTION · 95825fa2
      Knut Anders Hatlen authored
      
      When generating the error message for numeric overflow, pass a flag to
      Item::print() that prevents it from expanding constant expressions and
      parameters to the values they evaluate to.
      
      For consistency, also pass the flag to Item::print() when
      Item_func_spatial_collection::fix_length_and_dec() generates an error
      message. It doesn't make any difference at the moment, since constant
      expressions haven't been evaluated yet when this function is called.
      95825fa2
  26. 15 Jan, 2016 1 commit
    • BUG#22530768 Innodb freeze running REPLACE statements · 93a6142d
      Shaohua Wang authored
      We can see from the hang stacktrace that srv_monitor_thread is blocked
      when acquiring log_sys::mutex, so that sync_arr_wake_threads_if_sema_free
      cannot get a chance to break the mutex deadlock.
      
      The fix is simply removing any mutex wait in srv_monitor_thread.
      
      Patch is reviewed by Sunny over IM.
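      
      A minimal sketch of one way to avoid a mutex wait in a monitoring thread,
      using std::mutex rather than InnoDB's own primitives (the actual patch may
      simply skip the lock entirely):
      
      #include <mutex>
      
      std::mutex log_sys_mutex;          // stand-in for log_sys::mutex
      
      void srv_monitor_tick()
      {
        if (log_sys_mutex.try_lock())    // never block the monitor thread
        {
          // ... gather and print log-related statistics ...
          log_sys_mutex.unlock();
        }
        // else: skip this reporting round instead of waiting
      }
      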
      93a6142d
  27. 12 Jan, 2016 1 commit
    • BUG#22530768 Innodb freeze running REPLACE statements · 79032a7a
      Shaohua Wang authored
      We can see from the hang stacktrace that srv_monitor_thread is blocked
      when acquiring log_sys::mutex, so that sync_arr_wake_threads_if_sema_free
      cannot get a chance to break the mutex deadlock.
      
      The fix is simply removing any mutex wait in srv_monitor_thread.
      
      Patch is reviewed by Sunny over IM.
      79032a7a
  28. 11 Jan, 2016 4 commits