1. 03 Jul, 2014 3 commits
    • WL#7219: Implement audit filter · e12dd225
      Ashish Agarwal authored
    • Bug#18469276: MOD FOR SMALL DECIMALS FAILS · 8ded4110
      Chaithra Reddy authored
            
      Problem:
      If the number of leading zeroes in the fractional part of a
      decimal number exceeds 45, the mod operation on it fails.
            
      Analysis:
      Currently there is a miscalculation of the fractional
      part for very small decimals in do_div_mod.
            
      For example:
      For 0.000(45 times).....3,
      the length of the integer part becomes -5 (for a length of one,
      the buffer can hold 9 digits; since the number of zeroes is 45,
      the integer part length becomes 5), and it is negative because of
      the leading zeroes present in the fractional part.
      The fractional part is the number of digits present after the
      point, which is 46, and is therefore rounded up to the nearest
      multiple of 9, which is 54. So the length of the resulting
      fractional part becomes 6.
            
      Because of this, the combined length of the integer and fractional
      parts exceeds the maximum length allocated, which is 9, and the
      operation fails.
            
      Solution:
      A negative integer-part length indicates that there are
      leading zeroes in the fractional part. As a result, the stop1
      pointer should be set based not just on frac0 but also on intg0,
      because the destination buffer will be filled with 0's for the
      length of intg0.
      
      strings/decimal.c:
        Calculate the stop1 pointer based on the lengths of intg0 and frac0.
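
      The word bookkeeping above can be illustrated with a small standalone
      sketch (assumptions: 9 digits per decimal word as in DIG_PER_DEC1, and
      the intg0/frac0/stop1 names used with the meanings given in this
      message; this is not the actual decimal.c code):

        #include <cassert>
        #include <cstdio>

        static const int DIG_PER_DEC1 = 9;   // decimal digits per buffer word

        int main() {
          // Example from this message: 0.000(45 times).....3
          int frac_digits   = 46;            // digits after the decimal point
          int leading_zeros = 45;            // leading zeroes of the fraction
          int buf_words     = 9;             // words allocated for the result

          // frac0: fractional length rounded up to whole words (46 -> 54 -> 6).
          int frac0 = (frac_digits + DIG_PER_DEC1 - 1) / DIG_PER_DEC1;

          // intg0: integer-part length in words; negative because the first
          // significant digit sits 5 whole words past the decimal point.
          int intg0 = -(leading_zeros / DIG_PER_DEC1);

          // Buggy bound: zero-filling |intg0| words plus frac0 fraction words
          // needs 5 + 6 = 11 words, which exceeds the 9 words allocated.
          int buggy_span = -intg0 + frac0;

          // Fixed bound: deriving stop1 from intg0 + frac0 counts only the
          // words that actually carry result digits (6 - 5 = 1 here).
          int fixed_span = intg0 + frac0;

          printf("frac0=%d intg0=%d buggy=%d fixed=%d\n",
                 frac0, intg0, buggy_span, fixed_span);
          assert(buggy_span > buf_words && fixed_span <= buf_words);
          return 0;
        }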
    • Bug #19140907 DUPLICATES IN UNIQUE SECONDARY INDEX BECAUSE OF FIX OF BUG#68021 · 301032d2
      Annamalai Gurusami authored
      Problem:
      
      When a unique secondary index is scanned for duplicate checking, gap locks
      were not taken if the transaction had an isolation level <= READ COMMITTED.
      This change was made while fixing Bug #16133801 UNEXPLAINABLE INNODB UNIQUE
      INDEX LOCKS ON DELETE + INSERT WITH SAME VALUES (rb#2035). Because of this
      the duplicate check logic failed, resulting in duplicate values in the
      unique secondary index.
      
      Solution:
      
      When a unique secondary index is scanned for duplicate checking, gap locks
      must be taken irrespective of the transaction isolation level.  This is
      achieved by reverting rb#2035.
      
      rb#5910 approved by Jimmy
  2. 02 Jul, 2014 2 commits
    • Bug#17873011 NO DEPRECATION WARNING FOR THREAD_CONCURRENCY · 8a4ec676
      Arun Kuruvila authored
      Description:
      THREAD_CONCURRENCY is deprecated, but no deprecation warning
      message is issued when this variable is set while starting
      the server.
      
      Analysis:
      This variable is specific to Solaris 8 and earlier systems
      and is ignored on all other platforms. But since many
      customers who do not use Solaris still have this variable in
      their configuration files, it is important to have a
      deprecation warning.
      
      Fix:
      A deprecation warning message for THREAD_CONCURRENCY is added.
    • BUG#18779944: MYSQLDUMP BUFFER OVERFLOW · a69ab08b
      Marcin Babij authored
      Mysqldump overflows a stack buffer when copying a table name from the command-line arguments, resulting in stack corruption and the ability to execute arbitrary code.
      
      Fix: Check that the length of every positional argument passed to mysqldump is smaller than NAME_LEN.
      Note: Mysqldump heavily depends on database object names (databases, tablespaces, tables, etc.) being limited to a small size (currently 64).
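
      A minimal sketch of that kind of length check (illustrative only; the
      real change is in client/mysqldump.c, and the NAME_LEN value below is a
      placeholder for the constant defined in the server headers):

        #include <cstdio>
        #include <cstdlib>
        #include <cstring>

        static const size_t NAME_LEN = 64 * 3;   // placeholder for the server's NAME_LEN

        // Reject any positional argument (database/table name) that could not
        // fit in the fixed-size name buffers mysqldump copies arguments into.
        static void check_positional_args(int argc, char **argv) {
          for (int i = 1; i < argc; ++i) {
            if (strlen(argv[i]) >= NAME_LEN) {
              fprintf(stderr, "mysqldump: argument '%.40s...' is too long\n", argv[i]);
              exit(EXIT_FAILURE);
            }
          }
        }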
  3. 01 Jul, 2014 1 commit
  4. 30 Jun, 2014 2 commits
  5. 27 Jun, 2014 5 commits
  6. 26 Jun, 2014 3 commits
    • BUG#13874553: rpl.rpl_stop_slave fails sporadically on pb2 · 5111df08
      Luis Soares authored
      The test case makes use of the fine DEBUG_SYNC facility. Furthermore,
      since it needs synchronization on internal threads (dump and SQL
      threads) the server code has DEBUG_SYNC commands internally deployed
      and activated through the DBUG_EXECUTE_IF macro. The internal
      DEBUG_SYNC commands are then controlled from the test case through the
      DEBUG variable.
      
      There were three problems around the DEBUG + DEBUG_SYNC facility
      usage:
      
      1. When signaling the SQL thread to continue, the test would reset
         the DEBUG_SYNC variable immediately. This could mean that the SQL
         thread might lose the signal and continue to wait forever;
      
      2. A similar scenario was happening with the dump thread on the
         master. This thread was instructed to wait, and later it would be
         signaled to continue, but immediately afterwards the DEBUG_SYNC
         would be reset. This could lead to the dump thread missing the
         signal and waiting forever;
      
      3. The test was not cleaning itself up with respect to the
         instrumentation of the dump thread. This would leave the
         conditional execution of an internal DEBUG_SYNC command active
         (through the usage of DBUG_EXECUTE_IF). 
      
      We fix #1 and #2 by waiting for the threads to receive the signal and
      only then issuing the reset. We fix #3 by resetting the DEBUG variable,
      thus deactivating the dump thread's internal DEBUG_SYNC command.
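
      The essence of fixes #1 and #2 - reset only after the waiter has
      actually consumed the signal - in a generic, self-contained form
      (this is just the synchronization pattern, not the DEBUG_SYNC
      implementation or the mysqltest syntax used by the test):

        #include <condition_variable>
        #include <cstdio>
        #include <mutex>
        #include <thread>

        std::mutex m;
        std::condition_variable cv;
        bool signalled = false;   // stand-in for the DEBUG_SYNC signal
        bool consumed  = false;   // waiter acknowledges it saw the signal

        void worker() {                            // "SQL/dump thread"
          std::unique_lock<std::mutex> lk(m);
          cv.wait(lk, [] { return signalled; });   // wait for the signal
          consumed = true;                         // acknowledge before continuing
          cv.notify_all();
          std::printf("worker: got signal, continuing\n");
        }

        int main() {
          std::thread t(worker);
          {
            std::unique_lock<std::mutex> lk(m);
            signalled = true;                      // test sends the signal
            cv.notify_all();
            // Fix for #1/#2: wait until the worker has received the signal
            // before resetting; resetting immediately could wipe the signal
            // while the worker is still waiting for it.
            cv.wait(lk, [] { return consumed; });
            signalled = false;                     // now it is safe to "RESET"
          }
          t.join();
          return 0;
        }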
    • Balasubramanian Kandasamy authored
    • Bug#18463911 : SERVER CRASHES ON CREATING A TEMP TABLE WITH · 76d3e3bc
      Arun Kuruvila authored
                     CERTAIN MAX_HEAP_TABLE_SIZE VALUES
      
      Follow-up patch to fix a failure on Windows machines.
  7. 25 Jun, 2014 6 commits
    • BUG#17665767 - FAILING ASSERTION: PRIMARY_KEY_NO == -1 || PRIMARY_KEY_NO == 0 · cdf72d51
      Raghav Kapoor authored
      BACKGROUND:
      This bug is a follow-up on Bug#16368875.
      The assertion failure happens because in the SQL layer the key
      does not get promoted to PRIMARY KEY, while InnoDB treats it
      as the PRIMARY KEY.
      
      ANALYSIS:
      Here we are trying to create an index on the POINT (GEOMETRY)
      data type, which is a type of BLOB (since GEOMETRY is a
      subclass of BLOB).
      In general, we can't create an index over a GEOMETRY-family
      field unless we specify the length of the
      keypart (similar to BLOB fields).
      The only exception is the POINT field type, whose maximum
      column size is 25. The problem is that the field is not treated
      as a PRIMARY KEY when we create an index on a POINT column using
      its maximum column size as the key part prefix. The fix allows an
      index on a POINT column to be treated as a PRIMARY KEY.
      
      FIX:
      The patch for Bug#16368875 is extended to take the GEOMETRY
      datatype, POINT in particular, into account so that it is
      considered a PRIMARY KEY in the SQL layer.
    • BUG#18405221: SHOW CREATE VIEW OUTPUT INCORRECT · d63645c8
      Nisha Gopalakrishnan authored
      Fix:
      ---
      The issue reported is the same as BUG#14117018.
      Hence the patch is backported from mysql-trunk
      to mysql-5.5 and mysql-5.6.
    • Bug#16395459 TEST AND RESULT FILES WITH EXECUTE BIT · 410b1dd8
      Terje Rosten authored
      Bug#16415173 CRLF INSTEAD OF LF IN SQL-BENCH SCRIPTS
            
      Correct permissions and convert line endings from Windows style to UNIX style on some files.
      Fix permissions on installed ini files.
      
      (MySQL 5.5 version)
    • Balasubramanian Kandasamy authored
    • Bug #18463911 : SERVER CRASHES ON CREATING A TEMP TABLE · 1177d340
      Arun Kuruvila authored
                      WITH CERTAIN MAX_HEAP_TABLE_SIZE VALUES
      
      Description:
      When the system variable 'max_heap_table_size'
      is set to 20GB, the server crashes on creation of a
      temporary table or a table using the MEMORY storage engine.
      
      Analysis:
      The variable 'max_records' determines the amount of heap
      allocated for the records of the table. This value
      is derived from the 'max_heap_table_size' variable.
      'records_in_block' in turn uses max_records to
      determine the number of records per block.
      
      When 'max_heap_table_size' is set to 20GB,
      'records_in_block' is calculated to be 2^28.
      
      The block size, determined by multiplying
      'records_in_block' and 'recbuffer', overflows
      and hence the value becomes zero. As a result, zero bytes
      of heap are allocated for the table, which
      results in a server crash when the table is accessed.
      
      Fix:
      The variables 'records_in_block' and 'recbuffer' are
      typecast to 'unsigned long' while calculating the
      size of the block.
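
      A minimal standalone illustration of the overflow described above (the
      2^28 figure comes from this message; the 'recbuffer' value and the
      64-bit 'unsigned long' are assumptions for the sketch, not the exact
      heap-engine code):

        #include <cstdio>

        int main() {
          unsigned long records_in_block = 1UL << 28;  // 2^28, as computed for a 20GB heap table
          unsigned int  recbuffer        = 1U  << 6;   // placeholder per-record buffer size

          // In 32-bit unsigned arithmetic 2^28 * 2^6 == 2^34 wraps around to 0,
          // so zero bytes end up being allocated for the block.
          unsigned int  overflowed = (unsigned int)records_in_block * recbuffer;

          // Casting to a 64-bit-wide unsigned long before multiplying (the
          // approach taken by the fix) keeps the full value.
          unsigned long fixed = (unsigned long)records_in_block * (unsigned long)recbuffer;

          printf("overflowed=%u fixed=%lu\n", overflowed, fixed);
          return 0;
        }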
    • Bug#18776592 INNODB: FAILING ASSERTION: PRIMARY_KEY_NO == -1 || · 119984db
      Gopal Shankar authored
                                              PRIMARY_KEY_NO == 0 
      
      This fix is a backport of the following revision of the 5.6 source tree:
      # committer: Gopal Shankar <gopal.shankar@oracle.com>
      # branch nick: priKey56
      # timestamp: Wed 2013-05-29 11:11:46 +0530
      # message:
      #   Bug#16368875 INNODB: FAILING ASSERTION:
  8. 24 Jun, 2014 2 commits
    • Bug#19001781: ADD SUPPORT FOR CMAKE 3 · 6cb3ca59
      Jon Olav Hauglid authored
      Set CMP0026 and CMP0045 policies when using CMake 
      version 3 or higher to restore old CMake behavior.
    • BUG#18618561: FAILED ALTER TABLE ENGINE CHANGE WITH PARTITIONS · 24756e8e
      Nisha Gopalakrishnan authored
                    CORRUPTS FRM
      
      Analysis:
      ---------
      ALTER TABLE on a partitioned table resulted in the wrong
      engine being written into the table's FRM file and displayed
      in SHOW CREATE TABLE.
      
      The prep_alter_part_table() function modifies the partition_info object
      of the TABLE instance representing the old version of the table.
      If the ALTER TABLE ENGINE statement fails, the partition_info
      object for the TABLE contains the altered storage engine name.
      SHOW CREATE TABLE uses the TABLE object to display the table
      information, and hence displays an incorrect storage engine for the table.
      Also, a subsequent successful ALTER TABLE operation will write the
      incorrect engine information into the FRM file.
      
      Fix:
      ---
      A copy of the partition_info object is created before modification so
      that any changes do not cause the original partition_info object
      to be modified if the ALTER TABLE fails. (Backported part of the code
      provided as the fix for bug#14156617 in mysql-5.6.6.)
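
      A minimal sketch of the copy-before-modify pattern the fix describes
      (names and the clone() helper are illustrative, not the server's actual
      partition_info interface):

        // Work on a clone so a failed ALTER cannot leave the cached TABLE's
        // partition_info carrying the new (never-applied) engine name.
        struct partition_info {
          const char *default_engine_name;
          partition_info *clone() const { return new partition_info(*this); }
        };

        bool alter_partitioning(partition_info *&cached, const char *new_engine,
                                bool (*do_alter)(partition_info *)) {
          partition_info *work = cached->clone();   // copy, not the original
          work->default_engine_name = new_engine;
          if (!do_alter(work)) {
            delete work;                             // failure: cache untouched
            return false;
          }
          delete cached;                             // success: publish the copy
          cached = work;
          return true;
        }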
  9. 23 Jun, 2014 2 commits
  10. 19 Jun, 2014 1 commit
  11. 18 Jun, 2014 1 commit
    • Bug#18949527 SUITE/BINLOG/T/BINLOG_KILLED.TEST FORGETS TO · 63bc784a
      Namit Sharma authored
                   DISCONNECT CON1 AND CON2
        
      Problem:
      The test suite/binlog/t/binlog_killed.test makes the connections
      con1 and con2 but forgets to disconnect them and to wait until that
      operation has finished at test end.
      This mistake has the potential to harm subsequent tests in
      case those tests depend on the content of the processlist.
       
      Solution:
      Added disconnect + wait_until_disconnected.inc 
      within the test cleanup.
  12. 17 Jun, 2014 3 commits
  13. 16 Jun, 2014 1 commit
    • Bug#18432495:RBR REPLICATION SLAVE CRASHES WHEN DELETE · 14544cef
      Sujatha Sivakumar authored
      NON-EXISTS RECORDS
      
      Problem:
      ========
      In RBR replication, the master deletes a record that does not
      exist on the slave. When the slave tries to apply the
      Delete_rows_log_event from the master, it results in an
      assert on the slave.
      
      Analysis:
      ========
      This problem exists not only with the Delete_rows event but also
      with the Update_rows event. Trying to update a non-existing
      row on the slave from the master causes the
      same assert. The assert occurs only for tables that
      don't have primary keys and which therefore require a
      sequential scan to locate a record. This bug
      occurs only with the InnoDB engine, not with MyISAM.
      
      When an update or delete of rows is executed on a slave on a table
      which doesn't have a primary key, the updated record is stored
      in a buffer named table->record[0], and the same is copied to
      table->record[1] so that during the sequential scan
      table->record[0] can be reloaded with data fetched from the
      table and compared against table->record[1]. In the special
      case where there is no matching record on the slave side, the scan
      results in EOF; in that case we re-init the scan and try to
      compare record[0] with record[1], which are basically the
      same. This comparison is incorrect: since they both are the
      same, record_compare() reports that the record is found and
      we go ahead and try to update/delete a non-existing
      row. Ideally, if the scan results in EOF it means no data was
      found, hence there is no need to do a record_compare() at all.
      
      Fix:
      ===
      Avoid comparison of records on EOF.
      
      sql/log_event.cc:
        Avoid record comparison on end of file.
      sql/log_event_old.cc:
        Avoid record comparison on end of file.
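
      A condensed, self-contained simulation of the behaviour described above
      (the Table, rnd_next and record_compare stand-ins below are illustrative,
      not the actual log_event.cc code):

        #include <cstdio>
        #include <vector>

        static const int HA_ERR_END_OF_FILE   = 137;  // stand-in error codes
        static const int HA_ERR_KEY_NOT_FOUND = 120;

        struct Table {
          std::vector<int> rows;         // slave-side table data
          size_t scan_pos = 0;
          int record[2] = {0, 0};        // [0]: current row, [1]: row to match
        };

        int rnd_next(Table &t) {                      // one sequential-scan step
          if (t.scan_pos >= t.rows.size()) return HA_ERR_END_OF_FILE;
          t.record[0] = t.rows[t.scan_pos++];
          return 0;
        }

        bool record_compare(const Table &t) { return t.record[0] == t.record[1]; }

        // Locate the row to update/delete on a table with no primary key.
        int locate_row(Table &t, bool apply_fix) {
          int err;
          while ((err = rnd_next(t)) == 0)
            if (record_compare(t)) return 0;          // genuine match
          if (err == HA_ERR_END_OF_FILE && apply_fix)
            return HA_ERR_KEY_NOT_FOUND;              // fix: EOF means "no such row"
          // Buggy path: record[0] still equals record[1] here, so a phantom
          // "match" is reported for a row that does not exist.
          return record_compare(t) ? 0 : HA_ERR_KEY_NOT_FOUND;
        }

        int main() {
          Table t;                       // empty table on the slave
          t.record[1] = 42;              // row the master wants deleted
          t.record[0] = 42;              // copied from record[1] before the scan
          printf("without fix: %d (0 == phantom match)\n", locate_row(t, false));
          printf("with fix:    %d\n", locate_row(t, true));
          return 0;
        }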
  14. 10 Jun, 2014 1 commit
    • Bug #18806829 OPENING INNODB TABLES WITH MANY FOREIGN KEY REFERENCES IS · b5299f35
      Annamalai Gurusami authored
      SLOW/CRASHES SEMAPHORE
      
      Problem:
      
      There are 200,000 tables - fk_000001, fk_000002 ... fk_200000.  All of them
      are related to the same parent_table through a foreign key constraint.
      When the parent_table is loaded into the dictionary cache, all the child tables
      are also loaded.  This takes a lot of time.  Since this operation happens
      while the dictionary latch is held, the scenario leads to a "long semaphore
      wait" situation and the server gets killed.
      
      Analysis:
      
      A simple performance analysis showed that the slowness is caused by the
      dict_foreign_find() function.  It does a linear search on two linked lists,
      table->foreign_list and table->referenced_list, looking for a particular
      foreign key object using foreign->id as the key.  It is called twice
      for each foreign key object.
      
      Solution:
      
      Introduce red-black trees table->foreign_rbt and table->referenced_rbt, which
      act as indexes on table->foreign_list and table->referenced_list
      respectively, using foreign->id as the key.  These rbt structures are
      used solely by dict_foreign_find().
      
      rb#5599 approved by Vasil
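
      A minimal sketch of the idea (std::map stands in for InnoDB's rb-tree
      type, and the structures are trimmed down to the fields relevant here):

        #include <list>
        #include <map>
        #include <string>

        struct dict_foreign_t { std::string id; /* ... */ };

        struct dict_table_t {
          std::list<dict_foreign_t*> foreign_list;            // existing linked list
          std::map<std::string, dict_foreign_t*> foreign_rbt; // new index keyed by id
        };

        // Before: O(n) walk over the list for every lookup.
        dict_foreign_t *find_linear(dict_table_t *t, const std::string &id) {
          for (dict_foreign_t *f : t->foreign_list)
            if (f->id == id) return f;
          return nullptr;
        }

        // After: O(log n) lookup through the tree keyed by foreign->id.
        dict_foreign_t *find_indexed(dict_table_t *t, const std::string &id) {
          auto it = t->foreign_rbt.find(id);
          return it == t->foreign_rbt.end() ? nullptr : it->second;
        }

        // Whenever a foreign key object is appended to the list it is also
        // inserted into the tree, keeping the two structures in sync.
        void add_foreign(dict_table_t *t, dict_foreign_t *f) {
          t->foreign_list.push_back(f);
          t->foreign_rbt[f->id] = f;
        }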
      
  15. 06 Jun, 2014 1 commit
  16. 29 May, 2014 1 commit
  17. 22 May, 2014 1 commit
  18. 16 May, 2014 2 commits
    • Bug#18315770 BUG#12368495 FIX IS INCOMPLETE · ab8bd02b
      Tor Didriksen authored
      Item_func_ltrim::val_str did not handle multibyte charsets.
      Fix: factor out common code for Item_func_trim and Item_func_ltrim.
    • Bug #18163964 PASSWORD IS VISIBLE WHILE CHANGING IT FROM · 2dbebf77
      Arun Kuruvila authored
                    MYSQLADMIN IN PROCESSES LIST
      
      Description: Checking the process status (with ps -ef)
      while executing "mysqladmin" with the old and new
      passwords given on the command line sporadically shows the new
      password in the process list.
      
      Analysis: The old password is already being masked by "mysqladmin".
      So masking the new password in a similar manner would
      reduce the chance of hitting the bug. But this would not completely fix
      the bug, because if the "ps -ef" command hits mysqladmin
      before it masks the passwords, it will show both the old and
      new passwords in the process list. But the chances of
      hitting this are very small.
      
      Fix: The new password is also masked in the same manner
      as the --password argument.
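
      A minimal sketch of the argv-masking technique (illustrative only; the
      helper name and argument indices are placeholders, not the exact
      mysqladmin code):

        // Overwrite a command-line argument in place so that `ps` output shows
        // a row of 'x' characters instead of the password.  A short window in
        // which the real value is visible still remains, as noted above.
        static void mask_password_arg(char *arg) {
          while (*arg) *arg++ = 'x';
        }

        // After copying the old and new passwords into internal buffers:
        //   mask_password_arg(argv[old_pass_idx]);
        //   mask_password_arg(argv[new_pass_idx]);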
  19. 15 May, 2014 2 commits
    • Bug#18207212 : FILE NAME IS NOT ESCAPED IN BINLOG FOR LOAD DATA INFILE STATEMENT · 10978e0a
      Neeraj Bisht authored
      Problem:
      The Load_log_event::print_query() function does not put an escape character
      in the file name for a "LOAD DATA INFILE" statement.
      
      Analysis:
      When the file name for a "LOAD DATA INFILE" statement contains "'",
      the Load_log_event::print_query() function does not put an escape character
      in the file name.
      
      As a result, when we show the binary log, we get the file name without
      the escape character.
      
      Solution:
      To put the escape character in when the file name contains "'", use
      pretty_print_str() instead of a simple memcpy() to write the file name.
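
      A minimal sketch of quote-aware printing in the spirit of
      pretty_print_str() (illustrative only; the real routine lives in
      sql/log_event.cc):

        #include <cstdio>
        #include <cstring>

        // Print a string as a single-quoted SQL literal, escaping embedded
        // quotes and backslashes, instead of memcpy()ing the raw bytes.
        static void print_quoted_str(FILE *out, const char *s, size_t len) {
          fputc('\'', out);
          for (size_t i = 0; i < len; ++i) {
            if (s[i] == '\'' || s[i] == '\\') fputc('\\', out);
            fputc(s[i], out);
          }
          fputc('\'', out);
        }

        // print_quoted_str(stdout, "it's.csv", strlen("it's.csv"))
        // emits 'it\'s.csv' rather than breaking the statement at the quote.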
      
    • Bug#17217128 : BAD INTERACTION BETWEEN MIN/MAX AND · f2202335
      mithun authored
                     "HAVING SUM(DISTINCT)": WRONG RESULTS.
      ISSUE:
      ------
      If a query uses loose index scan and has both
      AGG(DISTINCT) and MIN()/MAX() functions, the result values
      of MIN()/MAX() are set improperly.
      When a query has AGG(DISTINCT), end_select is set to
      end_send_group. "end_send_group" keeps doing aggregation
      until it sees a record from the next group, and then it
      sends out the result row of that group.
      Since the query also has MIN()/MAX() and loose index scan is
      used, the values of MIN()/MAX() are set as part of the loose index
      scan itself. Setting the MIN()/MAX() values as part of the loose
      index scan overwrites the values computed in end_send_group.
      This causes invalid results.
      For such queries to work, loose index scan should stop
      performing MIN()/MAX() aggregation and let end_send_group
      do it. But in the current design loose index
      scan can produce only one row per group key. If we have both
      MIN() and MAX() it would have to emit two records, which is
      not possible because the interface has to use the common buffer
      record[0] for both records at a time.
      
      SOLUTION:
      ---------
      For such queries to work we would need a new interface for loose
      index scan. Hence, do not choose loose index scan for such
      cases. A new rule, SA7, is introduced to take care of
      this:
      
      SA7: "If Q has both AGG_FUNC(DISTINCT ...) and
            MIN/MAX() functions then loose index scan access
            method is not used."
      
      mysql-test/r/group_min_max.result:
        Expected result.
      mysql-test/t/group_min_max.test:
        1. Test with various combinations of AGG(DISTINCT) and
        MIN(), MAX() functions.
        2. Corrected the plan for old queries.
      sql/opt_range.cc:
        A new rule SA7 is introduced.
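
      A condensed sketch of where a rule like SA7 fits in the access-method
      decision (illustrative flags only, not the actual opt_range.cc
      interface):

        // Decide whether the loose index scan ("group min-max") access method
        // may be used.  SA7: if the query has both AGG(DISTINCT ...) and
        // MIN()/MAX(), reject it so end_send_group performs the aggregation.
        struct QueryAggInfo {
          bool has_distinct_agg;   // e.g. SUM(DISTINCT col)
          bool has_min_max;        // MIN(col) / MAX(col)
        };

        static bool loose_index_scan_allowed(const QueryAggInfo &q) {
          if (q.has_distinct_agg && q.has_min_max)
            return false;          // rule SA7
          return true;             // remaining rules are checked elsewhere
        }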