1. 16 May, 2012 4 commits
    • Merge from mysql-5.1 to mysql-5.5. · 453fcc0c
      Annamalai Gurusami authored
    • Fix for Bug#12667154 SAME QUERY EXEC AS WHERE SUBQ GIVES DIFFERENT
      RESULTS ON IN() & NOT IN() COMP #3 · c5dc1ea5
      Olav Sandstaa authored
      
      This bug causes a wrong result in mysql-trunk when ICP is used
      and bad performance in mysql-5.5 and mysql-trunk.
      
      Using the query from the bug report, the following explains what
      happens and what causes the wrong result when ICP is enabled:
      
      1. The t3 table contains four records. The outer query reads
         these and executes the subquery for each of them.
      
      2. Before the first execution of the subquery, it is optimized. In
         this case the important part is what happens to the first table, t1:
         - make_join_select() calls the range optimizer, which decides
           that t1 should be accessed using a range scan on the k1 index.
           It creates a QUICK_RANGE_SELECT object for this.
         - As the last part of optimization, the ICP code pushes the
           condition down to the storage engine for table t1 on the k1 index.
      
         This produces the following information in the explain for this table:
      
           2 DEPENDENT SUBQUERY t1 range k1 k1 5 NULL 3 Using index condition; Using filesort
      
         Note the use of filesort.
      
      3. Due to the need for sorting, the first execution of the subquery
         does (among other things) the following:
         a. Calls create_sort_index(), which in turn calls find_all_keys().
         b. find_all_keys() reads the required keys for all qualifying
            rows from the storage engine. To do this it checks whether it
            has a quick-select for the table and, if so, uses it for
            reading records. In this case it reads four records from the
            storage engine (based on the range criteria), and the storage
            engine evaluates the pushed index condition for each record.
         c. At the end of create_sort_index() there is code that cleans up
            a lot of state on the join tab. One of the things cleaned up
            is the select object. The result is that the quick-select
            object created in make_join_select() is deleted.
      
      4. The second execution of the subquery does the same as the first,
         but the result is different:
         a. Calls create_sort_index(), which in turn calls find_all_keys()
            (same as for the first execution).
         b. find_all_keys() reads the keys from the storage engine. To
            do this it checks if it has a quick-select for the table. Now
            there is NO quick-select object(!), since it was deleted in
            step 3c. So find_all_keys() defaults to reading the table
            using a table scan instead. Thus, instead of reading the four
            relevant records in the range, it reads the entire table
            (6 records). It then evaluates the table's condition (and here
            it goes wrong): since the entire condition has been pushed
            down to the storage engine using ICP, all 6 records qualify.
            (Note that the storage engine will not evaluate the pushed
            index condition in this case, since it was pushed for the k1
            index and we are now doing a table scan without any index.)
            The result is that we return six qualifying key values instead
            of four, due to not evaluating the table's condition.
         c. As above.
      
      5. The last two executions of the subquery also produce wrong
         results for the same reason.
      
      Summary: The problem occurs because all but the first execution of
      the subquery are done as a table scan without evaluating the table's
      condition (which is pushed to the storage engine on a different
      index). This is caused by the create_sort_index() function deleting
      the quick-select object that should have been used for executing the
      subquery as a range scan.
      
      Note that, in addition to causing wrong results, this bug can also
      result in bad performance, due to executing the subquery using a
      table scan instead of a range scan. This is an issue in MySQL 5.5.
      
      The fix for this problem is to prevent the quick-select object that
      the optimizer created from being deleted when create_sort_index()
      cleans up the join tab. This ensures that the quick-select object
      and the corresponding pushed index condition remain available and
      are used by all following executions of the subquery.
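      
      As an illustration only, here is a minimal, self-contained C++
      sketch of the ownership rule behind the fix. The types and names
      (QuickSelect, JoinTab, created_by_sort) are hypothetical stand-ins
      for the server's structures, not the actual patch:
      
        #include <cstdio>
        #include <memory>
        
        struct QuickSelect {                  // models QUICK_RANGE_SELECT
          int rows_in_range;
        };
        
        struct JoinTab {
          std::unique_ptr<QuickSelect> quick; // set up by make_join_select()
        };
        
        // Buggy cleanup: unconditionally destroys the quick-select, so
        // later executions fall back to a table scan.
        void sort_and_cleanup_buggy(JoinTab &tab) {
          // ... sorting via tab.quick would happen here ...
          tab.quick.reset();                  // deletes the optimizer's object
        }
        
        // Fixed cleanup: only delete a quick-select this code created
        // itself; the optimizer's object survives for the next execution.
        void sort_and_cleanup_fixed(JoinTab &tab, bool created_by_sort) {
          // ... sorting via tab.quick would happen here ...
          if (created_by_sort)
            tab.quick.reset();
        }
        
        int main() {
          JoinTab tab;
          tab.quick.reset(new QuickSelect{4}); // range covers four rows
          sort_and_cleanup_fixed(tab, /*created_by_sort=*/false);
          std::printf("quick-select kept: %s\n", tab.quick ? "yes" : "no");
        }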
      
      
      sql/sql_select.cc:
        Fix for Bug#12667154: Change how create_sort_index() cleans up the
        join_tab's select and quick-select objects, in order to avoid
        deleting a quick-select object that was created outside of
        create_sort_index().
    • No commit message · 3780f473
      unknown authored
    • Bug #12752572 61579: REPLICATION FAILURE WHILE
      INNODB_AUTOINC_LOCK_MODE=1 AND USING TRIGGER · bcb5d737
      Annamalai Gurusami authored
      
      When an insert stmt like "insert into t values (1),(2),(3)" is
      executed, the autoincrement values assigned to these three rows are
      expected to be contiguous.  In the given lock mode
      (innodb_autoinc_lock_mode=1), the auto inc lock will be released
      before the end of the statement.  So to make the autoincrement
      contiguous for a given statement, we need to reserve the auto inc
      values at the beginning of the statement.  
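      
      To illustrate the idea only (hypothetical names, not the actual
      InnoDB code), here is a minimal C++ sketch of reserving a
      contiguous block of auto-increment values at statement start and
      releasing the lock immediately afterwards:
      
        #include <cstdio>
        #include <mutex>
        
        struct AutoIncCounter {
          std::mutex lock;                 // models the AUTO-INC table lock
          unsigned long long next = 1;
        
          // Claim n contiguous values for one statement, then release
          // the lock; interleaved statements can no longer split the
          // block, so the statement's values stay contiguous.
          unsigned long long reserve(unsigned long long n) {
            std::lock_guard<std::mutex> guard(lock);
            unsigned long long first = next;
            next += n;
            return first;                  // statement uses [first, first+n)
          }
        };
        
        int main() {
          AutoIncCounter counter;
          // "insert into t values (1),(2),(3)" needs three rows, so
          // three ids are reserved up front.
          unsigned long long first = counter.reserve(3);
          std::printf("ids %llu..%llu\n", first, first + 2);
        }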
      
      rb://1074 approved by Alexander Nozdrin
  2. 15 May, 2012 8 commits
  3. 14 May, 2012 1 commit
  4. 10 May, 2012 2 commits
    • 326b40c9
      Annamalai Gurusami authored
    • Bug #14007649 65111: INNODB SOMETIMES FAILS TO UPDATE ROWS INSERTED
      BY A CONCURRENT TRANSACTION · 391ea219
      Annamalai Gurusami authored
      
      The member function QUICK_RANGE_SELECT::init_ror_merged_scan()
      performs a table handler clone. InnoDB did not provide a clone
      operation: there was no ha_innobase::clone(), and handler::clone()
      does not take care of ha_innobase->prebuilt->select_lock_type.
      Because of this, for one index we did a locking read, while for the
      other index we did a non-locking (consistent) read.
      The patch introduces the ha_innobase::clone() member function,
      implemented similarly to ha_myisam::clone(): it calls the base
      class handler::clone() and then does any additional operations
      required, in particular setting
      ha_innobase->prebuilt->select_lock_type correctly.
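      
      As a rough illustration of that pattern (hypothetical types, not
      the actual handler API), the derived clone reuses the generic
      cloning step and then copies the engine-private state the base
      class cannot see:
      
        #include <memory>
        
        enum class LockType { NONE, SHARED, EXCLUSIVE };
        
        struct Handler {
          virtual ~Handler() = default;
          virtual std::unique_ptr<Handler> clone() const = 0;
        };
        
        struct InnobaseHandler : Handler {
          // Engine-private state; models prebuilt->select_lock_type.
          LockType select_lock_type = LockType::NONE;
        
          std::unique_ptr<Handler> clone() const override {
            // Step 1: generic cloning (stands in for handler::clone(),
            // which knows nothing about engine-private state).
            auto copy = std::make_unique<InnobaseHandler>();
            // Step 2: the extra work the fix adds: carry the lock mode
            // over so both handlers of a merged scan read with the same
            // locking behavior.
            copy->select_lock_type = select_lock_type;
            return copy;
          }
        };
        
        int main() {
          InnobaseHandler h;
          h.select_lock_type = LockType::SHARED;
          auto copy = h.clone();   // the clone now locks the same way
        }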
      
      rb://1060 approved by Marko
  5. 09 May, 2012 1 commit
  6. 08 May, 2012 1 commit
  7. 07 May, 2012 3 commits
    • Merge 5.5.24 back into main 5.5. · 5be07cea
      Joerg Bruehe authored
      This is a weave merge, but without any conflicts.
      In 14 source files, the copyright year needed to be updated to 2012.
    • Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
      CAUSES RESTORE PROBLEM · 1d47bbe3
      Venkata Sidagam authored
      
      Merging the fix from mysql-5.1 to mysql-5.5
      
      mysql-test/t/mysqldump.test:
        The test case added as part of this fix differs from the one in
        mysql-5.1: in mysql-5.5 and mysql-5.6, "DROP mysql database"
        fails when logging is enabled, hence those lines were removed.
    • Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
      CAUSES RESTORE PROBLEM · e7364ec2
      Venkata Sidagam authored
      Problem Statement:
      ------------------
      mysqldump does not contain the dump statements for the general_log
      and slow_log tables. That is because of the fix for Bug#26121.
      Hence, after dropping the mysql database and applying the dump with
      logging enabled, "'general_log' table not found" errors are logged
      into the server log file.
      
      Analysis:
      ---------
      As part of the fix for Bug#26121, we skipped dumping the
      general_log and slow_log tables, because the data dump of those
      tables takes LOCKS, which is not allowed for log tables.
      
      Fix:
      ----
      Instead of taking both the metadata and the data dump for those
      tables, we take only the metadata dump, which doesn't need LOCKS.
      As part of fixing the issue we came up with the algorithm below.
      Design before the fix:
      1) The mysql database has tables like db, event, ... general_log,
         ... slow_log ...
      2) Skip general_log and slow_log while preparing the tables list.
      3) Take the TL_READ lock on the tables present in the table list
         and do 'show create table'.
      4) Release the lock.
      
      Design with the fix:
      1) The mysql database has tables like db, event, ... general_log,
         ... slow_log ...
      2) Skip general_log and slow_log while preparing the tables list.
      3) Explicitly call 'show create table' for general_log and
         slow_log.
      4) Take the TL_READ lock on the tables present in the table list
         and do 'show create table'.
      5) Release the lock.
      
      While taking the metadata dump for general_log and slow_log, the
      "CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS".
      This is because we skip "DROP TABLE" for those tables:
      "DROP TABLE" fails for these tables when logging is enabled.
      The customer applies the dump with logging enabled, so if the dump
      contained "DROP TABLE" it would fail. Hence, the "DROP TABLE"
      statements for those tables were removed.
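      
      As an illustration only, here is a minimal C++ sketch of the
      resulting per-table decision; the helper names are hypothetical
      and this is not the actual mysqldump code:
      
        #include <cstdio>
        #include <cstring>
        
        // Log tables get schema-only, non-destructive DDL.
        static bool is_log_table(const char *name) {
          return std::strcmp(name, "general_log") == 0 ||
                 std::strcmp(name, "slow_log") == 0;
        }
        
        static void dump_table_ddl(const char *name) {
          if (is_log_table(name)) {
            // No DROP TABLE (it fails while logging is enabled) and no
            // data dump (it would take LOCKS, not allowed on log
            // tables); CREATE must therefore tolerate an existing table.
            std::printf("CREATE TABLE IF NOT EXISTS %s (...);\n", name);
          } else {
            std::printf("DROP TABLE IF EXISTS %s;\n", name);
            std::printf("CREATE TABLE %s (...);\n", name);
            // ... data dump for ordinary tables follows here ...
          }
        }
        
        int main() {
          dump_table_ddl("db");           // ordinary table
          dump_table_ddl("general_log");  // log table: schema only
        }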
        
      After the fix we may initially observe "Table 'mysql.general_log'
      doesn't exist" errors; that is because in the customer scenario
      the mysql database is dropped while logging is enabled, so those
      errors are expected. Once the dump taken before the
      "drop database mysql" is applied, the errors no longer appear.
      
      client/mysqldump.c:
        In get_table_structure(), added code to skip the DROP TABLE
        statements for the general_log and slow_log tables, because those
        statements fail when logging is enabled, and replaced CREATE TABLE
        with CREATE TABLE IF NOT EXISTS for those tables, to make sure the
        CREATE statement doesn't fail now that the DROP statements are
        removed.
        In dump_all_tables_in_db(), added code to call
        get_table_structure() for the general_log and slow_log tables.
      mysql-test/r/mysqldump.result:
        Added a test as part of fix for Bug #11754178
      mysql-test/t/mysqldump.test:
        Added a test as part of fix for Bug #11754178
  8. 04 May, 2012 2 commits
    • Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY
      CAUSES RESTORE PROBLEM · daafaa0f
      Venkata Sidagam authored
      Problem Statement:
      ------------------
      mysqldump does not contain the dump statements for the general_log
      and slow_log tables. That is because of the fix for Bug#26121.
      Hence, after dropping the mysql database and applying the dump with
      logging enabled, "'general_log' table not found" errors are logged
      into the server log file.
      
      Analysis:
      ---------
      As part of the fix for Bug#26121, we skipped dumping the
      general_log and slow_log tables, because the data dump of those
      tables takes LOCKS, which is not allowed for log tables.
      
      Fix:
      ----
      Instead of taking both the metadata and the data dump for those
      tables, we take only the metadata dump, which doesn't need LOCKS.
      As part of fixing the issue we came up with the algorithm below.
      Design before the fix:
      1) The mysql database has tables like db, event, ... general_log,
         ... slow_log ...
      2) Skip general_log and slow_log while preparing the tables list.
      3) Take the TL_READ lock on the tables present in the table list
         and do 'show create table'.
      4) Release the lock.
      
      Design with the fix:
      1) The mysql database has tables like db, event, ... general_log,
         ... slow_log ...
      2) Skip general_log and slow_log while preparing the tables list.
      3) Explicitly call 'show create table' for general_log and
         slow_log.
      4) Take the TL_READ lock on the tables present in the table list
         and do 'show create table'.
      5) Release the lock.
      
      While taking the metadata dump for general_log and slow_log, the
      "CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS".
      This is because we skip "DROP TABLE" for those tables:
      "DROP TABLE" fails for these tables when logging is enabled.
      The customer applies the dump with logging enabled, so if the dump
      contained "DROP TABLE" it would fail. Hence, the "DROP TABLE"
      statements for those tables were removed.
        
      After the fix we may initially observe "Table 'mysql.general_log'
      doesn't exist" errors; that is because in the customer scenario
      the mysql database is dropped while logging is enabled, so those
      errors are expected. Once the dump taken before the
      "drop database mysql" is applied, the errors no longer appear.
      
      client/mysqldump.c:
        In get_table_structure(), added code to skip the DROP TABLE
        statements for the general_log and slow_log tables, because those
        statements fail when logging is enabled, and replaced CREATE TABLE
        with CREATE TABLE IF NOT EXISTS for those tables, to make sure the
        CREATE statement doesn't fail now that the DROP statements are
        removed.
        In dump_all_tables_in_db(), added code to call
        get_table_structure() for the general_log and slow_log tables.
      mysql-test/r/mysqldump.result:
        Added a test as part of fix for Bug #11754178
      mysql-test/t/mysqldump.test:
        Added a test as part of fix for Bug #11754178
    • In Perl, to break out of a foreach loop we need to use the
      keyword "last", not "break". Fixing the failing test case. · b757c130
      Annamalai Gurusami authored
  9. 27 Apr, 2012 6 commits
  10. 26 Apr, 2012 3 commits
  11. 24 Apr, 2012 1 commit
  12. 23 Apr, 2012 8 commits