1. 30 May, 2012 2 commits
  2. 29 May, 2012 1 commit
    • Rohit Kalhans's avatar
      Bug#11762667: MYSQLBINLOG IGNORES ERRORS WHILE WRITING OUTPUT · d8b2d4a0
      Rohit Kalhans authored
      Problem: mysqlbinlog exits without any error code in case of a
      file write error. This is because calls to the
      Log_event::print() method do not return a value, so any errors
      were being ignored.
      
      Resolution: We resolve this problem by checking whether
      IO_CACHE::error == -1 after every call to Log_event::print()
      and terminating further execution if it is.
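      
      As an illustration, a minimal standalone sketch of the pattern of
      checking for an error after every print call (OutputCache and
      print_event are simplified stand-ins, not the real IO_CACHE and
      Log_event code):
      
        #include <cstdio>
        #include <cstdlib>
        
        struct OutputCache { int error; };   /* stand-in for IO_CACHE::error */
        
        static void print_event(OutputCache *cache, const char *event_text) {
          if (std::fputs(event_text, stdout) == EOF)
            cache->error = -1;               /* record the write failure */
        }
        
        int main() {
          OutputCache cache = {0};
          const char *events[] = {"# event 1\n", "# event 2\n"};
          for (const char *ev : events) {
            print_event(&cache, ev);
            if (cache.error == -1) {         /* check after every print call */
              std::fprintf(stderr, "ERROR: could not write output\n");
              return EXIT_FAILURE;           /* terminate with a non-zero exit code */
            }
          }
          return EXIT_SUCCESS;
        }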
      
      client/mysqlbinlog.cc:
        - handled error conditions during event->print() calls
        - added check for error in end_io_cache()
      mysys/my_write.c:
        Added debug code to simulate a file write error; the error
        returned is ENOSPC (no space left on the device).
      sql/log_event.cc:
        Added debug code to simulate a file write error by reducing the size of the IO cache.
      d8b2d4a0
  3. 24 May, 2012 1 commit
    • Inaam Rana's avatar
      Bug #14100254 65389: MVCC IS BROKEN WITH IMPLICIT LOCK · 01748ce1
      Inaam Rana authored
      rb://1088
      approved by: Marko Makela
      
      This bug was introduced in the early stages of the plugin. We were
      not checking for an implicit lock on a secondary index record for
      the trx_id that is stamped on the current version of the clustered
      index record, in the case where the clustered index record has a
      previous delete-marked version.
      01748ce1
  4. 21 May, 2012 1 commit
    • Annamalai Gurusami's avatar
      Bug #12752572 61579: REPLICATION FAILURE WHILE · e979417c
      Annamalai Gurusami authored
      INNODB_AUTOINC_LOCK_MODE=1 AND USING TRIGGER
      
      When an insert stmt like "insert into t values (1),(2),(3)" is
      executed, the autoincrement values assigned to these three rows are
      expected to be contiguous.  In the given lock mode
      (innodb_autoinc_lock_mode=1), the auto inc lock will be released
      before the end of the statement.  So to make the autoincrement
      contiguous for a given statement, we need to reserve the auto inc
      values at the beginning of the statement.  
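      
      As an illustration, a minimal sketch of reserving a contiguous
      range up front rather than one value per row (AutoIncCounter is a
      hypothetical stand-in, not the InnoDB implementation):
      
        #include <cstdint>
        #include <mutex>
        
        struct AutoIncCounter {
          std::mutex lock;
          std::uint64_t next = 1;
        
          /* Reserve n consecutive values at the start of the statement;
             the lock is released immediately, yet the rows of this
             statement still get a contiguous block. */
          std::uint64_t reserve(std::uint64_t n) {
            std::lock_guard<std::mutex> guard(lock);
            std::uint64_t first = next;
            next += n;
            return first;
          }
        };
        
        int main() {
          AutoIncCounter counter;
          std::uint64_t first = counter.reserve(3);  /* rows get first, first+1, first+2 */
          (void)first;
          return 0;
        }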
      
      Modified the fix based on review comment by Svoj.  
      e979417c
  5. 18 May, 2012 1 commit
    • Rohit Kalhans's avatar
      BUG#14005409 - 64624 · 781137c0
      Rohit Kalhans authored
            
      Problem: After the fix for Bug#12589870, a new field that
      stores the length of the db name was added to the buffer that
      stores the query to be executed. Unlike the plain user
      session, the replication execution did not allocate the
      necessary chunk in the Query-event constructor. This caused an
      invalid read while accessing this field.
            
      Solution: We fix this problem by allocating the necessary chunk
      in the buffer created in Query_log_event::Query_log_event()
      and storing the length of the database name there.
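      
      As an illustration, a hypothetical, simplified sketch of the
      layout issue: the buffer holding the query must also reserve room
      for the database-name length on every code path that builds it
      (pack_query below is an invented helper, not the server code):
      
        #include <cstdlib>
        #include <cstring>
        
        char *pack_query(const char *db, const char *query, std::size_t *out_len) {
          std::size_t db_len = std::strlen(db);
          std::size_t query_len = std::strlen(query);
          /* one extra byte stores db_len; forgetting to allocate it on
             one path lets the reader run past the allocation */
          std::size_t total = query_len + 1 /* '\0' */ + 1 /* db_len byte */;
          char *buf = static_cast<char *>(std::malloc(total));
          std::memcpy(buf, query, query_len + 1);
          buf[query_len + 1] = static_cast<char>(db_len);
          *out_len = total;
          return buf;
        }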
      
      sql/log_event.cc:
        Added a new field to the buffer created in the
        Query_log_event constructor and stored the length
        of the database name in it.
      781137c0
  6. 17 May, 2012 3 commits
    • Gopal Shankar's avatar
      Bug#12636001 : deadlock from thd_security_context · 21faded5
      Gopal Shankar authored
      PROBLEM:
      Threads end-up in deadlock due to locks acquired as described
      below,
      
      con1: Runs a query on a table.
        It is important that this SELECT backs off while
        trying to open t1 and enters wait_for_condition().
        The SELECT is then blocked trying to lock mysys_var->mutex,
        which is held by con3. The very significant fact here is
        that mysys_var->current_mutex still points to LOCK_open,
        even though LOCK_open is no longer held by con1 at this point.
      
      con2: Tries to drop the table used in con1, or queries some table.
        It holds LOCK_open and is blocked trying to lock
        kernel_mutex, which is held by con4.
      
      con3: Tries to kill the query run by con1.
        It holds THD::LOCK_thd_data belonging to con1 while
        trying to lock mysys_var->current_mutex belonging to con1.
        But current_mutex points to LOCK_open, which is held
        by con2.
      
      con4: Gets the InnoDB engine status.
        It holds kernel_mutex while trying to lock THD::LOCK_thd_data
        belonging to con1, which is held by con3.
      
      So while technically only con2, con3 and con4 participate in the
      deadlock, con1's mysys_var->current_mutex pointing to LOCK_open
      is a vital component of the deadlock.
      
      CYCLE = (THD::LOCK_thd_data -> LOCK_open ->
               kernel_mutex -> THD::LOCK_thd_data)
      
      FIX:
      LOCK_thd_data is responsible for protecting:
      1) thd->query and thd->query_length
      2) the VIO
      3) thd->mysys_var (used by the KILL statement and shutdown)
      4) the THD during thread delete.
      
      Among these responsibilities, 1), 2) and (3,4) are three
      independent groups. If a different lock owns responsibility
      for (3,4), the deadlock cycle described above can be avoided.
      This fix introduces LOCK_thd_kill to handle responsibilities
      (3,4), which eliminates the deadlock.
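      
      As an illustration, a minimal sketch of the split (a hypothetical
      THD-like struct with std::mutex stand-ins, not the server's
      mysql_mutex_t members):
      
        #include <mutex>
        
        struct THD_like {
          std::mutex LOCK_thd_data;   /* protects the query text and the VIO */
          std::mutex LOCK_thd_kill;   /* protects mysys_var and THD deletion,
                                         so the KILL path no longer needs the
                                         mutex that participates in the cycle */
        };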
      
      Note: The problem is not found in 5.5. The introduction of the MDL
      subsystem moved metadata-locking responsibility from the TDC/TC to
      the MDL subsystem, which reduced the responsibility of LOCK_open.
      As the use of LOCK_open was removed in open_table() and
      mysql_rm_table(), the above-mentioned cycle does not form.
      Revision IDs for these changes:
      open_table() = dlenev@mysql.com-20100727133458-m3ua9oslnx8fbbvz
      mysql_rm_table() = jon.hauglid@oracle.com-20101116100012-kxep9txz2fxy3nmw
      
      21faded5
    • unknown's avatar
      No commit message · 3e1758d3
      unknown authored
      No commit message
      3e1758d3
    • unknown's avatar
      No commit message · 932376f9
      unknown authored
      No commit message
      932376f9
  7. 16 May, 2012 3 commits
    • Annamalai Gurusami's avatar
      Bug #13943231: ALTER TABLE AFTER DISCARD MAY CRASH THE SERVER · ce9e6b5a
      Annamalai Gurusami authored
      The following scenario crashes our mysql server:
      
      1.  set global innodb_file_per_table=1;
      2.  create table t1(c1 int) engine=innodb;
      3.  alter table t1 discard tablespace;
      4.  alter table t1 add unique index(c1);
      
      Step 4 crashes the server.  This patch introduces a check for a
      discarded tablespace to avoid the crash.
      
      rb://1041 approved by Marko Makela
      ce9e6b5a
    • Venkata Sidagam's avatar
      Bug #13955256: KEYCACHE CRASHES, CORRUPTIONS/HANGS WITH, · 6b05e434
      Venkata Sidagam authored
                     FULLTEXT INDEX AND CONCURRENT DML.
      
      Problem Statement:
      ------------------
      1) Create a table with FT index.
      2) Enable concurrent inserts.
      3) In multiple threads do below operations repeatedly
         a) truncate table
         b) insert into table ....
         c) select ... match .. against .. non-boolean/boolean mode
      
      After some time we could observe two different assert core dumps
      
      Analysis:
      --------
      1) Assert core dump in key_read_cache():
      Two select threads operate in parallel on the same key
      root block.
      In the first select thread, block->status is set to BLOCK_ERROR
      because my_pread() in read_block() returns '0'.
      The truncate made the index file size 1024, and pread was asked
      to read a block of count bytes (1024 bytes) from offset 1024,
      which it cannot do since that is the end of the file; it returns
      '0' and sets "my_errno= HA_ERR_FILE_TOO_SHORT" (key_file_length
      and key_root[0] are both 1024). Since the block status has
      BLOCK_ERROR, the first select thread enters free_block(), sets
      the status to BLOCK_REASSIGNED and waits on the condition mutex
      in wait_for_readers(). The other select thread works on the same
      block, sees the status BLOCK_ERROR, enters free_block(), checks
      for BLOCK_REASSIGNED and asserts the server.
      
      2) Assert core dump in key_write_cache():
      One select thread and one insert thread.
      The select thread unlocks 'keycache->cache_lock', which allows
      other threads to continue, gets the pread() return value '0'
      (see the explanation above), and then tries to re-acquire
      'keycache->cache_lock', waiting there for the lock.
      The insert thread requests the block; the block is assigned
      from the hash list, its page_status is set to
      'PAGE_WAIT_TO_BE_READ', and it goes into read_block(), waiting
      in the queue since some other threads are performing
      reads on the same block.
      The select thread that was waiting for the 'keycache->cache_lock'
      mutex in read_block() continues after getting the my_pread()
      value '0', sets the block status to BLOCK_ERROR, and goes to
      free_block() and then wait_for_readers().
      Now the insert thread wakes up, continues, finds that
      block->status is not BLOCK_READ, and asserts.
      
      Fix:
      ---
      In the full-text code, multiple readers of the index file were not
      guarded. Hence the code below was added in _ft2_search() and
      walk_and_match().
      
      To lock the key_root, _ft2_search() now uses:
       if (info->s->concurrent_insert)
          mysql_rwlock_rdlock(&share->key_root_lock[0]);
      
      and to unlock:
       if (info->s->concurrent_insert)
         mysql_rwlock_unlock(&share->key_root_lock[0]);
      
      storage/myisam/ft_boolean_search.c:
        Since it is a recursive function, to avoid confusion in taking and
        releasing the locks, _ft2_search() was renamed to
        _ft2_search_internal(). _ft2_search() now takes the lock, calls
        _ft2_search_internal(), and releases the lock in the case of
        concurrent inserts (see the sketch below).
      storage/myisam/ft_nlq_search.c:
        Added read locks code in walk_and_match()
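      
      As an illustration, a minimal standalone sketch of that wrapper
      pattern (std::shared_mutex stands in for the mysql_rwlock_t
      key_root lock; this is not the MyISAM code itself):
      
        #include <shared_mutex>
        
        static std::shared_mutex key_root_lock;        /* stand-in for share->key_root_lock[0] */
        static bool concurrent_insert_enabled = true;  /* stand-in for info->s->concurrent_insert */
        
        /* the renamed recursive worker: no locking in here */
        static int ft2_search_internal(int depth) {
          if (depth == 0)
            return 0;
          return ft2_search_internal(depth - 1);
        }
        
        /* the public entry point takes the read lock once around the recursion */
        int ft2_search(int depth) {
          if (concurrent_insert_enabled)
            key_root_lock.lock_shared();
          int res = ft2_search_internal(depth);
          if (concurrent_insert_enabled)
            key_root_lock.unlock_shared();
          return res;
        }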
      6b05e434
    • Annamalai Gurusami's avatar
      Bug #12752572 61579: REPLICATION FAILURE WHILE · bcb5d737
      Annamalai Gurusami authored
      INNODB_AUTOINC_LOCK_MODE=1 AND USING TRIGGER
      
      When an insert stmt like "insert into t values (1),(2),(3)" is
      executed, the autoincrement values assigned to these three rows are
      expected to be contiguous.  In the given lock mode
      (innodb_autoinc_lock_mode=1), the auto inc lock will be released
      before the end of the statement.  So to make the autoincrement
      contiguous for a given statement, we need to reserve the auto inc
      values at the beginning of the statement.  
      
      rb://1074 approved by Alexander Nozdrin
      bcb5d737
  8. 15 May, 2012 4 commits
  9. 10 May, 2012 1 commit
    • Annamalai Gurusami's avatar
      Bug #14007649 65111: INNODB SOMETIMES FAILS TO UPDATE ROWS INSERTED · 391ea219
      Annamalai Gurusami authored
      BY A CONCURRENT TRANSACTION
      
      The member function QUICK_RANGE_SELECT::init_ror_merged_scan() performs
      a table handler clone. InnoDB did not provide a clone operation:
      ha_innobase::clone() did not exist, and handler::clone() does not
      take care of ha_innobase->prebuilt->select_lock_type. Because of
      this, for one index we did a locking read while for the other
      index we did a non-locking (consistent) read.
      The patch introduces the ha_innobase::clone() member function,
      implemented similarly to ha_myisam::clone(): it calls the base
      class handler::clone() and then does any additional operations
      required, setting ha_innobase->prebuilt->select_lock_type
      correctly.
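      
      As an illustration, a minimal sketch of that clone pattern
      (Handler, InnoHandler and select_lock_type are simplified
      stand-ins for handler, ha_innobase and prebuilt->select_lock_type):
      
        #include <memory>
        
        struct Handler {
          int select_lock_type = 0;          /* locking read vs. consistent read */
          virtual ~Handler() = default;
          virtual std::unique_ptr<Handler> clone() const = 0;
        };
        
        struct InnoHandler : Handler {
          std::unique_ptr<Handler> clone() const override {
            auto copy = std::make_unique<InnoHandler>();  /* fresh handler, as the generic clone creates */
            copy->select_lock_type = select_lock_type;    /* the extra step: carry the lock mode over */
            return copy;
          }
        };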
      
      rb://1060 approved by Marko
      391ea219
  10. 08 May, 2012 1 commit
  11. 07 May, 2012 1 commit
    • Venkata Sidagam's avatar
      Bug #11754178 45740: MYSQLDUMP DOESN'T DUMP GENERAL_LOG AND SLOW_QUERY · e7364ec2
      Venkata Sidagam authored
                           CAUSES RESTORE PROBLEM
      Problem Statement:
      ------------------
      mysqldump does not produce dump statements for the general_log and
      slow_log tables. That is because of the fix for Bug#26121. Hence,
      after dropping the mysql database and applying the dump with
      logging enabled, "'general_log' table not found" errors are logged
      in the server log file.
      
      Analysis:
      ---------
      As part of the fix for Bug#26121, we skipped dumping the
      general_log and slow_log tables, because the data dump of those
      tables takes LOCKS, which is not allowed for log tables.
      
      Fix:
      ----
      Instead of taking both the metadata and the data dump for those
      tables, we take only the metadata dump, which doesn't need LOCKS.
      As part of fixing the issue we came up with the algorithm below.
      Design before fix:
      1) mysql database is having tables like db, event,... general_log,
         ... slow_log...
      2) Skip general_log and slow_log while preparing the tables list
      3) Take the TL_READ lock on tables which are present in the table 
         list and do 'show create table'.
      4) Release the lock.
      
      Design with the fix:
      1) mysql database is having tables like db, event,... general_log,
         ... slow_log...
      2) Skip general_log and slow_log while preparing the tables list
      3) Explicitly call the 'show create table' for general_log and 
         slow_log
      4) Take the TL_READ lock on tables which are present in the table 
         list and do 'show create table'.
      5) Release the lock.
      
      While taking the metadata dump for general_log and slow_log,
      "CREATE TABLE" is replaced with "CREATE TABLE IF NOT EXISTS".
      This is because we skipped "DROP TABLE" for those tables:
      "DROP TABLE" fails for these tables if logging is enabled.
      The customer applies the dump with logging enabled, so if the
      dump had "DROP TABLE" it would fail. Hence, the "DROP TABLE"
      stmts for those tables were removed.
      
      After the fix we could still observe "Table 'mysql.general_log'
      doesn't exist" errors initially; that is because in the customer
      scenario the mysql database is dropped while logging is enabled,
      so those errors are expected. Once we apply the dump taken
      before the "drop database mysql", the errors are no longer
      there.
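      
      As an illustration, a minimal sketch of the changed dump flow
      (hypothetical helper names; not the actual mysqldump.c functions):
      
        #include <iostream>
        #include <string>
        #include <vector>
        
        static bool is_log_table(const std::string &t) {
          return t == "general_log" || t == "slow_log";
        }
        
        void dump_db(const std::vector<std::string> &tables) {
          std::vector<std::string> lockable;
          for (const auto &t : tables)
            if (!is_log_table(t))
              lockable.push_back(t);   /* log tables stay out of the locked list */
          for (const auto &t : tables)
            if (is_log_table(t))       /* schema only, no DROP TABLE */
              std::cout << "CREATE TABLE IF NOT EXISTS " << t << " (...);\n";
          /* then lock `lockable`, run SHOW CREATE TABLE on them, release the lock */
        }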
      
      client/mysqldump.c:
        In get_table_structure() added code to skip the DROP TABLE stmts for general_log
        and slow_log tables, because when logging is enabled those stmts will fail. Also
        replaced CREATE TABLE with CREATE TABLE IF NOT EXISTS for those tables, to make
        sure the CREATE stmt for those tables doesn't fail since we removed the DROP
        stmts for them.
        In dump_all_tables_in_db() added code to call get_table_structure() for 
        general_log and slow_log tables.
      mysql-test/r/mysqldump.result:
        Added a test as part of fix for Bug #11754178
      mysql-test/t/mysqldump.test:
        Added a test as part of fix for Bug #11754178
      e7364ec2
  12. 27 Apr, 2012 1 commit
  13. 26 Apr, 2012 1 commit
  14. 23 Apr, 2012 2 commits
  15. 20 Apr, 2012 2 commits
    • Nuno Carvalho's avatar
      BUG#13979418: SHOW BINLOG EVENTS MAY CRASH THE SERVER · cdaae169
      Nuno Carvalho authored
      The function mysql_show_binlog_events has a local stack variable
      'LOG_INFO linfo;', which is assigned to thd->current_linfo; however,
      this variable goes out of scope and is destroyed before
      thd->current_linfo is cleaned up.
      
      The problem is solved by moving 'LOG_INFO linfo;' to function scope.
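      
      As an illustration, a minimal sketch of the lifetime hazard and the
      fix (LogInfo and Thd are simplified stand-ins for LOG_INFO and THD):
      
        struct LogInfo { int pos = 0; };
        struct Thd { LogInfo *current_linfo = nullptr; };
        
        void show_binlog_events(Thd *thd) {
          LogInfo linfo;                  /* fix: declared at function scope */
          thd->current_linfo = &linfo;
          {
            /* before the fix, linfo lived in an inner block like this one,
               so thd->current_linfo dangled once the block was left while
               other code could still dereference it */
          }
          /* ... read and send the events ... */
          thd->current_linfo = nullptr;   /* cleared before linfo is destroyed */
        }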
      cdaae169
    • Andrei Elkin's avatar
      BUG#11754117 incorrect logging of INSERT into auto-increment · 49e484c8
      Andrei Elkin authored
      BUG#11761686 insert_id event is not filtered.
        
      Two issues are covered.
        
      An INSERT into an autoincrement field which is not the first part of a
      composite primary key is unsafe by the autoincrement logging design.
      The case is specific to the MyISAM engine because InnoDB does not
      allow such a table definition.
      
      However, no warning was issued and no row-format logging was done in
      MIXED mode; that is now fixed.
        
      Int-, Rand- and User-var log events were not filtered along with their
      parent query, which made it possible for them to corrupt the execution
      context of the following query.
      
      This is fixed by deferring their execution until the parent query
      (see the sketch below).
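      
      As an illustration, a minimal sketch of the deferral idea (a
      hypothetical container, not the actual Deferred_log_events class):
      
        #include <functional>
        #include <vector>
        
        struct DeferredEvents {
          std::vector<std::function<void()>> pending;
        
          /* Int/Rand/User-var events only set up session context, so they
             are queued instead of being applied immediately */
          void add(std::function<void()> ev) { pending.push_back(std::move(ev)); }
        
          /* run right before the parent query is executed */
          void execute_and_clear() {
            for (auto &ev : pending) ev();
            pending.clear();
          }
        
          /* parent query filtered out: drop the context events with it */
          void drop() { pending.clear(); }
        };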
      
      ******
      Bug#11754117 
      
      Post review fixes.
      
      mysql-test/suite/rpl/r/rpl_auto_increment_bug45679.result:
        a new result file is added.
      mysql-test/suite/rpl/r/rpl_filter_tables_not_exist.result:
        results updated.
      mysql-test/suite/rpl/t/rpl_auto_increment_bug45679.test:
        regression test for BUG#11754117-45670 is added.
      mysql-test/suite/rpl/t/rpl_filter_tables_not_exist.test:
        regression test for filtering issue of BUG#11754117 - 45670 is added.
      sql/log_event.cc:
        Logics are added for deferring and executing events associated 
        with the Query event.
      sql/log_event.h:
        Interface to deferred events batch execution is added.
      sql/rpl_rli.cc:
        initialization for new RLI members is added.
      sql/rpl_rli.h:
        New members to RLI are added to facilitate deferred events gathering
        and execution control;
        two general character RLI cleanup methods are constructed.
      sql/rpl_utility.cc:
        Deferred_log_events methods are defined.
      sql/rpl_utility.h:
        A new class Deferred_log_events is defined to implement
        IRU events gathering, execution and cleanup.
      sql/slave.cc:
        Necessary changes to initialize `rli->deferred_events' and prevent
        deferred event deletion in the main read-exec branch.
      sql/sql_base.cc:
        A new safe-check function for multi-part pk with auto-increment is defined
        and deployed in lock_tables().
      sql/sql_class.cc:
        Initialization for a new member and replication cleanups are added
        to THD class.
      sql/sql_class.h:
        THD class receives a new member to hold a specific execution
        context for slave applier.
      sql/sql_parse.cc:
        Execution of the deferred events is started prior to their parent query.
      49e484c8
  16. 19 Apr, 2012 1 commit
    • Mayank Prasad's avatar
      BUG#12427262 : 60961: SHOW TABLES VERY SLOW WHEN NOT IN SYSTEM DISK CACHE · bf4161ad
      Mayank Prasad authored
      Reason:
       This is a regression that happened because of code refactoring
       done in 5.1 (coming from 5.0).
      
      Issue: 
       While doing "Show tables" lex->verbose was being checked to avoid opening
       FRM files to get table type. In case of "Show full table", lex->verbose
       is true to indicate table type is required. In 5.0, this check was
       present which got missing in >=5.5.
      
      Fix:
       Added the required check to avoid opening FRM files unnecessarily in case
       of "Show tables".
      bf4161ad
  17. 18 Apr, 2012 4 commits
    • Tor Didriksen's avatar
      892106db
    • Tor Didriksen's avatar
      Backport 5.5=>5.1 Patch for Bug#13805127: · 11b2cf4f
      Tor Didriksen authored
      Stored program cache produces wrong result in same THD.
      11b2cf4f
    • Nuno Carvalho's avatar
      WL#6236: Allow SHOW MASTER LOGS and SHOW BINARY LOGS with REPLICATION CLIENT · 448c3d62
      Nuno Carvalho authored
      Currently SHOW MASTER LOGS and SHOW BINARY LOGS require the SUPER
      privilege. Monitoring tools (such as MEM) often want to check this 
      output - for instance MEM generates the SUM of the sizes of the logs 
      reported here, and puts that in the Replication overview within the MEM
      Dashboard.
      However, because of the SUPER requirement, these tools often have an 
      account that holds open the connection whilst monitoring, and can lock
      out administrators when the server gets overloaded and reaches
      max_connections - there is already another SUPER privileged account
      connected, the "monitor". 
      
      As SHOW MASTER STATUS, and all other replication related statements,
      can be run with either REPLICATION CLIENT or SUPER privileges, this
      worklog makes SHOW MASTER LOGS and SHOW BINARY LOGS consistent with
      this as well, allowing both of these commands with either SUPER or
      REPLICATION CLIENT.
      This means monitoring tools no longer require the SUPER privilege,
      which is safer in overloaded situations as well as more secure, since
      lighter privileges can be given to the users of such tools or scripts.
      448c3d62
    • Chaithra Gopalareddy's avatar
      Bug#12713907:STRANGE OPTIMIZE & WRONG RESULT UNDER · 25f82f8a
      Chaithra Gopalareddy authored
                         ORDER BY COUNT(*) LIMIT.
      
      PROBLEM:
      With respect to the problem in the bug description, we
      exhibit different behaviors for the two tables
      presented, because InnoDB statistics (rec_per_key
      in this case) are updated for the first table
      and not for the second one. As a result the
      query plan gets changed in test_if_skip_sort_order
      to use an 'index' scan. Hence the difference in the
      explain output. (NOTE: We can reproduce the problem
      with the first table by reducing the number of tuples
      and changing the table structure.)
      
      The varied output for the query on the second table
      is caused by the resulting query plan change.
      When a query plan is changed to use an 'index' scan,
      we set keyread to TRUE immediately after the call to
      test_if_skip_sort_order. If for some reason
      we drop this index scan for a filesort later on,
      we fetch only the keys, not the entire tuple.
      As a result we would see junk values in the result set.
      
      Following is the code flow:
      
      Call test_if_skip_sort_order
      -Choose an index to give sorted output
      -If this is a covering index, set_keyread to TRUE
      -Set the scan to INDEX scan
      
      Call test_if_skip_sort_order second time
      -Index is not chosen (note that we do not pass the
      actual limit value second time. Hence we do not choose
      index scan second time which in itself is a bug fixed
      in 5.6 with WL#5558)
      -goto filesort
      
      Call filesort
      -Create quick range on a different index
      -Since keyread is set to TRUE, we fetch only the columns of
      the index
      -results in the required columns are not fetched
      
      FIX:
      Remove the call to set_keyread(TRUE) from
      test_if_skip_sort_order. The access function, which is
      'join_read_first' or 'join_read_last', calls set_keyread anyway.
      
      
      mysql-test/r/func_group_innodb.result:
        Added test result for Bug#12713907
      mysql-test/t/func_group_innodb.test:
        Added test case for Bug#12713907
      sql/sql_select.cc:
        Remove the call to set_keyread as we do it from access
        functions 'join_read_first' and 'join_read_last'
      25f82f8a
  18. 17 Apr, 2012 1 commit
  19. 10 Apr, 2012 1 commit
  20. 09 Apr, 2012 1 commit
  21. 06 Apr, 2012 2 commits
    • Mayank Prasad's avatar
      BUG#13738989 : 62136 : FAILED TO FETCH SELECT RESULT USING EMBEDDED MYSQLD · fe3f7993
      Mayank Prasad authored
      Background : 
      In mysql-5.1, in the fix for bug#47485, code was changed for the
      mysql client (libmysql/libmysql.c) but the corresponding code was
      not changed for embedded mysql. In that code change, after execution
      of a statement, mysql_stmt_store_result() checks for mysql->state
      to be MYSQL_STATUS_STATEMENT_GET_RESULT, instead of
      MYSQL_STATUS_GET_RESULT (as it did earlier).
      
      Reason:
      In the embedded mysql code, after execution, mysql->state was not
      set to MYSQL_STATUS_STATEMENT_GET_RESULT, so it was throwing an
      OUT_OF_SYNC error.
      
      Fix:
      Fixed the code in libmysqld/lib_sql.cc to have mysql->state
      to be set to MYSQL_STATUS_STATEMENT_GET_RESULT after execution.
      
      fe3f7993
    • Georgi Kodinov's avatar
      Bug #13934049: 64884: LOGINS WITH INCORRECT PASSWORD ARE ALLOWED · 7dcf0a66
      Georgi Kodinov authored
      Fixed an improper type conversion on return that can make the server accept
      logins with a wrong password.
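      
      As an illustration of this class of bug, a hypothetical sketch (not
      the actual server code) of how an implicit narrowing conversion on
      return can turn a non-zero "mismatch" result into zero, i.e. into
      "passwords match":
      
        #include <cstdio>
        
        static char check_password_bad(int cmp_result) {
          return cmp_result;            /* e.g. 256 truncates to 0 on an 8-bit char */
        }
        
        static bool check_password_good(int cmp_result) {
          return cmp_result != 0;       /* compare explicitly, then convert */
        }
        
        int main() {
          int mismatch = 256;           /* any non-zero value means "wrong password" */
          std::printf("bad: %d  good: %d\n",
                      check_password_bad(mismatch),
                      check_password_good(mismatch));
          return 0;
        }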
      7dcf0a66
  22. 04 Apr, 2012 1 commit
    • Sergey Glukhov's avatar
      Bug#11766300 59387: FAILING ASSERTION: CURSOR->POS_STATE == 1997660512 (BTR_PCUR_IS_POSITIONE · b5c690aa
      Sergey Glukhov authored
      Bug#13639204 64111: CRASH ON SELECT SUBQUERY WITH NON UNIQUE INDEX
      The crash happened due to a wrong calculation of the
      key length during creation of the reference for the
      sort order index. The problem is that
      keyuse->used_tables can have OUTER_REF_TABLE_BIT enabled
      but the used_tables parameter (of the create_ref_for_key()
      function) does not have it. So key parts which have
      OUTER_REF_TABLE_BIT are omitted, which could lead to an
      incorrect key length calculation (zero key length).
      
      
      mysql-test/r/subselect_innodb.result:
        test result
      mysql-test/t/subselect_innodb.test:
        test case
      sql/sql_select.cc:
        added OUTER_REF_TABLE_BIT to the used_tables parameter
        for create_ref_for_key() function.
      storage/innobase/handler/ha_innodb.cc:
        added assertion, request from Inno team
      storage/innodb_plugin/handler/ha_innodb.cc:
        added assertion, request from Inno team
      b5c690aa
  23. 28 Mar, 2012 3 commits
    • Praveenkumar Hulakund's avatar
      Bug#11763507 - 56224: FUNCTION NAME IS CASE-SENSITIVE · f3d5127f
      Praveenkumar Hulakund authored
      Analysis:
      -------------------------------
      According to the Manual
      (http://dev.mysql.com/doc/refman/5.1/en/identifier-case-sensitivity.html):
      "Column, index, stored routine, and event names are not case sensitive on any
      platform, nor are column aliases."
      
      In other words, 'lower_case_table_names' does not affect the behaviour of 
      those identifiers.
      
      On the other hand, trigger names are case sensitive on some platforms,
      and case insensitive on others. 'lower_case_table_names' does not affect
      the behaviour of trigger names either.
      
      The bug was that SHOW statements did case sensitive comparison
      for stored procedure / stored function / event names.
      
      Fix:
      Modified the code so that the comparison is case insensitive for
      routines and events in the "SHOW" operation.
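      
      As an illustration, a minimal sketch of the case-insensitive match
      (strcasecmp is a POSIX stand-in for the server's collation-aware
      my_strcasecmp; routine_name_matches is a hypothetical helper):
      
        #include <strings.h>   /* strcasecmp */
        
        bool routine_name_matches(const char *stored, const char *requested) {
          return strcasecmp(stored, requested) == 0;   /* not strcmp */
        }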
      
      This commit only fixes the test failures caused by the actual code fix.
      f3d5127f
    • Sunny Bains's avatar
      Merge from mysql-5.0 · 83a5a20d
      Sunny Bains authored
      83a5a20d
    • Sunny Bains's avatar
      Bug# 13847885 - PURGING STALLS WHEN PURGE_SYS->N_PAGES_HANDLED OVERFLOWS · 65126ffa
      Sunny Bains authored
      Change the type of purge_sys_t::n_pages_handled and purge_sys_t::handle_limit
      to ulonglong from ulint. On a 32 bit system doing ~700 deletes per second the
      counters can overflow in ~3.5 months, if they are 32 bit.
      
      Approved by Jimmy Yang over IM.
      65126ffa
  24. 27 Mar, 2012 1 commit