1. 21 Aug, 2012 2 commits
    • Marko Mäkelä's avatar
      Merge mysql-5.1 to mysql-5.5. · b2529335
      Marko Mäkelä authored
      b2529335
    • Marko Mäkelä's avatar
      Fix regression from Bug#12845774 OPTIMISTIC INSERT/UPDATE USES WRONG · 3f249921
      Marko Mäkelä authored
      HEURISTICS FOR COMPRESSED PAGE SIZE
      
      The fix of Bug#12845774 was supposed to skip known-to-fail
      btr_cur_optimistic_insert() calls. There was only one such call, in
      btr_cur_pessimistic_update(). All other callers of
      btr_cur_pessimistic_insert() would release and reacquire the B-tree
      page latch before attempting the pessimistic insert. This would allow
      other threads to restructure the B-tree, allowing (and requiring) the
      insert to succeed as an optimistic (single-page) operation.
      
      Failure to attempt an optimistic insert before a pessimistic one would
      trigger an attempt to split an empty page.
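
      As an illustration of the pattern this fix restores, here is a
      minimal, self-contained sketch (hypothetical simplified types, not
      the actual InnoDB cursor/mtr interfaces): the single-page
      optimistic insert is attempted first, and the page-splitting
      pessimistic path is used only when it fails.

          #include <cstddef>
          #include <vector>

          // Hypothetical stand-in for an index page holding records.
          struct Page {
            std::vector<int> recs;
            std::size_t capacity;
          };

          // Optimistic insert: succeeds only if the record fits into the
          // current page.
          static bool optimistic_insert(Page &page, int rec) {
            if (page.recs.size() >= page.capacity) return false;  // page full
            page.recs.push_back(rec);
            return true;
          }

          // Pessimistic insert: "splits" the page (here simply by growing its
          // capacity) and then inserts.  Splitting an empty page would be
          // meaningless, which is what the regression led to.
          static bool pessimistic_insert(Page &page, int rec) {
            if (page.recs.empty()) return false;   // nothing to split
            page.capacity = page.capacity * 2 + 1;
            page.recs.push_back(rec);
            return true;
          }

          // The caller-side pattern: try the optimistic (single-page) insert
          // first and fall back to the pessimistic path only on failure.
          bool insert_record(Page &page, int rec) {
            return optimistic_insert(page, rec) || pessimistic_insert(page, rec);
          }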
      
      rb:1234 approved by Sunny Bains
      3f249921
  2. 20 Aug, 2012 6 commits
  3. 17 Aug, 2012 3 commits
  4. 16 Aug, 2012 4 commits
    • Marko Mäkelä's avatar
      Merge mysql-5.1 to mysql-5.5. · bd6dbf21
      Marko Mäkelä authored
      bd6dbf21
    • Marko Mäkelä's avatar
      Bug#12595091 POSSIBLY INVALID ASSERTION IN BTR_CUR_PESSIMISTIC_UPDATE() · e288e649
      Marko Mäkelä authored
      Facebook reported a case where a page compresses so well that
      btr_cur_optimistic_update() returns DB_UNDERFLOW, but updating a
      record changes the compression rate so radically that
      btr_cur_insert_if_possible() cannot insert in place even after
      reorganizing and recompressing the page, causing the assertion to fail.
      
      rb:1220 approved by Sunny Bains
      e288e649
    • Marko Mäkelä's avatar
      Bug#12845774 OPTIMISTIC INSERT/UPDATE USES WRONG HEURISTICS FOR · 6d7f6baa
      Marko Mäkelä authored
      COMPRESSED PAGE SIZE
      
      This was submitted as MySQL Bug 61456, and a patch was provided by
      Facebook. This patch follows the same idea, but instead of adding a
      parameter to btr_cur_pessimistic_insert(), we simply remove the
      btr_cur_optimistic_insert() call there and add it to the only caller
      that needs it.
      
      btr_cur_pessimistic_insert(): Do not try btr_cur_optimistic_insert().
      
      btr_insert_on_non_leaf_level_func(): Invoke btr_cur_optimistic_insert()
      before invoking btr_cur_pessimistic_insert().
      
      btr_cur_pessimistic_update(): Clarify in a comment why it is not
      necessary to invoke btr_cur_optimistic_insert().
      
      btr_root_raise_and_insert(): Assert that the root page is not empty.
      An empty root could only occur if a pessimistic insert (involving a
      split or merge) were performed without first attempting an optimistic
      (intra-page) insert.
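
      As a hedged illustration of the new assertion (simplified,
      hypothetical types rather than the real btr_root_raise_and_insert()
      signature): raising the root only makes sense when the root already
      holds at least one record, which is guaranteed when an optimistic
      insert has been attempted first.

          #include <cassert>
          #include <vector>

          struct Page { std::vector<int> recs; };

          // Sketch of the invariant added by this fix: before the root page
          // is "raised" (its contents moved to a new child so that the tree
          // grows by one level), the root must not be empty.  An empty root
          // would indicate a pessimistic insert issued without a prior
          // optimistic attempt.
          void root_raise_and_insert(Page &root, int rec) {
            assert(!root.recs.empty() && "root must not be empty when raised");
            // ... in the real code the root contents are moved to a new child
            // page and the record is inserted there; this sketch just appends.
            root.recs.push_back(rec);
          }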
      
      rb:1219 approved by Sunny Bains
      6d7f6baa
    • Marko Mäkelä's avatar
      Bug#13523839 ASSERTION FAILURES ON COMPRESSED INNODB TABLES · 95247de2
      Marko Mäkelä authored
      btr_cur_optimistic_insert(): Remove a bogus assertion. The insert may
      fail after reorganizing the page.
      
      btr_cur_optimistic_update(): Do not attempt to reorganize compressed pages,
      because compression may fail after reorganization.
      
      page_copy_rec_list_start(): Use page_rec_get_nth() to restore to the
      ret_pos, which may also be the page infimum.
      
      rb:1221
      95247de2
  5. 15 Aug, 2012 2 commits
    • Mattias Jonsson's avatar
      manual merge 5.1->5.5 · 404cce0f
      Mattias Jonsson authored
      404cce0f
    • Mattias Jonsson's avatar
      Bug#13025132 - PARTITIONS USE TOO MUCH MEMORY · bcee9f18
      Mattias Jonsson authored
      The buffer for the current read row from each partition
      (m_ordered_rec_buffer) used for sorted reads was
      allocated on open and freed when the ha_partition handler
      was closed or destroyed.
      
      For tables with many partitions and big records this could
      take up too much valuable memory.
      
      The solution is to allocate the memory only when it is needed and to
      free it when it is no longer needed, i.e. allocate it in index_init
      and free it in index_end (and, to handle failures, also free it on
      reset, close etc.).

      Also, only the memory actually needed according to partition pruning
      is allocated.
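
      A hedged, self-contained sketch of this allocation pattern
      (hypothetical class, not the actual ha_partition code): the
      per-partition row buffer is allocated lazily in index_init(), sized
      only for the partitions that survive pruning, and released in
      index_end().

          #include <cstddef>
          #include <vector>

          // Hypothetical, simplified stand-in for a partitioned-table handler.
          class PartitionHandler {
           public:
            explicit PartitionHandler(std::size_t rec_len) : rec_len_(rec_len) {}

            // Allocate the ordered-read buffer only when an index scan starts,
            // and only for the partitions that survived pruning.
            void index_init(const std::vector<bool> &used_partitions) {
              std::size_t used = 0;
              for (bool u : used_partitions) used += u ? 1 : 0;
              ordered_rec_buffer_.assign(used * rec_len_, 0);
            }

            // Release the buffer as soon as the scan is over (the same release
            // would be done from reset()/close() paths to handle failures).
            void index_end() {
              ordered_rec_buffer_.clear();
              ordered_rec_buffer_.shrink_to_fit();
            }

           private:
            std::size_t rec_len_;
            // Analogue of m_ordered_rec_buffer, one record slot per used partition.
            std::vector<unsigned char> ordered_rec_buffer_;
          };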
      
      Manually tested that it does not use as much memory and
      releases it after queries.
      bcee9f18
  6. 14 Aug, 2012 3 commits
    • Venkata Sidagam's avatar
      Bug #12992993 MYSQLHOTCOPY FAILS IF VIEW EXISTS · 94bd7bd6
      Venkata Sidagam authored
      Problem description:
      mysqlhotcopy fails if a view is present in the database.
      
      Analysis:
      Before 5.5, 'FLUSH TABLES <tbl_name> ... WITH READ LOCK' was able
      to acquire locks for all tables (i.e. base tables and views).
      From 5.5 onwards, 'FLUSH TABLES <tbl_name> ... WITH READ LOCK' no
      longer works for views, because taking flush locks on views is
      not valid.
      
      Fix:
      Take a flush lock for the base tables and a read lock for the views
      separately.
      
      Note: most of the patch has been backported from bug#13006947's patch
      94bd7bd6
    • Sujatha Sivakumar's avatar
      merge from 5.1 to 5.5 · 3af67068
      Sujatha Sivakumar authored
      3af67068
    • Sujatha Sivakumar's avatar
      Bug#13596613:SHOW SLAVE STATUS GIVES WRONG OUTPUT WITH · 03bfc41b
      Sujatha Sivakumar authored
      MASTER-MASTER AND USING SET USE
      
      Problem:
      =======
      In a master-master set-up, a master can show a wrong
      'SHOW SLAVE STATUS' output.
      
      Requirements:
      - master-master
      - log_slave_updates
      
      This happens when SET user-variables are used and their values are
      then used to perform writes. From then on, the master that performed
      the insert shows a wrong SHOW SLAVE STATUS, and it is not updated
      until a write happens on the other master. On "Master A" the
      "exec_master_log_pos" does not get updated.
      
      Analysis:
      ========
      The slave receives a "User_var" event from the master and, after
      applying it, writes the applied event into its own binary log when
      the "log_slave_updates" option is enabled. When writing this event
      the slave should use the originating server-id, but in the above
      case the server always logs "user var events" with its own global
      server-id. Because of this, in "master-master" replication the
      "User_var_event" is not skipped when it comes back to the
      originating server. "User_var_events" are context-based events and
      are always followed by a query event that marks the end of their
      group. Due to the logging problem above, the "User_var_event" is
      never skipped whereas its corresponding query event is skipped.
      Hence the "User_var" event keeps waiting for the next query event
      and "Exec_master_log_position" is not updated properly.
      
      Fix:
      ===
      The `MYSQL_BIN_LOG::write' function is used to write events into
      the binary log. Within this function a new "User_var_log_event"
      object is created and used to write the "User_var" event into the
      binlog. The "User var" event is inherited from "Log_event", which
      has several overloaded constructors. When a "THD" object is
      present, the "Log_event(thd, ...)" constructor should be used to
      initialise the object; the minimal "Log_event()" constructor should
      be used only in the absence of a valid "THD" object. In the problem
      described above, the default minimal constructor was always used,
      which is incorrect. It is now replaced with "Log_event(thd, ...)".
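
      A hedged sketch of the constructor-selection issue (simplified,
      hypothetical classes; the real Log_event/User_var_log_event
      hierarchy carries many more fields): the minimal constructor stamps
      the event with the server's own id, while the THD-aware constructor
      preserves the originating server id, which is what allows the event
      to be skipped when it circles back.

          #include <cstdint>
          #include <iostream>

          static const uint32_t global_server_id = 1;  // this server's own id

          struct THD {                    // hypothetical, minimal session object
            uint32_t originating_server_id;
          };

          struct LogEvent {               // hypothetical stand-in for Log_event
            uint32_t server_id;
            LogEvent() : server_id(global_server_id) {}   // minimal constructor
            explicit LogEvent(const THD &thd)             // THD-aware constructor
                : server_id(thd.originating_server_id) {}
          };

          int main() {
            THD thd{2};              // event originally produced by server 2
            LogEvent wrong;          // bug: logged with our own server id (1)
            LogEvent right(thd);     // fix: keeps the originating id (2)
            std::cout << wrong.server_id << ' ' << right.server_id << '\n';
            // With the originating id preserved, server 2 recognises and skips
            // the User_var event when it comes back in a master-master setup.
            return 0;
          }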
      
      sql/log_event.h:
        Replaced the default constructor with another constructor
        which takes "THD" object as an argument.
      03bfc41b
  7. 13 Aug, 2012 1 commit
  8. 11 Aug, 2012 2 commits
  9. 09 Aug, 2012 10 commits
    • Sergey Glukhov's avatar
      5.1 -> 5.5 merge · 51672ec2
      Sergey Glukhov authored
      51672ec2
    • Sergey Glukhov's avatar
      Bug #14409015 MEMORY LEAK WHEN REFERENCING OUTER FIELD IN HAVING · 2f30b340
      Sergey Glukhov authored
      When resolving outer fields, Item_field::fix_outer_fields()
      creates new Item_refs for each execution of a prepared statement, so
      these must be allocated in the runtime memroot. The memroot switching
      before resolving JOIN::having causes these to be allocated in the
      statement root, leaking memory for each PS execution.
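
      A hedged, self-contained sketch of why the memroot choice matters
      (hypothetical Arena class, not the actual MEM_ROOT API): objects
      placed in the statement arena live until the prepared statement is
      deallocated, so anything created once per execution must go into
      the runtime arena instead.

          #include <cstddef>
          #include <memory>
          #include <vector>

          // Hypothetical arena: everything allocated from it is released in
          // one shot when the arena is cleared.
          class Arena {
           public:
            void *alloc(std::size_t n) {
              blocks_.push_back(std::make_unique<unsigned char[]>(n));
              bytes_ += n;
              return blocks_.back().get();
            }
            void clear() { blocks_.clear(); bytes_ = 0; }
            std::size_t bytes() const { return bytes_; }
           private:
            std::vector<std::unique_ptr<unsigned char[]>> blocks_;
            std::size_t bytes_ = 0;
          };

          int main() {
            Arena statement_arena;  // lives as long as the prepared statement
            Arena runtime_arena;    // cleared after every execution

            for (int exec = 0; exec < 1000; ++exec) {
              // Bug pattern: per-execution Item_ref-like objects allocated in
              // the statement arena accumulate across executions.
              statement_arena.alloc(64);
              // Fixed pattern: allocate them in the runtime arena instead.
              runtime_arena.alloc(64);
              runtime_arena.clear();  // released at the end of each execution
            }
            // statement_arena now holds 64 bytes per execution (the leak);
            // runtime_arena is empty again.
            return statement_arena.bytes() > runtime_arena.bytes() ? 0 : 1;
          }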
      
      
      sql/item_subselect.cc:
        Addendum fix for Bug 11829691: the item could be created in the
        runtime memroot, so we need to use real_item instead.
      2f30b340
    • Mattias Jonsson's avatar
      Bug#14342883: SELECT QUERY RETURNS NOT ALL · 6592afd5
      Mattias Jonsson authored
      ROWS THAT ARE EXPECTED
      
      For non range/list partitioned tables (i.e. HASH/KEY):
      
      When prune_partitions finds a multi-range list (or, in this test,
      '<>') for a field of the partition index, it continues with the
      next field of the partition index and uses that for pruning, even
      though it could not make any use of the multi-range (i.e. even
      though the previous field could not be used). As a result,
      partitions are pruned away based on the last field alone, leaving
      only partitions that match the last field in the partition index
      and excluding partitions that might match once the previous fields
      are taken into account.
      
      Fixed by skipping the rest of the partitioning key fields/parts if
      the current key field/part could not be used, as sketched below.
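
      A hedged sketch of the corrected behaviour (hypothetical types; the
      real logic operates on key ranges inside prune_partitions): as soon
      as one key part cannot be reduced to a single point, the remaining
      parts must not be used for pruning.

          #include <vector>

          // Hypothetical description of one partition-index field in the WHERE
          // condition: either it is restricted to a single value, or it is not
          // (multi-range, '<>', etc.).
          struct KeyPart {
            bool single_point;
            int value;  // meaningful only when single_point is true
          };

          // Returns the key-part values usable for HASH/KEY pruning.  Fixed
          // behaviour: stop at the first part that is not a single point
          // instead of skipping it and continuing with later parts.
          std::vector<int> prunable_prefix(const std::vector<KeyPart> &parts) {
            std::vector<int> usable;
            for (const KeyPart &p : parts) {
              if (!p.single_point) break;  // the fix: ignore all later parts too
              usable.push_back(p.value);
            }
            return usable;
          }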
      
      Also note that it is the order of the fields in the CREATE TABLE
      statement that triggers this bug, not the order of the fields in
      the primary/unique key or in PARTITION BY KEY (): the partitioning
      index is created with the same field order as in the CREATE TABLE.
      For the bug to appear, the field that is not restricted to a single
      point (not an equality, or a multi-point range) must not be the
      last field in the partitioning expression; the last field must be
      a single point and some previous field must be a multi-point range.
      6592afd5
    • unknown's avatar
      No commit message · 6e4b8b02
      unknown authored
      No commit message
      6e4b8b02
    • unknown's avatar
      No commit message · 776ae950
      unknown authored
      No commit message
      776ae950
    • Marko Mäkelä's avatar
      Null merge from mysql-5.1. · 250270a8
      Marko Mäkelä authored
      250270a8
    • Marko Mäkelä's avatar
      Merge from mysql-5.1 to working copy. · eede4140
      Marko Mäkelä authored
      eede4140
    • Marko Mäkelä's avatar
      Merge mysql-5.1 to mysql-5.5. · 05c6614d
      Marko Mäkelä authored
      05c6614d
    • Marko Mäkelä's avatar
      Bug#14399148 INNODB TABLES UNDER LOAD PRODUCE DUPLICATE COPIES OF ROWS · bb849479
      Marko Mäkelä authored
      IN QUERIES
      
      This bug was caused by an incorrect fix of
      Bug#13807811 BTR_PCUR_RESTORE_POSITION() CAN SKIP A RECORD
      
      There was nothing wrong with btr_pcur_restore_position(), but with the
      use of it in the table scan during index creation.
      
      rb:1206 approved by Jimmy Yang
      bb849479
    • Sunanda Menon's avatar
      Merge from mysql-5.1.65-release · f58a6967
      Sunanda Menon authored
      f58a6967
  10. 08 Aug, 2012 2 commits
    • Rohit Kalhans's avatar
      upmerge from mysql-5.1=>mysql-5.5 · 17c5725c
      Rohit Kalhans authored
      17c5725c
    • Rohit Kalhans's avatar
      BUG#11757312: MYSQLBINLOG DOES NOT ACCEPT INPUT FROM STDIN · ff04c5bd
      Rohit Kalhans authored
      WHEN STDIN IS A PIPE
                  
      Problem: mysqlbinlog does not accept input from STDIN when STDIN
      is a pipe. This prevents users from passing the input file through
      a shell pipe.

      Background: The my_seek() function does not check whether the file
      descriptor passed to it refers to a regular (seekable) file. The
      check_header() function in mysqlbinlog calls my_b_seek()
      unconditionally, and this fails when the underlying file is a pipe.

      Resolution: We resolve this problem by using my_fstat() to check
      whether the underlying file is a regular file before calling
      my_b_seek(). If the underlying file is not seekable, the call to
      my_b_seek() in check_header() is skipped.
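
      A hedged sketch of this check using plain POSIX fstat() rather than
      the real my_fstat()/my_b_seek() wrappers: seek only when the
      descriptor refers to a regular file, otherwise consume the bytes
      sequentially.

          #include <sys/stat.h>
          #include <unistd.h>

          // Skip 'count' bytes of the input: seek when the descriptor is a
          // regular file, read-and-discard when it is a pipe.
          static bool skip_bytes(int fd, off_t count) {
            struct stat st;
            if (fstat(fd, &st) == 0 && S_ISREG(st.st_mode))
              return lseek(fd, count, SEEK_CUR) != static_cast<off_t>(-1);

            char buf[4096];
            while (count > 0) {
              const size_t chunk =
                  count < static_cast<off_t>(sizeof buf)
                      ? static_cast<size_t>(count) : sizeof buf;
              const ssize_t n = read(fd, buf, chunk);
              if (n <= 0) return false;
              count -= n;
            }
            return true;
          }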
      
      client/mysqlbinlog.cc:
        Added a check to avoid the my_b_seek() call if the
        underlying file is a PIPE.
      ff04c5bd
  11. 07 Aug, 2012 5 commits
    • Nirbhay Choubey's avatar
      ffdc4bc8
    • Nirbhay Choubey's avatar
      Bug#13928675 MYSQL CLIENT COPYRIGHT NOTICE MUST · 5ad8292c
      Nirbhay Choubey authored
                   SHOW 2012 INSTEAD OF 2011
      
      * Added a new macro that holds the current year:
        COPYRIGHT_NOTICE_CURRENT_YEAR
      * Modified the ORACLE_WELCOME_COPYRIGHT_NOTICE macro to take the
        initial year as a parameter and pick the current year from the
        macro above (see the sketch below).
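
      A hedged sketch of how such a macro pair could look (illustrative
      only; the actual definitions in the server's welcome-notice header
      may differ):

          #include <cstdio>

          // Single place to bump when a new year starts.
          #define COPYRIGHT_NOTICE_CURRENT_YEAR "2012"

          // Takes the first year of publication as a parameter and picks the
          // end of the range from the per-release constant above.
          #define ORACLE_WELCOME_COPYRIGHT_NOTICE(first_year)              \
            "Copyright (c) " first_year ", " COPYRIGHT_NOTICE_CURRENT_YEAR \
            ", Oracle and/or its affiliates. All rights reserved.\n"

          int main() {
            std::fputs(ORACLE_WELCOME_COPYRIGHT_NOTICE("2000"), stdout);
            return 0;
          }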
      5ad8292c
    • Harin Vadodaria's avatar
      Bug#14068244: INCOMPATIBILITY BETWEEN LIBMYSQLCLIENT/LIBMYSQLCLIENT_R · d0affa9b
      Harin Vadodaria authored
                    AND LIBCRYPTO
      
      Description: Merge from 5.1 to 5.5
      d0affa9b
    • Harin Vadodaria's avatar
      Bug#14068244: INCOMPATIBILITY BETWEEN LIBMYSQLCLIENT/LIBMYSQLCLIENT_R · d86d0634
      Harin Vadodaria authored
                    AND LIBCRYPTO
      
      Problem: libmysqlclient_r exports symbols from the yaSSL library
               that conflict with OpenSSL symbols. The issue concerns
               symbols that are used by the CURL library and defined in
               taocrypt, which provides only dummy implementations of
               these functions. Because of this, a program that uses
               libcurl functions and is compiled against libmysqlclient_r
               and libcurl hits a segmentation fault at run time.

      Solution: MySQL should not export such symbols. Since these
                functions are not used by MySQL code at all, simply avoid
                compiling them in the first place.
      d86d0634
    • Praveenkumar Hulakund's avatar
      Bug#13058122 - DML, LOCK/UNLOCK TABLES AND SELECT LEAD TO · d0766534
      Praveenkumar Hulakund authored
      FOREVER MDL LOCK
      
      Analysis:
      ----------
      While granting MDL locks to the requests in the wait queue, locks
      are granted first to the high-priority lock types and then to the
      low-priority lock types.
      
      MDL Priority Matrix:
        +-------------+----+---+---+---+----+-----+
        | Locks       |    |   |   |   |    |     |
        | has priority|    |   |   |   |    |     |
        | over --->   |  S | SR| SW| SU| SNW| SNRW|
        +-------------+----+---+---+---+----+-----+
        | X           |  + | + | + | + | +  | +   |
        +-------------+----+---+---+---+----+-----+
        | SNRW        |  - | + | + | - | -  | -   |
        +-------------+----+---+---+---+----+-----+
        | SNW         |  - | - | + | - | -  | -   |
        +-------------+----+---+---+---+----+-----+

      Here '+' means the lock type in the row has higher priority and
           '-' means it has the same priority.
      
      Consider the scenario where
         * the lock wait queue has requests of type S/SR/SW/SU, and
         * locks of the high-priority types X/SNRW/SNW are requested
           continuously.

      In this case the high-priority lock requests (X/SNRW/SNW) are
      always considered first when granting locks. The low-priority
      locks (S/SR/SW/SU) never get a chance and wait forever.
      
      In the scenario for which this bug was reported, the application
      executed many LOCK TABLES ... WRITE statements concurrently; these
      statements request an SNRW lock. There were also some connections
      trying to execute DML statements, which request an SR lock. Since
      the SNRW lock requests have higher priority (and there were many
      SNRW requests waiting), the lock was always granted to them, so the
      SR requests waited forever, resulting in DML starvation.
      
      How is this handled in 5.1?
      ---------------------------
      5.1 has the same low-priority lock starvation issue, but in 5.1
      thread locking the system variable "max_write_lock_count" can be
      configured to grant pending read lock requests: after
      "max_write_lock_count" write lock grants, all the low-priority
      locks are granted.
      
      Why is this issue seen in 5.5/trunk?
      ------------------------------------
      In 5.5/trunk the "max_write_lock_count" system variable still
      exists, but it is used only by thread locking, not by MDL, so it
      has no effect on MDL locking. This means that starvation of
      metadata locks is possible even when max_write_lock_count is set.

      It appears the customer was relying on "max_write_lock_count" in
      5.1 and, after upgrading to 5.5, saw starvation because the
      variable has no effect on MDL.
      
      Fix:
      ----------
      As a fix, support for max_write_lock_count is added to MDL. To
      maintain a write lock counter per MDL_lock object, a new member
      "m_hog_lock_count" is added to MDL_lock.

      The counter is incremented in the function reschedule_waiters
      (which is called while a thread is releasing a lock): after
      granting a lock request from the wait queue, if any S/SR/SU/SW
      request is still waiting in the queue, "m_hog_lock_count" is
      incremented.

      The same function also handles the pending S/SU/SR/SW locks as
      follows (see the sketch below):
          - Before granting locks, check whether
            max_write_lock_count <= m_hog_lock_count.
          - If yes, try to grant the S/SR/SW/SU locks. (Since all of
            these have the same priority, they are granted together,
            although some grants may still fail because of grant
            incompatibility.)
          - Reset m_hog_lock_count if no low-priority lock requests
            remain in the wait queue.
          - Return.
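
      A hedged, self-contained sketch of this counter logic (hypothetical
      types, not the real MDL_lock/reschedule_waiters code): count
      consecutive grants of high-priority locks while low-priority
      requests wait, and once the counter reaches max_write_lock_count,
      grant the pending low-priority requests and reset it.

          #include <cstddef>
          #include <deque>

          enum class LockType { S, SR, SW, SU, SNW, SNRW, X };

          static bool is_low_priority(LockType t) {
            return t == LockType::S || t == LockType::SR ||
                   t == LockType::SW || t == LockType::SU;
          }

          struct LockRequest { LockType type; };

          // Hypothetical per-lock state, analogous to MDL_lock with the new
          // m_hog_lock_count member.
          struct Lock {
            std::deque<LockRequest> waiters;
            std::size_t hog_lock_count = 0;  // m_hog_lock_count analogue
          };

          static bool low_priority_waiting(const Lock &lock) {
            for (const LockRequest &r : lock.waiters)
              if (is_low_priority(r.type)) return true;
            return false;
          }

          // Called when a lock is released (reschedule_waiters analogue).
          void reschedule_waiters(Lock &lock, std::size_t max_write_lock_count) {
            if (low_priority_waiting(lock) &&
                lock.hog_lock_count >= max_write_lock_count) {
              // Starvation guard: grant the pending S/SR/SW/SU requests now
              // (in this sketch, granting simply removes them from the queue)
              // and reset the counter.
              for (auto it = lock.waiters.begin(); it != lock.waiters.end();) {
                if (is_low_priority(it->type)) it = lock.waiters.erase(it);
                else ++it;
              }
              lock.hog_lock_count = 0;
              return;
            }

            // Normal path: grant the highest-priority waiter (the front of the
            // queue in this sketch).  If that grant leaves low-priority
            // requests still waiting, remember it by bumping the counter.
            if (!lock.waiters.empty()) {
              const LockRequest granted = lock.waiters.front();
              lock.waiters.pop_front();
              if (!is_low_priority(granted.type) && low_priority_waiting(lock))
                ++lock.hog_lock_count;
            }
          }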
      
      Note:
      --------------------------
      In the lock priority matrix explained above, X has priority over
      SNW and SNRW, but X locks are taken mostly for RENAME, TRUNCATE,
      CREATE ... operations. So lock type X is unlikely to be requested
      continuously in a loop in real-world applications, compared to the
      other lock request types, and SNW and SNRW lock requests are not
      starved. We can therefore grant all S/SR/SU/SW in one shot without
      considering SNW and SNRW lock request starvation.

      ALTER TABLE operations take an SU lock first and then upgrade to
      SNW if required. S, SR, SW and SU all have the same lock priority,
      so while granting SU, requests of types SR, SW and S are also
      granted in one shot. A loop of SU->SNW lock requests therefore does
      not starve other low-priority lock requests.

      But when there is a request for a lock of type SNRW, lock requests
      of the lower-priority types are not granted, and if SNRW is
      requested continuously in a loop then all S, SR, SW, SU requests
      are starved.

      This patch addresses the latter scenario: when S/SR/SW/SU requests
      are in the wait queue and the queue also contains
          - continuous SNRW lock requests,
          - OR one or more X and continuous SNRW lock requests,
          - OR one SNW and continuous SNRW lock requests,
          - OR one SNW, one or more X and continuous SNRW lock requests,
      then the S/SR/SW/SU lock requests are starved.
      
      d0766534