1. 25 Jul, 2012 4 commits
  2. 24 Jul, 2012 5 commits
    • Bug#13904906: YASSL PRE-AUTH CRASH WITH 5.1.62, 5.5.22 · 7baba644
      Harin Vadodaria authored
      Problem: Valgrind reports errors when an invalid certificate is used on the
               client.
      
      Solution: Updated yaSSL to version 2.2.2.
    • Bug#13961678: MULTI-STATEMENT TRANSACTION REQUIRED MORE THAN 'MAX_BINLOG_CACHE_SIZE' ERROR · 03993d03
      Sujatha Sivakumar authored
            
      Problem:
      =======
      MySQL returns the following error on Win64:
      "ERROR 1197 (HY000): Multi-statement transaction required more than
      'max_binlog_cache_size' bytes of storage; increase this mysqld variable
      and try again" when the user tries to load a >4GB file, even if
      max_binlog_cache_size is set to its maximum value. On Linux everything
      works fine.
            
      Analysis:
      ========
      The `max_binlog_cache_size' variable is of type `ulonglong'. This
      value is set to `ULONGLONG_MAX' at server start-up. The value is
      stored in an intermediate variable named `saved_max_binlog_cache_size',
      which is of type `ulong'. In the Visual C++ compiler the `ulong'
      type is 4 bytes in size, so the value gets truncated to 4GB and the
      cache cannot grow beyond 4GB. The same limitation is observed with
      "max_binlog_stmt_cache_size", and a similar fix has been applied to it.
            
      Fix:
      ===
      As part of the fix, the type "ulong" is replaced with "my_off_t",
      which is a typedef for "ulonglong".
      
      mysys/mf_iocache.c:
        Added a debug statement to simulate a scenario where the cache
        file's current position is set to >4GB.
      sql/log.cc:
        Changed the type of `saved_max_binlog_cache_size' from "ulong" to
        "my_off_t", which is a typedef for "ulonglong".
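      The truncation described above can be reproduced in a few lines of
      standalone C. This is an illustrative sketch, not the server's code:
      `win64_ulong` stands in for what the `ulong` typedef becomes under
      Visual C++ on Win64, and the function names are made up for the demo.

      ```c
      #include <stdint.h>

      /* Stand-ins for the types involved (assumption: illustrative only).
         Under Visual C++ on Win64, `unsigned long` (the server's `ulong`
         typedef) is 4 bytes wide, while `my_off_t` is an 8-byte
         `ulonglong`. */
      typedef uint32_t win64_ulong;
      typedef uint64_t my_off_t;

      /* Assigning a ulonglong limit into the 4-byte type silently
         truncates: ULONGLONG_MAX becomes 4294967295 (4GB - 1), which is
         exactly the cap observed on the binlog cache. */
      win64_ulong as_win64_ulong(uint64_t v) { return (win64_ulong)v; }

      /* The fix keeps the full 64-bit width end to end. */
      my_off_t as_my_off_t(uint64_t v) { return (my_off_t)v; }
      ```

      Feeding `UINT64_MAX` through the narrow type yields 4294967295, while
      the wide type preserves the full value, which is why the intermediate
      variable's type change is sufficient.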
    • Fix bug#14318456 SPEC FILE DOES NOT RUN THE TEST SUITE DURING RPM BUILD · d4caad52
      Joerg Bruehe authored
      Add a macro "runselftest" to the spec file for RPM builds.
      
      If its value is 1 (the default), the test suite will be run during
      the RPM build.
      To prevent that, add this to the rpmbuild command line:
          --define "runselftest 0"
      Failures of the test suite will NOT make the RPM build fail!
      
      
      support-files/mysql.spec.sh:
        Add the "runselftest" macro following the model provided by RedHat.
        
        This code is similar to what we plan to use for ULN RPMs.
    • Merging from 5.1 · 9e75b589
      Alexander Barkov authored
    • Fixing wrong copyright · 1cb513ba
      Alexander Barkov authored
      Index.xml was modified in 2005, while the copyright notice still
      mentioned 2003.
  3. 23 Jul, 2012 1 commit
    • BUG#13555854: CHECK AND REPAIR TABLE SHOULD BE MORE ROBUST [1] · d15e2f72
      Ashish Agarwal authored
      ISSUE: Incorrect key file. The key file is corrupted:
             incorrect key information (keyseg) is read
             from the index file because the key definitions
             in the .MYI and .FRM files differ. The starting
             pointer for reading the keyseg information is
             changed to a value greater than pack_reclength,
             so memcpy tries to read keyseg information from
             unallocated memory, which causes the crash.

      SOLUTION: One more check is added to compare the
                key definitions in the .MYI and .FRM
                files. If the definitions differ, the server
                produces an error.
  4. 19 Jul, 2012 8 commits
  5. 18 Jul, 2012 2 commits
    • Merge from 5.1 to 5.5 · 0ef427ae
      Chaithra Gopalareddy authored
    • Bug#11762052: 54599: BUG IN QUERY PLANNER ON QUERIES WITH "ORDER BY" AND "LIMIT BY" CLAUSE · ddcd6867
      Chaithra Gopalareddy authored
      
      PROBLEM:
      When a 'limit' clause is specified in a query along with
      group by and order by, the optimizer chooses the wrong index,
      thereby examining more rows than required.
      Without the 'limit' clause, however, the optimizer chooses
      the right index.
      
      ANALYSIS:
      For the query in question, the range optimizer chooses
      the first index, as there is a range present (on 'a'). The
      optimizer then checks for an index that would return records
      in sorted order for the 'group by' clause.

      During this check it chooses the second index (on 'c,b,a') based on
      the 'limit' specified and the selectivity of
      'quick_condition_rows' (the number of rows present in the range)
      in the 'test_if_skip_sort_order' function.
      But it fails to consider that an order by clause on a
      different column will result in scanning the entire index, and
      hence the estimated number of rows calculated above is
      wrong (which results in choosing the second index).
      
      FIX:
      Do not enforce the 'limit' clause in the call to
      'test_if_skip_sort_order' if we are creating a temporary
      table. Creation of a temporary table indicates that there will be
      more post-processing, and hence all the rows will be needed.

      This fix is backported from 5.6, where the problem was fixed as
      part of the changes for worklog #5558.
      
      
      mysql-test/r/subselect.result:
        Changes for Bug#11762052 result in the correct number of rows.
      sql/sql_select.cc:
        Do not pass the actual 'limit' value if 'need_tmp' is true.
  6. 13 Jul, 2012 1 commit
    • BUG#14310067: RPL_CANT_READ_EVENT_INCIDENT AND RPL_BUG41902 FAIL ON 5.5 · 8dc96fc7
      Nuno Carvalho authored
      rpl_cant_read_event_incident:
      The slave applies updates from the bug11747416_32228_binlog.000001
      file, which contains a CREATE TABLE t statement and an incident.
      When the SQL thread is running slowly, the IO thread may reach the
      incident before the SQL thread executes the create table statement.
      Execute "drop table if exists t" and also perform a RESET MASTER to
      clean the slave binary logs.
      
      rpl_bug41902:
      The suppression for the error "MYSQL_BIN_LOG::purge_logs was called
      with file ./master-bin.000001 not listed in the index." does not
      account for the Windows path ".\master-bin.000001".
      Changed the suppression to "MYSQL_BIN_LOG::purge_logs was called with
      file ..master-bin.000001 not listed in the index" to match both ".\"
      and "./".
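      The new pattern works because '.' in the suppression pattern matches
      any single character. A toy matcher sketch (a hypothetical helper for
      illustration, not the server's pattern code) shows why "..master-bin"
      covers both path separators:

      ```c
      /* Toy matcher where '.' matches any single character, as in the
         suppression pattern above (hypothetical helper, not server code). */
      int dot_match(const char *pat, const char *s) {
          for (; *pat != '\0'; pat++, s++) {
              if (*s == '\0')
                  return 0;             /* subject shorter than pattern */
              if (*pat != '.' && *pat != *s)
                  return 0;             /* literal character mismatch */
          }
          return *s == '\0';            /* match only if both exhausted */
      }
      ```

      With this, "..master-bin" matches both "./master-bin" (Linux) and
      ".\master-bin" (Windows), while the original "./master-bin" pattern
      would reject the Windows spelling at the '/' literal.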
  7. 12 Jul, 2012 2 commits
  8. 11 Jul, 2012 6 commits
    • Raise version number after cloning 5.5.27 · 425f07ea
      unknown authored
    • Empty version change upmerge · 1877016c
      Bjorn Munch authored
    • Raise version number after cloning 5.1.65 · 4c33e849
      unknown authored
    • No commit message · 21bc74e0
      unknown authored
    • No commit message · 28255d4c
      unknown authored
    • Bug #13444084: PRIMARY KEY OR UNIQUE KEY >453 BYTES FAILS FOR COUNT DISTINCT GROUP BY · fc74e2e0
      Chaithra Gopalareddy authored
      
      PROBLEM:
      To calculate the final result of count(distinct(select 1)),
      the 'end_send' function is called instead of 'end_send_group'.
      'end_send' cannot be called if we have aggregate functions
      that need to be evaluated.
      
      ANALYSIS:
      While evaluating a possible loose_index_scan option for
      the query, the variable 'is_agg_distinct' is set to 'false',
      as the item in the distinct clause is not a field. But
      loose_index_scan is chosen without taking this into
      consideration.
      So, while setting the final 'select_function' to evaluate
      the result, 'precomputed_group_by' is set to TRUE, because in
      this case loose_index_scan is chosen and we supposedly do not
      have agg_distinct in the query (which is clearly wrong, as we
      have one).
      As a result, the 'end_send' function is chosen as the final
      select_function instead of 'end_send_group'. The difference
      between the two is that 'end_send_group' evaluates the
      aggregates while 'end_send' does not. Hence the wrong result.
      
      FIX:
      The variable 'is_agg_distinct' always indicates whether
      'loose_index_scan' can be chosen for the aggregate_distinct
      functions present in the select.
      So, we check this variable before continuing with the
      loose_index_scan option.
      
      
      sql/opt_range.cc:
        Do not continue if is_agg_distinct is not set in case
        of agg_distinct functions.
  9. 10 Jul, 2012 11 commits
    • bug#11759333 · 6fe6288d
      Rohit Kalhans authored
      Follow-up patch for the failure on the PB2 Windows build.
    • Bug#13889741: HANDLE_FATAL_SIGNAL IN _DB_ENTER_ |HANDLE_FATAL_SIGNAL IN STRNLEN · 3a71ab08
      Mayank Prasad authored
      Follow-up patch to resolve a PB2 failure on the Windows platform.
    • Bug#12623923 Server can crash after failure to create primary key with innodb tables · a47e778a
      Jon Olav Hauglid authored
      
      The bug was triggered if a single ALTER TABLE statement both
      added and dropped indexes and ALTER TABLE failed during drop
      (e.g. because the index was needed in a foreign key constraint).
      In such cases, the server index information would get out of
      sync with InnoDB - the added index would be present inside
      InnoDB, but not in the server. This could then lead to InnoDB
      error messages and/or server crashes.
      
      The root cause is that new indexes are added before old indexes
      are dropped. This means that if ALTER TABLE fails while dropping
      indexes, index changes will be reverted in the server but not
      inside InnoDB.
      
      This patch fixes the problem by dropping any added indexes
      if the drop fails (for ALTER TABLE statements that both add
      and drop indexes).

      However, this won't work if we added a primary key, as it
      might not be possible to drop this key inside InnoDB. Therefore,
      we resort to the copy algorithm if a primary key is added
      by an ALTER TABLE statement that also drops an index.
      
      In 5.6 this bug is more properly fixed by the handler interface
      changes done in the scope of WL#5534 "Online ALTER".
    • No commit message · 51a47a8d
      unknown authored
    • No commit message · dfa00930
      unknown authored
    • BUG#11759333: SBR LOGGING WARNING MESSAGES FOR PRIMARY KEY UPDATES WITH A LIMIT OF 1 · 6da51d17
      Rohit Kalhans authored
      
      Problem: The unsafety warning for statements such as
      "update ... limit 1 where pk=1" is thrown when binlog-format
      = STATEMENT, despite the fact that such statements are
      actually safe. This leads to filling up the disk space
      with false warnings.

      Solution: This is not a complete fix for the problem, but it
      prevents the disks from filling up. It should
      therefore be regarded as a workaround. In the future it
      should be superseded by the server's general suppression/filtering
      framework. It should also be noted that another worklog is
      supposed to remove this case's artificial unsafety.
      
      We use a warning suppression mechanism to detect a warning flood,
      enable the suppression, and disable it again when the average
      warnings/second has dropped to acceptable limits.

        Activation: The suppression for LIMIT unsafe statements is
        activated when the last 50 warnings were logged in less
        than 50 seconds.

        Suppression: Once activated, this suppression prevents the
        individual warnings from being logged in the error log, but prints
        one warning for every 50 warnings with the note:
        "The last warning was repeated N times in last S seconds"
        Note that this suppression works only on the
        error logs; the warnings seen by the clients remain as
        they are (i.e. one warning per unsafe statement).

        Deactivation: The suppression is deactivated once the
        average number of warnings/sec has gone down to acceptable limits.
      
      
      
      sql/sql_class.cc:
        Added code to suppress warnings while logging them to the error log.
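      The activation/deactivation logic described above can be sketched as
      a ring buffer of recent warning timestamps. Only the 50-warnings /
      50-seconds window comes from the description; the struct, the
      function names, and the rest of the model are assumptions for
      illustration, not the server's actual implementation:

      ```c
      #include <time.h>

      #define WINDOW 50   /* warnings tracked */
      #define PERIOD 50   /* seconds */

      /* Hypothetical throttle state: the last WINDOW warning timestamps
         plus an "is suppression active" flag. */
      typedef struct {
          time_t stamps[WINDOW];
          int    idx;      /* next slot in the ring buffer */
          long   count;    /* total warnings seen */
          int    active;   /* suppression currently on? */
      } err_throttle;

      /* Record one warning; return 1 if it should reach the error log. */
      int throttle_log(err_throttle *t, time_t now) {
          time_t oldest = t->stamps[t->idx]; /* stamp WINDOW warnings ago */
          t->stamps[t->idx] = now;
          t->idx = (t->idx + 1) % WINDOW;
          t->count++;

          if (t->count > WINDOW && now - oldest < PERIOD)
              t->active = 1; /* last WINDOW warnings within PERIOD: flood */
          else if (now - oldest >= PERIOD)
              t->active = 0; /* rate back to acceptable: log normally */

          if (!t->active)
              return 1;
          /* while suppressed, emit only a periodic summary line */
          return t->count % WINDOW == 0;
      }
      ```

      In this model a burst of identical-second warnings logs normally
      until the window fills, then only every 50th reaches the log (the
      summary line), and a later warning after a quiet period switches
      the throttle off again.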
    • null-merge from 5.1. · c60ad575
      Andrei Elkin authored
    • merge from 5.5 repo. · f4dc9215
      Andrei Elkin authored
    • merge from 5.1 repo. · cd0912a4
      Andrei Elkin authored
    • null upmerge · 88a74a16
      Bjorn Munch authored
    • merge from 5.1 repo. · eca29d5f
      Andrei Elkin authored