1. 17 Aug, 2010 4 commits
    • Adjust type_bit_innodb.result · 260ff5be
      Vasil Dimov authored
      This is a followup to vasil.dimov@oracle.com-20100816142329-yimenbuktd416z1a
      which improved the sampling algorithm.
    • Adjust rowid_order_innodb.result · 0c7b3904
      Vasil Dimov authored
      This is a followup to vasil.dimov@oracle.com-20100816142329-yimenbuktd416z1a
      which improved the sampling algorithm.
    • Adjust innodb_gis.result · 4a3ba734
      Vasil Dimov authored
      This is a followup to vasil.dimov@oracle.com-20100816142329-yimenbuktd416z1a
      which improved the sampling algorithm.
    • Adjust innodb_mysql.result · f8b58430
      Vasil Dimov authored
      This is a followup to vasil.dimov@oracle.com-20100816142329-yimenbuktd416z1a
      which improved the sampling algorithm. I have manually checked that the new
      values are actually the correct ones, for example:
      -rows	16
      +rows	32
      Here the query indeed returns 32 rows.
  2. 16 Aug, 2010 1 commit
    • Fix Bug#53761 RANGE estimation for matched rows may be 200 times different · c292616a
      Vasil Dimov authored
      
      Improve the range estimation algorithm.
      
      Previously:
      For a given level the algorithm knows the number of pages in the requested
      range and the number of records on the leftmost and the rightmost page. It
      then assumes all pages in between contain the average of the two border
      pages' record counts and multiplies this average by the number of
      intermediate pages.
      
      With this change:
      Same idea, but peek at a few (10) of the intermediate pages to get a better
      estimate of the average number of records per page. If there are fewer than
      10 intermediate pages then all of them will be scanned and the result will
      be exact, not an estimate.
      
      In the bug report one of the examples has a btree with a snippet of the leaf
      level like this:
      page1(899 records), page2(1 record), page3(1 record), page4(1 record)
      so when estimating, the previous algorithm assumed an average of
      (899+1)/2=450 records per page, which went terribly wrong. With this change
      page2 and page3 will be read and the exact number of records will be
      returned.
      
      Approved by:	Sunny (rb://401)
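      The idea, as a minimal C sketch; the names (estimate_range_records(),
      page_n_records(), N_PAGES_TO_SAMPLE) are hypothetical stand-ins, not the
      actual btr0cur.c code:

        enum { N_PAGES_TO_SAMPLE = 10 };

        /* Estimate the total number of records in a range spanning the left
           border page, n_intermediate pages, and the right border page on one
           btree level.  page_n_records() is a hypothetical callback returning
           the record count of the i-th intermediate page. */
        unsigned long
        estimate_range_records(unsigned long left_recs,
                               unsigned long right_recs,
                               unsigned long n_intermediate,
                               unsigned long (*page_n_records)(unsigned long))
        {
            unsigned long i, sampled_recs = 0;
            unsigned long n_sample = n_intermediate < N_PAGES_TO_SAMPLE
                ? n_intermediate : N_PAGES_TO_SAMPLE;

            for (i = 0; i < n_sample; i++) {
                sampled_recs += page_n_records(i);
            }

            if (n_intermediate <= N_PAGES_TO_SAMPLE) {
                /* Every intermediate page was read: the result is exact. */
                return left_recs + sampled_recs + right_recs;
            }

            /* Extrapolate the sampled average over all intermediate pages,
               instead of averaging only the two border pages as before. */
            return left_recs + right_recs
                + (sampled_recs / n_sample) * n_intermediate;
        }

      With the bug's leaf snippet (899, 1, 1, 1 records) the two intermediate
      pages are read outright, so the function returns the exact count of 902
      instead of extrapolating 450 records per page.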
  3. 13 Aug, 2010 2 commits
  4. 12 Aug, 2010 1 commit
  5. 10 Aug, 2010 2 commits
    • bf061a81
    • Bug#54914: InnoDB: performance drop with innodb_change_buffering=all · ef6f561a
      Marko Mäkelä authored
      Reduce ibuf_mutex and ibuf_pessimistic_insert_mutex contention further.
      
      Protect ibuf->empty by the insert buffer root page latch, not ibuf_mutex.
      
      ibuf_tree_root_get(): Assert that ibuf_mutex is owned by the
      caller. Assert that the stamped page number is correct. Assert that
      ibuf->empty agrees with the root page.
      
      ibuf_size_update(): Do not update ibuf->empty.
      
      ibuf_init_at_db_start(): Update ibuf->empty while holding the root page latch.
      
      ibuf_add_free_page(): Return TRUE/FALSE instead of DB_SUCCESS/DB_STRONG_FAIL.
      
      ibuf_remove_free_page(): Release ibuf_pessimistic_insert_mutex as
      early as possible.
      
      ibuf_contract_ext(): Rely on a dirty read of ibuf->empty, unless the
      server is being shut down. Never acquire ibuf_mutex. Eliminate n_stored.
      
      ibuf_contract_after_insert(): Never acquire ibuf_mutex. Perform dirty
      reads of ibuf->size and ibuf->max_size.
      
      ibuf_insert_low(): Only acquire ibuf_mutex for mode==BTR_MODIFY_TREE.
      Perform dirty reads of ibuf->size and ibuf->max_size. Update
      ibuf->empty while holding the root page latch.
      
      ibuf_delete_rec(): Update ibuf->empty while holding the root page latch.
      
      ibuf_is_empty(): Release ibuf_mutex earlier.
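      A hedged sketch of the new ibuf->empty protocol described above; the
      struct and function names here are simplified stand-ins, not the real
      ibuf0ibuf.c interfaces:

        #include <stdbool.h>

        typedef struct {
            bool          empty;     /* writes protected by the insert buffer
                                        root page latch, NOT by ibuf_mutex */
            unsigned long size;      /* dirty-readable counters */
            unsigned long max_size;
        } ibuf_t;

        /* Writer side, e.g. ibuf_insert_low()/ibuf_delete_rec(): the caller
           already holds the root page latch, so the flag stays consistent
           with what the root page actually contains. */
        static void
        ibuf_update_empty(ibuf_t* ibuf, bool root_page_is_empty)
        {
            /* precondition: root page latch held (asserted in real code) */
            ibuf->empty = root_page_is_empty;
        }

        /* Reader side, e.g. ibuf_contract_ext() outside of shutdown: a
           deliberate dirty read.  A stale value only wastes or delays one
           merge attempt; it cannot corrupt anything, so no latch is taken. */
        static bool
        ibuf_dirty_is_empty(const ibuf_t* ibuf)
        {
            return ibuf->empty;
        }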
  6. 09 Aug, 2010 1 commit
    • Reduce the ibuf_mutex hold time · 84fbabac
      Marko Mäkelä authored
      Reduce the ibuf_mutex hold time. This does not fix the update
      regression in Bug #54914, but it does speed up the execution for
      innodb_change_buffering=inserts.
      
      ibuf_add_ops(), ibuf_merge_or_delete_for_page(),
      ibuf_delete_for_discarded_space(): Use atomic built-ins instead of
      ibuf_mutex, when available.
      
      ibuf_add_free_page(), ibuf_remove_free_page(), ibuf_contract_ext():
      Release ibuf_mutex earlier.
      
      ibuf_free_excess_pages(): Release ibuf_mutex before a conditional branch.
      
      ibuf_insert_low(): Release ibuf_mutex before a conditional
      branch. Create ibuf_entry before re-acquiring ibuf_mutex. Simplify a
      loop to reduce code footprint. Release ibuf_mutex before mtr_commit()
      [btr_pcur_close()].
      
      ibuf_is_empty(): Release ibuf_mutex before mtr_commit().
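      A sketch of the ibuf_add_ops() change, assuming GCC __sync built-ins as
      the "atomic built-ins, when available" and a plain mutex fallback; the
      array layout and the HAVE_ATOMIC_BUILTINS guard are illustrative:

        #include <pthread.h>

        #define IBUF_OP_COUNT 3

        static pthread_mutex_t ibuf_mutex = PTHREAD_MUTEX_INITIALIZER;

        /* Add the per-operation counts in ops[] to the statistics in arr[],
           without serializing every caller on ibuf_mutex when atomic
           built-ins are available. */
        static void
        ibuf_add_ops(unsigned long arr[IBUF_OP_COUNT],
                     const unsigned long ops[IBUF_OP_COUNT])
        {
            int i;

        #ifdef HAVE_ATOMIC_BUILTINS
            for (i = 0; i < IBUF_OP_COUNT; i++) {
                __sync_add_and_fetch(&arr[i], ops[i]);
            }
        #else
            pthread_mutex_lock(&ibuf_mutex);
            for (i = 0; i < IBUF_OP_COUNT; i++) {
                arr[i] += ops[i];
            }
            pthread_mutex_unlock(&ibuf_mutex);
        #endif
        }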
  7. 05 Aug, 2010 2 commits
  8. 03 Aug, 2010 1 commit
  9. 30 Jul, 2010 2 commits
  10. 29 Jul, 2010 1 commit
  11. 28 Jul, 2010 3 commits
  12. 26 Jul, 2010 1 commit
    • Bug#45377: ARCHIVE tables aren't discoverable after OPTIMIZE · ed434ce0
      Davi Arnaut authored
      The problem was that the optimize method of the ARCHIVE storage
      engine was not preserving the FRM embedded in the ARZ file when
      rewriting the ARZ file for optimization. The ARCHIVE engine stores
      the FRM in the ARZ file so it can be transferred from machine to
      machine without also copying the FRM -- the engine restores the
      embedded FRM during discovery.
      
      The solution is to copy over the FRM when rewriting the ARZ file.
      In addition, some initial error checking is performed to ensure
      garbage is not copied over.
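      A hedged sketch of the fix: when OPTIMIZE rewrites the ARZ file, carry
      the embedded FRM across. It assumes the archive engine's azio helpers
      azread_frm()/azwrite_frm() and the frm_length field; error handling is
      simplified and this is not the verbatim ha_archive code:

        #include <stdlib.h>
        #include "azlib.h"  /* archive engine's azio stream API */

        /* Copy the FRM blob embedded in the source ARZ stream into the
           destination stream that OPTIMIZE is writing, so the table stays
           discoverable afterwards.  Returns 0 on success. */
        static int
        copy_embedded_frm(azio_stream* src, azio_stream* dst)
        {
            char* frm_ptr;

            if (src->frm_length == 0) {
                return 0;  /* no FRM embedded; nothing to preserve */
            }

            frm_ptr = (char*) malloc(src->frm_length);
            if (frm_ptr == NULL) {
                return -1;
            }

            /* Checking both calls guards against copying garbage. */
            if (azread_frm(src, frm_ptr) ||
                azwrite_frm(dst, frm_ptr, src->frm_length)) {
                free(frm_ptr);
                return -1;
            }

            free(frm_ptr);
            return 0;
        }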
  13. 28 Jul, 2010 1 commit
  14. 27 Jul, 2010 1 commit
  15. 26 Jul, 2010 5 commits
  16. 25 Jul, 2010 1 commit
    • Cleanup after build team push. · 99a26e0f
      Vladislav Vaintroub authored
      * Fixed obvious errors (HAVE_BROKEN_PREAD is not true on any of the
      systems we use, definitely not on HPUX; see the sketch after this list)
      
      * Remove other junk flags for OSX and HPUX
      
      * Avoid checking type sizes in universal builds on OSX, again
      (CMake 2.8.0 fails if different architectures return different results)
      
      * Do not compile template instantiation stuff unless 
      EXPLICIT_TEMPLATE_INSTANTIATION is used.
      
      * Some cleanup (make gen_lex_hash simpler, avoid dependencies)
      
      * Exclude some unused files from compilation (strtol.c etc)
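      For context on the first item: HAVE_BROKEN_PREAD selects a pread()
      emulation of roughly this shape (a hedged sketch; my_pread_compat() is a
      hypothetical wrapper, not the real mysys function), which none of the
      supported systems need:

        #include <sys/types.h>
        #include <unistd.h>

        /* With HAVE_BROKEN_PREAD defined, pread() is emulated via
           lseek()+read(); the emulation is not safe for concurrent readers
           because the file offset is shared. */
        static ssize_t
        my_pread_compat(int fd, void* buf, size_t count, off_t offset)
        {
        #ifdef HAVE_BROKEN_PREAD
            if (lseek(fd, offset, SEEK_SET) == (off_t) -1) {
                return -1;
            }
            return read(fd, buf, count);
        #else
            return pread(fd, buf, count, offset);
        #endif
        }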
  17. 24 Jul, 2010 4 commits
  18. 23 Jul, 2010 7 commits