1. 16 Oct, 2006 1 commit
  2. 11 Oct, 2006 1 commit
    • Bug#12240 - Rows Examined in Slow Log showing incorrect number? · 296d64db
      istruewing@chilla.local authored
      Examined rows are counted for every join part. The per-join-part
      counter was incremented over all iterations, and the result variable
      was overwritten at the end of every iteration. The final result was
      therefore only the number of rows examined by the join part that
      finished its execution last; the numbers of the other join parts
      were lost.
      
      Now we reset the per-join-part counter before every iteration and
      add it to the result variable at the end of the iteration. That
      way we get the sum over all iterations of all join parts.
      
      No test case. Testing this requires a look into the slow query log,
      and I don't know of a way to do that portably with the test suite.
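      
      A minimal sketch of the corrected counting pattern, using hypothetical
      names rather than the actual server variables: the per-join-part
      counter is reset before each iteration and added to the result
      variable instead of overwriting it.
      
        #include <cstdint>
        #include <vector>
        
        struct JoinPart {
          std::uint64_t examined_rows = 0;       // per-join-part counter
          void scan() { examined_rows += 42; }   // pretend to examine some rows
        };
        
        std::uint64_t run_join(std::vector<JoinPart>& parts, int iterations) {
          std::uint64_t total_examined = 0;      // result reported to the slow log
          for (int i = 0; i < iterations; ++i) {
            for (JoinPart& part : parts) {
              part.examined_rows = 0;            // reset before the iteration ...
              part.scan();
              total_examined += part.examined_rows;  // ... then add, don't replace
            }
          }
          return total_examined;                 // sum over all iterations and parts
        }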
  3. 09 Oct, 2006 2 commits
    • Merge chilla.local:/home/mydev/mysql-4.1-bug8283 · 1daa6a71
      istruewing@chilla.local authored
      into  chilla.local:/home/mydev/mysql-4.1-bug8283-one
    • Bug#8283 - OPTIMIZE TABLE causes data loss · 5f08a831
      istruewing@chilla.local authored
      OPTIMIZE TABLE with myisam_repair_threads > 1 performs a non-quick
      parallel repair. This means that it rebuilds not only all indexes,
      but also the data file.
      
      Non-quick parallel repair uses one thread per index. The first of
      these threads also rebuilds the new data file.
      
      The problem was that all threads shared the read io cache on the
      old data file. If there were holes (deleted records) in the table,
      the first thread skipped them and wrote only the contiguous,
      non-deleted records to the new data file. It then built the new
      index so that its entries pointed to the correct record positions.
      But the other threads did not know the new record positions and put
      the positions from the old data file into their indexes.
      
      The new design uses a shared io cache that is filled by the first
      thread (the data file writer) with the new contiguous records and
      read by the other threads, so they now know the new record
      positions.
      
      Another problem was that the parallel repair of compressed tables
      used a common bit_buff and rec_buff. I changed it so that
      thread-specific buffers are used for parallel repair.
      
      A similar problem existed for the checksum calculation. I made it
      thread-safe as well.
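      
      A minimal sketch of the shared-cache idea, as a simplified
      producer/consumer buffer rather than the actual MyISAM IO_CACHE code:
      the first thread (the data file writer) appends the repacked records,
      and the index threads read them back at their new positions.
      
        #include <condition_variable>
        #include <cstddef>
        #include <mutex>
        #include <vector>
        
        struct SharedRecordCache {
          std::mutex mtx;
          std::condition_variable cv;
          std::vector<char> data;        // hole-free image of the new data file
          bool done = false;
        
          // Writer thread: append a repacked record at the end of the cache.
          void append(const std::vector<char>& rec) {
            std::lock_guard<std::mutex> lock(mtx);
            data.insert(data.end(), rec.begin(), rec.end());
            cv.notify_all();
          }
        
          // Writer thread: signal that the new data file is complete.
          void finish() {
            std::lock_guard<std::mutex> lock(mtx);
            done = true;
            cv.notify_all();
          }
        
          // Index threads: wait until the record at the *new* position `pos`
          // with length `len` has been written, then copy it out.
          bool read_at(std::size_t pos, std::size_t len, std::vector<char>& out) {
            std::unique_lock<std::mutex> lock(mtx);
            cv.wait(lock, [&] { return data.size() >= pos + len || done; });
            if (data.size() < pos + len)
              return false;              // writer finished without this record
            out.assign(data.begin() + pos, data.begin() + pos + len);
            return true;
          }
        };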
  4. 06 Oct, 2006 4 commits
  5. 05 Oct, 2006 2 commits
  6. 02 Oct, 2006 2 commits
  7. 29 Sep, 2006 3 commits
  8. 28 Sep, 2006 6 commits
  9. 27 Sep, 2006 6 commits
  10. 25 Sep, 2006 3 commits
  11. 24 Sep, 2006 1 commit
  12. 23 Sep, 2006 3 commits
  13. 22 Sep, 2006 3 commits
  14. 21 Sep, 2006 2 commits
  15. 20 Sep, 2006 1 commit
    • Fixed bug #20108. · d9576364
      igor@rurik.mysql.com authored
      Any default value for an enum field over a UCS2 charset was
      corrupted when it was put into the frm file, because it had been
      overwritten by its HEX representation.
      To fix this, we now save a copy of the structure that represents the
      enum type and use this copy when putting the default values.
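      
      A minimal sketch of the copy-before-overwrite idea, with hypothetical
      types rather than the actual TYPELIB/frm code: a copy of the enum
      value list is saved before the values are rewritten to their HEX
      form, and the default value is written from that untouched copy.
      
        #include <iomanip>
        #include <sstream>
        #include <string>
        #include <vector>
        
        struct EnumType {
          std::vector<std::string> values;     // enum member names
        };
        
        static std::string to_hex(const std::string& s) {
          std::ostringstream out;
          for (unsigned char c : s)
            out << std::hex << std::setw(2) << std::setfill('0')
                << static_cast<int>(c);
          return out.str();
        }
        
        int main() {
          EnumType ucs2_enum{{"yes", "no"}};
        
          EnumType saved = ucs2_enum;          // copy taken before any rewriting
        
          // The value list is rewritten to its HEX form for storage; this is
          // what previously clobbered the default value as well.
          for (std::string& v : ucs2_enum.values)
            v = to_hex(v);
        
          // The default value is taken from the saved, unmodified copy.
          std::string default_value = saved.values[0];  // "yes", not "796573"
          return default_value == "yes" ? 0 : 1;
        }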