1. 08 Aug, 2017 2 commits
  2. 07 Aug, 2017 3 commits
    • MDEV-13443: Port innochecksum tests from 10.2 innodb_zip suite to 10.1 · 2ef7a5a1
      Jan Lindström authored
      This is basically a port of WL#6045 (Improve Innochecksum) with
      some code refactoring of innochecksum.
      
      Added the page0size.h include from 10.2 to make 10.1 vs. 10.2
      innochecksum as identical as possible.
      
      Added page 0 checksum checking; if that fails, the whole test fails.
    • Fixed compiler warnings · 19f2b3d0
      Monty authored
    • MDEV-13179 main.errors fails with wrong errno · 74543698
      Monty authored
      The problem was that the introduction of max-thread-mem-used could
      cause an allocation error very early, even before mysql_parse() is
      called. As mysql_parse() calls thd->reset_for_next_command(), which
      in turn called clear_error(), the error number was lost.
      
      Fixed by adding an option to have unique messages for each KILL
      signal and change max-thread-mem-used to use this new feature.
      This removes a lot of problems with the original approach, where
      errors could be signaled silently at almost any time.
      
      Fixed by moving clear_error() from reset_for_next_command() to
      do_command(), before any memory allocation for the thread.
      
      Related changes:
      - reset_for_next_command() now has an optional parameter that controls
        whether clear_error() is called. By default it is, but no longer from
        dispatch_command(), which was the original problem.
      - Added an optional parameter to clear_error() to force calling of
        reset_diagnostics_area(). Before, clear_error() only called
        reset_diagnostics_area() if there was no error, so we normally
        called reset_diagnostics_area() twice.
      - This change removed several duplicated calls to clear_error()
        when starting a query.
      - Reset max_mem_used on COM_QUIT, to protect against kill during
        quit.
      - Use fatal_error() instead of setting is_fatal_error (cleanup)
      - Set fatal_error if max_thread_mem_used is signaled.
        (Same logic we use for other places where we are out of resources)
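      The ordering issue above can be modeled in a few lines. This is a
      hypothetical simplification (names Thd, old_flow, new_flow and the
      errno value are invented for illustration, not the actual server
      code): if the error is cleared after the allocation, an allocation
      failure is silently wiped out; clearing it first preserves it.

```cpp
#include <cassert>

// Simplified model of the fix: clear any stale error *before* the
// per-command memory allocation, so an allocation failure that sets
// an error number is no longer wiped out by a later clear_error().
struct Thd {
    int error_no = 0;                   // 0 means "no error"
    void clear_error() { error_no = 0; }
    void allocate(bool fail) {          // models max-thread-mem-used firing
        if (fail)
            error_no = 5;               // hypothetical out-of-memory errno
    }
};

// Old flow: the parse step cleared the error AFTER allocation had failed.
int old_flow(Thd& thd, bool alloc_fails) {
    thd.allocate(alloc_fails);
    thd.clear_error();                  // the error number is lost here
    return thd.error_no;
}

// New flow: clear first, then allocate; a failure survives to the caller.
int new_flow(Thd& thd, bool alloc_fails) {
    thd.clear_error();
    thd.allocate(alloc_fails);
    return thd.error_no;
}
```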
  3. 05 Aug, 2017 1 commit
  4. 03 Aug, 2017 1 commit
    • MDEV-11939: innochecksum mistakes a file for an encrypted one (page 0 invalid) · 8b019f87
      Jan Lindström authored
      Always read the full page 0 to determine whether the tablespace
      contains encryption metadata. For tablespaces that are page
      compressed, or page compressed and encrypted, do not compare the
      checksum, as it does not exist. For encrypted tables, use the
      checksum verification written for encrypted tables; normal tables
      use the normal method.
      
      buf_page_is_checksum_valid_crc32
      buf_page_is_checksum_valid_innodb
      buf_page_is_checksum_valid_none
              Add innochecksum logging to file.
      
      buf_page_is_corrupted
              Remove ib_logf and page_warn_strict_checksum
              calls in innochecksum compilation. Add innochecksum
              logging to file.
      
      fil0crypt.cc fil0crypt.h
              Modify to be usable in the innochecksum compilation and
              move fil_space_verify_crypt_checksum to the end of the
              file. Add innochecksum logging to file.
      
      univ.i
              Add innochecksum strict_verify, log_file and cur_page_num
              variables as extern.
      
      page_zip_verify_checksum
              Add innochecksum logging to file.
      
      innochecksum.cc
              Lots of changes; most notably, it is now able to read
              encryption metadata from page 0 of the tablespace.
      
      Added a test case where we intentionally corrupt:
      FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION (encryption key version)
      FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION+4 (post encryption checksum)
      FIL_DATA+10 (data)
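      The page-0 decision can be sketched as follows. This is a
      hypothetical simplification, not the actual innochecksum code; the
      helper names are invented, and it assumes the MariaDB page layout
      where the key version of an encrypted page lives in the
      FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION field at byte offset 26,
      stored big-endian like all InnoDB on-disk integers.

```cpp
#include <cstdint>
#include <cstddef>

// Offset of FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION in the page header
// (assumption based on InnoDB's fil header layout).
static const size_t KEY_VERSION_OFFSET = 26;

// InnoDB stores integers big-endian on disk.
static uint32_t read_be32(const unsigned char* page, size_t off) {
    return (uint32_t(page[off])     << 24) |
           (uint32_t(page[off + 1]) << 16) |
           (uint32_t(page[off + 2]) << 8)  |
            uint32_t(page[off + 3]);
}

// A non-zero key version on page 0 indicates the tablespace carries
// encryption metadata, so the encrypted-table checksum path applies.
bool page0_indicates_encryption(const unsigned char* page0) {
    return read_be32(page0, KEY_VERSION_OFFSET) != 0;
}
```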
  5. 01 Aug, 2017 1 commit
  6. 20 Jul, 2017 1 commit
    • MDEV-13227: Assertion failure len < 16384 in file rem0rec.cc line 1285 · d1b3e428
      Jan Lindström authored
      Crashes with innodb_page_size=64K. Does not crash at <= 32K.
      
      The problem was that when a blob record that was earlier < 16K is
      enlarged on update so that its length > 16K, it should be stored
      externally. However, that was not enforced when page size = 64K
      (note that 16K+1 < 64K/2, i.e. half of the B-tree leaf page).
      
      btr_cur_optimistic_update: limit max record size to 16K
      or in REDUNDANT row format to 16K-1.
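      The corrected rule can be sketched like this. The helper name is
      hypothetical (the real check lives in btr_cur_optimistic_update);
      the point is that on top of the original half-leaf-page rule, a
      hard 16K cap (16K-1 for REDUNDANT) is applied, because the 14-bit
      length fields in the record header cannot describe anything longer.

```cpp
#include <cstddef>

const size_t REC_16K = 16384;

// Hypothetical sketch of the fixed decision, not the actual server code.
bool must_store_externally(size_t rec_size, size_t page_size, bool redundant) {
    size_t hard_cap = redundant ? REC_16K - 1 : REC_16K;
    if (rec_size > hard_cap)
        return true;                   // the added 16K cap (the fix)
    return rec_size >= page_size / 2;  // original half-leaf-page rule
}
```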
  7. 13 Jul, 2017 1 commit
  8. 12 Jul, 2017 1 commit
    • MDEV-11828: innodb_page_size=64k must reject ROW_FORMAT=REDUNDANT records longer than 16383 bytes · 9284e8b2
      Jan Lindström authored
      In all InnoDB row formats, the pointers or lengths stored in the record
      header can be at most 14 bits, that is, count up to 16383.
      In ROW_FORMAT=REDUNDANT, this limits the maximum possible record
      length to 16383 bytes. In the other row formats, it merely limits
      the maximum length of variable-length fields.
      
      When MySQL 5.7 introduced innodb_page_size=32k and 64k, the maximum
      record length was limited to 16383 bytes (I hope 16383, not 16384,
      to be able to distinguish from a record whose length is 0 bytes).
      This change is present in MariaDB Server 10.2.
      
      btr_cur_optimistic_update(): Restrict maximum record size to 16K-1
      for REDUNDANT and 64K page size.
      
      dict_index_too_big_for_tree(): The maximum allowed record size is
      half a B-tree page, or for 64K page size, 16K (16K-1 for REDUNDANT).
      
      convert_error_code_to_mysql(): Fix error message to print
      correct limits.
      
      my_error_innodb(): Fix error message to print correct limits.
      
      page_zip_rec_needs_ext(): record size was already restricted to
      16K. Restrict REDUNDANT to 16K-1.
      
      rem0rec.h: Introduce REDUNDANT_REC_MAX_DATA_SIZE (16K-1)
      and COMPRESSED_REC_MAX_DATA_SIZE (16K).
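      The arithmetic behind the two constants is simply the 14-bit field
      width described above; spelled out (constant names as given in the
      commit, values derived from 2^14):

```cpp
// The largest value a 14-bit field can hold is 2^14 - 1 = 16383,
// which is exactly where the two new rem0rec.h constants come from.
const unsigned REDUNDANT_REC_MAX_DATA_SIZE  = (1u << 14) - 1;  // 16383
const unsigned COMPRESSED_REC_MAX_DATA_SIZE = 1u << 14;        // 16384
```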
  9. 07 Jul, 2017 1 commit
  10. 06 Jul, 2017 5 commits
    • after-merge fix for a7ed4644 · 6b99859f
      Sergei Golubchik authored
      (10.0+ changes, as specified in the MDEV)
      
      and remove unused variable (compiler warning)
    • Merge branch '5.5' into 10.0 · 89dc445a
      Sergei Golubchik authored
    • coverity medium warnings · 4d213135
      Sergei Golubchik authored
    • bugfix: long partition names · f305a7ce
      Sergei Golubchik authored
    • MDEV-13247 innodb_log_compressed_pages=OFF breaks crash recovery of ROW_FORMAT=COMPRESSED tables · 2b5c9bc2
      Marko Mäkelä authored
      The option innodb_log_compressed_pages was contributed by
      Facebook to MySQL 5.6. It was disabled in the 5.6.10 GA release
      due to problems that were fixed in 5.6.11, which is when the
      option was enabled.
      
      The option was set to innodb_log_compressed_pages=ON by default
      (disabling the feature), because safety was considered more
      important than speed. The option innodb_log_compressed_pages=OFF
      can *CORRUPT* ROW_FORMAT=COMPRESSED tables on crash recovery
      if the zlib deflate function is behaving differently (producing
      a different amount of compressed data) from how it behaved
      when the redo log records were written (prior to the crash recovery).
      
      In MDEV-6935, the default value was changed to
      innodb_log_compressed_pages=OFF. This is inherently unsafe, because
      there are very many different environments where MariaDB can be
      running, using different zlib versions. While zlib can decompress
      data just fine, there are no guarantees that different versions will
      always compress the same data to the exactly same size. To avoid
      problems related to zlib upgrades or version mismatch, we must
      use a safe default setting.
      
      This will reduce the write performance for users of
      ROW_FORMAT=COMPRESSED tables. If you configure
      innodb_log_compressed_pages=ON, please make sure that you will
      always cleanly shut down InnoDB before upgrading the server
      or zlib.
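      For users who nonetheless prefer the faster setting after weighing
      the risk, the option is an ordinary InnoDB system variable; a
      minimal my.cnf fragment would look like this (standard [mysqld]
      section, value shown is the unsafe speed-over-safety choice):

```ini
[mysqld]
# Faster writes for ROW_FORMAT=COMPRESSED tables, but crash recovery is
# only safe if the exact same zlib behavior is in place before and after
# any crash. Shut down cleanly before upgrading the server or zlib.
innodb_log_compressed_pages = OFF
```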
  11. 05 Jul, 2017 4 commits
  12. 04 Jul, 2017 2 commits
  13. 03 Jul, 2017 8 commits
  14. 02 Jul, 2017 2 commits
    • Fix for MDEV-9670 server_id mysteriously set to 0 · 946a07e8
      Andrei Elkin authored
      Problem was that in a circular replication setup the master remembers
      position to events it has generated itself when reading from a slave.
      If there are no new events in the queue from the slave, a
      Gtid_list_log_event is generated to remember the last skipped event.
      The problem happens if there is a network delay and we generate a
      Gtid_list_log_event in the middle of a transaction, in which case
      there will be an implicit commit, and a new transaction with
      server_id=0 will be logged.
      
      The fix was to not generate any Gtid_list_log_events in the middle of a
      transaction.
    • Fix for MDEV-13191. Assert for !is_set() when doing LOAD DATA · 46d6f74c
      Monty authored
      This could happen when the client connection dies while sending a progress
      report packet.
      Fixed by not raising any errors when sending progress packets.
  15. 01 Jul, 2017 1 commit
  16. 30 Jun, 2017 6 commits