    pstore: Base compression input buffer size on estimated compressed size · 94160062
    Ard Biesheuvel authored
    Commit 1756ddea ("pstore: Remove worst-case compression size logic")
    removed some clunky per-algorithm worst-case size estimation routines on
    the basis that we can always store pstore records uncompressed, and
    these worst-case estimations are about how much the size might
    inadvertently *increase* due to encapsulation overhead when the input
    cannot be compressed at all. So if compression results in a size
    increase, we just store the original data instead.
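
    A minimal sketch of that decision is given below; the names are
    illustrative only (compress_to() is a made-up helper returning the
    compressed length or <= 0 on failure), not the actual
    fs/pstore/platform.c code:

      static void store_dmesg_record(struct pstore_record *record,
      				 const char *text, size_t len)
      {
      	/* Try to compress the captured text into the record buffer. */
      	ssize_t zipped = compress_to(record->buf, record->psi->bufsize,
      				     text, len);

      	if (zipped > 0 && (size_t)zipped < len) {
      		/* Compression helped: store the smaller copy. */
      		record->size = zipped;
      		record->compressed = true;
      	} else {
      		/* Compression failed or grew the data: store it as-is. */
      		memcpy(record->buf, text, len);
      		record->size = len;
      		record->compressed = false;
      	}
      	record->psi->write(record);
      }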
    
    However, it seems that the original code was misinterpreting these
    calculations as an estimation of how much uncompressed data might fit
    into a compressed buffer of a given size, and it was using the results
    to consume the input data in larger chunks than the pstore record size,
    relying on the compression to ensure that what ultimately gets stored
    fits into the available space.
    
    One result of this, as observed and reported by Linus, is that upgrading
    to a newer kernel that includes the given commit may result in pstore
    decompression errors reported in the kernel log. This is due to the fact
    that the existing records may unexpectedly decompress to a size that is
    larger than the pstore record size.
    
    Another potential problem caused by this change is that we may
    underutilize the fixed-size records on pstore backends such as ramoops.
    And on pstore backends with variable-sized records, such as EFI, we will
    end up creating many more entries than before to store the same amount
    of compressed data.
    
    So let's fix both issues by bringing back the typical-case estimation of
    how much ASCII text captured from the dmesg log might fit into a pstore
    record of a given size after compression. The original implementation
    used the computation given below for zlib:
    
      switch (size) {
      /* buffer range for efivars */
      case 1000 ... 2000:
      	cmpr = 56;
      	break;
      case 2001 ... 3000:
      	cmpr = 54;
      	break;
      case 3001 ... 3999:
      	cmpr = 52;
      	break;
      /* buffer range for nvram, erst */
      case 4000 ... 10000:
      	cmpr = 45;
      	break;
      default:
      	cmpr = 60;
      	break;
      }
    
      return (size * 100) / cmpr;
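
    For example, a 1000 byte EFI record falls in the first range (cmpr = 56),
    so the old code sized its input buffer at 1000 * 100 / 56 = 1785 bytes,
    and a 4096 byte nvram/erst record (cmpr = 45) at 4096 * 100 / 45 = 9102
    bytes, i.e. roughly 1.8x and 2.2x the record size worth of uncompressed
    dmesg text.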
    
    We will use the previous worst-case compression ratio of 60%. For
    decompression, go extra large (3x) so we make sure there's enough space
    for anything.
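
    In rough terms, the sizing policy described above amounts to the sketch
    below; the function names are illustrative, not the literal patch:

      /* Uncompressed dmesg text expected to fit a record after compression. */
      static size_t compress_input_size(size_t record_size)
      {
      	return record_size * 100 / 60;	/* worst case: output ~60% of input */
      }

      /* Scratch buffer for decompressing an existing record. */
      static size_t decompress_buf_size(size_t record_size)
      {
      	return record_size * 3;		/* generous 3x headroom */
      }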
    
    While at it, rate limit the error message so we don't flood the log
    unnecessarily on systems that have accumulated a lot of pstore history.
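
    A sketch of what that looks like, assuming the kernel's
    pr_err_ratelimited() helper (the message text here is illustrative):

      if (ret)
      	pr_err_ratelimited("decompression failed: %d\n", ret);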
    
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Eric Biggers <ebiggers@kernel.org>
    Cc: Kees Cook <keescook@chromium.org>
    Cc: Herbert Xu <herbert@gondor.apana.org.au>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
    Link: https://lore.kernel.org/r/20230830212238.135900-1-ardb@kernel.org
    Co-developed-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Kees Cook <keescook@chromium.org>