    lib/lzo: implement run-length encoding

    Patch series "lib/lzo: run-length encoding support", v5.
    
    Following on from the previous lzo-rle patchset:
    
      https://lkml.org/lkml/2018/11/30/972
    
    This patchset contains only the RLE patches, and should be applied on
    top of the non-RLE patches ( https://lkml.org/lkml/2019/2/5/366 ).
    
    Previously, some questions were raised around the RLE patches.  I've
    done some additional benchmarking to answer these questions.  In short:
    
     - RLE offers significant additional performance (data-dependent)
    
     - I didn't measure any regressions that were clearly outside the noise
    
    One concern with this patchset was around performance - specifically,
    measuring RLE impact separately from Matt Sealey's patches (CTZ & fast
    copy).  I have done some additional benchmarking which I hope clarifies
    the benefits of each part of the patchset.
    
    Firstly, I've captured some memory via /dev/fmem from a Chromebook with
    many tabs open which is starting to swap, and then split this into 4178
    4k pages.  I've excluded the all-zero pages (as zram does), and also the
    no-zero pages (which won't tell us anything about RLE performance).
    This should give a realistic test dataset for zram.  What I found was
    that the data is VERY bimodal: 44% of pages in this dataset contain 5%
    or fewer zeros, and 44% contain over 90% zeros (30% if you include the
    no-zero pages).  This supports the idea of special-casing zeros in zram.
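
    For illustration, here is a minimal sketch of how such a bucketing pass
    could look (this is illustrative only, not the harness actually used;
    the dump filename is a placeholder):

      #include <stdio.h>

      #define PAGE_SIZE 4096

      int main(void)
      {
          unsigned char page[PAGE_SIZE];
          unsigned long all_zero = 0, no_zero = 0, mostly_zero = 0,
                        few_zero = 0, other = 0;
          FILE *f = fopen("memdump.bin", "rb");    /* placeholder path */

          if (!f)
              return 1;
          while (fread(page, 1, PAGE_SIZE, f) == PAGE_SIZE) {
              unsigned int zeros = 0, i;

              for (i = 0; i < PAGE_SIZE; i++)
                  zeros += (page[i] == 0);
              if (zeros == PAGE_SIZE)
                  all_zero++;        /* excluded: zram already special-cases these */
              else if (zeros == 0)
                  no_zero++;         /* excluded: says nothing about RLE */
              else if (zeros * 100 >= PAGE_SIZE * 90)
                  mostly_zero++;     /* the ">90% zeros" bucket */
              else if (zeros * 100 <= PAGE_SIZE * 5)
                  few_zero++;        /* the "<=5% zeros" bucket */
              else
                  other++;
          }
          fclose(f);
          printf("all-zero %lu  no-zero %lu  >90%% %lu  <=5%% %lu  other %lu\n",
                 all_zero, no_zero, mostly_zero, few_zero, other);
          return 0;
      }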
    
    Next, I've benchmarked four variants of lzo on these pages (on 64-bit
    Arm at max frequency): baseline LZO; baseline + Matt Sealey's patches
    (aka MS); baseline + RLE only; baseline + MS + RLE.  Numbers are for
    weighted roundtrip throughput (the weighting reflects that zram does
    more compression than decompression).
    
      https://drive.google.com/file/d/1VLtLjRVxgUNuWFOxaGPwJYhl_hMQXpHe/view?usp=sharing
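
    As a rough illustration of what a weighted roundtrip figure means, the
    sketch below combines compress and decompress throughput as a weighted
    harmonic mean.  The 2:1 weighting is purely an assumption for
    illustration; it is not necessarily the weighting behind the numbers
    linked above.

      /* Illustrative only: weighted harmonic mean of the two rates. */
      static double weighted_roundtrip(double compress_mbps, double decompress_mbps)
      {
          const double w_c = 2.0, w_d = 1.0;    /* assumed compress:decompress weights */

          return (w_c + w_d) / (w_c / compress_mbps + w_d / decompress_mbps);
      }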
    
    Matt's patches help in all cases on Arm (and have no effect on Intel),
    as expected.
    
    RLE also behaves as expected: with few zeros present, it makes no
    difference; above ~75% zeros, it gives a good improvement (50-300 MB/s
    on top of the benefit from Matt's patches).
    
    Best performance is seen with both MS and RLE patches.
    
    Finally, I have benchmarked the same dataset on an x86-64 device.  Here,
    the MS patches make no difference (as expected); RLE helps much as it
    does on Arm.  There were no definite regressions; allowing for observational
    error, 0.1% (3/4178) of cases had a regression > 1 standard deviation,
    of which the largest was 4.6% (1.2 standard deviations).  I think this
    is probably within the noise.
    
      https://drive.google.com/file/d/1xCUVwmiGD0heEMx5gcVEmLBI4eLaageV/view?usp=sharing
    
    One point to note is that the graphs show RLE appears to help very
    slightly with no zeros present! This is because the extra code causes
    the clang optimiser to change code layout in a way that happens to have
    a significant benefit.  Taking baseline LZO and adding a do-nothing line
    like "__builtin_prefetch(out_len);" immediately before the "goto next"
    has the same effect.  So this is a real but incidental effect (a code
    layout artefact rather than a benefit of RLE itself), and it is small
    enough not to upset the overall findings.
    
    This patch (of 3):
    
    When using zram, we frequently encounter long runs of zero bytes.  This
    adds a special case which identifies runs of zeros and encodes them
    using run-length encoding.
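
    As a rough sketch of the zero-run special case (illustrative only: the
    marker/length encoding below is not the bitstream format the patch
    actually emits):

      #include <stddef.h>

      /*
       * Scan for a run of zero bytes at 'in' and, if one is found, emit an
       * illustrative (marker, length) pair.  Returns the run length so the
       * caller can advance the input; returns 0 to fall back to the normal
       * LZO match/literal path.
       */
      static size_t emit_zero_run(const unsigned char *in, size_t avail,
                                  unsigned char **out)
      {
          size_t run = 0;

          while (run < avail && run < 255 && in[run] == 0)
              run++;
          if (run) {
              *(*out)++ = 0;                      /* illustrative run marker */
              *(*out)++ = (unsigned char)run;     /* run length */
          }
          return run;
      }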
    
    This is faster for both compression and decompression.  For high-entropy
    data which doesn't hit this case, impact is minimal.
    
    Compression ratio is within a few percent in all cases.
    
    This modifies the bitstream in a way which is backwards compatible
    (i.e., we can decompress old bitstreams, but old versions of lzo cannot
    decompress new bitstreams).
    
    Link: http://lkml.kernel.org/r/20190205155944.16007-2-dave.rodgman@arm.com
    Signed-off-by: Dave Rodgman <dave.rodgman@arm.com>
    Cc: David S. Miller <davem@davemloft.net>
    Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Cc: Herbert Xu <herbert@gondor.apana.org.au>
    Cc: Markus F.X.J. Oberhumer <markus@oberhumer.com>
    Cc: Matt Sealey <matt.sealey@arm.com>
    Cc: Minchan Kim <minchan@kernel.org>
    Cc: Nitin Gupta <nitingupta910@gmail.com>
    Cc: Richard Purdie <rpurdie@openedhand.com>
    Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
    Cc: Sonny Rao <sonnyrao@google.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>