Commit 4e1a33b1 authored by Sven Schmidt, committed by Linus Torvalds

lib: update LZ4 compressor module

Patch series "Update LZ4 compressor module", v7.

This patchset updates the LZ4 compression module to a version based on
LZ4 v1.7.3, allowing use of the fast compression algorithm aka LZ4 fast,
which provides an "acceleration" parameter as a tradeoff between high
compression ratio and high compression speed.

We want to use LZ4 fast in order to support compression in Lustre and
(mostly, based on that) to investigate data reduction techniques on
behalf of storage systems.

It will also be useful for other users of LZ4 compression, as LZ4 fast
makes it possible for applications to choose fast and/or high
compression depending on the use case.  For instance, ZRAM offers an
LZ4 backend and could benefit from an updated LZ4 in the kernel.

LZ4 homepage: http://www.lz4.org/
LZ4 source repository: https://github.com/lz4/lz4
Source version: 1.7.3

Benchmark (taken from [1], Core i5-4300U @1.9GHz):
----------------|--------------|----------------|----------
Compressor      | Compression  | Decompression  | Ratio
----------------|--------------|----------------|----------
memcpy          |  4200 MB/s   |  4200 MB/s     | 1.000
LZ4 fast 50     |  1080 MB/s   |  2650 MB/s     | 1.375
LZ4 fast 17     |   680 MB/s   |  2220 MB/s     | 1.607
LZ4 fast 5      |   475 MB/s   |  1920 MB/s     | 1.886
LZ4 default     |   385 MB/s   |  1850 MB/s     | 2.101

[1] http://fastcompression.blogspot.de/2015/04/sampling-or-faster-lz4.html

[PATCH 1/5] lib: Update LZ4 compressor module
[PATCH 2/5] lib/decompress_unlz4: Change module to work with new LZ4 module version
[PATCH 3/5] crypto: Change LZ4 modules to work with new LZ4 module version
[PATCH 4/5] fs/pstore: fs/squashfs: Change usage of LZ4 to work with new LZ4 version
[PATCH 5/5] lib/lz4: Remove back-compat wrappers

This patch (of 5):

Update the LZ4 kernel module to LZ4 v1.7.3 by Yann Collet.  The kernel
module is inspired by previous work by Chanho Min.  The updated LZ4
module will not break existing code, since the patchset contains the
appropriate changes.

API changes:

New method LZ4_compress_fast, which differs from the variant previously
available in the kernel by its new acceleration parameter, allowing
compression ratio to be traded for more compression speed and vice
versa.
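As a sketch of what this looks like in the merged header (prototype reproduced from memory of include/linux/lz4.h after this series; treat names and parameter order as approximate, not authoritative):

/*
 * Unlike the userspace library, the kernel variant keeps an explicit
 * working-memory parameter (wrkmem, LZ4_MEM_COMPRESS bytes) instead of
 * allocating internally.  acceleration == 1 behaves like
 * LZ4_compress_default(); larger values run faster at a worse ratio.
 */
int LZ4_compress_fast(const char *source, char *dest, int inputSize,
	int maxOutputSize, int acceleration, void *wrkmem);

Returns the number of bytes written to dest on success, or 0 if the output buffer was too small.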

LZ4_decompress_fast is the respective decompression method, featuring a
very fast decoder (multiple GB/s per core) able to reach RAM speed
limits on multi-core systems.  The decompressor can handle data
compressed with LZ4 fast as well as with the LZ4 HC (high compression)
algorithm.
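The two decompression entry points differ in which size they trust; as a rough sketch of the merged prototypes (reproduced from memory, so approximate):

/*
 * LZ4_decompress_fast trusts the known original size and reads the
 * compressed stream without bounds-checking it; LZ4_decompress_safe
 * is bounded by compressedSize and is the one to use on untrusted
 * input.
 */
int LZ4_decompress_fast(const char *source, char *dest, int originalSize);
int LZ4_decompress_safe(const char *source, char *dest,
	int compressedSize, int maxDecompressedSize);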

The useful functions LZ4_decompress_safe_partial and
LZ4_compress_destsize were also added.  The latter reverses the usual
logic by trying to compress as much data as possible from source into a
fixed-size dest, while the former decompresses only a partial block of
data.
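Sketched prototypes for these two helpers (again reproduced from memory of the merged header; exact spelling such as the capital S in destSize should be checked against lib/lz4.h):

/* Fill dest up to targetDestSize; *sourceSizePtr is updated to the
 * number of source bytes actually consumed. */
int LZ4_compress_destSize(const char *source, char *dest,
	int *sourceSizePtr, int targetDestSize, void *wrkmem);

/* Decode at least targetOutputSize bytes if the stream allows it,
 * never writing past maxDecompressedSize. */
int LZ4_decompress_safe_partial(const char *source, char *dest,
	int compressedSize, int targetOutputSize, int maxDecompressedSize);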

A number of streaming functions were also added which allow
compressing/decompressing data in multiple steps (so-called "streaming
mode").

The methods lz4_compress and lz4_decompress_unknownoutputsize are now
known as LZ4_compress_default and LZ4_decompress_safe, respectively.
The old methods will be removed since there are no callers left in the
code.
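For orientation, the rename roughly maps the old out-parameter style onto the upstream return-value style (old signature reproduced from memory of the previous lib/lz4 interface, so approximate):

/* Before: returns 0 on success, compressed size via *dst_len. */
int lz4_compress(const unsigned char *src, size_t src_len,
	unsigned char *dst, size_t *dst_len, void *wrkmem);

/* After: returns the compressed size directly, 0 on failure. */
int LZ4_compress_default(const char *source, char *dest,
	int inputSize, int maxOutputSize, void *wrkmem);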

[arnd@arndb.de: fix KERNEL_LZ4 support]
  Link: http://lkml.kernel.org/r/20170208211946.2839649-1-arnd@arndb.de
[akpm@linux-foundation.org: simplify]
[akpm@linux-foundation.org: fix the simplification]
[4sschmid@informatik.uni-hamburg.de: fix performance regressions]
  Link: http://lkml.kernel.org/r/1486898178-17125-2-git-send-email-4sschmid@informatik.uni-hamburg.de
[4sschmid@informatik.uni-hamburg.de: v8]
  Link: http://lkml.kernel.org/r/1487182598-15351-2-git-send-email-4sschmid@informatik.uni-hamburg.de
Link: http://lkml.kernel.org/r/1486321748-19085-2-git-send-email-4sschmid@informatik.uni-hamburg.de
Signed-off-by: Sven Schmidt <4sschmid@informatik.uni-hamburg.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Bongkyu Kim <bongkyu.kim@lge.com>
Cc: Rui Salvaterra <rsalvaterra@gmail.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: David S. Miller <davem@davemloft.net>
Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 8893f519
ccflags-y += -O3
obj-$(CONFIG_LZ4_COMPRESS) += lz4_compress.o
obj-$(CONFIG_LZ4HC_COMPRESS) += lz4hc_compress.o
obj-$(CONFIG_LZ4_DECOMPRESS) += lz4_decompress.o
#ifndef __LZ4DEFS_H__
#define __LZ4DEFS_H__

/*
 * lz4defs.h -- common and architecture specific defines for the kernel usage
 *
 * LZ4 - Fast LZ compression algorithm
 * Copyright (C) 2011-2016, Yann Collet.
 * BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *	* Redistributions of source code must retain the above copyright
 *	  notice, this list of conditions and the following disclaimer.
 *	* Redistributions in binary form must reproduce the above
 *	  copyright notice, this list of conditions and the following disclaimer
 *	  in the documentation and/or other materials provided with the
 *	  distribution.
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 * You can contact the author at :
 *	- LZ4 homepage : http://www.lz4.org
 *	- LZ4 source repository : https://github.com/lz4/lz4
 *
 *	Changed for kernel usage by:
 *	Sven Schmidt <4sschmid@informatik.uni-hamburg.de>
 */

#include <asm/unaligned.h>
#include <linux/string.h>	 /* memset, memcpy */

#define FORCE_INLINE __always_inline

/*-************************************
 *	Basic Types
 **************************************/
#include <linux/types.h>

typedef uint8_t BYTE;
typedef uint16_t U16;
typedef uint32_t U32;
typedef int32_t S32;
typedef uint64_t U64;
typedef uintptr_t uptrval;

/*-************************************
 *	Architecture specifics
 **************************************/
#if defined(CONFIG_64BIT)
#define LZ4_ARCH64 1
#else
#define LZ4_ARCH64 0
#endif

#if defined(__LITTLE_ENDIAN)
#define LZ4_LITTLE_ENDIAN 1
#else
#define LZ4_LITTLE_ENDIAN 0
#endif

/*-************************************
 *	Constants
 **************************************/
#define MINMATCH 4

#define WILDCOPYLENGTH 8
#define LASTLITERALS 5
#define MFLIMIT (WILDCOPYLENGTH + MINMATCH)

/* Increase this value ==> compression run slower on incompressible data */
#define LZ4_SKIPTRIGGER 6

#define HASH_UNIT sizeof(size_t)

#define KB (1 << 10)
#define MB (1 << 20)
#define GB (1U << 30)

#define MAXD_LOG 16
#define MAX_DISTANCE ((1 << MAXD_LOG) - 1)
#define STEPSIZE sizeof(size_t)

#define ML_BITS	4
#define ML_MASK	((1U << ML_BITS) - 1)
#define RUN_BITS (8 - ML_BITS)
#define RUN_MASK ((1U << RUN_BITS) - 1)

/*-************************************
 *	Reading and writing into memory
 **************************************/
static FORCE_INLINE U16 LZ4_read16(const void *ptr)
{
	return get_unaligned((const U16 *)ptr);
}

static FORCE_INLINE U32 LZ4_read32(const void *ptr)
{
	return get_unaligned((const U32 *)ptr);
}

static FORCE_INLINE size_t LZ4_read_ARCH(const void *ptr)
{
	return get_unaligned((const size_t *)ptr);
}

static FORCE_INLINE void LZ4_write16(void *memPtr, U16 value)
{
	put_unaligned(value, (U16 *)memPtr);
}

static FORCE_INLINE void LZ4_write32(void *memPtr, U32 value)
{
	put_unaligned(value, (U32 *)memPtr);
}

static FORCE_INLINE U16 LZ4_readLE16(const void *memPtr)
{
	return get_unaligned_le16(memPtr);
}

static FORCE_INLINE void LZ4_writeLE16(void *memPtr, U16 value)
{
	return put_unaligned_le16(value, memPtr);
}

static FORCE_INLINE void LZ4_copy8(void *dst, const void *src)
{
#if LZ4_ARCH64
	U64 a = get_unaligned((const U64 *)src);

	put_unaligned(a, (U64 *)dst);
#else
	U32 a = get_unaligned((const U32 *)src);
	U32 b = get_unaligned((const U32 *)src + 1);

	put_unaligned(a, (U32 *)dst);
	put_unaligned(b, (U32 *)dst + 1);
#endif
}

/*
 * customized variant of memcpy,
 * which can overwrite up to 7 bytes beyond dstEnd
 */
static FORCE_INLINE void LZ4_wildCopy(void *dstPtr,
	const void *srcPtr, void *dstEnd)
{
	BYTE *d = (BYTE *)dstPtr;
	const BYTE *s = (const BYTE *)srcPtr;
	BYTE *const e = (BYTE *)dstEnd;

	do {
		LZ4_copy8(d, s);
		d += 8;
		s += 8;
	} while (d < e);
}

static FORCE_INLINE unsigned int LZ4_NbCommonBytes(register size_t val)
{
#if LZ4_LITTLE_ENDIAN
	return __ffs(val) >> 3;
#else
	return (BITS_PER_LONG - 1 - __fls(val)) >> 3;
#endif
}

static FORCE_INLINE unsigned int LZ4_count(
	const BYTE *pIn,
	const BYTE *pMatch,
	const BYTE *pInLimit)
{
	const BYTE *const pStart = pIn;

	while (likely(pIn < pInLimit - (STEPSIZE - 1))) {
		size_t const diff = LZ4_read_ARCH(pMatch) ^ LZ4_read_ARCH(pIn);

		if (!diff) {
			pIn += STEPSIZE;
			pMatch += STEPSIZE;
			continue;
		}

		pIn += LZ4_NbCommonBytes(diff);

		return (unsigned int)(pIn - pStart);
	}

#if LZ4_ARCH64
	if ((pIn < (pInLimit - 3))
		&& (LZ4_read32(pMatch) == LZ4_read32(pIn))) {
		pIn += 4;
		pMatch += 4;
	}
#endif

	if ((pIn < (pInLimit - 1))
		&& (LZ4_read16(pMatch) == LZ4_read16(pIn))) {
		pIn += 2;
		pMatch += 2;
	}

	if ((pIn < pInLimit) && (*pMatch == *pIn))
		pIn++;

	return (unsigned int)(pIn - pStart);
}

typedef enum { noLimit = 0, limitedOutput = 1 } limitedOutput_directive;
typedef enum { byPtr, byU32, byU16 } tableType_t;

typedef enum { noDict = 0, withPrefix64k, usingExtDict } dict_directive;
typedef enum { noDictIssue = 0, dictSmall } dictIssue_directive;

typedef enum { endOnOutputSize = 0, endOnInputSize = 1 } endCondition_directive;
typedef enum { full = 0, partial = 1 } earlyEnd_directive;

#endif