Commit c146a2b9 authored by Alexander Potapenko, committed by Linus Torvalds

mm, kasan: account for object redzone in SLUB's nearest_obj()

When looking up the nearest SLUB object for a given address, correctly
calculate its offset if SLAB_RED_ZONE is enabled for that cache.

Previously, when KASAN detected an error on an object from a cache
with SLAB_RED_ZONE set, the start address of the object was
miscalculated, which led to random stack traces being reported.

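To make the arithmetic concrete, here is a minimal user-space sketch of
the computation the patched nearest_obj() performs. It is illustration
only, not kernel code: the fake_cache structure and the sizes are
hypothetical stand-ins for the real kmem_cache fields, and the
unconditional padding stands in for fixup_red_left().

#include <stdio.h>

/* Hypothetical stand-in for struct kmem_cache (illustration only). */
struct fake_cache {
	unsigned long size;          /* slot size, including red zones        */
	unsigned long red_left_pad;  /* left red zone; 0 without SLAB_RED_ZONE */
};

/* Mirrors the patched nearest_obj(): round the address down to the
 * start of its slot, clamp to the last slot, then skip the left red
 * zone (the job fixup_red_left() does in the kernel). */
static void *nearest_obj(struct fake_cache *c, char *page_addr,
			 unsigned long nr_objects, char *x)
{
	char *object = x - (x - page_addr) % c->size;
	char *last_object = page_addr + (nr_objects - 1) * c->size;
	char *result = (object > last_object) ? last_object : object;

	return result + c->red_left_pad;
}

int main(void)
{
	char page[4096];
	struct fake_cache c = { .size = 128, .red_left_pad = 16 };

	/* An address inside the third slot (offsets 256..383): the slot
	 * starts at offset 256, but the object itself starts 16 bytes
	 * later, at offset 272. */
	printf("object at offset %td\n",
	       (char *)nearest_obj(&c, page, 32, page + 300) - page);
	return 0;
}

Before the fix, the helper returned the slot start (offset 256 here),
i.e. a pointer into the left red zone rather than into the object,
which is what produced the bogus stack reports.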

Fixes: 7ed2f9e6 ("mm, kasan: SLAB support")
Link: http://lkml.kernel.org/r/1468347165-41906-2-git-send-email-glider@google.com
Signed-off-by: Alexander Potapenko <glider@google.com>
Cc: Andrey Konovalov <adech.fo@gmail.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Steven Rostedt (Red Hat) <rostedt@goodmis.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Kostya Serebryany <kcc@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Kuthonuzo Luruo <kuthonuzo.luruo@hpe.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 734537c9
include/linux/slub_def.h
@@ -119,15 +119,17 @@ static inline void sysfs_slab_remove(struct kmem_cache *s)
 void object_err(struct kmem_cache *s, struct page *page,
 		u8 *object, char *reason);
 
+void *fixup_red_left(struct kmem_cache *s, void *p);
+
 static inline void *nearest_obj(struct kmem_cache *cache, struct page *page,
 				void *x) {
 	void *object = x - (x - page_address(page)) % cache->size;
 	void *last_object = page_address(page) +
 		(page->objects - 1) * cache->size;
-	if (unlikely(object > last_object))
-		return last_object;
-	else
-		return object;
+	void *result = (unlikely(object > last_object)) ? last_object : object;
+
+	result = fixup_red_left(cache, result);
+	return result;
 }
 
 #endif /* _LINUX_SLUB_DEF_H */

mm/slub.c
@@ -124,7 +124,7 @@ static inline int kmem_cache_debug(struct kmem_cache *s)
 #endif
 }
 
-static inline void *fixup_red_left(struct kmem_cache *s, void *p)
+inline void *fixup_red_left(struct kmem_cache *s, void *p)
 {
 	if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE)
 		p += s->red_left_pad;
...
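Note on the mm/slub.c hunk: fixup_red_left() loses its static qualifier
because the slub_def.h hunk above now declares it and calls it from the
inline nearest_obj(), so the function needs external linkage. The hunk
is truncated above; for reference, a sketch of the complete function
after this patch (the return statement is reconstructed from the
adjusted pointer and is an assumption here, not shown in the diff):

inline void *fixup_red_left(struct kmem_cache *s, void *p)
{
	/* With redzoning enabled, the object proper begins
	 * red_left_pad bytes past the start of its slot. */
	if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE)
		p += s->red_left_pad;

	return p;
}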