
Commit aea4df4c authored by Laura Abbott, committed by Linus Torvalds

mm: slub: really fix slab walking for init_on_free

Commit 1b7e816f ("mm: slub: Fix slab walking for init_on_free")
fixed one problem with the slab walking but missed a key detail: When
walking the list, the head and tail pointers need to be updated since we
end up reversing the list as a result.  Without doing this, bulk free is
broken.

One way this is exposed is a NULL pointer with slub_debug=F:

  =============================================================================
  BUG skbuff_head_cache (Tainted: G                T): Object already free
  -----------------------------------------------------------------------------

  INFO: Slab 0x000000000d2d2f8f objects=16 used=3 fp=0x0000000064309071 flags=0x3fff00000000201
  BUG: kernel NULL pointer dereference, address: 0000000000000000
  Oops: 0000 [#1] PREEMPT SMP PTI
  RIP: 0010:print_trailer+0x70/0x1d5
  Call Trace:
   <IRQ>
   free_debug_processing.cold.37+0xc9/0x149
   __slab_free+0x22a/0x3d0
   kmem_cache_free_bulk+0x415/0x420
   __kfree_skb_flush+0x30/0x40
   net_rx_action+0x2dd/0x480
   __do_softirq+0xf0/0x246
   irq_exit+0x93/0xb0
   do_IRQ+0xa0/0x110
   common_interrupt+0xf/0xf
   </IRQ>

Given that this walk is now almost identical to the existing debugging
code, which correctly walks the list, combine the two.

Link: https://lkml.kernel.org/r/20191104170303.GA50361@gandi.net
Link: http://lkml.kernel.org/r/20191106222208.26815-1-labbott@redhat.com
Fixes: 1b7e816f ("mm: slub: Fix slab walking for init_on_free")
Signed-off-by: Laura Abbott <labbott@redhat.com>
Reported-by: Thibaut Sautereau <thibaut.sautereau@clip-os.org>
Acked-by: David Rientjes <rientjes@google.com>
Tested-by: Alexander Potapenko <glider@google.com>
Acked-by: Alexander Potapenko <glider@google.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: <clipos@ssi.gouv.fr>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 0362f326
@@ -1433,12 +1433,15 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 	void *old_tail = *tail ? *tail : *head;
 	int rsize;
 
-	if (slab_want_init_on_free(s)) {
-		void *p = NULL;
+	/* Head and tail of the reconstructed freelist */
+	*head = NULL;
+	*tail = NULL;
 
-		do {
-			object = next;
-			next = get_freepointer(s, object);
+	do {
+		object = next;
+		next = get_freepointer(s, object);
 
+		if (slab_want_init_on_free(s)) {
 			/*
 			 * Clear the object and the metadata, but don't touch
 			 * the redzone.
@@ -1448,29 +1451,8 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 							   : 0;
 			memset((char *)object + s->inuse, 0,
 			       s->size - s->inuse - rsize);
-			set_freepointer(s, object, p);
-			p = object;
-		} while (object != old_tail);
-	}
-
-/*
- * Compiler cannot detect this function can be removed if slab_free_hook()
- * evaluates to nothing. Thus, catch all relevant config debug options here.
- */
-#if defined(CONFIG_LOCKDEP)	||		\
-	defined(CONFIG_DEBUG_KMEMLEAK) ||	\
-	defined(CONFIG_DEBUG_OBJECTS_FREE) ||	\
-	defined(CONFIG_KASAN)
-
-	next = *head;
-
-	/* Head and tail of the reconstructed freelist */
-	*head = NULL;
-	*tail = NULL;
-
-	do {
-		object = next;
-		next = get_freepointer(s, object);
+		}
 
 		/* If object's reuse doesn't have to be delayed */
 		if (!slab_free_hook(s, object)) {
 			/* Move object to the new freelist */
@@ -1485,9 +1467,6 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 		*tail = NULL;
 
 	return *head != NULL;
-#else
-	return true;
-#endif
 }
 
 static void *setup_object(struct kmem_cache *s, struct page *page,