Commit f9e13c0a authored by Shakeel Butt's avatar Shakeel Butt Committed by Linus Torvalds

slab, slub: skip unnecessary kasan_cache_shutdown()

The kasan quarantine is designed to delay freeing slab objects to catch
use-after-free.  The quarantine can be large (several percent of machine
memory size).  When kmem_caches are deleted related objects are flushed
from the quarantine but this requires scanning the entire quarantine
which can be very slow.  We have seen the kernel busily working on this
while holding slab_mutex and badly affecting cache_reaper, slabinfo
readers and memcg kmem cache creations.

It can easily be reproduced by the following script:

	yes . | head -1000000 | xargs stat > /dev/null
	for i in `seq 1 10`; do
		seq 500 | (cd /cg/memory && xargs mkdir)
		seq 500 | xargs -I{} sh -c 'echo $BASHPID > \
			/cg/memory/{}/tasks && exec stat .' > /dev/null
	seq 500 | (cd /cg/memory && xargs rmdir)
	done


This patch is based on the observation that if the kmem_cache to be
destroyed is empty then there should not be any objects of this cache in
the quarantine.

Without the patch the script got stuck for a couple of hours.  With the
patch the script completed within a second.


Signed-off-by: Shakeel Butt <>
Reviewed-by: Andrew Morton <>
Acked-by: Andrey Ryabinin <>
Acked-by: Christoph Lameter <>
Cc: Vladimir Davydov <>
Cc: Alexander Potapenko <>
Cc: Greg Thelen <>
Cc: Dmitry Vyukov <>
Cc: Pekka Enberg <>
Cc: David Rientjes <>
Cc: Joonsoo Kim <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
parent 1ba586de
mm/kasan/kasan.c
@@ -382,7 +382,8 @@ void kasan_cache_shrink(struct kmem_cache *cache)
 
 void kasan_cache_shutdown(struct kmem_cache *cache)
 {
-	quarantine_remove_cache(cache);
+	if (!__kmem_cache_empty(cache))
+		quarantine_remove_cache(cache);
 }
 
 size_t kasan_metadata_size(struct kmem_cache *cache)
mm/slab.c
@@ -2291,6 +2291,18 @@ out:
 	return nr_freed;
 }
 
+bool __kmem_cache_empty(struct kmem_cache *s)
+{
+	int node;
+	struct kmem_cache_node *n;
+
+	for_each_kmem_cache_node(s, node, n)
+		if (!list_empty(&n->slabs_full) ||
+		    !list_empty(&n->slabs_partial))
+			return false;
+	return true;
+}
+
 int __kmem_cache_shrink(struct kmem_cache *cachep)
 {
 	int ret = 0;
mm/slab.h
@@ -166,6 +166,7 @@ static inline slab_flags_t kmem_cache_flags(unsigned int object_size,
 
+bool __kmem_cache_empty(struct kmem_cache *);
 int __kmem_cache_shutdown(struct kmem_cache *);
 void __kmem_cache_release(struct kmem_cache *);
 int __kmem_cache_shrink(struct kmem_cache *);
mm/slub.c
@@ -3696,6 +3696,17 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 		discard_slab(s, page);
 }
 
+bool __kmem_cache_empty(struct kmem_cache *s)
+{
+	int node;
+	struct kmem_cache_node *n;
+
+	for_each_kmem_cache_node(s, node, n)
+		if (n->nr_partial || slabs_node(s, node))
+			return false;
+	return true;
+}
+
 /*
  * Release all resources used by a slab cache.
  */