Commit 0ee931c4 authored by Michal Hocko, committed by Linus Torvalds

mm: treewide: remove GFP_TEMPORARY allocation flag

GFP_TEMPORARY was introduced by commit e12ba74d ("Group short-lived
and reclaimable kernel allocations") along with __GFP_RECLAIMABLE.  Its
primary motivation was to let callers mark an allocation as short lived
so that the allocator can place such allocations close together and
prevent long term fragmentation.  As much as this sounds like reasonable
semantics, it is much less clear when the high-level GFP_TEMPORARY
allocation flag should actually be used.  How long is temporary?  Can the
context holding that memory sleep?  Can it take locks?  There seems to be
no good answer to those questions.

The current implementation of GFP_TEMPORARY is basically GFP_KERNEL |
__GFP_RECLAIMABLE, which in itself is tricky because essentially none of
the existing callers provide a way to reclaim the allocated memory.  So
the flag is rather misleading and hard to evaluate for any benefit.
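
For reference, the relationship boils down to the following (a paraphrase
of the include/linux/gfp.h definitions, not the literal bit expansion; the
low-level bits behind GFP_KERNEL vary between kernel versions):

  /* paraphrased: GFP_TEMPORARY was GFP_KERNEL plus the reclaimable
   * placement hint, nothing more */
  #define GFP_TEMPORARY	(GFP_KERNEL | __GFP_RECLAIMABLE)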

I have checked some random users and none of them added the flag with a
specific justification.  I suspect most of them simply copied it from
other existing users, and the rest thought it might be a good idea to use
without measuring anything.  This suggests that GFP_TEMPORARY mostly
invites cargo-cult usage without any reasoning.

I believe that our gfp flags are quite complex already, and especially
those with high-level semantics should be clearly defined to prevent
confusion and abuse.  Therefore I propose dropping GFP_TEMPORARY and
converting all existing users to plain GFP_KERNEL.  Please note that SLAB
users with shrinkers will still get the __GFP_RECLAIMABLE heuristic and
so they will still be placed properly for memory fragmentation
prevention.
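
As an illustration, a cache whose objects can be freed by a shrinker
typically passes SLAB_RECLAIM_ACCOUNT at creation time, and that flag is
what keeps its backing pages grouped with other reclaimable memory
(hypothetical sketch; the cache name and struct are made up):

  /* hypothetical cache backed by a shrinker; SLAB_RECLAIM_ACCOUNT makes
   * its page allocations use the __GFP_RECLAIMABLE placement */
  cache = kmem_cache_create("foo_shrinkable_cache", sizeof(struct foo),
			    0, SLAB_RECLAIM_ACCOUNT, NULL);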

I can see reasons we might want some gfp flag to reflect short-term
allocations, but I propose starting from a clear semantic definition and
only then adding users with proper justification.

This was brought up before LSF this year by Matthew [1] and it turned out
that GFP_TEMPORARY really doesn't have clear semantics.  It seems to be a
heuristic without any measured advantage for most (if not all) of its
current users.  The follow-up discussion revealed that opinions on what
counts as a temporary allocation differ a lot between developers.  So
rather than trying to tweak existing users into semantics they never
expected, I propose to simply remove the flag and start from scratch if
we really need semantics for short-term allocations.

[1] http://lkml.kernel.org/r/20170118054945.GD18349@bombadil.infradead.org

[akpm@linux-foundation.org: fix typo]
[akpm@linux-foundation.org: coding-style fixes]
[sfr@canb.auug.org.au: drm/i915: fix up]
  Link: http://lkml.kernel.org/r/20170816144703.378d4f4d@canb.auug.org.au
Link: http://lkml.kernel.org/r/20170728091904.14627-1-mhocko@kernel.org

Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Mel Gorman <mgorman@suse.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Neil Brown <neilb@suse.de>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent d0dbf771
@@ -510,7 +510,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
 goto done;
 }
-str = (char *)__get_free_page(GFP_TEMPORARY);
+str = (char *)__get_free_page(GFP_KERNEL);
 if (!str)
 goto done;
...
@@ -178,7 +178,7 @@ void show_regs(struct pt_regs *regs)
 struct callee_regs *cregs;
 char *buf;
-buf = (char *)__get_free_page(GFP_TEMPORARY);
+buf = (char *)__get_free_page(GFP_KERNEL);
 if (!buf)
 return;
...
@@ -914,7 +914,7 @@ int rtas_online_cpus_mask(cpumask_var_t cpus)
 if (ret) {
 cpumask_var_t tmp_mask;
-if (!alloc_cpumask_var(&tmp_mask, GFP_TEMPORARY))
+if (!alloc_cpumask_var(&tmp_mask, GFP_KERNEL))
 return ret;
 /* Use tmp_mask to preserve cpus mask from first failure */
@@ -962,7 +962,7 @@ int rtas_ibm_suspend_me(u64 handle)
 return -EIO;
 }
-if (!alloc_cpumask_var(&offline_mask, GFP_TEMPORARY))
+if (!alloc_cpumask_var(&offline_mask, GFP_KERNEL))
 return -ENOMEM;
 atomic_set(&data.working, 0);
...
@@ -151,7 +151,7 @@ static ssize_t store_hibernate(struct device *dev,
 if (!capable(CAP_SYS_ADMIN))
 return -EPERM;
-if (!alloc_cpumask_var(&offline_mask, GFP_TEMPORARY))
+if (!alloc_cpumask_var(&offline_mask, GFP_KERNEL))
 return -ENOMEM;
 stream_id = simple_strtoul(buf, NULL, 16);
...
@@ -319,7 +319,7 @@ static int drm_atomic_helper_crtc_normalize_zpos(struct drm_crtc *crtc,
 DRM_DEBUG_ATOMIC("[CRTC:%d:%s] calculating normalized zpos values\n",
 crtc->base.id, crtc->name);
-states = kmalloc_array(total_planes, sizeof(*states), GFP_TEMPORARY);
+states = kmalloc_array(total_planes, sizeof(*states), GFP_KERNEL);
 if (!states)
 return -ENOMEM;
...
@@ -111,7 +111,7 @@ ssize_t drm_dp_dual_mode_write(struct i2c_adapter *adapter,
 void *data;
 int ret;
-data = kmalloc(msg.len, GFP_TEMPORARY);
+data = kmalloc(msg.len, GFP_KERNEL);
 if (!data)
 return -ENOMEM;
...
@@ -102,7 +102,7 @@ ssize_t drm_scdc_write(struct i2c_adapter *adapter, u8 offset,
 void *data;
 int err;
-data = kmalloc(1 + size, GFP_TEMPORARY);
+data = kmalloc(1 + size, GFP_KERNEL);
 if (!data)
 return -ENOMEM;
...
@@ -37,7 +37,7 @@ static struct etnaviv_gem_submit *submit_create(struct drm_device *dev,
 struct etnaviv_gem_submit *submit;
 size_t sz = size_vstruct(nr, sizeof(submit->bos[0]), sizeof(*submit));
-submit = kmalloc(sz, GFP_TEMPORARY | __GFP_NOWARN | __GFP_NORETRY);
+submit = kmalloc(sz, GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
 if (submit) {
 submit->dev = dev;
 submit->gpu = gpu;
...
@@ -2540,7 +2540,7 @@ static void *i915_gem_object_map(const struct drm_i915_gem_object *obj,
 if (n_pages > ARRAY_SIZE(stack_pages)) {
 /* Too big for stack -- allocate temporary array instead */
-pages = kvmalloc_array(n_pages, sizeof(*pages), GFP_TEMPORARY);
+pages = kvmalloc_array(n_pages, sizeof(*pages), GFP_KERNEL);
 if (!pages)
 return NULL;
 }
...
@@ -293,7 +293,7 @@ static int eb_create(struct i915_execbuffer *eb)
 * as possible to perform the allocation and warn
 * if it fails.
 */
-flags = GFP_TEMPORARY;
+flags = GFP_KERNEL;
 if (size > 1)
 flags |= __GFP_NORETRY | __GFP_NOWARN;
@@ -1515,7 +1515,7 @@ static int eb_copy_relocations(const struct i915_execbuffer *eb)
 urelocs = u64_to_user_ptr(eb->exec[i].relocs_ptr);
 size = nreloc * sizeof(*relocs);
-relocs = kvmalloc_array(size, 1, GFP_TEMPORARY);
+relocs = kvmalloc_array(size, 1, GFP_KERNEL);
 if (!relocs) {
 kvfree(relocs);
 err = -ENOMEM;
@@ -2077,7 +2077,7 @@ get_fence_array(struct drm_i915_gem_execbuffer2 *args,
 return ERR_PTR(-EFAULT);
 fences = kvmalloc_array(args->num_cliprects, sizeof(*fences),
-__GFP_NOWARN | GFP_TEMPORARY);
+__GFP_NOWARN | GFP_KERNEL);
 if (!fences)
 return ERR_PTR(-ENOMEM);
@@ -2463,9 +2463,9 @@ i915_gem_execbuffer(struct drm_device *dev, void *data,
 /* Copy in the exec list from userland */
 exec_list = kvmalloc_array(args->buffer_count, sizeof(*exec_list),
-__GFP_NOWARN | GFP_TEMPORARY);
+__GFP_NOWARN | GFP_KERNEL);
 exec2_list = kvmalloc_array(args->buffer_count + 1, sz,
-__GFP_NOWARN | GFP_TEMPORARY);
+__GFP_NOWARN | GFP_KERNEL);
 if (exec_list == NULL || exec2_list == NULL) {
 DRM_DEBUG("Failed to allocate exec list for %d buffers\n",
 args->buffer_count);
@@ -2543,7 +2543,7 @@ i915_gem_execbuffer2(struct drm_device *dev, void *data,
 /* Allocate an extra slot for use by the command parser */
 exec2_list = kvmalloc_array(args->buffer_count + 1, sz,
-__GFP_NOWARN | GFP_TEMPORARY);
+__GFP_NOWARN | GFP_KERNEL);
 if (exec2_list == NULL) {
 DRM_DEBUG("Failed to allocate exec list for %d buffers\n",
 args->buffer_count);
...
@@ -3231,7 +3231,7 @@ intel_rotate_pages(struct intel_rotation_info *rot_info,
 /* Allocate a temporary list of source pages for random access. */
 page_addr_list = kvmalloc_array(n_pages,
 sizeof(dma_addr_t),
-GFP_TEMPORARY);
+GFP_KERNEL);
 if (!page_addr_list)
 return ERR_PTR(ret);
...
@@ -507,7 +507,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
 ret = -ENOMEM;
 pinned = 0;
-pvec = kvmalloc_array(npages, sizeof(struct page *), GFP_TEMPORARY);
+pvec = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
 if (pvec != NULL) {
 struct mm_struct *mm = obj->userptr.mm->mm;
 unsigned int flags = 0;
@@ -643,7 +643,7 @@ i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
 if (mm == current->mm) {
 pvec = kvmalloc_array(num_pages, sizeof(struct page *),
-GFP_TEMPORARY |
+GFP_KERNEL |
 __GFP_NORETRY |
 __GFP_NOWARN);
 if (pvec) /* defer to worker if malloc fails */
...
@@ -787,16 +787,16 @@ int i915_error_state_buf_init(struct drm_i915_error_state_buf *ebuf,
 */
 ebuf->size = count + 1 > PAGE_SIZE ? count + 1 : PAGE_SIZE;
 ebuf->buf = kmalloc(ebuf->size,
-GFP_TEMPORARY | __GFP_NORETRY | __GFP_NOWARN);
+GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);
 if (ebuf->buf == NULL) {
 ebuf->size = PAGE_SIZE;
-ebuf->buf = kmalloc(ebuf->size, GFP_TEMPORARY);
+ebuf->buf = kmalloc(ebuf->size, GFP_KERNEL);
 }
 if (ebuf->buf == NULL) {
 ebuf->size = 128;
-ebuf->buf = kmalloc(ebuf->size, GFP_TEMPORARY);
+ebuf->buf = kmalloc(ebuf->size, GFP_KERNEL);
 }
 if (ebuf->buf == NULL)
...
@@ -62,7 +62,7 @@ unsigned int *i915_random_order(unsigned int count, struct rnd_state *state)
 {
 unsigned int *order, i;
-order = kmalloc_array(count, sizeof(*order), GFP_TEMPORARY);
+order = kmalloc_array(count, sizeof(*order), GFP_KERNEL);
 if (!order)
 return order;
...
@@ -117,12 +117,12 @@ static int igt_random_insert_remove(void *arg)
 mock_engine_reset(engine);
-waiters = kvmalloc_array(count, sizeof(*waiters), GFP_TEMPORARY);
+waiters = kvmalloc_array(count, sizeof(*waiters), GFP_KERNEL);
 if (!waiters)
 goto out_engines;
 bitmap = kcalloc(DIV_ROUND_UP(count, BITS_PER_LONG), sizeof(*bitmap),
-GFP_TEMPORARY);
+GFP_KERNEL);
 if (!bitmap)
 goto out_waiters;
@@ -187,12 +187,12 @@ static int igt_insert_complete(void *arg)
 mock_engine_reset(engine);
-waiters = kvmalloc_array(count, sizeof(*waiters), GFP_TEMPORARY);
+waiters = kvmalloc_array(count, sizeof(*waiters), GFP_KERNEL);
 if (!waiters)
 goto out_engines;
 bitmap = kcalloc(DIV_ROUND_UP(count, BITS_PER_LONG), sizeof(*bitmap),
-GFP_TEMPORARY);
+GFP_KERNEL);
 if (!bitmap)
 goto out_waiters;
@@ -368,7 +368,7 @@ static int igt_wakeup(void *arg)
 mock_engine_reset(engine);
-waiters = kvmalloc_array(count, sizeof(*waiters), GFP_TEMPORARY);
+waiters = kvmalloc_array(count, sizeof(*waiters), GFP_KERNEL);
 if (!waiters)
 goto out_engines;
...
@@ -127,7 +127,7 @@ static int intel_uncore_check_forcewake_domains(struct drm_i915_private *dev_pri
 return 0;
 valid = kzalloc(BITS_TO_LONGS(FW_RANGE) * sizeof(*valid),
-GFP_TEMPORARY);
+GFP_KERNEL);
 if (!valid)
 return -ENOMEM;
...
@@ -28,7 +28,7 @@ unsigned int *drm_random_order(unsigned int count, struct rnd_state *state)
 {
 unsigned int *order, i;
-order = kmalloc_array(count, sizeof(*order), GFP_TEMPORARY);
+order = kmalloc_array(count, sizeof(*order), GFP_KERNEL);
 if (!order)
 return order;
...
@@ -40,7 +40,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
 if (sz > SIZE_MAX)
 return NULL;
-submit = kmalloc(sz, GFP_TEMPORARY | __GFP_NOWARN | __GFP_NORETRY);
+submit = kmalloc(sz, GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
 if (!submit)
 return NULL;
...
@@ -1627,7 +1627,7 @@ static int igt_topdown(void *ignored)
 goto err;
 bitmap = kzalloc(count / BITS_PER_LONG * sizeof(unsigned long),
-GFP_TEMPORARY);
+GFP_KERNEL);
 if (!bitmap)
 goto err_nodes;
@@ -1741,7 +1741,7 @@ static int igt_bottomup(void *ignored)
 goto err;
 bitmap = kzalloc(count / BITS_PER_LONG * sizeof(unsigned long),
-GFP_TEMPORARY);
+GFP_KERNEL);
 if (!bitmap)
 goto err_nodes;
...
@@ -1279,7 +1279,7 @@ ssize_t cxl_pci_afu_read_err_buffer(struct cxl_afu *afu, char *buf,
 }
 /* use bounce buffer for copy */
-tbuf = (void *)__get_free_page(GFP_TEMPORARY);
+tbuf = (void *)__get_free_page(GFP_KERNEL);
 if (!tbuf)
 return -ENOMEM;
...