Commit cbef8478 authored by Naoya Horiguchi, committed by Linus Torvalds

mm/hugetlb: pmd_huge() returns true for non-present hugepage

Migrating hugepages and hwpoisoned hugepages are considered non-present
hugepages; they are referenced via migration entries and hwpoison
entries in their page table slots.

This behavior causes a race condition because pmd_huge() cannot distinguish
non-huge pages from migrating/hwpoisoned hugepages.  follow_page_mask() is
one example: the kernel would call follow_page_pte() for such a hugepage,
although that function is supposed to handle only normal pages.

To avoid this, this patch makes pmd_huge() return true when pmd_none() is
false *and* pmd_present() is false.  We don't have to worry about mixing up
a non-present pmd entry with a normal pmd (one pointing to a leaf-level pte
page), because pmd_present() is true for a normal pmd.

The same race condition could happen in the (x86-specific) gup_pmd_range(),
where this patch simply adds a !pmd_present() check rather than calling
pmd_huge(), because gup_pmd_range() is a fast path.  If we encounter a
non-present hugepage in this function, we go into gup_huge_pmd(), return 0
at its flag-mask check, and finally fall back to the slow path.

Fixes: 290408d4 ("hugetlb: hugepage migration core")
Signed-off-by: Naoya Horiguchi <>
Cc: Hugh Dickins <>
Cc: James Hogan <>
Cc: David Rientjes <>
Cc: Mel Gorman <>
Cc: Johannes Weiner <>
Cc: Michal Hocko <>
Cc: Rik van Riel <>
Cc: Andrea Arcangeli <>
Cc: Luiz Capitulino <>
Cc: Nishanth Aravamudan <>
Cc: Lee Schermerhorn <>
Cc: Steve Capper <>
Cc: <>	[2.6.36+]
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
parent 61f77eda
@@ -172,7 +172,7 @@ static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
 		if (pmd_none(pmd) || pmd_trans_splitting(pmd))
 			return 0;
-		if (unlikely(pmd_large(pmd))) {
+		if (unlikely(pmd_large(pmd) || !pmd_present(pmd))) {
 			/*
 			 * NUMA hinting faults need to be handled in the GUP
 			 * slowpath for accounting purposes and so that they
@@ -54,9 +54,15 @@ int pud_huge(pud_t pud)
+/*
+ * pmd_huge() returns 1 if @pmd is hugetlb related entry, that is normal
+ * hugetlb entry or non-present (migration or hwpoisoned) hugetlb entry.
+ * Otherwise, returns 0.
+ */
 int pmd_huge(pmd_t pmd)
 {
-	return !!(pmd_val(pmd) & _PAGE_PSE);
+	return !pmd_none(pmd) &&
+		(pmd_val(pmd) & (_PAGE_PRESENT|_PAGE_PSE)) != _PAGE_PRESENT;
 }
 
 int pud_huge(pud_t pud)
@@ -3679,6 +3679,8 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
 {
 	struct page *page;
 
+	if (!pmd_present(*pmd))
+		return NULL;
 	page = pte_page(*(pte_t *)pmd);
 	if (page)
 		page += ((address & ~PMD_MASK) >> PAGE_SHIFT);