1. 09 May, 2019 1 commit
    • x86/mpx, mm/core: Fix recursive munmap() corruption · 5a28fc94
      Dave Hansen authored
      This is a bit of a mess, to put it mildly.  But, it's a bug
      that only seems to have shown up in 4.20 but wasn't noticed
      until now, because nobody uses MPX.
      
      MPX has the arch_unmap() hook inside of munmap() because MPX
      uses bounds tables that protect other areas of memory.  When
      memory is unmapped, there is also a need to unmap the MPX
      bounds tables.  Barring this, unused bounds tables can eat 80%
      of the address space.
      
      But, the recursive do_munmap() that gets called via arch_unmap()
      wreaks havoc with __do_munmap()'s state.  It can result in
      freeing populated page tables, accessing bogus VMA state,
      double-freed VMAs and more.
      
      See the "long story" further below for the gory details.
      
      To fix this, call arch_unmap() before __do_munmap() has a chance
      to do anything meaningful.  Also, remove the 'vma' argument
      and force the MPX code to do its own, independent VMA lookup.
      
      == UML / unicore32 impact ==
      
      Remove unused 'vma' argument to arch_unmap().  No functional
      change.
      
      I compile tested this on UML but not unicore32.
      
      == powerpc impact ==
      
      powerpc uses arch_unmap() to watch for munmap() on the
      VDSO and zeroes out 'current->mm->context.vdso_base'.  Moving
      arch_unmap() makes this happen earlier in __do_munmap().  But,
      'vdso_base' seems to only be used in perf and in the signal
      delivery that happens near the return to userspace.  I can not
      find any likely impact to powerpc, other than the zeroing
      happening a little earlier.
      
      powerpc does not use the 'vma' argument and is unaffected by
      its removal.
      
      I compile-tested a 64-bit powerpc defconfig.
      
      == x86 impact ==
      
      For the common success case this is functionally identical to
      what was there before.  For the munmap() failure case, it's
      possible that some MPX tables will be zapped for memory that
      continues to be in use.  But, this is an extraordinarily
      unlikely scenario and the harm would be that MPX provides no
      protection since the bounds table got reset (zeroed).
      
      I can't imagine anyone doing this:
      
      	ptr = mmap(NULL, len, PROT_READ | PROT_WRITE,
      	           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      	// use ptr
      	ret = munmap(ptr, len);
      	if (ret)
      		// oh, there was an error, I'll
      		// keep using ptr.
      
      Because if you're doing munmap(), you are *done* with the
      memory.  There's probably no good data in there _anyway_.
      
      This passes the original reproducer from Richard Biener as
      well as the existing mpx selftests.
      
      The long story:
      
      munmap() has a couple of pieces:
      
       1. Find the affected VMA(s)
       2. Split the start/end one(s) if necessary
       3. Pull the VMAs out of the rbtree
       4. Actually zap the memory via unmap_region(), including
          freeing page tables (or queueing them to be freed).
       5. Fix up some of the accounting (like fput()) and actually
          free the VMA itself.
      
      This specific ordering was actually introduced by:
      
        dd2283f2 ("mm: mmap: zap pages with read mmap_sem in munmap")
      
      during the 4.20 merge window.  The previous __do_munmap() code
      was actually safe because the only thing after arch_unmap() was
      remove_vma_list().  arch_unmap() could not see 'vma' in the
      rbtree because it was detached, so it is not even capable of
      doing operations unsafe for remove_vma_list()'s use of 'vma'.
      
      Richard Biener reported a test that shows this in dmesg:
      
        [1216548.787498] BUG: Bad rss-counter state mm:0000000017ce560b idx:1 val:551
        [1216548.787500] BUG: non-zero pgtables_bytes on freeing mm: 24576
      
      What triggered this was the recursive do_munmap() called via
      arch_unmap().  It was freeing page tables that had not been
      properly zapped.
      
      But, the problem was bigger than this.  For one, arch_unmap()
      can free VMAs.  But, the calling __do_munmap() has variables
      that *point* to VMAs and obviously can't handle them just
      getting freed while the pointer is still in use.
      
      I tried a couple of things here.  First, I tried to fix the page
      table freeing problem in isolation, but I then found the VMA
      issue.  I also tried having the MPX code return a flag if it
      modified the rbtree, which would force __do_munmap() to re-walk
      the tree and restart.  That spiralled out of control in complexity pretty
      fast.
      
      Just moving arch_unmap() and accepting that the bonkers failure
      case might eat some bounds tables seems like the simplest viable
      fix.
      
      This was also reported in the following kernel bugzilla entry:
      
        https://bugzilla.kernel.org/show_bug.cgi?id=203123
      
      There are some reports that this commit triggered this bug:

        dd2283f2 ("mm: mmap: zap pages with read mmap_sem in munmap")
      
      While that commit certainly made the issues easier to hit, I believe
      the fundamental issue has been with us as long as MPX itself, thus
      the Fixes: tag below is for one of the original MPX commits.
      
      [ mingo: Minor edits to the changelog and the patch. ]
      
      Reported-by: Richard Biener <rguenther@suse.de>
      Reported-by: H.J. Lu <hjl.tools@gmail.com>
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Yang Shi <yang.shi@linux.alibaba.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rik van Riel <riel@surriel.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: linux-arch@vger.kernel.org
      Cc: linux-mm@kvack.org
      Cc: linux-um@lists.infradead.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: stable@vger.kernel.org
      Fixes: dd2283f2 ("mm: mmap: zap pages with read mmap_sem in munmap")
      Link: http://lkml.kernel.org/r/20190419194747.5E1AD6DC@viggo.jf.intel.com
      
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5a28fc94
  2. 19 Apr, 2019 1 commit
    • coredump: fix race condition between mmget_not_zero()/get_task_mm() and core dumping · 04f5866e
      Andrea Arcangeli authored
      The core dumping code has always run without holding the mmap_sem for
      writing, despite that being the only way to ensure that the entire vma
      layout will not change from under it.  Only using some signal
      serialization on the processes belonging to the mm is not nearly enough.
      This was pointed out earlier.  For example in Hugh's post from Jul 2017:
      
        https://lkml.kernel.org/r/alpine.LSU.2.11.1707191716030.2055@eggly.anvils
      
        "Not strictly relevant here, but a related note: I was very surprised
         to discover, only quite recently, how handle_mm_fault() may be called
         without down_read(mmap_sem) - when core dumping. That seems a
         misguided optimization to me, which would also be nice to correct"
      
      In particular, because growsdown and growsup mappings can move
      vm_start/vm_end, the various loops the core dump does over the vmas will
      not be consistent if page faults can happen concurrently.
      
      Pretty much all users calling mmget_not_zero()/get_task_mm() and then
      taking the mmap_sem had the potential to introduce unexpected side
      effects in the core dumping code.
      
      Adding mmap_sem for writing around the ->core_dump invocation is a
      viable long term fix, but it requires removing all copy-user and page
      fault activity and replacing it with get_dump_page() for all binary
      formats, which is not suitable as a short term fix.
      
      For the time being this solution manually covers the places that can
      confuse the core dump either by altering the vma layout or the vma flags
      while it runs.  Once ->core_dump runs under mmap_sem for writing the
      function mmget_still_valid() can be dropped.
      
      Allowing mmap_sem protected sections to run in parallel with the
      coredump provides some minor parallelism advantage to the swapoff code
      (which seems to be safe enough by never mangling any vma field and can
      keep doing swapins in parallel to the core dumping) and to some other
      corner case.
      
      In order to facilitate the backporting I added "Fixes: 86039bd3"
      however the side effect of this same race condition in /proc/pid/mem
      should be reproducible since before 2.6.12-rc2 so I couldn't add any
      other "Fixes:" because there's no hash beyond the git genesis commit.
      
      Because find_extend_vma() is the only location outside of the process
      context that could modify the "mm" structures under mmap_sem for
      reading, by adding the mmget_still_valid() check to it, all other cases
      that take the mmap_sem for reading don't need the new check after
      mmget_not_zero()/get_task_mm().  The expand_stack() in page fault
      context also doesn't need the new check, because all tasks under core
      dumping are frozen.
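
      For reference, the helper this commit adds is tiny; below is a
      standalone sketch of its logic (the real mmget_still_valid() lives in
      include/linux/sched/mm.h and tests mm->core_state; the stub struct is
      only for illustration):

      	#include <stdbool.h>
      	#include <stddef.h>

      	/* stand-in for the kernel's struct mm_struct; only the one field */
      	struct mm_struct {
      		void *core_state;	/* non-NULL while a core dump runs */
      	};

      	/* the mm is only "still valid" for modification if no core dump
      	 * has started against it */
      	static inline bool mmget_still_valid(struct mm_struct *mm)
      	{
      		return mm->core_state == NULL;
      	}

      	int main(void)
      	{
      		struct mm_struct mm = { .core_state = NULL };
      		return mmget_still_valid(&mm) ? 0 : 1;
      	}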
      
      Link: http://lkml.kernel.org/r/20190325224949.11068-1-aarcange@redhat.com
      Fixes: 86039bd3 ("userfaultfd: add new syscall to provide memory externalization")
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Reported-by: Jann Horn <jannh@google.com>
      Suggested-by: Oleg Nesterov <oleg@redhat.com>
      Acked-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
      Reviewed-by: Oleg Nesterov <oleg@redhat.com>
      Reviewed-by: Jann Horn <jannh@google.com>
      Acked-by: Jason Gunthorpe <jgg@mellanox.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      04f5866e
  3. 10 Dec, 2018 1 commit
    • mm: mmap: Allow for "high" userspace addresses · f6795053
      Steve Capper authored
      
      
      This patch adds support for "high" userspace addresses that are
      optionally supported on the system and have to be requested via a hint
      mechanism ("high" addr parameter to mmap).
      
      Architectures such as powerpc and x86 achieve this by making changes to
      their architectural versions of arch_get_unmapped_* functions. However,
      on arm64 we use the generic versions of these functions.
      
      Rather than duplicate the generic arch_get_unmapped_* implementations
      for arm64, this patch instead introduces two architectural helper macros
      and applies them to arch_get_unmapped_*:
       arch_get_mmap_end(addr) - get mmap upper limit depending on addr hint
       arch_get_mmap_base(addr, base) - get mmap_base depending on addr hint
      
      If these macros are not defined in architectural code then they default
      to (TASK_SIZE) and (base) so should not introduce any behavioural
      changes to architectures that do not define them.
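
      As a usage illustration, a minimal userspace sketch (assuming an arm64
      kernel with an extended user VA; the exact hint value is arbitrary, and
      on other systems the hint is simply honoured or ignored as usual):

      	#include <stdio.h>
      	#include <sys/mman.h>

      	int main(void)
      	{
      		/* hint above the default 48-bit limit; without a high hint,
      		 * mmap() keeps returning addresses below the old limit */
      		void *hint = (void *)(1UL << 50);
      		void *p = mmap(hint, 4096, PROT_READ | PROT_WRITE,
      			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

      		if (p == MAP_FAILED)
      			perror("mmap");
      		else
      			printf("mapped at %p\n", p);
      		return 0;
      	}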
      
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      f6795053
  4. 26 Oct, 2018 5 commits
    • mm: brk: downgrade mmap_sem to read when shrinking · 9bc8039e
      Yang Shi authored
      Besides munmap(), brk() might also be used to shrink a memory mapping.
      So, it may hold the write mmap_sem for a long time when shrinking a large
      mapping, as commit ("mm: mmap: zap pages with read mmap_sem in munmap")
      described.
      
      brk() will not manipulate vmas anymore after the __do_munmap() call for
      the mapping shrink use case.  But, it may set mm->brk after __do_munmap(),
      which needs to hold the write mmap_sem.
      
      However, a simple trick can work around this by setting mm->brk before
      __do_munmap().  Then restore the original value if __do_munmap() fails.
      With this trick, it is safe to downgrade to read mmap_sem.
      
      So, the same optimization, which downgrades mmap_sem to read for zapping
      pages, is also feasible and reasonable to this case.
      
      The period of holding exclusive mmap_sem for shrinking large mapping would
      be reduced significantly with this optimization.
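
      To make the shrink-via-brk case concrete, a minimal userspace sketch
      (glibc's brk()/sbrk() wrappers assumed; error handling elided):

      	#define _DEFAULT_SOURCE		/* for brk()/sbrk() */
      	#include <stdio.h>
      	#include <unistd.h>

      	int main(void)
      	{
      		void *start = sbrk(0);

      		sbrk(64 * 1024 * 1024);	/* grow the heap by 64MB */
      		brk(start);		/* shrink it back: this is the
      					 * __do_munmap() path above */
      		printf("heap break back at %p\n", sbrk(0));
      		return 0;
      	}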
      
      [akpm@linux-foundation.org: tweak comment]
      [yang.shi@linux.alibaba.com: fix unsigned compare against 0 issue]
        Link: http://lkml.kernel.org/r/1538687672-17795-1-git-send-email-yang.shi@linux.alibaba.com
      Link: http://lkml.kernel.org/r/1538067582-60038-2-git-send-email-yang.shi@linux.alibaba.com
      
      
      Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Laurent Dufour <ldufour@linux.vnet.ibm.com>
      Cc: Colin Ian King <colin.king@canonical.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9bc8039e
    • mm: mremap: downgrade mmap_sem to read when shrinking · 85a06835
      Yang Shi authored
      Besides munmap(), mremap() might also be used to shrink a memory mapping.
      So, it may hold the write mmap_sem for a long time when shrinking a large
      mapping, as commit ("mm: mmap: zap pages with read mmap_sem in
      munmap") described.
      
      The mremap() will not manipulate vmas anymore after __do_munmap() call for
      the mapping shrink use case, so it is safe to downgrade to read mmap_sem.
      
      So, the same optimization, which downgrades mmap_sem to read for zapping
      pages, is also feasible and reasonable to this case.
      
      The period of holding exclusive mmap_sem for shrinking large mapping
      would be reduced significantly with this optimization.
      
      It is more complicated for MREMAP_FIXED and MREMAP_MAYMOVE to adopt this
      optimization since they need to manipulate vmas after do_munmap();
      downgrading mmap_sem there may create a race window.
      
      Simple mapping shrink is the low hanging fruit, and together with
      munmap() it may cover most of the unmap cases.
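
      A minimal userspace sketch of the in-place shrink that hits this
      optimized path (no MREMAP_MAYMOVE or MREMAP_FIXED; error handling kept
      short):

      	#define _GNU_SOURCE		/* for mremap() */
      	#include <stdio.h>
      	#include <sys/mman.h>

      	int main(void)
      	{
      		size_t big = 1UL << 30, small = 1UL << 20;
      		void *p = mmap(NULL, big, PROT_READ | PROT_WRITE,
      			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

      		if (p == MAP_FAILED)
      			return 1;
      		/* shrinking in place (flags == 0) is the case that can zap
      		 * pages under a downgraded (read) mmap_sem */
      		if (mremap(p, big, small, 0) == MAP_FAILED)
      			return 1;
      		printf("shrunk %p from %zu to %zu bytes\n", p, big, small);
      		return 0;
      	}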
      
      [akpm@linux-foundation.org: tweak comment]
      [yang.shi@linux.alibaba.com: fix unsigned compare against 0 issue]
        Link: http://lkml.kernel.org/r/1538687672-17795-2-git-send-email-yang.shi@linux.alibaba.com
      Link: http://lkml.kernel.org/r/1538067582-60038-1-git-send-email-yang.shi@linux.alibaba.com
      
      
      Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Laurent Dufour <ldufour@linux.vnet.ibm.com>
      Cc: Colin Ian King <colin.king@canonical.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      85a06835
    • mm: unmap VM_PFNMAP mappings with optimized path · cb492249
      Yang Shi authored
      When unmapping VM_PFNMAP mappings, vm flags need to be updated.  Since the
      vmas have been detached, it sounds safe to update vm flags with the read
      mmap_sem.
      
      Link: http://lkml.kernel.org/r/1537376621-51150-4-git-send-email-yang.shi@linux.alibaba.com
      
      
      Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
      Reviewed-by: Matthew Wilcox <willy@infradead.org>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cb492249
    • mm: unmap VM_HUGETLB mappings with optimized path · b4cefb36
      Yang Shi authored
      When unmapping VM_HUGETLB mappings, vm flags need to be updated.  Since
      the vmas have been detached, it sounds safe to update vm flags with the
      read mmap_sem.
      
      Link: http://lkml.kernel.org/r/1537376621-51150-3-git-send-email-yang.shi@linux.alibaba.com
      
      
      Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
      Reviewed-by: Matthew Wilcox <willy@infradead.org>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b4cefb36
    • mm: mmap: zap pages with read mmap_sem in munmap · dd2283f2
      Yang Shi authored
      Patch series "mm: zap pages with read mmap_sem in munmap for large
      mapping", v11.
      
      Background:
      Recently, when we ran some vm scalability tests on machines with large memory,
      we ran into a couple of mmap_sem scalability issues when unmapping large memory
      space; please refer to https://lkml.org/lkml/2017/12/14/733 and
      https://lkml.org/lkml/2018/2/20/576.
      
      History:
      Then akpm suggested unmapping large mappings section by section, dropping
      mmap_sem in between, to mitigate it (see https://lkml.org/lkml/2018/3/6/784).
      
      V1 patch series was submitted to the mailing list per Andrew's suggestion
      (see https://lkml.org/lkml/2018/3/20/786).  Then I received a lot of great
      feedback and suggestions.
      
      Then this topic was discussed on LSFMM summit 2018.  In the summit, Michal
      Hocko suggested (also in the v1 patches review) trying a "two phases"
      approach: zapping pages with read mmap_sem, then doing the cleanup with
      write mmap_sem (for discussion detail, see
      https://lwn.net/Articles/753269/)
      
      Approach:
      Zapping pages is the most time consuming part.  According to the suggestion
      from Michal Hocko [1], zapping pages can be done while holding the read
      mmap_sem, like what MADV_DONTNEED does.  Then re-acquire the write mmap_sem
      to clean up vmas.
      
      But, we can't call MADV_DONTNEED directly, since there are two major drawbacks:
        * The unexpected state from PF if it wins the race in the middle of munmap.
          It may return zero page, instead of the content or SIGSEGV.
        * Can't handle VM_LOCKED | VM_HUGETLB | VM_PFNMAP and uprobe mappings,
          which akpm considered a showstopper
      
      But, some part may need write mmap_sem, for example, vma splitting. So,
      the design is as follows:
              acquire write mmap_sem
              lookup vmas (find and split vmas)
              deal with special mappings
              detach vmas
              downgrade_write
      
              zap pages
              free page tables
              release mmap_sem
      
      The vm events with read mmap_sem may come in during page zapping, but
      since the vmas have been detached before, they (i.e. page fault, gup,
      etc.) will not be able to find a valid vma, and just return SIGSEGV or
      -EFAULT as expected.
      
      Vmas with VM_HUGETLB | VM_PFNMAP are considered special mappings.  They
      will be handled by falling back to regular do_munmap() with exclusive
      mmap_sem held in this patch since they may update vm flags.
      
      But, with the "detach vmas first" approach, the vmas have been detached
      when vm flags are updated, so it sounds safe to update vm flags with read
      mmap_sem for this specific case.  So, VM_HUGETLB and VM_PFNMAP will be
      handled by using the optimized path in the following separate patches,
      for bisectability's sake.
      
      Unmapping uprobe areas may need to update mm flags (MMF_RECALC_UPROBES).
      However it is fine to have false-positive MMF_RECALC_UPROBES according to
      uprobes developer.  So, uprobe unmap will not be handled by the regular
      path.
      
      With the "detach vmas first" approach we don't have to re-acquire mmap_sem
      again to clean up vmas to avoid race window which might get the address
      space changed since downgrade_write() doesn't release the lock to lead
      regression, which simply downgrades to read lock.
      
      And, since the lock acquire/release cost is managed to the minimum and
      almost the same as before, the optimization could be extended to any size
      of mapping without incurring significant penalty to small mappings.
      
      For the time being, just do this in the munmap syscall path.  Other
      vm_munmap() or do_munmap() call sites (i.e. mmap, mremap, etc.) remain
      intact due to some implementation difficulties: they acquire the write
      mmap_sem from the very beginning and hold it until the end, and
      do_munmap() might be called in the middle.  But, the optimized do_munmap()
      would need to be called without mmap_sem held so that the optimization can
      be done.  So, if we want to do a similar optimization for the mmap/mremap
      path, I'm afraid we would have to redesign them.  mremap might be called
      on a very large area depending on the use cases; the optimization for it
      will be considered in the future.
      
      This patch (of 3):
      
      When running some mmap/munmap scalability tests with large memory (i.e.
      > 300GB), the below hung task issue may happen occasionally.
      
      INFO: task ps:14018 blocked for more than 120 seconds.
             Tainted: G            E 4.9.79-009.ali3000.alios7.x86_64 #1
       "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
      message.
       ps              D    0 14018      1 0x00000004
        ffff885582f84000 ffff885e8682f000 ffff880972943000 ffff885ebf499bc0
        ffff8828ee120000 ffffc900349bfca8 ffffffff817154d0 0000000000000040
        00ffffff812f872a ffff885ebf499bc0 024000d000948300 ffff880972943000
       Call Trace:
        [<ffffffff817154d0>] ? __schedule+0x250/0x730
        [<ffffffff817159e6>] schedule+0x36/0x80
        [<ffffffff81718560>] rwsem_down_read_failed+0xf0/0x150
        [<ffffffff81390a28>] call_rwsem_down_read_failed+0x18/0x30
        [<ffffffff81717db0>] down_read+0x20/0x40
        [<ffffffff812b9439>] proc_pid_cmdline_read+0xd9/0x4e0
        [<ffffffff81253c95>] ? do_filp_open+0xa5/0x100
        [<ffffffff81241d87>] __vfs_read+0x37/0x150
        [<ffffffff812f824b>] ? security_file_permission+0x9b/0xc0
        [<ffffffff81242266>] vfs_read+0x96/0x130
        [<ffffffff812437b5>] SyS_read+0x55/0xc0
        [<ffffffff8171a6da>] entry_SYSCALL_64_fastpath+0x1a/0xc5
      
      It is because munmap holds mmap_sem exclusively from the very beginning all
      the way down to the end, and doesn't release it in the middle.  When
      unmapping a large mapping, it may take a long time (~18 seconds to unmap a
      320GB mapping with every single page mapped on an idle machine).
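
      A micro-benchmark in the same spirit (sizes scaled down from the 320GB
      case; MAP_POPULATE is assumed available to fault everything in up
      front):

      	#include <stdio.h>
      	#include <time.h>
      	#include <sys/mman.h>

      	int main(void)
      	{
      		size_t len = 4UL << 30;	/* 4GB; scale up to taste */
      		struct timespec a, b;
      		void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
      			       MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE,
      			       -1, 0);

      		if (p == MAP_FAILED)
      			return 1;
      		clock_gettime(CLOCK_MONOTONIC, &a);
      		munmap(p, len);		/* the whole zap happens in here */
      		clock_gettime(CLOCK_MONOTONIC, &b);
      		printf("munmap of %zu bytes took %.6f s\n", len,
      		       (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9);
      		return 0;
      	}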
      
      Zapping pages is the most time consuming part.  According to the suggestion
      from Michal Hocko [1], zapping pages can be done while holding the read
      mmap_sem, like what MADV_DONTNEED does.  Then re-acquire the write mmap_sem
      to clean up vmas.
      
      But, some part may need write mmap_sem, for example, vma splitting. So,
      the design is as follows:
              acquire write mmap_sem
              lookup vmas (find and split vmas)
              deal with special mappings
              detach vmas
              downgrade_write
      
              zap pages
              free page tables
              release mmap_sem
      
      The vm events with read mmap_sem may come in during page zapping, but
      since the vmas have been detached before, they (i.e. page fault, gup,
      etc.) will not be able to find a valid vma, and just return SIGSEGV or
      -EFAULT as expected.
      
      Vmas with VM_HUGETLB | VM_PFNMAP are considered special mappings.  They
      will be handled without downgrading mmap_sem in this patch since they may
      update vm flags.
      
      But, with the "detach vmas first" approach, the vmas have been detached
      when vm flags are updated, so it sounds safe to update vm flags with read
      mmap_sem for this specific case.  So, VM_HUGETLB and VM_PFNMAP will be
      handled by using the optimized path in the following separate patches,
      for bisectability's sake.
      
      Unmapping uprobe areas may need to update mm flags (MMF_RECALC_UPROBES).
      However it is fine to have false-positive MMF_RECALC_UPROBES according to
      uprobes developer.
      
      With the "detach vmas first" approach we don't have to re-acquire mmap_sem
      again to clean up vmas to avoid race window which might get the address
      space changed since downgrade_write() doesn't release the lock to lead
      regression, which simply downgrades to read lock.
      
      And, since the lock acquire/release cost is managed to the minimum and
      almost the same as before, the optimization could be extended to any size
      of mapping without incurring significant penalty to small mappings.
      
      For the time being, just do this in the munmap syscall path.  Other
      vm_munmap() or do_munmap() call sites (i.e. mmap, mremap, etc.) remain
      intact due to some implementation difficulties: they acquire the write
      mmap_sem from the very beginning and hold it until the end, and
      do_munmap() might be called in the middle.  But, the optimized do_munmap()
      would need to be called without mmap_sem held so that the optimization can
      be done.  So, if we want to do a similar optimization for the mmap/mremap
      path, I'm afraid we would have to redesign them.  mremap might be called
      on a very large area depending on the use cases; the optimization for it
      will be considered in the future.
      
      With the patches, the exclusive mmap_sem hold time when munmapping an 80GB
      address space on a machine with 32 cores of E5-2680 @ 2.70GHz dropped from
      seconds to the microsecond (us) level.
      
      munmap_test-15002 [008]  594.380138: funcgraph_entry:              |  __vm_munmap() {
      munmap_test-15002 [008]  594.380146: funcgraph_entry: !2485684 us  |    unmap_region();
      munmap_test-15002 [008]  596.865836: funcgraph_exit:  !2485692 us  |  }
      
      Here the execution time of unmap_region() is used to evaluate the time of
      holding the read mmap_sem; the remaining time is spent holding the
      exclusive lock.
      
      [1] https://lwn.net/Articles/753269/
      
      Link: http://lkml.kernel.org/r/1537376621-51150-2-git-send-email-yang.shi@linux.alibaba.com
      
      
      Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
      Suggested-by: Michal Hocko <mhocko@kernel.org>
      Suggested-by: Kirill A. Shutemov <kirill@shutemov.name>
      Suggested-by: Matthew Wilcox <willy@infradead.org>
      Reviewed-by: Matthew Wilcox <willy@infradead.org>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Laurent Dufour <ldufour@linux.vnet.ibm.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dd2283f2
  5. 13 Oct, 2018 1 commit
    • mm/mmap.c: don't clobber partially overlapping VMA with MAP_FIXED_NOREPLACE · 7aa867dd
      Jann Horn authored
      Daniel Micay reports that attempting to use MAP_FIXED_NOREPLACE in an
      application causes that application to randomly crash.  The existing check
      for handling MAP_FIXED_NOREPLACE looks up the first VMA that either
      overlaps or follows the requested region, and then bails out if that VMA
      overlaps *the start* of the requested region.  It does not bail out if the
      VMA only overlaps another part of the requested region.
      
      Fix it by checking that the found VMA only starts at or after the end of
      the requested region, in which case there is no overlap.
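
      The corrected predicate boils down to an interval check; a standalone
      sketch using the addresses from the test case below:

      	#include <stdbool.h>
      	#include <stdio.h>

      	/* a vma conflicts with the request [addr, addr + len) unless it
      	 * starts at or after the end of the requested region */
      	static bool vma_conflicts(unsigned long vm_start, unsigned long addr,
      				  unsigned long len)
      	{
      		return vm_start < addr + len;
      	}

      	int main(void)
      	{
      		/* vma at 0x10001000 vs. request [0x10000000, 0x10002000):
      		 * mid-region overlap, so the mmap must now fail */
      		printf("%d\n", vma_conflicts(0x10001000, 0x10000000, 0x2000));
      		/* a vma starting exactly at the end does not conflict */
      		printf("%d\n", vma_conflicts(0x10002000, 0x10000000, 0x2000));
      		return 0;
      	}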
      
      Test case:
      
      user@debian:~$ cat mmap_fixed_simple.c
      #include <sys/mman.h>
      #include <errno.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <unistd.h>
      
      #ifndef MAP_FIXED_NOREPLACE
      #define MAP_FIXED_NOREPLACE 0x100000
      #endif
      
      int main(void) {
        char *p;
      
        errno = 0;
        p = mmap((void*)0x10001000, 0x4000, PROT_NONE,
      MAP_PRIVATE|MAP_ANONYMOUS|MAP_FIXED_NOREPLACE, -1, 0);
        printf("p1=%p err=%m\n", p);
      
        errno = 0;
        p = mmap((void*)0x10000000, 0x2000, PROT_READ,
      MAP_PRIVATE|MAP_ANONYMOUS|MAP_FIXED_NOREPLACE, -1, 0);
        printf("p2=%p err=%m\n", p);
      
        char cmd[100];
        sprintf(cmd, "cat /proc/%d/maps", getpid());
        system(cmd);
      
        return 0;
      }
      user@debian:~$ gcc -o mmap_fixed_simple mmap_fixed_simple.c
      user@debian:~$ ./mmap_fixed_simple
      p1=0x10001000 err=Success
      p2=0x10000000 err=Success
      10000000-10002000 r--p 00000000 00:00 0
      10002000-10005000 ---p 00000000 00:00 0
      564a9a06f000-564a9a070000 r-xp 00000000 fe:01 264004
        /home/user/mmap_fixed_simple
      564a9a26f000-564a9a270000 r--p 00000000 fe:01 264004
        /home/user/mmap_fixed_simple
      564a9a270000-564a9a271000 rw-p 00001000 fe:01 264004
        /home/user/mmap_fixed_simple
      564a9a54a000-564a9a56b000 rw-p 00000000 00:00 0                          [heap]
      7f8eba447000-7f8eba5dc000 r-xp 00000000 fe:01 405885
        /lib/x86_64-linux-gnu/libc-2.24.so
      7f8eba5dc000-7f8eba7dc000 ---p 00195000 fe:01 405885
        /lib/x86_64-linux-gnu/libc-2.24.so
      7f8eba7dc000-7f8eba7e0000 r--p 00195000 fe:01 405885
        /lib/x86_64-linux-gnu/libc-2.24.so
      7f8eba7e0000-7f8eba7e2000 rw-p 00199000 fe:01 405885
        /lib/x86_64-linux-gnu/libc-2.24.so
      7f8eba7e2000-7f8eba7e6000 rw-p 00000000 00:00 0
      7f8eba7e6000-7f8eba809000 r-xp 00000000 fe:01 405876
        /lib/x86_64-linux-gnu/ld-2.24.so
      7f8eba9e9000-7f8eba9eb000 rw-p 00000000 00:00 0
      7f8ebaa06000-7f8ebaa09000 rw-p 00000000 00:00 0
      7f8ebaa09000-7f8ebaa0a000 r--p 00023000 fe:01 405876
        /lib/x86_64-linux-gnu/ld-2.24.so
      7f8ebaa0a000-7f8ebaa0b000 rw-p 00024000 fe:01 405876
        /lib/x86_64-linux-gnu/ld-2.24.so
      7f8ebaa0b000-7f8ebaa0c000 rw-p 00000000 00:00 0
      7ffcc99fa000-7ffcc9a1b000 rw-p 00000000 00:00 0                          [stack]
      7ffcc9b44000-7ffcc9b47000 r--p 00000000 00:00 0                          [vvar]
      7ffcc9b47000-7ffcc9b49000 r-xp 00000000 00:00 0                          [vdso]
      ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0
        [vsyscall]
      user@debian:~$ uname -a
      Linux debian 4.19.0-rc6+ #181 SMP Wed Oct 3 23:43:42 CEST 2018 x86_64 GNU/Linux
      user@debian:~$
      
      As you can see, the first page of the mapping at 0x10001000 was clobbered.
      
      Link: http://lkml.kernel.org/r/20181010152736.99475-1-jannh@google.com
      Fixes: a4ff8e86 ("mm: introduce MAP_FIXED_NOREPLACE")
      Signed-off-by: Jann Horn <jannh@google.com>
      Reported-by: Daniel Micay <danielmicay@gmail.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: John Hubbard <jhubbard@nvidia.com>
      Acked-by: Kees Cook <keescook@chromium.org>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      7aa867dd
  6. 22 Aug, 2018 2 commits
    • mm, oom: remove oom_lock from oom_reaper · af5679fb
      Michal Hocko authored
      oom_reaper used to rely on the oom_lock since e2fe1456 ("oom_reaper:
      close race with exiting task").  We do not really need the lock anymore
      though.  21292580 ("mm: oom: let oom_reap_task and exit_mmap run
      concurrently") has removed serialization with the exit path based on the
      mm reference count and so we do not really rely on the oom_lock anymore.
      
      Tetsuo was arguing that at least MMF_OOM_SKIP should be set under the lock
      to prevent races when the page allocator didn't manage to get the
      freed (reaped) memory in __alloc_pages_may_oom but sees the flag later
      on and moves on to another victim.  Although this is possible in principle
      let's wait for it to actually happen in real life before we make the
      locking more complex again.
      
      Therefore remove the oom_lock for oom_reaper paths (both exit_mmap and
      oom_reap_task_mm).  The reaper serializes with exit_mmap by mmap_sem +
      MMF_OOM_SKIP flag.  There is no synchronization with out_of_memory path
      now.
      
      [mhocko@kernel.org: oom_reap_task_mm should return false when __oom_reap_task_mm did]
        Link: http://lkml.kernel.org/r/20180724141747.GP28386@dhcp22.suse.cz
      Link: http://lkml.kernel.org/r/20180719075922.13784-1-mhocko@kernel.org
      
      
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Suggested-by: David Rientjes <rientjes@google.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      af5679fb
    • mm, oom: distinguish blockable mode for mmu notifiers · 93065ac7
      Michal Hocko authored
      There are several blockable mmu notifiers which might sleep in
      mmu_notifier_invalidate_range_start and that is a problem for the
      oom_reaper because it needs to guarantee a forward progress so it cannot
      depend on any sleepable locks.
      
      Currently we simply back off and mark an oom victim with blockable mmu
      notifiers as done after a short sleep.  That can result in selecting a new
      oom victim prematurely because the previous one still hasn't torn its
      memory down yet.
      
      We can do much better though.  Even if mmu notifiers use sleepable locks
      there is no reason to automatically assume those locks are held.  Moreover
      the majority of notifiers only care about a portion of the address space
      and there is absolutely zero reason to fail when we are unmapping an unrelated
      range.  Many notifiers do really block and wait for HW, which is harder to
      handle, and there we have to bail out.
      
      This patch handles the low hanging fruit.
      __mmu_notifier_invalidate_range_start gets a blockable flag and callbacks
      are not allowed to sleep if the flag is set to false.  This is achieved by
      using trylock instead of the sleepable lock for most callbacks and
      continuing as long as we do not block down the call chain.
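
      A userspace analogue of that pattern (a pthread mutex standing in for
      the notifier's sleepable lock; the kernel code differs in detail):

      	#include <pthread.h>
      	#include <stdbool.h>

      	static pthread_mutex_t notifier_lock = PTHREAD_MUTEX_INITIALIZER;

      	/* blockable == false is the oom_reaper case: never sleep on the
      	 * lock, report failure instead so the caller can retry */
      	static bool invalidate_range_start(bool blockable)
      	{
      		if (blockable)
      			pthread_mutex_lock(&notifier_lock);
      		else if (pthread_mutex_trylock(&notifier_lock) != 0)
      			return false;

      		/* ... tear down the range ... */

      		pthread_mutex_unlock(&notifier_lock);
      		return true;
      	}

      	int main(void)
      	{
      		return invalidate_range_start(false) ? 0 : 1;
      	}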
      
      I think we can improve that even further because there is a common pattern
      to do a range lookup first and then do something about that.  The first
      part can be done without a sleeping lock in most cases AFAICS.
      
      The oom_reaper end then simply retries if there is at least one notifier
      which couldn't make any progress in !blockable mode.  A retry loop is
      already implemented to wait for the mmap_sem and this is basically the
      same thing.
      
      The simplest way for driver developers to test this code path is to wrap
      userspace code which uses these notifiers into a memcg and set the hard
      limit to hit the oom.  This can be done e.g. after the test faults in all
      the mmu notifier managed memory and sets the hard limit to something really
      small.  Then we are looking for a proper process tear down.
      
      [akpm@linux-foundation.org: coding style fixes]
      [akpm@linux-foundation.org: minor code simplification]
      Link: http://lkml.kernel.org/r/20180716115058.5559-1-mhocko@kernel.org
      
      
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Christian König <christian.koenig@amd.com> # AMD notifiers
      Acked-by: Leon Romanovsky <leonro@mellanox.com> # mlx and umem_odp
      Reported-by: David Rientjes <rientjes@google.com>
      Cc: "David (ChunMing) Zhou" <David1.Zhou@amd.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Alex Deucher <alexander.deucher@amd.com>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Jani Nikula <jani.nikula@linux.intel.com>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Cc: Doug Ledford <dledford@redhat.com>
      Cc: Jason Gunthorpe <jgg@ziepe.ca>
      Cc: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
      Cc: Sudeep Dutt <sudeep.dutt@intel.com>
      Cc: Ashutosh Dixit <ashutosh.dixit@intel.com>
      Cc: Dimitri Sivanich <sivanich@sgi.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: "Jérôme Glisse" <jglisse@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Felix Kuehling <felix.kuehling@amd.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      93065ac7
  7. 27 Jul, 2018 1 commit
    • mm: fix vma_is_anonymous() false-positives · bfd40eaf
      Kirill A. Shutemov authored
      vma_is_anonymous() relies on ->vm_ops being NULL to detect an anonymous
      VMA.  This is unreliable as ->mmap may not set ->vm_ops.
      
      False-positive vma_is_anonymous() may lead to crashes:
      
      	next ffff8801ce5e7040 prev ffff8801d20eca50 mm ffff88019c1e13c0
      	prot 27 anon_vma ffff88019680cdd8 vm_ops 0000000000000000
      	pgoff 0 file ffff8801b2ec2d00 private_data 0000000000000000
      	flags: 0xff(read|write|exec|shared|mayread|maywrite|mayexec|mayshare)
      	------------[ cut here ]------------
      	kernel BUG at mm/memory.c:1422!
      	invalid opcode: 0000 [#1] SMP KASAN
      	CPU: 0 PID: 18486 Comm: syz-executor3 Not tainted 4.18.0-rc3+ #136
      	Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google
      	01/01/2011
      	RIP: 0010:zap_pmd_range mm/memory.c:1421 [inline]
      	RIP: 0010:zap_pud_range mm/memory.c:1466 [inline]
      	RIP: 0010:zap_p4d_range mm/memory.c:1487 [inline]
      	RIP: 0010:unmap_page_range+0x1c18/0x2220 mm/memory.c:1508
      	Call Trace:
      	 unmap_single_vma+0x1a0/0x310 mm/memory.c:1553
      	 zap_page_range_single+0x3cc/0x580 mm/memory.c:1644
      	 unmap_mapping_range_vma mm/memory.c:2792 [inline]
      	 unmap_mapping_range_tree mm/memory.c:2813 [inline]
      	 unmap_mapping_pages+0x3a7/0x5b0 mm/memory.c:2845
      	 unmap_mapping_range+0x48/0x60 mm/memory.c:2880
      	 truncate_pagecache+0x54/0x90 mm/truncate.c:800
      	 truncate_setsize+0x70/0xb0 mm/truncate.c:826
      	 simple_setattr+0xe9/0x110 fs/libfs.c:409
      	 notify_change+0xf13/0x10f0 fs/attr.c:335
      	 do_truncate+0x1ac/0x2b0 fs/open.c:63
      	 do_sys_ftruncate+0x492/0x560 fs/open.c:205
      	 __do_sys_ftruncate fs/open.c:215 [inline]
      	 __se_sys_ftruncate fs/open.c:213 [inline]
      	 __x64_sys_ftruncate+0x59/0x80 fs/open.c:213
      	 do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
      	 entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
      Reproducer:
      
      	#include <stdio.h>
      	#include <stddef.h>
      	#include <stdint.h>
      	#include <stdlib.h>
      	#include <string.h>
      	#include <sys/types.h>
      	#include <sys/stat.h>
      	#include <sys/ioctl.h>
      	#include <sys/mman.h>
      	#include <unistd.h>
      	#include <fcntl.h>
      
      	#define KCOV_INIT_TRACE			_IOR('c', 1, unsigned long)
      	#define KCOV_ENABLE			_IO('c', 100)
      	#define KCOV_DISABLE			_IO('c', 101)
      	#define COVER_SIZE			(1024<<10)
      
      	#define KCOV_TRACE_PC  0
      	#define KCOV_TRACE_CMP 1
      
      	int main(int argc, char **argv)
      	{
      		int fd;
      		unsigned long *cover;
      
      		system("mount -t debugfs none /sys/kernel/debug");
      		fd = open("/sys/kernel/debug/kcov", O_RDWR);
      		ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE);
      		cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
      				PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      		munmap(cover, COVER_SIZE * sizeof(unsigned long));
      		cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
      				PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
      		memset(cover, 0, COVER_SIZE * sizeof(unsigned long));
      		ftruncate(fd, 3UL << 20);
      		return 0;
      	}
      
      This can be fixed by assigning anonymous VMAs their own vm_ops and not
      relying on it being NULL.
      
      If ->mmap() failed to set ->vm_ops, mmap_region() will set it to
      dummy_vm_ops.  This way we will have non-NULL ->vm_ops for all VMAs.
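
      A standalone mock of that scheme (kernel names kept, everything else
      simplified): once every VMA carries non-NULL ->vm_ops, a NULL value can
      only mean "deliberately anonymous":

      	#include <stdbool.h>
      	#include <stddef.h>
      	#include <stdio.h>

      	struct vm_operations_struct { int unused; };
      	static const struct vm_operations_struct dummy_vm_ops = { 0 };

      	struct vm_area_struct { const struct vm_operations_struct *vm_ops; };

      	static bool vma_is_anonymous(const struct vm_area_struct *vma)
      	{
      		return vma->vm_ops == NULL;
      	}

      	int main(void)
      	{
      		/* a file vma whose ->mmap() forgot to set vm_ops gets the
      		 * dummy ops instead of being misdetected as anonymous */
      		struct vm_area_struct file_vma = { .vm_ops = &dummy_vm_ops };
      		struct vm_area_struct anon_vma = { .vm_ops = NULL };

      		printf("%d %d\n", vma_is_anonymous(&file_vma),
      		       vma_is_anonymous(&anon_vma));	/* 0 1 */
      		return 0;
      	}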
      
      Link: http://lkml.kernel.org/r/20180724121139.62570-4-kirill.shutemov@linux.intel.com
      
      
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Reported-by: <syzbot+3f84280d52be9b7083cc@syzkaller.appspotmail.com>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bfd40eaf
  8. 21 Jul, 2018 3 commits
    • mm: make vm_area_alloc() initialize core fields · 490fc053
      Linus Torvalds authored
      
      
      Like vm_area_dup(), it initializes the anon_vma_chain head, and the
      basic mm pointer.
      
      The rest of the fields end up being different for different users,
      although the plan is to also initialize the 'vm_ops' field to a dummy
      entry.
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      490fc053
    • mm: make vm_area_dup() actually copy the old vma data · 95faf699
      Linus Torvalds authored
      
      
      .. and re-initialize the anon_vma_chain head.
      
      This removes some boiler-plate from the users, and also makes it clear
      why it didn't need to use the 'zalloc()' version.
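
      A standalone sketch of the resulting helper shape (stand-in types and
      plain malloc instead of the kernel's kmem_cache; the point is only the
      copy-then-reinit order):

      	#include <stdlib.h>

      	struct list_head { struct list_head *next, *prev; };
      	struct vm_area_struct {
      		unsigned long vm_start, vm_end;
      		struct list_head anon_vma_chain;
      	};

      	static struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
      	{
      		struct vm_area_struct *new = malloc(sizeof(*new));

      		if (new) {
      			*new = *orig;	/* copy the old vma data wholesale */
      			/* ... then re-initialize the anon_vma_chain head */
      			new->anon_vma_chain.next = &new->anon_vma_chain;
      			new->anon_vma_chain.prev = &new->anon_vma_chain;
      		}
      		return new;
      	}

      	int main(void)
      	{
      		struct vm_area_struct vma = { .vm_start = 0x1000,
      					      .vm_end = 0x2000 };
      		free(vm_area_dup(&vma));
      		return 0;
      	}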
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      95faf699
    • mm: use helper functions for allocating and freeing vm_area structs · 3928d4f5
      Linus Torvalds authored
      
      
      The vm_area_struct is one of the most fundamental memory management
      objects, but the management of it is entirely open-coded everywhere,
      ranging from allocation and freeing (using kmem_cache_[z]alloc and
      kmem_cache_free) to initializing all the fields.
      
      We want to unify this in order to end up having some unified
      initialization of the vmas, and the first step to this is to at least
      have basic allocation functions.
      
      Right now those functions are literally just wrappers around the
      kmem_cache_*() calls.  This is a purely mechanical conversion:
      
          # new vma:
          kmem_cache_zalloc(vm_area_cachep, GFP_KERNEL) -> vm_area_alloc()
      
          # copy old vma
          kmem_cache_alloc(vm_area_cachep, GFP_KERNEL) -> vm_area_dup(old)
      
          # free vma
          kmem_cache_free(vm_area_cachep, vma) -> vm_area_free(vma)
      
      to the point where the old vma passed in to the vm_area_dup() function
      isn't even used yet (because I've left all the old manual initialization
      alone).
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3928d4f5
  9. 14 Jul, 2018 1 commit
    • mm: do not bug_on on incorrect length in __mm_populate() · bb177a73
      Michal Hocko authored
      syzbot has noticed that a specially crafted library can easily hit
      VM_BUG_ON in __mm_populate
      
        kernel BUG at mm/gup.c:1242!
        invalid opcode: 0000 [#1] SMP
        CPU: 2 PID: 9667 Comm: a.out Not tainted 4.18.0-rc3 #644
        Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/19/2017
        RIP: 0010:__mm_populate+0x1e2/0x1f0
        Code: 55 d0 65 48 33 14 25 28 00 00 00 89 d8 75 21 48 83 c4 20 5b 41 5c 41 5d 41 5e 41 5f 5d c3 e8 75 18 f1 ff 0f 0b e8 6e 18 f1 ff <0f> 0b 31 db eb c9 e8 93 06 e0 ff 0f 1f 00 55 48 89 e5 53 48 89 fb
        Call Trace:
           vm_brk_flags+0xc3/0x100
           vm_brk+0x1f/0x30
           load_elf_library+0x281/0x2e0
           __ia32_sys_uselib+0x170/0x1e0
           do_fast_syscall_32+0xca/0x420
           entry_SYSENTER_compat+0x70/0x7f
      
      The reason is that the length of the new brk is not page aligned when we
      try to populate it.  There is no reason to BUG on that though.
      do_brk_flags already aligns the length properly so the mapping is
      expanded as it should.  All we need is to tell mm_populate about it.
      Besides that there is absolutely no reason to BUG_ON in the first
      place.  The worst thing that could happen is that the last page wouldn't
      get populated and that is far from putting the system into an
      inconsistent state.
      
      Fix the issue by moving the length sanitization code from do_brk_flags
      up to vm_brk_flags.  The only other caller of do_brk_flags is the brk
      syscall entry, and it makes sure to provide the proper length, so there
      is no need for sanitization and we can use do_brk_flags without it.
      
      Also remove the bogus BUG_ONs.
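
      The moved sanitization is just an align-and-check; a standalone sketch
      (PAGE_ALIGN spelled out; the kernel returns -ENOMEM where this returns
      an error):

      	#include <stdio.h>

      	#define PAGE_SIZE	4096UL
      	#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

      	int main(void)
      	{
      		unsigned long request = 0x1234;	/* not page aligned */
      		unsigned long len = PAGE_ALIGN(request);

      		if (len < request)	/* wrapped: absurdly large request */
      			return 1;	/* -ENOMEM in the kernel */
      		if (len == 0)
      			return 0;	/* nothing to do */
      		printf("request=%#lx sanitized to len=%#lx\n", request, len);
      		return 0;
      	}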
      
      [osalvador@techadventures.net: fix up vm_brk_flags s@request@len@]
      Link: http://lkml.kernel.org/r/20180706090217.GI32658@dhcp22.suse.cz
      
      
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Reported-by: syzbot <syzbot+5dcb560fe12aa5091c06@syzkaller.appspotmail.com>
      Tested-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Reviewed-by: Oscar Salvador <osalvador@suse.de>
      Cc: Zi Yan <zi.yan@cs.rutgers.edu>
      Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Michael S. Tsirkin <mst@redhat.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: "Huang, Ying" <ying.huang@intel.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bb177a73
  10. 19 May, 2018 1 commit
    • mmap: relax file size limit for regular files · 423913ad
      Linus Torvalds authored
      Commit be83bbf8 ("mmap: introduce sane default mmap limits") was
      introduced to catch problems in various ad-hoc character device drivers
      doing mmap and getting the size limits wrong.  In the process, it used
      "known good" limits for the normal cases of mapping regular files and
      block device drivers.
      
      It turns out that the "s_maxbytes" limit was less "known good" than I
      thought.  In particular, /proc doesn't set it, but exposes one regular
      file to mmap: /proc/vmcore.  As a result, that file got limited to the
      default MAX_INT s_maxbytes value.
      
      This went unnoticed for a while, because apparently the only thing that
      needs it is the s390 kernel zfcpdump, but there might be other tools
      that use this too.
      
      Vasily suggested just changing s_maxbytes for all of /proc, which isn't
      wrong, but makes me nervous at this stage.  So instead, just make the
      new mmap limit always be MAX_LFS_FILESIZE for regular files, which won't
      affect anything else.  It wasn't the regular file case I was worried
      about.
      
      I'd really prefer for maxsize to have been per-inode, but that is not
      how things are today.
      
      Fixes: be83bbf8 ("mmap: introduce sane default mmap limits")
      Reported-by: Vasily Gorbik <gor@linux.ibm.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      423913ad
  11. 12 May, 2018 1 commit
    • mm, oom: fix concurrent munlock and oom reaper unmap, v3 · 27ae357f
      David Rientjes authored
      Since exit_mmap() is done without the protection of mm->mmap_sem, it is
      possible for the oom reaper to concurrently operate on an mm until
      MMF_OOM_SKIP is set.
      
      This allows munlock_vma_pages_all() to concurrently run while the oom
      reaper is operating on a vma.  Since munlock_vma_pages_range() depends
      on clearing VM_LOCKED from vm_flags before actually doing the munlock to
      determine if any other vmas are locking the same memory, the check for
      VM_LOCKED in the oom reaper is racy.
      
      This is especially noticeable on architectures such as powerpc where
      clearing a huge pmd requires serialize_against_pte_lookup().  If the pmd
      is zapped by the oom reaper during follow_page_mask() after the check
      for pmd_none() is bypassed, this ends up dereferencing a NULL ptl or
      causing a kernel oops.
      
      Fix this by manually freeing all possible memory from the mm before
      doing the munlock and then setting MMF_OOM_SKIP.  The oom reaper can not
      run on the mm anymore so the munlock is safe to do in exit_mmap().  It
      also matches the logic that the oom reaper currently uses for
      determining when to set MMF_OOM_SKIP itself, so there's no new risk of
      excessive oom killing.
      
      This fixes CVE-2018-1000200.
      
      Link: http://lkml.kernel.org/r/alpine.DEB.2.21.1804241526320.238665@chino.kir.corp.google.com
      Fixes: 21292580 ("mm: oom: let oom_reap_task and exit_mmap run concurrently")
      Signed-off-by: David Rientjes <rientjes@google.com>
      Suggested-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: <stable@vger.kernel.org>	[4.14+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      27ae357f
  12. 11 May, 2018 1 commit
    • mmap: introduce sane default mmap limits · be83bbf8
      Linus Torvalds authored
      
      
      The internal VM "mmap()" interfaces are based on the mmap target doing
      everything using page indexes rather than byte offsets, because
      traditionally (ie 32-bit) we had the situation that the byte offset
      didn't fit in a register.  So while the mmap virtual address was limited
      by the word size of the architecture, the backing store was not.
      
      So we're basically passing "pgoff" around as a page index, in order to
      be able to describe backing store locations that are much bigger than
      the word size (think files larger than 4GB etc).
      
      But while this all makes a ton of sense conceptually, we've been dogged
      by various drivers that don't really understand this, and internally
      work with byte offsets, and then try to work with the page index by
      turning it into a byte offset with "pgoff << PAGE_SHIFT".
      
      Which obviously can overflow.
      
      Adding the size of the mapping to it to get the byte offset of the end
      of the backing store just exacerbates the problem, and if you then use
      this overflow-prone value to check various limits of your device driver
      mmap capability, you're just setting yourself up for problems.
      
      The correct thing for drivers to do is to do their limit math in page
      indices, the way the interface is designed.  Because the generic mmap
      code _does_ test that the index doesn't overflow, since that's what the
      mmap code really cares about.
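
      A standalone demonstration of how the byte math goes wrong on 32-bit
      (uint32_t standing in for a 32-bit unsigned long):

      	#include <stdint.h>
      	#include <stdio.h>

      	#define PAGE_SHIFT 12

      	int main(void)
      	{
      		uint32_t pgoff = 1u << 22;	/* page index of a 16GB offset */
      		uint32_t len   = 1u << 20;	/* a 1MB mapping */

      		/* buggy driver-style byte math: pgoff << PAGE_SHIFT wraps
      		 * to 0, so the end offset looks like a harmless 1MB */
      		uint32_t end_bytes = (pgoff << PAGE_SHIFT) + len;
      		printf("byte math: end = %#x (wrapped)\n", end_bytes);

      		/* limit math in page indices, as the interface intends;
      		 * the page count stays small, so nothing wraps here */
      		uint32_t end_pages = pgoff + (len >> PAGE_SHIFT);
      		printf("page math: end = %#x pages\n", end_pages);
      		return 0;
      	}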
      
      HOWEVER.
      
      Finding and fixing various random drivers is a sisyphean task, so let's
      just see if we can just make the core mmap() code do the limiting for
      us.  Realistically, the only "big" backing stores we need to care about
      are regular files and block devices, both of which are known to do this
      properly, and which have nice well-defined limits for how much data they
      can access.
      
      So let's special-case just those two known cases, and then limit other
      random mmap users to a backing store that still fits in "unsigned long".
      Realistically, that's not much of a limit at all on 64-bit, and on
      32-bit architectures the only worry might be the GPU drivers, which can
      have big physical address spaces.
      
      To make it possible for drivers like that to say that they are 64-bit
      clean, this patch does repurpose the "FMODE_UNSIGNED_OFFSET" bit in the
      file flags to allow drivers to mark their file descriptors as safe in
      the full 64-bit mmap address space.
      
      [ The timing for doing this is less than optimal, and this should really
        go in a merge window. But realistically, this needs wide testing more
        than it needs anything else, and being main-line is the only way to do
        that.
      
        So the earlier the better, even if it's outside the proper development
        cycle        - Linus ]
      
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Willy Tarreau <w@1wt.eu>
      Cc: Dave Airlie <airlied@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      be83bbf8
  21. 25 Apr, 2018 1 commit
    • x86/pti: Filter at vma->vm_page_prot population · 316d097c
      Dave Hansen authored
      commit ce9962bf7e22bb3891655c349faff618922d4a73
      
      0day reported warnings at boot on 32-bit systems without NX support:
      
      attempted to set unsupported pgprot: 8000000000000025 bits: 8000000000000000 supported: 7fffffffffffffff
      WARNING: CPU: 0 PID: 1 at
      arch/x86/include/asm/pgtable.h:540 handle_mm_fault+0xfc1/0xfe0:
       check_pgprot at arch/x86/include/asm/pgtable.h:535
       (inlined by) pfn_pte at arch/x86/include/asm/pgtable.h:549
       (inlined by) do_anonymous_page at mm/memory.c:3169
       (inlined by) handle_pte_fault at mm/memory.c:3961
       (inlined by) __handle_mm_fault at mm/memory.c:4087
       (inlined by) handle_mm_fault at mm/memory.c:4124
      
      The problem is that, due to the recent commit which removed
      auto-massaging of page protections, filtering of page permissions at
      PTE creation time is no longer done, so vma->vm_page_prot is passed
      unfiltered to PTE creation.
      
      Filter the page protections before they are installed in vma->vm_page_prot.
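
      The shape of the fix, as a simplified sketch (the real patch routes
      this through an arch hook called from vm_get_page_prot(); the masking
      details may differ):

      	/* x86: strip pgprot bits this CPU cannot represent (e.g. _PAGE_NX
      	 * without NX) once, when vm_page_prot is populated, instead of at
      	 * every PTE creation. */
      	static inline pgprot_t arch_filter_pgprot(pgprot_t prot)
      	{
      		return __pgprot(pgprot_val(prot) & __supported_pte_mask);
      	}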
      
      Fixes: fb43d6cb ("x86/mm: Do not auto-massage page protections")
      Reported-by: Fengguang Wu <fengguang.wu@intel.com>
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Kees Cook <keescook@google.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: linux-mm@kvack.org
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Nadav Amit <namit@vmware.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Link: https://lkml.kernel.org/r/20180420222028.99D72858@viggo.jf.intel.com
      
      316d097c
  22. 16 Apr, 2018 1 commit
  23. 11 Apr, 2018 1 commit
    • mm: introduce MAP_FIXED_NOREPLACE · a4ff8e86
      Michal Hocko authored
      Patch series "mm: introduce MAP_FIXED_NOREPLACE", v2.
      
      This started as a follow-up discussion [3][4] about a runtime failure
      caused by a hardening patch [5] which removes MAP_FIXED from the ELF
      loader, because MAP_FIXED is inherently dangerous: it might silently
      clobber an existing underlying mapping (e.g.  the stack).  The reason
      for the failure is that some architectures enforce an alignment for
      the given address hint when MAP_FIXED is not used (e.g.  for shared or
      file-backed mappings).
      
      One way around this would be to exclude those archs which do alignment
      tricks from the hardening [6].  The patch is really trivial, but it
      has been objected, rightfully so, that this screams for a more generic
      solution.  We basically want a non-destructive MAP_FIXED.
      
      The first patch introduces MAP_FIXED_NOREPLACE, which enforces the
      given address but, unlike MAP_FIXED, fails with EEXIST if the given
      range conflicts with an existing one.  The flag is introduced as a
      completely new one rather than as a MAP_FIXED extension because of
      backward compatibility: we really want never-clobber semantics even on
      older kernels which do not recognize the flag.  Unfortunately mmap
      sucks wrt flags evaluation because we do not EINVAL on unknown flags.
      On those kernels the traditional hint-based semantics would simply
      apply, so the caller can still get a different address (which sucks)
      but at least not silently corrupt an existing mapping.  I do not see a
      good way around that, short of not exposing the new semantics to
      userspace at all.
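
      For callers this means the flag has to be probed for.  A sketch of
      the resulting userspace pattern (illustrative only; 'hint' and 'len'
      are placeholders):

      	void *p = mmap(hint, len, PROT_READ | PROT_WRITE,
      		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE, -1, 0);
      	if (p == MAP_FAILED) {
      		/* new kernel: errno == EEXIST means the range is taken */
      	} else if (p != hint) {
      		/* old kernel fell back to plain hint semantics: back out */
      		munmap(p, len);
      		p = MAP_FAILED;
      	}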
      
      It seems there are users who would like to have something like that.
      Jemalloc was mentioned by Michael Ellerman [7].
      
      Florian Weimer has mentioned the following:
      : glibc ld.so currently maps DSOs without hints.  This means that the kernel
      : will map them right next to each other, and the offsets between them are completely
      : predictable.  We would like to change that and supply a random address in a
      : window of the address space.  If there is a conflict, we do not want the
      : kernel to pick a non-random address. Instead, we would try again with a
      : random address.
      
      John Hubbard mentioned a CUDA example:
      : a) Searches /proc/<pid>/maps for a "suitable" region of available
      : VA space.  "Suitable" generally means it has to have a base address
      : within a certain limited range (a particular device model might
      : have odd limitations, for example), it has to be large enough, and
      : alignment has to be large enough (again, various devices may have
      : constraints that lead us to do this).
      :
      : This is of course subject to races with other threads in the process.
      :
      : Let's say it finds a region starting at va.
      :
      : b) Next it does:
      :     p = mmap(va, ...)
      :
      : *without* setting MAP_FIXED, of course (so va is just a hint), to
      : attempt to safely reserve that region. If p != va, then in most cases,
      : this is a failure (almost certainly due to another thread getting a
      : mapping from that region before we did), and so this layer now has to
      : call munmap(), before returning a "failure: retry" to upper layers.
      :
      :     IMPROVEMENT: --> if instead, we could call this:
      :
      :             p = mmap(va, ... MAP_FIXED_NOREPLACE ...)
      :
      :         , then we could skip the munmap() call upon failure. This
      :         is a small thing, but it is useful here. (Thanks to Piotr
      :         Jaroszynski and Mark Hairgrove for helping me get that detail
      :         exactly right, btw.)
      :
      : c) After that, CUDA suballocates from p, via:
      :
      :      q = mmap(sub_region_start, ... MAP_FIXED ...)
      :
      : Interestingly enough, "freeing" is also done via MAP_FIXED, and
      : setting PROT_NONE to the subregion. Anyway, I just included (c) for
      : general interest.
      
      Atomic address range probing in multithreaded programs in general
      sounds like an interesting thing to me.
      
      The second patch simply replaces the MAP_FIXED use in the ELF loader
      with MAP_FIXED_NOREPLACE.  I believe other places which rely on
      MAP_FIXED should follow.  Actually, real MAP_FIXED usages should be
      documented properly and should be more of an exception.
      
      [1] http://lkml.kernel.org/r/20171116101900.13621-1-mhocko@kernel.org
      [2] http://lkml.kernel.org/r/20171129144219.22867-1-mhocko@kernel.org
      [3] http://lkml.kernel.org/r/20171107162217.382cd754@canb.auug.org.au
      [4] http://lkml.kernel.org/r/1510048229.12079.7.camel@abdul.in.ibm.com
      [5] http://lkml.kernel.org/r/20171023082608.6167-1-mhocko@kernel.org
      [6] http://lkml.kernel.org/r/20171113094203.aofz2e7kueitk55y@dhcp22.suse.cz
      [7] http://lkml.kernel.org/r/87efp1w7vy.fsf@concordia.ellerman.id.au
      
      This patch (of 2):
      
      MAP_FIXED is used quite often to enforce mapping at a particular
      range.  The main problem with this flag, however, is that it is
      inherently dangerous because it unmaps existing mappings covered by
      the requested range.  This can cause silent memory corruption, in some
      cases with serious security implications.  While the current semantics
      may be genuinely desirable in many cases, there are others which want
      to enforce the given range but would rather see a failure than silent
      memory corruption on a clashing range.  Please note that there is no
      guarantee that a given range is obeyed by mmap even when it is free -
      e.g.  arch-specific code is allowed to apply an alignment.
      
      Introduce a new MAP_FIXED_NOREPLACE flag for mmap to achieve this
      behavior.  It has the same semantics as MAP_FIXED wrt.  the given
      address request, with the single exception that it fails with EEXIST
      if the requested address is already covered by an existing mapping.
      We still rely on get_unmapped_area to handle all the arch-specific
      MAP_FIXED treatment, and check for a conflicting vma after it returns,
      as sketched below.
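
      The conflict check itself can be sketched as follows (simplified; the
      surrounding error handling is omitted):

      	if (flags & MAP_FIXED_NOREPLACE) {
      		struct vm_area_struct *vma = find_vma(mm, addr);

      		/* get_unmapped_area() honoured the fixed address; refuse
      		 * to silently clobber an existing mapping. */
      		if (vma && vma->vm_start < addr + len)
      			return -EEXIST;
      	}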
      
      The flag is introduced as a completely new one rather than as a
      MAP_FIXED extension because of backward compatibility: we really want
      never-clobber semantics even on older kernels which do not recognize
      the flag.  Unfortunately mmap sucks wrt.  flags evaluation because we
      do not EINVAL on unknown flags.  On those kernels the traditional
      hint-based semantics simply apply, so the caller can still get a
      different address (which sucks) but at least not silently corrupt an
      existing mapping.  I do not see a good way around that.
      
      [mpe@ellerman.id.au: fix whitespace]
      [fail on clashing range with EEXIST as per Florian Weimer]
      [set MAP_FIXED before round_hint_to_min as per Khalid Aziz]
      Link: http://lkml.kernel.org/r/20171213092550.2774-2-mhocko@kernel.org
      
      
      Reviewed-by: Khalid Aziz <khalid.aziz@oracle.com>
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Khalid Aziz <khalid.aziz@oracle.com>
      Cc: Russell King - ARM Linux <linux@armlinux.org.uk>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Florian Weimer <fweimer@redhat.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
      Cc: Joel Stanley <joel@jms.id.au>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Jason Evans <jasone@google.com>
      Cc: David Goldblatt <davidtgoldblatt@gmail.com>
      Cc: Edward Tomasz Napierała <trasz@FreeBSD.org>
      Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a4ff8e86
  24. 06 Apr, 2018 1 commit
  25. 02 Apr, 2018 1 commit
  26. 15 Dec, 2017 1 commit
    • mm, oom_reaper: fix memory corruption · 4837fe37
      Michal Hocko authored
      David Rientjes reported the following memory corruption while the oom
      reaper tries to unmap the victim's address space:
      
        BUG: Bad page map in process oom_reaper  pte:6353826300000000 pmd:00000000
        addr:00007f50cab1d000 vm_flags:08100073 anon_vma:ffff9eea335603f0 mapping:          (null) index:7f50cab1d
        file:          (null) fault:          (null) mmap:          (null) readpage:          (null)
        CPU: 2 PID: 1001 Comm: oom_reaper
        Call Trace:
           unmap_page_range+0x1068/0x1130
           __oom_reap_task_mm+0xd5/0x16b
           oom_reaper+0xff/0x14c
           kthread+0xc1/0xe0
      
      Tetsuo Handa noticed that the synchronization inside exit_mmap is
      insufficient: we only synchronize with the oom reaper if
      tsk_is_oom_victim is true, which is not the case when the final
      __mmput is called from a context other than the oom victim's exit
      path.  This can trivially happen from the context of any task which
      has grabbed an mm reference (e.g.  to read a /proc/<pid>/ file which
      requires the mm, etc.).
      
      The race would look like this:
      
        oom_reaper		oom_victim		task
      						mmget_not_zero
      			do_exit
      			  mmput
        __oom_reap_task_mm				mmput
        						  __mmput
      						    exit_mmap
      						      remove_vma
          unmap_page_range
      
      Fix this issue by providing a new mm_is_oom_victim() helper which
      operates on the mm struct rather than on a task.  Any context which
      operates on a remote mm struct should use this helper in place of
      tsk_is_oom_victim.  The flag is set in mark_oom_victim and never
      cleared, so it is stable in the exit_mmap path.
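
      A sketch of the helper (flag and helper names per the patch; treat
      the details as approximate).  Because the bit lives in mm->flags
      rather than on a task, any holder of an mm reference can test it:

      	static inline bool mm_is_oom_victim(struct mm_struct *mm)
      	{
      		/* Set in mark_oom_victim() and never cleared, so stable
      		 * for the whole exit_mmap() path. */
      		return test_bit(MMF_OOM_VICTIM, &mm->flags);
      	}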
      
      Debugged by Tetsuo Handa.
      
      Link: http://lkml.kernel.org/r/20171210095130.17110-1-mhocko@kernel.org
      Fixes: 21292580 ("mm: oom: let oom_reap_task and exit_mmap run concurrently")
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Reported-by: David Rientjes <rientjes@google.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Andrea Arcangeli <andrea@kernel.org>
      Cc: <stable@vger.kernel.org>	[4.14]
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      4837fe37
  27. 30 Nov, 2017 1 commit
  28. 03 Nov, 2017 1 commit
    • mm: introduce MAP_SHARED_VALIDATE, a mechanism to safely define new mmap flags · 1c972597
      Dan Williams authored
      
      
      The mmap(2) syscall suffers from the ABI anti-pattern of not validating
      unknown flags. However, proposals like MAP_SYNC need a mechanism to
      define new behavior that is known to fail on older kernels without the
      support. Define a new MAP_SHARED_VALIDATE flag pattern that is
      guaranteed to fail on all legacy mmap implementations.
      
      It is worth noting that the original proposal was for a standalone
      MAP_VALIDATE flag.  However, when that could not be supported by all
      archs, Linus observed:
      
          I see why you *think* you want a bitmap. You think you want
          a bitmap because you want to make MAP_VALIDATE be part of MAP_SYNC
          etc, so that people can do
      
          ret = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED
      		    | MAP_SYNC, fd, 0);
      
          and "know" that MAP_SYNC actually takes.
      
          And I'm saying that whole wish is bogus. You're fundamentally
          depending on special semantics, just make it explicit. It's already
          not portable, so don't try to make it so.
      
          Rename that MAP_VALIDATE as MAP_SHARED_VALIDATE, make it have a value
          of 0x3, and make people do
      
          ret = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED_VALIDATE
      		    | MAP_SYNC, fd, 0);
      
          and then the kernel side is easier too (none of that random garbage
          playing games with looking at the "MAP_VALIDATE bit", but just another
          case statement in that map type thing.)
      
          Boom. Done.
      
      Similar to ->fallocate(), we also want the ability to validate the
      support for new flags on a per ->mmap() 'struct file_operations'
      instance basis.  Towards that end, arrange for flags to be generically
      validated against an mmap_supported_flags mask exported by 'struct
      file_operations'.  By default all existing flags are implicitly
      supported, but new flags require MAP_SHARED_VALIDATE and a
      per-instance opt-in.
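
      From userspace, the intended probing pattern then looks roughly like
      this (illustrative; a kernel that knows MAP_SHARED_VALIDATE but whose
      ->mmap() instance does not support the new flag is expected to fail
      with EOPNOTSUPP, while a legacy kernel rejects the unknown 0x3 map
      type with EINVAL):

      	/* Ask for MAP_SYNC and *know* whether we actually got it */
      	void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
      		       MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
      	if (p == MAP_FAILED && (errno == EOPNOTSUPP || errno == EINVAL))
      		/* unsupported here: fall back to plain MAP_SHARED */
      		p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);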
      
      Cc: Jan Kara <jack@suse.cz>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Suggested-by: Christoph Hellwig <hch@lst.de>
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      1c972597
  29. 09 Sep, 2017 1 commit
  30. 07 Sep, 2017 2 commits