1. 30 Jan, 2021 4 commits
    • kasan: don't run tests in async mode · a546d2fd
      Andrey Konovalov authored and Vincenzo Frascino committed
      
      
      Asynchronous KASAN mode doesn't guarantee that a tag fault will be
      detected immediately, which causes the tests to fail. Forbid running
      the tests in asynchronous mode.
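      
      A minimal sketch of what such a guard might look like in the KUnit
      suite init (the flag and helper names here are assumptions for
      illustration, not the exact patch):
      
      	static int kasan_test_init(struct kunit *test)
      	{
      		/*
      		 * Async mode only records that a fault happened and reports it
      		 * later, so tests expecting an immediate report would fail.
      		 */
      		if (kasan_flag_async) {
      			kunit_err(test, "can't run KASAN tests in async mode\n");
      			return -1;
      		}
      
      		return 0;
      	}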
      
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      a546d2fd
    • arm64: mte: Enable async tag check fault · 8c24f249
      Vincenzo Frascino authored
      
      
      MTE provides a mode that asynchronously updates the TFSR_EL1 register
      when a tag check exception is detected.
      
      To take advantage of this mode the kernel has to verify the status of
      the register at:
        1. Context switching
        2. Return to user/EL0 (Not required in entry from EL0 since the kernel
        did not run)
        3. Kernel entry from EL1
        4. Kernel exit to EL1
      
      If the register is non-zero, a trace is reported.
      
      Add the required features for EL1 detection and reporting.
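      
      A sketch of the kind of check performed on these paths (register and
      helper names follow the description above; treat this as illustrative,
      not the exact patch):
      
      	static inline void mte_check_tfsr_el1(void)
      	{
      		u64 tfsr_el1 = read_sysreg_s(SYS_TFSR_EL1);
      
      		/* TF1 set means an asynchronous tag check fault occurred at EL1. */
      		if (unlikely(tfsr_el1 & SYS_TFSR_EL1_TF1)) {
      			write_sysreg_s(0, SYS_TFSR_EL1);
      			kasan_report_async();
      		}
      	}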
      
      Note: the ITFSB bit is set in the SCTLR_EL1 register, which guarantees
      that the indirect writes to TFSR_EL1 are synchronized at exception
      entry to EL1. On the context switch path the synchronization is
      guaranteed by the dsb() in __switch_to().
      The dsb(nsh) in mte_check_tfsr_exit() is provisional pending
      confirmation by the architects.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Andrey Konovalov <andreyknvl@google.com>
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      8c24f249
    • kasan: Add report for async mode · c218e40d
      Vincenzo Frascino authored
      
      
      KASAN provides an asynchronous mode of execution.
      
      Add reporting functionality for this mode.
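      
      In async mode there is no faulting access to describe, so the report is
      necessarily less detailed than a synchronous one. A rough sketch of such
      a reporting helper (naming assumed, not the exact patch):
      
      	void kasan_report_async(void)
      	{
      		pr_err("BUG: KASAN: invalid-access\n");
      		pr_err("Asynchronous mode enabled: no access details available\n");
      		dump_stack();
      	}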
      
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Konovalov <andreyknvl@google.com>
      Reviewed-by: Andrey Konovalov <andreyknvl@google.com>
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      c218e40d
    • kasan: Add KASAN mode kernel parameter · 0fee07e2
      Vincenzo Frascino authored
      
      
      Architectures that support KASAN_HW_TAGS can provide a sync or async mode
      of execution. On MTE-enabled arm64 hardware, for example, these correspond
      to the synchronous and asynchronous tag checking modes of execution.
      In synchronous mode, an exception is triggered if a tag check fault occurs.
      In asynchronous mode, if a tag check fault occurs, the TFSR_EL1 register is
      updated asynchronously and the kernel checks the corresponding bits
      periodically.
      
      KASAN requires a specific kernel command line parameter to make use of this
      hardware feature.
      
      Add KASAN HW execution mode kernel command line parameter.
      
      Note: This patch adds the kasan.mode kernel parameter and the
      sync/async kernel command line options to enable the described features.
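      
      A sketch of how such an early parameter might be parsed (variable and
      option names are assumptions based on the description above):
      
      	enum kasan_arg_mode {
      		KASAN_ARG_MODE_DEFAULT,
      		KASAN_ARG_MODE_SYNC,
      		KASAN_ARG_MODE_ASYNC,
      	};
      	static enum kasan_arg_mode kasan_arg_mode __ro_after_init;
      
      	static int __init early_kasan_mode(char *arg)
      	{
      		if (!arg)
      			return -EINVAL;
      
      		if (!strcmp(arg, "sync"))
      			kasan_arg_mode = KASAN_ARG_MODE_SYNC;
      		else if (!strcmp(arg, "async"))
      			kasan_arg_mode = KASAN_ARG_MODE_ASYNC;
      		else
      			return -EINVAL;
      
      		return 0;
      	}
      	early_param("kasan.mode", early_kasan_mode);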
      
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Konovalov <andreyknvl@google.com>
      Reviewed-by: Andrey Konovalov <andreyknvl@google.com>
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      [ Add a new var instead of exposing kasan_arg_mode to be consistent with
        flags for other command line arguments. ]
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      0fee07e2
  2. 29 Jan, 2021 36 commits
    • arm64: mte: Add asynchronous mode support · 76efb90d
      Vincenzo Frascino authored
      
      
      MTE provides an asynchronous mode for detecting tag exceptions. In
      particular, instead of triggering a fault, the arm64 core updates a
      register which the kernel checks after the asynchronous tag check
      fault has occurred.
      
      Add support for MTE asynchronous mode.
      
      The exception handling mechanism will be added with a future patch.
      
      Note: KASAN HW activates async mode via the kasan.mode kernel parameter.
      The default mode is synchronous.
      The code that verifies the status of TFSR_EL1 will be added in a
      future patch.
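      
      Conceptually, enabling the asynchronous mode amounts to selecting the
      asynchronous Tag Check Fault setting in SCTLR_EL1.TCF. A hedged sketch
      (constant and helper names are assumptions, not the exact patch):
      
      	void mte_enable_kernel_async(void)
      	{
      		/* Select asynchronous Tag Check Fault handling for EL1 accesses. */
      		sysreg_clear_set(sctlr_el1, SCTLR_ELx_TCF_MASK, SCTLR_ELx_TCF_ASYNC);
      		isb();
      	}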
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Andrey Konovalov <andreyknvl@google.com>
      Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
      76efb90d
    • secretmem: test: add basic selftest for memfd_secret(2) · c73144a8
      Mike Rapoport authored
      The test verifies that a file descriptor created with memfd_secret does
      not allow read/write operations, that secret memory mappings respect
      RLIMIT_MEMLOCK, and that remote accesses with process_vm_read() and
      ptrace() to the secret memory fail.
      
      Link: https://lkml.kernel.org/r/20210121122723.3446-12-rppt@kernel.org
      
      
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Elena Reshetova <elena.reshetova@intel.com>
      Cc: Hagen Paul Pfeifer <hagen@jauu.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Bottomley <jejb@linux.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Palmer Dabbelt <palmerdabbelt@google.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Cc: Roman Gushchin <guro@fb.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tycho Andersen <tycho@tycho.ws>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      c73144a8
    • arch, mm: wire up memfd_secret system call where relevant · 4d52887c
      Mike Rapoport authored
      Wire up memfd_secret system call on architectures that define
      ARCH_HAS_SET_DIRECT_MAP, namely arm64, risc-v and x86.
      
      Link: https://lkml.kernel.org/r/20210121122723.3446-11-rppt@kernel.org
      
      
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: Palmer Dabbelt <palmerdabbelt@google.com>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Elena Reshetova <elena.reshetova@intel.com>
      Cc: Hagen Paul Pfeifer <hagen@jauu.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Bottomley <jejb@linux.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Cc: Roman Gushchin <guro@fb.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tycho Andersen <tycho@tycho.ws>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      4d52887c
    • PM: hibernate: disable when there are active secretmem users · 6ef8f768
      Mike Rapoport authored
      It is unsafe to allow saving of secretmem areas to the hibernation
      snapshot as they would be visible after resume, which would essentially
      defeat the purpose of secret memory mappings.
      
      Prevent hibernation whenever there are active secret memory users.
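      
      One way to implement this, sketched with assumed names: keep a count of
      live secretmem users and have the hibernation path refuse to proceed
      while it is non-zero.
      
      	static atomic_t secretmem_users;
      
      	bool secretmem_active(void)
      	{
      		return !!atomic_read(&secretmem_users);
      	}
      
      	/* In the hibernation entry path: */
      	if (secretmem_active())
      		return -EBUSY;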
      
      Link: https://lkml.kernel.org/r/20210121122723.3446-10-rppt@kernel.org
      
      
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Elena Reshetova <elena.reshetova@intel.com>
      Cc: Hagen Paul Pfeifer <hagen@jauu.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Bottomley <jejb@linux.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Palmer Dabbelt <palmerdabbelt@google.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Cc: Roman Gushchin <guro@fb.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tycho Andersen <tycho@tycho.ws>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      6ef8f768
    • secretmem: add memcg accounting · 6465cb72
      Mike Rapoport authored
      Account memory consumed by secretmem to memcg.  The accounting is updated
      when the memory is actually allocated and freed.
      
      Link: https://lkml.kernel.org/r/20210121122723.3446-9-rppt@kernel.org
      
      
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: Roman Gushchin <guro@fb.com>
      Reviewed-by: Shakeel Butt <shakeelb@google.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Elena Reshetova <elena.reshetova@intel.com>
      Cc: Hagen Paul Pfeifer <hagen@jauu.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Bottomley <jejb@linux.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Palmer Dabbelt <palmerdabbelt@google.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tycho Andersen <tycho@tycho.ws>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      6465cb72
    • secretmem: use PMD-size pages to amortize direct map fragmentation · 34c1e040
      Mike Rapoport authored
      Removing a PAGE_SIZE page from the direct map every time such a page is
      allocated for a secret memory mapping will cause severe fragmentation of
      the direct map.  This fragmentation can be reduced by using PMD-size pages
      as a pool for small pages for secret memory mappings.
      
      Add a gen_pool per secretmem inode and lazily populate this pool with
      PMD-size pages.
      
      As pages allocated by secretmem become unmovable, use CMA to back large
      page caches so that the page allocator won't be surprised by a failing
      attempt to migrate these pages.
      
      The CMA area used by secretmem is controlled by the "secretmem=" kernel
      parameter.  This allows explicit control over the memory available for
      secretmem and provides an upper hard limit for secretmem consumption.
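      
      A rough sketch of refilling such a pool (helper and variable names are
      assumptions based on the description above, not the exact patch):
      
      	static int secretmem_pool_refill(struct gen_pool *pool)
      	{
      		struct page *page;
      		int err;
      
      		/* Take a PMD-size chunk from the secretmem CMA area. */
      		page = cma_alloc(secretmem_cma, 1 << PMD_PAGE_ORDER, PMD_PAGE_ORDER, false);
      		if (!page)
      			return -ENOMEM;
      
      		/* Drop the chunk from the direct map before handing it out. */
      		err = set_direct_map_invalid_noflush(page, 1 << PMD_PAGE_ORDER);
      		if (err)
      			goto err_release;
      
      		err = gen_pool_add(pool, (unsigned long)page_address(page),
      				   PMD_SIZE, NUMA_NO_NODE);
      		if (err)
      			goto err_restore;
      
      		return 0;
      
      	err_restore:
      		set_direct_map_default_noflush(page, 1 << PMD_PAGE_ORDER);
      	err_release:
      		cma_release(secretmem_cma, page, 1 << PMD_PAGE_ORDER);
      		return err;
      	}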
      
      Link: https://lkml.kernel.org/r/20210121122723.3446-8-rppt@kernel.org
      
      
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Elena Reshetova <elena.reshetova@intel.com>
      Cc: Hagen Paul Pfeifer <hagen@jauu.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Bottomley <jejb@linux.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Palmer Dabbelt <palmerdabbelt@google.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Cc: Roman Gushchin <guro@fb.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tycho Andersen <tycho@tycho.ws>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      34c1e040
    • mm: introduce memfd_secret system call to create "secret" memory areas · cc9327f3
      Mike Rapoport authored
      Introduce "memfd_secret" system call with the ability to create memory
      areas visible only in the context of the owning process and not mapped not
      only to other processes but in the kernel page tables as well.
      
      The user will create a file descriptor using the memfd_secret() system
      call. The memory areas created by mmap() calls from this file descriptor
      will be unmapped from the kernel direct map and they will be only mapped in
      the page table of the owning mm.
      
      The secret memory remains accessible in the process context using uaccess
      primitives, but it is not accessible using direct/linear map addresses.
      
      Functions in the follow_page()/get_user_page() family will refuse to return
      a page that belongs to the secret memory area.
      
      A page that was a part of the secret memory area is cleared when it is
      freed.
      
      The following example demonstrates creation of a secret mapping (error
      handling is omitted):
      
      	fd = memfd_secret(0);
      	ftruncate(fd, MAP_SIZE);
      	ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
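      
      A fuller, self-contained userspace sketch with error handling (the
      syscall number below is an assumption for illustration; use the value
      from your kernel headers):
      
      	#include <stdio.h>
      	#include <string.h>
      	#include <sys/mman.h>
      	#include <sys/syscall.h>
      	#include <unistd.h>
      
      	#ifndef __NR_memfd_secret
      	#define __NR_memfd_secret 447	/* assumed; check <asm/unistd.h> */
      	#endif
      
      	#define MAP_SIZE (4 * 4096UL)
      
      	int main(void)
      	{
      		int fd = syscall(__NR_memfd_secret, 0);
      		if (fd < 0) { perror("memfd_secret"); return 1; }
      
      		if (ftruncate(fd, MAP_SIZE) < 0) { perror("ftruncate"); return 1; }
      
      		char *ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      		if (ptr == MAP_FAILED) { perror("mmap"); return 1; }
      
      		strcpy(ptr, "secret");	/* only reachable through this mapping */
      		puts(ptr);
      
      		munmap(ptr, MAP_SIZE);
      		close(fd);
      		return 0;
      	}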
      
      Link: https://lkml.kernel.org/r/20210121122723.3446-7-rppt@kernel.org
      
      
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: Hagen Paul Pfeifer <hagen@jauu.net>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Elena Reshetova <elena.reshetova@intel.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Bottomley <jejb@linux.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Palmer Dabbelt <palmerdabbelt@google.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Cc: Roman Gushchin <guro@fb.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tycho Andersen <tycho@tycho.ws>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      cc9327f3
    • arm64: kfence: fix header inclusion · 5afa819b
      Arnd Bergmann authored
      Randconfig builds started warning about a missing function declaration
      after set_memory_valid() was moved to a new file:
      
      In file included from mm/kfence/core.c:26:
      arch/arm64/include/asm/kfence.h:17:2: error: implicit declaration of function 'set_memory_valid' [-Werror,-Wimplicit-function-declaration]
      
      Include the correct header again.
      
      Link: https://lkml.kernel.org/r/20210125125025.102381-1-arnd@kernel.org
      
      
      Fixes: 9e18ec3cfabd ("set_memory: allow querying whether set_direct_map_*() is actually enabled")
      Fixes: 204555ff8bd6 ("arm64, kfence: enable KFENCE for ARM64")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Marco Elver <elver@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Konovalov <andreyknvl@google.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      5afa819b
    • set_memory: allow querying whether set_direct_map_*() is actually enabled · 471f26f4
      Mike Rapoport authored
      On arm64, set_direct_map_*() functions may return 0 without actually
      changing the linear map.  This behaviour can be controlled using kernel
      parameters, so we need a way to determine at runtime whether calls to
      set_direct_map_invalid_noflush() and set_direct_map_default_noflush() have
      any effect.
      
      Extend the set_memory API with a can_set_direct_map() function that allows
      checking whether calling set_direct_map_*() will actually change the page
      table, replace several occurrences of open-coded checks in arm64 with the
      new function, and provide a generic stub for architectures that always
      modify page tables upon calls to the set_direct_map APIs.
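      
      On arm64 the question boils down to whether the linear map is forced to
      page granularity, since only then can individual pages be remapped. A
      sketch of the helper described above (arm64-flavoured; treat the exact
      conditions as an assumption):
      
      	bool can_set_direct_map(void)
      	{
      		return rodata_full || debug_pagealloc_enabled();
      	}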
      
      Link: https://lkml.kernel.org/r/20210121122723.3446-6-rppt@kernel.org
      
      
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Elena Reshetova <elena.reshetova@intel.com>
      Cc: Hagen Paul Pfeifer <hagen@jauu.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Bottomley <jejb@linux.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Palmer Dabbelt <palmerdabbelt@google.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Cc: Roman Gushchin <guro@fb.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tycho Andersen <tycho@tycho.ws>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      471f26f4
    • set_memory: allow set_direct_map_*_noflush() for multiple pages · 70a947ff
      Mike Rapoport authored
      The underlying implementations of set_direct_map_invalid_noflush() and
      set_direct_map_default_noflush() allow updating multiple contiguous pages
      at once.
      
      Add a numpages parameter to set_direct_map_*_noflush() to expose this
      ability through these APIs.
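      
      The resulting prototypes would look roughly like this (a sketch derived
      from the description above):
      
      	int set_direct_map_invalid_noflush(struct page *page, int numpages);
      	int set_direct_map_default_noflush(struct page *page, int numpages);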
      
      Link: https://lkml.kernel.org/r/20210121122723.3446-5-rppt@kernel.org
      
      
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>	[arm64]
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Elena Reshetova <elena.reshetova@intel.com>
      Cc: Hagen Paul Pfeifer <hagen@jauu.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Bottomley <jejb@linux.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Palmer Dabbelt <palmerdabbelt@google.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Cc: Roman Gushchin <guro@fb.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tycho Andersen <tycho@tycho.ws>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      70a947ff
    • riscv/Kconfig: make direct map manipulation options depend on MMU · 287b85d3
      Mike Rapoport authored
      The ARCH_HAS_SET_DIRECT_MAP and ARCH_HAS_SET_MEMORY configuration options
      have no meaning when CONFIG_MMU is disabled and there is no point in
      enabling them for the nommu case.
      
      Add an explicit dependency on MMU for these options.
      
      Link: https://lkml.kernel.org/r/20210121122723.3446-4-rppt@kernel.org
      
      
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Reported-by: kernel test robot <lkp@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      287b85d3
    • mmap: make mlock_future_check() global · 49aa347a
      Mike Rapoport authored
      It will be used by the upcoming secret memory implementation.
      
      Link: https://lkml.kernel.org/r/20210121122723.3446-3-rppt@kernel.org
      
      
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Elena Reshetova <elena.reshetova@intel.com>
      Cc: Hagen Paul Pfeifer <hagen@jauu.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Bottomley <jejb@linux.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Palmer Dabbelt <palmerdabbelt@google.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Cc: Roman Gushchin <guro@fb.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tycho Andersen <tycho@tycho.ws>
      Cc: Will Deacon <will@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      49aa347a
    • mm: add definition of PMD_PAGE_ORDER · 92bb909a
      Mike Rapoport authored
      Patch series "mm: introduce memfd_secret system call to create "secret" memory areas", v16.
      
      This is an implementation of "secret" mappings backed by a file
      descriptor.
      
      The file descriptor backing secret memory mappings is created using a
      dedicated memfd_secret system call.  The desired protection mode for the
      memory is configured using the flags parameter of the system call.  The
      mmap() of the file descriptor created with memfd_secret() will create a
      "secret" memory mapping.  The pages in that mapping will be marked as not
      present in the direct map and will be present only in the page table of
      the owning mm.
      
      Although normally Linux userspace mappings are protected from other users,
      such secret mappings are useful for environments where a hostile tenant is
      trying to trick the kernel into giving them access to other tenants'
      mappings.
      
      Additionally, in the future the secret mappings may be used as a means to
      protect guest memory in a virtual machine host.
      
      For demonstration of secret memory usage we've created a userspace library
      
      https://git.kernel.org/pub/scm/linux/kernel/git/jejb/secret-memory-preloader.git
      
      that does two things: first, it acts as a preloader for OpenSSL,
      redirecting all OPENSSL_malloc calls to secret memory so that any secret
      keys are automatically protected this way; second, it exposes the API to
      users who need it.  We anticipate that a lot of the use cases would be
      like the OpenSSL one: many toolkits that deal with secret keys already
      have special handling for the memory to try to give it greater
      protection, so this would simply be pluggable into the toolkits without
      any need for user application modification.
      
      Hiding secret memory mappings behind an anonymous file allows (ab)use of
      the page cache for tracking pages allocated for the "secret" mappings as
      well as using address_space_operations for e.g.  page migration callbacks.
      
      The anonymous file may be also used implicitly, like hugetlb files, to
      implement mmap(MAP_SECRET) and use the secret memory areas with "native"
      mm ABIs in the future.
      
      To limit fragmentation of the direct map to splitting only PUD-size pages,
      I've added an amortizing cache of PMD-size pages to each file descriptor
      that is used as an allocation pool for the secret memory areas.
      
      As the memory allocated by secretmem becomes unmovable, we use CMA to back
      large page caches so that the page allocator won't be surprised by a
      failing attempt to migrate these pages.
      
      This patch (of 11):
      
      The definition of PMD_PAGE_ORDER, denoting the number of base pages in the
      second-level leaf page, is already used by DAX and may be handy in other
      cases as well.
      
      Several architectures already have a definition of PMD_ORDER as the size of
      the second-level page table, so to avoid conflict with these definitions
      use the PMD_PAGE_ORDER name and update DAX respectively.
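      
      The definition itself is a one-liner; expressed in terms of the usual
      shift macros it would look like this (assumed form):
      
      	#define PMD_PAGE_ORDER	(PMD_SHIFT - PAGE_SHIFT)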
      
      Link: https://lkml.kernel.org/r/20210121122723.3446-1-rppt@kernel.org
      Link: https://lkml.kernel.org/r/20210121122723.3446-2-rppt@kernel.org
      
      
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Elena Reshetova <elena.reshetova@intel.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: James Bottomley <jejb@linux.ibm.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
      Cc: Roman Gushchin <guro@fb.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tycho Andersen <tycho@tycho.ws>
      Cc: Will Deacon <will@kernel.org>
      Cc: Hagen Paul Pfeifer <hagen@jauu.net>
      Cc: Palmer Dabbelt <palmerdabbelt@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      92bb909a
    • Merge branch 'akpm-current/current' · 09e02afa
      Stephen Rothwell authored
      09e02afa
    • Stephen Rothwell · 97780161
    • Stephen Rothwell · 27453ac9
    • Stephen Rothwell · a4ad0b12
    • Stephen Rothwell · 248c243c
    • Stephen Rothwell · 0568063c
    • Stephen Rothwell · 88a1957f
    • Stephen Rothwell · fa5a89c1
    • Stephen Rothwell · d4c25cc8
    • Merge remote-tracking branch 'kspp/for-next/kspp' · 871df445
      Stephen Rothwell authored
      # Conflicts:
      #	include/asm-generic/vmlinux.lds.h
      871df445
    • Stephen Rothwell · b0a0fe78
    • Stephen Rothwell · 8d65e792
    • Stephen Rothwell · 0a0884de
    • Stephen Rothwell · 3f5f78d3
    • Stephen Rothwell · 9deab8c6
    • Stephen Rothwell · 2bdab718
    • Stephen Rothwell · cfe6204c
    • Merge remote-tracking branch 'gpio-brgl/gpio/for-next' · 6930faca
      Stephen Rothwell authored
      # Conflicts:
      #	arch/arm64/boot/dts/toshiba/tmpv7708-rm-mbrc.dts
      6930faca
    • Stephen Rothwell · 56e6d5da
    • Stephen Rothwell · 5b4bf485
    • Stephen Rothwell · 895ba70c