1. 16 Jan, 2018 1 commit
  2. 15 Jan, 2018 3 commits
  3. 08 Jan, 2018 2 commits
  4. 22 Dec, 2017 2 commits
  5. 12 Dec, 2017 1 commit
    • Shanker Donthineni
      arm64: Add software workaround for Falkor erratum 1041 · 932b50c7
      Shanker Donthineni authored
      
      
      The ARM architecture defines the memory locations that are permitted
      to be accessed as the result of a speculative instruction fetch from
      an exception level for which all stages of translation are disabled.
      Specifically, the core is permitted to speculatively fetch from the
      4KB region containing the current program counter and from the next
      4KB region.
      
      When translation is changed from enabled to disabled for the running
      exception level (SCTLR_ELn[M] changed from a value of 1 to 0), the
      Falkor core may errantly speculatively access memory locations outside
      of the 4KB region permitted by the architecture. The errant memory
      access may lead to one of the following unexpected behaviors.
      
      1) A System Error Interrupt (SEI) being raised by the Falkor core due
         to the errant memory access attempting to access a region of memory
         that is protected by a slave-side memory protection unit.
      2) Unpredictable device behavior due to a speculative read from device
         memory. This behavior may only occur if the instruction cache is
         disabled prior to or coincident with translation being changed from
         enabled to disabled.
      
      The conditions leading to this erratum will not occur when either of
      the following applies:
       1) A higher exception level disables translation of a lower exception
          level (e.g. EL2 changing SCTLR_EL1[M] from a value of 1 to 0).
       2) An exception level disables its stage-1 translation while its
          stage-2 translation is enabled (e.g. EL1 changing SCTLR_EL1[M]
          from a value of 1 to 0 when HCR_EL2[VM] has a value of 1).
      
      To avoid the errant behavior, software must execute an ISB immediately
      prior to executing the MSR that will change SCTLR_ELn[M] from 1 to 0.
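      
      As an illustration only (kernel-style C with inline assembly; the
      function below is hypothetical and not part of this patch), the
      required ordering when EL1 disables its own MMU looks roughly like:
      
        static inline void sketch_disable_mmu_el1(void)
        {
                unsigned long sctlr;
      
                asm volatile("mrs %0, sctlr_el1" : "=r" (sctlr));
                sctlr &= ~1UL;                   /* clear SCTLR_EL1.M */
      
                /* keep ISB and MSR in one asm block so they stay adjacent */
                asm volatile("isb\n\t"           /* Falkor E1041 workaround */
                             "msr sctlr_el1, %0\n\t"
                             "isb"               /* complete the MMU disable */
                             : : "r" (sctlr));
        }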
      
      Signed-off-by: Shanker Donthineni <shankerd@codeaurora.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      932b50c7
  6. 11 Dec, 2017 4 commits
  7. 16 Nov, 2017 1 commit
    • Will Deacon
      arm64/mm/kasan: don't use vmemmap_populate() to initialize shadow · e17d8025
      Will Deacon authored
      The kasan shadow is currently mapped using vmemmap_populate() since that
      provides a semi-convenient way to map pages into init_top_pgt.  However,
      since that no longer zeroes the mapped pages, it is not suitable for
      kasan, which requires zeroed shadow memory.
      
      Add a kasan_populate_shadow() interface and use it instead of
      vmemmap_populate().  In addition, this allows us to take advantage of
      gigantic pages and use them to populate the shadow, which should save
      some memory otherwise wasted on page tables and reduce TLB pressure.
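      
      A minimal sketch of the idea (simplified, ignoring the gigantic-page
      optimisation; the helper below is made up and this is not the actual
      arm64 implementation): back every shadow page with explicitly zeroed
      memory instead of relying on vmemmap_populate():
      
        #include <linux/memblock.h>
      
        /* hypothetical helper that installs a single shadow PTE */
        void sketch_map_shadow_page(unsigned long addr, unsigned long pa);
      
        static void __init kasan_populate_shadow_sketch(unsigned long start,
                                                        unsigned long end)
        {
                unsigned long addr;
      
                for (addr = start; addr < end; addr += PAGE_SIZE) {
                        /* memblock_alloc() hands back zeroed memory */
                        void *p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
      
                        sketch_map_shadow_page(addr, __pa(p));
                }
        }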
      
      Link: http://lkml.kernel.org/r/20171103185147.2688-3-pasha.tatashin@oracle.com
      
      
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Steven Sistare <steven.sistare@oracle.com>
      Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
      Cc: Bob Picco <bob.picco@oracle.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e17d8025
  8. 13 Nov, 2017 1 commit
    • Dave Martin
      arm64: Make ARMV8_DEPRECATED depend on SYSCTL · 6cfa7cc4
      Dave Martin authored
      
      
      If CONFIG_SYSCTL=n and CONFIG_ARMV8_DEPRECATED=y, the deprecated
      instruction emulation code currently leaks some memory at boot
      time, and won't have any runtime control interface.  This does
      not feel like useful or intended behaviour...
      
      This patch adds a dependency on CONFIG_SYSCTL, so that such a
      kernel can't be built in the first place.
      
      It's probably not worth adding the error-handling / cleanup code
      that would be needed to deal with this otherwise: people who
      desperately need the emulation can still enable SYSCTL.
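      
      Abridged sketch of what the Kconfig entry ends up looking like (shown
      for orientation only; see arch/arm64/Kconfig for the real definition):
      
        config ARMV8_DEPRECATED
                bool "Emulate deprecated/obsolete ARMv8 instructions"
                depends on SYSCTL
                ...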
      
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      6cfa7cc4
  9. 03 Nov, 2017 2 commits
  10. 25 Oct, 2017 1 commit
  11. 19 Oct, 2017 2 commits
  12. 12 Oct, 2017 1 commit
  13. 04 Oct, 2017 1 commit
  14. 02 Oct, 2017 1 commit
  15. 15 Aug, 2017 1 commit
  16. 09 Aug, 2017 2 commits
  17. 12 Jul, 2017 1 commit
    • Daniel Micay
      include/linux/string.h: add the option of fortified string.h functions · 6974f0c4
      Daniel Micay authored
      This adds support for compiling with a rough equivalent to the glibc
      _FORTIFY_SOURCE=1 feature, providing compile-time and runtime buffer
      overflow checks for string.h functions when the compiler determines the
      size of the source or destination buffer at compile-time.  Unlike glibc,
      it covers buffer reads in addition to writes.
      
      GNU C __builtin_*_chk intrinsics are avoided because they would force a
      much more complex implementation.  They aren't designed to detect read
      overflows and offer no real benefit when using an implementation based
      on inline checks.  Inline checks don't add up to much code size and
      allow full use of the regular string intrinsics while avoiding the need
      for a bunch of _chk functions and per-arch assembly to avoid wrapper
      overhead.
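      
      A hedged sketch of the inline-check approach (simplified; not the
      kernel's actual fortified definitions, FORTIFY_BUG() is a stand-in for
      the real report path, and the compile-time diagnostics are omitted):
      
        #include <stddef.h>
      
        #define FORTIFY_BUG()  __builtin_trap()
      
        static inline __attribute__((always_inline))
        void *memcpy_fortified(void *dst, const void *src, size_t n)
        {
                size_t dst_sz = __builtin_object_size(dst, 0);
                size_t src_sz = __builtin_object_size(src, 0);
      
                if (dst_sz != (size_t)-1 && n > dst_sz)
                        FORTIFY_BUG();          /* write overflow */
                if (src_sz != (size_t)-1 && n > src_sz)
                        FORTIFY_BUG();          /* read overflow */
                return __builtin_memcpy(dst, src, n);
        }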
      
      This detects various overflows at compile-time in various drivers and
      some non-x86 core kernel code.  There will likely be issues caught in
      regular use at runtime too.
      
      Future improvements were left out of the initial implementation for
      simplicity, as they are all quite optional and can be done
      incrementally:
      
      * Some of the fortified string functions (strncpy, strcat) don't yet
        place a limit on reads from the source based on __builtin_object_size
        of the source buffer.
      
      * Extending coverage to more string functions like strlcat.
      
      * It should be possible to optionally use __builtin_object_size(x, 1) for
        some functions (C strings) to detect intra-object overflows (like
        glibc's _FORTIFY_SOURCE=2), but for now this takes the conservative
        approach to avoid likely compatibility issues.
      
      * The compile-time checks should be made available via a separate config
        option which can be enabled by default (or always enabled) once enough
        time has passed to get the issues it catches fixed.
      
      Kees said:
       "This is great to have. While it was out-of-tree code, it would have
        blocked at least CVE-2016-3858 from being exploitable (improper size
        argument to strlcpy()). I've sent a number of fixes for
        out-of-bounds-reads that this detected upstream already"
      
      [arnd@arndb.de: x86: fix fortified memcpy]
        Link: http://lkml.kernel.org/r/20170627150047.660360-1-arnd@arndb.de
      [keescook@chromium.org: avoid panic() in favor of BUG()]
        Link: http://lkml.kernel.org/r/20170626235122.GA25261@beast
      [keescook@chromium.org: move from -mm, add ARCH_HAS_FORTIFY_SOURCE, tweak Kconfig help]
      Link: http://lkml.kernel.org/r/20170526095404.20439-1-danielmicay@gmail.com
      Link: http://lkml.kernel.org/r/1497903987-21002-8-git-send-email-keescook@chromium.org
      
      
      Signed-off-by: Daniel Micay <danielmicay@gmail.com>
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Daniel Axtens <dja@axtens.net>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6974f0c4
  18. 06 Jul, 2017 1 commit
  19. 22 Jun, 2017 1 commit
    • Tyler Baicar
      acpi: apei: handle SEA notification type for ARMv8 · 7edda088
      Tyler Baicar authored
      
      
      The ARM APEI extension proposal added an SEA (Synchronous External
      Abort) notification type for ARMv8.
      Add a new GHES error source handling function for SEA. If an error
      source's notification type is SEA, then this function can be registered
      into the SEA exception handler. That way GHES will parse and report
      SEA exceptions when they occur.
      An SEA can interrupt code that had interrupts masked and is treated as
      an NMI. To aid this, the page of address space for mapping APEI buffers
      while in_nmi() is always reserved, and ghes_ioremap_pfn_nmi() is
      changed to use the helper methods to find the prot_t to map with in
      the same way as ghes_ioremap_pfn_irq().
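      
      A rough sketch of the dispatch idea (the names below are hypothetical
      and stand in for the real GHES bookkeeping in drivers/acpi/apei/ghes.c):
      sources registered with the SEA notification type sit on a list, and
      the SEA handler walks that list and asks each one to parse and report
      its error status block:
      
        #include <linux/list.h>
        #include <linux/errno.h>
      
        /* hypothetical per-source record */
        struct sea_source {
                struct list_head node;
                int (*process)(struct sea_source *src); /* parse and report */
        };
      
        static LIST_HEAD(sea_source_list);
      
        /* called from the arch SEA handler; runs in an NMI-like context */
        static int sea_notify_sketch(void)
        {
                struct sea_source *src;
                int ret = -ENOENT;
      
                list_for_each_entry(src, &sea_source_list, node)
                        if (!src->process(src))
                                ret = 0;        /* someone handled it */
      
                return ret;
        }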
      
      Signed-off-by: Tyler Baicar <tbaicar@codeaurora.org>
      CC: Jonathan (Zhixiong) Zhang <zjzhang@codeaurora.org>
      Reviewed-by: James Morse <james.morse@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      7edda088
  20. 20 Jun, 2017 1 commit
  21. 15 Jun, 2017 2 commits
  22. 13 Jun, 2017 1 commit
  23. 12 Jun, 2017 1 commit
  24. 09 Jun, 2017 1 commit
  25. 07 Jun, 2017 1 commit
    • Ard Biesheuvel
      arm64: ftrace: add support for far branches to dynamic ftrace · e71a4e1b
      Ard Biesheuvel authored
      
      
      Currently, dynamic ftrace support in the arm64 kernel assumes that all
      core kernel code is within range of ordinary branch instructions that
      occur in module code, which is usually the case, but is no longer
      guaranteed now that we have support for module PLTs and address space
      randomization.
      
      Since, on arm64, all patching of branch instructions involves function
      calls to the same entry point [ftrace_caller()], we can emit each
      module with a trampoline that has unlimited range, and patch both the
      trampoline itself and the branch instruction to redirect the call via
      the trampoline.
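      
      A condensed sketch of the call-target selection (the trampoline helper
      is hypothetical; the real logic lives in the arm64 ftrace code): if
      ftrace_caller() is outside the +/-128MB range of an AArch64 BL
      instruction, the branch is pointed at the module's trampoline instead:
      
        #include <linux/sizes.h>                /* SZ_128M */
      
        extern void ftrace_caller(void);        /* common ftrace entry point */
        struct module;
      
        /* hypothetical: returns the module's ftrace trampoline address */
        unsigned long sketch_module_ftrace_trampoline(struct module *mod);
      
        static unsigned long sketch_ftrace_branch_target(struct module *mod,
                                                         unsigned long callsite)
        {
                unsigned long dest = (unsigned long)&ftrace_caller;
                long offset = (long)dest - (long)callsite;
      
                /* an AArch64 BL immediate reaches +/-128MB of the call site */
                if (offset < -(long)SZ_128M || offset >= (long)SZ_128M)
                        dest = sketch_module_ftrace_trampoline(mod);
      
                return dest;
        }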
      
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [will: minor clarification to smp_wmb() comment]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      e71a4e1b
  26. 03 Jun, 2017 1 commit
  27. 26 Apr, 2017 2 commits
  28. 23 Apr, 2017 1 commit
    • Ingo Molnar
      Revert "x86/mm/gup: Switch GUP to the generic get_user_page_fast() implementation" · 6dd29b3d
      Ingo Molnar authored
      This reverts commit 2947ba05.
      
      Dan Williams reported dax-pmem kernel warnings with the following signature:
      
         WARNING: CPU: 8 PID: 245 at lib/percpu-refcount.c:155 percpu_ref_switch_to_atomic_rcu+0x1f5/0x200
         percpu ref (dax_pmem_percpu_release [dax_pmem]) <= 0 (0) after switching to atomic
      
      ... and bisected it to this commit, which suggests possible memory corruption
      caused by the x86 fast-GUP conversion.
      
      He also pointed out:
      
       "
        This is similar to the backtrace when we were not properly handling
        pud faults and was fixed with this commit: 220ced16 "mm: fix
        get_user_pages() vs device-dax pud mappings"
      
        I've found some missing _devmap checks in the generic
        get_user_pages_fast() path, but this does not fix the regression
        [...]
       "
      
      So given that there are known bugs, and a pretty robust-looking
      bisection points to this commit suggesting that there are unknown bugs
      in the conversion as well, revert it for the time being - we'll re-try
      in v4.13.
      
      Reported-by: Dan Williams <dan.j.williams@intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: aneesh.kumar@linux.vnet.ibm.com
      Cc: dann.frazier@canonical.com
      Cc: dave.hansen@intel.com
      Cc: steve.capper@linaro.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6dd29b3d