
  1. 28 Dec, 2018 2 commits
  2. 31 Oct, 2018 2 commits
    • mm: remove include/linux/bootmem.h · 57c8a661
      Mike Rapoport authored
      Move remaining definitions and declarations from include/linux/bootmem.h
      into include/linux/memblock.h and remove the redundant header.
      
      The includes were replaced with the semantic patch below and then a
      semi-automated removal of duplicated '#include <linux/memblock.h>' lines.
      
      @@
      @@
      - #include <linux/bootmem.h>
      + #include <linux/memblock.h>
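
      For illustration, the net effect on an affected source file looks roughly
      like the following (the file contents are hypothetical):

        /* before */
        #include <linux/bootmem.h>
        #include <linux/memblock.h>

        /* after: bootmem.h replaced by the patch, duplicate include removed by hand */
        #include <linux/memblock.h>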
      
      [sfr@canb.auug.org.au: dma-direct: fix up for the removal of linux/bootmem.h]
        Link: http://lkml.kernel.org/r/20181002185342.133d1680@canb.auug.org.au
      [sfr@canb.auug.org.au: powerpc: fix up for removal of linux/bootmem.h]
        Link: http://lkml.kernel.org/r/20181005161406.73ef8727@canb.auug.org.au
      [sfr@canb.auug.org.au: x86/kaslr, ACPI/NUMA: fix for linux/bootmem.h removal]
        Link: http://lkml.kernel.org/r/20181008190341.5e396491@canb.auug.org.au
      Link: http://lkml.kernel.org/r/1536927045-23536-30-git-send-email-rppt@linux.vnet.ibm.com
      
      Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Palmer Dabbelt <palmer@sifive.com>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Serge Semin <fancer.lancer@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      57c8a661
    • memblock: remove _virt from APIs returning virtual address · eb31d559
      Mike Rapoport authored
      The conversion is done using
      
      sed -i 's@memblock_virt_alloc@memblock_alloc@g' \
      	$(git grep -l memblock_virt_alloc)
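
      A before/after sketch of a typical call site affected by the rename (the
      size and alignment arguments are illustrative):

        /* before the rename */
        ptr = memblock_virt_alloc(size, PAGE_SIZE);

        /* after the rename: same semantics, shorter name */
        ptr = memblock_alloc(size, PAGE_SIZE);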
      
      Link: http://lkml.kernel.org/r/1536927045-23536-8-git-send-email-rppt@linux.vnet.ibm.com
      
      Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Palmer Dabbelt <palmer@sifive.com>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Serge Semin <fancer.lancer@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      eb31d559
  3. 05 Oct, 2018 1 commit
  4. 17 Apr, 2018 1 commit
    • arm64: kasan: avoid pfn_to_nid() before page array is initialized · 800cb2e5
      Mark Rutland authored and Catalin Marinas committed
      In arm64's kasan_init(), we use pfn_to_nid() to find the NUMA node a
      span of memory is in, hoping to allocate shadow from the same NUMA node.
      However, at this point, the page array has not been initialized, and
      thus this is bogus.
      
      Since commit:
      
        f165b378 ("mm: uninitialized struct page poisoning sanity")
      
      ... accessing fields of the page array results in a boot time Oops(),
      highlighting this problem:
      
      [    0.000000] Unable to handle kernel paging request at virtual address dfff200000000000
      [    0.000000] Mem abort info:
      [    0.000000]   ESR = 0x96000004
      [    0.000000]   Exception class = DABT (current EL), IL = 32 bits
      [    0.000000]   SET = 0, FnV = 0
      [    0.000000]   EA = 0, S1PTW = 0
      [    0.000000] Data abort info:
      [    0.000000]   ISV = 0, ISS = 0x00000004
      [    0.000000]   CM = 0, WnR = 0
      [    0.000000] [dfff200000000000] address between user and kernel address ranges
      [    0.000000] Internal error: Oops: 96000004 [#1] PREEMPT SMP
      [    0.000000] Modules linked in:
      [    0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.16.0-07317-gf165b378 #42
      [    0.000000] Hardware name: ARM Juno development board (r1) (DT)
      [    0.000000] pstate: 80000085 (Nzcv daIf -PAN -UAO)
      [    0.000000] pc : __asan_load8+0x8c/0xa8
      [    0.000000] lr : __dump_page+0x3c/0x3b8
      [    0.000000] sp : ffff2000099b7ca0
      [    0.000000] x29: ffff2000099b7ca0 x28: ffff20000a1762c0
      [    0.000000] x27: ffff7e0000000000 x26: ffff2000099dd000
      [    0.000000] x25: ffff200009a3f960 x24: ffff200008f9c38c
      [    0.000000] x23: ffff20000a9d3000 x22: ffff200009735430
      [    0.000000] x21: fffffffffffffffe x20: ffff7e0001e50420
      [    0.000000] x19: ffff7e0001e50400 x18: 0000000000001840
      [    0.000000] x17: ffffffffffff8270 x16: 0000000000001840
      [    0.000000] x15: 0000000000001920 x14: 0000000000000004
      [    0.000000] x13: 0000000000000000 x12: 0000000000000800
      [    0.000000] x11: 1ffff0012d0f89ff x10: ffff10012d0f89ff
      [    0.000000] x9 : 0000000000000000 x8 : ffff8009687c5000
      [    0.000000] x7 : 0000000000000000 x6 : ffff10000f282000
      [    0.000000] x5 : 0000000000000040 x4 : fffffffffffffffe
      [    0.000000] x3 : 0000000000000000 x2 : dfff200000000000
      [    0.000000] x1 : 0000000000000005 x0 : 0000000000000000
      [    0.000000] Process swapper (pid: 0, stack limit = 0x        (ptrval))
      [    0.000000] Call trace:
      [    0.000000]  __asan_load8+0x8c/0xa8
      [    0.000000]  __dump_page+0x3c/0x3b8
      [    0.000000]  dump_page+0xc/0x18
      [    0.000000]  kasan_init+0x2e8/0x5a8
      [    0.000000]  setup_arch+0x294/0x71c
      [    0.000000]  start_kernel+0xdc/0x500
      [    0.000000] Code: aa0403e0 9400063c 17ffffee d343fc00 (38e26800)
      [    0.000000] ---[ end trace 67064f0e9c0cc338 ]---
      [    0.000000] Kernel panic - not syncing: Attempted to kill the idle task!
      [    0.000000] ---[ end Kernel panic - not syncing: Attempted to kill the idle task! ]---
      
      Let's fix this by using early_pfn_to_nid(), as other architectures do in
      their kasan init code. Note that early_pfn_to_nid acquires the nid from
      the memblock array, which we iterate over in kasan_init(), so this
      should be fine.
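
      A minimal sketch of the substitution described above (the variable names
      are illustrative, not the exact diff):

        /* before: reads the struct page array, which is not initialised yet */
        int nid = pfn_to_nid(virt_to_pfn(start));

        /* after: take the nid from the memblock array instead */
        int nid = early_pfn_to_nid(virt_to_pfn(start));
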
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Fixes: 39d114dd ("arm64: add KASAN support")
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      800cb2e5
  5. 16 Feb, 2018 1 commit
  6. 07 Feb, 2018 1 commit
  7. 16 Nov, 2017 1 commit
    • arm64/mm/kasan: don't use vmemmap_populate() to initialize shadow · e17d8025
      Will Deacon authored
      The kasan shadow is currently mapped using vmemmap_populate() since that
      provides a semi-convenient way to map pages into init_top_pgt.  However,
      since that no longer zeroes the mapped pages, it is not suitable for
      kasan, which requires zeroed shadow memory.
      
      Add kasan_populate_shadow() interface and use it instead of
      vmemmap_populate().  Besides, this allows us to take advantage of
      gigantic pages and use them to populate the shadow, which should save us
      some memory wasted on page tables and reduce TLB pressure.
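
      A hedged sketch of how a caller might use such an interface in place of
      vmemmap_populate(); the signature shown is an assumption, not taken from
      the patch:

        /* hypothetical: map zeroed shadow for [shadow_start, shadow_end) on node nid */
        kasan_populate_shadow(shadow_start, shadow_end, nid);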
      
      Link: http://lkml.kernel.org/r/20171103185147.2688-3-pasha.tatashin@oracle.com
      
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Steven Sistare <steven.sistare@oracle.com>
      Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
      Cc: Bob Picco <bob.picco@oracle.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e17d8025
  8. 10 Jul, 2017 1 commit
  9. 10 Mar, 2017 1 commit
    • arm64: kasan: avoid bad virt_to_pfn() · b0de0ccc
      Mark Rutland authored
      
      
      Booting a v4.11-rc1 kernel with DEBUG_VIRTUAL and KASAN enabled produces
      the following splat (trimmed for brevity):
      
      [    0.000000] virt_to_phys used for non-linear address: ffff200008080000 (0xffff200008080000)
      [    0.000000] WARNING: CPU: 0 PID: 0 at arch/arm64/mm/physaddr.c:14 __virt_to_phys+0x48/0x70
      [    0.000000] PC is at __virt_to_phys+0x48/0x70
      [    0.000000] LR is at __virt_to_phys+0x48/0x70
      [    0.000000] Call trace:
      [    0.000000] [<ffff2000080b1ac0>] __virt_to_phys+0x48/0x70
      [    0.000000] [<ffff20000a03b86c>] kasan_init+0x1c0/0x498
      [    0.000000] [<ffff20000a034018>] setup_arch+0x2fc/0x948
      [    0.000000] [<ffff20000a030c68>] start_kernel+0xb8/0x570
      [    0.000000] [<ffff20000a0301e8>] __primary_switched+0x6c/0x74
      
      This is because we use virt_to_pfn() on a kernel image address when
      trying to figure out its nid, so that we can allocate its shadow from
      the same node.
      
      As with other recent changes, this patch uses lm_alias() to solve this.
      
      We could instead use NUMA_NO_NODE, as x86 does for all shadow
      allocations, though we'll likely want the "real" memory shadow to be
      backed from its corresponding nid anyway, so we may as well be
      consistent and find the nid for the image shadow.
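
      A minimal sketch of the fix (the surrounding code is illustrative):

        /* before: _text is a kernel image address, not a linear-map address */
        int nid = pfn_to_nid(virt_to_pfn(_text));

        /* after: translate via the linear-map alias first */
        int nid = pfn_to_nid(virt_to_pfn(lm_alias(_text)));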
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Acked-by: Laura Abbott <labbott@redhat.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      b0de0ccc
  10. 02 Mar, 2017 1 commit
  11. 12 Jan, 2017 1 commit
  12. 11 Mar, 2016 2 commits
    • arm64: kasan: Fix zero shadow mapping overriding kernel image shadow · 2776e0e8
      Catalin Marinas authored
      With the 16KB and 64KB page size configurations, SWAPPER_BLOCK_SIZE is
      PAGE_SIZE and ARM64_SWAPPER_USES_SECTION_MAPS is 0. Since
      kimg_shadow_end is not page aligned (_end shifted by
      KASAN_SHADOW_SCALE_SHIFT), the edges of previously mapped kernel image
      shadow via vmemmap_populate() may be overridden by subsequent calls to
      kasan_populate_zero_shadow(), leading to kernel panics like below:
      
      ------------------------------------------------------------------------------
      Unable to handle kernel paging request at virtual address fffffc100135068c
      pgd = fffffc8009ac0000
      [fffffc100135068c] *pgd=00000009ffee0003, *pud=00000009ffee0003, *pmd=00000009ffee0003, *pte=00e0000081a00793
      Internal error: Oops: 9600004f [#1] PREEMPT SMP
      Modules linked in:
      CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.5.0-rc4+ #1984
      Hardware name: Juno (DT)
      task: fffffe09001a0000 ti: fffffe0900200000 task.ti: fffffe0900200000
      PC is at __memset+0x4c/0x200
      LR is at kasan_unpoison_shadow+0x34/0x50
      pc : [<fffffc800846f1cc>] lr : [<fffffc800821ff54>] pstate: 00000245
      sp : fffffe0900203db0
      x29: fffffe0900203db0 x28: 0000000000000000
      x27: 0000000000000000 x26: 0000000000000000
      x25: fffffc80099b69d0 x24: 0000000000000001
      x23: 0000000000000000 x22: 0000000000002000
      x21: dffffc8000000000 x20: 1fffff9001350a8c
      x19: 0000000000002000 x18: 0000000000000008
      x17: 0000000000000147 x16: ffffffffffffffff
      x15: 79746972100e041d x14: ffffff0000000000
      x13: ffff000000000000 x12: 0000000000000000
      x11: 0101010101010101 x10: 1fffffc11c000000
      x9 : 0000000000000000 x8 : fffffc100135068c
      x7 : 0000000000000000 x6 : 000000000000003f
      x5 : 0000000000000040 x4 : 0000000000000004
      x3 : fffffc100134f651 x2 : 0000000000000400
      x1 : 0000000000000000 x0 : fffffc100135068c
      
      Process swapper/0 (pid: 1, stack limit = 0xfffffe0900200020)
      Call trace:
      [<fffffc800846f1cc>] __memset+0x4c/0x200
      [<fffffc8008220044>] __asan_register_globals+0x5c/0xb0
      [<fffffc8008a09d34>] _GLOBAL__sub_I_65535_1_sunrpc_cache_lookup+0x1c/0x28
      [<fffffc8008f20d28>] kernel_init_freeable+0x104/0x274
      [<fffffc80089e1948>] kernel_init+0x10/0xf8
      [<fffffc8008093a00>] ret_from_fork+0x10/0x50
      ------------------------------------------------------------------------------
      
      This patch aligns kimg_shadow_start and kimg_shadow_end to
      SWAPPER_BLOCK_SIZE in all configurations.
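
      A minimal sketch of the alignment applied (assuming the kernel's
      round_down/round_up helpers; the exact diff may differ):

        kimg_shadow_start = round_down(kimg_shadow_start, SWAPPER_BLOCK_SIZE);
        kimg_shadow_end = round_up(kimg_shadow_end, SWAPPER_BLOCK_SIZE);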
      
      Fixes: f9040773 ("arm64: move kernel image to base of vmalloc area")
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      2776e0e8
    • arm64: kasan: Use actual memory node when populating the kernel image shadow · 2f76969f
      Catalin Marinas authored
      With the 16KB or 64KB page configurations, the generic
      vmemmap_populate() implementation warns on potential offnode
      page_structs via vmemmap_verify() because the arm64 kasan_init() passes
      NUMA_NO_NODE instead of the actual node for the kernel image memory.
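
      A minimal sketch of the change (the node lookup shown is illustrative):

        /* before: generic code warns about potentially off-node page structs */
        vmemmap_populate(kimg_shadow_start, kimg_shadow_end, NUMA_NO_NODE);

        /* after: pass the node that actually backs the kernel image */
        vmemmap_populate(kimg_shadow_start, kimg_shadow_end,
                         pfn_to_nid(virt_to_pfn(_text)));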
      
      Fixes: f9040773 ("arm64: move kernel image to base of vmalloc area")
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: James Morse <james.morse@arm.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      2f76969f
  13. 24 Feb, 2016 1 commit
    • arm64: add support for kernel ASLR · f80fb3a3
      Ard Biesheuvel authored and Catalin Marinas committed
      
      
      This adds support for KASLR, implemented based on entropy provided by
      the bootloader in the /chosen/kaslr-seed DT property. Depending on the size
      of the address space (VA_BITS) and the page size, the entropy in the
      virtual displacement is up to 13 bits (16k/2 levels) and up to 25 bits (all
      4 levels), with the sidenote that displacements that result in the kernel
      image straddling a 1GB/32MB/512MB alignment boundary (for 4KB/16KB/64KB
      granule kernels, respectively) are not allowed, and will be rounded up to
      an acceptable value.
      
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is enabled, the module region is
      randomized independently from the core kernel. This makes it less likely
      that the location of core kernel data structures can be determined by an
      adversary, but causes all function calls from modules into the core kernel
      to be resolved via entries in the module PLTs.
      
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is not enabled, the module region is
      randomized by choosing a page aligned 128 MB region inside the interval
      [_etext - 128 MB, _stext + 128 MB). This gives between 10 and 14 bits of
      entropy (depending on page size), independently of the kernel randomization,
      but still guarantees that modules are within the range of relative branch
      and jump instructions (with the caveat that, since the module region is
      shared with other uses of the vmalloc area, modules may need to be loaded
      further away if the module region is exhausted).
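
      A rough sketch of the non-FULL module region choice described above (the
      kaslr_seed variable is a hypothetical stand-in for the entropy taken from
      /chosen/kaslr-seed; this is not the patch's code):

        /* pick a page-aligned 128 MB window inside [_etext - 128 MB, _stext + 128 MB) */
        u64 range = SZ_128M - (u64)(_etext - _stext);
        u64 module_base = ((u64)_etext - SZ_128M + (kaslr_seed % range)) & PAGE_MASK;
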
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f80fb3a3
  14. 18 Feb, 2016 1 commit
    • arm64: move kernel image to base of vmalloc area · f9040773
      Ard Biesheuvel authored and Catalin Marinas committed
      
      
      This moves the module area to right before the vmalloc area, and moves
      the kernel image to the base of the vmalloc area. This is an intermediate
      step towards implementing KASLR, which allows the kernel image to be
      located anywhere in the vmalloc area.
      
      Since other subsystems such as hibernate may still need to refer to the
      kernel text or data segments via their linear addresses, both are mapped
      in the linear region as well. The linear alias of the text region is
      mapped read-only/non-executable to prevent inadvertent modification or
      execution.
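
      A hedged sketch of the protection applied to the linear alias of the text
      segment (map_region() is a hypothetical stand-in for the arch mapping code):

        /* map the linear alias of the kernel text read-only and non-executable */
        map_region(__phys_to_virt(__pa(_text)), __pa(_text),
                   _etext - _text, PAGE_KERNEL_RO);
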
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f9040773
  15. 16 Feb, 2016 2 commits
    • arm64: mm: create new fine-grained mappings at boot · 068a17a5
      Mark Rutland authored and Catalin Marinas committed
      
      
      At boot we may change the granularity of the tables mapping the kernel
      (by splitting or making sections). This may happen when we create the
      linear mapping (in __map_memblock), or at any point we try to apply
      fine-grained permissions to the kernel (e.g. fixup_executable,
      mark_rodata_ro, fixup_init).
      
      Changing the active page tables in this manner may result in multiple
      entries for the same address being allocated into TLBs, risking problems
      such as TLB conflict aborts or issues derived from the amalgamation of
      TLB entries. Generally, a break-before-make (BBM) approach is necessary
      to avoid conflicts, but we cannot do this for the kernel tables as it
      risks unmapping text or data being used to do so.
      
      Instead, we can create a new set of tables from scratch in the safety of
      the existing mappings, and subsequently migrate over to these using the
      new cpu_replace_ttbr1 helper, which avoids the two sets of tables being
      active simultaneously.
      
      To avoid issues when we later modify permissions of the page tables
      (e.g. in fixup_init), we must create the page tables at a granularity
      such that later modification does not result in splitting of tables.
      
      This patch applies this strategy, creating a new set of fine-grained
      page tables from scratch, and safely migrating to them. The existing
      fixmap and kasan shadow page tables are reused in the new fine-grained
      tables.
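
      A minimal sketch of the approach (the map_* helpers are illustrative; only
      the cpu_replace_ttbr1 helper is named by the description above):

        /* build a complete new set of tables while the old ones remain active */
        map_kernel(new_pgd);
        map_mem(new_pgd);

        /* switch TTBR1 to the new tables without both sets ever being live */
        cpu_replace_ttbr1(new_pgd);
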
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      068a17a5
    • arm64: kasan: avoid TLB conflicts · c1a88e91
      Mark Rutland authored and Catalin Marinas committed
      
      
      The page table modification performed during the KASAN init risks the
      allocation of conflicting TLB entries, as it swaps a set of valid global
      entries for another without suitable TLB maintenance.
      
      The presence of conflicting TLB entries can result in the delivery of
      synchronous TLB conflict aborts, or may result in the use of erroneous
      data being returned in response to a TLB lookup. This can affect
      explicit data accesses from software as well as translations performed
      asynchronously (e.g. as part of page table walks or speculative I-cache
      fetches), and can therefore result in a wide variety of problems.
      
      To avoid this, use cpu_replace_ttbr1 to swap the page tables. This
      ensures that when the new tables are installed there are no stale
      entries from the old tables which may conflict. As all updates are made
      to the tables while they are not active, the updates themselves are
      safe.
      
      At the same time, add the missing barrier to ensure that the tmp_pg_dir
      entries updated via memcpy are visible to the page table walkers at the
      point the tmp_pg_dir is installed. All other page table updates made as
      part of KASAN initialisation have the requisite barriers due to the use
      of the standard page table accessors.
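
      A rough sketch of the sequence described (variable names follow the
      description above; this is not the exact diff):

        /* copy the live tables into a temporary pgd and make the copy visible
         * to the table walkers before switching to it */
        memcpy(tmp_pg_dir, swapper_pg_dir, sizeof(tmp_pg_dir));
        dsb(ishst);
        cpu_replace_ttbr1(tmp_pg_dir);

        /* ... rebuild the shadow mappings in swapper_pg_dir, then switch back ... */
        cpu_replace_ttbr1(swapper_pg_dir);
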
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      c1a88e91
  16. 25 Jan, 2016 1 commit
  17. 13 Oct, 2015 1 commit
  18. 12 Oct, 2015 1 commit
    • arm64: add KASAN support · 39d114dd
      Andrey Ryabinin authored and Catalin Marinas committed
      
      
      This patch adds the arch-specific code for the kernel address sanitizer
      (see Documentation/kasan.txt).

      1/8 of the kernel address space is reserved for shadow memory. There was
      no hole big enough for this, so the virtual addresses for the shadow were
      stolen from the vmalloc area.

      At the early boot stage the whole shadow region is populated with just
      one physical page (kasan_zero_page). Later, this page is reused as a
      read-only zero shadow for memory that KASan currently doesn't track
      (vmalloc).
      After the physical memory has been mapped, pages for the shadow memory
      are allocated and mapped.

      Functions like memset/memmove/memcpy do a lot of memory accesses.
      If a bad pointer is passed to one of these functions, it is important
      to catch this. Compiler instrumentation cannot do so, since these
      functions are written in assembly.
      KASan replaces the memory functions with manually instrumented variants.
      The original functions are declared as weak symbols so that the strong
      definitions in mm/kasan/kasan.c can replace them. The original functions
      also have aliases with a '__' prefix, so the non-instrumented variant
      can be called when needed.
      Some files are built without kasan instrumentation (e.g. mm/slub.c).
      There, the mem* functions are replaced (via #define) with the prefixed
      variants to disable memory access checks for those files.
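
      A minimal sketch of the interposition scheme described above (the check
      helper and its argument list are illustrative):

        /* strong definition overrides the weak assembly symbol; the original
         * implementation remains reachable via the '__' alias */
        void *memcpy(void *dst, const void *src, size_t len)
        {
                check_memory_region((unsigned long)src, len, false);
                check_memory_region((unsigned long)dst, len, true);
                return __memcpy(dst, src, len);
        }
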
      Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Linus Walleij <linus.walleij@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      39d114dd