1. 14 Jun, 2018 1 commit
  2. 01 May, 2018 1 commit
  3. 27 Mar, 2018 1 commit
  4. 06 Mar, 2018 1 commit
    • arm64: Revert L1_CACHE_SHIFT back to 6 (64-byte cache line size) · 1f85b42a
      Catalin Marinas authored
      Commit 97303480 ("arm64: Increase the max granular size") increased
      the cache line size to 128 to match Cavium ThunderX, apparently for some
      performance benefit which could not be confirmed. This change, however,
      has an impact on network packet allocation in certain circumstances,
      where an allocation requiring slightly over a 4K page incurs a
      significant performance degradation.
      This patch reverts L1_CACHE_SHIFT back to 6 (64-byte cache line) while
      keeping ARCH_DMA_MINALIGN at 128. The cache_line_size() function was
      changed to default to ARCH_DMA_MINALIGN in the absence of a meaningful
      CTR_EL0.CWG bit field.
      In addition, if a system with ARCH_DMA_MINALIGN < CTR_EL0.CWG is
      detected, the kernel will force swiotlb bounce buffering for all
      non-coherent devices since DMA cache maintenance on sub-CWG ranges is
      not safe, leading to data corruption.
      Cc: Tirumalesh Chalamarla <tchalamarla@cavium.com>
      Cc: Timur Tabi <timur@codeaurora.org>
      Cc: Florian Fainelli <f.fainelli@gmail.com>
      Acked-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
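The fallback described above can be sketched in a few lines: a cache_line_size()-style helper that trusts CTR_EL0.CWG when it reports a granule and conservatively falls back to ARCH_DMA_MINALIGN otherwise. This is a hedged userspace illustration of the logic the commit message describes, not the kernel implementation; the helper name and standalone constants are assumptions.

```c
#include <assert.h>

/* Toy constants mirroring the commit text; not kernel headers. */
#define ARCH_DMA_MINALIGN 128
#define L1_CACHE_SHIFT    6
#define L1_CACHE_BYTES    (1 << L1_CACHE_SHIFT)  /* 64 bytes after the revert */

/* CTR_EL0.CWG encodes the cache writeback granule as log2 of the number
 * of 4-byte words; a value of 0 means the field is not meaningful. In
 * that case, fall back to the conservative ARCH_DMA_MINALIGN. */
static int cache_line_size_from_cwg(unsigned int cwg)
{
    return cwg ? (4 << cwg) : ARCH_DMA_MINALIGN;
}
```

For example, a part reporting CWG = 5 yields 4 << 5 = 128 bytes (matching ARCH_DMA_MINALIGN), while CWG = 4 yields the common 64-byte line.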
  5. 19 Jan, 2018 1 commit
  6. 15 Jan, 2018 2 commits
  7. 11 Dec, 2017 1 commit
    • arm64: Initialise high_memory global variable earlier · f24e5834
      Steve Capper authored
      The high_memory global variable is used by
      cma_declare_contiguous(.) before it is defined.
      We don't notice this as we compute __pa(high_memory - 1), and it looks
      like we're processing a VA from the direct linear map.
      This problem becomes apparent when we flip the kernel virtual address
      space and the linear map is moved to the bottom of the kernel VA space.
      This patch moves the initialisation of high_memory before it is used.
      Cc: <stable@vger.kernel.org>
      Fixes: f7426b98 ("mm: cma: adjust address limit to avoid hitting low/high memory boundary")
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  8. 05 Apr, 2017 4 commits
  9. 17 Jan, 2017 1 commit
    • arm64: Fix swiotlb fallback allocation · 524dabe1
      Alexander Graf authored and Catalin Marinas committed
      Commit b67a8b29 introduced logic to skip swiotlb allocation when all memory
      is DMA accessible anyway.
      While this is a great idea, __dma_alloc still calls into the swiotlb code
      unconditionally to allocate memory when there is no CMA memory available,
      so that we at least try get_free_pages().
      Without initialization, swiotlb allocation code tries to access io_tlb_list
      which is NULL. That results in a stack trace like this:
        Unable to handle kernel NULL pointer dereference at virtual address 00000000
        [<ffff00000845b908>] swiotlb_tbl_map_single+0xd0/0x2b0
        [<ffff00000845be94>] swiotlb_alloc_coherent+0x10c/0x198
        [<ffff000008099dc0>] __dma_alloc+0x68/0x1a8
        [<ffff000000a1b410>] drm_gem_cma_create+0x98/0x108 [drm]
        [<ffff000000abcaac>] drm_fbdev_cma_create_with_funcs+0xbc/0x368 [drm_kms_helper]
        [<ffff000000abcd84>] drm_fbdev_cma_create+0x2c/0x40 [drm_kms_helper]
        [<ffff000000abc040>] drm_fb_helper_initial_config+0x238/0x410 [drm_kms_helper]
        [<ffff000000abce88>] drm_fbdev_cma_init_with_funcs+0x98/0x160 [drm_kms_helper]
        [<ffff000000abcf90>] drm_fbdev_cma_init+0x40/0x58 [drm_kms_helper]
        [<ffff000000b47980>] vc4_kms_load+0x90/0xf0 [vc4]
        [<ffff000000b46a94>] vc4_drm_bind+0xec/0x168 [vc4]
      Thankfully, the swiotlb code recently learned how to skip allocations via
      the FORCE_NO option. This patch configures the swiotlb code to use that
      option if we decide not to initialize the swiotlb framework.
      Fixes: b67a8b29 ("arm64: mm: only initialize swiotlb when necessary")
      Signed-off-by: Alexander Graf <agraf@suse.de>
      CC: Jisheng Zhang <jszhang@marvell.com>
      CC: Geert Uytterhoeven <geert+renesas@glider.be>
      CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  10. 12 Jan, 2017 1 commit
  11. 19 Dec, 2016 1 commit
  12. 20 Oct, 2016 1 commit
    • arm64: remove pr_cont abuse from mem_init · f7881bd6
      Mark Rutland authored
      All the lines printed by mem_init are independent, with each ending with
      a newline. While they logically form a large block, none are actually
      continuations of previous lines.
      The kernel-side printk code and the userspace dmesg tool differ in their
      handling of KERN_CONT following a newline, and while this isn't always a
      problem kernel-side, it does cause difficulty for userspace. Using
      pr_cont causes the userspace tool to not print line prefix (e.g.
      timestamps) even when following a newline, mis-aligning the output and
      making it harder to read, e.g.
      [    0.000000] Virtual kernel memory layout:
      [    0.000000]     modules : 0xffff000000000000 - 0xffff000008000000   (   128 MB)
          vmalloc : 0xffff000008000000 - 0xffff7dffbfff0000   (129022 GB)
            .text : 0xffff000008080000 - 0xffff0000088b0000   (  8384 KB)
          .rodata : 0xffff0000088b0000 - 0xffff000008c50000   (  3712 KB)
            .init : 0xffff000008c50000 - 0xffff000008d50000   (  1024 KB)
            .data : 0xffff000008d50000 - 0xffff000008e25200   (   853 KB)
             .bss : 0xffff000008e25200 - 0xffff000008e6bec0   (   284 KB)
          fixed   : 0xffff7dfffe7fd000 - 0xffff7dfffec00000   (  4108 KB)
          PCI I/O : 0xffff7dfffee00000 - 0xffff7dffffe00000   (    16 MB)
          vmemmap : 0xffff7e0000000000 - 0xffff800000000000   (  2048 GB maximum)
                    0xffff7e0000000000 - 0xffff7e0026000000   (   608 MB actual)
          memory  : 0xffff800000000000 - 0xffff800980000000   ( 38912 MB)
      [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=6, Nodes=1
      Fix this by using pr_notice consistently for all lines, which both the
      kernel and userspace are happy with.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
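The mis-alignment shown above can be modelled with a toy formatter standing in for a userspace log reader: a full record gets the usual timestamp prefix, while a KERN_CONT-style continuation fragment is emitted bare. Purely illustrative; the function and constants are assumptions, not kernel or dmesg code.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Toy model of how a log reader renders records: a full record gets a
 * "[timestamp] " prefix, while a continuation fragment following a
 * newline is passed through bare and ends up mis-aligned. */
static void render(char *out, size_t len, const char *msg, int is_cont)
{
    if (is_cont)
        snprintf(out, len, "%s", msg);                 /* no prefix */
    else
        snprintf(out, len, "[    0.000000] %s", msg);  /* aligned record */
}
```

Emitting every independent line via pr_notice makes each one a full record, so the prefixed branch is taken consistently.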
  13. 06 Sep, 2016 1 commit
  14. 22 Aug, 2016 1 commit
  15. 28 Jul, 2016 1 commit
    • arm64:acpi: fix the acpi alignment exception when 'mem=' specified · cb0a6502
      Dennis Chen authored
      When booting an ACPI enabled kernel with 'mem=x', there is the
      possibility that ACPI data regions from the firmware will lie above the
      memory limit. Ordinarily these will be removed by
      memblock_enforce_memory_limit(.).
      Unfortunately, this means that these regions will then be mapped by
      acpi_os_ioremap(.) as device memory (instead of normal), and unaligned
      accesses will then provoke alignment faults.
      In this patch we adopt memblock_mem_limit_remove_map instead, and this
      preserves these ACPI data regions (marked NOMAP) thus ensuring that
      these regions are not mapped as device memory.
      For example, below is an alignment exception observed on ARM platform
      when booting the kernel with 'acpi=on mem=8G':
        Unable to handle kernel paging request at virtual address ffff0000080521e7
        pgd = ffff000008aa0000
        [ffff0000080521e7] *pgd=000000801fffe003, *pud=000000801fffd003, *pmd=000000801fffc003, *pte=00e80083ff1c1707
        Internal error: Oops: 96000021 [#1] PREEMPT SMP
        Modules linked in:
        CPU: 1 PID: 1 Comm: swapper/0 Not tainted 4.7.0-rc3-next-20160616+ #172
        Hardware name: AMD Overdrive/Supercharger/Default string, BIOS ROD1001A 02/09/2016
        task: ffff800001ef0000 ti: ffff800001ef8000 task.ti: ffff800001ef8000
        PC is at acpi_ns_lookup+0x520/0x734
        LR is at acpi_ns_lookup+0x4a4/0x734
        pc : [<ffff0000083b8b10>] lr : [<ffff0000083b8a94>] pstate: 60000045
        sp : ffff800001efb8b0
        x29: ffff800001efb8c0 x28: 000000000000001b
        x27: 0000000000000001 x26: 0000000000000000
        x25: ffff800001efb9e8 x24: ffff000008a10000
        x23: 0000000000000001 x22: 0000000000000001
        x21: ffff000008724000 x20: 000000000000001b
        x19: ffff0000080521e7 x18: 000000000000000d
        x17: 00000000000038ff x16: 0000000000000002
        x15: 0000000000000007 x14: 0000000000007fff
        x13: ffffff0000000000 x12: 0000000000000018
        x11: 000000001fffd200 x10: 00000000ffffff76
        x9 : 000000000000005f x8 : ffff000008725fa8
        x7 : ffff000008a8df70 x6 : ffff000008a8df70
        x5 : ffff000008a8d000 x4 : 0000000000000010
        x3 : 0000000000000010 x2 : 000000000000000c
        x1 : 0000000000000006 x0 : 0000000000000000
        Code: b9009fbc 2a00037b 36380057 3219037b (b9400260)
        ---[ end trace 03381e5eb0a24de4 ]---
        Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
      With 'efi=debug', we can see those ACPI regions loaded by firmware on
      that board as:
        efi:   0x0083ff185000-0x0083ff1b4fff [Reserved           |   |  |  |  |  |  |  |   |WB|WT|WC|UC]*
        efi:   0x0083ff1b5000-0x0083ff1c2fff [ACPI Reclaim Memory|   |  |  |  |  |  |  |   |WB|WT|WC|UC]*
        efi:   0x0083ff223000-0x0083ff224fff [ACPI Memory NVS    |   |  |  |  |  |  |  |   |WB|WT|WC|UC]*
      Link: http://lkml.kernel.org/r/1468475036-5852-3-git-send-email-dennis.chen@arm.com
      Acked-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Dennis Chen <dennis.chen@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Rafael J. Wysocki <rafael@kernel.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Kaly Xin <kaly.xin@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
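The behavioural difference the commit relies on can be sketched with a toy region model (not the real memblock API): instead of dropping a region above the limit, which would lose the firmware's ACPI tables, the limit is applied by keeping the region registered and marking it NOMAP. All names and the truncation logic here are simplified assumptions.

```c
#include <assert.h>
#include <stdint.h>

/* Toy region descriptor; the real memblock bookkeeping is more involved. */
struct region {
    uint64_t base, size;
    int nomap;   /* registered, but excluded from the linear map */
};

/* Apply a mem= style limit in the spirit of memblock_mem_limit_remove_map:
 * a region entirely above the limit stays registered but becomes NOMAP,
 * while a straddling region is simply truncated in this toy model. */
static void mem_limit_remove_map(struct region *r, uint64_t limit)
{
    if (r->base >= limit)
        r->nomap = 1;
    else if (r->base + r->size > limit)
        r->size = limit - r->base;
}
```

Because the ACPI regions remain registered (as NOMAP), acpi_os_ioremap(.) can still map them with normal-memory attributes rather than device memory.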
  16. 27 Jun, 2016 2 commits
  17. 21 Jun, 2016 1 commit
  18. 19 Apr, 2016 2 commits
  19. 15 Apr, 2016 1 commit
  20. 14 Apr, 2016 4 commits
    • arm64: mm: move vmemmap region right below the linear region · 3e1907d5
      Ard Biesheuvel authored
      This moves the vmemmap region right below PAGE_OFFSET, aka the start
      of the linear region, and redefines its size to be a power of two.
      Due to the placement of PAGE_OFFSET in the middle of the address space,
      whose size is a power of two as well, this guarantees that virt to
      page conversions and vice versa can be implemented efficiently, by
      masking and shifting rather than ordinary arithmetic.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
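The mask-and-shift conversion this layout enables can be sketched as follows. The constants are toy values (real ones depend on VA_BITS, the page size, and sizeof(struct page)), so this is an illustration of the arithmetic, not the kernel's virt_to_page.

```c
#include <assert.h>
#include <stdint.h>

/* Toy layout constants; real values depend on the configuration. */
#define PAGE_SHIFT        12
#define PAGE_OFFSET       0xffff800000000000ULL  /* base of the linear region */
#define VMEMMAP_START     0xffff7e0000000000ULL  /* vmemmap sits right below */
#define STRUCT_PAGE_SHIFT 6                      /* assumed 64-byte struct page */

/* With the region sizes all powers of two, converting a linear VA to the
 * address of its struct page slot is a subtract, a shift, and an add:
 * no division, no table walk. */
static uint64_t virt_to_page_addr(uint64_t va)
{
    uint64_t pfn_off = (va - PAGE_OFFSET) >> PAGE_SHIFT;
    return VMEMMAP_START + (pfn_off << STRUCT_PAGE_SHIFT);
}
```

The reverse direction (page address back to linear VA) is the same arithmetic run backwards, which is why the power-of-two sizing matters.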
    • arm64: mm: free __init memory via the linear mapping · d386825c
      Ard Biesheuvel authored
      The implementation of free_initmem_default() expects __init_begin
      and __init_end to be covered by the linear mapping, which is no
      longer the case. So open code it instead, using addresses that are
      explicitly translated from kernel virtual to linear virtual.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
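The explicit translation the commit describes, kernel-image VA to physical address and back to the linear-map alias, can be sketched with toy constants. Names and addresses here are assumptions for illustration; the kernel derives the real values at boot.

```c
#include <assert.h>
#include <stdint.h>

/* Toy addresses; the kernel computes these at boot time. */
#define PAGE_OFFSET   0xffff800000000000ULL  /* base of the linear map */
#define KIMAGE_VADDR  0xffff000008000000ULL  /* kernel image VA base */
#define MEMSTART      0x0000000040000000ULL  /* DRAM base (memstart_addr) */
#define KIMG_PHYS     0x0000000040080000ULL  /* physical load address */

/* kernel-image VA -> PA: fixed offset between image VAs and the load
 * address of the image */
static uint64_t kimg_to_phys(uint64_t va)
{
    return va - KIMAGE_VADDR + KIMG_PHYS;
}

/* PA -> linear-map alias: the address that open-coded __init freeing
 * can hand back to the page allocator */
static uint64_t phys_to_linear(uint64_t pa)
{
    return pa - MEMSTART + PAGE_OFFSET;
}
```

Composing the two gives the linear alias of any kernel-image symbol, which is exactly what the open-coded free needs once __init_begin/__init_end no longer live in the linear map themselves.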
    • arm64: add the initrd region to the linear mapping explicitly · 177e15f0
      Ard Biesheuvel authored
      Instead of going out of our way to relocate the initrd if it turns out
      to occupy memory that is not covered by the linear mapping, just add the
      initrd to the linear mapping. This puts the burden on the bootloader to
      pass initrd= and mem= options that are mutually consistent.
      Note that, since the placement of the linear region in the PA space is
      also dependent on the placement of the kernel Image, which may reside
      anywhere in memory, we may still end up with a situation where the initrd
      and the kernel Image are simply too far apart to be covered by the linear
      mapping.
      Since we now leave it up to the bootloader to pass the initrd in memory
      that is guaranteed to be accessible by the kernel, add a mention of this to
      the arm64 boot protocol specification as well.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64/mm: ensure memstart_addr remains sufficiently aligned · 2958987f
      Ard Biesheuvel authored
      After choosing memstart_addr to be the highest multiple of
      ARM64_MEMSTART_ALIGN less than or equal to the first usable physical memory
      address, we clip the memblocks to the maximum size of the linear region.
      Since the kernel may be high up in memory, we take care not to clip the
      kernel itself, which means we have to clip some memory from the bottom if
      this occurs, to ensure that the distance between the first and the last
      usable physical memory address can be covered by the linear region.
      However, we fail to update memstart_addr if this clipping from the bottom
      occurs, which means that we may still end up with virtual addresses that
      wrap into the userland range. So increment memstart_addr as appropriate to
      prevent this from happening.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
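The arithmetic behind the fix can be sketched as re-applying the original choice of memstart_addr after clipping: it is always the highest multiple of the alignment at or below the first usable physical address, so when memory is clipped from the bottom the value moves up with it. The alignment granule and addresses are toy assumptions.

```c
#include <assert.h>
#include <stdint.h>

#define ARM64_MEMSTART_ALIGN (1ULL << 30)   /* assumed granule; toy value */

static uint64_t align_down(uint64_t x, uint64_t a)
{
    return x & ~(a - 1);
}

/* memstart_addr is the highest multiple of ARM64_MEMSTART_ALIGN less
 * than or equal to the first usable physical address. The fix amounts
 * to re-running this choice after memory has been clipped from the
 * bottom, so memstart_addr increases along with the first usable PA
 * and linear VAs cannot wrap into the userland range. */
static uint64_t choose_memstart(uint64_t first_usable_pa)
{
    return align_down(first_usable_pa, ARM64_MEMSTART_ALIGN);
}
```

Since align_down is monotonic, clipping memory from the bottom (raising the first usable PA) can only raise the chosen memstart_addr, never lower it.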
  21. 21 Mar, 2016 1 commit
  22. 02 Mar, 2016 1 commit
  23. 29 Feb, 2016 2 commits
  24. 26 Feb, 2016 3 commits
  25. 24 Feb, 2016 1 commit
  26. 18 Feb, 2016 3 commits
    • arm64: allow kernel Image to be loaded anywhere in physical memory · a7f8de16
      Ard Biesheuvel authored and Catalin Marinas committed
      This relaxes the kernel Image placement requirements, so that it
      may be placed at any 2 MB aligned offset in physical memory.
      This is accomplished by ignoring PHYS_OFFSET when installing
      memblocks, and accounting for the apparent virtual offset of
      the kernel Image. As a result, virtual address references
      below PAGE_OFFSET are correctly mapped onto physical references
      into the kernel Image regardless of where it sits in memory.
      Special care needs to be taken for dealing with memory limits passed
      via mem=, since the generic implementation clips memory top down, which
      may clip the kernel image itself if it is loaded high up in memory. To
      deal with this case, we simply add back the memory covering the kernel
      image, which may result in more memory to be retained than was passed
      as a mem= parameter.
      Since mem= should not be considered a production feature, a panic notifier
      handler is installed that dumps the memory limit at panic time if one was
      set.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: defer __va translation of initrd_start and initrd_end · a89dea58
      Ard Biesheuvel authored and Catalin Marinas committed
      Before deferring the assignment of memstart_addr in a subsequent patch, to
      the moment where all memory has been discovered and possibly clipped based
      on the size of the linear region and the presence of a mem= command line
      parameter, we need to ensure that memstart_addr is not used to perform __va
      translations before it is assigned.
      One such use is in the generic early DT discovery of the initrd location,
      which is recorded as a virtual address in the globals initrd_start and
      initrd_end. So wire up the generic support to declare the initrd addresses,
      implement it without __va() translations, and perform the translation only
      after memstart_addr has been assigned.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: move kernel image to base of vmalloc area · f9040773
      Ard Biesheuvel authored and Catalin Marinas committed
      This moves the module area to right before the vmalloc area, and moves
      the kernel image to the base of the vmalloc area. This is an intermediate
      step towards implementing KASLR, which allows the kernel image to be
      located anywhere in the vmalloc area.
      Since other subsystems such as hibernate may still need to refer to the
      kernel text or data segments via their linear addresses, both are mapped
      in the linear region as well. The linear alias of the text region is
      mapped read-only/non-executable to prevent inadvertent modification or
      execution.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>