1. 08 Mar, 2016 1 commit
    • Bogicevic Sasa
      PCI: Include pci/pcie/Kconfig directly from pci/Kconfig · 5f8fc432
      Bogicevic Sasa authored
      
      
      Include pci/pcie/Kconfig directly from pci/Kconfig, so arches don't
      have to source both pci/Kconfig and pci/pcie/Kconfig.
      
      Note that this effectively adds pci/pcie/Kconfig to the following
      arches, because they already sourced drivers/pci/Kconfig but they
      previously did not source drivers/pci/pcie/Kconfig:
      
        alpha
        avr32
        blackfin
        frv
        m32r
        m68k
        microblaze
        mn10300
        parisc
        sparc
        unicore32
        xtensa
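
A minimal sketch of the resulting Kconfig shape described above (illustrative, not the verbatim patch):

```kconfig
# drivers/pci/Kconfig (sketch): pull in the PCIe options directly,
# near the top of the file, per the commit message.
source "drivers/pci/pcie/Kconfig"

# Arch Kconfig files then only need a single line:
#     source "drivers/pci/Kconfig"
# instead of sourcing both pci/Kconfig and pci/pcie/Kconfig.
```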
      
      [bhelgaas: changelog, source pci/pcie/Kconfig at top of pci/Kconfig, whitespace]
      Signed-off-by: Sasa Bogicevic <brutallesale@gmail.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  2. 21 Jan, 2016 1 commit
    • Christoph Hellwig
      dma-mapping: always provide the dma_map_ops based implementation · e1c7e324
      Christoph Hellwig authored
      
      
      Move the generic implementation to <linux/dma-mapping.h> now that all
      architectures support it and remove the HAVE_DMA_ATTR Kconfig symbol now
      that everyone supports them.
      
      [valentinrothberg@gmail.com: remove leftovers in Kconfig]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
      Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
      Cc: Helge Deller <deller@gmx.de>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Jesper Nilsson <jesper.nilsson@axis.com>
      Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Steven Miao <realmz6@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: Sebastian Ott <sebott@linux.vnet.ibm.com>
      Signed-off-by: Valentin Rothberg <valentinrothberg@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 15 Jan, 2016 1 commit
    • Daniel Cashman
      arm64: mm: support ARCH_MMAP_RND_BITS · 8f0d3aa9
      Daniel Cashman authored
      
      
      arm64: arch_mmap_rnd() uses STACK_RND_MASK to generate the random offset
      for the mmap base address.  This value represents a compromise between
      increased ASLR effectiveness and avoiding address-space fragmentation.
      Replace it with a Kconfig option, which is sensibly bounded, so that
      platform developers may choose where to place this compromise.  Keep
      default values as new minimums.
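
As a sketch of what such a bounded option looks like in Kconfig terms (symbol names and the range mechanism are illustrative, not quoted from the patch):

```kconfig
# Sketch: a platform-tunable mmap randomization width, bounded so a
# developer cannot pick a value outside what the architecture supports.
config ARCH_MMAP_RND_BITS
	int "Number of bits to use for ASLR of mmap base address"
	range ARCH_MMAP_RND_BITS_MIN ARCH_MMAP_RND_BITS_MAX
	default ARCH_MMAP_RND_BITS_MIN
```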
      
      Signed-off-by: Daniel Cashman <dcashman@google.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Heinrich Schuchardt <xypron.glpk@gmx.de>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Mark Salyzyn <salyzyn@android.com>
      Cc: Jeff Vander Stoep <jeffv@google.com>
      Cc: Nick Kralevich <nnk@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Hector Marco-Gisbert <hecmargi@upv.es>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  4. 09 Jan, 2016 1 commit
  5. 04 Jan, 2016 1 commit
  6. 21 Dec, 2015 2 commits
    • David Woods
      arm64: hugetlb: add support for PTE contiguous bit · 66b3923a
      David Woods authored
      
      
      The arm64 MMU supports a Contiguous bit which is a hint that the TTE
      is one of a set of contiguous entries which can be cached in a single
      TLB entry.  Supporting this bit adds new intermediate huge page sizes.
      
      The set of huge page sizes available depends on the base page size.
      Without using contiguous pages the huge page sizes are as follows.
      
       4KB:   2MB  1GB
      64KB: 512MB
      
      With a 4KB granule, the contiguous bit groups together sets of 16 pages
      and with a 64KB granule it groups sets of 32 pages.  This enables two new
      huge page sizes in each case, so that the full set of available sizes
      is as follows.
      
       4KB:  64KB   2MB  32MB  1GB
      64KB:   2MB 512MB  16GB
      
      If a 16KB granule is used then the contiguous bit groups 128 pages
      at the PTE level and 32 pages at the PMD level.
      
      If the base page size is set to 64KB then 2MB pages are enabled by
      default.  It is possible in the future to make 2MB the default huge
      page size for both 4KB and 64KB granules.
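
The sizes above follow directly from the granule size and the grouping factors stated in the message; a quick arithmetic check in Python (grouping factors per the commit text):

```python
# Huge page sizes implied by the ARM64 Contiguous bit, as described in
# the commit message: a contiguous span groups N entries at a level,
# and a PMD block maps (granule / 8) entries of one granule each.
KB, MB, GB = 1024, 1024**2, 1024**3

def cont_sizes(granule, cont_pte, cont_pmd):
    """Return [contiguous-PTE, PMD-block, contiguous-PMD] sizes in bytes."""
    entries = granule // 8            # 8-byte page table entries per table
    pmd = granule * entries           # classic PMD-level huge page size
    return [granule * cont_pte,       # contiguous PTEs
            pmd,                      # single PMD block
            pmd * cont_pmd]           # contiguous PMDs

# 4KB granule, 16-entry groups -> 64KB, 2MB, 32MB (plus the 1GB PUD block)
assert cont_sizes(4 * KB, 16, 16) == [64 * KB, 2 * MB, 32 * MB]
# 64KB granule, 32-entry groups -> 2MB, 512MB, 16GB
assert cont_sizes(64 * KB, 32, 32) == [2 * MB, 512 * MB, 16 * GB]
```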
      
      Reviewed-by: Chris Metcalf <cmetcalf@ezchip.com>
      Reviewed-by: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: David Woods <dwoods@ezchip.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • Stefano Stabellini
      arm64: introduce CONFIG_PARAVIRT, PARAVIRT_TIME_ACCOUNTING and pv_time_ops · dfd57bc3
      Stefano Stabellini authored
      
      
      Introduce CONFIG_PARAVIRT and PARAVIRT_TIME_ACCOUNTING on ARM64.
      This requires duplicating paravirt.h and paravirt.c from ARM.
      
      The only paravirt interface supported is pv_time_ops.steal_clock, so no
      runtime pvops patching needed.
      
      This allows us to make use of steal_account_process_tick for stolen
      ticks accounting.
      
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
  7. 03 Dec, 2015 1 commit
  8. 26 Nov, 2015 1 commit
  9. 24 Nov, 2015 1 commit
  10. 10 Nov, 2015 1 commit
  11. 28 Oct, 2015 1 commit
  12. 20 Oct, 2015 1 commit
    • Catalin Marinas
      arm64: Make 36-bit VA depend on EXPERT · 56a3f30e
      Catalin Marinas authored
      Commit 21539939 (arm64: 36 bit VA) introduced 36-bit VA support for
      the arm64 kernel when the 16KB page configuration is enabled. While this
      is a valid hardware configuration, it's not something we want to
      encourage since it reduces the memory (and I/O) range that the kernel
      can access. Make this depend on EXPERT to avoid complaints of Linux not
      mapping the whole RAM, especially on platforms following the ARM
      recommended memory map.
      
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  13. 19 Oct, 2015 4 commits
  14. 15 Oct, 2015 1 commit
  15. 12 Oct, 2015 1 commit
    • Andrey Ryabinin
      arm64: add KASAN support · 39d114dd
      Andrey Ryabinin authored and Catalin Marinas committed
      
      
      This patch adds arch specific code for kernel address sanitizer
      (see Documentation/kasan.txt).
      
      1/8 of the kernel address space is reserved for shadow memory. There
      was no hole big enough for this, so the virtual addresses for the
      shadow were stolen from the vmalloc area.
      
      At the early boot stage the whole shadow region is populated with
      just one physical page (kasan_zero_page). Later, this page is reused
      as a read-only zero shadow for memory that KASan does not currently
      track (vmalloc). After the physical memory has been mapped, pages for
      the shadow memory are allocated and mapped.
      
      Functions like memset/memmove/memcpy do a lot of memory accesses. If
      a bad pointer is passed to one of these functions, it is important to
      catch it. The compiler's instrumentation cannot do this, since these
      functions are written in assembly. KASan therefore replaces the
      memory functions with manually instrumented variants. The original
      functions are declared as weak symbols, so the strong definitions in
      mm/kasan/kasan.c can replace them. The original functions also have
      aliases with a '__' prefix in the name, so the non-instrumented
      variants can be called when needed. Some files are built without
      KASan instrumentation (e.g. mm/slub.c); for such files the original
      mem* functions are replaced (via #define) with the prefixed variants
      to disable memory access checks.
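
The 1/8 ratio comes from KASAN's shadow encoding: each shadow byte tracks 8 bytes of kernel memory, so a shadow address is the kernel address shifted right by 3 plus a fixed offset. A minimal illustration (the offset value below is made up for the example, not arm64's real one):

```python
# KASAN shadow address arithmetic: one shadow byte covers 8 bytes of
# kernel memory, hence the shadow region is 1/8 of the covered range.
KASAN_SHADOW_SCALE_SHIFT = 3

def shadow_addr(addr, shadow_offset):
    """Map a kernel address to the address of its shadow byte."""
    return (addr >> KASAN_SHADOW_SCALE_SHIFT) + shadow_offset

base = 0xffff000000000000
off = 0xdfff200000000000   # illustrative offset, not arm64's actual value

# 8 consecutive bytes share one shadow byte...
assert shadow_addr(base, off) == shadow_addr(base + 7, off)
# ...and the 9th byte moves to the next shadow byte.
assert shadow_addr(base + 8, off) == shadow_addr(base, off) + 1
```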
      
      Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Linus Walleij <linus.walleij@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  16. 09 Oct, 2015 1 commit
    • Yang Yingliang
      arm64: fix a migrating irq bug when hotplug cpu · 217d453d
      Yang Yingliang authored and Catalin Marinas committed
      
      
      When a cpu is disabled, all of its irqs are migrated to another cpu.
      In some cases the new affinity differs from the old one, and the old
      affinity needs to be updated; but if irq_set_affinity's return value
      is IRQ_SET_MASK_OK_DONE, the old affinity cannot be updated. Fix this
      by using irq_do_set_affinity.
      
      Migrating interrupts is also a core code matter, so use the generic
      function irq_migrate_all_off_this_cpu() from kernel/irq/migration.c
      to migrate the interrupts.
      
      Cc: Jiang Liu <jiang.liu@linux.intel.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
      Cc: Hanjun Guo <hanjun.guo@linaro.org>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  17. 07 Oct, 2015 1 commit
  18. 29 Sep, 2015 2 commits
  19. 17 Sep, 2015 1 commit
    • Will Deacon
      arm64: errata: add module build workaround for erratum #843419 · df057cc7
      Will Deacon authored
      
      
      Cortex-A53 processors <= r0p4 are affected by erratum #843419 which can
      lead to a memory access using an incorrect address in certain sequences
      headed by an ADRP instruction.
      
      There is a linker fix to generate veneers for ADRP instructions, but
      this doesn't work for kernel modules which are built as unlinked ELF
      objects.
      
      This patch adds a new config option for the erratum which, when enabled,
      builds kernel modules with the mcmodel=large flag. This uses absolute
      addressing for all kernel symbols, thereby removing the use of ADRP as
      a PC-relative form of addressing. The ADRP relocs are removed from the
      module loader so that we fail to load any potentially affected modules.
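
In build-system terms, the workaround described above amounts to a conditional addition to the module CFLAGS; a sketch (Makefile fragment, with the config symbol name inferred from the commit title):

```make
# arch/arm64/Makefile (sketch): build modules with absolute addressing
# when the erratum workaround is enabled, so module code contains no ADRP.
ifeq ($(CONFIG_ARM64_ERRATUM_843419),y)
KBUILD_CFLAGS_MODULE += -mcmodel=large
endif
```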
      
      Cc: <stable@vger.kernel.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  20. 15 Sep, 2015 1 commit
  21. 19 Aug, 2015 1 commit
  22. 03 Aug, 2015 1 commit
  23. 27 Jul, 2015 7 commits
    • Will Deacon
      arm64: kconfig: group the v8.1 features together · 0e4a0709
      Will Deacon authored
      
      
      ARMv8 CPUs do not support any of the v8.1 features, so group them
      together in Kconfig to make it clear that they're part of 8.1 and not
      relevant to older cores.
      
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • Will Deacon
      arm64: kconfig: select HAVE_CMPXCHG_LOCAL · 95eff6b2
      Will Deacon authored
      
      
      We implement an optimised cmpxchg_local macro, so let the kernel know.
      
      Reviewed-by: Steve Capper <steve.capper@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • Will Deacon
      arm64: introduce CONFIG_ARM64_LSE_ATOMICS as fallback to ll/sc atomics · c0385b24
      Will Deacon authored
      
      
      In order to patch in the new atomic instructions at runtime, we need to
      generate wrappers around the out-of-line exclusive load/store atomics.
      
      This patch adds a new Kconfig option, CONFIG_ARM64_LSE_ATOMICS, which
      causes our atomic functions to branch to the out-of-line ll/sc
      implementations. To avoid the register spill overhead of the PCS, the
      out-of-line functions are compiled with specific compiler flags to
      force out-of-line save/restore of any registers that are usually
      caller-saved.
      
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • Dave P Martin
      arm64/BUG: Use BRK instruction for generic BUG traps · 9fb7410f
      Dave P Martin authored
      
      
      Currently, the minimal default BUG() implementation from asm-
      generic is used for arm64.
      
      This patch uses the BRK software breakpoint instruction to generate
      a trap instead, similarly to most other arches, with the generic
      BUG code generating the dmesg boilerplate.
      
      This allows bug metadata to be moved to a separate table and
      reduces the amount of inline code at BUG and WARN sites.  This also
      avoids clobbering any registers before they can be dumped.
      
      To mitigate the size of the bug table further, this patch makes
      use of the existing infrastructure for encoding addresses within
      the bug table as 32-bit offsets instead of absolute pointers.
      (Note that this limits the kernel size to 2GB.)
      
      Traps are registered at arch_initcall time for aarch64, but BUG
      has minimal real dependencies and it is desirable to be able to
      generate bug splats as early as possible.  This patch redirects
      all debug exceptions caused by BRK directly to bug_handler() until
      the full debug exception support has been initialised.
      
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • James Morse
      arm64: kernel: Add support for Privileged Access Never · 338d4f49
      James Morse authored
      
      
      'Privileged Access Never' is a new ARMv8.1 feature which prevents
      privileged code from accessing any virtual address where read or write
      access is also permitted at EL0.
      
      This patch enables the PAN feature on all CPUs, and modifies {get,put}_user
      helpers temporarily to permit access.
      
      This will catch kernel bugs where user memory is accessed directly.
      'Unprivileged loads and stores' using ldtrb et al are unaffected by PAN.
      
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      [will: use ALTERNATIVE in asm and tidy up pan_enable check]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • Will Deacon
      arm64: force CONFIG_SMP=y and remove redundant #ifdefs · 4b3dc967
      Will Deacon authored
      
      
      Nobody seems to be producing !SMP systems anymore, so this is just
      becoming a source of kernel bugs, particularly if people want to use
      coherent DMA with non-shared pages.
      
      This patch forces CONFIG_SMP=y for arm64, removing a modest amount of
      code in the process.
      
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • Catalin Marinas
      arm64: Add support for hardware updates of the access and dirty pte bits · 2f4b829c
      Catalin Marinas authored
      
      
      The ARMv8.1 architecture extensions introduce support for hardware
      updates of the access and dirty information in page table entries. With
      TCR_EL1.HA enabled, when the CPU accesses an address with the PTE_AF bit
      cleared in the page table, instead of raising an access flag fault the
      CPU sets the actual page table entry bit. To ensure that kernel
      modifications to the page tables do not inadvertently revert a change
      introduced by hardware updates, the exclusive monitor (ldxr/stxr) is
      adopted in the pte accessors.
      
      When TCR_EL1.HD is enabled, a write access to a memory location with the
      DBM (Dirty Bit Management) bit set in the corresponding pte
      automatically clears the read-only bit (AP[2]). The DBM bit maps onto
      the Linux PTE_WRITE bit and to check whether a writable (DBM set) page
      is dirty, the kernel tests the PTE_RDONLY bit. In order to allow
      read-only and dirty pages, the kernel needs to preserve the software
      dirty bit. The hardware dirty status is transferred to the software
      dirty bit in ptep_set_wrprotect() (using load/store exclusive loop) and
      pte_modify().
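
The dirty test described above reduces to pure bit logic: a writable (DBM-set) page is hardware-dirty once the read-only bit has been cleared, and software dirtiness is tracked separately. A small model (bit positions here are illustrative, not arm64's real PTE layout):

```python
# Model of the dirty-bit semantics from the commit message.
PTE_WRITE  = 1 << 0   # maps onto the hardware DBM bit (illustrative position)
PTE_RDONLY = 1 << 1   # AP[2]: cleared by hardware on write when DBM is set
PTE_DIRTY  = 1 << 2   # software dirty bit, preserved by the kernel

def pte_hw_dirty(pte):
    """Hardware-dirty: writable (DBM set) and read-only bit cleared."""
    return bool(pte & PTE_WRITE) and not (pte & PTE_RDONLY)

def pte_dirty(pte):
    """Dirty if either the software bit or the hardware state says so."""
    return bool(pte & PTE_DIRTY) or pte_hw_dirty(pte)

assert pte_hw_dirty(PTE_WRITE)                   # hw cleared AP[2] -> dirty
assert not pte_hw_dirty(PTE_WRITE | PTE_RDONLY)  # clean writable page
assert pte_dirty(PTE_DIRTY | PTE_RDONLY)         # read-only but sw-dirty
```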
      
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  24. 20 Jul, 2015 1 commit
  25. 17 Jul, 2015 1 commit
  26. 07 Jul, 2015 1 commit
  27. 15 Jun, 2015 1 commit
  28. 05 Jun, 2015 1 commit
  29. 29 May, 2015 1 commit