- 23 May, 2021 1 commit
-
-
Jisheng Zhang authored
lkp reported a randconfig failure: arch/riscv/kernel/probes/kprobes.c:90:22: error: use of undeclared identifier 'PAGE_KERNEL_READ_EXEC' We implemented alloc_insn_page() to allocate a PAGE_KERNEL_READ_EXEC page for the kprobes insn page in preparation for STRICT_MODULE_RWX. But if MMU=n, we should fall back to the generic weak alloc_insn_page() provided by the generic kprobes subsystem (a hedged sketch of the MMU-gated override follows this entry). Fixes: cdd1b2bd ("riscv: kprobes: Implement alloc_insn_page()") Signed-off-by:
Jisheng Zhang <jszhang@kernel.org> Reported-by:
kernel test robot <lkp@intel.com> Signed-off-by:
Palmer Dabbelt <palmerdabbelt@google.com>
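A minimal sketch of the shape of this fix, assuming the override lives in arch/riscv/kernel/probes/kprobes.c and is simply compiled out when CONFIG_MMU is not set, so the weak generic helper takes over; this is not the verbatim upstream diff:

    #include <linux/kprobes.h>
    #include <linux/vmalloc.h>

    #ifdef CONFIG_MMU
    void *alloc_insn_page(void)
    {
        /*
         * Read-only + executable slot so STRICT_MODULE_RWX holds.
         * PAGE_KERNEL_READ_EXEC only exists on MMU kernels, hence the
         * #ifdef guard around the whole override.
         */
        return __vmalloc_node_range(PAGE_SIZE, PAGE_SIZE, VMALLOC_START,
                                    VMALLOC_END, GFP_KERNEL,
                                    PAGE_KERNEL_READ_EXEC,
                                    VM_FLUSH_RESET_PERMS, NUMA_NO_NODE,
                                    __builtin_return_address(0));
    }
    #endif
    /* MMU=n: the __weak alloc_insn_page() in kernel/kprobes.c is used instead. */

The key point is only that the definition disappears on MMU=n builds; the allocation itself matches the earlier commit below.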
-
- 26 Apr, 2021 2 commits
-
-
Liao Chang authored
The execution of sys_read end up hitting a BUG_ON() in __find_get_block after installing kprobe at sys_read, the BUG message like the following: [ 65.708663] ------------[ cut here ]------------ [ 65.709987] kernel BUG at fs/buffer.c:1251! [ 65.711283] Kernel BUG [#1] [ 65.712032] Modules linked in: [ 65.712925] CPU: 0 PID: 51 Comm: sh Not tainted 5.12.0-rc4 #1 [ 65.714407] Hardware name: riscv-virtio,qemu (DT) [ 65.715696] epc : __find_get_block+0x218/0x2c8 [ 65.716835] ra : __getblk_gfp+0x1c/0x4a [ 65.717831] epc : ffffffe00019f11e ra : ffffffe00019f56a sp : ffffffe002437930 [ 65.719553] gp : ffffffe000f06030 tp : ffffffe0015abc00 t0 : ffffffe00191e038 [ 65.721290] t1 : ffffffe00191e038 t2 : 000000000000000a s0 : ffffffe002437960 [ 65.723051] s1 : ffffffe00160ad00 a0 : ffffffe00160ad00 a1 : 000000000000012a [ 65.724772] a2 : 0000000000000400 a3 : 0000000000000008 a4 : 0000000000000040 [ 65.726545] a5 : 0000000000000000 a6 : ffffffe00191e000 a7 : 0000000000000000 [ 65.728308] s2 : 000000000000012a s3 : 0000000000000400 s4 : 0000000000000008 [ 65.730049] s5 : 000000000000006c s6 : ffffffe00240f800 s7 : ffffffe000f080a8 [ 65.731802] s8 : 0000000000000001 s9 : 000000000000012a s10: 0000000000000008 [ 65.733516] s11: 0000000000000008 t3 : 00000000000003ff t4 : 000000000000000f [ 65.734434] t5 : 00000000000003ff t6 : 0000000000040000 [ 65.734613] status: 0000000000000100 badaddr: 0000000000000000 cause: 0000000000000003 [ 65.734901] Call Trace: [ 65.735076] [<ffffffe00019f11e>] __find_get_block+0x218/0x2c8 [ 65.735417] [<ffffffe00020017a>] __ext4_get_inode_loc+0xb2/0x2f6 [ 65.735618] [<ffffffe000201b6c>] ext4_get_inode_loc+0x3a/0x8a [ 65.735802] [<ffffffe000203380>] ext4_reserve_inode_write+0x2e/0x8c [ 65.735999] [<ffffffe00020357a>] __ext4_mark_inode_dirty+0x4c/0x18e [ 65.736208] [<ffffffe000206bb0>] ext4_dirty_inode+0x46/0x66 [ 65.736387] [<ffffffe000192914>] __mark_inode_dirty+0x12c/0x3da [ 65.736576] [<ffffffe000180dd2>] touch_atime+0x146/0x150 [ 65.736748] [<ffffffe00010d762>] filemap_read+0x234/0x246 [ 65.736920] [<ffffffe00010d834>] generic_file_read_iter+0xc0/0x114 [ 65.737114] [<ffffffe0001f5d7a>] ext4_file_read_iter+0x42/0xea [ 65.737310] [<ffffffe000163f2c>] new_sync_read+0xe2/0x15a [ 65.737483] [<ffffffe000165814>] vfs_read+0xca/0xf2 [ 65.737641] [<ffffffe000165bae>] ksys_read+0x5e/0xc8 [ 65.737816] [<ffffffe000165c26>] sys_read+0xe/0x16 [ 65.737973] [<ffffffe000003972>] ret_from_syscall+0x0/0x2 [ 65.738858] ---[ end trace fe93f985456c935d ]--- A simple reproducer looks like: echo 'p:myprobe sys_read fd=%a0 buf=%a1 count=%a2' > /sys/kernel/debug/tracing/kprobe_events echo 1 > /sys/kernel/debug/tracing/events/kprobes/myprobe/enable cat /sys/kernel/debug/tracing/trace Here's what happens to hit that BUG_ON(): 1) After installing kprobe at entry of sys_read, the first instruction is replaced by 'ebreak' instruction on riscv64 platform. 2) Once kernel reach the 'ebreak' instruction at the entry of sys_read, it trap into the riscv breakpoint handler, where it do something to setup for coming single-step of origin instruction, including backup the 'sstatus' in pt_regs, followed by disable interrupt during single stepping via clear 'SIE' bit of 'sstatus' in pt_regs. 3) Then kernel restore to the instruction slot contains two instructions, one is original instruction at entry of sys_read, the other is 'ebreak'. Here it trigger a 'Instruction page fault' exception (value at 'scause' is '0xc'), if PF is not filled into PageTabe for that slot yet. 
4) The kernel traps into the page fault exception handler again, where it chooses a policy according to the state of the running kprobe. Because after 2) the state is KPROBE_HIT_SS, the kernel resets the current kprobe and 'pc' points back to the probe address. 5) Because 'epc' points back to the 'ebreak' instruction at the sys_read probe, the kernel traps into the breakpoint handler again and repeats the operations from 2); however, the 'sstatus' value without 'SIE' kept at 4) causes the real 'sstatus' saved at 2) to be overwritten by the one without 'SIE'. 6) When the kernel steps past the probe, the 'sstatus' CSR is restored with the value lacking 'SIE', and execution reaches __find_get_block, which requires interrupts to be enabled. The fix is trivial: when the instruction being single stepped causes a page fault, restore 'sstatus' in pt_regs from the backup taken at 2) (a hedged sketch follows this entry). Fixes: c22b0bcb ("riscv: Add kprobes supported") Signed-off-by:
Liao Chang <liaochang1@huawei.com> Cc: stable@vger.kernel.org Signed-off-by:
Palmer Dabbelt <palmerdabbelt@google.com>
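A hedged sketch of where the restore lands, assuming the riscv kprobes code mirrors the arm64 structure (a kprobe_fault_handler(), a saved_status field in struct kprobe_ctlblk, and a kprobes_restore_local_irqflag() helper); not the verbatim upstream diff:

    static void kprobes_restore_local_irqflag(struct kprobe_ctlblk *kcb,
                                              struct pt_regs *regs)
    {
        /* Put back the sstatus value saved before single-stepping. */
        regs->status = kcb->saved_status;
    }

    int kprobe_fault_handler(struct pt_regs *regs, unsigned int trapnr)
    {
        struct kprobe *cur = kprobe_running();
        struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();

        if (kcb->kprobe_status == KPROBE_HIT_SS) {
            /*
             * The single-stepped instruction faulted (e.g. the slot page
             * was not mapped yet).  Point epc back at the probed address,
             * restore the backed-up sstatus so SIE is not lost when the
             * breakpoint is re-taken, and let the fault be handled as a
             * normal page fault.
             */
            regs->epc = (unsigned long)cur->addr;
            kprobes_restore_local_irqflag(kcb, regs);  /* the fix */
            reset_current_kprobe();
        }
        return 0;
    }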
-
Jisheng Zhang authored
Allocate a PAGE_KERNEL_READ_EXEC (read-only, executable) page for the kprobes insn page. This prepares for STRICT_MODULE_RWX; for contrast, a sketch of the generic allocator it overrides follows this entry. Signed-off-by:
Jisheng Zhang <jszhang@kernel.org> Signed-off-by:
Palmer Dabbelt <palmerdabbelt@google.com>
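For context, a sketch of the generic weak helper in kernel/kprobes.c that this arch override replaces on riscv; it hands back module memory, which is writable and executable, and is therefore incompatible with STRICT_MODULE_RWX:

    /* Generic fallback (sketch): module_alloc() memory is W+X by default,
     * which is exactly what the riscv override avoids. */
    void __weak *alloc_insn_page(void)
    {
        return module_alloc(PAGE_SIZE);
    }

The riscv override sketched under the 23 May 2021 entry above replaces this with a PAGE_KERNEL_READ_EXEC mapping.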
-
- 17 Mar, 2021 1 commit
-
-
kernel test robot authored
Use BUG_ON instead of an if condition followed by BUG (a before/after example follows this entry). Generated by: scripts/coccinelle/misc/bugon.cocci Fixes: c22b0bcb ("riscv: Add kprobes supported") CC: Guo Ren <guoren@linux.alibaba.com> Reported-by:
kernel test robot <lkp@intel.com> Signed-off-by:
kernel test robot <lkp@intel.com> Signed-off-by:
Julia Lawall <julia.lawall@inria.fr> Reviewed-by:
Pekka Enberg <penberg@kernel.org> Signed-off-by:
Palmer Dabbelt <palmerdabbelt@google.com>
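The transformation applied by bugon.cocci is mechanical; an illustrative before/after, using a hypothetical ret variable:

    /* Before: */
    if (ret)
        BUG();

    /* After: */
    BUG_ON(ret);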
-
- 19 Feb, 2021 1 commit
-
-
Palmer Dabbelt authored
Neither of these is actually correct: the instruction stream is defined (for versions of the ISA manual newer than 2.2) as a stream of 16-bit little-endian parcels, which is different from just being little-endian (a short illustration follows this entry). In theory we should represent this as a type, but we don't have any concrete plans for the big-endian stuff so it doesn't seem worth the time -- we've got variants of this all over the place. Instead I'm just dropping the unnecessary type conversion, which is a NOP on LE systems but causes a sparse error as the types are all mixed up. Reported-by:
kernel test robot <lkp@intel.com> Signed-off-by:
Palmer Dabbelt <palmerdabbelt@google.com> Acked-by:
Guo Ren <guoren@kernel.org> Signed-off-by:
Palmer Dabbelt <palmerdabbelt@google.com>
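A hedged illustration of what "a stream of 16-bit little-endian parcels" means when reassembling a 32-bit instruction from memory; the helper name is made up for the example:

    #include <linux/types.h>

    /* A 32-bit RISC-V instruction is two consecutive 16-bit little-endian
     * parcels, lowest-addressed parcel first; it is not defined as a single
     * 32-bit little-endian word. */
    static u32 riscv_insn_from_parcels(const __le16 *parcel)
    {
        return (u32)le16_to_cpu(parcel[0]) |
               ((u32)le16_to_cpu(parcel[1]) << 16);
    }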
-
- 14 Jan, 2021 1 commit
-
-
Guo Ren authored
This patch enables "kprobe & kretprobe" to work with the ftrace interface. It uses a software breakpoint as the single-step mechanism. Some instructions which can't be single-step executed must be simulated in the kernel execution slot, such as: branch, jal, auipc, la ... Some instructions should be rejected for probing, and we use a blacklist to filter them, such as: ecall, ebreak, ... We use ebreak & c.ebreak to replace the original instruction, and the kprobe handler prepares an executable memory slot for out-of-line execution with a copy of the original instruction being probed. In the execution slot we add an ebreak behind the original instruction to simulate a single-step mechanism (a sketch of the slot layout follows this entry). The patch is based on packi's work [1] and csky's work [2]. - The kprobes_trampoline.S is all from packi's patch - The single-step mechanism is newly designed for riscv, which has no hardware single-step trap - The simulation code comes from csky - Frankly, all of the code refers to other architectures' implementations [1] https://lore.kernel.org/linux-riscv/20181113195804.22825-1-me@packi.ch/ [2] https://lore.kernel.org/linux-csky/20200403044150.20562-9-guoren@kernel.org/ Signed-off-by:
Guo Ren <guoren@linux.alibaba.com> Co-developed-by:
Patrick Stählin <me@packi.ch> Signed-off-by:
Patrick Stählin <me@packi.ch> Acked-by:
Masami Hiramatsu <mhiramat@kernel.org> Tested-by:
Zong Li <zong.li@sifive.com> Reviewed-by:
Pekka Enberg <penberg@kernel.org> Cc: Patrick Stählin <me@packi.ch> Cc: Palmer Dabbelt <palmerdabbelt@google.com> Cc: Björn Töpel <bjorn.topel@gmail.com> Signed-off-by:
Palmer Dabbelt <palmerdabbelt@google.com>
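A hedged sketch of the out-of-line slot layout described above; the helper name and the way the slot is filled are illustrative rather than the exact source:

    #define RISCV_INSN_EBREAK 0x00100073U /* ebreak encoding */

    /* slot[0..len)     : copy of the probed (original) instruction
     * slot[len..len+4) : ebreak, so execution traps back into the kprobes
     *                    code right after the original instruction ran once. */
    static void prepare_ss_slot(struct kprobe *p, void *slot)
    {
        u32 ebreak = RISCV_INSN_EBREAK;
        unsigned long len = GET_INSN_LENGTH(p->opcode); /* 2 or 4 bytes */

        memcpy(slot, &p->opcode, len);               /* original instruction */
        memcpy(slot + len, &ebreak, sizeof(ebreak)); /* trailing ebreak      */
    }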
-
- 08 Sep, 2020 1 commit
-
-
Masami Hiramatsu authored
Use the generic kretprobe trampoline handler. Don't use framepointer verification. Signed-off-by:
Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Acked-by:
Guo Ren <guoren@kernel.org> Link: https://lore.kernel.org/r/159870605775.1229682.2627276871589951304.stgit@devnote2
-
- 03 Apr, 2020 1 commit
-
-
Guo Ren authored
This patch enables the kprobes, kretprobes and ftrace interfaces. It uses software breakpoint and single-step debug exceptions, plus instruction simulation, on csky. We use USR_BKPT to replace the original instruction, and the kprobe handler prepares an executable memory slot for out-of-line execution with a copy of the original instruction being probed. Most instructions can be executed by single-step, but some instructions need the original pc value to execute, and we have to simulate these instructions in software. Signed-off-by:
Guo Ren <guoren@linux.alibaba.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
- 28 Oct, 2019 1 commit
-
-
Since commit 73267498 ("arm64: unwind: reference pt_regs via embedded stack frame") arm64 has not used the __exception annotation to dump the pt_regs during stack tracing, and in_exception_text() has no callers. This annotation is now only used to blacklist kprobes; it means the same as __kprobes. Section annotations like this require the functions to be grouped together between the start/end markers and placed according to the linker script. For kprobes we also have NOKPROBE_SYMBOL(), which logs the symbol address in a section that kprobes parses and blacklists at boot. Using NOKPROBE_SYMBOL() instead lets kprobes publish the list of blacklisted symbols, and saves us from having an arm64-specific spelling of __kprobes (a short comparison of the two styles follows this entry). do_debug_exception() already has a NOKPROBE_SYMBOL() annotation. Signed-off-by:
James Morse <james.morse@arm.com> Acked-by:
Mark Rutland <mark.rutland@arm.com> Acked-by:
Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
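A hedged comparison of the two annotation styles; the function names are invented for the example:

    #include <linux/kprobes.h>

    /* Section-based style: __kprobes places the function in a dedicated
     * text section that the blacklist covers by address range. */
    static int __kprobes old_style_handler(struct pt_regs *regs)
    {
        return 0;
    }

    /* Table-based style: NOKPROBE_SYMBOL() records the symbol address in a
     * table that the kprobes core parses into its blacklist at boot, and
     * the blacklist becomes visible via debugfs. */
    static int new_style_handler(struct pt_regs *regs)
    {
        return 0;
    }
    NOKPROBE_SYMBOL(new_style_handler);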
-
- 02 Aug, 2019 1 commit
-
-
Masami Hiramatsu authored
kprobes manipulates the interrupted PSTATE for single step and doesn't restore it. Thus, if we put a kprobe where pstate.D (debug) is masked, the mask will be cleared after the kprobe hits. Moreover, in the most complicated case, this can lead to a kernel crash with the message below when a nested kprobe hits. [ 152.118921] Unexpected kernel single-step exception at EL1 When the 1st kprobe hits, do_debug_exception() will be called. At this point, the debug exception (= pstate.D) must be masked (=1). But if another kprobe hits before the single-step of the first kprobe (e.g. inside the user pre_handler), it unmasks the debug exception (pstate.D = 0) and returns. Then, when the 1st kprobe sets up the single-step, it saves the current DAIF, masks DAIF, enables single-step, and restores DAIF. However, since the "D" flag in DAIF was cleared by the 2nd kprobe, the single-step exception happens soon after restoring DAIF. This was introduced by commit 7419333f ("arm64: kprobe: Always clear pstate.D in breakpoint exception handler") To solve this issue, store all DAIF bits and restore them after single stepping (a hedged sketch follows this entry). Reported-by:
Naresh Kamboju <naresh.kamboju@linaro.org> Fixes: 7419333f ("arm64: kprobe: Always clear pstate.D in breakpoint exception handler") Reviewed-by:
James Morse <james.morse@arm.com> Tested-by:
James Morse <james.morse@arm.com> Signed-off-by:
Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by:
Will Deacon <will@kernel.org>
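A hedged sketch of the save/restore pair, assuming the arm64 kprobes helpers keep the saved flags in the per-CPU kprobe_ctlblk; the essential change is masking and restoring all of DAIF rather than only the interrupt bit:

    #include <asm/ptrace.h>  /* DAIF_MASK = PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT */

    static void kprobes_save_local_irqflag(struct kprobe_ctlblk *kcb,
                                           struct pt_regs *regs)
    {
        /* Save the whole D/A/I/F state, not just the IRQ mask bit. */
        kcb->saved_irqflag = regs->pstate & DAIF_MASK;
    }

    static void kprobes_restore_local_irqflag(struct kprobe_ctlblk *kcb,
                                              struct pt_regs *regs)
    {
        /* Restore exactly what was saved, including pstate.D. */
        regs->pstate &= ~DAIF_MASK;
        regs->pstate |= kcb->saved_irqflag;
    }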
-
- 24 Jun, 2019 1 commit
-
-
In order to avoid transient inconsistencies where freed code pages are remapped writable while stale TLB entries still exist on other cores, mark the kprobes text pages with the VM_FLUSH_RESET_PERMS attribute. This instructs the core vmalloc code not to defer the TLB flush when this region is unmapped and returned to the page allocator. Acked-by:
Will Deacon <will@kernel.org> Signed-off-by:
Ard Biesheuvel <ard.biesheuvel@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
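A hedged sketch of the idea (the exact upstream allocation may be structured differently, and vmalloc_exec() is the older API of that era); the key point is that the executable slot's vm_struct carries VM_FLUSH_RESET_PERMS, so freeing it flushes the TLB and resets direct-map permissions immediately rather than lazily:

    #include <linux/vmalloc.h>
    #include <linux/set_memory.h>

    void *alloc_insn_page(void)
    {
        void *page;

        page = vmalloc_exec(PAGE_SIZE);              /* executable mapping    */
        if (page) {
            set_vm_flush_reset_perms(page);          /* no deferred TLB flush */
            set_memory_ro((unsigned long)page, 1);   /* and keep the slot RO  */
        }
        return page;
    }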
-
- 30 May, 2019 1 commit
-
-
Thomas Gleixner authored
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 655 file(s). Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Allison Randal <allison@lohutok.net> Reviewed-by:
Kate Stewart <kstewart@linuxfoundation.org> Reviewed-by:
Richard Fontana <rfontana@redhat.com> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190527070034.575739538@linutronix.de Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 09 Apr, 2019 3 commits
-
-
Will Deacon authored
kprobes and uprobes reserve some BRK immediates for installing their probes. Define these along with the other reservations in brk-imm.h and rename the ESR definitions to be consistent with the others that we already have. Reviewed-by:
Mark Rutland <mark.rutland@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
Now that the debug hook dispatching code takes the triggering exception level into account, there's no need for the hooks themselves to poke around with user_mode(regs). Reviewed-by:
Mark Rutland <mark.rutland@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
Kprobes bypasses our debug hook registration code so that it doesn't get tangled up with recursive debug exceptions from things like lockdep: http://lists.infradead.org/pipermail/linux-arm-kernel/2015-February/324385.html However, since then, (a) the hook list has become RCU protected and (b) the kprobes hooks were found not to filter out exceptions from userspace correctly. On top of that, the step handler is invoked directly from single_step_handler(), which *does* use the debug hook list, so it's clearly not the end of the world. For now, have kprobes use the debug hook registration API like everybody else. We can revisit this in the future if this is found to limit coverage significantly. Reviewed-by:
Mark Rutland <mark.rutland@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
- 19 Mar, 2019 4 commits
-
-
Use arch_populate_kprobe_blacklist() instead of arch_within_kprobe_blacklist() so that the full list of blacklisted symbols is visible under debugfs (a hedged sketch follows this entry). Acked-by:
Will Deacon <will.deacon@arm.com> Signed-off-by:
Masami Hiramatsu <mhiramat@kernel.org> [catalin.marinas@arm.com: Add arch_populate_kprobe_blacklist() comment] Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
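A hedged sketch of what an arch_populate_kprobe_blacklist() implementation looks like, using the generic kprobe_add_area_blacklist() helper; the exact set of ranges blacklisted on arm64 is not reproduced here:

    #include <linux/kprobes.h>
    #include <asm/sections.h>

    int __init arch_populate_kprobe_blacklist(void)
    {
        int ret;

        /* Blacklist whole text ranges symbol-by-symbol so that the
         * resulting list shows up in debugfs. */
        ret = kprobe_add_area_blacklist((unsigned long)__entry_text_start,
                                        (unsigned long)__entry_text_end);
        if (ret)
            return ret;

        return kprobe_add_area_blacklist((unsigned long)__irqentry_text_start,
                                         (unsigned long)__irqentry_text_end);
    }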
-
Move the exception/irqentry text address checks into the blacklist, since those are symbol-based rejections. If we prohibit probing on the symbols in exception_text, those should be blacklisted. Acked-by:
Will Deacon <will.deacon@arm.com> Signed-off-by:
Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Remove the unneeded RODATA check from arch_prepare_kprobe(). Since check_kprobe_address_safe() already ensures that the probe address is in kernel text, we don't need to check whether the address is in RODATA or not; that must always be false. Acked-by:
Will Deacon <will.deacon@arm.com> Signed-off-by:
Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
Move extable address check into arch_prepare_kprobe() from arch_within_kprobe_blacklist(). The blacklist is exposed via debugfs as a list of symbols. The extable entries are smaller, so must be filtered out by arch_prepare_kprobe(). Acked-by:
Will Deacon <will.deacon@arm.com> Reviewed-by:
James Morse <james.morse@arm.com> Signed-off-by:
Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- 01 Mar, 2019 1 commit
-
-
Debug exception handlers may be called for exceptions generated both by user and kernel code. In many cases, this is checked explicitly, but in other cases things either happen to work by happy accident or they go slightly wrong. For example, executing 'brk #4' from userspace will enter the kprobes code and be ignored, but the instruction will be retried forever in userspace instead of delivering a SIGTRAP. Fix this issue in the most stable-friendly fashion by simply adding explicit checks of the triggering exception level to all of our debug exception handlers. Cc: <stable@vger.kernel.org> Reviewed-by:
Mark Rutland <mark.rutland@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
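A hedged sketch of the shape of those checks, using the kprobes BRK handler as an example; the handler name and surrounding registration code are simplified:

    static int kprobe_breakpoint_handler(struct pt_regs *regs, unsigned int esr)
    {
        /* This BRK came from EL0, so it is not ours: report it as
         * unhandled so the user task gets its SIGTRAP instead of the
         * instruction being retried forever. */
        if (user_mode(regs))
            return DBG_HOOK_ERROR;

        kprobe_handler(regs);
        return DBG_HOOK_HANDLED;
    }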
-
- 01 Feb, 2019 1 commit
-
-
James Morse authored
On systems with VHE the kernel and KVM's world-switch code run at the same exception level. Code that is only used on a VHE system does not need to be annotated as __hyp_text as it can reside anywhere in the kernel text. __hyp_text was also used to prevent kprobes from patching breakpoint instructions into this region, as this code runs at a different exception level. While this is no longer true with VHE, KVM still switches VBAR_EL1, meaning a kprobe's breakpoint executed in the world-switch code will cause a hyp-panic. Move the __hyp_text check in the kprobes blacklist so it applies on VHE systems too, to cover the common code and guest enter/exit assembly. Fixes: 888b3c87 ("arm64: Treat all entry code as non-kprobe-able") Reviewed-by:
Christoffer Dall <christoffer.dall@arm.com> Signed-off-by:
James Morse <james.morse@arm.com> Acked-by:
Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
- 02 Nov, 2018 1 commit
-
-
Commit 1404d6f1 ("arm64: dump: Add checking for writable and exectuable pages") has successfully identified code that leaves a page with W+X permissions. [ 3.245140] arm64/mm: Found insecure W+X mapping at address (____ptrval____)/0xffff000000d90000 [ 3.245771] WARNING: CPU: 0 PID: 1 at ../arch/arm64/mm/dump.c:232 note_page+0x410/0x420 [ 3.246141] Modules linked in: [ 3.246653] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.19.0-rc5-next-20180928-00001-ge70ae259b853-dirty #62 [ 3.247008] Hardware name: linux,dummy-virt (DT) [ 3.247347] pstate: 80000005 (Nzcv daif -PAN -UAO) [ 3.247623] pc : note_page+0x410/0x420 [ 3.247898] lr : note_page+0x410/0x420 [ 3.248071] sp : ffff00000804bcd0 [ 3.248254] x29: ffff00000804bcd0 x28: ffff000009274000 [ 3.248578] x27: ffff00000921a000 x26: ffff80007dfff000 [ 3.248845] x25: ffff0000093f5000 x24: ffff000009526f6a [ 3.249109] x23: 0000000000000004 x22: ffff000000d91000 [ 3.249396] x21: ffff000000d90000 x20: 0000000000000000 [ 3.249661] x19: ffff00000804bde8 x18: 0000000000000400 [ 3.249924] x17: 0000000000000000 x16: 0000000000000000 [ 3.250271] x15: ffffffffffffffff x14: 295f5f5f5f6c6176 [ 3.250594] x13: 7274705f5f5f5f28 x12: 2073736572646461 [ 3.250941] x11: 20746120676e6970 x10: 70616d20582b5720 [ 3.251252] x9 : 6572756365736e69 x8 : 3039643030303030 [ 3.251519] x7 : 306666666678302f x6 : ffff0000095467b2 [ 3.251802] x5 : 0000000000000000 x4 : 0000000000000000 [ 3.252060] x3 : 0000000000000000 x2 : ffffffffffffffff [ 3.252323] x1 : 4d151327adc50b00 x0 : 0000000000000000 [ 3.252664] Call trace: [ 3.252953] note_page+0x410/0x420 [ 3.253186] walk_pgd+0x12c/0x238 [ 3.253417] ptdump_check_wx+0x68/0xf8 [ 3.253637] mark_rodata_ro+0x68/0x98 [ 3.253847] kernel_init+0x38/0x160 [ 3.254103] ret_from_fork+0x10/0x18 kprobes allocates a writable executable page with module_alloc() in order to store executable code. Reworked to that when allocate a page it sets mode RO. Inspired by commit 63fef14f ("kprobes/x86: Make insn buffer always ROX and use text_poke()"). Suggested-by:
Arnd Bergmann <arnd@arndb.de> Suggested-by:
Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by:
Will Deacon <will.deacon@arm.com> Acked-by:
Masami Hiramatsu <mhiramat@kernel.org> Reviewed-by:
Laura Abbott <labbott@redhat.com> Signed-off-by:
Anders Roxell <anders.roxell@linaro.org> [catalin.marinas@arm.com: removed unnecessary casts] Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
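A hedged sketch of the approach described above: the instruction slot is allocated executable but immediately made read-only, and the probed instruction is written through the text-patching helper rather than a plain store; the local helper names are illustrative:

    #include <linux/vmalloc.h>
    #include <linux/set_memory.h>
    #include <asm/insn.h>

    void *alloc_insn_page(void)
    {
        void *page;

        page = vmalloc_exec(PAGE_SIZE);                /* executable mapping */
        if (page)
            set_memory_ro((unsigned long)page, 1);     /* never leave it W+X */

        return page;
    }

    /* Writing into the now read-only slot goes through the patching API. */
    static void write_slot_insn(u32 *slot, u32 insn)
    {
        void *addr = slot;

        aarch64_insn_patch_text(&addr, &insn, 1);
    }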
-
- 01 Oct, 2018 1 commit
-
-
There is an extra semicolon in arch_prepare_kprobe, remove it. Signed-off-by:
zhong jiang <zhongjiang@huawei.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- 21 Jun, 2018 4 commits
-
-
Masami Hiramatsu authored
Fix %p uses in error messages by removing it because those are redundant or meaningless. Signed-off-by:
Masami Hiramatsu <mhiramat@kernel.org> Acked-by:
Will Deacon <will.deacon@arm.com> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: David Howells <dhowells@redhat.com> Cc: David S . Miller <davem@davemloft.net> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Jon Medhurst <tixy@linaro.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Thomas Richter <tmricht@linux.ibm.com> Cc: Tobin C . Harding <me@tobin.cc> Cc: acme@kernel.org Cc: akpm@linux-foundation.org Cc: brueckner@linux.vnet.ibm.com Cc: linux-arch@vger.kernel.org Cc: rostedt@goodmis.org Cc: schwidefsky@de.ibm.com Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/lkml/152491908405.9916.12425053035317241111.stgit@devbox Signed-off-by:
Ingo Molnar <mingo@kernel.org>
-
Masami Hiramatsu authored
Clear current_kprobe and enable preemption in kprobes even if pre_handler returns !0. This simplifies function override using kprobes. Jprobes used to require preemption to stay disabled and current_kprobe to be kept until it returned to the original function entry. For this reason kprobe_int3_handler() and similar arch-dependent kprobe handlers check the pre_handler result and exit without enabling preemption if the result is !0. After removing jprobes, kprobes no longer needs to keep preemption disabled even if the user handler returns !0. But since the function override handlers in error-inject and bpf also return !0 when they override a function, they enable preemption and reset the current kprobe themselves to balance the preempt count. That is a bad, bug-prone design. This fixes such unbalanced preempt-count and current_kprobe handling in kprobes, bpf and error-inject. Note: for powerpc and x86, this removes all preempt_disable from kprobe_ftrace_handler because ftrace callbacks are called with preemption disabled. Signed-off-by:
Masami Hiramatsu <mhiramat@kernel.org> Acked-by:
Thomas Gleixner <tglx@linutronix.de> Acked-by:
Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: David S. Miller <davem@davemloft.net> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: James Hogan <jhogan@kernel.org> Cc: Josef Bacik <jbacik@fb.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: linux-arch@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Cc: linux-ia64@vger.kernel.org Cc: linux-mips@linux-mips.org Cc: linux-s390@vger.kernel.org Cc: linux-sh@vger.kernel.org Cc: linux-snps-arc@lists.infradead.org Cc: linuxppc-dev@lists.ozlabs.org Cc: sparclinux@vger.kernel.org Link: https://lore.kernel.org/lkml/152942494574.15209.12323837825873032258.stgit@devbox Signed-off-by:
Ingo Molnar <mingo@kernel.org>
-
Masami Hiramatsu authored
Don't call the ->break_handler() from the arm64 kprobes code, because it was only used by jprobes which got removed. Signed-off-by:
Masami Hiramatsu <mhiramat@kernel.org> Acked-by:
Thomas Gleixner <tglx@linutronix.de> Acked-by:
Will Deacon <will.deacon@arm.com> Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: linux-arch@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/lkml/152942474231.15209.17684808374429473004.stgit@devbox Signed-off-by:
Ingo Molnar <mingo@kernel.org>
-
Masami Hiramatsu authored
Remove arch dependent setjump/longjump functions and unused fields in kprobe_ctlblk for jprobes from arch/arm64. Signed-off-by:
Masami Hiramatsu <mhiramat@kernel.org> Acked-by:
Thomas Gleixner <tglx@linutronix.de> Acked-by:
Will Deacon <will.deacon@arm.com> Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: linux-arch@vger.kernel.org Cc: linux-arm-kernel@lists.infradead.org Link: https://lore.kernel.org/lkml/152942442318.15209.17767976282305601884.stgit@devbox Signed-off-by:
Ingo Molnar <mingo@kernel.org>
-
- 30 May, 2017 1 commit
-
-
Kefeng Wang authored
Generic code expects show_regs() to also dump the stack, but arm64's show_regs() does not do this. Some arm64 callers of show_regs() *only* want the registers dumped, without the stack. To enable generic code to work as expected, we need to make show_regs() dump the stack. Where we only want the registers dumped, we must use __show_regs(). This patch updates code to use __show_regs() where only registers are desired. A subsequent patch will modify show_regs(). Acked-by:
Mark Rutland <mark.rutland@arm.com> Signed-off-by:
Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
- 10 Mar, 2017 1 commit
-
-
Naveen N. Rao authored
Commit fc62d020 ("kprobes: Introduce weak variant of kprobe_exceptions_notify()") introduces a generic empty version of the function for architectures that don't need special handling, like arm64. As such, remove the arch/arm64/ specific handler. Signed-off-by:
Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
- 02 Mar, 2017 1 commit
-
-
Ingo Molnar authored
We are going to split <linux/sched/debug.h> out of <linux/sched.h>, which will have to be picked up from other headers and a couple of .c files. Create a trivial placeholder <linux/sched/debug.h> file that just maps to <linux/sched.h> to make this patch obviously correct and bisectable. Include the new header in the files that are going to need it. Acked-by:
Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by:
Ingo Molnar <mingo@kernel.org>
-
- 24 Dec, 2016 1 commit
-
-
Linus Torvalds authored
This was entirely automated, using the script by Al: PATT='^[[:blank:]]*#[[:blank:]]*include[[:blank:]]*<asm/uaccess.h>' sed -i -e "s!$PATT!#include <linux/uaccess.h>!" \ $(git grep -l "$PATT"|grep -v ^include/linux/uaccess.h) to do the replacement at the end of the merge window. Requested-by:
Al Viro <viro@zeniv.linux.org.uk> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- 07 Nov, 2016 1 commit
-
-
The decode-insn code has to be reused by the arm64 uprobe implementation as well. Therefore, this patch protects some portions of the kprobe code and renames a few others, so that the decode-insn functionality can be reused by uprobes even when CONFIG_KPROBES is not defined. kprobe_opcode_t and struct arch_specific_insn are also defined by linux/kprobes.h when CONFIG_KPROBES is not defined, so protect these definitions in asm/probes.h. linux/kprobes.h already includes asm/kprobes.h; therefore, remove the inclusion of asm/kprobes.h from decode-insn.c. Some definitions like kprobe_insn and kprobes_handler_t etc. can be re-used by uprobes, so it is better to remove the 'k' from their names. struct arch_specific_insn is specific to kprobes; therefore, introduce a new struct arch_probe_insn which will be common to both kprobes and uprobes, so that the decode-insn code can be shared. Modify the kprobe code accordingly. The function arm_probe_decode_insn() will be needed by uprobes as well, so make it global. Signed-off-by:
Pratyush Anand <panand@redhat.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- 20 Sep, 2016 1 commit
-
-
Paul Gortmaker authored
These files were only including module.h for exception table related functions. We've now separated that content out into its own file "extable.h" so now move over to that and avoid all the extra header content in module.h that we don't really need to compile these files. Cc: Catalin Marinas <catalin.marinas@arm.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org Signed-off-by:
Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
- 25 Aug, 2016 2 commits
-
-
James Morse authored
Each time new section markers are added, kernel/vmlinux.ld.S is updated, and new extern char __start_foo[] definitions are scattered through the tree. Create asm/include/sections.h to collect these definitions (and include the existing asm-generic version). Signed-off-by:
James Morse <james.morse@arm.com> Reviewed-by:
Mark Rutland <mark.rutland@arm.com> Tested-by:
Mark Rutland <mark.rutland@arm.com> Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
-
Pratyush Anand authored
Whenever we hit a kprobe from a non-kprobe debug exception handler, we hit infinite occurrences of "Unexpected kernel single-step exception at EL1". PSTATE.D is the debug exception mask bit. It is set whenever we enter an exception mode. When it is set, Watchpoint, Breakpoint, and Software Step exceptions are masked. However, software Breakpoint Instruction exceptions can never be masked. Therefore, if we ever execute a BRK instruction, irrespective of the D-bit setting, we will receive a corresponding breakpoint exception. For example: - We are executing a kprobe pre/post handler, and a kprobe has been inserted in one of the instructions of a function called by the handler. So it executes a BRK instruction and we land in the KPROBE_REENTER case. (This case is already handled by the current code.) - We are executing a uprobe handler or any other BRK handler such as in WARN_ON (BRK BUG_BRK_IMM), and we trace that path using a kprobe. So we enter the kprobe breakpoint handler from another BRK handler. (This case is not handled currently.) In all such cases the kprobe breakpoint exception is raised while we were already in debug exception mode. SPSR's D bit (bit 9) shows the value of PSTATE.D immediately before the exception was taken; so in the example cases above we would find it set in the kprobe breakpoint handler. A single-step exception will always follow a kprobe breakpoint exception; however, it is only raised gracefully if we clear the D bit while returning from the breakpoint exception. If the D bit is set, it results in an undefined exception, and when its handler enables debug, the single-step exception is generated; however, it will never be handled (because the address does not match, and it is therefore treated as unexpected). This patch clears the D-flag unconditionally in setup_singlestep, so that we always get the single-step exception correctly after returning from the breakpoint exception (a hedged sketch follows this entry). Additionally, it removes the D-flag set statement on the KPROBE_REENTER return path, because the debug exception for KPROBE_REENTER always takes place in a debug exception state, so the D-flag will already be set in that case. Acked-by:
Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com> Acked-by:
Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by:
Pratyush Anand <panand@redhat.com> Signed-off-by:
Will Deacon <will.deacon@arm.com>
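A hedged sketch (not the verbatim diff) of the end of setup_singlestep(), showing where the unconditional clear of PSTATE.D lands; helper and field names follow the arm64 kprobes code of that era but are simplified:

    static void setup_singlestep(struct kprobe *p, struct pt_regs *regs,
                                 struct kprobe_ctlblk *kcb, int reenter)
    {
        unsigned long slot = (unsigned long)p->ainsn.insn;

        kprobes_save_local_irqflag(kcb, regs);  /* keep IRQs off while stepping */
        kernel_enable_single_step(regs);
        instruction_pointer_set(regs, slot);    /* resume in the insn slot */

        /*
         * Clear PSTATE.D unconditionally: if the BRK was taken with debug
         * exceptions masked (e.g. from another BRK handler), leaving D set
         * would suppress the step exception and produce the "Unexpected
         * kernel single-step exception at EL1" splat described above.
         */
        regs->pstate &= ~PSR_D_BIT;
    }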
-
- 11 Aug, 2016 1 commit
-
-
Because the arm64 calling standard allows stacked function arguments to be anywhere in the stack frame, do not attempt to duplicate the stack frame for jprobes handler functions. Documentation changes to describe this issue have been broken out into a separate patch in order to simultaneously address them in other architecture(s). Signed-off-by:
David A. Long <dave.long@linaro.org> Acked-by:
Masami Hiramatsu <mhiramat@kernel.org> Acked-by:
Marc Zyngier <marc.zyngier@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- 21 Jul, 2016 2 commits
-
-
Catalin Marinas authored
This patch disables KASAN around the memcpy from/to the kernel or IRQ stacks to avoid warnings like below: BUG: KASAN: stack-out-of-bounds in setjmp_pre_handler+0xe4/0x170 at addr ffff800935cbbbc0 Read of size 128 by task swapper/0/1 page:ffff7e0024d72ec0 count:0 mapcount:0 mapping: (null) index:0x0 flags: 0x1000000000000000() page dumped because: kasan: bad access detected CPU: 4 PID: 1 Comm: swapper/0 Not tainted 4.7.0-rc4+ #1 Hardware name: ARM Juno development board (r0) (DT) Call trace: [<ffff20000808ad88>] dump_backtrace+0x0/0x280 [<ffff20000808b01c>] show_stack+0x14/0x20 [<ffff200008563a64>] dump_stack+0xa4/0xc8 [<ffff20000824a1fc>] kasan_report_error+0x4fc/0x528 [<ffff20000824a5e8>] kasan_report+0x40/0x48 [<ffff20000824948c>] check_memory_region+0x144/0x1a0 [<ffff200008249814>] memcpy+0x34/0x68 [<ffff200008c3ee2c>] setjmp_pre_handler+0xe4/0x170 [<ffff200008c3ec5c>] kprobe_breakpoint_handler+0xec/0x1d8 [<ffff2000080853a4>] brk_handler+0x5c/0xa0 [<ffff2000080813f0>] do_debug_exception+0xa0/0x138 Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
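A hedged sketch of the shape of the fix: the raw stack copy intentionally reads outside the current frame, so KASAN reporting is disabled just around the memcpy. The surrounding jprobes code is simplified; MIN_STACK_SIZE() and the jprobes_stack field belong to the jprobes code discussed in the entries below:

    #include <linux/kasan.h>
    #include <linux/string.h>

    static void jprobes_save_stack(struct kprobe_ctlblk *kcb, unsigned long sp)
    {
        /*
         * Reading the live kernel/IRQ stack beyond the current frame is
         * deliberate here, but KASAN flags it as stack-out-of-bounds, so
         * silence the checker only for the duration of the copy.
         */
        kasan_disable_current();
        memcpy(kcb->jprobes_stack, (void *)sp, MIN_STACK_SIZE(sp));
        kasan_enable_current();
    }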
-
jprobe_return seems to have aged badly: comments referring to non-existent behaviours, and a dangerous habit of messing with registers without telling the compiler. This patch applies the following remedies: - Fix the comments to describe the actual behaviour - Tidy up the asm sequence to directly assign the stack pointer without clobbering extra registers - Mark the rest of the function as unreachable() so that the compiler knows there is no need for an epilogue - Stop making jprobe_return_break a global function (you really don't want to call that guy, and it isn't even a function). Tested with tcp_probe. Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-
- 20 Jul, 2016 1 commit
-
-
The MIN_STACK_SIZE macro tries to evaluate how much stack space needs to be saved in the jprobes_stack array, which is sized at 128 bytes. When using the IRQ stack, said macro can happily return up to IRQ_STACK_SIZE, which is 16kB. Mayhem follows. This patch fixes things by getting rid of the crazy macro and limiting the copy to at most the size of the jprobes_stack array, no matter which stack we're on (a hedged sketch follows this entry). Signed-off-by:
Marc Zyngier <marc.zyngier@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
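A hedged sketch of the clamping idea only, not the upstream macro or its replacement; the point is simply that the copy length can never exceed the backing buffer:

    #include <linux/kernel.h>

    /* Cap the saved region at the size of kcb->jprobes_stack (128 bytes),
     * whether we are on the task stack or the 16kB IRQ stack. */
    static size_t jprobes_copy_size(struct kprobe_ctlblk *kcb,
                                    unsigned long sp, unsigned long stack_top)
    {
        return min_t(size_t, stack_top - sp, sizeof(kcb->jprobes_stack));
    }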
-
- 19 Jul, 2016 1 commit
-
-
Stepping with PSTATE.D=1 is bad news. The step won't generate a debug exception and we'll likely walk off into random data structures. This should never happen, but when it does, it's a PITA to debug. Add a WARN_ON to shout if we realise this is about to take place. Signed-off-by:
Will Deacon <will.deacon@arm.com> Acked-by:
Mark Rutland <mark.rutland@arm.com> Signed-off-by:
Catalin Marinas <catalin.marinas@arm.com>
-