Commit 0c04eb72 authored by David S. Miller

Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf



Alexei Starovoitov says:

====================
pull-request: bpf 2019-09-06

The following pull-request contains BPF updates for your *net* tree.

The main changes are:

1) verifier precision tracking fix, from Alexei.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 74346c43 2339cd6c
@@ -1772,16 +1772,21 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno,
 		bitmap_from_u64(mask, stack_mask);
 		for_each_set_bit(i, mask, 64) {
 			if (i >= func->allocated_stack / BPF_REG_SIZE) {
-				/* This can happen if backtracking
-				 * is propagating stack precision where
-				 * caller has larger stack frame
-				 * than callee, but backtrack_insn() should
-				 * have returned -ENOTSUPP.
+				/* the sequence of instructions:
+				 * 2: (bf) r3 = r10
+				 * 3: (7b) *(u64 *)(r3 -8) = r0
+				 * 4: (79) r4 = *(u64 *)(r10 -8)
+				 * doesn't contain jmps. It's backtracked
+				 * as a single block.
+				 * During backtracking insn 3 is not recognized as
+				 * stack access, so at the end of backtracking
+				 * stack slot fp-8 is still marked in stack_mask.
+				 * However the parent state may not have accessed
+				 * fp-8 and it's "unallocated" stack space.
+				 * In such case fallback to conservative.
 				 */
-				verbose(env, "BUG spi %d stack_size %d\n",
-					i, func->allocated_stack);
-				WARN_ONCE(1, "verifier backtracking bug");
-				return -EFAULT;
+				mark_all_scalars_precise(env, st);
+				return 0;
 			}
 			if (func->stack[i].slot_type[0] != STACK_SPILL) {
......