1. 21 Jan, 2023 6 commits
    • bpf: Combine dynptr_get_spi and is_spi_bounds_valid · f5b625e5
      Kumar Kartikeya Dwivedi authored
      Currently, one check on spi lives in dynptr_get_spi, while the check
      that it falls within the allocated stack slots happens in
      is_spi_bounds_valid. Barring a couple of cases (where being beyond the
      allocated stack slots is not an error, as the slots still need to be
      populated), the two are almost always used together. Hence, subsume the
      is_spi_bounds_valid check into dynptr_get_spi, and return -ERANGE to
      distinguish the case where spi is valid but not within the allocated
      slots of the stack state.
      
      The is_spi_bounds_valid function is kept around, as it is a generic
      helper that will be useful for other dynptr-like objects on the stack
      in the future.
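
      As a rough illustration (a standalone sketch with simplified types and
      arithmetic, not the verifier's actual code), the combined helper folds
      the bounds check in and reserves -ERANGE for the "valid spi, but slots
      not yet allocated" case:

      #include <errno.h>
      #include <stdbool.h>

      #define BPF_REG_SIZE        8
      #define BPF_DYNPTR_NR_SLOTS 2

      struct func_state_model {
      	int allocated_stack;	/* bytes of tracked stack in this frame */
      };

      /* spi must leave room for all nr_slots and stay inside tracked stack */
      static bool is_spi_bounds_valid(const struct func_state_model *state,
      				int spi, int nr_slots)
      {
      	int allocated_slots = state->allocated_stack / BPF_REG_SIZE;

      	return spi - nr_slots + 1 >= 0 && spi < allocated_slots;
      }

      /* Combined helper: compute spi from a constant stack offset, then fold
       * in the bounds check. -EINVAL means the offset itself is unusable,
       * -ERANGE means spi is fine but the slots are not yet allocated.
       */
      static int dynptr_get_spi(const struct func_state_model *state, int off)
      {
      	int spi;

      	if (off >= 0 || off % BPF_REG_SIZE)
      		return -EINVAL;

      	spi = -off / BPF_REG_SIZE - 1;
      	if (spi < BPF_DYNPTR_NR_SLOTS - 1)
      		return -EINVAL;

      	if (!is_spi_bounds_valid(state, spi, BPF_DYNPTR_NR_SLOTS))
      		return -ERANGE;

      	return spi;
      }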
      Suggested-by: Joanne Koong <joannelkoong@gmail.com>
      Acked-by: Joanne Koong <joannelkoong@gmail.com>
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Link: https://lore.kernel.org/r/20230121002241.2113993-7-memxor@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Allow reinitializing unreferenced dynptr stack slots · 379d4ba8
      Kumar Kartikeya Dwivedi authored
      Consider a program like below:
      
      void prog(void)
      {
      	{
      		struct bpf_dynptr ptr;
      		bpf_dynptr_from_mem(...);
      	}
      	...
      	{
      		struct bpf_dynptr ptr;
      		bpf_dynptr_from_mem(...);
      	}
      }
      
      Here, based on the lifetime rules in the C standard, the compiler would
      be well within its rights to share stack storage for 'ptr', as the
      lifetimes of the two objects do not overlap in the two distinct scopes.
      Currently, such an example would be rejected by the verifier, but this
      is too strict. Instead, we should allow reinitializing over dynptr
      stack slots and forget the information about the old dynptr object.
      
      The destroy_if_dynptr_stack_slot function already makes the necessary
      checks to avoid overwriting referenced dynptr slots. This is done to
      present a better error message: if we instead forgot the dynptr
      information on the stack while preserving the reference state, the user
      would eventually hit an undecipherable error about an unreleased
      reference, which could only be made sense of by associating it back to
      its allocating call instruction.
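
      In outline (a simplified standalone model, not the verifier's types or
      exact logic), the overwrite path behaves like this: unreferenced
      dynptrs are simply forgotten, while referenced ones are rejected with a
      clear error at the overwrite site:

      #include <errno.h>
      #include <stdbool.h>

      enum slot_model { SLOT_INVALID, SLOT_MISC, SLOT_DYNPTR };

      struct dynptr_slot_model {
      	enum slot_model type;
      	bool first_slot;	/* true for the slot holding the dynptr start */
      	int ref_obj_id;		/* non-zero if the dynptr owns a reference */
      };

      /* Called before a write lands on a dynptr's stack slots. Unreferenced
       * dynptrs are scrubbed so they can be reinitialized; referenced ones
       * are rejected here, so the error points at the overwrite instead of
       * at a dangling "unreleased reference" at the end of verification.
       */
      static int destroy_if_dynptr_slot(struct dynptr_slot_model *first,
      				  struct dynptr_slot_model *second)
      {
      	if (first->type != SLOT_DYNPTR)
      		return 0;
      	if (first->ref_obj_id)
      		return -EINVAL;

      	first->type = SLOT_INVALID;
      	first->first_slot = false;
      	second->type = SLOT_INVALID;
      	second->first_slot = false;
      	return 0;
      }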
      Acked-by: Joanne Koong <joannelkoong@gmail.com>
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Link: https://lore.kernel.org/r/20230121002241.2113993-6-memxor@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Invalidate slices on destruction of dynptrs on stack · f8064ab9
      Kumar Kartikeya Dwivedi authored
      The previous commit implemented destroy_if_dynptr_stack_slot. It
      destroys the dynptr that the given spi belongs to, but still doesn't
      invalidate the slices obtained from such a dynptr. While for referenced
      dynptrs we don't allow the overwrite and return an error early, for
      unreferenced dynptrs we allow it and destroy the dynptr.
      
      To enable precise and scoped invalidation of dynptr slices in this
      case, we must be able to associate each slice obtained using
      bpf_dynptr_data with its source dynptr. When doing destruction, only
      slices belonging to the dynptr being destroyed should be invalidated,
      and nothing else. Currently, dynptr slices belonging to different
      dynptrs are indistinguishable.
      
      Hence, assign a unique id to each dynptr (CONST_PTR_TO_DYNPTR and those
      on the stack). This will be stored as part of reg->id. Whenever
      bpf_dynptr_data is used, transfer this unique dynptr id to the returned
      PTR_TO_MEM_OR_NULL slice pointer, and store it in a new per-PTR_TO_MEM
      dynptr_id register state member.
      
      Finally, after establishing such a relationship between dynptrs and
      their slices, implement precise invalidation logic in
      destroy_if_dynptr_stack_slot that only invalidates slices belonging to
      the destroyed dynptr.
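
      Schematically (a standalone sketch with made-up fields standing in for
      the verifier's register state), the association and the scoped
      invalidation look like this:

      #include <stdbool.h>

      struct slice_reg_model {
      	bool is_dynptr_slice;	/* PTR_TO_MEM returned by bpf_dynptr_data */
      	int dynptr_id;		/* id of the dynptr this slice came from */
      	bool valid;
      };

      /* Every dynptr gets a unique id when it is initialized ... */
      static int next_dynptr_id;

      static int alloc_dynptr_id(void)
      {
      	return ++next_dynptr_id;
      }

      /* ... and destroying a dynptr invalidates only the slices carrying
       * that id, leaving slices of other dynptrs untouched.
       */
      static void invalidate_slices(struct slice_reg_model *regs, int nregs,
      			      int destroyed_id)
      {
      	for (int i = 0; i < nregs; i++)
      		if (regs[i].is_dynptr_slice &&
      		    regs[i].dynptr_id == destroyed_id)
      			regs[i].valid = false;
      }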
      Acked-by: Joanne Koong <joannelkoong@gmail.com>
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Link: https://lore.kernel.org/r/20230121002241.2113993-5-memxor@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Fix partial dynptr stack slot reads/writes · ef8fc7a0
      Kumar Kartikeya Dwivedi authored
      Currently, while reads are disallowed for dynptr stack slots, writes
      are not. Reads are rejected for both direct access and helpers, while
      writes work in both cases but have the effect of overwriting the
      slot_type.

      While this is fine, handling for a few edge cases is missing. Firstly,
      a user can partially overwrite the stack slots of a dynptr.
      
      Consider the following layout:
      spi: [d][d][?]
            2  1  0
      
      First slot is at spi 2, second at spi 1.
      Now, do a write of 1 to 8 bytes for spi 1.
      
      This will essentially write either STACK_MISC for all slot_types, or
      STACK_MISC and STACK_ZERO (in case of a size < BPF_REG_SIZE partial
      write of zeroes). The end result is that the slot is scrubbed.
      
      Now, the layout is:
      spi: [d][m][?]
            2  1  0
      
      Suppose the user now initializes spi = 1 as a dynptr.
      We get:
      spi: [d][d][d]
            2  1  0
      
      But this time, both spi 2 and spi 1 have first_slot = true.
      
      Now, when passing spi 2 to a dynptr helper, it will be considered
      initialized, as the helper does not check whether the second slot has
      first_slot == false. And spi 1 already works as normal.

      This effectively replaces the size + offset portion of the first
      dynptr, hence allowing invalid OOB reads and writes.
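
      For reference, a dynptr's stack footprint is two 8-byte slots; roughly
      (field layout simplified from the kernel's struct bpf_dynptr_kern), the
      first slot holds the data pointer and the second holds size + offset,
      which is exactly what the overlapping reinitialization above replaces:

      #include <stdint.h>

      /* Simplified model of the 16-byte dynptr object on the stack: the
       * first 8-byte slot (higher spi) holds the data pointer, the second
       * slot holds size and offset. Re-initializing a dynptr one slot lower
       * therefore lands a new data pointer on top of the old dynptr's
       * size + offset.
       */
      struct dynptr_model {
      	void *data;		/* first stack slot  (e.g. spi 2) */
      	uint32_t size;		/* second stack slot (e.g. spi 1) */
      	uint32_t offset;
      };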
      
      Make a few changes to protect against this:

      First, when writing to PTR_TO_STACK using BPF insns, if we touch an spi
      of a STACK_DYNPTR type, mark both the first and the second slot
      (regardless of which slot we touch) as STACK_INVALID. Reads are already
      prevented.

      Second, prevent writing to stack memory from helpers if the range may
      contain any STACK_DYNPTR slots. Reads are already prevented.
      
      For helpers, we cannot allow writes to destroy dynptrs because,
      depending on its arguments, a helper may take uninit_mem and a dynptr
      at the same time. This would mean that the helper may write to
      uninit_mem before it reads the dynptr, which would be bad.

      PTR_TO_MEM: [?????dd]

      Depending on the code inside the helper, it may end up overwriting the
      dynptr contents first and then reading them as the dynptr argument.

      The verifier would only simulate the destruction when it does byte by
      byte access simulation in check_helper_call for meta.access_size, and
      would fail to catch this case, as that happens after the argument
      checks.
      
      The same would need to be done for any other non-trivial objects created
      on the stack in the future, such as bpf_list_head on stack, or
      bpf_rb_root on stack.
      
      A common misunderstanding in the current code is that MEM_UNINIT means
      writes, but note that writes may also be performed even without
      MEM_UNINIT in the case of helpers; there, the code after the meta &&
      meta->raw_mode handling will complain when it sees STACK_DYNPTR, so the
      invalid-read case also covers writes to potential STACK_DYNPTR slots.
      The only loophole was the meta->raw_mode case, which simulated writes
      through instructions that could overwrite them.
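
      As a rough sketch of the first protection above (a standalone model
      with simplified types, not the verifier's code), a direct write
      touching either slot of a dynptr invalidates the whole pair:

      #include <stdbool.h>

      enum slot_model { SLOT_INVALID, SLOT_MISC, SLOT_DYNPTR };

      struct stack_slot_model {
      	enum slot_model type;
      	bool first_slot;	/* true only for the dynptr's first slot */
      };

      /* A direct BPF insn write that touches an spi of a STACK_DYNPTR pair
       * marks both the first and the second slot STACK_INVALID, regardless
       * of which one was actually written, so a half-overwritten dynptr can
       * never be passed off as initialized.
       */
      static void invalidate_dynptr_on_write(struct stack_slot_model *slots,
      				       int spi)
      {
      	int first_spi;

      	if (slots[spi].type != SLOT_DYNPTR)
      		return;

      	/* the first slot sits at the higher spi of the pair */
      	first_spi = slots[spi].first_slot ? spi : spi + 1;
      	slots[first_spi].type = SLOT_INVALID;
      	slots[first_spi].first_slot = false;
      	slots[first_spi - 1].type = SLOT_INVALID;
      }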
      
      A future series sequenced after this one will focus on the cleanup of
      helper access checks and the bugs around them.
      
      Fixes: 97e03f52 ("bpf: Add verifier support for dynptrs")
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Link: https://lore.kernel.org/r/20230121002241.2113993-4-memxor@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Fix missing var_off check for ARG_PTR_TO_DYNPTR · 79168a66
      Kumar Kartikeya Dwivedi authored
      Currently, the dynptr handling does not check the variable offset part
      of PTR_TO_STACK that it needs to check. The fixed offset is considered
      when computing the stack pointer index, but if the variable offset is
      not a constant (such that it could not be accumulated in reg->off), we
      end up with a discrepancy where the runtime pointer does not point to
      the stack slot we actually mark as STACK_DYNPTR.
      
      It is impossible to precisely track dynptr state when the variable
      offset is not constant, hence, just as for bpf_timer, kptr,
      bpf_spin_lock, etc., simply reject the case where reg->var_off is not
      constant. Then, consider both reg->off and reg->var_off.value when
      computing the stack pointer index.
      
      A new helper dynptr_get_spi is introduced to hide these details, since
      the dynptr needs to be located in multiple places outside of the
      process_dynptr_func checks; once we know it is a PTR_TO_STACK, these
      checks need to be enforced in all of those places.
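
      A minimal sketch of the offset handling described above (simplified
      stand-in fields, not the verifier's bpf_reg_state): non-constant
      var_off is rejected, and the constant value is folded into the fixed
      offset before deriving the slot index:

      #include <errno.h>
      #include <stdbool.h>

      #define BPF_REG_SIZE 8

      struct stack_ptr_model {
      	int off;		/* fixed offset from the frame pointer */
      	long var_off_value;	/* variable offset, meaningful if constant */
      	bool var_off_is_const;
      };

      static int dynptr_get_spi_model(const struct stack_ptr_model *reg)
      {
      	long off;

      	/* the dynptr cannot be tracked precisely at an unknown offset */
      	if (!reg->var_off_is_const)
      		return -EINVAL;

      	off = reg->off + reg->var_off_value;
      	if (off >= 0 || off % BPF_REG_SIZE)
      		return -EINVAL;

      	return -off / BPF_REG_SIZE - 1;	/* stack pointer index */
      }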
      
      Note that unprivileged users are not allowed to have a non-constant
      var_off, so this problem should only be triggerable from programs
      having CAP_PERFMON. However, its effects can vary.

      Without the fix, it is possible to arbitrarily replace the contents of
      the dynptr by making the verifier mark different stack slots than the
      actual location, and then writing to the dynptr's actual stack address
      at runtime.
      
      Fixes: 97e03f52 ("bpf: Add verifier support for dynptrs")
      Acked-by: Joanne Koong <joannelkoong@gmail.com>
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Link: https://lore.kernel.org/r/20230121002241.2113993-3-memxor@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Fix state pruning for STACK_DYNPTR stack slots · d6fefa11
      Kumar Kartikeya Dwivedi authored
      The root of the problem is missing liveness marking for STACK_DYNPTR
      slots. This leads to all kinds of problems inside stacksafe.

      By default, stacksafe ignores spilled_ptr in stack slots which do not
      have REG_LIVE_READ marks. Since this is checked on the 'old' explored
      state, clean_live_states must have already run for this old
      bpf_func_state. Hence, it won't receive any more liveness marks from
      insns yet to be explored (from the liveness point of view it has
      received the REG_LIVE_DONE marking).
      
      What this means is that the verifier considers it safe not to compare
      a stack slot if it was never read by children states. While liveness
      marks are usually propagated correctly following the parentage chain
      for spilled registers (SCALAR_VALUE and PTR_* types), the same is not
      the case for STACK_DYNPTR.

      clean_live_states hence simply rewrites these stack slots to the type
      STACK_INVALID, since it sees no REG_LIVE_READ marks.
      
      The end result is that we will never see STACK_DYNPTR slots in the
      explored state. Even if the verifier were conservatively matching
      !REG_LIVE_READ slots, the very next check in the stacksafe loop, on
      seeing STACK_INVALID, would again prevent further checks.
      
      Now, as long as the verifier stores an explored state that we can
      compare against when reaching a pruning point, we can abuse this bug to
      make the verifier prune the search for obviously unsafe paths using
      STACK_DYNPTR slots, thinking they are never used and hence safe.
      
      Doing this in unprivileged mode is a bit challenging. add_new_state is
      only set when seeing BPF_F_TEST_STATE_FREQ (which requires privileges)
      or when jmps_processed difference is >= 2 and insn_processed difference
      is >= 8. So coming up with the unprivileged case requires a little more
      work, but it is still totally possible. The test case being discussed
      below triggers the heuristic even in unprivileged mode.
      
      However, it no longer works since commit
      8addbfc7 ("bpf: Gate dynptr API behind CAP_BPF").
      
      Let's try to study the test step by step.
      
      Consider the following program (C style BPF ASM):
      
      0  r0 = 0;
      1  r6 = &ringbuf_map;
      3  r1 = r6;
      4  r2 = 8;
      5  r3 = 0;
      6  r4 = r10;
      7  r4 += -16;
      8  call bpf_ringbuf_reserve_dynptr;
      9  if r0 == 0 goto pc+1;
      10 goto pc+1;
      11 *(r10 - 16) = 0xeB9F;
      12 r1 = r10;
      13 r1 += -16;
      14 r2 = 0;
      15 call bpf_ringbuf_discard_dynptr;
      16 r0 = 0;
      17 exit;
      
      We know that insn 12 will be a pruning point, hence if we force
      add_new_state for it, it will first verify the following path as
      safe in straight line exploration:
      0 1 3 4 5 6 7 8 9 -> 10 -> (12) 13 14 15 16 17
      
      Then, when we arrive at insn 12 from the following path:
      0 1 3 4 5 6 7 8 9 -> 11 (12)
      
      We will find a state that has already been verified as safe at insn 12.
      Since the register state is the same at this point, regsafe will pass.
      Next, in stacksafe, spi = 0 and spi = 1 (the location of our dynptr)
      are skipped on seeing !REG_LIVE_READ. The rest matches, so stacksafe
      returns true. Next, refsafe is also true, as the reference state is
      unchanged in both states.
      
      The states are considered equivalent and search is pruned.
      
      Hence, we are able to construct a dynptr with arbitrary contents and use
      the dynptr API to operate on this arbitrary pointer and arbitrary size +
      offset.
      
      To fix this, first define a mark_dynptr_read function that propagates
      liveness marks whenever a valid initialized dynptr is accessed by
      dynptr helpers. REG_LIVE_WRITTEN is marked whenever we initialize an
      uninitialized dynptr. This is done in mark_stack_slots_dynptr. It
      allows screening off mark_reg_read and not propagating marks upwards
      from that point.

      This ensures that we either set REG_LIVE_READ64 on both dynptr slots or
      on neither, so clean_live_states either sets both slots to
      STACK_INVALID or leaves both untouched. This is the invariant the
      checks inside stacksafe rely on.
      
      Next, do a complete comparison of both stack slots whenever they have
      STACK_DYNPTR. Compare the dynptr type stored in the spilled_ptr, and
      also whether both agree on first_slot. Only then is the later path
      safe.
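
      In outline (a standalone sketch; field names are simplified stand-ins
      for the verifier's slot state), the extra stacksafe comparison looks
      like this:

      #include <stdbool.h>

      enum dynptr_type_model { DYNPTR_LOCAL, DYNPTR_RINGBUF };

      struct dynptr_slot_state {
      	bool is_dynptr;		/* slot_type is STACK_DYNPTR */
      	bool first_slot;
      	enum dynptr_type_model type;
      };

      /* A pruning candidate is only safe at a STACK_DYNPTR slot if the old
       * and current slot agree on the dynptr type and on whether this is the
       * first slot of the pair.
       */
      static bool dynptr_slot_safe(const struct dynptr_slot_state *old,
      			     const struct dynptr_slot_state *cur)
      {
      	if (!old->is_dynptr)
      		return true;	/* nothing recorded to compare against */
      	if (!cur->is_dynptr)
      		return false;
      	return old->type == cur->type && old->first_slot == cur->first_slot;
      }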
      
      Fixes: 97e03f52 ("bpf: Add verifier support for dynptrs")
      Acked-by: Eduard Zingerman <eddyz87@gmail.com>
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Link: https://lore.kernel.org/r/20230121002241.2113993-2-memxor@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  2. 20 Jan, 2023 4 commits
  3. 18 Jan, 2023 3 commits
  4. 17 Jan, 2023 1 commit
  5. 15 Jan, 2023 14 commits
  6. 13 Jan, 2023 3 commits
  7. 12 Jan, 2023 9 commits