    bpf: Zeroing allocated object from slab in bpf memory allocator · 997849c4
    Hou Tao authored
    Currently, a freed element in the bpf memory allocator may be reused
    immediately. For htab map, the reuse will reinitialize special fields
    in the map value (e.g., bpf_spin_lock), but a concurrent lookup may
    still be accessing these special fields, and this can lead to a
    hard-lockup as shown below:
    
     NMI backtrace for cpu 16
     CPU: 16 PID: 2574 Comm: htab.bin Tainted: G             L     6.1.0+ #1
     Hardware name: QEMU Standard PC (i440FX + PIIX, 1996),
     RIP: 0010:queued_spin_lock_slowpath+0x283/0x2c0
     ......
     Call Trace:
      <TASK>
      copy_map_value_locked+0xb7/0x170
      bpf_map_copy_value+0x113/0x3c0
      __sys_bpf+0x1c67/0x2780
      __x64_sys_bpf+0x1c/0x20
      do_syscall_64+0x30/0x60
      entry_SYSCALL_64_after_hwframe+0x46/0xb0
     ......
      </TASK>
    
    For htab map, just like the preallocated case, there is no need to
    reinitialize these special fields in the map value once they have
    been initialized. For preallocated htab map, these fields are zeroed
    through __GFP_ZERO in bpf_map_area_alloc(), so do the same for
    non-preallocated htab map in the bpf memory allocator. There is no
    need to pass __GFP_ZERO to the per-cpu bpf memory allocator, because
    __alloc_percpu_gfp() zeroes the memory implicitly.
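    
    The idea behind the fix can be sketched in user space: zero the
    object once at allocation time (the analogue of __GFP_ZERO) so the
    lock-like field starts in a valid state, and never touch it again on
    reuse, so a concurrent reader can never observe it being re-zeroed.
    The struct elem layout and helper names below are illustrative, not
    the real htab element or bpf_mem_alloc() API:

    ```c
    #include <assert.h>
    #include <stdlib.h>

    /* Illustrative element: "lock" stands in for bpf_spin_lock,
     * where 0 means unlocked. */
    struct elem {
    	int lock;
    	long value;
    };

    /* Allocate zeroed, analogous to passing __GFP_ZERO to the slab:
     * the lock field is valid (unlocked) from the moment the element
     * becomes visible. */
    static struct elem *elem_alloc(void)
    {
    	return calloc(1, sizeof(struct elem));
    }

    /* On update/reuse, only the payload changes; the special field is
     * left alone, so no reader ever sees it reinitialized. */
    static void elem_update(struct elem *e, long v)
    {
    	e->value = v;
    }

    int main(void)
    {
    	struct elem *e = elem_alloc();

    	assert(e->lock == 0);   /* initialized once, at allocation */
    	elem_update(e, 42);
    	assert(e->lock == 0);   /* reuse does not touch the lock */
    	assert(e->value == 42);
    	free(e);
    	return 0;
    }
    ```

    In the kernel, the per-cpu variant needs no such flag because
    __alloc_percpu_gfp() already returns zeroed memory.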
    
    Fixes: 0fd7c5d4 ("bpf: Optimize call_rcu in non-preallocated hash map.")
    Signed-off-by: Hou Tao <houtao1@huawei.com>
    Link: https://lore.kernel.org/r/20230215082132.3856544-2-houtao@huaweicloud.com
    Signed-off-by: Alexei Starovoitov <ast@kernel.org>