  1. 24 Jul, 2023 1 commit
  2. 12 Jul, 2023 2 commits
  3. 09 Jul, 2023 1 commit
    • MAINTAINERS 2: Electric Boogaloo · c192ac73
      Linus Torvalds authored
      We just sorted the entries and fields last release, so just out of a
      perverse sense of curiosity, I decided to see if we can keep things
      ordered for even just one release.
      
      The answer is "No. No we cannot".
      
      I suggest that all kernel developers will need weekly training sessions,
      involving a lot of Big Bird and Sesame Street.  And at the yearly
      maintainer summit, we will all sing the alphabet song together.
      
      I doubt I will keep doing this.  At some point "perverse sense of
      curiosity" turns into just a cold dark place filled with sadness and
      despair.
      
      Repeats: 80e62bc8 ("MAINTAINERS: re-sort all entries and fields")
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  4. 08 Jul, 2023 3 commits
  5. 03 Jul, 2023 1 commit
  6. 29 Jun, 2023 1 commit
  7. 27 Jun, 2023 1 commit
  8. 26 Jun, 2023 3 commits
  9. 24 Jun, 2023 1 commit
  10. 23 Jun, 2023 4 commits
  11. 22 Jun, 2023 4 commits
  12. 21 Jun, 2023 7 commits
  13. 20 Jun, 2023 5 commits
  14. 19 Jun, 2023 2 commits
  15. 18 Jun, 2023 1 commit
    • tcp: Use per-vma locking for receive zerocopy · 7a7f0946
      Arjun Roy authored
      Per-VMA locking allows us to lock a struct vm_area_struct without
      taking the process-wide mmap lock in read mode.
      
      Consider a process workload where the mmap lock is taken constantly in
      write mode. In this scenario, every zerocopy receive blocks while the
      write lock is held, even though the operations that need the mmap
      write lock do not, in principle, touch the memory ranges TCP is using.
      This results in performance degradation.
      
      Now consider another workload where the mmap lock is never taken in
      write mode, but many TCP connections using receive zerocopy are
      receiving concurrently. These connections all take the mmap lock in
      read mode, which still induces heavy contention and atomic operations
      on this process-wide lock, adding CPU overhead from contending on the
      lock's cache line.
      
      However, with per-vma locking, both of these problems can be avoided.
      
      As a test, I ran an RPC-style request/response workload with 4KB
      payloads and receive zerocopy enabled, with 100 simultaneous TCP
      connections. I measured perf cycles within the
      find_tcp_vma/mmap_read_lock/mmap_read_unlock codepath, with and
      without per-vma locking enabled.
      
      When using process-wide mmap semaphore read locking, about 1% of
      measured perf cycles were within this path. With per-VMA locking, this
      value dropped to about 0.45%.
      Signed-off-by: Arjun Roy <arjunroy@google.com>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
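      The locking pattern the message describes is "try the per-VMA lock
      first, fall back to the process-wide mmap read lock". Below is a
      minimal sketch of the shape of the find_tcp_vma() path the message
      profiles, using the kernel's lock_vma_under_rcu(), vma_end_read(),
      vma_lookup(), and mmap_read_lock()/mmap_read_unlock() helpers;
      vma_is_tcp() here stands in for the patch's check that the VMA maps
      a TCP socket:

        static struct vm_area_struct *find_tcp_vma(struct mm_struct *mm,
                                                   unsigned long address,
                                                   bool *mmap_locked)
        {
                struct vm_area_struct *vma = NULL;

        #ifdef CONFIG_PER_VMA_LOCK
                /* Fast path: lock just this VMA; no process-wide mmap lock. */
                vma = lock_vma_under_rcu(mm, address);
        #endif
                if (vma) {
                        if (!vma_is_tcp(vma)) {
                                /* Not a TCP mapping; drop the per-VMA lock. */
                                vma_end_read(vma);
                                vma = NULL;
                        } else {
                                *mmap_locked = false;
                        }
                } else {
                        /* Slow path: take the process-wide read lock. */
                        mmap_read_lock(mm);
                        vma = vma_lookup(mm, address);
                        if (!vma || !vma_is_tcp(vma)) {
                                mmap_read_unlock(mm);
                                vma = NULL;
                        } else {
                                *mmap_locked = true;
                        }
                }
                return vma;
        }

      The caller would then release with vma_end_read() or
      mmap_read_unlock() depending on *mmap_locked, so the process-wide
      lock is only touched when the per-VMA fast path fails.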
  16. 17 Jun, 2023 1 commit
  17. 16 Jun, 2023 2 commits