  1. 27 Jul, 2018 2 commits
      ARC: dma [non-IOC] setup SMP_CACHE_BYTES and cache_line_size · eb277739
      Eugeniy Paltsev authored
      As of today we don't set up SMP_CACHE_BYTES and cache_line_size for
      ARC, so they default to L1_CACHE_BYTES. The L1 line length
      (L1_CACHE_BYTES) can easily be smaller than the L2 line (which is
      usually the case, in fact). This breaks code.
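
      For reference, the generic fallback that causes this lives in
      include/linux/cache.h: absent an architecture override, both knobs
      collapse to the L1 line size (simplified excerpt):

          /* Generic fallbacks: used when the arch header does not
           * provide its own definitions.
           */
          #ifndef SMP_CACHE_BYTES
          #define SMP_CACHE_BYTES     L1_CACHE_BYTES
          #endif

          #ifndef cache_line_size
          #define cache_line_size()   L1_CACHE_BYTES
          #endif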
      
      For example, this breaks the ethernet infrastructure on HSDK/AXS103
      boards with IOC disabled, which involves manual cache flushes.
      Functions which allocate and manage the sk_buff packet data area
      rely on the SMP_CACHE_BYTES define. As a result, the last L2 cache
      line of the sk_buff linear packet data area can be shared between
      the DMA buffer and some useful data in another structure, so we
      lose that data when we invalidate the DMA buffer.
      
         sk_buff linear packet data area
                      |
                      |  skb->tail           skb->end
                      V      |                   |
                             V                   V
      .-------------------------------------------.
      |       packet data    | <tail padding>    | <useful data in other struct>
      .-------------------------------------------.

      .---------------.-----------------------------------------------.
      |    SLC line   |          SLC (L2 cache) line (128B)           |
      .---------------.-----------------------------------------------.
              ^                               ^
              |                               |
         These cache lines will be invalidated when we invalidate the
         skb linear packet data area before the DMA transaction starts.
      
      This leads to issues that are painful to debug, as they reproduce
      only if (sk_buff->end - sk_buff->tail) < SLC_LINE_SIZE and we have
      some useful data right after sk_buff->end.
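
      The dependence on SMP_CACHE_BYTES is easy to see in the sk_buff
      allocation path; upstream include/linux/skbuff.h rounds the data
      area like so (simplified excerpt):

          /* Round the skb data area up to a full "DMA line" so it is
           * never shared with unrelated data -- which only works if
           * SMP_CACHE_BYTES covers the largest cache line present.
           */
          #define SKB_DATA_ALIGN(X)   ALIGN(X, SMP_CACHE_BYTES)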
      
      Fix that by hardcoding SMP_CACHE_BYTES to the maximum line length
      we may have.
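
      Roughly, the fix amounts to an override along these lines in
      arch/arc/include/asm/cache.h (a sketch of the change described
      above, not the verbatim patch):

          /*
           * ARC L1 lines may be 32/64/128 bytes and SLC lines are up to
           * 128 bytes, so it is safe to simply use the largest size
           * everywhere.
           */
          #define SMP_CACHE_BYTES     128
          #define cache_line_size()   SMP_CACHE_BYTES
          #define ARCH_DMA_MINALIGN   SMP_CACHE_BYTES
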
      Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      ARC: dma [non IOC]: fix arc_dma_sync_single_for_(device|cpu) · 4c612add
      Eugeniy Paltsev authored
      ARC backend for dma_sync_single_for_(device|cpu) was broken as it was
      not honoring the @dir argument and simply forcing it based on the call:
       - arc_dma_sync_single_for_device(dir) assumed DMA_TO_DEVICE (cache wback)
       - arc_dma_sync_single_for_cpu(dir) assumed DMA_FROM_DEVICE (cache inv)
      
      This is not true given the DMA API programming model and has been
      discussed here [1] in some detail.
      
      Interestingly, while the deficiency has been there forever, it only
      started showing up after the 4.17 dma common ops rework, commit
      a8eb92d0 ("arc: fix arc_dma_{map,unmap}_page"), which wired up these
      calls under the more commonly used dma_map_page API, triggering the
      issue.
      
      [1]: https://lkml.org/lkml/2018/5/18/979
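
      A minimal sketch of the dir-honoring shape of the fix, assuming
      ARC's existing _dma_cache_sync() helper which dispatches on the
      direction (DMA_FROM_DEVICE -> invalidate, DMA_TO_DEVICE ->
      writeback, DMA_BIDIRECTIONAL -> writeback + invalidate):

          /* Both sync hooks now forward the caller's @dir to the common
           * helper instead of hardcoding one cache operation each.
           */
          static void arc_dma_sync_single_for_device(struct device *dev,
                  dma_addr_t dma_handle, size_t size,
                  enum dma_data_direction dir)
          {
                  _dma_cache_sync(dma_handle, size, dir);
          }

          static void arc_dma_sync_single_for_cpu(struct device *dev,
                  dma_addr_t dma_handle, size_t size,
                  enum dma_data_direction dir)
          {
                  _dma_cache_sync(dma_handle, size, dir);
          }
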
      Fixes: commit a8eb92d0 ("arc: fix arc_dma_{map,unmap}_page")
      Cc: stable@kernel.org # v4.17+
      Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      [vgupta: reworked changelog]