1. 14 Nov, 2017 12 commits
  2. 08 Nov, 2017 5 commits
  3. 02 Nov, 2017 1 commit
  4. 31 Oct, 2017 1 commit
  5. 23 Oct, 2017 4 commits
  6. 20 Oct, 2017 2 commits
    • dmaengine: pl330: fix descriptor allocation fail · e5887103
      Alexander Kochetkov authored
      If two concurrent threads call pl330_get_desc() when the DMAC
      descriptor pool is empty, it is possible that the allocation for one
      of the threads will fail with the message:
      
      kernel: dma-pl330 20078000.dma-controller: pl330_get_desc:2469 ALERT!
      
      Here is how that can happen. Thread A calls pl330_get_desc() to get a
      descriptor. If the DMAC descriptor pool is empty, pl330_get_desc()
      allocates a new descriptor in the shared pool using add_desc() and
      then fetches the newly allocated descriptor using pluck_desc(). At
      the same time thread B calls pluck_desc() and takes the newly
      allocated descriptor. In that case the descriptor allocation for
      thread A fails.
      
      Using an on-stack pool for the new descriptor avoids the issue
      described above. The patch modifies pl330_get_desc() to use an
      on-stack pool when allocating new descriptors, as sketched below.
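
      A minimal sketch of the idea (not the exact driver code; the
      add_desc()/pluck_desc() signatures used here are assumed):

          static struct dma_pl330_desc *pl330_get_desc(struct dma_pl330_chan *pch)
          {
                  struct pl330_dmac *pl330 = pch->dmac;
                  struct dma_pl330_desc *desc;

                  /* Try the shared DMAC pool first. */
                  desc = pluck_desc(&pl330->desc_pool, &pl330->pool_lock);

                  if (!desc) {
                          /*
                           * Shared pool is empty: allocate into an on-stack
                           * pool so no other thread can pluck the descriptor
                           * between add_desc() and pluck_desc().
                           */
                          DEFINE_SPINLOCK(lock);
                          LIST_HEAD(pool);

                          if (!add_desc(&pool, &lock, GFP_ATOMIC, 1))
                                  return NULL;

                          desc = pluck_desc(&pool, &lock);
                  }

                  return desc;
          }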
      Signed-off-by: Alexander Kochetkov <al.kochet@gmail.com>
      Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
    • dmaengine: rcar-dmac: use TCRB instead of TCR for residue · 847449f2
      Hiroyuki Yokoyama authored
      The SYS/RT/Audio DMAC includes independent data buffers for reading
      and writing. Therefore, the read transfer counter and the write
      transfer counter can have different values.
      TCR indicates the read counter, and TCRB indicates the write counter.
      The relationship is as shown below.
      
              TCR       TCRB
      [SOURCE] -> [DMAC] -> [SINK]
      
      In the MEM_TO_DEV direction, what really matters is how much data has
      been written to the device. If the DMA is interrupted between read and
      write, the data doesn't end up in the destination, so it shouldn't be
      counted. TCRB is thus the register we should use in this case.
      
      In the DEV_TO_MEM direction, the situation is more complex. Both the
      read and the write side are important. What matters from a data
      consumer's point of view is how much data has been written to memory.
      On the other hand, if the transfer is interrupted between read and
      write, we'll end up losing data, which can also be important to
      report.
      
      In the MEM_TO_MEM direction, what matters is of course how much data
      has been written to memory, from the data consumer's point of view.
      Here, because read and write have independent data buffers, it will
      take a while for TCR and TCRB to become equal. Thus we should check
      TCRB in this case, too.
      
      Thus, in all cases we should check TCRB instead of TCR.
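
      A hedged sketch of the resulting residue read for the chunk in
      flight (the rcar_dmac_chan_read() helper and the RCAR_DMATCRB
      register name follow the driver's conventions but are assumed here,
      not copied from the patch):

          /* Sketch only: helper and register names are assumed. */
          static unsigned int rcar_dmac_chan_xfer_residue(struct rcar_dmac_chan *chan,
                                                          unsigned int xfer_shift)
          {
                  /*
                   * TCRB counts units written to the sink, TCR counts units
                   * read from the source.  Use TCRB so data still held in
                   * the DMAC's internal buffer is not reported as done.
                   */
                  return rcar_dmac_chan_read(chan, RCAR_DMATCRB) << xfer_shift;
          }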
      
      Without this patch, Sound Capture has noise after PulseAudio support
      (= 07b7acb5 ("ASoC: rsnd: update pointer more accurate")), because
      the recorder uses a wrong residue count that indicates how much data
      was transferred from the sound device, while in reality the data has
      not yet been put into memory, and the recorder records it anyway.
      Signed-off-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com>
      [Kuninori: added detail information in log]
      Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
      Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
      Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
  7. 16 Oct, 2017 2 commits
    • dmaengine: img-mdc: Add runtime PM · 56d355e6
      Ed Blake authored
      Add runtime PM support to disable the clock when the hardware is not
      in use. The existing clk_prepare_enable() call is removed from
      probe() as the clock is no longer permanently enabled.
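
      A minimal sketch of such runtime PM callbacks, assuming the driver's
      private structure (here called mdc_dma) holds the clock; the
      clk_prepare_enable()/clk_disable_unprepare() calls are the standard
      clock API:

          /* Sketch only: mdc_dma and its clk field are assumed names. */
          static int img_mdc_runtime_suspend(struct device *dev)
          {
                  struct mdc_dma *mdma = dev_get_drvdata(dev);

                  clk_disable_unprepare(mdma->clk);
                  return 0;
          }

          static int img_mdc_runtime_resume(struct device *dev)
          {
                  struct mdc_dma *mdma = dev_get_drvdata(dev);

                  return clk_prepare_enable(mdma->clk);
          }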
      Signed-off-by: Ed Blake <ed.blake@sondrel.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
    • dmaengine: img-mdc: Add suspend / resume handling · fd9f22ae
      Ed Blake authored
      Add suspend / resume handling using suspend_late and resume_early, and
      check that all channels are idle before suspending.
      
      DMA drivers should use suspend_late / resume_early to ensure that all
      DMA client devices are suspended before the DMA device itself, and that
      client devices are resumed after the DMA device. This avoids suspending
      the DMA device while transactions are still active.
      
      It is the responsibility of client drivers to terminate all DMA
      transactions in their suspend handlers, so there should be no active
      transactions by the time suspend_late is called.
      
      There's no need to save and restore registers for MDC during suspend /
      resume, as all transactions will be terminated as a result of the
      suspend, and all required registers are programmed anyway at the start
      of any new transactions following resume.
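
      A hedged sketch of the idle check in the late suspend hook (the
      mdc_dma structure and its nr_channels/channels/desc fields are
      assumed here for illustration); it would be wired up with
      SET_LATE_SYSTEM_SLEEP_PM_OPS() so it runs after client drivers have
      suspended:

          /* Sketch only: structure and field names are assumed. */
          static int img_mdc_suspend_late(struct device *dev)
          {
                  struct mdc_dma *mdma = dev_get_drvdata(dev);
                  unsigned int i;

                  /* Clients must already have terminated their transactions. */
                  for (i = 0; i < mdma->nr_channels; i++) {
                          if (mdma->channels[i].desc)     /* still active */
                                  return -EBUSY;
                  }

                  return 0;
          }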
      Signed-off-by: Ed Blake <ed.blake@sondrel.com>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>
  8. 12 Oct, 2017 3 commits
  9. 08 Oct, 2017 5 commits
  10. 27 Sep, 2017 4 commits
  11. 25 Sep, 2017 1 commit
    • dmaengine: qcom-bam: Process multiple pending descriptors · 6b4faeac
      Sricharan R authored
      The bam dmaengine has a circular FIFO to which we add hw descriptors
      that describe the transaction. The FIFO has space for about 4096 hw
      descriptors.
      
      Currently we add one descriptor, wait for it to complete with an
      interrupt, and then add the next pending descriptor. In this way the
      FIFO is underutilized, since only one descriptor is processed at a
      time although there is space in the FIFO for the BAM to process more.
      
      Instead, keep adding descriptors to the FIFO until it is full; that
      allows the BAM to continue working on the next descriptor immediately
      after signalling the completion interrupt for the previous descriptor.
      
      Also, when the client has not set DMA_PREP_INTERRUPT for a
      descriptor, do not configure the BAM to trigger an interrupt upon
      completion of that descriptor. This way we get an interrupt only for
      the descriptors for which DMA_PREP_INTERRUPT was requested, and there
      we signal completion of all the previously completed descriptors. So
      we still run callbacks for all requested descriptors, just with a
      reduced number of interrupts, as sketched after the measurements
      below.
      
      CURRENT:
      
                  --------     --------    ---------------
                  |DESC 0|     |DESC 1|    |DESC 2 + INT |
                  --------     --------    ---------------
                      |            |              |
                      |            |              |
      INTERRUPT:    (INT)        (INT)          (INT)
      CALLBACK:      (CB)         (CB)           (CB)
      
      		MTD_SPEEDTEST READ PAGE: 3560 KiB/s
      		MTD_SPEEDTEST WRITE PAGE: 2664 KiB/s
      		IOZONE READ: 2456 KB/s
      		IOZONE WRITE: 1230 KB/s
      
      	bam dma interrupts (after tests): 96508
      
      CHANGE:
      
              --------  --------  ---------------
              |DESC 0|  |DESC 1|  |DESC 2 + INT |
              --------  --------  ---------------
                                         |
                                         |
                                       (INT)
                                 (CB for 0, 1, 2)
      
      		MTD_SPEEDTEST READ PAGE: 3860 KiB/s
      		MTD_SPEEDTEST WRITE PAGE: 2837 KiB/s
      		IOZONE READ: 2677 KB/s
      		IOZONE WRITE: 1308 KB/s
      
      	bam dma interrupts (after tests): 58806
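
      A hedged sketch of the scheme (the helpers, structures and the
      DESC_FLAG_INT flag below are assumed names for illustration; only
      DMA_PREP_INTERRUPT comes from the dmaengine API):

          /* Sketch only: helper, structure and flag names are assumed. */
          static void bam_start_pending(struct bam_chan *bchan)
          {
                  struct bam_async_desc *async_desc;

                  /* Keep pushing pending descriptors while the FIFO has room. */
                  while (bam_fifo_space(bchan) &&
                         (async_desc = bam_next_pending(bchan))) {
                          /*
                           * Ask for a completion interrupt only when the
                           * client requested DMA_PREP_INTERRUPT; the others
                           * are reaped when the next interrupting
                           * descriptor completes.
                           */
                          if (!(async_desc->flags & DMA_PREP_INTERRUPT))
                                  async_desc->hw_flags &= ~DESC_FLAG_INT;

                          bam_fifo_push(bchan, async_desc);
                  }
          }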
      Signed-off-by: Sricharan R <sricharan@codeaurora.org>
      Reviewed-by: Andy Gross <andy.gross@linaro.org>
      Tested-by: Abhishek Sahu <absahu@codeaurora.org>
      Signed-off-by: Vinod Koul <vinod.koul@intel.com>