Commit c3d996fb authored by Lucas Stach, committed by Mauro Carvalho Chehab

media: coda: limit queueing into internal bitstream buffer

The ringbuffer used to hold the bitstream is very conservatively sized,
as keyframes can get very large and still need to fit into this buffer.
This means that the buffer is way oversized for the average stream, to
the extent that it will hold a few hundred frames when the video data
compresses well.

The current strategy of queueing as much bitstream data as possible
leads to large delays when draining the decoder. In order to keep the
drain latency reasonably bounded, try to queue only a full reorder
window of buffers. We can't always hit this low target for highly
compressible video data, as we might end up with less than the minimum
amount of data that needs to be available to the bitstream prefetcher,
so we must take this into account and allow more buffers to be queued
in this case.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
Signed-off-by: Hans Verkuil <hansverk@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
parent 51407c2d
@@ -269,6 +269,23 @@ void coda_fill_bitstream(struct coda_ctx *ctx, struct list_head *buffer_list)
 		    ctx->num_metas > 1)
 			break;
 
+		if (ctx->num_internal_frames &&
+		    ctx->num_metas >= ctx->num_internal_frames) {
+			meta = list_first_entry(&ctx->buffer_meta_list,
+						struct coda_buffer_meta, list);
+			/*
+			 * If we managed to fill in at least a full reorder
+			 * window of buffers (num_internal_frames is a
+			 * conservative estimate for this) and the bitstream
+			 * prefetcher has at least two 256-byte periods beyond
+			 * the first buffer to fetch, we can safely stop queuing
+			 * in order to limit the decoder drain latency.
+			 */
+			if (coda_bitstream_can_fetch_past(ctx, meta->end))
+				break;
+		}
+
 		src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
 
 		/* Drop frames that do not start/end with SOI/EOI markers */
@@ -2252,6 +2269,17 @@ static void coda_finish_decode(struct coda_ctx *ctx)
 	/* The rotator will copy the current display frame next time */
 	ctx->display_idx = display_idx;
+
+	/*
+	 * The current decode run might have brought the bitstream fill level
+	 * below the size where we can start the next decode run. As userspace
+	 * might have filled the output queue completely and might thus be
+	 * blocked, we can't rely on the next qbuf to trigger the bitstream
+	 * refill. Check if we have data to refill the bitstream now.
+	 */
+	mutex_lock(&ctx->bitstream_mutex);
+	coda_fill_bitstream(ctx, NULL);
+	mutex_unlock(&ctx->bitstream_mutex);
 }
 
 static void coda_decode_timeout(struct coda_ctx *ctx)