Commit c3b7fe8b authored by Christian König, committed by Dave Airlie

drm/radeon: multiple ring allocator v3

A fresh start with a new idea for a multiple ring allocator.
It should perform as well as a normal ring allocator as long
as only one ring does something, but falls back to a more
complex algorithm when multiple rings are active.

We store the last allocated bo in last and always try to allocate
after it. The principle is that in a linear GPU ring progression,
whatever comes after last is the oldest bo we allocated and thus
the first one that should no longer be in use by the GPU.

If that is not the case we skip over the bo after last to the
closest completed bo, if one exists. If none exists and we are
not asked to block, we report an allocation failure.
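
To make the bookkeeping concrete, here is a toy userspace sketch of the
structures and the reclaim scan; the full logic lives in the collapsed
part of this diff. All names (sa_bo, try_free, NUM_RINGS) and the done
flag are illustrative stand-ins, not the kernel code:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdlib.h>

    #define NUM_RINGS 4   /* toy stand-in for RADEON_NUM_RINGS */

    /* Minimal doubly linked list so the sketch stands alone. */
    struct list_head { struct list_head *prev, *next; };

    static inline void list_init(struct list_head *h) { h->prev = h->next = h; }
    static inline bool list_empty(const struct list_head *h) { return h->next == h; }

    static inline void list_del(struct list_head *e)
    {
            e->prev->next = e->next;
            e->next->prev = e->prev;
            list_init(e);
    }

    /* Each allocation sits in olist (buffer order, i.e. ring order) and,
     * once freed by its ring, also in that ring's flist until the GPU is
     * done with it. */
    struct sa_bo {
            struct list_head olist;
            struct list_head flist;
            bool done;               /* stands in for "fence signaled" */
    };

    struct sa_manager {
            struct list_head olist;
            struct list_head flist[NUM_RINGS];
            struct list_head *hole;  /* allocate after this entry; it may
                                      * point at &olist itself (the v2 fix,
                                      * so the first allocation can also
                                      * be reclaimed) */
    };

    #define bo_of(e) ((struct sa_bo *)((char *)(e) - offsetof(struct sa_bo, olist)))

    /* In ring order the entry right after the hole is the oldest
     * allocation, so reclaim forward from it and stop at the first
     * allocation the GPU still uses. */
    static void try_free(struct sa_manager *m)
    {
            struct list_head *it = m->hole->next;

            while (it != &m->olist && bo_of(it)->done) {
                    struct list_head *next = it->next;

                    list_del(&bo_of(it)->flist);
                    list_del(it);
                    free(bo_of(it));
                    it = next;
            }
    }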

If we are asked to block, we collect the oldest fence of each ring
and wait for any one of those fences to complete.
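
Continuing the toy sketch above, the blocking fallback can be modeled as
below. wait_for_room and fbo_of are illustrative names; marking the
first candidate done stands in for a real blocking wait on any of the
collected fences, and each flist is assumed to hold freed bos in FIFO
order so its head is the ring's oldest:

    #include <errno.h>

    #define fbo_of(e) ((struct sa_bo *)((char *)(e) - offsetof(struct sa_bo, flist)))

    static int wait_for_room(struct sa_manager *m)
    {
            struct sa_bo *oldest[NUM_RINGS];
            unsigned int i, n = 0;

            /* the head of each ring's flist is its oldest freed bo */
            for (i = 0; i < NUM_RINGS; i++) {
                    if (!list_empty(&m->flist[i]))
                            oldest[n++] = fbo_of(m->flist[i].next);
            }
            if (n == 0)
                    return -ENOMEM;  /* v3: no fences left to wait for */

            oldest[0]->done = true;  /* toy "any one fence signaled" */
            try_free(m);             /* now at least one hole can grow */
            return 0;
    }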

v2: We need to be able to let hole point to the list_head, otherwise
    try_free will never free the first allocation in the list. Also
    stop calling radeon_fence_signaled more often than necessary.

v3: Don't free allocations without considering them as a hole,
    otherwise we might lose holes. Also return ENOMEM instead of ENOENT
    when running out of fences to wait for. Limit the number of holes
    we try for each ring to 3.
Signed-off-by: Christian König <deathsimple@vodafone.de>
Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
parent 0085c950
@@ -385,7 +385,9 @@ struct radeon_bo_list {
 struct radeon_sa_manager {
 	spinlock_t		lock;
 	struct radeon_bo	*bo;
-	struct list_head	sa_bo;
+	struct list_head	*hole;
+	struct list_head	flist[RADEON_NUM_RINGS];
+	struct list_head	olist;
 	unsigned		size;
 	uint64_t		gpu_addr;
 	void			*cpu_ptr;
@@ -396,7 +398,8 @@ struct radeon_sa_bo;
 /* sub-allocation buffer */
 struct radeon_sa_bo {
-	struct list_head		list;
+	struct list_head		olist;
+	struct list_head		flist;
 	struct radeon_sa_manager	*manager;
 	unsigned			soffset;
 	unsigned			eoffset;
...
@@ -204,25 +204,22 @@ int radeon_ib_schedule(struct radeon_device *rdev, struct radeon_ib *ib)
 int radeon_ib_pool_init(struct radeon_device *rdev)
 {
-	struct radeon_sa_manager tmp;
 	int i, r;
 
-	r = radeon_sa_bo_manager_init(rdev, &tmp,
-				      RADEON_IB_POOL_SIZE*64*1024,
-				      RADEON_GEM_DOMAIN_GTT);
-	if (r) {
-		return r;
-	}
-
 	radeon_mutex_lock(&rdev->ib_pool.mutex);
 	if (rdev->ib_pool.ready) {
 		radeon_mutex_unlock(&rdev->ib_pool.mutex);
-		radeon_sa_bo_manager_fini(rdev, &tmp);
 		return 0;
 	}
 
-	rdev->ib_pool.sa_manager = tmp;
-	INIT_LIST_HEAD(&rdev->ib_pool.sa_manager.sa_bo);
+	r = radeon_sa_bo_manager_init(rdev, &rdev->ib_pool.sa_manager,
+				      RADEON_IB_POOL_SIZE*64*1024,
+				      RADEON_GEM_DOMAIN_GTT);
+	if (r) {
+		radeon_mutex_unlock(&rdev->ib_pool.mutex);
+		return r;
+	}
 	for (i = 0; i < RADEON_IB_POOL_SIZE; i++) {
 		rdev->ib_pool.ibs[i].fence = NULL;
 		rdev->ib_pool.ibs[i].idx = i;
...