Commit bdfcea4b authored by Dave Airlie

Merge branch 'linux-3.20' of git://anongit.freedesktop.org/git/nouveau/linux-2.6 into drm-next

There's a huge amount of no-op churn here renaming the majority of the
driver from nouveau_ to nvkm_, in preparation for splitting the module
into two down the track.  Also switched to NVIDIA's unit and chipset
names at the same time.  Despite the massive amount of code touched, the
commits should be safe, as objdump was used to verify that nothing got
changed accidentally in the renames.

Aside from that, not much in this first pull request:
- nouveau_platform.ko for GK20A was merged into nouveau.ko
- GK20A dynamic reclocking support
- no more vt-switches across suspend/resume
- changed output scaling policy: if the mode comes from the display's
EDID, we program it directly rather than using the GPU to scale to
the panel's native mode.  This should address complaints about having
to jump through hoops for 24/120Hz modes, etc.
- various other minor fixups and cleanups

* 'linux-3.20' of git://anongit.freedesktop.org/git/nouveau/linux-2.6: (86 commits)
  drm/nouveau: finalise nvkm namespace switch (no binary change)
  drm/nouveau/device: namespace + nvidia gpu names (no binary change)
  drm/nouveau/vp: namespace + nvidia gpu names (no binary change)
  drm/nouveau/sw: namespace + nvidia gpu names (no binary change)
  drm/nouveau/sec: namespace + nvidia gpu names (no binary change)
  drm/nouveau/pm: namespace + nvidia gpu names (no binary change)
  drm/nouveau/msvld: namespace + nvidia gpu names (no binary change)
  drm/nouveau/msppp: namespace + nvidia gpu names (no binary change)
  drm/nouveau/mspdec: namespace + nvidia gpu names (no binary change)
  drm/nouveau/mpeg: namespace + nvidia gpu names (no binary change)
  drm/nouveau/gr: namespace + nvidia gpu names (no binary change)
  drm/nouveau/fifo: namespace + nvidia gpu names (no binary change)
  drm/nouveau/dmaobj: namespace + nvidia gpu names (no binary change)
  drm/nouveau/disp: namespace + nvidia gpu names (no binary change)
  drm/nouveau/cipher: namespace + nvidia gpu names (no binary change)
  drm/nouveau/ce: namespace + nvidia gpu names (no binary change)
  drm/nouveau/bsp: namespace + nvidia gpu names (no binary change)
  drm/nouveau/volt: namespace + nvidia gpu names (no binary change)
  drm/nouveau/timer: namespace + nvidia gpu names (no binary change)
  drm/nouveau/therm: namespace + nvidia gpu names (no binary change)
  ...
parents 281d1bbd be83cd4e


ccflags-y := -Iinclude/drm
ccflags-y += -I$(src)/include
ccflags-y += -I$(src)/include/nvkm
ccflags-y += -I$(src)/nvkm
ccflags-y += -I$(src)
# NVKM - HW resource manager
#- code also used by various userspace tools/tests
include $(src)/nvif/Kbuild
nouveau-y := $(nvif-y)
# NVIF - NVKM interface library (NVKM user interface also defined here)
#- code also used by various userspace tools/tests
include $(src)/nvkm/Kbuild
nouveau-y += $(nvkm-y)
# DRM - general
ifdef CONFIG_X86
nouveau-$(CONFIG_ACPI) += nouveau_acpi.o
endif
nouveau-y += nouveau_agp.o
nouveau-$(CONFIG_DEBUG_FS) += nouveau_debugfs.o
nouveau-y += nouveau_drm.o
nouveau-y += nouveau_hwmon.o
nouveau-$(CONFIG_COMPAT) += nouveau_ioc32.o
nouveau-y += nouveau_nvif.o
nouveau-$(CONFIG_NOUVEAU_PLATFORM_DRIVER) += nouveau_platform.o
nouveau-y += nouveau_sysfs.o
nouveau-y += nouveau_usif.o # userspace <-> nvif
nouveau-y += nouveau_vga.o
# DRM - memory management
nouveau-y += nouveau_bo.o
nouveau-y += nouveau_gem.o
nouveau-y += nouveau_prime.o
nouveau-y += nouveau_sgdma.o
nouveau-y += nouveau_ttm.o
# DRM - modesetting
nouveau-$(CONFIG_DRM_NOUVEAU_BACKLIGHT) += nouveau_backlight.o
nouveau-y += nouveau_connector.o
nouveau-y += nouveau_display.o
nouveau-y += nv50_display.o
nouveau-y += nouveau_dp.o
nouveau-y += nouveau_fbcon.o
nouveau-y += nv04_fbcon.o
nouveau-y += nv50_fbcon.o
nouveau-y += nvc0_fbcon.o
# DRM - command submission
nouveau-y += nouveau_abi16.o
nouveau-y += nouveau_chan.o
nouveau-y += nouveau_dma.o
nouveau-y += nouveau_fence.o
nouveau-y += nv04_fence.o
nouveau-y += nv10_fence.o
nouveau-y += nv17_fence.o
nouveau-y += nv50_fence.o
nouveau-y += nv84_fence.o
nouveau-y += nvc0_fence.o
# DRM - prehistoric modesetting (NV04-G7x)
nouveau-y += nouveau_bios.o
include $(src)/dispnv04/Kbuild
obj-$(CONFIG_DRM_NOUVEAU) += nouveau.o
@@ -26,7 +26,7 @@ config DRM_NOUVEAU
 	  Choose this option for open-source NVIDIA support.
 
 config NOUVEAU_PLATFORM_DRIVER
-	tristate "Nouveau (NVIDIA) SoC GPUs"
+	bool "Nouveau (NVIDIA) SoC GPUs"
 	depends on DRM_NOUVEAU && ARCH_TEGRA
 	default y
 	help
This diff is collapsed.
/*
* Copyright 2012 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Ben Skeggs
*/
#include <core/object.h>
#include <core/client.h>
#include <core/handle.h>
#include <core/option.h>
#include <nvif/unpack.h>
#include <nvif/class.h>
#include <nvif/event.h>
#include <engine/device.h>
struct nvkm_client_notify {
struct nouveau_client *client;
struct nvkm_notify n;
u8 version;
u8 size;
union {
struct nvif_notify_rep_v0 v0;
} rep;
};
static int
nvkm_client_notify(struct nvkm_notify *n)
{
struct nvkm_client_notify *notify = container_of(n, typeof(*notify), n);
struct nouveau_client *client = notify->client;
return client->ntfy(&notify->rep, notify->size, n->data, n->size);
}
int
nvkm_client_notify_put(struct nouveau_client *client, int index)
{
if (index < ARRAY_SIZE(client->notify)) {
if (client->notify[index]) {
nvkm_notify_put(&client->notify[index]->n);
return 0;
}
}
return -ENOENT;
}
int
nvkm_client_notify_get(struct nouveau_client *client, int index)
{
if (index < ARRAY_SIZE(client->notify)) {
if (client->notify[index]) {
nvkm_notify_get(&client->notify[index]->n);
return 0;
}
}
return -ENOENT;
}
int
nvkm_client_notify_del(struct nouveau_client *client, int index)
{
if (index < ARRAY_SIZE(client->notify)) {
if (client->notify[index]) {
nvkm_notify_fini(&client->notify[index]->n);
kfree(client->notify[index]);
client->notify[index] = NULL;
return 0;
}
}
return -ENOENT;
}
int
nvkm_client_notify_new(struct nouveau_object *object,
struct nvkm_event *event, void *data, u32 size)
{
struct nouveau_client *client = nouveau_client(object);
struct nvkm_client_notify *notify;
union {
struct nvif_notify_req_v0 v0;
} *req = data;
u8 index, reply;
int ret;
for (index = 0; index < ARRAY_SIZE(client->notify); index++) {
if (!client->notify[index])
break;
}
if (index == ARRAY_SIZE(client->notify))
return -ENOSPC;
notify = kzalloc(sizeof(*notify), GFP_KERNEL);
if (!notify)
return -ENOMEM;
nv_ioctl(client, "notify new size %d\n", size);
if (nvif_unpack(req->v0, 0, 0, true)) {
nv_ioctl(client, "notify new vers %d reply %d route %02x "
"token %llx\n", req->v0.version,
req->v0.reply, req->v0.route, req->v0.token);
notify->version = req->v0.version;
notify->size = sizeof(notify->rep.v0);
notify->rep.v0.version = req->v0.version;
notify->rep.v0.route = req->v0.route;
notify->rep.v0.token = req->v0.token;
reply = req->v0.reply;
}
if (ret == 0) {
ret = nvkm_notify_init(object, event, nvkm_client_notify,
false, data, size, reply, &notify->n);
if (ret == 0) {
client->notify[index] = notify;
notify->client = client;
return index;
}
}
kfree(notify);
return ret;
}
static int
nouveau_client_devlist(struct nouveau_object *object, void *data, u32 size)
{
union {
struct nv_client_devlist_v0 v0;
} *args = data;
int ret;
nv_ioctl(object, "client devlist size %d\n", size);
if (nvif_unpack(args->v0, 0, 0, true)) {
nv_ioctl(object, "client devlist vers %d count %d\n",
args->v0.version, args->v0.count);
if (size == sizeof(args->v0.device[0]) * args->v0.count) {
ret = nouveau_device_list(args->v0.device,
args->v0.count);
if (ret >= 0) {
args->v0.count = ret;
ret = 0;
}
} else {
ret = -EINVAL;
}
}
return ret;
}
static int
nouveau_client_mthd(struct nouveau_object *object, u32 mthd,
void *data, u32 size)
{
switch (mthd) {
case NV_CLIENT_DEVLIST:
return nouveau_client_devlist(object, data, size);
default:
break;
}
return -EINVAL;
}
static void
nouveau_client_dtor(struct nouveau_object *object)
{
struct nouveau_client *client = (void *)object;
int i;
for (i = 0; i < ARRAY_SIZE(client->notify); i++)
nvkm_client_notify_del(client, i);
nouveau_object_ref(NULL, &client->device);
nouveau_handle_destroy(client->root);
nouveau_namedb_destroy(&client->base);
}
static struct nouveau_oclass
nouveau_client_oclass = {
.ofuncs = &(struct nouveau_ofuncs) {
.dtor = nouveau_client_dtor,
.mthd = nouveau_client_mthd,
},
};
int
nouveau_client_create_(const char *name, u64 devname, const char *cfg,
const char *dbg, int length, void **pobject)
{
struct nouveau_object *device;
struct nouveau_client *client;
int ret;
device = (void *)nouveau_device_find(devname);
if (!device)
return -ENODEV;
ret = nouveau_namedb_create_(NULL, NULL, &nouveau_client_oclass,
NV_CLIENT_CLASS, NULL,
(1ULL << NVDEV_ENGINE_DEVICE),
length, pobject);
client = *pobject;
if (ret)
return ret;
ret = nouveau_handle_create(nv_object(client), ~0, ~0,
nv_object(client), &client->root);
if (ret)
return ret;
/* prevent init/fini being called, the OS is in charge of this */
atomic_set(&nv_object(client)->usecount, 2);
nouveau_object_ref(device, &client->device);
snprintf(client->name, sizeof(client->name), "%s", name);
client->debug = nouveau_dbgopt(dbg, "CLIENT");
return 0;
}
int
nouveau_client_init(struct nouveau_client *client)
{
int ret;
nv_debug(client, "init running\n");
ret = nouveau_handle_init(client->root);
nv_debug(client, "init completed with %d\n", ret);
return ret;
}
int
nouveau_client_fini(struct nouveau_client *client, bool suspend)
{
const char *name[2] = { "fini", "suspend" };
int ret, i;
nv_debug(client, "%s running\n", name[suspend]);
nv_debug(client, "%s notify\n", name[suspend]);
for (i = 0; i < ARRAY_SIZE(client->notify); i++)
nvkm_client_notify_put(client, i);
nv_debug(client, "%s object\n", name[suspend]);
ret = nouveau_handle_fini(client->root, suspend);
nv_debug(client, "%s completed with %d\n", name[suspend], ret);
return ret;
}
const char *
nouveau_client_name(void *obj)
{
const char *client_name = "unknown";
struct nouveau_client *client = nouveau_client(obj);
if (client)
client_name = client->name;
return client_name;
}
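Not part of the diff: a minimal sketch of how the per-client notify slots above might be driven. It assumes the includes of the file above, an existing nvkm_event, and a request buffer laid out as that event's ctor expects; the example_* name is hypothetical.
/* Hypothetical sketch, not from this commit. */
static int
example_client_notify(struct nouveau_object *object, struct nvkm_event *event,
		      void *req, u32 reqsize)
{
	struct nouveau_client *client = nouveau_client(object);
	int index, ret;

	/* allocates a free slot in client->notify[] and returns its index */
	index = nvkm_client_notify_new(object, event, req, reqsize);
	if (index < 0)
		return index;

	nvkm_client_notify_get(client, index);	/* arm delivery */
	/* ... nvkm_notify_send() on the event now reaches client->ntfy() ... */
	nvkm_client_notify_put(client, index);	/* disarm again */

	ret = nvkm_client_notify_del(client, index);	/* free the slot */
	return ret;
}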
/*
* Copyright 2012 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Ben Skeggs
*/
#include <core/object.h>
#include <core/namedb.h>
#include <core/handle.h>
#include <core/client.h>
#include <core/engctx.h>
#include <subdev/vm.h>
static inline int
nouveau_engctx_exists(struct nouveau_object *parent,
struct nouveau_engine *engine, void **pobject)
{
struct nouveau_engctx *engctx;
struct nouveau_object *parctx;
list_for_each_entry(engctx, &engine->contexts, head) {
parctx = nv_pclass(nv_object(engctx), NV_PARENT_CLASS);
if (parctx == parent) {
atomic_inc(&nv_object(engctx)->refcount);
*pobject = engctx;
return 1;
}
}
return 0;
}
int
nouveau_engctx_create_(struct nouveau_object *parent,
struct nouveau_object *engobj,
struct nouveau_oclass *oclass,
struct nouveau_object *pargpu,
u32 size, u32 align, u32 flags,
int length, void **pobject)
{
struct nouveau_client *client = nouveau_client(parent);
struct nouveau_engine *engine = nv_engine(engobj);
struct nouveau_object *engctx;
unsigned long save;
int ret;
/* check if this engine already has a context for the parent object,
* and reference it instead of creating a new one
*/
spin_lock_irqsave(&engine->lock, save);
ret = nouveau_engctx_exists(parent, engine, pobject);
spin_unlock_irqrestore(&engine->lock, save);
if (ret)
return ret;
/* create the new context, supports creating both raw objects and
* objects backed by instance memory
*/
if (size) {
ret = nouveau_gpuobj_create_(parent, engobj, oclass,
NV_ENGCTX_CLASS,
pargpu, size, align, flags,
length, pobject);
} else {
ret = nouveau_object_create_(parent, engobj, oclass,
NV_ENGCTX_CLASS, length, pobject);
}
engctx = *pobject;
if (ret)
return ret;
/* must take the lock again and re-check a context doesn't already
* exist (in case of a race) - the lock had to be dropped before as
* it's not possible to allocate the object with it held.
*/
spin_lock_irqsave(&engine->lock, save);
ret = nouveau_engctx_exists(parent, engine, pobject);
if (ret) {
spin_unlock_irqrestore(&engine->lock, save);
nouveau_object_ref(NULL, &engctx);
return ret;
}
if (client->vm)
atomic_inc(&client->vm->engref[nv_engidx(engobj)]);
list_add(&nv_engctx(engctx)->head, &engine->contexts);
nv_engctx(engctx)->addr = ~0ULL;
spin_unlock_irqrestore(&engine->lock, save);
return 0;
}
void
nouveau_engctx_destroy(struct nouveau_engctx *engctx)
{
struct nouveau_object *engobj = nv_object(engctx)->engine;
struct nouveau_engine *engine = nv_engine(engobj);
struct nouveau_client *client = nouveau_client(engctx);
unsigned long save;
nouveau_gpuobj_unmap(&engctx->vma);
spin_lock_irqsave(&engine->lock, save);
list_del(&engctx->head);
spin_unlock_irqrestore(&engine->lock, save);
if (client->vm)
atomic_dec(&client->vm->engref[nv_engidx(engobj)]);
if (engctx->base.size)
nouveau_gpuobj_destroy(&engctx->base);
else
nouveau_object_destroy(&engctx->base.base);
}
int
nouveau_engctx_init(struct nouveau_engctx *engctx)
{
struct nouveau_object *object = nv_object(engctx);
struct nouveau_subdev *subdev = nv_subdev(object->engine);
struct nouveau_object *parent;
struct nouveau_subdev *pardev;
int ret;
ret = nouveau_gpuobj_init(&engctx->base);
if (ret)
return ret;
parent = nv_pclass(object->parent, NV_PARENT_CLASS);
pardev = nv_subdev(parent->engine);
if (nv_parent(parent)->context_attach) {
mutex_lock(&pardev->mutex);
ret = nv_parent(parent)->context_attach(parent, object);
mutex_unlock(&pardev->mutex);
}
if (ret) {
nv_error(parent, "failed to attach %s context, %d\n",
subdev->name, ret);
return ret;
}
nv_debug(parent, "attached %s context\n", subdev->name);
return 0;
}
int
nouveau_engctx_fini(struct nouveau_engctx *engctx, bool suspend)
{
struct nouveau_object *object = nv_object(engctx);
struct nouveau_subdev *subdev = nv_subdev(object->engine);
struct nouveau_object *parent;
struct nouveau_subdev *pardev;
int ret = 0;
parent = nv_pclass(object->parent, NV_PARENT_CLASS);
pardev = nv_subdev(parent->engine);
if (nv_parent(parent)->context_detach) {
mutex_lock(&pardev->mutex);
ret = nv_parent(parent)->context_detach(parent, suspend, object);
mutex_unlock(&pardev->mutex);
}
if (ret) {
nv_error(parent, "failed to detach %s context, %d\n",
subdev->name, ret);
return ret;
}
nv_debug(parent, "detached %s context\n", subdev->name);
return nouveau_gpuobj_fini(&engctx->base, suspend);
}
int
_nouveau_engctx_ctor(struct nouveau_object *parent,
struct nouveau_object *engine,
struct nouveau_oclass *oclass, void *data, u32 size,
struct nouveau_object **pobject)
{
struct nouveau_engctx *engctx;
int ret;
ret = nouveau_engctx_create(parent, engine, oclass, NULL, 256, 256,
NVOBJ_FLAG_ZERO_ALLOC, &engctx);
*pobject = nv_object(engctx);
return ret;
}
void
_nouveau_engctx_dtor(struct nouveau_object *object)
{
nouveau_engctx_destroy(nv_engctx(object));
}
int
_nouveau_engctx_init(struct nouveau_object *object)
{
return nouveau_engctx_init(nv_engctx(object));
}
int
_nouveau_engctx_fini(struct nouveau_object *object, bool suspend)
{
return nouveau_engctx_fini(nv_engctx(object), suspend);
}
struct nouveau_object *
nouveau_engctx_get(struct nouveau_engine *engine, u64 addr)
{
struct nouveau_engctx *engctx;
unsigned long flags;
spin_lock_irqsave(&engine->lock, flags);
list_for_each_entry(engctx, &engine->contexts, head) {
if (engctx->addr == addr) {
engctx->save = flags;
return nv_object(engctx);
}
}
spin_unlock_irqrestore(&engine->lock, flags);
return NULL;
}
void
nouveau_engctx_put(struct nouveau_object *object)
{
if (object) {
struct nouveau_engine *engine = nv_engine(object->engine);
struct nouveau_engctx *engctx = nv_engctx(object);
spin_unlock_irqrestore(&engine->lock, engctx->save);
}
}
/*
* Copyright 2012 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Ben Skeggs
*/
#include <core/device.h>
#include <core/engine.h>
#include <core/option.h>
int
nouveau_engine_create_(struct nouveau_object *parent,
struct nouveau_object *engobj,
struct nouveau_oclass *oclass, bool enable,
const char *iname, const char *fname,
int length, void **pobject)
{
struct nouveau_engine *engine;
int ret;
ret = nouveau_subdev_create_(parent, engobj, oclass, NV_ENGINE_CLASS,
iname, fname, length, pobject);
engine = *pobject;
if (ret)
return ret;
if (parent) {
struct nouveau_device *device = nv_device(parent);
int engidx = nv_engidx(nv_object(engine));
if (device->disable_mask & (1ULL << engidx)) {
if (!nouveau_boolopt(device->cfgopt, iname, false)) {
nv_debug(engine, "engine disabled by hw/fw\n");
return -ENODEV;
}
nv_warn(engine, "ignoring hw/fw engine disable\n");
}
if (!nouveau_boolopt(device->cfgopt, iname, enable)) {
if (!enable)
nv_warn(engine, "disabled, %s=1 to enable\n", iname);
return -ENODEV;
}
}
INIT_LIST_HEAD(&engine->contexts);
spin_lock_init(&engine->lock);
return 0;
}
/*
* Copyright (C) 2010 Nouveau Project
*
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining
* a copy of this software and associated documentation files (the
* "Software"), to deal in the Software without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sublicense, and/or sell copies of the Software, and to
* permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice (including the
* next paragraph) shall be included in all copies or substantial
* portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
* IN NO EVENT SHALL THE COPYRIGHT OWNER(S) AND/OR ITS SUPPLIERS BE
* LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
* OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
* WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*
*/
#include <core/os.h>
#include <core/enum.h>
const struct nouveau_enum *
nouveau_enum_find(const struct nouveau_enum *en, u32 value)
{
while (en->name) {
if (en->value == value)
return en;
en++;
}
return NULL;
}
const struct nouveau_enum *
nouveau_enum_print(const struct nouveau_enum *en, u32 value)
{
en = nouveau_enum_find(en, value);
if (en)
pr_cont("%s", en->name);
else
pr_cont("(unknown enum 0x%08x)", value);
return en;
}
void
nouveau_bitfield_print(const struct nouveau_bitfield *bf, u32 value)
{
while (bf->name) {
if (value & bf->mask) {
pr_cont(" %s", bf->name);
value &= ~bf->mask;
}
bf++;
}
if (value)
pr_cont(" (unknown bits 0x%08x)", value);
}
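Not part of the diff: an illustrative decode table in the form these helpers expect (a NULL-name-terminated array), with made-up register values; all example_* names are hypothetical and the includes of the file above are assumed.
/* Hypothetical sketch, not from this commit. */
static const struct nouveau_enum example_intr_source[] = {
	{ .value = 0x01, .name = "FIFO" },
	{ .value = 0x02, .name = "GR" },
	{}				/* NULL name terminates the table */
};

static const struct nouveau_bitfield example_intr_status[] = {
	{ .mask = 0x00000001, .name = "PENDING" },
	{ .mask = 0x00000100, .name = "ERROR" },
	{}
};

static void
example_decode(u32 source, u32 status)
{
	pr_info("nouveau: intr source ");	/* no trailing newline ... */
	nouveau_enum_print(example_intr_source, source);	/* ... so pr_cont() appends */
	nouveau_bitfield_print(example_intr_status, status);
	pr_cont("\n");
}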
/*
* Copyright 2013-2014 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <core/object.h>
#include <core/event.h>
void
nvkm_event_put(struct nvkm_event *event, u32 types, int index)
{
assert_spin_locked(&event->refs_lock);
while (types) {
int type = __ffs(types); types &= ~(1 << type);
if (--event->refs[index * event->types_nr + type] == 0) {
if (event->func->fini)
event->func->fini(event, 1 << type, index);
}
}
}
void
nvkm_event_get(struct nvkm_event *event, u32 types, int index)
{
assert_spin_locked(&event->refs_lock);
while (types) {
int type = __ffs(types); types &= ~(1 << type);
if (++event->refs[index * event->types_nr + type] == 1) {
if (event->func->init)
event->func->init(event, 1 << type, index);
}
}
}
void
nvkm_event_send(struct nvkm_event *event, u32 types, int index,
void *data, u32 size)
{
struct nvkm_notify *notify;
unsigned long flags;
if (!event->refs || WARN_ON(index >= event->index_nr))
return;
spin_lock_irqsave(&event->list_lock, flags);
list_for_each_entry(notify, &event->list, head) {
if (notify->index == index && (notify->types & types)) {
if (event->func->send) {
event->func->send(data, size, notify);
continue;
}
nvkm_notify_send(notify, data, size);
}
}
spin_unlock_irqrestore(&event->list_lock, flags);
}
void
nvkm_event_fini(struct nvkm_event *event)
{
if (event->refs) {
kfree(event->refs);
event->refs = NULL;
}
}
int
nvkm_event_init(const struct nvkm_event_func *func, int types_nr, int index_nr,
struct nvkm_event *event)
{
event->refs = kzalloc(sizeof(*event->refs) * index_nr * types_nr,
GFP_KERNEL);
if (!event->refs)
return -ENOMEM;
event->func = func;
event->types_nr = types_nr;
event->index_nr = index_nr;
spin_lock_init(&event->refs_lock);
spin_lock_init(&event->list_lock);
INIT_LIST_HEAD(&event->list);
return 0;
}
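Not part of the diff: a sketch of what an event source built on the code above could look like. The ctor shape follows how notify.c (later on this page) calls event->func->ctor; all example_* names are hypothetical and the includes of the file above are assumed.
/* Hypothetical sketch, not from this commit. */
static int
example_event_ctor(struct nouveau_object *object, void *data, u32 size,
		   struct nvkm_notify *notify)
{
	if (size)			/* this event takes no request payload */
		return -ENOSYS;
	notify->types = 1;		/* one event type ... */
	notify->index = 0;		/* ... on a single unit */
	notify->size  = 0;		/* and no reply payload */
	return 0;
}

static const struct nvkm_event_func
example_event_func = {
	.ctor = example_event_ctor,	/* .init/.fini are optional hooks */
};

static struct nvkm_event example_event;

static int
example_event_setup(void)
{
	/* one type, one index: allocates the 1x1 refcount array */
	return nvkm_event_init(&example_event_func, 1, 1, &example_event);
}

static void
example_event_trigger(void)
{
	/* broadcast to every armed notifier on type 0, index 0 */
	nvkm_event_send(&example_event, 1, 0, NULL, 0);
}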
/*
* Copyright 2012 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Ben Skeggs
*/
#include <core/object.h>
#include <core/gpuobj.h>
#include <subdev/instmem.h>
#include <subdev/bar.h>
#include <subdev/vm.h>
void
nouveau_gpuobj_destroy(struct nouveau_gpuobj *gpuobj)
{
int i;
if (gpuobj->flags & NVOBJ_FLAG_ZERO_FREE) {
for (i = 0; i < gpuobj->size; i += 4)
nv_wo32(gpuobj, i, 0x00000000);
}
if (gpuobj->node) {
nouveau_mm_free(&nv_gpuobj(gpuobj->parent)->heap,
&gpuobj->node);
}
if (gpuobj->heap.block_size)
nouveau_mm_fini(&gpuobj->heap);
nouveau_object_destroy(&gpuobj->base);
}
int
nouveau_gpuobj_create_(struct nouveau_object *parent,
struct nouveau_object *engine,
struct nouveau_oclass *oclass, u32 pclass,
struct nouveau_object *pargpu,
u32 size, u32 align, u32 flags,
int length, void **pobject)
{
struct nouveau_instmem *imem = nouveau_instmem(parent);
struct nouveau_bar *bar = nouveau_bar(parent);
struct nouveau_gpuobj *gpuobj;
struct nouveau_mm *heap = NULL;
int ret, i;
u64 addr;
*pobject = NULL;
if (pargpu) {
while ((pargpu = nv_pclass(pargpu, NV_GPUOBJ_CLASS))) {
if (nv_gpuobj(pargpu)->heap.block_size)
break;
pargpu = pargpu->parent;
}
if (unlikely(pargpu == NULL)) {
nv_error(parent, "no gpuobj heap\n");
return -EINVAL;
}
addr = nv_gpuobj(pargpu)->addr;
heap = &nv_gpuobj(pargpu)->heap;
atomic_inc(&parent->refcount);
} else {
ret = imem->alloc(imem, parent, size, align, &parent);
pargpu = parent;
if (ret)
return ret;
addr = nv_memobj(pargpu)->addr;
size = nv_memobj(pargpu)->size;
if (bar && bar->alloc) {
struct nouveau_instobj *iobj = (void *)parent;
struct nouveau_mem **mem = (void *)(iobj + 1);
struct nouveau_mem *node = *mem;
if (!bar->alloc(bar, parent, node, &pargpu)) {
nouveau_object_ref(NULL, &parent);
parent = pargpu;
}
}
}
ret = nouveau_object_create_(parent, engine, oclass, pclass |
NV_GPUOBJ_CLASS, length, pobject);
nouveau_object_ref(NULL, &parent);
gpuobj = *pobject;
if (ret)
return ret;
gpuobj->parent = pargpu;
gpuobj->flags = flags;
gpuobj->addr = addr;
gpuobj->size = size;
if (heap) {
ret = nouveau_mm_head(heap, 0, 1, size, size,
max(align, (u32)1), &gpuobj->node);
if (ret)
return ret;
gpuobj->addr += gpuobj->node->offset;
}
if (gpuobj->flags & NVOBJ_FLAG_HEAP) {
ret = nouveau_mm_init(&gpuobj->heap, 0, gpuobj->size, 1);
if (ret)
return ret;
}
if (flags & NVOBJ_FLAG_ZERO_ALLOC) {
for (i = 0; i < gpuobj->size; i += 4)
nv_wo32(gpuobj, i, 0x00000000);
}
return ret;
}
struct nouveau_gpuobj_class {
struct nouveau_object *pargpu;
u64 size;
u32 align;
u32 flags;
};
static int
_nouveau_gpuobj_ctor(struct nouveau_object *parent,
struct nouveau_object *engine,
struct nouveau_oclass *oclass, void *data, u32 size,
struct nouveau_object **pobject)
{
struct nouveau_gpuobj_class *args = data;
struct nouveau_gpuobj *object;
int ret;
ret = nouveau_gpuobj_create(parent, engine, oclass, 0, args->pargpu,
args->size, args->align, args->flags,
&object);
*pobject = nv_object(object);
if (ret)
return ret;
return 0;
}
void
_nouveau_gpuobj_dtor(struct nouveau_object *object)
{
nouveau_gpuobj_destroy(nv_gpuobj(object));
}
int
_nouveau_gpuobj_init(struct nouveau_object *object)
{
return nouveau_gpuobj_init(nv_gpuobj(object));
}
int
_nouveau_gpuobj_fini(struct nouveau_object *object, bool suspend)
{
return nouveau_gpuobj_fini(nv_gpuobj(object), suspend);
}
u32
_nouveau_gpuobj_rd32(struct nouveau_object *object, u64 addr)
{
struct nouveau_gpuobj *gpuobj = nv_gpuobj(object);
struct nouveau_ofuncs *pfuncs = nv_ofuncs(gpuobj->parent);
if (gpuobj->node)
addr += gpuobj->node->offset;
return pfuncs->rd32(gpuobj->parent, addr);
}
void
_nouveau_gpuobj_wr32(struct nouveau_object *object, u64 addr, u32 data)
{
struct nouveau_gpuobj *gpuobj = nv_gpuobj(object);
struct nouveau_ofuncs *pfuncs = nv_ofuncs(gpuobj->parent);
if (gpuobj->node)
addr += gpuobj->node->offset;
pfuncs->wr32(gpuobj->parent, addr, data);
}
static struct nouveau_oclass
_nouveau_gpuobj_oclass = {
.handle = 0x00000000,
.ofuncs = &(struct nouveau_ofuncs) {
.ctor = _nouveau_gpuobj_ctor,
.dtor = _nouveau_gpuobj_dtor,
.init = _nouveau_gpuobj_init,
.fini = _nouveau_gpuobj_fini,
.rd32 = _nouveau_gpuobj_rd32,
.wr32 = _nouveau_gpuobj_wr32,
},
};
int
nouveau_gpuobj_new(struct nouveau_object *parent, struct nouveau_object *pargpu,
u32 size, u32 align, u32 flags,
struct nouveau_gpuobj **pgpuobj)
{
struct nouveau_object *engine = parent;
struct nouveau_gpuobj_class args = {
.pargpu = pargpu,
.size = size,
.align = align,
.flags = flags,
};
if (!nv_iclass(engine, NV_SUBDEV_CLASS))
engine = engine->engine;
BUG_ON(engine == NULL);
return nouveau_object_ctor(parent, engine, &_nouveau_gpuobj_oclass,
&args, sizeof(args),
(struct nouveau_object **)pgpuobj);
}
int
nouveau_gpuobj_map(struct nouveau_gpuobj *gpuobj, u32 access,
struct nouveau_vma *vma)
{
struct nouveau_bar *bar = nouveau_bar(gpuobj);
int ret = -EINVAL;
if (bar && bar->umap) {
struct nouveau_instobj *iobj = (void *)
nv_pclass(nv_object(gpuobj), NV_MEMOBJ_CLASS);
struct nouveau_mem **mem = (void *)(iobj + 1);
ret = bar->umap(bar, *mem, access, vma);
}
return ret;
}
int
nouveau_gpuobj_map_vm(struct nouveau_gpuobj *gpuobj, struct nouveau_vm *vm,
u32 access, struct nouveau_vma *vma)
{
struct nouveau_instobj *iobj = (void *)
nv_pclass(nv_object(gpuobj), NV_MEMOBJ_CLASS);
struct nouveau_mem **mem = (void *)(iobj + 1);
int ret;
ret = nouveau_vm_get(vm, gpuobj->size, 12, access, vma);
if (ret)
return ret;
nouveau_vm_map(vma, *mem);
return 0;
}
void
nouveau_gpuobj_unmap(struct nouveau_vma *vma)
{
if (vma->node) {
nouveau_vm_unmap(vma);
nouveau_vm_put(vma);
}
}
/* the below is basically only here to support sharing the paged dma object
* for PCI(E)GART on <=nv4x chipsets, and should *not* be expected to work
* anywhere else.
*/
static void
nouveau_gpudup_dtor(struct nouveau_object *object)
{
struct nouveau_gpuobj *gpuobj = (void *)object;
nouveau_object_ref(NULL, &gpuobj->parent);
nouveau_object_destroy(&gpuobj->base);
}
static struct nouveau_oclass
nouveau_gpudup_oclass = {
.handle = NV_GPUOBJ_CLASS,
.ofuncs = &(struct nouveau_ofuncs) {
.dtor = nouveau_gpudup_dtor,
.init = nouveau_object_init,
.fini = nouveau_object_fini,
},
};
int
nouveau_gpuobj_dup(struct nouveau_object *parent, struct nouveau_gpuobj *base,
struct nouveau_gpuobj **pgpuobj)
{
struct nouveau_gpuobj *gpuobj;
int ret;
ret = nouveau_object_create(parent, parent->engine,
&nouveau_gpudup_oclass, 0, &gpuobj);
*pgpuobj = gpuobj;
if (ret)
return ret;
nouveau_object_ref(nv_object(base), &gpuobj->parent);
gpuobj->addr = base->addr;
gpuobj->size = base->size;
return 0;
}
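Not part of the diff: a minimal sketch of allocating and releasing an instance-memory object with the helpers above. The parent/pargpu objects are assumed to come from a real device context, and example_gpuobj is a hypothetical name.
/* Hypothetical sketch, not from this commit. */
static int
example_gpuobj(struct nouveau_object *parent, struct nouveau_object *pargpu)
{
	struct nouveau_gpuobj *gpuobj = NULL;
	int ret;

	/* 4KiB object, 256-byte aligned, zeroed on allocation */
	ret = nouveau_gpuobj_new(parent, pargpu, 0x1000, 0x100,
				 NVOBJ_FLAG_ZERO_ALLOC, &gpuobj);
	if (ret)
		return ret;

	nv_wo32(gpuobj, 0x000, 0x00000001);	/* write into instance memory */

	/* drop the last reference; assumes the object base is the first
	 * member, as the driver's own typed ref helpers do */
	nouveau_object_ref(NULL, (struct nouveau_object **)&gpuobj);
	return 0;
}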
/*
* Copyright 2012 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Ben Skeggs
*/
#include <core/object.h>
#include <core/handle.h>
#include <core/client.h>
#define hprintk(h,l,f,a...) do { \
struct nouveau_client *c = nouveau_client((h)->object); \
struct nouveau_handle *p = (h)->parent; u32 n = p ? p->name : ~0; \
nv_printk((c), l, "0x%08x:0x%08x "f, n, (h)->name, ##a); \
} while(0)
int
nouveau_handle_init(struct nouveau_handle *handle)
{
struct nouveau_handle *item;
int ret;
hprintk(handle, TRACE, "init running\n");
ret = nouveau_object_inc(handle->object);
if (ret)
return ret;
hprintk(handle, TRACE, "init children\n");
list_for_each_entry(item, &handle->tree, head) {
ret = nouveau_handle_init(item);
if (ret)
goto fail;
}
hprintk(handle, TRACE, "init completed\n");
return 0;
fail:
hprintk(handle, ERROR, "init failed with %d\n", ret);
list_for_each_entry_continue_reverse(item, &handle->tree, head) {
nouveau_handle_fini(item, false);
}
nouveau_object_dec(handle->object, false);
return ret;
}
int
nouveau_handle_fini(struct nouveau_handle *handle, bool suspend)
{
static char *name[2] = { "fini", "suspend" };
struct nouveau_handle *item;
int ret;
hprintk(handle, TRACE, "%s children\n", name[suspend]);
list_for_each_entry(item, &handle->tree, head) {
ret = nouveau_handle_fini(item, suspend);
if (ret && suspend)
goto fail;
}
hprintk(handle, TRACE, "%s running\n", name[suspend]);
if (handle->object) {
ret = nouveau_object_dec(handle->object, suspend);
if (ret && suspend)
goto fail;
}
hprintk(handle, TRACE, "%s completed\n", name[suspend]);
return 0;
fail:
hprintk(handle, ERROR, "%s failed with %d\n", name[suspend], ret);
list_for_each_entry_continue_reverse(item, &handle->tree, head) {
int rret = nouveau_handle_init(item);
if (rret)
hprintk(handle, FATAL, "failed to restart, %d\n", rret);
}
return ret;
}
int
nouveau_handle_create(struct nouveau_object *parent, u32 _parent, u32 _handle,
struct nouveau_object *object,
struct nouveau_handle **phandle)
{
struct nouveau_object *namedb;
struct nouveau_handle *handle;
int ret;
namedb = parent;
while (!nv_iclass(namedb, NV_NAMEDB_CLASS))
namedb = namedb->parent;
handle = kzalloc(sizeof(*handle), GFP_KERNEL);
if (!handle)
return -ENOMEM;
INIT_LIST_HEAD(&handle->head);
INIT_LIST_HEAD(&handle->tree);
handle->name = _handle;
handle->priv = ~0;
ret = nouveau_namedb_insert(nv_namedb(namedb), _handle, object, handle);
if (ret) {
kfree(handle);
return ret;
}
if (nv_parent(parent)->object_attach) {
ret = nv_parent(parent)->object_attach(parent, object, _handle);
if (ret < 0) {
nouveau_handle_destroy(handle);
return ret;
}
handle->priv = ret;
}
if (object != namedb) {
while (!nv_iclass(namedb, NV_CLIENT_CLASS))
namedb = namedb->parent;
handle->parent = nouveau_namedb_get(nv_namedb(namedb), _parent);
if (handle->parent) {
list_add(&handle->head, &handle->parent->tree);
nouveau_namedb_put(handle->parent);
}
}
hprintk(handle, TRACE, "created\n");
*phandle = handle;
return 0;
}
void
nouveau_handle_destroy(struct nouveau_handle *handle)
{
struct nouveau_handle *item, *temp;
hprintk(handle, TRACE, "destroy running\n");
list_for_each_entry_safe(item, temp, &handle->tree, head) {
nouveau_handle_destroy(item);
}
list_del(&handle->head);
if (handle->priv != ~0) {
struct nouveau_object *parent = handle->parent->object;
nv_parent(parent)->object_detach(parent, handle->priv);
}
hprintk(handle, TRACE, "destroy completed\n");
nouveau_namedb_remove(handle);
kfree(handle);
}
struct nouveau_object *
nouveau_handle_ref(struct nouveau_object *parent, u32 name)
{
struct nouveau_object *object = NULL;
struct nouveau_handle *handle;
while (!nv_iclass(parent, NV_NAMEDB_CLASS))
parent = parent->parent;
handle = nouveau_namedb_get(nv_namedb(parent), name);
if (handle) {
nouveau_object_ref(handle->object, &object);
nouveau_namedb_put(handle);
}
return object;
}
struct nouveau_handle *
nouveau_handle_get_class(struct nouveau_object *engctx, u16 oclass)
{
struct nouveau_namedb *namedb;
if (engctx && (namedb = (void *)nv_pclass(engctx, NV_NAMEDB_CLASS)))
return nouveau_namedb_get_class(namedb, oclass);
return NULL;
}
struct nouveau_handle *
nouveau_handle_get_vinst(struct nouveau_object *engctx, u64 vinst)
{
struct nouveau_namedb *namedb;
if (engctx && (namedb = (void *)nv_pclass(engctx, NV_NAMEDB_CLASS)))
return nouveau_namedb_get_vinst(namedb, vinst);
return NULL;
}
struct nouveau_handle *
nouveau_handle_get_cinst(struct nouveau_object *engctx, u32 cinst)
{
struct nouveau_namedb *namedb;
if (engctx && (namedb = (void *)nv_pclass(engctx, NV_NAMEDB_CLASS)))
return nouveau_namedb_get_cinst(namedb, cinst);
return NULL;
}
void
nouveau_handle_put(struct nouveau_handle *handle)
{
if (handle)
nouveau_namedb_put(handle);
}
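Not part of the diff: a short sketch of the lookup side of the handle code above; example_handle_lookup is a hypothetical name.
/* Hypothetical sketch, not from this commit. */
static void
example_handle_lookup(struct nouveau_object *parent, u32 name)
{
	/* walks up to the enclosing namedb and returns the named object
	 * with an extra reference held */
	struct nouveau_object *object = nouveau_handle_ref(parent, name);
	if (object) {
		/* ... use the object ... */
		nouveau_object_ref(NULL, &object);	/* drop that reference */
	}
}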
This diff is collapsed.
/*
* Copyright 2012 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Ben Skeggs
*/
#include "core/os.h"
#include "core/mm.h"
#define node(root, dir) ((root)->nl_entry.dir == &mm->nodes) ? NULL : \
list_entry((root)->nl_entry.dir, struct nouveau_mm_node, nl_entry)
static void
nouveau_mm_dump(struct nouveau_mm *mm, const char *header)
{
struct nouveau_mm_node *node;
printk(KERN_ERR "nouveau: %s\n", header);
printk(KERN_ERR "nouveau: node list:\n");
list_for_each_entry(node, &mm->nodes, nl_entry) {
printk(KERN_ERR "nouveau: \t%08x %08x %d\n",
node->offset, node->length, node->type);
}
printk(KERN_ERR "nouveau: free list:\n");
list_for_each_entry(node, &mm->free, fl_entry) {
printk(KERN_ERR "nouveau: \t%08x %08x %d\n",
node->offset, node->length, node->type);
}
}
void
nouveau_mm_free(struct nouveau_mm *mm, struct nouveau_mm_node **pthis)
{
struct nouveau_mm_node *this = *pthis;
if (this) {
struct nouveau_mm_node *prev = node(this, prev);
struct nouveau_mm_node *next = node(this, next);
if (prev && prev->type == NVKM_MM_TYPE_NONE) {
prev->length += this->length;
list_del(&this->nl_entry);
kfree(this); this = prev;
}
if (next && next->type == NVKM_MM_TYPE_NONE) {
next->offset = this->offset;
next->length += this->length;
if (this->type == NVKM_MM_TYPE_NONE)
list_del(&this->fl_entry);
list_del(&this->nl_entry);
kfree(this); this = NULL;
}
if (this && this->type != NVKM_MM_TYPE_NONE) {
list_for_each_entry(prev, &mm->free, fl_entry) {
if (this->offset < prev->offset)
break;
}
list_add_tail(&this->fl_entry, &prev->fl_entry);
this->type = NVKM_MM_TYPE_NONE;
}
}
*pthis = NULL;
}
static struct nouveau_mm_node *
region_head(struct nouveau_mm *mm, struct nouveau_mm_node *a, u32 size)
{
struct nouveau_mm_node *b;
if (a->length == size)
return a;
b = kmalloc(sizeof(*b), GFP_KERNEL);
if (unlikely(b == NULL))
return NULL;
b->offset = a->offset;
b->length = size;
b->heap = a->heap;
b->type = a->type;
a->offset += size;
a->length -= size;
list_add_tail(&b->nl_entry, &a->nl_entry);
if (b->type == NVKM_MM_TYPE_NONE)
list_add_tail(&b->fl_entry, &a->fl_entry);
return b;
}
int
nouveau_mm_head(struct nouveau_mm *mm, u8 heap, u8 type, u32 size_max,
u32 size_min, u32 align, struct nouveau_mm_node **pnode)
{
struct nouveau_mm_node *prev, *this, *next;
u32 mask = align - 1;
u32 splitoff;
u32 s, e;
BUG_ON(type == NVKM_MM_TYPE_NONE || type == NVKM_MM_TYPE_HOLE);
list_for_each_entry(this, &mm->free, fl_entry) {
if (unlikely(heap != NVKM_MM_HEAP_ANY)) {
if (this->heap != heap)
continue;
}
e = this->offset + this->length;
s = this->offset;
prev = node(this, prev);
if (prev && prev->type != type)
s = roundup(s, mm->block_size);
next = node(this, next);
if (next && next->type != type)
e = rounddown(e, mm->block_size);
s = (s + mask) & ~mask;
e &= ~mask;
if (s > e || e - s < size_min)
continue;
splitoff = s - this->offset;
if (splitoff && !region_head(mm, this, splitoff))
return -ENOMEM;
this = region_head(mm, this, min(size_max, e - s));
if (!this)
return -ENOMEM;
this->type = type;
list_del(&this->fl_entry);
*pnode = this;
return 0;
}
return -ENOSPC;
}
static struct nouveau_mm_node *
region_tail(struct nouveau_mm *mm, struct nouveau_mm_node *a, u32 size)
{
struct nouveau_mm_node *b;
if (a->length == size)
return a;
b = kmalloc(sizeof(*b), GFP_KERNEL);
if (unlikely(b == NULL))
return NULL;
a->length -= size;
b->offset = a->offset + a->length;
b->length = size;
b->heap = a->heap;
b->type = a->type;
list_add(&b->nl_entry, &a->nl_entry);
if (b->type == NVKM_MM_TYPE_NONE)
list_add(&b->fl_entry, &a->fl_entry);
return b;
}
int
nouveau_mm_tail(struct nouveau_mm *mm, u8 heap, u8 type, u32 size_max,
u32 size_min, u32 align, struct nouveau_mm_node **pnode)
{
struct nouveau_mm_node *prev, *this, *next;
u32 mask = align - 1;
BUG_ON(type == NVKM_MM_TYPE_NONE || type == NVKM_MM_TYPE_HOLE);
list_for_each_entry_reverse(this, &mm->free, fl_entry) {
u32 e = this->offset + this->length;
u32 s = this->offset;
u32 c = 0, a;
if (unlikely(heap != NVKM_MM_HEAP_ANY)) {
if (this->heap != heap)
continue;
}
prev = node(this, prev);
if (prev && prev->type != type)
s = roundup(s, mm->block_size);
next = node(this, next);
if (next && next->type != type) {
e = rounddown(e, mm->block_size);
c = next->offset - e;
}
s = (s + mask) & ~mask;
a = e - s;
if (s > e || a < size_min)
continue;
a = min(a, size_max);
s = (e - a) & ~mask;
c += (e - s) - a;
if (c && !region_tail(mm, this, c))
return -ENOMEM;
this = region_tail(mm, this, a);
if (!this)
return -ENOMEM;
this->type = type;
list_del(&this->fl_entry);
*pnode = this;
return 0;
}
return -ENOSPC;
}
int
nouveau_mm_init(struct nouveau_mm *mm, u32 offset, u32 length, u32 block)
{
struct nouveau_mm_node *node, *prev;
u32 next;
if (nouveau_mm_initialised(mm)) {
prev = list_last_entry(&mm->nodes, typeof(*node), nl_entry);
next = prev->offset + prev->length;
if (next != offset) {
BUG_ON(next > offset);
if (!(node = kzalloc(sizeof(*node), GFP_KERNEL)))
return -ENOMEM;
node->type = NVKM_MM_TYPE_HOLE;
node->offset = next;
node->length = offset - next;
list_add_tail(&node->nl_entry, &mm->nodes);
}
BUG_ON(block != mm->block_size);
} else {
INIT_LIST_HEAD(&mm->nodes);
INIT_LIST_HEAD(&mm->free);
mm->block_size = block;
mm->heap_nodes = 0;
}
node = kzalloc(sizeof(*node), GFP_KERNEL);
if (!node)
return -ENOMEM;
if (length) {
node->offset = roundup(offset, mm->block_size);
node->length = rounddown(offset + length, mm->block_size);
node->length -= node->offset;
}
list_add_tail(&node->nl_entry, &mm->nodes);
list_add_tail(&node->fl_entry, &mm->free);
node->heap = ++mm->heap_nodes;
return 0;
}
int
nouveau_mm_fini(struct nouveau_mm *mm)
{
struct nouveau_mm_node *node, *temp;
int nodes = 0;
if (!nouveau_mm_initialised(mm))
return 0;
list_for_each_entry(node, &mm->nodes, nl_entry) {
if (node->type != NVKM_MM_TYPE_HOLE) {
if (++nodes > mm->heap_nodes) {
nouveau_mm_dump(mm, "mm not clean!");
return -EBUSY;
}
}
}
list_for_each_entry_safe(node, temp, &mm->nodes, nl_entry) {
list_del(&node->nl_entry);
kfree(node);
}
mm->heap_nodes = 0;
return 0;
}
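Not part of the diff: an illustrative use of the range allocator above. The sizes are arbitrary, example_mm is a hypothetical name, and the includes of the file above are assumed.
/* Hypothetical sketch, not from this commit. */
static int
example_mm(void)
{
	struct nouveau_mm mm = {};
	struct nouveau_mm_node *node = NULL;
	int ret;

	/* heap covering [0, 1MiB) with a 4KiB minimum block size */
	ret = nouveau_mm_init(&mm, 0, 0x100000, 0x1000);
	if (ret)
		return ret;

	/* carve 64KiB from the head, 4KiB aligned; type 1 is a caller tag
	 * (it must not be NVKM_MM_TYPE_NONE or NVKM_MM_TYPE_HOLE) */
	ret = nouveau_mm_head(&mm, NVKM_MM_HEAP_ANY, 1,
			      0x10000, 0x10000, 0x1000, &node);
	if (ret == 0)
		nouveau_mm_free(&mm, &node);	/* merges back into the free list */

	return nouveau_mm_fini(&mm);		/* -EBUSY if nodes are still allocated */
}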
/*
* Copyright 2012 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Ben Skeggs
*/
#include <core/object.h>
#include <core/namedb.h>
#include <core/handle.h>
#include <core/gpuobj.h>
static struct nouveau_handle *
nouveau_namedb_lookup(struct nouveau_namedb *namedb, u32 name)
{
struct nouveau_handle *handle;
list_for_each_entry(handle, &namedb->list, node) {
if (handle->name == name)
return handle;
}
return NULL;
}
static struct nouveau_handle *
nouveau_namedb_lookup_class(struct nouveau_namedb *namedb, u16 oclass)
{
struct nouveau_handle *handle;
list_for_each_entry(handle, &namedb->list, node) {
if (nv_mclass(handle->object) == oclass)
return handle;
}
return NULL;
}
static struct nouveau_handle *
nouveau_namedb_lookup_vinst(struct nouveau_namedb *namedb, u64 vinst)
{
struct nouveau_handle *handle;
list_for_each_entry(handle, &namedb->list, node) {
if (nv_iclass(handle->object, NV_GPUOBJ_CLASS)) {
if (nv_gpuobj(handle->object)->addr == vinst)
return handle;
}
}
return NULL;
}
static struct nouveau_handle *
nouveau_namedb_lookup_cinst(struct nouveau_namedb *namedb, u32 cinst)
{
struct nouveau_handle *handle;
list_for_each_entry(handle, &namedb->list, node) {
if (nv_iclass(handle->object, NV_GPUOBJ_CLASS)) {
if (nv_gpuobj(handle->object)->node &&
nv_gpuobj(handle->object)->node->offset == cinst)
return handle;
}
}
return NULL;
}
int
nouveau_namedb_insert(struct nouveau_namedb *namedb, u32 name,
struct nouveau_object *object,
struct nouveau_handle *handle)
{
int ret = -EEXIST;
write_lock_irq(&namedb->lock);
if (!nouveau_namedb_lookup(namedb, name)) {
nouveau_object_ref(object, &handle->object);
handle->namedb = namedb;
list_add(&handle->node, &namedb->list);
ret = 0;
}
write_unlock_irq(&namedb->lock);
return ret;
}
void
nouveau_namedb_remove(struct nouveau_handle *handle)
{
struct nouveau_namedb *namedb = handle->namedb;
struct nouveau_object *object = handle->object;
write_lock_irq(&namedb->lock);
list_del(&handle->node);
write_unlock_irq(&namedb->lock);
nouveau_object_ref(NULL, &object);
}
struct nouveau_handle *
nouveau_namedb_get(struct nouveau_namedb *namedb, u32 name)
{
struct nouveau_handle *handle;
read_lock(&namedb->lock);
handle = nouveau_namedb_lookup(namedb, name);
if (handle == NULL)
read_unlock(&namedb->lock);
return handle;
}
struct nouveau_handle *
nouveau_namedb_get_class(struct nouveau_namedb *namedb, u16 oclass)
{
struct nouveau_handle *handle;
read_lock(&namedb->lock);
handle = nouveau_namedb_lookup_class(namedb, oclass);
if (handle == NULL)
read_unlock(&namedb->lock);
return handle;
}
struct nouveau_handle *
nouveau_namedb_get_vinst(struct nouveau_namedb *namedb, u64 vinst)
{
struct nouveau_handle *handle;
read_lock(&namedb->lock);
handle = nouveau_namedb_lookup_vinst(namedb, vinst);
if (handle == NULL)
read_unlock(&namedb->lock);
return handle;
}
struct nouveau_handle *
nouveau_namedb_get_cinst(struct nouveau_namedb *namedb, u32 cinst)
{
struct nouveau_handle *handle;
read_lock(&namedb->lock);
handle = nouveau_namedb_lookup_cinst(namedb, cinst);
if (handle == NULL)
read_unlock(&namedb->lock);
return handle;
}
void
nouveau_namedb_put(struct nouveau_handle *handle)
{
if (handle)
read_unlock(&handle->namedb->lock);
}
int
nouveau_namedb_create_(struct nouveau_object *parent,
struct nouveau_object *engine,
struct nouveau_oclass *oclass, u32 pclass,
struct nouveau_oclass *sclass, u64 engcls,
int length, void **pobject)
{
struct nouveau_namedb *namedb;
int ret;
ret = nouveau_parent_create_(parent, engine, oclass, pclass |
NV_NAMEDB_CLASS, sclass, engcls,
length, pobject);
namedb = *pobject;
if (ret)
return ret;
rwlock_init(&namedb->lock);
INIT_LIST_HEAD(&namedb->list);
return 0;
}
int
_nouveau_namedb_ctor(struct nouveau_object *parent,
struct nouveau_object *engine,
struct nouveau_oclass *oclass, void *data, u32 size,
struct nouveau_object **pobject)
{
struct nouveau_namedb *object;
int ret;
ret = nouveau_namedb_create(parent, engine, oclass, 0, NULL, 0, &object);
*pobject = nv_object(object);
if (ret)
return ret;
return 0;
}
/*
* Copyright 2014 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Ben Skeggs <bskeggs@redhat.com>
*/
#include <core/client.h>
#include <core/event.h>
#include <core/notify.h>
#include <nvif/unpack.h>
#include <nvif/event.h>
static inline void
nvkm_notify_put_locked(struct nvkm_notify *notify)
{
if (notify->block++ == 0)
nvkm_event_put(notify->event, notify->types, notify->index);
}
void
nvkm_notify_put(struct nvkm_notify *notify)
{
struct nvkm_event *event = notify->event;
unsigned long flags;
if (likely(event) &&
test_and_clear_bit(NVKM_NOTIFY_USER, &notify->flags)) {
spin_lock_irqsave(&event->refs_lock, flags);
nvkm_notify_put_locked(notify);
spin_unlock_irqrestore(&event->refs_lock, flags);
if (test_bit(NVKM_NOTIFY_WORK, &notify->flags))
flush_work(&notify->work);
}
}
static inline void
nvkm_notify_get_locked(struct nvkm_notify *notify)
{
if (--notify->block == 0)
nvkm_event_get(notify->event, notify->types, notify->index);
}
void
nvkm_notify_get(struct nvkm_notify *notify)
{
struct nvkm_event *event = notify->event;
unsigned long flags;
if (likely(event) &&
!test_and_set_bit(NVKM_NOTIFY_USER, &notify->flags)) {
spin_lock_irqsave(&event->refs_lock, flags);
nvkm_notify_get_locked(notify);
spin_unlock_irqrestore(&event->refs_lock, flags);
}
}
static inline void
nvkm_notify_func(struct nvkm_notify *notify)
{
struct nvkm_event *event = notify->event;
int ret = notify->func(notify);
unsigned long flags;
if ((ret == NVKM_NOTIFY_KEEP) ||
!test_and_clear_bit(NVKM_NOTIFY_USER, &notify->flags)) {
spin_lock_irqsave(&event->refs_lock, flags);
nvkm_notify_get_locked(notify);
spin_unlock_irqrestore(&event->refs_lock, flags);
}
}
static void
nvkm_notify_work(struct work_struct *work)
{
struct nvkm_notify *notify = container_of(work, typeof(*notify), work);
nvkm_notify_func(notify);
}
void
nvkm_notify_send(struct nvkm_notify *notify, void *data, u32 size)
{
struct nvkm_event *event = notify->event;
unsigned long flags;
assert_spin_locked(&event->list_lock);
BUG_ON(size != notify->size);
spin_lock_irqsave(&event->refs_lock, flags);
if (notify->block) {
spin_unlock_irqrestore(&event->refs_lock, flags);
return;
}
nvkm_notify_put_locked(notify);
spin_unlock_irqrestore(&event->refs_lock, flags);
if (test_bit(NVKM_NOTIFY_WORK, &notify->flags)) {
memcpy((void *)notify->data, data, size);
schedule_work(&notify->work);
} else {
notify->data = data;
nvkm_notify_func(notify);
notify->data = NULL;
}
}
void
nvkm_notify_fini(struct nvkm_notify *notify)
{
unsigned long flags;
if (notify->event) {
nvkm_notify_put(notify);
spin_lock_irqsave(&notify->event->list_lock, flags);
list_del(&notify->head);
spin_unlock_irqrestore(&notify->event->list_lock, flags);
kfree((void *)notify->data);
notify->event = NULL;
}
}
int
nvkm_notify_init(struct nouveau_object *object, struct nvkm_event *event,
int (*func)(struct nvkm_notify *), bool work,
void *data, u32 size, u32 reply,
struct nvkm_notify *notify)
{
unsigned long flags;
int ret = -ENODEV;
if ((notify->event = event), event->refs) {
ret = event->func->ctor(object, data, size, notify);
if (ret == 0 && (ret = -EINVAL, notify->size == reply)) {
notify->flags = 0;
notify->block = 1;
notify->func = func;
notify->data = NULL;
if (ret = 0, work) {
INIT_WORK(&notify->work, nvkm_notify_work);
set_bit(NVKM_NOTIFY_WORK, &notify->flags);
notify->data = kmalloc(reply, GFP_KERNEL);
if (!notify->data)
ret = -ENOMEM;
}
}
if (ret == 0) {
spin_lock_irqsave(&event->list_lock, flags);
list_add_tail(&notify->head, &event->list);
spin_unlock_irqrestore(&event->list_lock, flags);
}
}
if (ret)
notify->event = NULL;
return ret;
}
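Not part of the diff: a sketch of an in-kernel consumer of the notify API above, paired with an event whose ctor accepts an empty request (such as the sketch after event.c earlier on this page); the example_* names are hypothetical.
/* Hypothetical sketch, not from this commit. */
static int
example_notify_func(struct nvkm_notify *notify)
{
	/* notify->data/notify->size hold the payload given to _send() */
	return NVKM_NOTIFY_KEEP;	/* stay armed for the next event */
}

static int
example_notify_setup(struct nouveau_object *object, struct nvkm_event *event,
		     struct nvkm_notify *notify)
{
	/* no request payload, no reply payload, callback in sender context */
	int ret = nvkm_notify_init(object, event, example_notify_func, false,
				   NULL, 0, 0, notify);
	if (ret == 0)
		nvkm_notify_get(notify);	/* enable delivery */
	return ret;			/* tear down later with nvkm_notify_fini() */
}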
/*
* Copyright 2012 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Ben Skeggs
*/
#include <core/object.h>
#include <core/engine.h>
#ifdef NOUVEAU_OBJECT_MAGIC
static struct list_head _objlist = LIST_HEAD_INIT(_objlist);
static DEFINE_SPINLOCK(_objlist_lock);
#endif
int
nouveau_object_create_(struct nouveau_object *parent,
struct nouveau_object *engine,
struct nouveau_oclass *oclass, u32 pclass,
int size, void **pobject)
{
struct nouveau_object *object;
object = *pobject = kzalloc(size, GFP_KERNEL);
if (!object)
return -ENOMEM;
nouveau_object_ref(parent, &object->parent);
nouveau_object_ref(engine, &object->engine);
object->oclass = oclass;
object->oclass->handle |= pclass;
atomic_set(&object->refcount, 1);
atomic_set(&object->usecount, 0);
#ifdef NOUVEAU_OBJECT_MAGIC
object->_magic = NOUVEAU_OBJECT_MAGIC;
spin_lock(&_objlist_lock);
list_add(&object->list, &_objlist);
spin_unlock(&_objlist_lock);
#endif
return 0;
}
int
_nouveau_object_ctor(struct nouveau_object *parent,
struct nouveau_object *engine,
struct nouveau_oclass *oclass, void *data, u32 size,
struct nouveau_object **pobject)
{
if (size != 0)
return -ENOSYS;
return nouveau_object_create(parent, engine, oclass, 0, pobject);
}
void
nouveau_object_destroy(struct nouveau_object *object)
{
#ifdef NOUVEAU_OBJECT_MAGIC
spin_lock(&_objlist_lock);
list_del(&object->list);
spin_unlock(&_objlist_lock);
#endif
nouveau_object_ref(NULL, &object->engine);
nouveau_object_ref(NULL, &object->parent);
kfree(object);
}
int
nouveau_object_init(struct nouveau_object *object)
{
return 0;
}
int
nouveau_object_fini(struct nouveau_object *object, bool suspend)
{
return 0;
}
struct nouveau_ofuncs
nouveau_object_ofuncs = {
.ctor = _nouveau_object_ctor,
.dtor = nouveau_object_destroy,
.init = nouveau_object_init,
.fini = nouveau_object_fini,
};
int
nouveau_object_ctor(struct nouveau_object *parent,
struct nouveau_object *engine,
struct nouveau_oclass *oclass, void *data, u32 size,
struct nouveau_object **pobject)
{
struct nouveau_ofuncs *ofuncs = oclass->ofuncs;
struct nouveau_object *object = NULL;
int ret;
ret = ofuncs->ctor(parent, engine, oclass, data, size, &object);
*pobject = object;
if (ret < 0) {
if (ret != -ENODEV) {
nv_error(parent, "failed to create 0x%08x, %d\n",
oclass->handle, ret);
}
if (object) {
ofuncs->dtor(object);
*pobject = NULL;
}
return ret;
}
if (ret == 0) {
nv_trace(object, "created\n");
atomic_set(&object->refcount, 1);
}
return 0;
}
static void
nouveau_object_dtor(struct nouveau_object *object)
{
nv_trace(object, "destroying\n");
nv_ofuncs(object)->dtor(object);
}
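/* Reference-move helper: takes a reference on @obj (if any), drops the
 * reference previously held in *@ref (destroying that object when its
 * refcount reaches zero), and stores @obj in *@ref.  Passing NULL as
 * @obj simply releases the existing reference.
 */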
void
nouveau_object_ref(struct nouveau_object *obj, struct nouveau_object **ref)
{
if (obj) {
atomic_inc(&obj->refcount);
nv_trace(obj, "inc() == %d\n", atomic_read(&obj->refcount));
}
if (*ref) {
int dead = atomic_dec_and_test(&(*ref)->refcount);
nv_trace(*ref, "dec() == %d\n", atomic_read(&(*ref)->refcount));
if (dead)
nouveau_object_dtor(*ref);
}
*ref = obj;
}
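/* Usecount-based initialisation: the first nouveau_object_inc() on an
 * object brings up its parent, then its engine (under the engine's
 * subdev mutex), then the object itself via ofuncs->init().  Any
 * failure is unwound in reverse order.
 */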
int
nouveau_object_inc(struct nouveau_object *object)
{
int ref = atomic_add_return(1, &object->usecount);
int ret;
nv_trace(object, "use(+1) == %d\n", atomic_read(&object->usecount));
if (ref != 1)
return 0;
nv_trace(object, "initialising...\n");
if (object->parent) {
ret = nouveau_object_inc(object->parent);
if (ret) {
nv_error(object, "parent failed, %d\n", ret);
goto fail_parent;
}
}
if (object->engine) {
mutex_lock(&nv_subdev(object->engine)->mutex);
ret = nouveau_object_inc(object->engine);
mutex_unlock(&nv_subdev(object->engine)->mutex);
if (ret) {
nv_error(object, "engine failed, %d\n", ret);
goto fail_engine;
}
}
ret = nv_ofuncs(object)->init(object);
atomic_set(&object->usecount, 1);
if (ret) {
nv_error(object, "init failed, %d\n", ret);
goto fail_self;
}
nv_trace(object, "initialised\n");
return 0;
fail_self:
if (object->engine) {
mutex_lock(&nv_subdev(object->engine)->mutex);
nouveau_object_dec(object->engine, false);
mutex_unlock(&nv_subdev(object->engine)->mutex);
}
fail_engine:
if (object->parent)
nouveau_object_dec(object->parent, false);
fail_parent:
atomic_dec(&object->usecount);
return ret;
}
static int
nouveau_object_decf(struct nouveau_object *object)
{
int ret;
nv_trace(object, "stopping...\n");
ret = nv_ofuncs(object)->fini(object, false);
atomic_set(&object->usecount, 0);
if (ret)
nv_warn(object, "failed fini, %d\n", ret);
if (object->engine) {
mutex_lock(&nv_subdev(object->engine)->mutex);
nouveau_object_dec(object->engine, false);
mutex_unlock(&nv_subdev(object->engine)->mutex);
}
if (object->parent)
nouveau_object_dec(object->parent, false);
nv_trace(object, "stopped\n");
return 0;
}
static int
nouveau_object_decs(struct nouveau_object *object)
{
int ret, rret;
nv_trace(object, "suspending...\n");
ret = nv_ofuncs(object)->fini(object, true);
atomic_set(&object->usecount, 0);
if (ret) {
nv_error(object, "failed suspend, %d\n", ret);
return ret;
}
if (object->engine) {
mutex_lock(&nv_subdev(object->engine)->mutex);
ret = nouveau_object_dec(object->engine, true);
mutex_unlock(&nv_subdev(object->engine)->mutex);
if (ret) {
nv_warn(object, "engine failed suspend, %d\n", ret);
goto fail_engine;
}
}
if (object->parent) {
ret = nouveau_object_dec(object->parent, true);
if (ret) {
nv_warn(object, "parent failed suspend, %d\n", ret);
goto fail_parent;
}
}
nv_trace(object, "suspended\n");
return 0;
fail_parent:
if (object->engine) {
mutex_lock(&nv_subdev(object->engine)->mutex);
rret = nouveau_object_inc(object->engine);
mutex_unlock(&nv_subdev(object->engine)->mutex);
if (rret)
nv_fatal(object, "engine failed to reinit, %d\n", rret);
}
fail_engine:
rret = nv_ofuncs(object)->init(object);
if (rret)
nv_fatal(object, "failed to reinit, %d\n", rret);
return ret;
}
int
nouveau_object_dec(struct nouveau_object *object, bool suspend)
{
int ref = atomic_add_return(-1, &object->usecount);
int ret;
nv_trace(object, "use(-1) == %d\n", atomic_read(&object->usecount));
if (ref == 0) {
if (suspend)
ret = nouveau_object_decs(object);
else
ret = nouveau_object_decf(object);
if (ret) {
atomic_inc(&object->usecount);
return ret;
}
}
return 0;
}
void
nouveau_object_debug(void)
{
#ifdef NOUVEAU_OBJECT_MAGIC
struct nouveau_object *object;
if (!list_empty(&_objlist)) {
nv_fatal(NULL, "*******************************************\n");
nv_fatal(NULL, "* AIIIII! object(s) still exist!!!\n");
nv_fatal(NULL, "*******************************************\n");
list_for_each_entry(object, &_objlist, list) {
nv_fatal(object, "%p/%p/%d/%d\n",
object->parent, object->engine,
atomic_read(&object->refcount),
atomic_read(&object->usecount));
}
}
#endif
}
/*
* Copyright 2012 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Ben Skeggs
*/
#include <core/option.h>
#include <core/debug.h>
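/* Scan a comma-separated "name=value" option string for @opt.  On a
 * match, returns a pointer to the value within @optstr and stores its
 * length in *@arglen; returns NULL when the option is absent or the
 * value is empty.
 */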
const char *
nouveau_stropt(const char *optstr, const char *opt, int *arglen)
{
while (optstr && *optstr != '\0') {
int len = strcspn(optstr, ",=");
switch (optstr[len]) {
case '=':
if (!strncasecmpz(optstr, opt, len)) {
optstr += len + 1;
*arglen = strcspn(optstr, ",=");
return *arglen ? optstr : NULL;
}
optstr++;
break;
case ',':
optstr++;
break;
default:
break;
}
optstr += len;
}
return NULL;
}
bool
nouveau_boolopt(const char *optstr, const char *opt, bool value)
{
int arglen;
optstr = nouveau_stropt(optstr, opt, &arglen);
if (optstr) {
if (!strncasecmpz(optstr, "0", arglen) ||
!strncasecmpz(optstr, "no", arglen) ||
!strncasecmpz(optstr, "off", arglen) ||
!strncasecmpz(optstr, "false", arglen))
value = false;
else
if (!strncasecmpz(optstr, "1", arglen) ||
!strncasecmpz(optstr, "yes", arglen) ||
!strncasecmpz(optstr, "on", arglen) ||
!strncasecmpz(optstr, "true", arglen))
value = true;
}
return value;
}
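/* Parse the debug level for subdev @sub from a config string of the
 * form "level" or "subdev=level[,subdev=level,...]".  A bare level
 * applies to every subdev; a "subdev=" prefix restricts the level that
 * follows it to that subdev.  Defaults to CONFIG_NOUVEAU_DEBUG_DEFAULT.
 */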
int
nouveau_dbgopt(const char *optstr, const char *sub)
{
int mode = 1, level = CONFIG_NOUVEAU_DEBUG_DEFAULT;
while (optstr) {
int len = strcspn(optstr, ",=");
switch (optstr[len]) {
case '=':
if (strncasecmpz(optstr, sub, len))
mode = 0;
optstr++;
break;
default:
if (mode) {
if (!strncasecmpz(optstr, "fatal", len))
level = NV_DBG_FATAL;
else if (!strncasecmpz(optstr, "error", len))
level = NV_DBG_ERROR;
else if (!strncasecmpz(optstr, "warn", len))
level = NV_DBG_WARN;
else if (!strncasecmpz(optstr, "info", len))
level = NV_DBG_INFO_NORMAL;
else if (!strncasecmpz(optstr, "debug", len))
level = NV_DBG_DEBUG;
else if (!strncasecmpz(optstr, "trace", len))
level = NV_DBG_TRACE;
else if (!strncasecmpz(optstr, "paranoia", len))
level = NV_DBG_PARANOIA;
else if (!strncasecmpz(optstr, "spam", len))
level = NV_DBG_SPAM;
}
if (optstr[len] != '\0') {
optstr++;
mode = 1;
break;
}
return level;
}
optstr += len;
}
return level;
}
/*
* Copyright 2012 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Ben Skeggs
*/
#include <core/object.h>
#include <core/parent.h>
#include <core/client.h>
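/* Resolve class @handle into the engine/oclass pair used to construct
 * it: the parent's explicit sclass list is searched first, then the
 * sclass tables of every engine enabled in the parent's engine mask.
 */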
int
nouveau_parent_sclass(struct nouveau_object *parent, u16 handle,
struct nouveau_object **pengine,
struct nouveau_oclass **poclass)
{
struct nouveau_sclass *sclass;
struct nouveau_engine *engine;
struct nouveau_oclass *oclass;
u64 mask;
sclass = nv_parent(parent)->sclass;
while (sclass) {
if ((sclass->oclass->handle & 0xffff) == handle) {
*pengine = parent->engine;
*poclass = sclass->oclass;
return 0;
}
sclass = sclass->sclass;
}
mask = nv_parent(parent)->engine;
while (mask) {
int i = __ffs64(mask);
if (nv_iclass(parent, NV_CLIENT_CLASS))
engine = nv_engine(nv_client(parent)->device);
else
engine = nouveau_engine(parent, i);
if (engine) {
oclass = engine->sclass;
while (oclass->ofuncs) {
if ((oclass->handle & 0xffff) == handle) {
*pengine = nv_object(engine);
*poclass = oclass;
return 0;
}
oclass++;
}
}
mask &= ~(1ULL << i);
}
return -EINVAL;
}
int
nouveau_parent_lclass(struct nouveau_object *parent, u32 *lclass, int size)
{
struct nouveau_sclass *sclass;
struct nouveau_engine *engine;
struct nouveau_oclass *oclass;
int nr = -1, i;
u64 mask;
sclass = nv_parent(parent)->sclass;
while (sclass) {
if (++nr < size)
lclass[nr] = sclass->oclass->handle & 0xffff;
sclass = sclass->sclass;
}
mask = nv_parent(parent)->engine;
while (i = __ffs64(mask), mask) {
engine = nouveau_engine(parent, i);
if (engine && (oclass = engine->sclass)) {
while (oclass->ofuncs) {
if (++nr < size)
lclass[nr] = oclass->handle & 0xffff;
oclass++;
}
}
mask &= ~(1ULL << i);
}
return nr + 1;
}
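/* Parent objects carry the list of classes their children may be
 * created with: each entry of the static @sclass array is copied into
 * a heap-allocated nouveau_sclass node, and @engcls records which
 * engines' class tables are additionally exposed.
 */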
int
nouveau_parent_create_(struct nouveau_object *parent,
struct nouveau_object *engine,
struct nouveau_oclass *oclass, u32 pclass,
struct nouveau_oclass *sclass, u64 engcls,
int size, void **pobject)
{
struct nouveau_parent *object;
struct nouveau_sclass *nclass;
int ret;
ret = nouveau_object_create_(parent, engine, oclass, pclass |
NV_PARENT_CLASS, size, pobject);
object = *pobject;
if (ret)
return ret;
while (sclass && sclass->ofuncs) {
nclass = kzalloc(sizeof(*nclass), GFP_KERNEL);
if (!nclass)
return -ENOMEM;
nclass->sclass = object->sclass;
object->sclass = nclass;
nclass->engine = engine ? nv_engine(engine) : NULL;
nclass->oclass = sclass;
sclass++;
}
object->engine = engcls;
return 0;
}
void
nouveau_parent_destroy(struct nouveau_parent *parent)
{
struct nouveau_sclass *sclass;
while ((sclass = parent->sclass)) {
parent->sclass = sclass->sclass;
kfree(sclass);
}
nouveau_object_destroy(&parent->base);
}
void
_nouveau_parent_dtor(struct nouveau_object *object)
{
nouveau_parent_destroy(nv_parent(object));
}
/*
* Copyright 2012 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Ben Skeggs
*/
#include <core/object.h>
#include <core/client.h>
#include <core/subdev.h>
#include <core/printk.h>
int nv_info_debug_level = NV_DBG_INFO_NORMAL;
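/* Central logging helper: maps the NV_DBG_* level onto a KERN_* prefix,
 * prepends the subdev/device (or client) names, and drops the message
 * entirely when the level exceeds the subdev's or client's configured
 * debug threshold.
 */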
void
nv_printk_(struct nouveau_object *object, int level, const char *fmt, ...)
{
static const char name[] = { '!', 'E', 'W', ' ', 'D', 'T', 'P', 'S' };
const char *pfx;
char mfmt[256];
va_list args;
switch (level) {
case NV_DBG_FATAL:
pfx = KERN_CRIT;
break;
case NV_DBG_ERROR:
pfx = KERN_ERR;
break;
case NV_DBG_WARN:
pfx = KERN_WARNING;
break;
case NV_DBG_INFO_NORMAL:
pfx = KERN_INFO;
break;
case NV_DBG_DEBUG:
case NV_DBG_PARANOIA:
case NV_DBG_TRACE:
case NV_DBG_SPAM:
default:
pfx = KERN_DEBUG;
break;
}
if (object && !nv_iclass(object, NV_CLIENT_CLASS)) {
struct nouveau_object *device = object;
struct nouveau_object *subdev = object;
char obuf[64], *ofmt = "";
if (object->engine) {
snprintf(obuf, sizeof(obuf), "[0x%08x][%p]",
nv_hclass(object), object);
ofmt = obuf;
subdev = object->engine;
device = object->engine;
}
if (subdev->parent)
device = subdev->parent;
if (level > nv_subdev(subdev)->debug)
return;
snprintf(mfmt, sizeof(mfmt), "%snouveau %c[%8s][%s]%s %s", pfx,
name[level], nv_subdev(subdev)->name,
nv_device(device)->name, ofmt, fmt);
} else
if (object && nv_iclass(object, NV_CLIENT_CLASS)) {
if (level > nv_client(object)->debug)
return;
snprintf(mfmt, sizeof(mfmt), "%snouveau %c[%8s] %s", pfx,
name[level], nv_client(object)->name, fmt);
} else {
snprintf(mfmt, sizeof(mfmt), "%snouveau: %s", pfx, fmt);
}
va_start(args, fmt);
vprintk(mfmt, args);
va_end(args);
}
/*
* Copyright 2012 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <core/object.h>
#include <core/ramht.h>
#include <subdev/bar.h>
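/* RAMHT hash: fold the object handle ramht->bits at a time, mix in the
 * channel id, and multiply by 8 to get the byte offset of a hash-table
 * entry (each entry is two 32-bit words: handle and context).
 */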
static u32
nouveau_ramht_hash(struct nouveau_ramht *ramht, int chid, u32 handle)
{
u32 hash = 0;
while (handle) {
hash ^= (handle & ((1 << ramht->bits) - 1));
handle >>= ramht->bits;
}
hash ^= chid << (ramht->bits - 4);
hash = hash << 3;
return hash;
}
int
nouveau_ramht_insert(struct nouveau_ramht *ramht, int chid,
u32 handle, u32 context)
{
struct nouveau_bar *bar = nouveau_bar(ramht);
u32 co, ho;
co = ho = nouveau_ramht_hash(ramht, chid, handle);
do {
if (!nv_ro32(ramht, co + 4)) {
nv_wo32(ramht, co + 0, handle);
nv_wo32(ramht, co + 4, context);
if (bar)
bar->flush(bar);
return co;
}
co += 8;
if (co >= nv_gpuobj(ramht)->size)
co = 0;
} while (co != ho);
return -ENOMEM;
}
void
nouveau_ramht_remove(struct nouveau_ramht *ramht, int cookie)
{
struct nouveau_bar *bar = nouveau_bar(ramht);
nv_wo32(ramht, cookie + 0, 0x00000000);
nv_wo32(ramht, cookie + 4, 0x00000000);
if (bar)
bar->flush(bar);
}
static struct nouveau_oclass
nouveau_ramht_oclass = {
.handle = 0x0000abcd,
.ofuncs = &(struct nouveau_ofuncs) {
.ctor = NULL,
.dtor = _nouveau_gpuobj_dtor,
.init = _nouveau_gpuobj_init,
.fini = _nouveau_gpuobj_fini,
.rd32 = _nouveau_gpuobj_rd32,
.wr32 = _nouveau_gpuobj_wr32,
},
};
int
nouveau_ramht_new(struct nouveau_object *parent, struct nouveau_object *pargpu,
u32 size, u32 align, struct nouveau_ramht **pramht)
{
struct nouveau_ramht *ramht;
int ret;
ret = nouveau_gpuobj_create(parent, parent->engine ?
parent->engine : parent, /* <nv50 ramht */
&nouveau_ramht_oclass, 0, pargpu, size,
align, NVOBJ_FLAG_ZERO_ALLOC, &ramht);
*pramht = ramht;
if (ret)
return ret;
ramht->bits = order_base_2(nv_gpuobj(ramht)->size >> 3);
return 0;
}
/*
* Copyright 2012 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Ben Skeggs
*/
#include <core/object.h>
#include <core/subdev.h>
#include <core/device.h>
#include <core/option.h>
void
nouveau_subdev_reset(struct nouveau_object *subdev)
{
nv_trace(subdev, "resetting...\n");
nv_ofuncs(subdev)->fini(subdev, false);
nv_debug(subdev, "reset\n");
}
int
nouveau_subdev_init(struct nouveau_subdev *subdev)
{
int ret = nouveau_object_init(&subdev->base);
if (ret)
return ret;
nouveau_subdev_reset(&subdev->base);
return 0;
}
int
_nouveau_subdev_init(struct nouveau_object *object)
{
return nouveau_subdev_init(nv_subdev(object));
}
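/* If the subdev owns bits in the PMC enable register (0x000200), pulse
 * them low then high to put the unit back into a known state before
 * the generic object fini runs.
 */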
int
nouveau_subdev_fini(struct nouveau_subdev *subdev, bool suspend)
{
if (subdev->unit) {
nv_mask(subdev, 0x000200, subdev->unit, 0x00000000);
nv_mask(subdev, 0x000200, subdev->unit, subdev->unit);
}
return nouveau_object_fini(&subdev->base, suspend);
}
int
_nouveau_subdev_fini(struct nouveau_object *object, bool suspend)
{
return nouveau_subdev_fini(nv_subdev(object), suspend);
}
void
nouveau_subdev_destroy(struct nouveau_subdev *subdev)
{
int subidx = nv_hclass(subdev) & 0xff;
nv_device(subdev)->subdev[subidx] = NULL;
nouveau_object_destroy(&subdev->base);
}
void
_nouveau_subdev_dtor(struct nouveau_object *object)
{
nouveau_subdev_destroy(nv_subdev(object));
}
int
nouveau_subdev_create_(struct nouveau_object *parent,
struct nouveau_object *engine,
struct nouveau_oclass *oclass, u32 pclass,
const char *subname, const char *sysname,
int size, void **pobject)
{
struct nouveau_subdev *subdev;
int ret;
ret = nouveau_object_create_(parent, engine, oclass, pclass |
NV_SUBDEV_CLASS, size, pobject);
subdev = *pobject;
if (ret)
return ret;
__mutex_init(&subdev->mutex, subname, &oclass->lock_class_key);
subdev->name = subname;
if (parent) {
struct nouveau_device *device = nv_device(parent);
subdev->debug = nouveau_dbgopt(device->dbgopt, subname);
subdev->mmio = nv_subdev(device)->mmio;
}
return 0;
}
/*
* Copyright 2012 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Ben Skeggs, Ilia Mirkin
*/
#include <engine/xtensa.h>
#include <engine/bsp.h>
/*******************************************************************************
* BSP object classes
******************************************************************************/
static struct nouveau_oclass
nv84_bsp_sclass[] = {
{ 0x74b0, &nouveau_object_ofuncs },
{},
};
/*******************************************************************************
* BSP context
******************************************************************************/
static struct nouveau_oclass
nv84_bsp_cclass = {
.handle = NV_ENGCTX(BSP, 0x84),
.ofuncs = &(struct nouveau_ofuncs) {
.ctor = _nouveau_xtensa_engctx_ctor,
.dtor = _nouveau_engctx_dtor,
.init = _nouveau_engctx_init,
.fini = _nouveau_engctx_fini,
.rd32 = _nouveau_engctx_rd32,
.wr32 = _nouveau_engctx_wr32,
},
};
/*******************************************************************************
* BSP engine/subdev functions
******************************************************************************/
static int
nv84_bsp_ctor(struct nouveau_object *parent, struct nouveau_object *engine,
struct nouveau_oclass *oclass, void *data, u32 size,
struct nouveau_object **pobject)
{
struct nouveau_xtensa *priv;
int ret;
ret = nouveau_xtensa_create(parent, engine, oclass, 0x103000, true,
"PBSP", "bsp", &priv);
*pobject = nv_object(priv);
if (ret)
return ret;
nv_subdev(priv)->unit = 0x04008000;
nv_engine(priv)->cclass = &nv84_bsp_cclass;
nv_engine(priv)->sclass = nv84_bsp_sclass;
priv->fifo_val = 0x1111;
priv->unkd28 = 0x90044;
return 0;
}
struct nouveau_oclass
nv84_bsp_oclass = {
.handle = NV_ENGINE(BSP, 0x84),
.ofuncs = &(struct nouveau_ofuncs) {
.ctor = nv84_bsp_ctor,
.dtor = _nouveau_xtensa_dtor,
.init = _nouveau_xtensa_init,
.fini = _nouveau_xtensa_fini,
.rd32 = _nouveau_xtensa_rd32,
.wr32 = _nouveau_xtensa_wr32,
},
};
/*
* Copyright 2012 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Ben Skeggs, Maarten Lankhorst, Ilia Mirkin
*/
#include <engine/falcon.h>
#include <engine/bsp.h>
struct nv98_bsp_priv {
struct nouveau_falcon base;
};
/*******************************************************************************
* BSP object classes
******************************************************************************/
static struct nouveau_oclass
nv98_bsp_sclass[] = {
{ 0x88b1, &nouveau_object_ofuncs },
{ 0x85b1, &nouveau_object_ofuncs },
{ 0x86b1, &nouveau_object_ofuncs },
{},
};
/*******************************************************************************
* PBSP context
******************************************************************************/
static struct nouveau_oclass
nv98_bsp_cclass = {
.handle = NV_ENGCTX(BSP, 0x98),
.ofuncs = &(struct nouveau_ofuncs) {
.ctor = _nouveau_falcon_context_ctor,
.dtor = _nouveau_falcon_context_dtor,
.init = _nouveau_falcon_context_init,
.fini = _nouveau_falcon_context_fini,
.rd32 = _nouveau_falcon_context_rd32,
.wr32 = _nouveau_falcon_context_wr32,
},
};
/*******************************************************************************
* PBSP engine/subdev functions
******************************************************************************/
static int
nv98_bsp_init(struct nouveau_object *object)
{
struct nv98_bsp_priv *priv = (void *)object;
int ret;
ret = nouveau_falcon_init(&priv->base);
if (ret)
return ret;
nv_wr32(priv, 0x084010, 0x0000ffd2);
nv_wr32(priv, 0x08401c, 0x0000fff2);
return 0;
}
static int
nv98_bsp_ctor(struct nouveau_object *parent, struct nouveau_object *engine,
struct nouveau_oclass *oclass, void *data, u32 size,
struct nouveau_object **pobject)
{
struct nv98_bsp_priv *priv;
int ret;
ret = nouveau_falcon_create(parent, engine, oclass, 0x084000, true,
"PBSP", "bsp", &priv);
*pobject = nv_object(priv);
if (ret)
return ret;
nv_subdev(priv)->unit = 0x04008000;
nv_engine(priv)->cclass = &nv98_bsp_cclass;
nv_engine(priv)->sclass = nv98_bsp_sclass;
return 0;
}
struct nouveau_oclass
nv98_bsp_oclass = {
.handle = NV_ENGINE(BSP, 0x98),
.ofuncs = &(struct nouveau_ofuncs) {
.ctor = nv98_bsp_ctor,
.dtor = _nouveau_falcon_dtor,
.init = nv98_bsp_init,
.fini = _nouveau_falcon_fini,
.rd32 = _nouveau_falcon_rd32,
.wr32 = _nouveau_falcon_wr32,
},
};
/*
* Copyright 2012 Maarten Lankhorst
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Maarten Lankhorst
*/
#include <engine/falcon.h>
#include <engine/bsp.h>
struct nvc0_bsp_priv {
struct nouveau_falcon base;
};
/*******************************************************************************
* BSP object classes
******************************************************************************/
static struct nouveau_oclass
nvc0_bsp_sclass[] = {
{ 0x90b1, &nouveau_object_ofuncs },
{},
};
/*******************************************************************************
* PBSP context
******************************************************************************/
static struct nouveau_oclass
nvc0_bsp_cclass = {
.handle = NV_ENGCTX(BSP, 0xc0),
.ofuncs = &(struct nouveau_ofuncs) {
.ctor = _nouveau_falcon_context_ctor,
.dtor = _nouveau_falcon_context_dtor,
.init = _nouveau_falcon_context_init,
.fini = _nouveau_falcon_context_fini,
.rd32 = _nouveau_falcon_context_rd32,
.wr32 = _nouveau_falcon_context_wr32,
},
};
/*******************************************************************************
* PBSP engine/subdev functions
******************************************************************************/
static int
nvc0_bsp_init(struct nouveau_object *object)
{
struct nvc0_bsp_priv *priv = (void *)object;
int ret;
ret = nouveau_falcon_init(&priv->base);
if (ret)
return ret;
nv_wr32(priv, 0x084010, 0x0000fff2);
nv_wr32(priv, 0x08401c, 0x0000fff2);
return 0;
}
static int
nvc0_bsp_ctor(struct nouveau_object *parent, struct nouveau_object *engine,
struct nouveau_oclass *oclass, void *data, u32 size,
struct nouveau_object **pobject)
{
struct nvc0_bsp_priv *priv;
int ret;
ret = nouveau_falcon_create(parent, engine, oclass, 0x084000, true,
"PBSP", "bsp", &priv);
*pobject = nv_object(priv);
if (ret)
return ret;
nv_subdev(priv)->unit = 0x00008000;
nv_subdev(priv)->intr = nouveau_falcon_intr;
nv_engine(priv)->cclass = &nvc0_bsp_cclass;
nv_engine(priv)->sclass = nvc0_bsp_sclass;
return 0;
}
struct nouveau_oclass
nvc0_bsp_oclass = {
.handle = NV_ENGINE(BSP, 0xc0),
.ofuncs = &(struct nouveau_ofuncs) {
.ctor = nvc0_bsp_ctor,
.dtor = _nouveau_falcon_dtor,
.init = nvc0_bsp_init,
.fini = _nouveau_falcon_fini,
.rd32 = _nouveau_falcon_rd32,
.wr32 = _nouveau_falcon_wr32,
},
};
/*
* Copyright 2012 Red Hat Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors: Ben Skeggs
*/
#include <engine/falcon.h>
#include <engine/bsp.h>
struct nve0_bsp_priv {
struct nouveau_falcon base;
};
/*******************************************************************************
* BSP object classes
******************************************************************************/
static struct nouveau_oclass
nve0_bsp_sclass[] = {
{ 0x95b1, &nouveau_object_ofuncs },
{},
};
/*******************************************************************************
* PBSP context
******************************************************************************/
static struct nouveau_oclass
nve0_bsp_cclass = {
.handle = NV_ENGCTX(BSP, 0xe0),
.ofuncs = &(struct nouveau_ofuncs) {
.ctor = _nouveau_falcon_context_ctor,
.dtor = _nouveau_falcon_context_dtor,
.init = _nouveau_falcon_context_init,
.fini = _nouveau_falcon_context_fini,
.rd32 = _nouveau_falcon_context_rd32,
.wr32 = _nouveau_falcon_context_wr32,
},
};
/*******************************************************************************
* PBSP engine/subdev functions
******************************************************************************/
static int
nve0_bsp_init(struct nouveau_object *object)
{
struct nve0_bsp_priv *priv = (void *)object;
int ret;
ret = nouveau_falcon_init(&priv->base);
if (ret)
return ret;
nv_wr32(priv, 0x084010, 0x0000fff2);
nv_wr32(priv, 0x08401c, 0x0000fff2);
return 0;
}
static int
nve0_bsp_ctor(struct nouveau_object *parent, struct nouveau_object *engine,
struct nouveau_oclass *oclass, void *data, u32 size,
struct nouveau_object **pobject)
{
struct nve0_bsp_priv *priv;
int ret;
ret = nouveau_falcon_create(parent, engine, oclass, 0x084000, true,
"PBSP", "bsp", &priv);
*pobject = nv_object(priv);
if (ret)
return ret;
nv_subdev(priv)->unit = 0x00008000;
nv_subdev(priv)->intr = nouveau_falcon_intr;
nv_engine(priv)->cclass = &nve0_bsp_cclass;
nv_engine(priv)->sclass = nve0_bsp_sclass;
return 0;
}
struct nouveau_oclass
nve0_bsp_oclass = {
.handle = NV_ENGINE(BSP, 0xe0),
.ofuncs = &(struct nouveau_ofuncs) {
.ctor = nve0_bsp_ctor,
.dtor = _nouveau_falcon_dtor,
.init = nve0_bsp_init,
.fini = _nouveau_falcon_fini,
.rd32 = _nouveau_falcon_rd32,
.wr32 = _nouveau_falcon_wr32,
},
};
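/* Generated falcon microcode for the GT215 (NVA3) copy engine: the
 * data segment below holds the per-channel context layout and the
 * method dispatch table, followed by the code segment proper.  Both
 * arrays are produced from the engine's .fuc assembly source.
 */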
uint32_t nva3_pcopy_data[] = {
/* 0x0000: ctx_object */
0x00000000,
/* 0x0004: ctx_dma */
/* 0x0004: ctx_dma_query */
0x00000000,
/* 0x0008: ctx_dma_src */
0x00000000,
/* 0x000c: ctx_dma_dst */
0x00000000,
/* 0x0010: ctx_query_address_high */
0x00000000,
/* 0x0014: ctx_query_address_low */
0x00000000,
/* 0x0018: ctx_query_counter */
0x00000000,
/* 0x001c: ctx_src_address_high */
0x00000000,
/* 0x0020: ctx_src_address_low */
0x00000000,
/* 0x0024: ctx_src_pitch */
0x00000000,
/* 0x0028: ctx_src_tile_mode */
0x00000000,
/* 0x002c: ctx_src_xsize */
0x00000000,
/* 0x0030: ctx_src_ysize */
0x00000000,
/* 0x0034: ctx_src_zsize */
0x00000000,
/* 0x0038: ctx_src_zoff */
0x00000000,
/* 0x003c: ctx_src_xoff */
0x00000000,
/* 0x0040: ctx_src_yoff */
0x00000000,
/* 0x0044: ctx_src_cpp */
0x00000000,
/* 0x0048: ctx_dst_address_high */
0x00000000,
/* 0x004c: ctx_dst_address_low */
0x00000000,
/* 0x0050: ctx_dst_pitch */
0x00000000,
/* 0x0054: ctx_dst_tile_mode */
0x00000000,
/* 0x0058: ctx_dst_xsize */
0x00000000,
/* 0x005c: ctx_dst_ysize */
0x00000000,
/* 0x0060: ctx_dst_zsize */
0x00000000,
/* 0x0064: ctx_dst_zoff */
0x00000000,
/* 0x0068: ctx_dst_xoff */
0x00000000,
/* 0x006c: ctx_dst_yoff */
0x00000000,
/* 0x0070: ctx_dst_cpp */
0x00000000,
/* 0x0074: ctx_format */
0x00000000,
/* 0x0078: ctx_swz_const0 */
0x00000000,
/* 0x007c: ctx_swz_const1 */
0x00000000,
/* 0x0080: ctx_xcnt */
0x00000000,
/* 0x0084: ctx_ycnt */
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
/* 0x0100: dispatch_table */
0x00010000,
0x00000000,
0x00000000,
0x00010040,
0x00010160,
0x00000000,
0x00010050,
0x00010162,
0x00000000,
0x00030060,
/* 0x0128: dispatch_dma */
0x00010170,
0x00000000,
0x00010170,
0x00000000,
0x00010170,
0x00000000,
0x00070080,
0x00000028,
0xfffff000,
0x0000002c,
0xfff80000,
0x00000030,
0xffffe000,
0x00000034,
0xfffff800,
0x00000038,
0xfffff000,
0x0000003c,
0xfff80000,
0x00000040,
0xffffe000,
0x00070088,
0x00000054,
0xfffff000,
0x00000058,
0xfff80000,
0x0000005c,
0xffffe000,
0x00000060,
0xfffff800,
0x00000064,
0xfffff000,
0x00000068,
0xfff80000,
0x0000006c,
0xffffe000,
0x000200c0,
0x00010492,
0x00000000,
0x0001051b,
0x00000000,
0x000e00c3,
0x0000001c,
0xffffff00,
0x00000020,
0x00000000,
0x00000048,
0xffffff00,
0x0000004c,
0x00000000,
0x00000024,
0xfff80000,
0x00000050,
0xfff80000,
0x00000080,
0xffff0000,
0x00000084,
0xffffe000,
0x00000074,
0xfccc0000,
0x00000078,
0x00000000,
0x0000007c,
0x00000000,
0x00000010,
0xffffff00,
0x00000014,
0x00000000,
0x00000018,
0x00000000,
0x00000800,
};
uint32_t nva3_pcopy_code[] = {
/* 0x0000: main */
0x04fe04bd,
0x3517f000,
0xf10010fe,
0xf1040017,
0xf0fff327,
0x12d00023,
0x0c25f0c0,
0xf40012d0,
0x17f11031,
0x27f01200,
0x0012d003,
/* 0x002f: spin */
0xf40031f4,
0x0ef40028,
/* 0x0035: ih */
0x8001cffd,
0xf40812c4,
0x21f4060b,
/* 0x0041: ih_no_chsw */
0x0412c472,
0xf4060bf4,
/* 0x004a: ih_no_cmd */
0x11c4c321,
0x4001d00c,
/* 0x0052: swctx */
0x47f101f8,
0x4bfe7700,
0x0007fe00,
0xf00204b9,
0x01f40643,
0x0604fa09,
/* 0x006b: swctx_load */
0xfa060ef4,
/* 0x006e: swctx_done */
0x03f80504,
/* 0x0072: chsw */
0x27f100f8,
0x23cf1400,
0x1e3fc800,
0xf4170bf4,
0x21f40132,
0x1e3af052,
0xf00023d0,
0x24d00147,
/* 0x0093: chsw_no_unload */
0xcf00f880,
0x3dc84023,
0x220bf41e,
0xf40131f4,
0x57f05221,
0x0367f004,
/* 0x00a8: chsw_load_ctx_dma */
0xa07856bc,
0xb6018068,
0x87d00884,
0x0162b600,
/* 0x00bb: chsw_finish_load */
0xf0f018f4,
0x23d00237,
/* 0x00c3: dispatch */
0xf100f880,
0xcf190037,
0x33cf4032,
0xff24e400,
0x1024b607,
0x010057f1,
0x74bd64bd,
/* 0x00dc: dispatch_loop */
0x58005658,
0x50b60157,
0x0446b804,
0xbb4d08f4,
0x47b80076,
0x0f08f404,
0xb60276bb,
0x57bb0374,
0xdf0ef400,
/* 0x0100: dispatch_valid_mthd */
0xb60246bb,
0x45bb0344,
0x01459800,
0xb00453fd,
0x1bf40054,
0x00455820,
0xb0014658,
0x1bf40064,
0x00538009,
/* 0x0127: dispatch_cmd */
0xf4300ef4,
0x55f90132,
0xf40c01f4,
/* 0x0132: dispatch_invalid_bitfield */
0x25f0250e,
/* 0x0135: dispatch_illegal_mthd */
0x0125f002,
/* 0x0138: dispatch_error */
0x100047f1,
0xd00042d0,
0x27f04043,
0x0002d040,
/* 0x0148: hostirq_wait */
0xf08002cf,
0x24b04024,
0xf71bf400,
/* 0x0154: dispatch_done */
0x1d0027f1,
0xd00137f0,
0x00f80023,
/* 0x0160: cmd_nop */
/* 0x0162: cmd_pm_trigger */
0x27f100f8,
0x34bd2200,
0xd00233f0,
0x00f80023,
/* 0x0170: cmd_dma */
0x012842b7,
0xf00145b6,
0x43801e39,
0x0040b701,
0x0644b606,
0xf80043d0,
/* 0x0189: cmd_exec_set_format */
0xf030f400,
0xb00001b0,
0x01b00101,
0x0301b002,
0xc71d0498,
0x50b63045,
0x3446c701,
0xc70160b6,
0x70b63847,
0x0232f401,
0x94bd84bd,
/* 0x01b4: ncomp_loop */
0xb60f4ac4,
0xb4bd0445,
/* 0x01bc: bpc_loop */
0xf404a430,
0xa5ff0f18,
0x00cbbbc0,
0xf40231f4,
/* 0x01ce: cmp_c0 */
0x1bf4220e,
0x10c7f00c,
0xf400cbbb,
/* 0x01da: cmp_c1 */
0xa430160e,
0x0c18f406,
0xbb14c7f0,
0x0ef400cb,
/* 0x01e9: cmp_zero */
0x80c7f107,
/* 0x01ed: bpc_next */
0x01c83800,
0xb60180b6,
0xb5b801b0,
0xc308f404,
0xb80190b6,
0x08f40497,
0x0065fdb2,
0x98110680,
0x68fd2008,
0x0502f400,
/* 0x0216: dst_xcnt */
0x75fd64bd,
0x1c078000,
0xf10078fd,
0xb6081057,
0x56d00654,
0x4057d000,
0x080050b7,
0xb61c0698,
0x64b60162,
0x11079808,
0xfd0172b6,
0x56d00567,
0x0050b700,
0x0060b401,
0xb40056d0,
0x56d00160,
0x0260b440,
0xb48056d0,
0x56d00360,
0x0050b7c0,
0x1e069804,
0x980056d0,
0x56d01f06,
0x1030f440,
/* 0x0276: cmd_exec_set_surface_tiled */
0x579800f8,
0x6879c70a,
0xb66478c7,
0x77c70280,
0x0e76b060,
0xf0091bf4,
0x0ef40477,
/* 0x0291: xtile64 */
0x027cf00f,
0xfd1170b6,
0x77f00947,
/* 0x029d: xtileok */
0x0f5a9806,
0xfd115b98,
0xb7f000ab,
0x04b7bb01,
0xff01b2b6,
0xa7bbc4ab,
0x105d9805,
0xbb01e7f0,
0xe2b604e8,
0xb4deff01,
0xb605d8bb,
0xef9401e0,
0x02ebbb0c,
0xf005fefd,
0x60b7026c,
0x64b60208,
0x006fd008,
0xbb04b7bb,
0x5f9800cb,
0x115b980b,
0xf000fbfd,
0xb7bb01b7,
0x01b2b604,
0xbb00fbbb,
0xf0f905f7,
0xf00c5f98,
0xb8bb01b7,
0x01b2b604,
0xbb00fbbb,
0xf0f905f8,
0xb60078bb,
0xb7f00282,
0x04b8bb01,
0x9804b9bb,
0xe7f00e58,
0x04e9bb01,
0xff01e2b6,
0xf7bbf48e,
0x00cfbb04,
0xbb0079bb,
0xf0fc0589,
0xd9fd90fc,
0x00adbb00,
0xfd0089fd,
0xa8bb008f,
0x04a7bb00,
0xbb0192b6,
0x69d00497,
0x08579880,
0xbb075898,
0x7abb00ac,
0x0081b600,
0xfd1084b6,
0x62b7058b,
0x67d00600,
0x0060b700,
0x0068d004,
/* 0x0382: cmd_exec_set_surface_linear */
0x6cf000f8,
0x0260b702,
0x0864b602,
0xd0085798,
0x60b70067,
0x57980400,
0x1074b607,
0xb70067d0,
0x98040060,
0x67d00957,
/* 0x03ab: cmd_exec_wait */
0xf900f800,
0xf110f900,
0xb6080007,
/* 0x03b6: loop */
0x01cf0604,
0x0114f000,
0xfcfa1bf4,
0xf800fc10,
/* 0x03c5: cmd_exec_query */
0x0d34c800,
0xf5701bf4,
0xf103ab21,
0xb6080c47,
0x05980644,
0x0450b605,
0xd00045d0,
0x57f04040,
0x8045d00c,
0x040040b7,
0xb6040598,
0x45d01054,
0x0040b700,
0x0057f105,
0x0153f00b,
0xf10045d0,
0xb6404057,
0x53f10154,
0x45d08080,
0x1057f140,
0x1253f111,
0x8045d013,
0x151457f1,
0x171653f1,
0xf1c045d0,
0xf0260157,
0x47f10153,
0x44b60800,
0x0045d006,
/* 0x0438: query_counter */
0x03ab21f5,
0x080c47f1,
0x980644b6,
0x45d00505,
0x4040d000,
0xd00457f0,
0x40b78045,
0x05980400,
0x1054b604,
0xb70045d0,
0xf1050040,
0xd0030057,
0x57f10045,
0x53f11110,
0x45d01312,
0x06059840,
0x050040b7,
0xf10045d0,
0xf0260157,
0x47f10153,
0x44b60800,
0x0045d006,
/* 0x0492: cmd_exec */
0x21f500f8,
0x3fc803ab,
0x0e0bf400,
0x018921f5,
0x020047f1,
/* 0x04a7: cmd_exec_no_format */
0xf11e0ef4,
0xb6081067,
0x77f00664,
0x11078001,
0x981c0780,
0x67d02007,
0x4067d000,
/* 0x04c2: cmd_exec_init_src_surface */
0x32f444bd,
0xc854bd02,
0x0bf4043f,
0x8221f50a,
0x0a0ef403,
/* 0x04d4: src_tiled */
0x027621f5,
/* 0x04db: cmd_exec_init_dst_surface */
0xf40749f0,
0x57f00231,
0x083fc82c,
0xf50a0bf4,
0xf4038221,
/* 0x04ee: dst_tiled */
0x21f50a0e,
0x49f00276,
/* 0x04f5: cmd_exec_kick */
0x0057f108,
0x0654b608,
0xd0210698,
0x67f04056,
0x0063f141,
0x0546fd44,
0xc80054d0,
0x0bf40c3f,
0xc521f507,
/* 0x0519: cmd_exec_done */
/* 0x051b: cmd_wrcache_flush */
0xf100f803,
0xbd220027,
0x0133f034,
0xf80023d0,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
0x00000000,
};
#ifndef __NVKM_DEVICE_ACPI_H__
#define __NVKM_DEVICE_ACPI_H__
#include <engine/device.h>
int nvkm_acpi_init(struct nouveau_device *);
int nvkm_acpi_fini(struct nouveau_device *, bool);
#endif
#ifndef __NVKM_DEVICE_PRIV_H__
#define __NVKM_DEVICE_PRIV_H__
#include <engine/device.h>
extern struct nouveau_oclass nouveau_control_oclass[];
#endif
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.