- 08 Nov, 2013 34 commits
-
-
Ben Skeggs authored
This is what NVIDIA do on these chipsets; let's hope it works around the reported MSI failures for us on NV86. v2: updated to include G92, as per information provided by NVIDIA. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
v2: updated to cover GF104, as per information provided by NVIDIA. Reported-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
This looks to be what NVIDIA do pretty much everywhere, since forever. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
This way we can catch it with debugging enabled for the PMC subdev. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Martin Peres authored
This is safe because ptherm hasn't been configured yet and will be configured a little further down the initialization path. Ptherm should be safe with regard to runtime reconfiguration.
v2:
- do not limit this patch to nv84-a3 and make it nv84+
v3:
- move the ack to fini()
- disable IRQs on fini()
- silently ignore un-requested IRQs
Signed-off-by: Martin Peres <martin.peres@labri.fr> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
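A minimal standalone sketch of the fini()/IRQ shape described above, using hypothetical register variables rather than the real ptherm code: fini() disables and acks everything, and the handler only services bits that were actually requested, silently ignoring the rest.

```c
#include <stdio.h>

static unsigned int intr_en;    /* hypothetical interrupt-enable register */
static unsigned int intr_stat;  /* hypothetical interrupt-status register */

static void therm_fini(void)
{
	intr_en = 0x00000000;       /* disable all thermal interrupts */
	intr_stat = 0x00000000;     /* ack (clear) anything still pending */
}

static void therm_intr(void)
{
	/* only look at bits that were both raised and requested */
	unsigned int stat = intr_stat & intr_en;

	if (!stat)                  /* silently ignore un-requested IRQs */
		return;

	printf("servicing thermal IRQ bits 0x%08x\n", stat);
	intr_stat &= ~stat;         /* ack only the bits we handled */
}

int main(void)
{
	intr_en   = 0x00000001;
	intr_stat = 0x00000003;     /* one requested bit, one spurious bit */
	therm_intr();               /* services only bit 0 */
	therm_fini();               /* leaves the unit quiet before reconfiguration */
	return 0;
}
```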
-
Ilia Mirkin authored
NV31 has different config bits than NV40+ do. Also fix the DMA_IMAGE VRAM-only setting to check the right bits. Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ilia Mirkin authored
This makes nv31+ able to actually perform the nv_call, since previously the inst was not available. Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ilia Mirkin authored
Since nv40 only covers pre-nv44 now, it can use the nv31-provided functions. Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ilia Mirkin authored
The nv31/nv40 impls are actually fairly nv44-specific, since they assume the presence of the instance register/context switching. Create a copy before nv31/nv40 get fixed. Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ilia Mirkin authored
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ilia Mirkin authored
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ilia Mirkin authored
NV1A is numerically higher than NV17 but generationally lower. Use the new card type to help disambiguate. Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ilia Mirkin authored
NV11/17/1F/18 come after NV10/15/16/1A. In order to facilitate using numerical comparisons, split up the two sets into different card types. This change should be a no-op except that the relevant cards will see NV11 printed instead of NV10 for the family. Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
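A hedged illustration of the ordering problem the previous two commits address (the enum values are made up, not the driver's constants): NV1A has a higher chipset id than NV17 but belongs to the older family, so feature checks compare a separate card-type value rather than the raw chipset id.

```c
#include <stdio.h>

enum card_type { CARD_NV10 = 0x10, CARD_NV11 = 0x11 };  /* illustrative values */

struct chip {
	unsigned chipset;        /* raw chipset id, not generation-ordered */
	enum card_type card_type; /* generation-ordered card type */
};

static int has_nv11_features(const struct chip *c)
{
	return c->card_type >= CARD_NV11;   /* robust against chipset-id gaps */
}

int main(void)
{
	struct chip nv1a = { 0x1a, CARD_NV10 };  /* numerically high, older family */
	struct chip nv17 = { 0x17, CARD_NV11 };  /* numerically lower, newer family */

	printf("nv1a: %d, nv17: %d\n",
	       has_nv11_features(&nv1a), has_nv11_features(&nv17));
	return 0;
}
```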
-
Ilia Mirkin authored
This code was originally moved to using nv_mask by d31e078d. This should not have any actual effect since the mask isn't applied to the value. Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
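A generic, hedged illustration of the kind of read-modify-write helper the message refers to; the helper name and exact semantics below are an assumption, not necessarily nouveau's nv_mask(). It shows what "the mask isn't applied to the value" means in practice: data bits outside the mask are OR'd over the old register contents instead of replacing them.

```c
#include <stdio.h>

static unsigned int fake_reg = 0xdeadbeef;   /* stands in for an MMIO register */

/* clears the bits in 'mask', then ORs in 'data' unchanged */
static unsigned int reg_mask(unsigned int mask, unsigned int data)
{
	unsigned int old = fake_reg;

	fake_reg = (old & ~mask) | data;
	return old;
}

int main(void)
{
	reg_mask(0x000000ff, 0x00000012);   /* replaces only the low byte */
	printf("0x%08x\n", fake_reg);       /* 0xdeadbe12 */

	/* the mask is not applied to the value: bits outside the mask are
	 * OR'd on top of whatever was already in the register */
	reg_mask(0x000000ff, 0x00004100);
	printf("0x%08x\n", fake_reg);       /* 0xdeadff00, not 0xdead4100 */
	return 0;
}
```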
-
Ilia Mirkin authored
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
NVIDIA's name for what rnndb calls PVCOMP. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
This fixes a reported locking inversion when interacting with the DRM core's vblank routines. Reviewed-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
This is a necessary step towards being able to work with the insane locking requirements of the DRM core's vblank routines, and a nice cleanup as a side-effect. This is similar in spirit to the interfaces that Peter Hurley arrived at with his nouveau_event rcu conversion series. Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Ben Skeggs authored
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
Peter Hurley authored
Most nouveau event handlers have storage in 'static' containers (structures with lifetimes nearly equivalent to the drm_device), but are dangerously reused via nouveau_event_get/_put. For example, if nouveau_event_get is called more than once for a given handler, the event handler list will be corrupted. Signed-off-by: Peter Hurley <peter@hurleysoftware.com> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
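A hedged, standalone illustration (not the nouveau_event code itself) of why re-using a handler embedded in a long-lived structure is dangerous: linking the same list node twice corrupts the list, so a get/put count guards the actual link and unlink.

```c
#include <assert.h>

struct node { struct node *prev, *next; };

static void list_init(struct node *head) { head->prev = head->next = head; }

static void list_add(struct node *head, struct node *n)
{
	n->next = head->next;
	n->prev = head;
	head->next->prev = n;
	head->next = n;
}

static void list_del(struct node *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
}

struct handler {
	struct node link;   /* embedded in a structure that outlives the event */
	int enabled;        /* guards against double get/put */
};

static void handler_get(struct node *head, struct handler *h)
{
	if (h->enabled++ == 0)      /* only link on the 0 -> 1 transition */
		list_add(head, &h->link);
}

static void handler_put(struct handler *h)
{
	assert(h->enabled > 0);
	if (--h->enabled == 0)      /* only unlink on the 1 -> 0 transition */
		list_del(&h->link);
}

int main(void)
{
	struct node head;
	struct handler h = { .enabled = 0 };

	list_init(&head);
	handler_get(&head, &h);
	handler_get(&head, &h);     /* second get: no second list_add */
	handler_put(&h);
	handler_put(&h);
	return 0;
}
```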
-
Peter Hurley authored
The index_nr field is constant for the lifetime of the event, so serialized access is unnecessary. Signed-off-by: Peter Hurley <peter@hurleysoftware.com> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
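A tiny hedged sketch of the reasoning (illustrative struct, not the nouveau_event type): a field written once before the object is used and never modified afterwards can be read without taking the lock that protects the mutable state.

```c
#include <stdio.h>

struct event {
	int index_nr;   /* written once at creation, never changed afterwards */
	/* the event lock (omitted here) still guards the mutable state */
};

static int event_index_count(const struct event *ev)
{
	return ev->index_nr;   /* safe without the lock: the value is immutable */
}

int main(void)
{
	struct event ev = { .index_nr = 4 };

	printf("%d\n", event_index_count(&ev));
	return 0;
}
```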
-
Peter Hurley authored
Provide private field for event handlers exclusive use. Convert nouveau_fence_wait_uevent() and nouveau_fence_wait_uevent_handler(); drop struct nouveau_fence_uevent. Signed-off-by: Peter Hurley <peter@hurleysoftware.com> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
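A hedged sketch of the private-data pattern the commit describes; the struct and function names are illustrative, not the nouveau_event API. The handler carries a caller-owned pointer, so users such as a fence-wait path no longer need a dedicated wrapper struct.

```c
#include <stdio.h>

struct event_handler {
	void (*func)(struct event_handler *h);
	void *priv;                      /* for the handler owner's exclusive use */
};

struct fence { int signalled; };

static void fence_wait_handler(struct event_handler *h)
{
	struct fence *fence = h->priv;   /* recover the caller's context */

	fence->signalled = 1;
}

int main(void)
{
	struct fence f = { 0 };
	struct event_handler h = { .func = fence_wait_handler, .priv = &f };

	h.func(&h);                      /* the event fires */
	printf("signalled = %d\n", f.signalled);
	return 0;
}
```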
-
Dan Carpenter authored
The test here should be ">= ARRAY_SIZE()" instead of "> ARRAY_SIZE()". Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Maarten Lankhorst <maarten.lankhorst@canonical.com> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
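The off-by-one in miniature (the table below is purely illustrative): valid indices run from 0 to ARRAY_SIZE() - 1, so the bounds check must reject idx == ARRAY_SIZE() as well.

```c
#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static const int table[4] = { 10, 20, 30, 40 };

static int lookup(unsigned int idx)
{
	if (idx >= ARRAY_SIZE(table))   /* "> ARRAY_SIZE" would let idx == 4 through */
		return -1;
	return table[idx];
}

int main(void)
{
	printf("%d %d\n", lookup(3), lookup(4));   /* 40 -1 */
	return 0;
}
```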
-
Dave Jones authored
Self-assignment of a variable doesn't make a lot of sense. Signed-off-by: Dave Jones <davej@fedoraproject.org> Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
-
- 07 Nov, 2013 2 commits
-
-
Dave Airlie authored
- A couple of fixes that never made it into fixes-3.12
- Make NO_EVICT bo's available for shrinkers when on delayed-delete list
- Allow retrying page-faults that need to wait for GPU.
* 'ttm-next-3.13' of git://people.freedesktop.org/~thomash/linux:
  drm/ttm: Fix memory type compatibility check
  drm/ttm: Fix ttm_bo_move_memcpy
  drm/ttm: Handle in-memory region copies
  drm/ttm: Make NO_EVICT bos available to shrinkers pending destruction
  drm/ttm: Allow vm fault retries
-
Dave Airlie authored
Pull request for vmwgfx. Currently just the DMA address stuff.
* 'vmwgfx-next-3.13' of git://people.freedesktop.org/~thomash/linux:
  drm/vmwgfx: Use the linux DMA api to get valid device addresses of pages
  drm/ttm: Enable the dma page pool also for intel IOMMUs
-
- 06 Nov, 2013 4 commits
-
-
Thomas Hellstrom authored
Also check the busy placements before deciding to move a buffer object. Failing to do this may result in a completely unnecessary move within a single memory type. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Jakob Bornecrantz <jakob@vmware.com> Cc: stable@vger.kernel.org
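A hedged sketch of the check described above, with illustrative structures rather than TTM's real ones: a buffer whose current memory type already satisfies one of the busy placements should not be migrated.

```c
#include <stdbool.h>
#include <stdio.h>

struct placement_list { const int *types; int num; };

/* is the current memory type acceptable to this placement list? */
static bool mem_compat(const struct placement_list *pl, int cur_type)
{
	for (int i = 0; i < pl->num; i++)
		if (pl->types[i] == cur_type)
			return true;
	return false;
}

static bool needs_move(const struct placement_list *preferred,
		       const struct placement_list *busy, int cur_type)
{
	/* compatible with either list -> leave the buffer where it is */
	return !mem_compat(preferred, cur_type) && !mem_compat(busy, cur_type);
}

int main(void)
{
	const int vram[] = { 2 }, any[] = { 2, 1 };
	struct placement_list preferred = { vram, 1 }, busy = { any, 2 };

	/* current type 1 is not preferred but is a valid busy placement */
	printf("move needed: %d\n", needs_move(&preferred, &busy, 1)); /* 0 */
	return 0;
}
```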
-
Thomas Hellstrom authored
All error paths will want to keep the mm node, so handle this at the function exit. This fixes an ioremap failure error path. Also add some comments to make the function a bit easier to understand. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Jakob Bornecrantz <jakob@vmware.com> Cc: stable@vger.kernel.org
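A hedged, standalone sketch of the error-handling shape described (illustrative names, not the real ttm_bo_move_memcpy): every failure path jumps to a single exit label, and the fate of the node is decided only there, so all error paths keep it for the caller.

```c
#include <stdio.h>
#include <stdlib.h>

static void *map_resource(int fail) { return fail ? NULL : malloc(64); }

/* 'nodep' stands in for the mm node: only a successful move consumes it,
 * and that decision is made once, at the function exit */
static int move_copy(void **nodep, int fail_map)
{
	void *map;
	int ret = 0;

	map = map_resource(fail_map);   /* stands in for the ioremap step */
	if (!map) {
		ret = -1;
		goto out;               /* no per-path node handling */
	}

	/* ... the actual copy would happen here ... */
	free(map);
out:
	if (ret == 0) {
		free(*nodep);           /* success: the node is consumed */
		*nodep = NULL;
	}
	return ret;                     /* failure: caller still owns *nodep */
}

int main(void)
{
	void *node = malloc(32);

	printf("%d (node kept: %d)\n", move_copy(&node, 1), node != NULL);
	printf("%d (node kept: %d)\n", move_copy(&node, 0), node != NULL);
	return 0;
}
```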
-
Jakob Bornecrantz authored
Fix the case where the ttm pointer may be NULL, causing a NULL pointer dereference. Signed-off-by: Jakob Bornecrantz <jakob@vmware.com> Signed-off-by: Thomas Hellström <thellstrom@vmware.com> Cc: stable@vger.kernel.org
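The shape of the fix in a hedged, standalone form (the structures below are illustrative stand-ins, not the real TTM types): check for a NULL ttm pointer before dereferencing it.

```c
#include <stddef.h>
#include <stdio.h>

struct ttm_tt { int num_pages; };
struct buffer { struct ttm_tt *ttm; };

static int buffer_num_pages(const struct buffer *bo)
{
	if (bo->ttm == NULL)        /* guard: ttm may legitimately be unset */
		return 0;
	return bo->ttm->num_pages;
}

int main(void)
{
	struct buffer empty = { NULL };
	struct ttm_tt tt = { 8 };
	struct buffer full = { &tt };

	printf("%d %d\n", buffer_num_pages(&empty), buffer_num_pages(&full));
	return 0;
}
```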
-
Thomas Hellstrom authored
NO_EVICT bos that are not idle when all references are dropped are put on the delayed destroy list. However, since they are not on LRU lists, they are not available to shrinkers at that point, and buffers on the delayed destroy list are not checked for idle very often. So when these buffers are put on the delayed destroy list, clear the NO_EVICT flag and put them on the right LRU list. This way they are immediately available for eviction or shrinkers and will not cause false OOMs. Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
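A hedged sketch of the idea (illustrative flags and fields, not TTM's actual ones): when a still-busy NO_EVICT buffer is queued for delayed destruction, drop the flag and park it on an LRU so shrinkers can see it as soon as it goes idle.

```c
#include <stdbool.h>
#include <stdio.h>

#define FLAG_NO_EVICT 0x1

struct bo {
	unsigned flags;
	bool on_lru;           /* stands in for LRU list membership */
	bool delayed_destroy;
};

static void queue_delayed_destroy(struct bo *bo)
{
	bo->delayed_destroy = true;
	bo->flags &= ~FLAG_NO_EVICT;   /* no longer exempt from eviction */
	bo->on_lru = true;             /* visible to shrinkers right away */
}

int main(void)
{
	struct bo bo = { .flags = FLAG_NO_EVICT };

	queue_delayed_destroy(&bo);
	printf("no_evict=%d on_lru=%d\n",
	       !!(bo.flags & FLAG_NO_EVICT), bo.on_lru);
	return 0;
}
```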
-