- 18 May, 2020 17 commits
-
-
Lu Baolu authored
The info and info->pasid_support have already been checked in the previous intel_iommu_enable_pasid() call. No need to check again. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20200516062101.29541-18-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Lu Baolu authored
The IOTLB flush is already included in the PASID tear-down and the page request drain process. There is no need to flush again. Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20200516062101.29541-17-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Lu Baolu authored
When a PASID is stopped or terminated, there can be pending PRQs (page requests that haven't received responses) in the remapping hardware. This adds an interface to drain page requests and calls it when a PASID is terminated. Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Liu Yi L <yi.l.liu@intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20200516062101.29541-16-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Lu Baolu authored
When a PASID is used for SVA by a device, it's possible that the PASID entry is cleared before the device flushes all ongoing DMA requests. The IOMMU should tolerate and ignore the non-recoverable faults caused by the untranslated requests from this device. For example, when an exception happens, the process terminates before the device driver stops DMA and calls the IOMMU driver to unbind the PASID. The flow of process exit is as follows:

do_exit() {
     exit_mm() {
             mm_put();
             exit_mmap() {
                     intel_invalidate_range() //mmu notifier
                     tlb_finish_mmu()
                     mmu_notifier_release(mm) {
                             intel_iommu_release() {
[2]                                  intel_iommu_teardown_pasid();
                                     intel_iommu_flush_tlbs();
                             }
                     }
                     unmap_vmas();
                     free_pgtables();
             };
     }
     exit_files(tsk) {
             close_files() {
                     dsa_close();
[1]                  dsa_stop_dma();
                     intel_svm_unbind_pasid();
             }
     }
}

Care must be taken on VT-d to avoid unrecoverable faults in the time window between [1] and [2]. [Process exit flow was contributed by Jacob Pan.] Intel VT-d provides this capability through the FPD bit of the PASID entry. This patch sets the FPD bit when the PASID entry changes from present to non-present in the mm notifier and clears it when the PASID is unbound. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Link: https://lore.kernel.org/r/20200516062101.29541-15-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Lu Baolu authored
Export the invalidation queue internals of each IOMMU device through debugfs. Example of such a dump on a Skylake machine:

$ sudo cat /sys/kernel/debug/iommu/intel/invalidation_queue
Invalidation queue on IOMMU: dmar1
 Base: 0x1672c9000  Head: 80  Tail: 80
Index   qw0                 qw1                 status
    0   0000000000000004    0000000000000000    0000000000000000
    1   0000000200000025    00000001672be804    0000000000000000
    2   0000000000000011    0000000000000000    0000000000000000
    3   0000000200000025    00000001672be80c    0000000000000000
    4   00000000000000d2    0000000000000000    0000000000000000
    5   0000000200000025    00000001672be814    0000000000000000
    6   0000000000000014    0000000000000000    0000000000000000
    7   0000000200000025    00000001672be81c    0000000000000000
    8   0000000000000014    0000000000000000    0000000000000000
    9   0000000200000025    00000001672be824    0000000000000000

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20200516062101.29541-14-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
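A minimal sketch of how such a dump can be exposed (illustrative only: the per-entry printing is elided, and the "intel_iommu_debug" dentry name for the debugfs directory is an assumption here, not taken from the patch):

    #include <linux/debugfs.h>
    #include <linux/seq_file.h>
    #include <linux/dmar.h>
    #include <linux/intel-iommu.h>

    static int invalidation_queue_show(struct seq_file *m, void *unused)
    {
            struct dmar_drhd_unit *drhd;
            struct intel_iommu *iommu;

            rcu_read_lock();
            for_each_active_iommu(iommu, drhd) {
                    seq_printf(m, "Invalidation queue on IOMMU: %s\n", iommu->name);
                    /* Walk iommu->qi->desc here and print Index/qw0/qw1/status
                     * per entry, as in the example output above. */
            }
            rcu_read_unlock();

            return 0;
    }
    DEFINE_SHOW_ATTRIBUTE(invalidation_queue);

    /* registered from the driver's debugfs init code */
    debugfs_create_file("invalidation_queue", 0444, intel_iommu_debug,
                        NULL, &invalidation_queue_fops);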
-
Lu Baolu authored
The current qi_submit_sync() only supports a single invalidation descriptor per submission and appends a wait descriptor after each submission to poll for hardware completion. This extends the qi_submit_sync() helper to support multiple descriptors and adds an option so that the caller can specify the Page-request Drain (PD) bit in the wait descriptor. Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20200516062101.29541-13-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
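As a rough usage sketch of the extended helper (the descriptor array/count/options signature and the QI_OPT_WAIT_DRAIN flag follow this series; the descriptor contents are placeholders, not a real invalidation):

    /* Code fragment: 'iommu' is a struct intel_iommu * held by the caller. */
    struct qi_desc desc[2];

    memset(desc, 0, sizeof(desc));
    desc[0].qw0 = QI_EIOTLB_TYPE;      /* e.g. a PASID-based IOTLB invalidation */
    desc[1].qw0 = QI_DEV_EIOTLB_TYPE;  /* e.g. a device-TLB invalidation        */

    /* Submit both descriptors in one batch and ask for the Page-request
     * Drain bit in the trailing wait descriptor. */
    qi_submit_sync(iommu, desc, 2, QI_OPT_WAIT_DRAIN);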
-
Jacob Pan authored
This patch is an initial step to replace Intel SVM code with the following IOMMU SVA ops:

    intel_svm_bind_mm()        => iommu_sva_bind_device()
    intel_svm_unbind_mm()      => iommu_sva_unbind_device()
    intel_svm_is_pasid_valid() => iommu_sva_get_pasid()

The features below will continue to work but are not included in this patch, as they are handled mostly within the IOMMU subsystem:
 - IO page fault
 - mmu notifier

Consolidation of the above will come after merging the generic IOMMU SVA code [1]. There should not be any changes needed for SVA users such as accelerator device drivers during this time. [1] http://jpbrucker.net/sva/ Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20200516062101.29541-12-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
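For reference, the generic SVA flow that device drivers move to looks roughly like this (a sketch; the drvdata argument and return types reflect the API at the time of this series and may have changed since):

    /* Code fragment: 'dev' is the struct device of an SVA-capable device. */
    struct iommu_sva *handle;
    int pasid;

    handle = iommu_sva_bind_device(dev, current->mm, NULL);
    if (IS_ERR(handle))
            return PTR_ERR(handle);

    pasid = iommu_sva_get_pasid(handle);
    /* ... program 'pasid' into the device's work submission path ... */

    iommu_sva_unbind_device(handle);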
-
Jacob Pan authored
Querying the Shared Virtual Address/Memory capability is a generic feature. The SVA feature check is the required first step before calling iommu_sva_bind_device(). During this step VT-d checks SVA enabling at the per-IOMMU level; SVA bind device will then check and enable the PCI ATS, PRS, and PASID capabilities at the device level. This patch reports Intel SVM as the SVA feature so that generic code (e.g. Uacce [1]) can use it. [1] https://lkml.org/lkml/2020/1/15/604 Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20200516062101.29541-11-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
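A sketch of the feature-check step described above, using the generic interfaces this patch hooks Intel SVM into (error handling simplified; the helper name is made up for illustration):

    #include <linux/iommu.h>

    /* Hypothetical helper a driver might use before iommu_sva_bind_device(). */
    static int try_enable_sva(struct device *dev)
    {
            if (!iommu_dev_has_feature(dev, IOMMU_DEV_FEAT_SVA))
                    return -ENODEV;

            return iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA);
    }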
-
Lu Baolu authored
Add a get_domain_info() helper to retrieve the valid per-device iommu private data. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20200516062101.29541-10-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jacob Pan authored
When the VT-d driver runs in the guest, PASID allocation must be performed via the virtual command interface. This patch registers a custom IOASID allocator which takes precedence over the default XArray-based allocator. The resulting IOASID allocation will always come from the host, which ensures that the PASID namespace is system-wide. Virtual command registers are used in the guest only; to avoid vmexit cost, we cache the capability and store it during initialization. Signed-off-by: Liu, Yi L <yi.l.liu@intel.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/r/20200516062101.29541-9-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
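A sketch of registering such a custom allocator through the ioasid framework (the callback shapes and ioasid_register_allocator() come from the framework; vcmd_alloc_pasid()/vcmd_free_pasid() stand in for the virtual command helpers and are assumptions here):

    #include <linux/ioasid.h>

    static ioasid_t intel_vcmd_ioasid_alloc(ioasid_t min, ioasid_t max, void *data)
    {
            struct intel_iommu *iommu = data;
            u32 ioasid;

            /* The write to the virtual command register is trapped by the
             * vIOMMU and proxied to the host, see the next entry below. */
            if (vcmd_alloc_pasid(iommu, &ioasid))
                    return INVALID_IOASID;
            return ioasid;
    }

    static void intel_vcmd_ioasid_free(ioasid_t ioasid, void *data)
    {
            struct intel_iommu *iommu = data;

            vcmd_free_pasid(iommu, ioasid);
    }

    static struct ioasid_allocator_ops pasid_allocator = {
            .alloc = intel_vcmd_ioasid_alloc,
            .free  = intel_vcmd_ioasid_free,
    };

    /* During initialization, when the cached capability indicates that the
     * virtual command interface is present: */
    pasid_allocator.pdata = (void *)iommu;
    ioasid_register_allocator(&pasid_allocator);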
-
Lu Baolu authored
Enabling an IOMMU in a guest requires communication with the host driver for certain aspects. Use of PASIDs to enable Shared Virtual Addressing (SVA) requires managing PASIDs in the host. The VT-d 3.0 spec provides a Virtual Command Register (VCMD) to facilitate this. Writes to this register in the guest are trapped by the vIOMMU, which proxies the call to the host driver. The virtual command interface consists of a capability register, a virtual command register, and a virtual response register. Refer to sections 10.4.42, 10.4.43 and 10.4.44 for more information. This patch adds the enlightened PASID allocation/free interfaces via the virtual command interface. Signed-off-by: Liu Yi L <yi.l.liu@intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20200516062101.29541-8-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jacob Pan authored
When Shared Virtual Address (SVA) is enabled for a guest OS via a vIOMMU, we need to provide invalidation support at the IOMMU API and driver level. This patch adds an Intel VT-d specific function to implement the iommu passdown invalidate API for shared virtual address. The use case is to support caching-structure invalidation for assigned SVM-capable devices. The emulated IOMMU exposes queued invalidation capability and passes down all descriptors from the guest to the physical IOMMU. The assumption is that the guest-to-host device ID mapping should be resolved prior to calling the IOMMU driver. Based on the device handle, the host IOMMU driver can replace certain fields before submitting to the invalidation queue. Signed-off-by: Liu Yi L <yi.l.liu@intel.com> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/r/20200516062101.29541-7-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jacob Pan authored
When Shared Virtual Memory is exposed to a guest via a vIOMMU, scalable-mode IOTLB invalidations may be passed down from outside the IOMMU subsystem. This patch adds invalidation functions that can be used for additional translation cache types. Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/r/20200516062101.29541-6-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jacob Pan authored
When supporting guest SVA with an emulated IOMMU, the guest PASID table is shadowed in the VMM. Updates to the guest vIOMMU PASID table result in a PASID cache flush, which is passed down to the host as bind guest PASID calls. The SL page tables are harvested from the device's default domain (requests w/o PASID), or from the aux domain in the case of a mediated device.

    .-------------.  .---------------------------.
    |   vIOMMU    |  | Guest process CR3, FL only|
    |             |  '---------------------------'
    .----------------/
    | PASID Entry |--- PASID cache flush -
    '-------------'                       |
    |             |                       V
    |             |                CR3 in GPA
    '-------------'
Guest
------| Shadow |--------------------------|--------
      v        v                          v
Host
    .-------------.  .----------------------.
    |   pIOMMU    |  | Bind FL for GVA-GPA  |
    |             |  '----------------------'
    .----------------/  |
    | PASID Entry |     V (Nested xlate)
    '----------------\.------------------------------.
    |             |   |SL for GPA-HPA, default domain|
    |             |   '------------------------------'
    '-------------'

Where:
 - FL = First level/stage one page tables
 - SL = Second level/stage two page tables

Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Liu Yi L <yi.l.liu@intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20200516062101.29541-5-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jacob Pan authored
Nested translation mode is supported per the VT-d 3.0 spec, chapter 3.8. With the PASID granular translation type set to 0x11b, the translation result from the first level (FL) is also subject to a second level (SL) page table translation. This mode is used for SVA virtualization, where the FL performs guest virtual to guest physical translation and the SL performs guest physical to host physical translation. This patch adds a helper function for setting up nested translation, where the second level comes from a domain and the first level comes from a guest PGD. Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Liu Yi L <yi.l.liu@intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Link: https://lore.kernel.org/r/20200516062101.29541-4-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Jacob Pan authored
An Intel IOMMU domain uses a 5-level page table by default. If the IOMMU that the domain is attached to supports fewer page levels, the top-level page tables should be skipped. Add a helper to do this so that it can be used in other places. Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20200516062101.29541-3-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
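An illustrative sketch of what such a helper does (the name and exact calling convention here are assumptions; dma_pte_addr()/dma_pte_present() are the driver's existing page-table accessors):

    /* Walk down from the domain's top level until the level count matches
     * what this hardware unit supports, returning the new top-level table. */
    static struct dma_pte *skip_top_levels(struct dmar_domain *domain,
                                           struct intel_iommu *iommu,
                                           int *agaw)
    {
            struct dma_pte *pgd = domain->pgd;

            *agaw = domain->agaw;
            while (*agaw > iommu->agaw) {
                    pgd = phys_to_virt(dma_pte_addr(pgd));
                    if (!dma_pte_present(pgd))
                            return NULL;
                    (*agaw)--;
            }

            return pgd;
    }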
-
Jacob Pan authored
Move domain helper to header to be used by SVA code. Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20200516062101.29541-2-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 15 May, 2020 1 commit
-
-
Sai Praneeth Prakhya authored
After moving iommu_group setup to iommu core code [1][2] and removing private domain support in vt-d [3], there are no users for functions such as iommu_request_dm_for_dev(), iommu_request_dma_domain_for_dev() and request_default_domain_for_dev(). So, remove these functions. [1] commit dce8d696 ("iommu/amd: Convert to probe/release_device() call-backs") [2] commit e5d1841f ("iommu/vt-d: Convert to probe/release_device() call-backs") [3] commit 327d5b2f ("iommu/vt-d: Allow 32bit devices to uses DMA domain") Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20200513224721.20504-1-sai.praneeth.prakhya@intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 13 May, 2020 9 commits
-
-
Andy Shevchenko authored
Unify format of the printed messages, i.e. replace printk(LEVEL ... ) with pr_level(...). Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20200507161804.13275-1-andriy.shevchenko@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
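The conversion pattern looks roughly like this (a generic before/after illustration, not a hunk quoted from the patch):

    /* Before: explicit log level and prefix passed to printk() */
    printk(KERN_ERR "DMAR: something went wrong\n");

    /* After: pr_err() encodes the level; with pr_fmt() defined as
     * "DMAR: " fmt in the file, the prefix is added automatically. */
    pr_err("something went wrong\n");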
-
Lu Baolu authored
The current Intel IOMMU driver sets the system-level dma_ops. This causes every DMA API call to go through the IOMMU driver even when the device is using an identity-mapped domain. This patch sets per-device dma_ops only if a device is using a DMA domain; otherwise, the default system-level dma_ops are used for direct DMA. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Tested-by: Daniel Drake <drake@endlessm.com> Reviewed-by: Jon Derrick <jonathan.derrick@intel.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Link: https://lore.kernel.org/r/20200506015947.28662-4-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
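A simplified sketch of the per-device policy (not the exact hunk from the patch): devices attached to a DMA-API domain get the driver's dma_ops, identity-mapped devices fall back to the system-level direct-DMA path.

    /* Code fragment, e.g. in the driver's probe-finalize path. */
    struct iommu_domain *domain = iommu_get_domain_for_dev(dev);

    if (domain && domain->type == IOMMU_DOMAIN_DMA)
            set_dma_ops(dev, &intel_dma_ops);   /* translated DMA through the IOMMU */
    else
            set_dma_ops(dev, NULL);             /* direct DMA (system default)      */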
-
Lu Baolu authored
Before commit fa954e68 ("iommu/vt-d: Delegate the dma domain to upper layer"), the Intel IOMMU started off with all devices in the identity domain and took them out later if it found they couldn't access all of memory. This required devices behind a PCI bridge to use a DMA domain from the beginning, because all PCI devices behind the bridge use the same source-id in their transactions and the domain couldn't be changed at run-time. The Intel IOMMU driver is now aligned with the default domain framework, so there's no need to keep this requirement anymore. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Tested-by: Daniel Drake <drake@endlessm.com> Reviewed-by: Jon Derrick <jonathan.derrick@intel.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Link: https://lore.kernel.org/r/20200506015947.28662-3-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Lu Baolu authored
Currently, if a 32bit device initially uses an identity domain, the Intel IOMMU driver will forcibly convert it to a DMA domain if its address capability is not enough for the whole system memory. The motivation was to avoid the overhead of possible bounce buffering. Unfortunately, this improvement has led to many problems. For example, some 32bit devices are required to use an identity domain; forcing them to use a DMA domain will cause the device to stop working. On the other hand, VMD sub-devices share a domain but each sub-device might have a different address capability. Forcing a VMD sub-device to use a DMA domain blindly will impact the operation of other sub-devices without any notification. Furthermore, PCI-aliased devices (a PCI bridge and all devices beneath it, VMD devices and various devices quirked with pci_add_dma_alias()) must use the same domain. Forcing one device to switch to a DMA domain at runtime will cause in-flight DMAs for other devices to abort or target other memory, which might cause undefined system behavior. With the last private domain usage in iommu_need_mapping() removed, all private domain helpers are also cleaned up in this patch. Otherwise, the compiler would complain that some functions are defined but not used. Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Tested-by: Daniel Drake <drake@endlessm.com> Reviewed-by: Jon Derrick <jonathan.derrick@intel.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Cc: Daniel Drake <drake@endlessm.com> Cc: Derrick Jonathan <jonathan.derrick@intel.com> Cc: Jerry Snitselaar <jsnitsel@redhat.com> Link: https://lore.kernel.org/r/20200506015947.28662-2-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Joerg Roedel authored
Linux 5.7-rc4
-
Andy Shevchenko authored
Unify format of the printed messages, i.e. replace printk(LEVEL ... ) with pr_level(...). Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20200507161804.13275-2-andriy.shevchenko@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Arnd Bergmann authored
gcc warns because the only reference to ipmmu_find_group is inside of an #ifdef: drivers/iommu/ipmmu-vmsa.c:878:28: error: 'ipmmu_find_group' defined but not used [-Werror=unused-function] Change the #ifdef to an equivalent IS_ENABLED(). Fixes: 6580c8a7 ("iommu/renesas: Convert to probe/release_device() call-backs") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Simon Horman <horms@verge.net.au> Link: https://lore.kernel.org/r/20200508220224.688985-1-arnd@arndb.de Signed-off-by: Joerg Roedel <jroedel@suse.de>
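The general shape of the fix (illustrative; the real guard in ipmmu-vmsa.c involves a combination of config symbols):

    /* Before: the only caller is compiled out, so gcc warns that
     * ipmmu_find_group() is defined but not used. */
    #ifdef CONFIG_IOMMU_DMA
            group = ipmmu_find_group(dev);
    #endif

    /* After: the reference stays visible to the compiler (silencing the
     * warning) and dead code is still eliminated when the option is off. */
            if (IS_ENABLED(CONFIG_IOMMU_DMA))
                    group = ipmmu_find_group(dev);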
-
Thierry Reding authored
The host1x bus implemented on Tegra SoCs is primarily an abstraction to create a logical device from multiple platform devices. Since the devices in such a setup are typically hierarchical, DMA setup still needs to be done so that DMA masks can be properly inherited, but we don't actually want to attach the host1x logical devices to any IOMMU. The platform devices that make up the logical device are responsible for memory bus transactions, so it is they that need to be attached to the IOMMU. Add a check to __iommu_probe_device() that aborts IOMMU setup early for buses that don't have the IOMMU operations pointer set, since they would cause a crash otherwise. Signed-off-by: Thierry Reding <treding@nvidia.com> Link: https://lore.kernel.org/r/20200511161000.3853342-1-thierry.reding@gmail.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
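A simplified sketch of the early check described above, at the top of __iommu_probe_device() in the IOMMU core (the surrounding probe logic is elided):

    #include <linux/iommu.h>

    static int __iommu_probe_device(struct device *dev, struct list_head *group_list)
    {
            const struct iommu_ops *ops = dev->bus->iommu_ops;

            if (!ops)
                    return -ENODEV;  /* bus has no IOMMU support: bail out early */

            /* ... normal probe and group setup continues here ... */
            return 0;
    }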
-
Qian Cai authored
The commit dce8d696 ("iommu/amd: Convert to probe/release_device() call-backs") introduced an unused variable:

drivers/iommu/amd_iommu.c: In function 'amd_iommu_uninit_device':
drivers/iommu/amd_iommu.c:422:20: warning: variable 'iommu' set but not used [-Wunused-but-set-variable]
  struct amd_iommu *iommu;
                    ^~~~~

Signed-off-by: Qian Cai <cai@lca.pw> Link: https://lore.kernel.org/r/20200509015645.3236-1-cai@lca.pw Fixes: dce8d696 ("iommu/amd: Convert to probe/release_device() call-backs") Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
- 05 May, 2020 13 commits
-
-
Joerg Roedel authored
The function is now only used in IOMMU core code and shouldn't be used outside of it anyway, so remove the export for it. Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/r/20200429133712.31431-35-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Joerg Roedel authored
Move the calls to dev_iommu_get() and try_module_get() into __iommu_probe_device(), so that the callers don't have to do it on their own. Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/r/20200429133712.31431-34-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Joerg Roedel authored
All drivers are converted to use the probe/release_device() call-backs, so the add_device/remove_device() pointers are unused and the code using them can be removed. Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/r/20200429133712.31431-33-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Joerg Roedel authored
Convert the Exynos IOMMU driver to use the probe_device() and release_device() call-backs of iommu_ops, so that the iommu core code does the group and sysfs setup. Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/r/20200429133712.31431-32-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
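For reference, the shape of the call-backs that this and the following driver conversions implement looks roughly like this (a generic sketch, not the Exynos-specific code; my_iommu_lookup() and the my_iommu structure are placeholders):

    static struct iommu_device *my_iommu_probe_device(struct device *dev)
    {
            struct my_iommu *iommu = my_iommu_lookup(dev);  /* driver-specific lookup */

            if (!iommu)
                    return ERR_PTR(-ENODEV);

            /* No iommu_group or sysfs handling here any more: the core does it. */
            return &iommu->iommu;
    }

    static void my_iommu_release_device(struct device *dev)
    {
            /* undo the per-device setup done in probe_device() */
    }

    static const struct iommu_ops my_iommu_ops = {
            .probe_device   = my_iommu_probe_device,
            .release_device = my_iommu_release_device,
            .device_group   = generic_device_group,
            /* ... attach_dev, map, unmap, etc. unchanged ... */
    };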
-
Joerg Roedel authored
On Exynos platforms there can be more than one SYSMMU (IOMMU) for one DMA master device. Since the IOMMU core code expects only one hardware IOMMU, use the first SYSMMU in the list. Signed-off-by: Joerg Roedel <jroedel@suse.de> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Link: https://lore.kernel.org/r/20200429133712.31431-31-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Joerg Roedel authored
Convert the OMAP IOMMU driver to use the probe_device() and release_device() call-backs of iommu_ops, so that the iommu core code does the group and sysfs setup. Signed-off-by: Joerg Roedel <jroedel@suse.de> Link: https://lore.kernel.org/r/20200429133712.31431-30-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Joerg Roedel authored
Remove the tracking of devices which could not be probed because their IOMMU was not probed yet. Replace it with a call to bus_iommu_probe() when a new IOMMU is probed. Signed-off-by: Joerg Roedel <jroedel@suse.de> Link: https://lore.kernel.org/r/20200429133712.31431-29-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Joerg Roedel authored
Convert the Renesas IOMMU driver to use the probe_device() and release_device() call-backs of iommu_ops, so that the iommu core code does the group and sysfs setup. Signed-off-by: Joerg Roedel <jroedel@suse.de> Link: https://lore.kernel.org/r/20200429133712.31431-28-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Joerg Roedel authored
Convert the Tegra IOMMU drivers to use the probe_device() and release_device() call-backs of iommu_ops, so that the iommu core code does the group and sysfs setup. Signed-off-by: Joerg Roedel <jroedel@suse.de> Link: https://lore.kernel.org/r/20200429133712.31431-27-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Joerg Roedel authored
Convert the Rockchip IOMMU driver to use the probe_device() and release_device() call-backs of iommu_ops, so that the iommu core code does the group and sysfs setup. Signed-off-by: Joerg Roedel <jroedel@suse.de> Link: https://lore.kernel.org/r/20200429133712.31431-26-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Joerg Roedel authored
Convert the QCOM IOMMU driver to use the probe_device() and release_device() call-backs of iommu_ops, so that the iommu core code does the group and sysfs setup. Signed-off-by: Joerg Roedel <jroedel@suse.de> Link: https://lore.kernel.org/r/20200429133712.31431-25-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Joerg Roedel authored
Convert the Mediatek-v1 IOMMU driver to use the probe_device() and release_device() call-backs of iommu_ops, so that the iommu core code does the group and sysfs setup. Signed-off-by: Joerg Roedel <jroedel@suse.de> Link: https://lore.kernel.org/r/20200429133712.31431-24-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
-
Joerg Roedel authored
Convert the Mediatek IOMMU driver to use the probe_device() and release_device() call-backs of iommu_ops, so that the iommu core code does the group and sysfs setup. Signed-off-by: Joerg Roedel <jroedel@suse.de> Link: https://lore.kernel.org/r/20200429133712.31431-23-joro@8bytes.org Signed-off-by: Joerg Roedel <jroedel@suse.de>
-