Commit 457fa346 authored by Linus Torvalds's avatar Linus Torvalds

Merge tag 'char-misc-4.21-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver updates from Greg KH:
 "Here is the big set of char and misc driver patches for 4.21-rc1.

  Lots of different types of driver things in here, as this tree seems
  to be the "collection of various driver subsystems not big enough to
  have their own git tree" lately.

  Anyway, some highlights of the changes in here:

   - binderfs: is it a rule that all driver subsystems will eventually
     grow to have their own filesystem? Binder now has one to handle the
     use of it in containerized systems.

     This was discussed at the Plumbers conference a few months ago and
     knocked into mergeable shape very fast by Christian Brauner, who
     has also signed up to be another binder maintainer, showing a
     distinct lack of good judgement :)

   - binder updates and fixes

   - mei driver updates

   - fpga driver updates and additions

   - thunderbolt driver updates

   - soundwire driver updates

   - extcon driver updates

   - nvmem driver updates

   - hyper-v driver updates

   - coresight driver updates

   - pvpanic driver additions and reworking for more device support

   - lp driver updates. Yes really, it's _finally_ moved to the proper
     parallel port driver model, something I never thought I would see
     happen. Good stuff.

   - other tiny driver updates and fixes.

  All of these have been in linux-next for a while with no reported
  issues"

* tag 'char-misc-4.21-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (116 commits)
  MAINTAINERS: add another Android binder maintainer
  intel_th: msu: Fix an off-by-one in attribute store
  stm class: Add a reference to the SyS-T document
  stm class: Fix a module refcount leak in policy creation error path
  char: lp: use new parport device model
  char: lp: properly count the lp devices
  char: lp: use first unused lp number while registering
  char: lp: detach the device when parallel port is removed
  char: lp: introduce list to save port number
  bus: qcom: remove duplicated include from qcom-ebi2.c
  VMCI: Use memdup_user() rather than duplicating its implementation
  char/rtc: Use of_node_name_eq for node name comparisons
  misc: mic: fix a DMA pool free failure
  ptp: fix an IS_ERR() vs NULL check
  genwqe: Fix size check
  binder: implement binderfs
  binder: fix use-after-free due to ksys_close() during fdget()
  bus: fsl-mc: remove duplicated include files
  bus: fsl-mc: explicitly define the fsl_mc_command endianness
  misc: ti-st: make array read_ver_cmd static, shrinks object size
  ...
parents b07039b7 fbc4904c
......@@ -21,6 +21,15 @@ Description: Holds a comma separated list of device unique_ids that
If a device is authorized automatically during boot its
boot attribute is set to 1.
What: /sys/bus/thunderbolt/devices/.../domainX/iommu_dma_protection
Date: Mar 2019
KernelVersion: 4.21
Contact: thunderbolt-software@lists.01.org
Description: This attribute tells whether the system uses IOMMU
for DMA protection. Value of 1 means IOMMU is used, 0 means
it is not (DMA protection is solely based on Thunderbolt
security levels).
What: /sys/bus/thunderbolt/devices/.../domainX/security
Date: Sep 2017
KernelVersion: 4.13
......
......@@ -133,6 +133,26 @@ If the user still wants to connect the device they can either approve
the device without a key or write a new key and write 1 to the
``authorized`` file to get the new key stored on the device NVM.
DMA protection utilizing IOMMU
------------------------------
Recent systems from 2018 and forward with Thunderbolt ports may natively
support IOMMU. This means that Thunderbolt security is handled by an IOMMU
so connected devices cannot access memory regions outside of what is
allocated for them by drivers. When Linux is running on such a system it
automatically enables the IOMMU if not already enabled by the user. These
systems can be identified by reading ``1`` from
``/sys/bus/thunderbolt/devices/domainX/iommu_dma_protection`` attribute.
The driver does not do anything special in this case, but because DMA
protection is handled by the IOMMU, security levels (if set) are
redundant. For this reason some systems ship with security level set to
``none``. Other systems have security level set to ``user`` in order to
support downgrade to older OS, so users who want to automatically
authorize devices when IOMMU DMA protection is enabled can use the
following ``udev`` rule::
ACTION=="add", SUBSYSTEM=="thunderbolt", ATTRS{iommu_dma_protection}=="1", ATTR{authorized}=="0", ATTR{authorized}="1"
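The same check can be done from a program. A minimal user-space sketch in C
is shown below; the ``domain0`` path is only an example, since a system may
expose several domains::

  #include <stdio.h>

  int main(void)
  {
      const char *attr =
          "/sys/bus/thunderbolt/devices/domain0/iommu_dma_protection";
      FILE *f = fopen(attr, "r");
      int val = 0;

      if (!f) {
          perror("fopen");
          return 1;
      }
      if (fscanf(f, "%d", &val) != 1)
          val = 0;
      fclose(f);

      printf("IOMMU DMA protection: %s\n", val ? "enabled" : "not used");
      return 0;
  }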
Upgrading NVM on Thunderbolt device or host
-------------------------------------------
Since most of the functionality is handled in firmware running on a
......
Intel Service Layer Driver for Stratix10 SoC
============================================
The Intel Stratix10 SoC is composed of a 64-bit quad-core ARM Cortex-A53 hard
processor system (HPS) and a Secure Device Manager (SDM). When the FPGA is
configured from the HPS, there needs to be a way for the HPS to notify the SDM
of the location and size of the configuration data. The SDM then fetches the
configuration data from that location and performs the FPGA configuration.
To meet whole-system security needs and to support virtual machines requesting
communication with the SDM, only the secure world of software (EL3, Exception
Level 3) can interface with the SDM. All software entities running at other
exception levels must channel through the EL3 software whenever they need
service from the SDM.
The Intel Stratix10 service layer driver, running at privileged exception level
(EL1, Exception Level 1), interfaces with the service providers and provides
the services for FPGA configuration, QSPI, Crypto and warm reset. The service
layer driver also issues secure monitor calls (SMC) to communicate with the
secure monitor code running in EL3.
Required properties:
-------------------
The svc node has the following mandatory properties and must be located under
the firmware node.
- compatible: "intel,stratix10-svc"
- method: smc or hvc
smc - Secure Monitor Call
hvc - Hypervisor Call
- memory-region:
phandle to the reserved memory node. See
Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
for details
Example:
-------
reserved-memory {
#address-cells = <2>;
#size-cells = <2>;
ranges;
service_reserved: svcbuffer@0 {
compatible = "shared-dma-pool";
reg = <0x0 0x0 0x0 0x1000000>;
alignment = <0x1000>;
no-map;
};
};
firmware {
svc {
compatible = "intel,stratix10-svc";
method = "smc";
memory-region = <&service_reserved>;
};
};
Intel Stratix10 SoC FPGA Manager
Required properties:
The fpga_mgr node has the following mandatory property and must be located
under the firmware/svc node.
- compatible : should contain "intel,stratix10-soc-fpga-mgr"
Example:
firmware {
svc {
fpga_mgr: fpga-mgr {
compatible = "intel,stratix10-soc-fpga-mgr";
};
};
};
* QEMU PVPANIC MMIO Configuration bindings
QEMU's emulation / virtualization targets provide the following PVPANIC
MMIO Configuration interface on the "virt" machine type:

- a read-write, 16-bit wide data register.
QEMU exposes the data register to guests as memory mapped registers.
Required properties:
- compatible: "qemu,pvpanic-mmio".
- reg: the MMIO region used by the device.
* Bytes 0x0 Write panic event to the reg when guest OS panics.
* Bytes 0x1 Reserved.
Example:
/ {
#size-cells = <0x2>;
#address-cells = <0x2>;
pvpanic-mmio@9060000 {
compatible = "qemu,pvpanic-mmio";
reg = <0x0 0x9060000 0x0 0x2>;
};
};
......@@ -2,6 +2,8 @@
Required properties:
- compatible: should be "amlogic,meson-gxbb-efuse"
- clocks: phandle to the efuse peripheral clock provided by the
clock controller.
= Data cells =
Are child nodes of eFuse, bindings of which as described in
......@@ -11,6 +13,7 @@ Example:
efuse: efuse {
compatible = "amlogic,meson-gxbb-efuse";
clocks = <&clkc CLKID_EFUSE>;
#address-cells = <1>;
#size-cells = <1>;
......
......@@ -13,3 +13,33 @@ EDD Interfaces
.. kernel-doc:: drivers/firmware/edd.c
:internal:
Intel Stratix10 SoC Service Layer
---------------------------------
Some features of the Intel Stratix10 SoC require a level of privilege
higher than the kernel is granted. Such secure features include
FPGA programming. In terms of the ARMv8 architecture, the kernel runs
at Exception Level 1 (EL1); access to these features requires
Exception Level 3 (EL3).
The Intel Stratix10 SoC service layer provides an in-kernel API for
drivers to request access to the secure features. The requests are queued
and processed one by one. ARM’s SMCCC is used to pass the execution
of the requests on to a secure monitor (EL3).
.. kernel-doc:: include/linux/firmware/intel/stratix10-svc-client.h
:functions: stratix10_svc_command_code
.. kernel-doc:: include/linux/firmware/intel/stratix10-svc-client.h
:functions: stratix10_svc_client_msg
.. kernel-doc:: include/linux/firmware/intel/stratix10-svc-client.h
:functions: stratix10_svc_command_reconfig_payload
.. kernel-doc:: include/linux/firmware/intel/stratix10-svc-client.h
:functions: stratix10_svc_cb_data
.. kernel-doc:: include/linux/firmware/intel/stratix10-svc-client.h
:functions: stratix10_svc_client
.. kernel-doc:: drivers/firmware/stratix10-svc.c
:export:
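A hedged sketch of a kernel client using this API follows. The structure names
come from the kernel-doc references above; the channel helpers
(stratix10_svc_request_channel_byname() and friends) and the COMMAND_NOOP
command code are assumptions based on the interface exported by
drivers/firmware/stratix10-svc.c, so check
include/linux/firmware/intel/stratix10-svc-client.h for the exact signatures::

  #include <linux/completion.h>
  #include <linux/device.h>
  #include <linux/err.h>
  #include <linux/firmware/intel/stratix10-svc-client.h>
  #include <linux/types.h>

  struct example_ctx {
      struct completion done;
      u32 status;
  };

  /* Completion callback, invoked once EL3 has handled the request. */
  static void example_svc_receive_cb(struct stratix10_svc_client *client,
                                     struct stratix10_svc_cb_data *data)
  {
      struct example_ctx *ctx = client->priv;

      ctx->status = data->status;
      complete(&ctx->done);
  }

  static int example_svc_request(struct device *dev)
  {
      struct example_ctx ctx;
      struct stratix10_svc_client client = {
          .dev = dev,
          .receive_cb = example_svc_receive_cb,
          .priv = &ctx,
      };
      struct stratix10_svc_client_msg msg = {
          .command = COMMAND_NOOP,    /* assumed no-op/query command code */
      };
      struct stratix10_svc_chan *chan;
      int ret;

      init_completion(&ctx.done);

      /* Channels are requested by name; SVC_CLIENT_FPGA is the FPGA service. */
      chan = stratix10_svc_request_channel_byname(&client, SVC_CLIENT_FPGA);
      if (IS_ERR(chan))
          return PTR_ERR(chan);

      /* The request is queued and passed on to EL3 via ARM SMCCC. */
      ret = stratix10_svc_send(chan, &msg);
      if (!ret) {
          wait_for_completion(&ctx.done);
          dev_info(dev, "service layer status: %u\n", ctx.status);
          stratix10_svc_done(chan);    /* mark the request as finished */
      }

      stratix10_svc_free_channel(chan);
      return ret;
  }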
......@@ -22,3 +22,4 @@ Linux Tracing Technologies
hwlat_detector
intel_th
stm
sys-t
......@@ -958,6 +958,7 @@ M: Arve Hjønnevåg <arve@android.com>
M: Todd Kjos <tkjos@android.com>
M: Martijn Coenen <maco@android.com>
M: Joel Fernandes <joel@joelfernandes.org>
M: Christian Brauner <christian@brauner.io>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git
L: devel@driverdev.osuosl.org
S: Supported
......@@ -1442,6 +1443,7 @@ F: arch/arm/mach-ep93xx/micro9.c
ARM/CORESIGHT FRAMEWORK AND DRIVERS
M: Mathieu Poirier <mathieu.poirier@linaro.org>
R: Suzuki K Poulose <suzuki.poulose@arm.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: drivers/hwtracing/coresight/*
......@@ -16242,7 +16244,7 @@ F: drivers/vme/
F: include/linux/vme*
VMWARE BALLOON DRIVER
M: Xavier Deguillard <xdeguillard@vmware.com>
M: Julien Freche <jfreche@vmware.com>
M: Nadav Amit <namit@vmware.com>
M: "VMware, Inc." <pv-drivers@vmware.com>
L: linux-kernel@vger.kernel.org
......
......@@ -24,6 +24,19 @@ / {
#address-cells = <2>;
#size-cells = <2>;
reserved-memory {
#address-cells = <2>;
#size-cells = <2>;
ranges;
service_reserved: svcbuffer@0 {
compatible = "shared-dma-pool";
reg = <0x0 0x0 0x0 0x1000000>;
alignment = <0x1000>;
no-map;
};
};
cpus {
#address-cells = <1>;
#size-cells = <0>;
......@@ -93,6 +106,14 @@ soc {
interrupt-parent = <&intc>;
ranges = <0 0 0 0xffffffff>;
base_fpga_region {
#address-cells = <0x1>;
#size-cells = <0x1>;
compatible = "fpga-region";
fpga-mgr = <&fpga_mgr>;
};
clkmgr: clock-controller@ffd10000 {
compatible = "intel,stratix10-clkmgr";
reg = <0xffd10000 0x1000>;
......@@ -537,5 +558,17 @@ qspi: spi@ff8d2000 {
status = "disabled";
};
firmware {
svc {
compatible = "intel,stratix10-svc";
method = "smc";
memory-region = <&service_reserved>;
fpga_mgr: fpga-mgr {
compatible = "intel,stratix10-soc-fpga-mgr";
};
};
};
};
};
......@@ -24,6 +24,14 @@ static int acpi_data_get_property_array(const struct acpi_device_data *data,
acpi_object_type type,
const union acpi_object **obj);
/*
* The GUIDs here are made equivalent to each other in order to avoid extra
* complexity in the properties handling code, with the caveat that the
* kernel will accept, without a warning, certain combinations of GUID and
* properties that are not defined. For instance, if any of the properties
* from one GUID appear in the property list of another, they will be
* accepted by the kernel. Firmware validation tools should catch these.
*/
static const guid_t prp_guids[] = {
/* ACPI _DSD device properties GUID: daffd814-6eba-4d8c-8a91-bc9bbf4aa301 */
GUID_INIT(0xdaffd814, 0x6eba, 0x4d8c,
......@@ -31,6 +39,9 @@ static const guid_t prp_guids[] = {
/* Hotplug in D3 GUID: 6211e2c0-58a3-4af3-90e1-927a4e0c55a4 */
GUID_INIT(0x6211e2c0, 0x58a3, 0x4af3,
0x90, 0xe1, 0x92, 0x7a, 0x4e, 0x0c, 0x55, 0xa4),
/* External facing port GUID: efcc06cc-73ac-4bc3-bff0-76143807c389 */
GUID_INIT(0xefcc06cc, 0x73ac, 0x4bc3,
0xbf, 0xf0, 0x76, 0x14, 0x38, 0x07, 0xc3, 0x89),
};
static const guid_t ads_guid =
......
......@@ -20,6 +20,18 @@ config ANDROID_BINDER_IPC
Android process, using Binder to identify, invoke and pass arguments
between said processes.
config ANDROID_BINDERFS
bool "Android Binderfs filesystem"
depends on ANDROID_BINDER_IPC
default n
---help---
Binderfs is a pseudo-filesystem for the Android Binder IPC driver
which can be mounted per IPC namespace, allowing multiple
instances of Android to run.
Each binderfs mount initially only contains a binder-control device.
It can be used to dynamically allocate new binder IPC devices via
ioctls.
config ANDROID_BINDER_DEVICES
string "Android Binder devices"
depends on ANDROID_BINDER_IPC
......
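The binderfs Kconfig help above notes that new binder IPC devices are
allocated through ioctls on the binder-control device. A hedged user-space
sketch is shown here; the /dev/binderfs mount point and the UAPI header name
are assumptions, and the BINDER_CTL_ADD ioctl and struct binderfs_device
layout should be checked against the header added by this series:

  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <linux/android/binder_ctl.h>   /* assumed UAPI header name */

  int main(void)
  {
      struct binderfs_device device = { 0 };
      int fd, ret;

      /* binder-control lives in the root of every binderfs mount. */
      fd = open("/dev/binderfs/binder-control", O_RDONLY | O_CLOEXEC);
      if (fd < 0) {
          perror("open binder-control");
          return 1;
      }

      /* Name of the new binder device to allocate. */
      strcpy(device.name, "my-binder");

      ret = ioctl(fd, BINDER_CTL_ADD, &device);
      close(fd);
      if (ret < 0) {
          perror("BINDER_CTL_ADD");
          return 1;
      }

      printf("allocated %s (major %u, minor %u)\n",
             device.name, device.major, device.minor);
      return 0;
  }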
ccflags-y += -I$(src) # needed for trace events
obj-$(CONFIG_ANDROID_BINDERFS) += binderfs.o
obj-$(CONFIG_ANDROID_BINDER_IPC) += binder.o binder_alloc.o
obj-$(CONFIG_ANDROID_BINDER_IPC_SELFTEST) += binder_alloc_selftest.o
......@@ -939,6 +939,7 @@ enum lru_status binder_alloc_free_page(struct list_head *item,
struct list_lru_one *lru,
spinlock_t *lock,
void *cb_arg)
__must_hold(lock)
{
struct mm_struct *mm = NULL;
struct binder_lru_page *page = container_of(item,
......
......@@ -30,16 +30,16 @@ struct binder_transaction;
* struct binder_buffer - buffer used for binder transactions
* @entry: entry alloc->buffers
* @rb_node: node for allocated_buffers/free_buffers rb trees
* @free: true if buffer is free
* @allow_user_free: describe the second member of struct blah,
* @async_transaction: describe the second member of struct blah,
* @debug_id: describe the second member of struct blah,
* @transaction: describe the second member of struct blah,
* @target_node: describe the second member of struct blah,
* @data_size: describe the second member of struct blah,
* @offsets_size: describe the second member of struct blah,
* @extra_buffers_size: describe the second member of struct blah,
* @data:i describe the second member of struct blah,
* @free: %true if buffer is free
* @allow_user_free: %true if user is allowed to free buffer
* @async_transaction: %true if buffer is in use for an async txn
* @debug_id: unique ID for debugging
* @transaction: pointer to associated struct binder_transaction
* @target_node: struct binder_node associated with this buffer
* @data_size: size of @transaction data
* @offsets_size: size of array of offsets
* @extra_buffers_size: size of space for other objects (like sg lists)
* @data: pointer to base of buffer space
*
* Bookkeeping structure for binder transaction buffers
*/
......
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_BINDER_INTERNAL_H
#define _LINUX_BINDER_INTERNAL_H
#include <linux/export.h>
#include <linux/fs.h>
#include <linux/list.h>
#include <linux/miscdevice.h>
#include <linux/mutex.h>
#include <linux/stddef.h>
#include <linux/types.h>
#include <linux/uidgid.h>
struct binder_context {
struct binder_node *binder_context_mgr_node;
struct mutex context_mgr_node_lock;
kuid_t binder_context_mgr_uid;
const char *name;
};
/**
* struct binder_device - information about a binder device node
* @hlist: list of binder devices (only used for devices requested via
* CONFIG_ANDROID_BINDER_DEVICES)
* @miscdev: information about a binder character device node
* @context: binder context information
* @binderfs_inode: This is the inode of the root dentry of the super block
* belonging to a binderfs mount.
*/
struct binder_device {
struct hlist_node hlist;
struct miscdevice miscdev;
struct binder_context context;
struct inode *binderfs_inode;
};
extern const struct file_operations binder_fops;
#ifdef CONFIG_ANDROID_BINDERFS
extern bool is_binderfs_device(const struct inode *inode);
#else
static inline bool is_binderfs_device(const struct inode *inode)
{
return false;
}
#endif
#endif /* _LINUX_BINDER_INTERNAL_H */
......@@ -5,7 +5,6 @@
*/
#include <linux/kernel.h>
#include <linux/fsl/mc.h>
#include <linux/fsl/mc.h>
#include "fsl-mc-private.h"
......
......@@ -5,7 +5,6 @@
*/
#include <linux/kernel.h>
#include <linux/fsl/mc.h>
#include <linux/fsl/mc.h>
#include "fsl-mc-private.h"
......
......@@ -21,7 +21,6 @@
#include <linux/of.h>
#include <linux/of_platform.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/slab.h>
#include <linux/platform_device.h>
#include <linux/bitops.h>
......
......@@ -866,8 +866,8 @@ static int __init rtc_init(void)
#ifdef CONFIG_SPARC32
for_each_node_by_name(ebus_dp, "ebus") {
struct device_node *dp;
for (dp = ebus_dp; dp; dp = dp->sibling) {
if (!strcmp(dp->name, "rtc")) {
for_each_child_of_node(ebus_dp, dp) {
if (of_node_name_eq(dp, "rtc")) {
op = of_find_device_by_node(dp);
if (op) {
rtc_port = op->resource[0].start;
......
......@@ -506,28 +506,28 @@ static ssize_t store_select_amcb2_transmit_clock(struct device *d,
val = (unsigned char)tmp;
spin_lock_irqsave(&event_lock, flags);
if ((val == CLK_8kHz) || (val == CLK_16_384MHz)) {
SET_PORT_BITS(TLCLK_REG3, 0xc7, 0x28);
SET_PORT_BITS(TLCLK_REG1, 0xfb, ~val);
} else if (val >= CLK_8_592MHz) {
SET_PORT_BITS(TLCLK_REG3, 0xc7, 0x38);
switch (val) {
case CLK_8_592MHz:
SET_PORT_BITS(TLCLK_REG0, 0xfc, 2);
break;
case CLK_11_184MHz:
SET_PORT_BITS(TLCLK_REG0, 0xfc, 0);
break;
case CLK_34_368MHz:
SET_PORT_BITS(TLCLK_REG0, 0xfc, 3);
break;
case CLK_44_736MHz:
SET_PORT_BITS(TLCLK_REG0, 0xfc, 1);
break;
}
} else
SET_PORT_BITS(TLCLK_REG3, 0xc7, val << 3);
if ((val == CLK_8kHz) || (val == CLK_16_384MHz)) {
SET_PORT_BITS(TLCLK_REG3, 0xc7, 0x28);
SET_PORT_BITS(TLCLK_REG1, 0xfb, ~val);
} else if (val >= CLK_8_592MHz) {
SET_PORT_BITS(TLCLK_REG3, 0xc7, 0x38);
switch (val) {
case CLK_8_592MHz:
SET_PORT_BITS(TLCLK_REG0, 0xfc, 2);
break;
case CLK_11_184MHz:
SET_PORT_BITS(TLCLK_REG0, 0xfc, 0);
break;
case CLK_34_368MHz:
SET_PORT_BITS(TLCLK_REG0, 0xfc, 3);
break;
case CLK_44_736MHz:
SET_PORT_BITS(TLCLK_REG0, 0xfc, 1);
break;
}
} else {
SET_PORT_BITS(TLCLK_REG3, 0xc7, val << 3);
}
spin_unlock_irqrestore(&event_lock, flags);
return strnlen(buf, count);
......@@ -548,27 +548,28 @@ static ssize_t store_select_amcb1_transmit_clock(struct device *d,
val = (unsigned char)tmp;
spin_lock_irqsave(&event_lock, flags);
if ((val == CLK_8kHz) || (val == CLK_16_384MHz)) {
SET_PORT_BITS(TLCLK_REG3, 0xf8, 0x5);
SET_PORT_BITS(TLCLK_REG1, 0xfb, ~val);
} else if (val >= CLK_8_592MHz) {
SET_PORT_BITS(TLCLK_REG3, 0xf8, 0x7);
switch (val) {
case CLK_8_592MHz:
SET_PORT_BITS(TLCLK_REG0, 0xfc, 2);
break;
case CLK_11_184MHz:
SET_PORT_BITS(TLCLK_REG0, 0xfc, 0);
break;
case CLK_34_368MHz:
SET_PORT_BITS(TLCLK_REG0, 0xfc, 3);
break;
case CLK_44_736MHz:
SET_PORT_BITS(TLCLK_REG0, 0xfc, 1);
break;
}
} else
SET_PORT_BITS(TLCLK_REG3, 0xf8, val);
if ((val == CLK_8kHz) || (val == CLK_16_384MHz)) {
SET_PORT_BITS(TLCLK_REG3, 0xf8, 0x5);
SET_PORT_BITS(TLCLK_REG1, 0xfb, ~val);
} else if (val >= CLK_8_592MHz) {
SET_PORT_BITS(TLCLK_REG3, 0xf8, 0x7);
switch (val) {
case CLK_8_592MHz:
SET_PORT_BITS(TLCLK_REG0, 0xfc, 2);
break;
case CLK_11_184MHz:
SET_PORT_BITS(TLCLK_REG0, 0xfc, 0);
break;
case CLK_34_368MHz:
SET_PORT_BITS(TLCLK_REG0, 0xfc, 3);
break;
case CLK_44_736MHz:
SET_PORT_BITS(TLCLK_REG0, 0xfc, 1);
break;
}
} else {
SET_PORT_BITS(TLCLK_REG3, 0xf8, val);
}
spin_unlock_irqrestore(&event_lock, flags);
return strnlen(buf, count);
......
......@@ -1309,7 +1309,7 @@ static const struct attribute_group port_attribute_group = {
.attrs = port_sysfs_entries,
};
static int debugfs_show(struct seq_file *s, void *data)
static int port_debugfs_show(struct seq_file *s, void *data)
{
struct port *port = s->private;
......@@ -1327,18 +1327,7 @@ static int debugfs_show(struct seq_file *s, void *data)
return 0;
}
static int debugfs_open(struct inode *inode, struct file *file)
{
return single_open(file, debugfs_show, inode->i_private);
}
static const struct file_operations port_debugfs_ops = {
.owner = THIS_MODULE,
.open = debugfs_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
DEFINE_SHOW_ATTRIBUTE(port_debugfs);
static void set_console_size(struct port *port, u16 rows, u16 cols)
{
......@@ -1490,7 +1479,7 @@ static int add_port(struct ports_device *portdev, u32 id)
port->debugfs_file = debugfs_create_file(debugfs_name, 0444,
pdrvdata.debugfs_dir,
port,
&port_debugfs_ops);
&port_debugfs_fops);
}
return 0;
......
......@@ -657,6 +657,8 @@ static int max14577_muic_probe(struct platform_device *pdev)
struct max14577 *max14577 = dev_get_drvdata(pdev->dev.parent);
struct max14577_muic_info *info;
int delay_jiffies;
int cable_type;
bool attached;
int ret;
int i;
u8 id;
......@@ -725,8 +727,17 @@ static int max14577_muic_probe(struct platform_device *pdev)
info->path_uart = CTRL1_SW_UART;
delay_jiffies = msecs_to_jiffies(DELAY_MS_DEFAULT);
/* Set initial path for UART */
max14577_muic_set_path(info, info->path_uart, true);
/* Set initial path for UART when JIG is connected to get serial logs */
ret = max14577_bulk_read(info->max14577->regmap,
MAX14577_MUIC_REG_STATUS1, info->status, 2);
if (ret) {
dev_err(info->dev, "Cannot read STATUS registers\n");
return ret;
}
cable_type = max14577_muic_get_cable_type(info, MAX14577_CABLE_GROUP_ADC,
&attached);
if (attached && cable_type == MAX14577_MUIC_ADC_FACTORY_MODE_UART_OFF)
max14577_muic_set_path(info, info->path_uart, true);
/* Check revision number of MUIC device*/
ret = max14577_read_reg(info->max14577->regmap,
......
......@@ -1072,6 +1072,8 @@ static int max77693_muic_probe(struct platform_device *pdev)
struct max77693_reg_data *init_data;
int num_init_data;
int delay_jiffies;
int cable_type;
bool attached;
int ret;
int i;
unsigned int id;
......@@ -1212,8 +1214,18 @@ static int max77693_muic_probe(struct platform_device *pdev)
delay_jiffies = msecs_to_jiffies(DELAY_MS_DEFAULT);
}
/* Set initial path for UART */
max77693_muic_set_path(info, info->path_uart, true);
/* Set initial path for UART when JIG is connected to get serial logs */
ret = regmap_bulk_read(info->max77693->regmap_muic,
MAX77693_MUIC_REG_STATUS1, info->status, 2);
if (ret) {
dev_err(info->dev, "failed to read MUIC register\n");
return ret;
}
cable_type = max77693_muic_get_cable_type(info,
MAX77693_CABLE_GROUP_ADC, &attached);
if (attached && (cable_type == MAX77693_MUIC_ADC_FACTORY_MODE_UART_ON ||
cable_type == MAX77693_MUIC_ADC_FACTORY_MODE_UART_OFF))
max77693_muic_set_path(info, info->path_uart, true);
/* Check revision number of MUIC device*/
ret = regmap_read(info->max77693->regmap_muic,
......
......@@ -812,6 +812,8 @@ static int max77843_muic_probe(struct platform_device *pdev)
struct max77693_dev *max77843 = dev_get_drvdata(pdev->dev.parent);
struct max77843_muic_info *info;
unsigned int id;
int cable_type;
bool attached;
int i, ret;
info = devm_kzalloc(&pdev->dev, sizeof(*info), GFP_KERNEL);
......@@ -856,9 +858,19 @@ static int max77843_muic_probe(struct platform_device *pdev)
/* Set ADC debounce time */
max77843_muic_set_debounce_time(info, MAX77843_DEBOUNCE_TIME_25MS);
/* Set initial path for UART */
max77843_muic_set_path(info, MAX77843_MUIC_CONTROL1_SW_UART, true,
false);
/* Set initial path for UART when JIG is connected to get serial logs */
ret = regmap_bulk_read(max77843->regmap_muic,
MAX77843_MUIC_REG_STATUS1, info->status,
MAX77843_MUIC_STATUS_NUM);
if (ret) {
dev_err(info->dev, "Cannot read STATUS registers\n");
goto err_muic_irq;
}
cable_type = max77843_muic_get_cable_type(info, MAX77843_CABLE_GROUP_ADC,
&attached);
if (attached && cable_type == MAX77843_MUIC_ADC_FACTORY_MODE_UART_OFF)
max77843_muic_set_path(info, MAX77843_MUIC_CONTROL1_SW_UART,
true, false);
/* Check revision number of MUIC device */
ret = regmap_read(max77843->regmap_muic, MAX77843_MUIC_REG_ID, &id);
......
......@@ -311,12 +311,10 @@ static int max8997_muic_handle_usb(struct max8997_muic_info *info,
{
int ret = 0;
if (usb_type == MAX8997_USB_HOST) {
ret = max8997_muic_set_path(info, info->path_usb, attached);
if (ret < 0) {
dev_err(info->dev, "failed to update muic register\n");
return ret;
}
ret = max8997_muic_set_path(info, info->path_usb, attached);
if (ret < 0) {
dev_err(info->dev, "failed to update muic register\n");
return ret;
}
switch (usb_type) {
......@@ -632,6 +630,8 @@ static int max8997_muic_probe(struct platform_device *pdev)
struct max8997_platform_data *pdata = dev_get_platdata(max8997->dev);
struct max8997_muic_info *info;
int delay_jiffies;
int cable_type;
bool attached;
int ret, i;
info = devm_kzalloc(&pdev->dev, sizeof(struct max8997_muic_info),
......@@ -724,8 +724,17 @@ static int max8997_muic_probe(struct platform_device *pdev)
delay_jiffies = msecs_to_jiffies(DELAY_MS_DEFAULT);
}
/* Set initial path for UART */
max8997_muic_set_path(info, info->path_uart, true);
/* Set initial path for UART when JIG is connected to get serial logs */
ret = max8997_bulk_read(info->muic, MAX8997_MUIC_REG_STATUS1,
2, info->status);
if (ret) {
dev_err(info->dev, "failed to read MUIC register\n");
return ret;
}
cable_type = max8997_muic_get_cable_type(info,
MAX8997_CABLE_GROUP_ADC, &attached);
if (attached && cable_type == MAX8997_MUIC_ADC_FACTORY_MODE_UART_OFF)
max8997_muic_set_path(info, info->path_uart, true);
/* Set ADC debounce time */
max8997_muic_set_debounce_time(info, ADC_DEBOUNCE_TIME_25MS);
......
......@@ -216,6 +216,18 @@ config FW_CFG_SYSFS_CMDLINE
WARNING: Using incorrect parameters (base address in particular)
may crash your system.
config INTEL_STRATIX10_SERVICE
tristate "Intel Stratix10 Service Layer"
depends on HAVE_ARM_SMCCC
default n
help
Intel Stratix10 service layer runs at privileged exception level,
interfaces with the service providers (FPGA manager is one of them)
and manages secure monitor calls to communicate with secure monitor
software at the secure monitor exception level.
Say Y here if you want Stratix10 service layer support.
config QCOM_SCM
bool
depends on ARM || ARM64
......
......@@ -12,6 +12,7 @@ obj-$(CONFIG_DMI_SYSFS) += dmi-sysfs.o
obj-$(CONFIG_EDD) += edd.o
obj-$(CONFIG_EFI_PCDP) += pcdp.o
obj-$(CONFIG_DMIID) += dmi-id.o
obj-$(CONFIG_INTEL_STRATIX10_SERVICE) += stratix10-svc.o
obj-$(CONFIG_ISCSI_IBFT_FIND) += iscsi_ibft_find.o
obj-$(CONFIG_ISCSI_IBFT) += iscsi_ibft.o
obj-$(CONFIG_FIRMWARE_MEMMAP) += memmap.o
......
......@@ -56,6 +56,12 @@ config FPGA_MGR_ZYNQ_FPGA
help
FPGA manager driver support for Xilinx Zynq FPGAs.
config FPGA_MGR_STRATIX10_SOC
tristate "Intel Stratix10 SoC FPGA Manager"
depends on (ARCH_STRATIX10 && INTEL_STRATIX10_SERVICE)
help
FPGA manager driver support for the Intel Stratix10 SoC.
config FPGA_MGR_XILINX_SPI
tristate "Xilinx Configuration over Slave Serial (SPI)"
depends on SPI
......
......@@ -13,6 +13,7 @@ obj-$(CONFIG_FPGA_MGR_ICE40_SPI) += ice40-spi.o
obj-$(CONFIG_FPGA_MGR_MACHXO2_SPI) += machxo2-spi.o
obj-$(CONFIG_FPGA_MGR_SOCFPGA) += socfpga.o
obj-$(CONFIG_FPGA_MGR_SOCFPGA_A10) += socfpga-a10.o
obj-$(CONFIG_FPGA_MGR_STRATIX10_SOC) += stratix10-soc.o
obj-$(CONFIG_FPGA_MGR_TS73XX) += ts73xx-fpga.o
obj-$(CONFIG_FPGA_MGR_XILINX_SPI) += xilinx-spi.o
obj-$(CONFIG_FPGA_MGR_ZYNQ_FPGA) += zynq-fpga.o
......
......@@ -403,6 +403,7 @@ static int altera_cvp_probe(struct pci_dev *pdev,
struct altera_cvp_conf *conf;
struct fpga_manager *mgr;
u16 cmd, val;
u32 regval;
int ret;
/*
......@@ -416,6 +417,14 @@ static int altera_cvp_probe(struct pci_dev *pdev,
return -ENODEV;
}
pci_read_config_dword(pdev, VSE_CVP_STATUS, &regval);
if (!(regval & VSE_CVP_STATUS_CVP_EN)) {
dev_err(&pdev->dev,
"CVP is disabled for this device: CVP_STATUS Reg 0x%x\n",
regval);
return -ENODEV;
}
conf = devm_kzalloc(&pdev->dev, sizeof(*conf), GFP_KERNEL);
if (!conf)
return -ENOMEM;
......@@ -466,18 +475,11 @@ static int altera_cvp_probe(struct pci_dev *pdev,
if (ret)
goto err_unmap;
ret = driver_create_file(&altera_cvp_driver.driver,
&driver_attr_chkcfg);
if (ret) {
dev_err(&pdev->dev, "Can't create sysfs chkcfg file\n");
fpga_mgr_unregister(mgr);
goto err_unmap;
}
return 0;
err_unmap:
pci_iounmap(pdev, conf->map);
if (conf->map)
pci_iounmap(pdev, conf->map);
pci_release_region(pdev, CVP_BAR);
err_disable:
cmd &= ~PCI_COMMAND_MEMORY;
......@@ -491,16 +493,39 @@ static void altera_cvp_remove(struct pci_dev *pdev)
struct altera_cvp_conf *conf = mgr->priv;
u16 cmd;
driver_remove_file(&altera_cvp_driver.driver, &driver_attr_chkcfg);
fpga_mgr_unregister(mgr);
pci_iounmap(pdev, conf->map);
if (conf->map)
pci_iounmap(pdev, conf->map);
pci_release_region(pdev, CVP_BAR);
pci_read_config_word(pdev, PCI_COMMAND, &cmd);
cmd &= ~PCI_COMMAND_MEMORY;
pci_write_config_word(pdev, PCI_COMMAND, cmd);
}
module_pci_driver(altera_cvp_driver);
static int __init altera_cvp_init(void)
{
int ret;
ret = pci_register_driver(&altera_cvp_driver);
if (ret)
return ret;
ret = driver_create_file(&altera_cvp_driver.driver,
&driver_attr_chkcfg);
if (ret)
pr_warn("Can't create sysfs chkcfg file\n");
return 0;
}
static void __exit altera_cvp_exit(void)
{
driver_remove_file(&altera_cvp_driver.driver, &driver_attr_chkcfg);
pci_unregister_driver(&altera_cvp_driver);
}
module_init(altera_cvp_init);
module_exit(altera_cvp_exit);
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Anatolij Gustschin <agust@denx.de>");
......
......@@ -75,6 +75,12 @@ static struct altera_ps_data a10_data = {
.t_st2ck_us = 10, /* min(t_ST2CK) */
};
/* Array index is enum altera_ps_devtype */
static const struct altera_ps_data *altera_ps_data_map[] = {
&c5_data,
&a10_data,
};
static const struct of_device_id of_ef_match[] = {
{ .compatible = "altr,fpga-passive-serial", .data = &c5_data },
{ .compatible = "altr,fpga-arria10-passive-serial", .data = &a10_data },
......@@ -234,6 +240,22 @@ static const struct fpga_manager_ops altera_ps_ops = {
.write_complete = altera_ps_write_complete,
};
static const struct altera_ps_data *id_to_data(const struct spi_device_id *id)
{
kernel_ulong_t devtype = id->driver_data;
const struct altera_ps_data *data;
/* someone added an altera_ps_devtype without adding it to the map array */
if (devtype >= ARRAY_SIZE(altera_ps_data_map))
return NULL;
data = altera_ps_data_map[devtype];
if (!data || data->devtype != devtype)
return NULL;
return data;
}
static int altera_ps_probe(struct spi_device *spi)
{
struct altera_ps_conf *conf;
......@@ -244,11 +266,17 @@ static int altera_ps_probe(struct spi_device *spi)
if (!conf)
return -ENOMEM;
of_id = of_match_device(of_ef_match, &spi->dev);
if (!of_id)
return -ENODEV;
if (spi->dev.of_node) {
of_id = of_match_device(of_ef_match, &spi->dev);
if (!of_id)
return -ENODEV;
conf->data = of_id->data;
} else {
conf->data = id_to_data(spi_get_device_id(spi));
if (!conf->data)
return -ENODEV;
}
conf->data = of_id->data;
conf->spi = spi;
conf->config = devm_gpiod_get(&spi->dev, "nconfig", GPIOD_OUT_LOW);
if (IS_ERR(conf->config)) {
......@@ -294,7 +322,9 @@ static int altera_ps_remove(struct spi_device *spi)
}
static const struct spi_device_id altera_ps_spi_ids[] = {
{"cyclone-ps-spi", 0},
{ "cyclone-ps-spi", CYCLONE5 },
{ "fpga-passive-serial", CYCLONE5 },
{ "fpga-arria10-passive-serial", ARRIA10 },
{}
};
MODULE_DEVICE_TABLE(spi, altera_ps_spi_ids);
......
......@@ -444,10 +444,8 @@ static void pr_mgmt_uinit(struct platform_device *pdev,
struct dfl_feature *feature)
{
struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
struct dfl_fme *priv;
mutex_lock(&pdata->lock);
priv = dfl_fpga_pdata_get_private(pdata);
dfl_fme_destroy_regions(pdata);
dfl_fme_destroy_bridges(pdata);
......
......@@ -64,7 +64,7 @@ static int fme_region_probe(struct platform_device *pdev)
static int fme_region_remove(struct platform_device *pdev)
{
struct fpga_region *region = dev_get_drvdata(&pdev->dev);
struct fpga_region *region = platform_get_drvdata(pdev);
struct fpga_manager *mgr = region->mgr;
fpga_region_unregister(region);
......
......@@ -421,7 +421,7 @@ static int of_fpga_region_probe(struct platform_device *pdev)
goto eprobe_mgr_put;
of_platform_populate(np, fpga_region_of_match, NULL, &region->dev);
dev_set_drvdata(dev, region);
platform_set_drvdata(pdev, region);
dev_info(dev, "FPGA Region probed\n");
......
......@@ -501,6 +501,10 @@ static int zynq_fpga_ops_write_complete(struct fpga_manager *mgr,
if (err)
return err;
/* Release 'PR' control back to the ICAP */
zynq_fpga_write(priv, CTRL_OFFSET,
zynq_fpga_read(priv, CTRL_OFFSET) & ~CTRL_PCAP_PR_MASK);
err = zynq_fpga_poll_timeout(priv, INT_STS_OFFSET, intr_status,
intr_status & IXR_PCFG_DONE_MASK,
INIT_POLL_DELAY,
......
......@@ -711,7 +711,6 @@ int vmbus_disconnect_ring(struct vmbus_channel *channel)
/* Snapshot the list of subchannels */
spin_lock_irqsave(&channel->lock, flags);
list_splice_init(&channel->sc_list, &list);
channel->num_sc = 0;
spin_unlock_irqrestore(&channel->lock, flags);
list_for_each_entry_safe(cur_channel, tmp, &list, sc_list) {
......
......@@ -405,7 +405,6 @@ void hv_process_channel_removal(struct vmbus_channel *channel)
primary_channel = channel->primary_channel;
spin_lock_irqsave(&primary_channel->lock, flags);
list_del(&channel->sc_list);
primary_channel->num_sc--;
spin_unlock_irqrestore(&primary_channel->lock, flags);
}
......@@ -1302,49 +1301,6 @@ int vmbus_request_offers(void)
return ret;
}
/*
* Retrieve the (sub) channel on which to send an outgoing request.
* When a primary channel has multiple sub-channels, we try to
* distribute the load equally amongst all available channels.
*/
struct vmbus_channel *vmbus_get_outgoing_channel(struct vmbus_channel *primary)
{
struct list_head *cur, *tmp;
int cur_cpu;
struct vmbus_channel *cur_channel;
struct vmbus_channel *outgoing_channel = primary;
int next_channel;
int i = 1;
if (list_empty(&primary->sc_list))
return outgoing_channel;
next_channel = primary->next_oc++;
if (next_channel > (primary->num_sc)) {
primary->next_oc = 0;
return outgoing_channel;
}
cur_cpu = hv_cpu_number_to_vp_number(smp_processor_id());
list_for_each_safe(cur, tmp, &primary->sc_list) {
cur_channel = list_entry(cur, struct vmbus_channel, sc_list);
if (cur_channel->state != CHANNEL_OPENED_STATE)
continue;
if (cur_channel->target_vp == cur_cpu)
return cur_channel;
if (i == next_channel)
return cur_channel;
i++;
}
return outgoing_channel;
}
EXPORT_SYMBOL_GPL(vmbus_get_outgoing_channel);
static void invoke_sc_cb(struct vmbus_channel *primary_channel)
{
struct list_head *cur, *tmp;
......
......@@ -33,9 +33,7 @@
#include "hyperv_vmbus.h"
/* The one and only */
struct hv_context hv_context = {
.synic_initialized = false,
};
struct hv_context hv_context;
/*
* If false, we're using the old mechanism for stimer0 interrupts
......@@ -326,8 +324,6 @@ int hv_synic_init(unsigned int cpu)
hv_set_synic_state(sctrl.as_uint64);
hv_context.synic_initialized = true;
/*
* Register the per-cpu clockevent source.
*/
......@@ -373,7 +369,8 @@ int hv_synic_cleanup(unsigned int cpu)
bool channel_found = false;
unsigned long flags;
if (!hv_context.synic_initialized)
hv_get_synic_state(sctrl.as_uint64);
if (sctrl.enable != 1)
return -EFAULT;
/*
......@@ -435,7 +432,6 @@ int hv_synic_cleanup(unsigned int cpu)
hv_set_siefp(siefp.as_uint64);
/* Disable the global synic bit */
hv_get_synic_state(sctrl.as_uint64);
sctrl.enable = 0;
hv_set_synic_state(sctrl.as_uint64);
......
......@@ -437,7 +437,7 @@ kvp_send_key(struct work_struct *dummy)
val32 = in_msg->body.kvp_set.data.value_u32;
message->body.kvp_set.data.value_size =
sprintf(message->body.kvp_set.data.value,
"%d", val32) + 1;
"%u", val32) + 1;
break;
case REG_U64:
......
......@@ -483,7 +483,7 @@ MODULE_DEVICE_TABLE(vmbus, id_table);
/* The one and only one */
static struct hv_driver util_drv = {
.name = "hv_util",
.name = "hv_utils",
.id_table = id_table,
.probe = util_probe,
.remove = util_remove,
......
......@@ -162,8 +162,6 @@ struct hv_context {
void *tsc_page;
bool synic_initialized;
struct hv_per_cpu_context __percpu *cpu_context;
/*
......
......@@ -136,6 +136,11 @@ static void __etb_enable_hw(struct etb_drvdata *drvdata)
static int etb_enable_hw(struct etb_drvdata *drvdata)
{
int rc = coresight_claim_device(drvdata->base);
if (rc)
return rc;
__etb_enable_hw(drvdata);
return 0;
}
......@@ -223,7 +228,7 @@ static int etb_enable(struct coresight_device *csdev, u32 mode, void *data)
return 0;
}
static void etb_disable_hw(struct etb_drvdata *drvdata)
static void __etb_disable_hw(struct etb_drvdata *drvdata)
{
u32 ffcr;
......@@ -313,6 +318,13 @@ static void etb_dump_hw(struct etb_drvdata *drvdata)
CS_LOCK(drvdata->base);
}
static void etb_disable_hw(struct etb_drvdata *drvdata)
{
__etb_disable_hw(drvdata);
etb_dump_hw(drvdata);
coresight_disclaim_device(drvdata->base);
}
static void etb_disable(struct coresight_device *csdev)
{
struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
......@@ -323,7 +335,6 @@ static void etb_disable(struct coresight_device *csdev)
/* Disable the ETB only if it needs to */
if (drvdata->mode != CS_MODE_DISABLED) {
etb_disable_hw(drvdata);
etb_dump_hw(drvdata);
drvdata->mode = CS_MODE_DISABLED;
}
spin_unlock_irqrestore(&drvdata->spinlock, flags);
......@@ -402,7 +413,7 @@ static unsigned long etb_update_buffer(struct coresight_device *csdev,
capacity = drvdata->buffer_depth * ETB_FRAME_SIZE_WORDS;
etb_disable_hw(drvdata);
__etb_disable_hw(drvdata);
CS_UNLOCK(drvdata->base);
/* unit is in words, not bytes */
......@@ -510,7 +521,7 @@ static unsigned long etb_update_buffer(struct coresight_device *csdev,
handle->head = (cur * PAGE_SIZE) + offset;
to_read = buf->nr_pages << PAGE_SHIFT;
}
etb_enable_hw(drvdata);
__etb_enable_hw(drvdata);
CS_LOCK(drvdata->base);
return to_read;
......@@ -534,9 +545,9 @@ static void etb_dump(struct etb_drvdata *drvdata)
spin_lock_irqsave(&drvdata->spinlock, flags);
if (drvdata->mode == CS_MODE_SYSFS) {
etb_disable_hw(drvdata);
__etb_disable_hw(drvdata);
etb_dump_hw(drvdata);
etb_enable_hw(drvdata);
__etb_enable_hw(drvdata);
}
spin_unlock_irqrestore(&drvdata->spinlock, flags);
......
......@@ -363,15 +363,16 @@ static int etm_enable_hw(struct etm_drvdata *drvdata)
CS_UNLOCK(drvdata->base);
rc = coresight_claim_device_unlocked(drvdata->base);
if (rc)
goto done;
/* Turn engine on */
etm_clr_pwrdwn(drvdata);
/* Apply power to trace registers */
etm_set_pwrup(drvdata);
/* Make sure all registers are accessible */
etm_os_unlock(drvdata);
rc = coresight_claim_device_unlocked(drvdata->base);
if (rc)
goto done;
etm_set_prog(drvdata);
......@@ -422,8 +423,6 @@ static int etm_enable_hw(struct etm_drvdata *drvdata)
etm_clr_prog(drvdata);
done:
if (rc)
etm_set_pwrdwn(drvdata);
CS_LOCK(drvdata->base);
dev_dbg(drvdata->dev, "cpu: %d enable smp call done: %d\n",
......@@ -577,9 +576,9 @@ static void etm_disable_hw(void *info)
for (i = 0; i < drvdata->nr_cntr; i++)
config->cntr_val[i] = etm_readl(drvdata, ETMCNTVRn(i));
etm_set_pwrdwn(drvdata);
coresight_disclaim_device_unlocked(drvdata->base);
etm_set_pwrdwn(drvdata);
CS_LOCK(drvdata->base);
dev_dbg(drvdata->dev, "cpu: %d disable smp call done\n", drvdata->cpu);
......@@ -602,6 +601,7 @@ static void etm_disable_perf(struct coresight_device *csdev)
* power down the tracer.
*/
etm_set_pwrdwn(drvdata);
coresight_disclaim_device_unlocked(drvdata->base);
CS_LOCK(drvdata->base);
}
......
......@@ -856,7 +856,7 @@ static int stm_probe(struct amba_device *adev, const struct amba_id *id)
if (stm_register_device(dev, &drvdata->stm, THIS_MODULE)) {
dev_info(dev,
"stm_register_device failed, probing deffered\n");
"stm_register_device failed, probing deferred\n");
return -EPROBE_DEFER;
}
......
......@@ -86,8 +86,8 @@ static void __tmc_etb_disable_hw(struct tmc_drvdata *drvdata)
static void tmc_etb_disable_hw(struct tmc_drvdata *drvdata)
{
coresight_disclaim_device(drvdata);
__tmc_etb_disable_hw(drvdata);
coresight_disclaim_device(drvdata->base);
}
static void __tmc_etf_enable_hw(struct tmc_drvdata *drvdata)
......
......@@ -1423,7 +1423,8 @@ nr_pages_store(struct device *dev, struct device_attribute *attr,
if (!end)
break;
len -= end - p;
/* consume the number and the following comma, hence +1 */
len -= end - p + 1;
p = end + 1;
} while (len);
......
......@@ -440,10 +440,8 @@ stp_policy_make(struct config_group *group, const char *name)
stm->policy = kzalloc(sizeof(*stm->policy), GFP_KERNEL);
if (!stm->policy) {
mutex_unlock(&stm->policy_mutex);
stm_put_protocol(pdrv);
stm_put_device(stm);
return ERR_PTR(-ENOMEM);
ret = ERR_PTR(-ENOMEM);
goto unlock_policy;
}
config_group_init_type_name(&stm->policy->group, name,
......@@ -458,7 +456,11 @@ stp_policy_make(struct config_group *group, const char *name)
mutex_unlock(&stm->policy_mutex);
if (IS_ERR(ret)) {
stm_put_protocol(stm->pdrv);
/*
* pdrv and stm->pdrv at this point can be quite different,
* and only one of them needs to be 'put'
*/
stm_put_protocol(pdrv);
stm_put_device(stm);
}
......
......@@ -2042,3 +2042,28 @@ int dmar_device_remove(acpi_handle handle)
{
return dmar_device_hotplug(handle, false);
}
/*
* dmar_platform_optin - Is %DMA_CTRL_PLATFORM_OPT_IN_FLAG set in DMAR table
*
* Returns true if the platform has %DMA_CTRL_PLATFORM_OPT_IN_FLAG set in
* the ACPI DMAR table. This means that the platform boot firmware has made
* sure no device can issue DMA outside of RMRR regions.
*/
bool dmar_platform_optin(void)
{
struct acpi_table_dmar *dmar;
acpi_status status;
bool ret;
status = acpi_get_table(ACPI_SIG_DMAR, 0,
(struct acpi_table_header **)&dmar);
if (ACPI_FAILURE(status))
return false;
ret = !!(dmar->flags & DMAR_PLATFORM_OPT_IN);
acpi_put_table((struct acpi_table_header *)dmar);
return ret;
}
EXPORT_SYMBOL_GPL(dmar_platform_optin);
......@@ -184,6 +184,7 @@ static int rwbf_quirk;
*/
static int force_on = 0;
int intel_iommu_tboot_noforce;
static int no_platform_optin;
#define ROOT_ENTRY_NR (VTD_PAGE_SIZE/sizeof(struct root_entry))
......@@ -503,6 +504,7 @@ static int __init intel_iommu_setup(char *str)
pr_info("IOMMU enabled\n");
} else if (!strncmp(str, "off", 3)) {
dmar_disabled = 1;
no_platform_optin = 1;
pr_info("IOMMU disabled\n");
} else if (!strncmp(str, "igfx_off", 8)) {
dmar_map_gfx = 0;
......@@ -1471,7 +1473,8 @@ static void iommu_enable_dev_iotlb(struct device_domain_info *info)
if (info->pri_supported && !pci_reset_pri(pdev) && !pci_enable_pri(pdev, 32))
info->pri_enabled = 1;
#endif
if (info->ats_supported && !pci_enable_ats(pdev, VTD_PAGE_SHIFT)) {
if (!pdev->untrusted && info->ats_supported &&
!pci_enable_ats(pdev, VTD_PAGE_SHIFT)) {
info->ats_enabled = 1;
domain_update_iotlb(info->domain);
info->ats_qdep = pci_ats_queue_depth(pdev);
......@@ -2895,6 +2898,13 @@ static int iommu_should_identity_map(struct device *dev, int startup)
if (device_is_rmrr_locked(dev))
return 0;
/*
* Prevent any device marked as untrusted from getting
* placed into the statically identity mapping domain.
*/
if (pdev->untrusted)
return 0;
if ((iommu_identity_mapping & IDENTMAP_AZALIA) && IS_AZALIA(pdev))
return 1;
......@@ -4722,14 +4732,54 @@ const struct attribute_group *intel_iommu_groups[] = {
NULL,
};
static int __init platform_optin_force_iommu(void)
{
struct pci_dev *pdev = NULL;
bool has_untrusted_dev = false;
if (!dmar_platform_optin() || no_platform_optin)
return 0;
for_each_pci_dev(pdev) {
if (pdev->untrusted) {
has_untrusted_dev = true;
break;
}
}
if (!has_untrusted_dev)
return 0;
if (no_iommu || dmar_disabled)
pr_info("Intel-IOMMU force enabled due to platform opt in\n");
/*
* If Intel-IOMMU is disabled by default, we will apply identity
* map for all devices except those marked as being untrusted.
*/
if (dmar_disabled)
iommu_identity_mapping |= IDENTMAP_ALL;
dmar_disabled = 0;
#if defined(CONFIG_X86) && defined(CONFIG_SWIOTLB)
swiotlb = 0;
#endif
no_iommu = 0;
return 1;
}
int __init intel_iommu_init(void)
{
int ret = -ENODEV;
struct dmar_drhd_unit *drhd;
struct intel_iommu *iommu;
/* VT-d is required for a TXT/tboot launch, so enforce that */
force_on = tboot_force_iommu();
/*
* Intel IOMMU is required for a TXT/tboot launch or platform
* opt in, so enforce that.
*/
force_on = tboot_force_iommu() || platform_optin_force_iommu();
if (iommu_init_mempool()) {
if (force_on)
......
......@@ -513,6 +513,14 @@ config MISC_RTSX
tristate
default MISC_RTSX_PCI || MISC_RTSX_USB
config PVPANIC
tristate "pvpanic device support"
depends on HAS_IOMEM && (ACPI || OF)
help
This driver provides support for the pvpanic device. pvpanic is
a paravirtualized device provided by QEMU; it lets a virtual machine
(guest) communicate panic events to the host.
source "drivers/misc/c2port/Kconfig"
source "drivers/misc/eeprom/Kconfig"
source "drivers/misc/cb710/Kconfig"
......
......@@ -57,4 +57,5 @@ obj-$(CONFIG_ASPEED_LPC_CTRL) += aspeed-lpc-ctrl.o
obj-$(CONFIG_ASPEED_LPC_SNOOP) += aspeed-lpc-snoop.o
obj-$(CONFIG_PCI_ENDPOINT_TEST) += pci_endpoint_test.o
obj-$(CONFIG_OCXL) += ocxl/
obj-y += cardreader/
obj-y += cardreader/
obj-$(CONFIG_PVPANIC) += pvpanic.o
......@@ -2176,8 +2176,7 @@ static int altera_get_note(u8 *p, s32 program_size,
key_ptr = &p[note_strings +
get_unaligned_be32(
&p[note_table + (8 * i)])];
if ((strncasecmp(key, key_ptr, strlen(key_ptr)) == 0) &&
(key != NULL)) {
if (key && !strncasecmp(key, key_ptr, strlen(key_ptr))) {
status = 0;
value_ptr = &p[note_strings +
......
......@@ -33,19 +33,6 @@
#include "card_base.h"
#include "card_ddcb.h"
#define GENWQE_DEBUGFS_RO(_name, _showfn) \
static int genwqe_debugfs_##_name##_open(struct inode *inode, \
struct file *file) \
{ \
return single_open(file, _showfn, inode->i_private); \
} \
static const struct file_operations genwqe_##_name##_fops = { \
.open = genwqe_debugfs_##_name##_open, \
.read = seq_read, \
.llseek = seq_lseek, \
.release = single_release, \
}
static void dbg_uidn_show(struct seq_file *s, struct genwqe_reg *regs,
int entries)
{
......@@ -87,26 +74,26 @@ static int curr_dbg_uidn_show(struct seq_file *s, void *unused, int uid)
return 0;
}
static int genwqe_curr_dbg_uid0_show(struct seq_file *s, void *unused)
static int curr_dbg_uid0_show(struct seq_file *s, void *unused)
{
return curr_dbg_uidn_show(s, unused, 0);
}
GENWQE_DEBUGFS_RO(curr_dbg_uid0, genwqe_curr_dbg_uid0_show);
DEFINE_SHOW_ATTRIBUTE(curr_dbg_uid0);
static int genwqe_curr_dbg_uid1_show(struct seq_file *s, void *unused)
static int curr_dbg_uid1_show(struct seq_file *s, void *unused)
{
return curr_dbg_uidn_show(s, unused, 1);
}
GENWQE_DEBUGFS_RO(curr_dbg_uid1, genwqe_curr_dbg_uid1_show);
DEFINE_SHOW_ATTRIBUTE(curr_dbg_uid1);
static int genwqe_curr_dbg_uid2_show(struct seq_file *s, void *unused)
static int curr_dbg_uid2_show(struct seq_file *s, void *unused)
{
return curr_dbg_uidn_show(s, unused, 2);
}
GENWQE_DEBUGFS_RO(curr_dbg_uid2, genwqe_curr_dbg_uid2_show);
DEFINE_SHOW_ATTRIBUTE(curr_dbg_uid2);
static int prev_dbg_uidn_show(struct seq_file *s, void *unused, int uid)
{
......@@ -116,28 +103,28 @@ static int prev_dbg_uidn_show(struct seq_file *s, void *unused, int uid)
return 0;
}
static int genwqe_prev_dbg_uid0_show(struct seq_file *s, void *unused)
static int prev_dbg_uid0_show(struct seq_file *s, void *unused)
{
return prev_dbg_uidn_show(s, unused, 0);
}
GENWQE_DEBUGFS_RO(prev_dbg_uid0, genwqe_prev_dbg_uid0_show);
DEFINE_SHOW_ATTRIBUTE(prev_dbg_uid0);
static int genwqe_prev_dbg_uid1_show(struct seq_file *s, void *unused)
static int prev_dbg_uid1_show(struct seq_file *s, void *unused)
{
return prev_dbg_uidn_show(s, unused, 1);
}
GENWQE_DEBUGFS_RO(prev_dbg_uid1, genwqe_prev_dbg_uid1_show);
DEFINE_SHOW_ATTRIBUTE(prev_dbg_uid1);
static int genwqe_prev_dbg_uid2_show(struct seq_file *s, void *unused)
static int prev_dbg_uid2_show(struct seq_file *s, void *unused)
{
return prev_dbg_uidn_show(s, unused, 2);
}
GENWQE_DEBUGFS_RO(prev_dbg_uid2, genwqe_prev_dbg_uid2_show);
DEFINE_SHOW_ATTRIBUTE(prev_dbg_uid2);
static int genwqe_curr_regs_show(struct seq_file *s, void *unused)
static int curr_regs_show(struct seq_file *s, void *unused)
{
struct genwqe_dev *cd = s->private;
unsigned int i;
......@@ -164,9 +151,9 @@ static int genwqe_curr_regs_show(struct seq_file *s, void *unused)
return 0;
}
GENWQE_DEBUGFS_RO(curr_regs, genwqe_curr_regs_show);
DEFINE_SHOW_ATTRIBUTE(curr_regs);
static int genwqe_prev_regs_show(struct seq_file *s, void *unused)
static int prev_regs_show(struct seq_file *s, void *unused)
{
struct genwqe_dev *cd = s->private;
unsigned int i;
......@@ -188,9 +175,9 @@ static int genwqe_prev_regs_show(struct seq_file *s, void *unused)
return 0;
}
GENWQE_DEBUGFS_RO(prev_regs, genwqe_prev_regs_show);
DEFINE_SHOW_ATTRIBUTE(prev_regs);
static int genwqe_jtimer_show(struct seq_file *s, void *unused)
static int jtimer_show(struct seq_file *s, void *unused)
{
struct genwqe_dev *cd = s->private;
unsigned int vf_num;
......@@ -209,9 +196,9 @@ static int genwqe_jtimer_show(struct seq_file *s, void *unused)
return 0;
}
GENWQE_DEBUGFS_RO(jtimer, genwqe_jtimer_show);
DEFINE_SHOW_ATTRIBUTE(jtimer);
static int genwqe_queue_working_time_show(struct seq_file *s, void *unused)
static int queue_working_time_show(struct seq_file *s, void *unused)
{
struct genwqe_dev *cd = s->private;
unsigned int vf_num;
......@@ -227,9 +214,9 @@ static int genwqe_queue_working_time_show(struct seq_file *s, void *unused)
return 0;
}
GENWQE_DEBUGFS_RO(queue_working_time, genwqe_queue_working_time_show);
DEFINE_SHOW_ATTRIBUTE(queue_working_time);
static int genwqe_ddcb_info_show(struct seq_file *s, void *unused)
static int ddcb_info_show(struct seq_file *s, void *unused)
{
struct genwqe_dev *cd = s->private;
unsigned int i;
......@@ -300,9 +287,9 @@ static int genwqe_ddcb_info_show(struct seq_file *s, void *unused)
return 0;
}
GENWQE_DEBUGFS_RO(ddcb_info, genwqe_ddcb_info_show);
DEFINE_SHOW_ATTRIBUTE(ddcb_info);
static int genwqe_info_show(struct seq_file *s, void *unused)
static int info_show(struct seq_file *s, void *unused)
{
struct genwqe_dev *cd = s->private;
u64 app_id, slu_id, bitstream = -1;
......@@ -335,7 +322,7 @@ static int genwqe_info_show(struct seq_file *s, void *unused)
return 0;
}
GENWQE_DEBUGFS_RO(info, genwqe_info_show);
DEFINE_SHOW_ATTRIBUTE(info);
int genwqe_init_debugfs(struct genwqe_dev *cd)
{
......@@ -356,14 +343,14 @@ int genwqe_init_debugfs(struct genwqe_dev *cd)
/* non privileged interfaces are done here */
file = debugfs_create_file("ddcb_info", S_IRUGO, root, cd,
&genwqe_ddcb_info_fops);
&ddcb_info_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_file("info", S_IRUGO, root, cd,
&genwqe_info_fops);
&info_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
......@@ -396,56 +383,56 @@ int genwqe_init_debugfs(struct genwqe_dev *cd)
}
file = debugfs_create_file("curr_regs", S_IRUGO, root, cd,
&genwqe_curr_regs_fops);
&curr_regs_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_file("curr_dbg_uid0", S_IRUGO, root, cd,
&genwqe_curr_dbg_uid0_fops);
&curr_dbg_uid0_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_file("curr_dbg_uid1", S_IRUGO, root, cd,
&genwqe_curr_dbg_uid1_fops);
&curr_dbg_uid1_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_file("curr_dbg_uid2", S_IRUGO, root, cd,
&genwqe_curr_dbg_uid2_fops);
&curr_dbg_uid2_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_file("prev_regs", S_IRUGO, root, cd,
&genwqe_prev_regs_fops);
&prev_regs_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_file("prev_dbg_uid0", S_IRUGO, root, cd,
&genwqe_prev_dbg_uid0_fops);
&prev_dbg_uid0_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_file("prev_dbg_uid1", S_IRUGO, root, cd,
&genwqe_prev_dbg_uid1_fops);
&prev_dbg_uid1_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_file("prev_dbg_uid2", S_IRUGO, root, cd,
&genwqe_prev_dbg_uid2_fops);
&prev_dbg_uid2_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
......@@ -463,14 +450,14 @@ int genwqe_init_debugfs(struct genwqe_dev *cd)
}
file = debugfs_create_file("jobtimer", S_IRUGO, root, cd,
&genwqe_jtimer_fops);
&jtimer_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
}
file = debugfs_create_file("queue_working_time", S_IRUGO, root, cd,
&genwqe_queue_working_time_fops);
&queue_working_time_fops);
if (!file) {
ret = -ENOMEM;
goto err1;
......
......@@ -215,7 +215,7 @@ u32 genwqe_crc32(u8 *buff, size_t len, u32 init)
void *__genwqe_alloc_consistent(struct genwqe_dev *cd, size_t size,
dma_addr_t *dma_handle)
{
if (get_order(size) > MAX_ORDER)
if (get_order(size) >= MAX_ORDER)
return NULL;
return dma_zalloc_coherent(&cd->pci_dev->dev, size, dma_handle,
......
......@@ -9,6 +9,7 @@ mei-objs += hbm.o
mei-objs += interrupt.o
mei-objs += client.o
mei-objs += main.o
mei-objs += dma-ring.o
mei-objs += bus.o
mei-objs += bus-fixup.o
mei-$(CONFIG_DEBUG_FS) += debugfs.o
......
......@@ -65,6 +65,7 @@ const char *mei_hbm_state_str(enum mei_hbm_state state)
MEI_HBM_STATE(IDLE);
MEI_HBM_STATE(STARTING);
MEI_HBM_STATE(STARTED);
MEI_HBM_STATE(DR_SETUP);
MEI_HBM_STATE(ENUM_CLIENTS);
MEI_HBM_STATE(CLIENT_PROPERTIES);
MEI_HBM_STATE(STOPPED);
......@@ -295,6 +296,48 @@ int mei_hbm_start_req(struct mei_device *dev)
return 0;
}
/**
* mei_hbm_dma_setup_req() - setup DMA request
* @dev: the device structure
*
* Return: 0 on success and < 0 on failure
*/
static int mei_hbm_dma_setup_req(struct mei_device *dev)
{
struct mei_msg_hdr mei_hdr;
struct hbm_dma_setup_request req;
const size_t len = sizeof(struct hbm_dma_setup_request);
unsigned int i;
int ret;
mei_hbm_hdr(&mei_hdr, len);
memset(&req, 0, len);
req.hbm_cmd = MEI_HBM_DMA_SETUP_REQ_CMD;
for (i = 0; i < DMA_DSCR_NUM; i++) {
phys_addr_t paddr;
paddr = dev->dr_dscr[i].daddr;
req.dma_dscr[i].addr_hi = upper_32_bits(paddr);
req.dma_dscr[i].addr_lo = lower_32_bits(paddr);
req.dma_dscr[i].size = dev->dr_dscr[i].size;
}
mei_dma_ring_reset(dev);
ret = mei_hbm_write_message(dev, &mei_hdr, &req);
if (ret) {
dev_err(dev->dev, "dma setup request write failed: ret = %d.\n",
ret);
return ret;
}
dev->hbm_state = MEI_HBM_DR_SETUP;
dev->init_clients_timer = MEI_CLIENTS_INIT_TIMEOUT;
mei_schedule_stall_timer(dev);
return 0;
}
/**
* mei_hbm_enum_clients_req - sends enumeration client request message.
*
......@@ -1044,6 +1087,7 @@ int mei_hbm_dispatch(struct mei_device *dev, struct mei_msg_hdr *hdr)
struct hbm_host_version_response *version_res;
struct hbm_props_response *props_res;
struct hbm_host_enum_response *enum_res;
struct hbm_dma_setup_response *dma_setup_res;
struct hbm_add_client_request *add_cl_req;
int ret;
......@@ -1108,14 +1152,52 @@ int mei_hbm_dispatch(struct mei_device *dev, struct mei_msg_hdr *hdr)
return -EPROTO;
}
if (mei_hbm_enum_clients_req(dev)) {
dev_err(dev->dev, "hbm: start: failed to send enumeration request\n");
return -EIO;
if (dev->hbm_f_dr_supported) {
if (mei_dmam_ring_alloc(dev))
dev_info(dev->dev, "running w/o dma ring\n");
if (mei_dma_ring_is_allocated(dev)) {
if (mei_hbm_dma_setup_req(dev))
return -EIO;
wake_up(&dev->wait_hbm_start);
break;
}
}
dev->hbm_f_dr_supported = 0;
mei_dmam_ring_free(dev);
if (mei_hbm_enum_clients_req(dev))
return -EIO;
wake_up(&dev->wait_hbm_start);
break;
case MEI_HBM_DMA_SETUP_RES_CMD:
dev_dbg(dev->dev, "hbm: dma setup response: message received.\n");
dev->init_clients_timer = 0;
if (dev->hbm_state != MEI_HBM_DR_SETUP) {
dev_err(dev->dev, "hbm: dma setup response: state mismatch, [%d, %d]\n",
dev->dev_state, dev->hbm_state);
return -EPROTO;
}
dma_setup_res = (struct hbm_dma_setup_response *)mei_msg;
if (dma_setup_res->status) {
dev_info(dev->dev, "hbm: dma setup response: failure = %d %s\n",
dma_setup_res->status,
mei_hbm_status_str(dma_setup_res->status));
dev->hbm_f_dr_supported = 0;
mei_dmam_ring_free(dev);
}
if (mei_hbm_enum_clients_req(dev))
return -EIO;
break;
case CLIENT_CONNECT_RES_CMD:
dev_dbg(dev->dev, "hbm: client connect response: message received.\n");
mei_hbm_cl_res(dev, cl_cmd, MEI_FOP_CONNECT);
......@@ -1271,8 +1353,8 @@ int mei_hbm_dispatch(struct mei_device *dev, struct mei_msg_hdr *hdr)
break;
default:
BUG();
break;
WARN(1, "hbm: wrong command %d\n", mei_msg->hbm_cmd);
return -EPROTO;
}
return 0;
......
......@@ -26,6 +26,7 @@ struct mei_cl;
*
* @MEI_HBM_IDLE : protocol not started
* @MEI_HBM_STARTING : start request message was sent
* @MEI_HBM_DR_SETUP : dma ring setup request message was sent
* @MEI_HBM_ENUM_CLIENTS : enumeration request was sent
* @MEI_HBM_CLIENT_PROPERTIES : acquiring clients properties
* @MEI_HBM_STARTED : enumeration was completed
......@@ -34,6 +35,7 @@ struct mei_cl;
enum mei_hbm_state {
MEI_HBM_IDLE = 0,
MEI_HBM_STARTING,
MEI_HBM_DR_SETUP,
MEI_HBM_ENUM_CLIENTS,
MEI_HBM_CLIENT_PROPERTIES,
MEI_HBM_STARTED,
......
......@@ -1471,15 +1471,21 @@ struct mei_device *mei_me_dev_init(struct pci_dev *pdev,
{
struct mei_device *dev;
struct mei_me_hw *hw;
int i;
dev = devm_kzalloc(&pdev->dev, sizeof(struct mei_device) +
sizeof(struct mei_me_hw), GFP_KERNEL);
if (!dev)
return NULL;
hw = to_me_hw(dev);
for (i = 0; i < DMA_DSCR_NUM; i++)
dev->dr_dscr[i].size = cfg->dma_size[i];
mei_device_init(dev, &pdev->dev, &mei_me_hw_ops);
hw->cfg = cfg;
return dev;
}
......@@ -151,7 +151,7 @@ int mei_reset(struct mei_device *dev)
mei_hbm_reset(dev);
dev->rd_msg_hdr = 0;
memset(dev->rd_msg_hdr, 0, sizeof(dev->rd_msg_hdr));
if (ret) {
dev_err(dev->dev, "hw_reset failed ret = %d\n", ret);
......
......@@ -98,9 +98,9 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
{MEI_PCI_DEVICE(MEI_DEV_ID_KBP, MEI_ME_PCH8_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_KBP_2, MEI_ME_PCH8_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_CNP_LP, MEI_ME_PCH8_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_CNP_LP, MEI_ME_PCH12_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_CNP_LP_4, MEI_ME_PCH8_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_CNP_H, MEI_ME_PCH8_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_CNP_H, MEI_ME_PCH12_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_CNP_H_4, MEI_ME_PCH8_CFG)},
/* required last entry */
......
......@@ -61,7 +61,7 @@ static int vexpress_syscfg_exec(struct vexpress_syscfg_func *func,
int tries;
long timeout;
if (WARN_ON(index > func->num_templates))
if (WARN_ON(index >= func->num_templates))
return -EINVAL;
command = readl(syscfg->base + SYS_CFGCTRL);
......