Commit 49ffdb4c authored by Linus Torvalds

Merge tag 'char-misc-5.3-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver fixes from Greg KH:
 "Here are some small char and misc driver fixes for reported issues for
  5.3-rc7

  Also included in here is the documentation for how we are handling
  hardware issues under embargo that everyone has finally agreed on, as
  well as a MAINTAINERS update for the suckers who agreed to handle the
  LICENSES/ files.

  All of these have been in linux-next last week with no reported
  issues"

* tag 'char-misc-5.3-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc:
  fsi: scom: Don't abort operations for minor errors
  vmw_balloon: Fix offline page marking with compaction
  VMCI: Release resource if the work is already queued
  Documentation/process: Embargoed hardware security issues
  lkdtm/bugs: fix build error in lkdtm_EXHAUST_STACK
  mei: me: add Tiger Lake point LP device ID
  intel_th: pci: Add Tiger Lake support
  intel_th: pci: Add support for another Lewisburg PCH
  stm class: Fix a double free of stm_source_device
  MAINTAINERS: add entry for LICENSES and SPDX stuff
  fpga: altera-ps-spi: Fix getting of optional confd gpio
parents 2c248f92 8919dfcb
Embargoed hardware issues
=========================
Scope
-----
Hardware issues which result in security problems are a different category
of security bugs than pure software bugs which only affect the Linux
kernel.
Hardware issues like Meltdown, Spectre, L1TF etc. must be treated
differently because they usually affect all Operating Systems ("OS") and
therefore need coordination across different OS vendors, distributions,
hardware vendors and other parties. For some of the issues, software
mitigations can depend on microcode or firmware updates, which need further
coordination.
.. _Contact:
Contact
-------
The Linux kernel hardware security team is separate from the regular Linux
kernel security team.
The team only handles the coordination of embargoed hardware security
issues. Reports of pure software security bugs in the Linux kernel are not
handled by this team and the reporter will be guided to contact the regular
Linux kernel security team (:ref:`Documentation/admin-guide/
<securitybugs>`) instead.
The team can be contacted by email at <hardware-security@kernel.org>. This
is a private list of security officers who will help you to coordinate an
issue according to our documented process.
The list is encrypted and email to the list can be sent PGP or S/MIME
encrypted; it must be signed with the reporter's PGP key or S/MIME
certificate. The list's PGP key and S/MIME certificate are available from
https://www.kernel.org/....
While hardware security issues are often handled by the affected hardware
vendor, we welcome contact from researchers or individuals who have
identified a potential hardware flaw.
Hardware security officers
^^^^^^^^^^^^^^^^^^^^^^^^^^
The current team of hardware security officers:
- Linus Torvalds (Linux Foundation Fellow)
- Greg Kroah-Hartman (Linux Foundation Fellow)
- Thomas Gleixner (Linux Foundation Fellow)
Operation of mailing-lists
^^^^^^^^^^^^^^^^^^^^^^^^^^
The encrypted mailing-lists which are used in our process are hosted on
Linux Foundation's IT infrastructure. By providing this service Linux
Foundation's director of IT Infrastructure security technically has the
ability to access the embargoed information, but is obliged to
confidentiality by his employment contract. Linux Foundation's director of
IT Infrastructure security is also responsible for the kernel.org
infrastructure.
The Linux Foundation's current director of IT Infrastructure security is
Konstantin Ryabitsev.
Non-disclosure agreements
-------------------------
The Linux kernel hardware security team is not a formal body and therefore
unable to enter into any non-disclosure agreements. The kernel community
is aware of the sensitive nature of such issues and offers a Memorandum of
Understanding instead.
Memorandum of Understanding
---------------------------
The Linux kernel community has a deep understanding of the requirement to
keep hardware security issues under embargo for coordination between
different OS vendors, distributors, hardware vendors and other parties.
The Linux kernel community has successfully handled hardware security
issues in the past and has the necessary mechanisms in place to allow
community compliant development under embargo restrictions.
The Linux kernel community has a dedicated hardware security team for
initial contact, which oversees the process of handling such issues under
embargo rules.
The hardware security team identifies the developers (domain experts) who
will form the initial response team for a particular issue. The initial
response team can bring in further developers (domain experts) to address
the issue in the best technical way.
All involved developers pledge to adhere to the embargo rules and to keep
the received information confidential. Violation of the pledge will lead to
immediate exclusion from the current issue and removal from all related
mailing-lists. In addition, the hardware security team will also exclude
the offender from future issues. The impact of this consequence is a highly
effective deterrent in our community. In case a violation happens the
hardware security team will inform the involved parties immediately. If you
or anyone becomes aware of a potential violation, please report it
immediately to the Hardware security officers.
Process
^^^^^^^
Due to the globally distributed nature of Linux kernel development,
face-to-face meetings to address hardware security issues are almost
impossible. Phone conferences are hard to coordinate due to time zones and
other factors and should only be used when absolutely necessary. Encrypted
email has been proven to be the most effective and secure communication
method for these types of issues.
Start of Disclosure
"""""""""""""""""""
Disclosure starts by contacting the Linux kernel hardware security team by
email. This initial contact should contain a description of the problem and
a list of any known affected hardware. If your organization builds or
distributes the affected hardware, we encourage you to also consider what
other hardware could be affected.
The hardware security team will provide an incident-specific encrypted
mailing-list which will be used for initial discussion with the reporter,
further disclosure and coordination.
The hardware security team will provide the disclosing party a list of
developers (domain experts) who should be informed initially about the
issue after confirming with the developers that they will adhere to this
Memorandum of Understanding and the documented process. These developers
form the initial response team and will be responsible for handling the
issue after initial contact. The hardware security team is supporting the
response team, but is not necessarily involved in the mitigation
development process.
While individual developers might be covered by a non-disclosure agreement
via their employer, they cannot enter individual non-disclosure agreements
in their role as Linux kernel developers. They will, however, agree to
adhere to this documented process and the Memorandum of Understanding.
Disclosure
""""""""""
The disclosing party provides detailed information to the initial response
team via the specific encrypted mailing-list.
From our experience the technical documentation of these issues is usually
a sufficient starting point and further technical clarification is best
done via email.
Mitigation development
""""""""""""""""""""""
The initial response team sets up an encrypted mailing-list or repurposes
an existing one if appropriate. The disclosing party should provide a list
of contacts for all other parties who have already been, or should be
informed about the issue. The response team contacts these parties so they
can name experts who should be subscribed to the mailing-list.
Using a mailing-list is close to the normal Linux development process and
has been successfully used in developing mitigations for various hardware
security issues in the past.
The mailing-list operates in the same way as normal Linux development.
Patches are posted, discussed and reviewed and if agreed on applied to a
non-public git repository which is only accessible to the participating
developers via a secure connection. The repository contains the main
development branch against the mainline kernel and backport branches for
stable kernel versions as necessary.
The initial response team will identify further experts from the Linux
kernel developer community as needed and inform the disclosing party about
their participation. Bringing in experts can happen at any time of the
development process and often needs to be handled in a timely manner.
Coordinated release
"""""""""""""""""""
The involved parties will negotiate the date and time where the embargo
ends. At that point the prepared mitigations are integrated into the
relevant kernel trees and published.
While we understand that hardware security issues need coordinated embargo
time, the embargo time should be constrained to the minimum time which is
required for all involved parties to develop, test and prepare the
mitigations. Extending the embargo artificially to meet conference talk
dates or for other non-technical reasons creates more work and burden for
the involved developers and response teams, as the patches need to be kept
up to date in order to follow ongoing upstream kernel development, which
might create conflicting changes.
CVE assignment
""""""""""""""
Neither the hardware security team nor the initial response team assign
CVEs, nor are CVEs required for the development process. If CVEs are
provided by the disclosing party they can be used for documentation
purposes.
Process ambassadors
-------------------
For assistance with this process we have established ambassadors in various
organizations, who can answer questions about or provide guidance on the
reporting process and further handling. Ambassadors are not involved in the
disclosure of a particular issue, unless requested by a response team or by
an involved disclosed party. The current ambassadors list:
============= ========================================================
ARM
AMD
IBM
Intel
Qualcomm
Microsoft
VMware
XEN
Canonical     Tyler Hicks <tyhicks@canonical.com>
Debian        Ben Hutchings <ben@decadent.org.uk>
Oracle        Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Red Hat       Josh Poimboeuf <jpoimboe@redhat.com>
SUSE          Jiri Kosina <jkosina@suse.cz>
Amazon
Google
============= ========================================================
If you want your organization to be added to the ambassadors list, please
contact the hardware security team. The nominated ambassador has to
understand and support our process fully and is ideally well connected in
the Linux kernel community.
Encrypted mailing-lists
-----------------------
We use encrypted mailing-lists for communication. The operating principle
of these lists is that email sent to the list is encrypted either with the
list's PGP key or with the list's S/MIME certificate. The mailing-list
software decrypts the email and re-encrypts it individually for each
subscriber with the subscriber's PGP key or S/MIME certificate. Details
about the mailing-list software and the setup which is used to ensure the
security of the lists and protection of the data can be found here:
https://www.kernel.org/....
List keys
^^^^^^^^^
For initial contact see :ref:`Contact`. For incident specific mailing-lists
the key and S/MIME certificate are conveyed to the subscribers by email
sent from the specific list.
Subscription to incident specific lists
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Subscription is handled by the response teams. Disclosed parties who want
to participate in the communication send a list of potential subscribers to
the response team so the response team can validate subscription requests.
Each subscriber needs to send a subscription request to the response team
by email. The email must be signed with the subscriber's PGP key or S/MIME
certificate. If a PGP key is used, it must be available from a public key
server and is ideally connected to the Linux kernel's PGP web of trust. See
also: https://www.kernel.org/signature.html.
The response team verifies that the subscriber request is valid and adds
the subscriber to the list. After subscription the subscriber will receive
email from the mailing-list which is signed either with the list's PGP key
or the list's S/MIME certificate. The subscriber's email client can extract
the PGP key or the S/MIME certificate from the signature so the subscriber
can send encrypted email to the list.
@@ -45,6 +45,7 @@ Other guides to the community that are of interest to most developers are:
    submit-checklist
    kernel-docs
    deprecated
+   embargoed-hardware-issues

 These are some overall technical guides that have been put here for now for
 lack of a better place.
@@ -9229,6 +9229,18 @@ F: include/linux/nd.h
 F: include/linux/libnvdimm.h
 F: include/uapi/linux/ndctl.h
+LICENSES and SPDX stuff
+M: Thomas Gleixner <tglx@linutronix.de>
+M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+L: linux-spdx@vger.kernel.org
+S: Maintained
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/spdx.git
+F: COPYING
+F: Documentation/process/license-rules.rst
+F: LICENSES/
+F: scripts/spdxcheck-test.sh
+F: scripts/spdxcheck.py
 LIGHTNVM PLATFORM SUPPORT
 M: Matias Bjorling <mb@lightnvm.io>
 W: http://github/OpenChannelSSD
@@ -210,7 +210,7 @@ static int altera_ps_write_complete(struct fpga_manager *mgr,
                return -EIO;
        }
-       if (!IS_ERR(conf->confd)) {
+       if (conf->confd) {
                if (!gpiod_get_raw_value_cansleep(conf->confd)) {
                        dev_err(&mgr->dev, "CONF_DONE is inactive!\n");
                        return -EIO;
@@ -289,10 +289,13 @@ static int altera_ps_probe(struct spi_device *spi)
                return PTR_ERR(conf->status);
        }
-       conf->confd = devm_gpiod_get(&spi->dev, "confd", GPIOD_IN);
+       conf->confd = devm_gpiod_get_optional(&spi->dev, "confd", GPIOD_IN);
        if (IS_ERR(conf->confd)) {
-               dev_warn(&spi->dev, "Not using confd gpio: %ld\n",
-                        PTR_ERR(conf->confd));
+               dev_err(&spi->dev, "Failed to get confd gpio: %ld\n",
+                       PTR_ERR(conf->confd));
+               return PTR_ERR(conf->confd);
+       } else if (!conf->confd) {
+               dev_warn(&spi->dev, "Not using confd gpio");
        }

        /* Register manager with unique name */
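The altera-ps-spi change above moves the driver to the optional-GPIO API:
devm_gpiod_get_optional() returns NULL when no "confd" line is described at
all and an ERR_PTR() only for genuine failures, so probe can now fail hard
on real errors while still tolerating a board without a CONF_DONE pin, and
later code checks the descriptor for NULL instead of IS_ERR(). A minimal
sketch of that pattern; example_get_confd() is a made-up helper, only the
gpiod and dev_* calls are real kernel APIs:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/gpio/consumer.h>

static int example_get_confd(struct device *dev, struct gpio_desc **confd)
{
        /* NULL when "confd" is simply not described, ERR_PTR() on failure */
        *confd = devm_gpiod_get_optional(dev, "confd", GPIOD_IN);
        if (IS_ERR(*confd))
                return PTR_ERR(*confd);

        if (!*confd)
                dev_warn(dev, "Not using confd gpio");

        return 0;
}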
@@ -38,8 +38,7 @@
 #define SCOM_STATUS_PIB_RESP_MASK      0x00007000
 #define SCOM_STATUS_PIB_RESP_SHIFT     12
-#define SCOM_STATUS_ANY_ERR            (SCOM_STATUS_ERR_SUMMARY | \
-                                        SCOM_STATUS_PROTECTION | \
+#define SCOM_STATUS_ANY_ERR            (SCOM_STATUS_PROTECTION | \
                                         SCOM_STATUS_PARITY | \
                                         SCOM_STATUS_PIB_ABORT | \
                                         SCOM_STATUS_PIB_RESP_MASK)
@@ -251,11 +250,6 @@ static int handle_fsi2pib_status(struct scom_device *scom, uint32_t status)
        /* Return -EBUSY on PIB abort to force a retry */
        if (status & SCOM_STATUS_PIB_ABORT)
                return -EBUSY;
-       if (status & SCOM_STATUS_ERR_SUMMARY) {
-               fsi_device_write(scom->fsi_dev, SCOM_FSI2PIB_RESET_REG, &dummy,
-                                sizeof(uint32_t));
-               return -EIO;
-       }
        return 0;
 }
@@ -164,6 +164,11 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
                PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa1a6),
                .driver_data = (kernel_ulong_t)0,
        },
+       {
+               /* Lewisburg PCH */
+               PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa226),
+               .driver_data = (kernel_ulong_t)0,
+       },
        {
                /* Gemini Lake */
                PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x318e),
@@ -199,6 +204,11 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
                PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x45c5),
                .driver_data = (kernel_ulong_t)&intel_th_2x,
        },
+       {
+               /* Tiger Lake PCH */
+               PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa0a6),
+               .driver_data = (kernel_ulong_t)&intel_th_2x,
+       },
        { 0 },
 };
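Both intel_th additions are plain new rows in the driver's pci_device_id
table, which the PCI core matches against at probe time and which must stay
terminated by an all-zero sentinel. A rough sketch of that table shape,
assuming a made-up table name and a placeholder 0x1234 device ID;
PCI_DEVICE() and MODULE_DEVICE_TABLE() are the real macros:

#include <linux/module.h>
#include <linux/pci.h>

/* Sketch only: the table name and device ID below are placeholders. */
static const struct pci_device_id example_pci_ids[] = {
        {
                /* hypothetical device */
                PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x1234),
                .driver_data = (kernel_ulong_t)0,
        },
        { 0 },  /* all-zero sentinel terminates the table */
};
MODULE_DEVICE_TABLE(pci, example_pci_ids);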
@@ -1276,7 +1276,6 @@ int stm_source_register_device(struct device *parent,
 err:
        put_device(&src->dev);
-       kfree(src);
        return err;
 }
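The stm change removes a double free: once the device has been initialized,
put_device() dropping the last reference invokes the device's release
callback, and in this driver that callback already frees the enclosing
stm_source_device, so the explicit kfree(src) freed the same memory a second
time. A sketch of the ownership rule, with illustrative names;
device_initialize(), put_device(), container_of() and kzalloc()/kfree() are
the real APIs:

#include <linux/device.h>
#include <linux/slab.h>

struct example_dev {
        struct device dev;
        /* driver-specific fields ... */
};

static void example_release(struct device *dev)
{
        /* the release callback owns the final kfree() */
        kfree(container_of(dev, struct example_dev, dev));
}

static int example_register(struct device *parent)
{
        struct example_dev *ed = kzalloc(sizeof(*ed), GFP_KERNEL);

        if (!ed)
                return -ENOMEM;

        device_initialize(&ed->dev);
        ed->dev.release = example_release;
        ed->dev.parent = parent;

        /* ... on any later failure ... */
        put_device(&ed->dev);   /* release() frees ed; no extra kfree() */
        return -ENODEV;
}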
@@ -22,7 +22,7 @@ struct lkdtm_list {
  * recurse past the end of THREAD_SIZE by default.
  */
 #if defined(CONFIG_FRAME_WARN) && (CONFIG_FRAME_WARN > 0)
-#define REC_STACK_SIZE (CONFIG_FRAME_WARN / 2)
+#define REC_STACK_SIZE (_AC(CONFIG_FRAME_WARN, UL) / 2)
 #else
 #define REC_STACK_SIZE (THREAD_SIZE / 8)
 #endif
@@ -91,7 +91,7 @@ void lkdtm_LOOP(void)
 void lkdtm_EXHAUST_STACK(void)
 {
-       pr_info("Calling function with %d frame size to depth %d ...\n",
+       pr_info("Calling function with %lu frame size to depth %d ...\n",
                REC_STACK_SIZE, recur_count);
        recursive_loop(recur_count);
        pr_info("FAIL: survived without exhausting stack?!\n");
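The lkdtm change works because Kconfig emits CONFIG_FRAME_WARN as a bare
integer literal, so the two branches of REC_STACK_SIZE could end up with
different types (int vs. the usually-unsigned-long THREAD_SIZE) and no
single printk format specifier matched both. _AC(X, UL) token-pastes a UL
suffix onto the constant so both branches are unsigned long and %lu is
always correct. A standalone illustration, assuming a stand-in CONFIG
value; the _AC()/__AC() definitions mirror include/uapi/linux/const.h:

#include <stdio.h>

/* same definitions as include/uapi/linux/const.h */
#define __AC(X, Y)      (X##Y)
#define _AC(X, Y)       __AC(X, Y)

#define CONFIG_FRAME_WARN 1024          /* stand-in for the Kconfig value */
#define REC_STACK_SIZE (_AC(CONFIG_FRAME_WARN, UL) / 2)

int main(void)
{
        /* the pasted UL suffix makes the constant unsigned long, matching %lu */
        printf("frame size %lu\n", REC_STACK_SIZE);
        return 0;
}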
@@ -81,6 +81,8 @@
 #define MEI_DEV_ID_ICP_LP     0x34E0  /* Ice Lake Point LP */
+#define MEI_DEV_ID_TGP_LP     0xA0E0  /* Tiger Lake Point LP */
 #define MEI_DEV_ID_MCC        0x4B70  /* Mule Creek Canyon (EHL) */
 #define MEI_DEV_ID_MCC_4      0x4B75  /* Mule Creek Canyon 4 (EHL) */
@@ -98,6 +98,8 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
        {MEI_PCI_DEVICE(MEI_DEV_ID_ICP_LP, MEI_ME_PCH12_CFG)},
+       {MEI_PCI_DEVICE(MEI_DEV_ID_TGP_LP, MEI_ME_PCH12_CFG)},
        {MEI_PCI_DEVICE(MEI_DEV_ID_MCC, MEI_ME_PCH12_CFG)},
        {MEI_PCI_DEVICE(MEI_DEV_ID_MCC_4, MEI_ME_PCH8_CFG)},
@@ -691,7 +691,6 @@ static int vmballoon_alloc_page_list(struct vmballoon *b,
                }
                if (page) {
-                       vmballoon_mark_page_offline(page, ctl->page_size);
                        /* Success. Add the page to the list and continue. */
                        list_add(&page->lru, &ctl->pages);
                        continue;
@@ -930,7 +929,6 @@ static void vmballoon_release_page_list(struct list_head *page_list,
        list_for_each_entry_safe(page, tmp, page_list, lru) {
                list_del(&page->lru);
-               vmballoon_mark_page_online(page, page_size);
                __free_pages(page, vmballoon_page_order(page_size));
        }
@@ -1005,6 +1003,7 @@ static void vmballoon_enqueue_page_list(struct vmballoon *b,
                                  enum vmballoon_page_size_type page_size)
 {
        unsigned long flags;
+       struct page *page;
        if (page_size == VMW_BALLOON_4K_PAGE) {
                balloon_page_list_enqueue(&b->b_dev_info, pages);
@@ -1014,6 +1013,11 @@ static void vmballoon_enqueue_page_list(struct vmballoon *b,
                 * for the balloon compaction mechanism.
                 */
                spin_lock_irqsave(&b->b_dev_info.pages_lock, flags);
+               list_for_each_entry(page, pages, lru) {
+                       vmballoon_mark_page_offline(page, VMW_BALLOON_2M_PAGE);
+               }
                list_splice_init(pages, &b->huge_pages);
                __count_vm_events(BALLOON_INFLATE, *n_pages *
                                  vmballoon_page_in_frames(VMW_BALLOON_2M_PAGE));
@@ -1056,6 +1060,8 @@ static void vmballoon_dequeue_page_list(struct vmballoon *b,
        /* 2MB pages */
        spin_lock_irqsave(&b->b_dev_info.pages_lock, flags);
        list_for_each_entry_safe(page, tmp, &b->huge_pages, lru) {
+               vmballoon_mark_page_online(page, VMW_BALLOON_2M_PAGE);
                list_move(&page->lru, pages);
                if (++i == n_req_pages)
                        break;
@@ -310,7 +310,8 @@ int vmci_dbell_host_context_notify(u32 src_cid, struct vmci_handle handle)
        entry = container_of(resource, struct dbell_entry, resource);
        if (entry->run_delayed) {
-               schedule_work(&entry->work);
+               if (!schedule_work(&entry->work))
+                       vmci_resource_put(resource);
        } else {
                entry->notify_cb(entry->client_data);
                vmci_resource_put(resource);
@@ -361,7 +362,8 @@ static void dbell_fire_entries(u32 notify_idx)
                    atomic_read(&dbell->active) == 1) {
                        if (dbell->run_delayed) {
                                vmci_resource_get(&dbell->resource);
-                               schedule_work(&dbell->work);
+                               if (!schedule_work(&dbell->work))
+                                       vmci_resource_put(&dbell->resource);
                        } else {
                                dbell->notify_cb(dbell->client_data);
                        }
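The VMCI fix relies on schedule_work() returning false when the work item
is already pending: in that case the handler will only run (and drop its
reference) once, so the extra reference taken for this attempt must be
released on the spot or the vmci_resource refcount leaks and the doorbell
can never be destroyed. A sketch of that reference-balancing pattern, using
generic kref helpers as stand-ins for vmci_resource_get()/vmci_resource_put();
schedule_work() is the real workqueue API:

#include <linux/kref.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct example_entry {
        struct kref ref;
        struct work_struct work;
};

static void example_release(struct kref *ref)
{
        kfree(container_of(ref, struct example_entry, ref));
}

static void example_fire(struct example_entry *e)
{
        /* reference that the queued work handler will drop when it runs */
        kref_get(&e->ref);

        /*
         * schedule_work() returns false if the work was already pending;
         * the handler then runs only once, so the reference taken above
         * has no consumer and must be dropped right here.
         */
        if (!schedule_work(&e->work))
                kref_put(&e->ref, example_release);
}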