Commit e2683957 authored by Daniel De Graaf, committed by Konrad Rzeszutek Wilk

drivers/tpm: add xen tpmfront interface

This is a complete rewrite of the Xen TPM frontend driver, taking
advantage of a simplified frontend/backend interface and adding support
for cancellation and timeouts.  The backend for this driver is provided
by a vTPM stub domain using the interface in Xen 4.3.
Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Acked-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Peter Huewe <peterhuewe@gmx.de>
Reviewed-by: Peter Huewe <peterhuewe@gmx.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
parent 6efa20e4
Virtual TPM interface for Xen
Authors: Matthew Fioravante (JHUAPL), Daniel De Graaf (NSA)
This document describes the virtual Trusted Platform Module (vTPM) subsystem
for Xen. The reader is assumed to be familiar with building and installing Xen
and Linux, and to have a basic understanding of the TPM and vTPM concepts.
INTRODUCTION
------------
The goal of this work is to provide TPM functionality to a virtual guest
operating system (in Xen terms, a DomU). This allows programs to interact with
a TPM in a virtual system the same way they interact with a TPM on the physical
system. Each guest gets its own unique, emulated, software TPM. However, each
vTPM's secrets (keys, NVRAM, etc.) are managed by a vTPM Manager domain,
which seals the secrets to the physical TPM. If the process of creating each of
these domains (manager, vTPM, and guest) is trusted, the vTPM subsystem extends
the chain of trust rooted in the hardware TPM to virtual machines in Xen. Each
major component of vTPM is implemented as a separate domain, providing secure
separation guaranteed by the hypervisor. The vTPM domains are implemented in
mini-os to reduce memory and processor overhead.
This mini-os vTPM subsystem builds on the previous vTPM work done by IBM and
Intel.
DESIGN OVERVIEW
---------------
The architecture of vTPM is described below:
        +------------------+
        |    Linux DomU    | ...
        |       |  ^       |
        |       v  |       |
        |   xen-tpmfront   |
        +------------------+
                |  ^
                v  |
        +------------------+
        | mini-os/tpmback  |
        |       |  ^       |
        |       v  |       |
        |   vtpm-stubdom   | ...
        |       |  ^       |
        |       v  |       |
        | mini-os/tpmfront |
        +------------------+
                |  ^
                v  |
        +------------------+
        | mini-os/tpmback  |
        |       |  ^       |
        |       v  |       |
        | vtpmmgr-stubdom  |
        |       |  ^       |
        |       v  |       |
        | mini-os/tpm_tis  |
        +------------------+
                |  ^
                v  |
        +------------------+
        |   Hardware TPM   |
        +------------------+
* Linux DomU:       The Linux-based guest that wants to use a vTPM. There may
                    be more than one of these.

* xen-tpmfront.ko:  Linux kernel virtual TPM frontend driver. This driver
                    provides vTPM access to a Linux-based DomU.

* mini-os/tpmback:  Mini-os TPM backend driver. The Linux frontend driver
                    connects to this backend driver to facilitate
                    communications between the Linux DomU and its vTPM. This
                    driver is also used by vtpmmgr-stubdom to communicate with
                    vtpm-stubdom.

* vtpm-stubdom:     A mini-os stub domain that implements a vTPM. There is a
                    one-to-one mapping between running vtpm-stubdom instances
                    and logical vTPMs on the system. The vTPM Platform
                    Configuration Registers (PCRs) are normally all
                    initialized to zero.

* mini-os/tpmfront: Mini-os TPM frontend driver. The vTPM mini-os domain
                    vtpm-stubdom uses this driver to communicate with
                    vtpmmgr-stubdom. This driver is also used in mini-os
                    domains such as pv-grub that talk to the vTPM domain.

* vtpmmgr-stubdom:  A mini-os domain that implements the vTPM manager. There
                    is only one vTPM manager, and it should be running during
                    the entire lifetime of the machine. This domain regulates
                    access to the physical TPM on the system and secures the
                    persistent state of each vTPM.

* mini-os/tpm_tis:  Mini-os TPM version 1.2 TPM Interface Specification (TIS)
                    driver. This driver is used by vtpmmgr-stubdom to talk
                    directly to the hardware TPM. Communication is facilitated
                    by mapping hardware memory pages into vtpmmgr-stubdom.

* Hardware TPM:     The physical TPM that is soldered onto the motherboard.
INTEGRATION WITH XEN
--------------------
Support for the vTPM driver was added in Xen using the libxl toolstack in Xen
4.3. See the Xen documentation (docs/misc/vtpm.txt) for details on setting up
the vTPM and vTPM Manager stub domains. Once the stub domains are running, a
vTPM device is set up in the same manner as a disk or network device in the
domain's configuration file.
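
For example (an illustrative snippet, not taken from this patch: "domu-vtpm"
stands in for whatever name the vTPM stub domain was given, and
docs/misc/vtpm.txt remains the authoritative reference for the syntax), the
guest's configuration file gains a vtpm line much like a disk or vif line:

    vtpm = [ 'backend=domu-vtpm' ]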
In order to use features such as IMA that require a TPM to be loaded prior to
the initrd, the xen-tpmfront driver must be compiled into the kernel. If such
features are not needed, the driver can be compiled as a module and will be
loaded as usual.
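
Concretely, "compiled in" versus "module" corresponds to setting the Kconfig
option this patch adds (see the diff below) to y or m in the kernel .config; a
minimal fragment for the built-in case:

    CONFIG_TCG_TPM=y
    CONFIG_TCG_XEN=y

With CONFIG_TCG_XEN=m the driver is instead built as xen-tpmfront.ko and
loaded like any other module.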
@@ -91,4 +91,15 @@ config TCG_ST33_I2C
	  To compile this driver as a module, choose M here; the module will be
	  called tpm_stm_st33_i2c.

config TCG_XEN
	tristate "XEN TPM Interface"
	depends on TCG_TPM && XEN
	---help---
	  If you want to make TPM support available to a Xen user domain,
	  say Yes and it will be accessible from within Linux. See
	  the manpages for xl, xl.conf, and docs/misc/vtpm.txt in
	  the Xen source repository for more details.

	  To compile this driver as a module, choose M here; the module
	  will be called xen-tpmfront.

endif # TCG_TPM
@@ -18,3 +18,4 @@ obj-$(CONFIG_TCG_ATMEL) += tpm_atmel.o
obj-$(CONFIG_TCG_INFINEON) += tpm_infineon.o
obj-$(CONFIG_TCG_IBMVTPM) += tpm_ibmvtpm.o
obj-$(CONFIG_TCG_ST33_I2C) += tpm_i2c_stm_st33.o
obj-$(CONFIG_TCG_XEN) += xen-tpmfront.o
/******************************************************************************
* tpmif.h
*
* TPM I/O interface for Xen guest OSes, v2
*
* This file is in the public domain.
*
*/
#ifndef __XEN_PUBLIC_IO_TPMIF_H__
#define __XEN_PUBLIC_IO_TPMIF_H__
/*
* Xenbus state machine
*
* Device open:
* 1. Both ends start in XenbusStateInitialising
* 2. Backend transitions to InitWait (frontend does not wait on this step)
* 3. Frontend populates ring-ref, event-channel, feature-protocol-v2
* 4. Frontend transitions to Initialised
* 5. Backend maps grant and event channel, verifies feature-protocol-v2
* 6. Backend transitions to Connected
* 7. Frontend verifies feature-protocol-v2, transitions to Connected
*
* Device close:
* 1. State is changed to XenbusStateClosing
* 2. Frontend transitions to Closed
* 3. Backend unmaps grant and event, changes state to InitWait
*/
enum vtpm_shared_page_state {
	VTPM_STATE_IDLE,         /* no contents / vTPM idle / cancel complete */
	VTPM_STATE_SUBMIT,       /* request ready / vTPM working */
	VTPM_STATE_FINISH,       /* response ready / vTPM idle */
	VTPM_STATE_CANCEL,       /* cancel requested / vTPM working */
};
/* The backend should only change state to IDLE or FINISH, while the
 * frontend should only change to SUBMIT or CANCEL. */

struct vtpm_shared_page {
	uint32_t length;         /* request/response length in bytes */

	uint8_t state;           /* enum vtpm_shared_page_state */
	uint8_t locality;        /* for the current request */
	uint8_t pad;

	uint8_t nr_extra_pages;  /* extra pages for long packets; may be zero */
	uint32_t extra_pages[0]; /* grant IDs; length in nr_extra_pages */
};
#endif
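
To make the shared-page protocol concrete, here is a minimal sketch of a
frontend's send path, assuming the definitions from the header above.
notify_backend() and wait_for_backend_event() are hypothetical stand-ins for
the event-channel kick and wakeup that a real frontend (Linux or mini-os)
would wire up; only the state transitions follow the contract documented in
the header, with the frontend writing SUBMIT/CANCEL and the backend writing
FINISH/IDLE.

    #include <stdint.h>
    #include <string.h>
    #include "tpmif.h" /* struct vtpm_shared_page, enum vtpm_shared_page_state */

    /* Hypothetical event-channel plumbing. */
    extern void notify_backend(void);         /* kick the backend's evtchn */
    extern void wait_for_backend_event(void); /* block until it notifies us */

    /*
     * Send one TPM command and copy back the response.  'shr' is the page
     * granted to the backend; the packet body follows the header and the
     * extra-pages grant ID array.  The caller must ensure cmd_len fits in
     * the shared page (long packets would need extra_pages).  Returns the
     * response length, or 0 on error.
     */
    static uint32_t vtpm_transact(struct vtpm_shared_page *shr,
                                  const void *cmd, uint32_t cmd_len,
                                  void *resp, uint32_t resp_max)
    {
        uint8_t *buf = (uint8_t *)shr + sizeof(*shr)
                       + 4 * shr->nr_extra_pages;
        uint32_t len;

        /* IDLE or FINISH both mean the vTPM is idle and the page is ours. */
        if (shr->state != VTPM_STATE_IDLE && shr->state != VTPM_STATE_FINISH)
            return 0;

        /* Data and length must be visible before the state flips to
         * SUBMIT, which is what the backend acts on. */
        memcpy(buf, cmd, cmd_len);
        shr->length = cmd_len;
        shr->locality = 0;
        __sync_synchronize();                 /* write barrier */
        shr->state = VTPM_STATE_SUBMIT;
        notify_backend();

        /* Only the backend moves SUBMIT -> FINISH; a concurrent canceller
         * may set CANCEL, after which the backend returns the page to IDLE
         * without a response. */
        while (shr->state == VTPM_STATE_SUBMIT ||
               shr->state == VTPM_STATE_CANCEL)
            wait_for_backend_event();

        __sync_synchronize();                 /* read barrier */
        if (shr->state != VTPM_STATE_FINISH)
            return 0;                         /* request was cancelled */

        len = shr->length;
        if (len > resp_max)
            return 0;
        memcpy(resp, buf, len);
        /* Leave the state as FINISH; the next SUBMIT reclaims the page. */
        return len;
    }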