Commit 0302e28d authored by Linus Torvalds's avatar Linus Torvalds

Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security

Pull security subsystem updates from James Morris:
 "Highlights:

  IMA:
   - provide ">" and "<" operators for fowner/uid/euid rules

  KEYS:
   - add a system blacklist keyring

   - add KEYCTL_RESTRICT_KEYRING, exposes keyring link restriction
     functionality to userland via keyctl()

  LSM:
   - harden LSM API with __ro_after_init

   - add prlimit security hook, implement for SELinux

   - revive security_task_alloc hook

  TPM:
   - implement contextual TPM command 'spaces'"

* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (98 commits)
  tpm: Fix reference count to main device
  tpm_tis: convert to using locality callbacks
  tpm: fix handling of the TPM 2.0 event logs
  tpm_crb: remove a cruft constant
  keys: select CONFIG_CRYPTO when selecting DH / KDF
  apparmor: Make path_max parameter readonly
  apparmor: fix parameters so that the permission test is bypassed at boot
  apparmor: fix invalid reference to index variable of iterator line 836
  apparmor: use SHASH_DESC_ON_STACK
  security/apparmor/lsm.c: set debug messages
  apparmor: fix boolreturn.cocci warnings
  Smack: Use GFP_KERNEL for smk_netlbl_mls().
  smack: fix double free in smack_parse_opts_str()
  KEYS: add SP800-56A KDF support for DH
  KEYS: Keyring asymmetric key restrict method with chaining
  KEYS: Restrict asymmetric key linkage using a specific keychain
  KEYS: Add a lookup_restriction function for the asymmetric key type
  KEYS: Add KEYCTL_RESTRICT_KEYRING
  KEYS: Consistent ordering for __key_link_begin and restrict check
  KEYS: Add an optional lookup_restriction hook to key_type
  ...
parents 89c9fea3 8979b02a
...@@ -311,3 +311,54 @@ Functions are provided to register and unregister parsers:
Parsers may not have the same name. The names are otherwise only used for
displaying in debugging messages.
=========================
KEYRING LINK RESTRICTIONS
=========================
Keyrings created from userspace using add_key can be configured to check the
signature of the key being linked.
Several restriction methods are available:
(1) Restrict using the kernel builtin trusted keyring
- Option string used with KEYCTL_RESTRICT_KEYRING:
- "builtin_trusted"
The kernel builtin trusted keyring will be searched for the signing
key. The ca_keys kernel parameter also affects which keys are used for
signature verification.
(2) Restrict using the kernel builtin and secondary trusted keyrings
- Option string used with KEYCTL_RESTRICT_KEYRING:
- "builtin_and_secondary_trusted"
The kernel builtin and secondary trusted keyrings will be searched for the
signing key. The ca_keys kernel parameter also affects which keys are used
for signature verification.
(3) Restrict using a separate key or keyring
- Option string used with KEYCTL_RESTRICT_KEYRING:
- "key_or_keyring:<key or keyring serial number>[:chain]"
Whenever a key link is requested, the link will only succeed if the key
being linked is signed by one of the designated keys. This key may be
specified directly by providing a serial number for one asymmetric key, or
a group of keys may be searched for the signing key by providing the
serial number for a keyring.
When the "chain" option is provided at the end of the string, the keys
within the destination keyring will also be searched for signing keys.
This allows for verification of certificate chains by adding each
cert in order (starting closest to the root) to one keyring.
In all of these cases, if the signing key is found the signature of the key to
be linked will be verified using the signing key. The requested key is added
to the keyring only if the signature is successfully verified. -ENOKEY is
returned if the parent certificate could not be found, or -EKEYREJECTED is
returned if the signature check fails or the key is blacklisted. Other errors
may be returned if the signature check could not be performed.
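As a rough userspace illustration of the "key_or_keyring:<serial>:chain" form, the sketch below builds the option string and applies it; it assumes libkeyutils provides the keyctl_restrict_keyring() wrapper, and that the CA key serial is obtained elsewhere.

#include <keyutils.h>
#include <stdio.h>

/* Restrict "certs" so a new certificate may only be linked if it is signed
 * by the key whose serial is ca_serial or, because of ":chain", by a
 * certificate already linked into "certs" itself. Adding certs one at a
 * time, starting nearest the root, therefore verifies the whole chain. */
static long restrict_cert_chain_keyring(key_serial_t certs,
                                        key_serial_t ca_serial)
{
        char restriction[64];

        snprintf(restriction, sizeof(restriction),
                 "key_or_keyring:%d:chain", ca_serial);
        return keyctl_restrict_keyring(certs, "asymmetric", restriction);
}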
...@@ -827,7 +827,7 @@ The keyctl syscall functions are:
long keyctl(KEYCTL_DH_COMPUTE, struct keyctl_dh_params *params,
char *buffer, size_t buflen,
struct keyctl_kdf_params *kdf);
The params struct contains serial numbers for three keys:
...@@ -844,18 +844,61 @@ The keyctl syscall functions are:
public key. If the base is the remote public key, the result is
the shared secret.
If the parameter kdf is NULL, the following applies:
- The buffer length must be at least the length of the prime, or zero.
- If the buffer length is nonzero, the length of the result is
returned when it is successfully calculated and copied into the
buffer. When the buffer length is zero, the minimum required
buffer length is returned.
The kdf parameter allows the caller to apply a key derivation function
(KDF) on the Diffie-Hellman computation where only the result
of the KDF is returned to the caller. The KDF is characterized with
struct keyctl_kdf_params as follows:
- char *hashname specifies the NUL terminated string identifying
the hash used from the kernel crypto API and applied for the KDF
operation. The KDF implementation complies with SP800-56A as well
as with SP800-108 (the counter KDF).
- char *otherinfo specifies the OtherInfo data as documented in
SP800-56A section 5.8.1.2. The length of the buffer is given with
otherinfolen. The format of OtherInfo is defined by the caller.
The otherinfo pointer may be NULL if no OtherInfo shall be used.
This function will return error EOPNOTSUPP if the key type is not
supported, error ENOKEY if the key could not be found, or error
EACCES if the key is not readable by the caller. In addition, the
function will return EMSGSIZE when the parameter kdf is non-NULL
and either the buffer length or the OtherInfo length exceeds the
allowed length.
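A minimal userspace sketch of the extended call follows; it assumes the uapi layouts of struct keyctl_dh_params and struct keyctl_kdf_params exported in <linux/keyctl.h> at this point in time, and "sha256" is simply one hash name the kernel crypto API may recognise.

#include <linux/keyctl.h>
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

static long dh_compute_kdf(int32_t priv, int32_t prime, int32_t base,
                           void *out, size_t outlen)
{
        struct keyctl_dh_params dh = {
                .private = priv,   /* serial of the local private key */
                .prime   = prime,  /* serial of the key holding the prime p */
                .base    = base,   /* serial of the key holding g or the peer's public value */
        };
        struct keyctl_kdf_params kdf = {
                .hashname     = (char *)"sha256", /* hash for the counter KDF */
                .otherinfo    = NULL,             /* no OtherInfo in this sketch */
                .otherinfolen = 0,
        };

        /* Returns the number of bytes written to out, or -1 with errno set
         * (for example EMSGSIZE if outlen exceeds the permitted maximum). */
        return syscall(__NR_keyctl, KEYCTL_DH_COMPUTE, &dh, out, outlen, &kdf);
}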
(*) Restrict keyring linkage
long keyctl(KEYCTL_RESTRICT_KEYRING, key_serial_t keyring,
const char *type, const char *restriction);
An existing keyring can restrict linkage of additional keys by evaluating
the contents of the key according to a restriction scheme.
"keyring" is the key ID for an existing keyring to apply a restriction
to. It may be empty or may already have keys linked. Existing linked keys
will remain in the keyring even if the new restriction would reject them.
"type" is a registered key type.
"restriction" is a string describing how key linkage is to be restricted.
The format varies depending on the key type, and the string is passed to
the lookup_restriction() function for the requested type. It may specify
a method and relevant data for the restriction such as signature
verification or constraints on key payload. If the requested key type is
later unregistered, no keys may be added to the keyring after the key type
is removed.
To apply a keyring restriction the process must have Set Attribute
permission and the keyring must not be previously restricted.
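A minimal end-to-end sketch of the call (assuming a libkeyutils build that wraps the new command as keyctl_restrict_keyring(); the raw keyctl(2) syscall takes the same three arguments after KEYCTL_RESTRICT_KEYRING):

#include <keyutils.h>
#include <stdio.h>

int main(void)
{
        /* A fresh keyring that will only accept asymmetric keys vouched
         * for by the kernel's builtin trusted keyring. */
        key_serial_t kr = add_key("keyring", "verified-certs", NULL, 0,
                                  KEY_SPEC_SESSION_KEYRING);

        if (kr < 0 ||
            keyctl_restrict_keyring(kr, "asymmetric", "builtin_trusted") < 0) {
                perror("keyctl_restrict_keyring");
                return 1;
        }
        return 0;
}

Since the creator normally holds possessor Set Attribute permission on a keyring it has just added, restricting it immediately, as above, typically succeeds.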
===============
KERNEL SERVICES
...@@ -1032,10 +1075,7 @@ payload contents" for more information.
struct key *keyring_alloc(const char *description, uid_t uid, gid_t gid,
const struct cred *cred,
key_perm_t perm,
struct key_restriction *restrict_link,
unsigned long flags,
struct key *dest);
...@@ -1047,20 +1087,23 @@ payload contents" for more information.
KEY_ALLOC_NOT_IN_QUOTA in flags if the keyring shouldn't be accounted
towards the user's quota). Error ENOMEM can also be returned.
If restrict_link is not NULL, it should point to a structure that contains
the function that will be called each time an attempt is made to link a
key into the new keyring. The structure may also contain a key pointer
and an associated key type. The function is called to check whether a key
may be added into the keyring or not. The key type is used by the garbage
collector to clean up function or data pointers in this structure if the
given key type is unregistered. Callers of key_create_or_update() within
the kernel can pass KEY_ALLOC_BYPASS_RESTRICTION to suppress the check.
An example of using this is to manage rings of cryptographic keys that are
set up when the kernel boots where userspace is also permitted to add keys
- provided they can be verified by a key the kernel already has.
When called, the restriction function will be passed the keyring being
added to, the key type, the payload of the key being added, and data to be
used in the restriction check. Note that when a new key is being created,
this is called between payload preparsing and actual key creation. The
function should return 0 to allow the link or an error to reject it.
A convenience function, restrict_link_reject, exists to always return
-EPERM in this case.
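As a concrete, minimal sketch of the new calling convention for built-in kernel code (the keyring name ".example_locked" and the wrapper function are hypothetical; restrict_link_reject() is the convenience rejector just mentioned):

#include <linux/cred.h>
#include <linux/err.h>
#include <linux/key.h>
#include <linux/slab.h>

static struct key *make_locked_down_keyring(void)
{
        struct key_restriction *restriction;

        restriction = kzalloc(sizeof(*restriction), GFP_KERNEL);
        if (!restriction)
                return ERR_PTR(-ENOMEM);

        /* Refuse every link attempt; a real user would install a custom
         * check function and possibly a key/keytype for the GC to track. */
        restriction->check = restrict_link_reject;

        return keyring_alloc(".example_locked", KUIDT_INIT(0), KGIDT_INIT(0),
                             current_cred(),
                             (KEY_POS_ALL & ~KEY_POS_SETATTR) | KEY_USR_VIEW,
                             KEY_ALLOC_NOT_IN_QUOTA, restriction, NULL);
}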
...@@ -1445,6 +1488,15 @@ The structure has a number of fields, some of which are mandatory:
The authorisation key.
(*) struct key_restriction *(*lookup_restriction)(const char *params);
This optional method is used to enable userspace configuration of keyring
restrictions. The restriction parameter string (not including the key type
name) is passed in, and this method returns a pointer to a key_restriction
structure containing the relevant functions and data to evaluate each
attempted key link operation. If there is no match, -EINVAL is returned.
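A schematic hookup of this method might look like the sketch below; the "foo" type, the "accept_all" parameter string and foo_restrict_check() are purely illustrative names, not kernel API.

#include <linux/err.h>
#include <linux/key-type.h>
#include <linux/slab.h>
#include <linux/string.h>

static int foo_restrict_check(struct key *dest_keyring,
                              const struct key_type *type,
                              const union key_payload *payload,
                              struct key *restriction_key)
{
        /* A real check would vet the candidate key's payload here. */
        return 0;
}

static struct key_restriction *foo_lookup_restriction(const char *params)
{
        struct key_restriction *keyres;

        if (strcmp(params, "accept_all") != 0)
                return ERR_PTR(-EINVAL);

        keyres = kzalloc(sizeof(*keyres), GFP_KERNEL);
        if (!keyres)
                return ERR_PTR(-ENOMEM);
        keyres->check = foo_restrict_check;
        return keyres;
}

static struct key_type key_type_foo = {
        .name                   = "foo",
        /* ... mandatory methods such as .preparse/.instantiate elided ... */
        .lookup_restriction     = foo_lookup_restriction,
};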
============================
REQUEST-KEY CALLBACK SERVICE
============================
......
...@@ -64,4 +64,22 @@ config SECONDARY_TRUSTED_KEYRING
those keys are not blacklisted and are vouched for by a key built
into the kernel or already in the secondary trusted keyring.
config SYSTEM_BLACKLIST_KEYRING
bool "Provide system-wide ring of blacklisted keys"
depends on KEYS
help
Provide a system keyring to which blacklisted keys can be added.
Keys in the keyring are considered entirely untrusted. Keys in this
keyring are used by the module signature checking to reject loading
of modules signed with a blacklisted key.
config SYSTEM_BLACKLIST_HASH_LIST
string "Hashes to be preloaded into the system blacklist keyring"
depends on SYSTEM_BLACKLIST_KEYRING
help
If set, this option should be the filename of a list of hashes in the
form "<hash>", "<hash>", ... . This will be included into a C
wrapper to incorporate the list into the kernel. Each <hash> should
be a string of hex digits.
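For illustration, a hypothetical file named by this option could look like the fragment below (hash values invented). Because the file is textually included into the blacklist_hashes[] array added later in this series, the entries are C string literals separated by commas with no trailing comma, and each string follows the blacklist key description format of a type prefix, a colon and hex digits.

/* contents of the file named by CONFIG_SYSTEM_BLACKLIST_HASH_LIST */
"tbs:23aa429783cd497f35a1c4d1d0e6e3f5b0c4a2d8e9f1b3c5d7e9fb0d2f4a6c8e",
"tbs:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"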
endmenu
...@@ -3,6 +3,12 @@
#
obj-$(CONFIG_SYSTEM_TRUSTED_KEYRING) += system_keyring.o system_certificates.o
obj-$(CONFIG_SYSTEM_BLACKLIST_KEYRING) += blacklist.o
ifneq ($(CONFIG_SYSTEM_BLACKLIST_HASH_LIST),"")
obj-$(CONFIG_SYSTEM_BLACKLIST_KEYRING) += blacklist_hashes.o
else
obj-$(CONFIG_SYSTEM_BLACKLIST_KEYRING) += blacklist_nohashes.o
endif
ifeq ($(CONFIG_SYSTEM_TRUSTED_KEYRING),y)
......
/* System hash blacklist.
*
* Copyright (C) 2016 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public Licence
* as published by the Free Software Foundation; either version
* 2 of the Licence, or (at your option) any later version.
*/
#define pr_fmt(fmt) "blacklist: "fmt
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/key.h>
#include <linux/key-type.h>
#include <linux/sched.h>
#include <linux/ctype.h>
#include <linux/err.h>
#include <linux/seq_file.h>
#include <keys/system_keyring.h>
#include "blacklist.h"
static struct key *blacklist_keyring;
/*
* The description must be a type prefix, a colon and then an even number of
* hex digits. The hash is kept in the description.
*/
static int blacklist_vet_description(const char *desc)
{
int n = 0;
if (*desc == ':')
return -EINVAL;
for (; *desc; desc++)
if (*desc == ':')
goto found_colon;
return -EINVAL;
found_colon:
desc++;
for (; *desc; desc++) {
if (!isxdigit(*desc))
return -EINVAL;
n++;
}
if (n == 0 || n & 1)
return -EINVAL;
return 0;
}
/*
* The hash to be blacklisted is expected to be in the description. There will
* be no payload.
*/
static int blacklist_preparse(struct key_preparsed_payload *prep)
{
if (prep->datalen > 0)
return -EINVAL;
return 0;
}
static void blacklist_free_preparse(struct key_preparsed_payload *prep)
{
}
static void blacklist_describe(const struct key *key, struct seq_file *m)
{
seq_puts(m, key->description);
}
static struct key_type key_type_blacklist = {
.name = "blacklist",
.vet_description = blacklist_vet_description,
.preparse = blacklist_preparse,
.free_preparse = blacklist_free_preparse,
.instantiate = generic_key_instantiate,
.describe = blacklist_describe,
};
/**
* mark_hash_blacklisted - Add a hash to the system blacklist
* @hash: The hash as a hex string with a type prefix (e.g. "tbs:23aa429783")
*/
int mark_hash_blacklisted(const char *hash)
{
key_ref_t key;
key = key_create_or_update(make_key_ref(blacklist_keyring, true),
"blacklist",
hash,
NULL,
0,
((KEY_POS_ALL & ~KEY_POS_SETATTR) |
KEY_USR_VIEW),
KEY_ALLOC_NOT_IN_QUOTA |
KEY_ALLOC_BUILT_IN);
if (IS_ERR(key)) {
pr_err("Problem blacklisting hash (%ld)\n", PTR_ERR(key));
return PTR_ERR(key);
}
return 0;
}
/**
* is_hash_blacklisted - Determine if a hash is blacklisted
* @hash: The hash to be checked as a binary blob
* @hash_len: The length of the binary hash
* @type: Type of hash
*/
int is_hash_blacklisted(const u8 *hash, size_t hash_len, const char *type)
{
key_ref_t kref;
size_t type_len = strlen(type);
char *buffer, *p;
int ret = 0;
buffer = kmalloc(type_len + 1 + hash_len * 2 + 1, GFP_KERNEL);
if (!buffer)
return -ENOMEM;
p = memcpy(buffer, type, type_len);
p += type_len;
*p++ = ':';
bin2hex(p, hash, hash_len);
p += hash_len * 2;
*p = 0;
kref = keyring_search(make_key_ref(blacklist_keyring, true),
&key_type_blacklist, buffer);
if (!IS_ERR(kref)) {
key_ref_put(kref);
ret = -EKEYREJECTED;
}
kfree(buffer);
return ret;
}
EXPORT_SYMBOL_GPL(is_hash_blacklisted);
/*
* Initialise the blacklist
*/
static int __init blacklist_init(void)
{
const char *const *bl;
if (register_key_type(&key_type_blacklist) < 0)
panic("Can't allocate system blacklist key type\n");
blacklist_keyring =
keyring_alloc(".blacklist",
KUIDT_INIT(0), KGIDT_INIT(0),
current_cred(),
(KEY_POS_ALL & ~KEY_POS_SETATTR) |
KEY_USR_VIEW | KEY_USR_READ |
KEY_USR_SEARCH,
KEY_ALLOC_NOT_IN_QUOTA |
KEY_FLAG_KEEP,
NULL, NULL);
if (IS_ERR(blacklist_keyring))
panic("Can't allocate system blacklist keyring\n");
for (bl = blacklist_hashes; *bl; bl++)
if (mark_hash_blacklisted(*bl) < 0)
pr_err("- blacklisting failed\n");
return 0;
}
/*
* Must be initialised before we try and load the keys into the keyring.
*/
device_initcall(blacklist_init);
#include <linux/kernel.h>
extern const char __initdata *const blacklist_hashes[];
#include "blacklist.h"
const char __initdata *const blacklist_hashes[] = {
#include CONFIG_SYSTEM_BLACKLIST_HASH_LIST
, NULL
};
#include "blacklist.h"
const char __initdata *const blacklist_hashes[] = {
NULL
};
...@@ -14,6 +14,7 @@ ...@@ -14,6 +14,7 @@
#include <linux/sched.h> #include <linux/sched.h>
#include <linux/cred.h> #include <linux/cred.h>
#include <linux/err.h> #include <linux/err.h>
#include <linux/slab.h>
#include <keys/asymmetric-type.h> #include <keys/asymmetric-type.h>
#include <keys/system_keyring.h> #include <keys/system_keyring.h>
#include <crypto/pkcs7.h> #include <crypto/pkcs7.h>
...@@ -32,11 +33,13 @@ extern __initconst const unsigned long system_certificate_list_size; ...@@ -32,11 +33,13 @@ extern __initconst const unsigned long system_certificate_list_size;
* Restrict the addition of keys into a keyring based on the key-to-be-added * Restrict the addition of keys into a keyring based on the key-to-be-added
* being vouched for by a key in the built in system keyring. * being vouched for by a key in the built in system keyring.
*/ */
int restrict_link_by_builtin_trusted(struct key *keyring, int restrict_link_by_builtin_trusted(struct key *dest_keyring,
const struct key_type *type, const struct key_type *type,
const union key_payload *payload) const union key_payload *payload,
struct key *restriction_key)
{ {
return restrict_link_by_signature(builtin_trusted_keys, type, payload); return restrict_link_by_signature(dest_keyring, type, payload,
builtin_trusted_keys);
} }
#ifdef CONFIG_SECONDARY_TRUSTED_KEYRING #ifdef CONFIG_SECONDARY_TRUSTED_KEYRING
...@@ -49,20 +52,40 @@ int restrict_link_by_builtin_trusted(struct key *keyring, ...@@ -49,20 +52,40 @@ int restrict_link_by_builtin_trusted(struct key *keyring,
* keyrings. * keyrings.
*/ */
int restrict_link_by_builtin_and_secondary_trusted( int restrict_link_by_builtin_and_secondary_trusted(
struct key *keyring, struct key *dest_keyring,
const struct key_type *type, const struct key_type *type,
const union key_payload *payload) const union key_payload *payload,
struct key *restrict_key)
{ {
/* If we have a secondary trusted keyring, then that contains a link /* If we have a secondary trusted keyring, then that contains a link
* through to the builtin keyring and the search will follow that link. * through to the builtin keyring and the search will follow that link.
*/ */
if (type == &key_type_keyring && if (type == &key_type_keyring &&
keyring == secondary_trusted_keys && dest_keyring == secondary_trusted_keys &&
payload == &builtin_trusted_keys->payload) payload == &builtin_trusted_keys->payload)
/* Allow the builtin keyring to be added to the secondary */ /* Allow the builtin keyring to be added to the secondary */
return 0; return 0;
return restrict_link_by_signature(secondary_trusted_keys, type, payload); return restrict_link_by_signature(dest_keyring, type, payload,
secondary_trusted_keys);
}
/**
* Allocate a struct key_restriction for the "builtin and secondary trust"
* keyring. Only for use in system_trusted_keyring_init().
*/
static __init struct key_restriction *get_builtin_and_secondary_restriction(void)
{
struct key_restriction *restriction;
restriction = kzalloc(sizeof(struct key_restriction), GFP_KERNEL);
if (!restriction)
panic("Can't allocate secondary trusted keyring restriction\n");
restriction->check = restrict_link_by_builtin_and_secondary_trusted;
return restriction;
} }
#endif #endif
...@@ -91,7 +114,7 @@ static __init int system_trusted_keyring_init(void) ...@@ -91,7 +114,7 @@ static __init int system_trusted_keyring_init(void)
KEY_USR_VIEW | KEY_USR_READ | KEY_USR_SEARCH | KEY_USR_VIEW | KEY_USR_READ | KEY_USR_SEARCH |
KEY_USR_WRITE), KEY_USR_WRITE),
KEY_ALLOC_NOT_IN_QUOTA, KEY_ALLOC_NOT_IN_QUOTA,
restrict_link_by_builtin_and_secondary_trusted, get_builtin_and_secondary_restriction(),
NULL); NULL);
if (IS_ERR(secondary_trusted_keys)) if (IS_ERR(secondary_trusted_keys))
panic("Can't allocate secondary trusted keyring\n"); panic("Can't allocate secondary trusted keyring\n");
......
...@@ -17,6 +17,7 @@ ...@@ -17,6 +17,7 @@
#include <linux/module.h> #include <linux/module.h>
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/ctype.h> #include <linux/ctype.h>
#include <keys/system_keyring.h>
#include "asymmetric_keys.h" #include "asymmetric_keys.h"
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
...@@ -451,15 +452,100 @@ static void asymmetric_key_destroy(struct key *key) ...@@ -451,15 +452,100 @@ static void asymmetric_key_destroy(struct key *key)
asymmetric_key_free_kids(kids); asymmetric_key_free_kids(kids);
} }
static struct key_restriction *asymmetric_restriction_alloc(
key_restrict_link_func_t check,
struct key *key)
{
struct key_restriction *keyres =
kzalloc(sizeof(struct key_restriction), GFP_KERNEL);
if (!keyres)
return ERR_PTR(-ENOMEM);
keyres->check = check;
keyres->key = key;
keyres->keytype = &key_type_asymmetric;
return keyres;
}
/*
* look up keyring restrict functions for asymmetric keys
*/
static struct key_restriction *asymmetric_lookup_restriction(
const char *restriction)
{
char *restrict_method;
char *parse_buf;
char *next;
struct key_restriction *ret = ERR_PTR(-EINVAL);
if (strcmp("builtin_trusted", restriction) == 0)
return asymmetric_restriction_alloc(
restrict_link_by_builtin_trusted, NULL);
if (strcmp("builtin_and_secondary_trusted", restriction) == 0)
return asymmetric_restriction_alloc(
restrict_link_by_builtin_and_secondary_trusted, NULL);
parse_buf = kstrndup(restriction, PAGE_SIZE, GFP_KERNEL);
if (!parse_buf)
return ERR_PTR(-ENOMEM);
next = parse_buf;
restrict_method = strsep(&next, ":");
if ((strcmp(restrict_method, "key_or_keyring") == 0) && next) {
char *key_text;
key_serial_t serial;
struct key *key;
key_restrict_link_func_t link_fn =
restrict_link_by_key_or_keyring;
bool allow_null_key = false;
key_text = strsep(&next, ":");
if (next) {
if (strcmp(next, "chain") != 0)
goto out;
link_fn = restrict_link_by_key_or_keyring_chain;
allow_null_key = true;
}
if (kstrtos32(key_text, 0, &serial) < 0)
goto out;
if ((serial == 0) && allow_null_key) {
key = NULL;
} else {
key = key_lookup(serial);
if (IS_ERR(key)) {
ret = ERR_CAST(key);
goto out;
}
}
ret = asymmetric_restriction_alloc(link_fn, key);
if (IS_ERR(ret))
key_put(key);
}
out:
kfree(parse_buf);
return ret;
}
struct key_type key_type_asymmetric = { struct key_type key_type_asymmetric = {
.name = "asymmetric", .name = "asymmetric",
.preparse = asymmetric_key_preparse, .preparse = asymmetric_key_preparse,
.free_preparse = asymmetric_key_free_preparse, .free_preparse = asymmetric_key_free_preparse,
.instantiate = generic_key_instantiate, .instantiate = generic_key_instantiate,
.match_preparse = asymmetric_key_match_preparse, .match_preparse = asymmetric_key_match_preparse,
.match_free = asymmetric_key_match_free, .match_free = asymmetric_key_match_free,
.destroy = asymmetric_key_destroy, .destroy = asymmetric_key_destroy,
.describe = asymmetric_key_describe, .describe = asymmetric_key_describe,
.lookup_restriction = asymmetric_lookup_restriction,
}; };
EXPORT_SYMBOL_GPL(key_type_asymmetric); EXPORT_SYMBOL_GPL(key_type_asymmetric);
......
...@@ -23,6 +23,7 @@ struct pkcs7_signed_info { ...@@ -23,6 +23,7 @@ struct pkcs7_signed_info {
struct x509_certificate *signer; /* Signing certificate (in msg->certs) */ struct x509_certificate *signer; /* Signing certificate (in msg->certs) */
unsigned index; unsigned index;
bool unsupported_crypto; /* T if not usable due to missing crypto */ bool unsupported_crypto; /* T if not usable due to missing crypto */
bool blacklisted;
/* Message digest - the digest of the Content Data (or NULL) */ /* Message digest - the digest of the Content Data (or NULL) */
const void *msgdigest; const void *msgdigest;
......
...@@ -190,6 +190,18 @@ static int pkcs7_verify_sig_chain(struct pkcs7_message *pkcs7, ...@@ -190,6 +190,18 @@ static int pkcs7_verify_sig_chain(struct pkcs7_message *pkcs7,
x509->subject, x509->subject,
x509->raw_serial_size, x509->raw_serial); x509->raw_serial_size, x509->raw_serial);
x509->seen = true; x509->seen = true;
if (x509->blacklisted) {
/* If this cert is blacklisted, then mark everything
* that depends on this as blacklisted too.
*/
sinfo->blacklisted = true;
for (p = sinfo->signer; p != x509; p = p->signer)
p->blacklisted = true;
pr_debug("- blacklisted\n");
return 0;
}
if (x509->unsupported_key) if (x509->unsupported_key)
goto unsupported_crypto_in_x509; goto unsupported_crypto_in_x509;
...@@ -357,17 +369,19 @@ static int pkcs7_verify_one(struct pkcs7_message *pkcs7, ...@@ -357,17 +369,19 @@ static int pkcs7_verify_one(struct pkcs7_message *pkcs7,
* *
* (*) -EBADMSG if some part of the message was invalid, or: * (*) -EBADMSG if some part of the message was invalid, or:
* *
* (*) -ENOPKG if none of the signature chains are verifiable because suitable * (*) 0 if no signature chains were found to be blacklisted or to contain
* crypto modules couldn't be found, or: * unsupported crypto, or:
* *
* (*) 0 if all the signature chains that don't incur -ENOPKG can be verified * (*) -EKEYREJECTED if a blacklisted key was encountered, or:
* (note that a signature chain may be of zero length), or: *
* (*) -ENOPKG if none of the signature chains are verifiable because suitable
* crypto modules couldn't be found.
*/ */
int pkcs7_verify(struct pkcs7_message *pkcs7, int pkcs7_verify(struct pkcs7_message *pkcs7,
enum key_being_used_for usage) enum key_being_used_for usage)
{ {
struct pkcs7_signed_info *sinfo; struct pkcs7_signed_info *sinfo;
int enopkg = -ENOPKG; int actual_ret = -ENOPKG;
int ret; int ret;
kenter(""); kenter("");
...@@ -412,6 +426,8 @@ int pkcs7_verify(struct pkcs7_message *pkcs7, ...@@ -412,6 +426,8 @@ int pkcs7_verify(struct pkcs7_message *pkcs7,
for (sinfo = pkcs7->signed_infos; sinfo; sinfo = sinfo->next) { for (sinfo = pkcs7->signed_infos; sinfo; sinfo = sinfo->next) {
ret = pkcs7_verify_one(pkcs7, sinfo); ret = pkcs7_verify_one(pkcs7, sinfo);
if (sinfo->blacklisted && actual_ret == -ENOPKG)
actual_ret = -EKEYREJECTED;
if (ret < 0) { if (ret < 0) {
if (ret == -ENOPKG) { if (ret == -ENOPKG) {
sinfo->unsupported_crypto = true; sinfo->unsupported_crypto = true;
...@@ -420,11 +436,11 @@ int pkcs7_verify(struct pkcs7_message *pkcs7, ...@@ -420,11 +436,11 @@ int pkcs7_verify(struct pkcs7_message *pkcs7,
kleave(" = %d", ret); kleave(" = %d", ret);
return ret; return ret;
} }
enopkg = 0; actual_ret = 0;
} }
kleave(" = %d", enopkg); kleave(" = %d", actual_ret);
return enopkg; return actual_ret;
} }
EXPORT_SYMBOL_GPL(pkcs7_verify); EXPORT_SYMBOL_GPL(pkcs7_verify);
......
...@@ -56,9 +56,10 @@ __setup("ca_keys=", ca_keys_setup); ...@@ -56,9 +56,10 @@ __setup("ca_keys=", ca_keys_setup);
/** /**
* restrict_link_by_signature - Restrict additions to a ring of public keys * restrict_link_by_signature - Restrict additions to a ring of public keys
* @trust_keyring: A ring of keys that can be used to vouch for the new cert. * @dest_keyring: Keyring being linked to.
* @type: The type of key being added. * @type: The type of key being added.
* @payload: The payload of the new key. * @payload: The payload of the new key.
* @trust_keyring: A ring of keys that can be used to vouch for the new cert.
* *
* Check the new certificate against the ones in the trust keyring. If one of * Check the new certificate against the ones in the trust keyring. If one of
* those is the signing key and validates the new certificate, then mark the * those is the signing key and validates the new certificate, then mark the
...@@ -69,9 +70,10 @@ __setup("ca_keys=", ca_keys_setup); ...@@ -69,9 +70,10 @@ __setup("ca_keys=", ca_keys_setup);
* signature check fails or the key is blacklisted and some other error if * signature check fails or the key is blacklisted and some other error if
* there is a matching certificate but the signature check cannot be performed. * there is a matching certificate but the signature check cannot be performed.
*/ */
int restrict_link_by_signature(struct key *trust_keyring, int restrict_link_by_signature(struct key *dest_keyring,
const struct key_type *type, const struct key_type *type,
const union key_payload *payload) const union key_payload *payload,
struct key *trust_keyring)
{ {
const struct public_key_signature *sig; const struct public_key_signature *sig;
struct key *key; struct key *key;
...@@ -106,3 +108,156 @@ int restrict_link_by_signature(struct key *trust_keyring, ...@@ -106,3 +108,156 @@ int restrict_link_by_signature(struct key *trust_keyring,
key_put(key); key_put(key);
return ret; return ret;
} }
static bool match_either_id(const struct asymmetric_key_ids *pair,
const struct asymmetric_key_id *single)
{
return (asymmetric_key_id_same(pair->id[0], single) ||
asymmetric_key_id_same(pair->id[1], single));
}
static int key_or_keyring_common(struct key *dest_keyring,
const struct key_type *type,
const union key_payload *payload,
struct key *trusted, bool check_dest)
{
const struct public_key_signature *sig;
struct key *key = NULL;
int ret;
pr_devel("==>%s()\n", __func__);
if (!dest_keyring)
return -ENOKEY;
else if (dest_keyring->type != &key_type_keyring)
return -EOPNOTSUPP;
if (!trusted && !check_dest)
return -ENOKEY;
if (type != &key_type_asymmetric)
return -EOPNOTSUPP;
sig = payload->data[asym_auth];
if (!sig->auth_ids[0] && !sig->auth_ids[1])
return -ENOKEY;
if (trusted) {
if (trusted->type == &key_type_keyring) {
/* See if we have a key that signed this one. */
key = find_asymmetric_key(trusted, sig->auth_ids[0],
sig->auth_ids[1], false);
if (IS_ERR(key))
key = NULL;
} else if (trusted->type == &key_type_asymmetric) {
const struct asymmetric_key_ids *signer_ids;
signer_ids = asymmetric_key_ids(trusted);
/*
* The auth_ids come from the candidate key (the
* one that is being considered for addition to
* dest_keyring) and identify the key that was
* used to sign.
*
* The signer_ids are identifiers for the
* signing key specified for dest_keyring.
*
* The first auth_id is the preferred id, and
* the second is the fallback. If only one
* auth_id is present, it may match against
* either signer_id. If two auth_ids are
* present, the first auth_id must match one
* signer_id and the second auth_id must match
* the second signer_id.
*/
if (!sig->auth_ids[0] || !sig->auth_ids[1]) {
const struct asymmetric_key_id *auth_id;
auth_id = sig->auth_ids[0] ?: sig->auth_ids[1];
if (match_either_id(signer_ids, auth_id))
key = __key_get(trusted);
} else if (asymmetric_key_id_same(signer_ids->id[1],
sig->auth_ids[1]) &&
match_either_id(signer_ids,
sig->auth_ids[0])) {
key = __key_get(trusted);
}
} else {
return -EOPNOTSUPP;
}
}
if (check_dest && !key) {
/* See if the destination has a key that signed this one. */
key = find_asymmetric_key(dest_keyring, sig->auth_ids[0],
sig->auth_ids[1], false);
if (IS_ERR(key))
key = NULL;
}
if (!key)
return -ENOKEY;
ret = key_validate(key);
if (ret == 0)
ret = verify_signature(key, sig);
key_put(key);
return ret;
}
/**
* restrict_link_by_key_or_keyring - Restrict additions to a ring of public
* keys using the restrict_key information stored in the ring.
* @dest_keyring: Keyring being linked to.
* @type: The type of key being added.
* @payload: The payload of the new key.
* @trusted: A key or ring of keys that can be used to vouch for the new cert.
*
* Check the new certificate only against the key or keys passed in the data
* parameter. If one of those is the signing key and validates the new
* certificate, then mark the new certificate as being ok to link.
*
* Returns 0 if the new certificate was accepted, -ENOKEY if we
* couldn't find a matching parent certificate in the trusted list,
* -EKEYREJECTED if the signature check fails, and some other error if
* there is a matching certificate but the signature check cannot be
* performed.
*/
int restrict_link_by_key_or_keyring(struct key *dest_keyring,
const struct key_type *type,
const union key_payload *payload,
struct key *trusted)
{
return key_or_keyring_common(dest_keyring, type, payload, trusted,
false);
}
/**
* restrict_link_by_key_or_keyring_chain - Restrict additions to a ring of
* public keys using the restrict_key information stored in the ring.
* @dest_keyring: Keyring being linked to.
* @type: The type of key being added.
* @payload: The payload of the new key.
* @trusted: A key or ring of keys that can be used to vouch for the new cert.
*
* Check the new certificate only against the key or keys passed in the data
* parameter. If one of those is the signing key and validates the new
* certificate, then mark the new certificate as being ok to link.
*
* Returns 0 if the new certificate was accepted, -ENOKEY if we
* couldn't find a matching parent certificate in the trusted list,
* -EKEYREJECTED if the signature check fails, and some other error if
* there is a matching certificate but the signature check cannot be
* performed.
*/
int restrict_link_by_key_or_keyring_chain(struct key *dest_keyring,
const struct key_type *type,
const union key_payload *payload,
struct key *trusted)
{
return key_or_keyring_common(dest_keyring, type, payload, trusted,
true);
}
...@@ -42,6 +42,7 @@ struct x509_certificate { ...@@ -42,6 +42,7 @@ struct x509_certificate {
bool self_signed; /* T if self-signed (check unsupported_sig too) */ bool self_signed; /* T if self-signed (check unsupported_sig too) */
bool unsupported_key; /* T if key uses unsupported crypto */ bool unsupported_key; /* T if key uses unsupported crypto */
bool unsupported_sig; /* T if signature uses unsupported crypto */ bool unsupported_sig; /* T if signature uses unsupported crypto */
bool blacklisted;
}; };
/* /*
......
...@@ -84,6 +84,16 @@ int x509_get_sig_params(struct x509_certificate *cert) ...@@ -84,6 +84,16 @@ int x509_get_sig_params(struct x509_certificate *cert)
goto error_2; goto error_2;
might_sleep(); might_sleep();
ret = crypto_shash_finup(desc, cert->tbs, cert->tbs_size, sig->digest); ret = crypto_shash_finup(desc, cert->tbs, cert->tbs_size, sig->digest);
if (ret < 0)
goto error_2;
ret = is_hash_blacklisted(sig->digest, sig->digest_size, "tbs");
if (ret == -EKEYREJECTED) {
pr_err("Cert %*phN is blacklisted\n",
sig->digest_size, sig->digest);
cert->blacklisted = true;
ret = 0;
}
error_2: error_2:
kfree(desc); kfree(desc);
...@@ -186,6 +196,11 @@ static int x509_key_preparse(struct key_preparsed_payload *prep) ...@@ -186,6 +196,11 @@ static int x509_key_preparse(struct key_preparsed_payload *prep)
cert->sig->pkey_algo, cert->sig->hash_algo); cert->sig->pkey_algo, cert->sig->hash_algo);
} }
/* Don't permit addition of blacklisted keys */
ret = -EKEYREJECTED;
if (cert->blacklisted)
goto error_free_cert;
/* Propose a description */ /* Propose a description */
sulen = strlen(cert->subject); sulen = strlen(cert->subject);
if (cert->raw_skid) { if (cert->raw_skid) {
......
...@@ -6,6 +6,7 @@ menuconfig TCG_TPM ...@@ -6,6 +6,7 @@ menuconfig TCG_TPM
tristate "TPM Hardware Support" tristate "TPM Hardware Support"
depends on HAS_IOMEM depends on HAS_IOMEM
select SECURITYFS select SECURITYFS
select CRYPTO
select CRYPTO_HASH_INFO select CRYPTO_HASH_INFO
---help--- ---help---
If you have a TPM security chip in your system, which If you have a TPM security chip in your system, which
...@@ -135,7 +136,7 @@ config TCG_XEN ...@@ -135,7 +136,7 @@ config TCG_XEN
config TCG_CRB config TCG_CRB
tristate "TPM 2.0 CRB Interface" tristate "TPM 2.0 CRB Interface"
depends on X86 && ACPI depends on ACPI
---help--- ---help---
If you have a TPM security chip that is compliant with the If you have a TPM security chip that is compliant with the
TCG CRB 2.0 TPM specification say Yes and it will be accessible TCG CRB 2.0 TPM specification say Yes and it will be accessible
......
...@@ -3,7 +3,8 @@ ...@@ -3,7 +3,8 @@
# #
obj-$(CONFIG_TCG_TPM) += tpm.o obj-$(CONFIG_TCG_TPM) += tpm.o
tpm-y := tpm-interface.o tpm-dev.o tpm-sysfs.o tpm-chip.o tpm2-cmd.o \ tpm-y := tpm-interface.o tpm-dev.o tpm-sysfs.o tpm-chip.o tpm2-cmd.o \
tpm1_eventlog.o tpm2_eventlog.o tpm-dev-common.o tpmrm-dev.o tpm1_eventlog.o tpm2_eventlog.o \
tpm2-space.o
tpm-$(CONFIG_ACPI) += tpm_ppi.o tpm_acpi.o tpm-$(CONFIG_ACPI) += tpm_ppi.o tpm_acpi.o
tpm-$(CONFIG_OF) += tpm_of.o tpm-$(CONFIG_OF) += tpm_of.o
obj-$(CONFIG_TCG_TIS_CORE) += tpm_tis_core.o obj-$(CONFIG_TCG_TIS_CORE) += tpm_tis_core.o
......
...@@ -111,6 +111,13 @@ static const struct st33zp24_phy_ops i2c_phy_ops = { ...@@ -111,6 +111,13 @@ static const struct st33zp24_phy_ops i2c_phy_ops = {
.recv = st33zp24_i2c_recv, .recv = st33zp24_i2c_recv,
}; };
static const struct acpi_gpio_params lpcpd_gpios = { 1, 0, false };
static const struct acpi_gpio_mapping acpi_st33zp24_gpios[] = {
{ "lpcpd-gpios", &lpcpd_gpios, 1 },
{},
};
static int st33zp24_i2c_acpi_request_resources(struct i2c_client *client) static int st33zp24_i2c_acpi_request_resources(struct i2c_client *client)
{ {
struct tpm_chip *chip = i2c_get_clientdata(client); struct tpm_chip *chip = i2c_get_clientdata(client);
...@@ -118,10 +125,14 @@ static int st33zp24_i2c_acpi_request_resources(struct i2c_client *client) ...@@ -118,10 +125,14 @@ static int st33zp24_i2c_acpi_request_resources(struct i2c_client *client)
struct st33zp24_i2c_phy *phy = tpm_dev->phy_id; struct st33zp24_i2c_phy *phy = tpm_dev->phy_id;
struct gpio_desc *gpiod_lpcpd; struct gpio_desc *gpiod_lpcpd;
struct device *dev = &client->dev; struct device *dev = &client->dev;
int ret;
ret = acpi_dev_add_driver_gpios(ACPI_COMPANION(dev), acpi_st33zp24_gpios);
if (ret)
return ret;
/* Get LPCPD GPIO from ACPI */ /* Get LPCPD GPIO from ACPI */
gpiod_lpcpd = devm_gpiod_get_index(dev, "TPM IO LPCPD", 1, gpiod_lpcpd = devm_gpiod_get(dev, "lpcpd", GPIOD_OUT_HIGH);
GPIOD_OUT_HIGH);
if (IS_ERR(gpiod_lpcpd)) { if (IS_ERR(gpiod_lpcpd)) {
dev_err(&client->dev, dev_err(&client->dev,
"Failed to retrieve lpcpd-gpios from acpi.\n"); "Failed to retrieve lpcpd-gpios from acpi.\n");
...@@ -268,8 +279,14 @@ static int st33zp24_i2c_probe(struct i2c_client *client, ...@@ -268,8 +279,14 @@ static int st33zp24_i2c_probe(struct i2c_client *client,
static int st33zp24_i2c_remove(struct i2c_client *client) static int st33zp24_i2c_remove(struct i2c_client *client)
{ {
struct tpm_chip *chip = i2c_get_clientdata(client); struct tpm_chip *chip = i2c_get_clientdata(client);
int ret;
return st33zp24_remove(chip); ret = st33zp24_remove(chip);
if (ret)
return ret;
acpi_dev_remove_driver_gpios(ACPI_COMPANION(&client->dev));
return 0;
} }
static const struct i2c_device_id st33zp24_i2c_id[] = { static const struct i2c_device_id st33zp24_i2c_id[] = {
......
...@@ -230,6 +230,13 @@ static const struct st33zp24_phy_ops spi_phy_ops = { ...@@ -230,6 +230,13 @@ static const struct st33zp24_phy_ops spi_phy_ops = {
.recv = st33zp24_spi_recv, .recv = st33zp24_spi_recv,
}; };
static const struct acpi_gpio_params lpcpd_gpios = { 1, 0, false };
static const struct acpi_gpio_mapping acpi_st33zp24_gpios[] = {
{ "lpcpd-gpios", &lpcpd_gpios, 1 },
{},
};
static int st33zp24_spi_acpi_request_resources(struct spi_device *spi_dev) static int st33zp24_spi_acpi_request_resources(struct spi_device *spi_dev)
{ {
struct tpm_chip *chip = spi_get_drvdata(spi_dev); struct tpm_chip *chip = spi_get_drvdata(spi_dev);
...@@ -237,10 +244,14 @@ static int st33zp24_spi_acpi_request_resources(struct spi_device *spi_dev) ...@@ -237,10 +244,14 @@ static int st33zp24_spi_acpi_request_resources(struct spi_device *spi_dev)
struct st33zp24_spi_phy *phy = tpm_dev->phy_id; struct st33zp24_spi_phy *phy = tpm_dev->phy_id;
struct gpio_desc *gpiod_lpcpd; struct gpio_desc *gpiod_lpcpd;
struct device *dev = &spi_dev->dev; struct device *dev = &spi_dev->dev;
int ret;
ret = acpi_dev_add_driver_gpios(ACPI_COMPANION(dev), acpi_st33zp24_gpios);
if (ret)
return ret;
/* Get LPCPD GPIO from ACPI */ /* Get LPCPD GPIO from ACPI */
gpiod_lpcpd = devm_gpiod_get_index(dev, "TPM IO LPCPD", 1, gpiod_lpcpd = devm_gpiod_get(dev, "lpcpd", GPIOD_OUT_HIGH);
GPIOD_OUT_HIGH);
if (IS_ERR(gpiod_lpcpd)) { if (IS_ERR(gpiod_lpcpd)) {
dev_err(dev, "Failed to retrieve lpcpd-gpios from acpi.\n"); dev_err(dev, "Failed to retrieve lpcpd-gpios from acpi.\n");
phy->io_lpcpd = -1; phy->io_lpcpd = -1;
...@@ -385,8 +396,14 @@ static int st33zp24_spi_probe(struct spi_device *dev) ...@@ -385,8 +396,14 @@ static int st33zp24_spi_probe(struct spi_device *dev)
static int st33zp24_spi_remove(struct spi_device *dev) static int st33zp24_spi_remove(struct spi_device *dev)
{ {
struct tpm_chip *chip = spi_get_drvdata(dev); struct tpm_chip *chip = spi_get_drvdata(dev);
int ret;
return st33zp24_remove(chip); ret = st33zp24_remove(chip);
if (ret)
return ret;
acpi_dev_remove_driver_gpios(ACPI_COMPANION(&dev->dev));
return 0;
} }
static const struct spi_device_id st33zp24_spi_id[] = { static const struct spi_device_id st33zp24_spi_id[] = {
......
...@@ -117,9 +117,9 @@ static u8 st33zp24_status(struct tpm_chip *chip) ...@@ -117,9 +117,9 @@ static u8 st33zp24_status(struct tpm_chip *chip)
/* /*
* check_locality if the locality is active * check_locality if the locality is active
* @param: chip, the tpm chip description * @param: chip, the tpm chip description
* @return: the active locality or -EACCESS. * @return: true if LOCALITY0 is active, otherwise false
*/ */
static int check_locality(struct tpm_chip *chip) static bool check_locality(struct tpm_chip *chip)
{ {
struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev); struct st33zp24_dev *tpm_dev = dev_get_drvdata(&chip->dev);
u8 data; u8 data;
...@@ -129,9 +129,9 @@ static int check_locality(struct tpm_chip *chip) ...@@ -129,9 +129,9 @@ static int check_locality(struct tpm_chip *chip)
if (status && (data & if (status && (data &
(TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) == (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) ==
(TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID))
return tpm_dev->locality; return true;
return -EACCES; return false;
} /* check_locality() */ } /* check_locality() */
/* /*
...@@ -146,7 +146,7 @@ static int request_locality(struct tpm_chip *chip) ...@@ -146,7 +146,7 @@ static int request_locality(struct tpm_chip *chip)
long ret; long ret;
u8 data; u8 data;
if (check_locality(chip) == tpm_dev->locality) if (check_locality(chip))
return tpm_dev->locality; return tpm_dev->locality;
data = TPM_ACCESS_REQUEST_USE; data = TPM_ACCESS_REQUEST_USE;
...@@ -158,7 +158,7 @@ static int request_locality(struct tpm_chip *chip) ...@@ -158,7 +158,7 @@ static int request_locality(struct tpm_chip *chip)
/* Request locality is usually effective after the request */ /* Request locality is usually effective after the request */
do { do {
if (check_locality(chip) >= 0) if (check_locality(chip))
return tpm_dev->locality; return tpm_dev->locality;
msleep(TPM_TIMEOUT); msleep(TPM_TIMEOUT);
} while (time_before(jiffies, stop)); } while (time_before(jiffies, stop));
......
...@@ -33,6 +33,7 @@ DEFINE_IDR(dev_nums_idr); ...@@ -33,6 +33,7 @@ DEFINE_IDR(dev_nums_idr);
static DEFINE_MUTEX(idr_lock); static DEFINE_MUTEX(idr_lock);
struct class *tpm_class; struct class *tpm_class;
struct class *tpmrm_class;
dev_t tpm_devt; dev_t tpm_devt;
/** /**
...@@ -128,9 +129,19 @@ static void tpm_dev_release(struct device *dev) ...@@ -128,9 +129,19 @@ static void tpm_dev_release(struct device *dev)
mutex_unlock(&idr_lock); mutex_unlock(&idr_lock);
kfree(chip->log.bios_event_log); kfree(chip->log.bios_event_log);
kfree(chip->work_space.context_buf);
kfree(chip->work_space.session_buf);
kfree(chip); kfree(chip);
} }
static void tpm_devs_release(struct device *dev)
{
struct tpm_chip *chip = container_of(dev, struct tpm_chip, devs);
/* release the master device reference */
put_device(&chip->dev);
}
/** /**
* tpm_chip_alloc() - allocate a new struct tpm_chip instance * tpm_chip_alloc() - allocate a new struct tpm_chip instance
* @pdev: device to which the chip is associated * @pdev: device to which the chip is associated
...@@ -167,18 +178,36 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev, ...@@ -167,18 +178,36 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
chip->dev_num = rc; chip->dev_num = rc;
device_initialize(&chip->dev); device_initialize(&chip->dev);
device_initialize(&chip->devs);
chip->dev.class = tpm_class; chip->dev.class = tpm_class;
chip->dev.release = tpm_dev_release; chip->dev.release = tpm_dev_release;
chip->dev.parent = pdev; chip->dev.parent = pdev;
chip->dev.groups = chip->groups; chip->dev.groups = chip->groups;
chip->devs.parent = pdev;
chip->devs.class = tpmrm_class;
chip->devs.release = tpm_devs_release;
/* get extra reference on main device to hold on
* behalf of devs. This holds the chip structure
* while cdevs is in use. The corresponding put
* is in the tpm_devs_release (TPM2 only)
*/
if (chip->flags & TPM_CHIP_FLAG_TPM2)
get_device(&chip->dev);
if (chip->dev_num == 0) if (chip->dev_num == 0)
chip->dev.devt = MKDEV(MISC_MAJOR, TPM_MINOR); chip->dev.devt = MKDEV(MISC_MAJOR, TPM_MINOR);
else else
chip->dev.devt = MKDEV(MAJOR(tpm_devt), chip->dev_num); chip->dev.devt = MKDEV(MAJOR(tpm_devt), chip->dev_num);
chip->devs.devt =
MKDEV(MAJOR(tpm_devt), chip->dev_num + TPM_NUM_DEVICES);
rc = dev_set_name(&chip->dev, "tpm%d", chip->dev_num); rc = dev_set_name(&chip->dev, "tpm%d", chip->dev_num);
if (rc)
goto out;
rc = dev_set_name(&chip->devs, "tpmrm%d", chip->dev_num);
if (rc) if (rc)
goto out; goto out;
...@@ -186,12 +215,28 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev, ...@@ -186,12 +215,28 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
chip->flags |= TPM_CHIP_FLAG_VIRTUAL; chip->flags |= TPM_CHIP_FLAG_VIRTUAL;
cdev_init(&chip->cdev, &tpm_fops); cdev_init(&chip->cdev, &tpm_fops);
cdev_init(&chip->cdevs, &tpmrm_fops);
chip->cdev.owner = THIS_MODULE; chip->cdev.owner = THIS_MODULE;
chip->cdevs.owner = THIS_MODULE;
chip->cdev.kobj.parent = &chip->dev.kobj; chip->cdev.kobj.parent = &chip->dev.kobj;
chip->cdevs.kobj.parent = &chip->devs.kobj;
chip->work_space.context_buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
if (!chip->work_space.context_buf) {
rc = -ENOMEM;
goto out;
}
chip->work_space.session_buf = kzalloc(PAGE_SIZE, GFP_KERNEL);
if (!chip->work_space.session_buf) {
rc = -ENOMEM;
goto out;
}
chip->locality = -1;
return chip; return chip;
out: out:
put_device(&chip->devs);
put_device(&chip->dev); put_device(&chip->dev);
return ERR_PTR(rc); return ERR_PTR(rc);
} }
...@@ -236,7 +281,6 @@ static int tpm_add_char_device(struct tpm_chip *chip) ...@@ -236,7 +281,6 @@ static int tpm_add_char_device(struct tpm_chip *chip)
"unable to cdev_add() %s, major %d, minor %d, err=%d\n", "unable to cdev_add() %s, major %d, minor %d, err=%d\n",
dev_name(&chip->dev), MAJOR(chip->dev.devt), dev_name(&chip->dev), MAJOR(chip->dev.devt),
MINOR(chip->dev.devt), rc); MINOR(chip->dev.devt), rc);
return rc; return rc;
} }
...@@ -251,6 +295,27 @@ static int tpm_add_char_device(struct tpm_chip *chip) ...@@ -251,6 +295,27 @@ static int tpm_add_char_device(struct tpm_chip *chip)
return rc; return rc;
} }
if (chip->flags & TPM_CHIP_FLAG_TPM2)
rc = cdev_add(&chip->cdevs, chip->devs.devt, 1);
if (rc) {
dev_err(&chip->dev,
"unable to cdev_add() %s, major %d, minor %d, err=%d\n",
dev_name(&chip->devs), MAJOR(chip->devs.devt),
MINOR(chip->devs.devt), rc);
return rc;
}
if (chip->flags & TPM_CHIP_FLAG_TPM2)
rc = device_add(&chip->devs);
if (rc) {
dev_err(&chip->dev,
"unable to device_register() %s, major %d, minor %d, err=%d\n",
dev_name(&chip->devs), MAJOR(chip->devs.devt),
MINOR(chip->devs.devt), rc);
cdev_del(&chip->cdevs);
return rc;
}
/* Make the chip available. */ /* Make the chip available. */
mutex_lock(&idr_lock); mutex_lock(&idr_lock);
idr_replace(&dev_nums_idr, chip, chip->dev_num); idr_replace(&dev_nums_idr, chip, chip->dev_num);
...@@ -384,6 +449,10 @@ void tpm_chip_unregister(struct tpm_chip *chip) ...@@ -384,6 +449,10 @@ void tpm_chip_unregister(struct tpm_chip *chip)
{ {
tpm_del_legacy_sysfs(chip); tpm_del_legacy_sysfs(chip);
tpm_bios_log_teardown(chip); tpm_bios_log_teardown(chip);
if (chip->flags & TPM_CHIP_FLAG_TPM2) {
cdev_del(&chip->cdevs);
device_del(&chip->devs);
}
tpm_del_char_device(chip); tpm_del_char_device(chip);
} }
EXPORT_SYMBOL_GPL(tpm_chip_unregister); EXPORT_SYMBOL_GPL(tpm_chip_unregister);
/*
* Copyright (C) 2004 IBM Corporation
* Authors:
* Leendert van Doorn <leendert@watson.ibm.com>
* Dave Safford <safford@watson.ibm.com>
* Reiner Sailer <sailer@watson.ibm.com>
* Kylene Hall <kjhall@us.ibm.com>
*
* Copyright (C) 2013 Obsidian Research Corp
* Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
*
* Device file system interface to the TPM
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation, version 2 of the
* License.
*
*/
#include <linux/slab.h>
#include <linux/uaccess.h>
#include "tpm.h"
#include "tpm-dev.h"
static void user_reader_timeout(unsigned long ptr)
{
struct file_priv *priv = (struct file_priv *)ptr;
pr_warn("TPM user space timeout is deprecated (pid=%d)\n",
task_tgid_nr(current));
schedule_work(&priv->work);
}
static void timeout_work(struct work_struct *work)
{
struct file_priv *priv = container_of(work, struct file_priv, work);
mutex_lock(&priv->buffer_mutex);
atomic_set(&priv->data_pending, 0);
memset(priv->data_buffer, 0, sizeof(priv->data_buffer));
mutex_unlock(&priv->buffer_mutex);
}
void tpm_common_open(struct file *file, struct tpm_chip *chip,
struct file_priv *priv)
{
priv->chip = chip;
atomic_set(&priv->data_pending, 0);
mutex_init(&priv->buffer_mutex);
setup_timer(&priv->user_read_timer, user_reader_timeout,
(unsigned long)priv);
INIT_WORK(&priv->work, timeout_work);
file->private_data = priv;
}
ssize_t tpm_common_read(struct file *file, char __user *buf,
size_t size, loff_t *off)
{
struct file_priv *priv = file->private_data;
ssize_t ret_size;
ssize_t orig_ret_size;
int rc;
del_singleshot_timer_sync(&priv->user_read_timer);
flush_work(&priv->work);
ret_size = atomic_read(&priv->data_pending);
if (ret_size > 0) { /* relay data */
orig_ret_size = ret_size;
if (size < ret_size)
ret_size = size;
mutex_lock(&priv->buffer_mutex);
rc = copy_to_user(buf, priv->data_buffer, ret_size);
memset(priv->data_buffer, 0, orig_ret_size);
if (rc)
ret_size = -EFAULT;
mutex_unlock(&priv->buffer_mutex);
}
atomic_set(&priv->data_pending, 0);
return ret_size;
}
ssize_t tpm_common_write(struct file *file, const char __user *buf,
size_t size, loff_t *off, struct tpm_space *space)
{
struct file_priv *priv = file->private_data;
size_t in_size = size;
ssize_t out_size;
/* Cannot perform a write until the read has cleared either via
* tpm_read or a user_read_timer timeout. This also prevents split
* buffered writes from blocking here.
*/
if (atomic_read(&priv->data_pending) != 0)
return -EBUSY;
if (in_size > TPM_BUFSIZE)
return -E2BIG;
mutex_lock(&priv->buffer_mutex);
if (copy_from_user
(priv->data_buffer, (void __user *) buf, in_size)) {
mutex_unlock(&priv->buffer_mutex);
return -EFAULT;
}
/* atomic tpm command send and result receive. We only hold the ops
* lock during this period so that the tpm can be unregistered even if
* the char dev is held open.
*/
if (tpm_try_get_ops(priv->chip)) {
mutex_unlock(&priv->buffer_mutex);
return -EPIPE;
}
out_size = tpm_transmit(priv->chip, space, priv->data_buffer,
sizeof(priv->data_buffer), 0);
tpm_put_ops(priv->chip);
if (out_size < 0) {
mutex_unlock(&priv->buffer_mutex);
return out_size;
}
atomic_set(&priv->data_pending, out_size);
mutex_unlock(&priv->buffer_mutex);
/* Set a timeout by which the reader must come claim the result */
mod_timer(&priv->user_read_timer, jiffies + (120 * HZ));
return in_size;
}
/*
* Called on file close
*/
void tpm_common_release(struct file *file, struct file_priv *priv)
{
del_singleshot_timer_sync(&priv->user_read_timer);
flush_work(&priv->work);
file->private_data = NULL;
atomic_set(&priv->data_pending, 0);
}
...@@ -18,48 +18,15 @@ ...@@ -18,48 +18,15 @@
* *
*/ */
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/uaccess.h> #include "tpm-dev.h"
#include "tpm.h"
struct file_priv {
struct tpm_chip *chip;
/* Data passed to and from the tpm via the read/write calls */
atomic_t data_pending;
struct mutex buffer_mutex;
struct timer_list user_read_timer; /* user needs to claim result */
struct work_struct work;
u8 data_buffer[TPM_BUFSIZE];
};
static void user_reader_timeout(unsigned long ptr)
{
struct file_priv *priv = (struct file_priv *)ptr;
pr_warn("TPM user space timeout is deprecated (pid=%d)\n",
task_tgid_nr(current));
schedule_work(&priv->work);
}
static void timeout_work(struct work_struct *work)
{
struct file_priv *priv = container_of(work, struct file_priv, work);
mutex_lock(&priv->buffer_mutex);
atomic_set(&priv->data_pending, 0);
memset(priv->data_buffer, 0, sizeof(priv->data_buffer));
mutex_unlock(&priv->buffer_mutex);
}
static int tpm_open(struct inode *inode, struct file *file) static int tpm_open(struct inode *inode, struct file *file)
{ {
struct tpm_chip *chip = struct tpm_chip *chip;
container_of(inode->i_cdev, struct tpm_chip, cdev);
struct file_priv *priv; struct file_priv *priv;
chip = container_of(inode->i_cdev, struct tpm_chip, cdev);
/* It's assured that the chip will be opened just once, /* It's assured that the chip will be opened just once,
* by the check of is_open variable, which is protected * by the check of is_open variable, which is protected
* by driver_lock. */ * by driver_lock. */
...@@ -69,100 +36,22 @@ static int tpm_open(struct inode *inode, struct file *file) ...@@ -69,100 +36,22 @@ static int tpm_open(struct inode *inode, struct file *file)
} }
priv = kzalloc(sizeof(*priv), GFP_KERNEL); priv = kzalloc(sizeof(*priv), GFP_KERNEL);
if (priv == NULL) { if (priv == NULL)
clear_bit(0, &chip->is_open); goto out;
return -ENOMEM;
}
priv->chip = chip; tpm_common_open(file, chip, priv);
atomic_set(&priv->data_pending, 0);
mutex_init(&priv->buffer_mutex);
setup_timer(&priv->user_read_timer, user_reader_timeout,
(unsigned long)priv);
INIT_WORK(&priv->work, timeout_work);
file->private_data = priv;
return 0; return 0;
}
static ssize_t tpm_read(struct file *file, char __user *buf,
size_t size, loff_t *off)
{
struct file_priv *priv = file->private_data;
ssize_t ret_size;
int rc;
del_singleshot_timer_sync(&priv->user_read_timer); out:
flush_work(&priv->work); clear_bit(0, &chip->is_open);
ret_size = atomic_read(&priv->data_pending); return -ENOMEM;
if (ret_size > 0) { /* relay data */
ssize_t orig_ret_size = ret_size;
if (size < ret_size)
ret_size = size;
mutex_lock(&priv->buffer_mutex);
rc = copy_to_user(buf, priv->data_buffer, ret_size);
memset(priv->data_buffer, 0, orig_ret_size);
if (rc)
ret_size = -EFAULT;
mutex_unlock(&priv->buffer_mutex);
}
atomic_set(&priv->data_pending, 0);
return ret_size;
} }
static ssize_t tpm_write(struct file *file, const char __user *buf, static ssize_t tpm_write(struct file *file, const char __user *buf,
size_t size, loff_t *off) size_t size, loff_t *off)
{ {
struct file_priv *priv = file->private_data; return tpm_common_write(file, buf, size, off, NULL);
size_t in_size = size;
ssize_t out_size;
/* cannot perform a write until the read has cleared
either via tpm_read or a user_read_timer timeout.
This also prevents splitted buffered writes from blocking here.
*/
if (atomic_read(&priv->data_pending) != 0)
return -EBUSY;
if (in_size > TPM_BUFSIZE)
return -E2BIG;
mutex_lock(&priv->buffer_mutex);
if (copy_from_user
(priv->data_buffer, (void __user *) buf, in_size)) {
mutex_unlock(&priv->buffer_mutex);
return -EFAULT;
}
/* atomic tpm command send and result receive. We only hold the ops
* lock during this period so that the tpm can be unregistered even if
* the char dev is held open.
*/
if (tpm_try_get_ops(priv->chip)) {
mutex_unlock(&priv->buffer_mutex);
return -EPIPE;
}
out_size = tpm_transmit(priv->chip, priv->data_buffer,
sizeof(priv->data_buffer), 0);
tpm_put_ops(priv->chip);
if (out_size < 0) {
mutex_unlock(&priv->buffer_mutex);
return out_size;
}
atomic_set(&priv->data_pending, out_size);
mutex_unlock(&priv->buffer_mutex);
/* Set a timeout by which the reader must come claim the result */
mod_timer(&priv->user_read_timer, jiffies + (120 * HZ));
return in_size;
} }
/* /*
...@@ -172,12 +61,10 @@ static int tpm_release(struct inode *inode, struct file *file) ...@@ -172,12 +61,10 @@ static int tpm_release(struct inode *inode, struct file *file)
{ {
struct file_priv *priv = file->private_data; struct file_priv *priv = file->private_data;
del_singleshot_timer_sync(&priv->user_read_timer); tpm_common_release(file, priv);
flush_work(&priv->work);
file->private_data = NULL;
atomic_set(&priv->data_pending, 0);
clear_bit(0, &priv->chip->is_open); clear_bit(0, &priv->chip->is_open);
kfree(priv); kfree(priv);
return 0; return 0;
} }
...@@ -185,9 +72,7 @@ const struct file_operations tpm_fops = { ...@@ -185,9 +72,7 @@ const struct file_operations tpm_fops = {
.owner = THIS_MODULE, .owner = THIS_MODULE,
.llseek = no_llseek, .llseek = no_llseek,
.open = tpm_open, .open = tpm_open,
.read = tpm_read, .read = tpm_common_read,
.write = tpm_write, .write = tpm_write,
.release = tpm_release, .release = tpm_release,
}; };
#ifndef _TPM_DEV_H
#define _TPM_DEV_H
#include "tpm.h"
struct file_priv {
struct tpm_chip *chip;
/* Data passed to and from the tpm via the read/write calls */
atomic_t data_pending;
struct mutex buffer_mutex;
struct timer_list user_read_timer; /* user needs to claim result */
struct work_struct work;
u8 data_buffer[TPM_BUFSIZE];
};
void tpm_common_open(struct file *file, struct tpm_chip *chip,
struct file_priv *priv);
ssize_t tpm_common_read(struct file *file, char __user *buf,
size_t size, loff_t *off);
ssize_t tpm_common_write(struct file *file, const char __user *buf,
size_t size, loff_t *off, struct tpm_space *space);
void tpm_common_release(struct file *file, struct file_priv *priv);
#endif
...@@ -328,6 +328,47 @@ unsigned long tpm_calc_ordinal_duration(struct tpm_chip *chip, ...@@ -328,6 +328,47 @@ unsigned long tpm_calc_ordinal_duration(struct tpm_chip *chip,
} }
EXPORT_SYMBOL_GPL(tpm_calc_ordinal_duration); EXPORT_SYMBOL_GPL(tpm_calc_ordinal_duration);
static bool tpm_validate_command(struct tpm_chip *chip,
struct tpm_space *space,
const u8 *cmd,
size_t len)
{
const struct tpm_input_header *header = (const void *)cmd;
int i;
u32 cc;
u32 attrs;
unsigned int nr_handles;
if (len < TPM_HEADER_SIZE)
return false;
if (!space)
return true;
if (chip->flags & TPM_CHIP_FLAG_TPM2 && chip->nr_commands) {
cc = be32_to_cpu(header->ordinal);
i = tpm2_find_cc(chip, cc);
if (i < 0) {
dev_dbg(&chip->dev, "0x%04X is an invalid command\n",
cc);
return false;
}
attrs = chip->cc_attrs_tbl[i];
nr_handles =
4 * ((attrs >> TPM2_CC_ATTR_CHANDLES) & GENMASK(2, 0));
if (len < TPM_HEADER_SIZE + 4 * nr_handles)
goto err_len;
}
return true;
err_len:
dev_dbg(&chip->dev,
"%s: insufficient command length %zu", __func__, len);
return false;
}
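For reference, the handle-area check above is driven by the TPMA_CC attribute word cached per command code: bits 27:25 (TPM2_CC_ATTR_CHANDLES with a 3-bit mask) give the number of 32-bit handles that precede the parameter area. A standalone sketch of that extraction, using a made-up attribute value:

/* Standalone illustration: extract the cHandles field (bits 27:25 of a
 * TPMA_CC word) the same way the validation above does. The attribute
 * value is made up but plausible for TPM2_CC_FLUSH_CONTEXT, which takes
 * one command handle.
 */
#include <stdint.h>
#include <stdio.h>

#define EXAMPLE_CC_ATTR_CHANDLES	25	/* mirrors TPM2_CC_ATTR_CHANDLES */

int main(void)
{
	uint32_t attrs = 0x02000165;	/* hypothetical TPMA_CC word */
	unsigned int chandles = (attrs >> EXAMPLE_CC_ATTR_CHANDLES) & 0x7;

	/* Each handle is a 32-bit field right after the 10-byte header. */
	printf("%u handle(s), %u handle bytes before the parameters\n",
	       chandles, 4 * chandles);
	return 0;
}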
/** /**
* tpm_transmit - Internal kernel interface to transmit TPM commands. * tpm_transmit - Internal kernel interface to transmit TPM commands.
* *
...@@ -340,14 +381,17 @@ EXPORT_SYMBOL_GPL(tpm_calc_ordinal_duration); ...@@ -340,14 +381,17 @@ EXPORT_SYMBOL_GPL(tpm_calc_ordinal_duration);
* 0 when the operation is successful. * 0 when the operation is successful.
* A negative number for system errors (errno). * A negative number for system errors (errno).
*/ */
ssize_t tpm_transmit(struct tpm_chip *chip, const u8 *buf, size_t bufsiz, ssize_t tpm_transmit(struct tpm_chip *chip, struct tpm_space *space,
unsigned int flags) u8 *buf, size_t bufsiz, unsigned int flags)
{ {
ssize_t rc; struct tpm_output_header *header = (void *)buf;
int rc;
ssize_t len = 0;
u32 count, ordinal; u32 count, ordinal;
unsigned long stop; unsigned long stop;
bool need_locality;
if (bufsiz < TPM_HEADER_SIZE) if (!tpm_validate_command(chip, space, buf, bufsiz))
return -EINVAL; return -EINVAL;
if (bufsiz > TPM_BUFSIZE) if (bufsiz > TPM_BUFSIZE)
...@@ -369,10 +413,24 @@ ssize_t tpm_transmit(struct tpm_chip *chip, const u8 *buf, size_t bufsiz, ...@@ -369,10 +413,24 @@ ssize_t tpm_transmit(struct tpm_chip *chip, const u8 *buf, size_t bufsiz,
if (chip->dev.parent) if (chip->dev.parent)
pm_runtime_get_sync(chip->dev.parent); pm_runtime_get_sync(chip->dev.parent);
/* Store the decision as chip->locality will be changed. */
need_locality = chip->locality == -1;
if (need_locality && chip->ops->request_locality) {
rc = chip->ops->request_locality(chip, 0);
if (rc < 0)
goto out_no_locality;
chip->locality = rc;
}
rc = tpm2_prepare_space(chip, space, ordinal, buf);
if (rc)
goto out;
rc = chip->ops->send(chip, (u8 *) buf, count); rc = chip->ops->send(chip, (u8 *) buf, count);
if (rc < 0) { if (rc < 0) {
dev_err(&chip->dev, dev_err(&chip->dev,
"tpm_transmit: tpm_send: error %zd\n", rc); "tpm_transmit: tpm_send: error %d\n", rc);
goto out; goto out;
} }
...@@ -405,17 +463,36 @@ ssize_t tpm_transmit(struct tpm_chip *chip, const u8 *buf, size_t bufsiz, ...@@ -405,17 +463,36 @@ ssize_t tpm_transmit(struct tpm_chip *chip, const u8 *buf, size_t bufsiz,
goto out; goto out;
out_recv: out_recv:
rc = chip->ops->recv(chip, (u8 *) buf, bufsiz); len = chip->ops->recv(chip, (u8 *) buf, bufsiz);
if (rc < 0) if (len < 0) {
rc = len;
dev_err(&chip->dev, dev_err(&chip->dev,
"tpm_transmit: tpm_recv: error %zd\n", rc); "tpm_transmit: tpm_recv: error %d\n", rc);
goto out;
} else if (len < TPM_HEADER_SIZE) {
rc = -EFAULT;
goto out;
}
if (len != be32_to_cpu(header->length)) {
rc = -EFAULT;
goto out;
}
rc = tpm2_commit_space(chip, space, ordinal, buf, &len);
out: out:
if (need_locality && chip->ops->relinquish_locality) {
chip->ops->relinquish_locality(chip, chip->locality);
chip->locality = -1;
}
out_no_locality:
if (chip->dev.parent) if (chip->dev.parent)
pm_runtime_put_sync(chip->dev.parent); pm_runtime_put_sync(chip->dev.parent);
if (!(flags & TPM_TRANSMIT_UNLOCKED)) if (!(flags & TPM_TRANSMIT_UNLOCKED))
mutex_unlock(&chip->tpm_mutex); mutex_unlock(&chip->tpm_mutex);
return rc; return rc ? rc : len;
} }
/** /**
...@@ -434,23 +511,18 @@ ssize_t tpm_transmit(struct tpm_chip *chip, const u8 *buf, size_t bufsiz, ...@@ -434,23 +511,18 @@ ssize_t tpm_transmit(struct tpm_chip *chip, const u8 *buf, size_t bufsiz,
* A negative number for system errors (errno). * A negative number for system errors (errno).
* A positive number for a TPM error. * A positive number for a TPM error.
*/ */
ssize_t tpm_transmit_cmd(struct tpm_chip *chip, const void *buf, ssize_t tpm_transmit_cmd(struct tpm_chip *chip, struct tpm_space *space,
size_t bufsiz, size_t min_rsp_body_length, const void *buf, size_t bufsiz,
unsigned int flags, const char *desc) size_t min_rsp_body_length, unsigned int flags,
const char *desc)
{ {
const struct tpm_output_header *header; const struct tpm_output_header *header = buf;
int err; int err;
ssize_t len; ssize_t len;
len = tpm_transmit(chip, (const u8 *)buf, bufsiz, flags); len = tpm_transmit(chip, space, (u8 *)buf, bufsiz, flags);
if (len < 0) if (len < 0)
return len; return len;
else if (len < TPM_HEADER_SIZE)
return -EFAULT;
header = buf;
if (len != be32_to_cpu(header->length))
return -EFAULT;
err = be32_to_cpu(header->return_code); err = be32_to_cpu(header->return_code);
if (err != 0 && desc) if (err != 0 && desc)
...@@ -501,7 +573,7 @@ ssize_t tpm_getcap(struct tpm_chip *chip, u32 subcap_id, cap_t *cap, ...@@ -501,7 +573,7 @@ ssize_t tpm_getcap(struct tpm_chip *chip, u32 subcap_id, cap_t *cap,
tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(4); tpm_cmd.params.getcap_in.subcap_size = cpu_to_be32(4);
tpm_cmd.params.getcap_in.subcap = cpu_to_be32(subcap_id); tpm_cmd.params.getcap_in.subcap = cpu_to_be32(subcap_id);
} }
rc = tpm_transmit_cmd(chip, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE, rc = tpm_transmit_cmd(chip, NULL, &tpm_cmd, TPM_INTERNAL_RESULT_SIZE,
min_cap_length, 0, desc); min_cap_length, 0, desc);
if (!rc) if (!rc)
*cap = tpm_cmd.params.getcap_out.cap; *cap = tpm_cmd.params.getcap_out.cap;
...@@ -525,7 +597,8 @@ static int tpm_startup(struct tpm_chip *chip, __be16 startup_type) ...@@ -525,7 +597,8 @@ static int tpm_startup(struct tpm_chip *chip, __be16 startup_type)
start_cmd.header.in = tpm_startup_header; start_cmd.header.in = tpm_startup_header;
start_cmd.params.startup_in.startup_type = startup_type; start_cmd.params.startup_in.startup_type = startup_type;
return tpm_transmit_cmd(chip, &start_cmd, TPM_INTERNAL_RESULT_SIZE, 0, return tpm_transmit_cmd(chip, NULL, &start_cmd,
TPM_INTERNAL_RESULT_SIZE, 0,
0, "attempting to start the TPM"); 0, "attempting to start the TPM");
} }
...@@ -682,8 +755,8 @@ static int tpm_continue_selftest(struct tpm_chip *chip) ...@@ -682,8 +755,8 @@ static int tpm_continue_selftest(struct tpm_chip *chip)
struct tpm_cmd_t cmd; struct tpm_cmd_t cmd;
cmd.header.in = continue_selftest_header; cmd.header.in = continue_selftest_header;
rc = tpm_transmit_cmd(chip, &cmd, CONTINUE_SELFTEST_RESULT_SIZE, 0, 0, rc = tpm_transmit_cmd(chip, NULL, &cmd, CONTINUE_SELFTEST_RESULT_SIZE,
"continue selftest"); 0, 0, "continue selftest");
return rc; return rc;
} }
...@@ -703,7 +776,7 @@ int tpm_pcr_read_dev(struct tpm_chip *chip, int pcr_idx, u8 *res_buf) ...@@ -703,7 +776,7 @@ int tpm_pcr_read_dev(struct tpm_chip *chip, int pcr_idx, u8 *res_buf)
cmd.header.in = pcrread_header; cmd.header.in = pcrread_header;
cmd.params.pcrread_in.pcr_idx = cpu_to_be32(pcr_idx); cmd.params.pcrread_in.pcr_idx = cpu_to_be32(pcr_idx);
rc = tpm_transmit_cmd(chip, &cmd, READ_PCR_RESULT_SIZE, rc = tpm_transmit_cmd(chip, NULL, &cmd, READ_PCR_RESULT_SIZE,
READ_PCR_RESULT_BODY_SIZE, 0, READ_PCR_RESULT_BODY_SIZE, 0,
"attempting to read a pcr value"); "attempting to read a pcr value");
...@@ -815,7 +888,7 @@ int tpm_pcr_extend(u32 chip_num, int pcr_idx, const u8 *hash) ...@@ -815,7 +888,7 @@ int tpm_pcr_extend(u32 chip_num, int pcr_idx, const u8 *hash)
cmd.header.in = pcrextend_header; cmd.header.in = pcrextend_header;
cmd.params.pcrextend_in.pcr_idx = cpu_to_be32(pcr_idx); cmd.params.pcrextend_in.pcr_idx = cpu_to_be32(pcr_idx);
memcpy(cmd.params.pcrextend_in.hash, hash, TPM_DIGEST_SIZE); memcpy(cmd.params.pcrextend_in.hash, hash, TPM_DIGEST_SIZE);
rc = tpm_transmit_cmd(chip, &cmd, EXTEND_PCR_RESULT_SIZE, rc = tpm_transmit_cmd(chip, NULL, &cmd, EXTEND_PCR_RESULT_SIZE,
EXTEND_PCR_RESULT_BODY_SIZE, 0, EXTEND_PCR_RESULT_BODY_SIZE, 0,
"attempting extend a PCR value"); "attempting extend a PCR value");
...@@ -920,8 +993,8 @@ int tpm_send(u32 chip_num, void *cmd, size_t buflen) ...@@ -920,8 +993,8 @@ int tpm_send(u32 chip_num, void *cmd, size_t buflen)
if (chip == NULL) if (chip == NULL)
return -ENODEV; return -ENODEV;
rc = tpm_transmit_cmd(chip, cmd, buflen, 0, 0, "attempting tpm_cmd"); rc = tpm_transmit_cmd(chip, NULL, cmd, buflen, 0, 0,
"attempting tpm_cmd");
tpm_put_ops(chip); tpm_put_ops(chip);
return rc; return rc;
} }
...@@ -1022,16 +1095,16 @@ int tpm_pm_suspend(struct device *dev) ...@@ -1022,16 +1095,16 @@ int tpm_pm_suspend(struct device *dev)
cmd.params.pcrextend_in.pcr_idx = cpu_to_be32(tpm_suspend_pcr); cmd.params.pcrextend_in.pcr_idx = cpu_to_be32(tpm_suspend_pcr);
memcpy(cmd.params.pcrextend_in.hash, dummy_hash, memcpy(cmd.params.pcrextend_in.hash, dummy_hash,
TPM_DIGEST_SIZE); TPM_DIGEST_SIZE);
rc = tpm_transmit_cmd(chip, &cmd, EXTEND_PCR_RESULT_SIZE, rc = tpm_transmit_cmd(chip, NULL, &cmd, EXTEND_PCR_RESULT_SIZE,
EXTEND_PCR_RESULT_BODY_SIZE, 0, EXTEND_PCR_RESULT_BODY_SIZE, 0,
"extending dummy pcr before suspend"); "extending dummy pcr before suspend");
} }
/* now do the actual savestate */ /* now do the actual savestate */
for (try = 0; try < TPM_RETRY; try++) { for (try = 0; try < TPM_RETRY; try++) {
cmd.header.in = savestate_header; cmd.header.in = savestate_header;
rc = tpm_transmit_cmd(chip, &cmd, SAVESTATE_RESULT_SIZE, 0, rc = tpm_transmit_cmd(chip, NULL, &cmd, SAVESTATE_RESULT_SIZE,
0, NULL); 0, 0, NULL);
/* /*
* If the TPM indicates that it is too busy to respond to * If the TPM indicates that it is too busy to respond to
...@@ -1114,7 +1187,7 @@ int tpm_get_random(u32 chip_num, u8 *out, size_t max) ...@@ -1114,7 +1187,7 @@ int tpm_get_random(u32 chip_num, u8 *out, size_t max)
tpm_cmd.header.in = tpm_getrandom_header; tpm_cmd.header.in = tpm_getrandom_header;
tpm_cmd.params.getrandom_in.num_bytes = cpu_to_be32(num_bytes); tpm_cmd.params.getrandom_in.num_bytes = cpu_to_be32(num_bytes);
err = tpm_transmit_cmd(chip, &tpm_cmd, err = tpm_transmit_cmd(chip, NULL, &tpm_cmd,
TPM_GETRANDOM_RESULT_SIZE + num_bytes, TPM_GETRANDOM_RESULT_SIZE + num_bytes,
offsetof(struct tpm_getrandom_out, offsetof(struct tpm_getrandom_out,
rng_data), rng_data),
...@@ -1205,9 +1278,17 @@ static int __init tpm_init(void) ...@@ -1205,9 +1278,17 @@ static int __init tpm_init(void)
return PTR_ERR(tpm_class); return PTR_ERR(tpm_class);
} }
rc = alloc_chrdev_region(&tpm_devt, 0, TPM_NUM_DEVICES, "tpm"); tpmrm_class = class_create(THIS_MODULE, "tpmrm");
if (IS_ERR(tpmrm_class)) {
pr_err("couldn't create tpmrm class\n");
class_destroy(tpm_class);
return PTR_ERR(tpmrm_class);
}
rc = alloc_chrdev_region(&tpm_devt, 0, 2*TPM_NUM_DEVICES, "tpm");
if (rc < 0) { if (rc < 0) {
pr_err("tpm: failed to allocate char dev region\n"); pr_err("tpm: failed to allocate char dev region\n");
class_destroy(tpmrm_class);
class_destroy(tpm_class); class_destroy(tpm_class);
return rc; return rc;
} }
...@@ -1219,7 +1300,8 @@ static void __exit tpm_exit(void) ...@@ -1219,7 +1300,8 @@ static void __exit tpm_exit(void)
{ {
idr_destroy(&dev_nums_idr); idr_destroy(&dev_nums_idr);
class_destroy(tpm_class); class_destroy(tpm_class);
unregister_chrdev_region(tpm_devt, TPM_NUM_DEVICES); class_destroy(tpmrm_class);
unregister_chrdev_region(tpm_devt, 2*TPM_NUM_DEVICES);
} }
subsys_initcall(tpm_init); subsys_initcall(tpm_init);
......
...@@ -40,7 +40,7 @@ static ssize_t pubek_show(struct device *dev, struct device_attribute *attr, ...@@ -40,7 +40,7 @@ static ssize_t pubek_show(struct device *dev, struct device_attribute *attr,
struct tpm_chip *chip = to_tpm_chip(dev); struct tpm_chip *chip = to_tpm_chip(dev);
tpm_cmd.header.in = tpm_readpubek_header; tpm_cmd.header.in = tpm_readpubek_header;
err = tpm_transmit_cmd(chip, &tpm_cmd, READ_PUBEK_RESULT_SIZE, err = tpm_transmit_cmd(chip, NULL, &tpm_cmd, READ_PUBEK_RESULT_SIZE,
READ_PUBEK_RESULT_MIN_BODY_SIZE, 0, READ_PUBEK_RESULT_MIN_BODY_SIZE, 0,
"attempting to read the PUBEK"); "attempting to read the PUBEK");
if (err) if (err)
......
...@@ -89,10 +89,13 @@ enum tpm2_structures { ...@@ -89,10 +89,13 @@ enum tpm2_structures {
}; };
enum tpm2_return_codes { enum tpm2_return_codes {
TPM2_RC_SUCCESS = 0x0000,
TPM2_RC_HASH = 0x0083, /* RC_FMT1 */ TPM2_RC_HASH = 0x0083, /* RC_FMT1 */
TPM2_RC_HANDLE = 0x008B,
TPM2_RC_INITIALIZE = 0x0100, /* RC_VER1 */ TPM2_RC_INITIALIZE = 0x0100, /* RC_VER1 */
TPM2_RC_DISABLED = 0x0120, TPM2_RC_DISABLED = 0x0120,
TPM2_RC_TESTING = 0x090A, /* RC_WARN */ TPM2_RC_TESTING = 0x090A, /* RC_WARN */
TPM2_RC_REFERENCE_H0 = 0x0910,
}; };
enum tpm2_algorithms { enum tpm2_algorithms {
...@@ -114,6 +117,8 @@ enum tpm2_command_codes { ...@@ -114,6 +117,8 @@ enum tpm2_command_codes {
TPM2_CC_CREATE = 0x0153, TPM2_CC_CREATE = 0x0153,
TPM2_CC_LOAD = 0x0157, TPM2_CC_LOAD = 0x0157,
TPM2_CC_UNSEAL = 0x015E, TPM2_CC_UNSEAL = 0x015E,
TPM2_CC_CONTEXT_LOAD = 0x0161,
TPM2_CC_CONTEXT_SAVE = 0x0162,
TPM2_CC_FLUSH_CONTEXT = 0x0165, TPM2_CC_FLUSH_CONTEXT = 0x0165,
TPM2_CC_GET_CAPABILITY = 0x017A, TPM2_CC_GET_CAPABILITY = 0x017A,
TPM2_CC_GET_RANDOM = 0x017B, TPM2_CC_GET_RANDOM = 0x017B,
...@@ -127,21 +132,39 @@ enum tpm2_permanent_handles { ...@@ -127,21 +132,39 @@ enum tpm2_permanent_handles {
}; };
enum tpm2_capabilities { enum tpm2_capabilities {
TPM2_CAP_HANDLES = 1,
TPM2_CAP_COMMANDS = 2,
TPM2_CAP_PCRS = 5, TPM2_CAP_PCRS = 5,
TPM2_CAP_TPM_PROPERTIES = 6, TPM2_CAP_TPM_PROPERTIES = 6,
}; };
enum tpm2_properties {
TPM_PT_TOTAL_COMMANDS = 0x0129,
};
enum tpm2_startup_types { enum tpm2_startup_types {
TPM2_SU_CLEAR = 0x0000, TPM2_SU_CLEAR = 0x0000,
TPM2_SU_STATE = 0x0001, TPM2_SU_STATE = 0x0001,
}; };
enum tpm2_cc_attrs {
TPM2_CC_ATTR_CHANDLES = 25,
TPM2_CC_ATTR_RHANDLE = 28,
};
#define TPM_VID_INTEL 0x8086 #define TPM_VID_INTEL 0x8086
#define TPM_VID_WINBOND 0x1050 #define TPM_VID_WINBOND 0x1050
#define TPM_VID_STM 0x104A #define TPM_VID_STM 0x104A
#define TPM_PPI_VERSION_LEN 3 #define TPM_PPI_VERSION_LEN 3
struct tpm_space {
u32 context_tbl[3];
u8 *context_buf;
u32 session_tbl[3];
u8 *session_buf;
};
enum tpm_chip_flags { enum tpm_chip_flags {
TPM_CHIP_FLAG_TPM2 = BIT(1), TPM_CHIP_FLAG_TPM2 = BIT(1),
TPM_CHIP_FLAG_IRQ = BIT(2), TPM_CHIP_FLAG_IRQ = BIT(2),
...@@ -161,7 +184,9 @@ struct tpm_chip_seqops { ...@@ -161,7 +184,9 @@ struct tpm_chip_seqops {
struct tpm_chip { struct tpm_chip {
struct device dev; struct device dev;
struct device devs;
struct cdev cdev; struct cdev cdev;
struct cdev cdevs;
/* A driver callback under ops cannot be run unless ops_sem is held /* A driver callback under ops cannot be run unless ops_sem is held
* (sometimes implicitly, eg for the sysfs code). ops becomes null * (sometimes implicitly, eg for the sysfs code). ops becomes null
...@@ -199,6 +224,13 @@ struct tpm_chip { ...@@ -199,6 +224,13 @@ struct tpm_chip {
acpi_handle acpi_dev_handle; acpi_handle acpi_dev_handle;
char ppi_version[TPM_PPI_VERSION_LEN + 1]; char ppi_version[TPM_PPI_VERSION_LEN + 1];
#endif /* CONFIG_ACPI */ #endif /* CONFIG_ACPI */
struct tpm_space work_space;
u32 nr_commands;
u32 *cc_attrs_tbl;
/* active locality */
int locality;
}; };
#define to_tpm_chip(d) container_of(d, struct tpm_chip, dev) #define to_tpm_chip(d) container_of(d, struct tpm_chip, dev)
...@@ -485,18 +517,21 @@ static inline void tpm_buf_append_u32(struct tpm_buf *buf, const u32 value) ...@@ -485,18 +517,21 @@ static inline void tpm_buf_append_u32(struct tpm_buf *buf, const u32 value)
} }
extern struct class *tpm_class; extern struct class *tpm_class;
extern struct class *tpmrm_class;
extern dev_t tpm_devt; extern dev_t tpm_devt;
extern const struct file_operations tpm_fops; extern const struct file_operations tpm_fops;
extern const struct file_operations tpmrm_fops;
extern struct idr dev_nums_idr; extern struct idr dev_nums_idr;
enum tpm_transmit_flags { enum tpm_transmit_flags {
TPM_TRANSMIT_UNLOCKED = BIT(0), TPM_TRANSMIT_UNLOCKED = BIT(0),
}; };
ssize_t tpm_transmit(struct tpm_chip *chip, const u8 *buf, size_t bufsiz, ssize_t tpm_transmit(struct tpm_chip *chip, struct tpm_space *space,
unsigned int flags); u8 *buf, size_t bufsiz, unsigned int flags);
ssize_t tpm_transmit_cmd(struct tpm_chip *chip, const void *buf, size_t bufsiz, ssize_t tpm_transmit_cmd(struct tpm_chip *chip, struct tpm_space *space,
size_t min_rsp_body_len, unsigned int flags, const void *buf, size_t bufsiz,
size_t min_rsp_body_length, unsigned int flags,
const char *desc); const char *desc);
ssize_t tpm_getcap(struct tpm_chip *chip, u32 subcap_id, cap_t *cap, ssize_t tpm_getcap(struct tpm_chip *chip, u32 subcap_id, cap_t *cap,
const char *desc, size_t min_cap_length); const char *desc, size_t min_cap_length);
...@@ -541,6 +576,8 @@ int tpm2_pcr_read(struct tpm_chip *chip, int pcr_idx, u8 *res_buf); ...@@ -541,6 +576,8 @@ int tpm2_pcr_read(struct tpm_chip *chip, int pcr_idx, u8 *res_buf);
int tpm2_pcr_extend(struct tpm_chip *chip, int pcr_idx, u32 count, int tpm2_pcr_extend(struct tpm_chip *chip, int pcr_idx, u32 count,
struct tpm2_digest *digests); struct tpm2_digest *digests);
int tpm2_get_random(struct tpm_chip *chip, u8 *out, size_t max); int tpm2_get_random(struct tpm_chip *chip, u8 *out, size_t max);
void tpm2_flush_context_cmd(struct tpm_chip *chip, u32 handle,
unsigned int flags);
int tpm2_seal_trusted(struct tpm_chip *chip, int tpm2_seal_trusted(struct tpm_chip *chip,
struct trusted_key_payload *payload, struct trusted_key_payload *payload,
struct trusted_key_options *options); struct trusted_key_options *options);
...@@ -554,4 +591,11 @@ int tpm2_auto_startup(struct tpm_chip *chip); ...@@ -554,4 +591,11 @@ int tpm2_auto_startup(struct tpm_chip *chip);
void tpm2_shutdown(struct tpm_chip *chip, u16 shutdown_type); void tpm2_shutdown(struct tpm_chip *chip, u16 shutdown_type);
unsigned long tpm2_calc_ordinal_duration(struct tpm_chip *chip, u32 ordinal); unsigned long tpm2_calc_ordinal_duration(struct tpm_chip *chip, u32 ordinal);
int tpm2_probe(struct tpm_chip *chip); int tpm2_probe(struct tpm_chip *chip);
int tpm2_find_cc(struct tpm_chip *chip, u32 cc);
int tpm2_init_space(struct tpm_space *space);
void tpm2_del_space(struct tpm_chip *chip, struct tpm_space *space);
int tpm2_prepare_space(struct tpm_chip *chip, struct tpm_space *space, u32 cc,
u8 *cmd);
int tpm2_commit_space(struct tpm_chip *chip, struct tpm_space *space,
u32 cc, u8 *buf, size_t *bufsiz);
#endif #endif
...@@ -266,7 +266,7 @@ int tpm2_pcr_read(struct tpm_chip *chip, int pcr_idx, u8 *res_buf) ...@@ -266,7 +266,7 @@ int tpm2_pcr_read(struct tpm_chip *chip, int pcr_idx, u8 *res_buf)
sizeof(cmd.params.pcrread_in.pcr_select)); sizeof(cmd.params.pcrread_in.pcr_select));
cmd.params.pcrread_in.pcr_select[pcr_idx >> 3] = 1 << (pcr_idx & 0x7); cmd.params.pcrread_in.pcr_select[pcr_idx >> 3] = 1 << (pcr_idx & 0x7);
rc = tpm_transmit_cmd(chip, &cmd, sizeof(cmd), rc = tpm_transmit_cmd(chip, NULL, &cmd, sizeof(cmd),
TPM2_PCR_READ_RESP_BODY_SIZE, TPM2_PCR_READ_RESP_BODY_SIZE,
0, "attempting to read a pcr value"); 0, "attempting to read a pcr value");
if (rc == 0) { if (rc == 0) {
...@@ -333,7 +333,7 @@ int tpm2_pcr_extend(struct tpm_chip *chip, int pcr_idx, u32 count, ...@@ -333,7 +333,7 @@ int tpm2_pcr_extend(struct tpm_chip *chip, int pcr_idx, u32 count,
} }
} }
rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, 0, 0, rc = tpm_transmit_cmd(chip, NULL, buf.data, PAGE_SIZE, 0, 0,
"attempting extend a PCR value"); "attempting extend a PCR value");
tpm_buf_destroy(&buf); tpm_buf_destroy(&buf);
...@@ -382,7 +382,7 @@ int tpm2_get_random(struct tpm_chip *chip, u8 *out, size_t max) ...@@ -382,7 +382,7 @@ int tpm2_get_random(struct tpm_chip *chip, u8 *out, size_t max)
cmd.header.in = tpm2_getrandom_header; cmd.header.in = tpm2_getrandom_header;
cmd.params.getrandom_in.size = cpu_to_be16(num_bytes); cmd.params.getrandom_in.size = cpu_to_be16(num_bytes);
err = tpm_transmit_cmd(chip, &cmd, sizeof(cmd), err = tpm_transmit_cmd(chip, NULL, &cmd, sizeof(cmd),
offsetof(struct tpm2_get_random_out, offsetof(struct tpm2_get_random_out,
buffer), buffer),
0, "attempting get random"); 0, "attempting get random");
...@@ -418,6 +418,35 @@ static const struct tpm_input_header tpm2_get_tpm_pt_header = { ...@@ -418,6 +418,35 @@ static const struct tpm_input_header tpm2_get_tpm_pt_header = {
.ordinal = cpu_to_be32(TPM2_CC_GET_CAPABILITY) .ordinal = cpu_to_be32(TPM2_CC_GET_CAPABILITY)
}; };
/**
* tpm2_flush_context_cmd() - execute a TPM2_FlushContext command
* @chip: TPM chip to use
* @handle: handle of the transient object or session to flush
* @flags: tpm transmit flags
*
* Return: same as with tpm_transmit_cmd
*/
void tpm2_flush_context_cmd(struct tpm_chip *chip, u32 handle,
unsigned int flags)
{
struct tpm_buf buf;
int rc;
rc = tpm_buf_init(&buf, TPM2_ST_NO_SESSIONS, TPM2_CC_FLUSH_CONTEXT);
if (rc) {
dev_warn(&chip->dev, "0x%08x was not flushed, out of memory\n",
handle);
return;
}
tpm_buf_append_u32(&buf, handle);
(void) tpm_transmit_cmd(chip, NULL, buf.data, PAGE_SIZE, 0, flags,
"flushing context");
tpm_buf_destroy(&buf);
}
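Now that tpm2_flush_context_cmd() is no longer static (its prototype was added to tpm.h above), other TPM2 paths can flush transient handles they create. A simplified sketch of the kind of caller this enables, modelled on the unseal path that uses tpm2_load_cmd()/tpm2_unseal_cmd() elsewhere in this file; the load/unseal prototypes are assumed from context and error handling is trimmed:

/* Sketch only: pair tpm2_load_cmd()/tpm2_unseal_cmd() with the now-shared
 * flush helper so a transient handle never leaks, whichever exit path is
 * taken.
 */
static int example_load_use_flush(struct tpm_chip *chip,
				  struct trusted_key_payload *payload,
				  struct trusted_key_options *options)
{
	u32 blob_handle;
	int rc;

	mutex_lock(&chip->tpm_mutex);

	rc = tpm2_load_cmd(chip, payload, options, &blob_handle,
			   TPM_TRANSMIT_UNLOCKED);
	if (rc)
		goto out;

	rc = tpm2_unseal_cmd(chip, payload, options, blob_handle,
			     TPM_TRANSMIT_UNLOCKED);

	/* Free the TPM object slot regardless of the unseal result. */
	tpm2_flush_context_cmd(chip, blob_handle, TPM_TRANSMIT_UNLOCKED);
out:
	mutex_unlock(&chip->tpm_mutex);
	return rc;
}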
/** /**
* tpm_buf_append_auth() - append TPMS_AUTH_COMMAND to the buffer. * tpm_buf_append_auth() - append TPMS_AUTH_COMMAND to the buffer.
* *
...@@ -528,7 +557,7 @@ int tpm2_seal_trusted(struct tpm_chip *chip, ...@@ -528,7 +557,7 @@ int tpm2_seal_trusted(struct tpm_chip *chip,
goto out; goto out;
} }
rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, 4, 0, rc = tpm_transmit_cmd(chip, NULL, buf.data, PAGE_SIZE, 4, 0,
"sealing data"); "sealing data");
if (rc) if (rc)
goto out; goto out;
...@@ -612,7 +641,7 @@ static int tpm2_load_cmd(struct tpm_chip *chip, ...@@ -612,7 +641,7 @@ static int tpm2_load_cmd(struct tpm_chip *chip,
goto out; goto out;
} }
rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, 4, flags, rc = tpm_transmit_cmd(chip, NULL, buf.data, PAGE_SIZE, 4, flags,
"loading blob"); "loading blob");
if (!rc) if (!rc)
*blob_handle = be32_to_cpup( *blob_handle = be32_to_cpup(
...@@ -627,39 +656,6 @@ static int tpm2_load_cmd(struct tpm_chip *chip, ...@@ -627,39 +656,6 @@ static int tpm2_load_cmd(struct tpm_chip *chip,
return rc; return rc;
} }
/**
* tpm2_flush_context_cmd() - execute a TPM2_FlushContext command
*
* @chip: TPM chip to use
* @handle: the key data in clear and encrypted form
* @flags: tpm transmit flags
*
* Return: Same as with tpm_transmit_cmd.
*/
static void tpm2_flush_context_cmd(struct tpm_chip *chip, u32 handle,
unsigned int flags)
{
struct tpm_buf buf;
int rc;
rc = tpm_buf_init(&buf, TPM2_ST_NO_SESSIONS, TPM2_CC_FLUSH_CONTEXT);
if (rc) {
dev_warn(&chip->dev, "0x%08x was not flushed, out of memory\n",
handle);
return;
}
tpm_buf_append_u32(&buf, handle);
rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, 0, flags,
"flushing context");
if (rc)
dev_warn(&chip->dev, "0x%08x was not flushed, rc=%d\n", handle,
rc);
tpm_buf_destroy(&buf);
}
/** /**
* tpm2_unseal_cmd() - execute a TPM2_Unload command * tpm2_unseal_cmd() - execute a TPM2_Unload command
* *
...@@ -697,7 +693,7 @@ static int tpm2_unseal_cmd(struct tpm_chip *chip, ...@@ -697,7 +693,7 @@ static int tpm2_unseal_cmd(struct tpm_chip *chip,
options->blobauth /* hmac */, options->blobauth /* hmac */,
TPM_DIGEST_SIZE); TPM_DIGEST_SIZE);
rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, 6, flags, rc = tpm_transmit_cmd(chip, NULL, buf.data, PAGE_SIZE, 6, flags,
"unsealing"); "unsealing");
if (rc > 0) if (rc > 0)
rc = -EPERM; rc = -EPERM;
...@@ -774,7 +770,7 @@ ssize_t tpm2_get_tpm_pt(struct tpm_chip *chip, u32 property_id, u32 *value, ...@@ -774,7 +770,7 @@ ssize_t tpm2_get_tpm_pt(struct tpm_chip *chip, u32 property_id, u32 *value,
cmd.params.get_tpm_pt_in.property_id = cpu_to_be32(property_id); cmd.params.get_tpm_pt_in.property_id = cpu_to_be32(property_id);
cmd.params.get_tpm_pt_in.property_cnt = cpu_to_be32(1); cmd.params.get_tpm_pt_in.property_cnt = cpu_to_be32(1);
rc = tpm_transmit_cmd(chip, &cmd, sizeof(cmd), rc = tpm_transmit_cmd(chip, NULL, &cmd, sizeof(cmd),
TPM2_GET_TPM_PT_OUT_BODY_SIZE, 0, desc); TPM2_GET_TPM_PT_OUT_BODY_SIZE, 0, desc);
if (!rc) if (!rc)
*value = be32_to_cpu(cmd.params.get_tpm_pt_out.value); *value = be32_to_cpu(cmd.params.get_tpm_pt_out.value);
...@@ -809,7 +805,7 @@ static int tpm2_startup(struct tpm_chip *chip, u16 startup_type) ...@@ -809,7 +805,7 @@ static int tpm2_startup(struct tpm_chip *chip, u16 startup_type)
cmd.header.in = tpm2_startup_header; cmd.header.in = tpm2_startup_header;
cmd.params.startup_in.startup_type = cpu_to_be16(startup_type); cmd.params.startup_in.startup_type = cpu_to_be16(startup_type);
return tpm_transmit_cmd(chip, &cmd, sizeof(cmd), 0, 0, return tpm_transmit_cmd(chip, NULL, &cmd, sizeof(cmd), 0, 0,
"attempting to start the TPM"); "attempting to start the TPM");
} }
...@@ -838,7 +834,7 @@ void tpm2_shutdown(struct tpm_chip *chip, u16 shutdown_type) ...@@ -838,7 +834,7 @@ void tpm2_shutdown(struct tpm_chip *chip, u16 shutdown_type)
cmd.header.in = tpm2_shutdown_header; cmd.header.in = tpm2_shutdown_header;
cmd.params.startup_in.startup_type = cpu_to_be16(shutdown_type); cmd.params.startup_in.startup_type = cpu_to_be16(shutdown_type);
rc = tpm_transmit_cmd(chip, &cmd, sizeof(cmd), 0, 0, rc = tpm_transmit_cmd(chip, NULL, &cmd, sizeof(cmd), 0, 0,
"stopping the TPM"); "stopping the TPM");
/* In places where shutdown command is sent there's not much we can do /* In places where shutdown command is sent there's not much we can do
...@@ -902,7 +898,7 @@ static int tpm2_start_selftest(struct tpm_chip *chip, bool full) ...@@ -902,7 +898,7 @@ static int tpm2_start_selftest(struct tpm_chip *chip, bool full)
cmd.header.in = tpm2_selftest_header; cmd.header.in = tpm2_selftest_header;
cmd.params.selftest_in.full_test = full; cmd.params.selftest_in.full_test = full;
rc = tpm_transmit_cmd(chip, &cmd, TPM2_SELF_TEST_IN_SIZE, 0, 0, rc = tpm_transmit_cmd(chip, NULL, &cmd, TPM2_SELF_TEST_IN_SIZE, 0, 0,
"continue selftest"); "continue selftest");
/* At least some prototype chips seem to give RC_TESTING error /* At least some prototype chips seem to give RC_TESTING error
...@@ -953,7 +949,8 @@ static int tpm2_do_selftest(struct tpm_chip *chip) ...@@ -953,7 +949,8 @@ static int tpm2_do_selftest(struct tpm_chip *chip)
cmd.params.pcrread_in.pcr_select[1] = 0x00; cmd.params.pcrread_in.pcr_select[1] = 0x00;
cmd.params.pcrread_in.pcr_select[2] = 0x00; cmd.params.pcrread_in.pcr_select[2] = 0x00;
rc = tpm_transmit_cmd(chip, &cmd, sizeof(cmd), 0, 0, NULL); rc = tpm_transmit_cmd(chip, NULL, &cmd, sizeof(cmd), 0, 0,
NULL);
if (rc < 0) if (rc < 0)
break; break;
...@@ -986,7 +983,7 @@ int tpm2_probe(struct tpm_chip *chip) ...@@ -986,7 +983,7 @@ int tpm2_probe(struct tpm_chip *chip)
cmd.params.get_tpm_pt_in.property_id = cpu_to_be32(0x100); cmd.params.get_tpm_pt_in.property_id = cpu_to_be32(0x100);
cmd.params.get_tpm_pt_in.property_cnt = cpu_to_be32(1); cmd.params.get_tpm_pt_in.property_cnt = cpu_to_be32(1);
rc = tpm_transmit_cmd(chip, &cmd, sizeof(cmd), 0, 0, NULL); rc = tpm_transmit_cmd(chip, NULL, &cmd, sizeof(cmd), 0, 0, NULL);
if (rc < 0) if (rc < 0)
return rc; return rc;
...@@ -1024,7 +1021,7 @@ static ssize_t tpm2_get_pcr_allocation(struct tpm_chip *chip) ...@@ -1024,7 +1021,7 @@ static ssize_t tpm2_get_pcr_allocation(struct tpm_chip *chip)
tpm_buf_append_u32(&buf, 0); tpm_buf_append_u32(&buf, 0);
tpm_buf_append_u32(&buf, 1); tpm_buf_append_u32(&buf, 1);
rc = tpm_transmit_cmd(chip, buf.data, PAGE_SIZE, 9, 0, rc = tpm_transmit_cmd(chip, NULL, buf.data, PAGE_SIZE, 9, 0,
"get tpm pcr allocation"); "get tpm pcr allocation");
if (rc) if (rc)
goto out; goto out;
...@@ -1067,15 +1064,76 @@ static ssize_t tpm2_get_pcr_allocation(struct tpm_chip *chip) ...@@ -1067,15 +1064,76 @@ static ssize_t tpm2_get_pcr_allocation(struct tpm_chip *chip)
return rc; return rc;
} }
static int tpm2_get_cc_attrs_tbl(struct tpm_chip *chip)
{
struct tpm_buf buf;
u32 nr_commands;
u32 *attrs;
u32 cc;
int i;
int rc;
rc = tpm2_get_tpm_pt(chip, TPM_PT_TOTAL_COMMANDS, &nr_commands, NULL);
if (rc)
goto out;
if (nr_commands > 0xFFFFF) {
rc = -EFAULT;
goto out;
}
chip->cc_attrs_tbl = devm_kzalloc(&chip->dev, 4 * nr_commands,
GFP_KERNEL);
rc = tpm_buf_init(&buf, TPM2_ST_NO_SESSIONS, TPM2_CC_GET_CAPABILITY);
if (rc)
goto out;
tpm_buf_append_u32(&buf, TPM2_CAP_COMMANDS);
tpm_buf_append_u32(&buf, TPM2_CC_FIRST);
tpm_buf_append_u32(&buf, nr_commands);
rc = tpm_transmit_cmd(chip, NULL, buf.data, PAGE_SIZE,
9 + 4 * nr_commands, 0, NULL);
if (rc) {
tpm_buf_destroy(&buf);
goto out;
}
if (nr_commands !=
be32_to_cpup((__be32 *)&buf.data[TPM_HEADER_SIZE + 5])) {
tpm_buf_destroy(&buf);
goto out;
}
chip->nr_commands = nr_commands;
attrs = (u32 *)&buf.data[TPM_HEADER_SIZE + 9];
for (i = 0; i < nr_commands; i++, attrs++) {
chip->cc_attrs_tbl[i] = be32_to_cpup(attrs);
cc = chip->cc_attrs_tbl[i] & 0xFFFF;
if (cc == TPM2_CC_CONTEXT_SAVE || cc == TPM2_CC_FLUSH_CONTEXT) {
chip->cc_attrs_tbl[i] &=
~(GENMASK(2, 0) << TPM2_CC_ATTR_CHANDLES);
chip->cc_attrs_tbl[i] |= 1 << TPM2_CC_ATTR_CHANDLES;
}
}
tpm_buf_destroy(&buf);
out:
if (rc > 0)
rc = -ENODEV;
return rc;
}
/** /**
* tpm2_auto_startup - Perform the standard automatic TPM initialization * tpm2_auto_startup - Perform the standard automatic TPM initialization
* sequence * sequence
* @chip: TPM chip to use * @chip: TPM chip to use
* *
* Initializes timeout values for operation and command durations, conducts * Returns 0 on success, < 0 in case of fatal error.
* a self-test and reads the list of active PCR banks.
*
* Return: 0 on success. Otherwise, a system error code is returned.
*/ */
int tpm2_auto_startup(struct tpm_chip *chip) int tpm2_auto_startup(struct tpm_chip *chip)
{ {
...@@ -1104,9 +1162,24 @@ int tpm2_auto_startup(struct tpm_chip *chip) ...@@ -1104,9 +1162,24 @@ int tpm2_auto_startup(struct tpm_chip *chip)
} }
rc = tpm2_get_pcr_allocation(chip); rc = tpm2_get_pcr_allocation(chip);
if (rc)
goto out;
rc = tpm2_get_cc_attrs_tbl(chip);
out: out:
if (rc > 0) if (rc > 0)
rc = -ENODEV; rc = -ENODEV;
return rc; return rc;
} }
int tpm2_find_cc(struct tpm_chip *chip, u32 cc)
{
int i;
for (i = 0; i < chip->nr_commands; i++)
if (cc == (chip->cc_attrs_tbl[i] & GENMASK(15, 0)))
return i;
return -1;
}
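tpm2_find_cc() is a plain linear scan over the cached attribute table, returning the index whose low 16 bits match the command code. A hypothetical helper, not part of this series, showing how such an index would be used to inspect a command's attributes:

/* Hypothetical helper: use tpm2_find_cc() to pull the cached TPMA_CC word
 * for TPM2_CC_GET_RANDOM and report whether the command returns a handle
 * (TPM2_CC_ATTR_RHANDLE, bit 28).
 */
static void example_dump_cc_attrs(struct tpm_chip *chip)
{
	int i = tpm2_find_cc(chip, TPM2_CC_GET_RANDOM);
	u32 attrs;

	if (i < 0) {
		dev_dbg(&chip->dev, "GetRandom not advertised by the TPM\n");
		return;
	}

	attrs = chip->cc_attrs_tbl[i];
	dev_dbg(&chip->dev, "GetRandom attrs 0x%08x rHandle %u\n",
		attrs, (attrs >> TPM2_CC_ATTR_RHANDLE) & 1);
}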
...@@ -56,18 +56,24 @@ static int calc_tpm2_event_size(struct tcg_pcr_event2 *event, ...@@ -56,18 +56,24 @@ static int calc_tpm2_event_size(struct tcg_pcr_event2 *event,
efispecid = (struct tcg_efi_specid_event *)event_header->event; efispecid = (struct tcg_efi_specid_event *)event_header->event;
for (i = 0; (i < event->count) && (i < TPM2_ACTIVE_PCR_BANKS); /* Check if event is malformed. */
i++) { if (event->count > efispecid->num_algs)
return 0;
for (i = 0; i < event->count; i++) {
halg_size = sizeof(event->digests[i].alg_id); halg_size = sizeof(event->digests[i].alg_id);
memcpy(&halg, marker, halg_size); memcpy(&halg, marker, halg_size);
marker = marker + halg_size; marker = marker + halg_size;
for (j = 0; (j < efispecid->num_algs); j++) { for (j = 0; j < efispecid->num_algs; j++) {
if (halg == efispecid->digest_sizes[j].alg_id) { if (halg == efispecid->digest_sizes[j].alg_id) {
marker = marker + marker +=
efispecid->digest_sizes[j].digest_size; efispecid->digest_sizes[j].digest_size;
break; break;
} }
} }
/* Algorithm without known length. Such event is unparseable. */
if (j == efispecid->num_algs)
return 0;
} }
event_field = (struct tcg_event_field *)marker; event_field = (struct tcg_event_field *)marker;
......
...@@ -278,22 +278,22 @@ enum tis_defaults { ...@@ -278,22 +278,22 @@ enum tis_defaults {
#define TPM_DATA_FIFO(l) (0x0005 | ((l) << 4)) #define TPM_DATA_FIFO(l) (0x0005 | ((l) << 4))
#define TPM_DID_VID(l) (0x0006 | ((l) << 4)) #define TPM_DID_VID(l) (0x0006 | ((l) << 4))
static int check_locality(struct tpm_chip *chip, int loc) static bool check_locality(struct tpm_chip *chip, int loc)
{ {
u8 buf; u8 buf;
int rc; int rc;
rc = iic_tpm_read(TPM_ACCESS(loc), &buf, 1); rc = iic_tpm_read(TPM_ACCESS(loc), &buf, 1);
if (rc < 0) if (rc < 0)
return rc; return false;
if ((buf & (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) == if ((buf & (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) ==
(TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) { (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) {
tpm_dev.locality = loc; tpm_dev.locality = loc;
return loc; return true;
} }
return -EIO; return false;
} }
/* implementation similar to tpm_tis */ /* implementation similar to tpm_tis */
...@@ -315,7 +315,7 @@ static int request_locality(struct tpm_chip *chip, int loc) ...@@ -315,7 +315,7 @@ static int request_locality(struct tpm_chip *chip, int loc)
unsigned long stop; unsigned long stop;
u8 buf = TPM_ACCESS_REQUEST_USE; u8 buf = TPM_ACCESS_REQUEST_USE;
if (check_locality(chip, loc) >= 0) if (check_locality(chip, loc))
return loc; return loc;
iic_tpm_write(TPM_ACCESS(loc), &buf, 1); iic_tpm_write(TPM_ACCESS(loc), &buf, 1);
...@@ -323,7 +323,7 @@ static int request_locality(struct tpm_chip *chip, int loc) ...@@ -323,7 +323,7 @@ static int request_locality(struct tpm_chip *chip, int loc)
/* wait for burstcount */ /* wait for burstcount */
stop = jiffies + chip->timeout_a; stop = jiffies + chip->timeout_a;
do { do {
if (check_locality(chip, loc) >= 0) if (check_locality(chip, loc))
return loc; return loc;
usleep_range(TPM_TIMEOUT_US_LOW, TPM_TIMEOUT_US_HI); usleep_range(TPM_TIMEOUT_US_LOW, TPM_TIMEOUT_US_HI);
} while (time_before(jiffies, stop)); } while (time_before(jiffies, stop));
......
...@@ -49,9 +49,10 @@ ...@@ -49,9 +49,10 @@
*/ */
#define TPM_I2C_MAX_BUF_SIZE 32 #define TPM_I2C_MAX_BUF_SIZE 32
#define TPM_I2C_RETRY_COUNT 32 #define TPM_I2C_RETRY_COUNT 32
#define TPM_I2C_BUS_DELAY 1 /* msec */ #define TPM_I2C_BUS_DELAY 1000 /* usec */
#define TPM_I2C_RETRY_DELAY_SHORT 2 /* msec */ #define TPM_I2C_RETRY_DELAY_SHORT (2 * 1000) /* usec */
#define TPM_I2C_RETRY_DELAY_LONG 10 /* msec */ #define TPM_I2C_RETRY_DELAY_LONG (10 * 1000) /* usec */
#define TPM_I2C_DELAY_RANGE 300 /* usec */
#define OF_IS_TPM2 ((void *)1) #define OF_IS_TPM2 ((void *)1)
#define I2C_IS_TPM2 1 #define I2C_IS_TPM2 1
...@@ -123,7 +124,9 @@ static s32 i2c_nuvoton_write_status(struct i2c_client *client, u8 data) ...@@ -123,7 +124,9 @@ static s32 i2c_nuvoton_write_status(struct i2c_client *client, u8 data)
/* this causes the current command to be aborted */ /* this causes the current command to be aborted */
for (i = 0, status = -1; i < TPM_I2C_RETRY_COUNT && status < 0; i++) { for (i = 0, status = -1; i < TPM_I2C_RETRY_COUNT && status < 0; i++) {
status = i2c_nuvoton_write_buf(client, TPM_STS, 1, &data); status = i2c_nuvoton_write_buf(client, TPM_STS, 1, &data);
msleep(TPM_I2C_BUS_DELAY); if (status < 0)
usleep_range(TPM_I2C_BUS_DELAY, TPM_I2C_BUS_DELAY
+ TPM_I2C_DELAY_RANGE);
} }
return status; return status;
} }
...@@ -160,7 +163,8 @@ static int i2c_nuvoton_get_burstcount(struct i2c_client *client, ...@@ -160,7 +163,8 @@ static int i2c_nuvoton_get_burstcount(struct i2c_client *client,
burst_count = min_t(u8, TPM_I2C_MAX_BUF_SIZE, data); burst_count = min_t(u8, TPM_I2C_MAX_BUF_SIZE, data);
break; break;
} }
msleep(TPM_I2C_BUS_DELAY); usleep_range(TPM_I2C_BUS_DELAY, TPM_I2C_BUS_DELAY
+ TPM_I2C_DELAY_RANGE);
} while (time_before(jiffies, stop)); } while (time_before(jiffies, stop));
return burst_count; return burst_count;
...@@ -203,13 +207,17 @@ static int i2c_nuvoton_wait_for_stat(struct tpm_chip *chip, u8 mask, u8 value, ...@@ -203,13 +207,17 @@ static int i2c_nuvoton_wait_for_stat(struct tpm_chip *chip, u8 mask, u8 value,
return 0; return 0;
/* use polling to wait for the event */ /* use polling to wait for the event */
ten_msec = jiffies + msecs_to_jiffies(TPM_I2C_RETRY_DELAY_LONG); ten_msec = jiffies + usecs_to_jiffies(TPM_I2C_RETRY_DELAY_LONG);
stop = jiffies + timeout; stop = jiffies + timeout;
do { do {
if (time_before(jiffies, ten_msec)) if (time_before(jiffies, ten_msec))
msleep(TPM_I2C_RETRY_DELAY_SHORT); usleep_range(TPM_I2C_RETRY_DELAY_SHORT,
TPM_I2C_RETRY_DELAY_SHORT
+ TPM_I2C_DELAY_RANGE);
else else
msleep(TPM_I2C_RETRY_DELAY_LONG); usleep_range(TPM_I2C_RETRY_DELAY_LONG,
TPM_I2C_RETRY_DELAY_LONG
+ TPM_I2C_DELAY_RANGE);
status_valid = i2c_nuvoton_check_status(chip, mask, status_valid = i2c_nuvoton_check_status(chip, mask,
value); value);
if (status_valid) if (status_valid)
......
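The Nuvoton I2C changes above swap fixed msleep() calls for usleep_range() so sub-10ms delays are not silently rounded up to whole jiffies, and give the scheduler a slack window to coalesce wakeups. The general pattern, restated as a minimal sketch with illustrative constants:

/* General pattern sketch, not part of the driver: a fixed msleep() becomes
 * a bounded usleep_range() with a small slack window.
 */
#include <linux/delay.h>

#define EXAMPLE_DELAY_US	1000	/* was msleep(1) */
#define EXAMPLE_SLACK_US	300	/* tolerate up to 300 us extra */

static void example_bus_settle(void)
{
	usleep_range(EXAMPLE_DELAY_US, EXAMPLE_DELAY_US + EXAMPLE_SLACK_US);
}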
...@@ -299,6 +299,8 @@ static int tpm_ibmvtpm_remove(struct vio_dev *vdev) ...@@ -299,6 +299,8 @@ static int tpm_ibmvtpm_remove(struct vio_dev *vdev)
} }
kfree(ibmvtpm); kfree(ibmvtpm);
/* For tpm_ibmvtpm_get_desired_dma */
dev_set_drvdata(&vdev->dev, NULL);
return 0; return 0;
} }
...@@ -313,14 +315,16 @@ static int tpm_ibmvtpm_remove(struct vio_dev *vdev) ...@@ -313,14 +315,16 @@ static int tpm_ibmvtpm_remove(struct vio_dev *vdev)
static unsigned long tpm_ibmvtpm_get_desired_dma(struct vio_dev *vdev) static unsigned long tpm_ibmvtpm_get_desired_dma(struct vio_dev *vdev)
{ {
struct tpm_chip *chip = dev_get_drvdata(&vdev->dev); struct tpm_chip *chip = dev_get_drvdata(&vdev->dev);
struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev); struct ibmvtpm_dev *ibmvtpm;
/* /*
* ibmvtpm initializes at probe time, so the data we are * ibmvtpm initializes at probe time, so the data we are
* asking for may not be set yet. Estimate that 4K required * asking for may not be set yet. Estimate that 4K required
* for TCE-mapped buffer in addition to CRQ. * for TCE-mapped buffer in addition to CRQ.
*/ */
if (!ibmvtpm) if (chip)
ibmvtpm = dev_get_drvdata(&chip->dev);
else
return CRQ_RES_BUF_SIZE + PAGE_SIZE; return CRQ_RES_BUF_SIZE + PAGE_SIZE;
return CRQ_RES_BUF_SIZE + ibmvtpm->rtce_size; return CRQ_RES_BUF_SIZE + ibmvtpm->rtce_size;
......
...@@ -56,7 +56,7 @@ static int wait_startup(struct tpm_chip *chip, int l) ...@@ -56,7 +56,7 @@ static int wait_startup(struct tpm_chip *chip, int l)
return -1; return -1;
} }
static int check_locality(struct tpm_chip *chip, int l) static bool check_locality(struct tpm_chip *chip, int l)
{ {
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
int rc; int rc;
...@@ -64,30 +64,22 @@ static int check_locality(struct tpm_chip *chip, int l) ...@@ -64,30 +64,22 @@ static int check_locality(struct tpm_chip *chip, int l)
rc = tpm_tis_read8(priv, TPM_ACCESS(l), &access); rc = tpm_tis_read8(priv, TPM_ACCESS(l), &access);
if (rc < 0) if (rc < 0)
return rc; return false;
if ((access & (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) == if ((access & (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) ==
(TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) {
return priv->locality = l; priv->locality = l;
return true;
}
return -1; return false;
} }
static void release_locality(struct tpm_chip *chip, int l, int force) static void release_locality(struct tpm_chip *chip, int l)
{ {
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
int rc;
u8 access;
rc = tpm_tis_read8(priv, TPM_ACCESS(l), &access);
if (rc < 0)
return;
if (force || (access &
(TPM_ACCESS_REQUEST_PENDING | TPM_ACCESS_VALID)) ==
(TPM_ACCESS_REQUEST_PENDING | TPM_ACCESS_VALID))
tpm_tis_write8(priv, TPM_ACCESS(l), TPM_ACCESS_ACTIVE_LOCALITY);
tpm_tis_write8(priv, TPM_ACCESS(l), TPM_ACCESS_ACTIVE_LOCALITY);
} }
static int request_locality(struct tpm_chip *chip, int l) static int request_locality(struct tpm_chip *chip, int l)
...@@ -96,7 +88,7 @@ static int request_locality(struct tpm_chip *chip, int l) ...@@ -96,7 +88,7 @@ static int request_locality(struct tpm_chip *chip, int l)
unsigned long stop, timeout; unsigned long stop, timeout;
long rc; long rc;
if (check_locality(chip, l) >= 0) if (check_locality(chip, l))
return l; return l;
rc = tpm_tis_write8(priv, TPM_ACCESS(l), TPM_ACCESS_REQUEST_USE); rc = tpm_tis_write8(priv, TPM_ACCESS(l), TPM_ACCESS_REQUEST_USE);
...@@ -112,7 +104,7 @@ static int request_locality(struct tpm_chip *chip, int l) ...@@ -112,7 +104,7 @@ static int request_locality(struct tpm_chip *chip, int l)
return -1; return -1;
rc = wait_event_interruptible_timeout(priv->int_queue, rc = wait_event_interruptible_timeout(priv->int_queue,
(check_locality (check_locality
(chip, l) >= 0), (chip, l)),
timeout); timeout);
if (rc > 0) if (rc > 0)
return l; return l;
...@@ -123,7 +115,7 @@ static int request_locality(struct tpm_chip *chip, int l) ...@@ -123,7 +115,7 @@ static int request_locality(struct tpm_chip *chip, int l)
} else { } else {
/* wait for burstcount */ /* wait for burstcount */
do { do {
if (check_locality(chip, l) >= 0) if (check_locality(chip, l))
return l; return l;
msleep(TPM_TIMEOUT); msleep(TPM_TIMEOUT);
} while (time_before(jiffies, stop)); } while (time_before(jiffies, stop));
...@@ -160,8 +152,10 @@ static int get_burstcount(struct tpm_chip *chip) ...@@ -160,8 +152,10 @@ static int get_burstcount(struct tpm_chip *chip)
u32 value; u32 value;
/* wait for burstcount */ /* wait for burstcount */
/* which timeout value, spec has 2 answers (c & d) */ if (chip->flags & TPM_CHIP_FLAG_TPM2)
stop = jiffies + chip->timeout_d; stop = jiffies + chip->timeout_a;
else
stop = jiffies + chip->timeout_d;
do { do {
rc = tpm_tis_read32(priv, TPM_STS(priv->locality), &value); rc = tpm_tis_read32(priv, TPM_STS(priv->locality), &value);
if (rc < 0) if (rc < 0)
...@@ -250,7 +244,6 @@ static int tpm_tis_recv(struct tpm_chip *chip, u8 *buf, size_t count) ...@@ -250,7 +244,6 @@ static int tpm_tis_recv(struct tpm_chip *chip, u8 *buf, size_t count)
out: out:
tpm_tis_ready(chip); tpm_tis_ready(chip);
release_locality(chip, priv->locality, 0);
return size; return size;
} }
...@@ -266,9 +259,6 @@ static int tpm_tis_send_data(struct tpm_chip *chip, u8 *buf, size_t len) ...@@ -266,9 +259,6 @@ static int tpm_tis_send_data(struct tpm_chip *chip, u8 *buf, size_t len)
size_t count = 0; size_t count = 0;
bool itpm = priv->flags & TPM_TIS_ITPM_WORKAROUND; bool itpm = priv->flags & TPM_TIS_ITPM_WORKAROUND;
if (request_locality(chip, 0) < 0)
return -EBUSY;
status = tpm_tis_status(chip); status = tpm_tis_status(chip);
if ((status & TPM_STS_COMMAND_READY) == 0) { if ((status & TPM_STS_COMMAND_READY) == 0) {
tpm_tis_ready(chip); tpm_tis_ready(chip);
...@@ -327,7 +317,6 @@ static int tpm_tis_send_data(struct tpm_chip *chip, u8 *buf, size_t len) ...@@ -327,7 +317,6 @@ static int tpm_tis_send_data(struct tpm_chip *chip, u8 *buf, size_t len)
out_err: out_err:
tpm_tis_ready(chip); tpm_tis_ready(chip);
release_locality(chip, priv->locality, 0);
return rc; return rc;
} }
...@@ -388,7 +377,6 @@ static int tpm_tis_send_main(struct tpm_chip *chip, u8 *buf, size_t len) ...@@ -388,7 +377,6 @@ static int tpm_tis_send_main(struct tpm_chip *chip, u8 *buf, size_t len)
return len; return len;
out_err: out_err:
tpm_tis_ready(chip); tpm_tis_ready(chip);
release_locality(chip, priv->locality, 0);
return rc; return rc;
} }
...@@ -475,12 +463,14 @@ static int probe_itpm(struct tpm_chip *chip) ...@@ -475,12 +463,14 @@ static int probe_itpm(struct tpm_chip *chip)
if (vendor != TPM_VID_INTEL) if (vendor != TPM_VID_INTEL)
return 0; return 0;
if (request_locality(chip, 0) != 0)
return -EBUSY;
rc = tpm_tis_send_data(chip, cmd_getticks, len); rc = tpm_tis_send_data(chip, cmd_getticks, len);
if (rc == 0) if (rc == 0)
goto out; goto out;
tpm_tis_ready(chip); tpm_tis_ready(chip);
release_locality(chip, priv->locality, 0);
priv->flags |= TPM_TIS_ITPM_WORKAROUND; priv->flags |= TPM_TIS_ITPM_WORKAROUND;
...@@ -494,7 +484,7 @@ static int probe_itpm(struct tpm_chip *chip) ...@@ -494,7 +484,7 @@ static int probe_itpm(struct tpm_chip *chip)
out: out:
tpm_tis_ready(chip); tpm_tis_ready(chip);
release_locality(chip, priv->locality, 0); release_locality(chip, priv->locality);
return rc; return rc;
} }
...@@ -533,7 +523,7 @@ static irqreturn_t tis_int_handler(int dummy, void *dev_id) ...@@ -533,7 +523,7 @@ static irqreturn_t tis_int_handler(int dummy, void *dev_id)
wake_up_interruptible(&priv->read_queue); wake_up_interruptible(&priv->read_queue);
if (interrupt & TPM_INTF_LOCALITY_CHANGE_INT) if (interrupt & TPM_INTF_LOCALITY_CHANGE_INT)
for (i = 0; i < 5; i++) for (i = 0; i < 5; i++)
if (check_locality(chip, i) >= 0) if (check_locality(chip, i))
break; break;
if (interrupt & if (interrupt &
(TPM_INTF_LOCALITY_CHANGE_INT | TPM_INTF_STS_VALID_INT | (TPM_INTF_LOCALITY_CHANGE_INT | TPM_INTF_STS_VALID_INT |
...@@ -668,7 +658,6 @@ void tpm_tis_remove(struct tpm_chip *chip) ...@@ -668,7 +658,6 @@ void tpm_tis_remove(struct tpm_chip *chip)
interrupt = 0; interrupt = 0;
tpm_tis_write32(priv, reg, ~TPM_GLOBAL_INT_ENABLE & interrupt); tpm_tis_write32(priv, reg, ~TPM_GLOBAL_INT_ENABLE & interrupt);
release_locality(chip, priv->locality, 1);
} }
EXPORT_SYMBOL_GPL(tpm_tis_remove); EXPORT_SYMBOL_GPL(tpm_tis_remove);
...@@ -682,6 +671,8 @@ static const struct tpm_class_ops tpm_tis = { ...@@ -682,6 +671,8 @@ static const struct tpm_class_ops tpm_tis = {
.req_complete_mask = TPM_STS_DATA_AVAIL | TPM_STS_VALID, .req_complete_mask = TPM_STS_DATA_AVAIL | TPM_STS_VALID,
.req_complete_val = TPM_STS_DATA_AVAIL | TPM_STS_VALID, .req_complete_val = TPM_STS_DATA_AVAIL | TPM_STS_VALID,
.req_canceled = tpm_tis_req_canceled, .req_canceled = tpm_tis_req_canceled,
.request_locality = request_locality,
.relinquish_locality = release_locality,
}; };
int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq, int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
...@@ -724,11 +715,6 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq, ...@@ -724,11 +715,6 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
intmask &= ~TPM_GLOBAL_INT_ENABLE; intmask &= ~TPM_GLOBAL_INT_ENABLE;
tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask); tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask);
if (request_locality(chip, 0) != 0) {
rc = -ENODEV;
goto out_err;
}
rc = tpm2_probe(chip); rc = tpm2_probe(chip);
if (rc) if (rc)
goto out_err; goto out_err;
......
...@@ -47,8 +47,8 @@ struct tpm_tis_spi_phy { ...@@ -47,8 +47,8 @@ struct tpm_tis_spi_phy {
struct tpm_tis_data priv; struct tpm_tis_data priv;
struct spi_device *spi_device; struct spi_device *spi_device;
u8 tx_buf[MAX_SPI_FRAMESIZE + 4]; u8 tx_buf[4];
u8 rx_buf[MAX_SPI_FRAMESIZE + 4]; u8 rx_buf[4];
}; };
static inline struct tpm_tis_spi_phy *to_tpm_tis_spi_phy(struct tpm_tis_data *data) static inline struct tpm_tis_spi_phy *to_tpm_tis_spi_phy(struct tpm_tis_data *data)
...@@ -56,122 +56,98 @@ static inline struct tpm_tis_spi_phy *to_tpm_tis_spi_phy(struct tpm_tis_data *da ...@@ -56,122 +56,98 @@ static inline struct tpm_tis_spi_phy *to_tpm_tis_spi_phy(struct tpm_tis_data *da
return container_of(data, struct tpm_tis_spi_phy, priv); return container_of(data, struct tpm_tis_spi_phy, priv);
} }
static int tpm_tis_spi_read_bytes(struct tpm_tis_data *data, u32 addr, static int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
u16 len, u8 *result) u8 *buffer, u8 direction)
{ {
struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data); struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
int ret, i; int ret = 0;
int i;
struct spi_message m; struct spi_message m;
struct spi_transfer spi_xfer = { struct spi_transfer spi_xfer;
.tx_buf = phy->tx_buf, u8 transfer_len;
.rx_buf = phy->rx_buf,
.len = 4,
};
if (len > MAX_SPI_FRAMESIZE) spi_bus_lock(phy->spi_device->master);
return -ENOMEM;
phy->tx_buf[0] = 0x80 | (len - 1); while (len) {
phy->tx_buf[1] = 0xd4; transfer_len = min_t(u16, len, MAX_SPI_FRAMESIZE);
phy->tx_buf[2] = (addr >> 8) & 0xFF;
phy->tx_buf[3] = addr & 0xFF;
spi_xfer.cs_change = 1; phy->tx_buf[0] = direction | (transfer_len - 1);
spi_message_init(&m); phy->tx_buf[1] = 0xd4;
spi_message_add_tail(&spi_xfer, &m); phy->tx_buf[2] = addr >> 8;
phy->tx_buf[3] = addr;
memset(&spi_xfer, 0, sizeof(spi_xfer));
spi_xfer.tx_buf = phy->tx_buf;
spi_xfer.rx_buf = phy->rx_buf;
spi_xfer.len = 4;
spi_xfer.cs_change = 1;
spi_bus_lock(phy->spi_device->master);
ret = spi_sync_locked(phy->spi_device, &m);
if (ret < 0)
goto exit;
memset(phy->tx_buf, 0, len);
/* According to TCG PTP specification, if there is no TPM present at
* all, then the design has a weak pull-up on MISO. If a TPM is not
* present, a pull-up on MISO means that the SB controller sees a 1,
* and will latch in 0xFF on the read.
*/
for (i = 0; (phy->rx_buf[0] & 0x01) == 0 && i < TPM_RETRY; i++) {
spi_xfer.len = 1;
spi_message_init(&m); spi_message_init(&m);
spi_message_add_tail(&spi_xfer, &m); spi_message_add_tail(&spi_xfer, &m);
ret = spi_sync_locked(phy->spi_device, &m); ret = spi_sync_locked(phy->spi_device, &m);
if (ret < 0) if (ret < 0)
goto exit; goto exit;
}
spi_xfer.cs_change = 0;
spi_xfer.len = len;
spi_xfer.rx_buf = result;
spi_message_init(&m);
spi_message_add_tail(&spi_xfer, &m);
ret = spi_sync_locked(phy->spi_device, &m);
exit:
spi_bus_unlock(phy->spi_device->master);
return ret;
}
static int tpm_tis_spi_write_bytes(struct tpm_tis_data *data, u32 addr,
u16 len, u8 *value)
{
struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
int ret, i;
struct spi_message m;
struct spi_transfer spi_xfer = {
.tx_buf = phy->tx_buf,
.rx_buf = phy->rx_buf,
.len = 4,
};
if (len > MAX_SPI_FRAMESIZE)
return -ENOMEM;
phy->tx_buf[0] = len - 1;
phy->tx_buf[1] = 0xd4;
phy->tx_buf[2] = (addr >> 8) & 0xFF;
phy->tx_buf[3] = addr & 0xFF;
spi_xfer.cs_change = 1; if ((phy->rx_buf[3] & 0x01) == 0) {
spi_message_init(&m); // handle SPI wait states
spi_message_add_tail(&spi_xfer, &m); phy->tx_buf[0] = 0;
for (i = 0; i < TPM_RETRY; i++) {
spi_xfer.len = 1;
spi_message_init(&m);
spi_message_add_tail(&spi_xfer, &m);
ret = spi_sync_locked(phy->spi_device, &m);
if (ret < 0)
goto exit;
if (phy->rx_buf[0] & 0x01)
break;
}
if (i == TPM_RETRY) {
ret = -ETIMEDOUT;
goto exit;
}
}
spi_xfer.cs_change = 0;
spi_xfer.len = transfer_len;
spi_xfer.delay_usecs = 5;
if (direction) {
spi_xfer.tx_buf = NULL;
spi_xfer.rx_buf = buffer;
} else {
spi_xfer.tx_buf = buffer;
spi_xfer.rx_buf = NULL;
}
spi_bus_lock(phy->spi_device->master);
ret = spi_sync_locked(phy->spi_device, &m);
if (ret < 0)
goto exit;
memset(phy->tx_buf, 0, len);
/* According to TCG PTP specification, if there is no TPM present at
* all, then the design has a weak pull-up on MISO. If a TPM is not
* present, a pull-up on MISO means that the SB controller sees a 1,
* and will latch in 0xFF on the read.
*/
for (i = 0; (phy->rx_buf[0] & 0x01) == 0 && i < TPM_RETRY; i++) {
spi_xfer.len = 1;
spi_message_init(&m); spi_message_init(&m);
spi_message_add_tail(&spi_xfer, &m); spi_message_add_tail(&spi_xfer, &m);
ret = spi_sync_locked(phy->spi_device, &m); ret = spi_sync_locked(phy->spi_device, &m);
if (ret < 0) if (ret < 0)
goto exit; goto exit;
}
spi_xfer.len = len; len -= transfer_len;
spi_xfer.tx_buf = value; buffer += transfer_len;
spi_xfer.cs_change = 0; }
spi_xfer.tx_buf = value;
spi_message_init(&m);
spi_message_add_tail(&spi_xfer, &m);
ret = spi_sync_locked(phy->spi_device, &m);
exit: exit:
spi_bus_unlock(phy->spi_device->master); spi_bus_unlock(phy->spi_device->master);
return ret; return ret;
} }
static int tpm_tis_spi_read_bytes(struct tpm_tis_data *data, u32 addr,
u16 len, u8 *result)
{
return tpm_tis_spi_transfer(data, addr, len, result, 0x80);
}
static int tpm_tis_spi_write_bytes(struct tpm_tis_data *data, u32 addr,
u16 len, u8 *value)
{
return tpm_tis_spi_transfer(data, addr, len, value, 0);
}
static int tpm_tis_spi_read16(struct tpm_tis_data *data, u32 addr, u16 *result) static int tpm_tis_spi_read16(struct tpm_tis_data *data, u32 addr, u16 *result)
{ {
int rc; int rc;
......
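The rewritten tpm_tis_spi_transfer() folds the old read/write helpers into one routine that chunks transfers to MAX_SPI_FRAMESIZE, emits a 4-byte TIS-over-SPI header, and polls MISO for wait states. A standalone sketch of just the header layout for a 4-byte read; the register offset is chosen for illustration:

/* Illustration only: the 4-byte header the transfer routine emits for a
 * 4-byte read. Bit 7 of byte 0 selects a read, the low bits encode
 * length - 1; byte 1 is the fixed 0xd4 TIS address space selector.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t addr = 0x0018;		/* hypothetical register offset */
	uint8_t len = 4;
	uint8_t hdr[4];

	hdr[0] = 0x80 | (len - 1);	/* read, 4 bytes */
	hdr[1] = 0xd4;			/* TIS address space */
	hdr[2] = addr >> 8;
	hdr[3] = addr & 0xff;

	printf("%02x %02x %02x %02x\n", hdr[0], hdr[1], hdr[2], hdr[3]);
	return 0;
}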
/*
* Copyright (C) 2017 James.Bottomley@HansenPartnership.com
*
* GPLv2
*/
#include <linux/slab.h>
#include "tpm-dev.h"
struct tpmrm_priv {
struct file_priv priv;
struct tpm_space space;
};
static int tpmrm_open(struct inode *inode, struct file *file)
{
struct tpm_chip *chip;
struct tpmrm_priv *priv;
int rc;
chip = container_of(inode->i_cdev, struct tpm_chip, cdevs);
priv = kzalloc(sizeof(*priv), GFP_KERNEL);
if (priv == NULL)
return -ENOMEM;
rc = tpm2_init_space(&priv->space);
if (rc) {
kfree(priv);
return -ENOMEM;
}
tpm_common_open(file, chip, &priv->priv);
return 0;
}
static int tpmrm_release(struct inode *inode, struct file *file)
{
struct file_priv *fpriv = file->private_data;
struct tpmrm_priv *priv = container_of(fpriv, struct tpmrm_priv, priv);
tpm_common_release(file, fpriv);
tpm2_del_space(fpriv->chip, &priv->space);
kfree(priv);
return 0;
}
ssize_t tpmrm_write(struct file *file, const char __user *buf,
size_t size, loff_t *off)
{
struct file_priv *fpriv = file->private_data;
struct tpmrm_priv *priv = container_of(fpriv, struct tpmrm_priv, priv);
return tpm_common_write(file, buf, size, off, &priv->space);
}
const struct file_operations tpmrm_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.open = tpmrm_open,
.read = tpm_common_read,
.write = tpmrm_write,
.release = tpmrm_release,
};
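tpmrm_fops backs the new per-chip "tpmrm" character device; each open file descriptor gets its own struct tpm_space, so clients work with their own handle context. A userspace sketch, assuming the device node shows up as /dev/tpmrm0, that pushes a raw TPM2_GetRandom command through it:

/* Userspace sketch: send TPM2_GetRandom(8) as a raw command and read the
 * response. Error handling is deliberately minimal.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* 80 01 | 00 00 00 0c | 00 00 01 7b | 00 08
	 * tag TPM_ST_NO_SESSIONS, size 12, TPM2_CC_GET_RANDOM, 8 bytes */
	uint8_t cmd[] = { 0x80, 0x01, 0x00, 0x00, 0x00, 0x0c,
			  0x00, 0x00, 0x01, 0x7b, 0x00, 0x08 };
	uint8_t rsp[64];
	ssize_t n;
	int fd = open("/dev/tpmrm0", O_RDWR);

	if (fd < 0)
		return 1;
	if (write(fd, cmd, sizeof(cmd)) != (ssize_t)sizeof(cmd))
		return 1;
	n = read(fd, rsp, sizeof(rsp));
	printf("got %zd response bytes\n", n);
	close(fd);
	return 0;
}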
...@@ -340,22 +340,14 @@ int generic_permission(struct inode *inode, int mask) ...@@ -340,22 +340,14 @@ int generic_permission(struct inode *inode, int mask)
if (S_ISDIR(inode->i_mode)) { if (S_ISDIR(inode->i_mode)) {
/* DACs are overridable for directories */ /* DACs are overridable for directories */
if (capable_wrt_inode_uidgid(inode, CAP_DAC_OVERRIDE))
return 0;
if (!(mask & MAY_WRITE)) if (!(mask & MAY_WRITE))
if (capable_wrt_inode_uidgid(inode, if (capable_wrt_inode_uidgid(inode,
CAP_DAC_READ_SEARCH)) CAP_DAC_READ_SEARCH))
return 0; return 0;
return -EACCES;
}
/*
* Read/write DACs are always overridable.
* Executable DACs are overridable when there is
* at least one exec bit set.
*/
if (!(mask & MAY_EXEC) || (inode->i_mode & S_IXUGO))
if (capable_wrt_inode_uidgid(inode, CAP_DAC_OVERRIDE)) if (capable_wrt_inode_uidgid(inode, CAP_DAC_OVERRIDE))
return 0; return 0;
return -EACCES;
}
/* /*
* Searching includes executable on directories, else just read. * Searching includes executable on directories, else just read.
...@@ -364,6 +356,14 @@ int generic_permission(struct inode *inode, int mask) ...@@ -364,6 +356,14 @@ int generic_permission(struct inode *inode, int mask)
if (mask == MAY_READ) if (mask == MAY_READ)
if (capable_wrt_inode_uidgid(inode, CAP_DAC_READ_SEARCH)) if (capable_wrt_inode_uidgid(inode, CAP_DAC_READ_SEARCH))
return 0; return 0;
/*
* Read/write DACs are always overridable.
* Executable DACs are overridable when there is
* at least one exec bit set.
*/
if (!(mask & MAY_EXEC) || (inode->i_mode & S_IXUGO))
if (capable_wrt_inode_uidgid(inode, CAP_DAC_OVERRIDE))
return 0;
return -EACCES; return -EACCES;
} }
......
...@@ -1294,6 +1294,7 @@ struct acpi_table_tpm2 { ...@@ -1294,6 +1294,7 @@ struct acpi_table_tpm2 {
#define ACPI_TPM2_MEMORY_MAPPED 6 #define ACPI_TPM2_MEMORY_MAPPED 6
#define ACPI_TPM2_COMMAND_BUFFER 7 #define ACPI_TPM2_COMMAND_BUFFER 7
#define ACPI_TPM2_COMMAND_BUFFER_WITH_START_METHOD 8 #define ACPI_TPM2_COMMAND_BUFFER_WITH_START_METHOD 8
#define ACPI_TPM2_COMMAND_BUFFER_WITH_SMC 11
/******************************************************************************* /*******************************************************************************
* *
......
...@@ -50,9 +50,20 @@ struct key; ...@@ -50,9 +50,20 @@ struct key;
struct key_type; struct key_type;
union key_payload; union key_payload;
extern int restrict_link_by_signature(struct key *trust_keyring, extern int restrict_link_by_signature(struct key *dest_keyring,
const struct key_type *type, const struct key_type *type,
const union key_payload *payload); const union key_payload *payload,
struct key *trust_keyring);
extern int restrict_link_by_key_or_keyring(struct key *dest_keyring,
const struct key_type *type,
const union key_payload *payload,
struct key *trusted);
extern int restrict_link_by_key_or_keyring_chain(struct key *trust_keyring,
const struct key_type *type,
const union key_payload *payload,
struct key *trusted);
extern int verify_signature(const struct key *key, extern int verify_signature(const struct key *key,
const struct public_key_signature *sig); const struct public_key_signature *sig);
......
...@@ -18,7 +18,8 @@ ...@@ -18,7 +18,8 @@
extern int restrict_link_by_builtin_trusted(struct key *keyring, extern int restrict_link_by_builtin_trusted(struct key *keyring,
const struct key_type *type, const struct key_type *type,
const union key_payload *payload); const union key_payload *payload,
struct key *restriction_key);
#else #else
#define restrict_link_by_builtin_trusted restrict_link_reject #define restrict_link_by_builtin_trusted restrict_link_reject
...@@ -28,11 +29,24 @@ extern int restrict_link_by_builtin_trusted(struct key *keyring, ...@@ -28,11 +29,24 @@ extern int restrict_link_by_builtin_trusted(struct key *keyring,
extern int restrict_link_by_builtin_and_secondary_trusted( extern int restrict_link_by_builtin_and_secondary_trusted(
struct key *keyring, struct key *keyring,
const struct key_type *type, const struct key_type *type,
const union key_payload *payload); const union key_payload *payload,
struct key *restriction_key);
#else #else
#define restrict_link_by_builtin_and_secondary_trusted restrict_link_by_builtin_trusted #define restrict_link_by_builtin_and_secondary_trusted restrict_link_by_builtin_trusted
#endif #endif
#ifdef CONFIG_SYSTEM_BLACKLIST_KEYRING
extern int mark_hash_blacklisted(const char *hash);
extern int is_hash_blacklisted(const u8 *hash, size_t hash_len,
const char *type);
#else
static inline int is_hash_blacklisted(const u8 *hash, size_t hash_len,
const char *type)
{
return 0;
}
#endif
#ifdef CONFIG_IMA_BLACKLIST_KEYRING #ifdef CONFIG_IMA_BLACKLIST_KEYRING
extern struct key *ima_blacklist_keyring; extern struct key *ima_blacklist_keyring;
......
...@@ -295,6 +295,13 @@ struct compat_old_sigaction { ...@@ -295,6 +295,13 @@ struct compat_old_sigaction {
}; };
#endif #endif
struct compat_keyctl_kdf_params {
compat_uptr_t hashname;
compat_uptr_t otherinfo;
__u32 otherinfolen;
__u32 __spare[8];
};
struct compat_statfs; struct compat_statfs;
struct compat_statfs64; struct compat_statfs64;
struct compat_old_linux_dirent; struct compat_old_linux_dirent;
......
...@@ -219,6 +219,12 @@ extern struct cred init_cred; ...@@ -219,6 +219,12 @@ extern struct cred init_cred;
# define INIT_TASK_TI(tsk) # define INIT_TASK_TI(tsk)
#endif #endif
#ifdef CONFIG_SECURITY
#define INIT_TASK_SECURITY .security = NULL,
#else
#define INIT_TASK_SECURITY
#endif
/* /*
* INIT_TASK is used to set up the first task table, touch at * INIT_TASK is used to set up the first task table, touch at
* your own risk!. Base=0, limit=0x1fffff (=2MB) * your own risk!. Base=0, limit=0x1fffff (=2MB)
...@@ -298,6 +304,7 @@ extern struct cred init_cred; ...@@ -298,6 +304,7 @@ extern struct cred init_cred;
INIT_NUMA_BALANCING(tsk) \ INIT_NUMA_BALANCING(tsk) \
INIT_KASAN(tsk) \ INIT_KASAN(tsk) \
INIT_LIVEPATCH(tsk) \ INIT_LIVEPATCH(tsk) \
INIT_TASK_SECURITY \
} }
......
...@@ -147,6 +147,14 @@ struct key_type { ...@@ -147,6 +147,14 @@ struct key_type {
*/ */
request_key_actor_t request_key; request_key_actor_t request_key;
/* Look up a keyring access restriction (optional)
*
* - NULL is a valid return value (meaning the requested restriction
* is known but will never block addition of a key)
* - should return -EINVAL if the restriction is unknown
*/
struct key_restriction *(*lookup_restriction)(const char *params);
/* internal fields */ /* internal fields */
struct list_head link; /* link in types list */ struct list_head link; /* link in types list */
struct lock_class_key lock_class; /* key->sem lock class */ struct lock_class_key lock_class; /* key->sem lock class */
......
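A hedged sketch of what a key type's lookup_restriction method might look like; the key type and its single "reject" keyword are invented, only the contract from the comment above (an allocated struct key_restriction for a known restriction, -EINVAL otherwise) is taken as given.

/* Sketch only: a hypothetical key type's lookup_restriction method.  The
 * "reject" keyword is invented; the error convention follows the comment
 * above (unknown restriction -> -EINVAL).
 */
static struct key_restriction *example_lookup_restriction(const char *params)
{
	struct key_restriction *restriction;

	if (!params || strcmp(params, "reject") != 0)
		return ERR_PTR(-EINVAL);	/* unknown restriction */

	restriction = kzalloc(sizeof(*restriction), GFP_KERNEL);
	if (!restriction)
		return ERR_PTR(-ENOMEM);

	restriction->check = restrict_link_reject;	/* never allow links */
	return restriction;
}

The method would then be wired up in the type definition as .lookup_restriction = example_lookup_restriction.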
...@@ -23,6 +23,7 @@ ...@@ -23,6 +23,7 @@
#include <linux/rwsem.h> #include <linux/rwsem.h>
#include <linux/atomic.h> #include <linux/atomic.h>
#include <linux/assoc_array.h> #include <linux/assoc_array.h>
#include <linux/refcount.h>
#ifdef __KERNEL__ #ifdef __KERNEL__
#include <linux/uidgid.h> #include <linux/uidgid.h>
...@@ -126,6 +127,17 @@ static inline bool is_key_possessed(const key_ref_t key_ref) ...@@ -126,6 +127,17 @@ static inline bool is_key_possessed(const key_ref_t key_ref)
return (unsigned long) key_ref & 1UL; return (unsigned long) key_ref & 1UL;
} }
typedef int (*key_restrict_link_func_t)(struct key *dest_keyring,
const struct key_type *type,
const union key_payload *payload,
struct key *restriction_key);
struct key_restriction {
key_restrict_link_func_t check;
struct key *key;
struct key_type *keytype;
};
/*****************************************************************************/ /*****************************************************************************/
/* /*
* authentication token / access credential / keyring * authentication token / access credential / keyring
...@@ -135,7 +147,7 @@ static inline bool is_key_possessed(const key_ref_t key_ref) ...@@ -135,7 +147,7 @@ static inline bool is_key_possessed(const key_ref_t key_ref)
* - Kerberos TGTs and tickets * - Kerberos TGTs and tickets
*/ */
struct key { struct key {
atomic_t usage; /* number of references */ refcount_t usage; /* number of references */
key_serial_t serial; /* key serial number */ key_serial_t serial; /* key serial number */
union { union {
struct list_head graveyard_link; struct list_head graveyard_link;
...@@ -205,18 +217,17 @@ struct key { ...@@ -205,18 +217,17 @@ struct key {
}; };
/* This is set on a keyring to restrict the addition of a link to a key /* This is set on a keyring to restrict the addition of a link to a key
* to it. If this method isn't provided then it is assumed that the * to it. If this structure isn't provided then it is assumed that the
* keyring is open to any addition. It is ignored for non-keyring * keyring is open to any addition. It is ignored for non-keyring
* keys. * keys. Only set this value using keyring_restrict(), keyring_alloc(),
* or key_alloc().
* *
* This is intended for use with rings of trusted keys whereby addition * This is intended for use with rings of trusted keys whereby addition
* to the keyring needs to be controlled. KEY_ALLOC_BYPASS_RESTRICTION * to the keyring needs to be controlled. KEY_ALLOC_BYPASS_RESTRICTION
* overrides this, allowing the kernel to add extra keys without * overrides this, allowing the kernel to add extra keys without
* restriction. * restriction.
*/ */
int (*restrict_link)(struct key *keyring, struct key_restriction *restrict_link;
const struct key_type *type,
const union key_payload *payload);
}; };
extern struct key *key_alloc(struct key_type *type, extern struct key *key_alloc(struct key_type *type,
...@@ -225,9 +236,7 @@ extern struct key *key_alloc(struct key_type *type, ...@@ -225,9 +236,7 @@ extern struct key *key_alloc(struct key_type *type,
const struct cred *cred, const struct cred *cred,
key_perm_t perm, key_perm_t perm,
unsigned long flags, unsigned long flags,
int (*restrict_link)(struct key *, struct key_restriction *restrict_link);
const struct key_type *,
const union key_payload *));
#define KEY_ALLOC_IN_QUOTA 0x0000 /* add to quota, reject if would overrun */ #define KEY_ALLOC_IN_QUOTA 0x0000 /* add to quota, reject if would overrun */
...@@ -242,7 +251,7 @@ extern void key_put(struct key *key); ...@@ -242,7 +251,7 @@ extern void key_put(struct key *key);
static inline struct key *__key_get(struct key *key) static inline struct key *__key_get(struct key *key)
{ {
atomic_inc(&key->usage); refcount_inc(&key->usage);
return key; return key;
} }
...@@ -303,14 +312,13 @@ extern struct key *keyring_alloc(const char *description, kuid_t uid, kgid_t gid ...@@ -303,14 +312,13 @@ extern struct key *keyring_alloc(const char *description, kuid_t uid, kgid_t gid
const struct cred *cred, const struct cred *cred,
key_perm_t perm, key_perm_t perm,
unsigned long flags, unsigned long flags,
int (*restrict_link)(struct key *, struct key_restriction *restrict_link,
const struct key_type *,
const union key_payload *),
struct key *dest); struct key *dest);
extern int restrict_link_reject(struct key *keyring, extern int restrict_link_reject(struct key *keyring,
const struct key_type *type, const struct key_type *type,
const union key_payload *payload); const union key_payload *payload,
struct key *restriction_key);
extern int keyring_clear(struct key *keyring); extern int keyring_clear(struct key *keyring);
...@@ -321,6 +329,9 @@ extern key_ref_t keyring_search(key_ref_t keyring, ...@@ -321,6 +329,9 @@ extern key_ref_t keyring_search(key_ref_t keyring,
extern int keyring_add_key(struct key *keyring, extern int keyring_add_key(struct key *keyring,
struct key *key); struct key *key);
extern int keyring_restrict(key_ref_t keyring, const char *type,
const char *restriction);
extern struct key *key_lookup(key_serial_t id); extern struct key *key_lookup(key_serial_t id);
static inline key_serial_t key_serial(const struct key *key) static inline key_serial_t key_serial(const struct key *key)
......
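Since restrict_link is now a struct key_restriction pointer rather than a bare function pointer, kernel callers allocate the structure and hand it to keyring_alloc(). A minimal sketch, mirroring the integrity keyring changes later in this diff; the keyring name and permission mask are illustrative only.

/* Sketch: allocate a keyring that only accepts keys vouched for by the
 * builtin trusted keyring.  Name and permission mask are illustrative;
 * error unwinding of 'restriction' is elided for brevity.
 */
static struct key *example_restricted_keyring(void)
{
	struct key_restriction *restriction;

	restriction = kzalloc(sizeof(*restriction), GFP_KERNEL);
	if (!restriction)
		return ERR_PTR(-ENOMEM);
	restriction->check = restrict_link_by_builtin_trusted;

	return keyring_alloc(".example", KUIDT_INIT(0), KGIDT_INIT(0),
			     current_cred(),
			     (KEY_POS_ALL & ~KEY_POS_SETATTR) |
			     KEY_USR_VIEW | KEY_USR_READ | KEY_USR_SEARCH,
			     KEY_ALLOC_NOT_IN_QUOTA, restriction, NULL);
}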
...@@ -533,8 +533,13 @@ ...@@ -533,8 +533,13 @@
* manual page for definitions of the @clone_flags. * manual page for definitions of the @clone_flags.
* @clone_flags contains the flags indicating what should be shared. * @clone_flags contains the flags indicating what should be shared.
* Return 0 if permission is granted. * Return 0 if permission is granted.
* @task_alloc:
* @task task being allocated.
* @clone_flags contains the flags indicating what should be shared.
* Handle allocation of task-related resources.
* Returns a zero on success, negative values on failure.
* @task_free: * @task_free:
* @task task being freed * @task task about to be freed.
* Handle release of task-related resources. (Note that this can be called * Handle release of task-related resources. (Note that this can be called
* from interrupt context.) * from interrupt context.)
* @cred_alloc_blank: * @cred_alloc_blank:
...@@ -630,10 +635,19 @@ ...@@ -630,10 +635,19 @@
* Check permission before getting the ioprio value of @p. * Check permission before getting the ioprio value of @p.
* @p contains the task_struct of process. * @p contains the task_struct of process.
* Return 0 if permission is granted. * Return 0 if permission is granted.
* @task_prlimit:
* Check permission before getting and/or setting the resource limits of
* another task.
* @cred points to the cred structure for the current task.
* @tcred points to the cred structure for the target task.
* @flags contains the LSM_PRLIMIT_* flag bits indicating whether the
* resource limits are being read, modified, or both.
* Return 0 if permission is granted.
* @task_setrlimit: * @task_setrlimit:
* Check permission before setting the resource limits of the current * Check permission before setting the resource limits of process @p
* process for @resource to @new_rlim. The old resource limit values can * for @resource to @new_rlim. The old resource limit values can
* be examined by dereferencing (current->signal->rlim + resource). * be examined by dereferencing (p->signal->rlim + resource).
* @p points to the task_struct for the target task's group leader.
* @resource contains the resource whose limit is being set. * @resource contains the resource whose limit is being set.
* @new_rlim contains the new limits for @resource. * @new_rlim contains the new limits for @resource.
* Return 0 if permission is granted. * Return 0 if permission is granted.
...@@ -1473,6 +1487,7 @@ union security_list_options { ...@@ -1473,6 +1487,7 @@ union security_list_options {
int (*file_open)(struct file *file, const struct cred *cred); int (*file_open)(struct file *file, const struct cred *cred);
int (*task_create)(unsigned long clone_flags); int (*task_create)(unsigned long clone_flags);
int (*task_alloc)(struct task_struct *task, unsigned long clone_flags);
void (*task_free)(struct task_struct *task); void (*task_free)(struct task_struct *task);
int (*cred_alloc_blank)(struct cred *cred, gfp_t gfp); int (*cred_alloc_blank)(struct cred *cred, gfp_t gfp);
void (*cred_free)(struct cred *cred); void (*cred_free)(struct cred *cred);
...@@ -1494,6 +1509,8 @@ union security_list_options { ...@@ -1494,6 +1509,8 @@ union security_list_options {
int (*task_setnice)(struct task_struct *p, int nice); int (*task_setnice)(struct task_struct *p, int nice);
int (*task_setioprio)(struct task_struct *p, int ioprio); int (*task_setioprio)(struct task_struct *p, int ioprio);
int (*task_getioprio)(struct task_struct *p); int (*task_getioprio)(struct task_struct *p);
int (*task_prlimit)(const struct cred *cred, const struct cred *tcred,
unsigned int flags);
int (*task_setrlimit)(struct task_struct *p, unsigned int resource, int (*task_setrlimit)(struct task_struct *p, unsigned int resource,
struct rlimit *new_rlim); struct rlimit *new_rlim);
int (*task_setscheduler)(struct task_struct *p); int (*task_setscheduler)(struct task_struct *p);
...@@ -1737,6 +1754,7 @@ struct security_hook_heads { ...@@ -1737,6 +1754,7 @@ struct security_hook_heads {
struct list_head file_receive; struct list_head file_receive;
struct list_head file_open; struct list_head file_open;
struct list_head task_create; struct list_head task_create;
struct list_head task_alloc;
struct list_head task_free; struct list_head task_free;
struct list_head cred_alloc_blank; struct list_head cred_alloc_blank;
struct list_head cred_free; struct list_head cred_free;
...@@ -1755,6 +1773,7 @@ struct security_hook_heads { ...@@ -1755,6 +1773,7 @@ struct security_hook_heads {
struct list_head task_setnice; struct list_head task_setnice;
struct list_head task_setioprio; struct list_head task_setioprio;
struct list_head task_getioprio; struct list_head task_getioprio;
struct list_head task_prlimit;
struct list_head task_setrlimit; struct list_head task_setrlimit;
struct list_head task_setscheduler; struct list_head task_setscheduler;
struct list_head task_getscheduler; struct list_head task_getscheduler;
...@@ -1908,6 +1927,13 @@ static inline void security_delete_hooks(struct security_hook_list *hooks, ...@@ -1908,6 +1927,13 @@ static inline void security_delete_hooks(struct security_hook_list *hooks,
} }
#endif /* CONFIG_SECURITY_SELINUX_DISABLE */ #endif /* CONFIG_SECURITY_SELINUX_DISABLE */
/* Currently required to handle SELinux runtime hook disable. */
#ifdef CONFIG_SECURITY_WRITABLE_HOOKS
#define __lsm_ro_after_init
#else
#define __lsm_ro_after_init __ro_after_init
#endif /* CONFIG_SECURITY_WRITABLE_HOOKS */
extern int __init security_module_enable(const char *module); extern int __init security_module_enable(const char *module);
extern void __init capability_add_hooks(void); extern void __init capability_add_hooks(void);
#ifdef CONFIG_SECURITY_YAMA #ifdef CONFIG_SECURITY_YAMA
......
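A sketch of how a small LSM might use the revived per-task security blob together with the new task_alloc, task_free and task_prlimit hooks; every example_* name is hypothetical.

/* Sketch of a tiny LSM using the per-task blob plus the new hooks; all
 * example_* names are hypothetical.
 */
static int example_task_alloc(struct task_struct *task,
			      unsigned long clone_flags)
{
	task->security = kzalloc(sizeof(u32), GFP_KERNEL);
	return task->security ? 0 : -ENOMEM;
}

static void example_task_free(struct task_struct *task)
{
	kfree(task->security);		/* may run from interrupt context */
	task->security = NULL;
}

static int example_task_prlimit(const struct cred *cred,
				const struct cred *tcred,
				unsigned int flags)
{
	/* Police only attempts to modify another task's limits. */
	if (flags & LSM_PRLIMIT_WRITE)
		return capable(CAP_SYS_RESOURCE) ? 0 : -EPERM;
	return 0;
}

static struct security_hook_list example_hooks[] __lsm_ro_after_init = {
	LSM_HOOK_INIT(task_alloc,   example_task_alloc),
	LSM_HOOK_INIT(task_free,    example_task_free),
	LSM_HOOK_INIT(task_prlimit, example_task_prlimit),
};

At init time the array would be handed to security_add_hooks(example_hooks, ARRAY_SIZE(example_hooks), "example"), assuming the three-argument registration form used by the in-tree modules in this series.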
...@@ -1046,6 +1046,10 @@ struct task_struct { ...@@ -1046,6 +1046,10 @@ struct task_struct {
#endif #endif
#ifdef CONFIG_LIVEPATCH #ifdef CONFIG_LIVEPATCH
int patch_state; int patch_state;
#endif
#ifdef CONFIG_SECURITY
/* Used by LSM modules for access restriction: */
void *security;
#endif #endif
/* CPU-specific state of this task: */ /* CPU-specific state of this task: */
struct thread_struct thread; struct thread_struct thread;
......
...@@ -133,6 +133,10 @@ extern unsigned long dac_mmap_min_addr; ...@@ -133,6 +133,10 @@ extern unsigned long dac_mmap_min_addr;
/* setfsuid or setfsgid, id0 == fsuid or fsgid */ /* setfsuid or setfsgid, id0 == fsuid or fsgid */
#define LSM_SETID_FS 8 #define LSM_SETID_FS 8
/* Flags for security_task_prlimit(). */
#define LSM_PRLIMIT_READ 1
#define LSM_PRLIMIT_WRITE 2
/* forward declares to avoid warnings */ /* forward declares to avoid warnings */
struct sched_param; struct sched_param;
struct request_sock; struct request_sock;
...@@ -304,6 +308,7 @@ int security_file_send_sigiotask(struct task_struct *tsk, ...@@ -304,6 +308,7 @@ int security_file_send_sigiotask(struct task_struct *tsk,
int security_file_receive(struct file *file); int security_file_receive(struct file *file);
int security_file_open(struct file *file, const struct cred *cred); int security_file_open(struct file *file, const struct cred *cred);
int security_task_create(unsigned long clone_flags); int security_task_create(unsigned long clone_flags);
int security_task_alloc(struct task_struct *task, unsigned long clone_flags);
void security_task_free(struct task_struct *task); void security_task_free(struct task_struct *task);
int security_cred_alloc_blank(struct cred *cred, gfp_t gfp); int security_cred_alloc_blank(struct cred *cred, gfp_t gfp);
void security_cred_free(struct cred *cred); void security_cred_free(struct cred *cred);
...@@ -324,6 +329,8 @@ void security_task_getsecid(struct task_struct *p, u32 *secid); ...@@ -324,6 +329,8 @@ void security_task_getsecid(struct task_struct *p, u32 *secid);
int security_task_setnice(struct task_struct *p, int nice); int security_task_setnice(struct task_struct *p, int nice);
int security_task_setioprio(struct task_struct *p, int ioprio); int security_task_setioprio(struct task_struct *p, int ioprio);
int security_task_getioprio(struct task_struct *p); int security_task_getioprio(struct task_struct *p);
int security_task_prlimit(const struct cred *cred, const struct cred *tcred,
unsigned int flags);
int security_task_setrlimit(struct task_struct *p, unsigned int resource, int security_task_setrlimit(struct task_struct *p, unsigned int resource,
struct rlimit *new_rlim); struct rlimit *new_rlim);
int security_task_setscheduler(struct task_struct *p); int security_task_setscheduler(struct task_struct *p);
...@@ -855,6 +862,12 @@ static inline int security_task_create(unsigned long clone_flags) ...@@ -855,6 +862,12 @@ static inline int security_task_create(unsigned long clone_flags)
return 0; return 0;
} }
static inline int security_task_alloc(struct task_struct *task,
unsigned long clone_flags)
{
return 0;
}
static inline void security_task_free(struct task_struct *task) static inline void security_task_free(struct task_struct *task)
{ } { }
...@@ -949,6 +962,13 @@ static inline int security_task_getioprio(struct task_struct *p) ...@@ -949,6 +962,13 @@ static inline int security_task_getioprio(struct task_struct *p)
return 0; return 0;
} }
static inline int security_task_prlimit(const struct cred *cred,
const struct cred *tcred,
unsigned int flags)
{
return 0;
}
static inline int security_task_setrlimit(struct task_struct *p, static inline int security_task_setrlimit(struct task_struct *p,
unsigned int resource, unsigned int resource,
struct rlimit *new_rlim) struct rlimit *new_rlim)
......
...@@ -48,7 +48,8 @@ struct tpm_class_ops { ...@@ -48,7 +48,8 @@ struct tpm_class_ops {
u8 (*status) (struct tpm_chip *chip); u8 (*status) (struct tpm_chip *chip);
bool (*update_timeouts)(struct tpm_chip *chip, bool (*update_timeouts)(struct tpm_chip *chip,
unsigned long *timeout_cap); unsigned long *timeout_cap);
int (*request_locality)(struct tpm_chip *chip, int loc);
void (*relinquish_locality)(struct tpm_chip *chip, int loc);
}; };
#if defined(CONFIG_TCG_TPM) || defined(CONFIG_TCG_TPM_MODULE) #if defined(CONFIG_TCG_TPM) || defined(CONFIG_TCG_TPM_MODULE)
......
...@@ -60,6 +60,7 @@ ...@@ -60,6 +60,7 @@
#define KEYCTL_INVALIDATE 21 /* invalidate a key */ #define KEYCTL_INVALIDATE 21 /* invalidate a key */
#define KEYCTL_GET_PERSISTENT 22 /* get a user's persistent keyring */ #define KEYCTL_GET_PERSISTENT 22 /* get a user's persistent keyring */
#define KEYCTL_DH_COMPUTE 23 /* Compute Diffie-Hellman values */ #define KEYCTL_DH_COMPUTE 23 /* Compute Diffie-Hellman values */
#define KEYCTL_RESTRICT_KEYRING 29 /* Restrict keys allowed to link to a keyring */
/* keyctl structures */ /* keyctl structures */
struct keyctl_dh_params { struct keyctl_dh_params {
...@@ -68,4 +69,11 @@ struct keyctl_dh_params { ...@@ -68,4 +69,11 @@ struct keyctl_dh_params {
__s32 base; __s32 base;
}; };
struct keyctl_kdf_params {
char *hashname;
char *otherinfo;
__u32 otherinfolen;
__u32 __spare[8];
};
#endif /* _LINUX_KEYCTL_H */ #endif /* _LINUX_KEYCTL_H */
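A userspace sketch of the new KEYCTL_RESTRICT_KEYRING operation applied to a freshly created keyring; the key serial in the restriction string and the keyring name are placeholders, and "builtin_trusted" or "builtin_and_secondary_trusted" could be used in place of the key_or_keyring form.

/* Sketch: restrict a fresh keyring so that new links must be vouched for by
 * a designated asymmetric key.  Serial and keyring name are placeholders.
 */
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#define KEYCTL_RESTRICT_KEYRING		29
#define KEY_SPEC_SESSION_KEYRING	-3

int main(void)
{
	long ring, ret;

	ring = syscall(SYS_add_key, "keyring", "example-ring", NULL, 0,
		       KEY_SPEC_SESSION_KEYRING);
	if (ring < 0) {
		perror("add_key");
		return 1;
	}

	/* Accept only keys signed by key serial 123456789 (placeholder);
	 * the ":chain" suffix also trusts keys already linked through it.
	 */
	ret = syscall(SYS_keyctl, KEYCTL_RESTRICT_KEYRING, ring,
		      "asymmetric", "key_or_keyring:123456789:chain");
	if (ret < 0)
		perror("keyctl restrict");
	return ret < 0;
}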
...@@ -1681,9 +1681,12 @@ static __latent_entropy struct task_struct *copy_process( ...@@ -1681,9 +1681,12 @@ static __latent_entropy struct task_struct *copy_process(
goto bad_fork_cleanup_perf; goto bad_fork_cleanup_perf;
/* copy all the process information */ /* copy all the process information */
shm_init_task(p); shm_init_task(p);
retval = copy_semundo(clone_flags, p); retval = security_task_alloc(p, clone_flags);
if (retval) if (retval)
goto bad_fork_cleanup_audit; goto bad_fork_cleanup_audit;
retval = copy_semundo(clone_flags, p);
if (retval)
goto bad_fork_cleanup_security;
retval = copy_files(clone_flags, p); retval = copy_files(clone_flags, p);
if (retval) if (retval)
goto bad_fork_cleanup_semundo; goto bad_fork_cleanup_semundo;
...@@ -1907,6 +1910,8 @@ static __latent_entropy struct task_struct *copy_process( ...@@ -1907,6 +1910,8 @@ static __latent_entropy struct task_struct *copy_process(
exit_files(p); /* blocking */ exit_files(p); /* blocking */
bad_fork_cleanup_semundo: bad_fork_cleanup_semundo:
exit_sem(p); exit_sem(p);
bad_fork_cleanup_security:
security_task_free(p);
bad_fork_cleanup_audit: bad_fork_cleanup_audit:
audit_free(p); audit_free(p);
bad_fork_cleanup_perf: bad_fork_cleanup_perf:
......
...@@ -1432,25 +1432,26 @@ int do_prlimit(struct task_struct *tsk, unsigned int resource, ...@@ -1432,25 +1432,26 @@ int do_prlimit(struct task_struct *tsk, unsigned int resource,
} }
/* rcu lock must be held */ /* rcu lock must be held */
static int check_prlimit_permission(struct task_struct *task) static int check_prlimit_permission(struct task_struct *task,
unsigned int flags)
{ {
const struct cred *cred = current_cred(), *tcred; const struct cred *cred = current_cred(), *tcred;
bool id_match;
if (current == task) if (current == task)
return 0; return 0;
tcred = __task_cred(task); tcred = __task_cred(task);
if (uid_eq(cred->uid, tcred->euid) && id_match = (uid_eq(cred->uid, tcred->euid) &&
uid_eq(cred->uid, tcred->suid) && uid_eq(cred->uid, tcred->suid) &&
uid_eq(cred->uid, tcred->uid) && uid_eq(cred->uid, tcred->uid) &&
gid_eq(cred->gid, tcred->egid) && gid_eq(cred->gid, tcred->egid) &&
gid_eq(cred->gid, tcred->sgid) && gid_eq(cred->gid, tcred->sgid) &&
gid_eq(cred->gid, tcred->gid)) gid_eq(cred->gid, tcred->gid));
return 0; if (!id_match && !ns_capable(tcred->user_ns, CAP_SYS_RESOURCE))
if (ns_capable(tcred->user_ns, CAP_SYS_RESOURCE)) return -EPERM;
return 0;
return -EPERM; return security_task_prlimit(cred, tcred, flags);
} }
SYSCALL_DEFINE4(prlimit64, pid_t, pid, unsigned int, resource, SYSCALL_DEFINE4(prlimit64, pid_t, pid, unsigned int, resource,
...@@ -1460,12 +1461,17 @@ SYSCALL_DEFINE4(prlimit64, pid_t, pid, unsigned int, resource, ...@@ -1460,12 +1461,17 @@ SYSCALL_DEFINE4(prlimit64, pid_t, pid, unsigned int, resource,
struct rlimit64 old64, new64; struct rlimit64 old64, new64;
struct rlimit old, new; struct rlimit old, new;
struct task_struct *tsk; struct task_struct *tsk;
unsigned int checkflags = 0;
int ret; int ret;
if (old_rlim)
checkflags |= LSM_PRLIMIT_READ;
if (new_rlim) { if (new_rlim) {
if (copy_from_user(&new64, new_rlim, sizeof(new64))) if (copy_from_user(&new64, new_rlim, sizeof(new64)))
return -EFAULT; return -EFAULT;
rlim64_to_rlim(&new64, &new); rlim64_to_rlim(&new64, &new);
checkflags |= LSM_PRLIMIT_WRITE;
} }
rcu_read_lock(); rcu_read_lock();
...@@ -1474,7 +1480,7 @@ SYSCALL_DEFINE4(prlimit64, pid_t, pid, unsigned int, resource, ...@@ -1474,7 +1480,7 @@ SYSCALL_DEFINE4(prlimit64, pid_t, pid, unsigned int, resource,
rcu_read_unlock(); rcu_read_unlock();
return -ESRCH; return -ESRCH;
} }
ret = check_prlimit_permission(tsk); ret = check_prlimit_permission(tsk, checkflags);
if (ret) { if (ret) {
rcu_read_unlock(); rcu_read_unlock();
return ret; return ret;
......
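The flag computation above means a pure query passes only LSM_PRLIMIT_READ to the new hook, while supplying a new limit adds LSM_PRLIMIT_WRITE. From userspace, through the glibc prlimit() wrapper (target pid and limit values are placeholders):

/* Sketch: which LSM_PRLIMIT_* flags a prlimit call generates for the hook. */
#define _GNU_SOURCE
#include <sys/types.h>
#include <sys/resource.h>

void example(pid_t target)
{
	struct rlimit old, new = { .rlim_cur = 1024, .rlim_max = 4096 };

	prlimit(target, RLIMIT_NOFILE, NULL, &old);	/* LSM_PRLIMIT_READ */
	prlimit(target, RLIMIT_NOFILE, &new, NULL);	/* LSM_PRLIMIT_WRITE */
	prlimit(target, RLIMIT_NOFILE, &new, &old);	/* READ | WRITE */
}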
...@@ -8,6 +8,7 @@ ...@@ -8,6 +8,7 @@
#include <string.h> #include <string.h>
#include <errno.h> #include <errno.h>
#include <ctype.h> #include <ctype.h>
#include <sys/socket.h>
struct security_class_mapping { struct security_class_mapping {
const char *name; const char *name;
......
...@@ -32,6 +32,7 @@ ...@@ -32,6 +32,7 @@
#include <stdlib.h> #include <stdlib.h>
#include <unistd.h> #include <unistd.h>
#include <string.h> #include <string.h>
#include <sys/socket.h>
static void usage(char *name) static void usage(char *name)
{ {
......
...@@ -31,6 +31,11 @@ config SECURITY ...@@ -31,6 +31,11 @@ config SECURITY
If you are unsure how to answer this question, answer N. If you are unsure how to answer this question, answer N.
config SECURITY_WRITABLE_HOOKS
depends on SECURITY
bool
default n
config SECURITYFS config SECURITYFS
bool "Enable the securityfs filesystem" bool "Enable the securityfs filesystem"
help help
......
...@@ -31,10 +31,7 @@ unsigned int aa_hash_size(void) ...@@ -31,10 +31,7 @@ unsigned int aa_hash_size(void)
char *aa_calc_hash(void *data, size_t len) char *aa_calc_hash(void *data, size_t len)
{ {
struct { SHASH_DESC_ON_STACK(desc, apparmor_tfm);
struct shash_desc shash;
char ctx[crypto_shash_descsize(apparmor_tfm)];
} desc;
char *hash = NULL; char *hash = NULL;
int error = -ENOMEM; int error = -ENOMEM;
...@@ -45,16 +42,16 @@ char *aa_calc_hash(void *data, size_t len) ...@@ -45,16 +42,16 @@ char *aa_calc_hash(void *data, size_t len)
if (!hash) if (!hash)
goto fail; goto fail;
desc.shash.tfm = apparmor_tfm; desc->tfm = apparmor_tfm;
desc.shash.flags = 0; desc->flags = 0;
error = crypto_shash_init(&desc.shash); error = crypto_shash_init(desc);
if (error) if (error)
goto fail; goto fail;
error = crypto_shash_update(&desc.shash, (u8 *) data, len); error = crypto_shash_update(desc, (u8 *) data, len);
if (error) if (error)
goto fail; goto fail;
error = crypto_shash_final(&desc.shash, hash); error = crypto_shash_final(desc, hash);
if (error) if (error)
goto fail; goto fail;
...@@ -69,10 +66,7 @@ char *aa_calc_hash(void *data, size_t len) ...@@ -69,10 +66,7 @@ char *aa_calc_hash(void *data, size_t len)
int aa_calc_profile_hash(struct aa_profile *profile, u32 version, void *start, int aa_calc_profile_hash(struct aa_profile *profile, u32 version, void *start,
size_t len) size_t len)
{ {
struct { SHASH_DESC_ON_STACK(desc, apparmor_tfm);
struct shash_desc shash;
char ctx[crypto_shash_descsize(apparmor_tfm)];
} desc;
int error = -ENOMEM; int error = -ENOMEM;
__le32 le32_version = cpu_to_le32(version); __le32 le32_version = cpu_to_le32(version);
...@@ -86,19 +80,19 @@ int aa_calc_profile_hash(struct aa_profile *profile, u32 version, void *start, ...@@ -86,19 +80,19 @@ int aa_calc_profile_hash(struct aa_profile *profile, u32 version, void *start,
if (!profile->hash) if (!profile->hash)
goto fail; goto fail;
desc.shash.tfm = apparmor_tfm; desc->tfm = apparmor_tfm;
desc.shash.flags = 0; desc->flags = 0;
error = crypto_shash_init(&desc.shash); error = crypto_shash_init(desc);
if (error) if (error)
goto fail; goto fail;
error = crypto_shash_update(&desc.shash, (u8 *) &le32_version, 4); error = crypto_shash_update(desc, (u8 *) &le32_version, 4);
if (error) if (error)
goto fail; goto fail;
error = crypto_shash_update(&desc.shash, (u8 *) start, len); error = crypto_shash_update(desc, (u8 *) start, len);
if (error) if (error)
goto fail; goto fail;
error = crypto_shash_final(&desc.shash, profile->hash); error = crypto_shash_final(desc, profile->hash);
if (error) if (error)
goto fail; goto fail;
......
...@@ -57,7 +57,7 @@ ...@@ -57,7 +57,7 @@
pr_err_ratelimited("AppArmor: " fmt, ##args) pr_err_ratelimited("AppArmor: " fmt, ##args)
/* Flag indicating whether initialization completed */ /* Flag indicating whether initialization completed */
extern int apparmor_initialized __initdata; extern int apparmor_initialized;
/* fn's in lib */ /* fn's in lib */
char *aa_split_fqname(char *args, char **ns_name); char *aa_split_fqname(char *args, char **ns_name);
......
...@@ -180,13 +180,13 @@ bool aa_policy_init(struct aa_policy *policy, const char *prefix, ...@@ -180,13 +180,13 @@ bool aa_policy_init(struct aa_policy *policy, const char *prefix,
} else } else
policy->hname = kstrdup(name, gfp); policy->hname = kstrdup(name, gfp);
if (!policy->hname) if (!policy->hname)
return 0; return false;
/* base.name is a substring of fqname */ /* base.name is a substring of fqname */
policy->name = basename(policy->hname); policy->name = basename(policy->hname);
INIT_LIST_HEAD(&policy->list); INIT_LIST_HEAD(&policy->list);
INIT_LIST_HEAD(&policy->profiles); INIT_LIST_HEAD(&policy->profiles);
return 1; return true;
} }
/** /**
......
...@@ -39,7 +39,7 @@ ...@@ -39,7 +39,7 @@
#include "include/procattr.h" #include "include/procattr.h"
/* Flag indicating whether initialization completed */ /* Flag indicating whether initialization completed */
int apparmor_initialized __initdata; int apparmor_initialized;
DEFINE_PER_CPU(struct aa_buffers, aa_buffers); DEFINE_PER_CPU(struct aa_buffers, aa_buffers);
...@@ -587,7 +587,7 @@ static int apparmor_task_setrlimit(struct task_struct *task, ...@@ -587,7 +587,7 @@ static int apparmor_task_setrlimit(struct task_struct *task,
return error; return error;
} }
static struct security_hook_list apparmor_hooks[] = { static struct security_hook_list apparmor_hooks[] __lsm_ro_after_init = {
LSM_HOOK_INIT(ptrace_access_check, apparmor_ptrace_access_check), LSM_HOOK_INIT(ptrace_access_check, apparmor_ptrace_access_check),
LSM_HOOK_INIT(ptrace_traceme, apparmor_ptrace_traceme), LSM_HOOK_INIT(ptrace_traceme, apparmor_ptrace_traceme),
LSM_HOOK_INIT(capget, apparmor_capget), LSM_HOOK_INIT(capget, apparmor_capget),
...@@ -681,7 +681,7 @@ module_param_named(hash_policy, aa_g_hash_policy, aabool, S_IRUSR | S_IWUSR); ...@@ -681,7 +681,7 @@ module_param_named(hash_policy, aa_g_hash_policy, aabool, S_IRUSR | S_IWUSR);
#endif #endif
/* Debug mode */ /* Debug mode */
bool aa_g_debug = IS_ENABLED(CONFIG_SECURITY_DEBUG_MESSAGES); bool aa_g_debug = IS_ENABLED(CONFIG_SECURITY_APPARMOR_DEBUG_MESSAGES);
module_param_named(debug, aa_g_debug, aabool, S_IRUSR | S_IWUSR); module_param_named(debug, aa_g_debug, aabool, S_IRUSR | S_IWUSR);
/* Audit mode */ /* Audit mode */
...@@ -710,7 +710,7 @@ module_param_named(logsyscall, aa_g_logsyscall, aabool, S_IRUSR | S_IWUSR); ...@@ -710,7 +710,7 @@ module_param_named(logsyscall, aa_g_logsyscall, aabool, S_IRUSR | S_IWUSR);
/* Maximum pathname length before accesses will start getting rejected */ /* Maximum pathname length before accesses will start getting rejected */
unsigned int aa_g_path_max = 2 * PATH_MAX; unsigned int aa_g_path_max = 2 * PATH_MAX;
module_param_named(path_max, aa_g_path_max, aauint, S_IRUSR | S_IWUSR); module_param_named(path_max, aa_g_path_max, aauint, S_IRUSR);
/* Determines how paranoid loading of policy is and how much verification /* Determines how paranoid loading of policy is and how much verification
* on the loaded policy is done. * on the loaded policy is done.
...@@ -738,78 +738,77 @@ __setup("apparmor=", apparmor_enabled_setup); ...@@ -738,78 +738,77 @@ __setup("apparmor=", apparmor_enabled_setup);
/* set global flag turning off the ability to load policy */ /* set global flag turning off the ability to load policy */
static int param_set_aalockpolicy(const char *val, const struct kernel_param *kp) static int param_set_aalockpolicy(const char *val, const struct kernel_param *kp)
{ {
if (!policy_admin_capable(NULL)) if (!apparmor_enabled)
return -EINVAL;
if (apparmor_initialized && !policy_admin_capable(NULL))
return -EPERM; return -EPERM;
return param_set_bool(val, kp); return param_set_bool(val, kp);
} }
static int param_get_aalockpolicy(char *buffer, const struct kernel_param *kp) static int param_get_aalockpolicy(char *buffer, const struct kernel_param *kp)
{ {
if (!policy_view_capable(NULL))
return -EPERM;
if (!apparmor_enabled) if (!apparmor_enabled)
return -EINVAL; return -EINVAL;
if (apparmor_initialized && !policy_view_capable(NULL))
return -EPERM;
return param_get_bool(buffer, kp); return param_get_bool(buffer, kp);
} }
static int param_set_aabool(const char *val, const struct kernel_param *kp) static int param_set_aabool(const char *val, const struct kernel_param *kp)
{ {
if (!policy_admin_capable(NULL))
return -EPERM;
if (!apparmor_enabled) if (!apparmor_enabled)
return -EINVAL; return -EINVAL;
if (apparmor_initialized && !policy_admin_capable(NULL))
return -EPERM;
return param_set_bool(val, kp); return param_set_bool(val, kp);
} }
static int param_get_aabool(char *buffer, const struct kernel_param *kp) static int param_get_aabool(char *buffer, const struct kernel_param *kp)
{ {
if (!policy_view_capable(NULL))
return -EPERM;
if (!apparmor_enabled) if (!apparmor_enabled)
return -EINVAL; return -EINVAL;
if (apparmor_initialized && !policy_view_capable(NULL))
return -EPERM;
return param_get_bool(buffer, kp); return param_get_bool(buffer, kp);
} }
static int param_set_aauint(const char *val, const struct kernel_param *kp) static int param_set_aauint(const char *val, const struct kernel_param *kp)
{ {
if (!policy_admin_capable(NULL))
return -EPERM;
if (!apparmor_enabled) if (!apparmor_enabled)
return -EINVAL; return -EINVAL;
if (apparmor_initialized && !policy_admin_capable(NULL))
return -EPERM;
return param_set_uint(val, kp); return param_set_uint(val, kp);
} }
static int param_get_aauint(char *buffer, const struct kernel_param *kp) static int param_get_aauint(char *buffer, const struct kernel_param *kp)
{ {
if (!policy_view_capable(NULL))
return -EPERM;
if (!apparmor_enabled) if (!apparmor_enabled)
return -EINVAL; return -EINVAL;
if (apparmor_initialized && !policy_view_capable(NULL))
return -EPERM;
return param_get_uint(buffer, kp); return param_get_uint(buffer, kp);
} }
static int param_get_audit(char *buffer, struct kernel_param *kp) static int param_get_audit(char *buffer, struct kernel_param *kp)
{ {
if (!policy_view_capable(NULL))
return -EPERM;
if (!apparmor_enabled) if (!apparmor_enabled)
return -EINVAL; return -EINVAL;
if (apparmor_initialized && !policy_view_capable(NULL))
return -EPERM;
return sprintf(buffer, "%s", audit_mode_names[aa_g_audit]); return sprintf(buffer, "%s", audit_mode_names[aa_g_audit]);
} }
static int param_set_audit(const char *val, struct kernel_param *kp) static int param_set_audit(const char *val, struct kernel_param *kp)
{ {
int i; int i;
if (!policy_admin_capable(NULL))
return -EPERM;
if (!apparmor_enabled) if (!apparmor_enabled)
return -EINVAL; return -EINVAL;
if (!val) if (!val)
return -EINVAL; return -EINVAL;
if (apparmor_initialized && !policy_admin_capable(NULL))
return -EPERM;
for (i = 0; i < AUDIT_MAX_INDEX; i++) { for (i = 0; i < AUDIT_MAX_INDEX; i++) {
if (strcmp(val, audit_mode_names[i]) == 0) { if (strcmp(val, audit_mode_names[i]) == 0) {
...@@ -823,11 +822,10 @@ static int param_set_audit(const char *val, struct kernel_param *kp) ...@@ -823,11 +822,10 @@ static int param_set_audit(const char *val, struct kernel_param *kp)
static int param_get_mode(char *buffer, struct kernel_param *kp) static int param_get_mode(char *buffer, struct kernel_param *kp)
{ {
if (!policy_view_capable(NULL))
return -EPERM;
if (!apparmor_enabled) if (!apparmor_enabled)
return -EINVAL; return -EINVAL;
if (apparmor_initialized && !policy_view_capable(NULL))
return -EPERM;
return sprintf(buffer, "%s", aa_profile_mode_names[aa_g_profile_mode]); return sprintf(buffer, "%s", aa_profile_mode_names[aa_g_profile_mode]);
} }
...@@ -835,14 +833,13 @@ static int param_get_mode(char *buffer, struct kernel_param *kp) ...@@ -835,14 +833,13 @@ static int param_get_mode(char *buffer, struct kernel_param *kp)
static int param_set_mode(const char *val, struct kernel_param *kp) static int param_set_mode(const char *val, struct kernel_param *kp)
{ {
int i; int i;
if (!policy_admin_capable(NULL))
return -EPERM;
if (!apparmor_enabled) if (!apparmor_enabled)
return -EINVAL; return -EINVAL;
if (!val) if (!val)
return -EINVAL; return -EINVAL;
if (apparmor_initialized && !policy_admin_capable(NULL))
return -EPERM;
for (i = 0; i < APPARMOR_MODE_NAMES_MAX_INDEX; i++) { for (i = 0; i < APPARMOR_MODE_NAMES_MAX_INDEX; i++) {
if (strcmp(val, aa_profile_mode_names[i]) == 0) { if (strcmp(val, aa_profile_mode_names[i]) == 0) {
......
...@@ -876,9 +876,11 @@ ssize_t aa_replace_profiles(struct aa_ns *view, struct aa_profile *profile, ...@@ -876,9 +876,11 @@ ssize_t aa_replace_profiles(struct aa_ns *view, struct aa_profile *profile,
if (ns_name) { if (ns_name) {
ns = aa_prepare_ns(view, ns_name); ns = aa_prepare_ns(view, ns_name);
if (IS_ERR(ns)) { if (IS_ERR(ns)) {
op = OP_PROF_LOAD;
info = "failed to prepare namespace"; info = "failed to prepare namespace";
error = PTR_ERR(ns); error = PTR_ERR(ns);
ns = NULL; ns = NULL;
ent = NULL;
goto fail; goto fail;
} }
} else } else
...@@ -1013,7 +1015,7 @@ ssize_t aa_replace_profiles(struct aa_ns *view, struct aa_profile *profile, ...@@ -1013,7 +1015,7 @@ ssize_t aa_replace_profiles(struct aa_ns *view, struct aa_profile *profile,
/* audit cause of failure */ /* audit cause of failure */
op = (!ent->old) ? OP_PROF_LOAD : OP_PROF_REPL; op = (!ent->old) ? OP_PROF_LOAD : OP_PROF_REPL;
fail: fail:
audit_policy(profile, op, ns_name, ent->new->base.hname, audit_policy(profile, op, ns_name, ent ? ent->new->base.hname : NULL,
info, error); info, error);
/* audit status that rest of profiles in the atomic set failed too */ /* audit status that rest of profiles in the atomic set failed too */
info = "valid profile in failed atomic policy load"; info = "valid profile in failed atomic policy load";
...@@ -1023,7 +1025,7 @@ ssize_t aa_replace_profiles(struct aa_ns *view, struct aa_profile *profile, ...@@ -1023,7 +1025,7 @@ ssize_t aa_replace_profiles(struct aa_ns *view, struct aa_profile *profile,
/* skip entry that caused failure */ /* skip entry that caused failure */
continue; continue;
} }
op = (!ent->old) ? OP_PROF_LOAD : OP_PROF_REPL; op = (!tmp->old) ? OP_PROF_LOAD : OP_PROF_REPL;
audit_policy(profile, op, ns_name, audit_policy(profile, op, ns_name,
tmp->new->base.hname, info, error); tmp->new->base.hname, info, error);
} }
......
...@@ -1071,7 +1071,7 @@ int cap_mmap_file(struct file *file, unsigned long reqprot, ...@@ -1071,7 +1071,7 @@ int cap_mmap_file(struct file *file, unsigned long reqprot,
#ifdef CONFIG_SECURITY #ifdef CONFIG_SECURITY
struct security_hook_list capability_hooks[] = { struct security_hook_list capability_hooks[] __lsm_ro_after_init = {
LSM_HOOK_INIT(capable, cap_capable), LSM_HOOK_INIT(capable, cap_capable),
LSM_HOOK_INIT(settime, cap_settime), LSM_HOOK_INIT(settime, cap_settime),
LSM_HOOK_INIT(ptrace_access_check, cap_ptrace_access_check), LSM_HOOK_INIT(ptrace_access_check, cap_ptrace_access_check),
......
...@@ -81,18 +81,25 @@ int integrity_digsig_verify(const unsigned int id, const char *sig, int siglen, ...@@ -81,18 +81,25 @@ int integrity_digsig_verify(const unsigned int id, const char *sig, int siglen,
int __init integrity_init_keyring(const unsigned int id) int __init integrity_init_keyring(const unsigned int id)
{ {
const struct cred *cred = current_cred(); const struct cred *cred = current_cred();
struct key_restriction *restriction;
int err = 0; int err = 0;
if (!init_keyring) if (!init_keyring)
return 0; return 0;
restriction = kzalloc(sizeof(struct key_restriction), GFP_KERNEL);
if (!restriction)
return -ENOMEM;
restriction->check = restrict_link_to_ima;
keyring[id] = keyring_alloc(keyring_name[id], KUIDT_INIT(0), keyring[id] = keyring_alloc(keyring_name[id], KUIDT_INIT(0),
KGIDT_INIT(0), cred, KGIDT_INIT(0), cred,
((KEY_POS_ALL & ~KEY_POS_SETATTR) | ((KEY_POS_ALL & ~KEY_POS_SETATTR) |
KEY_USR_VIEW | KEY_USR_READ | KEY_USR_VIEW | KEY_USR_READ |
KEY_USR_WRITE | KEY_USR_SEARCH), KEY_USR_WRITE | KEY_USR_SEARCH),
KEY_ALLOC_NOT_IN_QUOTA, KEY_ALLOC_NOT_IN_QUOTA,
restrict_link_to_ima, NULL); restriction, NULL);
if (IS_ERR(keyring[id])) { if (IS_ERR(keyring[id])) {
err = PTR_ERR(keyring[id]); err = PTR_ERR(keyring[id]);
pr_info("Can't allocate %s keyring (%d)\n", pr_info("Can't allocate %s keyring (%d)\n",
......
...@@ -207,10 +207,11 @@ int ima_appraise_measurement(enum ima_hooks func, ...@@ -207,10 +207,11 @@ int ima_appraise_measurement(enum ima_hooks func,
cause = "missing-hash"; cause = "missing-hash";
status = INTEGRITY_NOLABEL; status = INTEGRITY_NOLABEL;
if (opened & FILE_CREATED) { if (opened & FILE_CREATED)
iint->flags |= IMA_NEW_FILE; iint->flags |= IMA_NEW_FILE;
if ((iint->flags & IMA_NEW_FILE) &&
!(iint->flags & IMA_DIGSIG_REQUIRED))
status = INTEGRITY_PASS; status = INTEGRITY_PASS;
}
goto out; goto out;
} }
......
...@@ -17,6 +17,7 @@ ...@@ -17,6 +17,7 @@
#include <linux/cred.h> #include <linux/cred.h>
#include <linux/err.h> #include <linux/err.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/slab.h>
#include <keys/system_keyring.h> #include <keys/system_keyring.h>
...@@ -27,15 +28,23 @@ struct key *ima_blacklist_keyring; ...@@ -27,15 +28,23 @@ struct key *ima_blacklist_keyring;
*/ */
__init int ima_mok_init(void) __init int ima_mok_init(void)
{ {
struct key_restriction *restriction;
pr_notice("Allocating IMA blacklist keyring.\n"); pr_notice("Allocating IMA blacklist keyring.\n");
restriction = kzalloc(sizeof(struct key_restriction), GFP_KERNEL);
if (!restriction)
panic("Can't allocate IMA blacklist restriction.");
restriction->check = restrict_link_by_builtin_trusted;
ima_blacklist_keyring = keyring_alloc(".ima_blacklist", ima_blacklist_keyring = keyring_alloc(".ima_blacklist",
KUIDT_INIT(0), KGIDT_INIT(0), current_cred(), KUIDT_INIT(0), KGIDT_INIT(0), current_cred(),
(KEY_POS_ALL & ~KEY_POS_SETATTR) | (KEY_POS_ALL & ~KEY_POS_SETATTR) |
KEY_USR_VIEW | KEY_USR_READ | KEY_USR_VIEW | KEY_USR_READ |
KEY_USR_WRITE | KEY_USR_SEARCH, KEY_USR_WRITE | KEY_USR_SEARCH,
KEY_ALLOC_NOT_IN_QUOTA, KEY_ALLOC_NOT_IN_QUOTA,
restrict_link_by_builtin_trusted, NULL); restriction, NULL);
if (IS_ERR(ima_blacklist_keyring)) if (IS_ERR(ima_blacklist_keyring))
panic("Can't allocate IMA blacklist keyring."); panic("Can't allocate IMA blacklist keyring.");
......
...@@ -64,6 +64,8 @@ struct ima_rule_entry { ...@@ -64,6 +64,8 @@ struct ima_rule_entry {
u8 fsuuid[16]; u8 fsuuid[16];
kuid_t uid; kuid_t uid;
kuid_t fowner; kuid_t fowner;
bool (*uid_op)(kuid_t, kuid_t); /* Handlers for operators */
bool (*fowner_op)(kuid_t, kuid_t); /* uid_eq(), uid_gt(), uid_lt() */
int pcr; int pcr;
struct { struct {
void *rule; /* LSM file metadata specific */ void *rule; /* LSM file metadata specific */
...@@ -83,7 +85,7 @@ struct ima_rule_entry { ...@@ -83,7 +85,7 @@ struct ima_rule_entry {
* normal users can easily run the machine out of memory simply building * normal users can easily run the machine out of memory simply building
* and running executables. * and running executables.
*/ */
static struct ima_rule_entry dont_measure_rules[] = { static struct ima_rule_entry dont_measure_rules[] __ro_after_init = {
{.action = DONT_MEASURE, .fsmagic = PROC_SUPER_MAGIC, .flags = IMA_FSMAGIC}, {.action = DONT_MEASURE, .fsmagic = PROC_SUPER_MAGIC, .flags = IMA_FSMAGIC},
{.action = DONT_MEASURE, .fsmagic = SYSFS_MAGIC, .flags = IMA_FSMAGIC}, {.action = DONT_MEASURE, .fsmagic = SYSFS_MAGIC, .flags = IMA_FSMAGIC},
{.action = DONT_MEASURE, .fsmagic = DEBUGFS_MAGIC, .flags = IMA_FSMAGIC}, {.action = DONT_MEASURE, .fsmagic = DEBUGFS_MAGIC, .flags = IMA_FSMAGIC},
...@@ -97,32 +99,35 @@ static struct ima_rule_entry dont_measure_rules[] = { ...@@ -97,32 +99,35 @@ static struct ima_rule_entry dont_measure_rules[] = {
{.action = DONT_MEASURE, .fsmagic = NSFS_MAGIC, .flags = IMA_FSMAGIC} {.action = DONT_MEASURE, .fsmagic = NSFS_MAGIC, .flags = IMA_FSMAGIC}
}; };
static struct ima_rule_entry original_measurement_rules[] = { static struct ima_rule_entry original_measurement_rules[] __ro_after_init = {
{.action = MEASURE, .func = MMAP_CHECK, .mask = MAY_EXEC, {.action = MEASURE, .func = MMAP_CHECK, .mask = MAY_EXEC,
.flags = IMA_FUNC | IMA_MASK}, .flags = IMA_FUNC | IMA_MASK},
{.action = MEASURE, .func = BPRM_CHECK, .mask = MAY_EXEC, {.action = MEASURE, .func = BPRM_CHECK, .mask = MAY_EXEC,
.flags = IMA_FUNC | IMA_MASK}, .flags = IMA_FUNC | IMA_MASK},
{.action = MEASURE, .func = FILE_CHECK, .mask = MAY_READ, {.action = MEASURE, .func = FILE_CHECK, .mask = MAY_READ,
.uid = GLOBAL_ROOT_UID, .flags = IMA_FUNC | IMA_MASK | IMA_UID}, .uid = GLOBAL_ROOT_UID, .uid_op = &uid_eq,
.flags = IMA_FUNC | IMA_MASK | IMA_UID},
{.action = MEASURE, .func = MODULE_CHECK, .flags = IMA_FUNC}, {.action = MEASURE, .func = MODULE_CHECK, .flags = IMA_FUNC},
{.action = MEASURE, .func = FIRMWARE_CHECK, .flags = IMA_FUNC}, {.action = MEASURE, .func = FIRMWARE_CHECK, .flags = IMA_FUNC},
}; };
static struct ima_rule_entry default_measurement_rules[] = { static struct ima_rule_entry default_measurement_rules[] __ro_after_init = {
{.action = MEASURE, .func = MMAP_CHECK, .mask = MAY_EXEC, {.action = MEASURE, .func = MMAP_CHECK, .mask = MAY_EXEC,
.flags = IMA_FUNC | IMA_MASK}, .flags = IMA_FUNC | IMA_MASK},
{.action = MEASURE, .func = BPRM_CHECK, .mask = MAY_EXEC, {.action = MEASURE, .func = BPRM_CHECK, .mask = MAY_EXEC,
.flags = IMA_FUNC | IMA_MASK}, .flags = IMA_FUNC | IMA_MASK},
{.action = MEASURE, .func = FILE_CHECK, .mask = MAY_READ, {.action = MEASURE, .func = FILE_CHECK, .mask = MAY_READ,
.uid = GLOBAL_ROOT_UID, .flags = IMA_FUNC | IMA_INMASK | IMA_EUID}, .uid = GLOBAL_ROOT_UID, .uid_op = &uid_eq,
.flags = IMA_FUNC | IMA_INMASK | IMA_EUID},
{.action = MEASURE, .func = FILE_CHECK, .mask = MAY_READ, {.action = MEASURE, .func = FILE_CHECK, .mask = MAY_READ,
.uid = GLOBAL_ROOT_UID, .flags = IMA_FUNC | IMA_INMASK | IMA_UID}, .uid = GLOBAL_ROOT_UID, .uid_op = &uid_eq,
.flags = IMA_FUNC | IMA_INMASK | IMA_UID},
{.action = MEASURE, .func = MODULE_CHECK, .flags = IMA_FUNC}, {.action = MEASURE, .func = MODULE_CHECK, .flags = IMA_FUNC},
{.action = MEASURE, .func = FIRMWARE_CHECK, .flags = IMA_FUNC}, {.action = MEASURE, .func = FIRMWARE_CHECK, .flags = IMA_FUNC},
{.action = MEASURE, .func = POLICY_CHECK, .flags = IMA_FUNC}, {.action = MEASURE, .func = POLICY_CHECK, .flags = IMA_FUNC},
}; };
static struct ima_rule_entry default_appraise_rules[] = { static struct ima_rule_entry default_appraise_rules[] __ro_after_init = {
{.action = DONT_APPRAISE, .fsmagic = PROC_SUPER_MAGIC, .flags = IMA_FSMAGIC}, {.action = DONT_APPRAISE, .fsmagic = PROC_SUPER_MAGIC, .flags = IMA_FSMAGIC},
{.action = DONT_APPRAISE, .fsmagic = SYSFS_MAGIC, .flags = IMA_FSMAGIC}, {.action = DONT_APPRAISE, .fsmagic = SYSFS_MAGIC, .flags = IMA_FSMAGIC},
{.action = DONT_APPRAISE, .fsmagic = DEBUGFS_MAGIC, .flags = IMA_FSMAGIC}, {.action = DONT_APPRAISE, .fsmagic = DEBUGFS_MAGIC, .flags = IMA_FSMAGIC},
...@@ -139,10 +144,11 @@ static struct ima_rule_entry default_appraise_rules[] = { ...@@ -139,10 +144,11 @@ static struct ima_rule_entry default_appraise_rules[] = {
.flags = IMA_FUNC | IMA_DIGSIG_REQUIRED}, .flags = IMA_FUNC | IMA_DIGSIG_REQUIRED},
#endif #endif
#ifndef CONFIG_IMA_APPRAISE_SIGNED_INIT #ifndef CONFIG_IMA_APPRAISE_SIGNED_INIT
{.action = APPRAISE, .fowner = GLOBAL_ROOT_UID, .flags = IMA_FOWNER}, {.action = APPRAISE, .fowner = GLOBAL_ROOT_UID, .fowner_op = &uid_eq,
.flags = IMA_FOWNER},
#else #else
/* force signature */ /* force signature */
{.action = APPRAISE, .fowner = GLOBAL_ROOT_UID, {.action = APPRAISE, .fowner = GLOBAL_ROOT_UID, .fowner_op = &uid_eq,
.flags = IMA_FOWNER | IMA_DIGSIG_REQUIRED}, .flags = IMA_FOWNER | IMA_DIGSIG_REQUIRED},
#endif #endif
}; };
...@@ -240,19 +246,20 @@ static bool ima_match_rules(struct ima_rule_entry *rule, struct inode *inode, ...@@ -240,19 +246,20 @@ static bool ima_match_rules(struct ima_rule_entry *rule, struct inode *inode,
if ((rule->flags & IMA_FSUUID) && if ((rule->flags & IMA_FSUUID) &&
memcmp(rule->fsuuid, inode->i_sb->s_uuid, sizeof(rule->fsuuid))) memcmp(rule->fsuuid, inode->i_sb->s_uuid, sizeof(rule->fsuuid)))
return false; return false;
if ((rule->flags & IMA_UID) && !uid_eq(rule->uid, cred->uid)) if ((rule->flags & IMA_UID) && !rule->uid_op(cred->uid, rule->uid))
return false; return false;
if (rule->flags & IMA_EUID) { if (rule->flags & IMA_EUID) {
if (has_capability_noaudit(current, CAP_SETUID)) { if (has_capability_noaudit(current, CAP_SETUID)) {
if (!uid_eq(rule->uid, cred->euid) if (!rule->uid_op(cred->euid, rule->uid)
&& !uid_eq(rule->uid, cred->suid) && !rule->uid_op(cred->suid, rule->uid)
&& !uid_eq(rule->uid, cred->uid)) && !rule->uid_op(cred->uid, rule->uid))
return false; return false;
} else if (!uid_eq(rule->uid, cred->euid)) } else if (!rule->uid_op(cred->euid, rule->uid))
return false; return false;
} }
if ((rule->flags & IMA_FOWNER) && !uid_eq(rule->fowner, inode->i_uid)) if ((rule->flags & IMA_FOWNER) &&
!rule->fowner_op(inode->i_uid, rule->fowner))
return false; return false;
for (i = 0; i < MAX_LSM_RULES; i++) { for (i = 0; i < MAX_LSM_RULES; i++) {
int rc = 0; int rc = 0;
...@@ -486,7 +493,9 @@ enum { ...@@ -486,7 +493,9 @@ enum {
Opt_obj_user, Opt_obj_role, Opt_obj_type, Opt_obj_user, Opt_obj_role, Opt_obj_type,
Opt_subj_user, Opt_subj_role, Opt_subj_type, Opt_subj_user, Opt_subj_role, Opt_subj_type,
Opt_func, Opt_mask, Opt_fsmagic, Opt_func, Opt_mask, Opt_fsmagic,
Opt_fsuuid, Opt_uid, Opt_euid, Opt_fowner, Opt_fsuuid, Opt_uid_eq, Opt_euid_eq, Opt_fowner_eq,
Opt_uid_gt, Opt_euid_gt, Opt_fowner_gt,
Opt_uid_lt, Opt_euid_lt, Opt_fowner_lt,
Opt_appraise_type, Opt_permit_directio, Opt_appraise_type, Opt_permit_directio,
Opt_pcr Opt_pcr
}; };
...@@ -507,9 +516,15 @@ static match_table_t policy_tokens = { ...@@ -507,9 +516,15 @@ static match_table_t policy_tokens = {
{Opt_mask, "mask=%s"}, {Opt_mask, "mask=%s"},
{Opt_fsmagic, "fsmagic=%s"}, {Opt_fsmagic, "fsmagic=%s"},
{Opt_fsuuid, "fsuuid=%s"}, {Opt_fsuuid, "fsuuid=%s"},
{Opt_uid, "uid=%s"}, {Opt_uid_eq, "uid=%s"},
{Opt_euid, "euid=%s"}, {Opt_euid_eq, "euid=%s"},
{Opt_fowner, "fowner=%s"}, {Opt_fowner_eq, "fowner=%s"},
{Opt_uid_gt, "uid>%s"},
{Opt_euid_gt, "euid>%s"},
{Opt_fowner_gt, "fowner>%s"},
{Opt_uid_lt, "uid<%s"},
{Opt_euid_lt, "euid<%s"},
{Opt_fowner_lt, "fowner<%s"},
{Opt_appraise_type, "appraise_type=%s"}, {Opt_appraise_type, "appraise_type=%s"},
{Opt_permit_directio, "permit_directio"}, {Opt_permit_directio, "permit_directio"},
{Opt_pcr, "pcr=%s"}, {Opt_pcr, "pcr=%s"},
...@@ -541,24 +556,37 @@ static int ima_lsm_rule_init(struct ima_rule_entry *entry, ...@@ -541,24 +556,37 @@ static int ima_lsm_rule_init(struct ima_rule_entry *entry,
return result; return result;
} }
static void ima_log_string(struct audit_buffer *ab, char *key, char *value) static void ima_log_string_op(struct audit_buffer *ab, char *key, char *value,
bool (*rule_operator)(kuid_t, kuid_t))
{ {
audit_log_format(ab, "%s=", key); if (rule_operator == &uid_gt)
audit_log_format(ab, "%s>", key);
else if (rule_operator == &uid_lt)
audit_log_format(ab, "%s<", key);
else
audit_log_format(ab, "%s=", key);
audit_log_untrustedstring(ab, value); audit_log_untrustedstring(ab, value);
audit_log_format(ab, " "); audit_log_format(ab, " ");
} }
static void ima_log_string(struct audit_buffer *ab, char *key, char *value)
{
ima_log_string_op(ab, key, value, NULL);
}
static int ima_parse_rule(char *rule, struct ima_rule_entry *entry) static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
{ {
struct audit_buffer *ab; struct audit_buffer *ab;
char *from; char *from;
char *p; char *p;
bool uid_token;
int result = 0; int result = 0;
ab = audit_log_start(NULL, GFP_KERNEL, AUDIT_INTEGRITY_RULE); ab = audit_log_start(NULL, GFP_KERNEL, AUDIT_INTEGRITY_RULE);
entry->uid = INVALID_UID; entry->uid = INVALID_UID;
entry->fowner = INVALID_UID; entry->fowner = INVALID_UID;
entry->uid_op = &uid_eq;
entry->fowner_op = &uid_eq;
entry->action = UNKNOWN; entry->action = UNKNOWN;
while ((p = strsep(&rule, " \t")) != NULL) { while ((p = strsep(&rule, " \t")) != NULL) {
substring_t args[MAX_OPT_ARGS]; substring_t args[MAX_OPT_ARGS];
...@@ -694,11 +722,21 @@ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry) ...@@ -694,11 +722,21 @@ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
if (!result) if (!result)
entry->flags |= IMA_FSUUID; entry->flags |= IMA_FSUUID;
break; break;
case Opt_uid: case Opt_uid_gt:
ima_log_string(ab, "uid", args[0].from); case Opt_euid_gt:
case Opt_euid: entry->uid_op = &uid_gt;
if (token == Opt_euid) case Opt_uid_lt:
ima_log_string(ab, "euid", args[0].from); case Opt_euid_lt:
if ((token == Opt_uid_lt) || (token == Opt_euid_lt))
entry->uid_op = &uid_lt;
case Opt_uid_eq:
case Opt_euid_eq:
uid_token = (token == Opt_uid_eq) ||
(token == Opt_uid_gt) ||
(token == Opt_uid_lt);
ima_log_string_op(ab, uid_token ? "uid" : "euid",
args[0].from, entry->uid_op);
if (uid_valid(entry->uid)) { if (uid_valid(entry->uid)) {
result = -EINVAL; result = -EINVAL;
...@@ -713,12 +751,18 @@ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry) ...@@ -713,12 +751,18 @@ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
(uid_t)lnum != lnum) (uid_t)lnum != lnum)
result = -EINVAL; result = -EINVAL;
else else
entry->flags |= (token == Opt_uid) entry->flags |= uid_token
? IMA_UID : IMA_EUID; ? IMA_UID : IMA_EUID;
} }
break; break;
case Opt_fowner: case Opt_fowner_gt:
ima_log_string(ab, "fowner", args[0].from); entry->fowner_op = &uid_gt;
case Opt_fowner_lt:
if (token == Opt_fowner_lt)
entry->fowner_op = &uid_lt;
case Opt_fowner_eq:
ima_log_string_op(ab, "fowner", args[0].from,
entry->fowner_op);
if (uid_valid(entry->fowner)) { if (uid_valid(entry->fowner)) {
result = -EINVAL; result = -EINVAL;
...@@ -1049,19 +1093,34 @@ int ima_policy_show(struct seq_file *m, void *v) ...@@ -1049,19 +1093,34 @@ int ima_policy_show(struct seq_file *m, void *v)
if (entry->flags & IMA_UID) { if (entry->flags & IMA_UID) {
snprintf(tbuf, sizeof(tbuf), "%d", __kuid_val(entry->uid)); snprintf(tbuf, sizeof(tbuf), "%d", __kuid_val(entry->uid));
seq_printf(m, pt(Opt_uid), tbuf); if (entry->uid_op == &uid_gt)
seq_printf(m, pt(Opt_uid_gt), tbuf);
else if (entry->uid_op == &uid_lt)
seq_printf(m, pt(Opt_uid_lt), tbuf);
else
seq_printf(m, pt(Opt_uid_eq), tbuf);
seq_puts(m, " "); seq_puts(m, " ");
} }
if (entry->flags & IMA_EUID) { if (entry->flags & IMA_EUID) {
snprintf(tbuf, sizeof(tbuf), "%d", __kuid_val(entry->uid)); snprintf(tbuf, sizeof(tbuf), "%d", __kuid_val(entry->uid));
seq_printf(m, pt(Opt_euid), tbuf); if (entry->uid_op == &uid_gt)
seq_printf(m, pt(Opt_euid_gt), tbuf);
else if (entry->uid_op == &uid_lt)
seq_printf(m, pt(Opt_euid_lt), tbuf);
else
seq_printf(m, pt(Opt_euid_eq), tbuf);
seq_puts(m, " "); seq_puts(m, " ");
} }
if (entry->flags & IMA_FOWNER) { if (entry->flags & IMA_FOWNER) {
snprintf(tbuf, sizeof(tbuf), "%d", __kuid_val(entry->fowner)); snprintf(tbuf, sizeof(tbuf), "%d", __kuid_val(entry->fowner));
seq_printf(m, pt(Opt_fowner), tbuf); if (entry->fowner_op == &uid_gt)
seq_printf(m, pt(Opt_fowner_gt), tbuf);
else if (entry->fowner_op == &uid_lt)
seq_printf(m, pt(Opt_fowner_lt), tbuf);
else
seq_printf(m, pt(Opt_fowner_eq), tbuf);
seq_puts(m, " "); seq_puts(m, " ");
} }
......
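With the operator plumbing in place, rules written to the securityfs policy file (/sys/kernel/security/ima/policy) can compare uid, euid and fowner with ">" and "<" as well as "="; the values below are purely illustrative:

	measure func=FILE_CHECK mask=MAY_READ uid>1000
	measure func=FILE_CHECK mask=MAY_READ euid>1000
	appraise func=FILE_CHECK fowner<1000 appraise_type=imasig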
@@ -90,6 +90,8 @@ config KEY_DH_OPERATIONS
 	bool "Diffie-Hellman operations on retained keys"
 	depends on KEYS
 	select MPILIB
+	select CRYPTO
+	select CRYPTO_HASH
 	help
 	  This option provides support for calculating Diffie-Hellman
 	  public keys and shared secrets using values stored as keys
...
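The new CRYPTO and CRYPTO_HASH selections back the hash-based SP800-56A KDF mode added to KEYCTL_DH_COMPUTE elsewhere in this series. A minimal userspace sketch of the raw syscall follows; the struct and field names are assumed to match the uapi <linux/keyctl.h> of this kernel generation (in particular the ".private" member), and the zero serials are placeholders:

	/* Sketch: compute a DH shared secret from three pre-loaded keys.
	 * Passing NULL as the fifth keyctl() argument returns the raw shared
	 * secret; a struct keyctl_kdf_params with e.g. .hashname = "sha256"
	 * would instead request the SP800-56A key derivation.
	 */
	#include <linux/keyctl.h>
	#include <stdio.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	int main(void)
	{
		struct keyctl_dh_params params;
		unsigned char secret[256];
		long n;

		params.private = 0;	/* serial of the key holding the private value (placeholder) */
		params.prime = 0;	/* serial of the key holding the prime p (placeholder) */
		params.base = 0;	/* serial of the key holding the generator g (placeholder) */

		n = syscall(__NR_keyctl, KEYCTL_DH_COMPUTE, &params,
			    secret, sizeof(secret), NULL);
		if (n < 0)
			perror("KEYCTL_DH_COMPUTE");
		else
			printf("shared secret: %ld bytes\n", n);
		return 0;
	}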
@@ -15,7 +15,8 @@ obj-y := \
 	request_key.o \
 	request_key_auth.o \
 	user_defined.o
-obj-$(CONFIG_KEYS_COMPAT) += compat.o
+compat-obj-$(CONFIG_KEY_DH_OPERATIONS) += compat_dh.o
+obj-$(CONFIG_KEYS_COMPAT) += compat.o $(compat-obj-y)
 obj-$(CONFIG_PROC_FS) += proc.o
 obj-$(CONFIG_SYSCTL) += sysctl.o
 obj-$(CONFIG_PERSISTENT_KEYRINGS) += persistent.o
...
@@ -133,8 +133,13 @@ COMPAT_SYSCALL_DEFINE5(keyctl, u32, option,
 		return keyctl_get_persistent(arg2, arg3);
 
 	case KEYCTL_DH_COMPUTE:
-		return keyctl_dh_compute(compat_ptr(arg2), compat_ptr(arg3),
-					 arg4, compat_ptr(arg5));
+		return compat_keyctl_dh_compute(compat_ptr(arg2),
+						compat_ptr(arg3),
+						arg4, compat_ptr(arg5));
+
+	case KEYCTL_RESTRICT_KEYRING:
+		return keyctl_restrict_keyring(arg2, compat_ptr(arg3),
+					       compat_ptr(arg4));
 
 	default:
 		return -EOPNOTSUPP;
...
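The compat keyctl dispatcher now routes DH requests through the new 32-bit thunk and also exposes KEYCTL_RESTRICT_KEYRING. A hedged sketch of the native call from userspace follows; the "asymmetric" restriction type and the "builtin_trusted" option string are taken from the restriction documentation added by this series, and the keyring name is made up:

	/* Sketch: create a session keyring and restrict further links so that
	 * only asymmetric keys vouched for by the builtin trusted keyring can
	 * be added.  Assumes kernel uapi headers from this series, which
	 * define KEYCTL_RESTRICT_KEYRING.
	 */
	#include <linux/keyctl.h>
	#include <stdio.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	int main(void)
	{
		long ring, ret;

		/* Create (or reuse) a named keyring in the session keyring. */
		ring = syscall(__NR_add_key, "keyring", "verified-ring", NULL, 0,
			       KEY_SPEC_SESSION_KEYRING);
		if (ring < 0) {
			perror("add_key");
			return 1;
		}

		/* From here on, linking into "verified-ring" only succeeds if
		 * the key's signature verifies against the builtin keyring. */
		ret = syscall(__NR_keyctl, KEYCTL_RESTRICT_KEYRING, ring,
			      "asymmetric", "builtin_trusted");
		if (ret < 0) {
			perror("KEYCTL_RESTRICT_KEYRING");
			return 1;
		}
		return 0;
	}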
/* 32-bit compatibility syscall for 64-bit systems for DH operations
 *
 * Copyright (C) 2016 Stephan Mueller <smueller@chronox.de>
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version
 * 2 of the License, or (at your option) any later version.
 */

#include <linux/uaccess.h>

#include "internal.h"

/*
 * Perform the DH computation or DH based key derivation.
 *
 * If successful, 0 will be returned.
 */
long compat_keyctl_dh_compute(struct keyctl_dh_params __user *params,
			      char __user *buffer, size_t buflen,
			      struct compat_keyctl_kdf_params __user *kdf)
{
	struct keyctl_kdf_params kdfcopy;
	struct compat_keyctl_kdf_params compat_kdfcopy;

	if (!kdf)
		return __keyctl_dh_compute(params, buffer, buflen, NULL);

	if (copy_from_user(&compat_kdfcopy, kdf, sizeof(compat_kdfcopy)) != 0)
		return -EFAULT;

	kdfcopy.hashname = compat_ptr(compat_kdfcopy.hashname);
	kdfcopy.otherinfo = compat_ptr(compat_kdfcopy.otherinfo);
	kdfcopy.otherinfolen = compat_kdfcopy.otherinfolen;

	return __keyctl_dh_compute(params, buffer, buflen, &kdfcopy);
}
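This new wrapper (built as compat_dh.o per the Makefile hunk above) exists because 32-bit callers pass the KDF parameter block with 32-bit user pointers, so the structure layouts differ between the two ABIs. The declarations below are an illustration of the assumed layouts, not copied from the tree; the native one is expected to match the uapi keyctl_kdf_params of this series and the compat one the keys-internal definition:

	/* Kernel-context type sketch (assumes <linux/types.h> and
	 * <linux/compat.h>).  Native ABI: user pointers are full width.
	 */
	struct keyctl_kdf_params {
		char __user *hashname;	/* name of the KDF hash, e.g. "sha256" */
		char __user *otherinfo;	/* SP800-56A OtherInfo blob, may be NULL */
		__u32 otherinfolen;	/* length of otherinfo in bytes */
		__u32 __spare[8];	/* reserved */
	};

	/* 32-bit compat ABI: same fields, but 32-bit user pointers, hence the
	 * compat_ptr() conversions in compat_keyctl_dh_compute() above.
	 */
	struct compat_keyctl_kdf_params {
		compat_uptr_t hashname;
		compat_uptr_t otherinfo;
		__u32 otherinfolen;
		__u32 __spare[8];
	};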
@@ -220,7 +220,7 @@ static void key_garbage_collector(struct work_struct *work)
 		key = rb_entry(cursor, struct key, serial_node);
 		cursor = rb_next(cursor);
 
-		if (atomic_read(&key->usage) == 0)
+		if (refcount_read(&key->usage) == 0)
 			goto found_unreferenced_key;
 
 		if (unlikely(gc_state & KEY_GC_REAPING_DEAD_1)) {
@@ -229,6 +229,9 @@ static void key_garbage_collector(struct work_struct *work)
 				set_bit(KEY_FLAG_DEAD, &key->flags);
 				key->perm = 0;
 				goto skip_dead_key;
+			} else if (key->type == &key_type_keyring &&
+				   key->restrict_link) {
+				goto found_restricted_keyring;
 			}
 		}
@@ -334,6 +337,14 @@ static void key_garbage_collector(struct work_struct *work)
 		gc_state |= KEY_GC_REAP_AGAIN;
 		goto maybe_resched;
 
+	/* We found a restricted keyring and need to update the restriction if
+	 * it is associated with the dead key type.
+	 */
+found_restricted_keyring:
+	spin_unlock(&key_serial_lock);
+	keyring_restriction_gc(key, key_gc_dead_keytype);
+	goto maybe_resched;
+
 	/* We found a keyring and we need to check the payload for links to
 	 * dead or expired keys.  We don't flag another reap immediately as we
 	 * have to wait for the old payload to be destroyed by RCU before we
...
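Besides the new garbage-collection pass for restricted keyrings, the first hunk reflects the series-wide switch of key->usage from atomic_t to refcount_t, which saturates instead of wrapping so reference-count bugs cannot turn into use-after-free. A minimal kernel-style sketch of the pattern (example_obj and its helpers are invented for illustration; only the <linux/refcount.h> API is real):

	#include <linux/refcount.h>
	#include <linux/slab.h>

	struct example_obj {
		refcount_t usage;		/* counts live references */
	};

	static struct example_obj *example_alloc(void)
	{
		struct example_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

		if (obj)
			refcount_set(&obj->usage, 1);	/* initial reference */
		return obj;
	}

	static void example_get(struct example_obj *obj)
	{
		refcount_inc(&obj->usage);	/* warns instead of resurrecting 0 */
	}

	static void example_put(struct example_obj *obj)
	{
		if (refcount_dec_and_test(&obj->usage))	/* last reference gone */
			kfree(obj);
	}

Diagnostic readers such as the garbage collector use refcount_read(), which is why the test above becomes refcount_read(&key->usage) == 0.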