commit b6b6bc63ee770b7a261c6a260026a028287f336c
Author: Philip Tricca <philip.b.tricca@intel.com>
Date: Wed May 03 09:03:26 2017 -0700

Merge branch '1.x'
The current resource manager implementation in $(srcdir)/resourcemgr/resourcemgr.c should be considered a prototype only. It is not suitable for regular use as it has numerous threading issues and likely other latent bugs that can't be quantified without significant investment of time and resources. As such we've decided to write a new implementation as a proper daemon using D-Bus and more modern development practices. Patches fixing issues with the existing resource manager are welcome but this code will be removed as soon as the replacement is ready.
This repository hosts source code implementing the Trusted Computing Group's (TCG) TPM2 Software Stack (TSS). This stack consists of several layers, from the System API (SAPI) at the top down to the TPM Command Transmission Interface (TCTI) that carries commands to the TPM.
The test application, tpmclient, tests many of the commands against the TPM 2.0 simulator. The tpmclient application can be altered and used as a sandbox to test and develop any TPM 2.0 command sequences, and provides an excellent development and learning vehicle.
Instructions to build and install TPM2.0-TSS are available in the INSTALL file.
This repository contains a test suite intended to exercise the TCTI and SAPI code. This test suite is not intended to test a TPM implementation, so it should only be run against a TPM simulator. If it is executed against any TPM other than the software simulator it may cause damage to the TPM (e.g. NV storage wear-out). You have been warned.
The TPM library specification contains reference code sufficient to construct a software TPM 2.0 simulator. This code was provided by Microsoft, and they provide a binary download for Windows here. IBM has repackaged this code with a few Makefiles so that the Microsoft code can be built and run on Linux systems. The Linux version of the Microsoft TPM 2.0 simulator can be obtained here. Once you've downloaded, built, and executed the simulator, it will by default accept connections on localhost, port 2321.
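The Linux build is typically a plain `make`; the sketch below is an assumption based on the IBM packaging (archive name, `src` directory, and `tpm_server` binary name may differ between releases), not a verified recipe:

```shell
# Hypothetical build/run sketch for the IBM-packaged simulator.
# Archive name and layout are assumptions; adjust to the release you downloaded.
tar xzf ibmtpm532.tar.gz
cd src
make
# Runs in the foreground and, by default, listens on localhost port 2321.
./tpm_server
```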
Issues building or running the simulator should be reported to the IBM software TPM2 project.
NOTE: The Intel TCG TSS is currently tested against the 532 version of the simulator. Compatibility with later versions has not yet been tested.
The current test suite implemented in the tpmclient program requires that the resource manager (resourcemgr) be running. This is because the test suite requires session resources beyond those available in the TPM2 simulator; the resource manager is needed to "virtualize" these session resources. For the resource manager to connect to the simulator, the -sim option must be supplied when executing it:
$ resourcemgr/resourcemgr -sim
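Putting the pieces together, a typical local setup starts the simulator first and then the resource manager. The `tpm_server` binary name below comes from the IBM simulator build and is an assumption here:

```shell
# Sketch of a local test environment; binary names/paths are assumptions.
# 1. Start the TPM 2.0 simulator (listens on localhost:2321 by default).
./tpm_server &
# 2. Start the resource manager and point it at the simulator.
resourcemgr/resourcemgr -sim &
```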
The test suite is implemented in the tpmclient program. This is a monolithic C program that exercises various TCTI and SAPI API calls. Once the test environment is set up (simulator and resource manager are built and running), the tpmclient program can be executed:
$ test/tpmclient/tpmclient
The tpmclient program will run either to completion or until an error occurs. Please report failures in a Github 'issue' with a full log of the test run. This must include output from the resourcemgr as well as the tpmclient program. This output must include full debug messages, which requires that the libraries and binaries be built with debug flags enabled. See INSTALL for instructions on building with debug flags enabled.
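When filing an issue, one way to capture both required logs is to redirect each program's output to its own file. The file names here are arbitrary examples, and this assumes a build already configured with debug flags per INSTALL:

```shell
# Redirect stdout and stderr of each program to a log file.
# File names are arbitrary; attach both files to the Github issue.
resourcemgr/resourcemgr -sim > resourcemgr.log 2>&1 &
test/tpmclient/tpmclient > tpmclient.log 2>&1
```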
We are currently working to decompose the existing monolithic tpmclient program into individual test programs that can be integrated into an automated test harness. This approach has a number of advantages, including the ability to run individual tests in isolation, reduced overhead and maintenance, and easier automation.