---
description: |-
    There are a handful of terms used throughout the Packer documentation where the meaning may not be immediately obvious if you haven't used Packer before. Luckily, there are relatively few. This page documents all the terminology required to understand and use Packer. The terminology is in alphabetical order for easy referencing.
layout: docs
page_title: Packer Terminology
...
# Packer Terminology

There are a handful of terms used throughout the Packer documentation where the
meaning may not be immediately obvious if you haven't used Packer before.
Luckily, there are relatively few. This page documents all the terminology
required to understand and use Packer. The terminology is in alphabetical order
for easy referencing.

- `Artifacts` are the results of a single build, and are usually a set of IDs
  or files to represent a machine image. Every builder produces a single
  artifact. As an example, in the case of the Amazon EC2 builder, the artifact
  is a set of AMI IDs (one per region). For the VMware builder, the artifact is
  a directory of files comprising the created virtual machine.

- `Builds` are a single task that eventually produces an image for a single
  platform. Multiple builds run in parallel. Example usage in a sentence: "The
  Packer build produced an AMI to run our web application." Or: "Packer is
  running the builds now for VMware, AWS, and VirtualBox."

- `Builders` are components of Packer that are able to create a machine image
  for a single platform. Builders read in some configuration and use that to
  run and generate a machine image. A builder is invoked as part of a build in
  order to create the actual resulting images. Example builders include
  VirtualBox, VMware, and Amazon EC2. Builders can be created and added to
  Packer in the form of plugins.

- `Commands` are sub-commands for the `packer` program that perform some job.
  An example command is "build", which is invoked as `packer build`. Packer
  ships with a set of commands out of the box in order to define its
  command-line interface. Commands can also be created and added to Packer in
  the form of plugins.

- `Post-processors` are components of Packer that take the result of a builder
  or another post-processor and process that to create a new artifact. Examples
  of post-processors are compress to compress artifacts, upload to upload
  artifacts, etc.

- `Provisioners` are components of Packer that install and configure software
  within a running machine prior to that machine being turned into a static
  image. They perform the major work of making the image contain useful
  software. Example provisioners include shell scripts, Chef, Puppet, etc.

- `Templates` are JSON files which define one or more builds by configuring the
  various components of Packer. Packer is able to read a template and use that
  information to create multiple machine images in parallel. A short example
  template is sketched below.
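
To make these terms concrete, here is a minimal sketch of how these pieces fit
together in a template. The builder block is abridged; a real `amazon-ebs`
builder needs the additional keys documented on its own page:

```javascript
{
  "builders": [
    {
      "type": "amazon-ebs",
      "ami_name": "packer-example {{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["echo provisioning runs inside the machine before it becomes an image"]
    }
  ],
  "post-processors": ["compress"]
}
```

Running `packer build` against a template like this kicks off one build per
builder, and each build emits an artifact.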
---
description: |-
    The `amazon-chroot` Packer builder is able to create Amazon AMIs backed by an EBS volume as the root device. For more information on the difference between instance storage and EBS-backed instances, see the storage for the root device section in the EC2 documentation.
layout: docs
page_title: 'Amazon AMI Builder (chroot)'
...
# AMI Builder (chroot)

Type: `amazon-chroot`

The `amazon-chroot` Packer builder is able to create Amazon AMIs backed by an
EBS volume as the root device. For more information on the difference between
instance storage and EBS-backed instances, see the ["storage for the root
device" section in the EC2
documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device).

The difference between this builder and the `amazon-ebs` builder is that this
builder is able to build an EBS-backed AMI without launching a new EC2
instance. This can dramatically speed up AMI builds for organizations that need
extremely fast builds.

~> **This is an advanced builder.** If you're just getting started with
Packer, we recommend starting with the [amazon-ebs
builder](/docs/builders/amazon-ebs.html), which is much easier to use.

The builder does *not* manage AMIs. Once it creates an AMI and stores it in
your account, it is up to you to use, delete, etc. the AMI.
## How Does it Work?

This builder works by creating a new EBS volume from an existing source AMI and
attaching it into an already-running EC2 instance. Once attached, a
[chroot](http://en.wikipedia.org/wiki/Chroot) is used to provision the system
within that volume. After provisioning, the volume is detached, snapshotted,
and an AMI is made.

Using this process, minutes can be shaved off the AMI creation process because
a new EC2 instance doesn't need to be launched.
There are some restrictions, however. The host EC2 instance to which the volume
is attached must be a similar system (generally the same OS version, kernel
version, etc.) as the AMI being built. Additionally, this process is much more
expensive because the EC2 instance must be kept running persistently in order
to build AMIs, whereas the other AMI builders start instances on-demand to
build AMIs as needed.
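
As a sketch, a minimal `amazon-chroot` build section might look like the
following. The AMI ID is a placeholder, and `source_ami` is assumed here in
addition to the required keys listed below:

```javascript
{
  "type": "amazon-chroot",
  "access_key": "YOUR KEY HERE",
  "secret_key": "YOUR SECRET KEY HERE",
  "source_ami": "ami-e81d5881",
  "ami_name": "packer-amazon-chroot {{timestamp}}"
}
```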
## Configuration Reference

There are many configuration options available for the builder. They are
segmented below into two categories: required and optional parameters. Within
each category, the available configuration keys are alphabetized.
In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:

- `access_key` (string) - The access key used to communicate with AWS. If not
  specified, Packer will use the key from any
  [credentials](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files)
  file or fall back to the environment variables `AWS_ACCESS_KEY_ID` or
  `AWS_ACCESS_KEY` (in that order), if set. If the environment variables aren't
  set and Packer is running on an EC2 instance, Packer will check the instance
  metadata for IAM role keys.
- `ami_name` (string) - The name of the resulting AMI that will appear when
  managing AMIs in the AWS console or via APIs. This must be unique. To help
  make this unique, use a function like `timestamp` (see [configuration
  templates](/docs/templates/configuration-templates.html) for more info).
- `secret_key` (string) - The secret key used to communicate with AWS. If not
  specified, Packer will use the secret from any
  [credentials](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files)
  file or fall back to the environment variables `AWS_SECRET_ACCESS_KEY` or
  `AWS_SECRET_KEY` (in that order), if set. If the environment variables aren't
  set and Packer is running on an EC2 instance, Packer will check the instance
  metadata for IAM role keys.
---
description: |-
    The `amazon-ebs` Packer builder is able to create Amazon AMIs backed by EBS volumes for use in EC2. For more information on the difference between EBS-backed instances and instance-store backed instances, see the storage for the root device section in the EC2 documentation.
layout: docs
page_title: 'Amazon AMI Builder (EBS backed)'
...
# AMI Builder (EBS backed)

Type: `amazon-ebs`

The `amazon-ebs` Packer builder is able to create Amazon AMIs backed by EBS
volumes for use in [EC2](http://aws.amazon.com/ec2/). For more information on
the difference between EBS-backed instances and instance-store backed
instances, see the ["storage for the root device" section in the EC2
documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device).

This builder builds an AMI by launching an EC2 instance from a source AMI,
provisioning that running machine, and then creating an AMI from that machine.
This is all done in your own AWS account. The builder will create temporary
keypairs, security group rules, etc. that provide it temporary access to the
instance while the image is being created. This simplifies configuration quite
a bit.

The builder does *not* manage AMIs. Once it creates an AMI and stores it in
your account, it is up to you to use, delete, etc. the AMI.
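
As a sketch, a typical `amazon-ebs` build section looks roughly like this. The
AMI ID is a placeholder, and `source_ami` and `ssh_username` are assumed here
in addition to the required keys listed below:

```javascript
{
  "type": "amazon-ebs",
  "access_key": "YOUR KEY HERE",
  "secret_key": "YOUR SECRET KEY HERE",
  "region": "us-east-1",
  "source_ami": "ami-de0d9eb8",
  "instance_type": "t2.micro",
  "ssh_username": "ubuntu",
  "ami_name": "packer-quick-start {{timestamp}}"
}
```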
## Configuration Reference

There are many configuration options available for the builder. They are
segmented below into two categories: required and optional parameters. Within
each category, the available configuration keys are alphabetized.
In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:

- `access_key` (string) - The access key used to communicate with AWS. If not
  specified, Packer will use the key from any
  [credentials](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files)
  file or fall back to the environment variables `AWS_ACCESS_KEY_ID` or
  `AWS_ACCESS_KEY` (in that order), if set.
- `ami_name` (string) - The name of the resulting AMI that will appear when
  managing AMIs in the AWS console or via APIs. This must be unique. To help
  make this unique, use a function like `timestamp` (see [configuration
  templates](/docs/templates/configuration-templates.html) for more info).
- `instance_type` (string) - The EC2 instance type to use while building the
  AMI, such as "m1.small".
- `region` (string) - The name of the region, such as "us-east-1", in which to
  launch the EC2 instance to create the AMI.
- `secret_key` (string) - The secret key used to communicate with AWS. If not
  specified, Packer will use the secret from any
  [credentials](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files)
  file or fall back to the environment variables `AWS_SECRET_ACCESS_KEY` or
  `AWS_SECRET_KEY` (in that order), if set.
---
description: |-
    The `amazon-instance` Packer builder is able to create Amazon AMIs backed by instance storage as the root device. For more information on the difference between instance storage and EBS-backed instances, see the storage for the root device section in the EC2 documentation.
layout: docs
page_title: 'Amazon AMI Builder (instance-store)'
...
# AMI Builder (instance-store)

Type: `amazon-instance`
The `amazon-instance` Packer builder is able to create Amazon AMIs backed by
instance storage as the root device. For more information on the difference
between instance storage and EBS-backed instances, see the ["storage for the
root device" section in the EC2
documentation](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ComponentsAMIs.html#storage-for-the-root-device).

This builder builds an AMI by launching an EC2 instance from an existing
instance-storage backed AMI, provisioning that running machine, and then
bundling and creating a new AMI from that machine. This is all done in your own
AWS account. The builder will create temporary keypairs, security group rules,
etc. that provide it temporary access to the instance while the image is being
created. This simplifies configuration quite a bit.

The builder does *not* manage AMIs. Once it creates an AMI and stores it in
your account, it is up to you to use, delete, etc. the AMI.
-> **Note** This builder requires that the [Amazon EC2 AMI
Tools](http://aws.amazon.com/developertools/368) are installed onto the
machine. This can be done within a provisioner, but must be done before the
builder finishes running.
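
As a sketch, an `amazon-instance` build section might look like the following.
The values are placeholders, and keys such as `source_ami`, `ssh_username`, and
the X.509 certificate settings are assumed here beyond the required keys listed
below:

```javascript
{
  "type": "amazon-instance",
  "access_key": "YOUR KEY HERE",
  "secret_key": "YOUR SECRET KEY HERE",
  "account_id": "0123-4567-0890",
  "region": "us-east-1",
  "source_ami": "ami-d9d6a6b0",
  "instance_type": "m1.small",
  "ssh_username": "ubuntu",
  "s3_bucket": "packer-images",
  "ami_name": "packer-quick-start {{timestamp}}",
  "x509_cert_path": "x509.cert",
  "x509_key_path": "x509.key",
  "x509_upload_path": "/tmp"
}
```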
## Configuration Reference

There are many configuration options available for the builder. They are
segmented below into two categories: required and optional parameters. Within
each category, the available configuration keys are alphabetized.
In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:

- `access_key` (string) - The access key used to communicate with AWS. If not
  specified, Packer will use the key from any
  [credentials](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files)
  file or fall back to the environment variables `AWS_ACCESS_KEY_ID` or
  `AWS_ACCESS_KEY` (in that order), if set.
- `account_id` (string) - Your AWS account ID. This is required for bundling
  the AMI. This is *not the same* as the access key. You can find your account
  ID in the security credentials page of your AWS account.
- `ami_name` (string) - The name of the resulting AMI that will appear when
  managing AMIs in the AWS console or via APIs. This must be unique. To help
  make this unique, use a function like `timestamp` (see [configuration
  templates](/docs/templates/configuration-templates.html) for more info).
- `instance_type` (string) - The EC2 instance type to use while building the
  AMI, such as "m1.small".
- `region` (string) - The name of the region, such as "us-east-1", in which to
  launch the EC2 instance to create the AMI.
- `s3_bucket` (string) - The name of the S3 bucket to upload the AMI. This
  bucket will be created if it doesn't exist.
- `secret_key` (string) - The secret key used to communicate with AWS. If not
  specified, Packer will use the secret from any
  [credentials](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files)
  file or fall back to the environment variables `AWS_SECRET_ACCESS_KEY` or
  `AWS_SECRET_KEY` (in that order), if set.
from an existing EC2 instance by mounting the root device and using a
[Chroot](http://en.wikipedia.org/wiki/Chroot) environment to provision that
device. This is an **advanced builder and should not be used by newcomers**.
However, it is also the fastest way to build an EBS-backed AMI since no new EC2
instance needs to be launched.
-> **Don't know which builder to use?** If in doubt, use the [amazon-ebs
builder](/docs/builders/amazon-ebs.html). It is much easier to use and Amazon
generally recommends EBS-backed images nowadays.
## Using an IAM Instance Profile

If AWS keys are not specified in the template, in a
[credentials](http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-config-files)
file, or through environment variables, Packer will use the credentials
provided by the instance's IAM profile, if it has one.
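
In that case the keys can simply be left out of the builder configuration. As a
sketch, assuming the instance running Packer has an IAM role with the necessary
EC2 permissions (the AMI ID is a placeholder):

```javascript
{
  "type": "amazon-ebs",
  "region": "us-east-1",
  "source_ami": "ami-de0d9eb8",
  "instance_type": "t2.micro",
  "ssh_username": "ubuntu",
  "ami_name": "packer-iam-example {{timestamp}}"
}
```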
---
description: |-
    Packer is extensible, allowing you to write new builders without having to modify the core source code of Packer itself. Documentation for creating new builders is covered in the custom builders page of the Packer plugin section.
layout: docs
page_title: Custom Builder
...
# Custom Builder

Packer is extensible, allowing you to write new builders without having to
modify the core source code of Packer itself. Documentation for creating new
builders is covered in the [custom builders](/docs/extend/builder.html) page of
the Packer plugin section.
---
description: |-
    The `digitalocean` Packer builder is able to create new images for use with DigitalOcean. The builder takes a source image, runs any provisioning necessary on the image after launching it, then snapshots it into a reusable image. This reusable image can then be used as the foundation of new servers that are launched within DigitalOcean.
layout: docs
page_title: DigitalOcean Builder
...
# DigitalOcean Builder

Type: `digitalocean`

The `digitalocean` Packer builder is able to create new images for use with
[DigitalOcean](http://www.digitalocean.com). The builder takes a source image,
runs any provisioning necessary on the image after launching it, then snapshots
it into a reusable image. This reusable image can then be used as the
foundation of new servers that are launched within DigitalOcean.

The builder does *not* manage images. Once it creates an image, it is up to you
to use it or delete it.
## Configuration Reference

There are many configuration options available for the builder. They are
segmented below into two categories: required and optional parameters. Within
each category, the available configuration keys are alphabetized.
In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:

- `api_token` (string) - The client TOKEN to use to access your account. It
  can also be specified via the environment variable `DIGITALOCEAN_API_TOKEN`,
  if set.
- `image` (string) - The name (or slug) of the base image to use. This is the
  image that will be used to launch a new droplet and provision it. See
  https://developers.digitalocean.com/documentation/v2/#list-all-images for
  details on how to get a list of the accepted image names/slugs.
- `region` (string) - The name (or slug) of the region to launch the droplet
  in. Consequently, this is the region where the snapshot will be available.
  See https://developers.digitalocean.com/documentation/v2/#list-all-regions
  for the accepted region names/slugs.
- `size` (string) - The name (or slug) of the droplet size to use. See
  https://developers.digitalocean.com/documentation/v2/#list-all-sizes for the
  accepted size names/slugs.
### Optional:

- `droplet_name` (string) - The name assigned to the droplet. DigitalOcean
  sets the hostname of the machine to this value.
- `private_networking` (boolean) - Set to `true` to enable private networking
  for the droplet being created. This defaults to `false`, or not enabled.
- `snapshot_name` (string) - The name of the resulting snapshot that will
  appear in your account. This must be unique. To help make this unique, use a
  function like `timestamp` (see [configuration
  templates](/docs/templates/configuration-templates.html) for more info).
- `state_timeout` (string) - The time to wait, as a duration string, for a
  droplet to enter a desired state (such as "active") before timing out. The
  default state timeout is "6m".
- `user_data` (string) - User data to launch with the Droplet.
## Basic Example

Here is a basic example. It is completely valid as soon as you enter your own
access token.
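
As a sketch, with placeholder image, region, and size slugs that you would
replace with values valid for your account:

```javascript
{
  "type": "digitalocean",
  "api_token": "YOUR API TOKEN",
  "image": "ubuntu-14-04-x64",
  "region": "nyc3",
  "size": "512mb"
}
```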
---
description: |-
    The `docker` Packer builder builds Docker images using Docker. The builder starts a Docker container, runs provisioners within this container, then exports the container for reuse or commits the image.
layout: docs
page_title: Docker Builder
...
# Docker Builder

Type: `docker`

The `docker` Packer builder builds [Docker](http://www.docker.io) images using
Docker. The builder starts a Docker container, runs provisioners within this
container, then exports the container for reuse or commits the image.

Packer builds Docker containers *without* the use of Dockerfiles.
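
As a sketch, a minimal `docker` builder section that provisions a container
started from a base image and then exports it to a tar archive might look like
this (the image name and export path are placeholders):

```javascript
{
  "type": "docker",
  "image": "ubuntu",
  "export_path": "image.tar"
}
```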
---
description: |-
    The `null` Packer builder is not really a builder; it just sets up an SSH connection and runs the provisioners. It can be used to debug provisioners without incurring high wait times. It does not create any kind of image or artifact.
layout: docs
page_title: Null Builder
...
# Null Builder

Type: `null`

The `null` Packer builder is not really a builder; it just sets up an SSH
connection and runs the provisioners. It can be used to debug provisioners
without incurring high wait times. It does not create any kind of image or
artifact.
## Basic Example

Below is a fully functioning example. It doesn't do anything useful, since no
provisioners are defined, but it will connect to the specified host via SSH.
```javascript
{
  "type": "null",
  "ssh_host": "127.0.0.1",
  "ssh_username": "foo",
  "ssh_password": "bar"
}
```
The null builder has no configuration parameters other than the
[communicator](/docs/templates/communicator.html) configuration.
---
description: |-
    The `openstack` Packer builder is able to create new images for use with OpenStack. The builder takes a source image, runs any provisioning necessary on the image after launching it, then creates a new reusable image. This reusable image can then be used as the foundation of new servers that are launched within OpenStack. The builder will create temporary keypairs that provide temporary access to the server while the image is being created. This simplifies configuration quite a bit.
layout: docs
page_title: OpenStack Builder
...
# OpenStack Builder

Type: `openstack`

The `openstack` Packer builder is able to create new images for use with
[OpenStack](http://www.openstack.org). The builder takes a source image, runs
any provisioning necessary on the image after launching it, then creates a new
reusable image. This reusable image can then be used as the foundation of new
servers that are launched within OpenStack. The builder will create temporary
keypairs that provide temporary access to the server while the image is being
created. This simplifies configuration quite a bit.

The builder does *not* manage images. Once it creates an image, it is up to you
to use it or delete it.
## Configuration Reference

There are many configuration options available for the builder. They are
segmented below into two categories: required and optional parameters. Within
each category, the available configuration keys are alphabetized.
In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:

- `flavor` (string) - The ID, name, or full URL for the desired flavor for the
  server to be created.
- `image_name` (string) - The name of the resulting image.
- `source_image` (string) - The ID or full URL to the base image to use. This
  is the image that will be used to launch a new server and provision it.
  Unless you specify completely custom SSH settings, the source image must have
  `cloud-init` installed so that the keypair gets assigned properly.
- `username` (string) - The username used to connect to the OpenStack service.
  If not specified, Packer will use the environment variable `OS_USERNAME`,
  if set.
- `password` (string) - The password used to connect to the OpenStack service.
  If not specified, Packer will use the environment variable `OS_PASSWORD`,
  if set.
### Optional:

- `api_key` (string) - The API key used to access OpenStack. Some OpenStack
  installations require this.
- `availability_zone` (string) - The availability zone to launch the server
  in. If this isn't specified, the default enforced by your OpenStack cluster
  will be used. This may be required for some OpenStack clusters.
- `floating_ip` (string) - A specific floating IP to assign to this instance.
  `use_floating_ip` must also be set to true for this to have an effect.
- `floating_ip_pool` (string) - The name of the floating IP pool to use to
  allocate a floating IP. `use_floating_ip` must also be set to true for this
  to have an effect.
- `insecure` (boolean) - Whether or not the connection to OpenStack can be
  done over an insecure connection. By default this is false.
- `networks` (array of strings) - A list of networks by UUID to attach to
  this instance.
- `tenant_id` or `tenant_name` (string) - The tenant ID or name to boot the
  instance into. Some OpenStack installations require this. If not specified,
  Packer will use the environment variable `OS_TENANT_NAME`, if set.
- `security_groups` (array of strings) - A list of security groups by name to
  add to this instance.
- `region` (string) - The name of the region, such as "DFW", in which to
  launch the server to create the image. If not specified, Packer will use the
  environment variable `OS_REGION_NAME`, if set.
- `ssh_interface` (string) - The type of interface to connect via SSH. Values
  useful for Rackspace are "public" or "private", and the default behavior is
  to connect via whichever is returned first from the OpenStack API.
- `use_floating_ip` (boolean) - Whether or not to use a floating IP for the
  instance. Defaults to false.
- `rackconnect_wait` (boolean) - For Rackspace, whether or not to wait for
  Rackconnect to assign the machine an IP address before connecting via SSH.
  Defaults to false.
## Basic Example: Rackspace public cloud

Here is a basic example. This is a working example to build an Ubuntu 12.04
LTS (Precise Pangolin) image on the Rackspace OpenStack cloud offering.
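
As a sketch (the credentials, source image UUID, and flavor are placeholders
for values from your own cloud account):

```javascript
{
  "type": "openstack",
  "username": "YOUR USERNAME",
  "password": "YOUR PASSWORD",
  "region": "DFW",
  "ssh_username": "root",
  "image_name": "Test image",
  "source_image": "23b564c9-c3e6-49f9-bc68-86c7a9ab5018",
  "flavor": "2"
}
```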
---
description: |-
    The Parallels Packer builder is able to create Parallels Desktop for Mac virtual machines and export them in the PVM format, starting from an ISO image.
layout: docs
page_title: 'Parallels Builder (from an ISO)'
...
# Parallels Builder (from an ISO)

Type: `parallels-iso`

The Parallels Packer builder is able to create [Parallels Desktop for
Mac](http://www.parallels.com/products/desktop/) virtual machines and export
them in the PVM format, starting from an ISO image.

The builder builds a virtual machine by creating a new virtual machine from
scratch, booting it, installing an OS, provisioning software within the OS,
then shutting it down. The result of the Parallels builder is a directory
containing all the files necessary to run the virtual machine portably.
## Basic Example

Here is a basic example. This example is not functional. It will start the OS
installer but then fail because we don't provide the preseed file for Ubuntu
to self-install. Still, the example serves to show the basic configuration:
```javascript
{
  "type": "parallels-iso",
  "guest_os_type": "ubuntu",
  ...
}
```
It is important to add a `shutdown_command`. By default Packer halts the
virtual machine and the file system may not be sync'd. Thus, changes made in a
provisioner might not be saved.
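
For example, for a Linux guest with a `packer` user whose password is `packer`,
a shutdown command along these lines is commonly used (a sketch; adjust it for
your guest OS and credentials):

```javascript
{
  "shutdown_command": "echo 'packer' | sudo -S shutdown -P now"
}
```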
## Configuration Reference

There are many configuration options available for the Parallels builder. They
are organized below into two categories: required and optional. Within each
category, the available options are alphabetized and described.
In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:

- `iso_checksum` (string) - The checksum for the OS ISO file. Because ISO
  files are so large, this is required and Packer will verify it prior to
  booting a virtual machine with the ISO attached. The type of the checksum is
  specified with `iso_checksum_type`, documented below.
- `iso_checksum_type` (string) - The type of the checksum specified in
  `iso_checksum`. Valid values are "none", "md5", "sha1", "sha256", or
  "sha512" currently. While "none" will skip checksumming, this is not
  recommended since ISO files are generally large and corruption does happen
  from time to time.
- `iso_url` (string) - A URL to the ISO containing the installation image.
  This URL can be either an HTTP URL or a file URL (or path to a file). If
  this is an HTTP URL, Packer will download it and cache it between runs.
- `ssh_username` (string) - The username to use to SSH into the machine once
  the OS is installed.
- `parallels_tools_flavor` (string) - The flavor of the Parallels Tools ISO to
  install into the VM. Valid values are "win", "lin", "mac", "os2" and
  "other". This can be omitted only if `parallels_tools_mode` is "disable".
### Optional:

- `boot_command` (array of strings) - This is an array of commands to type
  when the virtual machine is first booted. The goal of these commands should
  be to type just enough to initialize the operating system installer. Special
  keys can be typed as well, and are covered in the section below on the boot
  command. If this is not specified, it is assumed the installer will start
  itself.
- `boot_wait` (string) - The time to wait after booting the initial virtual
  machine before typing the `boot_command`. The value of this should be a
  duration. Examples are "5s" and "1m30s" which will cause Packer to wait five
  seconds and one minute 30 seconds, respectively. If this isn't specified,
  the default is 10 seconds.
- `disk_size` (integer) - The size, in megabytes, of the hard disk to create
  for the VM. By default, this is 40000 (about 40 GB).
- `floppy_files` (array of strings) - A list of files to place onto a floppy
  disk that is attached when the VM is booted. This is most useful for
  unattended Windows installs, which look for an `Autounattend.xml` file on
  removable media. By default, no floppy will be attached. All files listed in
  this setting get placed into the root directory of the floppy and the floppy
  is attached as the first floppy device. Currently, no support exists for
  creating sub-directories on the floppy. Wildcard characters (\*, ?, and
  \[\]) are allowed. Directory names are also allowed, which will add all the
  files found in the directory to the floppy.
- `guest_os_type` (string) - The guest OS type being installed. By default
  this is "other", but you can get *dramatic* performance improvements by
  setting this to the proper value. To view all available values for this,
  run `prlctl create x --distribution list`. Setting the correct value hints
  to Parallels Desktop how to optimize the virtual hardware to work best with
  that operating system.
- `hard_drive_interface` (string) - The type of controller that the hard
  drives are attached to, defaults to "sata". Valid options are "sata", "ide",
  and "scsi".
- `host_interfaces` (array of strings) - A list of which interfaces on the
  host should be searched for an IP address. The first IP address found on one
  of these will be used as `{{ .HTTPIP }}` in the `boot_command`. Defaults to
---
description: |-
    This Parallels builder is able to create Parallels Desktop for Mac virtual machines and export them in the PVM format, starting from an existing PVM (exported virtual machine image).
layout: docs
page_title: 'Parallels Builder (from a PVM)'
...
# Parallels Builder (from a PVM)

Type: `parallels-pvm`

This Parallels builder is able to create [Parallels Desktop for
Mac](http://www.parallels.com/products/desktop/) virtual machines and export
them in the PVM format, starting from an existing PVM (exported virtual
machine image).

The builder builds a virtual machine by importing an existing PVM file. It
then boots this image, runs provisioners on this new VM, and exports that VM
to create the image. The imported machine is deleted prior to finishing the
build.
## Basic Example

Here is a basic example. This example is functional if you have a PVM matching
the settings here.
```javascript
{
  "type": "parallels-pvm",
  "parallels_tools_flavor": "lin",
  ...
}
```
It is important to add a `shutdown_command`. By default Packer halts the
virtual machine and the file system may not be sync'd. Thus, changes made in a
provisioner might not be saved.
## Configuration Reference

There are many configuration options available for the Parallels builder. They
are organized below into two categories: required and optional. Within each
category, the available options are alphabetized and described.
In addition to the options listed here, a
In addition to the options listed here, a
[communicator](/docs/templates/communicator.html)
[communicator](/docs/templates/communicator.html) can be configured for this
can be configured for this builder.
builder.
### Required:
### Required:
*`source_path` (string) - The path to a PVM directory that acts as
-`source_path` (string) - The path to a PVM directory that acts as the source
the source of this build.
of this build.
*`ssh_username` (string) - The username to use to SSH into the machine
-`ssh_username` (string) - The username to use to SSH into the machine once the
once the OS is installed.
OS is installed.
*`parallels_tools_flavor` (string) - The flavor of the Parallels Tools ISO to
-`parallels_tools_flavor` (string) - The flavor of the Parallels Tools ISO to
install into the VM. Valid values are "win", "lin", "mac", "os2" and "other".
install into the VM. Valid values are "win", "lin", "mac", "os2" and "other".
This can be omitted only if `parallels_tools_mode` is "disable".
This can be omitted only if `parallels_tools_mode` is "disable".
### Optional:

- `boot_command` (array of strings) - This is an array of commands to type when
  the virtual machine is first booted. The goal of these commands should be to
  type just enough to initialize the operating system installer. Special keys
  can be typed as well, and are covered in the section below on the boot
  command. If this is not specified, it is assumed the installer will start
  itself.

- `boot_wait` (string) - The time to wait after booting the initial virtual
  machine before typing the `boot_command`. The value of this should be a
  duration. Examples are "5s" and "1m30s" which will cause Packer to wait five
  seconds and one minute 30 seconds, respectively. If this isn't specified, the
  default is 10 seconds.

- `floppy_files` (array of strings) - A list of files to put onto a floppy disk
  that is attached when the VM is booted for the first time. This is most
  useful for unattended Windows installs, which look for an `Autounattend.xml`
  file on removable media. By default no floppy will be attached. The files
  listed in this configuration will all be put into the root directory of the
  floppy disk; sub-directories are not supported.
- `reassign_mac` (boolean) - If this is "false" the MAC address of the first
  NIC will be reused when imported; otherwise a new MAC address will be
  generated by Parallels. Defaults to "false".
- `output_directory` (string) - This is the path to the directory where the
  resulting virtual machine will be created. This may be relative or absolute.
  If relative, the path is relative to the working directory when `packer` is
  executed. This directory must not exist or be empty prior to running the
  builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the name
  of the build.

- `parallels_tools_guest_path` (string) - The path in the VM to upload
  Parallels Tools. This only takes effect if `parallels_tools_mode` is
  "upload". This is a
  [configuration template](/docs/templates/configuration-templates.html) that
  has a single valid variable: `Flavor`, which will be the value of
  `parallels_tools_flavor`. By default this is "prl-tools-{{.Flavor}}.iso"
  which should upload into the login directory of the user.

- `parallels_tools_mode` (string) - The method by which Parallels Tools are
  made available to the guest for installation. Valid options are "upload",
  "attach", or "disable". If the mode is "attach" the Parallels Tools ISO will
  be attached as a CD device to the virtual machine. If the mode is "upload"
  the Parallels Tools ISO will be uploaded to the path specified by
  `parallels_tools_guest_path`. The default value is "upload".
- `prlctl` (array of array of strings) - Custom `prlctl` commands to execute in
  order to further customize the virtual machine being created. The value of
  this is an array of commands to execute. The commands are executed in the
  order defined in the template. For each command, the command is itself
  defined as an array of strings, where each string represents a single
  argument on the command-line to `prlctl` (but excluding `prlctl` itself).
  Each arg is treated as a
  [configuration template](/docs/templates/configuration-templates.html),
  where the `Name` variable is replaced with the VM name. More details on how
  to use `prlctl` are below; see also the example after this list.
- `prlctl_post` (array of array of strings) - Identical to `prlctl`, except
  that it is run after the virtual machine is shutdown, and before the virtual
  machine is exported.

- `prlctl_version_file` (string) - The path within the virtual machine to
  upload a file that contains the `prlctl` version that was used to create the
  machine. This information can be useful for provisioning. By default this is
  ".prlctl_version", which will generally upload it into the home directory.

- `shutdown_command` (string) - The command to use to gracefully shut down the
  machine once all the provisioning is done. By default this is an empty
  string, which tells Packer to just forcefully shut down the machine.

- `shutdown_timeout` (string) - The amount of time to wait after executing the
  `shutdown_command` for the virtual machine to actually shut down. If it
  doesn't shut down in this time, it is an error. By default, the timeout is
  "5m", or five minutes.

- `vm_name` (string) - This is the name of the virtual machine when it is
  imported as well as the name of the PVM directory when the virtual machine
  is exported. By default this is "packer-BUILDNAME", where "BUILDNAME" is the
  name of the build.
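
As a concrete illustration of the `prlctl` option described above, the sketch
below raises the VM's memory and CPU count before the build starts. The
`prlctl set` arguments and the placeholder values shown are assumptions for
illustration; consult `prlctl` itself for the arguments your version supports:

```javascript
{
  "type": "parallels-pvm",
  "parallels_tools_flavor": "lin",
  "source_path": "source.pvm",
  "ssh_username": "packer",
  "prlctl": [
    ["set", "{{.Name}}", "--memsize", "1024"],
    ["set", "{{.Name}}", "--cpus", "2"]
  ]
}
```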
## Parallels Tools

After the virtual machine is up and the operating system is installed, Packer
uploads the Parallels Tools into the virtual machine. The path where they are
uploaded is controllable by `parallels_tools_path`, and defaults to
"prl-tools.iso". Without an absolute path, it is uploaded to the home directory
of the SSH user. Parallels Tools ISOs can be found in: "/Applications/Parallels
---
description: |-
    The Parallels Packer builder is able to create Parallels Desktop for Mac virtual machines and export them in the PVM format.
layout: docs
page_title: Parallels Builder
...

# Parallels Builder

The Parallels Packer builder is able to create [Parallels Desktop for
Mac](http://www.parallels.com/products/desktop/) virtual machines and export
them in the PVM format.

Packer actually comes with multiple builders able to create Parallels machines,
depending on the strategy you want to use to build the image. Packer supports
the following Parallels builders:

- [parallels-iso](/docs/builders/parallels-iso.html) - Starts from an ISO file,
  creates a brand new Parallels VM, installs an OS, provisions software within
  the OS, then exports that machine to create an image. This is best for
  people who want to start from scratch.

- [parallels-pvm](/docs/builders/parallels-pvm.html) - This builder imports an
  existing PVM file, runs provisioners on top of that VM, and exports that
  machine to create an image. This is best if you have an existing Parallels
  VM export you want to use as the source. As an additional benefit, you can
  feed the artifact of this builder back into itself to iterate on a machine.
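
Whichever strategy you choose, the builder is selected with the `type` field of
an entry in your template's `builders` list. The sketch below uses the
`parallels-pvm` required options documented on that builder's page; the path
and username are placeholders:

```javascript
{
  "builders": [
    {
      "type": "parallels-pvm",
      "parallels_tools_flavor": "lin",
      "source_path": "source.pvm",
      "ssh_username": "packer"
    }
  ]
}
```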
## Requirements

In addition to [Parallels Desktop for
Mac](http://www.parallels.com/products/desktop/) this requires the
---
description: |-
    This VirtualBox Packer builder is able to create VirtualBox virtual machines and export them in the OVF format, starting from an existing OVF/OVA (exported virtual machine image).
layout: docs
page_title: 'VirtualBox Builder (from an OVF/OVA)'
...

# VirtualBox Builder (from an OVF/OVA)

Type: `virtualbox-ovf`

This VirtualBox Packer builder is able to create
[VirtualBox](https://www.virtualbox.org/) virtual machines and export them in
the OVF format, starting from an
---
description: |-
    This VMware Packer builder is able to create VMware virtual machines from an ISO file as a source. It currently supports building virtual machines on hosts running VMware Fusion for OS X, VMware Workstation for Linux and Windows, and VMware Player on Linux. It can also build machines directly on VMware vSphere Hypervisor using SSH as opposed to the vSphere API.
layout: docs
page_title: VMware Builder from ISO
...

# VMware Builder (from ISO)

Type: `vmware-iso`

This VMware Packer builder is able to create VMware virtual machines from an
ISO file as a source. It currently supports building virtual machines on hosts
running [VMware Fusion](http://www.vmware.com/products/fusion/overview.html)
for OS X,

...
## Configuration Reference

There are many configuration options available for the VMware builder. They
are organized below into two categories: required and optional. Within each
category, the available options are alphabetized and described.

In addition to the options listed here, a
[communicator](/docs/templates/communicator.html) can be configured for this
builder.
### Required:

- `iso_checksum` (string) - The checksum for the OS ISO file. Because ISO files
  are so large, this is required and Packer will verify it prior to booting a
  virtual machine with the ISO attached. The type of the checksum is specified
  with `iso_checksum_type`, documented below.

- `iso_checksum_type` (string) - The type of the checksum specified in
  `iso_checksum`. Valid values are "none", "md5", "sha1", "sha256", or
  "sha512" currently. While "none" will skip checksumming, this is not
  recommended since ISO files are generally large and corruption does happen
  from time to time.

- `iso_url` (string) - A URL to the ISO containing the installation image. This
  URL can be either an HTTP URL or a file URL (or path to a file). If this is
  an HTTP URL, Packer will download it and cache it between runs.

- `ssh_username` (string) - The username to use to SSH into the machine once
  the OS is installed.
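
Putting the required options together, a minimal `vmware-iso` builder block
might look like the following sketch. The ISO URL, checksum, and username are
placeholders, not values taken from this page:

```javascript
{
  "type": "vmware-iso",
  "iso_url": "http://example.com/ubuntu-14.04-server-amd64.iso",
  "iso_checksum_type": "sha256",
  "iso_checksum": "REPLACE_WITH_THE_REAL_SHA256_CHECKSUM",
  "ssh_username": "packer"
}
```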
### Optional:

- `disk_additional_size` (array of integers) - The size(s) of any additional
  hard disks for the VM in megabytes. If this is not specified then the VM
  will only contain a primary hard disk. The builder uses expandable, not
  fixed-size virtual hard disks, so the actual file representing the disk will
  not use the full size unless it is full.

- `boot_command` (array of strings) - This is an array of commands to type when
  the virtual machine is first booted. The goal of these commands should be to
  type just enough to initialize the operating system installer. Special keys
  can be typed as well, and are covered in the section below on the boot
  command. If this is not specified, it is assumed the installer will start
  itself.

- `boot_wait` (string) - The time to wait after booting the initial virtual
  machine before typing the `boot_command`. The value of this should be a
  duration. Examples are "5s" and "1m30s" which will cause Packer to wait five
  seconds and one minute 30 seconds, respectively. If this isn't specified,
  the default is 10 seconds.
- `disk_size` (integer) - The size of the hard disk for the VM in megabytes.
  The builder uses expandable, not fixed-size virtual hard disks, so the
  actual file representing the disk will not use the full size unless it is
  full. By default this is set to 40,000 (about 40 GB).

- `disk_type_id` (string) - The type of VMware virtual disk to create. The
  default is "1", which corresponds to a growable virtual disk split in 2GB
  files. This option is for advanced usage, modify only if you know what
  you're doing. For more information, please consult the [Virtual Disk Manager
  User's Guide](http://www.vmware.com/pdf/VirtualDiskManager.pdf) for desktop
  VMware clients. For ESXi, refer to the proper ESXi documentation.

- `floppy_files` (array of strings) - A list of files to place onto a floppy
  disk that is attached when the VM is booted. This is most useful for
  unattended Windows installs, which look for an `Autounattend.xml` file on
  removable media. By default, no floppy will be attached. All files listed in
  this setting get placed into the root directory of the floppy and the floppy
  is attached as the first floppy device. Currently, no support exists for
  creating sub-directories on the floppy. Wildcard characters (\*, ?, and
  \[\]) are allowed. Directory names are also allowed, which will add all the
  files found in the directory to the floppy.

- `fusion_app_path` (string) - Path to "VMware Fusion.app". By default this is
  "/Applications/VMware Fusion.app" but this setting allows you to customize
  this.

- `guest_os_type` (string) - The guest OS type being installed. This will be
  set in the VMware VMX. By default this is "other". By specifying a more
  specific OS type, VMware may perform some optimizations or virtual hardware
  changes to better support the operating system running in the virtual
  machine.

- `headless` (boolean) - Packer defaults to building VMware virtual machines
  by launching a GUI that shows the console of the machine being built. When
  this value is set to true, the machine will start without a console. For
  VMware machines, Packer will output VNC connection information in case you
  need to connect to the console to debug the build process.
- `http_directory` (string) - Path to a directory to serve using an HTTP
  server. The files in this directory will be served over HTTP and will be
  requestable from the virtual machine. This is useful for hosting kickstart
  files and so on. By default this is "", which means no HTTP server will be
  started. The address and port of the HTTP server will be available as
  variables in `boot_command`. This is covered in more detail below.

- `http_port_min` and `http_port_max` (integer) - These are the minimum and
  maximum port to use for the HTTP server started to serve the
  `http_directory`. Because Packer often runs in parallel, Packer will choose
  a randomly available port in this range to run the HTTP server. If you want
  to force the HTTP server to be on one port, make this minimum and maximum
  port the same. By default the values are 8000 and 9000, respectively.
- `iso_urls` (array of strings) - Multiple URLs for the ISO to download.
  Packer will try these in order. If anything goes wrong attempting to
  download or while downloading a single URL, it will move on to the next. All
  URLs must point to the same file (same checksum). By default this is empty
  and `iso_url` is used. Only one of `iso_url` or `iso_urls` can be specified.
- `output_directory` (string) - This is the path to the directory where the
  resulting virtual machine will be created. This may be relative or absolute.
  If relative, the path is relative to the working directory when `packer` is
  executed. This directory must not exist or be empty prior to running the
  builder. By default this is "output-BUILDNAME" where "BUILDNAME" is the name
  of the build.

- `remote_cache_datastore` (string) - The path to the datastore where
  supporting files will be stored during the build on the remote machine. By
  default this is the same as the `remote_datastore` option. This only has an
  effect if `remote_type` is enabled.

- `remote_cache_directory` (string) - The path where the ISO and/or floppy
  files will be stored during the build on the remote machine. The path is
  relative to the `remote_cache_datastore` on the remote machine. By default
  this is "packer_cache". This only has an effect if `remote_type` is enabled.

- `remote_datastore` (string) - The path to the datastore where the resulting
  VM will be stored when it is built on the remote machine. By default this is
  "datastore1". This only has an effect if `remote_type` is enabled.

- `remote_host` (string) - The host of the remote machine used for access.
  This is only required if `remote_type` is enabled.

- `remote_password` (string) - The SSH password for the user used to access
  the remote machine. By default this is empty. This only has an effect if
  `remote_type` is enabled.

- `remote_type` (string) - The type of remote machine that will be used to
  build this VM rather than a local desktop product. The only value accepted
  for this currently is "esx5". If this is not set, a desktop product will be
  used. By default, this is not set.

- `remote_username` (string) - The username for the SSH user that will access
  the remote machine. This is required if `remote_type` is enabled.
- `shutdown_command` (string) - The command to use to gracefully shut down the
  machine once all the provisioning is done. By default this is an empty
  string, which tells Packer to just forcefully shut down the machine.

- `shutdown_timeout` (string) - The amount of time to wait after executing the
  `shutdown_command` for the virtual machine to actually shut down. If it
  doesn't shut down in this time, it is an error. By default, the timeout is
  "5m", or five minutes.

- `skip_compaction` (boolean) - VMware-created disks are defragmented and
  compacted at the end of the build process using `vmware-vdiskmanager`. In
  certain rare cases, this might actually end up making the resulting disks
  slightly larger. If you find this to be the case, you can disable compaction
  using this configuration value.

- `tools_upload_flavor` (string) - The flavor of the VMware Tools ISO to
  upload into the VM. Valid values are "darwin", "linux", and "windows". By
  default, this is empty, which means VMware tools won't be uploaded.
- `tools_upload_path` (string) - The path in the VM to upload the VMware
  tools. This only takes effect if `tools_upload_flavor` is non-empty. This is
  a [configuration template](/docs/templates/configuration-templates.html)
  that has a single valid variable: `Flavor`, which will be the value of
  `tools_upload_flavor`. By default the upload path is set to
  `{{.Flavor}}.iso`. This setting is not used when `remote_type` is "esx5".

- `version` (string) - The [vmx hardware
  version](http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003746)
  for the new virtual machine. Only the default value has been tested, any
  other value is experimental. Default value is '9'.
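
To show how several of the optional settings above work together, here is a
hedged sketch that serves a kickstart file from `http_directory` and references
the HTTP server's address in `boot_command`. The boot command text, file names,
and shutdown command are placeholders for whatever your guest OS installer
actually expects, and the `{{ .HTTPIP }}`/`{{ .HTTPPort }}` variables are the
ones Packer exposes for the built-in HTTP server:

```javascript
{
  "type": "vmware-iso",
  "iso_url": "http://example.com/ubuntu-14.04-server-amd64.iso",
  "iso_checksum_type": "sha256",
  "iso_checksum": "REPLACE_WITH_THE_REAL_SHA256_CHECKSUM",
  "ssh_username": "packer",
  "headless": true,
  "http_directory": "http",
  "boot_wait": "10s",
  "boot_command": [
    "<esc><wait>",
    "linux ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter>"
  ],
  "shutdown_command": "echo 'packer' | sudo -S shutdown -P now"
}
```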
---
description: |-
    This VMware Packer builder is able to create VMware virtual machines from an existing VMware virtual machine (a VMX file). It currently supports building virtual machines on hosts running VMware Fusion Professional for OS X, VMware Workstation for Linux and Windows, and VMware Player on Linux.
layout: docs
page_title: VMware Builder from VMX
...

# VMware Builder (from VMX)

Type: `vmware-vmx`

This VMware Packer builder is able to create VMware virtual machines from an
existing VMware virtual machine (a VMX file). It currently supports building
virtual machines on hosts running [VMware Fusion
Professional](http://www.vmware.com/products/fusion-professional/) for OS X,
---
description: |-
    The `packer build` Packer command takes a template and runs all the builds within it in order to generate a set of artifacts. The various builds specified within a template are executed in parallel, unless otherwise specified. And the artifacts that are created will be outputted at the end of the build.
layout: docs
page_title: 'Build - Command-Line'
...

# Command-Line: Build

The `packer build` Packer command takes a template and runs all the builds
within it in order to generate a set of artifacts. The various builds specified
within a template are executed in parallel, unless otherwise specified. And the
artifacts that are created will be outputted at the end of the build.

## Options

- `-color=false` - Disables colorized output. Enabled by default.
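
For example, to run all the builds in a template with colorized output disabled
(handy when capturing logs), an invocation might look like this; the template
file name is just a placeholder:

```text
$ packer build -color=false template.json
```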
---
description: |-
    The `packer fix` Packer command takes a template and finds backwards incompatible parts of it and brings it up to date so it can be used with the latest version of Packer. After you update to a new Packer release, you should run the fix command to make sure your templates work with the new release.
layout: docs
page_title: 'Fix - Command-Line'
...

# Command-Line: Fix

The `packer fix` Packer command takes a template and finds backwards
incompatible parts of it and brings it up to date so it can be used with the
latest version of Packer. After you update to a new Packer release, you should
run the fix command to make sure your templates work with the new release.
The fix command will output the changed template to standard out, so you should
redirect standard output using standard OS-specific techniques if you want to
save it to a file. For example, on Linux systems, you may want to do this:
```
$ packer fix old.json > new.json
```
If fixing fails for any reason, the fix command will exit with a non-zero exit
status. Error messages appear on standard error, so if you're redirecting
output, you'll still see error messages.
-> **Even when Packer fix doesn't do anything** to the template, the template
will be outputted to standard out. Things such as configuration key ordering
and indentation may be changed. The output format, however, is pretty-printed
for human readability.
The full list of fixes that the fix command performs is visible in the help
output, which can be seen via `packer fix -h`.
---
description: |-
    The `packer inspect` Packer command takes a template and outputs the various components a template defines. This can help you quickly learn about a template without having to dive into the JSON itself. The command will tell you things like what variables a template accepts, the builders it defines, the provisioners it defines and the order they'll run, and more.
layout: docs
page_title: 'Inspect - Command-Line'
...

# Command-Line: Inspect

The `packer inspect` Packer command takes a template and outputs the various
components a template defines. This can help you quickly learn about a template
without having to dive into the JSON itself. The command will tell you things
like what variables a template accepts, the builders it defines, the
provisioners it defines and the order they'll run, and more.
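
Basic usage is simply pointing the command at a template; the file name below
is a placeholder:

```text
$ packer inspect template.json
```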
This command is extra useful when used with [machine-readable
---
description: |-
    Packer is controlled using a command-line interface. All interaction with Packer is done via the `packer` tool. Like many other command-line tools, the `packer` tool takes a subcommand to execute, and that subcommand may have additional options as well. Subcommands are executed with `packer SUBCOMMAND`, where "SUBCOMMAND" is obviously the actual command you wish to execute.
layout: docs
page_title: 'Packer Command-Line'
...

# Packer Command-Line

Packer is controlled using a command-line interface. All interaction with
Packer is done via the `packer` tool. Like many other command-line tools, the
`packer` tool takes a subcommand to execute, and that subcommand may have
additional options as well. Subcommands are executed with `packer SUBCOMMAND`,
where "SUBCOMMAND" is obviously the actual command you wish to execute.

If you run `packer` by itself, help will be displayed showing all available
subcommands and a brief synopsis of what they do. In addition to this, you can
run any `packer` command with the `-h` flag to output more detailed help for a
specific subcommand.
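
For example, to see the detailed help for the `build` subcommand:

```text
$ packer build -h
```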
In addition to the documentation available on the command-line, each command is
documented on this website. You can find the documentation for a specific
---
description: |-
    By default, the output of Packer is very human-readable. It uses nice formatting, spacing, and colors in order to make Packer a pleasure to use. However, Packer was built with automation in mind. To that end, Packer supports a fully machine-readable output setting, allowing you to use Packer in automated environments.
---
description: |-
    The `packer validate` Packer command is used to validate the syntax and configuration of a template. The command will return a zero exit status on success, and a non-zero exit status on failure. Additionally, if a template doesn't validate, any error messages will be outputted.
layout: docs
page_title: 'Validate - Command-Line'
...

# Command-Line: Validate

The `packer validate` Packer command is used to validate the syntax and
configuration of a [template](/docs/templates/introduction.html). The command
will return a zero exit status on success, and a non-zero exit status on
failure. Additionally, if a template doesn't validate, any error messages will
be outputted.

Example usage:

```text
$ packer validate my-template.json
Template validation failed. Errors are shown below.
```
---
description: |-
    Packer Builders are the components of Packer responsible for creating a machine, bringing it to a point where it can be provisioned, and then turning that provisioned machine into some sort of machine image. Several builders are officially distributed with Packer itself, such as the AMI builder, the VMware builder, etc. However, it is possible to write custom builders using the Packer plugin interface, and this page documents how to do that.
layout: docs
page_title: 'Custom Builder - Extend Packer'
...

# Custom Builder Development

Packer Builders are the components of Packer responsible for creating a
machine, bringing it to a point where it can be provisioned, and then turning
that provisioned machine into some sort of machine image. Several builders are
officially distributed with Packer itself, such as the AMI builder, the VMware
builder, etc. However, it is possible to write custom builders using the Packer
plugin interface, and this page documents how to do that.

Prior to reading this page, it is assumed you have read the page on [plugin
development basics](/docs/extend/developing-plugins.html).

~> **Warning!** This is an advanced topic. If you're new to Packer, we
recommend getting a bit more comfortable before you dive into writing plugins.

## The Interface

The interface that must be implemented for a builder is the `packer.Builder`
interface. It is reproduced below for easy reference. The actual interface in
the source code contains some basic documentation as well explaining what each
---
description: |-
    Packer Commands are the components of Packer that add functionality to the `packer` application. Packer comes with a set of commands out of the box, such as `build`. Commands are invoked as `packer <COMMAND>`. Custom commands allow you to add new commands to Packer to perhaps perform new functionality.
layout: docs
page_title: Custom Command Development
...

# Custom Command Development

Packer Commands are the components of Packer that add functionality to the
`packer` application. Packer comes with a set of commands out of the box, such
as `build`. Commands are invoked as `packer <COMMAND>`. Custom commands allow
you to add new commands to Packer to perhaps perform new functionality.

Prior to reading this page, it is assumed you have read the page on [plugin
development basics](/docs/extend/developing-plugins.html).

Command plugins implement the `packer.Command` interface and are served using
the `plugin.ServeCommand` function. Commands actually have no control over
what keyword invokes the command with the `packer` binary. The keyword to
invoke the command depends on how the plugin is installed and configured in
the core Packer configuration.

~> **Warning!** This is an advanced topic. If you're new to Packer, we
recommend getting a bit more comfortable before you dive into writing plugins.

## The Interface

The interface that must be implemented for a command is the `packer.Command`
interface. It is reproduced below for easy reference. The actual interface in
the source code contains some basic documentation as well explaining what each
method should do.
```go
type Command interface {
    // Help returns the long-form help text, most commonly shown for
    // the `--help` or `-h` option.
    Help() string

    // Run executes the command with the given environment and remaining
    // command-line arguments, returning the exit status.
    Run(env Environment, args []string) int

    // Synopsis returns a short, single-line description of the command.
    Synopsis() string
}
```
### The "Help" Method
### The "Help" Method
The `Help` method returns long-form help. This help is most commonly
The `Help` method returns long-form help. This help is most commonly shown when
shown when a command is invoked with the `--help` or `-h` option.
a command is invoked with the `--help` or `-h` option. The help should document
The help should document all the available command line flags, purpose
all the available command line flags, purpose of the command, etc.
of the command, etc.
Packer commands generally follow the following format for help, but
Packer commands generally follow the following format for help, but it is not
it is not required. You're allowed to make the help look like anything
required. You're allowed to make the help look like anything you please.
you please.
```text
Usage: packer COMMAND [options] ARGS...

Brief one or two sentence about the function of the command.

...
```
### The "Run" Method
### The "Run" Method
`Run` is what is called when the command is actually invoked. It is given
`Run` is what is called when the command is actually invoked. It is given the
the `packer.Environment`, which has access to almost all components of
`packer.Environment`, which has access to almost all components of the current
the current Packer run, such as UI, builders, other plugins, etc. In addition
Packer run, such as UI, builders, other plugins, etc. In addition to the
to the environment, the remaining command line args are given. These command
environment, the remaining command line args are given. These command line args
line args have already been stripped of the command name, so they can be
have already been stripped of the command name, so they can be passed directly
passed directly into something like the standard Go `flag` package for
into something like the standard Go `flag` package for command-line flag
command-line flag parsing.
parsing.
The return value of `Run` is the exit status for the command. If everything
The return value of `Run` is the exit status for the command. If everything ran
ran successfully, this should be 0. If any errors occurred, it should be any
successfully, this should be 0. If any errors occurred, it should be any
positive integer.
positive integer.
### The "Synopsis" Method
### The "Synopsis" Method
The `Synopsis` method should return a short single-line description
The `Synopsis` method should return a short single-line description of what the
of what the command does. This is used when `packer` is invoked on its own
command does. This is used when `packer` is invoked on its own in order to show
in order to show a brief summary of the commands that Packer supports.
a brief summary of the commands that Packer supports.
The synopsis should be no longer than around 50 characters, since it is
The synopsis should be no longer than around 50 characters, since it is already
This page will document how you can develop your own Packer plugins. Prior to
description:|-
reading this, it is assumed that you're comfortable with Packer and also know
This page will document how you can develop your own Packer plugins. Prior to reading this, it is assumed that you're comfortable with Packer and also know the basics of how Plugins work, from a user standpoint.
the basics of how Plugins work, from a user standpoint.
---
layout: docs
page_title: Developing Plugins
...
# Developing Plugins
This page will document how you can develop your own Packer plugins. Prior to
reading this, it is assumed that you're comfortable with Packer and also know
the [basics of how Plugins work](/docs/extend/plugins.html), from a user
standpoint.
Packer plugins must be written in [Go](http://golang.org/), so it is also
assumed that you're familiar with the language. This page will not be a Go
language tutorial. Thankfully, if you are familiar with Go, the Go toolchain
makes it extremely easy to develop Packer plugins.
~> **Warning!** This is an advanced topic. If you're new to Packer, we
recommend getting a bit more comfortable before you dive into writing plugins.
## Plugin System Architecture
Packer has a fairly unique plugin architecture. Instead of loading plugins
directly into a running application, Packer runs each plugin as a *separate
application*. Inter-process communication and RPC are then used to communicate
between the many running Packer processes. Packer core itself is responsible
for orchestrating the processes and handles cleanup.
The beauty of this is that your plugin can have any dependencies it wants.
Dependencies don't need to line up with what Packer core or any other plugin
uses, because they're completely isolated into the process space of the plugin
itself.
And, thanks to Go's
[interfaces](http://golang.org/doc/effective_go.html#interfaces_and_types), it
doesn't even look like inter-process communication is occurring. You just use
the interfaces like normal, but in fact they're being executed in a remote
process. Pretty cool.
## Plugin Development Basics
Developing a plugin is quite simple. All the various kinds of plugins have a
corresponding interface. The plugin simply needs to implement this interface
and expose it using the Packer plugin package (covered here shortly), and
that's it!
There are two packages that really matter and that every plugin must use.
Other than the following two packages, you're encouraged to use whatever
packages you want. Because plugins are their own processes, there is no danger
of colliding dependencies.
- `github.com/mitchellh/packer` - Contains all the interfaces that you have to
  implement for any given plugin.
- `github.com/mitchellh/packer/plugin` - Contains the code to serve the plugin.
  This handles all the inter-process communication stuff.
There are two steps involved in creating a plugin:
1. Implement the desired interface. For example, if you're building a builder
   plugin, implement the `packer.Builder` interface.
2. Serve the interface by calling the appropriate plugin serving method in your
   main method. In the case of a builder, this is `plugin.ServeBuilder`.
A basic example is shown below. In this example, assume the `Builder` struct
implements the `packer.Builder` interface:
```go
import (
	"github.com/mitchellh/packer/plugin"
)
...
type Builder struct{}
func main() {
	plugin.ServeBuilder(new(Builder))
}
```
**That's it!** `plugin.ServeBuilder` handles all the nitty gritty of
communicating with Packer core and serving your builder over RPC. It can't get
much easier than that.
Next, just build your plugin like a normal Go application, using `go build` or
however you please. The resulting binary is the plugin that can be installed
using standard installation procedures.
The specifics of how to implement each type of interface are covered in the
relevant subsections available in the navigation to the left.
~> **Lock your dependencies!** Unfortunately, Go's dependency management
story is fairly sad. There are various unofficial methods out there for locking
dependencies, and using one of them is highly recommended since the Packer
codebase will continue to improve, potentially breaking APIs along the way
until there is a stable release. By locking your dependencies, your plugins
will continue to work with the version of Packer you lock to.
## Logging and Debugging
Plugins can use the standard Go `log` package to log. Anything logged using
this will be available in the Packer log files automatically. The Packer log is
visible on stderr when the `PACKER_LOG` environmental variable is set.
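For example, any code in your plugin can log with the standard library; the
function below is purely illustrative:
```go
import "log"

// Anything written through the log package ends up in the Packer log output,
// so plain log.Printf calls are all a plugin needs for diagnostics.
func doExpensiveStep() {
	log.Println("starting the expensive step")
	log.Printf("finished the expensive step after %d retries", 0)
}
```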
Packer will prefix any logs from plugins with the path to that plugin so you
can identify where the logs come from. Some example logs are shown below:
---
description: |-
    Packer Plugins allow new functionality to be added to Packer without modifying
    the core source code. Packer plugins are able to add new commands, builders,
    provisioners, hooks, and more. In fact, much of Packer itself is implemented
    by writing plugins that are simply distributed with Packer. For example, all
    the commands, builders, provisioners, and more that ship with Packer are
    implemented as Plugins that are simply hardcoded to load with Packer.
layout: docs
page_title: 'Packer Plugins - Extend Packer'
...
# Packer Plugins
Packer Plugins allow new functionality to be added to Packer without modifying
the core source code. Packer plugins are able to add new commands, builders,
provisioners, hooks, and more. In fact, much of Packer itself is implemented by
writing plugins that are simply distributed with Packer. For example, all the
commands, builders, provisioners, and more that ship with Packer are
implemented as Plugins that are simply hardcoded to load with Packer.
This page will cover how to install and use plugins. If you're interested in
developing plugins, the documentation for that is available in the [developing
---
description: |-
    Packer Provisioners are the components of Packer that install and configure
    software into a running machine prior to turning that machine into an image.
    An example of a provisioner is the shell provisioner, which runs shell
    scripts within the machines.
layout: docs
page_title: Custom Provisioner Development
...
# Custom Provisioner Development
Packer Provisioners are the components of Packer that install and configure
software into a running machine prior to turning that machine into an image. An
example of a provisioner is the [shell
provisioner](/docs/provisioners/shell.html), which runs shell scripts within
the machines.
Prior to reading this page, it is assumed you have read the page on [plugin
development basics](/docs/extend/developing-plugins.html).
Provisioner plugins implement the `packer.Provisioner` interface and are served
using the `plugin.ServeProvisioner` function.
~> **Warning!** This is an advanced topic. If you're new to Packer, we
recommend getting a bit more comfortable before you dive into writing plugins.
## The Interface
The interface that must be implemented for a provisioner is the
`packer.Provisioner` interface. It is reproduced below for easy reference. The
actual interface in the source code also contains some basic documentation
explaining what each method should do.
```go
type Provisioner interface {
	Prepare(...interface{}) error
	Provision(Ui, Communicator) error
}
```
### The "Prepare" Method
### The "Prepare" Method
The `Prepare` method for each provisioner is called prior to any runs with
The `Prepare` method for each provisioner is called prior to any runs with the
the configuration that was given in the template. This is passed in as
configuration that was given in the template. This is passed in as an array of
an array of `interface{}` types, but is generally `map[string]interface{}`. The prepare
`interface{}` types, but is generally `map[string]interface{}`. The prepare
method is responsible for translating this configuration into an internal
method is responsible for translating this configuration into an internal
structure, validating it, and returning any errors.
structure, validating it, and returning any errors.
For multiple parameters, they should be merged together into the final
For multiple parameters, they should be merged together into the final
configuration, with later parameters overwriting any previous configuration.
configuration, with later parameters overwriting any previous configuration. The
The exact semantics of the merge are left to the builder author.
exact semantics of the merge are left to the builder author.
For decoding the `interface{}` into a meaningful structure, the
For decoding the `interface{}` into a meaningful structure, the
[mapstructure](https://github.com/mitchellh/mapstructure) library is recommended.
[mapstructure](https://github.com/mitchellh/mapstructure) library is
Mapstructure will take an `interface{}` and decode it into an arbitrarily
recommended. Mapstructure will take an `interface{}` and decode it into an
complex struct. If there are any errors, it generates very human friendly
arbitrarily complex struct. If there are any errors, it generates very human
errors that can be returned directly from the prepare method.
friendly errors that can be returned directly from the prepare method.
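A rough sketch of a `Prepare` method along these lines is shown below. The
`config` struct and its `message` field are invented for illustration, and
error handling is kept minimal; the important parts are the merge loop and the
`mapstructure.Decode` call.
```go
import (
	"fmt"

	"github.com/mitchellh/mapstructure"
)

type config struct {
	Message string `mapstructure:"message"`
}

type Provisioner struct {
	config config
}

func (p *Provisioner) Prepare(raws ...interface{}) error {
	// Decode every raw configuration block into the same struct; later
	// blocks overwrite earlier ones because they are decoded last.
	for _, raw := range raws {
		if err := mapstructure.Decode(raw, &p.config); err != nil {
			return err
		}
	}
	// Validate only; Prepare should have no side effects.
	if p.config.Message == "" {
		return fmt.Errorf("a non-empty 'message' must be configured")
	}
	return nil
}
```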
While it is not actively enforced, **no side effects** should occur from
running the `Prepare` method. Specifically, don't create files, don't launch
virtual machines, etc. Prepare's purpose is solely to configure the provisioner
and validate the configuration.
The `Prepare` method is called very early in the build process so that errors
may be displayed to the user before anything actually happens.
### The "Provision" Method
### The "Provision" Method
The `Provision` method is called when a machine is running and ready
The `Provision` method is called when a machine is running and ready to be
to be provisioned. The provisioner should do its real work here.
provisioned. The provisioner should do its real work here.
The method takes two parameters: a `packer.Ui` and a `packer.Communicator`.
The method takes two parameters: a `packer.Ui` and a `packer.Communicator`. The
The UI can be used to communicate with the user what is going on. The
UI can be used to communicate with the user what is going on. The communicator
communicator is used to communicate with the running machine, and is
is used to communicate with the running machine, and is guaranteed to be
guaranteed to be connected at this point.
connected at this point.
The provision method should not return until provisioning is complete.
The provision method should not return until provisioning is complete.
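Building on the sketch above (and its imports, plus the Packer package import),
a minimal `Provision` implementation might look like this. It assumes the
`packer.RemoteCmd` type and the `Start`/`Wait` communicator workflow of this
era, so double-check the interfaces in the source before copying it.
```go
func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
	ui.Say("Running the illustrative provisioning step...")

	// Run a command on the machine being provisioned and wait for it.
	cmd := &packer.RemoteCmd{Command: "echo " + p.config.Message}
	if err := comm.Start(cmd); err != nil {
		return err
	}
	cmd.Wait()
	if cmd.ExitStatus != 0 {
		return fmt.Errorf("remote command exited with status %d", cmd.ExitStatus)
	}
	return nil
}
```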
## Using the Communicator
The `packer.Communicator` parameter and interface is used to communicate with
the running machine. The machine may be local (in a virtual machine or
container of some sort) or it may be remote (in a cloud). The communicator
interface abstracts this away so that communication is the same overall.
The documentation around the [code
itself](https://github.com/mitchellh/packer/blob/master/packer/communicator.go)
is really great as an overview of how to use the interface. You should begin
---
description: |-
    Welcome to the Packer documentation! This documentation is more of a reference
    guide for all available features and options in Packer. If you're just getting
    started with Packer, please start with the introduction and getting started
    guide instead.
layout: docs
page_title: Packer Documentation
...
# Packer Documentation
Welcome to the Packer documentation! This documentation is more of a reference
guide for all available features and options in Packer. If you're just getting
started with Packer, please start with the [introduction and getting started
guide](/intro) instead.
---
description: |-
    Packer must first be installed on the machine you want to run it on. To make
    installation easy, Packer is distributed as a binary package for all supported
    platforms and architectures. This page will not cover how to compile Packer
    from source, as that is covered in the README and is only recommended for
    advanced users.
layout: docs
page_title: Install Packer
...
# Install Packer
Packer must first be installed on the machine you want to run it on. To make
installation easy, Packer is distributed as a [binary package](/downloads.html)
for all supported platforms and architectures. This page will not cover how to
compile Packer from source, as that is covered in the
[README](https://github.com/mitchellh/packer/blob/master/README.md) and is only
recommended for advanced users.
## Installing Packer
To install Packer, first find the [appropriate package](/downloads.html) for
your system and download it. Packer is packaged as a "zip" file.
Next, unzip the downloaded package into a directory where Packer will be
installed. On Unix systems, `~/packer` or `/usr/local/packer` is generally
good, depending on whether you want to restrict the install to just your user
or install it system-wide. On Windows systems, you can put it wherever you'd
like.
After unzipping the package, the directory should contain a set of binary
programs, such as `packer`, `packer-build-amazon-ebs`, etc. The final step to
installation is to make sure the directory you installed Packer to is on the
PATH. See [this
page](http://stackoverflow.com/questions/14637979/how-to-permanently-set-path-on-linux)
for instructions on setting the PATH on Linux and Mac.
---
description: |-
    This is the reference for the various message categories for Packer
    machine-readable output. Please read that page if you're unfamiliar with the
    general format and usage for the machine-readable output.
layout: docs_machine_readable
page_title: 'Machine-Readable Reference'
...
# Machine-Readable Reference
This is the reference for the various message categories for Packer
---
description: |-
    There are a few configuration settings that affect Packer globally by
    configuring the core of Packer. These settings all have reasonable defaults,
    so you generally don't have to worry about it until you want to tweak a
    configuration. If you're just getting started with Packer, don't worry about
    core configuration for now.
layout: docs
page_title: Core Configuration
...
# Core Configuration
There are a few configuration settings that affect Packer globally by
configuring the core of Packer. These settings all have reasonable defaults, so
you generally don't have to worry about it until you want to tweak a
configuration. If you're just getting started with Packer, don't worry about
core configuration for now.
The default location where Packer looks for this file depends on the platform.
For all non-Windows platforms, Packer looks for `$HOME/.packerconfig`. For
Windows, Packer looks for `%APPDATA%/packer.config`. If the file doesn't exist,
then Packer ignores it and just uses the default configuration.
The location of the core configuration file can be modified by setting the
`PACKER_CONFIG` environmental variable to be the path to another file.
The format of the configuration file is basic JSON.
...
Below is the list of all available configuration parameters for the core
configuration file. None of these are required, since all have sane defaults.
- `plugin_min_port` and `plugin_max_port` (integer) - These are the minimum and
  maximum ports that Packer uses for communication with plugins, since plugin
  communication happens over TCP connections on your local host. By default
  these are 10,000 and 25,000, respectively. Be sure to set a fairly wide range
  here, since Packer can easily use over 25 ports on a single run.
- `builders`, `commands`, `post-processors`, and `provisioners` are objects
  that are used to install plugins. The details of how these are set are
  covered in the [installing plugins documentation
---
description: |-
    Packer strives to be stable and bug-free, but issues inevitably arise where
    certain things may not work entirely correctly, or may not appear to work
    correctly. In these cases, it is sometimes helpful to see more details about
    what Packer is actually doing.
layout: docs
page_title: Debugging Packer
...
# Debugging Packer Builds
...
usually will stop between each step, waiting for keyboard input before
continuing. This will allow you to inspect state and so on.
In debug mode, once the remote instance is instantiated, Packer will emit to
the current directory an ephemeral private ssh key as a .pem file. Using that
you can `ssh -i <key.pem>` into the remote build instance and see what is going
on for debugging. The ephemeral key will be deleted at the end of the Packer
run during cleanup.
### Windows
As of Packer 0.8.1, the default WinRM communicator will emit the password for a
Remote Desktop Connection into your instance. This happens following the
several-minute pause as the instance is booted. Note a .pem key is still
created for securely transmitting the password. Packer automatically decrypts
the password for you in debug mode.
## Debugging Packer
Issues occasionally arise where certain things may not work entirely correctly,
or may not appear to work correctly. In these cases, it is sometimes helpful to
see more details about what Packer is actually doing.
Packer has detailed logs which can be enabled by setting the `PACKER_LOG`
environmental variable to any value, like this:
`PACKER_LOG=1 packer build <config.json>`. This will cause detailed logs to
appear on stderr. The logs contain log messages from Packer as well as any
plugins that are being used. Log messages from plugins are prefixed by their
application name.
Note that because Packer is highly parallelized, log messages sometimes appear
out of order, especially with respect to plugins. In this case, it is important
to pay attention to the timestamp of the log messages to determine order.
In addition to simply enabling the log, you can set `PACKER_LOG_PATH` in order
to force the log to always go to a specific file when logging is enabled. Note
that even when `PACKER_LOG_PATH` is set, `PACKER_LOG` must be set in order for
any logging to be enabled.
If you find a bug with Packer, please include the detailed log by using a
---
description: |-
    The Atlas post-processor for Packer receives an artifact from a Packer build
    and uploads it to Atlas. Atlas hosts and serves artifacts, allowing you to
    version and distribute them in a simple way.
layout: docs
page_title: 'Atlas Post-Processor'
...
# Atlas Post-Processor
Type: `atlas`
The Atlas post-processor for Packer receives an artifact from a Packer build
and uploads it to Atlas. [Atlas](https://atlas.hashicorp.com) hosts and serves
artifacts, allowing you to version and distribute them in a simple way.
## Workflow
To take full advantage of Packer and Atlas, it's important to understand the
workflow for creating artifacts with Packer and storing them in Atlas using
this post-processor. The goal of the Atlas post-processor is to streamline the
distribution of public or private artifacts by hosting them in a central
location in Atlas.
Here is an example workflow:
1. Packer builds an AMI with the [Amazon AMI
   builder](/docs/builders/amazon.html).
2. The `atlas` post-processor takes the resulting AMI and uploads it to Atlas.
   The `atlas` post-processor is configured with the name of the AMI, for
   example `hashicorp/foobar`, to create the artifact in Atlas or update the
   version if the artifact already exists.
3. The new version is ready and available to be used in deployments with a tool
   like [Terraform](https://terraform.io).
## Configuration
...
### Required:
- `token` (string) - Your access token for the Atlas API. This can be generated
  on your [tokens page](https://atlas.hashicorp.com/settings/tokens).
  Alternatively you can export your Atlas token as an environmental variable
  and remove it from the configuration.
- `artifact` (string) - The shorthand tag for your artifact that maps to Atlas,
  i.e. `hashicorp/foobar` for `atlas.hashicorp.com/hashicorp/foobar`. You must
  have access to the organization, hashicorp in this example, in order to add
  an artifact to the organization in Atlas.
- `artifact_type` (string) - For uploading AMIs to Atlas, `artifact_type` will
  always be `amazon.ami`. This field must be defined because Atlas can host
  other artifact types, such as Vagrant boxes.
-> **Note:** If you want to upload Vagrant boxes to Atlas, use the [Atlas
post-processor](/docs/post-processors/atlas.html).
---
description: |-
    The Packer compress post-processor takes an artifact with files (such as from
    VMware or VirtualBox) and compresses the artifact into a single archive.
layout: docs
page_title: 'compress Post-Processor'
...
# Compress Post-Processor
...
### Required:
You must specify the output filename. The archive format is derived from the
filename.
- `output` (string) - The path to save the compressed archive. The archive
  format is inferred from the filename. E.g. `.tar.gz` will be a gzipped
  tarball. `.zip` will be a zip file. If the extension can't be detected,
  Packer defaults to `.tar.gz` behavior but will not change the filename.
If you are executing multiple builders in parallel, you should make sure
`output` is unique for each one. For example,
`packer_{{.BuildName}}_{{.Provider}}.zip`.
### Optional:
If you want more control over how the archive is created, you can specify the
following settings:
- `compression_level` (integer) - Specify the compression level, for algorithms
  that support it, from 1 through 9 inclusive. Typically higher compression
  levels take longer but produce smaller files. Defaults to `6`.
- `keep_input_artifact` (boolean) - Keep source files; defaults to `false`.
### Supported Formats
Supported file extensions include `.zip`, `.tar`, `.gz`, `.tar.gz`, `.lz4` and
`.tar.lz4`. Note that `.gz` and `.lz4` will fail if you have multiple files to
compress.
## Examples
Some minimal examples are shown below, showing only the post-processor
configuration:
---
description: |-
    The Packer Docker import post-processor takes an artifact from the docker
    builder and imports it with Docker locally. This allows you to apply a
    repository and tag to the image and lets you use the other Docker
    post-processors such as docker-push to push the image to a registry.
layout: docs
page_title: 'docker-import Post-Processor'
...
# Docker Import Post-Processor
Type: `docker-import`
The Packer Docker import post-processor takes an artifact from the [docker
builder](/docs/builders/docker.html) and imports it with Docker locally. This
allows you to apply a repository and tag to the image and lets you use the
other Docker post-processors such as
[docker-push](/docs/post-processors/docker-push.html) to push the image to a
registry.
## Configuration
The configuration for this post-processor is extremely simple. At least a
repository is required.
- `repository` (string) - The repository of the imported image.
- `tag` (string) - The tag for the imported image. By default this is not set.
## Example
An example is shown below, showing only the post-processor configuration:
```javascript
{
  "type": "docker-import",
  "repository": "mitchellh/packer",
...
}
```
This example would take the image created by the Docker builder and import it
into the local Docker process with a name of `mitchellh/packer:0.7`.
---
description: |-
    The Packer Docker Save post-processor takes an artifact from the docker
    builder that was committed and saves it to a file. This is similar to
    exporting the Docker image directly from the builder, except that it
    preserves the hierarchy of images and metadata.
layout: docs
page_title: 'docker-save Post-Processor'
...
# Docker Save Post-Processor
Type: `docker-save`
The Packer Docker Save post-processor takes an artifact from the [docker
builder](/docs/builders/docker.html) that was committed and saves it to a file.
This is similar to exporting the Docker image directly from the builder, except
that it preserves the hierarchy of images and metadata.
We understand the terminology can be a bit confusing, but we've adopted the
terminology from Docker, so if you're familiar with that, then you'll be
familiar with this and vice versa.
## Configuration
The configuration for this post-processor is extremely simple.
- `path` (string) - The path to save the image.
## Example
An example is shown below, showing only the post-processor configuration:
---
description: |-
    The Packer Docker Tag post-processor takes an artifact from the docker builder
    that was committed and tags it into a repository. This allows you to use the
    other Docker post-processors such as docker-push to push the image to a
    registry.
layout: docs
page_title: 'docker-tag Post-Processor'
...
# Docker Tag Post-Processor
Type: `docker-tag`
The Packer Docker Tag post-processor takes an artifact from the [docker
builder](/docs/builders/docker.html) that was committed and tags it into a
repository. This allows you to use the other Docker post-processors such as
[docker-push](/docs/post-processors/docker-push.html) to push the image to a
registry.
This is very similar to the
[docker-import](/docs/post-processors/docker-import.html) post-processor except
that this works with committed resources, rather
---
description: |-
    The Packer Vagrant Cloud post-processor receives a Vagrant box from the
    `vagrant` post-processor and pushes it to Vagrant Cloud. Vagrant Cloud hosts
    and serves boxes to Vagrant, allowing you to version and distribute boxes to
    an organization in a simple way.
layout: docs
page_title: 'Vagrant Cloud Post-Processor'
...
# Vagrant Cloud Post-Processor
~> Vagrant Cloud has been superseded by Atlas. Please use the [Atlas
post-processor](/docs/post-processors/atlas.html) instead. Learn more about
[Atlas](https://atlas.hashicorp.com/).
Type: `vagrant-cloud`
The Packer Vagrant Cloud post-processor receives a Vagrant box from the
`vagrant` post-processor and pushes it to Vagrant Cloud. [Vagrant
Cloud](https://vagrantcloud.com) hosts and serves boxes to Vagrant, allowing
you to version and distribute boxes to an organization in a simple way.
You'll need to be familiar with Vagrant Cloud, have an upgraded account to
enable box hosting, and be distributing your box via the [shorthand
name](http://docs.vagrantup.com/v2/cli/box.html)
---
description: |-
    The Packer Vagrant post-processor takes a build and converts the artifact into
    a valid Vagrant box, if it can. This lets you use Packer to automatically
    create arbitrarily complex Vagrant boxes, and is in fact how the official
    boxes distributed by Vagrant are created.
layout: docs
page_title: 'Vagrant Post-Processor'
...
# Vagrant Post-Processor
Type: `vagrant`
The Packer Vagrant post-processor takes a build and converts the artifact into
a valid [Vagrant](http://www.vagrantup.com) box, if it can. This lets you use
Packer to automatically create arbitrarily complex Vagrant boxes, and is in
fact how the official boxes distributed by Vagrant are created.
If you've never used a post-processor before, please read the documentation on
[using post-processors](/docs/templates/post-processors.html) in templates.
This knowledge will be expected for the remainder of this document.
Because Vagrant boxes are
[provider-specific](http://docs.vagrantup.com/v2/boxes/format.html), the
Vagrant post-processor is hardcoded to understand how to convert the artifacts
of certain builders into proper boxes for their respective providers.
Currently, the Vagrant post-processor can create boxes for the following
providers.
- AWS
- DigitalOcean
- Hyper-V
- Parallels
- QEMU
- VirtualBox
- VMware
-> **Support for additional providers** is planned. If the Vagrant
post-processor doesn't support creating boxes for a provider you care about,
please help by contributing to Packer and adding support for it.
## Configuration
The simplest way to use the post-processor is to just enable it. No
configuration is required by default. This will mostly do what you expect and
will build functioning boxes for many of the built-in builders of Packer.
However, if you want to configure things a bit more, the post-processor does
expose some configuration options. The available options are listed below, with
more details about certain options in following sections.
- `compression_level` (integer) - An integer representing the compression level
  to use when creating the Vagrant box. Valid values range from 0 to 9, with 0
  being no compression and 9 being the best compression. By default,
  compression is enabled at level 6.
- `include` (array of strings) - Paths to files to include in the Vagrant box.
  These files will each be copied into the top level directory of the Vagrant
  box (regardless of their paths). They can then be used from the Vagrantfile.
- `keep_input_artifact` (boolean) - If set to true, do not delete the
  `output_directory` on a successful build. Defaults to false.
- `output` (string) - The full path to the box file that will be created by
---
description: |-
    The `ansible-local` Packer provisioner configures Ansible to run on the
    machine built by Packer from local Playbook and Role files. Playbooks and
    Roles can be uploaded from your local machine to the remote machine. Ansible
    is run in local mode via the `ansible-playbook` command.
layout: docs
page_title: 'Ansible (Local) Provisioner'
...
# Ansible Local Provisioner
Type: `ansible-local`
The `ansible-local` Packer provisioner configures Ansible to run on the machine
built by Packer from local Playbook and Role files. Playbooks and Roles can be
uploaded from your local machine to the remote machine. Ansible is run in
[local mode](http://docs.ansible.com/playbooks_delegation.html#local-playbooks)
via the `ansible-playbook` command.
## Basic Example
The example below is fully functional.
```javascript
{
  "type": "ansible-local",
  "playbook_file": "local.yml"
}
```
The reference of available configuration options is listed below.
Required:
- `playbook_file` (string) - The playbook file to be executed by ansible. This
  file must exist on your local system and will be uploaded to the
  remote machine.
Optional:
- `command` (string) - The command to invoke ansible. Defaults to
  "ansible-playbook".
- `extra_arguments` (array of strings) - An array of extra arguments to pass to
  the ansible command. By default, this is empty.
- `inventory_groups` (string) - A comma-separated list of groups to which
  Packer will assign the host `127.0.0.1`. A value of `my_group_1,my_group_2`
  will generate an Ansible inventory like:
```text
[my_group_1]
127.0.0.1
[my_group_2]
127.0.0.1
```
- `inventory_file` (string) - The inventory file to be used by ansible. This
  file must exist on your local system and will be uploaded to the
  remote machine.
When using an inventory file, it's also required to `--limit` the hosts to the
specified host you're building. The `--limit` argument can be provided in the
`extra_arguments` option.
An example inventory file may look like:
```text
[chi-dbservers]
db-01 ansible_connection=local
db-02 ansible_connection=local
...
chi-appservers
```
- `playbook_dir` (string) - A path to the complete ansible directory structure on your local system to be copied to the remote machine as the `staging_directory` before all other files and directories.

- `playbook_paths` (array of strings) - An array of paths to playbook files on your local system. These will be uploaded to the remote machine under `staging_directory`/playbooks. By default, this is empty.

- `group_vars` (string) - A path to the directory containing ansible group variables on your local system to be copied to the remote machine. By default, this is empty.

- `host_vars` (string) - A path to the directory containing ansible host variables on your local system to be copied to the remote machine. By default, this is empty.

- `role_paths` (array of strings) - An array of paths to role directories on your local system. These will be uploaded to the remote machine under `staging_directory`/roles. By default, this is empty.
- `staging_directory` (string) - The directory where all the configuration of Ansible by Packer will be placed. By default this is "/tmp/packer-provisioner-ansible-local". This directory doesn't need to exist but must have proper permissions so that the SSH user that Packer uses is able to create directories and write into this folder. If the permissions are not correct, use a shell provisioner prior to this to configure it properly.
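
Putting these options together, a provisioner block might look like the sketch below. This is illustrative only; the file names `local.yml` and `inventory` and the group `chi-appservers` are placeholders, not values required by Packer.

```javascript
{
  "type": "ansible-local",
  "playbook_file": "local.yml",
  "inventory_file": "inventory",
  "extra_arguments": ["--limit=chi-appservers"]
}
```

The `--limit` value should match a group or host defined in the uploaded inventory file.
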

---
description: |-
    The Chef Client Packer provisioner installs and configures software on machines built by Packer using chef-client. Packer configures a Chef client to talk to a remote Chef Server to provision the machine.
layout: docs
page_title: 'Chef-Client Provisioner'
...
# Chef Client Provisioner

Type: `chef-client`

The Chef Client Packer provisioner installs and configures software on machines built by Packer using [chef-client](http://docs.opscode.com/chef_client.html). Packer configures a Chef client to talk to a remote Chef Server to provision the machine.

The provisioner will even install Chef onto your machine if it isn't already installed, using the official Chef installers provided by Opscode.

## Basic Example

The example below is fully functional. It will install Chef onto the remote machine and run Chef client.
```javascript
{
  "type": "chef-client",
  "server_url": "https://mychefserver.com/"
}
```
Note: to properly clean up the Chef node and client, the machine on which Packer is running must have knife on the path and configured globally, i.e., `~/.chef/knife.rb` must be present and configured for the target Chef server.
## Configuration Reference

The reference of available configuration options is listed below. No configuration is actually required.

- `chef_environment` (string) - The name of the `chef_environment` sent to the Chef server. By default this is empty and will not use an environment.
- `config_template` (string) - Path to a template that will be used for the Chef configuration file. By default Packer only sets configuration it needs to match the settings set in the provisioner configuration. If you need to set configurations that the Packer provisioner doesn't support, then you should use a custom configuration template. See the dedicated "Chef Configuration" section below for more details.

- `execute_command` (string) - The command used to execute Chef. This has various [configuration template variables](/docs/templates/configuration-templates.html) available. See below for more information.

- `install_command` (string) - The command used to install Chef. This has various [configuration template variables](/docs/templates/configuration-templates.html) available. See below for more information.
- `json` (object) - An arbitrary mapping of JSON that will be available as node attributes while running Chef.

- `node_name` (string) - The name of the node to register with the Chef Server. This is optional and by default is packer-{{uuid}}.

- `prevent_sudo` (boolean) - By default, the configured commands that are executed to install and run Chef are executed with `sudo`. If this is true, then the sudo will be omitted.

- `run_list` (array of strings) - The [run list](http://docs.opscode.com/essentials_node_object_run_lists.html) for Chef. By default this is empty, and will use the run list sent down by the Chef Server.

- `server_url` (string) - The URL to the Chef server. This is required.
- `skip_clean_client` (boolean) - If true, Packer won't remove the client from the Chef server after it is done running. By default, this is false.

- `skip_clean_node` (boolean) - If true, Packer won't remove the node from the Chef server after it is done running. By default, this is false.

- `skip_install` (boolean) - If true, Chef will not automatically be installed on the machine using the Opscode omnibus installers.

- `staging_directory` (string) - This is the directory where all the configuration of Chef by Packer will be placed. By default this is "/tmp/packer-chef-client". This directory doesn't need to exist but must have proper permissions so that the SSH user that Packer uses is able to create directories and write into this folder. If the permissions are not correct, use a shell provisioner prior to this to configure it properly.

- `client_key` (string) - Path to client key. If not set, this defaults to a file named client.pem in `staging_directory`.
- `validation_client_name` (string) - Name of the validation client. If not set, this won't be set in the configuration and the default that Chef uses will be used.

- `validation_key_path` (string) - Path to the validation key for communicating with the Chef Server. This will be uploaded to the remote machine. If this is NOT set, then it is your responsibility via other means (shell provisioner, etc.) to get a validation key to where Chef expects it.
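
Several of these options are commonly combined in one provisioner block. The following is a non-authoritative sketch; the server URL, node name, run list entry, and node attributes are placeholders chosen for illustration.

```javascript
{
  "type": "chef-client",
  "server_url": "https://mychefserver.com/",
  "node_name": "packer-test-node",
  "run_list": ["recipe[nginx]"],
  "json": {
    "nginx": {
      "worker_processes": 2
    }
  }
}
```
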
## Chef Configuration

By default, Packer uses a simple Chef configuration file in order to set the options specified for the provisioner. But Chef is a complex tool that supports many configuration options. Packer allows you to specify a custom configuration template if you'd like to set custom configurations.

The default value for the configuration template is:
```liquid
log_level :info
log_location STDOUT
chef_server_url "{{.ServerUrl}}"
...
{{end}}
```
This template is a [configuration template](/docs/templates/configuration-templates.html) and has a set of variables available to use:

- `NodeName` - The node name set in the configuration.

- `ServerUrl` - The URL of the Chef Server set in the configuration.

- `ValidationKeyPath` - Path to the validation key, if it is set.
## Execute Command

By default, Packer uses the following command (broken across multiple lines for readability) to execute Chef:

```liquid
{{if .Sudo}}sudo {{end}}chef-client \
  --no-color \
  -c {{.ConfigPath}} \
  -j {{.JsonPath}}
```
This command can be customized using the `execute_command` configuration. As you can see from the default value above, the value of this configuration can contain various template variables, defined below:
- `ConfigPath` - The path to the Chef configuration file.

- `JsonPath` - The path to the JSON attributes file for the node.

- `Sudo` - A boolean of whether to `sudo` the command or not, depending on the value of the `prevent_sudo` configuration.
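
For example, a customized `execute_command` could raise the chef-client log level while keeping the default structure. The sketch below is illustrative only; it assumes the standard chef-client `--log_level` flag and is not the provisioner's default.

```javascript
{
  "type": "chef-client",
  "server_url": "https://mychefserver.com/",
  "execute_command": "{{if .Sudo}}sudo {{end}}chef-client --no-color --log_level debug -c {{.ConfigPath}} -j {{.JsonPath}}"
}
```
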
## Install Command

By default, Packer uses the following command (broken across multiple lines for readability) to install Chef. This command can be customized if you want to

---
description: |-
    The Chef solo Packer provisioner installs and configures software on machines built by Packer using chef-solo. Cookbooks can be uploaded from your local machine to the remote machine or remote paths can be used.
layout: docs
page_title: 'Chef-Solo Provisioner'
...
# Chef Solo Provisioner

Type: `chef-solo`

The Chef solo Packer provisioner installs and configures software on machines built by Packer using [chef-solo](https://docs.chef.io/chef_solo.html). Cookbooks can be uploaded from your local machine to the remote machine or remote paths can be used.

The provisioner will even install Chef onto your machine if it isn't already installed, using the official Chef installers provided by Chef Inc.

## Basic Example

The example below is fully functional and expects cookbooks in the "cookbooks" directory relative to your working directory.
```javascript
{
  "type": "chef-solo",
  "cookbook_paths": ["cookbooks"]
}
```

## Configuration Reference
The reference of available configuration options is listed below. No configuration is actually required, but at least `run_list` is recommended.

- `chef_environment` (string) - The name of the `chef_environment` sent to the Chef server. By default this is empty and will not use an environment.
- `config_template` (string) - Path to a template that will be used for the Chef configuration file. By default Packer only sets configuration it needs to match the settings set in the provisioner configuration. If you need to set configurations that the Packer provisioner doesn't support, then you should use a custom configuration template. See the dedicated "Chef Configuration" section below for more details.

- `cookbook_paths` (array of strings) - This is an array of paths to "cookbooks" directories on your local filesystem. These will be uploaded to the remote machine in the directory specified by the `staging_directory`. By default, this is empty.

- `data_bags_path` (string) - The path to the "data_bags" directory on your local filesystem. These will be uploaded to the remote machine in the directory specified by the `staging_directory`. By default, this is empty.
- `encrypted_data_bag_secret_path` (string) - The path to the file containing the secret for encrypted data bags. By default, this is empty, so no secret will be available.

- `environments_path` (string) - The path to the "environments" directory on your local filesystem. These will be uploaded to the remote machine in the directory specified by the `staging_directory`. By default, this is empty.

- `execute_command` (string) - The command used to execute Chef. This has various [configuration template variables](/docs/templates/configuration-templates.html) available. See below for more information.

- `install_command` (string) - The command used to install Chef. This has various [configuration template variables](/docs/templates/configuration-templates.html) available. See below for more information.
- `json` (object) - An arbitrary mapping of JSON that will be available as node attributes while running Chef.

- `prevent_sudo` (boolean) - By default, the configured commands that are executed to install and run Chef are executed with `sudo`. If this is true, then the sudo will be omitted.

- `remote_cookbook_paths` (array of strings) - A list of paths on the remote machine where cookbooks will already exist. These may exist from a previous provisioner or step. If specified, Chef will be configured to look for cookbooks here. By default, this is empty.

- `roles_path` (string) - The path to the "roles" directory on your local filesystem. These will be uploaded to the remote machine in the directory specified by the `staging_directory`. By default, this is empty.
- `run_list` (array of strings) - The [run list](https://docs.chef.io/run_lists.html) for Chef. By default this is empty.

- `skip_install` (boolean) - If true, Chef will not automatically be installed on the machine using the Chef omnibus installers.

- `staging_directory` (string) - This is the directory where all the configuration of Chef by Packer will be placed. By default this is "/tmp/packer-chef-solo". This directory doesn't need to exist but must have proper permissions so that the SSH user that Packer uses is able to create directories and write into this folder. If the permissions are not correct, use a shell provisioner prior to this to configure it properly.
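
As with chef-client, several of these options are commonly combined. The block below is an illustrative sketch; the directory names, run list entry, and attributes are placeholders rather than required values.

```javascript
{
  "type": "chef-solo",
  "cookbook_paths": ["cookbooks"],
  "roles_path": "roles",
  "data_bags_path": "data_bags",
  "run_list": ["role[web]"],
  "json": {
    "app": {
      "port": 8080
    }
  }
}
```
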
## Chef Configuration

By default, Packer uses a simple Chef configuration file in order to set the options specified for the provisioner. But Chef is a complex tool that supports many configuration options. Packer allows you to specify a custom configuration template if you'd like to set custom configurations.

The default value for the configuration template is:

```liquid
cookbook_path [{{.CookbookPaths}}]
```
This template is a [configuration template](/docs/templates/configuration-templates.html) and has a set of variables available to use:
- `ChefEnvironment` - The current enabled environment. Only non-empty if the environment path is set.

- `CookbookPaths` is the set of cookbook paths ready to be embedded directly into a Ruby array to configure Chef.

- `DataBagsPath` is the path to the data bags folder.

- `EncryptedDataBagSecretPath` - The path to the encrypted data bag secret.

- `EnvironmentsPath` - The path to the environments folder.

- `RolesPath` - The path to the roles folder.
## Execute Command

By default, Packer uses the following command (broken across multiple lines for readability) to execute Chef:

```liquid
{{if .Sudo}}sudo {{end}}chef-solo \
  --no-color \
  -c {{.ConfigPath}} \
  -j {{.JsonPath}}
```
This command can be customized using the `execute_command` configuration. As you can see from the default value above, the value of this configuration can contain various template variables, defined below:
- `ConfigPath` - The path to the Chef configuration file.

- `JsonPath` - The path to the JSON attributes file for the node.

- `Sudo` - A boolean of whether to `sudo` the command or not, depending on the value of the `prevent_sudo` configuration.
## Install Command

By default, Packer uses the following command (broken across multiple lines for readability) to install Chef. This command can be customized if you want to

---
description: |-
    Packer is extensible, allowing you to write new provisioners without having to modify the core source code of Packer itself. Documentation for creating new provisioners is covered in the custom provisioners page of the Packer plugin section.
layout: docs
page_title: Custom Provisioner
...
# Custom Provisioner

Packer is extensible, allowing you to write new provisioners without having to modify the core source code of Packer itself. Documentation for creating new provisioners is covered in the [custom provisioners](/docs/extend/provisioner.html) page of the Packer plugin section.

---
description: |-
    The file Packer provisioner uploads files to machines built by Packer. The recommended usage of the file provisioner is to use it to upload files, and then use shell provisioner to move them to the proper place, set permissions, etc.
layout: docs
page_title: File Provisioner
...
# File Provisioner

Type: `file`

The file Packer provisioner uploads files to machines built by Packer. The recommended usage of the file provisioner is to use it to upload files, and then use [shell provisioner](/docs/provisioners/shell.html) to move them to the proper place, set permissions, etc.

The file provisioner can upload both single files and complete directories.

## Basic Example
```javascript
{
  "type": "file",
  "source": "app.tar.gz",
  ...
}
```

## Configuration Reference
The available configuration options are listed below. All elements are required.

- `source` (string) - The path to a local file or directory to upload to the machine. The path can be absolute or relative. If it is relative, it is relative to the working directory when Packer is executed. If this is a directory, the existence of a trailing slash is important. Read below on uploading directories.

- `destination` (string) - The path where the file will be uploaded to in the machine. This value must be a writable location and any parent directories must already exist.
- `direction` (string) - The direction of the file transfer. This defaults to "upload." If it is set to "download" then the file "source" in the machine will be downloaded locally to "destination".
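
For example, a hedged sketch of using the "download" direction to pull a file off the machine after it has been provisioned; the paths here are placeholders:

```javascript
{
  "type": "file",
  "source": "/var/log/install.log",
  "destination": "output/install.log",
  "direction": "download"
}
```
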
## Directory Uploads

The file provisioner is also able to upload a complete directory to the remote machine. When uploading a directory, there are a few important things you should know.

First, the destination directory must already exist. If you need to create it, use a shell provisioner just prior to the file provisioner in order to create the directory.

Next, the existence of a trailing slash on the source path will determine whether the directory name will be embedded within the destination, or whether the destination will be created. An example explains this best:
If the source is `/foo` (no trailing slash), and the destination is `/tmp`, then the contents of `/foo` on the local machine will be uploaded to `/tmp/foo` on the remote machine. The `foo` directory on the remote machine will be created by Packer.

If the source, however, is `/foo/` (a trailing slash is present), and the destination is `/tmp`, then the contents of `/foo` will be uploaded into `/tmp` directly.
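
The two cases can be written as file provisioner blocks like the sketch below; the `/foo` and `/tmp` paths mirror the example above.

```javascript
{
  "provisioners": [
    // No trailing slash: creates /tmp/foo on the remote machine
    {
      "type": "file",
      "source": "/foo",
      "destination": "/tmp"
    },
    // Trailing slash: uploads the contents of /foo directly into /tmp
    {
      "type": "file",
      "source": "/foo/",
      "destination": "/tmp"
    }
  ]
}
```
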
This behavior was adopted from the standard behavior of rsync. Note that under

---
description: |-
    The shell Packer provisioner provisions machines built by Packer using shell scripts. Shell provisioning is the easiest way to get software installed and configured on a machine.
layout: docs
page_title: PowerShell Provisioner
...
# PowerShell Provisioner

Type: `powershell`

This provisioner assumes that the communicator in use is WinRM.

## Basic Example
The example below is fully functional.

```javascript
{
  "type": "powershell",
  "inline": ["dir c:\\"]
}
```

## Configuration Reference
The reference of available configuration options is listed below. The only required element is either "inline" or "script". Every other option is optional.

Exactly *one* of the following is required:
- `inline` (array of strings) - This is an array of commands to execute. The commands are concatenated by newlines and turned into a single file, so they are all executed within the same context. This allows you to change directories in one command and use something in the directory in the next and so on. Inline scripts are the easiest way to pull off simple tasks within the machine.

- `script` (string) - The path to a script to upload and execute in the machine. This path can be absolute or relative. If it is relative, it is relative to the working directory when Packer is executed.

- `scripts` (array of strings) - An array of scripts to execute. The scripts will be uploaded and executed in the order specified. Each script is executed in isolation, so state such as variables from one script won't carry on to the next.
Optional parameters:

- `binary` (boolean) - If true, specifies that the script(s) are binary files, and Packer should therefore not convert Windows line endings to Unix line endings (if there are any). By default this is false.

- `environment_vars` (array of strings) - An array of key/value pairs to inject prior to the `execute_command`. The format should be `key=value`. Packer injects some environmental variables by default into the environment, as well, which are covered in the section below.
- `execute_command` (string) - The command to use to execute the script. By default this is `powershell "& { {{.Vars}}{{.Path}}; exit $LastExitCode}"`. The value of this is treated as a [configuration template](/docs/templates/configuration-templates.html). There are two available variables: `Path`, which is the path to the script to run, and `Vars`, which is the list of `environment_vars`, if configured.

- `elevated_user` and `elevated_password` (string) - If specified, the PowerShell script will be run with elevated privileges using the given Windows user.
- `remote_path` (string) - The path where the script will be uploaded to in the machine. This defaults to "/tmp/script.sh". This value must be a writable location and any parent directories must already exist.

- `start_retry_timeout` (string) - The amount of time to attempt to *start* the remote process. By default this is "5m" or 5 minutes. This setting exists in order to deal with times when SSH may restart, such as a system reboot. Set this to a higher value if reboots take a longer amount of time.

- `valid_exit_codes` (list of ints) - Valid exit codes for the script.
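
A fuller provisioner block might combine several of these options, as in the sketch below; the script path, environment variables, and credentials are placeholders chosen for illustration.

```javascript
{
  "type": "powershell",
  "scripts": ["scripts/setup.ps1"],
  "environment_vars": ["APP_ENV=staging", "APP_PORT=8080"],
  "elevated_user": "Administrator",
  "elevated_password": "ExamplePassword123"
}
```
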

---
description: |-
    The masterless Puppet Packer provisioner configures Puppet to run on the machines built by Packer from local modules and manifest files. Modules and manifests can be uploaded from your local machine to the remote machine or can simply use remote paths (perhaps obtained using something like the shell provisioner). Puppet is run in masterless mode, meaning it never communicates to a Puppet master.
layout: docs
page_title: 'Puppet (Masterless) Provisioner'
...
# Puppet (Masterless) Provisioner

Type: `puppet-masterless`

The masterless Puppet Packer provisioner configures Puppet to run on the machines built by Packer from local modules and manifest files. Modules and manifests can be uploaded from your local machine to the remote machine or can simply use remote paths (perhaps obtained using something like the shell provisioner). Puppet is run in masterless mode, meaning it never communicates to a Puppet master.
-> **Note:** Puppet will *not* be installed automatically by this provisioner. This provisioner expects that Puppet is already installed on the machine. It is common practice to use the [shell provisioner](/docs/provisioners/shell.html) before the Puppet provisioner to do this.

## Basic Example

The example below is fully functional and expects the configured manifest file to exist relative to your working directory:
```javascript
{
  "type": "puppet-masterless",
  "manifest_file": "site.pp"
}
```

## Configuration Reference

The reference of available configuration options is listed below.

Required parameters:
- `manifest_file` (string) - This is either a path to a puppet manifest (`.pp` file) *or* a directory containing multiple manifests that puppet will apply (the ["main manifest"][1]). These file(s) must exist on your local system and

---
description: |-
    The shell Packer provisioner provisions machines built by Packer using shell scripts. Shell provisioning is the easiest way to get software installed and configured on a machine.
layout: docs
page_title: Shell Provisioner
...
# Shell Provisioner

Type: `shell`

The shell Packer provisioner provisions machines built by Packer using shell scripts. Shell provisioning is the easiest way to get software installed and configured on a machine.

-> **Building Windows images?** You probably want to use the [PowerShell](/docs/provisioners/powershell.html) or

---
description: |-
    All strings within templates are processed by a common Packer templating engine, where variables and functions can be used to modify the value of a configuration parameter at runtime.
layout: docs
page_title: Configuration Templates
...
# Configuration Templates

All strings within templates are processed by a common Packer templating engine, where variables and functions can be used to modify the value of a configuration parameter at runtime.

For example, the `{{timestamp}}` function can be used in any string to generate the current timestamp. This is useful for configurations that require unique keys, such as AMI names. By setting the AMI name to something like `My Packer AMI {{timestamp}}`, the AMI name will be unique down to the second.
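
For instance, an Amazon EC2 builder might set its AMI name this way; the fragment below is illustrative and omits the rest of the builder configuration.

```javascript
{
  "type": "amazon-ebs",
  "ami_name": "My Packer AMI {{timestamp}}"
}
```
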
In addition to globally available functions like timestamp shown before, some configurations have special local variables that are available only for that configuration. These are recognizable because they're prefixed by a period, such as `{{.Name}}`.

The complete syntax is covered in the next section, followed by a reference of globally available functions.
## Syntax

The syntax of templates is extremely simple. Anything template related happens within double-braces: `{{ }}`. Variables are prefixed with a period and capitalized, such as `{{.Variable}}`, and functions are just directly within the braces, such as `{{timestamp}}`.
Here is an example from the VMware VMX template that shows configuration templates in action:

```liquid
.encoding = "UTF-8"
displayName = "{{ .Name }}"
guestOS = "{{ .GuestOS }}"
...
```
In this case, the "Name" and "GuestOS" variables will be replaced, potentially resulting in a VMX that looks like this:

```liquid
.encoding = "UTF-8"
displayName = "packer"
guestOS = "otherlinux"
...
```
## Global Functions

While some configuration settings have local variables specific to only that configuration, a set of functions are available globally for use in *any string* in Packer templates. These are listed below for reference.
- `build_name` - The name of the build being run.

- `build_type` - The type of the builder being used currently.

- `isotime [FORMAT]` - UTC time, which can be [formatted](http://golang.org/pkg/time/#example_Time_Format). See more examples below.

- `lower` - Lowercases the string.

- `pwd` - The working directory while executing Packer.

- `template_dir` - The directory to the template for the build.

- `timestamp` - The current Unix timestamp in UTC.

- `uuid` - Returns a random UUID.

- `upper` - Uppercases the string.
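
As a hedged illustration of combining these functions in a single string (the surrounding builder configuration is omitted), note that the quotes inside the `isotime` format must be escaped within JSON:

```javascript
{
  "ami_name": "{{build_name}}-{{isotime \"2006-01-02\"}}-{{uuid}}"
}
```

The format string follows Go's reference date, which the next section describes.
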
### isotime Format

Formatting for the function `isotime` uses the magic reference date **Mon Jan 2 15:04:05 -0700 MST 2006**, which breaks down to the following:

---
description: |-
    Templates are JSON files that configure the various components of Packer in order to create one or more machine images. Templates are portable, static, and readable and writable by both humans and computers. This has the added benefit of being able to not only create and modify templates by hand, but also write scripts to dynamically create or modify templates.
layout: docs
page_title: Templates
...
# Templates

Templates are JSON files that configure the various components of Packer in order to create one or more machine images. Templates are portable, static, and readable and writable by both humans and computers. This has the added benefit of being able to not only create and modify templates by hand, but also write scripts to dynamically create or modify templates.

Templates are given to commands such as `packer build`, which will take the template and actually run the builds within it, producing any resulting machine images.
## Template Structure

A template is a JSON object that has a set of keys configuring various components of Packer. The available keys within a template are listed below. Along with each key, it is noted whether it is required or not.
- `builders` (*required*) is an array of one or more objects that defines the builders that will be used to create machine images for this template, and configures each of those builders. For more information on how to define and configure a builder, read the sub-section on [configuring builders in templates](/docs/templates/builders.html).
- `description` (optional) is a string providing a description of what the template does. This output is used only in the [inspect

---
description: |-
    The post-processor section within a template configures any post-processing that will be done to images built by the builders. Examples of post-processing would be compressing files, uploading artifacts, etc.
layout: docs
page_title: 'Templates: Post-Processors'
...
# Templates: Post-Processors

The post-processor section within a template configures any post-processing that will be done to images built by the builders. Examples of post-processing would be compressing files, uploading artifacts, etc.

Post-processors are *optional*. If no post-processors are defined within a template, then no post-processing will be done to the image. The resulting artifact of a build is just the image outputted by the builder.

This documentation page will cover how to configure a post-processor in a template. The specific configuration options available for each post-processor, however, must be referenced from the documentation for that specific post-processor.

Within a template, a section of post-processor definitions looks like this:
```javascript
{
  "post-processors": [
    // ... one or more post-processor definitions here
  ]
}
```
## Post-Processor Definition

Within the `post-processors` array in a template, there are three ways to define a post-processor. There are *simple* definitions, *detailed* definitions, and *sequence* definitions. Don't worry, they're all very easy to understand, and the "simple" and "detailed" definitions are simply shortcuts for the "sequence" definition.
A **simple definition** is just a string; the name of the post-processor. An example is shown below. Simple definitions are used when no additional configuration is needed for the post-processor.

```javascript
{
  "post-processors": ["compress"]
}
```
A **detailed definition** is a JSON object. It is very similar to a builder or provisioner definition. It contains a `type` field to denote the type of the post-processor, but may also contain additional configuration for the post-processor. A detailed definition is used when additional configuration is needed beyond simply the type for the post-processor. An example is shown below.
```javascript
{
  "post-processors": [
    {
      "type": "compress"
      // ... any additional configuration for this post-processor here
    }
  ]
}
```
A **sequence definition** is a JSON array comprised of other **simple** or
**detailed** definitions. The post-processors defined in the array are run in
order, with the artifact of each feeding into the next, and any intermediary
artifacts being discarded. A sequence definition may not contain another
sequence definition. Sequence definitions are used to chain together multiple
post-processors. An example is shown below, where the artifact of a build is
compressed then uploaded, but the compressed result is not kept.
```javascript
{
  "post-processors": [
    [
      "compress"
      // ... followed by a post-processor that uploads the compressed artifact
    ]
  ]
}
```
As you may be able to imagine, the **simple** and **detailed** definitions are
simply shortcuts for a **sequence** definition of only one element.
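To illustrate the equivalence (this snippet is illustrative and not taken from
the elided examples above), the simple definition `"compress"` shown earlier
behaves the same as a one-element sequence containing a one-key detailed
definition:

```javascript
{
  "post-processors": [
    [
      { "type": "compress" }
    ]
  ]
}
```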
## Input Artifacts

When using post-processors, the input artifact (coming from a builder or
another post-processor) is discarded by default after the post-processor runs.
This is because generally, you don't want the intermediary artifacts on the
way to the final artifact created.

In some cases, however, you may want to keep the intermediary artifacts. You
can tell Packer to keep these artifacts by setting the `keep_input_artifact`
configuration to `true`. An example is shown below:
```javascript
{
  "post-processors": [
    {
      "type": "compress",
      "keep_input_artifact": true
    }
  ]
}
```
This setting will only keep the input artifact to *that specific*
post-processor. If you're specifying a sequence of post-processors, then all
intermediaries are discarded by default except for the input artifacts to
post-processors that explicitly state to keep the input artifact.
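As a brief sketch of how this interacts with sequences (not part of the
original page; the `upload` type here is a hypothetical stand-in for whichever
post-processor you chain next), setting `keep_input_artifact` on the second
element preserves the compressed artifact it received, while the builder's
original artifact is still discarded:

```javascript
{
  "post-processors": [
    [
      "compress",
      {
        // "upload" is a hypothetical post-processor, used only for illustration
        "type": "upload",
        "keep_input_artifact": true
      }
    ]
  ]
}
```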
-> **Note:** The intuitive reader may be wondering what happens if multiple
post-processors are specified (not in a sequence). Does Packer require the
configuration to keep the input artifact on all the post-processors? The
answer is no, of course not. Packer is smart enough to figure out that at
least one post-processor requested that the input be kept, so it will keep it
around.
## Run on Specific Builds

You can use the `only` or `except` configurations to run a post-processor only
with specific builds. These two configurations do what you expect: `only` will
only run the post-processor on the specified builds and `except` will run the
post-processor on anything other than the specified builds.

An example of `only` being used is shown below, but the usage of `except` is
effectively the same. `only` and `except` can only be specified on "detailed"
configurations. If you have a sequence of post-processors to run, `only` and
`except` will only affect that single post-processor in the sequence.
```javascript
{
  "type": "vagrant",
  "only": ["virtualbox-iso"]
}
```
The values within `only` or `except` are *build names*, not builder types. If
you recall, build names by default are just their builder type, but if you
specify a custom `name` parameter, then you should use that as the value
instead of the type.
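As a rough sketch (not from the original page), assume a single
`virtualbox-iso` builder that has been given the custom name `my-vbox-build`;
the post-processor's `only` list must then reference that name rather than the
builder type:

```javascript
{
  "builders": [
    {
      "type": "virtualbox-iso",
      "name": "my-vbox-build"
      // ... the rest of the builder configuration goes here
    }
  ],
  "post-processors": [
    {
      "type": "vagrant",
      "only": ["my-vbox-build"]
    }
  ]
}
```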
---
description: |-
    Within the template, the provisioners section contains an array of all the
    provisioners that Packer should use to install and configure software
    within running machines prior to turning them into machine images.
layout: docs
page_title: 'Templates: Provisioners'
...
# Templates: Provisioners

Within the template, the provisioners section contains an array of all the
provisioners that Packer should use to install and configure software within
running machines prior to turning them into machine images.

Provisioners are *optional*. If no provisioners are defined within a template,
then no software other than the defaults will be installed within the
resulting machine images. This is not typical, however, since much of the
value of Packer is to produce multiple identical images of pre-configured
software.

This documentation page will cover how to configure a provisioner in a
template. The specific configuration options available for each provisioner,
however, must be referenced from the documentation for that specific
provisioner.

Within a template, a section of provisioner definitions looks like this:
```javascript
{
  "provisioners": [
    // ... one or more provisioner definitions here
  ]
}
```
For each of the definitions, Packer will run the provisioner for each of the
configured builds. The provisioners will be run in the order they are defined
within the template.
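As a small illustrative sketch (the script names are invented for this
example), two shell provisioners listed one after the other run top to bottom
on every configured build:

```javascript
{
  "provisioners": [
    {
      "type": "shell",
      "script": "install-dependencies.sh"
    },
    {
      "type": "shell",
      "script": "configure-application.sh"
    }
  ]
}
```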
## Provisioner Definition

A provisioner definition is a JSON object that must contain at least the
`type` key. This key specifies the name of the provisioner to use. Additional
keys within the object are used to configure the provisioner, with the
exception of a handful of special keys, covered later.

As an example, the "shell" provisioner requires a key such as `script` which
specifies a path to a shell script to execute within the machines being
created.

An example provisioner definition is shown below, configuring the shell
provisioner to run a local script within the machines:
```javascript
{
  "type": "shell",
  "script": "script.sh"
}
```
## Run on Specific Builds

You can use the `only` or `except` configurations to run a provisioner only
with specific builds. These two configurations do what you expect: `only` will
only run the provisioner on the specified builds and `except` will run the
provisioner on anything other than the specified builds.

An example of `only` being used is shown below, but the usage of `except` is
effectively the same:
```javascript
{
  "type": "shell",
  "script": "script.sh",
  "only": ["virtualbox-iso"]
}
```
The values within `only` or `except` are *build names*, not builder types. If
you recall, build names by default are just their builder type, but if you
specify a custom `name` parameter, then you should use that as the value
instead of the type.
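As a brief sketch (not part of the original page; `my-ec2-build` is a
hypothetical custom build name), `except` works the same way but inverts the
selection, running the provisioner on every build other than the ones listed:

```javascript
{
  "type": "shell",
  "script": "script.sh",
  "except": ["my-ec2-build"]
}
```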
## Build-Specific Overrides

While the goal of Packer is to produce identical machine images, it sometimes
requires periods of time where the machines are different before they
eventually converge to be identical. In these cases, different configurations
for provisioners may be necessary depending on the build. This can be done
using build-specific overrides.

An example of where this might be necessary is when building both an EC2 AMI
and a VMware machine. The source EC2 AMI may set up a user with administrative
privileges by default, whereas the VMware machine doesn't have these
privileges. In this case, the shell script may need to be executed
differently. Of course, the goal is that hopefully the shell script converges
these two images to be identical. However, they may initially need to be run
differently.

This example is shown below:
```javascript
{
  "type": "shell",
  "script": "script.sh",
  "override": {
    // ... per-build provisioner configuration here, keyed by build name
  }
}
```
As you can see, the `override` key is used. The value of this key is another
JSON object where the key is the name of a [builder
definition](/docs/templates/builders.html). The value of this is in turn
another JSON object. This JSON object simply contains the provisioner
configuration as normal. This configuration is merged into the default
provisioner configuration.
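To make the merge concrete, here is a minimal sketch (not from the original
docs; `my-vmware-build` is a hypothetical build name): every build runs
`script.sh`, except the `my-vmware-build` build, whose merged configuration
swaps in a different script while inheriting everything else:

```javascript
{
  "type": "shell",
  "script": "script.sh",
  "override": {
    "my-vmware-build": {
      "script": "script-with-sudo.sh"
    }
  }
}
```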
## Pausing Before Running

With certain provisioners it is sometimes desirable to pause for some period
of time before running it. Specifically, in cases where a provisioner reboots
the machine, you may want to wait for some period of time before starting the
next provisioner.

Every provisioner definition in a Packer template can take a special
configuration `pause_before` that is the amount of time to pause before
running that provisioner. By default, there is no pause. An example is shown
below:
```javascript
{
  "type": "shell",
  "script": "script.sh",
  "pause_before": "10s"
}
```
For the above provisioner, Packer will wait 10 seconds before uploading and
running the shell script.
---
description: |-
    User variables allow your templates to be further configured with variables
    from the command-line, environmental variables, or files. This lets you
    parameterize your templates so that you can keep secret tokens,
    environment-specific data, and other types of information out of your
    templates. This maximizes the portability and shareability of the template.
layout: docs
page_title: User Variables in Templates
...
# User Variables

User variables allow your templates to be further configured with variables
from the command-line, environmental variables, or files. This lets you
parameterize your templates so that you can keep secret tokens,
environment-specific data, and other types of information out of your
templates. This maximizes the portability and shareability of the template.
Using user variables expects you know how [configuration
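As a rough sketch of the idea (assembled here for illustration rather than
taken from the original page), a template declares a variable with a default
value and then references it with the `user` template function:

```javascript
{
  "variables": {
    "aws_access_key": ""
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "{{user `aws_access_key`}}"
      // ... the rest of the builder configuration goes here
    }
  ]
}
```

A value can then be supplied at build time on the command line with
`-var 'aws_access_key=...'`, as the getting started guide later in this
document does, or read from the environment with the `env` function.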
---
description: |-
    If you are or were a user of Veewee, then there is an official tool called
    veewee-to-packer that will convert your Veewee definition into an
    equivalent Packer template. Even if you're not a Veewee user, Veewee has a
    large library of templates that can be readily used with Packer by simply
    converting them.
layout: docs
page_title: Convert Veewee Definitions to Packer Templates
...
# Veewee-to-Packer

If you are or were a user of [Veewee](https://github.com/jedi4ever/veewee),
then there is an official tool called
[veewee-to-packer](https://github.com/mitchellh/veewee-to-packer) that will
convert your Veewee definition into an equivalent Packer template. Even if
you're not a Veewee user, Veewee has a large library of templates that can be
readily used with Packer by simply converting them.
---
description: |-
    With Packer installed, let's just dive right into it and build our first
    image. Our first image will be an Amazon EC2 AMI with Redis pre-installed.
    This is just an example. Packer can create images for many platforms with
    anything pre-installed.
layout: intro
next_title: Provision
next_url: '/intro/getting-started/provision.html'
page_title: Build an Image
prev_url: '/intro/getting-started/setup.html'
...
# Build an Image

With Packer installed, let's just dive right into it and build our first
image. Our first image will be an [Amazon EC2 AMI](http://aws.amazon.com/ec2/)
with Redis pre-installed. This is just an example. Packer can create images
for [many platforms](/intro/platforms.html) with anything pre-installed.

If you don't have an AWS account, [create one now](http://aws.amazon.com/free/).
For the example, we'll use a "t2.micro" instance to build our image, which
qualifies under the AWS [free-tier](http://aws.amazon.com/free/), meaning it
will be free. If you already have an AWS account, you may be charged some
amount of money, but it shouldn't be more than a few cents.

-> **Note:** If you're not using an account that qualifies under the AWS
free-tier, you may be charged to run these examples. The charge should only be
a few cents, but we're not responsible if it ends up being more.

Packer can build images for [many platforms](/intro/platforms.html) other than
AWS, but AWS requires no additional software installed on your computer and
## The Template

The configuration file used to define what image we want built and how is
called a *template* in Packer terminology. The format of a template is simple
[JSON](http://www.json.org/). JSON struck the best balance between
human-editable and machine-editable, allowing both hand-made templates as well
as machine generated templates to easily be made.

We'll start by creating the entire template, then we'll go over each section
briefly. Create a file `example.json` and fill it with the following contents:
```javascript
{
  "variables": {
    "aws_access_key": "",
    "aws_secret_key": ""
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "access_key": "{{user `aws_access_key`}}",
      "secret_key": "{{user `aws_secret_key`}}"
      // ... the remaining builder settings (source AMI, region, instance
      // type, AMI name, etc.) are elided here
    }
  ]
}
```
When building, you'll pass in the `aws_access_key` and `aws_secret_key` as a
[user variable](/docs/templates/user-variables.html), keeping your secret keys
out of the template. You can create security credentials on [this
page](https://console.aws.amazon.com/iam/home?#security_credential). An
example IAM policy document can be found in the [Amazon EC2 builder
docs](/docs/builders/amazon.html).
This is a basic template that is ready-to-go. It should be immediately
recognizable as a normal, basic JSON object. Within the object, the `builders`
section contains an array of JSON objects configuring a specific *builder*. A
builder is a component of Packer that is responsible for creating a machine
and turning that machine into an image.

In this case, we're only configuring a single builder of type `amazon-ebs`.
This is the Amazon EC2 AMI builder that ships with Packer. This builder builds
an EBS-backed AMI by launching a source AMI, provisioning on top of that, and
re-packaging it into a new AMI.

The additional keys within the object are configuration for this builder,
specifying things such as access keys, the source AMI to build from, and more.
The exact set of configuration variables available for a builder are specific
to each builder and can be found within the [documentation](/docs).
Before we take this template and build an image from it, let's validate the
template by running `packer validate example.json`. This command checks the
syntax as well as the configuration values to verify they look valid. The
output should look similar to below, because the template should be valid. If
there are any errors, this command will tell you.
```text
$ packer validate example.json
Template validated successfully.
```
Next, let's build the image from this template.

An astute reader may notice that we said earlier we'd be building an image
with Redis pre-installed, and yet the template we made doesn't reference Redis
anywhere. In fact, this part of the documentation will only cover making a
first basic, non-provisioned image. The next section on provisioning will
cover installing Redis.

## Your First Image

With a properly validated template, it is time to build your first image. This
is done by calling `packer build` with the template file. The output should
look similar to below. Note that this process typically takes a few minutes.
```text
$ packer build \
    -var 'aws_access_key=YOUR ACCESS KEY' \
    -var 'aws_secret_key=YOUR SECRET KEY' \
    example.json
...
us-east-1: ami-19601070
```
At the end of running `packer build`, Packer outputs the *artifacts* that were
created as part of the build. Artifacts are the results of a build, and
typically represent an ID (such as in the case of an AMI) or a set of files
(such as for a VMware virtual machine). In this example, we only have a single
artifact: the AMI in us-east-1 that was created.

This AMI is ready to use. If you wanted, you could go and launch this AMI
right now and it would work great.

-> **Note:** Your AMI ID will surely be different than the one above. If you
try to launch the one in the example output above, you will get an error. If
you want to try to launch your AMI, get the ID from the Packer output.
## Managing the Image

Packer only builds images. It does not attempt to manage them in any way.
After they're built, it is up to you to launch or destroy them as you see fit.
If you want to store and namespace images for easy reference, you can use
[Atlas by HashiCorp](https://atlas.hashicorp.com). We'll cover remotely
building and storing images at the end of this getting started guide.

After running the above example, your AWS account now has an AMI associated
with it. AMIs are stored in S3 by Amazon, so unless you want to be charged
about $0.01 per month, you'll probably want to remove it. Remove the AMI by
first deregistering it on the [AWS AMI management
page](https://console.aws.amazon.com/ec2/home?region=us-east-1#s=Images).
---
description: |-
    That concludes the getting started guide for Packer. You should now be
    comfortable with basic Packer usage, should understand templates, defining
    builds, provisioners, etc. At this point you're ready to begin playing with
    and using Packer in real scenarios.
layout: intro
page_title: Next Steps
...
# Next Steps

That concludes the getting started guide for Packer. You should now be
comfortable with basic Packer usage, should understand templates, defining
builds, provisioners, etc. At this point you're ready to begin playing with
and using Packer in real scenarios.

From this point forward, the most important reference for you will be the
[documentation](/docs). The documentation is less of a guide and more of a
reference of all the overall features and options of Packer.

If you're interested in learning more about how Packer fits into the HashiCorp
ecosystem of tools, read our [Atlas getting started
overview](https://atlas.hashicorp.com/help/intro/getting-started).
---
description: |-
    So far we've shown how Packer can automatically build an image and
    provision it. This on its own is already quite powerful. But Packer can do
    better than that. Packer can create multiple images for multiple platforms
    in parallel, all configured from a single template.
layout: intro
next_title: Vagrant Boxes
next_url: '/intro/getting-started/vagrant.html'
page_title: Parallel Builds
prev_url: '/intro/getting-started/provision.html'
...
# Parallel Builds

So far we've shown how Packer can automatically build an image and provision
it. This on its own is already quite powerful. But Packer can do better than
that. Packer can create multiple images for multiple platforms *in parallel*,
all configured from a single template.

This is a very useful and important feature of Packer. As an example, Packer
is able to make an AMI and a VMware virtual machine in parallel provisioned
with the *same scripts*, resulting in near-identical images. The AMI can be
used for production, the VMware machine can be used for development. Or,
another example, if you're using Packer to build
---
description: |-
    In the previous page of this guide, you created your first image with
    Packer. The image you just built, however, was basically just a repackaging
    of a previously existing base AMI. The real utility of Packer comes from
    being able to install and configure software into the images as well. This
    stage is also known as the *provision* step. Packer fully supports
    automated provisioning in order to install software onto the machines prior
    to turning them into images.
next_title: Parallel Builds
...
---
description: |-
    Up to this point in the guide, you have been running Packer on your local
    machine to build and provision images on AWS and DigitalOcean. However, you
    can use Atlas by HashiCorp to both run Packer builds remotely and store the
    output of builds.
layout: intro
next_title: Next Steps
next_url: '/intro/getting-started/next.html'
page_title: Remote Builds and Storage
prev_url: '/intro/getting-started/vagrant.html'
...
# Remote Builds and Storage

Up to this point in the guide, you have been running Packer on your local
machine to build and provision images on AWS and DigitalOcean. However, you
can use [Atlas by HashiCorp](https://atlas.hashicorp.com) to run Packer builds
remotely and store the output of builds.

## Why Build Remotely?

By building remotely, you can move access credentials off of developer
machines, release local machines from long-running Packer processes, and
automatically start Packer builds from trigger sources such as `vagrant push`,
a version control system, or CI tool.

## Run Packer Builds Remotely

To run Packer remotely, there are two changes that must be made to the Packer
template. The first is the addition of the `push`
[configuration](https://www.packer.io/docs/templates/push.html), which sends
the Packer template to Atlas so it can run Packer remotely. The second
modification is updating the variables section to read variables from the
Atlas environment rather than the local environment. Remove the
`post-processors` section for now if it is still in your template.
```javascript
{
  "variables": {
    "aws_access_key": "{{env `aws_access_key`}}",
    "aws_secret_key": "{{env `aws_secret_key`}}"
  },
  "builders": ["..."],
  "provisioners": ["..."],
  "push": {
    "name": "ATLAS_USERNAME/packer-tutorial"
  }
}
```
To get an Atlas username, [create an account
here](https://atlas.hashicorp.com/account/new?utm_source=oss&utm_medium=getting-started&utm_campaign=packer).
Replace "ATLAS_USERNAME" with your username, then run
`packer push -create example.json` to send the configuration to Atlas, which
automatically starts the build.

This build will fail since neither `aws_access_key` nor `aws_secret_key` are
set in the Atlas environment. To set environment variables in Atlas, navigate
to the [operations tab](https://atlas.hashicorp.com/operations), click the
"packer-tutorial" build configuration that was just created, and then click
'variables' in the left navigation. Set `aws_access_key` and `aws_secret_key`
with their respective values. Now restart the Packer build by either clicking
'rebuild' in the Atlas UI or by running `packer push example.json` again. Now
when you click on the active build, you can view the logs in real-time.

-> **Note:** Whenever a change is made to the Packer template, you must
`packer push` to update the configuration in Atlas.
## Store Packer Outputs

Now we have Atlas building an AMI with Redis pre-configured. This is great,
but it's even better to store and version the AMI output so it can be easily
deployed by a tool like [Terraform](https://terraform.io). The `atlas`
[post-processor](/docs/post-processors/atlas.html) makes this process simple:

```javascript
{
  "variables": ["..."],
  "builders": ["..."],
  "post-processors": [
    {
      "type": "atlas"
      // ... the atlas post-processor configuration (elided here) goes in
      // this detailed definition
    }
  ]
}
```

Update the `post-processors` block with your Atlas username, then
`packer push example.json` and watch the build kick off in Atlas! When the
build completes, the resulting artifact will be saved and stored in Atlas.
---
description: |-
    Packer must first be installed on the machine you want to run it on. To
    make installation easy, Packer is distributed as a binary package for all
    supported platforms and architectures. This page will not cover how to
    compile Packer from source, as that is covered in the README and is only
    recommended for advanced users.
next_title: Build an Image
...
Packer also has the ability to take the results of a builder (such as an AMI
or plain VMware image) and turn it into a [Vagrant](http://www.vagrantup.com)
box.

This is done using [post-processors](/docs/templates/post-processors.html).
These take an artifact created by a previous builder or post-processor and
transform it into a new one. In the case of the Vagrant post-processor, it
takes an artifact from a builder and transforms it into a Vagrant box file.

Post-processors are a generally very useful concept. While the example on this
getting-started page will be creating Vagrant images, post-processors have
many interesting use cases. For example, you can write a post-processor to
compress artifacts, upload them, test them, etc.

Let's modify our template to use the Vagrant post-processor to turn our AWS
AMI into a Vagrant box usable with the [vagrant-aws
plugin](https://github.com/mitchellh/vagrant-aws). If you followed along in
the previous page and set up DigitalOcean, Packer can't currently make Vagrant
boxes for DigitalOcean, but will be able to soon.
## Enabling the Post-Processor

Post-processors are added in the `post-processors` section of a template,
which we haven't created yet. Modify your `example.json` template and add the
section. Your template should look like the following:
```javascript
{
  "builders": ["..."],
  "provisioners": ["..."],
  "post-processors": ["vagrant"]
}
```
In this case, we're enabling a single post-processor named "vagrant". This
post-processor is built-in to Packer and will create Vagrant boxes. You can
always create [new post-processors](/docs/extend/post-processor.html),
however.
---
description: |-
    Learn how Packer fits in with the rest of the HashiCorp ecosystem of tools
layout: intro
next_title: 'Getting Started: Install Packer'
next_url: '/intro/getting-started/setup.html'
page_title: Packer and the HashiCorp Ecosystem
prev_url: '/intro/platforms.html'
...

# Packer and the HashiCorp Ecosystem
HashiCorp is the creator of the open source projects Vagrant, Packer,
Terraform, Serf, and Consul, and the commercial product Atlas. Packer is just
one piece of the ecosystem HashiCorp has built to make application delivery a
versioned, auditable, repeatable, and collaborative process. To learn more
about our beliefs on the qualities of the modern datacenter and responsible
application delivery, read [The Atlas Mindset: Version Control for
Infrastructure](https://hashicorp.com/blog/atlas-mindset.html/?utm_source=packer&utm_campaign=HashicorpEcosystem).

If you are using Packer to build machine images and deployable artifacts,
it's likely that you need a solution for deploying those artifacts. Terraform
is our tool for creating, combining, and modifying infrastructure.

Below are summaries of HashiCorp's open source projects and a graphic showing
how Atlas connects them to create a full application delivery workflow.

# HashiCorp Ecosystem

![Atlas Workflow](docs/atlas-workflow.png)
[Atlas](https://atlas.hashicorp.com/?utm_source=packer&utm_campaign=HashicorpEcosystem) is HashiCorp's only commercial product. It unites Packer, Terraform, and Consul to make application delivery a versioned, auditable, repeatable, and collaborative process.

[Packer](https://packer.io/?utm_source=packer&utm_campaign=HashicorpEcosystem) is a HashiCorp tool for creating machine images and deployable artifacts such as AMIs, OpenStack images, Docker containers, etc.

[Terraform](https://terraform.io/?utm_source=packer&utm_campaign=HashicorpEcosystem) is a HashiCorp tool for creating, combining, and modifying infrastructure. In the Atlas workflow Terraform reads from the artifact registry and provisions infrastructure.

[Consul](https://consul.io/?utm_source=packer&utm_campaign=HashicorpEcosystem) is a HashiCorp tool for service discovery, service registry, and health checks. In the Atlas workflow Consul is configured at the Packer build stage and identifies the service(s) contained in each artifact. Since Consul is configured at the build phase with Packer, when the artifact is deployed with Terraform, it is fully configured with dependencies and service discovery pre-baked. This greatly reduces the risk of an unhealthy node in production due to configuration failure at runtime.

[Serf](https://serfdom.io/?utm_source=packer&utm_campaign=HashicorpEcosystem) is a HashiCorp tool for cluster membership and failure detection. Consul uses Serf's gossip protocol as the foundation for service discovery.

[Vagrant](https://www.vagrantup.com/?utm_source=packer&utm_campaign=HashicorpEcosystem) is a HashiCorp tool for managing development environments that mirror production. Vagrant environments reduce the friction of developing a project and reduce the risk of unexpected behavior appearing after deployment. Vagrant boxes can be built in parallel with production artifacts with Packer to maintain parity between development and production.
---
description: |-
    Welcome to the world of Packer! This introduction guide will show you what
    Packer is, explain why it exists, the benefits it has to offer, and how you
    can get started with it. If you're already familiar with Packer, the
    documentation provides more of a reference for all available features.
layout: intro
next_title: 'Why Use Packer?'
next_url: '/intro/why.html'
page_title: Introduction
prev_url: '# '
...
# Introduction to Packer

Welcome to the world of Packer! This introduction guide will show you what
Packer is, explain why it exists, the benefits it has to offer, and how you
can get started with it. If you're already familiar with Packer, the
[documentation](/docs) provides more of a reference for all available
features.

## What is Packer?

Packer is an open source tool for creating identical machine images for
multiple platforms from a single source configuration. Packer is lightweight,
runs on every major operating system, and is highly performant, creating
machine images for multiple platforms in parallel. Packer does not replace
configuration management like Chef or Puppet. In fact, when building images,
Packer is able to use tools like Chef or Puppet to install software onto the
image.

A *machine image* is a single static unit that contains a pre-configured
operating system and installed software which is used to quickly create new
running machines. Machine image formats change for each platform. Some
examples include [AMIs](http://en.wikipedia.org/wiki/Amazon_Machine_Image) for
EC2, VMDK/VMX files for VMware, OVF exports for VirtualBox, etc.
---
description: |-
    Packer can create machine images for any platform. Packer ships with
    support for a set of platforms, but can be extended through plugins to
    support any platform. This page documents the list of supported image types
    that Packer supports creating.
layout: intro
next_title: 'Packer & the HashiCorp Ecosystem'
next_url: '/intro/hashicorp-ecosystem.html'
page_title: Supported Platforms
prev_url: '/intro/use-cases.html'
...
# Supported Platforms
# Supported Platforms
Packer can create machine images for any platform. Packer ships with
Packer can create machine images for any platform. Packer ships with support for
support for a set of platforms, but can be [extended through plugins](/docs/extend/builder.html)
a set of platforms, but can be [extended through
to support any platform. This page documents the list of supported image
plugins](/docs/extend/builder.html) to support any platform. This page documents
types that Packer supports creating.
the list of supported image types that Packer supports creating.
If you were looking to see what platforms Packer is able to run on, see
If you were looking to see what platforms Packer is able to run on, see the page
the page on [installing Packer](/intro/getting-started/setup.html).
on [installing Packer](/intro/getting-started/setup.html).

-> **Note:** We're always looking to officially support more target platforms.
If you're interested in adding support for another platform, please help by
opening an issue or pull request within
[GitHub](https://github.com/mitchellh/packer) so we can discuss how to make it
happen.

Packer supports creating images for the following platforms or targets. The
format of the resulting image and any high-level information about the platform
is noted. They are listed in alphabetical order. For more detailed information
on supported configuration parameters and usage, please see the appropriate
page in the [documentation section](/docs).

- ***Amazon EC2 (AMI)***. Both EBS-backed and instance-store AMIs within
  [EC2](http://aws.amazon.com/ec2/), optionally distributed to multiple regions.

- ***DigitalOcean***. Snapshots for [DigitalOcean](http://www.digitalocean.com/)
  that can be used to start a pre-configured DigitalOcean instance of any size.

- ***Docker***. Snapshots for [Docker](http://www.docker.io/) that can be used
  to start a pre-configured Docker instance.

- ***Google Compute Engine***. Snapshots for
  [Google Compute Engine](https://cloud.google.com/products/compute-engine) that
  can be used to start a pre-configured Google Compute Engine instance.

- ***OpenStack***. Images for [OpenStack](http://www.openstack.org/) that can be
  used to start pre-configured OpenStack servers.

- ***Parallels (PVM)***. Exported virtual machines for
  [Parallels](http://www.parallels.com/downloads/desktop/), including virtual
  machine metadata such as RAM, CPUs, etc. These virtual machines are portable
  and can be started on any platform Parallels runs on.

- ***QEMU***. Images for [KVM](http://www.linux-kvm.org/) or
  [Xen](http://www.xenproject.org/) that can be used to start pre-configured KVM
  or Xen instances.

- ***VirtualBox (OVF)***. Exported virtual machines for
  [VirtualBox](https://www.virtualbox.org/), including virtual machine metadata
  such as RAM, CPUs, etc. These virtual machines are portable and can be started
  on any platform VirtualBox runs on.

- ***VMware (VMX)***. Exported virtual machines for
  [VMware](http://www.vmware.com/) that can be run within desktop products such
  as Fusion, Player, or Workstation, as well as server products such as vSphere.

As previously mentioned, these are just the target image types that Packer
ships with out of the box. You can always
[extend Packer through plugins](/docs/extend/builder.html) to support more.
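
For reference, each platform above corresponds to a builder `type` in a
template's `builders` list. The sketch below only names the typical type
identifiers and omits every other required field (ISO URLs, credentials, source
images, and so on), so it is a naming aid rather than a buildable template;
check the [documentation section](/docs) for the authoritative names and
options, including variants such as `amazon-instance`, `virtualbox-ovf`, and
`vmware-vmx`.

```json
{
  "builders": [
    { "type": "amazon-ebs" },
    { "type": "digitalocean" },
    { "type": "docker" },
    { "type": "googlecompute" },
    { "type": "openstack" },
    { "type": "parallels-iso" },
    { "type": "qemu" },
    { "type": "virtualbox-iso" },
    { "type": "vmware-iso" }
  ]
}
```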

---
description: |-
    By now you should know what Packer does and what the benefits of image creation are. In this section, we'll enumerate *some* of the use cases for Packer. Note that this is not an exhaustive list by any means. There are definitely use cases for Packer not listed here. This list is just meant to give you an idea of how Packer may improve your processes.
layout: intro
next_title: Supported Platforms
next_url: '/intro/platforms.html'
page_title: Use Cases
prev_url: '/intro/why.html'
...

# Use Cases

By now you should know what Packer does and what the benefits of image creation
are. In this section, we'll enumerate *some* of the use cases for Packer. Note
that this is not an exhaustive list by any means. There are definitely use
cases for Packer not listed here. This list is just meant to give you an idea
of how Packer may improve your processes.

### Continuous Delivery

In a continuous delivery pipeline, Packer can be used to generate new machine
images for multiple platforms on every change to Chef/Puppet.

As part of this pipeline, the newly created images can then be launched and
tested, verifying that the infrastructure changes work. If the tests pass, you
can be confident that the image will work when deployed. This brings a new
level of stability and testability to infrastructure changes.
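
As a sketch of how this might look, the pipeline could rebuild images from a
template whose Chef provisioner points at the cookbooks kept under version
control. The fragment below shows only the `provisioners` portion of a larger
template, and the cookbook path and run list are hypothetical:

```json
{
  "provisioners": [
    {
      "type": "chef-solo",
      "cookbook_paths": ["cookbooks"],
      "run_list": ["recipe[webapp]"]
    }
  ]
}
```

A CI job that runs `packer build` whenever the cookbooks change would then emit
freshly provisioned images for every builder configured in the template.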

### Dev/Prod Parity

Packer helps [keep development, staging, and production as similar as
possible](http://www.12factor.net/dev-prod-parity). Packer can be used to
generate images for multiple platforms at the same time. So if you use AWS for
production and VMware (perhaps with [Vagrant](http://www.vagrantup.com)) for
development, you can generate both an AMI and a VMware machine using Packer at
the same time from the same template.

Mix this in with the continuous delivery use case above, and you have a pretty
slick system for consistent work environments from development all the way
through to production.
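
For the development side of that workflow, one option is to attach Packer's
Vagrant post-processor so the VirtualBox or VMware artifact is also packaged as
a Vagrant box. The fragment below shows only that stanza, with a hypothetical
output name:

```json
{
  "post-processors": [
    {
      "type": "vagrant",
      "output": "webapp_{{.Provider}}.box"
    }
  ]
}
```

Developers can then add the resulting box with `vagrant box add`, while the
same template's cloud builders produce the images used in production.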

### Appliance/Demo Creation

Since Packer creates consistent images for multiple platforms in parallel, it
is perfect for creating
[appliances](http://en.wikipedia.org/wiki/Software_appliance) and disposable
product demos. As your software changes, you can automatically create
appliances with the software pre-installed. Potential users can then get
started with your software by deploying it to the environment of their choice.

Packaging up software with complex requirements has never been so easy.

---
description: |-
    Pre-baked machine images have a lot of advantages, but most teams have been unable to benefit from them because images have been too tedious to create and manage. There were either no existing tools to automate the creation of machine images, or they had too high of a learning curve. The result is that, prior to Packer, creating machine images threatened the agility of operations teams, and machine images therefore weren't widely used, despite the massive benefits.
layout: intro
next_title: Packer Use Cases
next_url: '/intro/use-cases.html'
page_title: 'Why Use Packer?'
prev_url: '/intro/index.html'
...

# Why Use Packer?

Pre-baked machine images have a lot of advantages, but most teams have been
unable to benefit from them because images have been too tedious to create and
manage. There were either no existing tools to automate the creation of machine
images, or they had too high of a learning curve. The result is that, prior to
Packer, creating machine images threatened the agility of operations teams, and
machine images therefore weren't widely used, despite the massive benefits.

Packer changes all of this. Packer is easy to use and automates the creation of
any type of machine image. It embraces modern configuration management by
encouraging you to use a framework such as Chef or Puppet to install and
configure the software within your Packer-made images.

Packer brings pre-baked machine images into the modern age, unlocking untapped
potential and opening new opportunities.

## Advantages of Using Packer

***Super fast infrastructure deployment***. Packer images allow you to launch
completely provisioned and configured machines in seconds, rather than several
minutes or hours. This benefits not only production, but development as well,
since development virtual machines can also be launched in seconds, without
waiting for a typically much longer provisioning time.

***Multi-provider portability***. Because Packer creates identical images for
multiple platforms, you can run production in AWS, staging/QA in a private
cloud like OpenStack, and development in desktop virtualization solutions such
as VMware or VirtualBox. Each environment is running an identical machine
image, giving ultimate portability.

***Improved stability***. Packer installs and configures all the software for a
machine at the time the image is built. If there are bugs in these scripts,
they'll be caught early, rather than several minutes after a machine is
launched.

***Greater testability***. After a machine image is built, that machine image
can be quickly launched and smoke tested to verify that things appear to be
working. If they are, you can be confident that any other machines launched
from that image will function properly.

Packer makes it extremely easy to take advantage of all these benefits.