Commit 27063b23 authored by ktruong7

Merge branch 'master' of github.com:mitchellh/packer

parents 95fe4ad8 30ba7307
@@ -4,3 +4,5 @@
/src
/website/.sass-cache
/website/build
.vagrant
Vagrantfile
## 0.4.1 (unreleased)
BUG FIXES:
* core: Don't change background color on CLI anymore, making things look
a tad nicer in some terminals.
## 0.4.0 (November 19, 2013)
FEATURES:
* Docker builder: build and export Docker containers, easily provisioned
with any of the Packer built-in provisioners.
* QEMU builder: builds a new VM compatible with KVM or Xen using QEMU.
* Remote ESXi builder: builds a VMware VM using ESXi remotely using only
SSH to an ESXi machine directly.
* vSphere post-processor: Can upload VMware artifacts to vSphere
* Vagrant post-processor can now make DigitalOcean provider boxes. [GH-504]
IMPROVEMENTS:
@@ -13,12 +23,22 @@ IMPROVEMENTS:
IDs to apply. [GH-499]
* builder/amazon/all: AWS API requests are now retried when a temporary
network error occurs as well as 500 errors. [GH-559]
* builder/virtualbox: Use VBOX_INSTALL_PATH env var on Windows to find
VBoxManage. [GH-628]
* post-processor/vagrant: skips gzip compression when compression_level=0
* provisioner/chef-solo: Encrypted data bag support [GH-625]
BUG FIXES:
* builder/amazon/chroot: Copying empty directories works. [GH-588]
* builder/amazon/chroot: Chroot commands work with shell provisioners. [GH-581]
* builder/amazon/chroot: Don't choose a mount point that is a partition of
an already mounted device. [GH-635]
* builder/virtualbox: Ctrl-C interrupts during waiting for boot. [GH-618]
* builder/vmware: VMX modifications are now case-insensitive. [GH-608]
* builder/vmware: VMware Fusion won't ask for VM upgrade.
* builder/vmware: Ctrl-C interrupts during waiting for boot. [GH-618]
* provisioner/chef-solo: Output is slightly prettier and more informative.
## 0.3.11 (November 4, 2013)
...
@@ -7,7 +7,7 @@ DEPS = $(go list -f '{{range .TestImports}}{{.}} {{end}}' ./...)
all: deps
@mkdir -p bin/
@echo "$(OK_COLOR)==> Building$(NO_COLOR)"
@bash --norc -i ./scripts/devcompile.sh
deps:
@echo "$(OK_COLOR)==> Installing dependencies$(NO_COLOR)"
...
@@ -74,8 +74,15 @@ installed (version 1.1+ is _required_). Make sure you have Go properly installed
including setting up your [GOPATH](http://golang.org/doc/code.html#GOPATH).
For some additional dependencies, Go needs [Mercurial](http://mercurial.selenic.com/)
and [Bazaar](http://bazaar.canonical.com/en/) to be installed.
Packer itself doesn't require these, but a dependency of a dependency does.
You'll also need [`gox`](https://github.com/mitchellh/gox)
to compile Packer. You can install that with:
```
$ go get -u github.com/mitchellh/gox
```
Next, clone this repository into `$GOPATH/src/github.com/mitchellh/packer` and
then just type `make`. In a few moments, you'll have a working `packer` executable:
...
@@ -19,6 +19,14 @@ func AvailableDevice() (string, error) {
letters := "fghijklmnop"
for _, letter := range letters {
device := fmt.Sprintf("/dev/%s%c", prefix, letter)
// If the block device itself, i.e. /dev/sf, exists, then we
// can't use any of the numbers either.
if _, err := os.Stat(device); err == nil {
continue
}
for i := 1; i < 16; i++ {
device := fmt.Sprintf("/dev/%s%c%d", prefix, letter, i)
if _, err := os.Stat(device); err != nil {
...
package docker
import (
"fmt"
"os"
)
// ExportArtifact is an Artifact implementation for when a container is
// exported from docker into a single flat file.
type ExportArtifact struct {
path string
}
func (*ExportArtifact) BuilderId() string {
return BuilderId
}
func (a *ExportArtifact) Files() []string {
return []string{a.path}
}
func (*ExportArtifact) Id() string {
return "Container"
}
func (a *ExportArtifact) String() string {
return fmt.Sprintf("Exported Docker file: %s", a.path)
}
func (a *ExportArtifact) Destroy() error {
return os.Remove(a.path)
}
package docker
import (
"github.com/mitchellh/packer/packer"
"testing"
)
func TestExportArtifact_impl(t *testing.T) {
var _ packer.Artifact = new(ExportArtifact)
}
package docker
import (
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/common"
"github.com/mitchellh/packer/packer"
"log"
)
const BuilderId = "packer.docker"
type Builder struct {
config *Config
runner multistep.Runner
}
func (b *Builder) Prepare(raws ...interface{}) ([]string, error) {
c, warnings, errs := NewConfig(raws...)
if errs != nil {
return warnings, errs
}
b.config = c
return warnings, nil
}
func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packer.Artifact, error) {
driver := &DockerDriver{Ui: ui}
if err := driver.Verify(); err != nil {
return nil, err
}
steps := []multistep.Step{
&StepTempDir{},
&StepPull{},
&StepRun{},
&StepProvision{},
&StepExport{},
}
// Setup the state bag and initial state for the steps
state := new(multistep.BasicStateBag)
state.Put("config", b.config)
state.Put("hook", hook)
state.Put("ui", ui)
// Setup the driver that will talk to Docker
state.Put("driver", driver)
// Run!
if b.config.PackerDebug {
b.runner = &multistep.DebugRunner{
Steps: steps,
PauseFn: common.MultistepDebugFn(ui),
}
} else {
b.runner = &multistep.BasicRunner{Steps: steps}
}
b.runner.Run(state)
// If there was an error, return that
if rawErr, ok := state.GetOk("error"); ok {
return nil, rawErr.(error)
}
// No errors, must've worked
artifact := &ExportArtifact{path: b.config.ExportPath}
return artifact, nil
}
func (b *Builder) Cancel() {
if b.runner != nil {
log.Println("Cancelling the step runner...")
b.runner.Cancel()
}
}
package docker
import (
"github.com/mitchellh/packer/packer"
"testing"
)
func TestBuilder_implBuilder(t *testing.T) {
var _ packer.Builder = new(Builder)
}
package docker
import (
"bytes"
"fmt"
"github.com/ActiveState/tail"
"github.com/mitchellh/packer/packer"
"io"
"io/ioutil"
"log"
"os"
"os/exec"
"path/filepath"
"strconv"
"sync"
"syscall"
"time"
)
type Communicator struct {
ContainerId string
HostDir string
ContainerDir string
lock sync.Mutex
}
func (c *Communicator) Start(remote *packer.RemoteCmd) error {
// Create a temporary file to store the output. Because of a bug in
// Docker, sometimes all the output doesn't properly show up. This
// file will capture ALL of the output, and we'll read that.
//
// https://github.com/dotcloud/docker/issues/2625
outputFile, err := ioutil.TempFile(c.HostDir, "cmd")
if err != nil {
return err
}
outputFile.Close()
// This file will store the exit code of the command once it is complete.
exitCodePath := outputFile.Name() + "-exit"
cmd := exec.Command("docker", "attach", c.ContainerId)
stdin_w, err := cmd.StdinPipe()
if err != nil {
// We have to do some cleanup since run was never called
os.Remove(outputFile.Name())
os.Remove(exitCodePath)
return err
}
// Run the actual command in a goroutine so that Start doesn't block
go c.run(cmd, remote, stdin_w, outputFile, exitCodePath)
return nil
}
func (c *Communicator) Upload(dst string, src io.Reader) error {
// Create a temporary file to store the upload
tempfile, err := ioutil.TempFile(c.HostDir, "upload")
if err != nil {
return err
}
defer os.Remove(tempfile.Name())
// Copy the contents to the temporary file
_, err = io.Copy(tempfile, src)
tempfile.Close()
if err != nil {
return err
}
// Copy the file into place by copying the temporary file we put
// into the shared folder into the proper location in the container
cmd := &packer.RemoteCmd{
Command: fmt.Sprintf("cp %s/%s %s", c.ContainerDir,
filepath.Base(tempfile.Name()), dst),
}
if err := c.Start(cmd); err != nil {
return err
}
// Wait for the copy to complete
cmd.Wait()
if cmd.ExitStatus != 0 {
return fmt.Errorf("Upload failed with non-zero exit status: %d", cmd.ExitStatus)
}
return nil
}
func (c *Communicator) UploadDir(dst string, src string, exclude []string) error {
// Create the temporary directory that will store the contents of "src"
// for copying into the container.
td, err := ioutil.TempDir(c.HostDir, "dirupload")
if err != nil {
return err
}
defer os.RemoveAll(td)
walkFn := func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
relpath, err := filepath.Rel(src, path)
if err != nil {
return err
}
hostpath := filepath.Join(td, relpath)
// If it is a directory, just create it
if info.IsDir() {
return os.MkdirAll(hostpath, info.Mode())
}
// It is a file, copy it over, including mode.
src, err := os.Open(path)
if err != nil {
return err
}
defer src.Close()
dst, err := os.Create(hostpath)
if err != nil {
return err
}
defer dst.Close()
if _, err := io.Copy(dst, src); err != nil {
return err
}
si, err := src.Stat()
if err != nil {
return err
}
return dst.Chmod(si.Mode())
}
// Copy the entire directory tree to the temporary directory
if err := filepath.Walk(src, walkFn); err != nil {
return err
}
// Determine the destination directory
containerSrc := filepath.Join(c.ContainerDir, filepath.Base(td))
containerDst := dst
if src[len(src)-1] != '/' {
containerDst = filepath.Join(dst, filepath.Base(src))
}
// Make the directory, then copy into it
cmd := &packer.RemoteCmd{
Command: fmt.Sprintf("set -e; mkdir -p %s; cp -R %s/* %s",
containerDst, containerSrc, containerDst),
}
if err := c.Start(cmd); err != nil {
return err
}
// Wait for the copy to complete
cmd.Wait()
if cmd.ExitStatus != 0 {
return fmt.Errorf("Upload failed with non-zero exit status: %d", cmd.ExitStatus)
}
return nil
}
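The destination logic above follows rsync-style trailing-slash semantics: when `src` has no trailing slash, the source directory itself is recreated under `dst`; with a trailing slash, only its contents land in `dst`. A plain-shell sketch of that decision (the paths are made-up examples, not anything Packer requires):

```shell
# Mirror of UploadDir's destination rule (hypothetical paths):
src="/tmp/app"          # no trailing slash
dst="/container/dest"
case "$src" in
  */) target="$dst" ;;                       # trailing slash: contents go into dst
  *)  target="$dst/$(basename "$src")" ;;    # otherwise: the directory itself goes into dst
esac
echo "$target"
```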
func (c *Communicator) Download(src string, dst io.Writer) error {
panic("not implemented")
}
// Runs the given command and blocks until completion
func (c *Communicator) run(cmd *exec.Cmd, remote *packer.RemoteCmd, stdin_w io.WriteCloser, outputFile *os.File, exitCodePath string) {
// For Docker, remote communication must be serialized since it
// only supports single execution.
c.lock.Lock()
defer c.lock.Unlock()
// Clean up after ourselves by removing our temporary files
defer os.Remove(outputFile.Name())
defer os.Remove(exitCodePath)
// Tail the output file and send the data to the stdout listener
tail, err := tail.TailFile(outputFile.Name(), tail.Config{
Poll: true,
ReOpen: true,
Follow: true,
})
if err != nil {
log.Printf("Error tailing output file: %s", err)
remote.SetExited(254)
return
}
defer tail.Stop()
// Modify the remote command so that all the output of the commands
// go to a single file and so that the exit code is redirected to
// a single file. This lets us determine both when the command
// is truly complete (because the file will have data), what the
// exit status is (because Docker loses it because of the pty, not
// Docker's fault), and get the output (Docker bug).
remoteCmd := fmt.Sprintf("(%s) >%s 2>&1; echo $? >%s",
remote.Command,
filepath.Join(c.ContainerDir, filepath.Base(outputFile.Name())),
filepath.Join(c.ContainerDir, filepath.Base(exitCodePath)))
// Start the command
log.Printf("Executing in container %s: %#v", c.ContainerId, remoteCmd)
if err := cmd.Start(); err != nil {
log.Printf("Error executing: %s", err)
remote.SetExited(254)
return
}
go func() {
defer stdin_w.Close()
// This sleep needs to be here because of the issue linked to below.
// Basically, without it, Docker will hang on reading stdin forever,
// and won't see what we write, for some reason.
//
// https://github.com/dotcloud/docker/issues/2628
time.Sleep(2 * time.Second)
stdin_w.Write([]byte(remoteCmd + "\n"))
}()
// Start a goroutine to read all the lines out of the logs
go func() {
for line := range tail.Lines {
if remote.Stdout != nil {
remote.Stdout.Write([]byte(line.Text + "\n"))
} else {
log.Printf("Command stdout: %#v", line.Text)
}
}
}()
err = cmd.Wait()
if exitErr, ok := err.(*exec.ExitError); ok {
exitStatus := 1
// There is no process-independent way to get the REAL
// exit status so we just try to go deeper.
if status, ok := exitErr.Sys().(syscall.WaitStatus); ok {
exitStatus = status.ExitStatus()
}
// Say that we ended, since if Docker itself failed, then
// the command must've not run, or so we assume
remote.SetExited(exitStatus)
return
}
// Wait for the exit code to appear in our file...
log.Println("Waiting for exit code to appear for remote command...")
for {
fi, err := os.Stat(exitCodePath)
if err == nil && fi.Size() > 0 {
break
}
time.Sleep(1 * time.Second)
}
// Read the exit code
exitRaw, err := ioutil.ReadFile(exitCodePath)
if err != nil {
log.Printf("Error executing: %s", err)
remote.SetExited(254)
return
}
exitStatus, err := strconv.ParseInt(string(bytes.TrimSpace(exitRaw)), 10, 0)
if err != nil {
log.Printf("Error executing: %s", err)
remote.SetExited(254)
return
}
log.Printf("Executed command exit status: %d", exitStatus)
// Finally, we're done
remote.SetExited(int(exitStatus))
}
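The wrapping that `run` builds into `remoteCmd` can be seen in isolation: the command's combined output goes to one file, and its exit status to a sidecar file whose non-emptiness signals completion. A minimal shell sketch of the same trick (the temp paths are illustrative, not Packer's):

```shell
# Emulate the remoteCmd wrapper: capture output and exit code in files.
out="$(mktemp)"
exit_file="${out}-exit"
# Run a command that prints something and fails; redirect everything.
( echo "hello"; false ) >"$out" 2>&1; echo $? >"$exit_file"
status="$(cat "$exit_file")"   # the exit code survives even through a pty
```

Polling for the sidecar file to be non-empty, as the Go code does, is what makes this robust when the pty swallows the real exit status.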
package docker
import (
"github.com/mitchellh/packer/packer"
"testing"
)
func TestCommunicator_impl(t *testing.T) {
var _ packer.Communicator = new(Communicator)
}
package docker
import (
"fmt"
"github.com/mitchellh/packer/common"
"github.com/mitchellh/packer/packer"
)
type Config struct {
common.PackerConfig `mapstructure:",squash"`
ExportPath string `mapstructure:"export_path"`
Image string
Pull bool
tpl *packer.ConfigTemplate
}
func NewConfig(raws ...interface{}) (*Config, []string, error) {
c := new(Config)
md, err := common.DecodeConfig(c, raws...)
if err != nil {
return nil, nil, err
}
c.tpl, err = packer.NewConfigTemplate()
if err != nil {
return nil, nil, err
}
// Default Pull if it wasn't set
hasPull := false
for _, k := range md.Keys {
if k == "Pull" {
hasPull = true
break
}
}
if !hasPull {
c.Pull = true
}
errs := common.CheckUnusedConfig(md)
templates := map[string]*string{
"export_path": &c.ExportPath,
"image": &c.Image,
}
for n, ptr := range templates {
var err error
*ptr, err = c.tpl.Process(*ptr, nil)
if err != nil {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("Error processing %s: %s", n, err))
}
}
if c.ExportPath == "" {
errs = packer.MultiErrorAppend(errs,
fmt.Errorf("export_path must be specified"))
}
if c.Image == "" {
errs = packer.MultiErrorAppend(errs,
fmt.Errorf("image must be specified"))
}
if errs != nil && len(errs.Errors) > 0 {
return nil, nil, errs
}
return c, nil, nil
}
package docker
import (
"testing"
)
func testConfig() map[string]interface{} {
return map[string]interface{}{
"export_path": "foo",
"image": "bar",
}
}
func testConfigStruct(t *testing.T) *Config {
c, warns, errs := NewConfig(testConfig())
if len(warns) > 0 {
t.Fatalf("bad: %#v", len(warns))
}
if errs != nil {
t.Fatalf("bad: %#v", errs)
}
return c
}
func testConfigErr(t *testing.T, warns []string, err error) {
if len(warns) > 0 {
t.Fatalf("bad: %#v", warns)
}
if err == nil {
t.Fatal("should error")
}
}
func testConfigOk(t *testing.T, warns []string, err error) {
if len(warns) > 0 {
t.Fatalf("bad: %#v", warns)
}
if err != nil {
t.Fatalf("bad: %s", err)
}
}
func TestConfigPrepare_exportPath(t *testing.T) {
raw := testConfig()
// No export path
delete(raw, "export_path")
_, warns, errs := NewConfig(raw)
testConfigErr(t, warns, errs)
// Good export path
raw["export_path"] = "good"
_, warns, errs = NewConfig(raw)
testConfigOk(t, warns, errs)
}
func TestConfigPrepare_image(t *testing.T) {
raw := testConfig()
// No image
delete(raw, "image")
_, warns, errs := NewConfig(raw)
testConfigErr(t, warns, errs)
// Good image
raw["image"] = "path"
_, warns, errs = NewConfig(raw)
testConfigOk(t, warns, errs)
}
func TestConfigPrepare_pull(t *testing.T) {
raw := testConfig()
// No pull set
delete(raw, "pull")
c, warns, errs := NewConfig(raw)
testConfigOk(t, warns, errs)
if !c.Pull {
t.Fatal("should pull by default")
}
// Pull set
raw["pull"] = false
c, warns, errs = NewConfig(raw)
testConfigOk(t, warns, errs)
if c.Pull {
t.Fatal("should not pull")
}
}
package docker
import (
"io"
)
// Driver is the interface that has to be implemented to communicate with
// Docker. The Driver interface also allows the steps to be tested since
// a mock driver can be shimmed in.
type Driver interface {
// Export exports the container with the given ID to the given writer.
Export(id string, dst io.Writer) error
// Pull should pull down the given image.
Pull(image string) error
// StartContainer starts a container and returns the ID for that container,
// along with a potential error.
StartContainer(*ContainerConfig) (string, error)
// StopContainer forcibly stops a container.
StopContainer(id string) error
// Verify verifies that the driver can run
Verify() error
}
// ContainerConfig is the configuration used to start a container.
type ContainerConfig struct {
Image string
Volumes map[string]string
}
package docker
import (
"bytes"
"fmt"
"github.com/mitchellh/packer/packer"
"io"
"log"
"os/exec"
"strings"
)
type DockerDriver struct {
Ui packer.Ui
}
func (d *DockerDriver) Export(id string, dst io.Writer) error {
var stderr bytes.Buffer
cmd := exec.Command("docker", "export", id)
cmd.Stdout = dst
cmd.Stderr = &stderr
log.Printf("Exporting container: %s", id)
if err := cmd.Start(); err != nil {
return err
}
if err := cmd.Wait(); err != nil {
err = fmt.Errorf("Error exporting: %s\nStderr: %s",
err, stderr.String())
return err
}
return nil
}
func (d *DockerDriver) Pull(image string) error {
cmd := exec.Command("docker", "pull", image)
return runAndStream(cmd, d.Ui)
}
func (d *DockerDriver) StartContainer(config *ContainerConfig) (string, error) {
// Args that we're going to pass to Docker
args := []string{"run", "-d", "-i", "-t"}
if len(config.Volumes) > 0 {
volumes := make([]string, 0, len(config.Volumes))
for host, guest := range config.Volumes {
volumes = append(volumes, fmt.Sprintf("%s:%s", host, guest))
}
args = append(args, "-v", strings.Join(volumes, ","))
}
args = append(args, config.Image, "/bin/bash")
// Start the container
var stdout, stderr bytes.Buffer
cmd := exec.Command("docker", args...)
cmd.Stdout = &stdout
cmd.Stderr = &stderr
log.Printf("Starting container with args: %v", args)
if err := cmd.Start(); err != nil {
return "", err
}
log.Println("Waiting for container to finish starting")
if err := cmd.Wait(); err != nil {
if _, ok := err.(*exec.ExitError); ok {
err = fmt.Errorf("Docker exited with a non-zero exit status.\nStderr: %s",
stderr.String())
}
return "", err
}
// Capture the container ID, which is alone on stdout
return strings.TrimSpace(stdout.String()), nil
}
func (d *DockerDriver) StopContainer(id string) error {
return exec.Command("docker", "kill", id).Run()
}
func (d *DockerDriver) Verify() error {
if _, err := exec.LookPath("docker"); err != nil {
return err
}
return nil
}
package docker
import (
"io"
)
// MockDriver is a driver implementation that can be used for tests.
type MockDriver struct {
ExportReader io.Reader
ExportError error
PullError error
StartID string
StartError error
StopError error
VerifyError error
ExportCalled bool
ExportID string
PullCalled bool
PullImage string
StartCalled bool
StartConfig *ContainerConfig
StopCalled bool
StopID string
VerifyCalled bool
}
func (d *MockDriver) Export(id string, dst io.Writer) error {
d.ExportCalled = true
d.ExportID = id
if d.ExportReader != nil {
_, err := io.Copy(dst, d.ExportReader)
if err != nil {
return err
}
}
return d.ExportError
}
func (d *MockDriver) Pull(image string) error {
d.PullCalled = true
d.PullImage = image
return d.PullError
}
func (d *MockDriver) StartContainer(config *ContainerConfig) (string, error) {
d.StartCalled = true
d.StartConfig = config
return d.StartID, d.StartError
}
func (d *MockDriver) StopContainer(id string) error {
d.StopCalled = true
d.StopID = id
return d.StopError
}
func (d *MockDriver) Verify() error {
d.VerifyCalled = true
return d.VerifyError
}
package docker
import "testing"
func TestMockDriver_impl(t *testing.T) {
var _ Driver = new(MockDriver)
}
package docker
import "testing"
func TestDockerDriver_impl(t *testing.T) {
var _ Driver = new(DockerDriver)
}
package docker
import (
"fmt"
"github.com/mitchellh/iochan"
"github.com/mitchellh/packer/packer"
"io"
"log"
"os/exec"
"regexp"
"strings"
"sync"
"syscall"
)
func runAndStream(cmd *exec.Cmd, ui packer.Ui) error {
stdout_r, stdout_w := io.Pipe()
stderr_r, stderr_w := io.Pipe()
defer stdout_w.Close()
defer stderr_w.Close()
log.Printf("Executing: %s %v", cmd.Path, cmd.Args[1:])
cmd.Stdout = stdout_w
cmd.Stderr = stderr_w
if err := cmd.Start(); err != nil {
return err
}
// Create the channels we'll use for data
exitCh := make(chan int, 1)
stdoutCh := iochan.DelimReader(stdout_r, '\n')
stderrCh := iochan.DelimReader(stderr_r, '\n')
// Start the goroutine to watch for the exit
go func() {
defer stdout_w.Close()
defer stderr_w.Close()
exitStatus := 0
err := cmd.Wait()
if exitErr, ok := err.(*exec.ExitError); ok {
exitStatus = 1
// There is no process-independent way to get the REAL
// exit status so we just try to go deeper.
if status, ok := exitErr.Sys().(syscall.WaitStatus); ok {
exitStatus = status.ExitStatus()
}
}
exitCh <- exitStatus
}()
// This waitgroup waits for the streaming to end
var streamWg sync.WaitGroup
streamWg.Add(2)
streamFunc := func(ch <-chan string) {
defer streamWg.Done()
for data := range ch {
data = cleanOutputLine(data)
if data != "" {
ui.Message(data)
}
}
}
// Stream stderr/stdout
go streamFunc(stderrCh)
go streamFunc(stdoutCh)
// Wait for the process to end and then wait for the streaming to end
exitStatus := <-exitCh
streamWg.Wait()
if exitStatus != 0 {
return fmt.Errorf("Bad exit status: %d", exitStatus)
}
return nil
}
// cleanOutputLine cleans up a line so that '\r' don't muck up the
// UI output when we're reading from a remote command.
func cleanOutputLine(line string) string {
// Build a regular expression that will get rid of shell codes
re := regexp.MustCompile("(?i)\x1b\\[([0-9]{1,2}(;[0-9]{1,2})?)?[a|b|m|k]")
line = re.ReplaceAllString(line, "")
// Trim surrounding whitespace
line = strings.TrimSpace(line)
// Trim up to the first carriage return, since that text would be
// lost anyways.
idx := strings.LastIndex(line, "\r")
if idx > -1 {
line = line[idx+1:]
}
return line
}
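The last-`\r` trimming mirrors what a terminal actually displays when a progress bar overwrites itself: only the text after the final carriage return is visible. A quick shell illustration (the sample line is made up):

```shell
# A progress line that rewrote itself twice via carriage returns:
line=$'pulling... 10%\rpulling... 50%\rpulling... 100%'
visible="${line##*$'\r'}"   # keep only what follows the last \r
echo "$visible"
```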
package docker
import (
"testing"
)
func TestCleanLine(t *testing.T) {
cases := []struct {
input string
output string
}{
{
"\x1b[0A\x1b[2K\r8dbd9e392a96: Pulling image (precise) from ubuntu\r\x1b[0B\x1b[1A\x1b[2K\r8dbd9e392a96: Pulling image (precise) from ubuntu, endpoint: https://cdn-registry-1.docker.io/v1/\r\x1b[1B",
"8dbd9e392a96: Pulling image (precise) from ubuntu, endpoint: https://cdn-registry-1.docker.io/v1/",
},
}
for _, tc := range cases {
actual := cleanOutputLine(tc.input)
if actual != tc.output {
t.Fatalf("bad: %#v %#v", tc.input, actual)
}
}
}
package docker
import (
"fmt"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"os"
)
// StepExport exports the container to a flat tar file.
type StepExport struct{}
func (s *StepExport) Run(state multistep.StateBag) multistep.StepAction {
config := state.Get("config").(*Config)
driver := state.Get("driver").(Driver)
containerId := state.Get("container_id").(string)
ui := state.Get("ui").(packer.Ui)
// Open the file that we're going to write to
f, err := os.Create(config.ExportPath)
if err != nil {
err := fmt.Errorf("Error creating output file: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
ui.Say("Exporting the container")
if err := driver.Export(containerId, f); err != nil {
f.Close()
os.Remove(f.Name())
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
f.Close()
return multistep.ActionContinue
}
func (s *StepExport) Cleanup(state multistep.StateBag) {}
package docker
import (
"bytes"
"errors"
"github.com/mitchellh/multistep"
"io/ioutil"
"os"
"testing"
)
func testStepExportState(t *testing.T) multistep.StateBag {
state := testState(t)
state.Put("container_id", "foo")
return state
}
func TestStepExport_impl(t *testing.T) {
var _ multistep.Step = new(StepExport)
}
func TestStepExport(t *testing.T) {
state := testStepExportState(t)
step := new(StepExport)
defer step.Cleanup(state)
// Create a tempfile for our output path
tf, err := ioutil.TempFile("", "packer")
if err != nil {
t.Fatalf("err: %s", err)
}
tf.Close()
defer os.Remove(tf.Name())
config := state.Get("config").(*Config)
config.ExportPath = tf.Name()
driver := state.Get("driver").(*MockDriver)
driver.ExportReader = bytes.NewReader([]byte("data!"))
// run the step
if action := step.Run(state); action != multistep.ActionContinue {
t.Fatalf("bad action: %#v", action)
}
// verify we did the right thing
if !driver.ExportCalled {
t.Fatal("should've exported")
}
if driver.ExportID != "foo" {
t.Fatalf("bad: %#v", driver.ExportID)
}
// verify the data exported to the file
contents, err := ioutil.ReadFile(tf.Name())
if err != nil {
t.Fatalf("err: %s", err)
}
if string(contents) != "data!" {
t.Fatalf("bad: %#v", string(contents))
}
}
func TestStepExport_error(t *testing.T) {
state := testStepExportState(t)
step := new(StepExport)
defer step.Cleanup(state)
// Create a tempfile for our output path
tf, err := ioutil.TempFile("", "packer")
if err != nil {
t.Fatalf("err: %s", err)
}
tf.Close()
if err := os.Remove(tf.Name()); err != nil {
t.Fatalf("err: %s", err)
}
config := state.Get("config").(*Config)
config.ExportPath = tf.Name()
driver := state.Get("driver").(*MockDriver)
driver.ExportError = errors.New("foo")
// run the step
if action := step.Run(state); action != multistep.ActionHalt {
t.Fatalf("bad action: %#v", action)
}
// verify we have an error
if _, ok := state.GetOk("error"); !ok {
t.Fatal("should have error")
}
// verify we didn't make that file
if _, err := os.Stat(tf.Name()); err == nil {
t.Fatal("export path shouldn't exist")
}
}
package docker
import (
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/common"
)
type StepProvision struct{}
func (s *StepProvision) Run(state multistep.StateBag) multistep.StepAction {
containerId := state.Get("container_id").(string)
tempDir := state.Get("temp_dir").(string)
// Create the communicator that talks to Docker via various
// os/exec tricks.
comm := &Communicator{
ContainerId: containerId,
HostDir: tempDir,
ContainerDir: "/packer-files",
}
prov := common.StepProvision{Comm: comm}
return prov.Run(state)
}
func (s *StepProvision) Cleanup(state multistep.StateBag) {}
package docker
import (
"fmt"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"log"
)
type StepPull struct{}
func (s *StepPull) Run(state multistep.StateBag) multistep.StepAction {
config := state.Get("config").(*Config)
driver := state.Get("driver").(Driver)
ui := state.Get("ui").(packer.Ui)
if !config.Pull {
log.Println("Pull disabled, won't docker pull")
return multistep.ActionContinue
}
ui.Say(fmt.Sprintf("Pulling Docker image: %s", config.Image))
if err := driver.Pull(config.Image); err != nil {
err := fmt.Errorf("Error pulling Docker image: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
return multistep.ActionContinue
}
func (s *StepPull) Cleanup(state multistep.StateBag) {
}
package docker
import (
"errors"
"github.com/mitchellh/multistep"
"testing"
)
func TestStepPull_impl(t *testing.T) {
var _ multistep.Step = new(StepPull)
}
func TestStepPull(t *testing.T) {
state := testState(t)
step := new(StepPull)
defer step.Cleanup(state)
config := state.Get("config").(*Config)
driver := state.Get("driver").(*MockDriver)
// run the step
if action := step.Run(state); action != multistep.ActionContinue {
t.Fatalf("bad action: %#v", action)
}
// verify we did the right thing
if !driver.PullCalled {
t.Fatal("should've pulled")
}
if driver.PullImage != config.Image {
t.Fatalf("bad: %#v", driver.PullImage)
}
}
func TestStepPull_error(t *testing.T) {
state := testState(t)
step := new(StepPull)
defer step.Cleanup(state)
driver := state.Get("driver").(*MockDriver)
driver.PullError = errors.New("foo")
// run the step
if action := step.Run(state); action != multistep.ActionHalt {
t.Fatalf("bad action: %#v", action)
}
// verify we have an error
if _, ok := state.GetOk("error"); !ok {
t.Fatal("should have error")
}
}
func TestStepPull_noPull(t *testing.T) {
state := testState(t)
step := new(StepPull)
defer step.Cleanup(state)
config := state.Get("config").(*Config)
config.Pull = false
driver := state.Get("driver").(*MockDriver)
// run the step
if action := step.Run(state); action != multistep.ActionContinue {
t.Fatalf("bad action: %#v", action)
}
// verify we did the right thing
if driver.PullCalled {
t.Fatal("shouldn't have pulled")
}
}
package docker
import (
"fmt"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
)
type StepRun struct {
containerId string
}
func (s *StepRun) Run(state multistep.StateBag) multistep.StepAction {
config := state.Get("config").(*Config)
driver := state.Get("driver").(Driver)
tempDir := state.Get("temp_dir").(string)
ui := state.Get("ui").(packer.Ui)
runConfig := ContainerConfig{
Image: config.Image,
Volumes: map[string]string{
tempDir: "/packer-files",
},
}
ui.Say("Starting docker container with /bin/bash")
containerId, err := driver.StartContainer(&runConfig)
if err != nil {
err := fmt.Errorf("Error running container: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
// Save the container ID
s.containerId = containerId
state.Put("container_id", s.containerId)
ui.Message(fmt.Sprintf("Container ID: %s", s.containerId))
return multistep.ActionContinue
}
func (s *StepRun) Cleanup(state multistep.StateBag) {
if s.containerId == "" {
return
}
driver := state.Get("driver").(Driver)
ui := state.Get("ui").(packer.Ui)
// Kill the container. We don't handle errors because errors usually
// just mean that the container doesn't exist anymore, which isn't a
// big deal.
ui.Say(fmt.Sprintf("Killing the container: %s", s.containerId))
driver.StopContainer(s.containerId)
// Reset the container ID so that we're idempotent
s.containerId = ""
}
package docker
import (
"errors"
"github.com/mitchellh/multistep"
"testing"
)
func testStepRunState(t *testing.T) multistep.StateBag {
state := testState(t)
state.Put("temp_dir", "/foo")
return state
}
func TestStepRun_impl(t *testing.T) {
var _ multistep.Step = new(StepRun)
}
func TestStepRun(t *testing.T) {
state := testStepRunState(t)
step := new(StepRun)
defer step.Cleanup(state)
config := state.Get("config").(*Config)
driver := state.Get("driver").(*MockDriver)
driver.StartID = "foo"
// run the step
if action := step.Run(state); action != multistep.ActionContinue {
t.Fatalf("bad action: %#v", action)
}
// verify we did the right thing
if !driver.StartCalled {
t.Fatal("should've called")
}
if driver.StartConfig.Image != config.Image {
t.Fatalf("bad: %#v", driver.StartConfig.Image)
}
// verify the ID is saved
idRaw, ok := state.GetOk("container_id")
if !ok {
t.Fatal("should've saved ID")
}
id := idRaw.(string)
if id != "foo" {
t.Fatalf("bad: %#v", id)
}
// Verify we haven't called stop yet
if driver.StopCalled {
t.Fatal("should not have stopped")
}
// Cleanup
step.Cleanup(state)
if !driver.StopCalled {
t.Fatal("should've stopped")
}
if driver.StopID != id {
t.Fatalf("bad: %#v", driver.StopID)
}
}
func TestStepRun_error(t *testing.T) {
state := testStepRunState(t)
step := new(StepRun)
defer step.Cleanup(state)
driver := state.Get("driver").(*MockDriver)
driver.StartError = errors.New("foo")
// run the step
if action := step.Run(state); action != multistep.ActionHalt {
t.Fatalf("bad action: %#v", action)
}
// verify the ID is not saved
if _, ok := state.GetOk("container_id"); ok {
t.Fatal("shouldn't save container ID")
}
// Verify we haven't called stop yet
if driver.StopCalled {
t.Fatal("should not have stopped")
}
// Cleanup
step.Cleanup(state)
if driver.StopCalled {
t.Fatal("should not have stopped")
}
}
package docker
import (
"fmt"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"io/ioutil"
"os"
)
// StepTempDir creates a temporary directory that we use in order to
// share data with the docker container over the communicator.
type StepTempDir struct {
tempDir string
}
func (s *StepTempDir) Run(state multistep.StateBag) multistep.StepAction {
ui := state.Get("ui").(packer.Ui)
ui.Say("Creating a temporary directory for sharing data...")
td, err := ioutil.TempDir("", "packer-docker")
if err != nil {
err := fmt.Errorf("Error making temp dir: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
s.tempDir = td
state.Put("temp_dir", s.tempDir)
return multistep.ActionContinue
}
func (s *StepTempDir) Cleanup(state multistep.StateBag) {
if s.tempDir != "" {
os.RemoveAll(s.tempDir)
}
}
package docker
import (
"github.com/mitchellh/multistep"
"os"
"testing"
)
func TestStepTempDir_impl(t *testing.T) {
var _ multistep.Step = new(StepTempDir)
}
func TestStepTempDir(t *testing.T) {
state := testState(t)
step := new(StepTempDir)
defer step.Cleanup(state)
// sanity test
if _, ok := state.GetOk("temp_dir"); ok {
t.Fatalf("temp_dir should not be in state yet")
}
// run the step
if action := step.Run(state); action != multistep.ActionContinue {
t.Fatalf("bad action: %#v", action)
}
// Verify that we got the temp dir
dirRaw, ok := state.GetOk("temp_dir")
if !ok {
t.Fatalf("should've made temp_dir")
}
dir := dirRaw.(string)
if _, err := os.Stat(dir); err != nil {
t.Fatalf("err: %s", err)
}
// Cleanup
step.Cleanup(state)
if _, err := os.Stat(dir); err == nil {
t.Fatalf("dir should be gone")
}
}
package docker
import (
"bytes"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"testing"
)
func testState(t *testing.T) multistep.StateBag {
state := new(multistep.BasicStateBag)
state.Put("config", testConfigStruct(t))
state.Put("driver", &MockDriver{})
state.Put("hook", &packer.MockHook{})
state.Put("ui", &packer.BasicUi{
Reader: new(bytes.Buffer),
Writer: new(bytes.Buffer),
})
return state
}
...@@ -10,6 +10,7 @@ import (
 	"os"
 	"os/exec"
 	"path/filepath"
+	"runtime"
 	"strings"
 	"time"
 )
...@@ -485,9 +486,29 @@ func (b *Builder) Cancel() {
 }

 func (b *Builder) newDriver() (Driver, error) {
-	vboxmanagePath, err := exec.LookPath("VBoxManage")
-	if err != nil {
-		return nil, err
+	var vboxmanagePath string

+	if runtime.GOOS == "windows" {
+		// On Windows, we check VBOX_INSTALL_PATH env var for the path
+		if installPath := os.Getenv("VBOX_INSTALL_PATH"); installPath != "" {
+			log.Printf("[DEBUG] builder/virtualbox: VBOX_INSTALL_PATH: %s",
+				installPath)
+
+			for _, path := range strings.Split(installPath, ";") {
+				path = filepath.Join(path, "VBoxManage.exe")
+				if _, err := os.Stat(path); err == nil {
+					vboxmanagePath = path
+					break
+				}
+			}
+		}
+	}

+	if vboxmanagePath == "" {
+		var err error
+		vboxmanagePath, err = exec.LookPath("VBoxManage")
+		if err != nil {
+			return nil, err
+		}
 	}

 	log.Printf("VBoxManage path: %s", vboxmanagePath)
...
...@@ -42,7 +42,18 @@ func (s *stepRun) Run(state multistep.StateBag) multistep.StepAction {
 	if int64(config.bootWait) > 0 {
 		ui.Say(fmt.Sprintf("Waiting %s for boot...", config.bootWait))
-		time.Sleep(config.bootWait)
+		wait := time.After(config.bootWait)
+	WAITLOOP:
+		for {
+			select {
+			case <-wait:
+				break WAITLOOP
+			case <-time.After(1 * time.Second):
+				if _, ok := state.GetOk(multistep.StateCancelled); ok {
+					return multistep.ActionHalt
+				}
+			}
+		}
 	}

 	return multistep.ActionContinue
...
...@@ -8,12 +8,13 @@ import (
 // Artifact is the result of running the VMware builder, namely a set
 // of files associated with the resulting machine.
 type Artifact struct {
-	dir string
-	f   []string
+	builderId string
+	dir       string
+	f         []string
 }

-func (*Artifact) BuilderId() string {
-	return BuilderId
+func (a *Artifact) BuilderId() string {
+	return a.builderId
 }

 func (a *Artifact) Files() []string {
...
...@@ -10,13 +10,13 @@ import (
 	"log"
 	"math/rand"
 	"os"
-	"path/filepath"
 	"strings"
 	"text/template"
 	"time"
 )

 const BuilderId = "mitchellh.vmware"
+const BuilderIdESX = "mitchellh.vmware-esx"

 type Builder struct {
 	config config

...@@ -405,6 +405,7 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 		},
 		&stepCreateDisk{},
 		&stepCreateVMX{},
+		&stepSuppressMessages{},
 		&stepHTTPServer{},
 		&stepConfigureVNC{},
 		&stepRun{},

...@@ -458,24 +459,22 @@ func (b *Builder) Run(ui packer.Ui, hook packer.Hook, cache packer.Cache) (packe
 	}

 	// Compile the artifact list
-	files := make([]string, 0, 10)
-	visit := func(path string, info os.FileInfo, err error) error {
-		if err != nil {
-			return err
-		}
-
-		if !info.IsDir() {
-			files = append(files, path)
-		}
-
-		return nil
-	}
-
-	if err := filepath.Walk(b.config.OutputDir, visit); err != nil {
+	files, err := state.Get("dir").(OutputDir).ListFiles()
+	if err != nil {
 		return nil, err
 	}

-	return &Artifact{b.config.OutputDir, files}, nil
+	// Set the proper builder ID
+	builderId := BuilderId
+	if b.config.RemoteType != "" {
+		builderId = BuilderIdESX
+	}
+
+	return &Artifact{
+		builderId: builderId,
+		dir:       b.config.OutputDir,
+		f:         files,
+	}, nil
 }

 func (b *Builder) Cancel() {
...
...@@ -31,6 +31,10 @@ type Driver interface {
 	// Stop stops a VM specified by the path to the VMX given.
 	Stop(string) error

+	// SuppressMessages modifies the VMX or surrounding directory so that
+	// VMware doesn't show any annoying messages.
+	SuppressMessages(string) error
+
 	// Get the path to the VMware ISO for the given flavor.
 	ToolsIsoPath(string) string
...
package vmware

import (
+	"bufio"
 	"bytes"
 	gossh "code.google.com/p/go.crypto/ssh"
 	"encoding/csv"

...@@ -42,7 +43,7 @@ func (d *ESX5Driver) CreateDisk(diskPathLocal string, size string, typeId string
 }

 func (d *ESX5Driver) IsRunning(vmxPathLocal string) (bool, error) {
-	vmxPath := d.datastorePath(vmxPathLocal)
+	vmxPath := filepath.Join(d.outputDir, filepath.Base(vmxPathLocal))
 	state, err := d.run(nil, "vim-cmd", "vmsvc/power.getstate", vmxPath)
 	if err != nil {
 		return false, err

...@@ -68,6 +69,10 @@ func (d *ESX5Driver) Register(vmxPathLocal string) error {
 	return d.sh("vim-cmd", "solo/registervm", vmxPath)
 }

+func (d *ESX5Driver) SuppressMessages(vmxPath string) error {
+	return nil
+}
+
 func (d *ESX5Driver) Unregister(vmxPathLocal string) error {
 	vmxPath := filepath.Join(d.outputDir, filepath.Base(vmxPathLocal))
 	return d.sh("vim-cmd", "vmsvc/unregister", vmxPath)

...@@ -80,11 +85,11 @@ func (d *ESX5Driver) UploadISO(localPath string) (string, error) {
 		return "", err
 	}

-	if err := d.mkdir(filepath.Dir(targetFile)); err != nil {
+	finalPath := d.datastorePath(targetFile)
+	if err := d.mkdir(filepath.Dir(finalPath)); err != nil {
 		return "", err
 	}

-	finalPath := d.datastorePath(targetFile)
 	if err := d.upload(finalPath, localPath); err != nil {
 		return "", err
 	}

...@@ -210,10 +215,37 @@ func (d *ESX5Driver) DirExists() (bool, error) {
 	return err == nil, nil
 }

+func (d *ESX5Driver) ListFiles() ([]string, error) {
+	stdout, err := d.ssh("ls -1p "+d.outputDir, nil)
+	if err != nil {
+		return nil, err
+	}
+
+	files := make([]string, 0, 10)
+	reader := bufio.NewReader(stdout)
+	for {
+		line, _, err := reader.ReadLine()
+		if err == io.EOF {
+			break
+		}
+
+		if line[len(line)-1] == '/' {
+			continue
+		}
+
+		files = append(files, filepath.Join(d.outputDir, string(line)))
+	}
+
+	return files, nil
+}
+
 func (d *ESX5Driver) MkdirAll() error {
 	return d.mkdir(d.outputDir)
 }

+func (d *ESX5Driver) Remove(path string) error {
+	return d.sh("rm", path)
+}
+
 func (d *ESX5Driver) RemoveAll() error {
 	return d.sh("rm", "-rf", d.outputDir)
 }
...
...@@ -3,6 +3,7 @@ package vmware
 import (
 	"fmt"
 	"github.com/mitchellh/multistep"
+	"io/ioutil"
 	"os"
 	"os/exec"
 	"path/filepath"

...@@ -86,6 +87,15 @@ func (d *Fusion5Driver) Stop(vmxPath string) error {
 	return nil
 }

+func (d *Fusion5Driver) SuppressMessages(vmxPath string) error {
+	dir := filepath.Dir(vmxPath)
+	base := filepath.Base(vmxPath)
+	base = strings.Replace(base, ".vmx", "", -1)
+
+	plistPath := filepath.Join(dir, base+".plist")
+	return ioutil.WriteFile(plistPath, []byte(fusionSuppressPlist), 0644)
+}
+
 func (d *Fusion5Driver) Verify() error {
 	if _, err := os.Stat(d.AppPath); err != nil {
 		if os.IsNotExist(err) {

...@@ -129,3 +139,12 @@ func (d *Fusion5Driver) ToolsIsoPath(k string) string {
 func (d *Fusion5Driver) DhcpLeasesPath(device string) string {
 	return "/var/db/vmware/vmnet-dhcpd-" + device + ".leases"
 }
+
+const fusionSuppressPlist = `<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+<plist version="1.0">
+<dict>
+	<key>disallowUpgrade</key>
+	<true/>
+</dict>
+</plist>`
...@@ -114,6 +114,10 @@ func (d *Player5LinuxDriver) Stop(vmxPath string) error {
 	return nil
 }

+func (d *Player5LinuxDriver) SuppressMessages(vmxPath string) error {
+	return nil
+}
+
 func (d *Player5LinuxDriver) Verify() error {
 	if err := d.findApp(); err != nil {
 		return fmt.Errorf("VMware Player application ('vmplayer') not found in path.")
...
...@@ -89,6 +89,10 @@ func (d *Workstation9Driver) Stop(vmxPath string) error {
 	return nil
 }

+func (d *Workstation9Driver) SuppressMessages(vmxPath string) error {
+	return nil
+}
+
 func (d *Workstation9Driver) Verify() error {
 	var err error
 	if d.AppPath == "" {
...
...@@ -2,6 +2,7 @@ package vmware
 import (
 	"os"
+	"path/filepath"
 )

 // OutputDir is an interface type that abstracts the creation and handling

...@@ -10,7 +11,9 @@ import (
 // VMware products as well as local.
 type OutputDir interface {
 	DirExists() (bool, error)
+	ListFiles() ([]string, error)
 	MkdirAll() error
+	Remove(string) error
 	RemoveAll() error
 	SetOutputDir(string)
 }

...@@ -26,10 +29,30 @@ func (d *localOutputDir) DirExists() (bool, error) {
 	return err == nil, nil
 }

+func (d *localOutputDir) ListFiles() ([]string, error) {
+	files := make([]string, 0, 10)
+
+	visit := func(path string, info os.FileInfo, err error) error {
+		if err != nil {
+			return err
+		}
+
+		if !info.IsDir() {
+			files = append(files, path)
+		}
+
+		return nil
+	}
+
+	return files, filepath.Walk(d.dir, visit)
+}
+
 func (d *localOutputDir) MkdirAll() error {
 	return os.MkdirAll(d.dir, 0755)
 }

+func (d *localOutputDir) Remove(path string) error {
+	return os.Remove(path)
+}
+
 func (d *localOutputDir) RemoveAll() error {
 	return os.RemoveAll(d.dir)
 }
...
...@@ -4,7 +4,6 @@ import (
 	"fmt"
 	"github.com/mitchellh/multistep"
 	"github.com/mitchellh/packer/packer"
-	"os"
 	"path/filepath"
 )

...@@ -16,7 +15,7 @@ var KeepFileExtensions = []string{".nvram", ".vmdk", ".vmsd", ".vmx", ".vmxf"}
 // This step removes unnecessary files from the final result.
 //
 // Uses:
-//   config *config
+//   dir    OutputDir
 //   ui     packer.Ui
 //
 // Produces:

...@@ -24,39 +23,35 @@ var KeepFileExtensions = []string{".nvram", ".vmdk", ".vmsd", ".vmx", ".vmxf"}
 type stepCleanFiles struct{}

 func (stepCleanFiles) Run(state multistep.StateBag) multistep.StepAction {
-	config := state.Get("config").(*config)
+	dir := state.Get("dir").(OutputDir)
 	ui := state.Get("ui").(packer.Ui)

 	ui.Say("Deleting unnecessary VMware files...")
-	visit := func(path string, info os.FileInfo, err error) error {
-		if err != nil {
-			return err
-		}
-
-		if !info.IsDir() {
-			// If the file isn't critical to the function of the
-			// virtual machine, we get rid of it.
-			keep := false
-			ext := filepath.Ext(path)
-			for _, goodExt := range KeepFileExtensions {
-				if goodExt == ext {
-					keep = true
-					break
-				}
-			}
-
-			if !keep {
-				ui.Message(fmt.Sprintf("Deleting: %s", path))
-				return os.Remove(path)
-			}
-		}
-
-		return nil
-	}
-
-	if err := filepath.Walk(config.OutputDir, visit); err != nil {
+	files, err := dir.ListFiles()
+	if err != nil {
 		state.Put("error", err)
 		return multistep.ActionHalt
 	}

+	for _, path := range files {
+		// If the file isn't critical to the function of the
+		// virtual machine, we get rid of it.
+		keep := false
+		ext := filepath.Ext(path)
+		for _, goodExt := range KeepFileExtensions {
+			if goodExt == ext {
+				keep = true
+				break
+			}
+		}
+
+		if !keep {
+			ui.Message(fmt.Sprintf("Deleting: %s", path))
+			if err = dir.Remove(path); err != nil {
+				state.Put("error", err)
+				return multistep.ActionHalt
+			}
+		}
+	}
+
 	return multistep.ActionContinue
...
...@@ -85,8 +85,8 @@ func (s *stepConfigureVNC) Run(state multistep.StateBag) multistep.StepAction {
 	log.Printf("Found available VNC port: %d", vncPort)

 	vmxData := ParseVMX(string(vmxBytes))
-	vmxData["RemoteDisplay.vnc.enabled"] = "TRUE"
-	vmxData["RemoteDisplay.vnc.port"] = fmt.Sprintf("%d", vncPort)
+	vmxData["remotedisplay.vnc.enabled"] = "TRUE"
+	vmxData["remotedisplay.vnc.port"] = fmt.Sprintf("%d", vncPort)

 	if err := WriteVMX(vmxPath, vmxData); err != nil {
 		err := fmt.Errorf("Error writing VMX data: %s", err)
...
package vmware

import (
+	"fmt"
 	"github.com/mitchellh/multistep"
 	"github.com/mitchellh/packer/packer"
 	"log"

...@@ -24,9 +25,14 @@ func (s *stepPrepareOutputDir) Run(state multistep.StateBag) multistep.StepActio
 		return multistep.ActionHalt
 	}

-	if exists && config.PackerForce {
-		ui.Say("Deleting previous output directory...")
-		dir.RemoveAll()
+	if exists {
+		if config.PackerForce {
+			ui.Say("Deleting previous output directory...")
+			dir.RemoveAll()
+		} else {
+			state.Put("error", fmt.Errorf("Output directory '%s' already exists.", config.OutputDir))
+			return multistep.ActionHalt
+		}
 	}

 	if err := dir.MkdirAll(); err != nil {

...@@ -36,6 +42,8 @@ func (s *stepPrepareOutputDir) Run(state multistep.StateBag) multistep.StepActio
 	s.dir = dir

+	state.Put("dir", dir)
+
 	return multistep.ActionContinue
 }
...
...@@ -65,7 +65,19 @@ func (s *stepRun) Run(state multistep.StateBag) multistep.StepAction {
 	// Wait the wait amount
 	if int64(config.bootWait) > 0 {
 		ui.Say(fmt.Sprintf("Waiting %s for boot...", config.bootWait.String()))
-		time.Sleep(config.bootWait)
+		wait := time.After(config.bootWait)
+	WAITLOOP:
+		for {
+			select {
+			case <-wait:
+				break WAITLOOP
+			case <-time.After(1 * time.Second):
+				if _, ok := state.GetOk(multistep.StateCancelled); ok {
+					return multistep.ActionHalt
+				}
+			}
+		}
 	}

 	return multistep.ActionContinue
...
package vmware
import (
"fmt"
"github.com/mitchellh/multistep"
"github.com/mitchellh/packer/packer"
"log"
)
// This step suppresses any messages that the VMware product might show.
type stepSuppressMessages struct{}
func (s *stepSuppressMessages) Run(state multistep.StateBag) multistep.StepAction {
driver := state.Get("driver").(Driver)
ui := state.Get("ui").(packer.Ui)
vmxPath := state.Get("vmx_path").(string)
log.Println("Suppressing messages in VMX")
if err := driver.SuppressMessages(vmxPath); err != nil {
err := fmt.Errorf("Error suppressing messages: %s", err)
state.Put("error", err)
ui.Error(err.Error())
return multistep.ActionHalt
}
return multistep.ActionContinue
}
func (s *stepSuppressMessages) Cleanup(state multistep.StateBag) {}
...@@ -16,10 +16,16 @@ import (
 //
 // Produces:
 //   <nothing>
-type StepProvision struct{}
+type StepProvision struct {
+	Comm packer.Communicator
+}

-func (*StepProvision) Run(state multistep.StateBag) multistep.StepAction {
-	comm := state.Get("communicator").(packer.Communicator)
+func (s *StepProvision) Run(state multistep.StateBag) multistep.StepAction {
+	comm := s.Comm
+	if comm == nil {
+		comm = state.Get("communicator").(packer.Communicator)
+	}
 	hook := state.Get("hook").(packer.Hook)
 	ui := state.Get("ui").(packer.Ui)
...
...@@ -23,6 +23,7 @@ const defaultConfig = `
 	"amazon-chroot": "packer-builder-amazon-chroot",
 	"amazon-instance": "packer-builder-amazon-instance",
 	"digitalocean": "packer-builder-digitalocean",
+	"docker": "packer-builder-docker",
 	"openstack": "packer-builder-openstack",
 	"qemu": "packer-builder-qemu",
 	"virtualbox": "packer-builder-virtualbox",

...@@ -37,7 +38,8 @@ const defaultConfig = `
 	},

 	"post-processors": {
-		"vagrant": "packer-post-processor-vagrant"
+		"vagrant": "packer-post-processor-vagrant",
+		"vsphere": "packer-post-processor-vsphere"
 	},

 	"provisioners": {
...
...@@ -82,6 +82,7 @@ func wrappedMain() int {
 		"Packer Version: %s %s %s",
 		packer.Version, packer.VersionPrerelease, packer.GitCommit)
 	log.Printf("Packer Target OS/Arch: %s %s", runtime.GOOS, runtime.GOARCH)
+	log.Printf("Built with Go Version: %s", runtime.Version())

 	// Prepare stdin for plugin usage by switching it to a pipe
 	setupStdin()
...
...@@ -106,7 +106,7 @@ func (u *ColoredUi) colorize(message string, color UiColor, bold bool) string {
 		attr = 1
 	}

-	return fmt.Sprintf("\033[%d;%d;40m%s\033[0m", attr, color, message)
+	return fmt.Sprintf("\033[%d;%dm%s\033[0m", attr, color, message)
 }

 func (u *ColoredUi) supportsColors() bool {
...
...@@ -19,19 +19,19 @@ func TestColoredUi(t *testing.T) {
 	ui.Say("foo")
 	result := readWriter(bufferUi)
-	if result != "\033[1;33;40mfoo\033[0m\n" {
+	if result != "\033[1;33mfoo\033[0m\n" {
 		t.Fatalf("invalid output: %s", result)
 	}

 	ui.Message("foo")
 	result = readWriter(bufferUi)
-	if result != "\033[0;33;40mfoo\033[0m\n" {
+	if result != "\033[0;33mfoo\033[0m\n" {
 		t.Fatalf("invalid output: %s", result)
 	}

 	ui.Error("foo")
 	result = readWriter(bufferUi)
-	if result != "\033[1;31;40mfoo\033[0m\n" {
+	if result != "\033[1;31mfoo\033[0m\n" {
 		t.Fatalf("invalid output: %s", result)
 	}
 }
...
...@@ -10,7 +10,7 @@ import (
 var GitCommit string

 // The version of packer.
-const Version = "0.4.0"
+const Version = "0.4.1"

 // Any pre-release marker for the version. If this is "" (empty string),
 // then it means that it is a final release. Otherwise, this is the
...
package main
import (
"github.com/mitchellh/packer/builder/docker"
"github.com/mitchellh/packer/packer/plugin"
)
func main() {
plugin.ServeBuilder(new(docker.Builder))
}
package main
import (
"github.com/mitchellh/packer/packer/plugin"
"github.com/mitchellh/packer/post-processor/vsphere"
)
func main() {
plugin.ServePostProcessor(new(vsphere.PostProcessor))
}
...@@ -2,6 +2,7 @@ package vagrant
 import (
 	"archive/tar"
+	"compress/flate"
 	"compress/gzip"
 	"encoding/json"
 	"fmt"

...@@ -52,13 +53,19 @@ func DirToBox(dst, dir string, ui packer.Ui, level int) error {
 	}
 	defer dstF.Close()

-	gzipWriter, err := gzip.NewWriterLevel(dstF, level)
-	if err != nil {
-		return err
+	var dstWriter io.Writer = dstF
+	if level != flate.NoCompression {
+		log.Printf("Compressing with gzip compression level: %d", level)
+		gzipWriter, err := gzip.NewWriterLevel(dstWriter, level)
+		if err != nil {
+			return err
+		}
+		defer gzipWriter.Close()
+		dstWriter = gzipWriter
 	}
-	defer gzipWriter.Close()

-	tarWriter := tar.NewWriter(gzipWriter)
+	tarWriter := tar.NewWriter(dstWriter)
 	defer tarWriter.Close()

 	// This is the walk func that tars each of the files in the dir
...
package vsphere
import (
"bytes"
"fmt"
"github.com/mitchellh/packer/common"
"github.com/mitchellh/packer/packer"
"os/exec"
"strings"
)
var builtins = map[string]string{
"mitchellh.vmware": "vmware",
}
type Config struct {
common.PackerConfig `mapstructure:",squash"`
Insecure bool `mapstructure:"insecure"`
Datacenter string `mapstructure:"datacenter"`
Datastore string `mapstructure:"datastore"`
Host string `mapstructure:"host"`
Password string `mapstructure:"password"`
PathToResourcePool string `mapstructure:"path_to_resource_pool"`
Username string `mapstructure:"username"`
VMFolder string `mapstructure:"vm_folder"`
VMName string `mapstructure:"vm_name"`
VMNetwork string `mapstructure:"vm_network"`
}
type PostProcessor struct {
config Config
}
func (p *PostProcessor) Configure(raws ...interface{}) error {
_, err := common.DecodeConfig(&p.config, raws...)
if err != nil {
return err
}
tpl, err := packer.NewConfigTemplate()
if err != nil {
return err
}
tpl.UserVars = p.config.PackerUserVars
// Accumulate any errors
errs := new(packer.MultiError)
if _, err := exec.LookPath("ovftool"); err != nil {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("ovftool not found: %s", err))
}
validates := map[string]*string{
"datacenter": &p.config.Datacenter,
"datastore": &p.config.Datastore,
"host": &p.config.Host,
"vm_network": &p.config.VMNetwork,
"password": &p.config.Password,
"path_to_resource_pool": &p.config.PathToResourcePool,
"username": &p.config.Username,
"vm_folder": &p.config.VMFolder,
"vm_name": &p.config.VMName,
}
for n := range validates {
if *validates[n] == "" {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("%s must be set", n))
}
}
if len(errs.Errors) > 0 {
return errs
}
return nil
}
func (p *PostProcessor) PostProcess(ui packer.Ui, artifact packer.Artifact) (packer.Artifact, bool, error) {
if _, ok := builtins[artifact.BuilderId()]; !ok {
return nil, false, fmt.Errorf("Unknown artifact type, can't build box: %s", artifact.BuilderId())
}
vmx := ""
for _, path := range artifact.Files() {
if strings.HasSuffix(path, ".vmx") {
vmx = path
break
}
}
if vmx == "" {
return nil, false, fmt.Errorf("VMX file not found")
}
ui.Message(fmt.Sprintf("Uploading %s to vSphere", vmx))
args := []string{
fmt.Sprintf("--noSSLVerify=%t", p.config.Insecure),
"--acceptAllEulas",
fmt.Sprintf("--name=%s", p.config.VMName),
fmt.Sprintf("--datastore=%s", p.config.Datastore),
fmt.Sprintf("--network=%s", p.config.VMNetwork),
fmt.Sprintf("--vmFolder=%s", p.config.VMFolder),
fmt.Sprintf("vi://%s:%s@%s/%s/%s",
p.config.Username,
p.config.Password,
p.config.Host,
p.config.Datacenter,
p.config.PathToResourcePool),
}
var out bytes.Buffer
cmd := exec.Command("ovftool", args...)
cmd.Stdout = &out
if err := cmd.Run(); err != nil {
return nil, false, fmt.Errorf("Failed: %s\nStdout: %s", err, out.String())
}
ui.Message(fmt.Sprintf("%s", out.String()))
return artifact, false, nil
}
...@@ -18,20 +18,21 @@ import ( ...@@ -18,20 +18,21 @@ import (
type Config struct { type Config struct {
common.PackerConfig `mapstructure:",squash"` common.PackerConfig `mapstructure:",squash"`
ChefEnvironment string `mapstructure:"chef_environment"` ChefEnvironment string `mapstructure:"chef_environment"`
ConfigTemplate string `mapstructure:"config_template"` ConfigTemplate string `mapstructure:"config_template"`
CookbookPaths []string `mapstructure:"cookbook_paths"` CookbookPaths []string `mapstructure:"cookbook_paths"`
RolesPath string `mapstructure:"roles_path"` RolesPath string `mapstructure:"roles_path"`
DataBagsPath string `mapstructure:"data_bags_path"` DataBagsPath string `mapstructure:"data_bags_path"`
EnvironmentsPath string `mapstructure:"environments_path"` EncryptedDataBagSecretPath string `mapstructure:"encrypted_data_bag_secret_path"`
ExecuteCommand string `mapstructure:"execute_command"` EnvironmentsPath string `mapstructure:"environments_path"`
InstallCommand string `mapstructure:"install_command"` ExecuteCommand string `mapstructure:"execute_command"`
RemoteCookbookPaths []string `mapstructure:"remote_cookbook_paths"` InstallCommand string `mapstructure:"install_command"`
Json map[string]interface{} RemoteCookbookPaths []string `mapstructure:"remote_cookbook_paths"`
PreventSudo bool `mapstructure:"prevent_sudo"` Json map[string]interface{}
RunList []string `mapstructure:"run_list"` PreventSudo bool `mapstructure:"prevent_sudo"`
SkipInstall bool `mapstructure:"skip_install"` RunList []string `mapstructure:"run_list"`
StagingDir string `mapstructure:"staging_directory"` SkipInstall bool `mapstructure:"skip_install"`
StagingDir string `mapstructure:"staging_directory"`
tpl *packer.ConfigTemplate tpl *packer.ConfigTemplate
} }
...@@ -41,18 +42,20 @@ type Provisioner struct { ...@@ -41,18 +42,20 @@ type Provisioner struct {
} }
type ConfigTemplate struct {
	CookbookPaths              string
	DataBagsPath               string
	EncryptedDataBagSecretPath string
	RolesPath                  string
	EnvironmentsPath           string
	ChefEnvironment            string

	// Templates don't support boolean statements until Go 1.2. In the
	// mean time, we do this.
	// TODO(mitchellh): Remove when Go 1.2 is released
	HasDataBagsPath               bool
	HasEncryptedDataBagSecretPath bool
	HasRolesPath                  bool
	HasEnvironmentsPath           bool
}

type ExecuteTemplate struct {
@@ -97,12 +100,13 @@ func (p *Provisioner) Prepare(raws ...interface{}) error {
	errs := common.CheckUnusedConfig(md)

	templates := map[string]*string{
		"config_template":           &p.config.ConfigTemplate,
		"data_bags_path":            &p.config.DataBagsPath,
		"encrypted_data_bag_secret": &p.config.EncryptedDataBagSecretPath,
		"roles_path":                &p.config.RolesPath,
		"staging_dir":               &p.config.StagingDir,
		"environments_path":         &p.config.EnvironmentsPath,
		"chef_environment":          &p.config.ChefEnvironment,
	}

	for n, ptr := range templates {
@@ -181,6 +185,15 @@ func (p *Provisioner) Prepare(raws ...interface{}) error {
		}
	}
if p.config.EncryptedDataBagSecretPath != "" {
pFileInfo, err := os.Stat(p.config.EncryptedDataBagSecretPath)
if err != nil || pFileInfo.IsDir() {
errs = packer.MultiErrorAppend(
errs, fmt.Errorf("Bad encrypted data bag secret '%s': %s", p.config.EncryptedDataBagSecretPath, err))
}
}
	if p.config.EnvironmentsPath != "" {
		pFileInfo, err := os.Stat(p.config.EnvironmentsPath)
@@ -206,6 +219,8 @@ func (p *Provisioner) Prepare(raws ...interface{}) error {
}

func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
ui.Say("Provisioning with chef-solo")
	if !p.config.SkipInstall {
		if err := p.installChef(ui, comm); err != nil {
			return fmt.Errorf("Error installing Chef: %s", err)
@@ -242,6 +257,14 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
		}
	}
encryptedDataBagSecretPath := ""
if p.config.EncryptedDataBagSecretPath != "" {
encryptedDataBagSecretPath = fmt.Sprintf("%s/encrypted_data_bag_secret", p.config.StagingDir)
if err := p.uploadFile(ui, comm, encryptedDataBagSecretPath, p.config.EncryptedDataBagSecretPath); err != nil {
return fmt.Errorf("Error uploading encrypted data bag secret: %s", err)
}
}
	environmentsPath := ""
	if p.config.EnvironmentsPath != "" {
		environmentsPath = fmt.Sprintf("%s/environments", p.config.StagingDir)
@@ -250,7 +273,7 @@ func (p *Provisioner) Provision(ui packer.Ui, comm packer.Communicator) error {
		}
	}

	configPath, err := p.createConfig(ui, comm, cookbookPaths, rolesPath, dataBagsPath, encryptedDataBagSecretPath, environmentsPath, p.config.ChefEnvironment)
	if err != nil {
		return fmt.Errorf("Error creating Chef config file: %s", err)
	}
@@ -287,7 +310,17 @@ func (p *Provisioner) uploadDirectory(ui packer.Ui, comm packer.Communicator, ds
	return comm.UploadDir(dst, src, nil)
}

func (p *Provisioner) uploadFile(ui packer.Ui, comm packer.Communicator, dst string, src string) error {
f, err := os.Open(src)
if err != nil {
return err
}
defer f.Close()
return comm.Upload(dst, f)
}
func (p *Provisioner) createConfig(ui packer.Ui, comm packer.Communicator, localCookbooks []string, rolesPath string, dataBagsPath string, encryptedDataBagSecretPath string, environmentsPath string, chefEnvironment string) (string, error) {
	ui.Message("Creating configuration file 'solo.rb'")

	cookbook_paths := make([]string, len(p.config.RemoteCookbookPaths)+len(localCookbooks))
@@ -318,14 +351,16 @@ func (p *Provisioner) createConfig(ui packer.Ui, comm packer.Communicator, local
	}
	configString, err := p.config.tpl.Process(tpl, &ConfigTemplate{
		CookbookPaths:                 strings.Join(cookbook_paths, ","),
		RolesPath:                     rolesPath,
		DataBagsPath:                  dataBagsPath,
		EncryptedDataBagSecretPath:    encryptedDataBagSecretPath,
		EnvironmentsPath:              environmentsPath,
		HasRolesPath:                  rolesPath != "",
		HasDataBagsPath:               dataBagsPath != "",
		HasEncryptedDataBagSecretPath: encryptedDataBagSecretPath != "",
		HasEnvironmentsPath:           environmentsPath != "",
		ChefEnvironment:               chefEnvironment,
	})
	if err != nil {
		return "", err
@@ -483,6 +518,9 @@ role_path "{{.RolesPath}}"
{{if .HasDataBagsPath}}
data_bag_path "{{.DataBagsPath}}"
{{end}}
{{if .HasEncryptedDataBagSecretPath}}
encrypted_data_bag_secret "{{.EncryptedDataBagSecretPath}}"
{{end}}
{{if .HasEnvironmentsPath}}
environments_path "{{.EnvironmentsPath}}"
chef_environment "{{.ChefEnvironment}}"
...
@@ -155,6 +155,49 @@ func TestProvisionerPrepare_dataBagsPath(t *testing.T) {
	}
}
func TestProvisionerPrepare_encryptedDataBagSecretPath(t *testing.T) {
var err error
var p Provisioner
// Test no config template
config := testConfig()
delete(config, "encrypted_data_bag_secret_path")
err = p.Prepare(config)
if err != nil {
t.Fatalf("err: %s", err)
}
// Test with a file
tf, err := ioutil.TempFile("", "packer")
if err != nil {
t.Fatalf("err: %s", err)
}
defer os.Remove(tf.Name())
config = testConfig()
config["encrypted_data_bag_secret_path"] = tf.Name()
p = Provisioner{}
err = p.Prepare(config)
if err != nil {
t.Fatalf("err: %s", err)
}
// Test with a directory
td, err := ioutil.TempDir("", "packer")
if err != nil {
t.Fatalf("err: %s", err)
}
defer os.RemoveAll(td)
config = testConfig()
config["encrypted_data_bag_secret_path"] = td
p = Provisioner{}
err = p.Prepare(config)
if err == nil {
t.Fatal("should have err")
}
}
func TestProvisionerPrepare_environmentsPath(t *testing.T) {
	var p Provisioner
...
#!/bin/bash
#
# This script compiles Packer for various platforms (specified by the
# XC_OS and XC_ARCH environmental variables).
set -e

NO_COLOR="\x1b[0m"
@@ -20,82 +21,32 @@ cd $DIR
GIT_COMMIT=$(git rev-parse HEAD)
GIT_DIRTY=$(test -n "`git status --porcelain`" && echo "+CHANGES" || true)
# Determine the arch/os combos we're building for
XC_ARCH=${XC_ARCH:-"386 amd64 arm"}
XC_OS=${XC_OS:-linux darwin windows freebsd openbsd}

# Make sure that if we're killed, we kill all our subprocesses
trap "kill 0" SIGINT SIGTERM EXIT

echo -e "${OK_COLOR}==> Installing dependencies to speed up builds...${NO_COLOR}"
go get ./...

echo -e "${OK_COLOR}==> Beginning compile...${NO_COLOR}"
rm -rf pkg/
gox \
    -os="${XC_OS}" \
    -arch="${XC_ARCH}" \
    -ldflags "-X github.com/mitchellh/packer/packer.GitCommit ${GIT_COMMIT}${GIT_DIRTY}" \
    -output "pkg/{{.OS}}_{{.Arch}}/packer-{{.Dir}}" \
    ./...

# Make sure "packer-packer" is renamed properly
for PLATFORM in $(find ./pkg -mindepth 1 -maxdepth 1 -type d); do
    set +e
    mv ${PLATFORM}/packer-packer ${PLATFORM}/packer 2>/dev/null
    mv ${PLATFORM}/packer-packer.exe ${PLATFORM}/packer.exe 2>/dev/null
    set -e
done

# Reset signal trapping to avoid "Terminated: 15" at the end
trap - SIGINT SIGTERM EXIT
#!/bin/bash
#
# This script only builds the application from source.
set -e
NO_COLOR="\x1b[0m"
OK_COLOR="\x1b[32;01m"
ERROR_COLOR="\x1b[31;01m"
WARN_COLOR="\x1b[33;01m"
# Get the parent directory of where this script is.
SOURCE="${BASH_SOURCE[0]}"
while [ -h "$SOURCE" ] ; do SOURCE="$(readlink "$SOURCE")"; done
DIR="$( cd -P "$( dirname "$SOURCE" )/.." && pwd )"
# Change into that directory
cd $DIR
# Compile the thing
export XC_ARCH=$(go env GOARCH)
export XC_OS=$(go env GOOS)
./scripts/compile.sh
# Move all the compiled things to the PATH
cp pkg/${XC_OS}_${XC_ARCH}/* ${GOPATH}/bin
@@ -19,28 +19,6 @@ if [ ! -z $PREVERSION ]; then
    VERSIONDIR="${VERSIONDIR}-${PREVERSION}"
fi
# This function waits for all background tasks to complete
waitAll() {
    RESULT=0
@@ -56,28 +34,15 @@ waitAll() {
    fi
}

# Compile the main project
./scripts/compile.sh

# Make sure that if we're killed, we kill all our subprocesses
trap "kill 0" SIGINT SIGTERM EXIT
# Zip all the packages
mkdir -p ./pkg/dist
for PLATFORM in $(find ./pkg -mindepth 1 -maxdepth 1 -type d); do
    PLATFORM_NAME=$(basename ${PLATFORM})
    ARCHIVE_NAME="${VERSIONDIR}_${PLATFORM_NAME}"
@@ -87,7 +52,7 @@ for PLATFORM in $(find ./pkg -mindepth 1 -maxdepth 1 -type d); do
    (
        pushd ${PLATFORM}
        zip ${DIR}/pkg/dist/${ARCHIVE_NAME}.zip ./*
        popd
    ) &
done
@@ -95,7 +60,7 @@ done
waitAll

# Make the checksums
pushd ./pkg/dist
shasum -a256 * > ./${VERSIONDIR}_SHA256SUMS
popd
...
@@ -22,7 +22,7 @@ if [ -z $BINTRAY_API_KEY ]; then
    exit 1
fi

for ARCHIVE in ./pkg/dist/*; do
    ARCHIVE_NAME=$(basename ${ARCHIVE})

    echo Uploading: $ARCHIVE_NAME
...
@@ -33,7 +33,7 @@ Packer in the form of plugins.

<a class="term" id="term-command"></a>
**Commands** are sub-commands for the `packer` program that perform some
job. An example command is "build", which is invoked as `packer build`.
Packer ships with a set of commands out of the box in order to define
its [command-line interface](#). Commands can also be created and added to
Packer in the form of plugins.
...
@@ -184,6 +184,33 @@ out of your AMI builds.
Packer properly obtains a process lock for the parallelism-sensitive parts
of its internals such as finding an available device.
## Gotchas

One of the difficulties with using the chroot builder is that your provisioning
scripts must not leave any processes running, or Packer will be unable to
unmount the filesystem.

For Debian-based distributions, you can set up a
[policy-rc.d](http://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt)
file which will prevent packages installed by your provisioners from starting services:
<pre class="prettyprint">
{
"type": "shell",
"inline": [
"echo '#!/bin/sh' > /usr/sbin/policy-rc.d",
"echo 'exit 101' >> /usr/sbin/policy-rc.d",
"chmod a+x /usr/sbin/policy-rc.d"
]
},
# ...
{
"type": "shell",
"inline": [
"rm -f /usr/sbin/policy-rc.d"
]
}
</pre>
## Using an IAM Instance Profile

If AWS keys are not specified in the template or through environment variables
...
@@ -238,7 +238,7 @@ sudo -n ec2-bundle-vol \
  -u {{.AccountId}} \
  -c {{.CertPath}} \
  -r {{.Architecture}} \
  -e {{.PrivatePath}}/* \
  -d {{.Destination}} \
  -p {{.Prefix}} \
  --batch
...
---
layout: "docs"
page_title: "Docker Builder"
---
# Docker Builder
Type: `docker`
The Docker builder builds [Docker](http://www.docker.io) images using
Docker. The builder starts a Docker container, runs provisioners within
this container, then exports the container for re-use.
Packer builds Docker containers _without_ the use of
[Dockerfiles](http://docs.docker.io/en/latest/use/builder/).
By not using Dockerfiles, Packer is able to provision
containers with portable scripts or configuration management systems
that are not tied to Docker in any way. It also has a simpler mental model:
you provision containers much the same way you provision a normal virtualized
or dedicated server. For more information, read the section on
[Dockerfiles](#toc_3).
The Docker builder must run on a machine that has Docker installed. Therefore
the builder only works on machines that support Docker (modern Linux machines).
If you want to use Packer to build Docker containers on another platform,
use [Vagrant](http://www.vagrantup.com) to start a Linux environment, then
run Packer within that environment.
## Basic Example
Below is a fully functioning example. It doesn't do anything useful, since
no provisioners are defined, but it will effectively repackage an image.
<pre class="prettyprint">
{
"type": "docker",
"image": "ubuntu",
"export_path": "image.tar"
}
</pre>
## Configuration Reference
Configuration options are organized below into two categories: required and optional. Within
each category, the available options are alphabetized and described.
Required:
* `export_path` (string) - The path where the final container will be exported
as a tar file.
* `image` (string) - The base image for the Docker container that will
be started. This image will be pulled from the Docker registry if it
doesn't already exist.
Optional:
* `pull` (bool) - If true, the configured image will be pulled using
`docker pull` prior to use. Otherwise, it is assumed the image already
exists and can be used. This defaults to true if not set.
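
Since the builder is typically paired with provisioners, a fuller template
might look like the following sketch (the shell provisioner and its inline
command are purely illustrative, not required):

<pre class="prettyprint">
{
  "builders": [{
    "type": "docker",
    "image": "ubuntu",
    "export_path": "image.tar"
  }],

  "provisioners": [{
    "type": "shell",
    "inline": ["apt-get update"]
  }]
}
</pre>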
## Dockerfiles
This builder allows you to build Docker images _without_ Dockerfiles.
With this builder, you can repeatably create Docker images without the use of
a Dockerfile. You don't need to know the syntax or semantics of Dockerfiles.
Instead, you can just provide shell scripts, Chef recipes, Puppet manifests,
etc. to provision your Docker container just like you would a regular
virtualized or dedicated machine.
While Docker has many features, Packer views Docker simply as an LXC
container runner. To that end, Packer is able to repeatably build these
LXC containers using portable provisioning scripts.
Dockerfiles have some additional features that Packer doesn't support,
but which can be worked around. Many of these features will be automated
by Packer in the future:
* Dockerfiles will snapshot the container at each step, allowing you to
go back to any step in the history of building. Packer doesn't do this yet,
but inter-step snapshotting is on the way.
* Dockerfiles can contain information such as exposed ports, shared
volumes, and other metadata. Packer builds a raw Docker container image
that has none of this metadata. You can pass in much of this metadata
at runtime with `docker run`.
* Images made without Dockerfiles are missing critical metadata that
  makes them easily pushable to the Docker registry. You can work around
  this by using a metadata-only Dockerfile with the exported image and
  building that. A future Packer version will automatically do this for you.
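
As a sketch of that workaround, a metadata-only Dockerfile might add the
exported tar into a fresh image and attach the missing metadata (the exposed
port here is a hypothetical example):

<pre class="prettyprint">
FROM scratch
ADD image.tar /
EXPOSE 80
</pre>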
@@ -152,7 +152,7 @@ Optional:
  access the remote machine. By default this is empty. This only has an
  effect if `remote_type` is enabled.

* `remote_username` (string) - The username for the SSH user that will access
  the remote machine. This is required if `remote_type` is enabled.

* `skip_compaction` (bool) - VMware-created disks are defragmented
...
---
layout: "docs"
page_title: "vSphere Post-Processor"
---
# vSphere Post-Processor
Type: `vsphere-upload`
The vSphere post-processor takes an artifact from the VMware builder
and uploads it to a vSphere endpoint.
## Configuration
There are many configuration options available for the post-processor. They are
segmented below into two categories: required and optional parameters. Within
each category, the available configuration keys are alphabetized.
Required:
* `datacenter` (string) - The name of the datacenter within vSphere to
add the VM to.
* `datastore` (string) - The name of the datastore in which to store this VM.
* `host` (string) - The vSphere host that will be contacted to perform
the VM upload.
* `password` (string) - Password to use to authenticate to the vSphere
endpoint.
* `path_to_resource_pool` (string) - The path within the resource pool to
store the VM.
* `username` (string) - The username to use to authenticate to the vSphere
endpoint.
* `vm_folder` (string) - The folder within the datastore to store the VM.
* `vm_name` (string) - The name of the VM once it is uploaded.
* `vm_network` (string) - The name of the VM network this VM will be
added to.
Optional:
* `insecure` (bool) - Whether or not an insecure connection to vSphere
  is allowed. By default this is false.
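
Putting the options together, a post-processor definition might look like the
following sketch (all values are placeholders for your own environment):

<pre class="prettyprint">
{
  "type": "vsphere-upload",
  "host": "vcenter.example.com",
  "username": "packer",
  "password": "secret",
  "datacenter": "dc1",
  "datastore": "datastore1",
  "path_to_resource_pool": "pool1",
  "vm_folder": "packer_vms",
  "vm_name": "packer-example",
  "vm_network": "VM Network"
}
</pre>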
@@ -44,13 +44,13 @@ configuration is actually required, but at least `run_list` is recommended.
  to the remote machine in the directory specified by the `staging_directory`.
  By default, this is empty.

* `data_bags_path` (string) - The path to the "data\_bags" directory on your local filesystem.
  These will be uploaded to the remote machine in the directory specified by the
  `staging_directory`. By default, this is empty.

* `encrypted_data_bag_secret_path` (string) - The path to the file containing
  the secret for encrypted data bags. By default, this is empty, so no
  secret will be available.

* `execute_command` (string) - The command used to execute Chef. This has
  various [configuration template variables](/docs/templates/configuration-templates.html)
@@ -60,18 +60,22 @@ configuration is actually required, but at least `run_list` is recommended.
  various [configuration template variables](/docs/templates/configuration-templates.html)
  available. See below for more information.

* `remote_cookbook_paths` (array of string) - A list of paths on the remote
  machine where cookbooks will already exist. These may exist from a previous
  provisioner or step. If specified, Chef will be configured to look for
  cookbooks here. By default, this is empty.

* `json` (object) - An arbitrary mapping of JSON that will be available as
  node attributes while running Chef.

* `prevent_sudo` (boolean) - By default, the configured commands that are
  executed to install and run Chef are executed with `sudo`. If this is true,
  then the sudo will be omitted.

* `roles_path` (string) - The path to the "roles" directory on your local filesystem.
  These will be uploaded to the remote machine in the directory specified by the
  `staging_directory`. By default, this is empty.

* `run_list` (array of strings) - The [run list](http://docs.opscode.com/essentials_node_object_run_lists.html)
  for Chef. By default this is empty.
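
As an illustration of how the path options above fit together, a chef-solo
provisioner using encrypted data bags might be configured like this sketch
(all paths and the recipe name are hypothetical):

<pre class="prettyprint">
{
  "type": "chef-solo",
  "run_list": ["recipe[mything::default]"],
  "cookbook_paths": ["cookbooks"],
  "data_bags_path": "data_bags",
  "encrypted_data_bag_secret_path": "encrypted_data_bag_secret"
}
</pre>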
@@ -101,8 +105,14 @@ cookbook_path [{{.CookbookPaths}}]
This template is a [configuration template](/docs/templates/configuration-templates.html)
and has a set of variables available to use:

* `ChefEnvironment` - The current enabled environment. Only non-empty
  if the environment path is set.
* `CookbookPaths` is the set of cookbook paths ready to be embedded directly
  into a Ruby array to configure Chef.
* `DataBagsPath` is the path to the data bags folder.
* `EncryptedDataBagSecretPath` - The path to the encrypted data bag secret.
* `EnvironmentsPath` - The path to the environments folder.
* `RolesPath` - The path to the roles folder.
## Execute Command
@@ -121,6 +131,7 @@ As you can see from the default value above, the value of this configuration
can contain various template variables, defined below:

* `ConfigPath` - The path to the Chef configuration file.
* `JsonPath` - The path to the JSON attributes file for the node.
* `Sudo` - A boolean of whether to `sudo` the command or not, depending on
  the value of the `prevent_sudo` configuration.
...
@@ -64,7 +64,7 @@ array.
{
  "type": "digitalocean",
  "api_key": "INSERT API KEY HERE",
  "client_id": "INSERT CLIENT ID HERE"
}
</pre>
...
@@ -32,6 +32,7 @@
<li><h4>Builders</h4></li>
<li><a href="/docs/builders/amazon.html">Amazon EC2 (AMI)</a></li>
<li><a href="/docs/builders/digitalocean.html">DigitalOcean</a></li>
<li><a href="/docs/builders/docker.html">Docker</a></li>
<li><a href="/docs/builders/openstack.html">OpenStack</a></li>
<li><a href="/docs/builders/qemu.html">QEMU</a></li>
<li><a href="/docs/builders/virtualbox.html">VirtualBox</a></li>
@@ -53,6 +54,7 @@
<ul>
  <li><h4>Post-Processors</h4></li>
  <li><a href="/docs/post-processors/vagrant.html">Vagrant</a></li>
  <li><a href="/docs/post-processors/vsphere.html">vSphere</a></li>
</ul>

<ul>
...