The googlecompute plugin can be used with HashiCorp Packer to create custom images on GCE.
Google Cloud Platform Image Exporter
Type: googlecompute-export
Artifact BuilderId: packer.post-processor.googlecompute-export
The Google Compute Image Exporter post-processor exports the resultant image from a googlecompute build as a gzipped tarball to Google Cloud Storage (GCS).
The exporter uses the same Google Cloud Platform (GCP) project and authentication credentials as the googlecompute build that produced the image. A temporary VM is started in the GCP project using these credentials. The VM mounts the built image as a disk, then dumps, compresses, and tars the image. The VM then uploads the tarball to the provided GCS paths using the same credentials.
As such, the authentication credentials that built the image must have write permissions to the GCS paths.
Note: By default, the GCE image being exported will be deleted once the image has been exported. To prevent Packer from deleting the image, set the `keep_input_artifact` configuration option to `true`. See Post-Processor Input Artifacts for more details.
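In an HCL2 template, `keep_input_artifact` is set inside the post-processor block itself. A minimal fragment, meant to sit inside a `build` block; the bucket path is a placeholder:

```hcl
post-processor "googlecompute-export" {
  # Required: where to upload the exported tarball.
  paths = ["gs://mybucket/path/to/file.tar.gz"]

  # Keep the GCE image instead of deleting it after the export.
  keep_input_artifact = true
}
```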
Configuration
Required
- `paths` ([]string) - A list of GCS paths where the image will be exported. For example, `gs://mybucket/path/to/file.tar.gz`.
Optional
- `access_token` (string) - A temporary OAuth 2.0 access token.
- `account_file` (string) - The JSON file containing your account credentials. If specified, the account file will take precedence over any `googlecompute` builder authentication method.
- `impersonate_service_account` (string) - This allows service account impersonation as per the docs.
- `scopes` ([]string) - The service account scopes for the launched exporter post-processor instance.
- `disk_size` (int64) - The size of the export instance's disk. The disk is unused for the export, but a larger size will increase `pd-ssd` read speed. This defaults to `200`, which is 200GB.
- `disk_type` (string) - Type of disk used to back the export instance, like `pd-ssd` or `pd-standard`. Defaults to `pd-ssd`.
- `machine_type` (string) - The export instance machine type. Defaults to `"n1-highcpu-4"`.
- `network` (string) - The Google Compute network id or URL to use for the export instance. Defaults to `"default"`. If the value is not a URL, it will be interpolated to `projects/((builder_project_id))/global/networks/((network))`. This value is not required if a `subnet` is specified.
- `subnetwork` (string) - The Google Compute subnetwork id or URL to use for the export instance. Only required if the `network` has been created with custom subnetting. Note: the region of the `subnetwork` must match the `zone` in which the VM is launched. If the value is not a URL, it will be interpolated to `projects/((builder_project_id))/regions/((region))/subnetworks/((subnetwork))`.
- `zone` (string) - The zone in which to launch the export instance. Defaults to the `googlecompute` builder zone. Example: `"us-central1-a"`.
- `vault_gcp_oauth_engine` (string) - Vault GCP OAuth engine.
- `service_account_email` (string) - Service account email.
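For illustration, a post-processor block that combines several of the optional settings above might look like the following sketch. It is again a fragment for a `build` block, and every value is an example rather than a recommendation:

```hcl
post-processor "googlecompute-export" {
  paths = ["gs://mybucket/path/to/file.tar.gz"]

  # Tuning for the temporary export instance (illustrative values).
  machine_type = "n1-highcpu-4"
  disk_type    = "pd-ssd"
  disk_size    = 200
  zone         = "us-central1-a"

  # Networking for the export instance.
  network = "default"
}
```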
Basic Example
The following example builds a GCE image in the project `my-project`, with an account whose keyfile is `account.json`. After the image build, a temporary VM will be created to export the image as a gzipped tarball to `gs://mybucket1/path/to/file1.tar.gz` and `gs://mybucket2/path/to/file2.tar.gz`. `keep_input_artifact` is `true`, so the GCE image won't be deleted after the export.
In order for this example to work, the account associated with `account.json` must have write access to both `gs://mybucket1/path/to/file1.tar.gz` and `gs://mybucket2/path/to/file2.tar.gz`.
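The following is an HCL2 sketch of the configuration described above. The `source_image_family`, `ssh_username`, and `zone` values are illustrative assumptions rather than part of the example's description; substitute whatever your build actually uses:

```hcl
source "googlecompute" "example" {
  project_id   = "my-project"
  account_file = "account.json"

  # Assumed for illustration; pick the base image your build needs.
  source_image_family = "debian-12"
  ssh_username        = "packer"
  zone                = "us-central1-a"
}

build {
  sources = ["source.googlecompute.example"]

  post-processor "googlecompute-export" {
    paths = [
      "gs://mybucket1/path/to/file1.tar.gz",
      "gs://mybucket2/path/to/file2.tar.gz"
    ]

    # Keep the GCE image after it has been exported.
    keep_input_artifact = true
  }
}
```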