This article is reproduced from stackoverflow.com.
docker-compose google-compute-engine terraform

How can I redeploy a docker-compose stack with terraform?

Published on 2020-04-13 09:34:33

I use terraform to configure a GCE instance which runs a docker-compose stack. The docker-compose stack references an image with a tag and I would like to be able to rerun docker-compose up when the tag changes, so that a new version of the service can be run. Currently, I do the following in my terraform files:

  provisioner "file" {
    source      = "training-server/docker-compose.yml"
    destination = "/home/curry/docker-compose.yml"

    connection {
      type = "ssh"
      user = "curry"
      host = google_compute_address.training-address.address
      private_key = file(var.private_key_file)
    }
  }

  provisioner "remote-exec" {
    inline = [
      "IMAGE_ID=${var.image_id} docker-compose -f /home/curry/docker-compose.yml up -d"
    ]

    connection {
      type = "ssh"
      user = "root"
      host = google_compute_address.training-address.address
      private_key = file(var.private_key_file)
    }
  }
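
For context, the docker-compose.yml consumes the tag through an environment variable, along these lines (the service and image names below are illustrative placeholders; only the IMAGE_ID substitution matters):

```yaml
# Hypothetical training-server/docker-compose.yml — service and image
# names are placeholders; the tag is injected via the IMAGE_ID variable.
version: "3"
services:
  training:
    image: "myapp:${IMAGE_ID}"
    restart: always
```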

but this is wrong for various reasons:

  1. Provisioners are somewhat frowned upon, according to the Terraform documentation
  2. If the image_id changes, Terraform won't consider that a change in configuration, so it won't rerun the provisioners

What I want is to consider my application stack like a resource, so that when one of its attributes change, eg. the image_id, the resource is recreated but the VM instance itself is not.

How can I do that with terraform? Or is there another better approach?

Questioner: insitu
David Maze 2020-02-03 06:06

Terraform has a Docker provider, and if you wanted to use Terraform to manage your container stack, that's probably the right tool. But, using it requires essentially translating your Compose file into Terraform syntax.

I'm a little more used to a split where you use Terraform to manage infrastructure – set up EC2 instances and their network setup, for example – but use another tool like Ansible, Chef, or SaltStack to actually run software on them. Then to update the software (Docker containers) you'd update your configuration management tool's settings to say which version (Docker image tag) you want, and then re-run that.
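
As a sketch of that split, an Ansible play could pin the tag and rerun Compose on the host. Everything here (the inventory group, the paths, the image_id variable) is an assumption layered on the question's setup, not something from the original:

```yaml
# Hypothetical playbook: the "training" host group, file paths, and
# image_id variable are placeholders mirroring the question's setup.
- hosts: training
  vars:
    image_id: "1.2.3"   # bump this to deploy a new version
  tasks:
    - name: Copy the Compose file to the host
      ansible.builtin.copy:
        src: training-server/docker-compose.yml
        dest: /home/curry/docker-compose.yml

    - name: Re-run docker-compose with the pinned image tag
      ansible.builtin.shell: >
        IMAGE_ID={{ image_id }}
        docker-compose -f /home/curry/docker-compose.yml up -d
```

Updating the image then becomes "change image_id and re-run the playbook", with no Terraform involvement at all.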

One trick that may help is to use the null resource which will let you "reprovision the resource" whenever the image ID changes:

resource "null_resource" "docker_compose" {
  triggers = {
    image_id = "${var.image_id}"
  }
  provisioner "remote-exec" {
    ...
  }
}
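
Filled in against the question's own provisioner, the complete resource might look like this (an untested sketch; the connection details are simply copied from the question's configuration):

```hcl
resource "null_resource" "docker_compose" {
  # Changing var.image_id changes the triggers map, which forces this
  # resource to be replaced and the provisioner to run again.
  triggers = {
    image_id = var.image_id
  }

  provisioner "remote-exec" {
    inline = [
      "IMAGE_ID=${var.image_id} docker-compose -f /home/curry/docker-compose.yml up -d"
    ]

    connection {
      type        = "ssh"
      user        = "root"
      host        = google_compute_address.training-address.address
      private_key = file(var.private_key_file)
    }
  }
}
```

Note that the VM itself is untouched; only the null_resource is destroyed and recreated when the tag changes, which is exactly the "recreate the stack, not the instance" behavior the question asks for.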

If you wanted to go down the all-Terraform route, in theory you could write a Terraform configuration like

provider "docker" {
  host = "ssh://root@${google_compute_address.training-address.address}"
  # (where do its credentials come from?)
}

resource "docker_image" "myapp" {
  name = "myapp:${var.image_id}"
}

resource "docker_container" "myapp" {
  name = "myapp"
  image = "${docker_image.myapp.latest}"
}

but you'd have to translate your entire Docker Compose configuration to this syntax, and set it up so that there's an option for developers to run it locally, and replicate Compose features like the default network, and so on. I don't feel like this is generally done in practice.